Speakers: Charlotte Willner (Director, Trust and Safety Association), Chair – (CW); Tracey Breeden, Match Group – (TB); Richard Pursey, SafeToNet – (RP); Azmina Dhrodia, World Wide Web Foundation – (AZ)
Speakers: Suki Fuller, Analytical Storyteller – (SF); Rt Hon Oliver Dowden, MP – (OD); Clara Tsao, Entrepreneur and DWeb Technologist – (CT); Rishad Tobaccowala, Publicis Group – (RT); Julie Cordua, Thorn – (JC); Marc Antoine Durand, Yubo – (MD); Weszt Hart, Riot Games – (WH); Rachael Franklin, EA – (RF)
Speakers: Christina Michalos QC, Barrister at 5RB and Queens Council and Author – (CM); Professor Victoria Nash, Oxford Internet Institute – (VN); Professor Lorna Woods OBE, University of Essex – (LW); Simon Saunders, Ofcom – (SS); Iain Corby, Age Verification Providers Association – (IC)
IS Hello everyone, and welcome to what I’m sure is going to be a fascinating discussion about how user safety can be built into our enterprise architectures and our products. My own company, Cyan, started out working with law enforcement on online child abuse, and when we started extending into the area of online safety as well, we felt that there were lots of great one-to-one conversations taking place where we really needed collective ones. So, I got involved in forming OSTIA, the Online Safety Tech Industry Association, which I now chair.
One of our key aims is to provide a voice of hope by showing what can be done with safety technology, so I’m delighted to have with me today a panel of leaders from companies that are taking a front-running position on safety, and from members of the online safety tech sector, to talk about what deploying safety tech really looks like. I hope our conversation will be wide ranging and explore some of the key issues. Just some of those we might cover include: how online services can be designed or redesigned to put safety at their heart rather than adding it as an afterthought; deployment models – whether we are deploying safety tech on premises or in clouds, and what some of the trade-offs might be between performance, privacy and security; creating positive feedback loops between human judgement and what technology can do on its own; mitigating mistakes where no technology is perfect – how do we ensure errors, false positives in particular, aren’t disruptive to user experiences; and building layers of process, from first lines of defence through to final arbitration, and how different approaches work in different places.
Safety tech is constantly changing and our panellists may be able to shed some light on what is a baseline requirement today, what’s needed in a higher risk environment and, just as importantly, what tech is coming that we can expect to see in widespread use in the near future.
Without further ado, I’m going to briefly introduce our four panellists and then we’ll let them in turn introduce themselves properly and tell us a little bit about their areas of interest. So, our panellists are Emma Rosell, the CTO of NetClean; Remy Malan, the Vice President of Trust and Safety at Roblox; Chris Priebe, Founder and Exec Chair of Two Hat; and Michele Banks, the Chief Technology Officer of Sentropy. Welcome all, we’ll start with you Emma.
ER Thank you Ian, I’m really happy to be here. So, when you decide to take action to prevent child sexual abuse material in your organisation or in your product, a common concern is the risk of false positives. For us at NetClean the most common question we get is the one about the holiday pictures. Our customers ask us “what about my holiday pictures, the ones with my children playing on the beach – would you flag those?” The answer is, of course, no, and I will get to why. To be able to act on the problem of CSAM without the fear of false positives, the use of hashing technology is a good starting point. A hash is like a fingerprint of a file, which means that every file has a unique hash value; this means that from a privacy perspective, this technology is very safe to use as long as the hashes themselves can be trusted. So, if you get a match, you will know that you have found previously identified and classified CSAM. Besides that, you will have the ability to get a match and act on it without actually having to look at the material. This technology is also fast, which means you can search through a lot of material in a short amount of time. But, as with all technologies, there are also weaknesses, and the most obvious one is the same as the strength of the technology: you will only find material that has previously been identified by law enforcement professionals. This means that you don’t have to worry about those holiday photographs I talked about in the beginning, but it also means that new or previously unseen material will not be found.
So, matching with hashing technology is safe and it is fast but, depending on the situation, the next step could be to apply AI tools to identify additional CSAM and be able to build a stronger case. Thank you.
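To make the hash-matching workflow Emma describes concrete, here is a minimal sketch. It is an assumption-laden toy: real deployments match against curated hash lists from law enforcement, and often use perceptual hashes (such as PhotoDNA) rather than a plain cryptographic hash; the hash set and file names here are invented for illustration.

```python
import hashlib
from pathlib import Path

# Hypothetical list of digests of previously identified and classified
# material, supplied by a trusted source. This example entry is simply
# the SHA-256 of the byte string b"test", standing in for a real entry.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_hash(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_material(path: Path) -> bool:
    # An exact match means the file was previously identified and
    # classified - it can be acted on without anyone viewing it.
    return file_hash(path) in KNOWN_HASHES
```

Because a holiday photo was never in the trusted hash list, it can never match: the lookup only ever confirms previously classified files, which is why this approach carries no false-positive risk, and equally why it cannot find new material.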
IS Thank you very much Emma. Remy, would you like to go next?
RM Yes, thank you Ian, a pleasure to be here today and thank you very much for inviting me to join you on the panel. Roblox is a company that builds its own technology but in many ways we’re a user of safety tech, and for us we have to think about how we apply safety in our organisation. I’ll make a few remarks about this – it’s a very deep subject and you can go quite far into safety. I would start by saying, do think of it as a multi-dimensional problem: as you think about how you want to bring safety into your organisation, you’ll need to think about the processes you use, the people, the staff, as well as the technology, so this is a classic people, process and systems type of situation where you’re going to be looking at this across multiple threads. Safety itself is also a layered topic; there’s typically no one thing to do to make your platform perfectly safe. So, you have to look at all the different vectors you have where there might be different safety issues and think about how you’re going to create a layered environment. For example, you may want to use both technology and people, where the two together create a layered process for you. So, these are some of the things you’re going to consider as you think about deploying. And finally, as you think about safety – I think Ian remarked about this at the beginning – safety is something you should think about how to build into what you’re doing; safety isn’t something you should bolt on afterwards. Like most other security topics, it’s usually much better to design it in as part of your process. So that’s another thing to think about: as you build your business, whether it’s services you’re going to offer to a community or products, think about how you would actually build safety in and how safety requirements become part of the functional requirements of what you’re going to deliver.
I think that’s the best way to look at it. I do think also you can differentiate with safety; safety doesn’t have to be something that’s thought of as an add-on or extra cost, you can actually use that very effectively to differentiate what you’re doing from others who you may be competing with in your particular market place. I look forward to the conversation. Thank you.
IS Thank you Remy, great to hear some of the themes from the opening plenary session coming through there as well. Chris.
CP Hello. Thanks for introducing me, Ian. I’m the Founder and Executive Chairman of Two Hat Security – we provide chat filtering technology for some of the largest companies in the world. I’ve been a programmer for over 20 years, building safety tools and ways that people can add user generated content. I helped my brother build a game called Club Penguin – he ended up growing that to 300 million users and my role was mostly building out the safety back end and moderation tools. I believe that platform was successful because parents could trust it. They knew that when they saw their kid playing that game with the logo in the top left corner, it was going to be ok – it built that trust and affirmation. After Disney bought them, they ended up basically asking me to hack everything with the Mickey Mouse logo on it, so I went into security really heavily, and I loved it. But something began to happen on the internet which was very scary: as the internet was growing, people began to do really nasty things to each other online. I remember watching one story of a young girl named Amanda Todd who was using the internet and shared some inappropriate pictures of herself, and they kept being pushed out to her school, and she felt she was trapped, she felt she couldn’t escape, and she ended up committing suicide. It was a terrible, tragic story. I was talking to her Mum, and her Mum asked me, well, couldn’t something have been done? I thought about that and I said yes, we as technology providers probably should be doing something. We’ve got to figure out how to do it. We didn’t know how at the time but we’ve spent the last eight years trying to solve that problem and I believe we’ve made great strides towards it. So that’s why I created a company called Two Hat Security.
One hat was to find the destructive users and stop the pain they’re causing; the second was to find the positive users – because the majority of users are highly, highly positive and we can’t forget about them – and ask how we promote those users so that our other users feel welcome and it sets the tone of the community. I’m pleased to say that after eight years we processed one trillion messages last year. From an engineering perspective that’s really exciting as a scale challenge, but since we’re talking to a lot of technical people, I wanted to go back to the human side: one trillion messages is really one trillion human interactions. That’s a case where someone says hello. It’s such a trivial word – hello, it’s just a few characters long – but it’s so incredibly important, especially during COVID and other times when we feel isolated and disconnected, when we have a chance to connect with another human being. We have to get these messages right, because if we create all of these false positives it’s going to force people to feel more isolated and less connected. The other thing is you’ve got to get the other side right: if we screw up on it, these human interactions are being destroyed and people are being bullied, they’re being sexually harassed, they’re feeling like they don’t belong anywhere. These are huge problems that we need to solve and I look forward to diving deeper into this in this panel.
IS Thank you very much indeed Chris and interesting that you’ve been on both sides of this, working on a platform and now in safety tech. Michele?
MB Thank you Ian for having me, nice to meet everyone. I’m one of the co-founders and the CTO of Sentropy. We’re a relatively new company, building technologies that use machine learning to detect harmful content, and we also provide tools that allow enterprises to both analyse and moderate it. So, for me, it’s really exciting to see more companies enter into this space of online abuse, and specifically to see those with machine learning and data expertise leveraging their skills for social good. I’d really like to open with some thoughts on why I think using machine learning and AI can help supercharge safety tech. As everybody in this room knows, detecting online harassment requires a very complex understanding of many different types of signals, from the abuser’s behavioural signals to, most importantly I would say, the content itself – the actual language and the visuals that people are using. Abuse detection specifically presents some new challenges for content understanding online, and the first I’d point to is the rate at which language changes – it is growing at an incredible rate. One study I read estimated that thousands of new definitions are added to Urban Dictionary every week. What this means, unfortunately, is that there are ever more ways for people to express abuse online, and it’s really difficult to keep pace with that knowledge without help from AI-assisted workflows and having a human expert in the loop.
Another thing we see is what we like to call adversarial behaviour, and this is when people are really determined to generate offensive content – they slightly alter their writing to get around certain content filters. They could use techniques like, for instance, intentionally misspelling a word, or repurposing a previously harmless word to mean something hateful, and still find a way to express their intent to harm while evading detection. In my experience, machine learning systems are really helpful at generalising to unseen behaviours even when this kind of activity is present. Lastly, I’ll just say that we’ve found that while many platforms are dedicated to detecting and removing many of the same abusive behaviours – things such as hate speech, bullying, misinformation – there is very little agreement between them about what actually constitutes a violation, despite everyone’s best effort to evolve policy and have humans studying this unfortunate type of behaviour. What we see is that tolerance for abuse is actually determined by the norms of communities. A really simple example: a community for children is going to have much different criteria than one for mature audiences. So, machine learning in our workflow has really helped us maximise abuse detection so that it works for every unique community. I’ll end here, but I’m really looking forward to discussing and taking more of your questions later on.
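The adversarial substitutions described here (digits for letters, look-alike characters, inserted separators) can be illustrated with a toy normalisation pass. This is a sketch only: the substitution table and the placeholder blocklist term are invented for the example, and production systems pair normalisation with learned models rather than fixed tables.

```python
# Map common adversarial substitutions back to plain-letter forms
# before matching against a blocklist.
SUBSTITUTIONS = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a", "5": "s",
    "@": "a", "$": "s", "!": "i",
    "\u0441": "c",  # Cyrillic "s-sound" letter that renders as Latin "c"
})

BLOCKLIST = {"scam"}  # placeholder term, standing in for real policy terms

def normalise(text: str) -> str:
    text = text.lower().translate(SUBSTITUTIONS)
    # Drop dots, spaces and other separators people insert to break up a word.
    return "".join(ch for ch in text if ch.isalpha())

def matches_blocklist(text: str) -> bool:
    # Toy whole-message check; real systems match within longer messages.
    return normalise(text) in BLOCKLIST
```

The point of the sketch is Michele’s observation: each fixed rule covers only the variants you have already seen, which is why learned models that generalise to unseen obfuscations are so useful here.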
IS So, thank you very much everybody for introducing yourselves. We’ve all got different perspectives but I’d be really interested to hear your thoughts on the benefits to brands that adopt a strong approach to safety tech.
MB Ian, I’d like to say something on that one. Our team has been really shocked, but not surprised, to see the prevalence of online abuse and its impact on the health of both individuals and businesses – it’s overwhelming. From some recent studies we’ve discovered that approximately 1 in 3 Americans, and even more teenagers, 3 out of 5, have suffered online harassment. For teens, the risk of suicide dramatically increases once they’ve been harassed, and the business impact is even more interesting: we’ve heard from companies who say they’ve lost between 10-15% of their user base when abusive actors are active on the platform. 30% of people leave a platform when they’re directly harassed and another 13% leave just when they see someone else getting harassed. So, I’d say that being proactive about this as soon as you can is a really good way to provide an insurance policy for your business.
CP I think, Ian, if you go deeper into that, when you go online your first experience is critically important. So, if you walk into a room and everyone says ‘go whatever yourself, I don’t want you here, your ethnic group is not allowed’, etcetera, that impacts two things: first, whether you want to stay – a bunch of people just leave – and second, if you do stay, you end up behaving in a similar fashion. So, it propagates the same behaviour. Conversely, if you come into a community and people are welcoming you, they’re helping you get aligned, they’re introducing you to how the features work, they’re helping you participate – that’s a community you want to stay in. They say in the gaming industry, ‘I should have quit this game years ago but I stay because my friends are here’, and we have to build places of belonging, places where people feel welcome and accepted. I think that’s the biggest benefit: you get communities, for instance Roblox, where people stay for years, they keep playing – my kids love that game, that’s where their friends are and they just keep doing it.
IS So, I’m hearing that there are very tangible benefits to addressing the problems of online safety in terms of building your communities which, at the end of the day, is vital to just about every online business. But you’ve all talked about different aspects of the threat from online child abuse to bullying and we also hear about issues with terrorism, racism, extremism, it’s a pretty complicated world. How can we help people to make sense of this complex threat landscape?
RM Ian, I’d be glad to take a look at that because it’s something we have to do, of course. I think Michele put her finger on one of the issues here, particularly when you look at things related to language. Language is extremely malleable in that words change, new slang and new memes emerge all the time, and there are many different ways to say things, right? So if you’re trying to say good things or bad things, you can always find a different way to get your point across. And so anyone who is working in a safety environment where there are things like language or user generated content being uploaded has to be constantly thinking: what are the ways that people might work around the existing safety systems? How is language, how are memes in society, changing, where something which was ok before is no longer ok? If you think back over the last 12 months in the COVID era, we saw many situations where societies under stress decided something wasn’t ok anymore, so I think you have to constantly be looking at what’s happening in the larger society and you need a way to evaluate what’s happening on your own platform. You need feedback mechanisms inside of your own safety systems to be able to tell you when things are no longer working, or when the environment has changed enough that you yourself need to start making changes to your safety tech.
IS That’s a theme you touched on Michele, the constant change and constant improvement.
MB Yes, there’s another thing I’d like to touch on, though, that I think is pretty important for how we can help people understand what is going on here, and this is a personal passion of mine. When we started Sentropy and started talking to people, one of the things we found is that there’s no standardisation of definitions that people can use to understand what all the behaviours are and what they actually mean. So we started an open source typology of abuse definitions, and we work with experts in the field outside of the technology sector – people in civil society, people in health and wellbeing, and people who are trying to understand extremism online outside of the technology sphere. One of the things I think we can really do across different enterprises and companies is our best to define and share definitions of what abuse actually means, and what is a positive and what is a negative example of such things. That would, I think, have the benefit of helping the humans who are part of this process to understand the content, and then we can also help provide more insights into what kinds of behaviours are going on online. It would be really useful if you could see, specifically, ok, there’s a spike in anti-Asian sentiment going on on my platform, or teenagers dealing with self-harm and suicide. So, by really engaging each other and policy makers to define and enumerate this landscape, I think that would be very helpful.
IS Thanks Michele. Chris, I think you have some thoughts about how people can start to structure their approaches to online threats?
CP Yes, absolutely, you have to treat it like building in safety by design. For instance, if you build a nightclub with two floors and a balcony on the second floor, and you don’t bother putting in a railing, and some guy gets pushed off the balcony in a drunken binge and lands on the ground, there’s a responsibility there. So when we build our communities and our community settings there are certain safeguards to put in place, and we should do it as a multi-layered approach, so that if they breach layer one you’ve got a second layer to fall back to. Layer one: simply tell people what you expect. That’s hardly a technological challenge – it’s just ‘we don’t want you to be mean on this platform’ or, in the positive, ‘we want you to be kind to each other’. Layer two: have a basic filter. If you know that a certain phrase – this one here is ‘encouraging people to commit suicide’ – is a known bad quantity, you can mark those ones down so it never happens; you don’t even have to trust AI to figure it out because it’s a known bad thing you never want to happen. The third layer: some people try to break your system. They’ll just try, and try, and try – it becomes a game to them – so you have to have a reputation-based system, so that if people continue to try, you’re on to their game and can increase the severity of the response based on how often they try. The fourth layer: for every single place where you create a place for user generated content, whether it’s a title of something or a comment or a username or all the other places, you also need to create a place where people can report it. And, by George, don’t use email, because if your users are taking a screen shot of your page and sending it to you by email you might as well use carrier pigeons – I mean, that has got to be the worst way of doing it, and it tells your users you don’t give a ‘rip’ about them.
And then if you’re getting all these reports you’re going to need a way to manage them, and that’s where AI becomes super powerful, because you can look at your past decisions, train AI on your current decisions and say, well, I don’t need to deal with these obvious, blatant cases – use my humans for human good. And, by the way, you have to have humans, because humans are creating the problem for you, and as soon as you deploy that AI model we just talked about it’s going to be obsolete, because people will change their behaviour. So, you’ve got to have your humans in the loop constantly and continuously. And then, finally, your last layer of defence is to turn it all into stats – put it all into your visibility tools so that your board and other executives can see: I invested this much money in safety tech, look at how many more users have joined and stayed and helped, look at the difference it has made to our community.
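The layered model Chris walks through – expectations, a known-bad filter, a reputation system, and a human-review queue – can be sketched as a pipeline where each layer either decides or passes the message on. The class names, the blocklist phrase, and the strike threshold below are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Verdict:
    allow: bool
    reason: str

@dataclass
class Moderator:
    # Layer 2: known-bad phrases, maintained by policy staff.
    blocklist: set = field(default_factory=lambda: {"known bad phrase"})
    # Layer 3: reputation, tracked as prior violations per user.
    strikes: dict = field(default_factory=dict)
    # Layer 4: reports escalated for human review.
    report_queue: list = field(default_factory=list)

    def check(self, user: str, message: str) -> Verdict:
        # Known-bad content never goes through - no AI judgement needed.
        if message.lower() in self.blocklist:
            self.strikes[user] = self.strikes.get(user, 0) + 1
            return Verdict(False, "blocklist")
        # Users who keep trying get stricter treatment: hold their
        # messages and escalate to the human review queue.
        if self.strikes.get(user, 0) >= 3:
            self.report_queue.append((user, message))
            return Verdict(False, "low reputation, escalated")
        return Verdict(True, "ok")
```

The escalation queue is where the humans in the loop come in: as Chris notes, the AI and the filters handle the blatant cases so that human moderators can spend their time on the ambiguous ones.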
IS I love that model, and I also love that ultimately – and I think this sometimes gets lost in amongst the technology – the problems we are trying to solve are things that humans are doing to each other, and ultimately that probably means we need humans in the loop to help us find a solution. We’ve talked a little bit about changing behaviour over time, but how important are demographic, cultural and geographic issues in identifying harms – are problems different in different parts of the world? Emma, I guess some of the harms you’re targeting are pretty universal, right?
ER Yes, exactly what I was going to say. I think the problem that we are addressing with CSAM is a global problem – that’s very clear to us – and we also have to work globally to solve this kind of problem, so, definitely, I would say it’s the same over the whole world.
IS Does that create difficulties for training artificial intelligence Michele? Can you take the training you’ve done for one online language and apply it to another language, or when you move to another language are you essentially starting over?
MB We are not essentially starting over. Some of the concepts do translate quite well but others, like hate speech for example, are much more nuanced and require some amount of customisation for that target demographic. So, we are somewhere in the middle. Certain things are easier than others and we will always consult language experts to ensure that we are not taking an English-centric view on everything.
CP We had a different experience than that. We went down the route of language translation and we ended up abandoning it because there’s such a distinct difference between languages. We hired one person for each of the 20 languages we support because, for example, in English a lot of swears are based upon body parts while, in France, it’s about the church – whether you can make some sacrilegious comment about some tabernacle or whatever else – so how people approached it was quite different. But I think there’s probably something there in using machine translation to help you, as long as you have those local experts who understand that culture and the nuance of it.
MB Totally agree, yes.
IS Remy, how does that fit with your experience?
RM I think it’s very similar. You’re hearing that some things are the same – for example, in Emma’s comments about, say, graphic imagery, there’s no linguistic element that you need to translate, and an offensive image will be an offensive image with a fairly universal notion of offence to it. But, as Chris and Michele were pointing out, because languages are inherently very flexible and embody different ways to say things, yes, you do find that you can be offensive in different languages using different concepts – whether it’s profanity or other things that are just considered offensive in the society that uses that language. You may find that there are expressions that in some languages are just innocuous but in others are particularly pointed and not pleasant, and that’s very contextual. So, it leads into one of the issues here: understanding the context of the content you’re evaluating is super important, right? This is very clear in language but it also applies to other things; there are sometimes images or memes that look innocent on the surface, but when you double click into it you realise, oh no, this is actually coded, this meme has more meaning to it. A great example would be Pepe the Frog – the cartoon frog which became associated with nationalistic and racist memes – but the frog itself started life as a very innocent cartoon character which was co-opted into something which became offensive. So that’s an example of something where the context becomes really important to be able to evaluate it.
IS So, we’ve talked a bit about processing language and that’s one piece of context but to what extent do solutions need to use information that isn’t necessarily embedded in the particular chat or the particular image they’re looking at? How do you bring in that context? Are you looking at other information with the system, like user IDs, or ages, or patterns of communication, or are solutions mostly focused on the content and the particular exchange?
RM I would say that ultimately, to be effective and have a fully functional safety system, you must be able to evaluate multiple lines of evidence, if you will, or multiple lines of data. This does include the actual content itself, and if that’s blatant enough you don’t really need to evaluate anything else, but, as we’ve talked about earlier in the panel, there are things like the reputation of the people involved, and there may be other aspects, like the societal aspects – where in the world are these people and are there any societal contexts to be aware of? So, yes, I think in general, to do safety really, really well you have to be able to introduce context and you have to think of it as a multidimensional problem that you’re solving. I mentioned this at the beginning: the deeper you get into this, the more you realise that you need to be very holistic in your approach. A point solution usually isn’t going to be the best – it might be good to get you started, but as you get deeper into safety you have to become far more holistic in how you approach it.
CP So, here’s a funny story. Back at Club Penguin, the characters could put on different shirts and collect them – you know, it would be a big fashion show idea – and you’ve got these kids playing, a lot of them really, really young, and they’d go and say ‘you’ve got a really nice shirt’, and they’d spell it wrong and drop the ‘r’, and it became a different word in English which is problematic. Now, we had a policy which said that if you used a four-letter word like that you were automatically banned from the game for 48 hours, instantly. This got me thinking – I walked down the hall to the customer service area and listened to people on the phone, and these kids are bawling their eyes out, crying ‘I didn’t mean it’, ‘I don’t even know what it means’, and so I came to realise that reputation is absolutely critically important – context and reputation. This kid has been playing for two years, never done anything wrong, and they accidentally typed a four-letter word: just don’t show the word and move on. We don’t need to overreact, we just need to pay attention, and we can deal with a lot of our false positives and our false negatives by diving into that reputational piece, because it works in both directions. Someone who writes the word in there and then adds a dot – and Michele, you gave a great example of adversarial text – you know, they put the dot in there, then they use a Russian character for c, they use an emoji for a different character, then they write it upside down, then they do it backwards and upside down, and they just keep going, there’s just no end to it. Well, gosh, if you know the first four patterns, by the time they get to the tenth or the twelfth pattern you’re already onto their case, so you can deal with a lot of those complexities in that way.
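The reputational logic in Chris’s story – a long-standing clean account gets the word hidden and life goes on, a repeat offender gets a harsher response – could be sketched as a simple proportionate-response decision. The thresholds and response names here are invented for illustration, not a description of any real platform’s policy.

```python
def respond_to_flag(days_active: int, prior_violations: int) -> str:
    """Pick a proportionate response to a single flagged word,
    using the user's history rather than a blanket instant ban."""
    if prior_violations == 0 and days_active > 365:
        # Long-standing, clean account: almost certainly an accident,
        # so just suppress the word and move on.
        return "hide word"
    if prior_violations < 3:
        return "warn"
    # Repeated attempts look deliberate: escalate the severity.
    return "suspend"
```

The same signal works in both directions, as Chris says: reputation softens the response to a probable false positive, and hardens it for someone cycling through obfuscation patterns.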
IS So that’s a couple of great examples of how you can deal with the times technology gets it wrong, one is to mitigate by just blocking out a word if there’s good reputation and the other was perhaps the less desirable, dealing with an upset child in a call centre. I’d love to hear some more thoughts on how you deal in your companies with the times the technology gets it wrong, without ending up with the upset child on the phone.
RM I think, Ian, good topic, and one that’s really something we think about a lot. In a lot of our systems we combine both technology and humans in the loop. There are a few places where we let technology be the more prevalent, but in anything of more consequence we have a lot of human oversight, and part of that is to help us manage both the false positives and the false negatives, right? Because false positives can be disruptive to people, of course, because you’re interfering with their enjoyment, but at the same time a false negative means you’ve let something through you shouldn’t have, so from a safety perspective that also has consequences. Our view is that we’re not at a point at which you should have fully automated systems, except in very specific circumstances, and you should plan to invest in humans in the loop where they will be involved in actual decision making. You certainly want humans involved in a supervisory way – in a very tight sort of inner loop of supervision of any technology where you may literally be working with and changing your tech every single day. We have a variety of these systems at Roblox where we have people involved in every situation; we also have others where, within a 24-hour window, humans will have touched the technology and made an adjustment several times; and then there are other systems which run longer term and don’t require that level of adjustment. So, again, it’ll come down to: what is the specific technology? Is it text filtering? Is it image analysis? Is it other things you’re looking at, and how difficult is it for you to manage that fine line between being safe enough but not so safe that you start to become disruptive with too many false positives or other things? There’s also a bit of judgement that’s applied, and we think that humans in that loop is the right way to go.
MB One way we’ve also addressed this problem, as we’ve engaged with different communities and platforms as a third party, is that, as I touched on a little before, not everybody has the same pain tolerance for what they want to allow. We’ve had customers say, ‘oh, this is gamer chat, this is the way people speak, and we’re ok with people writing “kys”’, which is short for ‘kill yourself’, but most communities, and certainly a child community, probably wouldn’t want that through. So, the more we can build into solutions places where moderation systems are giving feedback at that level, where a customer can say ‘I chose not to moderate that’ or ‘I did let it through on purpose’, the more useful a signal that becomes for us, so that we’re not making the same mistakes over and over again and censoring things that shouldn’t be censored.
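[Editor’s note: the per-community tolerance and moderator feedback signal MB describes might look something like the sketch below. Each community configures which terms it tolerates, and moderator overrides are counted so the vendor can tune its models. All names here (`CommunityFilter`, `record_override`) are hypothetical, not any vendor’s actual API.]

```python
# Sketch: per-community term filtering with a feedback signal.
from collections import Counter


class CommunityFilter:
    def __init__(self, blocked_terms, tolerated_terms=()):
        # Terms a community explicitly tolerates are removed from its blocklist.
        self.blocked = set(blocked_terms) - set(tolerated_terms)
        self.overrides = Counter()  # feedback signal back to the vendor

    def flags(self, message):
        """Return the blocked terms found in the message."""
        words = message.lower().split()
        return [t for t in self.blocked if t in words]

    def record_override(self, term):
        """Moderator let this term through on purpose; log it as feedback."""
        self.overrides[term] += 1


# A gaming community tolerates slang a children's platform would not.
gamer_chat = CommunityFilter({"kys"}, tolerated_terms={"kys"})
kids_chat = CommunityFilter({"kys"})

print(gamer_chat.flags("just kys lol"))  # []
print(kids_chat.flags("just kys lol"))   # ['kys']
```

The `overrides` counter is the “I did let it through on purpose” signal: aggregated across customers, it tells the vendor which automated flags are being routinely reversed, so the same mistake isn’t repeated.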
IS It feels to me like at one end of the spectrum we’ve got companies that are really, really focussed on user experience and operating at real scale, and, like you were saying Remy, they can afford to have a really sophisticated, integrated system in house. But there are also a lot of smaller platforms and smaller companies, with new platforms, new games and new messaging apps emerging constantly. Then there’s a contingent of companies, and some of us are in this space, providing not necessarily complete safety tech solutions but at least point solutions that deal with significant parts of the problem, and there are also companies you can outsource some or all of your safety operation to. For people who are coming to this and for whom safety tech is a fairly new idea, I’d be really interested to hear your thoughts on some of those different approaches. Remy, you’ve already sold me on the ideal of a completely integrated approach, but do you bring in specialist bits of technology from outside to build into this?
RM Yes, a great topic, Ian. I think this is one where you should approach it the way we sometimes talk about building new systems: the crawl, walk, run approach, where to get started you need to start with something and build from that. For us, because we look at this very holistically, we look at all sources of safety tech. We’re fortunate that, as a large software company, we can build a lot of things ourselves in house, but it’s also an option for us to look at third-party solutions and to use those where appropriate. We also like to take advantage of, and I’ve alluded to this a little, communities that are interested in safety, so finding people who might be working on open source is an option if you’re getting started and trying to make decisions about build versus buy with maybe a fairly small investment.

The other thing is that a great many people in the safety community are very happy to share their own experiences. It’s very easy to find like-minded communities inside of safety, and there are a number of organisations where you can go and have very open conversations. We’re very fortunate, I think, in the sense that a lot of people perceive safety as a fundamental thing and not a competitive issue, so there are opportunities to go and learn from the best practices of other companies, and many people are quite willing to speak about that. There are ways to get educated without having to run the experiment yourself.
So, I would say: look at what’s available commercially, look at what’s available open source, and look at what communities there are where you can learn from others who have been down the path ahead of you. I think all of those can be very useful.
IS Thanks Remy. Emma, I guess you’re working in an area where part of the strength of your solution is that the data driving it comes from many different sources, and that would be hard for any one company to replicate in house.
ER Yes, that’s true, and of course our sources are one of our strengths at NetClean, so they’re really important to us. I’d also like to add something to what Remy said. I’m fairly new to this business, but what I’ve realised is that our customers talk to each other, and they talk to each other a lot, so there really is a strong community here, and I would say you don’t see that in other sectors.
IS That’s certainly been one of the joys for me coming into this space.
We’ve just got two minutes left now and I’ve really enjoyed this conversation. I think it’s absolutely wonderful that we’re having it today; this is clearly a growing area. In the last two minutes, does anybody have one sentence they’d like to share, something they’d like the audience to take away that we’ve not managed to cover?
In that case, I guess I would encourage anybody in the audience who is interested in finding out more about this topic: all of the speakers’ profiles appear on the conference site, and I’m sure you can contact them through that if you want to find out more. I’m involved in OSTIA, the Online Safety Tech Industry Association. Our website is at ostia.org.uk, so feel free to get in touch if you’d like to be part of the organisation, part of the conversation, or if you’d like us to put you in touch with any of the speakers or any of the members.
I’m immensely excited that this event is taking place. It feels like we’re making huge progress in the safety tech sector in a very short amount of time. You know, just a couple of years ago there really weren’t a lot of these collective conversations happening and today we have this wonderful event bringing together speakers and ideas from around the world.
So, I think that is us just about out of time. You now need to navigate your way back to the main room for a fantastic closing plenary session, where there are quite a few people you’ve not heard from yet and who I’m sure you don’t want to miss. Thank you very much to all of the panellists, thank you Emma, Remy, Chris, Michele, and thank you very much indeed for joining us for this session.