Safety Tech 2021: Safety by design transcript

Speakers: Charlotte Willner (Executive Director, Trust and Safety Professional Association), Chair – (CW); Tracey Breeden, Match Group – (TB); Richard Pursey, SafeToNet – (RP); Azmina Dhrodia, World Wide Web Foundation – (AD)


CW      Hello and thank you for joining us at Safety Tech 2021. My name is Charlotte Willner, and this is the Safety by Design breakout panel, so if that's the one you meant to come to, you're in the right place. This panel is about safety by design: every design decision a product team makes can have an impact on the safety and well-being of the people ultimately using those products. As the Executive Director of the Trust and Safety Professional Association, I know those decisions also have an impact on the people moderating those products too. Today we've invited three panellists to discuss various approaches to safety by design and how products, brands and creators can ensure that their process is fair and inclusive. First, I'd like to introduce Tracey Breeden, who is VP of Trust and Safety at Match Group, and I think she's going to join us here soon. There we are – hi Tracey. And then I'd like to introduce Azmina Dhrodia, Senior Policy Manager at the World Wide Web Foundation – hi Azmina. 

AD       Hi

CW      And then finally Richard Pursey who is CEO of SafeToNet. 

RP       Hello everyone

CW      Hi Richard. Thank you all for being here today, across various time zones, so thank you especially for that. I'd like to start off by asking each of you to tell us a little bit about yourself and your work, and to share some perspective on how you think about safety by design. Then, after your remarks, I'd like to guide us through a few questions that I certainly have, and then open the session up to audience questions – sound good?

TB       Sounds great

CW      Fantastic. Tracey, could you lead us off? Who are you, why are you here, and what does safety by design mean to you?

TB      You got it. Well, my name is Tracey Breeden. I lead trust and safety at Match Group, and that involves overseeing not just trust and safety and social advocacy for Match Group but also trust and safety across our portfolio of dating apps. I joined Match Group about six months ago. Previously I served as Uber's Head of Women's Safety and Gender-Based Violence programmes for four and a half years, and before that I was a Police Officer and Investigator for over 15 years. My subject matter expertise really lies in sexual assault, domestic violence and the safety of groups that face marginalisation. I've worked with hundreds of safety groups and advocacy organisations and helped create safety products and safety strategy across six continents. When we talk about safety by design and how I feel about it: first and foremost, it's a priority. I think I can really speak to some of the approaches and frameworks that help organisations weave safety culture and design into the fabric of their organisation, which ultimately creates safer products and safer communities overall for all of us. I certainly do not believe that any organisation can up-level what they're doing around safety without a safety-by-design approach and framework in place, and I'm happy to discuss some of the tips and thoughts I have around the approaches and frameworks that work best. 

CW      Fantastic, thank you. Next up we have Azmina Dhrodia. Azmina, could you talk to us a little bit about what safety by design means to you and the work you've been up to in this space?

AD       Yeah, sure, and thank you for having me. I've been working on the issue of women's safety over the past few years, both through civil society organisations but also from the perspective of working at a tech start-up that was designing safety tools. So my work and my background have really been around understanding how women experience gender-based violence and online abuse in online spaces. I started working on this issue about five years ago when I was at Amnesty International, focusing on women's experiences of abuse on social media platforms in particular, and what became very clear very quickly to me is that these online spaces – despite being an incredible place for people to connect, share information, network and participate in communities – are also places where women can experience a vast amount of hate and threats against them, simply because of their gender and who they are. For me, when talking about the issue of women's safety online, it's really important to acknowledge that women of colour, and black women in particular, and other marginalised groups are really at the forefront of experiencing online harms. And then after Amnesty, I joined a small tech start-up where we were building tools to help people filter out unwanted and harassing content on their social media feeds, so it was really interesting putting some of that research into practice in terms of actually designing tools to improve people's safety. 
And right now I'm at the Web Foundation, and we are doing some really exciting work where we're tackling the issue of online abuse against women using human-centric design thinking, really centring women's experiences and the co-creation of solutions to this problem. For those of you that don't know, the Web Foundation was co-founded in 2009 by Tim Berners-Lee and Rosemary Leith. Tim Berners-Lee, as many of you will know, is the inventor of the World Wide Web, and he and the organisation are incredibly committed to ensuring that we have a web that works for everyone and that's a safe and empowering space. So when it comes to safety and design, I think what's really unique about the approach we're taking is that it's truly multi-stakeholder. We started off with a series of consultations looking at online gender-based violence: talking to different groups working on the issue, focusing on specific women that are disproportionately impacted, and really bringing together TikTok, Google, Twitter, Facebook, Instagram – the world's largest social media platforms – but also civil society organisations and experts working on this issue. In this next stage of the work, we're going to be using those insights and that evidence to develop a series of design workshops, where we unpack the evidence and co-create solutions with those that are most impacted, but also with the tech companies, so we can create something tangible and design something that can work for everybody – but that's also realistic when it comes to the limitations of the tech stack or tech architecture. 

CW      Fantastic, thank you. And I've just gotten a note from our fantastic facilitator, for people just coming in, about what this session is, before we move to Richard. So, this is the Safety by Design breakout session. We have Tracey Breeden, Azmina Dhrodia and Richard Pursey with us to talk about issues of safety by design, and we've just been going around talking through who these folks are and what their interest in this field is. Richard, we're gonna move to you: talk to us about the work that you've been doing around safety by design. What is it that's brought you here?

RP       Thank you, Charlotte, and hi everyone. It's such an honour to be part of what is effectively the world's first conference dedicated to safety tech. We must thank and applaud the work that DCMS has done to bring this conference together, and I think it marks a pivotal moment, actually, at what is an incredibly exciting time in the safety tech industry. I started SafeToNet with my wife about seven years ago – it feels like a hundred years ago, I must tell you – and it was at a time when online harms were really seen as a social problem that couldn't be fixed by technology. It was simply something you had to put up with if you wanted to socialise and interact online; it just came with the territory. But I think all of that is changing, as there's some stunning technology emerging to tackle thorny issues such as sexual predation, bullying, violence and more. So, hopefully, this conference will take us to the point where it won't be long before online harms become a thing of the past and we collectively scratch our heads remembering how life used to be without safety tech. So, we started SafeToNet some seven years ago, and we're tackling the issue of online predation of children – we're talking about bullying, abuse, aggression, sextortion, those sorts of things. UNICEF will tell you there are over 750 million children online at any one time, and of course they're all vulnerable: they're naïve, they're easy to trick. And so we embarked on this and developed artificial intelligence to be able to contextualise what a child is doing on their device, their phone or their tablet, and then to run that technology on the device, using the processing power of the phone to filter harmful content before any damage is done. 

So, from my perspective, I'm here really to talk about safety technology in itself – by default, that's what I do for a living – and as far as I'm concerned, integrating safety tech into a company's products is no longer a nice-to-have; it's a must-have. Without it, you're exposing your users to horrific harm – harm that will affect them for the rest of their lives. Now, we know that legislation will of course help drive your safety by design approach, but hopefully your motivation will be deeper than that. Being safe is a fundamental human right that we should all protect, cherish and champion. But safety tech isn't just about minimising risk. It can also boost your corporate value and give you a competitive edge – I know that for a fact. The safety tech industry is booming, especially here in the UK, and if you adopt a safety by design approach you will gain competitive advantage from it, there's no doubt in my mind, and you'll increase barriers to entry too. It's a fast-emerging industry that's attracting investors from around the world, and they increasingly see safety tech as a mandatory requirement in any tech stack. So, from my perspective, I've got a few battle scars when it comes to developing safety tech, and we see ourselves as pioneers to an extent. My strong advice is that you add safety to your company messaging: be proud of it, brag about it, emphasise it, but moreover, ensure that you deliver it. And if you're thinking of building your own safety tech stack, there are some lessons to learn and some obstacles to navigate. The good news is that others have gone before you and are happy to share their learnings, because this is a collaborative requirement – no single company or organisation can tackle online harms on their own. So, whilst preparing for today, I reflected on some of the mountains that SafeToNet has had to climb to get to where it is today. 
We're now safeguarding children in over 110 countries around the world, and there are three lessons I wanted to share with you if you're responsible for safety by design in your product set. Firstly, ensure you link safety by design with privacy by design – the two go hand in hand, especially if your product is to be used by children, as SafeToNet's is. So ensure you understand the fine detail of privacy directives such as GDPR – fine detail is what I'm talking about here. In particular, check out your obligations where they relate to special category data, such as sexual content and religious and political content. Secondly, continually assess the efficacy of your algorithms – you will be measured on this. If you're in the field of AI, ensure you understand automated decision making (ADM); it's a key element of GDPR and you'll need to master it from the outset. And finally, be alert to the ever-changing legislative landscape: in the UK in particular, look out for traps within the Investigatory Powers Act, the Computer Misuse Act and the Defamation Act. They all play a part in your approach to safety tech. 

So, I know that might all sound a little bit daunting, but don't be put off: at the very least you can learn from others, and even adopt their technology. For help, I'd guide you to the Online Safety Tech Industry Association – OSTIA – which has over 100 brilliant, fast-growing safety tech companies who can help you with your approach and your design and build process. The technology you are looking for is almost certainly already out there, from real-time on-device messaging and video filters through to cloud-based solutions that detect threats from disinformation through to radicalisation, and more. So finally: push ahead with confidence and vigour – safety tech is so, so important. Safeguard your users; the rewards are huge, not least the wonderful feeling you get when you safeguard a fellow human being from harm. It's such a powerful motivator. So, thank you, and I'll welcome this discussion. 

CW      Thank you, Richard. And for those of you just joining us, this is the Safety by Design panel. We are talking about the design decisions that a product team makes, and the way that those decisions then impact outcomes for people – your end users, and your moderators as well. We have three folks with us today who all work in safety by design in various ways, and this is the point of the panel where I'd like to ask some of the questions that I have! Richard, you had mentioned that it's very important – sort of non-negotiable at this point – for products to have safety. Yes, we really believe in safety, it's very important, and I think it's fair to say that overall, very few founders, very few designers go out there and say, "I'm here to make an unsafe product today", right? In general, that's something many folks agree on: we want things to be safe. But I'm sure in our careers we can all point to times where, inadvertently, some design decisions were made that ended up having perhaps unexpected consequences for the safety of the users or the moderators. I'm wondering if any of you have examples you can share from your time in the field – either a decision that you or your team or your company made that sort of accidentally ended up being an unsafe one, or, vice versa, on the silver lining side, something that you put in place that you didn't realise was going to have the positive impact that it did – just illustrating for our audience how even the smallest of decisions can really end up having big real-world consequences. 

RP       Well, I could certainly begin there – I've got battle scars. As I said, I could sadly probably talk for about five days on all the mistakes we've made along the way. I'll never forget when our CFO and I were called into a meeting with our lawyers – we've got some pretty heavyweight lawyers in London – and we told them how our technology worked. One of the things we were doing was intercepting incoming messages without the authority of the person that had sent them in the first place. We were doing it for good, you know, for social impact – we were doing it to safeguard children. Until, of course, the lawyers said to Ted and me: you realise you could go to prison for doing that. What had seemed a pretty obvious thing to do – why wouldn't you be allowed to do that? – suddenly put the fear of god in me. There were five of these lawyers; the suits walked in, you know, they had the watches, they had the shoes and everything else. I don't think I've ever broken out in such a sweat in my whole life. So, we realised that this is a complex landscape, and legislation is different depending on where you are around the world, and you really need to be alert to that. 

TB       I'd like to add to that as well, and take a step back to how product teams are designed – as we're thinking about what recommendations we give, what kind of missteps have happened, and what's been done right. I think the most important thing is that organisations need safety experts in the building, working. There are very few companies that have a Head of Safety; even people who are running safety teams may have no safety expertise, and so they go forward with design without having that expertise – whether it's internal, or bringing those experts in. And I'm not talking about just donating money and having "partnerships", in quotes; I'm talking about having strategic partnerships that bring those groups in to help the team. 

The organisations that are doing it right have those people in place, walking along that journey with you when you're creating safety products. To give you an example: a past organisation I was with was moving forward with an ID verification process without thinking about how that might impact the transgender community – how difficult it is for them to have an ID that reflects the gender they identify with, and how difficult a user experience that would create for them. They hadn't thought through that process. However, if you have somebody with that expertise on your team, who brings that inclusion expertise already, you're less likely to make that misstep as you go along, or when you're working with an organisation. And taking a step back, you need an assessment to be done. 

Organisations need to have a safety by design assessment completed so you know where you're starting from and where you're going. One government organisation that has a great assessment tool out there is the eSafety Commissioner in Australia. Beyond government organisations, there are a lot of advocates out there – people who have been working in this space, like myself, who have done many assessments and understand the steps an organisation should take in order to assess: how are your product teams designed to begin with? Was your infrastructure set up in a way that's a detriment to safety product design, or is it set up to actually elevate and help you achieve the outcome you want, which is a safer community? So, those are things I think are important. As an example of how something can really work out in a positive way, I think Tinder has a great one. When they first started designing some of their AI and machine learning models around keywords to try to capture harassment, it was set up in a way where it was capturing words and phrases that were not harassment, because of little things like not putting spaces at the front or the end of a keyword. So it would capture things and create a poor user experience. But they were able to fix that, and then also up-levelled what they were doing to create some technology called "Does This Bother You?" to capture harassment preventively, asking the person if they felt it was harassment and encouraging them to report. Because the other thing we know is that these things are underreported. Women are not coming forward and reporting. Why? Because of the way they're treated and responded to. 
So, what are we doing proactively to encourage people to report, and to encourage people to recognise harassment? I think that's a positive example of how they stepped into that space not really knowing how positive it would be, and they have had a tremendous positive response from it. So – a little bit about structure and assessment, and certainly a real-life example for you, but I think it's important for organisations to understand that they need to do a proper assessment, and they need to be structured and set up in a way that will not be a detriment to safety product design. 
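Tracey's keyword example – matching raw substrings without word boundaries and so flagging innocent messages – is easy to reproduce. The following is a minimal sketch in Python (with a hypothetical keyword list, not Tinder's actual terms or implementation) showing the naive approach alongside a word-boundary fix:

```python
import re

# Hypothetical keyword list for illustration only -- not Tinder's actual terms.
KEYWORDS = ["scam", "ugly"]

def naive_match(message: str) -> bool:
    """Plain substring matching: the misstep described above. A keyword with
    no boundary check also fires inside innocent words ("scampi", "smugly")."""
    text = message.lower()
    return any(kw in text for kw in KEYWORDS)

def word_boundary_match(message: str) -> bool:
    """Anchor each keyword with \\b so it only matches as a whole word."""
    text = message.lower()
    return any(re.search(rf"\b{re.escape(kw)}\b", text) for kw in KEYWORDS)
```

Here `naive_match("great scampi recipe")` flags a harmless message because "scampi" contains "scam", while the boundary-anchored version does not. The trade-off is that whole-word matching misses deliberate obfuscations like "s c a m", which is one reason such models need continual reassessment rather than a one-off keyword list.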

AD       I can jump in as well quickly. Yeah, I think it's a really great question. What was really interesting when I joined a tech start-up – in terms of what Tracey was saying about bringing experts in – was that I'd always worked on the issue of women's safety from a policy side and a sort of advocacy side of things. But I was brought in to head up the user research, and it was a really, really small team, so I was working very closely with the product designers and the engineers. And because I had just completed a bunch of research on the issue of online abuse against women – we had done some fantastic studies with some AI companies to look at trends of online abuse, and we found, for example, that black women politicians and journalists in the UK and the US were 84% more likely to receive online abuse than white women – as the person in charge of user research, I was well equipped with the knowledge to know that if we wanted to actually design tools to keep people safe, I needed to speak to the groups that were most marginalised and disproportionately at risk of experiencing online harms. It really felt that, because I was leading that research and talking to the right folks, they were also helping to truly design what the product looked like. And I think the really interesting part was that, because we were building safety tools and safety filters on top of an existing app, you could see clearly that the safety features we were developing either hadn't been considered or prioritised, or perhaps weren't compatible with the tech stack; but all of a sudden we were building tools that really got to the heart of the needs of the users that were disproportionately impacted. 
And obviously that was because we had that expertise in house to identify it. So in terms of ensuring that you have the right people in the room when making those design decisions, it's really, really important. I was talking about the design workshops that we're doing at the Web Foundation – the workshops we're holding will be the pilot for something called the Web Foundation's Tech Policy Design Labs, where we're really bringing this multi-stakeholder approach: you bring together the companies, the civil society experts, the independent experts and researchers, and the users themselves to tackle these tricky product and policy solutions. I think it's important to recognise that not every tech company will always be able to bring this expertise in-house, but there are lots of other folks in this wider ecosystem who have this expertise and who are willing to work together to achieve those solutions. So yeah, I think that's really important to consider as well. 

CW      That's great, and that actually brings me straight to my next question. I think we have time for one more of my questions before we move to the audience questions. As you just mentioned, and Tracey, as you were talking about earlier, one of the best ways to ensure that you have an inclusive product is to include people in the way that you build your product. And that's another one where I think people say, "well, yeah, of course I want to include people", but we see time and again, across all types of products, that it is not something that happens, even though people think they've taken the steps to make it happen. To the panel: what are your specific recommendations for folks coming to this panel on self-checking that? How are you actually getting accountability on making sure you've gone through a process where you're considering diversity, inclusion and fairness, and baking that into your entire process? What are those self-check mechanisms you'd like to see? 

TB       Yeah, I think you hit on it. I'd say that if you're really going to prioritise safety, you need to prioritise women, people of colour and marginalised communities first and foremost, because they're disproportionately impacted by safety issues. When you prioritise those groups, you're going to create safety for everyone – it's just going to be automatic. And yes, you ask how you hold people accountable, how you have that checklist. 

Well, one: your organisation has to have somebody, whether it's a member of the D&I team or a member of the safety team – somebody internal who has that expertise and can weave those kinds of checklists into the design frameworks. Somebody you're setting up times with to review the product design as you're going forward, because they're going to bring an inclusive lens that other people won't have when they're looking at it. So, create a standard and a policy and weave that into the process: not just a checklist, but an individual who is part of that process. Do they need to be there day in and day out while the product is being designed? No, but they probably need to see the product design proposal, and you should bring them along on that journey as you're taking it. People may also think, "I can't afford to go and hire somebody" – well, you have ERG groups. Look to the groups that have the expertise and create a working product group to be part of that process. I've done that in the past as well: before I was able to get approval for a head count to have somebody with that expertise in our building all the time, I created those working groups with that expertise, and maybe even brought in an external organisation to help create what that assessment tool would look like – how you weave inclusion into your safety by design product assessment tool. They can help design that for you. 

RP       I can add to that as well from a technical perspective: ensuring that you don't introduce bias into your AI algorithms, which is pretty tricky when you think about it. Of course, it depends on your aspirations and whether you are trading nationally or internationally, but the approach that we took in safeguarding children was to involve children from the outset, and from a really diverse range of backgrounds, cultures, ethnicities and beliefs, as well as involving international children in the whole feature set design. But also, crucially, in AI, when you are in the field of data labelling, we give our labellers and our classifiers the freedom to interpret data in the way that they would naturally interpret it, rather than guiding them and telling them specifically how to annotate and label it. You have to allow huge freedom in your construction of safety tech to ensure that those biases don't exist. We've engaged with well in excess of 2,000 children of different ages and backgrounds, as I've said, and it's utterly fascinating, the insights that you get just as human beings sitting on the periphery watching some of the discussions play out. To ensure that you do get that diversity and that lack of bias, it is all about involving as many people as you possibly can. 

AD       Yeah, I think it's a really great question, and when I think about diversity and inclusion and safety by design, I always like to take a step back first. I just think it's so important, when we're talking and thinking about safety by design, that we acknowledge and remember that all of these forms of discrimination or violence or abuse that are experienced by a specific group in online spaces aren't new, right? It's a continuum of what these people face offline, just manifesting in a new space. And just like we think about diversity and inclusion in our physical, offline interactions, we need to think about how these issues play out online – my skin colour and my gender don't just disappear when I open up an app on my phone. We all have different identities, and all of these different identities mean that we have different experiences and therefore may have different needs. I think what can sometimes be missing, or perhaps isn't given enough attention, is this explicit consideration of how these different identities – whether it's your gender or your race or your sexuality, or the combination of all of those, really any protected characteristic – can play a part in the user experience or the user journey. And these experiences of inequality that are faced offline don't just manifest online; they can be exacerbated, they can be amplified online, just because of the way online spaces are, depending on the space that we're talking about. 

So I think that really leads to what I was talking about in my last answer: using the research to arm yourself with the considerations you need to think of when introducing a new product feature or updating a policy, because if we don't consider these experiences, the consequences are very, very harmful, right? We're seeing, for example, droves of women leaving online spaces because they either experience abuse for speaking out or they see someone else that looks like them experience abuse for speaking out. So it has a really huge impact on whose voices we hear and whose stories we share; it has a huge impact on your user base; and I think it also has a really big impact on whether or not your company is seen as a safe place for certain groups of people, right? That's something you need to think about as well. One thing that we're doing at the Web Foundation is actually pushing beyond safety by design and thinking about gender by design, because I think it's so important for designers not only to think through the safety questions, but through all of their considerations and questions at every stage of the process: how can their decisions impact the most marginalised groups, including women and women with multiple intersecting identities, and how are they addressing and mitigating against these potential harms? Are their decisions evidence-based and consultative in nature? Ideally, that would embed safety by design principles but take it a step further as well. 

CW      Awesome, yeah, thank you. I am now going to turn to audience questions, and we are getting a lot of them, so I'm going to do my best to synthesise a lot of the related ones. The first one I'll put to the panel is a general question around balancing safety and privacy. One of the first questions we got was: all right, there are these products that have end-to-end encryption, and that's really great from a privacy perspective, but you might miss a lot of abuse that way. And there are a few others generally acknowledging that there is this tension between a lot of privacy, perhaps, and a lot of safety. So, the question I would put to the group is: well, is there a tension there, and how do you navigate that apparent tension? What are some solutions that particularly smaller, younger companies can be thinking about when they approach those questions? 

RP       So Charlotte, there's a huge tension there, certainly in our sphere of safeguarding children. It's balancing the absolute fundamental right that a child has to privacy against a parent's primal desire to keep their children safe online, and that's pretty tricky. If you tell a parent that their child is being bullied, the natural question they're going to ask is: well, who's bullying him or her? And if you're not careful, and if you haven't actually educated and guided and advised the parent on how to deal with the information that you give them, then the parent themselves can actually make it worse. 

And classically what has happened in the history of parental controls technology is that parents have misunderstood the information that they've seen, because it's very hard to understand what a child is doing online; parents typically don't know who they're talking to, what they're saying, what they're hearing and so on. Parents have a habit of wading in and actually causing bigger problems, especially, for example, when they take the phone away from their child, which these days is the modern version of grounding your child: you don't send them to their room, you actually take their phone off them. The problem with that is that you've intruded on your child's life, and of course for children these days their real life actually is spent online; it isn't actually cuddling with mum and dad on the sofa on a Saturday night watching telly.

If you were to take the average teenager's phone off them and try to make head or tail of what's going on on their device, you won't be able to work it out, because they have multiple simultaneous discussions on multiple apps, all at the same time, and they use tribal language and so on. But it is so, so crucial, as we've learned in the safeguarding landscape, to defend the child's right to privacy. It's so, so important to do that, because if you don't, you can drive them underground. We've found a balance in the way our technology works: we never let mum or dad see what the child is actually saying, what they're messaging or the imagery that they're looking at, but instead we guide them with what we call a safety indicator that tells them of their child's proximity to risk without actually telling them exactly what's going on. And we do feel that part of this whole privacy and safety landscape is to educate mums and dads as much as it is to keep children safe online. It's a crucial part of the safeguarding landscape.

CW      Thank you. If there are any other comments on safety and privacy we can cover those; otherwise we've got more questions, so I don't want to cut anybody off, but we've also got loads of audience questions. All right, let's keep moving. One very insightful question here, I think, and one very close to my heart, which is examples of metrics we use when evaluating success. So this person says: how do you build metrics when you don't necessarily know what safety incidents you've missed? And I know often in trust and safety we talk about, well, if we do a good job nobody knows and nobody cares, and you make one mistake and everybody cares, right? So, what are some examples in your experience of metrics that have really worked, from both a business perspective and a user safety perspective? Are we actually making an impact here? Are we actually making this a safer space?

TB       I think that's a great question, and I think it's also one of the most difficult things to do, to identify what those metrics are for safety. Online we can see a lot of things, like bots, and think about how easy it is to create metrics around that, but then think about how difficult it is to create metrics not only around online behaviour but around anything that may happen offline that may be connected to a platform. It can be very difficult. One of the things I want people to see too is that when you create metrics, first of all you have to have good data. You have to understand what's happening on the platform: what are the trends around what we're seeing, not just with product but with the actual problems that are surfacing or may not be surfacing? We also have to go external: not only what are we seeing on our platform, but what are people in the community seeing? What are experts seeing? What does law enforcement see? What are women's groups and social justice organisations seeing? What are the trends that are arising in our communities, so that we can really understand the problem? Until you have clean data, until you set up a system of defining safety, of defining those categories of harm, and actually start to track and understand those, it's very difficult to set any type of metrics. And if you have a product such as a safety centre, setting up metrics around usage alone is not going to tell you a lot; it might tell you, if you have low usage, that you're not communicating or educating enough in that space.

So, for people to see success: overall you want to be reducing incidents at the end of the day, or you may see high numbers just because you're doing the right thing. You see high numbers of reports because people feel comfortable reporting, because they know you're taking action, so you have to look at the context around that. I tend to say you need to have incident metrics but you also need to have sentiment metrics, because it's just as important for people to feel safe as it is to achieve that actual reduction of incidents. People want to know that they're welcome, that they can come into this space and feel safe, so designing metrics around sentiment is equally as important as designing metrics around incident reduction. But you can't do any of that until you have good clean data and until you actually know what's happening and you're tracking what's happening on your platform.

AZ       I really think, Tracey, you covered it so well, so I don't have too much more to add, but I just want to echo everything that you said. I think the transparency around that data, and having it publicly available, is really, really important, especially for civil society organisations or other groups that are trying to help build solutions, because so many times it's frustrating when you don't actually know exactly what's happening and, therefore, your policy recommendation is to release data, versus actually trying to suggest something concrete. So an example that I like to think of is: let's say you are experiencing abuse on a platform, and reporting is basically the recourse that you have from the company to get some type of justice out of the fact that you've experienced abuse. Imagine the company is letting us know widely, as their users, the incidence of abuse that's being reported, disaggregated by the type of abuse, so let's say it's identity-based abuse, and not only the type of abuse that's reported but the outcome, right? Because if there's a high level of people reporting identity-based abuse, and that abuse is coming back as not a violation of the community terms and standards, then one of two things is happening: either your users don't actually understand the policy of what you're saying is identity-based abuse, so you need to do more education around that, or your content moderators aren't being fully informed about what you mean. But it's so difficult to come up with something concrete when we don't have those stats and that data. So I think it's really important to have good metrics, to make those transparent, to make those public and open, but also to be as disaggregated as possible when it comes to what you're releasing and how you're doing your analysis, because then that can allow for more helpful and granular solutions, I think.

CW      Well, you just uncovered my personal soapbox, which is: you are here, and you are a young company, you're working at a smaller company, and you're thinking, what can I do, what's a step I can take to make things safe? Be thinking about your data, be thinking about your data structures, what you log and what you don't. Of course there are implications to logging everything, right? But the more data you have, the easier it is going to be for you in general to figure out what's going on and to inform the public about what's going on. We're seeing a huge movement towards increasing transparency from tech companies, and I think that is a really, really good thing and also a really, really hard thing for a lot of companies, who when they're just starting out and trying to just keep the site up are not thinking, here's how specifically I architect all of my data to make sure it's easy to pull if we ever need to look at it, right? No one's thinking like that. But you could be the person thinking of that by attending this panel, because all of that just makes it a lot easier to figure out what's happening and to be able to keep people informed about what's happening.

RP       Can I just add to this, because I think it's such an important aspect of the whole safety tech and safeguarding landscape. You must measure impact; if you don't measure impact, how can you possibly improve? And many safety tech companies are deploying artificial intelligence, so it's the automation of content filtering, crucially in real time as well. And again, it links to my opening remarks about GDPR and understanding the more granular details behind GDPR, because if you've written and deployed safety technology that automatically filters harmful content, or is designed to automatically filter content, you have a duty to understand whether it's doing it properly or not. It goes back to the whole point about bias and diversity and all those sorts of issues. And so you have to enter into this world of what's called automated decision-making: your technology has to give the user the opportunity to object to what your technology has just done. And that in itself opens up a really fascinating psychological issue, because children, in particular, have a habit of complaining about anything and everything if they think it's impinging on their rights to freedom and privacy and all the rest of it.

CW      That's not just children, that’s people I believe!

RP       Too right, and I would definitely be one of those, there's no doubt about that. It is so, so important that you measure impact. So our technology detects threats and goes into overdrive when it sees a threat, and it will filter that threat; in particular, it does it in real time, and we record that event. No human being sees what the child was doing, it's technology here, but there's a mark that says, oh yes, this technology has just intervened, it's done something and it's had an impact. And I think it's important to also recognise in today's world that profit and purpose can co-exist, and that's no bad thing. So you will find, ultimately, if you run a brilliant safety tech business, you will generate profits, and that's a sign your profits are a by-product of brilliant safeguarding. There are different ways of measuring it, but you can absolutely measure the impact of your technology, and you have to do it, otherwise how can you possibly improve and therefore safeguard your users even more effectively?

CW      We have just five minutes left here, so I wanted to close with a lightning round, for folks coming here and wondering, how can I learn more about this? I want to go out and read some more, listen some more. I'll just ask: is there any sort of one resource or set of resources you might point people to? Whether that's an article you read recently, a book that you know about, a great podcast, a great YouTube video? I can go first. As I mentioned, I am Executive Director of the Trust and Safety Professional Association; we have a sibling organisation called the Trust and Safety Foundation Project, and on its website (it's TSF foundation, oh man, now I'm getting put on the spot here) we have case studies. You can actually go to our CogX booth and see a link to those case studies, and that's a way that people can kind of put themselves in the shoes of folks in tech making decisions and understand: okay, what considerations are at play? What do people need to think about when it comes to trade-offs, etc.? And that can be really helpful when you're thinking through questions of safety by design. Anything else?

RP       Yeah, I would say visit the SafeToNet Foundation. The SafeToNet Foundation runs regular podcasts on this whole topic, and they're absolutely fascinating interviews with the great and the good when it comes to tackling online harms and so on. So there's some fantastic content there, but also approach OSTIA, the Online Safety Tech Industry Association. There are tons of people in there that are just driven to keep children, and people in general, safe online, and you will find a wealth of resources and expertise.

CW      Right.

TB        And I would just echo, Charlotte, your recommendations, because I think you have valuable resources, and it's about corporations not working in silos; it's about working together as industries, because there's so much learning out there, so organisations like yours that are bringing industries together are so valuable. And then, as I think I mentioned earlier, the eSafety Commission in Australia is very innovative in what they're doing, providing some really great information and assessment tools for organisations out there, so if you just Google them you'll be able to find those tools.

AZ       So at the Web Foundation we're still at the start; our journey is just kicking off in terms of our work on this issue and using human-centred design thinking and design principles. At the moment on our website we have a series of blogs that have the key takeaways from our consultations on online abuse, so you can find out a little bit more about the trends and the patterns of how online abuse affects different groups. But as we kick off these policy design workshops, which will be happening in the next few weeks, we'll be releasing a report and findings from these workshops, and the whole point of these workshops is for companies to come together and commit to testing and implementing some of the solutions that we've co-created at the UN Generation Equality Forum, which celebrates the 25th anniversary of the Women's Rights Declaration. So all of these materials will be up on the website, and hopefully some of the gender by design principles and some of the tools that we'll be co-creating will also be up there as well, so there'll be tools that can be used and shared widely so that other folks can see how to do this work in a multi-stakeholder way that centres the experiences of those that are most marginalised in these spaces online.

CW      Fantastic. Well, thank you all, panellists, so much for spending your time with us here today; that's Tracey and Azmina and Richard, thank you. Thank you to everybody who tuned in to check this panel out. We hope you will enjoy the rest of your time here at Safety Tech. We're going to turn you back over to the main stage, so please join us back there; I think you're going to have a couple of minutes to take your bio break, and we'll see you there. Thank you again.
