Safety Tech Unconference 2020: what we learned, what’s next

On 22 September 2020, we held what was perhaps the first ever event dedicated to people working in or around safety tech – the Safety Tech Unconference. In this blog, we share the key threads that ran through our conversations, and our next steps.

A new movement for safety tech

The aim of the Unconference was to start to bring together the 1,700 people working in safety tech with each other, and with interested people from non-profits, academia and wider industry. With nearly 200 participants from 100 organisations, it was a huge success and showed the energy and diversity of the thriving safety tech industry in the UK.

Although some safety tech companies have been around for a while, there are a large number of emerging SMEs across the safety tech sector. This means there is a real need to make connections and start conversations within the industry – particularly at the moment, when one drawback of working from home is the lack of those serendipitous conversations we have when we meet face to face.

The Unconference also provided a fantastic opportunity for participants to share new ideas and learn about interesting and innovative developments in safety technologies, and how these are increasingly being used to keep users safe – and we certainly learnt a lot.

What we talked about

We ran the following sessions during the Unconference. All attracted lively discussion and many excellent, thought-provoking contributions.

  • Accounting for Cultural Diversity when planning safety tech and best practice
  • Who are we designing for? Including parent and child voices in safety tech
  • Managing self harm and suicide content
  • How to deliver user safety without compromising privacy
  • Online safety: The regulatory ‘duty to support innovation’
  • Directions for UK #safetytech
  • Empowering and Protecting Vulnerable Users online
  • Safeguarding mental health of safety tech colleagues
  • Managing online communities: where tech can help, and where it can’t
  • Frontiers in abusive content detection: where could collaboration make a difference?
  • The Age-Appropriate Design Code: safeguarding children online

Key themes across sessions

Each session was brimming with ideas, positivity, hope and potential opportunities for collaboration. If there was a common theme, though, it was the relationship between people and technology: how can we use tech to combat some of the worst characteristics of the online environment, without stifling the best? A few key themes ran across sessions:

Diversity, inclusion and fairness

At the start of the Unconference, participants pitched a proposal for a discussion session focusing specifically on how we can build a diversity and inclusion agenda within safety tech. Many other sessions also touched on this. Key issues considered across sessions included:

  • Diversity of staffing: how can we use HR processes to ensure that staffing within our companies sufficiently reflects the users we try to serve?
  • Diversity of audience: how can we ensure that user research captures the needs of all users – in particular the most vulnerable?
  • Addressing data bias: how can we detect and address potential issues of bias in the datasets used to train AI solutions, and ensure that decisions made on the basis of AI are explainable?

Properly understanding audience needs

Only by ensuring that we have a thorough understanding of audience (user) needs can we see where technology can best complement human moderation and oversight. Groups discussed the need to understand:

  • How might we truly understand the end-to-end experience of vulnerable user groups, including ‘real-world’ as well as digital interactions, so that safety tech companies and social and/or gaming platforms can design ‘safer’ experiences?
  • What sources of insight currently exist, and where are there gaps – for example, where does insight into user needs and journeys exist but lack visibility, and where does it not exist at all?
  • What might the potential solutions be – for example, is there value in designing a shared central resource that pulls together user insight, which digital ‘makers’ (of safety tech and products) can draw on to design their products?

Combining humans and machines: community moderation and management

It takes a lot of work to create and manage a strong and engaged online community. The challenge is that what humans do well – empathise, understand context, make and communicate informed decisions – doesn’t scale. And while technologies such as AI and machine learning can scale, detecting patterns across vast swathes of content, they are not as good at the human stuff.
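One way to picture this division of labour is a simple triage pipeline, where the machine acts alone only on clear-cut cases and routes anything ambiguous to a person. Below is a minimal, illustrative sketch of that pattern in Python; the classifier, names and thresholds are our own hypothetical assumptions, not any specific tool discussed at the Unconference.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

# Hypothetical thresholds: the machine acts alone only when very confident,
# and hands ambiguous cases to a human moderator.
AUTO_REMOVE_THRESHOLD = 0.95
HUMAN_REVIEW_THRESHOLD = 0.60

def score_toxicity(post: Post) -> float:
    """Toy stand-in for a trained classifier: returns a 0–1 abuse score.

    A real system would call an ML model; this keyword heuristic exists
    only so the sketch runs end to end.
    """
    abusive_terms = {"exampleslur"}  # placeholder term list
    words = post.text.lower().split()
    hits = sum(word in abusive_terms for word in words)
    return min(1.0, 5 * hits / max(len(words), 1))

def triage(post: Post) -> str:
    """Route a post: machine handles clear cases, humans handle context."""
    score = score_toxicity(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "auto-removed"   # scale: machine acts on the clear-cut cases
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human review"   # context: people judge the ambiguous ones
    return "published"

print(triage(Post("1", "a friendly message")))  # -> "published"
```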

  • How can we go further than “reactive” moderation? ‘Taking stuff down’ is a blunt weapon that does not tackle the cause of the problem. How might moderation tools better support the type of individual customer relationship-building that is a feature of brands’ interactions on social media channels?
  • Where can the introduction of tech actively help users? Groups heard positive examples in which the deliberate use of chatbots had been effective in nudging vulnerable people – for example those considering self-harm – to speak to a helpline.
  • What does good look like? The Samaritans led a fascinating session on their new suicide and self-harm moderation guidelines. How can we best publicise these, and understand other areas in which moderators need support?

Privacy, data and legal issues

Working on online harms means that companies frequently need to deal with tricky issues of privacy, legality of content, and commercial considerations. These can be hard for any company to navigate, but particularly for SMEs with scarce resources. A recent blogpost by Unconference speaker Neil Brown sketches out some of the key issues.

  • Balancing user safety and privacy is always a challenge, but it can be done – and tools such as BBC Own It are helping to show the way. 
  • The introduction of the ICO’s age-appropriate design code was also seen by many participants as a positive step, with far-reaching consequences – the ICO’s discussion session on the Code at the Unconference was highly praised by participants. 
  • A shared baseline of guidance may be needed in key areas to support growth, for example illustrating what permissible data sharing under the GDPR looks like.

Next steps (aka, “if we make this thing too big, we’ll never solve it”)

Addressing online harms is hard. The challenge is massive and ever-changing, as is the potential for tech solutions to address the most toxic behaviours. This can make it difficult to know where to start.

Yet, as one of the participants observed – “if we make this thing too big, we’ll never solve it”.

So – over the next few months, we’ll be running standalone events focusing on each of the above main topics. We’ll give each topic a bit more room to breathe than we were able to at the Unconference. 

And around each, we’ll look to establish a small group of people willing to investigate the area further and to work together on practical, achievable next steps.

If that sounds interesting, check out our events page for more information.
