Diversity, inclusion and fairness in safety tech: expert panel provides key insights

Turing Institute, FutureGov and SafeToNet inspire attendees and provide practical advice in the first themed event held by the Safety Tech Innovation Network.

Online harms disproportionately affect some of the most marginalised groups in society. So as the ‘safety tech’ industry grows, it’s vital that those building the technology have the tools and guidance they need to design products that reflect the needs and experiences of diverse groups.

On 5 November 2020, a panel of experts came together to provide Safety Tech Innovation Network members with a snapshot of the issues and practical recommendations – and to echo the call for diversity, inclusion and fairness to be central values in safety tech.

Head over to the KTN website to see a recording of the event.

A summary of resources and key insights from each of the talks is provided below.

If you are interested in helping us develop the safety tech diversity and inclusion community of practice further, drop us a line at safetytech@dcms.gov.uk.

Building inclusive technology and diverse workplaces through data | Dr Erin Young

Erin’s research at the Turing Institute examines the gendered practices that shape, deploy and govern AI systems and their applications. 

In her talk [06:21-15:00], Erin argues that technology is not neutral – rather, it is political, inscribed with the values of those who develop it. The same holds for data: existing offline inequalities can become encoded, perpetuated and amplified within AI systems.

Erin’s recommendations for how we can work with data to put diversity and inclusion at the core of safety tech are:

  • Improve access to quality, diverse data, and invest in strong, transparent and ethical infrastructures (for example, by fostering secure data sharing through collaborations and partnerships). Missing or incomplete datasets will not increase our understanding of online safety, and organisations should seek to address this. The Turing Diversity Dashboard is a tool organisations can use to monitor diversity and inclusion within workplaces, drawing on the workflow tools GitHub and Slack.
  • Greater awareness of ‘social’ processes in the development of (safety) tech is needed. There is a growing field of technical methods for de-biasing data – for example, IBM’s AI Fairness 360 toolkit (sketched after this list) – but further research into the social context is also needed to complement these.
  • Ensure that new data-driven technologies are developed within a regulatory framework that prioritises fairness, accountability, explainability and transparency. The Turing Institute’s recommendations on this theme are included in its response to the Online Harms White Paper consultation.
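
To make the de-biasing point concrete, here is a minimal sketch using IBM’s open-source AI Fairness 360 toolkit (the aif360 Python package): it measures disparate impact on a toy dataset, then applies the toolkit’s Reweighing mitigation. The dataset, column names and group definitions are invented for illustration – they are not drawn from the talk.

# A minimal sketch of one technical de-biasing method, using the
# open-source aif360 toolkit. All data here is an invented placeholder.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'flagged' is the label a safety classifier would learn to predict.
df = pd.DataFrame({
    "sex":     [1, 1, 1, 0, 0, 0, 1, 0],
    "age":     [25, 34, 41, 29, 38, 22, 31, 45],
    "flagged": [0, 0, 1, 1, 1, 0, 0, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["flagged"],
    protected_attribute_names=["sex"],
)
privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Disparate impact of 1.0 means the positive-label rate is equal
# across groups; values far from 1.0 signal encoded inequality.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact before:", metric.disparate_impact())

# Reweighing is a pre-processing mitigation: it adjusts instance
# weights so the training data looks balanced to a downstream model.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)
metric_after = BinaryLabelDatasetMetric(
    transformed, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact after:", metric_after.disparate_impact())

Reweighing is only one of the toolkit’s mitigations; as Erin stresses, such technical fixes complement, rather than replace, attention to the social context in which the data was produced.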

Inclusive Design – Why Empathy Isn’t Enough | Lily Dart

As Experience Director of FutureGov, Lily Dart has developed inclusive ways of working, hiring practices and design approaches to increase the diversity of teams and maintain an effective and scalable culture.

In her talk [15:30-24:00], Lily argues that designers of web products want to understand enough about users to make decisions on their behalf. But this understanding can operate on different levels:

  • Sympathy – feelings of pity or sorrow for someone else’s misfortune; ‘empathy labs’ often work on this model.
  • Empathy – an ability to understand and share the feelings of another. 
  • Lived experience – the experience and choices of a given person, and the knowledge they gain from them. Only by incorporating lived experience into the heart of the design process can products be truly inclusive.

Lily suggested three practical approaches in which safety tech companies could incorporate lived experience within the design process:

  • Team structure – Consider who is in the team designing the product or service. Do they have sympathy, empathy or lived experience? Do they represent the group you’re designing for? If you’re doing research, who’s interpreting it? To support inclusion, companies should:
    • Prioritise having a diverse team;
    • Invite missing empathetic perspectives into the design process;
    • Train someone with lived experience to work as part of the team.
  • Combat bias. Consider what assumptions you hold about the problem or solution. What bias might these assumptions represent? Do you understand how the affected users currently solve their own problems, and why? To support inclusion, companies should:
    • Unpack assumptions and recognise bias upfront;
    • Plan opportunities to check your understanding with affected groups;
    • Devolve the power of decision-making to affected groups where possible.
  • Go broad. Consider whether you are relying on a single person to represent the views of an entire community. Have you designed your process to recognise differing perspectives and intersectionality? Companies should:
    • Use co-design workshops to invite diverse groups of people into key parts of the design process;
    • Allow them to frame problems, prioritise and ideate solutions that work for them;
    • Create (and pay) a panel of people from the affected groups to consult on decisions moving forward.

How SafeToNet builds diversity and inclusion into its teams and products | Nyasha Chavunduka and Emily Nicholas

UK safety tech company SafeToNet has delivered real change by reflecting diversity and inclusion throughout its business, including in hiring, user research and its use of AI.

Nyasha Chavunduka, SafeToNet Head of HR and Finance, discussed the materials SafeToNet has in place to support diversity within the organisation – for example, guidance on hiring practices. These materials will not work on their own, Nyasha said, without empathy being part of company culture. While people are attracted to work for SafeToNet for many reasons, what keeps them there is being part of an environment in which respect and dignity are valued: 84% of staff feel that management treats them with fairness and respect. Nyasha shared insights into how this was achieved – for example, through senior management modelling openness and transparency.

Emily Nicholas, SafeToNet Head of CyberPsychology, discussed how the organisation embeds diversity within the product development cycle. For example, SafeToNet:

  • uses its Youth Advisory Board to co-design products. At the advisory board, the team speaks to young people about their experiences and gets (often brutal) feedback. Children also help SafeToNet shape its key priorities – for example, around privacy, and the need to provide mechanisms through which children can give direct feedback.
  • ensures the data used to train machine learning systems is as diverse and inclusive as possible. For example, for supervised learning – in which annotators label the data that underpins AI detection mechanisms – SafeToNet employs a diverse range of annotators, who can customise labels and add their own insights so that the decisions the AI makes can be justified. This helps to ensure SafeToNet’s tools are flexible and can accommodate the needs of different markets. A minimal sketch of this kind of multi-annotator labelling follows.
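
Here is a hypothetical illustration of the general technique: several annotators label the same item, each rationale is retained so decisions can be justified later, and a simple majority vote produces the training label. This is a sketch of multi-annotator labelling in general, not SafeToNet’s actual pipeline.

# Hypothetical multi-annotator labelling: rationales are kept for
# auditability and a majority vote yields the training label.
# Illustrative only - not SafeToNet's actual pipeline.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Annotation:
    annotator_id: str   # links back to annotator background/market
    label: str          # e.g. "harmful" or "safe"
    rationale: str      # free-text justification, kept for audit

def majority_label(annotations: list[Annotation]) -> str:
    """Return the most common label; ties surface for expert review."""
    counts = Counter(a.label for a in annotations)
    (top, n), *rest = counts.most_common()
    if rest and rest[0][1] == n:
        return "needs_review"   # disagreement is a signal, not noise
    return top

annotations = [
    Annotation("a1", "harmful", "Aggressive slang common in UK playgrounds"),
    Annotation("a2", "harmful", "Threatening in this context"),
    Annotation("a3", "safe",    "Reads as banter in my region"),
]
print(majority_label(annotations))   # -> "harmful"

Retaining each annotator’s rationale, and treating disagreement as a prompt for review rather than noise, is what allows a diverse annotator pool to genuinely shape the resulting model.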

Articles you might like

  • Online Safety Tech: 2022 Recap

    The online safety tech sector continued to grow substantially in the UK and across the globe in 2022. We wanted to take this opportunity to recap some of the key highlights.

  • Lessons from Innovation in Safety Tech: The Data Protection Perspective

    The Information Commissioner’s Office (ICO) Innovation Hub recently provided advice to participants in the UK Government’s Safety Tech Challenge Fund on key areas of data protection compliance. In this blog post, the ICO shares lessons learned from its participation – lessons worth noting for developers of safety tech and for platforms considering adopting such technology.

  • Safety Tech 101: How to identify and mitigate disinformation

    In our final Safety Tech 101 event of the series, Ryan Jackson (Manager, Safety Tech Innovation Network) caught up with Ant Cousins (CEO, Factmata) about how technology can identify and flag online content that includes false, misleading or harmful narratives.
