Safety Tech Challenge Fund – Supplier Showcase

The first showcase of solutions being developed as part of the Safety Tech Challenge Fund took place on 3 February 2022. 

The goal of the Safety Tech Challenge Fund is to support the development of innovative technology that can scan for, detect and flag child abuse material in an end-to-end encrypted environment while respecting user privacy. 

At the showcase, the five suppliers chosen to take part in the fund presented updates on work in progress to an audience that included Chris Philp, DCMS Minister for Technology and the Digital Economy.

Opening remarks

In his opening remarks, Minister Philp welcomed the role of the Challenge Fund in stimulating partnerships between start-ups, academic organisations and civil society organisations, and emphasised the role of online safety technology in enabling a strong, sustainable digital economy. 

Following the presentations from the suppliers, expert panellists offered reflections on the projects. The panellists were:

Dr Ian Levy, Technical Director at the National Cyber Security Centre (NCSC), emphasised how these solutions demonstrated ways in which it could be possible to prevent illegal content from being uploaded into an E2EE environment without passing on personal information:

What we will get out of this [Challenge Fund], even though it is early days, is really understanding the primary, secondary and tertiary effects of these sorts of technologies and identifying the changes we will have to make to the supporting ecosystem, such as how to manage the databases of CSAM and make sure the solutions are tracking the correct content. These things are critical to the acceptance of these solutions.

Dr Ian Levy (NCSC)

Stephen Bonner, Executive Director of Regulatory Futures and Innovation at the Information Commissioner’s Office (ICO), outlined the ICO’s support for the Challenge Fund, and for working with funded projects to ensure the protection of both user safety and privacy:

We have had some great supplier sessions; the engagement with all of the teams has been really positive. Suppliers have been given the opportunity to ask questions about the data protection requirements for these systems, and the fact that we are seeing this attention from the suppliers themselves is a positive sign of their awareness of the art of the possible when it comes to Safety Tech being able to meet the demands of users and the public in supporting trust in the technology to protect children.

Stephen Bonner (ICO)

Christian Papaleontiou, Deputy Director of the Tackling Child Sexual Abuse Unit in the Home Office, shared how the fund demonstrates technology and policy working in parallel to protect children online:

The Home Office is focused on tackling child sexual abuse in all of its manifestations. We recognise that to address this complex problem we need a multidisciplinary and multifaceted approach. This proper collaborative approach is exemplified through the Safety Tech Challenge Fund.

Christian Papaleontiou (Home Office)

Dr Claudia Peersman, evaluation lead for the National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online (REPHRAIN), outlined the independent evaluation approach taken to ensure transparency in the way that projects are assessed and in how lessons learned are shared:

At the REPHRAIN centre, we have set up a team of independent evaluators and experts in the fields of online child protection, cyber security, privacy-enhancing technologies, machine learning, and AI so that we can ensure rigour of process and that the learnings will be shared. Our first step will be to develop the evaluation criteria, which we will then publish for the public to review and provide feedback on. Then we [REPHRAIN] will collate this feedback and publish our finalised criteria before using them to evaluate each solution.

Dr Claudia Peersman (REPHRAIN)

Audience questions and answers

Following the expert panel, the session opened to questions from the audience. Responses to these questions are set out below.

Q: What risk assessment has been done to identify how each developer’s solution might be abused, and what safeguards are being adopted in response to those risks? 

A: The Safety Tech Challenge Fund is anchored in a set of Technical Principles that guide the proofs of concept as they are researched and developed; these principles include consideration of the risks of solutions being targeted by malicious users.

At the end of the programme, a full independent evaluation report will be published, sharing evaluation findings for each proof of concept. The evaluation criteria will be published for public comment during March 2022, ahead of project assessment.

Additionally, throughout the Challenge Fund, suppliers have engaged in discussions with regulators, academics and technical experts to understand how best to adhere to the Technical Principles by understanding risks from technical, social and operational perspectives.

Q: With regards to circumvention, is the UK Government planning to ban open-source projects, or criminalise the use of non-compliant software, in the Online Safety Bill? 

A: The draft Online Safety Bill does not ban or criminalise any particular type of technology. The Safety Tech Challenge Fund is deliberately technology agnostic. 

Q: Do any suppliers see the possible deployment of their tech at the Operating System level, assuming adoption/cooperation by major Operating System developers? What are the panellists/suppliers’ views of market viability? 

A: Solutions being researched and developed as part of the Safety Tech Challenge Fund are at the proof of concept stage. Assessing the deployability of tech at the Operating System level will depend on the specific use cases and technical approaches desired by platform providers and partners. Some examples pursued by the fund include on-device technologies, such as independent messaging applications with child sexual abuse material (CSAM) moderation technology built in, which can be downloaded onto existing Operating Systems.

The fund demonstrates that there are workable solutions ready for partners to research further and eventually deploy to the market. As noted, there is room for all solutions and approaches: the market is diverse, and solutions will therefore have to fit the specific needs of each partner.

Q: How long might it take to scale these ideas up to the commercial level? 

A: There are too many areas of uncertainty to provide a robust timescale, and it is important to note that any timescale would also need to reflect that one market-ready solution will not be universally functional across different platform architectures.

To reduce risk in a meaningful way across the whole landscape, a suite of tools that operate in different ways will be preferable, and each will face different technical and privacy challenges (and therefore different timescales).

Q: Are app/Operating System developers and consumers more interested in particular solutions than in others, such as server-side actions vs. entirely client-side implementation, reporting to law enforcement or other types of interventions, emphasis on known vs. first-generation content, etc.?

A: Solutions being developed as part of the Safety Tech Challenge Fund are at the proof of concept phase. These solutions are guided by a set of Technical Principles and aim to be feasible to implement. Given the complexity of child sexual exploitation and abuse (CSEA) and the diversity of the market, the fund recognises that no single solution will be favoured over others in the market and that an array of solutions will be needed to address CSEA at scale.

Recognising the need for multiple technical approaches to fit the diverse market, the Safety Tech Challenge Fund has emphasised the importance of being technologically agnostic when selecting suppliers to receive funding. As part of the Challenge Definition, suppliers were asked to: “make innovative use of technology to enable more effective detection and/or prevention of sexually explicit images or videos of children. Within scope are tools which can identify, block or report either new or previously known child sexual abuse material, based on AI, hash-based detection or other techniques”.

At the end of the fund, research into use cases will be conducted to further explore the applicability of solutions developed as part of the Challenge Fund.   

Additional responses to written questions 

Q: Are the Safety Tech Challenge Fund Suppliers conducting and publishing results of security and threat reviews? 

A: The Safety Tech Challenge Fund is anchored in a set of Technical Principles that guide the proofs of concept as they are researched and developed; these principles include consideration of the risks of solutions being targeted by malicious users.

At the end of the programme, a full independent evaluation report will be published, sharing evaluation findings for each proof of concept. The evaluation criteria were published for public comment during March 2022, ahead of project assessment.

Q: How can we promise that use of the technology will be limited to detecting Child Sexual Abuse Material?

A: The Challenge Definition called for innovative solutions which explicitly detect Child Sexual Abuse Material (CSAM): “All proposals must make innovative use of technology to enable more effective detection and/or prevention of sexually explicit images or videos of children.”

As solutions are developed, the Technical Principles and Challenge Definition act as guiding principles for suppliers to develop transparent processes with the clear purpose of detecting CSAM.

Q: What is the process for accuracy testing and assessing solutions for false positives, or tricky photos like historical photos? 

A: The approach to accuracy testing varies from project to project, according to the specific technical approach set out in each proof of concept. However, all testing is guided by the approach set out in the Technical Principles, and developed in discussions with tech and privacy experts. The outputs of each project will be assessed by the independent academic assessors.
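
For illustration only, the sketch below (in Python) shows one common way detection accuracy and false positive behaviour can be summarised on a labelled test set. The threshold, metric choices and example scores are assumptions made for the sketch; they are not part of any supplier’s methodology or the REPHRAIN evaluation criteria.

  # Illustrative only: summarising a detector's accuracy on a labelled test set.
  # The threshold and example scores are invented for this sketch.

  def summarise_accuracy(results, threshold=0.9):
      """results: list of (classifier_score, is_csam) pairs from a labelled test set."""
      tp = fp = tn = fn = 0
      for score, is_csam in results:
          flagged = score >= threshold          # the solution would flag this item
          if flagged and is_csam:
              tp += 1
          elif flagged and not is_csam:
              fp += 1                           # false positive: benign content flagged
          elif not flagged and is_csam:
              fn += 1                           # false negative: harmful content missed
          else:
              tn += 1
      precision = tp / (tp + fp) if (tp + fp) else 0.0
      recall = tp / (tp + fn) if (tp + fn) else 0.0
      false_positive_rate = fp / (fp + tn) if (fp + tn) else 0.0
      return {"precision": precision, "recall": recall,
              "false_positive_rate": false_positive_rate}

  # Example: three benign items and two known-positive items
  print(summarise_accuracy([(0.95, True), (0.40, False), (0.97, False),
                            (0.10, False), (0.92, True)]))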

Q: How do these technologies deploy at an international scale effectively and securely?

A: The Challenge Fund’s Technical Principles reflect the need for solutions to scale, run on large numbers of systems and identify a large set of offenders. In part, this involves discussion of the potential international dimensions of deployment, which will be examined during the development of the proofs of concept.

Q: How does the technology moderate content that may be mistaken as CSAM, but is not? 

A: The Safety Tech Challenge Fund explicitly aims to fund innovative solutions that detect Child Sexual Abuse Material (CSAM) in an end-to-end encrypted environment whilst respecting user privacy. The first Technical Principle directly addresses the need to accurately detect CSAM: “The [solution] highly reliably identifies illegal images, highly reliably ignores other images, and reports positive detection to a reporting service.”

Solutions were selected on the feasibility of their approach to detecting CSAM, including the processes they presented for accuracy testing with regard to detecting nudity, children and known CSAM imagery.

Different approaches taken by suppliers include:

  • Using split hash matching to detect known imagery (a minimal sketch of the matching step follows this list);
  • Combining nudity detection and age estimation using AI classifiers to detect unknown imagery;
  • Building an E2EE platform with built-in age verification and an AI-classifier-based CSAM detection model to detect known CSAM; and
  • Packaging a software development kit (SDK) that uses AI classifiers to detect live and moving video imagery via the camera applications on the device.
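
As a purely illustrative aid to the first approach above, the following sketch (in Python) shows the matching step of hash-based detection of known imagery: an image’s perceptual hash is compared against a set of hashes of known content, with a small Hamming distance tolerated so that re-encoded or slightly altered copies still match. The hash values, threshold and function names are invented for the example, and the client/server split that gives “split hash matching” its name is not shown.

  # Illustrative only: the matching step of hash-based detection of known imagery.
  # A real split hash matching design divides this work between client and server
  # so that neither party sees the other's full data; that split is not shown here.

  KNOWN_HASHES = {0b1011_0110_0101_1100, 0b0110_1110_0001_0011}  # hashes of known images
  MAX_DISTANCE = 2  # how many differing bits still count as a match

  def hamming_distance(a: int, b: int) -> int:
      """Number of bit positions in which two perceptual hashes differ."""
      return bin(a ^ b).count("1")

  def matches_known_image(image_hash: int) -> bool:
      """Flag the image if its hash is within MAX_DISTANCE of any known hash."""
      return any(hamming_distance(image_hash, h) <= MAX_DISTANCE for h in KNOWN_HASHES)

  print(matches_known_image(0b1011_0110_0101_1110))  # True: one bit away from a known hash
  print(matches_known_image(0b0000_0000_1111_1111))  # False: not near any known hash

In practice, the chosen distance threshold trades off robustness to small image changes against the risk of false matches.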
