Lessons from Innovation in Safety Tech: The Data Protection Perspective

The Information Commissioner’s Office (ICO) Innovation Hub recently provided advice to participants in the UK Government’s Safety Tech Challenge Fund on key areas of data protection compliance. In this blogpost, the ICO shares lessons learned from its participation. These lessons are important for developers of safety tech and for platforms considering the adoption of such technology.

The Safety Tech Challenge Fund aims to explore and evaluate proofs of concept for detecting and flagging Child Sexual Abuse Material (CSAM) while respecting end-to-end encrypted environments and individuals’ right to privacy. The ICO seeks to support the adoption of data protection by design and default within the solutions, and to understand how these proposed proofs of concept operate within the law.

The solutions in the Fund are designed to function on platforms which operate end-to-end encryption. The use of end-to-end encryption is itself a form of safety, providing users with integrity and confidentiality of their data. The participants’ task in the Fund was to show that their products do not trade one type of online risk for another, such as sacrificing the integrity and confidentiality of users’ data in order to monitor images sent on a platform.

Fund participants adopted a variety of approaches, from client-server matching of hashes of known CSAM, through to models developed to detect and respond to unknown images by estimating age and detecting nudity. All the innovations shared a common goal: to detect CSAM while maintaining the integrity and confidentiality of personal data as far as possible and meeting the standards of data protection law in the United Kingdom.
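
To make the hash-matching approach concrete, below is a minimal Python sketch; it is illustrative only and not any participant’s actual design. It assumes a set of known-CSAM hashes supplied to the client, and it substitutes a plain SHA-256 digest where real systems use perceptual hashes (such as PhotoDNA or PDQ) that tolerate resizing and re-encoding.

```python
# Illustrative only: a simplified client-side check against a list of hashes
# of known CSAM, run before the image is encrypted for transmission.
# Assumption: real systems use perceptual hashing with a similarity threshold;
# a SHA-256 digest only matches byte-identical files.
import hashlib


def image_hash(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash: a plain SHA-256 digest."""
    return hashlib.sha256(image_bytes).hexdigest()


def matches_known_csam(image_bytes: bytes, known_hashes: set) -> bool:
    """Only the hash comparison happens here; the image itself does not
    need to leave the device for this check."""
    return image_hash(image_bytes) in known_hashes


# Usage sketch: the client runs the check, then either routes the content into
# the platform's safeguarded review process or sends it as normal.
known_hashes = {"0f1e2d3c..."}  # placeholder entries from an authoritative hash list
if matches_known_csam(b"example image bytes", known_hashes):
    print("flag for the platform's review process")
else:
    print("proceed with end-to-end encrypted transmission")
```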

For these technologies to be adopted, controllers (platforms and providers using safety tech solutions) must address all principles and standards of data protection law. The Challenge Fund was a place for exploration of prototypes and proofs of concept. The solutions were not assessed against every data protection standard because there was not a live processing environment to do so. But for those questions that could be addressed within the Challenge Fund, we are encouraged by the suppliers’ efforts on data protection.

Participants have worked to minimise any collection of information beyond that necessary to detect CSAM. They have also introduced necessary safeguards to make sure their systems are accountable to individuals, a task of huge importance. These efforts to establish ‘data protection by design and default’ are what responsible innovation looks like.

Consent

The solutions proposed to scan imagery prior to its encryption for transmission. To achieve that, all the technologies in the Fund aimed to access data held on an individual’s device, or to install software on an individual’s device. User consent for these practices is required under Regulation 6 of the Privacy and Electronic Communications Regulations unless exemptions apply.

Obtaining consent is a requirement for those platforms and providers (like an app developer or smartphone provider) that look to adopt such solutions. However, it is also important that safety tech suppliers show an understanding of how their systems affect a user’s experience. Suppliers should provide information about the processing to platforms and providers installing such safety tech solutions. Those providers should present individuals with meaningful information to allow them to decide whether to adopt the safety technology.

Purpose-driven and data minimised

There is no one-size-fits-all approach for these types of safety technologies. Some look to detect known content, others unknown content. Some focus on how to securely report detected content to a moderator, while others focus on teaching users about the risks of sharing nude images. These are fundamentally different purposes and means, and the data processing should reflect that.

Controllers (platforms and providers) must ensure that the personal data they are processing is adequate, relevant, and limited to what is necessary to achieve the specific purpose. This is known as the data minimisation principle. For those developing safety tech solutions, it is important to demonstrate how your solution meets this principle in its processing.

For example, in an artificial intelligence model that seeks to detect unknown CSAM through identifying nudity and estimating age, the risk of capturing irrelevant lawful adult nudity must be mitigated. One approach to this could involve building the system to first assess for age in the image, followed by nudity.
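
One way to picture that ordering is the hedged sketch below. The estimate_age and detect_nudity callables are hypothetical classifiers standing in for whatever models a supplier uses, and the age threshold is illustrative; the point is simply that nudity analysis only runs when the age estimate suggests the image may depict a child.

```python
from typing import Callable, Optional


def analyse_image(
    image: bytes,
    estimate_age: Callable[[bytes], Optional[float]],  # hypothetical age-estimation model
    detect_nudity: Callable[[bytes], bool],             # hypothetical nudity classifier
    adult_age: float = 18.0,                            # illustrative threshold
) -> Optional[dict]:
    """Staged check: assess apparent age first, and only run nudity detection
    when the image may depict a child. Lawful adult imagery is discarded at
    the first stage without further analysis or retention."""
    apparent_age = estimate_age(image)
    if apparent_age is None or apparent_age >= adult_age:
        return None  # discard: no further processing of the image
    if detect_nudity(image):
        return {"flag": True, "apparent_age": apparent_age}
    return None
```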

Suppliers should begin by identifying the minimum amount of personal data needed to fulfil the specific purpose (detecting CSAM) and develop the processing from there. Suppliers should avoid seeing the task of CSAM detection in end-to-end encrypted networks as merely adding a new module to the end of an existing processing operation. Otherwise, the processing risks being inadequate and drawing in unnecessary data.

Meaningful human oversight

Developers of algorithmic models for detecting unknown images must carefully consider the role of meaningful human oversight. The data protection rules on solely automated decision-making, set out in Article 22 of the UK General Data Protection Regulation, require that users are able to obtain human intervention in respect of a solely automated decision with a legal or similarly significant effect.

Decisions with a legal or similarly significant effect include automated reporting to law enforcement. Reporting was outside the scope of the Challenge Fund. However, it was raised by suppliers, and in the context of unknown image detection, human oversight must be carefully considered.

Safety tech suppliers producing unknown image detection solutions need to do more work on false positives to reduce the harm caused by unwarranted intrusion. False positives were present in these models during the Challenge Fund. Meaningful human oversight can help to mitigate the risk of criminalising individuals for sharing innocent images.

Meaningful human oversight and obtaining human intervention have similarities in terms of what an unknown image detection system should produce for human understanding. The difference arises in the purpose of the technology and the timing of the human’s involvement. Meaningful human oversight relates to technologies whose purpose is to support or enhance an individual in making a decision (e.g. detecting an image to flag to a human member of the moderation team for assessment as unknown CSAM). Obtaining human intervention relates to a technology that determines what to report to an authority, without human input, and occurs when an individual requests the review of a solely automated decision (e.g. an automated decision to report to a law enforcement authority).

For both meaningful human oversight and obtaining human intervention, human reviewers must have active involvement in checking the system’s decisions. This includes the authority and competence to go against an incorrect decision. Finally, reviewers must be able to weigh up the decision, considering all available input data and taking other factors into account.

For example, the system could highlight areas that weighed heavily in favour of flagging the content as CSAM. Understanding the basis for the flag will help the reviewer to play an active role and avoid routinely agreeing with the automated decision.
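
As a rough illustration of the kind of output that supports an active reviewer, the sketch below defines a hypothetical record a detector might hand to a moderator; none of the field names are taken from the Fund’s solutions.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class FlagForReview:
    """Hypothetical record an unknown-image detector could pass to a human
    moderator, so the reviewer sees why content was flagged rather than a
    bare yes/no outcome."""
    content_ref: str                 # reference to the content under review
    model_score: float               # overall model confidence
    contributing_factors: List[str]  # e.g. "estimated age below threshold"
    highlighted_regions: List[Tuple[int, int, int, int]] = field(default_factory=list)
    # bounding boxes (x, y, width, height) the model weighted heavily


# A reviewer presented with this record can weigh the model's reasoning against
# other context and has concrete grounds to overturn an incorrect flag.
```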

The Challenge Fund has been an important phase of industry and regulatory collaboration on safety tech. As a regulator we will continue to be agile, collaborative and proactive, working alongside other regulators such as Ofcom, and engaging with government, industry, and civil society. We will seek to shape the development and impacts, both online and offline, of these technologies to uphold the safety and privacy of citizens.