What is Safety Tech?

Across the world, a new wave of companies is developing a huge variety of innovative products and services that help businesses better protect their online users from harm.

This page describes how the right tech can protect your users and your brand by helping you to:

  • Block illegal content
  • Detect toxic content and behaviour
  • Deliver kid-safe online experiences
  • Identify and mitigate disinformation
  • Protect devices and networks

Types of Safety Tech

Block illegal content

Technology can be used to hunt out and remove illegal content, and even prevent it from being uploaded in the first place. A network of trusted organisations works to give illegal images unique digital fingerprints, or ‘hashes’, which are collected into blocklists. Companies can then deploy a customised blocklist and automatically search their image libraries for matching or near-matching images, removing them and preventing further instances from being uploaded.

Blocklisting tech provides a trusted, reliable and cost-effective way for any organisation to protect its systems, users and moderators against the upload, download or viewing of known illegal imagery and videos. It can be deployed at server, product or system level. Organisations frequently combine blocklisting tech with AI-based tools that can detect new (i.e. not previously known) illegal content.
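To make the mechanism concrete, here is a minimal sketch of blocklist matching in Python. The blocklist entries are placeholders, and SHA-256 stands in for the perceptual hashes (such as PhotoDNA or PDQ) that real deployments use to catch near-matches as well as exact copies.

    import hashlib
    from pathlib import Path

    # Hypothetical blocklist of known-bad fingerprints, as supplied by a
    # trusted hash-sharing organisation. SHA-256 only matches exact
    # byte-for-byte copies; production systems use perceptual hashes.
    BLOCKLIST = {
        "placeholder-hash-1",
        "placeholder-hash-2",
    }

    def fingerprint(data: bytes) -> str:
        """Fingerprint image bytes (a stand-in for a perceptual hash)."""
        return hashlib.sha256(data).hexdigest()

    def scan_library(library: Path) -> list[Path]:
        """Find existing images whose fingerprints are on the blocklist."""
        return [p for p in library.rglob("*.jpg")
                if fingerprint(p.read_bytes()) in BLOCKLIST]

    def accept_upload(data: bytes) -> bool:
        """Reject a matching image before it is ever stored."""
        return fingerprint(data) not in BLOCKLIST

The same check can run at server, product or system level; only the point at which the fingerprint is computed changes.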

Detect toxic content and behaviour

Toxic interactions are driving users away from online communities and brands. An estimated 1 in 5 users abandon online networks, profiles or games because of harassment. This harms businesses as well as user wellbeing: brand reputation suffers, and moderation teams can become overwhelmed trying to deal with thousands of incidents per day with outdated tools.

Technology can provide a solution. AI-driven products and services are helping companies recognise and respond to toxicity in real time, with a high degree of accuracy. Used by many of the world’s biggest brands, these moderation tools can detect content or user behaviour which is illegal, harmful or outside a company’s terms of service, including harassment and threats, promotion of suicide or self-harm, child endangerment and violent extremism.

These tools are there to support human moderators: they help them recognise the most urgent risks and give them the information they need to protect their brands and communities. They also minimise moderator exposure to traumatising material; the introduction of AI-based support tools has been shown to reduce moderator workload by as much as 70%.
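The triage step can be pictured as a priority queue fed by a risk score. In the illustrative sketch below, the keyword table is a toy stand-in for a trained classifier, and the threshold is an assumed policy setting.

    import heapq
    from dataclasses import dataclass, field

    # Toy severity lexicon; a real system scores text with an AI model.
    SEVERITY = {"harassment": 0.6, "threat": 0.9, "self-harm": 1.0}

    def risk_score(text: str) -> float:
        """Return a 0..1 risk score (keyword stand-in for a model)."""
        lowered = text.lower()
        return max((s for k, s in SEVERITY.items() if k in lowered),
                   default=0.0)

    @dataclass(order=True)
    class Incident:
        priority: float                      # negated score: worst first
        message: str = field(compare=False)

    def triage(messages: list[str], threshold: float = 0.5) -> list[Incident]:
        """Queue only high-risk content for human review, most urgent first."""
        queue: list[Incident] = []
        for msg in messages:
            score = risk_score(msg)
            if score >= threshold:
                heapq.heappush(queue, Incident(-score, msg))
        return [heapq.heappop(queue) for _ in range(len(queue))]

Everything scoring below the threshold never reaches a human reviewer, which is where the workload reduction comes from.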

Deliver kid-safe online experiences

Age assurance technologies help companies assess the age of their users so they can offer a tailored user experience. In particular, these technologies can be used to ensure that children do not access features or content aimed at adult audiences, and also to help protect child-focused online communities. 

‘Age verification’ solutions offer the highest levels of confidence in a user’s age. These rely, for example, on the user supplying one or more official credentials to a third-party company, which then issues proof of age to the services they are accessing. Standards in this area are guided by BSI’s code of practice for online age checking, PAS 1296.
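A minimal sketch of the hand-off, assuming a simple shared-secret token format: the third-party provider signs an ‘age over N’ claim, and the service verifies the signature without ever seeing the user’s credentials. Real schemes typically use standards-based signed tokens; the secret, field names and token layout here are hypothetical.

    import base64, hashlib, hmac, json, time

    # Secret shared with a hypothetical age verification provider.
    PROVIDER_SECRET = b"example-shared-secret"

    def verify_age_token(token: str, minimum_age: int = 18) -> bool:
        """Accept a provider-issued proof of age of the (assumed) form
        '<base64 claims>.<base64 signature>', with padded base64."""
        try:
            payload_b64, sig_b64 = token.rsplit(".", 1)
        except ValueError:
            return False  # malformed token
        expected = hmac.new(PROVIDER_SECRET, payload_b64.encode(),
                            hashlib.sha256).digest()
        if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
            return False  # not signed by the provider
        claims = json.loads(base64.urlsafe_b64decode(payload_b64))
        return (claims.get("age_over", 0) >= minimum_age
                and claims.get("expires", 0) > time.time())

The service only learns ‘this user is over 18, until the token expires’, not who the user is or which credential they supplied.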

Other age assurance products combine a variety of data sources, such as biometric and behavioural signals, to give companies information on the likely age bands of service users while protecting user privacy.
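One way to picture age estimation: several independent signals each produce a rough age, and the product reports only a band plus a confidence, never the raw data. The signal names, band boundaries and confidence formula below are purely illustrative.

    # Toy fusion of per-signal age estimates (in years) into an age band.
    def estimate_age_band(signals: dict[str, float]) -> tuple[str, float]:
        estimate = sum(signals.values()) / len(signals)
        spread = max(signals.values()) - min(signals.values())
        confidence = max(0.0, 1.0 - spread / 20)  # agreement => confidence
        if estimate < 13:
            band = "under 13"
        elif estimate < 18:
            band = "13-17"
        else:
            band = "18+"
        return band, confidence

    band, conf = estimate_age_band({"facial": 16.0, "behavioural": 14.5})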

Identify and mitigate disinformation

The spread of inaccurate and false information online can ruin lives. It can promote hate, damage people’s health and undermine trust in democracy. A company’s brand and reputation can be damaged in a matter of minutes.

Many of the world’s top brands use safety technology to protect their users from false content on their platforms, and to scan the wider web to detect and monitor the spread of any disinformation relating specifically to their company.

These technologies can detect a range of inauthentic behaviours and content, from coordinated accounts created or taken over to push specific messages, to botnets and other elements that are indicative of large-scale disinformation campaigns. They can also help to identify manipulated media, such as doctored photos.
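One coordination signal is simple enough to sketch: many distinct accounts posting near-identical text within a short window. The field names and thresholds below are illustrative, not taken from any production system.

    from collections import defaultdict

    def normalise(text: str) -> str:
        return " ".join(text.lower().split())

    def flag_coordinated(posts: list[dict], min_accounts: int = 10,
                         window_secs: int = 3600) -> list[str]:
        """posts: dicts with 'account', 'text' and 'timestamp' keys.
        Returns message texts pushed by suspiciously many accounts."""
        clusters = defaultdict(list)
        for post in posts:
            clusters[normalise(post["text"])].append(post)
        flagged = []
        for text, group in clusters.items():
            accounts = {p["account"] for p in group}
            times = [p["timestamp"] for p in group]
            if (len(accounts) >= min_accounts
                    and max(times) - min(times) <= window_secs):
                flagged.append(text)
        return flagged

Real detectors combine many such signals, from account creation dates to shared infrastructure and posting rhythms, alongside media forensics for doctored images.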

Companies using these technologies can build on this analysis to moderate content, remove violations of their rules, or add warnings and flags to harmful material. Apps and plugins aimed at users can also help build audience resilience to both misinformation and disinformation, advising on the reliability of a website or news article and steering readers towards trustworthy content.

Protect devices and networks

Web filtering technologies regulate the web traffic that is accessible on a device, at a location or across a network, deciding what is allowed to pass through.

These services allow users to choose which types of content are appropriate and allowed through, and which are excluded: schools, for example, may opt to block gambling and other adult sites. Filters also draw on ‘blocklists’ of known bad domains or URLs, and the most modern can scan websites for undesirable content in real time.
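The core decision can be sketched in a few lines. The domain lists and categories below are hand-written stand-ins for the vendor-maintained databases a real filter would use.

    from urllib.parse import urlparse

    BLOCKED_DOMAINS = {"bad.example"}            # known-bad blocklist
    CATEGORY = {"casino.example": "gambling"}    # domain -> category
    BLOCKED_CATEGORIES = {"gambling", "adult"}   # e.g. a school's policy

    def allow(url: str) -> bool:
        """Decide whether a web request may pass through the filter."""
        host = urlparse(url).hostname or ""
        if host in BLOCKED_DOMAINS:
            return False
        if CATEGORY.get(host) in BLOCKED_CATEGORIES:
            return False
        # A modern filter would also scan the page content in real time.
        return True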

In an education setting, filtering technologies can also send safeguarding alerts to a school’s Designated Safeguarding Lead. Guidance for schools on choosing filtering and monitoring software can be found on the SWGfL website.

Recent years have also seen significant advances in ‘device-level’ protection: apps or features that sit on a child’s mobile phone and create a safer online environment at the point of interaction. These use AI to detect and filter signs of bullying, sexual risk, abuse and aggression in real time as the child types, alerting them as they veer towards risk.
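At its simplest, a device-level check runs against the draft message on every keystroke, before anything is sent. The fixed patterns below are purely for illustration; shipping products use trained AI models rather than regular expressions.

    import re

    # Illustrative risk patterns; real products use learned models.
    RISK_PATTERNS = {
        "bullying": re.compile(r"nobody likes you", re.I),
        "aggression": re.compile(r"\bi('| wi)ll hurt you\b", re.I),
    }

    def check_draft(draft: str) -> list[str]:
        """Called as the child types; returns categories to warn about."""
        return [label for label, pattern in RISK_PATTERNS.items()
                if pattern.search(draft)]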

Find out more about our Network

Subscribe to our newsletter to find out more about the Safety Tech Innovation Network and what we’re doing.