What are the emerging trends for safety tech?

As digital consumption continues to boom, safety tech considerations are fast becoming vital and central components of innovation in technology. But what could the future look like?

Technologies that support online safety are developing at a rapid pace, giving companies increasingly sophisticated capabilities to detect and address harmful content or behaviour. 

We asked four industry leaders to help us look into the future and predict how safety tech might continue to evolve. Spanning key topics including digital resilience, decentralisation, data availability, and increased adoption, our discussions highlighted the potential for safety technologies to keep adapting and growing, so that they are prepared both for future risks and for changes in how we use the internet.

‘Safety tech 2.0’ and digital resilience

Vicki Shotbolt, Founder and CEO, Parent Zone

“Safety tech 1.0 focuses on filtering, blocking and controlling what feels like a scary place to allow children to explore. What I’m really interested in is thinking about how we, as an industry, move beyond this and start to think about safety tech as a facilitator of children’s exploration.” 

Focusing on digital resilience, the ability to understand and manage the risks of interacting online, Vicki believes it is important to give children tools that encourage a resilience-based approach to online learning.

“You have to balance opportunity with safety, you have to give children the assets that they need to be able to flourish online; anything that restricts, bans or demonises the internet, does the opposite of supporting this and building resilience.”   

Her ask of parents and educators is to switch the focus from blocking access to educating children on how to use the internet safely. Vicki is keen to promote a more positive and open approach: using platforms that have the right safety tech incorporated into their design facilitates safer online experiences, making the online environment a less scary place for children to navigate and learn.

“The endgame of ‘Safety tech 2.0’ is not only the teaching of digital resilience, it is the point where safety tech providers and platform designers are unified; where technical architects and UX designers consider safety tech in every step of development.”

A decentralised web
Adam Hadley, Founder and Director, Tech Against Terrorism

Over the next five years, two opposing pressures will shape the continued development of the internet: a drive for privacy from the public, and increased pressure from governments for tech companies to police content on their behalf. This dynamic will likely result in increased public demand for end-to-end encryption (E2EE) technologies and increased adoption of decentralised tech – the so-called “DWeb”.

Adam predicts that content moderation will only become more challenging as use of the internet fragments further across a greater number of platforms and technologies:

“Violent extremists are likely to exploit emerging technologies and in some cases develop their own platforms to evade content moderation. Just as the safety tech community figures out how to deal with criminal content on conventional platforms, the threat is likely to move to platforms where content moderation is difficult if not impossible to do.”

Adam highlights that this is a particular cause for concern for the smallest platforms:

“Currently most terrorist content is shared on the smallest platforms that have the least capacity and capability to deal with this complex challenge. Without removing the causes of terrorist content – terrorists – content will inevitably migrate around the internet and move to smaller, emerging platforms.”

Adam stresses that tech solutions can only go so far to solve underlying problems in society:

“Unless we focus on the root causes of terrorist content online we will face a never-ending task of suppressing content. As a society we need to make some big decisions about just how far we want to go to securitise society: the future of the internet is at stake. With the emerging wave of global tech regulation we should remain alert to the risk of governments delegating too much responsibility to tech companies to deal with underlying problems in society. Done badly, the risk is that regulation will result in a patchwork of global laws that have irreversible unintended consequences.”

Greater adoption of safety tech
Ian Stevenson, CEO, Cyan Forensics

“This time last year, if we had been talking about an online safety tech industry, you would have got an awful lot of blank looks. Now there are a lot of people in the UK who will know what we are talking about and that is starting to spread internationally.” 

In the last year, Ian has seen more organisations adopting safety tech and reacting to what is happening on platforms, for example by withdrawing ad spend when platforms fail to remove or prevent dangerous content.

For Ian, this raises concerns about why there are not yet standards, or government legislation, to safeguard the use of this technology.

In recent months, social media platforms have been deciding for the wider population what online content should be removed. Ian cites Twitter's ban of Donald Trump as a high-profile example of action taken by a platform.

“Technology companies should not be telling us what content needs to be removed but we, as a society, should be telling technology companies what should be regulated; it should stem from society and be mandated by law.

Technology is an enabler but fundamentally these are human problems and, therefore, there aren't technology solutions. Technology solutions can help enforce human decisions about what we should be doing, but you can't just tell technology to ‘stop hate speech’; humans have to define ‘hate speech’ first.”

Addressing access to data
Tom Drew OBE, Head of Counter Terrorism, Faculty.AI

“AI is becoming a bigger part of safety tech innovation. However, before the industry can reach a point where AI is more universally assisting in flagging and removing online harms, there is a need for large quantities of quality data to be available.”

Tom predicts that in the immediate future, greater data gathering and sharing is likely to be a priority focus for the safety tech industry in order to train AI to become a more beneficial tool. Tom believes that there are a range of legal, ethical and structural barriers to be overcome to enable data sharing for many forms of online harm.

“If you show an algorithm enough examples of child groomers’ conversations with victims, there will be signals in that data on which a model can be built to flag to a moderator in a chat room that a conversation looks troubled. From there, online community managers and moderators can make a judgement on the conversation and take appropriate actions. 

However, the challenge with many types of online harm, and child grooming specifically, is a lack of enough consistently labelled data on which to build AI models. This is something to which we have to collectively find solutions: breaking down barriers to sharing existing datasets and thinking imaginatively – but ethically – about the generation of synthetic datasets where sufficient quantities of harm data do not exist.”
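As a rough illustration of the approach Tom describes, a supervised text classifier can be trained on labelled conversation data and used to score new messages for human review. The sketch below, using scikit-learn, is purely illustrative: the messages, labels, and threshold are invented placeholders, and a real moderation model would need far larger, carefully governed datasets.

```python
# Illustrative only: a tiny supervised classifier that scores chat
# messages so a human moderator can review the highest-risk ones.
# All training messages and labels here are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1 = message from a conversation previously labelled harmful, 0 = benign
messages = [
    "don't tell your parents we talk, what school do you go to",
    "keep this chat our secret, are you home alone",
    "send me a photo, i won't show anyone, promise",
    "this is just between us, delete the messages after",
    "good luck with your maths homework tonight",
    "the football match starts at three on saturday",
    "did you finish the science project for class",
    "see you at band practice after school tomorrow",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# TF-IDF features plus logistic regression: a simple, common baseline
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

def risk_score(message):
    """Probability the model assigns to the 'harmful' class."""
    return model.predict_proba([message])[0][1]

# The score is only a signal: a moderator makes the final judgement.
print(f"score for review queue: {risk_score('keep this our secret, do not tell your parents'):.2f}")
```

The crucial point, as the quote above stresses, is the labelled data: the model only surfaces patterns that humans have already identified and labelled as harmful, which is why data availability is the bottleneck.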

What are your thoughts on the future of safety tech? Sign up to attend our event Safety Tech Innovation Challenges: the next steps on 25 February.
