Stronger communities and increased engagement: how your business could benefit from investing in safety tech

Users are increasingly being turned off by online toxicity. This brings costs to business, as well as to society. In this article we look at the return on investment in online safety, and how enabling safer online communities could boost business revenues by up to 30%.

Toxicity is turning one in five people away from engaging online

The internet has enabled huge advances in society and, for most people, the experience has been positive. More than 80% of people think that the internet has made life better for ‘people like them’.
Yet the story is not entirely positive. A recent analysis of more than 100bn lines of chat suggests that while the vast majority of interactions (82%) were healthy, 3% were actively harmful, and the remaining 15% were nuanced, falling somewhere in between.

This underlying level of abuse is causing around one in five people to disengage entirely from online communities:

  • 19% of adults have stopped playing certain online games as a result of in-game harassment;
  • 21% of people stopped using social media after facing online harassment.

Public opinion is also hardening against harmful interactions, and against the companies that enable them. More than one in four people say they would stop using a platform if it kept allowing harmful content. And many people do not believe that tech companies are designing their products and services with users’ best interests in mind.

Harmful content has real costs to business, as well as to society

These levels of toxicity are having an impact on companies’ bottom line, as well as on user health and wellbeing. Companies of all sizes are beginning to take action. A recent quote from Chris Bruzzo, Chief Marketing Officer at Electronic Arts, one of the largest gaming companies, says it all:

“In a study that we did, 58% of players say they experienced some form of toxic or disruptive behaviour in the last year. And it is one of the primary reasons that they choose not to play. So we don’t really need much more evidence than that to say that toxicity is ruining play.”

Action is becoming an economic, as well as a moral, imperative. No company wants to lose engagement with an audience it has taken a lot of time, effort and money to acquire. In fact, in purely financial terms, engaged users can be worth up to 20 times more to a company than non-engaged users. For example, Canadian company Two Hat found that the LTV (lifetime value) of players who did not participate in public chat was about $57, compared to the $563 LTV of players who did engage. The LTV of those who participated in private group chats was even higher, at more than $1,200.
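
As a quick sanity check on that multiplier, the Two Hat figures can be turned into engagement multiples directly. A minimal sketch in Python, using only the LTV values quoted above:

```python
# Engagement multiples implied by the Two Hat LTV figures quoted above.
LTV_NO_CHAT = 57         # LTV of players who did not use public chat ($)
LTV_PUBLIC_CHAT = 563    # LTV of players who engaged in public chat ($)
LTV_PRIVATE_CHAT = 1200  # LTV of players who used private group chats ($)

print(f"Public chat multiple:  {LTV_PUBLIC_CHAT / LTV_NO_CHAT:.1f}x")   # ~9.9x
print(f"Private chat multiple: {LTV_PRIVATE_CHAT / LTV_NO_CHAT:.1f}x")  # ~21.1x
```

The private group chat multiple is broadly consistent with the ‘up to 20 times’ figure cited above.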

Toxicity also takes its toll on tech workers. Few people want to keep working on products they can see are causing real harm to society. More than a quarter (28%) of tech workers in the UK have seen decisions made about a technology that they felt could have negative consequences for people or society, the UK think tank Doteveryone recently found.

Nearly one in five (18%) of these workers went on to leave their companies as a result. The proportion leaving over ethical concerns is even higher – 27% – among employees working specifically on AI. Given that replacing a technology worker costs a company around £30,000, this attrition in itself imposes a significant cost.

Safety tech helps to prevent harm, and helps human moderators do what they do best

Given the above, it’s not hard to see the economic, as well as the social, arguments for investing in online safety.

Choosing, and using, the right technology is just one part of the mix. Other design elements are being considered within the emerging discipline of ‘safety by design’, which is gaining momentum, particularly in the UK and Australia.

The right choice of tech can help to prevent harms happening in the first place, helping to nudge users into rethinking potentially abusive content before they click ‘send’. A recent study by OpenWeb, for example, found an overall 12.5% lift in civil and thoughtful comments after using the ‘nudge’ technique to challenge toxic behaviour, and an increase in community approval rate of up to 4.5%.
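
To make the mechanics concrete, here is a minimal sketch of a pre-send nudge in Python. The wordlist scorer below is a toy stand-in for illustration only; a real deployment would call a trained toxicity classifier supplied by a safety tech provider:

```python
# Minimal sketch of a pre-send 'nudge'. The wordlist scorer is a toy
# stand-in; a real deployment would call a trained toxicity classifier
# supplied by a safety tech provider.
TOXIC_TERMS = {"idiot", "loser"}  # illustrative only
NUDGE_THRESHOLD = 0.5

def score_toxicity(message: str) -> float:
    """Stand-in scorer: fraction of words that match the toy wordlist."""
    words = message.lower().split()
    return sum(w in TOXIC_TERMS for w in words) / max(len(words), 1)

def submit_with_nudge(message: str, confirm_send) -> bool:
    """Return True if the message should be posted."""
    if score_toxicity(message) >= NUDGE_THRESHOLD:
        # Nudge rather than block: the author gets a chance to rethink.
        return confirm_send("This may come across as hurtful. Send anyway?")
    return True

# Demo: a callback that always declines the nudge prompt.
print(submit_with_nudge("you idiot", lambda prompt: False))   # False (held back)
print(submit_with_nudge("good game!", lambda prompt: False))  # True (posted)
```

The design choice worth noting is that the message is questioned rather than blocked: the decision to send stays with the author, which is what makes it a nudge rather than a filter.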

Safety tech can also significantly reduce moderator workload by blocking known illegal content, spotting known patterns of abuse, and helping to prioritise the most urgent and important cases in moderators’ work queues. This frees human moderators to focus on what they do best – understanding context, empathising, making fair and nuanced decisions, and communicating the results. It also helps to reduce the physical and emotional burden on moderators themselves.
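
To make that workflow concrete, here is a minimal Python sketch of the triage step just described. The hash set, urgency scores and Report structure are hypothetical placeholders for illustration, not any real provider’s API:

```python
# Minimal sketch of moderation triage: auto-block content matching
# known-bad hashes, then order the remaining reports so the most urgent
# reach human moderators first. Hashes, urgency scores and the Report
# structure are hypothetical placeholders, not any provider's real API.
from dataclasses import dataclass

KNOWN_BAD_HASHES = {"deadbeef"}  # e.g. hash-matches for known illegal content

@dataclass
class Report:
    content_hash: str
    text: str
    urgency: float  # e.g. classifier confidence weighted by severity

def triage(reports: list[Report]) -> list[Report]:
    """Drop auto-blocked content; queue the rest, most urgent first."""
    actionable = [r for r in reports if r.content_hash not in KNOWN_BAD_HASHES]
    return sorted(actionable, key=lambda r: r.urgency, reverse=True)

queue = triage([
    Report("deadbeef", "matches known illegal image", 1.0),  # blocked outright
    Report("1a2b3c4d", "credible threat in chat", 0.95),     # top of the queue
    Report("5e6f7a8b", "mild name-calling", 0.30),
])
print([r.text for r in queue])  # ['credible threat in chat', 'mild name-calling']
```

In practice the urgency score would come from the same classifiers doing the pattern-spotting; the point of the sketch is simply that automation handles the clear-cut cases so that the nuanced ones get human attention.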

Case studies of the impact of safety tech include:

  • adding a proactive chat filter to a popular mobile game cut moderation volume by 88%, reducing user-generated reports of abusive behaviour from 500 to 50 cases a day;
  • emerging social media platform Popjam found that using safety tech to help moderate images and content reduced moderator workload by 65%;
  • using tech to detect toxicity in voice chat helped a game studio identify and address the worst offences on its platform, leading to a 50% decrease in negative reviews and an overall playtime increase of almost 10%.

How to invest in online safety – next steps

The evidence suggests that any company seeking to maintain safe, healthy online communities should check whether it has the right tech in place to support moderators.

There are products and services available to help you block known illegal content, deliver kid-safe experiences, detect known patterns of abuse, and fight disinformation. Options range from free tools provided by some of the biggest international tech companies to bespoke, supported solutions from specialist safety tech companies in the UK and overseas.

To find out more, check out our guide to safety tech, and directory of providers.

[Many thanks to Qasim Shafi for his help researching this article]
