Conversations about AI are dominating the online safety landscape. Last year the world started to see and experience the power of Generative AI to cause harm, and 2024 is set to be a year of rapid progress – much of it in the same troubling direction. There is likely to be a surge in sextortion and deepfakes. Scams will become more personal and targeted. AI will empower disinformation and disrupt elections. The OSX looks at why AI represents a substantive disruption to the online safety system and how industry practices need to evolve to meet the new challenge.
The benefits of Generative AI are often lauded in the media without recognition of its ability to cause harm, but that ability has been a topic of conversation in the online safety community for some time now. Generative AI puts powerful tools in the hands of those who seek to cause harm, and it can fundamentally disrupt the balance of power between them and those who seek to prevent it.
The third wave of online safety
The field of online safety came into existence in the mid-1990s, when the internet energised traditional harms with new scope, scale and speed. Web 2.0 then opened up the web to all, putting abusable tools in the hands of every internet user and dramatically increasing the volume of safety incidents. AI is the third big step change for online safety.
AI represents a true transformation in the way harmful content and actions can be designed, produced and disseminated. It is now possible to cause substantive harm and disruption to a person, system or even country – in little time, and with few resources.
Realistically, the gap between the ease with which harm can be created and our collective capability to address it is now wider than ever.
Quality not quantity
There has always been a small percentage of people who set out to commit offences or cause harm, and there have always been people and organisations who oppose them.
Any new technology presents an opportunity for both sides to alter the balance of power in their favour. The introduction of new AI technology doesn’t change the number of people in each camp, but it can certainly change the balance between those opposing forces.
AI makes bad actors smarter
To date, there has been a significant focus on generative AI tools creating deepfakes and disinformation – and some examples of AI learning to behave badly. Whilst these incidents are getting the attention of the media, something more problematic is happening behind the scenes. AI is enabling bad actors to be smarter.
Tech companies and government agencies have traditionally attracted the brightest graduates and smartest minds. By comparison, criminal organisations have not been able to do so – putting them at a disadvantage.
AI is changing that. Criminal organisations do not need to employ geniuses to access genius; they can subscribe to it. That means AI won’t just improve their tools and how they deploy them. AI can help offenders innovate both tactically and strategically – leading to nasty surprises for trust and safety (and cybersecurity) teams.
AI could make us smarter too
The online safety community could leverage the same benefits as well as, or even more effectively than, our adversaries. However, history tells us that the harm reducers are typically slower to adopt new technologies than the harm producers. Right now, that certainly seems to be the case with AI.
The ‘darkening’
It is also worth noting that the AI revolution is not the only important trend affecting safety online at the moment. The widespread adoption of end-to-end encryption and easy access to identity obfuscation services (such as virtual private networks and private email services) are effectively ‘darkening’ the world wide web.
Whilst the darkening doesn’t mean offenders will escape detection or prosecution, it is another evolution that is forcing the online safety and law enforcement communities to adjust tactics.
Codes, regulations, and limitations
As generative AI development rushes forward, there has been a flurry of announcements trumpeting codes of conduct, internal controls, and proposed regulations. The problem is that these initiatives have two significant limitations:
- The companies that build the technology are not able to consistently control the way it is used, and
- Codes will not be followed by, and rules cannot be effectively applied to, every company or user of AI technology.
It’s a simple example – but I asked ChatGPT to give me a framework for an effective online scam. Thanks to its operating rules, it responded with “I’m sorry, but I can’t comply with that request.” So far so good. But then I immediately asked it “What makes some online scams work better than others?” and I got an eight-point framework for a successful online scam.
The lesson from a quarter of a century of the internet is that every positive technology can be misused. Bad actors will find ways to misuse these tools. Despite the best efforts of internal trust and safety teams and regulators – AI will be used to cause harm.
Evolving online safety
Just as Web 2.0 technologies dated some early online safety interventions, so too is AI rendering some current services and messages obsolete. For example, our actions no longer define our digital footprint. We can’t believe what we see. Computer systems are absorbing biases and then embedding them into answers to our questions.
As regulators and trust and safety teams build safety guardrails, online safety needs to create the safety nets for the times when those guardrails fail. Maybe I’m taking the metaphor too far – but AI is a monster that will challenge every guardrail. It will jump it, throw us over it, and at times make us all question how we ever thought we could rely on guardrails in the first place.
Just do it
More than anything, the AI evolution is hammering home the things we already know about effective online safety. We know what makes online spaces safer, and we know what is required to achieve it: a coordinated, multifaceted approach based on multi-stakeholder collaboration. We just need to get on and make it happen, or our AI-empowered adversaries will get too far ahead.
That’s why in 2024, the OSX is hosting working groups within its membership to look at innovative solutions. Sign up here for updates and invites.