
AI is disrupting everything. Whilst this is a challenge for the online safety industry, it also presents an opportunity to transform online safety as a field of practice. AI has created a point of inflection that we can leverage in various ways to develop a more effective and self-sustaining online safety ecosystem. By recognising online spaces as complex systems and approaching safety accordingly, we could make substantial gains in harm reduction outcomes.

The strategic opportunity of AI

AI presents new challenges and opportunities for online safety at a tactical level, which I’ve written about previously. The rise of AI also represents a strategic opportunity to drive a significant reset in online safety.

The perception of AI as a dangerous tool has been reinforced through decades of science fiction movies, from Skynet starting nuclear war in The Terminator to machines turning humans into batteries in The Matrix. It is undeniable that AI-empowered technologies are now causing real harm to people. Whilst we might not yet fear robots autonomously breaking the Three Laws of Robotics, most people would accept that we are at a point of inflection.

This is therefore the time when people are most likely to accept that we need to do things differently.

‘All we need to do’ statements and the oversimplification of online safety

If you ask a room full of people why online spaces aren’t safe now, you’ll hear some common responses. Some will point to technology companies being profit-motivated or untrustworthy. Others will point to regulators not having the powers they need. Some will point to users making bad decisions, even when they are aware of the risks of doing so.

These views invariably lead to ‘all we need to do’ statements. All we need to do is regulate big tech more aggressively. All we need to do is educate users better. All we need to do is block this, prevent that and restrict them. ‘All we need to do’ statements in turn lead to ‘all they need to do’ statements, and an increasingly ‘us and them’ mentality.

‘All we need to do’ statements make good slogans, but they do not make good solutions. Despite this, ‘all we need to do’ solutions have increasingly found their way into online safety efforts.

Then AI came along, and people started saying ‘it’s complicated’ again.

In truth, it was always complicated

For each user, getting and being online is increasingly simple. But ‘online’ is actually a dynamic and interconnected network of numerous technologies, rules, processes and people that interact with each other in nonlinear ways. Online spaces often give rise to emergent properties or behaviours that could not easily be predicted from the characteristics of the individual elements alone.

That means ‘Online’ meets the definition of a complex system. If we accept that online spaces are complex, then we can accept that solving problems in those spaces will be complex. That means no more ‘all we need to do’ statements.
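To make ‘emergent’ concrete, here is a minimal sketch, my own toy illustration rather than a model of any real platform: a reshare cascade in which every user follows the same simple local rule (reshare once enough of your connections have). The global outcome swings from a dead end to a near-total cascade purely depending on how users happen to be wired together, which is exactly the kind of behaviour you cannot read off from any individual element.

```python
import random

def simulate_cascade(n_users=1000, avg_links=4, threshold=0.3, seed=None):
    """Return the fraction of users who eventually reshare a post."""
    rng = random.Random(seed)
    # Wire up a random undirected network of users.
    neighbours = {u: set() for u in range(n_users)}
    for u in range(n_users):
        for _ in range(avg_links // 2):
            v = rng.randrange(n_users)
            if v != u:
                neighbours[u].add(v)
                neighbours[v].add(u)
    shared = {rng.randrange(n_users)}   # one user reshares first
    frontier = set(shared)
    while frontier:
        newly_shared = set()
        for u in frontier:
            for v in neighbours[u]:
                if v in shared or not neighbours[v]:
                    continue
                # Local rule: reshare once enough of your neighbours have.
                active = sum(w in shared for w in neighbours[v])
                if active / len(neighbours[v]) >= threshold:
                    newly_shared.add(v)
        shared |= newly_shared
        frontier = newly_shared
    return len(shared) / n_users

# Same rule, same parameters, only the random wiring differs:
print([round(simulate_cascade(seed=s), 2) for s in range(5)])
```

The spread in those numbers is the point: in a complex system, outcomes emerge from the interactions rather than the components, so intervening is a matter of finding leverage points, not flicking a single switch.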

So, all we need is…

Solving problems in any complex system requires a multifaceted approach that acknowledges the interconnectedness of its components. That means understanding the underlying dynamics, identifying key leverage points, and implementing interventions that account for emergent behaviours.

So, all we need to do is collaborate across disciplines and leverage diverse perspectives and expertise to address systemic challenges. And given the dynamic nature of online spaces, all we need is adaptive strategies and continuous monitoring so we can iteratively refine solutions over time.

From collaboration to cooperation

If we accept that online is a complex environment, then multistakeholder collaboration isn’t just better, it is necessary.

Tech companies interface with users on their systems and have unique access to information about those users’ experiences and behaviours. Governments control law enforcement and can connect physical services to their citizens. NGOs can provide flexibility and agility, bridging gaps and wrapping services around people. And of course, those users, citizens and people are all the same person.

Collaboration is not a new concept in online safety, but the sort of collaboration required to solve problems in complex spaces is more substantive than exchanging ideas at events or committing to shared goals. What we’re really talking about is more than collaborating; it is cooperating.

A changed approach to transparency

Cooperation also requires a changed approach to transparency from all sectors. Transparency has increasingly become a control and compliance activity, but it should instead focus on creating shared information that enables partners to align their activity with each other’s.

A balanced approach is a sustainable approach

One of the discussions that AI has reignited is the need to balance control for user safety against the openness that allows for innovation. This is not just about economic prosperity; AI can literally save lives.

This too is a challenge that has existed since the late 1990s. Instead of deliberately considering how to balance these requirements, the internet was originally constructed in a way that leaned entirely towards open innovation, and governments have latterly tried to retrospectively impose centralised controls in the name of safety.

There is an approach to online safety that leans into the same models that fuel innovation. Some of this already happens organically through the growth of an outsourced trust and safety industry. It would be possible to lean further into that model with incentives and tax credits for trust and safety investment and development.

With the right settings, online safety could become self-sustaining within the technology ecosystem rather than constantly absorbing resources in an attempt to work against it.

Setting expectations of mitigation, not elimination

It has never been realistic to tout the elimination of harm online. No technology or process has proven robust enough to achieve it. The reason lies in the complexity of the system: complex systems resist and adapt to control efforts.

The online safety community has been pretty good at accepting and communicating that reality, but the public and politicians haven’t always accepted it. AI is helping to change perspectives. It is easier to accept that a system that generates content cannot be fully controlled, only steered in the safest direction possible.

There will always be cases that cannot be resolved, and there will always be critics who claim those cases prove the system is ineffective and who argue against safety efforts. The objective for the online safety community must be to identify, and continue to invest in, the interventions that most reduce harm online.

Guided by values and ethics

Many of the proposed AI codes and soft regulatory efforts are constructed around a set of values and ethics. Those same values and ethics can be applied to algorithm design, the protection of children on a platform, or the handling of data.

There will always be some variation in the way those values are applied. We shouldn’t allow those variations to create division and stall wider progress. That means organisations would not be punished for applying the values differently, so long as they make a reasonable attempt to apply them.

Alternatively

Of course, the debate about controlling AI is in full swing and there is nothing to say that wise heads will prevail. It is possible we’ll fail to apply twenty-five years of online safety experience to risk mitigation on AI, let alone grasp the opportunity the AI revolution presents to rethink online safety. In which case, in a few years’ time, somebody will say ‘All we need to do is turn Skynet off’, and we all know how that ends.
