Frameworks, New Zealand, Regulation

The New Zealand Safer Online Services and Media Platforms review, formerly the Content Regulatory Review, has closed with the release of a document summarising submissions on the proposed approach. In this article, we review the history of the proposal and the nine themes that emerged from the submissions. With several countries considering changes to their regulatory systems, the findings make interesting reading for anyone working in online safety.

How it ended

On April 29, 2024, the Department of Internal Affairs (DIA) | Te Tari Taiwhenua released a summary report on the Safer Online Services and Media Platforms review – and noted that the report would conclude the project. According to the accompanying statement, the project is not considered a ministerial priority for the current Government.

How it started

A broad review of the Aotearoa New Zealand media content regulatory system was initiated in 2019 in response to concerns about rising online harms, outdated legislation, and an inconsistent approach to dealing with harmful content.

The Safer Online Services and Media Platforms proposal was released in June 2023. It proposed to regulate social media and traditional media platforms through a series of codes enforced by a new industry regulator. No other options were put forward – and it was not clear what alternative approaches to advancing online safety had been considered.

Our view in 2023

The Online Safety Exchange provided a submission in July 2023. In it, we suggested that the best way to improve online safety for New Zealanders was to strengthen and grow an ecosystem of online safety actors and components, with an agency appointed to a clear coordinating role – but we did not see the need for a new regulator.

Our view in 2024

Specialist online safety regulators are a new(ish) phenomenon, and there are only eight full members of the Global Online Safety Regulators Network. Australia's eSafety Commissioner and the UK's Ofcom have been making headlines taking on big tech of late – but it will be some time before we know whether those efforts will flow through to meaningful change at the user level.

It remains to be seen whether a converged super-regulator, or multiple smaller and more specifically focused regulators and agencies, will prove more effective at improving online safety outcomes within each territory.

Despite my preference for a coordinated ecosystem approach, I am disappointed to see the project terminated. I would have preferred to see the work continued in some form, as the issues that initiated the project still exist – and are increasing in volume and impact.

The other 20,000 submissions

There were 20,280 other submissions on the Safer Online Services and Media Platforms proposal – although 19,509 of those were ‘template submissions’. The majority of organisational submissions were broadly positive, whilst individual submissions generated by a Free Speech Union campaign were overwhelmingly negative.

The Feedback  

The DIA summarised the submissions on the Safer Online Services and Media Platforms into the following nine themes:  

One: More work is needed to make definitions clearer, particularly definitions for ‘harmful content’ and ‘platforms’

The two new components put forward in the proposal were the establishment of a new regulator, and enforceable codes of practice. Around them sat a number of less clearly defined concepts and parameters. Once established, the regulator could take the lead in defining those details – which (to some extent) Ofcom is doing as it works to implement the UK Online Safety Act.

Ofcom is a long-established regulator with a reputation to match. By contrast, New Zealand was proposing a brand-new regulator. It is easy for people to imagine (and to promote) a worst-case scenario for a powerful regulator that does not yet exist.

Two: A regulator should lead code development processes, not the industry

Different jurisdictions have developed regulatory codes under different models. At one end of the spectrum, industry develops its own codes, which are then ‘accepted’ by the regulator; at the other, the regulator develops codes that are applied to industry.

Both industry and non-industry submitters preferred a model where the development of codes is led by the regulator in consultation with other organisations – although each sector would prefer to see itself in the co-pilot seat!

Three: A diverse range of people and groups outside of government should be included in the development of the regulator and codes of practice

Online safety affects nearly everybody – and as a result, many organisations and people would consider themselves to be stakeholders in online safety. Given the range of views and expectations of safety online, capturing and distilling those views (as the DIA has done) is a significant undertaking.

The submissions highlight the divergent and sometimes opposing range of views on online safety matters; no solution could satisfy everybody. An ecosystem approach is more likely to find an acceptable balance organically, and avoids that waterfall-project moment where divergent views must be resolved before progress can continue.

Four: The focus should be on regulating online platforms and social media, rather than traditional media

The inclusion of traditional media was indicative of the ambition of the project to “create a single regulatory framework that would reduce the risk of harmful content for consumers”. Every New Zealander online today faces a range of risks and challenges – many of which are grouped together simply because information technology is the delivery platform. Whilst there are many commonalities between various challenges and their solutions, addressing each challenge also requires its own specialist expertise.

Any attempt to group so many issues and platforms together will invariably create a situation where some components will struggle to fit.

Five: Submitters recognised the importance of protecting freedom of expression

Too many online safety discussions devolve into irreconcilable debates about finding the right balance between freedom of expression and harm reduction. These debates invariably revolve around the treatment of “awful but lawful” content – and are irreconcilable at a system level because the answer is always “it depends”.

Whilst it is true that an ecosystem of agencies could effectively kill freedom of expression with a thousand cuts, it avoids the creation of a single agency with the power to do that damage in a single blow.

Six: Children and young people should be recognised as key stakeholder groups

Young people have often been identified as a special group to justify stronger regulation and protective measures – but have less often been treated as true stakeholders. This has led to a flurry of ban, block and banish style regulatory efforts that research shows are not always in the best interests of the wellbeing of young people.

Seven: There is support for creating a clear and accessible complaints process

It can be difficult to know where to report online content and incidents. One possible solution is to create a single reporting point for all online harms, but in practice this doesn’t work well because the range of issues and agencies is just too varied.

I prefer a ‘no wrong door’ approach, but that requires each organisation to be clear on its own role, and on the roles of other agencies. This is one area where a coordinating agency could add substantial value.

Eight: Education initiatives should be accessible and adequately funded

Of course.

Nine: Agencies that may be impacted directly by the proposals had concerns about the impact on their respective areas and wanted a new regulatory system to align with existing frameworks and international policies

The closure of existing organisations is not a reason in and of itself to resist the creation of a more effective one – but on the flip side, replacing an old organisation with a new one does not guarantee improvement. Existing agencies have experience and networks of partners that might not easily transfer to the new regulator.

Where to now?

The Safer Online Services Project is concluded – with no expectation of further work. I’m sure local lawmakers will continue to observe the progress of online safety regulators – especially Australia’s eSafety Commissioner – as possible models for Aotearoa New Zealand.

But for now, the Government has stepped back from the issue and the responsibility for advancing online safety returns to the agencies working in that space. As a Kiwi with a long history of working in online safety in this country, I’ll be watching what happens next with a keen interest.
