
Recent rulings from the Supreme Court of the United States generate interesting questions for the online safety community to ponder. Here we explore the structure of the relevant act and look at how establishing a new category of service provision could help us move forward and develop effective regulation for social media.

Two (more) cases and twenty-six years

The Supreme Court of the United States (SCOTUS) recently ruled on two cases related to Section 230 of the Communications Decency Act involving Twitter and Google. Those rulings left in place the broad liability shield that has protected tech companies from being held responsible for their users’ posts.

Any ruling limiting the protection under Section 230 would have sent shockwaves around the world because of the significant role that American companies play in the internet ecosystem. SCOTUS was able to sidestep the s230 issue for now, but interest in weakening the liability shield continues.

Enacted in 1996, Section 230 reflects an environment where the roles of publisher and intermediary were clearly delineated. That binary choice has applied ever since – but the internet of 2023 does not work that way. To effectively regulate social media, we need to start by creating a category of activity that actually reflects the way they function.

S230 can be modernised, and social media regulated – without breaking the internet.

A sidestep

SCOTUS ruled that the cases did not meet the thresholds for liability under the Anti-Terrorism Act and therefore did not have to include any direct ruling on the S230 liability shield.   

Whilst the Court didn’t rule directly on Section 230, it did note that the systems used by Twitter and Google to deliver content to users did not equate to actions deliberately taken by Twitter or Google to promote that specific content. This decision recognises that algorithmic content promotion and publishing are not the same thing.

No provider is a publisher

The most often quoted part of Section 230 is (c)(1), which states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

That isn’t the entire text of Section 230. Parts (a) and (b) provide background and explain the policy drivers. Part (c) also includes a section on civil liability. Part (d) obliges providers to notify users of available parental protections. Part (e) limits the effect on other laws, and part (f) provides definitions of four key terms used in the section.

A policy delivered… and still to be delivered

The first two components of the policy section refer to promoting the continued development of the internet and a competitive free market. In 2022, the US tech sector was worth an estimated US$1.8 trillion, employing 12 million workers across 585,000 companies. Policy achieved.

The next two items in the policy section refer to the promotion of safety technologies and systems – and removal of disincentives for their use. It would be hard to argue this policy objective has been met – although there is a rapidly developing commercial safety technology industry that might yet deliver more of those outcomes thanks to that competitive free market.

Item 5 in the policy section refers to the “vigorous enforcement” of federal criminal laws. Nothing has been vigorously enforced on the internet.

The devil is in the detail – or lack of it

In subsection (f), the Act defines four terms: internet, access software provider, interactive computer service and information content provider. Access software providers are the companies, interactive computer services are their products, and anyone who produces content is an information content provider.

As an example: Meta is an access software provider, Facebook is an interactive computer service, and you and I are information content providers. Ipso facto – Meta is not liable (as a publisher) for the content you or I post on Facebook.

We’re not in 1996 any more, Toto

The definition of access software provider covers any technology that moves (or even refuses to move) content. It includes software or enabling tools that: filter, screen, allow, disallow, pick, choose, analyze, digest, transmit, receive, display, forward, cache, search, subset, organise, reorganise or translate content.

And there is no other option. You either are an access software provider, or you are not.

Don’t blame the postie

In 1996 that made sense. The internet was clearly delineated into end points (publishers and consumers) and separate intermediaries that moved the content between them. You don’t want the postie opening your mail. But equally, you can’t then blame the postie for what is in your mail.

Social media companies don’t write the mail, but they don’t just deliver it either. If social media were a postie – it would read your mail, choose other mail it thinks you would like, and then deliver it to you.

That influence over what content you see feels a lot like publishing. The difference is that users heavily, and directly (knowingly or unknowingly) influence those decisions. We feed the algorithms, and they feed us. This means social media shouldn’t absorb the full liability of publishers, but they shouldn’t avoid it completely either.

Square peg. Two round holes.      

A lot of energy, time and money has been invested in deciding how much, and under what circumstances, social media can be considered a publisher or an intermediary. Meanwhile, the harms continue. These latest SCOTUS decisions haven’t broken anything, but they haven’t fixed it either.

Rather than spend more energy trying to force social media into an existing box, or carving out exceptions through case law – it is time to recognise a new category that actually reflects the services they provide. Let’s call this category “facilitator”.

Companies will often perform all three roles (publisher, carrier, and facilitator), so for legal purposes the categories and corresponding levels of protection (or liability) apply to actions, not to companies.

The facilitator category has two defining characteristics:

  1. The content is not created by the company itself, and
  2. The company influences who that content is presented to.

If the company creates the content itself (including through AI), it is a publisher (liable). If the company plays no role in deciding who sees the content – then it is a carrier (not liable). If the company takes content that it did not create and then influences who sees it – it is a facilitator (partially liable).
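To make the distinction concrete, here is a minimal sketch in Python of how the proposed categories would attach to an individual action rather than to a company. The names (Role, classify_action) and the two yes/no inputs are hypothetical simplifications of the rules above – an illustration of the logic, not a legal test.

from enum import Enum

class Role(Enum):
    PUBLISHER = "publisher"      # created the content -> fully liable
    CARRIER = "carrier"          # merely transmits -> not liable
    FACILITATOR = "facilitator"  # curates others' content -> partially liable

def classify_action(created_content: bool, influences_audience: bool) -> Role:
    """Classify a single action (not a company) under the proposed scheme."""
    if created_content:
        # Content generated by the company itself (including via AI) is publishing.
        return Role.PUBLISHER
    if influences_audience:
        # Someone else's content, but the company decides who sees it.
        return Role.FACILITATOR
    # Someone else's content, delivered with no say over the audience.
    return Role.CARRIER

# Ranking a user's post into other users' feeds -> facilitator
print(classify_action(created_content=False, influences_audience=True).value)
# Generating content with the company's own AI model -> publisher
print(classify_action(created_content=True, influences_audience=False).value)
# Passing traffic along without any selection or ranking -> carrier
print(classify_action(created_content=False, influences_audience=False).value)

Because the classification applies per action, the same company could be a carrier for one action, a facilitator for another, and a publisher for a third – which is exactly the point of the proposal.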

Still a lot of questions

Creating a third category doesn’t immediately solve all the problems. It creates new questions that would have to be debated and resolved, and it would certainly have to be paired with a clear safe harbour scheme.

However, it would allow us to move forward and develop regulation for social media. It would keep the protection intact for true neutral carriers, would not attempt to redefine the role of publisher, and would allow for the development of a more nuanced liability equation for organisations that are simultaneously neither, and both, publishers and carriers.

What do you think?
