Social Media Bans for Young People: Popular, but Pointless?

The Australian government’s proposal to raise the minimum age for social media access has ignited a significant debate. While the desire to shield young people from the harmful aspects of social media is understandable, a blanket ban isn’t the best solution. Though popular, it is neither practical nor supported by research. A collaborative approach with clear, age-appropriate standards, digital literacy education, and greater industry accountability offers a more effective way to ensure safer online spaces for young people.

Popular and Plausible-Sounding

Public frustration with social media is growing. Many see a ban as a logical and simple solution to a complex problem. The idea of restricting access until young people are mature enough resonates broadly. It feels like society is taking a firm stand to protect its most vulnerable.

The perception that big tech companies aren’t doing enough to safeguard children is widespread. As a result, stricter age regulations or bans feel like a decisive action in the face of inaction. However, this simplistic appeal masks deeper issues.

Most experts will oppose it because research doesn’t support it

There will be child development, psychology, and online safety experts on both sides of the debate, but those who have worked most deeply at the intersection of social media and young people will overwhelmingly oppose the ban. This is because research consistently shows that the most effective way to guide young people through the digital world is to empower them with control, agency, and education, rather than to limit access. Teaching digital literacy equips them with the tools to make informed choices and recognize online risks.

Social media also brings benefits to young people, offering opportunities for creativity, connection, and learning. Denying young people access can restrict these positive experiences. The research therefore recommends an approach based on guidance and guardrails, not cutting off access entirely.

The limited reach of regulation

Any regulation of online spaces faces significant practical challenges, and in this case it is unlikely to resolve the core issues it targets, because:

  • Social media is a complex, evolving category: Defining what is included as “social media” is the first obstacle. While platforms like Instagram or TikTok may come to mind, there are countless other digital tools with social features. From gaming apps to messaging platforms, drawing a firm line around what counts as social media is increasingly difficult. As digital technology evolves, so too does the nature of what qualifies.
  • There are plenty of other ways to find the same trouble: Even if clear definitions were established, young people would find alternatives. Messaging apps, gaming platforms, and websites with embedded social functions allow them to connect with peers or share content outside traditional social media platforms. Ironically, these tools may lack the safety measures mainstream platforms have, potentially exposing young people to even greater risks.
  • Technical circumvention is easy: Tech-savvy young people already have access to tools like VPNs (virtual private networks) that allow them to bypass restrictions. Emerging technologies such as AI-driven interfaces and decentralized networks will only make enforcement more challenging. Any effective regulation must account for the limitations of purely technical solutions. The reality is that bans are easy to sidestep, undermining their intended purpose.

An alternative path: create (and then enforce) age-appropriate standards

Rather than pursuing a ban, governments should work alongside industry leaders and child development experts to create robust standards that ensure age-appropriate online environments. These guidelines should clearly define the features and safety measures suitable for different stages of a child’s development.

Once those standards are agreed upon, governments can and should regulate around them and strongly enforce them on tech platforms. By establishing these standards first, we can protect young people without severing their connection to the digital world. The goal is to create safer spaces where young users can learn, explore, and socialize, with appropriate protections in place.

The online safety community’s responsibility

It’s important to recognize that the online safety community shoulders some responsibility for the popularity of these bans. The lack of comprehensive, effective solutions to protect young people online has contributed to growing public frustration and fear. By not sufficiently addressing the legitimate concerns of parents, educators, and policymakers, we have left a gap that policies like social media bans attempt to fill.

If we want to avoid seeing more of these types of restrictive interventions, it’s time for the online safety community to redouble its efforts to develop practical, scalable solutions that genuinely protect young people while preserving their ability to engage online. Now more than ever is the time to step up and lead with solutions that work.
