As governments worldwide grapple with protecting young people online, Australia has moved to raise the minimum age for social media use to 16.
There is a case to be made for raising the age limit, and a case to be made against it – and there is an approach that responds to the key concerns in both arguments: raise it, and then lower it on a case-by-case basis.
The Australian Government could have raised the default age for social media access to 16 – and then empowered the eSafety Commissioner to lower that age setting for each platform, based on the nature and functions of the platform and the safety settings and capabilities it offers.
As other countries consider whether to follow a similar path, I’d strongly encourage them to consider this approach – because any single age setting for all social media is flawed.
There is a case for higher age limits
Despite strong public statements by individual politicians that the science supports raising the age, the Explanatory Memorandum that accompanies the law change does not make definitive claims about proven adverse mental health outcomes for youth. Instead, it takes a more measured approach, framing the age increase around the arbitrary nature of the current age, community and parent preferences, a precautionary approach to potential risks, and a need to “shift the paradigm” of platform responsibility.
It is true that the current minimum age of 13 for social media wasn’t based on any research about safety or developmental appropriateness specific to social media. It comes from U.S. privacy legislation written in 1998 – before social media even existed. This does not prove 13 is too low, or too high – but simply that it wasn’t set based on any science about the impacts of social media.
The arguments for raising the age fall broadly into four pillars:
- Vulnerable Developmental Stages: Higher age limits could delay social media exposure until after periods of heightened developmental sensitivity. Early adolescence involves rapid brain development, identity formation, and changing social dynamics. There may not be research that proves social media is harmful, but there are clearly unhealthy aspects within it – research from the University of North Carolina, for example, found that habitual checking of social media may negatively affect young adolescents’ brain development.
- Parents’ Views: Many parents worry about the impact of social media on their children – and surveys show the majority of parents support this new law. A clear regulatory age limit gives parents stronger backing for their decisions about when their children should start using social media.
- Manipulative Design Features: Modern social media platforms employ sophisticated techniques to maximise user engagement – from infinite scrolling and autoplay videos to algorithmic content feeds and notification systems. Higher age limits would shield younger users from these systems during their most impressionable years.
- Clearer Obligations for Age Verification: The current 13+ requirement is not driven by Australian law and has often been managed by simple (and literal) checkbox exercises. A new requirement, backed by significant penalties, should force platforms to take age verification more seriously and invest in more robust systems to prevent underage access.
There is a case against higher age limits
The great majority of researchers and experts who have studied youth and social media appear to oppose a blanket raising of the age for access to social media. They point out that whilst the worsening youth mental health crisis correlates with the rise of social media, this doesn’t prove causation. And in fact, research that has deliberately compared mental health outcomes with levels of social media use has not found a negative relationship.
By raising the age to 16, many experts argue that the Government may actually worsen health outcomes for young people because of:
- Reduced Digital Participation Rights: Young people have legitimate rights to participate in digital spaces that increasingly shape culture, education, and civic life. Restricting access could hamper their ability to engage in important social and political discussions.
- Lost Beneficial Social Connections: Social media can provide valuable support networks, particularly for marginalized young people, LGBTQ+ youth, and those in remote areas. Higher age limits could cut off access to these vital community connections.
- Reduced Development of Digital Resilience: Supervised, age-appropriate social media use during early teens could help young people develop the critical digital literacy skills and resilience they’ll need throughout their lives.
- Widening Digital Divides: Blanket age restrictions could disproportionately affect young people from disadvantaged backgrounds who rely more heavily on social platforms for information, education, and social connection.
- Risk Displacement: Rather than protecting young people, higher age limits might push them toward less regulated, potentially riskier platforms or encourage deceptive behaviour to access mainstream services. As an online safety practitioner, I can say that the history of attempts to regulate online spaces tells us risk displacement is a certainty. We end up with laws that make safe the places young people no longer are.
- Reduced Incentive to Invest in Safety: With the age limit set at 16, platforms can argue their safety tools need only be fit for adults – and discontinue developing child safety tools. Given the lack of reliable, non-intrusive age-verification options, we can be sure young people will still access these platforms – and therefore be at greater risk.
There are obvious implementation challenges
Even setting aside whether raising the age limit will improve outcomes for youth, any attempt to set a single age limit across all platforms presents several practical problems, regardless of what age is chosen:
- One Size Doesn’t Fit All: Services we call social media vary dramatically in their features, risks, and benefits. Any single age limit is therefore likely to be simultaneously too high for some platforms and too low for others. To be clear, this is not a problem specific to the 16+ age limit – it applies equally to the current single 13+ age limit.
- Enforcement Difficulties: Age restrictions are notoriously hard to enforce online. Young people are often technically skilled at circumventing controls or finding less regulated spaces to connect.
- Privacy Impact: Effective age verification often requires collecting more personal data – creating a tension between protection and privacy.
There is a more nuanced approach that meets both needs
Rather than a blanket ban or universal age limit, a default age limit with an option to reduce it creates a regulatory framework that:
- Recognises different risk levels across different platforms
- Allows age limits to be adjusted based on platform safety features
- Rewards platforms that implement strong safety controls
- Maintains access to beneficial online services, and
- Can adapt to emerging evidence about impacts and effectiveness
This is why I suggested empowering the eSafety Commissioner to set varying age requirements based on platform risks and safety controls. This approach would maintain strong protections while being more responsive to the complex reality of young people’s online lives.
For platforms, the opportunity to have their official age limit reduced would create stronger financial incentives to invest in safety features, content moderation, and user protection tools.
In this case – Australia really is the lucky country
To be clear, setting the criteria for, and administering, reduced age settings for specific social media platforms is no small task. Countries considering this approach would have to consider whether they have the capability and expertise to implement it. However, Australia has an established organisation that has developed multiple online safety codes, defined and enforced basic safety expectations, set standards, run large-scale programs and generally proven it is up to the job – the Office of the eSafety Commissioner.
The eSafety Office is now a juggernaut of online safety that the Australian Government could have pointed straight at this problem. It chose not to – and one more opportunity to move on from the age of arbitrary age settings for social media was missed.