It is that time of year when your feeds and inboxes are filled with predictions for the year ahead – and you’re about to get ten more 2025 predictions from me! Most predictions are quickly forgotten (thankfully), but there is value in making them: it helps us understand the forces driving change in online safety. I read a post recently criticising most predictions as too vague ever to be proven right or wrong. It argued that predictions need to be specific to be useful – and I agree. So, with that in mind, I will make my predictions for 2025 as specific as possible.
But first, what’s going on?
In 2024, we’ve seen a growing disconnect between safety science and policy-making, with populism and political narratives increasingly driving regulatory decisions. Data-driven correlations are being mistaken for causation in policy decisions.
Overlapping with this, there has been increased fracturing within the online safety community.
Arguments for protection or empowerment are increasingly being presented as if the answer lies fully in one or the other – rather than a nuanced combination of the two. Meanwhile, safety resources are being directed toward compliance rather than actual user protection.
The biggest question for 2025 is whether the online safety community can regroup and promote safety interventions based in science and evidence – or whether populism and politics will continue to drive decision making.
Prediction 1: AI will reset our safety vocabulary
During 2025, the vocabulary of online safety will undergo a dramatic shift as AI amplifies both risks and challenges. Our existing terminology will need turbocharged modifiers just to keep pace with reality. For example: scope, scale and speed – long used to describe the distinct dynamics of online harms – will become “supercharged scope, scale and speed”. Safety at scale will evolve into “supernova safety systems”.
This isn’t just linguistic flourish – it reflects a fundamental shift in the magnitude of online safety challenges. Even terms like viral spread may seem quaint compared to what AI-powered distribution can achieve. We’ll need to update our vocabulary to convey the heightened stakes and unprecedented challenges that AI brings to online safety.
Prediction 2: Trust and Safety will grow, but headcount will remain flat
Despite the supercharging of online risk, the resources for safety will remain relatively constant. Trust and safety will continue to mature and professionalise but capacity growth in safety will be driven by technology and especially AI tools. The independent safety tech industry will continue to grow, but the overall headcount for trust and safety will remain flat in 2025.
Prediction 3: Users will move away from the regulated platforms
Around the world, regulators are creating safety requirements that only apply to larger platforms. At the end of 2025 those platforms will still dominate the web, but their time-on-platform metrics will decline noticeably (by about five percent) as users spend more time on less regulated platforms.
The users who embrace those less regulated platforms most quickly will often be the users most at risk – which is a problem in and of itself.
Prediction 4: Safety-based geographic fragmentation
Some tools that have been bolted on in response to legislator action (and that were outside of the planned product development flow) will negatively impact user experience and will therefore only be implemented where required by regulation.
The most likely culprits will be tools for age verification and identity management – as tech platforms seek to avoid fines for allowing users to breach government-mandated requirements.
This will create a different experience for users in different countries, which in turn creates more work explaining how to stay safe on those platforms (because you need localised versions), more moving pieces, and more opportunity for functionality and/or security problems. Also, VPN use will go up.
Prediction 5: Something big will break
I’d say there is about a 50/50 chance there will be a fundamental cyber security and/or privacy failure in 2025 that is specifically caused by an attempt to improve safety and comply with regulations. By that I mean something that really breaks open a large platform’s security and leaks masses of user data.
This is most likely to be something related to age verification or identity management that has been coupled onto an existing system. Attempts to allow controlled access into end-to-end encrypted platforms are another possible point of failure.
Prediction 6: Adultification of social media will continue
Despite young people having embraced social media most enthusiastically, social media will increasingly be treated as a product for adults. By the end of 2025, at least three more countries will have moved to raise the age of social media access following Australia’s lead – although the new age will vary between 16 and 18. Lawmakers in those countries will say the science on social media being harmful is clear – and the public will agree. The academic community will continue to argue that it is not.
Younger users will lie about their age more. VPNs will become more popular. Many parents who have supported raising the age limit will endorse their kids falsifying their age to stay on those platforms – because they believe their children are sensible.
Prediction 7: NotSocMed for youth launched
By the end of 2025, there will be an AI-based platform for sharing content and updates with groups of friends, aimed at the youth market, that is most definitely not social media – because it has been built specifically to avoid the regulation that defines adult-only social media. But (and you should read this in a whispering voice) it actually will be social media.
Prediction 8: Games will be next in the spotlight
The same movement that has pushed to raise the age of access to social media will turn its attention to games. After all, the popularity of electronic games has risen alongside mental health issues. The view that violent video games are linked to real-world violence is widespread despite the science to the contrary – and many parents struggle to get their kids off games and onto the things they’d like them to be doing, from outdoor sports to helping with chores! So basically, the dominoes are all lined up.
By the end of 2025, some lawmakers will be quoted saying “the science on gaming is clear – they are harmful and addictive” – and regulation subjecting the major gaming platforms to fines for failing to stop young people accessing games via their services will be introduced.
Prediction 9: Reduced moderation
I’ve seen a few predictions that recent regulatory moves will trigger more ‘remove by default’ moderation activity from big tech and increase the volume of moderated content. I’m predicting the opposite.
A Republican-dominated government in the USA will act against the liberal bias (their words) that has seen conservative views more heavily moderated on US-based platforms. Whether or not a new law is passed or Section 230 is reviewed, big tech will quickly dial back their moderation settings accordingly.
I predict the rate of positive moderation outcomes on big USA based social media platforms triggered by user complaints will drop by a third. These stats can be found in many transparency reports – so it will be especially easy to check this prediction.
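If you want to hold me to this one, the check is simple arithmetic. Here is a minimal sketch of how you might verify it against two reporting periods of a platform’s transparency report – note that all the figures below are invented placeholders, not real platform data:

```python
# Illustrative check of Prediction 9 using hypothetical transparency-report
# figures. The complaint and outcome counts below are made up for the example.

def outcome_rate(complaints: int, positive_outcomes: int) -> float:
    """Share of user complaints that resulted in a moderation action."""
    return positive_outcomes / complaints

def relative_change(before: float, after: float) -> float:
    """Fractional change from one reporting period to the next."""
    return (after - before) / before

# Hypothetical numbers for one platform across two reporting periods.
rate_2024 = outcome_rate(complaints=1_200_000, positive_outcomes=360_000)
rate_2025 = outcome_rate(complaints=1_150_000, positive_outcomes=207_000)

change = relative_change(rate_2024, rate_2025)
print(f"Positive-outcome rate moved from {rate_2024:.0%} to {rate_2025:.0%} "
      f"({change:+.0%})")

# The prediction holds for this platform if the rate fell by a third or more.
prediction_holds = change <= -1 / 3
```

With these placeholder numbers the rate falls from 30% to 18% – a drop of 40%, so the prediction would hold. Swap in the published complaint and action counts for any large US platform and the same two functions answer the question.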
Prediction 10: AI will make everything boring
Posts and updates on VLOPs (very large online platforms) will be increasingly AI-generated, and for the next few years those AI-generated posts will be obvious and monotonous. People will look for more authentic engagement and move away from the VLOPs with their nicely integrated AI tools – or at least into smaller communities on them.
What do you think?
I’m curious as to what trends other online safety practitioners see emerging and which of these predictions resonate with your experience. Leave your thoughts in the comments or connect with us in the OSX Network, where we maintain a private community focused on authentic engagement and collaborative problem-solving in online safety.
The OSX Network is a bespoke platform to explore ideas, collaborate, and connect with other online safety practitioners from around the globe (and avoid the AI generated monotony).