The mirage of control

On Australia’s social media ban, the UK’s legislative illusions, and who is truly responsible for protecting children.

There is a familiar political reflex when governments feel the ground shifting beneath them: reach for a dramatic gesture. A bold line. A headline promise. Something that looks like control, even if the underlying architecture of control does not yet exist.

Australia has just supplied the latest example. Its new ban on social-media accounts for children under 16 – the first of its kind anywhere in the world – arrived with all the rhetoric of national renewal. The Prime Minister spoke of childhood restored, parents “taking back power,” tech giants finally being tamed. By mid-morning, sympathetic newspapers were calling it a revolution.

And yet, barely forty-eight hours into the ban, Sky News interviewed a cluster of teenagers in Sydney who were all – without exception – still on their feeds. VPNs, older siblings’ accounts, drawing on fake facial hair to trick age-estimation software, borrowing their parents’ logins. The state drew a line; the adolescents stepped over it.

But that does not mean the ban is pointless. Far from it.

The mistake is not the ban. The mistake is imagining that a ban alone can resolve a structural problem.

Because what Australia’s experiment really demonstrates is the gap between measures that signal responsibility and systems that exercise it. And that is precisely the gap the UK now finds itself trapped in – particularly as ministers insist that the Online Safety Act represents the fulfilment of their obligations to children. 

The UN Convention on the Rights of the Child could not be clearer: the responsibility for creating a safe environment for children lies with the state. Not with tech companies. Not with schools. With governments.

Yet the UK continues to behave as though it can legislate itself out of that responsibility by instructing platforms to moderate content. It cannot. Content moderation is not protection. Algorithmic nudges are not protection. Fines are not protection. The UNCRC demands something much more fundamental: a regulatory architecture that recognises children’s developmental needs and designs the digital world around them.

The Online Safety Act does not achieve that. It manages the symptoms of harm, not its causes.

And the Scottish Government’s position is even weaker: because regulation is reserved to Westminster, Holyrood insists its hands are tied. The result is a political vacuum where both governments gesture at responsibility while neither exercises it.

Which is why the Australian debate matters to us – not because we must copy it, but because it exposes what the UK still refuses to confront.

Let us begin with the simplest question: do age-based restrictions help? Yes – not perfectly, but measurably.

Even partial compliance reduces exposure to the most harmful content for younger children. Even imperfect friction slows the rate at which pre-adolescents enter environments designed for adults. Public health has always accepted that the achievable is better than the ideal.

The Australian case already proves something important: bans are bypassed by the most digitally confident teenagers, not by the nine- or ten-year-olds who are currently being sucked into algorithmic patterns that no child should experience. Those younger children cannot route around a system they barely understand. Their protection matters.  

So the question facing the UK is not whether ‘bans work.’ That is the wrong debate. The real question is: what structures would need to exist for restrictions to work well enough? And whose job is it to build them?

Australia has introduced the ban first and will build the supporting mechanisms as it goes. That is one way to proceed, albeit a politically risky one. The UK has done the opposite: it has built a sprawling piece of legislation without adjusting the underlying design of the digital world children inhabit.

Neither approach is complete. One is all gesture, the other all paperwork. What neither currently has is a coherent child-protection architecture. 

Australia’s early difficulties are not failures but feedback. They tell us what must exist for serious child protection: reliable age-assurance, independent verification systems, algorithmic regulation, design rules that prevent commercial exploitation of children’s attention, and enforcement bodies with statutory teeth.

This is where the UK’s position becomes indefensible. 

The government claims that the Online Safety Act fulfils the UNCRC, but the Act delegates responsibility for enforcement to private platforms – the very entities whose profit models rely on maximal engagement. The state cannot claim to be complying with the Convention while outsourcing its duties to the companies that create the harm it is meant to prevent.

It is the legislative equivalent of asking tobacco companies to regulate smoking.

And because the Act focuses on content rather than design, it leaves untouched the systemic mechanisms that keep children online longer than they wish to be: recommender engines, infinite scroll, attention harvesting, persuasive interface architecture, data-driven personalisation. 

The UNCRC obliges governments to confront these structural harms. The Online Safety Act simply instructs platforms to delete things more quickly. That is not a child-protection strategy; it is administrative hygiene.

What, then, should Scotland or the wider UK do?

First, we must resist the fatalistic conclusion that ‘nothing works.’ The Australian experience shows that imperfect systems still have real value for children who would otherwise be exposed early to environments designed to addict and monetise them. But it also shows that bans cannot stand alone. They must be one component of a multi-layered framework.

This is the foundation the UK has not yet built. And this is where Scotland, in particular, should be thinking differently. The question is not whether to replicate Australia but how to watch its first iteration, learn from the failures, and refine what succeeds. Australia is the experiment; we should be the improvement.

That is how policy innovation works: not through imitation, but through refinement.

There is also a cultural dimension we must confront. The UK has become accustomed to confusing legislative activity with effective governance. The Online Safety Act is politically useful – it creates the illusion of order – but legislation cannot replace strategy, and symbolism cannot replace duty.

The UNCRC tells us something uncomfortable but essential: children’s digital safety is not the responsibility of parents, schools, or platforms alone. It is the responsibility of the state. And until the UK accepts that, we will continue watching governments declare victory while the underlying harms persist.

Australia’s ban may or may not become a global template. It may succeed more slowly than expected; it may require redesign; it may expose technical challenges that will take years to resolve. That is fine. Policy experiments are allowed to be messy.

What matters is that the UK does not mistake its own legislative noise for progress.

If Australia’s systems stabilise, Scotland and the UK should adapt them. If they fail, we should improve them. If they reveal gaps in child development research, we should fill them. But we should not remain where we are, congratulating ourselves for an Act that does not meet the standard it claims to.

The real choice is simple. Governments can continue outsourcing their duty to companies that profit from children’s attention. Or they can take responsibility – and build the digital world children actually deserve. One is political noise. The other is what the UNCRC demands.

And the gap between the two is where policy must move next.
