Why regulating social media is now a public health imperative

Social media is not a neutral communication tool but a behavioural system engineered to capture attention and shape development. This piece argues that protecting children online must be treated as a public health imperative requiring structural intervention – not piecemeal regulation.

When I was in my first year of secondary school, I secretly created a Facebook account out of boredom one evening in my bedroom. The process was effortless. I entered a false birth date, clicked confirm, and within minutes, I had access to an entirely new social universe. I was twelve. 

At the time, it felt expansive rather than dangerous. You could browse people’s photos, track social dramas in real time, message privately, comment publicly, curate albums, and observe – endlessly – the lives of others. For a generation raised between MSN and the smartphone era, it felt like stepping into something vast and thrilling. But it did not take long for the social consequences of that access to become clear. 

Everything you posted was judged. Photos were ranked. Appearances were scrutinised (does anyone remember the classic ‘like for a line’ or ‘like for a rating’, or the meticulous ‘like for a personality, looks, and closeness rating’?). Friendships were publicly measured through likes, comments, and exclusions. Rumours travelled faster than they ever had in playground corridors, now electrified by screens. What began as novelty quickly became performance – a careful curation of self, designed to invite approval and avoid ridicule. 

For some, the consequences were devastating. A girl I knew became the target of sustained online harassment after intimate details were shared without her consent. The scale and speed of the bullying campaign that followed would have been impossible in any previous era. She took her own life. 

That loss never left me. And it is impossible to separate memories like that from the policy work I have since become involved in. Because what children face online today is not simply an extension of what we experienced – it is an escalation.

Where early social media revolved around peer interaction, today’s platforms are structured ecosystems engineered to capture attention and shape behaviour. Image-saturated platforms such as Instagram and TikTok centre identity around visual branding and algorithmic visibility. Snapchat embeds disappearing content, streak mechanisms, and AI features into daily communication. Endless scroll, push notifications, behavioural advertising, and predictive content feeds ensure that disengagement is structurally difficult. 

These environments are not accidental. They are designed. 

A core mechanism underlying compulsive platform use is the variable ratio reinforcement schedule – a behavioural model derived from B.F. Skinner’s experimental work and widely used in gambling design. In this system, rewards (likes, comments, views, shares) are delivered unpredictably. Adapted by technology platforms, this schedule means users cannot anticipate when validation will arrive, which increases compulsion and makes disengagement harder. 

This intermittent reinforcement pattern triggers heightened dopamine anticipation – the neurological ‘reward prediction’ response – creating one of the most persistent behavioural loops observed in psychology. The user becomes locked into cycles of checking, scrolling, and refreshing long after the rewards themselves cease to hold meaningful value.

I recognise that loop intimately. Throughout exam periods at school and later at university, I would find myself repeatedly checking my phone – often when it hadn’t even buzzed – pulled back into feeds I had already exhausted. Time that should have been spent revising, sleeping, exercising, or simply being outdoors with friends was instead absorbed into compulsive scrolling. Even now, after deleting social media apps, I still feel the phantom pull to check. That attentional fragmentation has affected my academic performance more than it ever should have – a personal illustration of how deeply these behavioural systems embed themselves.

Social platforms deepen this loop further through identity investment. Users embed their sense of self into profiles, networks, and content archives. Leaving the platform therefore carries social and psychological cost. For children and adolescents – who are actively forming identity and whose reward systems are neurologically more sensitive – the impact is amplified. 

This is not merely behavioural. It is neurodevelopmental. 

Research into dopaminergic overstimulation shows that repeated exposure to high-intensity digital reward environments can alter baseline pleasure thresholds. Over time, the brain downregulates dopamine transmission in an attempt to maintain equilibrium. The result is a deficit state when stimulation is removed – experienced as anxiety, irritability, insomnia, dysphoria, and emotional flatness (akin to depression). 

In simpler terms, the more time spent in hyper-stimulating digital environments, the harder it becomes to feel good offline. 

This addictive scaffold does not exist in isolation. It intensifies other harms embedded within social media ecosystems: chronic social comparison and body image anxiety, cyberbullying and relational aggression, exposure to harmful or extremist content, data extraction and behavioural profiling, and algorithmic manipulation of mood or belief. Addiction is therefore not one harm among many – it is the infrastructure that enables the others and makes young developing brains more vulnerable to harm.

Children, whose prefrontal cortex – the region governing executive function, impulse control, and emotional regulation – is still developing, are particularly vulnerable. When formative cognitive development occurs within systems designed to override self-regulation, long-term developmental impacts are inevitable.

Anyone working with young people will recognise the downstream effects. Teachers report escalating behavioural volatility, reduced attention spans, heightened anxiety, and diminished resilience. Mental health referrals among adolescents continue to rise. Increasing numbers of professionals describe a cohort of socially withdrawn ‘bedroom dwellers’ – children whose primary relational world is mediated through screens. This is the current context in which policy responses must be judged. 

The UK Government has begun signalling concern. Recent proposals include consulting on a social media age ban, extending online safety regulation to AI chatbots, and introducing measures requiring platforms to preserve children’s data after death in cases of suspected harm. 

These steps are not insignificant. They indicate political recognition that the digital environment poses systemic risks to children. But they are not sufficient. Age bans alone are porous. International evidence already demonstrates that prohibitions on under-16s are easily circumvented through VPNs, false credentials, or secondary accounts. Prohibition without structural redesign drives usage underground rather than eliminating harm.

Most importantly, bans do not address the architecture of the platforms themselves – the algorithmic amplification systems, reward loops, and behavioural targeting models that generate harm regardless of user age. If the environment remains toxic, restricting entry points does not make it safe. 

This is why our recent policy work, Saving Childhood in Scotland, argues that the crisis must be treated not as a discrete technology issue, but as a public health emergency requiring whole-system intervention. 

We cannot return to a ‘before time.’ Digital life is structurally embedded in education, friendship, creativity, and communication. The task is therefore dual: protect children from systemic digital harm, and rebuild the developmental conditions that digital life has displaced.

That requires action across multiple fronts.

First, government must accept regulatory responsibility. Under the UN Convention on the Rights of the Child – now incorporated into Scots law – states hold a legal duty to ensure children’s environments are safe by design. Delegating responsibility to parents, teachers, or individual schools is neither equitable nor effective.

Second, we must embed a precautionary principle into technology governance. Innovations affecting children should be proven safe before deployment – not retrospectively regulated after harm occurs. No economic or innovation incentive justifies exposing developing brains to untested behavioural systems. 

Third, education must rebalance toward emotional and social development. In an era of algorithmic manipulation, resilience, empathy, conflict resolution, and emotional literacy are no longer ‘soft skills’ – they are protective infrastructure. Proposals such as later school starting ages, play-based early education, and embedded mental health support reflect this shift, and their effectiveness cannot be overstated. 

Fourth, we must reinvest in real-world childhood. Outdoor play, youth clubs, sports, arts participation, and unstructured socialisation are not nostalgic luxuries – they are empirically linked to wellbeing, confidence, and relational development. Digital harm expands in proportion to the erosion of these spaces. 

Fifth, technology itself must be redesigned. Child-safe device standards, default-safety settings, algorithmic restrictions, and publisher liability for content amplification represent structural levers capable of reducing harm at scale. 

Finally, we must act for those already affected. Specialist school-based mental health practitioners, trauma-informed support, and early intervention systems are essential for a generation already navigating the consequences of unregulated digital exposure. 

None of this is anti-technology. Nor is it a call for digital abstinence. Technology offers extraordinary developmental benefits – education access, creative tools, global communication, and knowledge democratisation. But these benefits coexist with commercial infrastructures optimised not for wellbeing, but for engagement extraction.

Children should not be the testing ground for behavioural monetisation models. 

The question we must confront is not whether harm is occurring – the evidence is now overwhelming. The question is why we tolerated its emergence without precaution, and why meaningful intervention has taken so long. 

At no other point in modern history would we have permitted industries to deploy psychologically manipulative systems directly into children’s daily lives without safety testing, proper age safeguards, or developmental oversight. We would not accept it in pharmaceuticals, education, or food safety. We should not accept it in digital architecture. 

Childhood is a finite developmental window. Neurological, emotional, and social foundations formed during this period shape life trajectories decades beyond platform trends or corporate profits. Regulating social media use is therefore not a culture war issue, nor a generational panic. It is a public health imperative – and a moral one.

If we fail to act proportionately to the evidence before us, we are not simply neglecting policy responsibility. We are permitting preventable developmental harm to continue at population scale. And history will judge that failure accordingly.
