Who gets to design our children’s minds?
This week’s In Common explores the mounting evidence that social media algorithms can alter political attitudes and amplify particular narratives. Designed to maximise engagement, these systems do more than entertain – they manipulate attention and belief. If they can shift adult opinion, what are they doing to children immersed in them for years?
Manipulation is defined as controlling someone or something to your own advantage, often unfairly or dishonestly. It is a word we reserve for cult leaders, abusive partners, authoritarian regimes.
It is not a word we tend to associate with apps on our phones.
For years, platform owners have insisted their feeds are neutral – that they merely ‘show users what they want to see’. But emerging research tells a different story. Algorithms do not simply reflect public opinion. They shape it.
A recent independent study examining X found that its algorithm systematically promoted conservative political content across users’ feeds, including to those on the left. Seven weeks of exposure to this curated environment measurably shifted users’ political attitudes in a more conservative direction, particularly around policy priorities and perceptions of major political events. Crucially, the algorithm also influenced which accounts users chose to follow – meaning its effects persisted even after the curated feed was switched off.
Earlier research on Twitter, before Elon Musk’s ownership, found similar patterns of right-leaning amplification. This suggests the mechanism is not reducible to one individual’s ideology. It reflects something more structural: the power of algorithmic ranking systems to amplify certain narratives over others in pursuit of engagement.
By contrast, a large-scale study conducted with Meta during the 2020 US election found that replacing Facebook and Instagram’s algorithmic feeds with chronological ones did not produce measurable shifts in political attitudes during the study period – but it did significantly reduce user engagement. In other words, manipulation is what keeps us there.
Different platforms operate within different informational ecosystems and ownership incentives. But the core fact remains: feed algorithms alter what users see, who they follow, and what content spreads fastest. They are not passive mirrors. They are active editors of reality.
That is manipulation by design.
The objective is not civic education. It is engagement maximisation – because attention is monetised. The more emotionally charged, polarising, or identity-affirming the content, the longer users stay. The longer they stay, the more data is harvested and advertising space sold.
Now, remove the adult voter from the equation. Replace them with a thirteen-year-old.
Young users are not just consuming content. They are developing identities, political intuitions, emotional regulation, and social norms within algorithmically curated environments. The same systems shown to shift adult political attitudes are operating inside the developing brains of children – daily, invisibly, and at scale.
And these systems are engineered around behavioural psychology.
Social media platforms rely on variable reward schedules – the same reinforcement model used in gambling design – delivering unpredictable rewards in the form of likes, shares, and validation. This creates compulsive checking loops and heightened dopamine anticipation. Over time, high-intensity digital stimulation can alter baseline reward thresholds, making offline experiences feel comparatively dull.
Addiction is not a side effect. It is the infrastructure.
Now combine that compulsive design with algorithmic amplification capable of shifting beliefs, elevating extreme content, or privileging certain narratives over others. Children are not merely distracted by their devices. They are immersed in persuasive environments optimised for engagement rather than wellbeing.
When in modern history have we permitted private corporations to run live behavioural experiments on millions of children – experiments capable of altering attention, belief, and emotional regulation – with no meaningful pre-approval process? We regulate advertising directed at young people. We impose age classifications on films. We restrict gambling access. We require pharmaceutical safety trials. In each case, the principle is clear: children deserve heightened protection from manipulation.
Yet social media algorithms – capable of shaping attention, mood, belief, and behaviour – operate with comparatively limited structural constraint.
This is not an argument that every platform owner pursues a single political agenda. It is an argument that whoever controls the algorithm controls amplification. The mechanism is powerful enough to tilt discourse. Its direction depends on ownership incentives, commercial priorities and, potentially, political alignment.
That asymmetry should concern us regardless of party affiliation.
If seven weeks of curated exposure can measurably shift adult policy preferences, what do years of immersion do to a developing adolescent brain? That is precisely why the debate must move beyond age bans alone. Restricting access does not address the architecture of amplification. Nor does it confront the addictive scaffolding that keeps young users inside these systems in the first place.
In our new policy paper, Saving Childhood in Scotland, we argue that protecting children online must be treated as a public health priority. That means embedding a precautionary principle into technology governance, introducing enforceable design standards, and rebalancing childhood toward environments that support healthy neurodevelopment.
The question is not whether algorithms shape beliefs. The evidence increasingly shows that they can. The question is whether we are willing to allow that shaping power to operate largely unchecked on children.
Manipulation at this scale has no historical precedent. Nor does our current tolerance of it. Children should not be the testing ground for behavioural monetisation models – or for algorithmic experiments in political persuasion. If we would not allow it in food, pharmaceuticals, or gambling, we should not allow it in the digital architecture shaping the next generation.
We are not talking about screen time. We are talking about cognitive environment design. And we are allowing it to be dictated by engagement metrics.

