Born Suspect: When Safeguarding Starts Before Speech
How the UK’s Prevent programme has turned infancy into a site of surveillance, and what that reveals about the future of safeguarding.
This week, Hyphen revealed that more than 200 babies and toddlers under the age of three have been referred to the UK’s Prevent counter-terrorism programme since 2016. Eighty-six of those children were under two. The majority, around seventy per cent, were flagged for “Islamist concerns.”
These are infants who can’t yet talk, walk, or comprehend the idea of ideology. Yet they are already recorded in a counter-terror database, their lives beginning with suspicion, not trust.
“I struggle to understand how this applies to someone so young,” said Alexander Gent, Chair of the National Association of Muslim Police. “Especially for children who are babies and can’t even speak or comprehend what an extremist ideology is.”
For those of us working in safeguarding and digital ethics, this news is more than alarming; it’s diagnostic. It shows how far the culture of “early intervention” has drifted from its original intent. A programme designed to protect people from harm has become a system that can label the most innocent as potential threats.
When protection becomes prediction
Prevent operates under the banner of “safeguarding.” But when safeguarding becomes pre-emptive, used to predict future danger rather than to respond to actual risk, it stops being protection. It becomes surveillance disguised as care.
The Prevent duty requires teachers, health professionals, and local authorities to report individuals deemed “at risk of being drawn into terrorism.” In practice, this has created a parallel system of suspicion that captures children with no agency or comprehension, often reflecting concerns about their families, ethnicity, or religion rather than their behaviour.
This is what we mean when we say that the UK’s safeguarding model has become a pre-crime framework. It conflates vulnerability with volatility. It assumes risk can be managed through visibility — the more data, the earlier the capture, the safer we’ll be. But it doesn’t work that way. The more you stretch the net, the less meaning the data holds, and the more trust you destroy.
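The arithmetic behind that claim is worth making concrete. The sketch below (Python, using entirely hypothetical numbers; none of these figures come from Prevent or from Hyphen’s reporting) applies standard base-rate reasoning: when the thing being screened for is vanishingly rare, even a seemingly accurate flag is overwhelmingly likely to be a false positive, so widening the net mostly produces noise.

```python
# Back-of-the-envelope base-rate illustration. All numbers are
# hypothetical and for intuition only; they are not Prevent statistics.

population = 1_000_000      # children screened
true_risk_rate = 0.0001     # assume 0.01% genuinely at risk (hypothetical)
sensitivity = 0.90          # flag catches 90% of genuine cases (hypothetical)
false_positive_rate = 0.05  # flags 5% of everyone else (hypothetical)

at_risk = population * true_risk_rate
not_at_risk = population - at_risk

true_flags = at_risk * sensitivity
false_flags = not_at_risk * false_positive_rate

precision = true_flags / (true_flags + false_flags)
print(f"Flags raised:  {true_flags + false_flags:,.0f}")
print(f"Genuine cases: {true_flags:,.0f}")
print(f"Chance a flagged child is genuinely at risk: {precision:.2%}")
# With these assumptions, roughly 99.8% of flags point at children
# who were never at risk: stretching the net destroys its meaning.
```

With these assumed inputs, fewer than one flag in five hundred points at a genuine case, and the wider the net is cast, the worse that ratio becomes.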
Neurodivergent echoes
For families of neurodivergent children, this story sounds grimly familiar. We’ve seen Prevent referrals made for children whose intense interests, literal communication, or anxiety are misread as ideological fixation.
When professionals lack training in developmental or neurodiversity-informed practice, difference itself becomes a risk marker.
This blurring of behaviour and belief isn’t accidental. It’s a structural pattern — one that grows wherever fear fills the gaps left by understanding. And once a child’s name is in the system, it can be hard to get it out again.
Rights & Security International has warned that Prevent data can persist indefinitely across “a convoluted spider’s web” of government databases — even when official policy says it should be deleted.
For a toddler, that could mean carrying an invisible file through school, adulthood, and beyond, flagged as “risk assessed” before ever learning to speak.
The datafied child
At Safe by Default, we focus on how digital systems expose vulnerable children to harm — through unfiltered platforms, addictive design, and algorithmic profiling. But what Hyphen’s reporting shows is that the logic of datafied childhood extends far beyond screens.
From predictive policing to social care analytics, we are teaching our institutions to model childhood as a set of probabilities.
We’re training them to look for red flags rather than relationships. And once that lens is normalised, once “early intervention” becomes “pre-emptive identification”, it’s not a big leap from risk scores to watchlists.
The Home Office insists that Prevent referrals are “carefully assessed” and “kept strictly confidential.” But confidentiality means little when a record can survive in multiple databases indefinitely, long after the child has grown up. These are safety systems with no sunset clause.
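By contrast, a genuine sunset clause is straightforward to engineer when deletion is treated as a design requirement rather than an afterthought. The sketch below is a hypothetical illustration (the Referral class, the one-year retention period, and purge_expired are all assumptions for this example, not anything Prevent implements): every record carries an expiry date from the moment it is created, and purging is routine.

```python
# A minimal sketch of a data "sunset clause": every record carries an
# expiry date from the moment it is created, and deletion is automatic.
# This is a hypothetical illustration, not how Prevent databases work.

from dataclasses import dataclass, field
from datetime import date, timedelta

RETENTION = timedelta(days=365)  # assumed retention period


@dataclass
class Referral:
    subject_id: str
    created: date = field(default_factory=date.today)

    @property
    def expires(self) -> date:
        return self.created + RETENTION


def purge_expired(records: list[Referral], today: date) -> list[Referral]:
    """Keep only records still inside their retention window."""
    return [r for r in records if today < r.expires]


records = [Referral("child-001", created=date(2023, 6, 1)),
           Referral("child-002", created=date(2024, 6, 1))]
records = purge_expired(records, today=date(2025, 1, 1))
print([r.subject_id for r in records])  # only the unexpired record remains
```

The design choice is the point: when expiry is a property of the record itself, indefinite retention requires a deliberate decision, rather than happening by default across “a convoluted spider’s web” of databases.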
Born into a risk architecture
Prevent’s reach into infancy exposes a deeper truth about modern safeguarding: that our systems no longer trust the human.
We are building infrastructures of suspicion that begin at birth, or before it, if you count predictive analytics in maternity and social care.
The rhetoric is always the same: better safe than sorry. But safety built on suspicion is not safety at all. It is containment.
The question we should be asking isn’t “how can a baby be radicalised?” but “how did our institutions become radicalised by fear?”
Towards genuine safeguarding
Real safeguarding doesn’t start with data; it starts with understanding.
It means listening to families rather than logging them.
It means training professionals in developmental and neurodiversity-informed practice, not counter-extremism theory.
It means designing systems that protect privacy as fiercely as they protect life.
And it means rejecting the idea that risk can be eliminated by watching children more closely.
We must create a culture where care comes before compliance, where “safe by default” means safe from misinterpretation, mislabelling, and overreach.
“These children are being stereotyped and labelled when they are innocent,” said Baroness Shaista Gohir of the Muslim Women’s Network UK. “That label could stay with them for the rest of their life.”
Prevent was meant to keep the next generation safe. Instead, it’s teaching them that the state sees them, before they can even speak, as suspects.
If we want a safer future, it starts with dismantling that lesson.
Author note:
Safe by Default is a parent- and survivor-led campaign calling for tamper-proof, developmentally informed digital safety and safeguarding systems. We advocate for upstream prevention that protects children from exploitation, misinterpretation, and over-surveillance.
Next: When the state calls parents the failsafe — what Southport reveals about blame, fear, and design

