Beyond bans and digital literacy: why the social media debate is asking the wrong question
AI, inequality, and the limits of “responsible use”
The UK debate about banning social media for under-16s has moved quickly. In the wake of serious harm and high-profile cases, restrictions feel like a necessary response. They signal urgency. They reassure the public that children’s safety is being taken seriously.
But as the debate has intensified, something important has become clearer: we are arguing about the wrong thing.
The real question is not whether social media should be banned, delayed or restricted. Nor is it simply whether children and parents need better “digital literacy”. The question is whether we are designing environments — digital, educational and social — that are capable of keeping children safe without relying on constant adult vigilance or late-stage intervention.
Digital environments have changed and bans no longer map onto reality
Much of the current policy discussion still treats “social media” as a discrete activity: something children log into, scroll through and switch off.
That model is already outdated.
As practitioners and researchers have been warning, children and young people are now navigating AI-shaped environments in which:
platforms, search engines and messaging apps blend together
generative AI is embedded in tools used for schoolwork as well as leisure
chatbots, recommendation systems and feeds shape attention and meaning
irony, role-play and harmful content increasingly blur together
Classrooms, homework platforms, gaming spaces and social media are no longer separate domains. For many children, particularly those spending more time at home due to anxiety, exclusion or unmet needs, these environments form a continuous ecosystem.
In this context, platform-specific bans start to look incoherent. They regulate access to one part of an environment while leaving the rest untouched.
What the evidence is now saying publicly
Recent BBC reporting has reflected a growing consensus among researchers that the relationship between social media use and harm is mixed, contextual and uneven — and that policy responses risk running ahead of the evidence.
One recent BBC News analysis notes that while ministers are under pressure to “do something”, researchers continue to stress that harms are shaped more by design, exposure, and wider psychosocial context than by access alone. The article also highlights concerns that bans could create false reassurance while failing to address the conditions that leave some children far more vulnerable than others.
https://www.bbc.co.uk/news/articles/cpwn1vjy0y5o
This matters because it signals a shift: uncertainty is no longer confined to academic journals. It is now visible in mainstream policy discussion — even as proposals for restrictions continue to gather momentum.
Digital literacy matters but it cannot carry the weight we’re putting on it
Alongside calls for bans, there has been a parallel push for improved digital and media literacy. This is important. Children need support to navigate online spaces, understand manipulation, and make sense of what they encounter.
But there is a growing risk that digital literacy is being asked to compensate for structural neglect.
Literacy-based approaches often assume:
regular school attendance
emotionally available adults
time and capacity for conversation and reflection
a level of cognitive and sensory ease
For many children, those conditions simply don’t exist.
When literacy becomes the primary safeguard, children who are already disadvantaged are left more exposed — not less.
Poverty, exclusion and neurodiversity change the risk landscape
One of the most striking gaps in the current debate is the lack of attention to how unevenly digital risk is distributed.
Children experiencing poverty, housing instability, school exclusion, bullying, or unmet mental health needs spend more time online not because they are reckless, but because offline spaces have become unsafe, inaccessible or unavailable.
For neurodivergent children, this is intensified:
online environments can feel more predictable than face-to-face interaction
algorithms can reinforce fixations and certainty
sensory and emotional overload offline can push engagement online
In these contexts, neither bans nor literacy initiatives land equally. Restrictions are more likely to be bypassed. Guidance is more likely to feel irrelevant. And responsibility quietly shifts onto families already carrying the most strain.
Why bans and literacy both backfire without infrastructure
Evidence from youth, safeguarding and digital harms research shows a consistent pattern: when policy relies on blunt tools, young people adapt around them.
Restrictions are circumvented through fake accounts, secondary devices and private platforms. Content doesn’t disappear; it moves, often into spaces that are less visible, less moderated and less open to adult support.
At the same time, children quickly learn to distinguish between interventions designed with them and those imposed on them. When responses feel performative or out of touch, they are more likely to be mocked, gamed or ignored — not internalised.
This is not a failure of young people. It is a failure of policy design.
AI raises the stakes but also clarifies the problem
The growing role of generative AI does raise new risks:
content is harder to interpret as real, ironic or harmful
extremist or abusive ideas are less overt and more ambient
young people encounter persuasive systems rather than static material
But AI also makes something clearer: safety cannot depend on individual judgement alone.
We would not expect children to independently assess chemical safety in a laboratory, or structural safety in a building. We regulate those environments by design.
Digital environments should be no different.
What a more credible response would look like
If the aim is genuinely to reduce harm, policy needs to move beyond the false choice between bans and literacy, and towards capability and care.
That means:
Safer defaults, including limits on algorithmic amplification and recommendation loops
Clear accountability for platform design, rather than devolved responsibility to parents and schools
Early offline support, treating school absence, exclusion and distress as safeguarding signals
Adapted mental health provision, especially for neurodivergent children, rather than diagnostic deflection
Support for families, recognising that engagement requires time, trust and stability
In short, it means building environments that are safer by default, rather than environments that are risky unless constantly policed.
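To make the design point concrete, here is a minimal, purely illustrative sketch (in Python) of the difference between engagement-first ranking and “safer by default” ranking. Every name, field and threshold below is a hypothetical assumption for illustration; it does not describe any real platform’s system.

```python
# Purely illustrative sketch: contrasting an engagement-first ranker with a
# "safer by default" ranker for a child account. All names, fields and the
# cap value are hypothetical assumptions, not any real platform's system.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    predicted_engagement: float  # e.g. modelled click/dwell probability, 0..1
    amplification_score: float   # how much prior engagement has boosted this item
    age_appropriate: bool        # result of an upstream classification step

def rank_engagement_first(items: list[Item]) -> list[Item]:
    # Engagement-based amplification: whatever holds attention rises, so
    # prior engagement compounds into a recommendation loop.
    return sorted(
        items,
        key=lambda i: i.predicted_engagement * (1 + i.amplification_score),
        reverse=True,
    )

def rank_safer_default(items: list[Item], amplification_cap: float = 0.2) -> list[Item]:
    # Safer default: filter unsuitable items before ranking, and cap how much
    # accumulated engagement is allowed to boost anything shown to a child.
    eligible = [i for i in items if i.age_appropriate]
    return sorted(
        eligible,
        key=lambda i: i.predicted_engagement
        * (1 + min(i.amplification_score, amplification_cap)),
        reverse=True,
    )
```

The specific cap is not the point; the point is that the safety property lives in the system’s defaults, rather than in a child’s moment-to-moment judgement or a parent’s supervision.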
Moving the debate forward
Public concern about children’s online safety is justified. But urgency should not push us into solutions that feel decisive while leaving underlying conditions untouched.
Bans may reassure adults. Literacy may empower some children. But without attention to inequality, neurodiversity and the realities of contemporary digital ecosystems, neither will deliver the protection we are looking for.
The question is no longer whether children should be online. They already are — across learning, play, communication and identity.
The real question is whether we are willing to design systems — digital and social — that meet that reality with care, competence and responsibility.
Policy briefing: implications for education, online safety and safeguarding
This series has argued that bans and digital literacy alone are insufficient responses to the risks children face in contemporary digital environments. Below we set out practical policy implications for education, regulation and safeguarding — grounded in current evidence and frontline experience.
1. Education policy: AI-era safeguarding needs capacity, not just curriculum
What we’re seeing
Schools are increasingly expected to manage risks arising from AI tools, chatbots, online communities and algorithmic content — often without training, time or specialist support.
Digital literacy initiatives are frequently bolted on to already overstretched curricula, with limited evaluation of impact.
Policy implications
Digital and AI literacy must be paired with pastoral capacity, SEND expertise and safeguarding support, not delivered in isolation.
Schools need clear guidance on when digital risk signals (e.g. fixation, withdrawal, online distress) should trigger early help, not disciplinary responses.
Neurodivergent pupils and those with unmet mental health needs require adapted approaches, not generic “online safety” lessons.
Relevant links
BBC News: mixed evidence on social media harm and policy responses
https://www.bbc.co.uk/news/articles/cpwn1vjy0y5o
Ofcom – Children and parents: media use and attitudes
https://www.ofcom.org.uk/research-and-data/media-literacy-research/childrens
2. Online Safety Act: implementation must address design, not just content
What we’re seeing
Implementation of the Online Safety Act has largely focused on content moderation and age-appropriate design, but algorithmic amplification and AI-driven systems remain under-scrutinised.
Responsibility risks being displaced onto parents and schools through guidance that assumes constant supervision.
Policy implications
Ofcom’s implementation guidance should prioritise systemic risk assessment, including:
recommendation loops
engagement-based amplification
AI-generated or synthetic content
“Safer by default” settings should be the norm, not optional add-ons.
Enforcement must consider how harms are experienced unevenly by children facing poverty, exclusion or neurodivergence.
Relevant links
Riedman Report – Algorithmic extremism with AI-powered platforms
Strengthening the OSA: our 10-point plan for Government - Online Safety Act
3. Safeguarding guidance: move beyond thresholds and late-stage crisis
What we’re seeing
Children whose distress is expressed through online behaviour are often deemed to fall “below threshold” for mental health or safeguarding intervention.
Risk is reframed as behavioural, digital or disciplinary — rather than relational or systemic.
Policy implications
Safeguarding frameworks should recognise online distress and fixation as potential indicators of unmet need, not solely as conduct issues.
Guidance should explicitly address:
displacement effects of bans and restrictions
risks created when children move into less visible online spaces
Early, relational support must be prioritised over criminalisation or crisis-only responses.
4. What this means in practice
If policymakers are serious about protecting children online, action needs to focus on:
Design accountability, not just individual behaviour
Support infrastructure, not symbolic restriction
Early help, not threshold escalation
Equity, recognising that risk is socially patterned
This is the difference between policy that reassures and policy that actually protects.


