by Cindy Harper, Reclaim The Net:
The consultation closes in May, but Parliament already handed ministers the power to act in February.
The British government launched a consultation this week that could require age verification for anyone using social media, gaming sites, or AI chatbots.
The consultation, titled “Growing up in the online world,” opened on March 2 and closes on May 26, 2026. It asks the public whether the government should ban under-16s from social media entirely, impose mandatory overnight curfews on platform access, restrict AI chatbot features for minors, and require platforms to disable “addictive design features” like infinite scrolling and autoplay.
The government says it will respond in summer 2026, and Parliament has already handed ministers new legal powers to act on the findings without waiting for fresh primary legislation.
The Prime Minister announced those powers on February 16, weeks before the consultation even opened. The government can now move faster once it decides what it wants. What the public thinks determines the packaging, not the destination.
Technology Secretary Liz Kendall framed it this way: “The path to a good life is a great childhood, one full of love, learning, and play. That applies just as much to the online world as it does to the real one.”
The actual policy tools being considered are a different matter.
Age verification, as a mechanism, works by proving identity. To prove an age, every user must prove who they are.
A social media platform that must exclude under-16s must verify the age of its over-16s. That means collecting identity documents, linking browsing activity to real identities, or building infrastructure that a government can later compel to serve other purposes.
The surveillance architecture required to enforce a children’s safety law is the same architecture required to surveil adults. It gets built for one reason. It gets used for others.
Then there’s the “Help your child stay safe online” campaign site, which the government launched alongside the consultation. The site includes a page directing parents to report “bullying, threats, harassment, hate speech, and content promoting self-harm or suicide” directly to platforms, with links to the reporting tools of Instagram, Snapchat, Facebook, WhatsApp, TikTok, Discord, YouTube, and Twitch.
The government, through a campaign website, is now actively encouraging parents to funnel reports of “hate speech” to the same private companies that define what hate speech is. There’s no independent standard, no legal definition that applies consistently, and no oversight of what platforms do with those reports. Just a government directing citizen complaints into Big Tech’s moderation queues and presenting that as a safety feature.
“Hate speech” is one of those categories that sounds precise until you ask who decides. Platforms decide. They always have. What the government has done here is lend its authority to that process, making Big Tech’s internal moderation systems look like public infrastructure. They are not public infrastructure. They are corporate policies, applied inconsistently, without appeal, and with no democratic accountability.