UK policymakers designed the Online Safety Act to protect children from harmful online content by requiring online services to verify users’ ages and restrict access to pornography, self-harm content, and other age-inappropriate material. But the Act has suffered a series of high-profile failures in its first few months of implementation. Some innocuous online forums have shut down entirely to avoid liability risks, while legitimate support communities, such as those for sexual assault survivors or people trying to quit smoking, now require government ID verification, leaving users with no expectation of anonymity when accessing these sites. This domestic policy headache has also become a major international liability, with policymakers abroad criticizing UK regulators for censoring foreign companies.
One of the main problems is that the Act encourages overzealous compliance and enforcement, resulting in unnecessary content restrictions that deviate from the goals of the law. However, there are three steps Parliament and Ofcom, the UK regulator tasked with enforcing the Act, should take to address this problem.
- Give Platforms More Guidance on What Constitutes Harmful Content
The Problem: The Act does not provide sufficient guidance on which harmful but legal content platforms must protect children against, resulting in unnecessary content restrictions.
The Fix: The government should narrow the categories of content that platforms must protect children against to a clear and unambiguous set. The Act’s list of categories includes a wide range of ill-defined content, such as “bullying content” or “content which encourages…behaviours associated with an eating disorder.” Without clear definitions and criteria for evaluating content, risk-averse online services must address any content that might fall under these categories, such as inside jokes and memes or articles about diets and weight loss, to avoid running afoul of the Act. Ideally, Ofcom would provide this regulatory clarity immediately through its existing authority, but because the Act itself creates some of these broad categories, Parliament may need to amend the law to fully fix the problem.
- Give Platforms Time to Fix Problems Before Imposing Fines
The Problem: Online services face fines for any compliance failures, creating pressure to implement overly broad restrictions rather than risk regulatory action.
The Fix: Parliament should amend the Act to require Ofcom to offer online services a remediation period (e.g., 30-60 days) and only impose penalties for continued non-compliance. While Ofcom has developed informal “alternative compliance tools,” including “compliance remediation” where it gives service providers “the opportunity to address or remedy any compliance concerns identified in lieu of opening an investigation,” the Act allows Ofcom to impose penalties immediately upon finding a breach of duties. Platforms currently do not know when Ofcom will use these discretionary tools rather than immediate enforcement action, and that uncertainty drives them to over-restrict content to ensure compliance. This approach has extensive precedent across UK regulation: the Communications Act 2003 requires representation periods, the FCA provides warning periods before penalties, and the CMA offers remedy opportunities before fines.
- Require Judicial Review for Restrictions on Content
The Problem: Regulators face political pressure to demonstrate they are protecting children, but bear no cost when they err on the side of excessive content restrictions.
The Fix: Parliament should amend the Act to require that Ofcom’s content restrictions go through the courts, just like other restrictions on speech rights. Independent judicial oversight, especially for subjective “legal but harmful” content, ensures that regulators’ restrictions meet proper legal standards and proportionality requirements and prevents regulatory overreach. For urgent cases, courts could conduct expedited review, such as within 72 hours, or review Ofcom’s actions after the fact when immediate action is required. The UK already requires court approval for ISP blocking orders under copyright law, and courts routinely handle content injunctions in defamation and privacy cases. Judicial review would protect free speech rights while still allowing swift action against genuinely harmful or illegal material, creating the same procedural safeguards the UK uses for other fundamental rights.
Creating a More Effective Framework for Child Protection
The Online Safety Act has worthwhile elements, especially the addition of new criminal offenses for individuals who engage in cyberflashing, intimate image abuse, or epilepsy trolling. However, the current implementation has created a problematic incentive structure where platforms go beyond what the law requires, implementing restrictions that likely contradict what their users actually want, simply to avoid the risk of regulatory penalties. These three changes leverage existing UK regulatory frameworks to create a more targeted, predictable system that actually protects children while preserving legitimate online discourse.
Image credit: Brian Twitty/Wikimedia Commons