New Ofcom regulations, introduced in July, are forcing a reckoning for online content. This overhaul of the digital landscape is part of the second phase of the Online Safety Act (OSA), which ostensibly targets content harmful to children.
The OSA was passed in October 2023, and its stated objective of protecting young people garnered widespread support from children’s charities. This was echoed by digital safety campaigners frustrated with the lack of accountability placed on Big Tech and desperate to see its power reined in.
But while these new rules may aim to protect one group, others are being put in harm’s way. Sex workers and other marginalised communities are now less able to access content on harm reduction, face huge income losses due to their accounts being targeted, and risk losing precious anonymity due to identification rules.
Minimising online harms and holding powerful tech companies to account are worthy goals. But despite fixating on explicit content, the OSA isn’t even delivering on its key aims – a recent study by the Children’s Commissioner for England showed children’s exposure to porn is higher now than before the act came into effect, and algorithms continue to bombard young people with content promoting suicide and self-harm. And when that non-progress comes at the expense of other vulnerable groups, such as sex workers, it’s clear the legislation is not fit for purpose.
We’ve heard ‘protect our children’ before
Blanket calls to ‘protect the children’ should always be treated with caution. Safeguarding young people often serves as a flimsy pretext for discriminatory legislation, and the emotive language provides useful cover for lawmakers seeking to shield themselves from criticism.
There are many examples of this in action. In the US, racial segregation during the Jim Crow era was justified in part as promoting the “best interests” of children. The US’s 1996 Communications Decency Act, which aimed to protect minors from "indecent" internet content, also has marked similarities with the OSA – although most of its original provisions were struck down by the Supreme Court as unconstitutional.
In the UK, the notorious Section 28 prohibited the promotion of homosexuality in schools under the guise of giving children “a sound start in life”, according to then prime minister Margaret Thatcher. Section 28 was repealed across the UK between 2000 and 2003, and David Cameron, as Conservative leader, offered a public apology in 2009. In 2018, he was joined by the key architect of the law, Baroness Knight – though she maintained in her apology that her intention had only been the “wellbeing of children”.
The following decade saw significant gains for LGBTQ+ communities and, until 2015, the UK was considered among the most progressive countries in the world for queer and trans people. However, it has since crashed to 22nd place, with hate crime up by 112% against gay people and 186% against trans people in the last five years. Amidst a rollback of LGBTQ+ rights, the language of protecting children is once again never far away. Last year, statutory guidance was updated to require schools to teach about biological sex, advising against materials that encourage pupils to question their gender. Politicians claimed to be safeguarding children from “disturbing”, “inappropriate” and “contested views”.
Even the 2023 Illegal Migration Act, which provided for draconian measures to tackle irregular dinghy crossings of the English Channel, was framed by the government as “a way to protect vulnerable people, including children” from criminal gangs. This rhetoric has featured heavily in a recent surge of far-right demonstrations, with Tommy Robinson telling a London rally earlier this month that migrants have made “our daughters scared to walk the streets”.
Children need protecting, but not everything that claims to protect them does what it says on the tin. While the OSA has not leveraged child safety for such openly xenophobic purposes, it is causing profound harms in the name of protection.
OSA phase two
The OSA is a behemoth piece of legislation with three distinct phases of implementation. The first was completed in mid-March 2025, requiring services to conduct risk assessments and implement safety measures to tackle illegal content. The second phase focuses specifically on content that is harmful to children but not necessarily illegal.
At the start of 2025, Ofcom, the UK’s regulatory body for online and offline communications, published a statement outlining the second phase’s requirements. In-scope user-to-user services (where content generated, uploaded or shared by one user can be seen or "encountered" by another user) and search engines had to assess the likelihood of children accessing their service by April 2025, then conduct risk assessments and implement safety measures by July 2025, while pornography services had to introduce age checks by July 2025.
April 2025 also saw the release of Ofcom’s Guidance on Content Harmful to Children, which details content that platforms must act against. Included in the highest priority tier of harm is any content that is pornographic or that depicts sexual activity. In its Codes of Practice, Ofcom recommends algorithmic filtering and age verification checks to prevent underage viewers’ exposure to adult content.

Pornography isn’t the sole focus of the OSA – the highest tier of harm also includes content that promotes suicide, self-harm and eating disorders, which should be filtered and removed entirely. However, placing all explicit content in the highest possible tier of harm, along with the distinct, extensive guidance exclusively focused on it, underscores the preoccupation of legislators and campaigners. Content that depicts or encourages serious violence is ranked lower, as merely ‘priority content’. And other content that should concern any parent – promoting conspiracy theories or medical misinformation, for example – doesn’t get a look in.
Compliance and enforcement
Failure to comply comes with a hefty price tag. Ofcom has the power to issue fines of up to £18m or 10% of a company’s annual global revenue, whichever is higher. For major tech companies, this unprecedented penalty could amount to billions of pounds, and is designed to ensure that even the largest platforms with significant revenue streams outside the UK are held accountable. Severe cases of non-compliance can lead to the government blocking access to the site from the UK, and criminal liability for a company’s senior managers and executives.
While these landmark penalties are a significant concern even for big companies, enforcement isn't going to be easy. Already, the law's reach beyond the UK is being tested. In August 2025, regulators handed down the first fine to US-based company 4chan for failing to comply with statutory information requests and were quickly met with a refusal to pay the £20,000 penalty. With undisclosed penalties racking up for every day of non-compliance, a hefty bill may be waiting for the controversial company if it loses its court case in the US. A victory, on the other hand, would significantly compromise Ofcom’s authority.
Catch-all censorship
Since the new regulations came into force in July 2025, tech companies appear to have instituted automated (and very risk-averse) moderation systems to ensure they are complying with the new legislation. These sweeping systems act like digital trawler nets, indiscriminately capturing content that is not harmful, or that even actively contributes to reducing harms for vulnerable or at-risk groups.
Since July, sex workers have reported a huge spike in their social media accounts being flagged or deleted for photos and even usernames deemed too provocative. Social media shadow bans (where sites reduce the visibility of accounts) or account deletion can lead to a massive loss of income, as well as years of work building a following, leaving sex workers poorer and more vulnerable. These generally happen suddenly, without warning, clear reasoning or the right to appeal.
Content moderation algorithms also appear to be targeting harm reduction efforts and educational content on issues including drug use, reproductive healthcare and support services for LGBTQ+ communities. This has been well documented in recent years, with queer content routinely flagged as pornographic or age-inappropriate, and content focused on eating disorder education and recovery removed. For LGBTQ+ youth, who are disproportionately likely to seek support online, blocking access to vital information and community support is particularly damaging. In the name of online safety, systems aiming to comply with Ofcom regulations are censoring the very resources that protect and inform vulnerable populations.
Much of the problem here is that the filters being applied are coarse, reflect narrow, moralising outlooks, and are designed to minimise liability rather than genuinely reduce harm. These sweeping systems lack the sophistication to distinguish between content that is harmful and content that is not; algorithms trained to flag and delete anything related to drug use don’t distinguish between content promoting the consumption of illegal drugs and content aimed at educating people on staying safe if they do. Content creators who are queer, trans or racialised, or whose content focuses on these communities, are disproportionately targeted, with anything “queer” indiscriminately labelled as “adult”. While the exposure of young people to explicit content, especially violent pornography, is a legitimate concern, such sweeping labels make these filters ineffective at best and downright harmful at worst.
Sex workers lose out, again
During the drafting and debate of the OSA, leading figures in the Labour Party pushed for provisions that would criminalise all forms of sex workers’ advertising (despite the exchange of sexual services for money being technically legal in the UK). These closely resemble the notorious US FOSTA/SESTA laws, which forced sex workers onto the street, where work is estimated to be ten times more dangerous than working indoors.
Ultimately, these provisions were not included and sites where sex workers can advertise – for now – remain up. However, some are now seeking to avoid falling foul of the new regulations by demanding sex workers upload public headshots, rather than images of their bodies. For people whose safety depends on anonymity, this is a devastating blow.
While advertising sites are still allowed, a cornerstone of the OSA’s second phase is mandatory age verification for anyone accessing adult content. Since July, internet users with a UK-based IP address have been required to verify that they are over 18, either by uploading a valid form of ID or through another approved method such as credit card verification or email-based digital footprint analysis.
In the two weeks following the enforcement of these new rules, UK visits to Pornhub dropped by 47%, and to OnlyFans by 10%. However, all may not be as it appears. Age verification doesn’t actually provide a strong deterrent to accessing porn for tech-savvy teenagers – anyone can sidestep the requirement with a virtual private network (VPN), which disguises their location. Unsurprisingly, July’s new Ofcom guidance coincided with an explosion in VPN use, with one provider reporting an 1,800% spike in downloads. So this supposed slump in UK traffic is hard to take at face value.
What a VPN cannot do is help sex workers who wish to preserve their anonymity circumvent requests to post face-out pictures, or avoid sweeping algorithmic content moderation. There is no recourse for those left at the mercy of new stipulations or censorship. The result is sex workers having to choose between their privacy and their income: if they prioritise privacy, they risk losing access to the platforms they use to make money – all without delivering any benefit for young people online.
Protecting whose children?
While policymakers and campaigners rally around the cry to ‘protect the children’, we should ask: which children, and from what?
We are certainly not protecting young people who are being cut off from vital online educational content on sexual and reproductive health or resources on eating disorder recovery. We are not protecting LGBTQ+ children who are blocked from accessing knowledge, community and history. Plus, given that the majority of sex workers are mothers, threats to their income and online safety can only have knock-on harms for the dependants they work to support.
Meanwhile, the government has had no qualms about passing other laws that cause active harm to children. As a result, deprivation levels are at a record high for this century, with around one in three children living in poverty in the UK. The Conservative government’s austerity measures through the 2010s are directly linked to reduced living standards, poorer nutrition and worse health outcomes for children.
Meanwhile, Labour has shown no intention of abolishing the two-child benefit limit, a move that would lift at least 250,000 children out of poverty. The party even suspended rebel MPs who voted to scrap the cap. And its new cross-government child poverty unit has come under scrutiny for failing to include migrant children.
The risk of young people being exposed to potentially harmful content online is real. But forcing sex workers to out themselves and implementing coarse algorithmic censorship does little to keep young people safer. And when so many other actively harmful policies are being enacted, it’s difficult to believe that legislators are that concerned with protecting the children.
Third phase incoming
The third and final phase of the OSA outlines additional duties for the largest and most influential platforms. While a final list has not been published, it is highly likely to include major social media platforms, search engines and messaging services. These sites will be required to publish annual transparency reports detailing how they are meeting child safety duties and assessing how their algorithms might expose children to legal but harmful content. The third phase of rules isn’t expected to be fully in force until 2026, but it will almost certainly coincide with yet another spike in content moderation and deletion for sex workers, as companies scramble to avoid any potential liability.
The OSA is the first piece of UK legislation designed to try to keep people safe online, and to recognise that social media companies have a duty of care to their users. The concerns and aims of the legislation are no doubt important – we all deserve to be safe online. But using legislation purely to plant an ideological flag isn’t enough. Until policymakers can collaborate effectively with marginalised groups, true harm reduction will be unattainable.
Calls to ‘protect the children’ should not give policymakers a free pass to enact blanket legislation without challenge. It is essential that legislators engage in meaningful consultation to shape laws, and that the voices of those most at risk are not silenced or ignored in the process. In this case, that includes not just children, but also the sex workers dependent on online platforms for their income, drug users seeking harm reduction resources and LGBTQ+ people looking for mental health support. Online safety cannot be realised through an uncontested, moralising agenda that sacrifices the rights and wellbeing of marginalised communities.
Thank you to Yiğit Aydınalp from the European Sex Workers’ Rights Alliance and digital safety campaigner Adele Walton, who provided valuable context that helped inform this article.
Marin Scarlett is a sex workers’ rights and reproductive justice activist.