Claims that Online Safety Act curbs free speech ring hollow from platforms that seek to close down criticism
Scrapping the OSA would ignore the evidence of online harms that organisations like mine have spent years uncovering: from self-harm and suicide promotion to child exploitation
The Online Safety Act gained widespread attention for the first time last month following the introduction of age verification requirements. This change in the way we access the internet reignited debate about the Act, including calls for it to be scrapped. Heeding these calls would ignore the evidence of online harms that organisations like mine have spent years uncovering: from self-harm and suicide promotion to child exploitation.
Into this debate charged the X platform, declaring the UK’s Online Safety Act an “overreach” and a threat to free expression. In a lengthy post on its Global Government Affairs page, the platform wrote that lawmakers made a decision to “increase censorship” and that “free speech will suffer”. These claims are as misleading as they are hypocritical.
The Online Safety Act is not censorship. It focuses tightly on illegal content, such as terrorism and child sexual exploitation, and protects children from content that is harmful to them, such as self-harm and eating disorders.
Nothing in the Act requires the removal of lawful speech
There is, in fact, a proactive requirement to protect freedom of expression, with the largest platforms required to continually assess the impact of their decisions on users’ free expression.
X's crusade for free speech rings hollow when you recall that the company dragged our non-profit watchdog into court for speaking up about the platform. X sued CCDH after we published research showing a startling and substantial rise in online hate following Elon Musk’s acquisition of Twitter and reinstatement of previously banned accounts such as Andrew Tate and Tommy Robinson. The lawsuit failed and was dismissed in March 2024, with the presiding judge writing unequivocally that X’s suit was “unabashedly and vociferously … about one thing: punishing [CCDH] for their speech”.
Companies attempting to avoid responsibility (and protect their profits) often cloak themselves in lofty principles. Their hypocrisy would not merit rebuttal were it not for X’s claim to be working hard to be “in compliance” with the Online Safety Act. That compliance is in serious doubt after our research showed the extreme level of violent incitement still circulating on the platform. One year on from the 2024 riots, in which posts on X contributed to widespread disorder on British streets, our researchers found that little has changed despite the Online Safety Act coming into force in the intervening period. Far from ‘working hard to be in compliance’ with the Act, X shows no evidence, in CCDH’s research, of putting in place the systems and processes necessary to meet the Act’s requirements.
Ultimately, the Online Safety Act introduces a long-overdue framework to ensure social media companies like X are responsible for harms on their platforms. The Act is the product of years of parliamentary debate and public pressure – I know because I was the first witness before the bill committee way back in 2021. Despite surprise at the introduction of age verification last month, recent YouGov polling suggests that the Online Safety Act maintains widespread public support.
Calls to scrap the Online Safety Act, and X’s warnings about censorship, aren’t about protecting free expression. They’re about preserving a status quo that benefited and enriched powerful platforms at the expense of users, particularly children. When the DSIT Secretary Peter Kyle said that a whole generation of children had been failed by governments who didn’t act sooner, he was right. Scrapping the Act now, under pressure from platforms unwilling to follow the rules, would embed that failure.
Imran Ahmed is CEO of the Center for Countering Digital Hate