After the Twitter boycott, should the government do more to clamp down on social media hate speech? 

There is currently no statutory social media regulation in the UK (PA)


As MPs, journalists and concerned users boycott Twitter over its stance on hate speech, Eleanor Langford asks whether the Government can — and should — do more to regulate social media giants

Twitter was noticeably quieter this week as users logged off in droves to protest the site’s apparent inaction on antisemitism. And, less than a month ago, a debate erupted over whether Facebook should censor posts by the US President. As the world increasingly moves online, moderation decisions by social media giants carry more weight than ever before. So who should be in charge of regulating digital content — private companies or the state?

“The problem at the moment is there isn't any regulation,” Labour’s Jo Stevens, the Shadow Culture Secretary, tells The House Live. “We've got a situation where we've got a hugely economically and socially powerful industry and there is little if any regulation of how they operate at all. And because they're so powerful, because they have such an impact, it's a glaring gap in terms of keeping people safe.”

Parliament has been grappling with this issue for some time. Although areas of social media fall under the remit of other bodies such as the Advertising Standards Authority and Ofcom, there is no dedicated regulator for online content. 

The EU’s e-Commerce Directive, adopted in 2000, does define some responsibilities for online companies. But, having been created before many social media sites were conceived, it has become outdated and is likely to need replacing once the UK leaves the EU.

The Government’s solution to the problem was set out in 2019 with the publication of its Online Harms White Paper. At its core was a new statutory duty of care, requiring internet platforms to protect users from certain ‘harms’ — ranging from cyberbullying to terrorist content — or face sanctions from a new independent regulator. Up until now, social media sites have largely regulated themselves. So, what’s not working?

“They say they shouldn't have that regulation, they say they're just platforms and they don't manage their content,” Stevens explains. “But actually, that's a false argument because they know they have community standards.

“So, they do remove content, they take what would otherwise be described as editorial decisions. They're not a traditional publisher in the way that we've always understood them but they certainly behave in some ways as a publisher.”

‘MANIPULATED’

And it’s not just the things social media sites take down that concern critics; it’s also how they amplify the things they leave up.

“The thing about social media is that it’s not organic,” Tory MP Damian Collins, former chair of the Digital, Culture, Media and Sport (DCMS) select committee, says. “When you go on there, you don't see just the latest thing your friends have posted. What you're seeing is something that has been selected and ranked for you by an algorithm based on things you've been interested in before, and things that people are talking about. And that can be manipulated by people that control large numbers of accounts or accounts with really big audiences.”

On top of this, Stevens says a “lack of transparency” around the algorithms that make social media so effective only compounds the problem. “Everything is kept secret,” the Labour frontbencher warns. “And without being able to penetrate that secrecy it makes it very difficult then to put things in place or suggest ideas around improvements on how algorithms work and how the content is curated and how it's served up to people and different groups.”

For groomers it's as simple as being able to refresh the page to get a fresh list of algorithmically-suggested children to contact

For children’s charity NSPCC, a major proponent of the Online Harms legislation, the fear is that this algorithmic curation could expose kids to a never-ending stream of inappropriate content, and even facilitate illegal behaviour.

“If you think for example of a vulnerable teenager who perhaps views very harmful self-harm content once or twice, the way that the algorithms will kick in means that the child every time they open their feed, they will see a fresh stream of that type of material being recommended to them,” Andy Burrows, NSPCC’s head of child safety online policy, says. “That clearly is harmful. It's something that we need to see the legislation address.”

He adds: “What we've seen is offenders can exploit the existing weaknesses in terms of how platforms are run, how they're motivated […] The recommended friend suggestions that all of us will get on the likes of Facebook or Instagram, mean that for groomers it's as simple as being able to refresh the page to get a fresh list of algorithmically-suggested children to contact.”

‘SINGLE SWORD OF TRUTH’

But, while many would support the aim of protecting children from harm, there are fears that any clampdown on social media could have an impact on freedom of expression.

“[The White Paper] sets out to forge a single sword of truth and righteousness with which to assail all manner of online content from terrorist propaganda to offensive material,” Graham Smith, one of the UK's leading internet and IT lawyers, wrote in 2019. “However, flying a virtuous banner is no guarantee that the army is marching in the right direction.”

Smith took particular issue with the White Paper’s claim that current regulation is “fragmented” and that a broader approach to online harms was needed. “An aversion to fragmentation is like saying that instead of the framework of criminal offences and civil liability, focused on specific kinds of conduct, that make up our mosaic of offline laws we should have a single offence of Behaving Badly.”

Flying a virtuous banner is no guarantee that the army is marching in the right direction

His concerns are shared by free speech groups such as the Open Rights Group (ORG), which branded the legislation “unrealistically vast”, and the Index on Censorship, which argued that the broad scope of regulation posed “serious risks to freedom of expression online”.

But Collins has little time for such concerns, and argues that free expression is not the real issue at stake. “I don't believe that it's a free speech issue to say you don’t have the right for your speech to be broadcast on social media,” he says.

“I think that freedom of reach is not the same thing as freedom of speech. When awful content is spreading at scale, the responsible thing is to do something.”

The Government’s planned legislation also needs to grapple with the issue of defining what a ‘harm’ even is before going after social media firms. In the real world, it’s “an established legal principle”, says Ben Bradley, head of digital regulation at industry body techUK. But it’s a concept he argues is difficult to translate into the digital realm.

You have to assign harm to someone's words, and that is much more difficult because it's subjective

“It's based around objective, measurable, physical harm,” he adds. “So, if you trip and you fall because the floor is not maintained, you can see that the floor was not maintained and that person tripped and fell and has broken their arm. That is a very accessible objective. When you apply it to speech, you have to assign harm to someone's words. And that is much more difficult because it's subjective.”

Bradley is also concerned about who the new rules will apply to, arguing that attempts to prevent harm must be proportionate to risk. When talking about social media, the giants of the sector — Facebook, Twitter, Instagram, YouTube — most often come to mind. But definitions within the legislation go beyond the big names.

“It is an online service that allows user-generated content or allows users to interact with each other,” he says. “So that includes all of the social media sites that we know, but also a lot of services that you wouldn't necessarily think would be in scope either because you wouldn't think of them as social media or you wouldn't think of them as high risk.”

Examples of these include forums such as Mumsnet and The Student Room, or online review sites such as TripAdvisor. “So, when we are designing these rules and trying to create this new framework, I think it's really important that we make sure that we don't apply a framework that works for a particular size of a company or a particular business model to everyone,” Bradley says.

‘HIT THEM WHERE THE BOTTOM LINE IS’

With these and other criticisms in mind, the Government published its response to the initial consultation on the White Paper at the start of 2020. Ofcom was touted as the preferred candidate for internet regulator, and ministers said the body — which currently regulates TV, radio and video-on-demand sectors — would “step in” where firms failed to act responsibly. Acceptable content must be clearly defined by each platform, and companies would be held responsible for consistent failures to remove that content.

Putting Ofcom in charge was welcomed by the sector, with Bradley saying it was viewed by tech firms as a “fair and collaborative and independent regulator” who was “well suited” to the role. But Labour’s Jo Stevens warns that any regulator will be redundant without proper powers.

[Social media companies] will not do anything unless it hits their profits

“The only way that you will get these companies to take responsibility and operate in a way that is ethical and prevents harm and illegal content is to hit them where the bottom line is,” she says. “They will not do anything unless it hits their profits. So if you're looking at enforcement and sanctions, it needs to be sufficiently strong to make a difference.”

The NSPCC agrees. “This is an issue of enforcement,” Burrows says. “We are talking about content that glorifies or promotes self-harm, or suicide, and content which all of the platforms are very clear does not belong on their sites to start with.

“We see a real disconnect between the terms and conditions, the community standards that all of the big tech firms operate to, and then their willingness to then be able to enforce those vigorously.”

‘URGENCY’

The coronavirus pandemic has slowed the Government’s domestic agenda, and there are concerns over whether the Online Harms Bill, which has been promised by the autumn, will make progress at the speed called for by campaigners.

“We need to see the government commit to a timescale that recognises the urgency of the legislation,” Burrows argues.

“It’s now been over a year and counting since the White Paper was first published. And it's entirely feasible that we could see this drift into 2023 or 2024 by the time that we have a regulator up and running and making its first decisions.

“To put that into context, every single day we see 90 online-related offences against children. And we knew already before the pandemic that the scale and complexity of online child abuse was only increasing.”

Meanwhile, Stevens fears that the influence of social media companies on governments is contributing to the delay. “If you look at governments around the world, you'll see ex-politicians working in the tech platforms, and you'll see people from the tech platforms in governments and in parliaments around the world,” she explains.

You see ex-politicians working in the tech platforms, and you see people from the tech platforms in government

“The delay [since 2019] has just got a smell about it of the big tech companies pushing and pushing and pushing government and trying to get things watered down. The longer the delay goes on, the more harm is caused.”

But, for Bradley from techUK, the huge implications of social media regulation mean it’s vital not to rush any new law.

“This is quite an important piece of legislation,” he says. “I think it's important that we get it right. At the end of the day, this is not just about the regulation of tech platforms, but what users like you and I are saying on most platforms.”

Since the start of the pandemic, internet usage has surged to record levels in the UK; Ofcom reports that adults are now spending around four hours a day online. But more time online has only increased fears about harmful content, with nine in 10 Brits telling the regulator that they are concerned about content on social media services.

“The pandemic is the first public health emergency in the age of mass information,” Collins says. “I think people take notice of it because what it’s demonstrated is that [regulation] is not just about politics, or Russians, or whatever else. It is about information that can affect your health and the health of your family.”
 
