Left largely under-regulated, the internet is the ‘wild west’ of modern life
Online harms loom large, and for some the internet is an arena of bullying, abuse, misinformation, and danger, writes Andy Frain. | PA Images
The Online Harms Bill will set one standard for illegal content and another for 'potentially harmful content', a radical step towards regulating the spread of disinformation and abuse on the internet.
There’s a fairly good chance that anyone reading this article has spent a disproportionate amount of the last six months on the internet.
In my case this took the form of endless quizzes on Zoom, desperately trying to work out who was the top goalscorer at Euro 2004. Others were less concerned with the exploits of Milan Baros (a point if you guessed that), instead watching box sets on Netflix, taking remote courses, or spending all day on social media.
The pandemic has served to underline what many already suspected – the online world is no longer a luxury, but a fundamental part of life in 2020.
The internet’s ability to keep us in contact, entertain us and help us work is an obvious benefit, but there are two sides to its increasing ubiquity.
Online harms loom large, and for some the internet is an arena of bullying, abuse, misinformation, and danger. The rapid development of the sector has left it under-regulated, with some labelling it the “Wild West” of modern life.
Until now, government regulations across the Western world have concentrated on addressing harms caused by flagrant illegality online.
Laws banning unlawful sexual images of minors, the sale of illicit drugs and weaponry, and the sharing of copyright-infringing material have been extended to the online environment through legislation such as the e-Commerce Directive in the European Union.
By defining a process under which platforms are required to take responsibility for such content, these laws have been relatively successful, broadly eradicating illegal black markets from the mainstream internet.
The line between illegality and harmful behaviour, however, is blurred and the lack of regulation has allowed disinformation and abusive behaviour to pass largely unmoderated.
The pandemic has exacerbated the recent rise of disinformation and fake news, with theories on everything from Jeffrey Epstein to 5G towers being given oxygen on mainstream online platforms like Facebook and Reddit.
The UK authorities have acknowledged this issue, having already introduced the Age Appropriate Design Code in order to force online services to give children's data the highest level of protection.
The chief avenue for their plans, however, is the long-mooted Online Harms Bill. A white paper on the topic was discussed as far back as the autumn of 2018, but the Government only published its consultation response this February.
That response gave a strong indication of the main parameters of the proposed legislation.
Companies, notably social media platforms like Twitter and Facebook, will have to demonstrate adherence to the new statutory “duty of care” by complying with Codes of Practice in relation to different types of online harms.
The legislation will operate in two streams, with one set of standards for illegal content and another for “potentially harmful content”. Ofcom’s remit would be extended to incorporate the enforcement of these rules and Ofcom's new chief executive has warned that hefty fines would be part of its plans.
It is important to note that these proposals are genuinely radical: the UK is one of the first major countries to push this type of legislation, and international observers (notably in the EU) are keeping a watchful eye on the UK’s actions.
Of course, the course of true regulation never did run smooth and many are anticipating bumps in the road to come.
Chief amongst those bumps is the potential minefield of litigation.
A vast amount of content sits in the grey area between “potentially harmful” and legitimate content. Social media companies in particular will find it exceptionally challenging, not to mention expensive, to test these new boundaries.
It is tempting to look at Facebook’s profit margins and say that legal fees will do little to keep them awake at night. It is only natural that much of the debate will centre on the major internet platforms, but the voice of smaller internet platforms will need to be heard. The duty of care is intended to apply to all companies that “allow, enable, facilitate users to share or discover user-generated content, or interact with each other online”, which sounds exceptionally wide-ranging.
The government insists that the legislation will only capture a very small minority of the UK’s online businesses, but at this stage it is simply impossible to say for sure. Proportionality will be vital when enforcing the rules going forward, but some small internet businesses may baulk at the potential complexity to come.
Regulation is needed though. Traditional media outlets in print and television have been subject to stringent regulation for decades and the lack of regulation for new media has created an unfair disparity.
As more and more people find their news through social media, regulation is the only way to ensure that quality and standards are upheld. There’s a reason it hasn’t been tried before, however: it isn’t going to be easy.
Andy Frain is the Dods Senior Political Consultant on Digital, Culture, Media and Sport. To download the complimentary report on Online Harms and Regulation click here.