What impact is the Covid-19 infodemic having?
Covid-19 signage outside St Mary's Hospital in London, England | PA Images
Misinformation in the coronavirus crisis is putting lives at risk. Clare Lally and Lorna Christie from POST explain how conspiracy theories spread and how they can be countered.
The volume of inaccurate information circulating around the Covid-19 outbreak has prompted a global ‘infodemic’.
Widespread misinformation has included claims about the underlying causes of the virus (such as 5G radio waves), conspiracy theories about the actions of public bodies, and unverified treatments and preventative measures.
An Ofcom survey of over 2,000 people found that, within the first week of the ‘stay at home’ measures, 46% encountered false or misleading information. Within this group, 66% reported that they were seeing Covid-19 misinformation at least once a day and 55% said that they did nothing about it.
So where does this false information come from? What are the consequences? And what can be done to counter harmful misinformation?
Sources and effects of Covid-19 misinformation
The Reuters Institute for the Study of Journalism at Oxford University recently analysed 225 items of Covid-19 misinformation and found that 88% appeared on social media. It also found that 56% of Covid-19 misinformation appears to have been reconfigured from true information. For example, false claims that taking hot baths or using hairdryers protects people from Covid-19 may have stemmed from NHS recommendations to wash bed linen at temperatures of 60 degrees Celsius.
Misinformation can also arise from genuine misconceptions. The term ‘coronavirus’ is not limited to the virus that causes Covid-19; it refers to a family of viruses first identified in the 1960s. A photo of a disinfectant bottle label claiming to ‘kill human coronavirus’ has been shared on Facebook over 2,500 times, leading some users to speculate that manufacturers knew about Covid-19 ahead of the public.
Misinformation can have harmful consequences, including:
Public mistrust: The Reuters Institute found that 39% of false claims concerned the actions of public authorities (such as governments and the World Health Organisation). This can undermine public confidence and reduce cooperation with government guidelines. A recent study from King’s College London found that people who believe Covid-19 conspiracy theories are more likely to neglect public health guidance on social distancing.
Health implications: Covid-19 misinformation may contradict official health advice, which can have harmful consequences. It has been reported that in Iran more than 300 people died after drinking methanol following false claims that it can be used to treat Covid-19.
Crime: A UN official has commented that misinformation that attempts to blame particular groups presents a ‘risk of stigma and fear’. Researchers suggest that misreporting around the origins of the virus has spurred an increase in xenophobic abuse against people of Asian descent. Criminals have also exploited public uncertainty for fraudulent purposes. For example, people have received phishing-scam texts claiming to be from the UK Government, saying they must pay a £35 fine for breaching social distancing. Action Fraud (the UK’s national reporting centre for fraud and cybercrime) recorded losses of over £1.6 million due to Covid-19 related fraud.
Preventing the spread of Covid-19 misinformation
The Government has committed to work with social media companies to combat false information during the pandemic. In March 2020 it set up a Counter Disinformation Unit (part of the Department for Digital, Culture, Media and Sport), and has reported that the Unit is identifying and resolving up to 70 incidents per week.
Content moderation by digital platforms
In March 2020, Facebook, Google, Twitter, YouTube, LinkedIn, Reddit and Microsoft released a joint statement announcing their collaboration in preventing online misinformation around coronavirus. Approaches taken by these platforms and others include:
Content removal, deprioritising and labelling: Online platforms are not currently obliged to remove misinformation within the UK. However, private companies may choose to remove or deprioritise content. Content can be removed or demoted by human moderators or detected automatically.
Facebook is removing all Covid-19 related content that could cause imminent physical harm to users. Other misinformation is referred to a fact checking system. False content is demoted so that it ranks lower in users’ news feeds, and may be tagged with a warning. In April 2020, YouTube announced that it would remove conspiracy theory videos linking coronavirus to 5G.
Prioritising and promotion of official information: NHS England has collaborated with Twitter and other social media platforms to provide users with easy access to NHS guidance. When users search for Covid-19 or related terms on Twitter, a banner is displayed with links to the NHS website and the Department for Health and Social Care Twitter account. Similarly, when users Google Covid-19, there is an information panel linking to UK Government and NHS information. The WHO has launched a chatbot on Facebook Messenger and WhatsApp to provide instant information on Covid-19.
Advertising bans: Some platforms, including Twitter and Google, have placed restrictions on hosting certain adverts for Covid-19 related products, such as hand sanitisers, face masks and testing kits.
Messenger service restrictions: In April 2020, WhatsApp imposed restrictions on message forwarding as a way to prevent the spread of misinformation. Messages that have already been forwarded multiple times can only be forwarded on to one chat at a time.
One of the challenges of using automated tools for content moderation is their potential to incorrectly flag legitimate information as misinformation. In March 2020, a bug in Facebook’s software led to news articles about Covid-19 being incorrectly labelled as spam. It has also been suggested that labelling misinformation can be counterproductive, as it may draw additional attention to it. Commentators have raised concerns that removing conspiracy theory content may fuel further conspiracy theories by making users feel they are being censored.
Fact checking and myth busting
Fact checking organisations are carrying out an increasing number of checks on Covid-19 related information. One analysis estimated a 900% increase in English-language fact checks between January and March 2020. However, the Reuters Institute found that platforms have responded unevenly to fact-checked posts: 59% of posts rated as false remained live with no warning label on Twitter, compared with 27% on YouTube and 24% on Facebook.
The International Fact-Checking Network has created a database of Covid-19-related fact checks, which pools together debunked misinformation published across 70 countries. The WHO has added a ‘myth busters’ section to its online resources, and UNESCO is promoting the use of hashtags such as #thinkbeforeyouclick.
Education and guidance
The Centre for Countering Digital Hate (a UK based charity), recently produced guidance called ‘Don’t Spread The Virus’, encouraging social media users not to share or comment on false information to prevent the content from appearing in other users’ social media feeds. Instead, users are encouraged to report and ‘drown out’ misinformation by sharing information from official sources. The UK Government has relaunched its ‘Don’t Feed The Beast’ public information campaign, which aims to empower users to question information they read online. The campaign includes a five-step checklist to help the public identify whether information may be misleading.
This is an article from POST (The Parliamentary Office of Science and Technology), written by Clare Lally and Lorna Christie. You can find an extended version of the article online. For more content on Covid-19, visit the POST Covid-19 hub.