The Misinfo Election: Will it be a fake news nightmare?

Generative AI has made the production of misleading content easy for anyone with the most basic of computer skills. Tom Phillips asks whether the next general election could become a fake news nightmare

It had the hallmarks of a bombshell political scandal. Just days before the recent Slovak parliamentary election, audio recordings of Michal Šimečka, the leader of the Progressive Slovakia Party, were widely shared across social media. In one clip he was heard discussing plans to rig the ballot – taking control of polling stations, and buying votes from the country’s Roma minority. In another, he said he planned to raise the price of beer. 

The “recordings” were, of course, fakes – and rapidly debunked ones at that. What made them notable was less the fact that they were faked than how they were faked: with generative AI, most likely with free or cheap online tools, readily available to anybody with the inclination to cause a little chaos. Feed in a few clips of a person speaking; get out a decent approximation of their voice reading any script you want. Technologies that were theoretical a few years ago are now just a click away.

For anybody not keeping tabs on Slovak politics, the potential was soon brought much closer to home: on the eve of the Labour conference, an audio clip of “Keir Starmer” abusing his staff circulated on the platform formerly known as Twitter.

Like the Slovak example, it wasn’t too hard to peg as a fake: the intonation was a bit off, and the accounts that shared it were clearly having a laugh. But also like the Slovak example, a bunch of people believed it anyway. Conservative MP Simon Clarke was moved to brand the technology “a new threat to democracy”.

As the United Kingdom enters the long slog towards a general election, that’s a fear that will only grow.  

It’s not hard to imagine scenes from an AI-infected election campaign. A photo of a laughing Rishi Sunak stepping over a homeless woman goes viral on multiple platforms. A brief video of a grinning Keir Starmer, high-fiving a Just Stop Oil protester as they block an ambulance, pings furiously from group chat to group chat. An image of Ed Davey chasing a 6ft-tall blue chicken with an oversized yellow broom is widely condemned as an obvious fake, until journalists clarify that it was an actual Lib Dem press event. 

These would grab headlines, but there are subtler, potentially more dangerous risks. AI-generated text and images could allow for the rapid creation of superficially plausible, but entirely fictional, local news websites. On polling day, a range of AI-fuelled hoaxes – a fire, a flood, a bomb threat – could discourage people from going to their polling station. And as the results come in, manipulated evidence could fuel conspiracy theories and undermine confidence in the results.  

Those threats aren’t new; you’ve long been able to carry them out with enough resources, skill and inclination. The issue with AI is that it lets you do it far quicker, far cheaper, and at an unprecedented scale. Artisanal misinformation can be turned into a mass-produced commodity. Glen Tarman, head of advocacy and policy at UK factchecking organisation Full Fact, says that for those who want to “enter the UK election as a mischief-maker, the price of admission has just gone down”.

We may not be ready for what’s coming – not least because, even before the rise of AI, our election systems were already struggling to adapt to the digital age. Tarman is adamant that politicians have “missed the boat” with the Online Safety Act and the Elections Act. These introduced some updates – digital imprints for political ads, a media literacy duty for Ofcom – but largely swerved the big questions. In June, the Electoral Commission also warned that “the law isn’t up to date” when it came to online campaigning and AI.  

Exacerbating the problem, all of this comes at a time when digital platforms – facing political attacks and broader staff layoffs – may be scaling back their efforts to combat misinformation (or, in some cases, appearing to openly embrace it). Most platforms do have policies around things like “manipulated media”, but questions remain about how those policies will be enforced.

“There’s no reason why we should wait until a couple of days before the election for some of these platforms to say how they’re going to act,” says Tarman, “but that’s what’s happened before.”

Nor are the technical guardrails put in place by the biggest AI firms likely to offer a solution, although they may help deter casual mischief. The main AI tools have policies intended to limit harm, and may block certain requests. Microsoft’s Bing Image Creator, for example, refused to generate the hypothetical images of Keir Starmer and Rishi Sunak mentioned earlier. But there are enough alternative tools out there for a motivated bad actor to bypass these restrictions.

For those trying to identify and combat misinformation, AI poses a twofold threat. There’s always been an imbalance between how long it takes to tell a lie and how long it takes to disprove it – “Falsehood flies, and the truth comes limping after”, as Jonathan Swift wrote – but the potential speed and scale of AI production only magnifies it. A potent piece of misinformation, dropped shortly before polling day, could simply run down the clock on efforts to verify it. 

Which raises the second problem: it can be near-impossible to definitively debunk the products of AI at all. They are sourceless. A diligent digital detective could previously track down the original of the Photoshopped picture, the misleadingly edited video, the out-of-context news article. But synthetic media springs fresh from the digital ether.

Tools for “detecting AI” range from imperfect to virtually useless. For now, it’s sometimes possible to pick up on subtle flaws – stilted intonation, textual oddities, a non-standard number of fingers – but given how rapidly generative models are advancing, there’s no guarantee that will still be the case when the UK goes to the polls.
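
For illustration, this is roughly what such a detection tool looks like under the hood: a text classifier that returns a score for whether a passage reads as machine-generated. The sketch below uses one published, GPT-2-era detector model as an assumed example; as noted above, scores from tools like this are unreliable and prove nothing either way.

```python
# A minimal sketch of AI-text detection using an off-the-shelf classifier.
# Illustrative only: detectors like this misfire in both directions,
# which is precisely the problem described above.
from transformers import pipeline

# One published detector model (trained on GPT-2-era text, so weak against
# newer systems); any text-classification checkpoint could slot in here.
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

sample = "The honourable member's claims about the economy are wholly false."
result = detector(sample)[0]

# The output is a label ("Real" or "Fake") with a confidence score -
# a probability, not a verdict.
print(f"{result['label']}: {result['score']:.2f}")
```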

The most obvious concern is about politically motivated actors deliberately spreading falsehoods, but this may not even be the biggest worry. AI tools also raise the likelihood of accidental or incidental misinformation – as Tarman puts it, there’s a higher risk of false information “being unintentionally spread by people, because there’s the increased chance now of encountering it”.  

After all, the websites which spawned the term “fake news” back in 2016 didn’t care about the result of the US election – angry Americans were simply a lucrative source of clicks for web-savvy Macedonian teenagers farming ad impressions. And as every factchecker knows, a lot of misinformation is simply humour that’s been stripped of its context: satire mistaken for reality, or private in-jokes that escaped the group chat. AI makes all of these easier to produce. 

This might suggest a counsel of despair: that we have no options between “ban the internet” and “give up and go home”. But that would be unwise.

While the issues posed by AI are real, hyperfocusing on the shiny new threat risks ignoring mundane but equally pressing concerns. Tarman says that he still expects the bulk of factcheckers’ work in the campaign to be the traditional stuff: examining factual claims made by candidates and the media.

And while AI might make some forms of misinformation easier to produce, misleading people has never required it: as the flood of falsehoods around the situation in Israel and Gaza has shown, the easiest way to viral fame remains to simply recycle old images with dishonest captions.

Meanwhile, the understandable urge to “do something about AI” elides the complexity of the issues. Numerous technologies come under that fuzzy umbrella, each with many uses and each potentially requiring a different regulatory approach.

This will be an AI election regardless of whether we end up drowning in a deluge of deepfakes. Machine learning techniques will be humming away behind the scenes, helping parties with things like data analysis and ad targeting (in ways that may range from legitimate to questionable). And they will play a role in combating misinformation as well: internet platforms will use them to identify possible examples; Full Fact’s own AI tools will be deployed to help factcheck politicians’ speech in close to real time. 
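
As a rough illustration of that kind of real-time factchecking support, the sketch below matches newly made statements against a database of previously checked claims using sentence embeddings. The model name, threshold and example claims are illustrative assumptions, not a description of Full Fact’s actual tools.

```python
# A minimal sketch of embedding-based claim matching - one common building
# block of automated factchecking. Model, threshold and claims are
# illustrative assumptions only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical database of previously factchecked claims.
checked_claims = [
    "Crime has doubled over the last five years.",
    "The health budget was cut in real terms this year.",
]
checked_embeddings = model.encode(checked_claims, convert_to_tensor=True)

def match_claim(statement: str, threshold: float = 0.7):
    """Return the closest previously checked claim, if similar enough."""
    query = model.encode(statement, convert_to_tensor=True)
    scores = util.cos_sim(query, checked_embeddings)[0]
    best = int(scores.argmax())
    if float(scores[best]) >= threshold:
        return checked_claims[best], float(scores[best])
    return None, float(scores[best])

# A new statement from a speech gets flagged for a human factchecker.
print(match_claim("Crime rates have doubled since 2019."))
```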

Moreover, there is a risk that hyping these problems could be counterproductive. Concerns about misinformation can give rise to their own species of untruth – from real-life humans wrongly identified as “bots”, to the conspiracist’s easy dismissal of any counter-evidence as fake.

Ultimately, AI’s greatest risk may not be that people believe false things, but that they stop believing real things. That collapse in trust could be as easily fuelled by overreaction to the problem as by the technologies themselves.  

For now, Tarman says, the most useful thing politicians can do is “get your house in order”. The temptation for parties to use generative AI in their campaigns will be high – and it may have acceptable uses – but without transparency, the risk of damaging trust is profound. “It’s incumbent on the parties, and those who are going to stand for election for those parties, to make sure they’re open in how they’re going to use these tools,” he says.

And for goodness’ sake, think twice before you share that damning, slightly-too-perfect picture of your opponent.
