MPs Warn Against "Sleepwalking" Into AI Danger Without Rapid Regulation

A new report has argued that AI could be key to improving UK productivity (Alamy)


Conservative and Labour MPs have called on the government to “act now” on regulating rapidly advancing AI technology, as a new think tank report recommends the UK establishes itself as a leader in the "global AI race".

AI advancement has recently escalated with the release of OpenAI’s GPT-4, a generative AI programme which has massively improved AI's ability to understand and complete tasks. Industry experts say AI could transform the global economy by increasing productivity, enhancing innovation, and changing the structure of the global workforce. 

But there is concern that without stricter regulation AI could pose a risk to human safety and democracy, and a number of founders and scientists, including Elon Musk, signed a letter last month calling for a pause in AI research in order to implement a “set of shared safety protocols”.

Tory MP and former Justice Secretary Robert Buckland warned against the government “sleepwalking” into a dangerous position with AI.

Buckland has been considering how AI could benefit the severely overstretched justice system, but believes “evolutionary principles-based regulation” is needed to mitigate potential risks to justice and human rights.

“I suspect without that we're going to accidentally end up in a position which we didn't intend,” he told PoliticsHome.

“We need to not only be a leader but an exemplar, and if it just becomes a global race [to advance AI technology], we've got a problem – because a global race is all about who's fastest to the draw, not about how safe it is.”

Using AI in the justice system as an example, Buckland said AI should be like a “research assistant” but should be prevented from becoming the “judge” making final decisions. 

He has called for top level government attention to be given to regulation of AI and its impact. “Ultimately it should be the Prime Minister looking at this, because I think it's gonna take a national leadership,” the former secretary of state said. 

“He's interested in these issues, he really is, but we need some rules before it's too late.

“The time is now. It's happening at an increasingly fast pace. We might be having a completely different conversation in only a couple of months.”

Senior Conservative MP and former minister David Davis compared AI to nuclear power.

“Like nuclear power, AI could be very beneficial… or very dangerous,” he told PoliticsHome.

But Davis was concerned there was a huge knowledge gap between the people developing AI technology and those considering its real-world implications. He pointed to news that the so-called ‘godfather of AI’ Geoffrey Hinton had quit Google, citing concerns over misinformation, possible disruption to the job market, and the “existential risk” posed by AI.

“When one of the leading brains, Geoffrey Hinton, says he doesn't fully understand the AI out there, how can the public or ministers?” Davis continued.

He believes regulation is needed to ensure that those who design or deploy AI are held “absolutely responsible” for the consequences of artificial intelligence. 

“The government should make individuals responsible for AI, and that at least will make them careful about deploying AI instead of people, when the consequence could come back and bite,” he said. 

A new report by centre-right think tank Onward has explored the seismic impact of AI, both in terms of economic potential and safety. The report calls on the UK government to take ownership of innovation by developing a “Great British GPT”, as well as regulation to improve AI safety and keep the UK ahead in the global AI race.

The report said this would be essential for the UK to defend against “future threats” as AI technology rapidly advances with potentially huge consequences for the labour market and UK productivity. 

With UK productivity having stagnated in recent years, Onward has argued that generative AI could be key to reviving it, but that caution is needed over the risk that AI could automate jobs out of existence.

The report suggested that the UK government should back the best British academics and entrepreneurs in the AI sector, as well as work with other governments internationally to establish a new set of global AI safety standards.

“The Government clearly recognises the importance of AI but it needs to do more and act faster, given the exponential pace of change,” said Shabbir Merali, author of the Onward paper.

"Investing in GB GPT and treating AI safety as a priority foreign policy objective are tangible actions the Government should take today.”

In response to the Onward report, Davis said innovation has to come primarily from individual companies rather than government. 

“Deep learning didn't come out of the state, it came from individual researchers,” he said. 

A senior Labour source told PoliticsHome they had been discussing the need for a UK version of GPT just a few weeks earlier.

"Other countries are miles ahead in terms of harnessing the opportunities here," they said.

"We want to see responsible use of tech like AI, and it cannot come at the hands of the 'traditional' workforce. There is an important role for AI and innovation to enhance our opportunities both at home and abroad to allow us to be as internationally competitive as possible."

Labour MP and former government minister Kevin Brennan is also worried that government regulation is not keeping pace with the evolution of technology, and is particularly concerned about the impact of AI on the UK’s creative industries. 

Brennan is a musician himself, and plays guitar in the parliamentary rock band MP4 which was formed in 2004 by a group of cross-party MPs. 

On Wednesday he posed a Commons question to interim technology secretary Chloe Smith asking for assurance that AI will be “properly regulated” to make sure creative content is protected. 

Responding to Brennan’s question, Smith said she was meeting with industry that day and intended to address how the technology could be regulated to protect creative work. A song that used AI to clone the voices of Drake and The Weeknd, cited in Onward's report as a flagship example, secured 15 million TikTok views in just a few days before being pulled from platforms.

“The UK has copyrights and intellectual property, we know how important those are for the continued success of the creative industries, we want to maintain them,” Smith said. 

“And therefore that will be a focus as we take this work forward.”

Brennan told PoliticsHome that while he recognises the benefits of AI in areas such as medicine, the government needs to ensure the AI landscape does not become a “wild west”.

“If you go down the route of a very delicate deregulatory approach, it will completely undermine the intellectual property framework that underpins our success in the creative industries,” he said.

“The government should not just pursue a laissez faire, hands off deregulatory approach to this, but should work with other jurisdictions because this is an international issue.”

He said he is worried that post-Brexit, the government wants to be seen to be doing things differently to the EU, but that the EU in his opinion has a “well developed approach” to AI with a regulatory framework taking shape. 

Questioning what rights creative people should be entitled to when protecting their work from being used by AI to generate new material, Brennan accused the government of “not facing up” to profound issues and called for more transparency from companies on how they are training AI to use creative output. 

“I think those profound ethical questions are unresolved and the government is not facing up to resolving them because it thinks it might stymie the rapid development of AI and lose the race to be the global leaders,” he said.

Research by OpenAI has estimated that 19 per cent of workers will have at least 50 per cent of their tasks impacted by AI, while Goldman Sachs has forecast that the equivalent of 300 million full-time jobs could be exposed to automation.

New survey data from Jimmy's Jobs of the Future podcast (hosted by former Downing Street advisor Jimmy McLoughlin) has shown that almost half of Brits think their job could be done by AI in a decade, while 63 per cent think governments should intervene to stop this happening.

The exclusive poll surveyed more than 1,000 people at the end of April 2023, and also found that fewer than one in five people in the UK have had AI training from their employers.

 
