Fri, 4 October 2024


UK Could End Up "Powerless" To Prevent AI Risks Without Urgent New Laws, Industry Warns

The UK has published its long-anticipated response to the AI white paper (Alamy)


Tech sector figures have warned that the UK government's approach to preventing the risks of AI could leave the country "powerless" without urgent legislative action, as the government publishes its long-awaited response to its white paper on Artificial Intelligence (AI).

The AI white paper, set out in March 2023, considered “guidelines” for the use of AI, but was criticised by some industry figures and politicians for not proposing any new legislation, with the government instead intending to use existing regulatory frameworks to mitigate potential AI harms.

The government's response to the white paper, published on Tuesday, confirmed its ongoing confidence in this approach, but recognised that the UK – and other jurisdictions around the world – will need binding measures further down the road to ensure compliance with regulations.

The Department for Science, Innovation and Technology (DSIT) announced that £100m of existing funding will be allocated to support the regulation of AI, with £10m specifically going towards regulator capability to help them jumpstart their response. 

Nearly £90m will be put towards establishing nine new 'hubs' to carry out research on how to embed responsible approaches in the deployment of AI, and towards a partnership with the US on responsible AI. PoliticsHome understands there is no timeline yet for when these hubs will be established, but they will be set up at existing universities across the UK.

MPs have previously warned that without rapid regulation, the government could be "sleepwalking" into a dangerous position with AI. The UK government's landmark AI Safety Summit convened world powers in November to secure multiple agreements between international governments and technology developers to work together on preventing existential risks from AI, but some parliamentarians questioned what would come next in terms of setting out how this would be delivered.

Lord Tim Clement-Jones, the Lib Dem spokesperson for the digital economy in the Lords and co-founder of the All Party Parliamentary Group on AI, told PoliticsHome that the government's response to the white paper still did not address the issue of enforcement among big tech companies.

"Under the [EU] AI act, standards are beginning to converge and the UK is just getting left behind," he said.

"That's absolutely classic for this government. £100m is a drop in the ocean, okay, so they announced all that stuff but in terms of making sure that people actually do it...

"I'm fairly relaxed, because I think we've missed the boat anyway... at the end of the day, big corporates are not going to worry too much."

The government will also be launching a steering committee in spring to support and guide the activities of a formal regulator coordination structure within government, and key regulators, including Ofcom and the Competition and Markets Authority, have been given a deadline of 20 April to publish their approach to managing AI technology.

Clement-Jones, however, insisted this did not go far enough, adding that even with this request, the regulators had "no teeth" and that technology firms are still not obliged to comply with regulator guidelines.

"People still want certainty, they want to know now, when they start adopting or developing [AI], the framework within which they need to operate," he continued.

As organisations such as DeepMind had already set out what the risks are, Clement-Jones argued more research would not be "particularly helpful".

"At the end of the day, I'm afraid we're way behind the curve," he said. 

"I don't think it's going to add up to very much, a bit like the AI Summit, where we all forgot what the AI summit had done within about a week. I think this will be forgotten within about a week."

Michael Birtwistle, Associate Director at the Ada Lovelace Institute, agreed that the government's proposed framework would be ineffective without legislative support.

"Only hard rules can incentivise developers and deployers of AI to comply and empower regulators to act," he said.

“It’s welcome that Government is now open to the option of bringing forward legislation, but making this intervention dependent on industry behaviour and further consultation would fall short of what is needed.

“It will take a year or longer for a government to pass legislation on AI. In a similar period we have seen general-purpose systems like ChatGPT go from a niche research area to the most quickly adopted digital product in history. 

“We shouldn’t be waiting for companies to stop cooperating or for a Post Office-style scandal to equip government and regulators to react. There is a very real risk that further delay on legislation could leave the UK powerless to prevent AI risks – or even to react effectively after the fact."

He added that the government's approach was still too "narrowly focused" on the most advanced AI systems and "reliant on the goodwill of AI companies like Microsoft, Google and Meta", but welcomed that the government was "evolving" its approach. 

Adam Leon Smith FBCS, of BCS, The Chartered Institute for IT, and an AI expert, told PoliticsHome it was "right" that the government was moving to fund and empower existing regulators, but added that BCS believed AI professionals should be professionally registered and held accountable to "clear standards".

Labour responded to the government's latest AI announcement by stating it still left the UK "lagging behind".

"While it is welcome to see the Government finally setting out some information about this crucial technology, ministers are still missing a plan to introduce legislation that safely grasps the many opportunities AI presents," Matt Rodda, Labour's Shadow Minister for AI and Intellectual Property, said.

"The United States issued an Executive Order setting out rules and regulations to keep US citizens safe and the EU is currently finalising legislation, but the UK is still lagging far behind, with this white paper response having reportedly been repeatedly delayed.

"Unlike the Tories, Labour sees AI as fundamental to our mission to grow the economy. We will seize the opportunities AI offers to revolutionise healthcare, boost the NHS and improve our public services with safety baked in at every stage of the process."

However, Conservative MP Greg Clark, who chairs the Science, Innovation and Technology Select Committee, said that the government had taken on board some of the recommendations set out by the committee in a report published in August last year – particularly in ensuring regulators have the capabilities to respond to AI threats.

He said he believed it was important for the UK not to blindly follow the EU, which he described as taking steps to "legislate before they know exactly what legislation is required".

But at what point might legislation be required in the UK, if not now?

"I would say if it became apparent that companies were not complying with the regulatory approach that individual regulators were taking, or that there emerged some gap in regulatory powers, then I think you would have to accelerate the regulation," Clark said.

"But this central function that [DSIT] describe, that they're bolstering, seems clear to me that that is designed to keep on top of that."

DSIT Secretary Michelle Donelan said: “The UK's innovative approach to AI regulation has made us a world leader in both AI safety and AI development.

“I am personally driven by AI's potential to transform our public services and the economy for the better – leading to new treatments for cruel diseases like cancer and dementia, and opening the door to advanced skills and technology that will power the British economy of the future.

“AI is moving fast, but we have shown that humans can move just as fast. By taking an agile, sector-specific approach, we have begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely.”


Read the most recent article written by Zoe Crowther - Senior London Tory Refuses To Vote For Any Leadership Candidate
