Government Urged To "Get On With" Regulating AI Ahead Of Rest Of World
The UK AI summit will be held at Bletchley Park (Alamy)
The government must address 12 key challenges presented by artificial intelligence, and should urgently introduce legislation to tighten AI regulation and get ahead of international competitors, according to a new select committee report.
The science, innovation and technology committee has published a report urging the government to speed up the implementation of its policies to mitigate potential harms from AI.
MPs have previously warned the government against “sleepwalking” into a dangerous position with AI, with both Tory and Labour politicians wary of the real-world impact the technology could have.
The new interim report presents 12 challenges that AI governance must meet, with the intention of informing government policy and encouraging the Department for Science, Innovation and Technology to set out how and when it will address them.
The challenges include:
- how to prevent AI introducing biases;
- how to protect the privacy of AI users;
- how to stop AI misrepresenting information;
- what access to data should be permitted;
- whether AI code should be open to promote transparency and innovation;
- how to protect intellectual property rights;
- how to establish whether developers or providers of the technology bear any liability for harms;
- whether AI will disrupt people’s jobs;
- how to cooperate internationally on AI regulation;
- how to mitigate AI as a major threat to human life in the long run.
Science, innovation and technology committee chair and Conservative MP Greg Clark said that although the government set out the need to legislate in its AI white paper earlier this year, it needs to “get on with it” to keep up with the fast pace of emerging technology globally.
The report recommends that legislation needs to be put before Parliament during its next session and before the next general election. Clark hopes it will be set out in the King’s Speech, expected in November.
“The King’s Speech might be the last speech before a general election,” Clark told PoliticsHome.
“If there isn't legislation passed in this session, then assuming the election is in late 2024, the earliest that new legislation can reach the statute book is mid to late 2025.
“Two years will have passed in which AI continues to be deployed and to develop and you wouldn't have the statutory means to govern it.
“And other jurisdictions such as the EU or the US will be proceeding themselves, and there is a danger that what has become embedded in Europe and in the US could become the default means of regulation, even if we had a better model in mind. That's another reason for getting on with it.”
The government’s AI white paper proposed that existing regulators be used to govern the threats presented by AI, but experts have warned that a patchwork of regulators may struggle to oversee AI effectively across different sectors, as the extent of their powers varies widely.
“If you're going to rely on existing regulators, you need to be sure that all of the regulators, not just some of them, have the powers that they need and the resources and capability that they need,” Clark explained.
“That's something that the government needs pretty quickly to take a view on, and to act through legislation to make sure there's not a gap.”
One of the report’s recommendations is an urgent “gap analysis” to identify which regulators lack the right powers, and to ensure there is clear guidance as to who is liable when AI harms occur.
The interim report also urges greater international cooperation to address the 12 challenges identified.
On Wednesday, Foreign Secretary James Cleverly made the first trip to China by a UK foreign secretary in five years, prompting questions over whether China will be invited to the UK's AI summit in November, the first event of its kind in the world, where international leaders will discuss action to advance safety around AI technology.
Although the report welcomes the November AI summit at Bletchley Park, Clark added that some countries, such as China, should be excluded from conversations around national security where it might compromise the UK and its allies’ interests.
“The UK is the third country in the world after the US and China for the number of companies operating in AI,” Clark continued.
“So there is a big opportunity there to not just get regulation right and governance right for the UK, but to promote a model that may be emulated by other countries.
“We think that there should be a global conversation, but when it comes to our national security, we need to talk with our allies in a private closed space about that, and not all of those conversations can be open to other countries.”
Alex Davies-Jones, Shadow Minister for Digital, Culture, Media and Sport, previously told PoliticsHome she was concerned that “big players” in the tech sector would be “dictating” discussions at the summit and having too much influence over government policy.
Although the committee consulted leading developers such as Google and Microsoft for the report, Clark shared some of Davies-Jones’ concerns.
“We're not terribly worried that the government is biased towards the big companies, but there are some discussions as to how you deal with the prospective dominance of big companies [in AI],” Clark said.
“The government will need to take a view as to whether it prefers to see a small number of big companies, or a mass of smaller companies in the space.”
Large companies are likely to dominate, as AI relies on access to large data sets. The question remains whether the government will pursue an “open source” approach to AI, which would make its models more accessible to smaller businesses and start-ups, but would also open up more risk of bad actors using the technology.
Clark added that the committee will give a “steer” on how to balance the power of large tech companies in its final committee report, which will be published after the AI summit.
As AI technology continues to develop, Clark insisted there should be a “broad political consensus” on the need to act now to regulate it.
“One of the government’s advisers said that, based on current trends, within two years AI could develop in a way that threatens the lives of many humans,” he said.
“If the government's adviser says that, you can't say ‘we’ll worry about it in the long term’: you need to be addressing it now.
“And in particular, our security services need to be doing what we can now to recognise this is going to be an ongoing thing for many years to come.”
A government spokesperson said: “AI has enormous potential to change every aspect of our lives, and we owe it to our children and our grandchildren to harness that potential safely and responsibly.
“That’s why the UK is bringing together global leaders and experts for the world’s first major global summit on AI safety in November – driving targeted, rapid international action on the guardrails needed to support innovation while tackling risks and avoiding harms.
“Our AI Regulation White Paper sets out a proportionate and adaptable approach to regulation in the UK, while our Foundation Model Taskforce is focused on ensuring the safe development of AI models with an initial investment of £100 million – more funding dedicated to AI safety than any other government in the world.”