The stakes are too high for the AI Bill to be further delayed
There is no time to waste for Labour to deliver its promised AI Bill.
Liz Kendall, the new Secretary of State for Science, Innovation and Technology, has a unique opportunity to finally deliver the promised AI Bill.
More than a year ago, on 26 July 2024, then-technology secretary Peter Kyle said: “We remain committed to bringing forward AI legislation so that we can realise the enormous benefits and opportunities of this technology in a safe and secure way.”
And earlier this summer, Lord Vallance, the Minister of State for Science, Research and Innovation, who worked closely with Kyle, repeated that commitment word for word: “We remain committed to bringing forward AI legislation so that we can realise the enormous benefits and opportunities of this technology in a safe and secure way.”
Unfortunately, Kyle’s Department for Science, Innovation and Technology consistently failed to deliver on this front, a trend Kendall now has the opportunity to buck.
DSIT has failed to release an AI Bill, despite Labour having promised one in its 2024 manifesto: “Labour will ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models and by banning the creation of sexually explicit deepfakes.”
DSIT has failed to organise a consultation on the AI Bill, despite Kyle’s promise more than a year ago: “We will shortly launch a consultation on these legislative proposals, to harness the insights and expertise of the AI industry, academia and civil society.”
DSIT has failed to provide concrete answers to Written Questions about its work on AI regulation, despite repeated opportunities. For example, the response to my question, “to ask the Secretary of State for Science, Innovation and Technology, when he plans to publish a consultation on the regulation of frontier AI systems”, offered no date, only a broad restatement of the government’s attitude: “Departments are working proactively with regulators to provide clear strategic direction and support them on their AI capability needs. Through well-designed and implemented regulation, we can fuel fast, wide and safe development and adoption of AI.”
Yet, everyone involved knows what needs to be done: put the UK AI Security Institute on a statutory footing, and give it the regulatory powers necessary to address the risks posed by the most advanced AI systems.
Kyle himself acknowledged a year ago that this was the right thing to do. “At the moment, there are voluntary codes regulating AI, particularly the frontier AI. We would put it on a statutory footing and legislate to require the Frontier AI labs to release their safety data,” he said.
The government also knows what the stakes are, with Kyle adding: “We must consider the possibility that risks won’t just come from malicious actors misusing AI models, but from the models themselves.
“Losing oversight and control of advanced AI systems, particularly Artificial General Intelligence (AGI), would be catastrophic. It must be avoided at all costs.”
Already, AIs have started acting in worrying ways: Anthropic found that most AIs, including its flagship Claude, were willing to blackmail human users in a test. Such troubling behaviours are but the first taste of the kind of loss of control that comes with creating AIs we don’t understand, without any safeguards.
Given all this, Kendall must push forward an AI Bill that keeps UK citizens safe.
Sarah Olney is the Liberal Democrat MP for Richmond Park.