Fri, 19 April 2024

The long-awaited AI Governance White Paper falls far short of what is needed


Last week over a thousand leading technologists wrote an open letter pointing out the “profound risks to society and humanity” of artificial intelligence (AI) systems with human-competitive intelligence.

They called inter alia for AI developers to “work with policymakers to dramatically accelerate development of robust AI governance systems”. So, it is ironic that our government published proposals for AI governance barely worthy of the name.

This comes at a time when public interest in, and apprehension about, the capabilities of new AI systems such as ChatGPT and GPT-4 have never been higher.

A long gestation period of national AI policy making, which started so well back in 2017 with the Hall-Pesenti Review and the creation of the Centre for Data Ethics and Innovation, the Office for AI and the AI Council, has ended up producing a minimal proposal for “a pro-innovation approach to AI regulation”. In substance this amounts to toothless exhortation, delivered through sectoral regulators, to follow ethical principles, and a complete failure to oversee AI development, with no new regulator.

Much of the white paper’s diagnosis is correct in terms of the risks and opportunities of AI. It emphasises the need for public trust, sets out the attendant risks and adopts a realistic approach to the definition of AI. It makes the case for central coordination, and even admits that this is what business has asked for, but the actual governance prescription falls far short.

After a long build-up this is a huge disappointment and an unsatisfactory result for developers, users and the public, all of whom need to see a clear regulatory framework for AI applications such as live facial recognition, financial services algorithms and generative AI, based on the risks they present.

The suggested form of governance of AI is a set of principles and exhortations which various regulators – with no lead regulator – are being asked to interpret in a range of sectors under the expectation that they will somehow join the dots between them. They will have no new enforcement powers. There may be standards for developers and adopters but no obligation to adopt them.

There is no recognition that the different forms of AI are technologies that need a comprehensive cross-sectoral approach to ensure that they are transparent, explainable, accurate and free from bias whether they are in an existing regulated or unregulated sector. Business needs a clear central oversight and compliance mechanism, not a patchwork of regulation. The government’s proposals will not meet its objective of ensuring public trust in AI technology.

The government seems intent on going forward entirely without regard to what any other country is doing, in the belief that somehow this is pro-innovation. It does not recognise the need for our developers to be confident that they can exploit their technology internationally.

Far from being world-leading or turbocharging growth, in practice this approach will force our developers and adopters to look over their shoulders at other, more rigorous jurisdictions. If they have any international ambitions they will have to conform to European Union requirements under the forthcoming AI Act and ensure they avoid liability in the United States by adopting the AI risk management standards being set by the National Institute of Standards and Technology. Once again, government ideology is militating against the interests of our business and science and technology communities.

What is needed – and what I sincerely hope the Science and Technology Committee will recommend in its inquiry into AI governance – is risk-based, cross-sectoral regulation combined with sector-specific regulation, for instance in financial services, underpinned by common, trustworthy international standards of risk assessment, audit and monitoring.

We have world-beating AI researchers and developers. We need to support their international contribution, not fool them into thinking they can operate in isolation.


Lord Clement-Jones is a Liberal Democrat peer, spokesperson for Science, Innovation and Technology in the Lords and co-founder of the All-Party Parliamentary Group on AI.

