
Politicians must prepare for AI or face the consequences

Rare today is the serious newspaper that, on any given day, does not include articles about artificial intelligence (AI)—covering either new and improved capabilities or mayhem caused by poorly designed systems, carelessly deployed.

AI systems are implicated in the propagation and amplification of racial and gender bias, in the creation of deceptive “deepfake” images and video, in the spread of disinformation, and, when embodied as lethal autonomous weapons, in the killing of human beings.

In response, governments, international organizations, corporations, and professional societies have been busy drawing up sets of AI principles—more than 300 sets, mostly bland platitudes lacking in precision and teeth. On really important and urgent issues, such as lethal autonomous weapons, governments (including, I’m afraid to say, the United Kingdom government) have mostly hemmed and hawed, pleading confusion while the technology careens forward. There are, however, some exceptions: for example, the EU’s proposed AI Act bans the impersonation of human beings by AI, thereby creating a new and, I think, entirely necessary right to know if you’re interacting with a person or a machine.

On Tuesday 18 October, I'll be delivering a Lord Speaker's lecture in Parliament entitled "AI: Promise and Peril". Just this week, an AI-powered robot, Ai-Da, gave evidence to the Lords Communications Committee. And despite the hemming and hawing I describe, it's inevitable that legislation to regulate these issues will one day make its way to Parliament, and it's important that our politicians are fully prepared.

My lecture will cover not just these present-day concerns, but also the longer-term consequences of success in AI. To understand what “success” means for AI, we have to understand what AI is trying to do. Obviously, it’s about making machines intelligent, but what does that mean?

To answer this question, the field of AI borrowed what was, in the 1950s, a widely accepted and constructive definition of human intelligence: Humans are intelligent to the extent that our actions can be expected to achieve our objectives. All those other characteristics of intelligence—perceiving, thinking, learning, and so on—can be understood through their contributions to our ability to act successfully.
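
One standard way to make this definition precise is in expected-utility terms; the formalization below is my gloss rather than wording from the lecture, but it is the textbook reading of the definition:

\[
  a^{*} \;=\; \operatorname*{arg\,max}_{a \in A} \; \mathbb{E}\big[\, U(\text{outcome}) \mid a \,\big],
\]

where \(A\) is the set of actions available to the agent and \(U\) scores outcomes against the agent's objectives. The agent is intelligent to the extent that it reliably selects \(a^{*}\).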

This definition of AI is so pervasive that I'll call it the standard model. Similar definitions hold sway in related areas such as statistics, operations research, control theory, and economics. All of these disciplines create optimizing machinery, plug in a human-defined objective, and off it goes. For example, a self-driving car is given an objective such as "take me to Heathrow", combines it with fixed objectives such as minimizing time and maximizing safety, and drives off (or opines, helpfully, "You're better off taking the train, mate").
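
To see how little the standard model asks, here is a minimal sketch in Python. The candidate routes, times, risk scores, and weights are invented purely for illustration; no real autonomous vehicle plans from a three-row table:

# Toy illustration of the "standard model": optimizing machinery
# plus a human-supplied objective. All data below are invented.

# Candidate ways to reach Heathrow: (plan, minutes, risk on a 0-1 scale)
plans = [
    ("M4 motorway", 45, 0.30),
    ("Back roads", 70, 0.10),
    ("Take the train", 40, 0.05),
]

def cost(minutes, risk, time_weight=1.0, safety_weight=100.0):
    """Fixed objective: minimize time and maximize safety (lower is better)."""
    return time_weight * minutes + safety_weight * risk

# The machine pursues exactly the objective it is handed, nothing more.
best = min(plans, key=lambda p: cost(p[1], p[2]))
print("Chosen plan:", best[0])

Nothing in this loop asks whether the weights, or the objective itself, reflect what the passenger actually wants; the machine optimizes exactly what it is given. That built-in literalism is the property the rest of this article worries about.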

The governance problems we face now are minor compared to those that are likely to emerge as AI moves towards general-purpose intelligence—that is, machines that can quickly learn to do more or less anything a human being can do. Such systems would inevitably far exceed human abilities in many areas because of their natural advantages in speed, memory, and communication bandwidth.

Governments need to plan ahead for this eventuality, because it would simultaneously unleash a massive expansion of global wealth and a cataclysmic disruption of human economic roles. Some governments say, "we'll just retrain everyone to be a data scientist", but they are missing the point: the world needs a few million data scientists, but there are a few billion people who would like to work.

The other likely consequence of creating general-purpose intelligent machines would be that we humans might lose control over our own future. We do not yet know how to ensure that we retain power, forever, over machines more powerful than ourselves. We have some ideas, but we are far from solving the problem and we don’t know how long we have to solve it.

Furthermore, even if we do solve it by designing provably safe AI systems, we’ll need a way to ensure that no unsafe AI system is ever deployed. This, in turn, may require a radical security overhaul of our entire digital ecosystem. No one said this would be easy!

 

Stuart Russell is professor of computer science at the University of California, Berkeley.
