
AI Summit’s Bletchley Declaration does little to address tech harms in the here and now

This week at the AI Summit, twenty-eight governments including the UK, US, EU, and China signed the Bletchley Declaration, a document that aspires to become a landmark plan for controlling the “catastrophic” threats of AI before it is too late.

At first glance, the Declaration references ongoing risks posed by AI, framed within the language of fairness, ethics, and accountability. While Amnesty International welcomes the Declaration’s reference to the ‘protection’ and ‘enjoyment’ of rights, we’re concerned that it pays mere lip service to human rights: it fails to reference governments’ obligations to uphold existing international human rights law and standards.

Instead of focusing on real-world harms and taking a human rights-based approach to addressing them, the AI Summit gave a platform to an alarmist narrative in which a ‘Terminator-style rise of the machines’ became the driving force shaping conversations on AI harms globally. From government officials to industry players, these utterances loom large in every corner of the debate on AI regulation, and they’ve successfully set the agenda for what was on the table for discussion in the UK this past week.

The Declaration disappointingly goes on to mirror the tech industry’s narrative about AI’s “existential threats” to humanity, neglecting the voices of those who have been most impacted by AI today: whether by AI tools within welfare systems that have threatened their right to social security, or by facial recognition technology that has threatened their rights to non-discrimination and to freedom of assembly and association, including the right to protest. We must not forget that when one’s race, ethnic background, gender, sexual orientation, disability, marital status, or economic background is perceived, directly or indirectly, as a risk by an AI system, any solutions promised by that system are compromised.

The focus and framing aren’t surprising, given that the summit and its agenda were organised with little to no transparency and with heavy industry influence. The guest list overwhelmingly consisted of a narrow section of the AI industry, with only a few select civil society members receiving last-minute invites. No human rights groups were in the room, and “civil society” participation mostly consisted of UK-based think-tanks and academic representatives. If the UK is serious about becoming a global leader in AI governance, the voices of marginalised groups, and of those outside the UK, US, and Western Europe, need to be elevated so that the resulting policy measures protect and promote the rights of all.

So, what is on the table? We’ve seen several concrete announcements, the biggest of which, from a UK perspective, came last week in the form of the new AI Safety Institute, focused on examining, evaluating and testing new types of AI to identify and mitigate risks. We are concerned that this Institute may mirror the UK government’s narrow focus at the AI Summit on “frontier models” and the long-term risks they pose, at the cost of present-day risks.

We’re also concerned about the plans to measure this technology and hold its use to account. The Declaration focuses heavily on technical evaluations, with no mention of established human rights measures or accountability mechanisms. From everything we know about these systems, they pose a serious societal risk, which means they ought to be measured for their impact on human rights and our society, as well as for technical efficacy.

In parallel, the US executive order on AI, published on Wednesday, somewhat superseded announcements from the summit itself. The US unveiled a series of initiatives to advance the safe and responsible use of AI, including another AI Safety Institute, based in the US. We welcome many of these proposals, particularly those developing standards and best practice for evaluating AI risks and providing technical and policy guidance to government agencies and regulators considering rulemaking and enforcement on these issues.

Crucially, in the Executive Order, the US does not pit innovation against public rights and safety, and it pays heed to present-day human rights concerns around the use of AI in housing, policing and criminal justice, healthcare and the labour market. This is a step in the right direction that the UK government should take notice of; however, we ultimately need to see binding regulation from both governments.

The pledge within the executive order to incorporate rights-respecting practices into government development, deployment and use of AI cannot come at the expense of regulation, at either the national or the global level, that is legally binding and enforceable. Binding regulation is the only mechanism through which those harmed in the worst scenarios can seek accountability and justice, and the only mechanism that forces developers to act on the harms identified in any evaluation.

At a global level, the Council of Europe has been paving the way for an international convention on AI grounded in human rights, the rule of law, and democracy. Because the future AI Convention would be legally binding on all signatory countries, some states (including the UK and US) are now pushing to water down the instrument by excluding the private sector from its scope and introducing dangerous loopholes for national security. This raises the question: does the AI Summit truly aim to address AI harms, or is it yet another exercise in performative, non-binding policy, giving free rein to AI developers and deployers to carry on unregulated?

With all the hype around AI, the tech industry’s voice evidently reverberates across every corner of the discussion, and it is unfortunate that lawmakers give it undue weight. Discussion of AI safety cannot come at the expense of accountability; it is imperative that policy and regulatory outcomes take a human rights-based approach and centre the actual harms of AI in their formulation. For this to happen, it is vital that human rights organisations are in the room, so that the voices of marginalised communities, and the people who can represent their experiences, are at the heart of this process.

David Nolan and Hajira Maryam, Algorithmic Accountability Lab, Amnesty Tech, Amnesty International.
