AI can help fight crime – but not if there’s a cost to civil liberties and the rule of law
If a week is a long time in politics, what is a year in the development of advanced technology?
A year ago (plus a bit), the Lords Justice and Home Affairs Select Committee published a report titled Technology rules? The advent of new technologies in the justice system – and yes, the title was intended to be read both ways. Recently we've been hearing a lot about how artificial intelligence can, will and must not replace human beings. The committee's work, which focused largely on policing, left us in no doubt that it must not – not without clear and rigorous safeguards. Only humans should take decisions about humans, and that means more than the casual click of a mouse button.
That does not mean stifling innovation, but technologies must be used safely and ethically. Discrimination must not be baked in. Artificial intelligence (AI) can bring considerable benefits: not just efficiency, but new ways of working. A force for good in preventing and responding to crime – but not at a cost to human rights, civil liberties and equalities, not risking fair trials and damaging the rule of law.
New tools are used without questioning whether they always produce a justified outcome
We found what feels like a new Wild West, with technologies developing at such a pace that neither government nor citizens have kept up. It was not possible to work out who was responsible for what. More than 30 public bodies, initiatives and programmes play a role in the governance of how new technologies are used in the justice sector. Roles were unclear, functions overlapped and joint working was patchy. I asked the police for a family tree. "More like a family bush," they said – and it was.
Facial recognition is the best-known technology, perhaps even better known after arrests during the coronation. (I don't know whether anyone has asked ChatGPT to write up a police officer's statement.) We heard that there is no mandatory training for users of facial recognition techniques or other AI. It was certainly apparent that neither users nor the public had much, if any, understanding of how they work. How would it feel to be convicted and imprisoned following a process that involves AI which you don't understand and, therefore, can't challenge?
We heard horror stories: the sale of loss leaders, particularly in the United States; a child wrongly identified by technology and searched by police; data harvested but ending up… where?
I'm no enthusiast for adding to the statute book, but legislation is needed to form the basis for detailed regulations, with a new national body setting strict standards for scientific quality and validity.
No tool should be introduced without being certified as meeting standards – a sort of kitemark. Alongside this we called for a duty of candour on the police. AI can have huge impacts on people’s lives. A national register of technologies in use would be part of the transparency that is essential; participation in the Algorithmic Transparency Recording Standard Hub, launched since our report, is voluntary. Without transparency, there can be no scrutiny and no accountability when things go wrong.
We had a strong impression that new tools are used without questioning whether they always produce a justified outcome. Is “the computer” always right? Are we too deferential? Or too suspicious?
I wondered, having left my desk to go to a division lobby, whether and when facial recognition technology would be used to register votes. I don’t think such a proposal would go down well – at the moment. It was different technology, but what happened to hundreds of Post Office managers is still a current issue.
Baroness Hamwee, Liberal Democrat peer and chair of the Lords Justice and Home Affairs Committee