We must ensure that transparency and public trust lie at the heart of the public sector’s use of technology
The use of artificial intelligence and automation in the public sector opens the door to new efficiencies and transformative systems. However, it is also fraught with ethical and practical challenges, explains Guinevere Poncia
New technologies often present the most complex opportunities and challenges for government. Ministers identified Artificial Intelligence (AI) and automation as one of the ‘grand challenges’ in the Industrial Strategy, which seeks to put the UK at the forefront of emerging technologies for public benefit. After an era of austerity that has left many departments short of resources, and with ever-mounting pressure to operate more efficiently, the Chief Executive of the Civil Service John Manzoni, among others, has heralded the potential of automation for transforming citizens’ interactions with public services. HMRC has led the way in this area, having created an Automation Delivery Centre which processed its 10 millionth transaction in 2018. It is also experimenting with using AI in its risk and compliance functions. Meanwhile, civil servants in the Department for Transport use machine learning to scour the news for transport-related stories, and the Serious Fraud Office is utilising AI to extract data from digital evidence.
However, the territory is laden with serious ethical and practical challenges. Automation extracts valuable knowledge from data, but it also risks embedding and industrialising unconscious or historic biases found in complex data sets. Consequently, there are calls for greater regulation in the area, and a reluctance to roll out new technology until such serious issues are addressed.
A project that has attracted particular controversy is the West Midlands Police’s National Data Analytics Solution. Designed to use predictive analytics to enable a public health-centred approach to crime prevention, the system uses a combination of artificial intelligence and statistics to assess the risk of someone becoming a victim or perpetrator of crime, with a view to then intervening early with health or social support. However, this system, which is designed to be rolled out nationwide, has raised ethical concerns. The most serious of these centre on the risk of such a scheme reinforcing human prejudices – conscious or unconscious – within law enforcement. This could mean, for example, that a disproportionate number of BAME community members are identified as potential perpetrators. While the West Midlands Police are drawing on ethical governance advice from both the Alan Turing Institute and the Independent Digital Ethics Panel for Policing, the project highlights a broader problem: the lack of joined-up oversight and transparency in public sector automation.
Compounding ethical concerns is the challenge of practical implementation. Numerous large projects attempting to simplify interactions between citizen and state have been blighted by problems. For example, the government’s flagship identity verification platform ‘Verify’ was criticised by both the Public Accounts Committee and the National Audit Office for spiralling costs and a failure to deliver for users.
The implications of future failings could be catastrophic, affecting welfare payments, visa applications and other crucial public services. For this reason, in its 2018 report on algorithms in decision-making, the Science and Technology Committee called for a “right to explanation”, so that citizens can demand to know how machine-learning programmes have made decisions that affect them. The committee also suggested the creation of a new ministerial position to oversee the government’s use of algorithms, with the aim of facilitating a more joined-up approach across Whitehall.
As Chi Onwurah remarked in a recent debate on visa-processing algorithms, “the impact of technology on society is a political choice”. The government has unquestionably sought to broaden its understanding of the potentially pernicious outcomes of algorithmic decision-making, and the Centre for Data Ethics and Innovation has identified policing and healthcare as the areas at greatest risk of reinforcing bias. Indeed, the Cabinet Office’s Race Disparity Unit is currently investigating the potential for systemic bias in the use of such techniques within the criminal justice system. However, the government has a wider duty to ensure that transparency and public trust lie at the heart of the public sector’s use of technology. Nowhere is this more important than where automation risks compounding historic or existing biases.
Guinevere Poncia is a Dods Political Consultant