Government must resolve AI ethical issues in the Integrated Review
The opportunities and risks involved with the development of AI and other digital technologies and use of data loom large in the 4 key areas of the Strategic Framework of the Integrated Review.
The Lords recently debated the government’s Integrated Review, set out in “Global Britain in a Competitive Age”. I hope that the promised AI Strategy this autumn and the Defence AI Strategy due this May will flesh out these opportunities and risks, resolve some of the contradictions and tackle a number of key issues. Let me mark the government’s card in the meantime.
Commercialisation of our R&D in the UK is key but can be a real weakness. The government need to honour their pledge to the Science and Technology Committee to support the Catapults to become more effective institutions, as a critical part of the Innovation Strategy. Access to finance is also crucial. The Kalifa Review of UK Fintech recommends the delivery of a digital finance package that creates a new regulatory framework for emerging technology. What is the government’s response to these creative ideas?
The pandemic has highlighted the need for public trust in data use
Regarding skills, the nature of work will change radically and there will be a need for different jobs and skills. There is a great deal happening on high-end technical specialist skills: Turing Fellowships, PhDs, conversion courses, an Office for Talent, a Global Talent Visa and so on. As the AI Council Roadmap points out, the government needs to take steps to ensure that the general digital skills and digital literacy of the UK are brought up to speed. A specific training scheme should be designed to support people to work alongside AI and automation, and to maximise its potential.
Building national resilience by adopting a whole-of-society approach to risk assessment is welcome, but in this context the government should heed the recent Alan Turing Institute report, which emphasizes that access to reliable information, particularly online, is crucial to the ability of a democracy to coordinate effective collective action. New AI applications such as GPT-3, the language generation system, can readily spread and amplify disinformation. How will the Online Safety legislation tackle this?
At the heart of building resilience must lie a comprehensive cyber strategy, but the threat in the digital world is far wider than cyber. Hazards and threats can become more likely because of the development of technologies such as AI, the transformations they will bring, and the ways in which technologies interconnect to amplify them.
A core of our resilience is of course defence capability. A new Defence Centre for Artificial Intelligence is now being formed to accelerate adoption, and a Defence AI Strategy is promised next month. Its importance is reinforced in the Defence Command Paper, but there is a wholly inadequate approach to the control of lethal autonomous weapon systems, or LAWS. Whilst there is a NATO definition of “automated” and “autonomous”, the MOD has no operative definition of LAWS. That the most problematic aspect, autonomy, remains undefined is an extraordinary state of affairs, given that the UK is a founding member of the AI Partnership for Defence, created to “provide values-based global leadership in defence for policies and approaches in adopting AI.”
The Review talks of supporting the effective and ethical adoption of AI and data technologies, and of identifying international opportunities to collaborate on AI R&D, ethics and regulation. At the same time, it talks of the limits of global governance, with “competition over the development of rules, norms and standards.” How do these two statements square? We have seen the recent publication of the EU’s proposed legal framework for the risk-based regulation of AI. Will the government follow suit?
Regarding data, the government says it wants to see a continuing focus on interoperability and to champion the international flow of data, and it is setting up a new Central Digital and Data Office. But the pandemic has highlighted the need for public trust in data use. Will the National Data Strategy (NDS) recognize this and take on board the AI Council’s recommendations to build public trust in the use of public data, through competition in data access and responsible and trustworthy data governance frameworks?
Lord Clement-Jones is a Liberal Democrat member of the House of Lords and co-chair of the APPG on AI.