UK AI Policy for 2024 and Beyond


Author: Dr Ann Kristin Glenster, Executive Director of the Glenlead Centre – 23 January 2024.

In this blog, Glenlead’s Executive Director, Dr Ann Kristin Glenster, summarises the UK AI policy landscape and the lessons recent history offers about where to go next.

Following the fervent activity of last year, the next twelve months promise to be full of even more developments on the AI front in the UK and globally.

In early January 2024, I was invited to a high-level roundtable workshop with key stakeholders from civil society, government, academia, and industry. The roundtable was convened by Demos and Dr Elizabeth Seger to discuss how the UK can be a global leader in AI.

The UK has already taken great strides to develop its AI policy landscape. In March 2023, the Government published the white paper A pro-innovation approach to AI regulation. The paper was widely discussed, including in my own co-authored report Policy Brief: Generative AI, published by the University of Cambridge, and in the House of Commons Interim Report on AI Governance, published in August 2023.

The Government’s white paper built on the now disbanded non-governmental AI Council’s national AI Roadmap, published in 2021.

In 2021, the AI Council had two messages for the Government: (1) “to scale for reliability in areas of unique advantage” and (2) “to ‘double down’ on recent investment in the UK in AI … looking to the horizon and be adaptable to disruption.” According to the Council, these ambitions could only be fulfilled with full accountability and transparency, with commitments to deliver the best science, and with dedicated investment in talent.

The AI Council made 16 recommendations in 2021, spread across four pillars:

  1. Research, Development, and Innovation
  2. Skills and Diversity
  3. Data, Infrastructure, and Public Trust
  4. National, Cross-sector Adoption

The first pillar covered public-sector investment in AI, encouragement of international research collaboration, the role of the Alan Turing Institute, and a goal to ensure an environment for ‘moonshot’ R&D. The second pillar addressed skills-building, diversity and inclusivity, and data literacy. The third pillar concerned data governance and infrastructure, accountability through public scrutiny, and the UK’s position internationally. The fourth pillar addressed consumer confidence, support for start-ups, zero-carbon emissions, security, and the NHS.

Adopting a narrower focus, the Government’s pro-innovation 2023 white paper addressed innovation and regulation of AI as an industry, proposing that five value-based principles should be incorporated into existing sector-specific regulation.

The five principles were:

  1. Safety, security, and robustness
  2. Appropriate transparency
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

The Government did not commit to any plans for new legislation or the creation of a ‘super-regulator’, stating that more research was needed before any formal legal proposals could be formulated.

In its August 2023 Interim Report, the House of Commons responded by identifying 12 challenges associated with AI:

  1. The Bias Challenge
  2. The Privacy Challenge
  3. The Misrepresentation Challenge
  4. The Access to Data Challenge
  5. The Access to Compute Challenge
  6. The Black Box Challenge
  7. The Open-Source Challenge
  8. The Intellectual Property and Copyright Challenge
  9. The Liability Challenge
  10. The Employment Challenge
  11. The International Coordination Challenge
  12. The Existential Challenge

In November 2023, the UK hosted the AI Safety Summit, which resulted in the Bletchley Declaration, signed by the 28 attending countries, with promises of international cooperation on the development of AI safety. Prime Minister Rishi Sunak also announced the creation of a new AI Safety Institute “to minimise surprise to the UK and humanity from rapid and unexpected advances in AI.”

The Government also assigned the AI policy brief to the Department for Science, Innovation and Technology (DSIT), created in February 2023. DSIT now has a key role in shaping the UK’s AI policy for 2024 and beyond.

Key Takeaways from the Roundtable

Given DSIT’s central role in shaping the UK’s AI policy, it was appropriate that the roundtable opened with Imran Shafi OBE, the Director of Government AI Policy at DSIT, asking: what should the UK do to become a global leader in AI?

Seeking to answer that question, the discussion examined what dimensions the UK’s AI leadership should take – financial, industrial, regulatory? There was no easy answer, and the discussion covered a broad range of topics, which I have attempted to distil into a few points.

Key takeaways:

  • The UK can take a leading role in devising domestic and international regulation for Responsible AI, addressing concerns such as privacy, bias and discrimination, safety, and trust. This is an opportunity for global leadership.
  • Risks associated with AI that is used for disinformation, misinformation, and manipulation fall under a wider category of security (cybersecurity, critical infrastructure, etc.). AI security is a broad umbrella that needs its own resources.
  • The UK’s national strategy and its implementation must be treated as an ongoing process, not simply as a set of end goals.
  • Given the speed of advancements, a key issue is how far upstream regulation should go.
  • It is still not clear how the Government should balance a sector-specific approach to regulation and the need for joined-up thinking. It is near impossible to untangle all the facets and components of a national AI strategy. All the issues could be examined from a sector-specific perspective, yet it is also important to keep an eye on how they intersect and overlap. Key overarching concepts are ‘trust,’ ‘human-in-the-loop’, and ‘accountability.’

My Three Lessons: Brexit, PPE, and Postmasters/Horizon

When considering the task of devising UK AI policy for the next year, I offered three lessons to the roundtable discussion. These come from the observation that Government strategy must work on three levels:

  1. How to integrate and upgrade AI in a legal, fair, and responsible manner within the existing legal and regulatory frameworks
  2. How to use AI to boost productivity and growth
  3. How to mitigate risks from AI

Following on from these three challenges, three lessons can be learned from recent history:

  1. Brexit and Regulation: The one thing business unequivocally called for and failed to receive when Britain left the EU was certainty of the new rules.
  2. Covid and PPE procurement: While we need to simplify the rules for investment in and procurement of AI, we should guard against being seduced by the hype of speed. Lessons should be learned from the mistakes made in the controversial awards of PPE contracts during the pandemic, where the need for speed was used to justify setting rules aside.
  3. The Postmaster Scandal: If done in haste and without proper scrutiny, the adoption of AI on a grand scale, particularly in public services, risks planting and arming a ‘ticking bomb’ for injustice.

Lesson 1: Brexit Preparedness

The first lesson must be taken from the Brexit transition. Businesses were poorly prepared, often reportedly because of absent or slow Government guidance, and the cost has been steep, with the Office for Budget Responsibility estimating that imports and exports will be 15 percent lower in the long run “than if Britain had remained in the EU.”

In our report on Generative AI, Sam Gilbert and I argued that the lack of joined-up thinking on regulation could create an economic impediment to the uptake of generative AI because businesses, organisations, and government bodies would spend considerable money on trying to navigate how existing and sector-specific rules would apply to their use of AI. Lack of a clear regulatory framework can also produce risk-averse behaviour where businesses and organisations choose not to use AI at all.

Regulatory joined-up thinking is urgently needed to ensure that AI is trustworthy. Robust policy discussion is needed to examine what is meant by ‘responsible,’ ‘ethical’, ‘fair’, ‘legal’, and ‘safe’. Some key questions are: 

  • Should some AI technologies or uses be banned completely? What should the thresholds be for transparency and explainability?
  • How should these be weighed against the need for privacy and proprietary protections?
  • How can we ensure generative AI is not built on datasets scraped illegally from the Internet (thereby violating copyright) and does not produce ‘hallucinations’ or ‘deepfakes’?

Lesson 2: PPE Procurement

In the wake of the Covid pandemic, numerous news stories have detailed how contracts for PPE procurement for the NHS were awarded without proper oversight or scrutiny. Lessons must be learned from how the Government responded to a global crisis by pulling levers to ensure that Britain had the equipment it needed, yet at the same time not only wasted vast amounts of public money but also placed lives at risk by procuring substandard and faulty PPE at extortionate cost.

The seeming deluge of AI shares three similarities with the Covid pandemic: (1) it is global, (2) it is rapidly advancing, and (3) the outcome is uncertain. Learning from the mistakes made with PPE procurement, it is imperative that the Government not let itself be seduced by the hype of speed. AI is often framed as a space race, and the Government is keen to unlock investment and simplify the rules to ensure that the UK stays ahead. However, there is no guarantee that pouring investment into AI development will benefit the economy or the country in the short or long term if done without proper scrutiny and safeguards.

The Economist recently presented a cautionary tale of the huge sums China has wasted on speculative AI investments. Without proper procurement rules, vast sums of money risk being lost to faulty, over-hyped, under-performing AI development.

Lesson 3: Postmaster/Horizon Scandal

Following an ITV drama series, the scandal of hundreds of sub-postmasters wrongly convicted of fraud because of the faulty Horizon computer system has finally taken centre stage for parliamentarians and the media. The Postmaster scandal should act as a cautionary tale of what can happen when a computerised system is rolled out at scale without proper scrutiny and ‘human-in-the-loop’ oversight.

The Postmaster scandal is only one of many examples. The Netherlands was recently rocked by its own scandal, in which discriminatory algorithms wrongly denied social benefit payments on a large scale. Ultimately, the sheer scale at which these systems operate, and the fact that they are so hard to explain and therefore to audit, should act as a major warning sign.

Prime Minister Sunak is right that safety should be front and centre when it comes to AI. But safety cannot be a matter only for innovation and development; it must also be a key component of adoption.

The Bigger Picture

The right strategy can make the UK a global leader but not by simply focusing on AI innovation and export. As Sam Gilbert and I argued in our generative AI report, the UK should ensure the safe, responsible, ethical, fair, and trustworthy adoption of AI across the economy.

Aligning the national strategy and regulatory framework closely with the US is sensible and forward-thinking. Broadening the focus beyond foundation models or ‘Frontier AI’ will play to Britain’s historical strengths of agility and entrepreneurship.

Discussions about a national AI strategy have tended to focus on end goals or idealised objectives, such as trustworthiness, productivity, or privacy. Yet a national AI strategy cannot be divorced from reality, particularly the challenges currently facing the public sector in the delivery of public services such as social care, health, and education.

A national AI strategy must sit alongside other measures designed to tackle underfunded and underperforming public services, skills gaps, geographical inequalities, and the climate emergency. As one roundtable participant framed it: the national AI strategy must address “people, place, and power.” This must also entail a major upgrade of public digital infrastructure across departments and agencies, and across the country.

It is not clear how current Government policy will achieve these objectives. Roundtable participants queried the intended function of the newly launched AI Safety Institute, given that it has no regulatory powers. On the one hand, its advisory role makes it more likely that industry will engage with it. On the other hand, the lack of legal powers may render it irrelevant if its advice can simply be set aside or ignored.

Continuing to build AI capabilities and capacity across Whitehall and the regulatory landscape will be crucial if the UK is to attain and retain a position as a global leader in AI. However, narrowly focusing on investment and innovation in the homegrown tech sector is unlikely to fulfil or sustain this ambition. To succeed, Government policy must focus on the adoption of human-centric AI across the board. This Herculean task is what faces us coming into 2024.

Dr Ann Kristin Glenster is the Executive Director of the Glenlead Centre. She is the Senior Advisor on Technology Governance and Law at the Minderoo Centre for Technology and Democracy and an affiliate at the Centre for Intellectual Property and Information Law at the University of Cambridge.
