UK AI Policy for 2024 and Beyond
January 23, 2024Submission to UN Code of Conduct on Information Integrity on Digital Platforms
March 19, 2024Dr Ann Kristin Glenster, Executive Director of the Glenlead Centre – 24 January 2024.
In this blog, Glenlead’s Executive Director, Dr Ann Kristin Glenster, looks at the latest development of AI regulation in the US and argues that while federal legislation may not be forthcoming, the Biden Administration’s ambition for safe, secure, and trustworthy AI is likely to have a lasting impact on the federal government and may set an example to be followed by countries around the world.
For a long time, it seemed from an international perspective that the US was languishing in adopting robust AI regulation. Over the last year, that has undoubtedly changed as the US is firmly claiming its role as a decisive leader in dictating the terms for AI advances and adoption at home and abroad. Most significantly is the Biden Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, discussed later in this blog.
Let’s start instead with Capitol Hill which has seen nearly thirty congressional hearings and several bills on aspects of AI in the recent twelve months. Two bipartisan groups of senators have announced separate frameworks for AI legislation.
Some of the key proposals for AI regulation and legislation are:
- Senator Schumer’s SAFE Innovation Framework
- National AI Commission Act (H.R. 4223) (House-Blue Commission)
- Creating Resources for Every American to Experiment (CREATE) with Artificial Intelligence (CREATE AI ACT) (S. 2741/H.R. 5077)
- Protect Elections from Deceptive AI Acts (S. 2770)
- Require the Exposure of AI-Led (REAL) Political Advertisements Act (S. 1596/H.R. 3044)
- National Institute for Standards and Technology (NIST)’s AI Risk Management Framework
Background
According to Statista.com, the US AI market is estimated to be worth US$ 106.50bn in 2024 with a projected annual growth of 27 percent. The largest AI companies by market capitalisation are Microsoft, Alphabet (Google), NVIDIA, Meta Platforms (Facebook), Tesla, IBM, Palantir, Mobileye, Dynatrace, and UIPath. The US is decidedly leading the way with some commentators surmising that this lead is partly due to long-time Silicon Valley dominance of the technology sector and the lack of legal red tape.
Until recently, the US took a voluntary self-regulatory principled approach to AI regulation. In October 2022, the White House Office for Science and Technology Policy (OSTP) published A Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.
According to the Foreword, the Bluebprint contained “a set of 5 principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence”:
- You should be protected from unsafe and ineffective systems.
- You should not face discrimination from algorithms and systems should be used and designed in an equitable way.
- You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
- You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
- You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
The Blueprint made it clear that it pertained to three critical areas:
- “Civil rights, civil liberties, and privacy: including freedom of speech, voting, and protection from discrimination, excessive punishment, unlawful surveillance, and violations of privacy and other freedoms in both public and private sector contexts;
- Equal opportunities: including equitable access to education, housing, credit, employment, and other programs; [and]
- Access to critical resources or services: such as healthcare, financial services, safety, social services, non-deceptive information about goods and services, and government benefits.”
The Blueprint went on to set out technical guidance for how organisations and companies could apply the principles and practices.
A year later, on 21 July 2023, the White House published a set of Voluntary AI Commitments that seven leading technology companies had signed. The companies were Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. The Commitments were:
- Commit to internal and external red-teaming of models or systems in areas including misuse, societal risks, and national security concerns, such as bio, cyber, and other safety areas.
- Work toward information sharing among companies and governments regarding trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards.
- Invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.
- Incent third-party discovery and reporting of issues and vulnerabilities.
- Develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated, including robust provenance, watermarking, or both, for AI-generated audio or visual content.
- Publicly report model or system capabilities, limitations, and domains of appropriate and inappropriate use, including discussion of societal risks, such as effects on fairness and bias.
- Prioritize research on societal risks posed by AI systems, including on avoiding harmful bias and discrimination, and protecting privacy.
- Develop and deploy frontier AI systems to help address society’s greatest challenges.
Biden Administration’s Executive Order
On 31 October 2023, the Biden Administration published its Executive Order for Safe, Secure, and Trustworthy Artificial Intelligence.
According to the White House’s Fact Sheet: “The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership in the world, and more.” The Executive Order thus clearly states an international ambition for the US to lead efforts to institute international frameworks that will govern the development and use of AI worldwide.
Specifically, the Executive Order asks developers of high-risk foundation models to share safety information and tasks the National Institute of Standards and Technology (NIST) to develop the standards for red-team testing to be used before the AI is released to the public. The Departments of Homeland Security and of Energy are tasked with applying those standards to address threats by AI to critical infrastructure and chemical, biological, radiological, nuclear, and cybersecurity risks.
On the domestic front, the Executive Order takes what has been called an “all-of-government” approach by setting out actions to be taken across departments and federal agencies. As such, the Executive Order addresses a range of issues, including the need for strengthening protection against the adverse impact of AI in areas of education, health, and work. The Executive Order identifies the need for more robust privacy-enhancing technologies, including cryptographic tools, and safeguards to ensure that AI does not discriminate, especially when used by police and in the justice system.
Among the key takeaways are:
- NIST with the Department of Commerce will establish guidelines and best practices for developing and deploying safe, secure, and trustworthy AI systems.
- NIST will develop standards and procedures for developers of AI to conduct AI red-team testing.
- The Department of Energy will develop and implement a plan for AI model evaluation tools and AI testbeds, especially for AI outputs that “may represent nuclear, non-proliferation, biological, chemical, critical infrastructure, and energy security threats or hazards.”
- The Department of Commerce will institute reporting requirements for companies developing potential dual-use foundation models (AI models trained on broad data and applicable in a wide range of contexts. The Department will conduct public consultations on potential risks, benefits, and other implications of dual-use models, such as open-source large language models (LLMs).
- Federal agencies will supervise and report on critical infrastructure assessments to Homeland Security, and the Department of Treasury will develop best practices to manage AI cybersecurity risks for financial institutions.
- The Department of Homeland Security and the Department of Defense will conduct studies on AI in cyber defence. Homeland Security with the Office of Science and Technology Policy (OSTP) will evaluate AI-related chemical, biological, radiological, and nuclear (CBRN) threats.
- The Office of Management and Budget (OMB) will set up an interagency council on the use of AI in federal government operations and will require a designated Chief Artificial Intelligence Officer in each agency and to ensure agency compliance with guidance on AI technologies.
- The administrator of General Services and OMB will develop a framework for the use of generative AI by the federal workforce.
- The Department of Commerce will complete a study on existing tools and methods to detect AI generated content, especially Child Sexual Abuse Material, and will develop guidance on authentication of digital and synthetic content.
- The US Patent and Trademark Office (PTO) and the US Copyright Office will provide guidance on AI and patents and copyright respectively.
- The National Science Foundation (NSF) will establish a National AI Research Resource (NAIRR) to facilitate collaboration between the government and the private sector.
- There are also instructions concerning the protection of consumers, workers, civil rights, civil liberties, and the promotion of competition.
In summary, the Executive Order is a thoroughly comprehensive blueprint for US federal agencies to ensure that AI is developed in accordance with the Biden Administration’s objectives and priorities. It is a resource-intensive agenda which will require money, talent, expertise, and willingness of all stakeholders to realise President Biden’s vision for an AI future that is safe, secure, and trustworthy.
What’s Next for US AI Regulation?
While President Biden’s Executive Order has set out the agenda, the actual implementation will happen on the ground.
Given the number of AI-related bills in the U.S. Congress, it could be tempting to believe that federal AI legislation is forthcoming. However, the domestic political climate makes it unlikely that robust federal AI legislation will be enacted any time soon.
In the US context, it is always important to pay attention to the rulemaking and enforcement capacities of the regulators, such as the Federal Trade Commission, and the willingness of the individual state legislatures to pass legislation. State Governors may also issue their own groundbreaking Executive Orders, for example, California’s Governor Gavin Newsom’s Executive Order N-12-23 on the regulation of generative AI.
For the moment, the US is showing global leadership framed by the constraints of its political climate and institutional framework. Yet the future is uncertain. The speed of AI advances, the outcome of November’s forthcoming presidential election, and geopolitical factors may all throw a spanner in the works for the current administration.
The lasting impact of the Biden Administration may therefore be its across-the-board, solid upgrade of capabilities and capacities for AI regulation within federal government. Its robust approach to safe, secure, and trustworthy AI will certainly benefit all Americans, and possibly, if successful, lead the way for similar frameworks across the world.
Dr Ann Kristin Glenster is the Executive Director of the Glenlead Centre. She is also a Senior Policy Advisor for the Minderoo Centre for Technology and Democracy and an affiliate of the Centre for Intellectual Property and Information Law at the University of Cambridge.