The Glenlead Centre is cited advocating for legislation in the House of Lords Communications and Digital Committee’s Large language models and generative AI report
20 February 2024
In early February, the House of Lords Communications and Digital Committee published its authoritative report Large language models and generative AI. The report clearly demonstrates the Glenlead Centre’s impact on UK AI policy. In addition to three footnote references, the Committee considered our view on the need for legislation:
“The Glenlead Centre supported legislation, arguing that its absence would make the UK a ‘rule-taker’ as businesses would comply with more stringent rules set by other countries.” (p. 53)
While the Committee stopped short of endorsing calls for legislation, its substantive recommendations incorporated many of the points we highlighted in our evidence submission.
These are some of the highlights from the Committee’s report:
The Committee found that the world is at an inflection point with AI, which poses challenges for balancing risk and innovation. It identified seven trends for the next three years:
- Models will get bigger and more capable
- Costs will grow significantly
- Fine-tuned models will become increasingly capable and specialised
- Smaller models will offer attractive alternatives
- Open access models will proliferate
- Consumer trust will affect uptake
- Integration with other systems will grow
The report also acknowledged the persistent problems with AI, including hallucinations, bias, privacy invasions, errors, difficulties with complex and multi-step tasks, and poor interpretability.
How should the UK prepare?
The Committee echoed arguments made elsewhere by Glenlead’s Executive Director, Dr Ann Kristin Glenster, and by the Competition and Markets Authority that the UK should promote a range of models. Yet there are obstacles to this path.
First, the Committee was concerned about regulatory capture, groupthink, and conflicts of interest; the need for independent expertise in the civil service and among regulators was a recurrent theme in the report. Second, it pointed to the shortage of digital skills. Third, it noted the absence of investment.
The Committee did not find any plausible reasons to believe the UK would face existential or catastrophic risks (biological or chemical release, destructive cyber tools, critical infrastructure failure) in the next three years, or even in the next decade. Instead, it identified the imminent threats as the UK scales up its AI capabilities:
- Inappropriate deployment
- Increasing the tools of malicious actors
- Poor performance or model malfunction
- Gradual over-reliance
- Loss of control
The Committee envisioned a critical role for the newly announced AI Safety Institute. According to the report, the Institute should “develop new ways to identify and track models once released, standardise expectations of documentation, and review the extent to which it is safe for some types of models to publish the underlying software code, weights and training data.” (p. 46)
The Committee was right to focus on safety and the potential for societal harms. Indeed, it used the example of the Post Office Horizon scandal, which our Executive Director, Dr Ann Kristin Glenster, discussed in her blog post UK AI Policy for 2024 and Beyond. Specifically, the Committee noted that:
“[Large Language Models] may amplify any number of existing societal problems, including inequality, environmental harm, declining human agency and routes for redress, digital divides, loss of privacy, economic displacement, and growing concentration of power.” (p. 48)
Addressing these issues will require both domestic and international policy. The Committee wrote:
“International regulatory co-ordination will be key, but difficult and probably slow. Divergence appears more likely in the immediate future. We support the Government’s efforts to boost international co-operation, but it must not delay domestic action in the meantime.” (p. 52)
Thus, the Committee concluded: “Setting the strategic direction for [Large Language Models] and developing enforceable, pro-innovation regulatory frameworks at pace should remain the Government’s immediate priority.” (p. 53)
According to the Committee, this will require building regulatory capacity, mandatory safety testing, accreditation of private sector auditors, and a clarification of the legal framework for copyright. The Glenlead Centre will continue to contribute evidence-based policy recommendations and analysis to this space.
Next on our slate is our forthcoming Policy Brief: AI, Intellectual Property (including Copyright), and Productivity. The report is scheduled to be published by the Minderoo Centre for Technology and Democracy and the Bennett Institute for Public Policy, both at the University of Cambridge, in June 2024.