Glenlead submits policy recommendations on the regulation of large language models and generative AI to the House of Lords Communications and Digital Committee
5 September 2023
Our submission addresses the Committee’s questions on what regulatory approach the UK should take regarding AI from a comparative perspective. It also responds to the UK Government’s white paper setting out a pro-innovation approach, in which the Government articulated five value-based principles for the regulation of AI:
- Safety, security, and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
At first glance, the Government’s approach seems similar to other emerging regulatory frameworks. For example, both UNESCO and the OECD have proposed value-driven principles as pillars of AI regulation. However, these international principles are not entirely the same as those in the Government white paper. Notably, while both UNESCO and the OECD make reference to fundamental objectives, such as non-discrimination or sustainability, these desired outcomes are conspicuously absent from the UK Government’s list. In our submission, we argue that the lack of specificity of the principles in the Government’s white paper weakens the prospect of the UK adopting a framework that will lead to trustworthy AI.
Furthermore, the UK Government does not intend to enshrine its proposed value-based principles in legislation. Instead, the white paper encourages regulators to integrate the principles into their existing frameworks. There are advantages to this approach, not least that it allows individual regulators to adapt at pace with technological innovation, so that regulation and innovation go hand in glove.
However, leaving it to the individual regulators may not produce the desired effect. Regulators’ remits, resources, and budgets are usually stretched, and asking them to take on yet another task without an additional mandate, funding, or training may be asking too much. This approach may also lead to regulatory divergence, as some regulators are more proactive on AI than others. There is a risk that the result will be a patchwork of regulatory rules that imposes further red tape on businesses already saddled with considerable compliance commitments.
We recommend that, rather than wait for each regulator to adopt its own framework, the UK set out a robust regulatory framework in legislation. We also recommend that this framework include a mandatory provision for researchers’ access to data, both to ensure accountability and to inform further regulatory measures. Without a legal requirement to share data for research, it is hard to see how the Government will ensure that its five value-based principles remain meaningful in a rapidly evolving AI-driven world. Read our submission here.