July 1, 2023
Jonathan Romic, Glenlead Research Associate
In this blog, Glenlead Research Associate Jonathan Romic argues that regulation can serve as a strategic tool for shaping the use of generative AI in education.
With the emergence of generative AI and Large Language Models (LLMs) like ChatGPT, several scholars and practitioners in education have begun to seriously explore potential regulatory avenues to address their concerns. The word regulation often evokes thoughts of restricting behaviour and enforcing compliance. However, a restrictive or prohibitory view is only part of the regulatory equation. Regulation can equally serve as a strategic tool that promotes desirable behaviour and technological development, including the responsible and ethical use of new technologies. This dichotomy is frequently framed in the literature as red light versus green light regulation. The former aims to induce ideal behaviour by acting in a negative or prohibitory manner, whilst the latter aims to encourage it in a positive or enabling fashion. This perspective situates regulation as a two-pronged effort that can be applied to the emergence of novel and potentially disruptive technologies.
Red-light regulation can aptly be characterized as any effort to restrict specific behaviours, undesirable activities, or uses of emerging technologies. A classic example of red-light regulation prompted by an emerging technology is human cloning, which was deemed so potentially dangerous that it warranted immediate prohibitive measures. This approach is beneficial where an emerging technology poses a clear and present danger, and it represents a pre-emptive step to reduce social, economic, and political risks. For example, several leading educational institutions initially viewed the emergence of ChatGPT as another disruptive innovation that could significantly undermine the integrity of education. As a result, their first course of action was to enact red-light regulation prohibiting the use, and potential academic abuse, of the technology. However, this blanket ban was shortly thereafter replaced with a more blended approach: restricting the use of AI to secure academic integrity, whilst still allowing students and staff to experiment with its beneficial applications.
Green-light regulation, by contrast, can be framed as a course of action that opens channels for further innovation to flourish. This is achieved by allowing or enabling access to emerging technologies, which represents the diffusion phase of innovation. It should be emphasized that green-light regulation does not amount to a Wild West approach. It is not a free-for-all permitting any type of development or behaviour. Rather, green-light regulation may succinctly be summarized as providing the regulatory and policy guard rails for the moral and ethical use of emerging technologies such as AI. Moreover, it could offer educational institutions a valuable opportunity to mould and shape the evolution and use of educational AI systems. Nevertheless, with green-light approaches, regulators and policymakers must astutely guard against idealism and the uncritical acceptance of innovation. This caution is generally referred to as the Panglossian Principle, which warns against overly optimistic perspectives even in the face of concerning evidence. In essence, there needs to be a regulatory balance capable of bridling overzealous extremes at either end of the spectrum.
It is clear to regulators and policymakers what red-light approaches to ChatGPT would look like. What remains unclear is what exactly would constitute an effective green-light approach, and how educators and institutions can craft a green-light policy that secures academic integrity whilst allowing innovation to flourish. As AI continues to evolve and diffuse, there will likely be instances where a specific red- or green-light approach is evidently needed. Nevertheless, there must be a balance between the two that incorporates degrees of flexibility and foresight. A fruitful contextual analogy is the emergence of calculators in maths education. This technology transformed maths education, and its regulation perfectly exemplifies a blended red- and green-light approach: calculators are permitted in class, for homework, and in exams, with limited or full functionality contingent on the calculations at hand. The question that comes to mind is what the current state of maths education would be had a hardline red-light approach been taken. Such an approach could eventually have led to policy entrenchment, with regulators clinging to the view that calculators are detrimental to mathematical education.
Regulation cannot always be framed or implemented as an either-or choice between enabling and prohibiting behaviours. There are examples of innovations deemed so potentially dangerous that they warranted immediate red-light regulation. The danger of this approach, however, is that it can hinder the growth and development of complementary innovations. It is therefore probable that the future of AI in education lies in a blend of red- and green-light regulation, much like the regulation of personal calculators in maths education. The problem with a simplistic either-or choice is that it rests on a false dilemma, from which faulty policy can be drafted. This simplistic view can produce a form of regulatory path dependency, leading to entrenched positions that are not adaptive to the environment, emerging risks, practical usage, and the evolution of AI technologies.
Therefore, moving forward, regulatory approaches to AI in education must be flexible, adaptive, and responsive. These capacities are essential to filling voids in the policy environment and ensuring that the actual application and use of emerging innovations match the policies being drafted and implemented.
Jonathan Romic (Cantab) is pursuing a PhD in AI Education and Regulation at the Faculty of Education, University of Cambridge. He holds a Master's in Technology Policy from the Judge Business School, University of Cambridge. His areas of specialization are AI, regulation, education, and developing policy solutions in an increasingly technology-driven global environment.