Photo credit: Paul Caddy
Glenlead Head of Education Research, Dr Steven Watson, publishes policy brief on generative AI and education in preparation for a parliamentary roundtable attended by Executive Director, Dr Ann Kristin Glenster.
19 September 2023
The Glenlead Centre’s Executive Director, Dr Ann Kristin Glenster, contributed to a roundtable discussion on Generative AI and Higher Education held at the Palace of Westminster on 12 September 2023. Chaired by Daniel Zeichner MP, the event was hosted jointly by the Higher Education Commission, Policy Connect, and the All-Party Parliamentary Group on Data Analytics. For nearly two hours, key stakeholders from academia and policy discussed the challenges generative AI poses for higher education and how these can be addressed through a sector-wide approach.
Meeting in the wood-panelled Jubilee Room, the participants wrestled with four questions prepared by Policy Connect researcher, Alyson Hwang:
- What are the current risks and challenges associated with the use of generative AI in higher education? How do stakeholders in the education sector perceive such challenges?
- What are the most critical issues that the sector needs to discuss in relation to generative AI and changes to assessment/feedback, and why?
- What potential changes does generative AI bring to assessment methods in higher education? Are these changes positive?
- What can policymakers do to support educators and higher education providers to benefit from generative AI use in assessment?
The discussion that followed revealed a shared view that generative AI tools are here to stay and should be welcomed by the higher education sector. But this view also provoked difficult questions about what universities are for, how students should learn and how they should be assessed, and how to ensure that academic teaching staff are equipped and ready to teach using these tools.
Three conclusions emerged: First, a sector-wide approach is needed to build a common framework for assessments and academic policies that takes into account the inevitable presence of generative AI in academia. Second, institutional investment is needed to train academic teaching staff, students, and administrators to use and assess generative AI tools. Third, without centralised policy intervention, generative AI tools in higher education will widen the inequity gaps already present in the sector.
All of these issues are identified in a policy brief which Glenlead’s Head of Education Research, Dr Steven Watson, Associate Professor at the Faculty of Education, University of Cambridge, prepared for this meeting. In his paper, Steven makes four tangible policy recommendations:
- Develop Dynamic Regulatory Frameworks
- Fund Empirical Research and Educator Training
- Establish Multi-Stakeholder Groups
- Address Resource Inequality
The roundtable discussion identified several key themes that urgently need to be addressed:
- Pace of development
Participants noted that the phenomenal pace of AI development poses severe challenges to university governance structures. One participant saw generative AI as merely the latest step on a path at the end of which universities will no longer exist; instead, it was suggested, each student will have their own bespoke tutor bot to teach and assess them while they complete apprenticeships or on-the-job training in situ.
The group was concerned that some members of the higher education community are trying to put generative AI back in its box. As one participant noted, calls for exams to return to being handwritten in an exam hall risk reversing the strides made towards inclusive assessment for disabled and neurodivergent students.
Generative AI is revolutionising the world, including higher education, and it is happening faster than institutional frameworks can cope with. Yet the challenges posed by generative AI are not new. Many of the issues it is bringing to the forefront have been present for years: questions about the inadequacy of assessments that test rote retention of facts are far from novel, and the failure of higher education institutions to update their curriculums and give their academic teaching staff the necessary digital skills is well known.
- Assessments
The discussion kept returning to the challenges generative AI poses for assessments. As one person said, “Tech is showing up issues that should have been addressed long ago.” Nobody felt that the current assessment regime was fit for purpose, yet the group did not reach a clear consensus on what should replace it.
Some suggested models based on ‘inquiry-based assessments’ or ‘problem-based assessments’ as the solution. Others pointed to the need for universities to teach different skills from those currently offered in the curriculums. One participant traced the problem of assessments and higher education curriculums to the ‘knowledge-based skills’ approach of A-levels and GCSEs, suggesting that this model may have reached its expiry date. The example of the calculator replacing times tables was used to illustrate how generative AI tools may make rote learning superfluous. As one participant pointedly put it, “There is no point in training students to do what a machine can do.”
Instead, critical thinking, evidence-based thinking, and soft skills should be the focus. Yet, how would higher education assess these skills? This question remained unanswered.
There are practical issues as well. One participant pointed out that changing the assessment regime would take time, time the sector does not have. Another observed that, for legal reasons, higher education institutions cannot change assessment criteria once a student has begun a course of study, which means that new assessment standards would have to be phased in over time. Several participants were also concerned about the legitimacy of new assessment criteria, and particularly how these would be received by older generations who might find it hard to accept changes to the way they themselves had been assessed. This matters because those older generations are the ones who send their children to these institutions, employ the students graduating from them, and devise and adopt the formal frameworks for assessment.
- Cheating or learning?
Some of the discussion centred on the concern that students are using generative AI tools to cheat. All participants called for a sector-wide policy framework governing how generative AI may be used and disclosed. One person observed that while some universities have adopted a policy requiring students to disclose where they have used generative AI, such a policy will likely soon be akin to asking someone to disclose when they have used Google Search. Another participant noted that generative AI tools highlight existing issues with academic integrity, for example the grey area of using tools such as Grammarly.
- Digital literacy in academic teaching staff
A major recurring theme was the need for academic teaching staff to be trained in using generative AI tools. Several participants called for a future in which academic teaching staff shift from their current role as assessors to one of mentors, with a personal relationship with each student based on intellectual nurturing, exchange, and growth. Yet such a vision would require substantial resources, and risks widening the inequity gap in the sector even further.
- Equity
Equity emerged as a major issue. Again and again, participants returned to the theme that access to generative AI is not equally affordable for all students. Some predicted that generative AI tools will only be available behind paywalls that are prohibitively expensive for many students. Equity of access is also a systemic issue, as different higher education institutions have different budgets, resources (including staff training), and student populations with different needs.
Yet generative AI was also identified as a possible facilitator of equity. For example, non-native English speakers praised the time savings generative AI had afforded them in their own studies, identifying it as a potential leveller of educational attainment.
- An Issue of Trust
Dr Glenster from the Glenlead Centre argued that any framework for the use of generative AI, including a reconceptualisation of assessments, must be mindful of the broader context of surveillance technologies entering all facets of everyday life, including education. The need for robust frameworks for data protection and privacy in education also tallies with one of the major worries identified by students in a study conducted by one of the roundtable participants.
There was little discussion of how to ensure that generative AI tools in education are trustworthy, although some examples to the contrary came up, such as made-up references and false plagiarism claims by AI-detection software. The group favoured tackling these problems by increasing digital literacy among students and academic staff, rather than by requiring certification schemes or other forms of approval for generative AI tools.
Another issue briefly raised concerned the biases inherent in the training data used by generative AI, and how these may be partly shaped by access, or lack thereof, to datasets due to copyright. Participants acknowledged that such biases could make generative AI tools unreliable.
Read Dr Steven Watson’s Generative AI and Higher Education report here.