
Teaching, Learning – and Questioning – in the Age of Generative AI
May 29, 2025

Megan Ennion, Glenlead Research Associate
Generative AI didn’t wait for education to catch up. It’s already writing essays, summarising readings, and prompting new ways of working. Students are using it – daily. Teachers are responding – unevenly. Institutions are watching – cautiously. But amid this rapid shift, one thing is clear: we’re in the middle of a transformation we don’t yet fully understand.
Because generative AI systems aren’t just tools, and they don’t just provide information. They are ever-present dialogue partners – and co-creators. They don’t simply respond; they contribute. They suggest, rephrase, reframe. They speak to humans in ways that feel responsive, even relational. That means their influence is emotional as well as intellectual.
And that means we need to think critically – not just about what students learn with AI, but what they learn from it: about themselves, about what counts as credible or trustworthy, and about what learning is meant to be. Over time, the way generative AI interacts with students – the tone it uses, the kind of help it offers, the behaviours it rewards – will shape what feels like effective or appropriate learning. Sometimes that might build confidence and curiosity. Other times, it might encourage shortcuts, surface-level answers, or quiet compliance.
These patterns aren’t theoretical – they’re beginning to show up in real educational contexts. In studies with sixth form and university students and staff, I’ve seen a mix of excitement, uncertainty, frustration, anger, and unmet need. Many in education have serious, well-founded concerns. And many see genuine value in these tools – but they also want guidance. Not just on how to use generative AI, but on how to use it well – ethically, critically, and in ways that genuinely support learning.
Beneath all of this, one message keeps returning: we don’t yet know what kind of shift we’re living through. What happens when we integrate something that isn’t just a new tool, but a new paradigm? What kinds of learners are we shaping? And in focusing on short-term outcomes, are we overlooking deeper behavioural and cultural shifts?
This conversation can’t happen in isolation from the wider questions these systems raise. Concerns about bias, surveillance, environmental cost, and corporate influence aren’t peripheral – they’re part of the same picture. If generative AI is becoming embedded in education, we need to ask what kind of embedding we’re willing to accept.
But this isn’t an abstract debate – it goes to the heart of what education is for. My own research explores how students interact with generative AI tutors – not just in terms of what they produce, but in how these interactions shape behaviour. It’s an inquiry into the evolving relationship between students and machines, and what that relationship might be quietly reconfiguring. When support comes from a system that replies instantly, fluently, and without judgment – yet has no understanding of the world it speaks about, and readily offers shortcuts around deep thinking – what happens to perseverance, help-seeking, or the value of productive struggle? These aren’t just questions about individual habits – they reflect broader shifts in how effort, support, and learning itself are being redefined in the age of generative AI.
And the implications aren’t just academic – they’re ethical. They ask us to consider the kinds of learners, and people, we are helping to shape. This is what led me to start the AI & Education Community at the University of Cambridge. We need more than policies and position statements. We need space: to think, to talk, to challenge, and to rethink what it means to learn and teach alongside generative AI. That’s the spirit of the Community. We’re not trying to settle the debate. We’re trying to open it up – bringing students, teachers, researchers and policymakers into dialogue about what AI is doing to education, and what it might do for it, if we’re intentional.
These aims are also central to Glenlead’s mission. Through partnerships like our collaboration with Jisc on responsible AI in higher education, and our co-hosting of the upcoming Generative AI and Education Conference, Glenlead is creating spaces where these tensions can be explored – not solved, but held and worked through. Not in isolation, but together.
Because generative AI isn’t just a tool. It’s part of a wider cultural shift – fast-moving, market-driven, and deeply social. If we want that shift to support human learning, flourishing and equity, we have to act. Not with all the answers, but with serious questions, shared values, and a commitment to shaping the future – not just reacting to it. If you’re thinking about, working on, or even struggling with AI and education – especially if you have more questions than answers – we’d love to hear from you. Learn more about the Community at aiandeducationatcambridge.co.uk. And if you’re curious about my research on AI and learning behaviours, you can read our latest publication here:
Ennion, M., & McLellan, R. (2025). Large Language Model Chatbots in Education: Exploring Literature Insights on Their Impact and Influence on Learning Behaviours. Studies in Technology Enhanced Learning, 4(1). https://doi.org/10.21428/8c225f6e.540b41b5