AI Isn’t Undermining University - It’s Exposing It
Never mind what students are prompting AI to do — AI should prompt us to rethink education.
Hey folks, it’s half-term this week, so a short one from me and no audio - couldn’t trust my kids not to walk in and start shouting down the mic.
In this week’s newsletter:
Why AI should prompt a rethink in higher education.
What I told the TV industry about its decline.
Universities are at a tipping point with AI.
One recent survey found 88 per cent of students have used Gen AI for written work, up from 53 per cent last year. This has fuelled resentment from students who don't rely on the tech but receive lower marks than those who do.
Universities are scrambling to respond by drafting guidelines, while some parents are hiring lawyers to contest AI misconduct findings or marking decisions. Tutors have become AI detectives, trying to identify AI-written work that slips past detection software.
“First we had to be mental health professionals, now we’re supposed to be LLM experts,” one junior academic told me. “When do we actually get to teach?”
Older professors often don’t realise how serious the issue is, while younger staff, often on precarious short-term contracts, are too time-poor and powerless to push back. Many are even quietly using AI in the marking process just to keep up.
This didn’t start with ChatGPT.
Over a decade ago, students were already submitting downloaded essays. When I was an academic, I remember being asked to quietly pass an essay from an international student, whose English was so poor it was clear they hadn’t written the paper they submitted.
This current crisis reflects a deeper shift in the student-university relationship since tuition fees were introduced. Once knowledge became transactional, students began treating degrees like commodities. With rising costs came pressure to secure top marks by any means.
Knowledge for its own sake has been replaced by credentialism.
The shift from student to customer has been gradual but real.
When I was in academia, I remember a fair portion of staff meetings devoted to how we could boost our student survey rankings. What worked better — free beer or pizza?
And as customer satisfaction became king, so did grade inflation. In 2018, 29 per cent of students were awarded a first-class degree. Back in 2011, it was just 16 per cent.
Professors found themselves confused not just by their students' attitude, but by their declining authority. Where had the deference gone?
Then came COVID.
Reduced contact hours, rising debt, and a weakened job market drove growing dissatisfaction. A HEPI survey found 69 per cent of students still had remote lectures in 2024.
Now graduates face an AI-driven job market with a falling degree premium. We’ve left a generation underprepared for the future and overcharged for what’s past.
So, what next?
Firstly, what if, instead of us prompting AI, AI prompted us to think better?
We need to stop thinking about AI as just a threat and start recognising it as the reality of knowledge acquisition in the 2020s. Assume everyone is using it.
Secondly, treat AI as a starting point, not the end.
Encourage students to interrogate it: What did it miss? What assumptions or biases appear? How could better human input improve its output? This is the critical thinking that universities claim to teach.
Finally, we also need to recognise that outdated formats — the 2,000-word essay or even 15,000-word dissertation — are no longer fit for purpose.
Why not revive oral exams, the standard before 19th-century paper tests?
In a world of automation, unleashing human spontaneity is key. I work with companies every day, and they tell me the same thing: they want people who can talk, think, listen, and persuade - skills that are increasingly lacking amongst digital natives.
We must not allow these oral and rhetorical skills to become the preserve of those who attended private schools, where debating clubs, performance, and public speaking are part of the cultural fabric.
These abilities shouldn’t be a privilege. They should be an educational standard.
PhDs are still orally defended.
I vividly remember my viva — justifying every word before two experts. No AI could prepare me for that.
It gave me a voice.
And what’s more powerful in education than that?
One under-examined aspect of the AI debate is how much we fixate on how intelligent AI is.
But what if the real revelation isn’t the brilliance of machines, but the banality of most of the human routines they can now replace?
This is our opportunity to question the very foundations of how we acquire, assess, teach, and democratise knowledge.
These are questions that universities, historically at the epicentre of intellectual innovation, have always tackled.
They must do so again.
If universities embraced such an approach, they wouldn’t just improve the quality of education — they’d prepare students for a job market driven increasingly by AI fluency rather than university accreditation.
And maybe, just maybe, they’d start to repair the deep sense of disillusionment that now haunts the system.
Is TV in perpetual decline?
Earlier this month, I spoke at the DTG Summit about one of the most deceptively simple questions of our time: what are we actually doing on our screens?
We talk endlessly about screen time - how much we’re on, whether we’re addicted, whether kids will ever look up from their tablets - but we rarely ask what we’re consuming and why it differs so much across generations.
💻 Here’s the deck I shared at the summit — now available to view:
👉 Click to open on Canva
Thanks for reading,
Eliza