The AI Elephant in the Lecture Hall
When we talk about AI in education, we’re not talking about some far-off sci-fi concept. We’re talking about large language models (LLMs) like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude. These are now as accessible to students as a library book was a generation ago. And let’s not pretend they’re only good for cheating. These tools can be phenomenal learning aids. A student can use an AI to brainstorm essay ideas, simplify a dense scientific paper, get feedback on their writing style, or even have it act as a tireless tutor that explains complex calculus at 2 a.m.
The upside is clear: enhanced learning, greater efficiency, and a way to level the playing field for students who might not have access to private tutors. AI can help administrative staff automate scheduling and grading, freeing up educators to do what they do best: teach and mentor. But every powerful new tool brings a new set of rules to write, and in this case, the rulebook is still largely unwritten.
The Thorny Question of Ethics
This brings us squarely to the minefield of education ethics. Where is the line between using an AI as a helpful co-pilot and outright AI misconduct? Is asking an AI to outline an essay acceptable, but having it write a full paragraph a step too far? What if it just rewrites your own clunky sentences into something more articulate?
Jennifer Hing, a pro vice-chancellor at De Montfort University (DMU), put it quite simply in a recent BBC report: “Is it your work? Is it your words? If it is not, then you’ve crossed a line”. It sounds straightforward, but in practice, it’s anything but. The temptation for students under pressure is immense. And for institutions, the challenge is fostering an environment of innovation—encouraging students to become AI-literate for the future workforce—while simultaneously upholding the principles of academic integrity that give a degree its value. This isn’t just about catching cheaters; it’s about defining what ‘original work’ even means in the age of generative AI.
The Flawed Gospel of AI Detection
In the initial panic, many universities threw money at a seemingly simple solution: AI detection software. This was meant to be the cornerstone of plagiarism prevention 2.0. The pitch was seductive: run a student’s essay through the software, and it will tell you if it was written by a machine. Problem solved, right?
Wrong. Terribly wrong.
Relying on these AI detectors is like hiring a security guard who is not only trigger-happy but also partially blind. These tools are notoriously unreliable. The internet is littered with stories of them producing embarrassing false positives. The BBC highlighted a particularly damning case where an AI checker flagged 100% of a PhD candidate’s thesis—a document representing years of original research—as AI-generated. At DMU, another tool flagged 60 instances of the word ‘fostering’ as a potential AI hallmark. It’s a technical and ethical disaster.
This creates an environment of fear and mistrust. An innocent student could see their academic career derailed by a faulty algorithm. It’s a classic case of a technological ‘solution’ creating a bigger social problem. The arms race is unwinnable. Dr Abiodun Egbetokun, an associate professor at DMU, notes that as the models improve, “it’s becoming increasingly difficult to spot.” The detectors will always be one step behind the generators.
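To see why surface-level detection fails so badly, consider a minimal Python sketch of the kind of heuristic that produces the ‘fostering’ fiasco: scoring a text by how often it uses words an LLM supposedly favours. The word list and threshold below are entirely hypothetical, chosen purely for illustration; real detectors use richer statistical features, but they inherit the same failure mode.

```python
# A deliberately naive "AI detector": flag text that over-uses words thought
# to be common in LLM output. The word list and threshold are hypothetical,
# chosen only to illustrate the failure mode of surface-level detection.

AI_FAVOURED_WORDS = {"fostering", "delve", "moreover", "furthermore", "holistic"}
FLAG_THRESHOLD = 0.02  # flag if more than 2% of tokens are "AI-favoured"

def ai_score(text: str) -> float:
    """Return the fraction of tokens that appear in the 'AI-favoured' list."""
    tokens = [t.strip(".,;:!?'\"()").lower() for t in text.split()]
    tokens = [t for t in tokens if t]
    if not tokens:
        return 0.0
    return sum(t in AI_FAVOURED_WORDS for t in tokens) / len(tokens)

# A human academic who simply likes the word 'fostering' trips the wire:
thesis_excerpt = ("Fostering collaboration is central to this thesis. "
                  "Moreover, fostering trust between partners proved vital.")
score = ai_score(thesis_excerpt)
print(f"score={score:.3f} -> {'FLAGGED' if score > FLAG_THRESHOLD else 'ok'}")
```

On that excerpt the score is 3/14 ≈ 0.214, sailing past the threshold: a false positive on perfectly human prose. Any detector keyed to surface statistics will punish human writers whose register happens to overlap with the machine’s.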
The De Montfort Gambit: Training Humans, Not Just Trusting Machines
So if the machines can’t be trusted to catch the machines, what’s a university to do? This is where De Montfort University’s approach becomes so compelling. Instead of doubling down on flawed tech, they’re investing in their people. Their strategy for academic AI monitoring is refreshingly human-centric.
DMU is actively training its lecturers to become smarter about AI. They’re not just learning about the technology, but about its nuances and tells. This includes looking for subtle markers that might indicate AI use, such as:
– Peculiar vocabulary: An over-reliance on Americanisms in a paper submitted in the UK.
– Repetitive sentence structures: The kind of monotonous prose early AIs were known for (see the sketch after this list).
– A disconnect in tone: Where the essay sounds professionally written but the student can’t articulate the core arguments in person.
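None of these tells is decisive, and most resist automation, but a rough sketch shows what ‘repetitive sentence structure’ means if you try to quantify it. The metric below is hypothetical, not something DMU’s training prescribes: it measures how uniform sentence lengths are, since monotonous, same-length sentences were a hallmark of early LLM prose.

```python
import re
from statistics import mean, pstdev

def sentence_rhythm(text: str) -> float:
    """Coefficient of variation of sentence lengths: values near zero mean
    metronomic, same-length sentences; higher values mean varied rhythm."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2 or mean(lengths) == 0:
        return 0.0
    return pstdev(lengths) / mean(lengths)

monotone = ("The policy has benefits. The policy has costs. "
            "The policy needs review. The policy will change.")
varied = ("Policy is messy. While some benefits are obvious, the costs are "
          "not, and weighing them honestly takes real work. It will change.")

print(f"monotone rhythm: {sentence_rhythm(monotone):.2f}")  # 0.00
print(f"varied rhythm:   {sentence_rhythm(varied):.2f}")    # about 0.84
```

Of course, a polished human writer can score ‘robotically’ too, which is exactly why DMU pairs these tells with conversation, asking students to talk through their arguments, rather than treating any single marker as proof.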
Crucially, this isn’t a witch hunt. The training equally emphasizes the legitimate and powerful uses of AI. Lecturers like Dr Ruth McKie are encouraging students to use AI for tasks like refining project briefs or generating initial ideas. Brett Koenig, a senior law lecturer, sees it as his duty to teach students how to use these tools ethically and effectively, as they will undoubtedly encounter them in their professional careers.
Lessons from the Front Line
DMU’s approach provides a practical blueprint for other institutions grappling with this issue. The key takeaway is that the solution is pedagogical, not technological. Rather than trying to police every keystroke, the focus should be on designing assessments that are harder to ‘game’ with AI.
What might this look like in practice?
– More in-person assessments: Viva voce (oral) exams where students must verbally defend their written work.
– Personalised assignments: Tying essay topics to specific in-class discussions or personal experiences that an AI would have no context for.
– Process-based grading: Assessing students not just on the final product, but on their research process, drafts, and reflections.
This is a fundamental shift. It moves assessment away from just measuring a student’s ability to produce a polished document and towards evaluating their critical thinking, synthesis, and genuine understanding of a subject. It’s more work for educators, certainly. But it also leads to a more robust and meaningful education.
The future of academic integrity won’t be won by a better algorithm. It will be secured by smarter teaching. The rise of AI forces us to ask what we truly value in education. Is it the final, polished essay? Or is it the messy, difficult, and deeply human process of learning itself? The institutions that embrace the latter are the ones that will thrive.
What do you think? Is it possible to design an AI-proof assignment, or is that a fool’s errand? Let me know your thoughts below.


