Are blue books and oral exams the answer?
A recent Substack article about AI cheating in higher ed, AI-free spaces, and a case study on adapting a paper assignment to be more AI-resistant.

Excerpt:
In the wake of LLM popularity, college is changing rapidly and both professors and students have been left flailing.
Imagine being a college student on your first day of a new semester. One professor says that using generative AI is cheating, while the next says you will be using it extensively in the course, and yet another does not mention it. You are told you will be punished if an AI detector classifies your assessments as AI-generated. Some professors encourage you to use a grammar app to improve your writing, but others tell you that doing so counts as cheating. At home, one parent worries AI will atrophy your brain and abilities, while the other tells you that you need to learn prompt engineering to have any hope of landing a job in the new AI-infused economy. Pundits in the media say AI makes college obsolete, social media influencers advertise apps that can complete all your papers and online tests, and meanwhile some of your friends are showing off creative applications of AI for fun while others say AI will destroy the world. If you were a college student, you would probably find yourself confused, perhaps excited or nervous about this new technology, and likely unsure of where it fits in your future.
That quote is how I started a journal article I recently published in Teaching of Psychology (Stone, 2025; free PDF). The 700+ college students I surveyed reported a lot of uncertainty about the role of AI in their education and future. They’re both nervous and excited about the new technology, but leaning nervous: will AI increase inequality? Will it rob them of the jobs they hoped for after graduating?
They’re getting mixed messages, contradictory advice, and different policies from all directions. The students in my study reported that they’re more likely to use AI in what they consider ambiguous use cases (where it’s unclear whether it’s allowed) than in ways that are outright banned. Even so, in this dataset from a year ago, more than 40% of the 700+ students I surveyed admitted to using AI to cheat.
And in a more recent follow-up study of 500+ students, I found that figure had climbed to more than 60% of students admitting to cheating with AI. Some college students have always cheated, but the scale of cheating with AI is massive.
And it’s no secret. Popular media outlets have been ringing the alarm bells.
- “Everyone is cheating their way through college. ChatGPT has unraveled the entire academic project” (Walsh, 2025)
- “Is ChatGPT killing higher education? AI is creating a cheating utopia. Universities don’t know how to respond” (Illing, 2025)
- “The cheating vibe shift: With ChatGPT and other AI tools, cheating in college feels easier than ever — and students are telling professors that it’s no big deal” (Stripling, 2025)
At a conference not long ago, OpenAI shared that usage spikes during the school year and drops off massively in the summer. This past spring semester, right around finals, AI companies started giving away free premium subscriptions to college students (e.g., Gemini Advanced, ChatGPT Plus, Super Grok). Meanwhile, last month the biggest AI apps in China shut down some of their features during the week of nationwide college entrance exams.
[The rest of the post here, completely free]