HEAR FROM THE EXPERTS
Dan Cryer

Could you briefly introduce yourself and your current role?
I’m Dan Cryer, Associate Professor of English at Johnson County Community College, just outside of Kansas City. Before coming here I was the director of undergraduate writing at Roosevelt University in Chicago, and an assistant writing director at the University of New Mexico, where I did my graduate work. My degree is in rhetoric and composition, and in and after grad school my research was in environmental rhetoric, technical writing, and gun-rights rhetoric. My pivot to AI was driven more by necessity than anything else, but I can see plenty of continuity with my previous research topics.
How do you address AI in your own courses? How do you see it intersecting with writing instruction in particular?
If I had to sum up my approach in one sentence, I’d say that I want my students to be able to work with AI and without it, too, and I want them to approach AI tools critically and not trust them blindly. In my introductory writing courses, like first-year composition, we use AI tools for research, narrowing topics, and generating research questions. This semester I think I will also experiment with AI voice analysis assignments where students discuss the qualities of “AI voice” – what makes something “sound like it was written by AI” – and compare that to what they perceive as their own writing voice.
In terms of academic integrity, I have students do a fair number of handwritten assignments in and outside of class – always for low stakes, never for big, point-heavy assignments – to establish a baseline of what their writing looks like without tech mediation. And for some papers, I ask them to turn in two versions: one written with all their spelling and grammar checkers turned off, no AI, no translation software, just them and their keyboard; and another “cleaned up” with whatever assistive (not generative) technology they prefer to use. This is partly about academic integrity (and partly because I suspect many people don’t know how to turn these checkers off, and I want them to know), but it’s just as much about voice: I want to be able to compare what their writing looks like with and without tech mediation, and I want to be able to give them feedback specifically about their voice. And I want them to be able to make that comparison and reflect on their voice, too. I want them to know what makes their writing unique, and to deploy those things when appropriate in their academic and professional writing, because I think that will serve them well in a future increasingly overrun with bland language.
What AI initiatives or programs have you been involved with? Do you think addressing AI should be approached differently at, say, community colleges than 4-year universities?
ChatGPT dropped in November 2022, and almost immediately the chair of our department, Andrea Broomfield (a great Kansas City food writer and scholar of food writing), convened an ad hoc committee on AI and writing, which I chaired. We had produced an extensive position statement on AI and writing – the first such statement at our institution, I believe – by April 2023. In fall 2024 I developed a website of resources for college-level teachers – especially those who use writing in their classes – called The AI Minimalist. Now I’m serving on our college-wide AI policy task force.
Regarding 2-year and 4-year schools and AI, I don’t think their overall approaches should be different. I think what I said above is a pretty good general rule, no matter the institution: students should be able to work confidently both with and without AI – that is, in situations where they must use it and in situations where they can’t or where the tools may be a bad fit – and they should approach it with a critical eye.
Maybe the only slight caveat is that I think the need to work confidently without AI is most acute in lower-division classes where students are getting their bearings and developing foundational skills, and those are the courses taught at 2-year schools. Much of one’s skill in using AI comes from having the basic knowledge of your field. That’s what helps you understand what to prompt AI tools to do and allows you to evaluate their output. In the courses I’m most familiar with – ones in the humanities and, to a lesser extent, the social sciences – AI presents, I think, more potential for harm to learning than for improvements to it. So in those classes I think the “without AI” piece is especially important.
What opportunities, if any, do you see for AI to have a genuinely positive impact on education?
Last month (in August), there was an episode of the New York Times “The Opinions” podcast featuring Jessica Grose, a columnist who has been writing about AI in education, and Tressie McMillan Cottom, a sociologist and fellow NYT columnist who has long studied education, and they both thought that AI’s impact on education in the humanities was almost entirely negative. I mostly agree with them.
The studies that I’ve seen showing AI’s educational benefits describe people working in controlled environments where they have access to a specific kind of tool tailored to an educational purpose. But students don’t work in controlled environments. If they have access to a tool that’s beneficial to learning because it walks them through a process of problem-solving rather than giving them the answer to that problem, and they have access to a tool that will simply give them the answer, and they’re short on time because they have to get to their job because they’re running out of money, and they’re buffeted by other pressures in their lives… it would take an extraordinary act of will to choose that first tool rather than the second one.
A lot of this boils down to the fact that LLM-based chatbots aren’t educational tools. They’re productivity tools. Learning and productivity are not entirely different from each other. But learning – especially in the writing classes I teach – is much more about process than product. Using AI in a process-focused way rather than a product-focused way requires a user to constantly rein it in, to push it in a direction that’s different from what it’s designed for. And it requires students to focus on their learning rather than on their grades, which they’re certainly capable of, but that’s not, generally speaking, what our educational system has trained them to do. So basically, for the most widely available AI tools to be leveraged positively toward education, both the tools and the students need to operate in ways counter to their design and training. Not impossible, certainly, but no easy matter, either.
What do you see as the biggest threats to education, if any, from AI or how it’s used?
One of the biggest threats comes out of what I was just saying. When these tools are suddenly all over students’ learning environments and their products can’t be reliably detected by teachers, we essentially make the learning process optional. Another way of saying that – and I’ve been writing about this – is that we make students responsible for the integrity of our classes. This is not a reasonable burden.
Some have reacted to this observation by saying that students have always been responsible for academic integrity: it’s their job to follow honor codes and be responsible for their own learning, full stop. And there’s truth in that. But teachers and institutions bear some of the responsibility, too. We create assignments that, at the very least, don’t serve up the possibility of cheating on a silver platter, and we reward good-faith, honest work and punish cheating when we find it. But when we have no reliable way of finding it, and every student has access to tools that will do their work for them for free, we have effectively shifted the responsibility for learning almost entirely onto students.
Here’s the danger: When students can choose which assignments to commit themselves to and which to outsource, it becomes their responsibility to correctly choose which skills they need to develop and which they can ignore. This is not a reasonable thing to ask of them. Even if the entire purpose of higher education were to make students ready for the workforce (to say nothing of skills and knowledge necessary for democratic citizenship), this would be an unreasonable expectation. The future of any given profession is difficult to predict even for insiders. Add AI to the mix, and the future of work gets even murkier. It is wildly unrealistic to make students responsible for correctly choosing which skills they should develop for their future employment.
Lord knows there is much wrong with higher education, but the basic curricular model, where faculty and administrators create wide paths through the institution in which students have some freedom to choose a more specific path, is sound. People with educational and field-specific expertise design those wide paths so that when a student reaches the end, they have some durable skills and knowledge that can serve them well in a variety of contexts. AI threatens to take the expertise out of that equation.
What advice would you give other faculty who feel overwhelmed by AI or unsure how to address it in their classroom?
First I would say that the feeling of being overwhelmed points to something that I wish more people knew: AI creates more work for teachers, not less. It is not a time-saver or something that, on balance, allows for more efficiency and productivity for educators.
As for how to address it in the classroom, I think the best we can do is aim for broad principles that apply across disciplines, since different classes call for different methods. One of these principles is to try to cultivate intrinsic motivation in students. I know that can sound glib and that most teachers are doing this already, but the presence of AI means we need to think harder about it and be more purposeful in working towards it. When someone is intrinsically motivated to do something, they are compelled from within to do it (as opposed to being extrinsically motivated by something like money or grades). I’m not sure if there are studies backing this up, but it seems to me that students are less likely to outsource their work to a machine if they are compelled by interest or passion.
But this is very abstract, so it’s useful to think about barriers to this kind of motivation and work against them. Two big ones are not seeing value in a task and not having the confidence that you can complete it. To create or communicate value, we try to tap into students’ interests and natural curiosity, and to tie the skills we’re teaching to something students already value. To build confidence, we build up students’ knowledge and skills through low-stakes assignments and frequent feedback on the way to the bigger assignments, so that by the time they sit down to work on the big, high-stakes, stressful thing, they’ve essentially already started it and already gotten direction from you, which makes it less stressful.
The other thing to build, in addition to intrinsic motivation, is accountability. Semi-frequent, low-stakes, pen-and-paper writing done in class, with all devices put away, helps you get to know students’ voices, so if their writing shows a significant departure from that voice later on, you can have a conversation with them about their writing process and appropriate uses of technology. I’m afraid this advice goes against what many in my field (rhetoric and composition) are recommending, and I understand their reasoning: pen-and-paper writing can run up against some students’ accessibility needs, and well-established research in the field argues against timed writing exams. For both these reasons I’ve been keeping these “assignments” pretty informal and always low-stakes, and sometimes they’re done as homework rather than in class. For students with accessibility accommodations, we work those out on a case-by-case basis. To me, though, accountability has to be in the mix, partly because of what I said above: without it, we offload our share of responsibility for academic integrity onto students, which is not fair to them.
What do you wish more people knew?
What I mentioned above: that AI creates more work for teachers, not less. And it’s not close.
And it’s not just more work, but opportunity costs, too. Here’s just one example. In 2022, some colleagues and I were growing an inter-departmental working group on media literacy at my institution. We had slowly grown to about a dozen members who met a few times a semester to discuss this critically important topic. We had compiled resources for faculty and made them available online. And we were beginning to give well-attended campus presentations about teaching media literacy. When ChatGPT came out in November of that year, we stopped that work and haven’t resumed it, largely because I and others have had to devote our energies to AI in various ways: creating new working groups and committees, and making all the changes to teaching that AI has made necessary. I hope we’ll get back to that work on media literacy – any cross-disciplinary group working from intrinsic motivation on a genuine teaching problem is a rare, good thing! – and when we do, AI will be an important part of the conversation. But I wonder how many other initiatives have been dropped because people had to turn their attention to AI.
How can readers connect with you or follow your work?
My website and blog are at https://aiminimalist.wordpress.com/, and my contact info and links to my other work are there as well.