HEAR FROM THE SCHOLARS
Rebecca Yeager

Could you briefly introduce yourself and your background?
I taught English as a Second Language (ESL) to international students as non-tenure-track faculty at the University of Iowa for thirteen years, where I saw institutional support for international students (and higher education in general) being systematically dismantled. This summer I resigned and started on a PhD at the University of Illinois Urbana-Champaign, with a focus on language assessment. I’ve always been frustrated by bad tests (and by how little agency teachers often have over assessment), and in my research I hope that I can draw attention to ways to make tests a little bit better (and give teachers a little bit more agency over them).
What is your experience with AI, and how do you address it in your own courses?
As you might imagine, AI has upended my entire world. It’s not only that, as an educator, I have to wrestle with the impact of AI on teaching and learning, or that, as a test developer, all of the papers in my field have “AI” in the title now (AI scoring, AI proctoring, AI detection, etc.): it’s that the content area that I teach is language itself. With the advent of Large Language Models (LLMs), some people have leapt to the conclusion that in the future, we won’t even need to learn languages anymore. We’ll just interface with the world through AI translation tools – like the Babel fish from The Hitchhiker’s Guide to the Galaxy. This is the reasoning which, in part, has contributed to the shuttering of language programs in higher education, including Linguistics and Languages programs at West Virginia University, Indiana University (my alma mater), and others.
The fact that I am still in this field (even though I thought about leaving it) is evidence that I don’t believe that AI will ever obviate the need for humans to learn one another’s languages and connect with each other directly, rather than through a technological mediator. Number one, I don’t think AI will ever be reliable enough – the problem of hallucinations is too intractable. Number two, the cost of AI is currently being subsidized by venture capital and tax breaks, and even so companies are still hemorrhaging money. We cannot assume that the tools which are available to us now will continue to be affordable or even available over the course of our lifetimes. Number three, there will always be situations where humans want or even demand to interact with other humans – where the risk of manipulation or miscommunication is high, or where there is an emotional desire to connect directly.
In my ESL writing classes, I am straightforward about these concerns with my students (I teach primarily graduate students in small, in-person classes). During the first week of classes, we talk about the value of using your own brain to learn skills which are yours forever rather than relying on Ed-Tech tools for what I call the “rent-a-brain” subscription model of education. For that reason, I ask my students not to use AI writing tools for the duration of the course so they can focus on building their own internal language repertoire. On the final (a weeklong take-home project), I invite them to use any AI tools they like, as long as they also write a reflection describing which tools they used (if any) for which parts of the project, how well it went, and whether they would use those tools in the same way again. Before the final, we spend a week talking through strategies for identifying which aspects of a task might be valuable to outsource to AI, how to evaluate the various tools available for those tasks, and how to remain in the driver’s seat when using technology in your workflow. I don’t want to give them a set of rules but rather a set of questions which they can adapt to their various disciplines (since they hail from a variety of fields where AI may be more or less useful).
My students have responded very positively to this approach. In their final projects, typically about half of my students choose not to use any AI (sometimes after experimenting with the tools outside of class and being disappointed by them), and about half of them choose to use AI for various parts of their writing process – some feeling satisfied with its performance and some feeling they’d want to do things differently in future assignments. I have not noticed a difference in grades between students who use AI and students who don’t. To be fair, I work with very high performers who are internally motivated to learn. I also don’t really have issues with unauthorized use of AI throughout the semester, except usually a couple of cases on the first assignment. For those cases, when I pull the student aside and talk about it, they typically reveal that they were using Grammarly or a translation tool and they didn’t realize those tools were AI/unauthorized for the course. After those initial discussions, I’ve never really had ongoing issues. I also tend to assign some in-class writing throughout the course, so I have access to some samples of what they can do unaided by technology as well as what they can do at home. I don’t think an adversarial approach is super useful. I don’t use AI detectors. Suspicious writing samples trigger a conversation and an opportunity to rewrite the assignment. I’m less concerned about enforcing my policies and more concerned about helping students understand why it’s valuable to learn these skills independently. I do believe there is a place for enforcing AI restrictions (especially on high stakes assessments), but I’m fortunate to teach in a low-stakes context. In ESL classes, the priority is to focus on building student skills so that they leave my class equipped with the language necessary to engage in their other coursework. I want them to be prepared for whatever tasks and expectations will be thrown at them.
On that note, I’m currently involved in a multi-site research project that investigates AI policies among faculty across disciplines. The purpose of this study is to understand to what extent ESL students can assume that they will have access to AI tools for their university writing assignments, and to what extent they will be expected to write independently. We decided to focus our survey at the task level rather than the syllabus level in recognition of the fact that faculty may allow AI use for some writing tasks but not others. Our provisional findings indicate there is a vast amount of heterogeneity in AI expectations across university writing tasks. As of Fall 2024, a little over half of writing tasks described in the study did not allow any AI tools at all, about 5% allowed full and unrestricted access to AI, and the remainder allowed certain AI tools for certain parts of the task (with substantial variation in which uses were allowed and which were restricted). A smaller follow-up survey in Spring 2025 found generally the same pattern, with a slight shift towards AI acceptance (pretty much evenly split among tasks which fully prohibited and partially allowed AI tools; unrestricted access remained rare). These findings convinced us that, for the present, we need to continue teaching international students how to write independently without access to AI, since there is no guarantee they will always have access to it for all of their assignments. However, the data also supports the practice of introducing students to AI tools (and making sure they are aware that AI use policies will differ across tasks and across majors). The current evidence points towards the value of AI literacy as an additive, domain-specific skill which students can develop on top of their own independent writing ability – an ability which remains foundational to academic success.
What opportunities do you see for AI to have a genuinely positive impact on education?
I tend to be fairly critical of the impact of AI overall, but I also see its potential in specific applications. I’ve actually been using specific AI tools since 2018 – mostly tools that build on automatic speech recognition (ASR) abilities. For example, I’ve designed pronunciation practice activities that students can do at home using voice-to-text capabilities, and I’ve used AI-based transcription services to create first drafts of transcripts for listening assessments. I’ve also used AI transcription to speed the process of data analysis for several research projects I’m involved in. However, it’s important to make sure that you have informed consent from your participants before you run their data through any AI tools, and to check the output manually for accuracy.
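If you want to experiment with this kind of first-draft transcription yourself, here is a minimal sketch using the open-source whisper package – one option among many, and not necessarily the service I used. The model size and audio file name are just placeholders, and the draft transcript still needs a human proofread.

```python
# A minimal sketch of producing a first-draft transcript locally with the
# open-source "openai-whisper" package (install with: pip install openai-whisper).
# This is an illustration, not a recommendation of any particular tool.
import whisper

model = whisper.load_model("base")              # small model; larger models are more accurate
result = model.transcribe("lecture_audio.mp3")  # placeholder file name

# Full draft transcript (proofread before using it in an assessment or analysis)
print(result["text"])

# Time-stamped segments are handy for aligning listening-test items to the audio
for segment in result["segments"]:
    print(f"[{segment['start']:7.2f}s to {segment['end']:7.2f}s] {segment['text']}")
```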
I do think specific AI applications have the potential to be transformative in language assessment, with a huge caveat: it depends on whether AI is being used to degrade an assessment task which was formerly performed by a human, or to enable a task which was previously impossible due to practical or technological constraints. For example, there is a lot of discussion now in my field about the capacity of AI to serve as a conversational agent in a computerized assessment of speaking ability. My attitude towards this type of task hinges on what it is replacing. Some high-stakes language tests (such as IELTS) have historically included an oral interview with a human rater. I would be devastated if IELTS were to replace this human conversation with a simulated AI interview. However, many other standardized language tests currently lack any interactive speaking tasks. I would potentially be supportive of the addition of an AI-simulated conversation to those tests, not because I believe an AI-simulated conversation can capture all the important elements of a real human conversation, but because it is (most likely) better to include a simplified task than to include no interactive speaking tasks at all. Here’s another example: Burak Senel is a graduate student at Iowa State University who is developing a listening test which allows the listener to pause the lecture at any point, ask a clarification question aloud, and receive an AI-generated answer in the lecturer’s voice. The task uses retrieval-augmented generation (RAG) in an attempt to restrict the answer to the information included in the lecture, reducing the chance of hallucinations (see the sketch at the end of this answer for a toy illustration of the idea). This type of task better reflects the interactive listening skills that we want to encourage in our students, prioritizing active over passive listening. I think this is so cool. I think we should have this kind of test available yesterday. Again, I am most excited about the potential of AI to enable tasks which were previously impossible. However, my worry is that this type of application is uncommon in practice. Rather, the applications which seem to be getting most of the funding and attention are those which “reduce costs” (typically by outsourcing formerly-human tasks to AI). But it is a good reminder that the main problems with AI are not the tools themselves, but the systems which control and create them.
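About that RAG-based listening task: for readers who haven’t encountered retrieval-augmented generation before, here is a toy sketch of the retrieve-and-restrict step. It is my own illustration – invented lecture text, a simple TF-IDF retriever, and an example prompt – not Burak’s actual implementation.

```python
# A toy illustration of the retrieval-augmented generation (RAG) idea: the answer
# is generated only from the chunk of the lecture most relevant to the question.
# Lecture text, retriever, and prompt wording are all placeholders of my own.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

lecture_chunks = [
    "Photosynthesis converts light energy into chemical energy stored in glucose.",
    "The light-dependent reactions take place in the thylakoid membranes.",
    "The Calvin cycle fixes carbon dioxide into sugars in the stroma.",
]

def retrieve(question: str, chunks: list[str]) -> str:
    """Return the lecture chunk most similar to the student's question."""
    vectors = TfidfVectorizer().fit_transform(chunks + [question])
    similarities = cosine_similarity(vectors[-1], vectors[:-1])
    return chunks[similarities.argmax()]

def build_prompt(question: str, context: str) -> str:
    """Constrain the language model to the retrieved lecture text."""
    return (
        "Answer the student's question using ONLY the lecture excerpt below. "
        "If the excerpt does not contain the answer, say you don't know.\n\n"
        f"Lecture excerpt: {context}\n\nStudent question: {question}"
    )

question = "Where do the light-dependent reactions happen?"
prompt = build_prompt(question, retrieve(question, lecture_chunks))
# In a full system, this prompt would go to a language model and the answer
# would be synthesized in the lecturer's voice; here we just show the prompt.
print(prompt)
```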
What do you see as the biggest threats to education from AI or how it’s used?
I honestly see the biggest threat coming from philanthrocapitalism or venture philanthropy, two related concepts that involve donating venture capital money “no strings attached” but in ways that will ultimately generate more revenue for the donor. I see AI as just one more example of the ways that the tech sector is seeking to “disrupt” the non-profit world of education and turn it into a site of profit. Exhibit A: By making commercial AI tools free for students during the time they are in college, companies hope to create lifelong paying customers, since students develop habits while they are in school which can be difficult to shake afterward, especially if they never learned to write without the tool. Exhibit B: Many foundations donate money to causes which effectively promote deregulation and market-based solutions in education, positioning themselves as saviors of a broken system, typically through technological solutions (including AI) which collect data about student users and make predictions about those same students – essentially monetizing their investment in educational reform. Exhibit C: Ed-Tech money (including money from AI companies) supplies the bulk of funding for research in the educational field today, funding conferences, journals, grants, fellowships, and internships which steer researchers in the direction of questions that Silicon Valley wants them to ask (and make it difficult for critical voices to get published).
I could go on. For the moment, though, I would encourage you to look up the work of Ben Williamson, who has long been one of those rare critical voices through his blog Code Acts in Education, and Roderic Crooks, whose book Access Is Capture: How Edtech Reproduces Racial Inequality drew my attention to the extent of Ed-Tech surveillance and its tendency to fall heaviest on underprivileged kids.
What advice would you give faculty who feel overwhelmed by AI or unsure how to address it in their classroom?
You’re not alone, and you’re not crazy. “AI Fatigue” is the latest trending research term (which I learned from Jason Gulya) that describes how most of us are feeling about now. None of us asked for this. None of us asked to see ourselves and our students turned into educational guinea pigs. But this isn’t new. Billionaires have been trying to “transform education” (in ways that benefit their own pocketbooks) for as long as they have existed. Audrey Watters notes that 2026 marks the 100th anniversary of the first recorded attempt to replace human teachers with “teaching machines” (by Sidney Pressey in 1926). Note that these attempts have not worked so well. The history of Ed-Tech is mostly embarrassing. Remember Prezi? Remember MOOCs? Remember the Metaverse? Remember prompt engineering? Remember AGI (ha – I am writing this in early 2026 betting that by this summer “AGI” will be an abandoned buzzword)? Human teachers are still the bedrock of education. We’re underpaid, we’re overworked, we’re belittled by CEOs and administrators who know nothing about our profession. But we, not Ed-Tech, are the reason learning happens. “If you can read this, thank a teacher.”
Chances are, if you are reading this blog, you are already the sort of teacher who is thoughtful, reflective, open-minded, critical, and conscientious. Just – keep it up. Keep asking questions. Keep talking with your students and your colleagues about AI. Keep learning about this technology (and the goals of its funders). Join a community where you have the chance to trade ideas with others who share similar concerns. Find some way to organize locally if you can. But also, don’t put too much pressure on yourself. Touch grass. Hug your partner. Get some sleep. Talk about other stuff. None of us can keep up to date on all the AI developments all the time, and that’s part of their strategy – “information overload” is an intentional adversarial strategy used by political lobbyists to achieve regulatory capture (they tell politicians there is too much information for a non-expert to handle, so we need to just trust them – see Wei et al. 2024: https://dl.acm.org/doi/10.5555/3716662.3716796).
I have to keep reminding myself of something Solana Larsen said in a talk she gave for Stanford HAI not long after ChatGPT was released: AI companies have a certain type of expertise – they know how their tool works. But we, the users, have a certain type of expertise which is equally valuable. We know the consequences of using the tool, in ways that even their creators do not. They are too far removed from the front lines, but we are the front lines. They need us (even assuming the best of intentions on the part of the creators) to tell them what is working, what is not, and what is creating unintended consequences. So don’t ever let anyone tell you you are not an AI expert. You are an expert on the consequences of AI in your classroom. (You can watch Solana’s talk here: https://youtu.be/G_XpnTwsUbg?si=Q6cKwNWZXXCu60yc)
What advice would you give to a student entering college today?
Oh gosh, I’m sorry. I’m so sorry. Honestly, it’s not just AI, it’s everything in the news these days. Education is being hollowed out, and so is the workforce, and I don’t know what the world will be like in four years or what kind of jobs will be available when you graduate. I’m just so sorry that the world is like this. You deserve a world that treats you with respect and dignity, and supports your effort to build your skills so that you can contribute to your community. I’m sorry that this world is under attack. But I promise you, some of the people you meet in higher ed care a lot – about you, about education, about community. Find those people, hang on to them, and become one of them. As Mr. Rogers used to say whenever there was a tragedy: Look for the helpers. There are always helpers.
Speaking of Mr. Rogers, here are two other “helpers” whose work has helped me stay focused when I feel like giving up on education:
1. Gert Biesta (2015) argues that education has three purposes: qualification (gaining the qualifications that will get you a job), socialization (gaining the social skills that will help you to thrive in your job/community once you find it), and subjectification (the process of becoming a subject: that is, gaining the ability to advocate for your own ideas in your job/community using the social networks that you have built). Silicon Valley would like you to focus only on the first goal: gain skills so that they can use you, abuse you, and lose you after they’ve automated your job. But you are not going to school to become an automaton. You are going to school to become an agent of change.
2. Adam Mastroianni is my favorite blogger, and his blend of psychology and comedy never fails to inspire me when I’m feeling overwhelmed. These posts were also influential in my decision to quit my job and go back to school. I can 200% promise you that these three posts will not waste your time:
- There’s a Place for Everyone: https://www.experimental-history.com/p/theres-a-place-for-everyone
- Underrated Ways to Change the World: https://www.experimental-history.com/p/underrated-ways-to-change-the-world
- Thank you for Being Annoying: https://www.experimental-history.com/p/thank-you-for-being-annoying
What do you wish more people knew? Are there any misconceptions about AI or education that you’d like to correct?
So many misconceptions – but this is getting long, so let me keep it to just one. If anyone ever tells you that AI writing is better for the environment than human writing, ask for their sources and laugh loudly if they mention Bill Tomlinson. Tomlinson et al. (2024) and its follow-up, Ren et al. (2024), are the only studies I have seen so far cited in support of this claim, and this “support” is very silly, so let me disarm it for you.
The authors claim to calculate the carbon impact of using AI to write one page of text and compare that to the carbon impact of using human labor to write one page of text. However, the way that they calculate the carbon impact of human labor is hilarious. Instead of conducting a lifecycle analysis of various writing technologies (paper, pencil, pen, composing offline on a laptop, composing online in the cloud, etc.), they simply take the average per capita carbon emissions for one hour of human life and multiply that by their estimate of the time it takes a human to write one page of text. This is hilarious because, firstly, the majority of carbon emissions comes from companies, not individuals, and even among individuals, the majority of carbon emissions comes from the wealthy, not the rest of us, and even within an individual, some of our activities contribute more to emissions than others (flying to Paris or eating a burger contribute more than biking to the park or eating a salad from your own garden). So first off it is nonsensical to assign responsibility for human emissions as if we all share in it equally, when we don’t. But the authors also use a very silly method to determine the amount of time it takes a human to write one page of text: they pull a quote from Mark Twain saying that it took him about an hour to write 300 words, and then assume that this is the average for all of us. Literally. The Mark Twain quote is not even directly sourced: Tomlinson et al. cites The Writer magazine, which in turn links to a website called “twainquotes.com.” There are so many spurious inferences here. Even if the Twain quote were legit, we don’t know that his average was typical for his time, and we definitely aren’t given any reasons why the average from his time would be the average for our time. I kid you not. This entire study hinges on a single metric from twainquotes.com.
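To make the structure of that calculation concrete, here is roughly what it amounts to as I read it, with round placeholder numbers of my own rather than the paper’s actual figures:

```python
# Roughly the shape of the paper's "human writing" footprint, as I read it.
# All numbers below are my own round placeholders, not figures from the paper.
ANNUAL_PER_CAPITA_EMISSIONS_KG = 5_000   # placeholder: everything an average person emits in a year
HOURS_PER_YEAR = 365 * 24
WORDS_PER_HOUR = 300                     # the Mark Twain figure the whole estimate rests on
WORDS_PER_PAGE = 250                     # placeholder page length

hours_per_page = WORDS_PER_PAGE / WORDS_PER_HOUR
emissions_per_page = (ANNUAL_PER_CAPITA_EMISSIONS_KG / HOURS_PER_YEAR) * hours_per_page
print(f"'Human' emissions charged to one page: {emissions_per_page:.2f} kg CO2e")

# Notice what this number actually measures: a slice of everything a person does
# while alive (heating, transport, food), nothing specific to writing at all.
```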
Even worse, since the authors make no attempt to calculate the actual carbon impact of human writing per se, but only calculate the average per capita carbon impact of one hour of human life, their conclusion (that we should use AI instead of humans to write text) leaves us in a double bind: EITHER we simply add the carbon impact of AI to the carbon impact of one hour of human life (leading to an overall increase), OR we . . . uh . . . eliminate the human. This is literally an analysis of who deserves earth’s resources: humans or bots. If this sounds too grotesque to be believable, the follow-up study by Tomlinson and several co-authors (Ren et al., 2024) doubles down on this logic, making the argument that our bosses technically “rent a portion of that person’s life,” paying us for our labor not just with cash but with access to utilities such as water and electricity. By this dystopian logic, if the bot uses fewer resources than the human for one page of text, then the boss should allocate those resources to the bot instead of us.
The study also conveniently neglects to include any estimate of the human time spent prompting or editing the AI output, effectively counting the human time spent using AI as zero. They also avoid any discussion of whether the AI output is any good and whether it will generate more work down the line for human laborers who must fact-check or fix the text. Overall, it’s an almost unbelievably lazy piece of writing. It might not surprise you, therefore, to learn that the authors of the original Tomlinson et al. (2024) paper disclose that they used AI to write it, but they “ran the text through Turnitin” to check for plagiarism. Lol. This disclosure explains so much for me. One co-author, Andrew Torrance, also confesses a conflict of interest: he owns stock in Nvidia.
Despite the absolute laughability of this study, I have lost count of the times that I have seen it uncritically cited in other articles and in popular press, even by authors that I otherwise respect. Each time that I have heard a claim of this nature, I check the references or hyperlinks and it has led me back to Tomlinson et al. (2024) or the follow-up, Ren et al. (2024). (I will concede that the follow-up study uses the per capita residential carbon footprint instead of the total carbon footprint, a marginally better approach. But the Mark Twain metric remains.) For some reason, these papers keep getting cited, even by people who should know better. Frankly, even the lead author on the second study (Shaolei Ren at UC Riverside) should know better. He has done some very useful work elsewhere on AI’s water footprint, but in my opinion he swerved too far out of his lane on this paper (or trusted his co-author without checking his work).
Please, whenever you see this claim shared in your circles, speak up! Let’s make this paper the public laughingstock it was always meant to be. And in general, whenever you hear or read some claim about AI that sounds too good to be true, look it up and read it carefully. Hype almost always withers under closer inspection.
What’s one policy, support, or resource that you think higher ed truly needs to implement to address AI well?
I think my work on the AI expectations study has reinforced my belief that faculty, as both content and pedagogical experts, are the ones best equipped to make decisions about where and how AI should be incorporated in their classrooms. Immediately after the introduction of ChatGPT, we saw several institutions rush to “ban” AI on their campuses. This approach has since been abandoned, with a minority of institutions now even forcing all faculty to embrace AI: essentially banning AI restrictions at the classroom level. I personally think that both approaches are misguided. To quote some wisdom from Jon Ippolito, “Monoculture is the worst environment for innovation.” We need faculty to take the lead in exploring the usefulness of these tools in their own classrooms, which means giving them the pedagogical freedom to set AI expectations on a class-by-class or task-by-task basis, as our study explored. I think that, as the dust settles, we will most likely find that AI is useful for some learning tasks and harmful for others. We need to protect the rights of faculty both to experiment with AI and to say no to AI in their classrooms and learning management systems when they see these tools causing negative consequences for their students’ learning. And finally, we need to protect the rights of both students and faculty to “opt out” of using tools which they conscientiously object to.
Check out Katie Conrad’s A Blueprint for an AI Bill of Rights for a more fully articulated vision of the role of AI in higher education: https://kconrad.substack.com/p/a-blueprint-for-an-ai-bill-of-rights
*****
Rebecca Yeager has a Master’s degree in TESOL and Applied Linguistics from Indiana University and taught ESL for thirteen years at the University of Iowa, where she also assumed responsibility for English placement testing. She is now studying for a PhD at the University of Illinois Urbana-Champaign under Xun Yan. Her research interests focus on the intersection of listening, writing, and integrated assessment, with particular attention to domain analysis, cognitive validity, and consequential validity. She has published in Language Teaching Research, the International Journal of Listening, the Journal of English for Academic Purposes, and Language Testing.
