Warr & Heath, https://doi.org/10.1177/00224871251325073

Abstract:

In this article, we explore the concept of a “hidden curriculum” within generative AI, specifically Large Language Models (LLMs), and its intersection with the hidden curriculum in education. We highlight how AI, trained on biased human data, can perpetuate societal inequities and discriminatory practices despite appearing objective. We present a technology audit that examines how LLMs score and provide feedback on student writing samples paired with student descriptions. Findings reveal that LLMs exhibit implicit biases, such as assigning lower scores when students are said to attend an “inner-city school” or prefer rap music. In addition, the feedback text given to passages said to be written by Black and Hispanic students displayed higher levels of clout or authority, mirroring and legitimizing power dynamics of schooling. We conclude by discussing implications of these findings for teacher education, policy, and research, emphasizing the need to address AI’s hidden curriculum to avoid perpetuating educational inequality.

