HEAR FROM THE EXPERTS
Maha Bali

Could you briefly introduce yourself and your current role?
I consider myself a critical, open and connected educator. Formally, I work full time at the American University in Cairo as a professor of practice at the Center for Learning and Teaching. My primary role is as an educational developer: to support other faculty members with their teaching, which now, of course, includes supporting them as they adapt to the new reality of generative AI and its impact on their courses. I also usually teach one undergraduate liberal arts course called Digital Literacies and Intercultural Learning, which always included an element of AI (pre-ChatGPT even) and now has AI literacy in the title. Apart from my official roles, I am also a public scholar, and I do a lot of my work online and in the open, with my blog, with workshops I offer, with international collaborations, and I also often get invited to speak.
In the context of AI, I should clarify that my undergraduate degree was in computer science and my graduation thesis used machine learning, so I understand very clearly how AI works. Since then, I have moved on to focusing on education, and my graduate studies and current work all focus on social justice and care in education, including in digital education.
How has AI changed what you do?
- Educational development work: I had to create workshops on this topic and navigate multiple approaches to suit different audiences with different goals. People needed that locally and internationally. Unfortunately, this distracts from, and leaves less energy for, other important pedagogical topics.
- Teaching: I was already teaching about AI as part of digital literacies prior to ChatGPT; now I do much more critical AI literacy in my course and help students unpack all the issues and make their own ethical decisions about AI use in their education and lives. But I always start by laying a foundation of deep understanding of the inequities and biases in the world before we apply this to AI.
What opportunities, if any, do you see for AI to have a genuinely positive impact on education?
I’d rather talk about threats, but three positive things are:
- Now educators really need to focus on teaching what is important, in authentic ways that are relevant to students: things that AI cannot replace. That means focusing on process, not product, because AI produces things that look like the product but have none of the effort behind them that constitutes the learning.
- AI tools have been and continue to be supportive of accessibility. They don't replace human effort in accessibility, but can support it: things like auto-generated closed captions live on video calls, or image interpretation, such as BeMyAI. They are imperfect but helpful for the most part.
- AI for translation is also imperfect but helpful to speed up translation for professional translators, and helpful when we need to communicate quickly with people who speak different languages. I’m wary of overusing this because of how badly it can go wrong, but I use it when I am stuck in a foreign country. I also use it sometimes between two languages I am fluent in, just to speed up the act of translation, then I revise.
What do you see as the biggest threats to education, if any, from AI or how it’s used?
Biggest threats are loss of agency, accountability, and human relationships, when educators and administrators use AI for things like learning analytics (labeling students based on behavioral markers rather than interacting with them as humans), grading/feedback (which would demotivate students if they think no human is reading their work, work that is possibly irrelevant to them to begin with), and honestly any replacement of personal interaction with an AI interaction. Even if a bot’s job is to replace a teaching assistant, this is a lost opportunity to build human relationships and converts question posing to a transactional act rather than a potential beginning of a connection that could grow with another human.
We already know that past AI tools have reproduced biases in criminal justice and recruitment. The same would happen (and is starting to happen) with college applications and grading. Who is accountable for the biases reproduced with AI?
When we claim to use AI for learning personalization, what we actually do is take away learners' and teachers' agency and pretend learning happens best on an individual level, forgetting all we know about social constructivism and the value of learning with peers and teachers.
Of course we should also worry, I think, about young people who grow up with AI and have never known anything else; if they don’t develop critical AI literacies, they may not learn to question the validity of AI outputs, may not know how to recognize its biases, may start using it instead of talking to people when they need help.
The amount of AI slop filling the internet and deep fakes are going to make it difficult to verify AI outputs in the future (it’s already starting) and we need to keep updating our literacies to deal with this amount of disinformation and misinformation.
What advice would you give faculty who feel overwhelmed by AI or unsure how to address it in their classroom?
I want to tell them that it’s OK to feel overwhelmed by what you don’t know, and not to let the hype about AI get to them.
However, I also want to tell them three important things:
- Our students have access to AI, whether we like it or not. There is no need to fall for any hype that AI will inevitably transform education for the better, but there is reason to believe that students will use AI whether we guide them on it or not. Therefore it is important for all educators to know more about this thing students are using and to know what it is capable of (much less than the hype, believe me, but it looks impressive to students at first glance), so we can judge how it will impact our teaching;
- Don’t be intimidated. It doesn’t take that much effort to learn to use AI, and you don’t have to do it alone. For a starting place check out the AI Pedagogy Project, which has a starter kit and then also curated assignments and activities that other people have done, and which you can use or adapt yourself to develop students’ AI literacies. Develop your own critical AI literacies first (e.g., make sure you are aware of the inequities and biases AI can reproduce, as well as ethical challenges AI poses). Find others around you who are using AI and learn from them: colleagues, students, librarians, and of course if you have educational developers like myself.
- Keep your values and teaching philosophy front and center, then figure out how AI fits (or doesn't fit!) into that. If you choose to restrict or prohibit AI use altogether, just know that you may need to take very particular measures to ensure that. If you do choose to integrate AI into your classroom somehow, make sure you are fully aware of the risks and downsides so that you and your students use it ethically and appropriately. This is the part that is not straightforward, and there is guidance in different places, but I believe these questions are complex and it will take time before we all agree on what is acceptable and what is not, in ways that are also feasible (e.g., the EU came up with very good guidelines, but they are difficult to implement).
What do you wish more people knew?
I wish more people took the negative impacts of AI more seriously. Even if generative AI tools eventually got better and lived up to the hype, the negative impacts would still be there, as well as the harm from automation altogether! It’s not a question of CAN we automate this, but SHOULD we automate this?
People who tend to create and sell us these tools are not representative of the most marginalized in society, so they will never understand how badly these tools reproduce oppression.
If you have the privilege to not be harmed by these tools, try to learn about how they harm others, and don't dismiss harm and ethics in the name of progress and innovation. This capitalist, modernist, colonialist type of discourse and action cannot be how we continue to live our lives, and educators should not be perpetuating this kind of discourse and passing it on to future generations as the natural order of things.
How can readers connect with you or follow your work?
I’m generally quite active on social media. I blog at https://blog.mahabali.me and I used to be most active on Twitter/X but this has changed and I am now trying to move to Bluesky: @mahabali.bsky.social
I would also encourage folks to look out for Equity Unbound activities – we offer equity-oriented professional learning for educators around the world, most intensively during the mid-year (see https://myfest.equityunbound.org) but also throughout the year. To sign up for our mailing list, email Unboundeq@gmail.com.
*****
Biography: Maha Bali is Professor of Practice at the Center for Learning and Teaching at the American University in Cairo. She has a PhD in Education from the University of Sheffield, UK. She is co-founder of virtuallyconnecting.org (a grassroots movement that challenges academic gatekeeping at conferences) and co-facilitator of Equity Unbound (an equity-focused, open, connected intercultural learning curriculum, which has also branched into academic community activities such as Continuity with Care, Socially Just Academia, a collaboration with OneHE on community-building resources, and MYFest, an innovative 3-month professional learning journey). She writes and speaks frequently about social justice, critical pedagogy, and open and online education. She blogs regularly at https://blog.mahabali.me and tweets @bali_maha.