Folks! I was asked to join another podcast episode with the team at Human Intelligence Institute.
In this 20-minute episode, I got to talk about the potential of play in learning and life, to enable us to imagine and work towards a future we value. I also talked about the professional risks of being openly critical of GenAI in education, and gave a very scrappy history lesson on the constraining, standardising role of “design” in institutional education.
What I wish I had said: it doesn’t have to be that way. The greatest learning designers out there are the everyday activists for inclusive, relational and accessible education.
Listen to the full podcast at this link, or read the transcript below.
Transcript
Kira: I’m Kira Cleveland, and this is Human Intelligence. With my co-host, Ned Hayes, we talk about the importance of human creativity in the age of AI. Welcome to the future. Today, we’re talking to Miriam Reynoldson, the author of An Open Letter for Educators Who Refuse the Call to Adopt Generative AI in Education, which has gotten a lot of attention from educators around the world. Miriam is a digital learning design specialist based in Melbourne, Australia. She’s a fierce defender of equity, ethics, and the diversity of human experience. Having worked in adult education for most of her adult life, her current aim is to explore the value of adult learning practices beyond formal education. Welcome to the show, Miriam. We’re glad to have you.
Me: Thank you for having me.
Kira: Well, I’m excited to kind of start off with our favorite go-to question, but what is generative AI? Like, what is it to you?
Me: Oh, gosh. I first started learning about generative AI models when I read an adorable book about six years ago called You Look Like a Thing and I Love You, which is written by a machine learning engineer and researcher named Janelle Shane, who was doing all sorts of ridiculous things with the early GPT models.
And so, you know, she used to ask the various models to create different paint colours or name different recipes. And the models were small at the time, so they were producing all sorts of hilarious results. She was just having a lot of fun with it, and then wrote this book that was fully illustrated with her own kind of scrappy cartoons, explaining how large language models work and how generative adversarial networks worked (although most of the image-generating models now are diffusion models rather than GANs).
But I really enjoyed the idea of a generator and a discriminator, which I thought of as an author and a critic, these two algorithms pinging off one another. And so my understanding of how most of the generative algorithms that we talk about work comes from that ridiculous illustrated book, which I really recommend.
Ned: We’ll definitely list the book in the show notes. Thank you for that recommendation. It sounds both funny and instructive, which I bet might be part of your pedagogical practice – engaging people, and yet also teaching them something. So I was curious if you could tell us more about how you got into pedagogy and education in the first place.
Me: Very sideways move, which is what happens for most learning designers. In my first career, I was moving into editing and publishing, which was a really exciting thing for someone who doesn’t understand how the economy works! About 20 years ago, the bottom fell out of the publishing industry, I found myself doing a lot of freelance editing, and ended up working with a learning design team and editing their work.
I kind of learned a lot of the pedagogical and education system concepts by osmosis and fell in love. So it’s been a very, very sideways, kind of stealth move. And I didn’t start teaching until about 10 years into my career. Now I teach educational design and a lot of other things, and I absolutely adore it, because it’s shifted a lot of my understanding from theoretical, at-scale principles to the on-the-ground reality. So I feel like my career has been inside out.
Ned: Yeah, well, it’s fascinating. We’re using terms like educational design and digital curriculum now, and we’re using them without questioning them. But 30, 40 years ago, I don’t think people had thought about those concepts in the same way. So what’s changed in education that now we have this whole kind of design industry around education?
Me: Well, the reality is that since the introduction of organised institutional education, there has always been a design… design has essentially been the educational system. Standardised curriculum, models that allow hundreds or thousands of students to learn the same thing: it has always been about design. As I’ve learned more about the field that I fell into, I learned more about how so many of the models and design processes, ADDIE, the concept of standardised testing and multiple choice, have been imported from the military.
And so there are a tremendous number of principles and things that I only learned to understand from other angles when I began teaching and discovering how they weren’t really a fit for the culture of learning and teaching on the ground.
Kira: That makes sense. And that’s really interesting. You know, when we think about education design, AI wasn’t a part of that at the beginning initially, right? And since it’s become more of that conversation, trying to get into that space, you’ve pushed back on including AI in curriculum design. In fact, you posted the open letter July 5th, 2025 [link here], and since then, the letter has accumulated 916 educator signatures, from what I understand, working across like 35 different countries. So my question’s like, how has it been for you seeing your letter make that kind of an impact?
Me: If I’m honest, I messaged a colleague, Melanie Dusseau, who’s the first signature, just before going away for the weekend. And I said, “I saw this really fabulous open letter.” There was already one going, it was about a week in, the open letter from the Netherlands led by Olivia Guest [here, launched 27 June 2025 and still open for signatures], which is phenomenal and much stronger worded than ours. And another that came out of Literary Hub [here, published 27 June 2025].
And I went, “We should have something like this! But not just for academia, and not just for the publishing industry. Why don’t we throw something out there?” So we whipped it up really fast and put it out. And I thought, if we get 30 people who agree with us, we’ll feel a little bit less alone. So the fact that it has circulated so widely is really heartening.
But what’s also been really significant for me and has really sort of lit a fire in my belly is that I have been messaged by hundreds of other people who cannot sign it.
Kira: Interesting. Can you tell us a little bit more about that?
Me: Sure. Look, there are many, many folks out there who are in a position where putting their name to something like this is a shockingly political and divisive act. And I found it absolutely astounding at first, because this is not, you know, it’s not a political statement about genocide. It is, “I do not want to use this software in my class.”
Ned: It should be a softball throw. Like, “I don’t want to use this software.” It shouldn’t be controversial.
Me: Precisely. I understand it, personally, because I have been pulled up by one of my own universities for having made a statement in the media, which I found really surprising because in theory, educators in the university system are protected by academic freedom. But the reality of that system is a lot less robust and a lot less protected than we imagine it might be in principle.
So I’ve spoken to so many people, particularly in the university system, but well beyond it across the K-12 system, who are not in a position where that would be a safe move to make.
And so for me, part of the open letter is about finding a sense of solidarity and community. And part of it, surprisingly, was about learning what things are dangerous to say at the moment and how we need to make space to be able to say them for and with each other. I don’t know the answers yet, but I’m working on it.
Ned: Well, you state so well in the letter. You state: “Through education, learners should be empowered to participate meaningfully in society, industry, and the planet.” And then you go on to write, “but in its current form, Gen AI is corrosive to the agency of students, educators, and professionals.” It’s a really powerful statement. Could you expand on that a bit?
Me: “In its current form” is really shorthand for essentially the big five. The generative AI models that we’re talking about when we talk about generative AI at the moment are specifically those coming out of OpenAI, Google, Meta, I hesitate to say xAI, but I guess it’s out there, Anthropic.
And these… despite some veneer of non-profit, these are commercial models. Their companies are organising very significant, very partisan partnerships with universities and school districts, and are exerting a significant amount of coercion and control over both the institutions and the educators and students who operate within them. I’ve found that really insidious.
I also struggle a bit, because my own students’ use of AI is unrestricted. That’s a policy that we have in our department, but it’s also a policy that I really support. I particularly like being able to give students feedback on their use of whatever tools that they’ve told me they’ve used. Because if I understand their thinking process, that helps me to give them feedback on how they’re working, what’s working and what’s not working, regardless of what it is.
But when I see students using it heavily, usually at the beginning of their program, the level of reliance on whatever tool is shockingly high. And they’re essentially finding themselves in spaces where they’re not really able to think outside of what’s being fed to them.
So there’s a macro level coercion and there’s a micro level subconscious coercion happening. And I just want to make spaces where we can say, “We don’t have to do this.”
Kira: Yeah, absolutely. And that’s a huge service. And, you know, along the lines of what you’re just sharing, what’s interesting is there is a lot of research coming out right now on how over-reliance on AI among younger people is causing a kind of diminished capacity for critical thinking. Is that something that you’re seeing out in the world as well?
Me: Look, I think we’re having a very challenging time in terms of the evidence coming out. It’s fairly widely understood that the research being published is of very, very patchy quality. And of course, any individual educator is going to have, you know, a lot of personal lived experience, but only from a single perspective.
So I can see something and interpret it, but I know that it’s only my experience. It’s only the people that I’ve met or the people that I’ve spoken to. So I hesitate to rest my arguments on – I’m doing scare quotes here – “evidence”. Because I think the evidence that we’re getting is… patchy.
But I do worry, I do really worry that we’re in a situation where we do not have evidence, really, we do not know what the consequences will be and we have every reason to expect that they will cause a diminishment.
Twenty or twenty-five years after the birth of social media, our young people are in such an absolute crisis of wellbeing. I really do not want us in a situation where twenty years from now, our young people are in such a situation that they’re not able to think without these tools that cannot think.
Ned: Right. Just before we started this interview, Kira and I were talking about Jonathan Haidt’s The Anxious Generation. And we were talking about the possibility that, as you said, in 20 years, there might be an AI generation book that points out the harms and ills that we can already anticipate are happening with cognitive decline, with over-dependence. And so I’m curious. We’ve talked about where things are going badly. How do you think things could be good? If we could shift the world, if Miriam was in charge, how would you shift things so that things would go in a good direction?
Me: It’s a really difficult question to answer. I think there’s a lot of magical thinking about being able to roll back the march of the corporate push. And so what I imagine is clearly a practical impossibility.
But what I desperately want to see is for the lawsuits to begin to win. I want to see these corporations that we’ve spoken about being really, really held to account. There was recently a settlement with Anthropic on their piracy of the 7 million books identified.
Originally, we understood that they would be on the hook for probably, I think it was $150K per work? Or potentially per author of the works, which at scale was hundreds of billions of dollars. The settlement has seen them agreeing to pay $1.5 billion total. So there is a settlement, they have acknowledged a wrong, but that’s far from truly being held to account. The true amount would have bankrupted the company and closed it. And that was a conclusion that I was really hoping for because it’s a precedent that then rolls onto Meta, it rolls onto OpenAI, I imagine it rolls onto Google, although I’m not sure if they use the same data set.
But if we actually begin to punish those organisations for the crimes that they have committed and are continuing to commit, I believe we could actually see a reckoning of corporate excess, and we could begin to move towards investing in ways of pursuing these technologies that actually support communities and the planet.
Ned: Yeah. Well, so I have a little bit of exposure to early childhood education because my wife works with younger children, four, five, six-year-olds, and she’s an expert in that field. And so she’s taught me about the Reggio Emilia approach, which really views children as capable, curious, full of potential. And as I read The Anxious Generation, I saw some impulse there to really welcome playfulness back into the lives of children and to welcome exploration. And so I’m curious if you could talk about that kind of future for children, of being playful and curious and open and learning rather than being constricted into kind of a corporate model.
Me: Well, the first thing to say is that the conversation around generative AI is an absolutely tiny political move within a much broader constellation that constricts children’s autonomy and playfulness. The second is that I obviously focus on adult education. So I don’t have very clever things to say beyond armchair philosophy about children’s play.
But I am a fierce advocate of play throughout life. Something that I talk about a lot is the way that as adolescents and then as adults, we begin to transform play into learning to play games.
And the distinction is, essentially, it moves from an enjoyment of discovery to an attempt to figure out what the rules are and how to win. And there is no ultimate winning, there’s just getting more money. But once we become trapped in that cycle of “follow or beat the rules” to succeed, to get the points, we lose that ability to explore the otherwise.
You asked me a moment ago, “what might an otherwise look like”. And I think that when we – children, adults, all of us – have the opportunity to play, we’re able to imagine an otherwise that we value, or at least one that we enjoy and can laugh at.
I’d like to see more of those spaces for all of us – but it’s a much bigger conversation than generative AI!
Kira: Absolutely, it is. But along those lines, I appreciate you bringing that otherness, those opportunities for something other than what is being pushed into the education system for people to choose powerfully. You know, your open letter is a demonstration of your commitment to that concept. And so it’s been really great to pick your brain and understand your thoughts on AI and the education system and the way that you are thoughtfully approaching it. So thank you so much.
Me: No worries. Thank you very much.
© 2025 Human Intelligence Institute