A special issue of JIME (the Journal of Interactive Media in Education) just came out. It’s a cool one: a deep dive into metaphors of AI in education.
Metaphor is super fun. It’s a way of using language to frame one thing in terms of another, in order to make a particular kind of sense of the first thing. There have been so many, so many metaphors proposed over the past five years or so, as various forms of generative AI have emerged in the popular consciousness. The articles in the special issue are fascinating and powerful and important, and I have a disclosure to make in two parts: (a) many are written by brilliant people I consider friends, and (b) I haven’t read them all yet.
But as delicious and insightful as metaphors are, one concern keeps returning to me as I contemplate them.
The metaphor isn’t the thing.
Ultimately, under enough pressure, every metaphor will fall down and stop working.
The authors of “On the Dangers of Stochastic Parrots”, Bender, Gebru, McMillan-Major and Shmitchell (tee hee, honestly my favourite thing about that paper), have battled volumes of critique about their metaphor for LLMs, because of course text-generating algorithms aren’t actually parrots. The metaphor is a critique; an emphasis; a vibe. It draws attention to particular features of LLMs in order to emphasise the role of probabilities and the consequent nature of output, and all other features are disregarded because they aren’t the authors’ point.
Metaphors are rhetorical devices, designed specifically to convey the values of those who use them to those who hear them. If I tell you that there is a “recipe” for success, I’m trying to convince you to view success as something that can be achieved through formulaic means – maybe to reassure you that it’s not so difficult. I hope you’ll feel reassured. I hope you’ll listen to what I tell you next. The “recipe” metaphor isn’t decorative; it’s a real strategy I’m using to influence your feelings and behaviour.
Metaphors are sensemaking on rails. By that I mean that the kind of sense that can be made is limited to the specific sense prescribed by the metaphor-user. You have to stay on those rails, because the metaphor can’t travel anywhere else.
With this view in mind, I think that metaphor is an incredible means of conveying feeling between two people. I think that’s why I enjoy metaphors so very much – I enjoy the unexpected mouthfeel of a thought with another thought’s hat on. (Not hackneyed old sayings like “mountain of paperwork”, which have lost the power to surprise. It doesn’t make me think of a towering craggy imposing mountain any more – I just think of, like, a lot of paperwork.)
But we mustn’t forget the hard limits of metaphors to inform or educate. We must never take them too seriously, and we must be incredibly careful never to become used to them, because then we will forget what they are and what work they are doing.
Artificial intelligence is, itself, a metaphor. The field of AI was founded on the basis of “the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
That is: the term likens the technologies to manufactured “intelligence”, at least according to a definition of intelligence espoused at Dartmouth when the term was coined for the Summer Research Project on Artificial Intelligence in 1956.
Of course, none of the technologies are intelligence; they simply simulate aspects of intelligence (again, according to the Dartmouth definition). By clinging so tightly to this metaphor for 70 years now, we have forgotten what work it has been doing, and how it has shaped our feelings and behaviour.
Analysing our (fresh) metaphors of “AI” is essential work. In light of the past three years of jubilation, hope, panic and fury, of adoption and transformation and delusion and despair, we need urgently to reflect on how our mental models of “AI” are making us feel, think and act.
But then.
Then, we need to move on.
Not long before the launch of ChatGPT, Emily Tucker of Georgetown Law’s Center on Privacy & Technology announced that the Center would no longer be using the terms “artificial intelligence”, “AI”, or “machine learning”.
Instead, she pledged, the Center would use clear, specific, and informative language to refer to digital technologies. It would:
- identify what the technology does and how
- make efforts to identify when this information has been obscured from the public
- name the companies who create and distribute the technology
- ascribe agency to the people who make, use and promote it, not to the technology itself.
Tucker’s full article, “Artifice and Intelligence”, is well worth reading. She unpacks the inherent misdirection of the “intelligence” metaphor, showing how the word was used to focus tech development efforts on attempts to counterfeit intelligence – not to create it.
Speaking and writing without the “AI” label is challenging. Yes, “AI” is lovely and short – but that’s not why it’s hard.
It’s challenging because most of us still don’t understand very well what these technologies do. It’s challenging because the information we need is still so often obscured from us.
But, I think, it’s long past time to demand that information. Education must go beyond the metaphors, beyond the vibes. As educators, we need to understand our technological circumstances far more deeply; as students and researchers, we deserve more than the hollow scaffolding of a metaphorical explanation.
Try it. Next time someone (it could be you!) uses the term “AI”, ask out loud:
“Cool name – but what does it do, and who decided that?”
