I’m sorry for so many posts so close together. I understand if you can’t read them all, and I truly thank you if you do anyway. But this is important, as I believe people’s safety is at risk.
It’s come to my attention that certain “AI consultants” are responding to our GenAI refusal in education open letter with misinformation, jeering, and commentary that borders on threats.
On 10 July 2025, a LinkedIn poster writes:
UNPOPULAR OPINION:
It’s come to my attention that there’s now a petition going around for educators to opt out of AI entirely.
First of all: good luck.
Second…I get the instinct. I really do. But I wonder:
Is opting out really helping students—or just avoiding the hard part?
Many of us already assign work that doesn’t require AI. That’s not the issue.
The petition goes further: it encourages educators to reject AI categorically, refusing new tools, partnerships, even curriculum changes.
But let’s be honest: pretending AI didn’t happen won’t protect education. It will just further isolate educators from the technological shifts already reshaping society, industry, and the workforce.
Maybe you have been offered a false binary.
But what about a third option? I would love to know why that is so out of reach.
While some educators double down on bans, students are already building relationships with AI, in secret, uncritically, and without guidance.
In my personal opinion, education is the BEST place to teach:
Withholding exposure doesn’t make students safer. It only makes them less prepared, more vulnerable to misinformation, and more likely to develop unhealthy, dependent, or misinformed relationships with GenAI.
Critical AI literacy is not just hype. It’s also about preparing students to ask better questions, make ethical choices, and build resilience in a world where these tools aren’t going away.
We don’t prevent harm by closing our eyes.
We prevent harm by teaching students how to see clearly.
Change my mind.
#AIinEducation #EdTech #CriticalAI #TeachingWithAI
I’ve reported this post on LinkedIn as misinformation, as it’s beginning to lead to suggestions of shaming and professionally sabotaging those education professionals who have lent their names to the letter.
By the time I’d reported it, the post was no longer visible to me, but a friend then let me know that this comment had been added:
“I think people who sign the petition should do so publicly and identify their university. Then, the list should be sortable by subject, major, professor, and university. This way students can make better decisions for 2026-27 enrolment.”
Suggesting that signatories of the letter should be searchable in some kind of database as AI-deniers, as the comment above proposes, is dangerously close to promoting systematic persecution.
I feel responsible for all those whose job security is threatened by this ugly remark.
I’m also disappointed, though not surprised, that the post above has framed our letter as calling for a ban. It is the precise opposite: a statement defending choice.
The post has also misinterpreted it as calling for refusal of new tools. The letter states:
We will not promote institutional GenAI products built on unethically-developed foundation models like ChatGPT, Claude, Copilot, Gemini, Grok or Llama.
This doesn’t preclude new products, or products not built on these rapidly scaled models founded on piracy and modern slavery.
The post claims we refuse AI partnerships. The letter states:
We will not allow corporate-institutional partnerships to compromise our academic freedom.
In fact, this statement implicitly accepts that corporate-institutional partnerships will continue to exist. But we won’t allow corporate powers to compromise our rights, nor our students’ right to a quality education.
The post suggests we refuse curriculum changes. Again, this is not the wording or the intent of the letter. The letter states:
We will not rewrite curriculum to insert generative AI into it for the purposes of “scaffolding AI literacy”.
Curriculum changes aren’t just inevitable — they’re essential. But AI literacy isn’t a reason we’ll accept for them. Speaking entirely for myself, I support all curriculum changes that better prepare students for the real-world practice that awaits them; but I don’t believe that using generative AI is a meaningful way to support that practice in my own disciplines (education, design and sociology). Through 2024-25 I’ve done extensive research, engaging with dozens of university educators, to explore meaningful applications of generative AI in course design that neither add to workload nor reduce education quality. So far, I have found none. I won’t sell my students something I believe to be untrue.
I don’t believe “AI literacy” is literacy. I don’t believe “critical AI literacy” is literacy either — though I understand the sentiment, and it is important. I don’t believe I’d be serving my students by trying to shoehorn courses on ethics and software training into the already profoundly limited time I get to share with them.
But, and this is the part that matters most of all, our letter does not call for a ban.
Banning generative AI would be a meaningless act, one that would take rights and options away from people and remove educators’ ability to provide meaningful support for students who choose to use it.
None of the education professionals who have signed, or will sign, our open letter should be misrepresented or vilified for their sober choices.
Any discussion to this effect is harassment and will not be tolerated.