A lot of folk talk about embedding “Responsible AI” in professional contexts, which I think is a wonderful thing to do. So, in support of this movement, I wanted to share a breakdown of how I’ve embedded Responsible AI in my own learning design practice.
Anatomy of a Responsible AI project: Designing an online course
In this project, we designed a six-week-long course focused on operationalising climate action in Australian organisations. The course is taught and assessed online, for credit, but the primary goal wasn’t to prove that learners were competent — it was to provide them with systems, strategies and real-world leverage to implement climate action.
Here are the key steps we took, and how we incorporated Responsible AI at each stage.
Writing the learning outcomes
First we needed to identify the main goals of the course — that is, its learning outcomes. Basically, learning outcomes are the things we commit to fully supporting learners to achieve throughout the course.
We decided not to use generative AI to create these, because they essentially represent a contract between all designers, educators and learners: these are the central goals to which we all commit through this education program. We didn’t want to offload that agreement.
Instead, we workshopped the learning outcomes between a subject matter expert, the client and our design team. This was priceless: it meant we could agree on shared words and meanings, and then go forward and design the course together from that common ground.
Course planning
Then, we had to plan the sequence of the course. We knew it would be six weeks long, and we knew how the assessment would be structured (a final task that learners would do a little bit of process work towards every week).
We decided not to use generative AI to map the course content, because the course had an Australian focus and covered new developments in public policy and strategy that post-dated all “official” LLM training data.
Instead, our subject matter expert advised us on a logical sequence of topics to cover.
Assessment design
We next needed to write the assessment instructions. Our approach is authentic assessment, which means tasks are based on real-world practice, so that learners will be undertaking a task they would need to perform in their professional lives.
We decided not to use generative AI to write the assessment, because generative AI has no real-world practice of its own to draw on; it has never performed the professional tasks we were asking learners to undertake.
Instead, we worked with the subject matter expert and client to understand what good, contemporary, situated practice looked like, and wrote task instructions and success criteria based on this.
Content writing
The course itself needed to be written, and we needed to base it on quality sources of information.
We decided not to use generative AI to write the content, because if using a US-trained model, we felt we’d be likely to see a lot of factual errors about the Australian context and even, potentially, about climate science.
Instead, we worked with a subject matter expert to write the content, because they had real-world, current knowledge and experience. Where their knowledge had gaps, they were able to identify where to look and who to ask for accurate knowledge of the content.
Activity design
To support learners to apply the knowledge we were presenting to them, we designed a sequence of learning activities, discussions and tasks.
We decided not to use generative AI to design the learning activities, because once we understood the content and the final assessment it would be applied to, the activities just sort of write themselves.
Or, more accurately, we have already had the conversations where we’ve talked through what application looks like, the words used to explain it, and the considerations involved.
Media production
Of course, an online course needs to be more than just a textbook. We needed to develop visual aids and other media — illustrations, figures, interactive diagrams, videos presenting the perspectives of expert practitioners.
We decided not to use generative AI to create any of the media for this, because we had specific requirements for what these would look like, and LLMs and diffusion models are notoriously poor at following precise specifications.
Instead we engaged a skilled visual designer and videographer to produce a series of media assets for embedding throughout the course. The bulk of the time was not in the creation of these assets, but in the processes of review and refinement, which involved a lot of subject matter expert and client feedback.
At each stage, I’ve tried to show what kind of thought and multi-person input needs to go into each step of designing a course. These are complex, involved, tricky, and incredibly rewarding processes.
It’s not production that takes the time — it’s working together. I’m not simply saying you can’t skip that. I’m saying I hope you don’t actually want to. This job is so much fun.
Of course, I should be completely honest with you and say that we didn’t really give any thought to using generative AI for any of these steps of the learning design process, because education is a project of sharing and extending knowledge between teachers and students, and generative AI is not capable of producing knowledge.
The content produced by these technologies (mostly large language models and diffusion models baked into a basic chatbot interface) is created through probabilistic processes: the words or visuals produced are whatever is statistically most likely to appear next, not what is correct.
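If you want to see what that means mechanically, here's a toy sketch in Python. The probabilities are invented for illustration (they don't come from any real model), but the decoding logic has the standard shape: pick, or sample, whatever is most likely, with no check against the truth.

```python
import random

# Invented next-token probabilities for the prompt
# "The capital of Australia is ___" (illustrative numbers only).
next_token_probs = {
    "Canberra": 0.6,
    "Sydney": 0.3,
    "Melbourne": 0.1,
}

# Greedy decoding: take the single most probable continuation.
print(max(next_token_probs, key=next_token_probs.get))  # "Canberra"

# Sampling: draw in proportion to probability, so roughly 3 in 10
# runs would confidently print "Sydney" instead.
tokens, weights = zip(*next_token_probs.items())
print(random.choices(tokens, weights=weights, k=1)[0])
```

Nothing in either step asks whether the output is true. If wrong answers had dominated the training data, the wrong answer would dominate the probabilities, and therefore the output.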
If a student has invested time, effort, and — more likely than not — money into pursuing an education program, they are putting their trust in the educator to deliver material they know, believe and care about. To generate such material using generative AI would not only be unethical, but an actual failure to provide education itself.
It would also have been somewhat irresponsible to use software built on predatory, exploitative labour, intellectual property theft at a global scale, and geopolitical control by profit-driven interests (including, but not only, the venture capitalists who fund generative AI development).
Responsible AI means choosing not to.
