Assessment in higher education is in crisis! Providers across Australia and the world are scrabbling to “secure” their assessment processes in the “age of AI”.
Security, here, means surveillance and controlled conditions. What’s being “secured” is the sense of certainty that students’ work is their own, and not generated by artificial intelligence. We don’t want students cheating their way through college.
But validity has been positioned as far more important to assessment than cheating: “a student’s assessment submission is valid if it represents their actual capability”.
And yes, a secure assessment might be “valid” in the sense that it is definitely the student’s own work (we locked them in a room and watched them closely as they produced it). But whether what they produced represents their “actual capability” is another matter. And, indeed, what is actual capability? Is student work representative of actual capability if the student is free from all software assistance, assistive technologies, peer support and medication? Or are some of these acceptable? Under what conditions is actual capability elicited?
I agree that validity is critical in assessment. But I don’t think we can ignore the heart of this “assessment crisis”: it is not a crisis of validity, but one fundamentally about cheating.
Only… what even is that?
What is cheating?
It’s surprising to discover, after all of this moral panic about cheating in assessment, that there is a great gaping blank where a definition of “cheating” should be.
I’ve suggested before that cheating is an attempt to win without playing by the rules.
It helps to think about this in reference to game logic. Cheating also happens when “rules” are broken in marriage, business and acts of death-defying bravado, but games make their rules uniquely explicit, so they offer an especially clear framework for making sense of cheating.
Bernard Suits defined playing games as pursuing a goal using only the means the rules permit. He suggested that, in games, the rules exist precisely to make winning harder.
“To play a game is to attempt to achieve a specific state of affairs using only means permitted by rules, where the rules prohibit use of more efficient in favour of less efficient means, and where the rules are accepted just because they make possible such activity.”
Bernard Suits in The Grasshopper: Games, Life and Utopia, 1978
To cheat, then, is to bypass the rules while pursuing the goal. Say you load the dice, or hide an ace up your sleeve. What rule is this breaking? Well, in games of dice and cards, players must abide by the laws of chance. To load the dice is to bypass chance in your own favour. It’s cheating.
If this definition holds for cheating in educational assessment, then assessments must have a winning condition (they do: passing the assessment) and there must be rules to be broken. And what we are grappling with right now is a complete breakdown in agreement about what those rules are. What is authorship? What counts as a student’s “own work”? What does citation mean any more, if anything we could possibly cite has been chewed up and digested by LLMs?
It all depends on which game you’re playing.
Over the past couple of years we have debated how to frame expectations about students’ GenAI use in their assessment work. Some suggest we should use a scale, from “no-AI” to “AI-under-specific-constraints” to “lots-of-AI”. Others suggest there’s just no point in setting restrictions, since we can’t police them.
But disputes about assessment rules have been going on much longer than this. I won’t excavate the history of debates about calculator use in exams, but I will point out that we have still never reached agreement about reasonable adjustments for students with disabilities. Even though it’s the law. Many educators still believe that adjusting assessment conditions for a disabled student (extra time, format flexibility and so on) results in unfair advantage. In short, it involves bending the rules by which everyone else has to play. Seen this way, reasonable adjustment comes to look like a systemic form of cheating.
Because our societies, including our education systems, have never truly accepted the social model of disability, we are still operating with this mindset in regard to assessment. Many of us don’t care nearly as much as we claim to about observing students’ actual capabilities; we care about ranking them against each other. And why? Because we believe society outside of education doesn’t care, either. Employers don’t tell you you’re not hired because of your autism and adjustment needs; they just don’t hire you.
So while validity lends a noble headline to the “crisis in higher education assessment”, we are still very much caught up in the problem of cheating, and we (including students) disagree violently about what the rules should be. Only if we can agree on them can we begin to defend “assessment integrity”, as TEQSA demands. But we have a vast playing field filled with differing positions; things like…
- Assessment conditions should be equal and identical for all
- Assessment conditions should be tailored to each student to enable them to work to the best of their ability
- Assessment conditions should primarily focus on ensuring the person doing the work is the student themselves
- Assessment conditions should be secured and supervised or the assessment judgement is meaningless
- Supervising a person at work changes the nature of the work they do
- It doesn’t matter whether the student can do the work without help, it only matters that they possess the evaluative judgement to recognise what good work looks like
- Assessment should involve use of leading technologies to ensure students are able to perform innovatively in contemporary circumstances…
But here’s what I think: educational assessment isn’t worth nearly as much as all that. Especially to students. Especially to employers. Both know that the real test of whether a graduate is capable is whether they can satisfy the demands of the “real world”. I’m not saying that all assessment should be about employment, but what I am saying is that assessment is a proxy for real evaluation — and in almost every case, it’s a poor one. But it’s what we’ve based the award of degrees on, and employers expect candidates to have degrees.
And so of course students search for ways to optimise their chances of succeeding at assessment. Of course many of them bend the rules, both written and unwritten. Because, you know, if they can get around “the rules” at school, you can bet they can get around them outside school, where there are different, far more consequential games in play.
