The strategy of AI-proofing assignments isn't going to be successful
Gary Fisk
As a technology leader on my campus, I get to see and hear what everyone is trying out. It's a tremendous vantage point, and teaching with technology is something I'm passionate about.
The introduction of artificial intelligence (AI) has completely changed how authorship of submitted work is viewed. There's no longer any trust that students are doing their own work. There have always been suspicions about the authenticity of student work, of course, but now there's almost zero trust. Since students cannot be trusted, educators are working hard to find strategies to verify that students are, in fact, the authors of what they submit for homework assignments.
Two colleagues have chosen very different approaches to this goal: one low-tech, one high-tech. The low-tech strategy is simple: all of the student writing is done during a traditional, in-person classroom meeting. The professor can directly monitor everything to ensure that no fakery is happening. There's no doubt that it works. It's labor-intensive and limited, though, and it's also like taking a step backward in time.
The other colleague has turned to AI for assistance. The assignments are fed into an AI with a prompt asking for suggestions for "hardening" (a computer security term) the assignment against completion with AI. The AI then gives feedback that is used to revise the assignment with an aim toward authenticity. Some basic examples are prompts to write from a first-person perspective or to apply a concept to an everyday experience.
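The workflow above can be sketched as a simple prompt wrapper. The wording and function name below are my own illustration, not my colleague's actual prompt; the resulting string could be sent to any chat model (for example, via the OpenAI Python client's `chat.completions.create`).

```python
# A minimal sketch of the "hardening" workflow: wrap an assignment in a
# request for AI-resistance suggestions. The prompt text here is a
# hypothetical illustration, not the actual prompt my colleague uses.

def build_hardening_prompt(assignment_text: str) -> str:
    """Build a prompt asking an AI to suggest ways to harden an assignment."""
    return (
        "You are helping an instructor revise a homework assignment so that "
        "it is harder to complete with an AI chatbot alone. Suggest revisions "
        "such as requiring first-person reflection or connections to the "
        "student's everyday experience.\n\n"
        f"Assignment:\n{assignment_text}"
    )

prompt = build_hardening_prompt("Summarize the main causes of World War I.")
print(prompt)
```

The key design point is that the AI is not grading or detecting anything here; it only proposes revisions, which the instructor reviews and applies by hand.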
Since ChatGPT came into widespread use, numerous educators have been advocating that assignments can be made AI-proof. It's a call for hardening, like my high-tech colleague's approach. One of the first conference meetings I attended on AI in education had several experts suggesting that making assignments AI-proof was the way to go. This strategy avoided the need to rely upon potentially problematic AI detection technologies. One person even went further. She opined that your assignments are probably crappy anyway, so this AI challenge was a much-needed force for making professors write decent assignments. (eyeroll here)
My opinion: The AI-proof assignments strategy is NOT going to be successful.
AI-proofing probably works at the moment. Consider the long-term outlook, though: over time, AI will improve and become able to do supposedly AI-proof assignments. AI-proofing is a short-term, temporary strategy.
Another problem is that this AI-proofing concept inevitably leads to academic elitism. Do you have an AI problem in your class? Well, it's really your fault (not the students') because you made weak, easily defeated assignments! It's unproductive to blame the victim. Please, don't kick your fellow educators while they are struggling.
The path forward is to teach AI literacy and responsible use. Of course, this is hard to accomplish. Nobody knows exactly how to achieve this careful balance between appropriate and inappropriate use. I'm trying out some new ideas this term, like asking students to use AI feedback on their own writing. This seems to be the direction the world is headed: human-computer collaboration. We'll see where it goes.