How I'm effectively (maybe?) dealing with artificial intelligence (AI) plagiarism: Part 1 - the classroom
- Gary Fisk
Educators are currently struggling with inauthentic student writing in assignments, the primary problem being indiscriminate use of artificial intelligence (AI). My position on generative AI is middle of the road: I see it as genuinely useful for some tasks, but problematic when overused, such as when it substitutes for human effort. That puts me somewhere between the extremes of "ban it!" and "AI will replace human writing" that frame these debates.
Some recommendations for decreasing AI plagiarism that I've tried haven't been super successful. I've carefully crafted an AI policy, but have doubts about the policy's impact. Many students have probably not bothered to read it.
The recommendation to have frank discussions with students hasn't worked. My students don't want to talk about AI use. They seem unmoved by arguments that AI use or abuse might negatively impact their learning. The student attitude is to get assignments done as fast as possible, a task-oriented mindset. Thoughtful discussions about the abstract goals of higher education might connect with students at elite institutions, but these ideas don't fly at rural State U.
So, here's what I'm doing that seems to be working.
Broadly speaking, I'm not working too hard at being the plagiarism police. Punishment-oriented approaches have never worked well, even before AI became widespread. Our cultural norms about what constitutes plagiarism seem to be shifting toward favoring AI use, and I'm not paddling my boat up that river. While my attitude is somewhat relaxed, it is not totally permissive: when AI abuses occur, students are challenged about their AI use. This balance between permissiveness and boundaries is similar to what developmental psychologists call authoritative parenting. It aims to teach good principles while setting reasonable limits on what is unacceptable.
Peer review has been helpful, which surprised me. In one class, students write short weekly essays that require higher-order thinking (analysis, argument, evaluation, etc.). The students bring these to class, printed on paper. The papers are circulated to other students, who comment on either basic writing skills or the ideas being proposed. After reading, we discuss the papers and ideas in class.
Peer review works for two reasons. First, it creates social pressure that may discourage AI use: classmates might notice bland AI writing and react negatively. Second, the assignment goals are structured to focus on exploring the students' own thoughts. These essays are mixtures of argument, personal statement, and evidence; the goal is not an expository paper with a super-formal writing style.
The assignments are graded pass/fail in the context of a specification grading scheme. I am the primary detector of AI abuse. Suspicious work is analyzed further through my homemade AI detector for a second opinion. If there's a problem, the student is notified by email or we have a frank one-on-one discussion. The decision in abuse cases is usually a do-over. I don't bother with filing academic conduct paperwork because my institution doesn't take much action against academic integrity cases. This is also consistent with getting away from a punishment response to academic integrity problems.
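The post doesn't describe how the homemade detector works, so as a purely illustrative sketch (not the author's actual tool), a minimal "second opinion" script might combine two weak signals sometimes associated with AI text: unusually uniform sentence lengths and stock AI phrasing. The phrase list and thresholds below are my assumptions for illustration only.

```python
import re
import statistics

# Hypothetical stock phrases often flagged as AI-like; this list is an
# assumption for illustration, not a vetted detection resource.
STOCK_PHRASES = [
    "in conclusion",
    "it is important to note",
    "delve into",
    "as an ai language model",
]

def split_sentences(text: str) -> list[str]:
    """Naive sentence splitter on ., !, ? boundaries."""
    parts = re.split(r"[.!?]+\s*", text.strip())
    return [p for p in parts if p]

def suspicion_score(text: str) -> float:
    """Return a rough 0-1 score; higher means more AI-like signals."""
    sentences = split_sentences(text)
    score = 0.0
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) >= 3:
        # Very uniform sentence lengths (low variance) are a weak AI signal;
        # the 3.0-word threshold is arbitrary.
        if statistics.pstdev(lengths) < 3.0:
            score += 0.5
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in STOCK_PHRASES)
    score += min(hits * 0.25, 0.5)
    return min(score, 1.0)
```

A score like this is only a tiebreaker for work that already looks suspicious, which matches the "second opinion" role described above; heuristics this crude would misfire far too often to justify an accusation on their own.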
Overall, I'm happy with how this strategy is going. The students do seem to be successfully exploring their own thoughts about the implications of class content. Writing skills have also shown improvement. It's low stress for me and the students. The situation is about as good as I can hope to achieve.