My artificial intelligence (AI) policy for 2026
A big challenge for addressing artificial intelligence (AI) in higher education is clearly communicating to students which uses are acceptable and which are not. There is no traditional model to fall back on for guidance. Furthermore, our administration has been exceedingly slow to develop AI guidelines, so institutional guidance is slim. Looking externally, there is no clear leader to follow with all of the right answers, and the technology is changing rapidly. Overall, the responsibility seems to fall heavily on individual faculty to find their own way. At least this describes my situation.
My past efforts at AI policies have, to be completely honest, probably had little benefit. They were too long and too much of a scold.
I'm sharing my continually evolving policy for 2026 in the hope that it might help other university-level faculty. It's not a perfect solution, of course, but maybe it can be at least a small light in the darkness.
Permission is granted to use these examples for your own courses.
The first part clarifies the difference between generative AI (example: ChatGPT) and assistive AI (example: Grammarly). Many students think of AI narrowly, as if all AI were generative AI. This oversimplification can lead to the awkward situation of students using assistive AI when they have been asked not to use any AI at all.
The second part is the traffic light model, copied directly from Davis (2024, Table 3.2, p. 25). This divides AI uses into acceptable (green light), risky (yellow light), and unacceptable (red light). It's an everyday analogy that students should be able to relate to. Stop, caution, and go might make the judgments less abstract.
The third part describes, in broad strokes, how the traffic light model applies to the course as a whole. Individual assignments then elaborate on this with specific details for each task. This specificity was inspired by transparent teaching practices: clearly articulating the course expectations should help students.
These policy improvements are tightly focused and avoid lecturing students about their moral obligations.
Best wishes for your efforts to tame the AI beast in the 2026 classroom!
Source: Davis, M. (2024). Supporting inclusion in academic integrity in the age of GenAI. In S. Beckingham, J. Lawrence, S. Powell, & P. Hartley (Eds.), Using Generative AI Effectively in Higher Education (pp. 21–29). Routledge Focus. https://doi.org/10.4324/9781003482918-4