Gary Fisk's Blog

  • Posted on

    A big challenge for addressing artificial intelligence (AI) in higher education is clearly communicating to students which uses are acceptable and which are not. There is no traditional model to fall back on for guidance. Furthermore, our administration has been exceedingly slow to develop AI guidelines, so institutional guidance is very slim. Looking externally, there is no clear leader to follow with all of the right answers, and the technology is changing rapidly. Overall, the responsibility seems to fall heavily on individual faculty to find their own way. At least, that describes my situation.

    My past efforts at AI policies have, to be completely honest, probably had little benefit. They were too long and too much of a scold.

    I'm sharing my continually evolving policy for 2026 in the hope that it might help other university-level faculty. It's not a perfect solution, of course, but maybe it can be at least a small light in the darkness.

    Permission is granted to use these examples for your own courses.

    The first part is a clarification of the difference between generative AI (example: ChatGPT) and assistive AI (example: Grammarly). Many students think of AI narrowly: all AI is generative AI. This oversimplification may lead to the awkward situation of students using assistive AI when they have been asked not to use any AI at all.

    The second part is the traffic light model directly copied from Davis (2024, Table 3.2, p. 25). This divides AI uses into acceptable (green light), risky (yellow light), and unacceptable uses (red light). It's an everyday analogy that students should be able to relate to. Stop, caution, and go might make the judgments less abstract.

    The third part describes, in broad strokes, how AI may be used in the course using the traffic light model. Individual assignments then elaborate with specific details for each assignment. This specificity was inspired by transparent teaching practices: clearly articulating the course expectations will be helpful to students.

    These policy improvements are tightly focused and avoid lecturing students about their moral obligations.

    Best wishes for your efforts to tame the AI beast in the 2026 classroom!

    Source: Davis, M. (2024). Supporting inclusion in academic integrity in the age of GenAI. In S. Beckingham, J. Lawrence, S. Powell, & P. Hartley (Eds.), Using Generative AI Effectively in Higher Education (pp. 21–29). Routledge Focus. https://doi.org/10.4324/9781003482918-4

  • Posted on

    Note: This was first published on May 13, 2024.

    Asking students to write in college courses has been a long-standing educational strategy. Writing a summary shows that the assigned work has been read. Paraphrasing expresses information and ideas in our own voice, thereby putting a personal stamp on what has been read. Higher-order thinking is (hopefully) accomplished via application, analysis, and similar cognitive operations. Altogether, written assignments can be described as a write-to-learn strategy.

    New advances in generative artificial intelligence (AI) have raised questions about the value of these traditional approaches.

    Summarizing written work can now easily be done by AI systems. Student submissions are increasingly well-written but bland AI summaries, often just vague overviews. A request to summarize no longer demonstrates that the student read anything. Maybe this isn't entirely new, but it has seemed to be a growing problem over the last year and a half.

    Paraphrasing is also losing value as an educational tool. Here are two new tools that recently surprised me. Microsoft Copilot (through the Edge browser) offers "rewrite with Copilot" for text pasted into a learning management system. Likewise, Google has a "help me write" option. Two screenshots are shown below.

    [Screenshot: Google's "help me write" command on a pop-up menu]

    [Screenshot: Microsoft Copilot offering to rewrite text]

    This will be a mixed bag. The feedback may be helpful. The downside, though, is that this may be the beginning of a deskilling of writing.