Gary Fisk's Blog

Artificial intelligence

Insights into artificial intelligence (AI)

  • Posted on

    PSPP is a free (in both cost and licensing) software application that works like SPSS. I use it in my introductory-level statistics course.

    At one time, we used SPSS because it was the standard for our field. The key problem was cost: about $6000/year for our 26 computers. A second problem was the restrictive licensing that, when combined with the cost, effectively prevented students from using it anywhere outside of our classroom.

    PSPP has been a good solution to both problems. The cost savings have been tremendous: about $48,000 in total so far, roughly eight years' worth of the old annual license fees. The open licensing allows students to use it on their own computers or in computer labs across campus. I supported this effort by creating a tutorial guide called PSPP for Beginners. The University System of Georgia also provided financial support.

    The news is that PSPP for Beginners has been significantly updated. The layout has been improved: topical rather than sequential. The appearance is nicer. Videos have been added to demonstrate what some operations look like.

    Please feel free to use PSPP for Beginners for your statistics classes or personal statistical needs.

  • Posted on

    A big challenge for addressing artificial intelligence (AI) in higher education is clearly communicating to students which uses are acceptable and which are not. There is no traditional model to fall back upon for guidance. Furthermore, our administration has been exceedingly slow to develop AI guidelines, so institutional guidance is very slim. Looking externally, there is no clear leader to follow with all of the right answers, and the technology is changing rapidly. Overall, the responsibility seems to fall heavily on individual faculty to find their own way. At least this describes my situation.

    My past efforts at AI policies have, to be completely honest, probably had little benefit. They were too long and too much of a scold.

    I'm sharing my continually evolving policy for 2026 in the hope that it might help other university-level faculty. It's not a perfect solution, of course, but maybe it can be at least a small light in the darkness.

    Permission is granted to use these examples for your own courses.

    The first part is a clarification of the difference between generative AI (example: ChatGPT) and assistive AI (example: Grammarly). Many students think of AI narrowly: all AI is generative AI. This oversimplification can lead to the awkward situation of students using assistive AI when they have been asked not to use any AI at all.

    The second part is the traffic light model, copied directly from Davis (2024, Table 3.2, p. 25). This divides AI uses into acceptable (green light), risky (yellow light), and unacceptable (red light). It's an everyday analogy that students should be able to relate to. Stop, caution, and go might make the judgments less abstract.

    The third part describes the general uses in the course in broad strokes using the traffic light model. This receives further elaboration in the assignments, which spell out the specific details for each one. This specificity was inspired by transparent teaching practices: clearly articulating the course expectations will be helpful to students.

    These policy improvements are tightly focused and avoid lecturing students about their moral obligations.

    Best wishes for your efforts to tame the AI beast in the 2026 classroom!

    Source: Davis, M. (2024). Supporting inclusion in academic integrity in the age of GenAI. In S. Beckingham, J. Lawrence, S. Powell, & P. Hartley (Eds.), Using generative AI effectively in higher education (pp. 21–29). Routledge. https://doi.org/10.4324/9781003482918-4