Gary Fisk's Blog

Artificial intelligence

Insights into artificial intelligence (AI)

  • Posted on

    PSPP is free (in both cost and licensing) software that works like SPSS. I use it in my introductory-level statistics course.

    At one time, we used SPSS because it was the standard for our field. The key problem was cost: about $6,000/year for our 26 computers. A second problem was the restrictive licensing that, when combined with the cost, effectively prevented students from using it anywhere outside of our classroom.

    PSPP has been a good solution for these dilemmas. The cost savings have been tremendous: about $48,000 saved in total so far, roughly eight years of avoided license fees. The open licensing allows students to use it on their own computers or in computer labs across campus. I supported this effort by creating a tutorial guide called PSPP for Beginners. The University System of Georgia also provided financial support.

    The news is that PSPP for Beginners has been significantly updated. The layout has been improved: topical rather than sequential. The appearance is nicer. Videos have been added to demonstrate what some operations look like.

    Please feel free to use PSPP for Beginners for your statistics classes or personal statistical needs.

  • Posted on

    A big challenge in addressing artificial intelligence (AI) in higher education is clearly communicating to students which uses are acceptable and which are not. There is no traditional model to fall back upon for guidance. Furthermore, our administration has been exceedingly slow to develop AI guidelines, so institutional guidance is very slim. Looking externally, there is no clear leader to follow with all of the right answers, and the technology is changing rapidly. Overall, the responsibility seems to fall heavily on individual faculty to find their own way. At least this describes my situation.

    My past efforts at AI policies have, to be completely honest, probably had little benefit. They were too long and too much of a scold.

    I'm sharing my continually evolving policy for 2026 in the hope that it might help other university-level faculty. It's not a perfect solution, of course, but maybe it can be at least a small light in the darkness.

    Permission is granted to use these examples for your own courses.

    The first part is a clarification of the difference between generative AI (example: ChatGPT) and assistive AI (example: Grammarly). Many students think of AI narrowly, assuming that all AI is generative AI. This oversimplification may lead to the awkward situation of students using assistive AI when they are asked not to use any AI at all.

    The second part is the traffic light model directly copied from Davis (2024, Table 3.2, p. 25). This divides AI uses into acceptable (green light), risky (yellow light), and unacceptable (red light). It's an everyday analogy that students should be able to relate to. Stop, caution, and go might make the judgments less abstract.

    The third part describes, in broad strokes, how the traffic light model applies to the course as a whole; individual assignments then elaborate with specific details. This specificity was inspired by transparent teaching practices: clearly articulating the course expectations will be helpful to students.

    These policy improvements are tightly focused and avoid lecturing students about their moral obligations.

    Best wishes for your efforts to tame the AI beast in the 2026 classroom!

    Source: Davis, M. (2024). Supporting inclusion in academic integrity in the age of GenAI. In S. Beckingham, J. Lawrence, S. Powell, & P. Hartley (Eds.), Using generative AI effectively in higher education (pp. 21–29). Routledge. https://doi.org/10.4324/9781003482918-4

  • Posted on

    Like many people, I have struggled to understand how artificial intelligence works. There are good technical explanations for how these technologies are built and run (training data, guard rails, etc.), but it felt like there was some missing piece that left my understanding incomplete.

    Analogies to other technologies can be helpful. In academia, generative AI has been compared to the introduction of the hand-held calculator. Using calculators for teaching was controversial at one time, but is now well accepted. Delegating the small, tedious calculations to machines can free people to pursue more advanced math. It's hard to imagine teaching statistics now without some form of software to handle the basic calculations. Similar comparisons have been made between AI and other technologies: the world wide web, Wikipedia, Google, and more.

    While these analogies are helpful, some piece of the puzzle still seemed to be missing.

    A small insight occurred to me while reading a book about the possible health hazards of eating ultra-processed food (UPF), the kind of food made through industrial processes (van Tulleken, 2023). These foods originate from raw ingredients that are broken down into smaller fundamental components. Beef, chicken, corn, and soybeans are turned into hamburger or high-fructose corn syrup. It's a digestion process that extracts sugars, starches, fats, and proteins from the raw materials. These molecular components are then combined with various specialized chemicals like flavorings, preservatives, binders, emulsifiers, and coloring. The end product is a synthesized food like chicken nuggets, Cheez Whiz, and soda pop. The additional chemicals help to make the food appealing and long lasting. UPFs are popular, possibly even addictive, though not everyone agrees on that point. Michael Pollan and van Tulleken have questioned whether these synthetic creations should even be called food.

    The insight that connects them is that generative AI has strong parallels to UPF. For AI, the raw materials are writing and pictures taken from books and the internet. The writing is broken down (digested?) into units called tokens. The complex relationships between tokens are stored via advanced math, like binder chemicals for UPF. From this mass of related tokens, outputs like text and pictures are generated based on user prompts. This is like extruding chicken paste into the shape of chicken nuggets. Altogether, similar industrial processes are at work when we view generative AI and ultra-processed foods in a broad fashion: digestion, then recombination into new forms. Just as ground chicken can be made into strips, patties, or dinosaur shapes, AI products manufacture words and images into an infinite number of novel forms.
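
    For readers who want to see the digestion-then-recombination idea in miniature, here is a deliberately toy Python sketch. To be clear, this only illustrates the analogy: real generative AI systems use subword tokenizers and neural networks, not the simple word-splitting and follower lists shown here.

        import random
        from collections import defaultdict

        # "Digestion": break the raw material down into tokens.
        # (Real systems use subword tokenizers, not word splitting.)
        raw_material = (
            "beef and corn become hamburger and syrup . "
            "words and pictures become tokens and prompts . "
            "tokens become text and pictures and music ."
        )
        tokens = raw_material.split()

        # Store the relationships between tokens: which tokens have
        # followed which. This stands in for the "advanced math" of
        # a real language model.
        follows = defaultdict(list)
        for current, nxt in zip(tokens, tokens[1:]):
            follows[current].append(nxt)

        # "Recombination": extrude the digested tokens into a new
        # sequence, one plausible follower at a time.
        random.seed(0)
        word = "words"
        output = [word]
        for _ in range(10):
            if word not in follows:
                break
            word = random.choice(follows[word])
            output.append(word)

        print(" ".join(output))

    The output is typically a sequence that never appeared in the raw material, yet every word-to-word transition did. That is the digestion/recombination pattern in its smallest possible form.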

    There are many more industrial processes that fit into this pattern of breaking down complex raw materials and then reassembling into desired forms. In construction, fundamental components like sand, oil, and trees are transformed into concrete, plastics, and lumber. These component blocks can be assembled in an infinite number of ways to construct buildings. In finance, companies are broken down into stocks, bonds, and other financial parts. The financial industry reassembles these parts into synthetic mixes called mutual funds or exchange traded funds that are sold to investors. There are likely many more examples.

    This idea puts AI technology into a cultural context of how industrial processes break down raw materials into a flexible mush of processed components that can be reassembled to take any imaginable form. Generative AI is the industrial handling of words, pictures, and music.

    This AI-UPF analogy shows the weakness of common analogies, like the hand-held calculator. Generative AI is more than simply speeding up mathematical calculations or looking up information on Google. These older technologies simply increased the efficiency of actions that humans could do in an analog world. In contrast, generative AI is formed from an industrial process of digestion, then reassembly into desired forms. It is a synthetic process, making it more than a mere increase in efficiency.

    Placing generative AI in a cultural context of digestion/synthesis processes may give hints about where this trend is headed. Modern farmers are paid very little for growing chickens and corn. These raw materials are called commodities: one farmer's corn is basically the same as another farmer's corn. The bigger money is in the food products that are made from the corn through industrial processes. In the same manner, human writing is perhaps on the way to being devalued. Writing might become a commodity. Perhaps a few writers will be esteemed for their special creations, but most writing will simply have little value.

    It's worth noting that academia has its own industrial writing processes. Textbooks are ground-up bits of research studies that are carefully synthesized into new packages for student consumption. The aim is that these synthetic recreations are more digestible to students. Textbooks are expensive, while the original research studies have little value. Condensed study guides like CliffsNotes summarize books even further into snack-sized portions for people who are too busy or uninterested to bother with reading the original source. Van Tulleken's book even has a "supersummary" guide. Perhaps these academic processes will eventually be replaced with fact-checked, AI-driven products.

    Van Tulleken's book is titled "Ultra-Processed People." It's a term that extends well from UPF to AI products. A culture that embraces chicken nuggets will certainly embrace synthetic text, pictures, and music too.

    I thank Whitney (@writerethink@wandering.shop on the Fediverse) for sharing the same idea and rekindling my interest in this analogy. Our conversation is here.

    van Tulleken, C. (2023). Ultra-processed people: Why do we all eat stuff that isn't food... and why can't we stop?