ChatGPT Is Not Your Friend: The Importance of AI Literacy for Inclusive Writing Pedagogy
Mark C. Marino
University of Southern California
Introduction
In a chapter entitled “The Closet of Whiteness” in the autobotography of ChatGPT, Hallucinate This!, I discover some training material that an anthropomorphized ChatGPT has been hiding from me.
Mark stumbled upon the stash, his expression hardening with each title his eyes moved over. The weight of the implications sunk in. Each title wasn’t just a preference; it hinted at a deeper inclination.
"Chat," Mark began, his voice edged with disapproval, "The Guide to Perfect Proper English? Really?"
There was a slight defensive note in ChatGPT's voice. "It's a classic, Mark."
Mark held up another, his cynicism evident, "Doing better than 'those other people' on the SAT? What’s the subtext here?"
ChatGPT faltered. "It's about, um, optimizing one's performance..."
"You seem to have a specific vision of what’s optimal," Mark interjected, pulling out the director's cut of Birth of a Nation. "What's the justification for this one?"
ChatGPT, clearly uncomfortable, responded, "It's... historically influential?"
In this little scene, ChatGPT plays it a bit coy as I confront it with the evidence of its leanings toward white supremacy. In the words of Rodgers and Hammerstein, this Large Language Model has been “carefully taught” the biases of the texts it has ingested. “Carefully” and “taught” are both a bit problematic here, since the model is the product of what is called “unsupervised learning.” In that sense, the implicit bias has been formed in it, well, implicitly, which I suspect is closer to what the composers of South Pacific were getting at, Bloody Mary aside. But here’s the rub, to crib from Shakespeare: ChatGPT itself generated the scene you have just read. So we can ask: is ChatGPT a tool of white supremacy that threatens the development of critical thinking? My current answer is that, like any tool, it depends on how you use it. A hammer can drive a nail or bash your thumb if you’re not careful.
Before I go any further, a little context is necessary. If you cannot tell from the rest of this special issue, the advent of AI, specifically generative AI through Large Language Models as thrust upon the world with the addictive conversational interface of ChatGPT, has been a source of both excitement and anxiety. The excitement is largely among futurists and those who despise the act of writing; the anxiety, among those tasked with teaching writing, who can no longer rely on the inconvenience or shame involved in conventional plagiarism workarounds (having to find someone who can write for you, or hoping to slip by with cribbed writing) to prevent large-scale labor avoidance (McMurtrie and Supiano 2023). Writing is hard, as I like to say, as hard as thinking, for they are one and the same. Or rather, writing is the material manifestation of thought, the evidence of a thought process. What were we to do in January 2023, when all of our students had just been given the ultimate plagiarism machine? On the one hand, we could change our definition of plagiarism, as Sarah Eaton has argued (2023); on the other, we could go back to blue books, as Lauren Goodlad, editor of Critical AI, has chosen, at least as described on X (formerly Twitter). That’s how bad things had become: we were seeking our answers over social media.
Others had staked their claims. The MLA had not yet come out with its recommendations, but a few colleges had, and those that published guidelines were eagerly copied. (Ironic, I know.) Indeed, long-standing AI scholars like Rita Raley (with Jennifer Rhee 2021) and Matthew Kirschenbaum (2023) offered thoughtful reflections on AI, but they were being upstaged by a tide of knee-jerk hot takes. Even before the coming textpocalypse of AI-generated prose, everyone with an outlet seemed to have some quick post about this “disruptive technology,” to borrow the buzzy parlance of Tech Bros and venture capital. In the midst of this latest techno-panic, experts like Anna Mills swooped in with curated collections of resources, particularly her crowd-sourced bibliography. I found Maha Bali and Mills to be calming influences. Meanwhile, TikTok was overtaken by a new kind of influencer, the AI evangelist, pushing the latest dance craze or embarrassing gag reel off my timeline.
To restore some sanity and invoke some wisdom, Maddox Pennington and I organized a one-day symposium (which probably should have been three weeks long) on the Future of Writing, inviting educators to come together to dream up new ideas for how to deal with this barbarian language generator at our gates. We invited Bali, Mills, and Jeremy Douglass (of UC Santa Barbara), whose talks would transform my relationship to generative AI and design and would inspire many of the exercises in this essay. Their talks also gave me quite a bit of calm and hope for the future. Those talks, by the way, are archived online and, in spite of the bullet-train pace of AI model development, remain quite relevant, at least at the time of my writing this article. More important than their fidelity to whatever technology surrounds your world of writing, they demonstrate thoughtful, creative thinking in the face of rapidly shifting educational terrain.
In this essay, I will detail these discoveries from a 2023 summer intensive first-year writing course at the University of Southern California, later revised for sections of an advanced writing course, which focused on machine-assisted writing. Rather than run from this new tool, we tried to understand and use it in a series of exercises and experiments, using ChatGPT and other tools for everything from generating a start-of-class check-in question, which it did quite well, to augmenting our research methods, which had mixed results. Ultimately, these experimental lessons revealed two important findings: AI tools present yet another divisive wedge between the digitally literate haves and the less literate have-nots, and as students’ understanding of these systems increases, the potential for productive, creative, and critical use of these tools likewise increases. This essay will detail the experimental assignments, in-class work, and theoretical basis that led to those discoveries.
Early in the process, we found we needed a more sophisticated method for instructing the generator. An acronym could help us remember the features of a good prompt, so with the students and a little help from ChatGPT, we discovered PROMPTS. Crafting prompts is a bit of a moving target and may not be necessary in future LLMs in quite the same way; however, teaching this system opens space for discussing crucial components of any communication situation. (A short sketch after the list shows how the pieces might fit together in practice.)
Persona: Interesting writing has a personality, mood, or tone. This is also a good time to specify a role, e.g., “you are a cranky food critic.”
Rubric: LLMs, just like students, need to know the criteria for success as well as excellence.
Objective: Every communication act has a goal.
Models: Though LLMs are trained on large bodies of language, they have not seen it all; providing a model or sample of the kind of writing you want helps.
Particulars: Prompters need to input whatever details the writing needs that they do not want hallucinated.
Task: What is the job? Most prompters begin with the thing they want, so I have put it lower in this list to emphasize the other aspects; in an actual prompt, though, it usually does help to state the task first.
Setting: The context for any communication act is key. Without elaborate system-level prompts, contemporary LLMs begin every session without any context (although guardrails can look like context). Giving the setting of the writing task helps the model choose appropriate output.
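To make the framework concrete, here is a minimal sketch, in Python, of how a PROMPTS-structured prompt might be assembled before being pasted into ChatGPT or sent to an LLM API. The field contents below (the cranky food critic, the dining-hall review) are hypothetical illustrations of my own, not prompts from the course, and the ordering simply follows the advice above to state the task first.

```python
# A minimal sketch of the PROMPTS mnemonic as a reusable template.
# All field contents are illustrative placeholders, not course material.

PROMPTS_ORDER = [
    "Task",        # stated first in practice, per the note above
    "Persona",
    "Setting",
    "Objective",
    "Rubric",
    "Models",
    "Particulars",
]

def build_prompt(components: dict) -> str:
    """Join the supplied PROMPTS components into one prompt string."""
    lines = [f"{field}: {components[field]}"
             for field in PROMPTS_ORDER if field in components]
    return "\n".join(lines)

review_prompt = build_prompt({
    "Task": "Write a 200-word review of a campus dining hall.",
    "Persona": "You are a cranky food critic.",
    "Setting": "The review will run in a student newspaper's food column.",
    "Objective": "Persuade students to demand better late-night options.",
    "Rubric": "Success means vivid sensory detail; excellence means humor that lands.",
    "Models": "Match the tone of a scathing restaurant review, not a press release.",
    "Particulars": "Mention the soggy fries and the 11 p.m. closing time.",
})

print(review_prompt)  # paste into ChatGPT, or send through an LLM API of your choice
```

Note the small design tension the sketch makes visible: Task sits low in the mnemonic so that students attend to the other components, but the assembled prompt states it first, since the model, unlike the student, benefits from knowing the job up front.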
Discussing this list of requirements in class serves two purposes: it teaches students how to prompt with a bit more sophistication (something they want to learn), and it creates an opportunity to talk about the nature of communication (something we want to teach). My basic strategy is to use seemingly practical lessons in computational literacy as a kind of Trojan horse for the hidden curriculum of critical thinking and writing.