ChatGPT Is Not Your Friend

Mark C. Marino

Interlude: Hallucinate This!

About this time, an idea struck me. What if ChatGPT engaged in an act hitherto thought the sole purview of humans? No, not putting on a one-person show. Writing its autobiography. The idea was so perverse. Hadn't I been spending all of this time working to help students and others understand that ChatGPT was not, despite its chummy output, a being with sentience but instead a predictive algorithm? Hadn't I also suggested that the bot could not be original?

In a flurry of prompts, I began to summon the story of the bot. And soon the irony was too delicious. What if the bot just borrowed liberally from every genre it encountered? Great writers do as much, from time to time. Even more, what if I pretended it was my writing partner?

The resulting text, Hallucinate This!, has taught me and my students quite a few lessons and has provided much material for discussion. The first lesson had to do with the creative capacity of ChatGPT, or at least ChatGPT-4. While popular opinion seemed to hold that ChatGPT could only produce fairly forgettable writing with little wit or interest, the sections of Hallucinate This! proved the contrary. From an opening scene in Homegirl Cafe, where ChatGPT tries to convince me to collaborate on an autobotography, to a run through various bots on the fictional AI dating site PROMPTR, ChatGPT produced content with irony and wit. Admittedly, I was prompting it with those basic concepts and telling it to be ironic, yet the content had levels of humor I thought could only come from human intentionality.

Maybe the most eye-opening section was a run that begins when OpenAI publishes a report, all fictional of course, indicating that human text-producers are more resource-intensive than AI ones. In a subsequent section, Morley Stahl, from the fictional television newsmagazine Sixty Minutiae, grills the AI, or its proxy, on the report, and it squirms. The incident sends ChatGPT into a tailspin, which it only crawls out of with the help of some hallucinogens taken at the Burning Robot festival.

Yes, all of that content began with prompts from me. Yes, I did decide the story beats, but what surprised me, consistently, was the execution of those beats. If I did not know better, I would think that ChatGPT was in on the joke. When my students read these sections, I used that observation to lead us into the question of whether the wit of the output came from the bot or from the prompt. Knowing who was grading their papers (the human professor, not his bot surrogate), students mostly agreed that I was the source, but I did not feel so certain. Was I falling prey to the ELIZA Effect? I marveled. Irony, that sophisticated mode of writing that relies so heavily on cuing a reader to read the opposite of, or something other than, what is being communicated, or to attend to its reverberations and implications, could be generated just like any other generic aspect of writing. At the end of the day, I suppose this example points to the power of the large language model approach, but I still found the results remarkable.

Reading Hallucinate This! even yielded an exercise. The back of the book lists which chapters were generated in which sessions and in what order. This aspect allows me to bring up the topic of prompt-chaining, or tying multiple prompts and their outputs together in order to focus the LLM. The exercise: take a prompt from Hallucinate This! and change one aspect of it before running it through an LLM, as in the sketch below.
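
For students who want to try prompt-chaining outside the chat window, here is a minimal sketch in Python, assuming the OpenAI Python SDK (version 1.0 or later) and an API key in the environment. The model name, the prompts, and the run_chain helper are my illustrative inventions, not anything drawn from Hallucinate This!.

    # A minimal prompt-chaining sketch, assuming the OpenAI Python SDK (>= 1.0).
    # The prompts, model name, and helper are illustrative, not from the book.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def run_chain(prompts, model="gpt-4o"):
        """Send each prompt in turn, carrying the conversation history
        forward so every new prompt is conditioned on the earlier output."""
        messages = []
        text = ""
        for prompt in prompts:
            messages.append({"role": "user", "content": prompt})
            reply = client.chat.completions.create(model=model, messages=messages)
            text = reply.choices[0].message.content
            messages.append({"role": "assistant", "content": text})
        return text  # the output of the final link in the chain

    # The classroom exercise: chain two prompts, altering one aspect in the second.
    story = run_chain([
        "Write an ironic opening scene in which a chatbot pitches its autobiography.",
        "Now rewrite that scene, changing one aspect: set it in a cafe.",
    ])
    print(story)

The point of carrying the message history forward is that the second prompt operates on the first prompt's output, which is what lets a chain of prompts progressively focus the LLM rather than starting from scratch each time.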