ChatGPT Is Not Your Friend
Mark C. Marino
The Breakdown Game: A Failed Experiment
Generative AI seemed an incomparable tool for the writing class. It offered everything from feedback on writing to assistance with reading and research. At least, it did until we tried to use it as a guide through the entire writing process.
During the Future of Writing Symposium, Douglass offered a game that seemed perfectly suited to our goals. He called it the Breakdown Game. The basic exercise: in popcorn or sequential fashion, students take turns naming a step in the writing process. They can flesh out the process by dividing a step into smaller steps or by adding an intervening one. For example, between drafting an outline and writing a draft, a student could add getting feedback on the outline. This part of the exercise went quite well, actually, as students were able to spell out a writing process that was detailed and took into account much of what experienced writers take for granted. As students began to run out of steps, they also added opportunities for creating several versions of, say, a thesis statement and then choosing the best one, a practice I recommend for most writing occasions. It was in the next step that we went awry, though perhaps awry in a way I should have expected.
By having students participate in a creative process with ChatGPT, as opposed to with their peers or their instructor, I was relying on their ability to recognize the most productive path. Furthermore, the speed at which the process was occurring left no time for reflection or reconsideration. If the three thesis statements were all bad, and a student did not recognize their weakness, everything that followed (the outline, the fleshing out of the outline, the drafting) would be off track. More problematic still, the essay produced at the end of this process was so foreign to the student that they were too alienated from the prose to be able to revise it. The exercise amounted to asking a student to revise an essay written by someone else, someone who, in the case of the students needing the most assistance, had stronger fluency in conventional academic prose and whose writing seemed finished even if it was ultimately empty.
That is a long way of saying that while the learning of the models may have been unsupervised, students deserve supervision, for LLMs inserted into the writing process have the potential to go wrong at every step in countless ways. So, while a trained writer conversant in the genres may have a ball taking ChatGPT out on the court for some street ball, new learners deserve, well, a coach.
That experience prompted me to develop a complement to the acronym PROMPTS, one that names the process of responding to ChatGPT: EVALUATE
Explore the output. Read the generated content with an open mind.
Verify all facts, because hallucination is real.
Analyze the internal logic, because ChatGPT is stochastic, not structured.
Look up any references, because they may not exist.
Understand how the system works (and how that system leads to errors).
Ask it to revise what is faulty.
Teach it to produce better output.
Enhance the text with your own writing.
Those suggestions may not fully protect a student who uses the system to generate an entire essay from scratch, but the acronym should provide a starting point for necessary conversations.
But I fear my own essay has taken too serious a turn. Let us return to some play with a question: what’s a writing game that can invite critical reflection while stimulating some creative imagining?