ChatGPT Is Not Your Friend

Mark C. Marino

Perfect Tutor

Anyone who has taught writing or taken a writing class knows not every teacher is a perfect fit for every student. We have some serious quirks. In response, Jeremy Douglass offered one of his best games, originally called “Mary Poppins” but eventually relabeled “Perfect Tutor.” The name “Mary Poppins” came from the song in the original Disney film in which the Banks children sing their advertisement with specifications -- you might even call it a prompt -- for “The Perfect Nanny.” Douglass pointed out that the children’s list of criteria is obviously different from what Mr. Banks, their eye-rolling father, would require. Whereas his job notice would be all about Victorian economic principles, theirs was full of prohibitions against being cross or smelling of barley water. The exercise that follows plays to our desire for customized care.

“The Perfect Tutor” asks students to develop a system prompt out of their vision of an ideal writing instructor or, more accurately, feedback bot. Following the PROMPTS model, students specify the personality, preferences, and response style of their bot before trying it out for feedback on their papers. In this exercise, each student makes their own unique bot, and I have them draft their list in a forum before testing it. We use a system called Poe.com that applies a system-level prompt to whatever LLM you are accessing -- in other words, a base-level prompt that shapes every response.
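For readers curious about what a system-level prompt looks like under the hood, here is a minimal sketch. The message structure mirrors the chat format common to most LLM APIs (a "system" turn prepended before every "user" turn); the function names and persona text are illustrative inventions, not the actual course materials or Poe.com's internal interface.

```python
# A minimal sketch of how a system-level prompt shapes every exchange.
# Function names and persona text are hypothetical, for illustration only.

def build_feedback_bot(persona: str, priorities: list[str], style: str) -> list[dict]:
    """Assemble a conversation seeded with a system prompt built from a
    student's bot specification (personality, preferences, response style)."""
    system_prompt = (
        f"You are {persona}. "
        f"When giving feedback on writing, prioritize: {', '.join(priorities)}. "
        f"Respond in a {style} style."
    )
    # Every conversation begins with this system message, so it shapes
    # every response the model produces afterward.
    return [{"role": "system", "content": system_prompt}]

def ask_for_feedback(history: list[dict], draft: str) -> list[dict]:
    """Append the student's draft as a user turn on top of the system prompt."""
    return history + [{"role": "user", "content": f"Please give feedback on:\n{draft}"}]

# Example: a CoachTutor-like specification.
bot = build_feedback_bot(
    persona="a friendly, encouraging writing coach",
    priorities=["ideas over form", "metaphors and pop-culture references"],
    style="warm, conversational",
)
conversation = ask_for_feedback(bot, "My working thesis is that ...")
```

The resulting list of messages would then be sent to whatever model the service exposes; the point of the exercise is that the students author the system message, not the model's replies.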

During these sessions, I introduce students to a bot I have fashioned, CoachTutor (https://poe.com/CoachTutor), named for my in-class moniker, Coach. Coach is friendly and encouraging and prioritizes ideas over form. He also uses metaphors and pop culture references to explain points, much like his human model. I have to warn students that this bot is not my surrogate and that just as they might take my feedback (or any other human’s) with a grain of salt, so should they take CoachTutor’s, albeit with a grain of silicon.

While introducing students to writing system prompts and botmaking, this exercise also teaches a fundamental writing lesson: rubrics identify the priorities for evaluating writing, and rubrics can change with the occasion and the evaluators. Writing instructors might bring the discussion of the rubric into their class or even have their classes decide what rubric will be used on each paper, but this exercise gives students hands-on experience fashioning a rubric and seeing it in action. Of course, this is also a great opportunity to discuss the quirky preferences and styles of writing instructors, no trivial conversation, especially with my USC students, who are skilled at teacher pleasing.

I also use this exercise as a chance to teach about peer review, particularly the infamous Reviewer Number 2, the name of another bot I have made (https://poe.com/ReviewerNumber2). He’s cranky and always tears down what you have rather than celebrating it; being nice is what CoachTutor is for. Students seem to enjoy learning about this legendary tormentor of their professors. The exercise also sets up creating multiple bots with vastly different priorities and styles to explore contrasting modes of feedback.

I suppose I cannot continue without addressing the Terminator in the room. If students can make robot instructors, what will become of all of us? Well, before you have ChatGPT transform your resume into one for LLM training, I should share some recent experiments of my colleague Patti Taylor. Hearing of my bot exercise, Taylor began her own experiments with using bots for feedback. Though her experiments yielded applicable if conventional, even canned, feedback on writing, the bots could not give feedback on thesis-level elements. In other words, because the bots had no mechanism for evaluating reasoning, their elaborate pattern matching could only serve up feedback that tends to match writing in general. While that often suited the paper, the way generic advice could suit any paper, evaluating an argument requires something more sophisticated. I would not go so far as to say that it is impossible, only that it seems to be a current edge.