Large Language Model Applications for Style Pedagogy
Christopher Eisenhart
Discussion
Implicit in any study of computers and writing pedagogy must be the question, “What will happen when my students use this tool to complete their work?” Setting aside the ethical and policy aspects of this question, this study attempts to answer it in the most pragmatic and descriptive way possible. Any student could interact with ChatGPT much as I have done here, asking it to complete the exercises in Williams’ curriculum. What we see in the study above are the kinds of results they will mostly encounter (not exactly, of course; responses vary from instance to instance). Overall, CGPT’s revisions improve on the original sentences. It is mostly successful at certain tasks of diagnosis and revision. However, it also leaves problems in its revisions that require expertise to then diagnose and revise. Students who input the exercises into CGPT will only occasionally receive results that match the objectives of the curriculum. That CGPT’s failures tend to come where interpretation, invention, and contextualization are required is not a surprise.
Interestingly, these are the same moments that students tend to struggle with, until they realize that the solutions to the stylistic problems they’re finding often do not exist within the original sentence (e.g., finding appropriate characters, deciding what is a main idea). Fortunately, while CGPT’s horizontally intertextual focus may keep it from improving on this point, students do improve.
One potential explanation for CGPT’s limitations also resides in its horizontal intertextuality and lack of contextual awareness. Being intertextual in this way, tending to reproduce what it has previously found to be most likely, CGPT should exhibit a descriptive bias rather than a prescriptive one. In the same ways that critics have traced the bias of LLMs to the biases of their corpora (cf. Byrd, 2023), CGPT will revise sentences toward what other sentences have mostly tended to do, rather than toward what is “best” to do according to any particular stylistic paradigm. Most sentences probably aren’t clear, cohesive, coherent, concise, and shapely. Successive and amended prompts can improve CGPT’s revisions, but its default is to do what writers mostly do, not to follow the prescriptions of the curriculum.
This then leaves the stakes. How might using ChatGPT impact students’ learning and my teaching of written style? When I teach this curriculum, I use these exercises as the jumping-off point for students. Students read each Lesson and attempt to complete its exercises with minimal demonstration from me. This is low-stakes work that I’ve found makes them well and truly invested when I then throw their answers up on the board for everyone to see and discuss what they’ve improved in their revision and what is left to be done. Then we finish those revisions, live and together. I follow up with additional sentences, crafted to represent the problems Williams has given us, to work on together in the classroom, and typically find the students almost competitively engaged in the game of revision. I can’t see how this version of LLMs could irrevocably short-change the opportunity to learn these skills of diagnosing and revising the problems in bad original sentences. Yes, it will take longer, and perhaps we won’t get as far, if students don’t perform these initial practices entirely through their own power, farming that work out to the tool. But the results above remain problematic consistently enough that students’ low-stakes grades will be poor, and when their work hits the board, the fundamental work will still need to be done.
CGPT seems to be best at subject-verb identification and revision and at the work of revising for concision. For that percentage of my students who struggle with subject/verb identification, I’d welcome practice with the tool that might help them begin to see the patterns that identify subjects and verbs after their classmates and I have largely moved on. When it comes to the tasks of finding the narrative implicit in sentences to serve as the basis for inventing characters and actions, determining what is given and what is new information, and identifying main ideas to position them for emphasis, CGPT tends to struggle with what students struggle with. Revisions to style are not purely intra-textual activities, and the solutions to the problems in bad sentences are not often visible, evident, or explicit. Occasionally, or with just the right prompting, CGPT does find those solutions, but not frequently enough to pass.