Stylistics Comparison of Human and AI Writing: A Snapshot in Time
Christopher Sean Harris, California State University, Los Angeles
Evan Krikorian, California State University, Los Angeles
Tim Tran, California State University, Los Angeles
Aria Tiscareño, California State University, Los Angeles
Prince Musimiki, California State University, Los Angeles
Katelyn Houston, California State University, Los Angeles
Introduction
“…there is no need to know the truth of the actual matters, but one merely needs to have discovered some device of persuasion which will make one appear to those who do not know to know better than those who know.” (Plato, Gorgias, p. 95)
In Gorgias, Plato’s character Socrates, while debating Polus and Gorgias, explores a sophistic-rhetorical dichotomy still pervasive in rhetoric: As a “knack,” rhetoric “is not a matter of art” but a matter of mere “cookery” substantiated by “flattery,” a means to gratify the body. As a “techne,” rhetoric is an art much like “medicine,” which is substantiated by knowledge of the techniques that can most effectively remedy a specific malady. As one might imagine, medical practice in the fourth century BCE probably entailed some amount of additional pain or misery brought on by the practitioner before the patient felt better. Good cooking, on the other hand, is, in itself, an instant gastronomical pleasure. But rhetoricians must make difficult choices in the way they employ rhetoric: it might be easy to draw upon dogma, flattery, or meaningless structure (the mixing of ingredients) to win over public opinion, or doxa; it might be difficult to strategically peel away the onion-like layers of doxa to get at the central issue or ultimate knowledge, episteme. So in Gorgias, Plato argues that rhetoricians should put forth appropriate intellectual labor to create discourses that speak soul to soul by investigating lasting truths in ethical ways rather than seeking unethical and untrue ways to persuade or flatter the body. This dichotomy is all too relevant given the current state of politics in America, though we find it equally relevant given the state of technology, mainly the upheaval generative artificial intelligence (GAI) has brought to our world.
Generative artificial intelligence has dramatically worked its way into the fabric of our public lives. Even as we write here, in Office 365, the software offers up diamonds in the left margin: “The paragraph was formatted to give your document a cohesive look and feel,” it says after removing the extra white space between items in a list. An image would help illuminate a position in an essay, it offers. As we reject the paragraph reformatting and add the padding back to the list, it’s difficult not to think, “That was helpful, AI.” But as Plato put it, discoursing is difficult, as rhetors must explore and know their argument as well as know how to effectively assemble it to persuade a specific audience for a specific purpose. Doing so simultaneously sprouts beautiful and ethical rhetoric, proclaims Socrates in Phaedrus.
The challenge in keeping rhetoric beautiful resides in learning how GAI can be useful and support literacy. Isn’t Plato’s famous exposition against writing an argument about literacy and technology? Much like Palpatine wooing Anakin with promises of eternal life, Socrates woos Phaedrus with the secret “to please God best” (Plato, Phaedrus, p. 165). Writing has many challenges: because letters are abstract symbols representing sounds rather than ideas, writers cannot define terminology and usage for readers in their absence; writers will not know their readers, and readers will not know their writers; teachers will not be able to guide students via discourse; and texts, as well as the topics within them, cannot defend themselves against undue critique or misunderstanding. The distance between reality and the word, between audience and author, is too great to ethically conduct dialectic, the search for truth (Plato, Phaedrus). Serious discourse is guided by dialectic, which “plants in a fitting soul intelligent words which are able to help themselves” and the rhetor. Those seeds will interminably spread and bear fruit in other minds, thus making the “possessor happy, to the farthest limit of human happiness” (p. 166). That, according to Plato, and not offloading dialectic to technology, is what pleases god.
Written text has endured and evolved, thanks to the likes of the printing press and networked archives. Reading and rereading texts is a gateway to literacy and shared knowledge. Yet today GAI dominates our news cycles with reports of it passing exams and earning high test scores at the undergraduate and graduate levels (Steele, 2023; OpenAI, “GPT-4,” 2023). Academia is seemingly organizing into warring tribes: the AI Users and the AI Punishers. What would the progenitors of rhetoric have to say now about a disembodied machine predicting the order of words to construct a discourse? The words have no character of their own, and they share no intellectual exploration with the rhetor. The words are barren, soulless, human-less and, instead of sprouting new seeds of knowledge, recycle old seeds of knowledge scraped into a large language model with no reviewer or editor. Even the machines producing the writing proclaim their potential to generate bias and misinformation:
“Electrical engineers are at the forefront of developing AI systems that have the potential to revolutionize industries, from healthcare to transportation. However, these systems can also perpetuate bias, infringe on privacy, and even cause harm if not designed and implemented carefully.” (7.1.2000.GPT)
Harris has been experimenting with GAI in the classroom since 2017, when the nascent GPT became public. He asked students to take GPT-written texts to creative writing workshops to see what would happen, but students were too invested in improving their own writing for such an experiment. Now that GAI has eaten its way into the woodwork of academia, educators and students, not software engineers, need to dominate the discourse on how GAI can and should be used to teach writing. Pirsig’s (1974) Phaedrus, in Zen and the Art of Motorcycle Maintenance: An Inquiry into Values, grappled with the meaning of quality, ultimately coming to the conclusion that it’s what pleases both the body and soul. It’s appreciating the beauty of writing and understanding how the “underlying form” is composed of varied labor and parts. Half the beauty of writing is style. Many of the progymnasmata were exercises in style and structure, the underlying form of good rhetoric. What is AI writing style, and why is it impressive?
Public conversations about GAI note the effectiveness of GAI writing and express concern about ensuring students ethically complete assignments. In a 9 March 2023 email to the CSU English Council, Boak Ferris of CSU Long Beach expresses concern about recent developments in AI writing:
ChatGPT and similar systems can all “compose collegiate-level” essays in response to “prompts.” Within three months, the measurable perceived levels of such essays will easily be commensurate with the essays of graduate students.
In a similar email in October 2023, Sarah Moon asks WRITINGSTUDIES-L listserv members what they are doing about the emergence of GAI writing. In her query, she notes the qualities of GAI writing: “above-register vocabulary, equivocating stances, flowery prose.” This listserv anxiety isn’t isolated, as Cardon et al. (2023) found that 46.7% of the business communications faculty they interviewed are “nervous or anxious about using AI-assisted writing” in their courses and 73.6% are concerned that AI-assisted writing will lower students’ writing abilities. As Plato might surmise, 62.2% think AI-assisted texts will be less credible (pp. 267–268).
Pederson (2023) attributes this clamor about GAI to how quickly so many people began using it and to how it is bound to alter attitudes about the labor involved in writing; AI writing is causing “cultural disruption” (p. 1). Despite the concerns of educators, AI is rapidly developing and expanding with little regard for the academy. OpenAI regularly promotes the capabilities of its GPT language processing platforms. In its 27 March 2023 GPT-4 Technical Report, OpenAI claims that GPT-4 excels in measurable ways and can score in the 80th to 100th percentile on many AP exams, the Graduate Record Examination (quantitative reasoning, verbal, and writing), the bar exam, and various medical licensure exams.
Given GAI’s apparent writing prowess and the anxiety it is causing, educators must quickly rethink their pedagogies to help students understand when and how employing the services of AI writing tools is appropriate. To do so most effectively, however, more humanities faculty must study AI writing to explore how and why its quality is lauded. While complex style does not necessarily reflect better writing, one way to examine the underlying form of AI writing, and to learn how it differs stylistically from human writing, is to study its style. Comparing AI style to human style matters above all because Corbett and Connors (1999) hearken back to Plato in discussing the classical notion of style as the process of transforming ideas and thinking into language. As we all know, that process is quite difficult, and it is the process that AI writing aims to simplify.
The aim of this study is to collect, analyze, and share humanities-driven evidence that identifies key stylistic features of collegiate human and AI writing, as Ferris’s and Moon’s concerns are widely shared. The study can be replicated, and its data extended, to create a large language stylistics model useful for timestamping and understanding how AI writing platforms and their trainers stylistically assemble texts, for understanding how humans and computers write differently, and for ascertaining whether texts are human or machine generated.
This study diverges from the dominant academic conversations about generative AI in that it compares the syntax and style of machine and human writing rather than exploring AI censure, anxiety, or wholesale adoption. We propose that others replicate this study with different populations and then share their data. Given that AI writing software predicts word order and does not actually write, the prediction output should be codable, and those codes would be useful for applications including linguistic codex study and machine-oriented discourse analysis, as the sketch below gestures toward.
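To make the idea of coding stylistic output concrete, the following minimal Python sketch profiles a handful of surface-level features (sentence count, mean sentence length, lexical variety, mean word length) for one human-written and one machine-written passage. The passages, feature set, and function names here are illustrative placeholders of our own devising, not the instrument used in this study; replicators would substitute their own corpora and codebooks.

# A minimal, hypothetical sketch of surface-level stylistic coding.
# The sample passages and features are illustrative only.
import re
from statistics import mean

def stylistic_profile(text: str) -> dict:
    """Compute a few simple style metrics for one passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "sentences": len(sentences),
        "mean_sentence_length": mean(len(re.findall(r"[A-Za-z']+", s)) for s in sentences),
        "type_token_ratio": len(set(words)) / len(words),   # lexical variety
        "mean_word_length": mean(len(w) for w in words),    # rough register proxy
    }

# Placeholder passages; a replication would load its own human and AI corpora.
human_sample = "Writing is hard. We revise, we doubt, we start again."
machine_sample = ("Effective writing requires careful planning, iterative revision, "
                  "and a nuanced understanding of audience expectations.")

for label, passage in [("human", human_sample), ("machine", machine_sample)]:
    print(label, stylistic_profile(passage))

Even this toy comparison hints at the kinds of differences the study examines, such as longer sentences and higher-register vocabulary in machine-generated prose, though any genuine claim requires the larger coded corpus described below.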