Interfacing ChatGPT

Desiree Dighton

ChatGPT is to Writing as a Parent is to a Child

When I asked the 24 undergraduate students in a Professional Writing and Document Design class how ChatGPT worked, their answers ranged from “clueless” to, in two cases, some notion of metadata and semantic tagging. Most students simply responded that it works through Artificial Intelligence. Even though I asked for detailed and specific descriptions, “AI” carries metaphorical (Burke, 1969; Lakoff and Johnson, 1980) and circulatory magic (Gries, 2018; Jones, 2021) that substitutes for explanation. We don’t need to understand how AI works for it to compel our engagement. The common phrase “powered by AI” creates a scientific frame that persuades us to identify with AI and accept it as the next step in a linear history of human innovation. Antoine Francis Hardy (2020) stated, “public rhetors use frames as a means by which they adopt attitudes towards society and prescribed said frames for audiences” (p. 30). Frames and metaphors, the semantic containers by which we communicate and understand, are also the rhetorical mechanisms by which GPT becomes acceptable and, perhaps eventually, the status quo.

In “AI as Agency Without Intelligence,” Floridi (2023) explained that, contrary to perceptions of ChatGPT-4’s advanced intelligence and functionality, it does not understand as humanly as its fluency would have us believe. Instead, it “operates statistically—that is, working on its formal structure, and not on the texts they process” (pp. 14-15). ChatGPT doesn’t evaluate texts through principles of information literacy: the reputation and credentials of the writer(s), the type of publication, its relationship to conventional writing genres, scholarly peer review, and the other publishing processes and ethics that shape a text’s authority to carry meaning and significance for particular audiences and writing contexts. Without source material clearly and accurately integrated into, or at least tied to, GPT’s response, where is the human agency to “pay attention to how it was produced, why and with what impact?” (Floridi, 2023, p. 14).
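To make Floridi’s point concrete, consider a deliberately toy sketch in Python; the tokens and probabilities below are invented for illustration and bear no relation to OpenAI’s actual models or data. At each step, a language model selects the next token from a probability distribution learned from statistical patterns in training text, and nothing in that step consults an author’s credentials, a publication’s reputation, or peer review.

```python
import random

# Toy illustration only: hypothetical next-token probabilities a model
# might assign after a prompt like "The most influential computers and
# writing ...", derived purely from statistical patterns in training text.
next_token_probs = {
    "scholars": 0.42,
    "teachers": 0.31,
    "programs": 0.27,
}

# Generation samples from this distribution (or takes the most likely
# token); no step here evaluates who wrote the training texts, or why.
tokens = list(next_token_probs.keys())
weights = list(next_token_probs.values())
print(random.choices(tokens, weights=weights, k=1)[0])
```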

If prompted, GPT will often provide a list of sources, as it did for me when I asked it for a list of sources by the most influential computers and writing scholars. However, I had to prompt it further to include marginalized or non-white scholars. GPT was cheerful about regenerating its response, stating that our field “has been enriched by the contributions of marginalized and non-white scholars, who have brought unique perspectives and important critiques related to race, culture, identity, and digital spaces” (ChatGPT, October 20, 2023). If inclusion were an important shared value in its meaning-making processes, wouldn’t GPT generate that inclusion in its initial response? We could further engineer our prompts to try to teach LLMs to include diversity as a value, but we’d be working against their processes. And yet, there are increasing reports of GPT’s texts passing as human-written with expert judges and plagiarism detectors alike. When neither humans nor machines can discern between human- and machine-generated texts, who arbitrates if and how these texts circulate to make meaning, for whom, and to what ends?
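The exchange above can be reproduced programmatically, which makes the asymmetry visible: inclusion appears only when the user writes it into the prompt. A minimal sketch, assuming the official openai Python package (v1+) and an OPENAI_API_KEY in the environment; the model name and prompt wording are illustrative, not a record of my actual session:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    # Each call is stateless and returns whatever the model's
    # statistical defaults produce for this prompt alone.
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; any chat model the account can access
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The initial prompt leaves inclusion to the model's defaults.
print(ask("List sources by the most influential computers and writing scholars."))

# Inclusion must be engineered in by the user, prompt by prompt.
print(ask("List sources by influential computers and writing scholars, "
          "including marginalized and non-white scholars."))
```

The point of the sketch is that the value has to be re-asserted in every prompt; it does not persist anywhere in the model’s “meaning making.”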

In the Proceedings of the Digital Humanities Congress (2018), Henrickson observed that few, if any, studies had been done to better understand how ordinary people ascribe authorship to computer-generated texts. Drawing on 500 participant responses, she conducted a “systematic analysis of computer-generated text reception” focused on Natural Language Generation (NLG) technologies. Henrickson found the concept of an “author” conformed to a “conventional understanding of authorship wherein the author is regarded as an individual creative genius motivated by intention-driven agency” (Conclusion, para. 1). She observed participants likening the writing process of NLG systems to a parent (the developer) passing knowledge along to a child (an NLG/LLM system like ChatGPT). This parent-child analogy humanized, and thereby normalized, the technology. Henrickson stated that these results aligned with others revealing that users automatically respond to digital and computational media as they would to other people in the physical world (The NLG System as Author, para. 4). She concluded that most “readers feel that the system is capable of creating sufficiently original textual content” and that “the process of assembling words, regardless of developer influence, is in itself enough for the system to attribute authorship” (The NLG System as Author, para. 3). While the author of a work is conventionally viewed as the owner of its intellectual property and copyright, Henrickson’s findings and recent lawsuits over ChatGPT demonstrate that “ownership and authorship are not necessarily linked in NLG texts” (The NLG System as Author, para. 5).

The MLA-CCCC Joint Task Force on Writing and AI’s working paper (July 2023) provided a long list of concerns about generative AI. Students “may miss writing, reading, and thinking practice because they submit generative AI outputs as their own work or depend on generative AI summaries of texts rather than reading” (p. 7). Students face a patchwork of homegrown institutional and instructor ChatGPT policies while, simultaneously, technologies are being developed and integrated to catch and punish them for submitting AI-generated texts. In this climate, it’s no wonder students “may experience an increased sense of alienation and mistrust,” especially since “such approaches have been proven unreliable and biased” (p. 7). As for the progress we’ve made toward linguistic justice and inclusivity in writing studies, the task force warned, “[s]tudents may face increased linguistic injustice because LLMs promote an uncritical normative reproduction of standardized English usage that aligns with dominant racial and economic power structures” (p. 7). The task force identified other risks like “uneven access to models and training” and stated that these “risks could hurt marginalized groups disproportionately, limiting their ability to make autonomous choices about their expressive possibilities” (p. 7). If students believe, as the general public observed by Henrickson seems to, that generative responses are “authored” by AI systems, will they see the need to edit, fact-check, remix, or meaningfully adapt those responses?

To do that, students would first need to claim their agency as writers by honing a critical lens against AI’s perceived brilliance. This “brilliance” has been built through circulatory power: a public perception of generative AI/ChatGPT that evades critical thinking and mesmerizes us into compliance with, and acceptance of, its design, processes, and products. Like other myths, ChatGPT’s interface and the public conversations around it frame generative AI as at once larger and more powerful, and smaller and more harmless, than the actually existing technology. Corinne Jones (2021), connecting with Selfe and Selfe (1994) and Stanfill (2015), found that “interfaces play an important role in circulation as a world-making process because they create normative circulatory (1) practices, (2) content, and (3) positions. They perpetuate power and they produce norms for who can circulate what information and how they circulate it” (p. 12). The next sections suggest how writing instructors can use computers and writing theory on power and the interface to create and contribute Critical AI literacies for engaging with LLM/NLG interfaces like ChatGPT’s. These Critical AI literacies will also circulate from our classrooms into the complex relations that extend from our students’ lived experiences.