Interfacing ChatGPT
Desiree Dighton
ChatGPT Isn’t a Writing Technology
This isn’t the end. ChatGPT’s interface could have been designed to incorporate more user-based controls alongside its conversational NLP technologies. It’s not difficult to imagine users checking a box to command ChatGPT to draw only on certain kinds of sources, such as peer-reviewed scholarship or art museum websites around the world, the way library databases and other data storage and retrieval systems already operate. OpenAI has suggested that ChatGPT’s source data had to be separated from its compressed, stored training data, but others have speculated that this is more talking point than actual technological limitation. OpenAI’s technologies didn’t have to be designed to prevent responses from identifying their sources; they were programmed by humans to do so without calling too much attention to the de-naturing of information from its human or computational origins. The resulting response is at once perfectly legible and completely untraceable.

By doing so, ChatGPT remakes information, art, science, propaganda, social media chatter, literature, crime reports, ancient texts, and academic research (the entire history of human culture and information, at least as it exists on the internet) in its own image. It transforms and recirculates human-made meaning into its “personalized” AI response to the user. The text is made new again by stripping away its origination. ChatGPT values data and human language, along with efficiency and speed, without valuing the writing process, user agency, the knowledge and creativity that should be protected by copyright and intellectual property law, transparency about how its LLM/NLP technologies produce any given response and what its original sources were, reciprocity toward the humans and entities attached to its data, or responsibility for the consequences of its responses beyond the moment of engagement. The user, theoretically, could refuse to accept and consume GPT’s response and work against its processes, a “jailbreak” for information literacy.
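To make the hypothetical concrete, here is a minimal sketch in Python of what such a source-scoped control could look like. Every name in it (Document, answer_from, the source labels) is invented for illustration; ChatGPT’s actual interface exposes no such control, and this is a thought experiment in code, not a description of OpenAI’s systems.

```python
from dataclasses import dataclass

# Hypothetical sketch only: ChatGPT offers no such user-facing controls.
# All names here (Document, answer_from, source labels) are invented.

@dataclass
class Document:
    text: str
    source_type: str   # e.g., "peer_reviewed", "museum", "social_media"
    citation: str      # provenance kept attached, so answers stay traceable

def answer_from(corpus: list[Document], query: str, allowed: set[str]) -> list[Document]:
    """Return only documents whose source type the user checked off,
    the way a library database scopes a search before retrieval."""
    return [
        doc for doc in corpus
        if doc.source_type in allowed and query.lower() in doc.text.lower()
    ]

corpus = [
    Document("The Bhagavad Gita, verse 2.47 ...", "peer_reviewed",
             "a named, published translation"),
    Document("hot take about the Gita", "social_media", "anonymous post"),
]

# The user "checks a box" restricting responses to scholarly sources.
for doc in answer_from(corpus, "gita", allowed={"peer_reviewed"}):
    print(doc.citation)   # every result keeps its origin attached
```

The point of the sketch is how little machinery the affordance requires: a source label and a filter applied before retrieval, with the citation carried through to the output rather than stripped away.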
When I asked ChatGPT to provide me with the full text of an ancient work, the Bhagavad Gita, it summarized the verses and provided an external link to an English translation. When I pressed it further, GPT confessed that the link didn’t match its response text because it had provided a generic version. A “generic rendition,” according to ChatGPT, “refers to a representation or interpretation of a text that doesn't adhere to a specific published translation or version but instead provides an overview or essence based on a variety of sources” (ChatGPT, October 15, 2023). This is not my understanding of generic or genre, but all things become new again when they’re wiped of their histories. Cereals can be generic: Cheerios, for instance, are only slightly different from a store-brand “Breakfast O.” If I get sick from the generic version, I know who is responsible, and if it turns out I love generic Breakfast Os from Harris Teeter better than Cheerios, I know where to find them again. Will every book become a generic one in ChatGPT? Generic books can’t be named, can’t be attributed to a responsible creator, and can’t be located outside of GPT’s interface.
On my syllabus, I don’t call ChatGPT a generative “writing” technology. I call it what it is: a data collection and surveillance tool designed to serve the financial and professional interests of OpenAI and its corporate partners. Through its interface design and circulation, GPT wants every person to feel welcome, even befriended. We trust our friends, we open up, and we tolerate their shortcomings and flaws, but GPT asks us to do this at the expense of our own knowledge and agency.

In “Data Worlds,” the introduction to Critical AI, Katherine Bode and Lauren M. E. Goodlad (2023) situated ChatGPT and other generative AI built on LLMs within the technological and cultural history of data capitalism. Bode and Goodlad wrote that “the power of AI” provides an ideal “marketing hook” and a distraction from corporate concentration of power and resources, including data: “The focus on ‘AI’ thus encourages an unwitting public to allow a ‘handful of tech firms’ to define research agendas and ‘enclose knowledge’ about data-driven systems ‘behind corporate secrecy’” (para. 8). While this in itself should cause enough alarm (we won’t know what the master’s tools are called, let alone be able to dismantle a generic version of the master’s house), more alarming still is generative AI’s incremental creep into our consciousness (Should I Google it, or ChatGPT it?). OpenAI’s biggest corporate partnership is with Microsoft, whose applications are now powered by OpenAI’s models. Other companies either have their own AI/LLMs, are rushing to create them, or are signing up to pay for an existing model. Once GPT’s technologies sit behind the plethora of institutions, corporations, nonprofits, social media platforms, and all the other entities with web-based user interfaces, will all information become generic, unmoored from any original source? If we are to pull ourselves through our present technological transformation and continue to recognize our human value, it will not be because we learned how to collaborate more effectively with generative AI like ChatGPT. It will be because we normed our technologies to the values of writing studies.