Generative AI and the History of Writing
Anthony Atkins and Colleen A. Reilly
Masking Mediation
Throughout the history of computers and writing and digital rhetoric, scholars have grappled with and embraced disruptive information and communication technologies and the challenges they pose to contemporary research and composing practices. As Bolter and Grusin (1999) and Bolter and Gromala (2003) highlight, new technologies often obscure their mediative practices, promising to provide direct, unmediated access to information, entertainment, and production. Any user's understanding and awareness of a digital technology's process and depth of mediation (the degree to which the technology draws attention to its means of production and to how it provides access to content or delivers results) is partly determined by that user's prior experience with similar technologies, proficiency with using new technologies, and curiosity about how the technologies work. Communication and information technologies gain power in part by obscuring their degree of mediation and promising users direct access to knowledge and information without requiring them to comprehend how the content is developed and delivered. One way to accomplish this apparent transparency is to adopt the structural conventions of established communication technologies (Bolter & Grusin, 1999; Hocks, 2003).
The way that numerous scholars in computers and writing and digital rhetoric have approached teaching with Wikipedia since its inception in 2001 provides a productive model for navigating and composing with technologies, like generative AI, that downplay and obscure their processes of mediation; as a result, we explore that model throughout this section. Like Wikipedia, generative AI can be approached as a transparent window into vast amounts of easily accessible knowledge, one that seems to require no understanding of the technical mechanisms facilitating its production. Both Wikipedia and generative AI proffer information that appears reasonable and professional (Lancaster, 2023), persuading users, like our students, to accept the output at face value without critically examining its veracity.
To combat the seductive transparency of technologies like Wikipedia, scholars developed structured inquiry and assignments designed to transform student users from passive consumers into critical producers of content. In the case of Wikipedia, this requires individuals to work behind the surface of the encyclopedic content as displayed to understand the layers contributing to and supporting its production, including the organizational structures and policies, the debates within the Wikipedia community, and the technical know-how needed to contribute correctly formatted content. As Reilly (2011) explains in her article about teaching Wikipedia as a “complex discourse community and multi-layered, knowledge-making experiment,” empowering student users to become critical producers of content with Wikipedia requires that they literally look behind the article layer of the text to interact with and contribute to the layers of the wiki technology that allow them to engage in conversation with other contributors (Discussion tab), edit the content of the article (Editing tab), and examine the complete history of changes to the text (History tab).
Based on their analysis of large-scale survey research (6,000 students) conducted by the Wiki Education Foundation, Vetter et al. (2019) recommend best practices for Wikipedia writing assignments, including making the assignments “extended and substantial” to allow students to “learn about values, processes, and conventions of Wikipedia” (p. 62). Vetter et al. (2019) also recommend having students critically analyze Wikipedia articles before contributing in order to develop critical thinking about how Wikipedia is developed, how content is supported, and how sources are cited. To design such opportunities for analyzing and contributing to Wikipedia, instructors need professional development to learn the content and technological intricacies of the platform (Sura, 2015).
In addition to having students analyze and contribute content, McDowell and Vetter (2020) argue that Wikipedia's very practices and policies, particularly those requiring citation and verification of information, serve a pedagogical purpose and can be harnessed to help students develop information literacies related to the legitimacy of online information. The policies prompt students to learn how to analyze information for its veracity themselves rather than relying on others to do so (McDowell & Vetter, 2020; see also Azar, 2023). In addition, Wikipedia has the benefit of being a community governed by the collective (and run by a nonprofit), so new participants can learn to navigate its norms (McDowell & Vetter, 2020) and work with other contributors in an asynchronous but interactive manner. As Purdy and Walker (2012) explain, contributing to wiki-based compositions foregrounds the importance of dialogue for knowledge production. McDowell and Vetter (2020) argue that Wikipedia's policies requiring verification, a neutral tone, collaboration, and citation educate new users and enlist them to maintain the legitimacy of content on the site and to question or remove content that is not supported by sources and verifiable knowledge (Azar, 2023); through such policies, contributors to Wikipedia develop critical digital and information literacies that they can employ in other contexts. Finally, Wikipedia uses community-based policies “to reconstruct more traditional models of authority” that support the legitimacy and veracity of the content, and it is transparent about its purpose and intentions, unlike most other (commercial) sites and apps online (McDowell & Vetter, 2020).
Many of the lessons outlined above related to working with Wikipedia and exposing its processes of content mediation to conscious examination and interrogation can be adapted to help our students work with and critically analyze the output of generative AI. This process can begin by examining how generative AIs produce content. Byrd (2023), Lancaster (2023), and many others help to demystify for instructors and students how AI technologies like ChatGPT work from a technical perspective. As was the case when teaching with Wikipedia and other new technologies, highlighting the technology's processes of mediation entails gaining a basic understanding of the technical specifications that power it. Students need to learn that ChatGPT, for example, is a large language model (LLM), meaning that it has been trained to produce new language modeled on the texts it has processed in response to what it is asked to generate (Byrd, 2023; Lancaster, 2023). As Byrd (2023) clearly explains, “[LLMs] have really created mathematical formulas to predict the next token in a string of words from analyzing patterns of human language. They learn a form of language, but do not understand the implicit meaning behind it” (p. 136). As a result, AIs can produce false information when the corpora they draw on do not contain accurate content (Byrd, 2023; Cardon et al., 2023; Hatem, 2023; Lancaster, 2023). Understanding these technical processes can help students to approach the output from AIs more critically and skeptically, just as they are taught to do in relation to Wikipedia. Such instruction provides inoculation, demystifying the output and opening it to interrogation.
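To make next-token prediction concrete for students, consider the following minimal Python sketch. It is a deliberately simplified toy, not a description of how ChatGPT or any production LLM is built: it merely counts which word follows which in a tiny corpus and generates text by sampling from those counts, learning a form of language with no representation of meaning, as Byrd (2023) describes.

```python
# A toy bigram "language model": an illustrative sketch only, not how
# ChatGPT works. It predicts the next token purely from observed
# patterns, echoing Byrd's (2023) point that LLMs learn a form of
# language without understanding the meaning behind it.
import random
from collections import Counter, defaultdict

corpus = (
    "wikipedia is a community of editors . "
    "wikipedia is a free encyclopedia . "
    "editors verify content with sources ."
).split()

# Count how often each token follows each other token.
transitions = defaultdict(Counter)
for current_token, next_token in zip(corpus, corpus[1:]):
    transitions[current_token][next_token] += 1

def predict_next(token: str) -> str:
    """Sample a next token in proportion to its observed frequency."""
    tokens, counts = zip(*transitions[token].items())
    return random.choices(tokens, weights=counts)[0]

# Generate a short continuation from a starting word.
token = "wikipedia"
output = [token]
for _ in range(6):
    token = predict_next(token)
    output.append(token)
print(" ".join(output))  # e.g., "wikipedia is a free encyclopedia . editors"
```

Even this toy produces plausible-sounding strings while knowing nothing about truth, which suggests in miniature why its vastly larger cousins can confidently generate false information.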
Edzordzi Agbozo, Assistant Professor, University of North Carolina Wilmington, describes innovative assignments that he uses to help students interrogate the output of generative AIs.
Students also need to be taught the protocols for productive use of generative AIs, as they do with Wikipedia. Prompt engineering is the process of iteratively developing instructions and queries to submit to the AI to garner superior output (Korzynski et al., 2023; Lo, 2023). As Korzynski et al. (2023) emphasize, prompt engineering is a human language process requiring collaboration with AI. Just as students must learn how to structure and tag their Wikipedia articles to meet the genre specifications approved by the community, students also have to structure queries to gain the best results from the AI. They also need to dialogue with the AI, as they did with other contributors in the Talk tab in Wikipedia, to participate fully in a successful collaboration. The obvious difference is that when writing for Wikipedia, students collaborate with other users, not an AI. A number of scholars have developed frameworks to guide prompt engineering. For example, Lo (2023) outlines the CLEAR framework: prompts to AIs should be concise, logical, explicit, adaptive, and reflective. Importantly, this framework emphasizes that success is produced iteratively and contextually in response to the output of the AI and the purpose of use (Lo, 2023). Korzynski et al. (2023) review a range of other similar approaches to prompt engineering; they outline the essential elements of useful prompts, including the context or role, the instruction or task, the relevant data, and the genre or form for output (p. 30). Such discussions of prompt engineering emphasize that scholars, instructors, and students can learn to collaborate productively with generative AI, as they do with Wikipedia and its corresponding community, and overcome hurdles to engaging with it productively.
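The four elements Korzynski et al. (2023) identify can be rendered as a simple template. The Python sketch below is our own hypothetical illustration rather than a tool from the cited scholarship; the function and parameter names are assumptions chosen for clarity.

```python
# A hypothetical sketch of assembling a prompt from the four elements
# Korzynski et al. (2023) identify: context/role, instruction/task,
# relevant data, and genre/form for the output. The names here are
# illustrative assumptions, not part of any published framework.

def build_prompt(role: str, task: str, data: str, output_form: str) -> str:
    """Combine the four prompt elements into a single query string."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Relevant material: {data}\n"
        f"Present your response as {output_form}."
    )

draft = build_prompt(
    role="a writing tutor familiar with encyclopedic style",
    task="suggest three improvements to the paragraph that follows",
    data="Wikipedia is a free online encyclopedia that anyone can edit.",
    output_form="a numbered list, one sentence per item",
)
print(draft)
```

Following Lo's (2023) CLEAR framework, a writer would treat such a draft prompt as a starting point, revising it iteratively and reflectively in response to the AI's output.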
Just as scholars recommend critically analyzing Wikipedia articles prior to using content from and contributing to Wikipedia, so must students learn to do the same with the output of generative AIs. Lancaster (2023) recommends finding sources to corroborate and support the veracity of content generated by AIs. Scholars already report developing assignments that ask students to investigate the veracity and usefulness of text produced by an AI (Byrd, 2023). As noted in the previous section, once students understand that the quality of the output is driven in part by the quality of the input, they may gain the agency and confidence to critique the resulting information produced by that AI.
Some of our lessons from teaching with other technologies like Wikipedia do not apply to generative AIs. For example, as discussed above, the mechanisms of mediation by generative AIs are often proprietary, making it impossible to comprehend fully how the technology delivers content, as we can with Wikipedia. AIs are commercial enterprises, unlike Wikipedia, which is a nonprofit. In response, Byrd (2023) recommends using open-access LLMs instead to produce content more ethically and transparently. Finally, generative AI's rapid evolution may eventually make it less possible for human readers to detect its output as machine-generated, mediated content. Lancaster (2023) proposes a process of adding watermarks to AI-generated content but acknowledges the potential futility of that approach. Additionally, this approach would require coordination and cooperation with corporate entities that, as noted above, maintain control over their technologies and standards and have little to gain from revealing their proprietary information and demystifying the power of their chatbots to magically anticipate users' needs and surprise them with content they can use as their own.
As the above discussion reflects, the work of scholars and instructors with previous technologies like Wikipedia can provide insights into what questions to ask and how to advocate for restrictions and guidelines to protect students and the public in their work with generative AIs. Developing educational policies and best practices around writing with and using digital content, like Wikipedia, proved necessary, and invested scholars in our profession now need to do the same for AI.
Gavin P. Johnson, Assistant Professor, Texas A&M University-Commerce, emphasizes the importance of remembering the intersections between identity, power, and technology.