Generative AI and the History of Writing
Anthony Atkins and Colleen A. Reilly
Challenges to Writing and Authorship
During the early 21st century, scholars in computers and writing and digital composition theorized and interrogated the relationships between technologically mediated forms of composition and traditional concepts of writing, texts, and authorship. The research in this vein highlights the challenges to traditional standards of composition and authorship posed by a range of digital content and development processes, including coding, writing markup and metadata, and authoring in multimedia. Teachers and scholars examined and proposed strategies to help students reconceive their roles as writers and authors when working with a myriad of digital compositions and to learn to use the technologies necessary to produce them. For example, authors of digital compositions, such as hypertexts, cede meaning to users who choose their own pathways through the linked content; such users act as co-authors, underscoring authors’ inability to control the integrity of their texts and to make sustained, linearly structured arguments (Cripps, 2004; Hocks, 2003; Purdy & Walker, 2012). This past scholarship, which grappled with a range of challenges related to new composing practices and contexts, proves relevant for composing and teaching with generative AI.
In light of the recent escalation in the development of writing and communication technologies, it is easy to forget that at the turn of the last century, arguing for the importance of non-discursive texts and other types of compositions as literate practices that require equal emphasis in writing curricula was a disruptive and even radical move. For example, in a nonlinear print article, Wysocki (2001) explores how visuals assume primacy in meaning-making in digital environments, forcing a reconsideration of what it meant to author content at the start of the 21st century (see also Cripps, 2004; Hocks, 2003). Wysocki asserts that words are “always visual elements, but the assertions of these CDs cannot even be found primarily in ‘words’” (p. 232). In a similarly disruptive and multimodal text that is at once an article and a representation of an oral address originally supported by synchronized slides (84 in total), Yancey (2004) questions the nature of writing and the teaching of composition in response to the proliferation of genres spurred by new information and communication technologies. Her remarks situate the communicative revolution of the early 21st century historically in relation to previous moments of change in literacies, such as the serials and newspapers of the 19th century and the evolution of writing instruction in higher education during the 20th century. Yancey (2004) calls for a new composition focused on the circulation of texts, genres, media, and content production across domains. This revisioned composition necessitates interactions between media, an integration of the canons of rhetoric including assembly, and processes of mediation and remediation facilitated by and represented in digital technologies, changes that demand reconsideration of established rhetorical strategies.
Christopher Andrews, Associate Professor, Texas A&M University, Corpus Christi, also highlights the debates among instructors and scholars in previous eras about bringing computers into classrooms. Andrews emphasizes the importance of critical pragmatism as a response to working with evolving digital technologies.
Other scholars focused on the expansion of literacy standards and instruction to include producing non-discursive content that underpins and facilitates the production of texts in digital environments. For instance, Sorapure (2006) highlights four categories of composing practices in Flash, a commercial product, that contribute to meaning making, including the text and images displayed to readers/users and the code and comments underpinning the structure of the composition, making the content possible, and enabling social interactions between developers. When Sorapure (2006) published her article, these categories posed a significant challenge to established notions of what it meant to be a text, a writer, and a user—blurring established boundaries by shifting the roles depending on the specific use and moment of composition and consumption. Similarly, Cummings (2006) emphasized the importance of viewing coding (specifically markup languages, like XML) as an act of writing, albeit one requiring expanded audience considerations as the “coder’s first audience is a machine” (p. 434); unless the machine can comprehend the code as written, the content will not be visible to and usable by human audiences. As Eyman and Ball (2014) detail, producing successful digital compositions requires a greater range of literacies, including proficiency with the technical infrastructures that support the development and dissemination of multilayered digital compositions. Designers of digital compositions needed to integrate optimal coding, metadata, file formats, and other technical affordances to compose usable, accessible, and visually appealing webtexts and other digital designs (Eyman & Ball, 2014). To improve their products, writers of code, like writers of text, also “refine ideas” (Cummings, 2006, p. 433) to achieve desired output.
Interestingly, as Lancaster (2023) highlights, chatbots like ChatGPT also refine ideas by remembering a certain amount of content from a conversation or interaction with a user and building upon it to provide more targeted responses.
The challenges to traditional concepts of writing and authorship that scholars have been grappling with over the last two decades (and before) have been magnified exponentially by the introduction of generative AIs like ChatGPT. In his discussion of new and challenging forms of visual compositions, Kress (2005) argued that writing would remain a powerful force because “elites will continue to use writing as their preferred mode” (p. 18); however, the introduction of AIs like ChatGPT destabilizes that mode of communication, making it not the province of human elites but of the machine—open and accessible to all who can use it effectively (Byrd, 2023). When generating content with an AI, writers direct the output in part by authoring appropriate prompts directing the AI to respond according to specified parameters. While collaborating with the AI, writers are also collaborating at a remove with the creators and writers of the content on which the AI was trained. As Lancaster (2023) explains, a language model AI such as ChatGPT “responds in a predetermined way, based on its trained model, the input data, earlier parts of the conversation and a random number, known as a seed” (p. 3). Creating the best strategic input for the AI in the form of successful prompt engineering is essential to facilitating the most effective and relevant output from the AI, recalling Cummings’ (2006) identification of the machine as the first audience for code-based compositions. As Cummings (2006) noted in relation to writing code, “The act of writing for the machine and writing for a human audience develop similar skills, and one experience can be harnessed to inform the other” (p. 442). However, in this case, the AI also participates as an author, a conversant, and an active collaborator in producing texts with human actors.
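Lancaster’s (2023) point about seeded, model-based response generation can be illustrated with a minimal sketch (the vocabulary, probabilities, and function here are hypothetical, invented for illustration rather than drawn from any actual chatbot): given the same trained probabilities, the same input, and the same seed, the “random” choice of the next word is entirely reproducible.

```python
import random

def sample_next_token(probabilities, seed):
    """Pick the next token from a probability distribution.

    With the same model output (probabilities) and the same seed,
    the 'random' choice is reproducible -- the sense in which a
    language model 'responds in a predetermined way'.
    """
    rng = random.Random(seed)          # seed fixes the randomness
    tokens = list(probabilities)
    weights = [probabilities[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Toy distribution over candidate next words (hypothetical values).
dist = {"writing": 0.5, "authorship": 0.3, "code": 0.2}

first = sample_next_token(dist, seed=42)
second = sample_next_token(dist, seed=42)
assert first == second  # same model output + same seed = same word
```

In this sense the output is “predetermined”: once the trained model, the input, and the seed are fixed, so is the response, while a different seed can yield a different continuation from an identical prompt.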
The participation of generative AIs in the composing process further disrupts established norms related to writing and authorship by prompting a reassessment of the standards by which any composition is evaluated as “original writing.” This aspect of working with generative AIs like ChatGPT has, as of this moment in 2023, caused the most widespread consternation in scholarly communities and the public. However, examining past scholarship in digital composition reveals that questioning notions of individual authorship and plagiarism preceded the introduction of generative AI. As noted above, proficiency in digital composition requires collaborating and coauthoring with digital tools to produce multilayered content, some of which is only readable by and may be developed through machines. The codes, metadata, and other digital constructs may be derived through using models and templates produced by human and nonhuman actors, which, as Johnson-Eilola and Selber (2007) argue, forces us as writing teachers to reconsider our standards for identifying and evaluating authorship, originality, and plagiarism. Johnson-Eilola and Selber (2007) highlighted the importance of assemblage in digital composition—using existing codes, templates, digital objects, and even texts and reconfiguring and repurposing them to solve specific problems or address contextual needs (see a more recent discussion of assemblages in Yancey & McElroy, 2017).
This reconsideration involves a revisioning of authorship, originality, and even creativity (Johnson-Eilola & Selber, 2007, p. 400). Authoring with generative AI extends and magnifies these redefinitions because the AI technologies now have the potential to produce content creatively; as such, these technologies are no longer just tools that aid human users in achieving their ends. A recent publication by Johnson-Eilola and Selber (2023) emphasizes this point: agency is not solely human, and people have to think like technology to be successful users in “complex sociotechnical systems” (p. 83). They argue for an object-oriented ontology (OOO) that focuses on the creative reuse of digital elements to make new assemblages to solve communication and design problems. Particularly relevant for working with AIs is Johnson-Eilola and Selber’s (2023) admonition that “OOO asks us to think like objects, decentering ourselves or flattening the normally hierarchical ontologies that put humans on top” (p. 86). In our new writing and communication environment, humans certainly cannot presume to be paramount in creativity, production, or importance. Rather than focusing their efforts on detecting students’ use of AIs with software or embedded watermarks (Lancaster, 2023), an effort that promises frustration resulting from a never-ending cycle of AI design, detection, and redesign, instructors should focus on teaching students to productively use AIs for specific sorts of writing and design tasks (Lancaster, 2023) and seek AI chatbots developed by more ethical human actors (Byrd, 2023). As Lancaster (2023) concludes, assignments that can easily be completed by AIs should be rethought and potentially eliminated.
Lance Cummings, Associate Professor, University of North Carolina Wilmington, addresses the challenges to concepts of authorship and plagiarism posed by generative AI. He views digital technologies from a posthumanist perspective, arguing that humans and machines mutually construct and are constructed by their technologically facilitated interactions.
Writing with technologies in digital environments has also highlighted the need to examine the infrastructures—technical, cultural, and organizational—that do and do not enable technologies to function, be accessible, and participate in communicating content with human users. In considering infrastructure concerns related to digital composing, past scholarship again provides useful insights. As DeVoss et al. (2005) argue, we notice infrastructure at points of breakdown and disruption. DeVoss et al. (2005) highlight the when of infrastructure: the systems that construct it and determine its use come into play at points of disruption and continue to evolve through attempts at use and intervention when composing. Infrastructure is ubiquitous and relies on standards and other policies, often textual in nature (Frith, 2020), that facilitate its function and imbue the system with values, perspectives, and ideologies. DeVoss et al. (2005) highlight the importance of examining the “often invisible issues of policy, definition, and ideology” (p. 16) that underpin the infrastructures essential for digital compositions and composing practices. More recently, Frith (2020) provides an example methodology for investigating such technical standards in his study of the Tag Data Standard, “which is the major standard for the Electronic Product Code and a key piece of the Internet of Things” (p. 403). Frith (2020) highlights the role of standards written by people as “discursive infrastructure” (p. 403) in making the physical and technical assets of digital spaces function, albeit in a way that is generally invisible to humans interacting with those technologies:
Obviously, for people who create standards, these documents are a major part of their job. The standards seemingly disappear, on the other hand, when their guidelines are built into material objects and rendered invisible to end users. And related to relationality, one of my major arguments in this article is that technical standards show how writing can become an infrastructure upon which other infrastructures are built. Take the Internet as an example. The Internet is enabled by layers upon layers of material infrastructure, including cables, modems, and so forth. Those material infrastructures are built upon and shaped by international standards documents. (Frith, 2020, p. 406)
As DeVoss et al. (2005) and Frith (2020) both emphasize, numerous texts contribute to building technological infrastructures that enable digital compositions, including policies, standards, and codes.
Users often only pay attention to and consciously investigate infrastructures when they break down or prevent them from performing desired tasks; the scholarship of DeVoss et al. (2005) and Frith (2020) alerts us to examine the structural texts that underlie such disruptions and to locate the strategies and ideologies behind them. Generative AI is no different—the technical standards that make it work come into question when the technology causes problems or functions in an unexpected or seemingly aberrant manner. For example, generative AI can “hallucinate,” meaning that it produces rational and real-sounding content that is actually false (Lancaster, 2023). Hatem et al. (2023) cite ChatGPT 3.5’s own definition of hallucinations, in which the AI explains that such false information is generated when “a machine learning model, particularly deep learning models like generative models, tries to generate content that goes beyond what it has learned from its training data” (p. 2). The modeling done by generative AIs to produce text responses based on established paradigms without concern for content veracity could be seen as an extreme version of the need to harness identifiable genres for participatory communication as outlined by Bawarshi (2000). As Hatem et al. (2023) emphasize, AI hallucinations pose serious consequences for those relying on AI for healthcare information; such problems are structurally part of how current generative AIs function, making it crucial for humans to interrogate the veracity of the information they receive. As a side note, using the word hallucinations instead of the more accurate misinformation “is inaccurate and stigmatizing to both AI systems and individuals who experience hallucinations” (Hatem et al., 2023, p. 2).
That the technical constructs running generative AI can result in the dissemination of false information highlights the need for closely examining the infrastructure, including the standards, that makes these technologies work. As we write this in October 2023, the Biden administration has issued an Executive Order requiring standards for AI safety and security (The White House, 2023). This Executive Order addresses some of the dangers of AI identified by those who have worked on developing and using the technologies (e.g., Hatem et al., 2023). For instance, the Executive Order requires that “developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government” and “develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy” (The White House, 2023). Additionally, the National Institute of Standards and Technology (NIST) is in the process of monitoring, developing, and encouraging adherence to standards for AI. As its website reflects, these standards are in flux, and AI continues to evolve. The research in our field proves useful in providing scholars and writing instructors with the motivation, methodologies, and theoretical perspectives needed to examine infrastructures, including those important for and in development around AIs.
The final category in this section relates to the professional development needed to engage productively with digital composing technologies and to help students learn to work effectively with them. Parallel with the calls discussed above for writing scholars and instructors to redefine composition practices to include coding and collaborations with technologies is the recognition of the significant learning curve that scholars and instructors face. For example, Sheppard (2009) highlights the additional efforts required of faculty in computers and writing and digital rhetoric to help students learn to use new information and communication technologies productively and in a sophisticated way requiring the “work of theory, analysis, and argumentation” (p. 123) related to these technologies. Sheppard (2009) identifies the skills needed to develop useful multimedia texts: both new literacies and technological skills. Generative AI makes even greater demands through exponential developments in technological complexity coupled with a lack of transparency in terms of the design and structures that constitute these new technologies; as a result, a robust collaborative peer education environment is needed to meet this challenge (Byrd, 2023; Korzynski et al., 2023; Lancaster, 2023). As the immense scholarly interest in writing with AI demonstrates, scholars and instructors in computers and writing and digital rhetoric have embraced their responsibility to help their students navigate working successfully with this and all new information and communication technologies, just as they did in previous eras (Sheppard, 2009).