Generative AI and the History of Writing

Anthony Atkins and Colleen A. Reilly

Access, Accessibility, and AI

Another theme within the scholarly tradition of computers and composition is access and accessibility. We tend to believe that digital access is ubiquitous, that everyone has access to the Internet and to information. While the digital divide has narrowed since 2000, it continues to affect the same demographics now as it did then. Grabill (2003) explains that the divide remains tied to broad demographic categories: “In terms of these broad demographic categories, the divide still exists and is still highly correlated to income, education, and race/ethnicity” (p. 462). Composition scholarship has addressed access many times, most notably in a special issue of Kairos: A Journal of Rhetoric, Technology, and Pedagogy called Accessibility and Multimodal Composition (Yergeau et al., 2013). In this section, we extend previous discussions of access and accessibility as they relate to generative AI.

Access has been an issue with many previous writing and communication technologies; it refers to users’ ability to obtain the machines, the tools, and/or the information needed to complete the tasks that such access promises to enable. Without the fundamental tools to access information and to think critically about information found in online environments, students’ educational growth is stifled, leaving them unprepared for rapidly changing workplace and organizational contexts. At the college level, we know that some universities, and departments within universities, provide more access than others. For example, Atkins and Reilly (2009) note this imbalance in their local ecology:

However, although students in our writing courses are often not aware of the roadblocks imposed by university infrastructures and institutional politics to developing sustainable new media composition initiatives, they are certainly cognizant of the personal consequences of these impediments: inadequate resources, inconvenient or irregular technological access, and inconsistencies in educational experiences across the same degree program. (n.p.)

We recognize the importance of access not only within college structures but also in rural communities (among other kinds of communities), where access to the Internet, for instance, remains significantly impeded.

While access to machines and the Internet continues to improve, other, sometimes invisible, challenges related to access remain. Access becomes further complicated when we investigate, more specifically, who has access to knowledge about how interfaces are and can be constructed and who can affect the algorithms that control the data delivered in digital environments. As Grabill (2003) notes in describing the connections among the multiple variables that influence digital access to information:

Understanding such a complex of connections allows for the development of a rhetoric of the everyday that has theoretical power and empirical relevance. It allows some understanding as to how culture is constructed, how identity is conceived and practiced, and how any number of public acts of persuasion are carried out and given meaning within concrete (and discursive) contexts. (p. 458)

Arola (2010) interrogates this hidden dimension of design, which leaves some users disempowered when working with design templates, such as those rolled out by Web 2.0 technologies and social media platforms like MySpace and Facebook. As Arola points out repeatedly, from the title of her piece, “The Rise of the Template, the Fall of Design,” to her explanations of how users are “discouraged” from attempting to make adjustments or design decisions on those platforms:

In spite of what seems to be pedagogical attention toward modes beyond the alphabetic, we need to acknowledge that in practice Net Generation students, as well as ourselves, are discouraged in Web 2.0 from creating designs. We are certainly posting information, but this information has become “content” placed in a “form” beyond the user’s control. (p. 6)

Thus, when we talk about access, we mean not just access to the machine, to mobile devices, or even to platforms, but also access to the ways in which any of them operate below the surface. Arola (2010) explains that what is missing from social media platforms is the ability for users to creatively alter designs and make the space a place of their own. Access to algorithms, interfaces, design, and other tools that may be considered “back-end” operations is rendered invisible when these operations are labeled as confidential and proprietary or as accessible only to those with specialized knowledge. In the case of Web 2.0 and social media platforms, design is lost, invisible, and inaccessible.

Similar issues, stemming from a lack of access to basic aspects of how technologies function and a related inability to control their output, manifest when working with generative AI. For example, most commercial chatbots do not reveal the corpus used to train their AI; users therefore cannot interrogate the content drawn upon to produce the output they receive and are left with an incomplete understanding of its rhetorical context. This is a different but related sort of access problem to those described above. McKee and Porter (2020) acknowledge the inability of AIs to address the rhetorical context of communication between machines and between machines and humans. Generated output from AI can be unpredictable and can lack consideration of ethics and rhetorical principles:

The ethics of human-machine writing requires of both humans and machines a deeper understanding of context and a commitment to being a good human, a good machine, and a good human-machine speaking well together. (McKee & Porter, 2020, p. 111)

They highlight a key problem with both past technologies and generative AI: generative AI, like many new technologies we have encountered before, is rhetorical because of its dependence on users, speakers, or writers, yet it has no ability to understand ethics or the rhetorical situation. McKee and Porter (2020) argue that AI, like past technologies, cannot and does not address rhetorical concerns, noting that Microsoft’s Twitterbot, for instance, was made available to the public without any contextual knowledge “particularly, of what constitutes racism, sexism, homophobia and anti-semitism” (p. 111), thus creating a communicator that lacked rhetorical knowledge or any way of considering it when generating responses initiated by a user. Users’ lack of access to the functioning principles and texts informing the output of the AI deprives them of agency, forcing them to accept output informed by a repository of bigoted texts, as Byrd (2023) highlights. No training in prompt engineering (discussed below), which also presupposes access to instructional resources or informed teachers, can fully protect users and prevent encounters with content based upon the vast expanse of biased texts on which the AI was trained. As Byrd (2023) argues, OpenAI’s refusal to reveal details of its chatbot’s architecture supports the idea that “ChatGPT may not be an ethical tool for our purposes as writers and researchers” (p. 138).

In contrast, generative AI may promise some advances in providing accessibility in digital spaces for individuals who experience content differently or have difficulties with visual processing. In earlier scholarship that, as a product of its time, uses older accepted terminology, Browning (2014) expands on two models of disability: the medical model and the social model. She writes, “Many efforts at accommodating individuals with disabilities, though often well intentioned, coincide with a medical model of disability in that accommodations are simply added on to existing structures and systems” (pp. 98–99). To provide better access and accessibility, Browning argues, “Rather than simply retrofitting our universities, our classroom spaces, and our pedagogies, we must actively integrate disability, in thoughtful and critical ways, into all aspects of our teaching” (p. 99). Wood (2017) agrees, arguing that the basic conceptions of time that structure in-class writing and longer writing projects developed outside of class need to be rethought to assist all students, and that the resulting policies should be created with student input. Furthermore, Fox (2013) argues that focusing on disability studies in composition classrooms helps to highlight the mind-body connection embedded but often elided in the use and development of digital technologies and to foreground the ways that universal design helps to increase accessibility for all users.

Henneborn (2023) notes that society has not done well in providing accommodations for people with disabilities or in acknowledging disabilities that prevent users from participating in the technologies of the workplace, observing that the workplace has historically not been thoughtful in addressing accessibility:

We haven’t done well as a society with the digital divide that exacerbates the barriers between persons with disabilities (as well as other marginalized communities) and others. (n.p.)

As Kerschbaum (2013) highlights, even multimodal texts that are partially accessible, for example by providing a transcript of a video, prove to be inaccessible overall when other parts of their content, such as the images or navigation, are essential yet not designed to be accessible, and so restrict readers with particular disabilities from using the content.

In contrast to access, which is complicated by generative AI, these new technologies may have some potential to support accessibility in innovative ways. Businesses, corporations, and other institutions are looking to generative AI and the emergence of ChatGPT to provide hope for their employees who identify as having disabilities. At its 2023 Ability Summit, Microsoft outlined its plans to employ AI tools in its products, such as Office 365, to alter contrast for some users and generate descriptions of images on demand for others (Cuevas, 2023).

Henneborn (2023) argues that generative AI can address some accessibility challenges by creating “inclusive interfaces.” Features such as keyboard navigation, alternative text, voice-enabled interfaces/speech-to-text, text/image-to-speech, color contrast, dyslexia-friendly fonts, and clear language are all considered basic requirements for inclusive interfaces and appropriate accessibility, and all can be enhanced by AI. Henneborn (2023) offers a few examples of such tools: “For instance, Google’s Dialogflow has built-in integration with Google Cloud Speech-to-Text API, allowing developers to create chatbots that support voice-enabled input” (n.p.). Dialogflow CX is the “advanced” edition, and Dialogflow ES is the standard one. The tool allows individuals to create chatbots and/or voicebots; while it is not free, a free trial appears to be available (https://cloud.google.com/dialogflow). Another AI-powered tool is Be My Eyes, also referred to as Be My AI (https://www.bemyeyes.com/), which provides visual assistance for users with low vision and is used by a number of large corporations, including Verizon, P&G, Google, and Microsoft. Generative AI also seems able to aid users who experience dyslexia through additional add-ons or plug-ins; for example, Dyslexie Font is a plug-in designed to address dyslexia by making reading and comprehension easier (https://www.dyslexiefont.com/).

While many of these accessibility tools may currently be free, we know from the development of past technologies that monetization is almost inevitable. Support for accessibility will inevitably collide with issues of access when commercial enterprises seek to make money from the developing technologies. Even ChatGPT currently follows a “freemium” model: users can use it for free, but extending its use to other tools or to a “premium” version requires paying for an upgrade.
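To make concrete the kind of speech-to-text integration Henneborn describes above, the following is a minimal sketch, not drawn from Henneborn or the Dialogflow documentation, assuming the Google Cloud Speech-to-Text Python client library and a short recorded audio file; the function name and file name are hypothetical illustrations of how a spoken request might be turned into text before being passed to a chatbot.

```python
# A minimal, hypothetical sketch: transcribe a user's spoken request with
# Google Cloud Speech-to-Text so it can be sent to a chatbot as text input.
# Assumes the google-cloud-speech library is installed and credentials are configured.
from google.cloud import speech


def transcribe_request(audio_bytes: bytes) -> str:
    """Return the transcript of a short spoken request as a single string."""
    client = speech.SpeechClient()
    audio = speech.RecognitionAudio(content=audio_bytes)
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    response = client.recognize(config=config, audio=audio)
    # Join the top alternative from each recognized segment into one prompt string.
    return " ".join(result.alternatives[0].transcript for result in response.results)


if __name__ == "__main__":
    # "spoken_question.wav" is a placeholder for audio captured from the user.
    with open("spoken_question.wav", "rb") as f:
        print(transcribe_request(f.read()))
```

In such a setup, the returned transcript would simply stand in for typed input, which is why voice-enabled interfaces of this kind can be layered onto existing chatbots without redesigning them.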

More will be known about the many effects of generative AI on access and accessibility moving forward. As the technology develops, we need to draw upon the lessons from our scholarly tradition that tell us to interrogate the rhetorical context in which these technologies operate and to examine the access users have to their architecture and to the full range of affordances that enable creative and innovative composition and use. The next section focuses on some pedagogical approaches to these issues by examining how scholars have proposed interacting with other technologies, such as Wikipedia.