Generative AI and the History of Writing

Anthony Atkins and Colleen A. Reilly

Conclusion

Our chapter highlights several of the major themes of scholarship related to computers and writing and digital composing and connects that legacy of scholarship to current issues related to writing with generative AI. Numerous additional themes are relevant to this discussion, including issues of privacy and surveillance, which Johnson also highlights in his video response (above) to our question. Our scholarly tradition of interrogating privacy and surveillance in digital spaces is robust and growing; it includes numerous recent articles, special issues, and books, including a publication in Kairos based on a Town Hall at Computers and Writing in 2015 (Beck et al., 2016), a collection of essays edited by Beck and Hutchinson Campos (2020), and a special issue of Computers and Composition edited by Hutchinson Campos and Novotny (2021). The work in these publications highlights the largely invisible intrusions into privacy and the ubiquitous surveillance that individuals encounter when learning, working, and playing in digital environments. These publications also emphasize the lengths to which the corporations and developers behind these technologies will go to hide those risks and to keep the infrastructure and algorithms powering the technologies invisible to users and unavailable for scrutiny as proprietary information.

This line of scholarship provides guidance for examining the potential threats to privacy posed by generative AI, threats which, given the technology's newness, remain somewhat murky and speculative. Our scholarly tradition warns us to be skeptical and wary, but we, like government researchers (Busch, 2023) and data privacy enterprises (Securiti Research Team, 2023), must extrapolate the potential harms from what is known about how generative AIs function and how privacy has been compromised by similar technologies. For example, a review of basic privacy scholarship highlights that any corpus of data is vulnerable to hacking, exploitation, or accidental leakage, and AI is no different. The vast stores of data collected to train a chatbot like ChatGPT have already been compromised and will continue to leave users, as well as those whose data has been secretly collected to train AIs, vulnerable to the release of personal medical, financial, social, and other information (Securiti Research Team, 2023). As Morris (2020) explains, users of chatbots who have rare disabilities may be at even greater risk of privacy violations when using AI to learn about their conditions and seek treatment options. She notes that “past incidents of re-identification of individuals from anonymized datasets…indicate the difficulty of truly anonymizing data” (Morris, 2020, p. 36). In 2023, not only do individuals with disabilities or illnesses risk exposure, but so do women seeking reproductive healthcare options and individuals from other marginalized communities, such as youth who identify as trans. Our scholarly tradition thus helps us to identify risks related to privacy and to alert our students, colleagues, and others, but, unfortunately, it provides few tangible solutions to these and the other significant problems outlined above related to generative AI and the technologies that will follow.

That these problems are intractable and not yet fully understood cannot cause us to give up. As Johnson’s video highlights, as teachers we cannot surrender; we must help our students resist the apathy that can result from the enormity of negotiating the rapid and risky technological changes that must be faced pedagogically, organizationally, and socially. The scholarship explored above demonstrates that our field has faced such upheaval before and found ways to work with and through the technologies, whatever form they take. We take inspiration and instruction from that history.