Journal and Publisher AI Policy Statements

James P. Purdy

The AI Conversation in Language and Literacy

Soon after ChatGPT's release, computers and writing scholars were quick to offer critical analyses of it. In the May 2023 issue of Computers and Composition, for instance, Salena Sampson Anderson explained the value and limits of using metaphors such as tool and collaborator to frame students' interaction with ChatGPT. While ChatGPT has become the face of generative AI that relies on large language models (LLMs), however, studies of other forms of AI are nothing new in language and literacy scholarship, even outside computers and writing. Before ChatGPT's release, AI's role in language and literacy instruction was already receiving heightened attention. Xinyi Huang et al. (2023), for instance, conducted a bibliometric analysis of 516 papers on language education published between 2000 and 2019 and found that attention to AI increased during this period. They discovered that most articles in their corpus addressed AI tools that facilitate automated assessment of writing and tutoring systems for writing and reading.

Like the papers in Huang et al.'s (2023) corpus, scholarship in computers and writing has also studied AI tools that facilitate the automated assessment of writing. For example, Anne Herrington and Charles Moran's (2001) foundational work on software for the automated grading of writing raises concerns about authorship and audience. Via their analysis of WritePlacer Plus and Intelligent Essay Assessor, they argue that AI-driven scoring technology threatens teachers' jobs, changes students' conception of writing, and “defines writing as an act of formal display, not a rhetorical interaction between a writer and readers.” They profess that AI that grades writing creates a new writing situation: “writers writing to computers” rather than on or with them (pp. 481, 496; italics in original). Generative AI applications like ChatGPT likewise create a new writing situation. With generative AI, however, the computer becomes the creator rather than just the audience. It handles invention, not just delivery and reception. This attention to writing for and by computer algorithms has continued, including in the 2020 special issue on “Composing Algorithms: Writing (with) Rhetorical Machines” (Beveridge et al., 2020), and helpfully underscores the rhetorical consequences of the algorithms that drive AI (e.g., Crider et al., 2020; Gallagher, 2020), though this scholarship has yet to explicitly explore AI policy.

Scholars in computers and writing, of course, have a long history of studying AI and its role in writing production and evaluation. In fact, in the 1983 inaugural issue of Computers and Composition, Hugh Burns called for composition scholars, especially those designing software programs, to turn to the field of artificial intelligence. He was prescient both in predicting that “natural language processing and intelligent computer-assisted instruction” would be two areas of significant AI advancement and in reminding us that applications of AI in writing programs have “both good and bad consequences the humanistic composition teacher should consider” (p. 3). Forty years ago, Burns anticipated that, in solving certain writing problems, AI would introduce new ones. For him, this recognition did not mean turning away from AI but rather realizing that AI's goal need not—and perhaps should not—be replicating the human brain (p. 4). While Burns does not go so far as to argue for policy, he offers views useful in drafting and implementing policy. From his perspective, for instance, the standard by which to judge generative AI's effectiveness should not be how well it performs the activities of the human brain or replaces human behavior.

Historical responses to new writing technologies provide helpful context for understanding responses to ChatGPT and similar generative AI. As Dennis Baron (2000) reminds us, panicked reactions to new writing technologies are typical. Baron reports that we usually go through a cycle of response that includes concern and distrust before acceptance. He identifies five stages: first, the new literacy technology has a “restricted communication function” available to only a select few; second, that technology is used by a larger population to imitate previous literacy technologies; third, the new technology is used in new ways that influence the older technologies it once imitated; fourth, opponents argue against these new uses of the technology as problems of fraud and misuse become evident; and fifth, proponents seek to demonstrate the “authenti[city]” and “reliability” of the new technology so it is more widely accepted (pp. 16–17). Especially pertinent for this study is stage 4, when opponents bewail misuses of the technology. The policies analyzed for this study suggest that, at the time of this writing, many journals and publishers are at stage 4 in their approach to ChatGPT. Baron illustrates this stage by recounting how, when erasers were first added to pencils, teachers worried students would become lazy and sloppy because they had the opportunity to erase mistakes (p. 31). Similarly, teachers now worry students will become lazy and dishonest because they can have ChatGPT generate prose for them. Academic journals and publishers worry scholars will, too.

Perhaps as a result, early responses to ChatGPT parallel initial responses to Wikipedia. As with Wikipedia, much of the concern about ChatGPT has focused on its use to create textual products—that is, as a tool that can write texts for people, especially students, rather than as a tool that people can use in their process of writing. Thus, as with Wikipedia, many initial responses have been to ban ChatGPT. Entire countries, including China, Cuba, Iran, Italy, North Korea, Russia, and Syria (Martindale, 2023; Satariano, 2023); school districts, including Los Angeles Unified School District, New York City Public Schools, and Seattle Public Schools (Johnson, 2023; Rosenblatt, 2023); and employers, including Accenture, Amazon, Bank of America, Citigroup, JPMorgan Chase, and Verizon (Sharma, 2023; Wodecki, 2023b), have banned the use of ChatGPT. Closer to academia, the International Conference on Machine Learning banned papers including AI-generated content from its 2023 conference (Wodecki, 2023a). When this study was conducted, only one of the journal and publisher policies analyzed went so far as to ban the use of ChatGPT or similar generative AI. As I revise this chapter, none do. All, however, forbid listing it as an author. Treatment of Wikipedia has changed considerably over time, transitioning from bans to recognition of Wikipedia as a potentially beneficial part of writing practices beyond just a source to cite (Cummings, 2009; Purdy, 2009). Treatment of ChatGPT and other generative AI may follow the same path, recognizing the futility of banning these technologies and the need to devise best practices and thoughtful policy.