Journal and Publisher AI Policy Statements
James P. Purdy
Writing as a Product
As part of their policies about generative AI, these publishers and journals offer frames for what writing is and does. Most policies analyzed for this study treat writing as a product. They focus on the final text submitted for publication, with attention to the accuracy (Elsevier, WAME) and correctness (C&C, WAME) of its content. Their concern is that an error generated by ChatGPT or another generative AI, be it an error in language, content, source, or citation, will not be caught and will be included in the final publication. In other words, these policies are motivated by a desire to prevent the published version of a text from being marred by error, and they place the responsibility for avoiding such error on the author (rather than, say, on themselves to check citations).
Two policies, however, mention writing as a process in explaining why ChatGPT cannot be listed as an author. According to Oxford University Press, for instance, authorship entails “significant contribution to the design and execution” of a study. Being an author entails more than writing the final publication. It requires planning and performing the research. Oxford UP’s policy argues that ChatGPT cannot be an author precisely because it contributes only to the textual product. ACL likewise asserts, “participation in the writing process is a requirement for becoming an author” (p. 3). Its explanation for why AI cannot be listed as an author is based on its assertion that authors must contribute to the writing process, not just the product. ACL, moreover, recognizes multiple ways generative AI may be used throughout the writing process, ranging from assistance with “paraphrasing” and “polishing” an author’s “original content,” to producing “low-novelty” templated text, to conducting literature searches, to generating new ideas and new textual material. According to ACL, writing entails multiple activities during which generative AI might intervene. It favors using AI for the first two of these activities rather than the last two.
Computers and Composition, perhaps not surprisingly as the journal from the study corpus most connected with English studies, frames its policy as the most process-oriented. The first sentence of C&C’s policy reads, “The below guidance only refers to the writing process.” C&C declares that “authors should only use these [AI] technologies to improve readability and language.” In this way C&C frames the writing process as the purview of humans and presents AI’s role as editing the text after it has been written. Process is for people; product is for machines.
Writing as Transactional
In addition to addressing writing primarily as a product, the publisher and journal policies analyzed for this study present writing, in Britton’s (1982) terms, as transactional rather than expressive or poetic. Such policies frame writing’s role as delivering information, as fulfilling a transaction between sender and receiver. Their main concern is that generative AI can communicate incorrect or biased information. They endeavor to ensure that information is accurate, as well they should. While unsurprising given the purpose of academic journals and publishers, this framing is limited. These policies do not discuss writing as something to be studied for its aesthetic qualities or something that fosters idea development and self-reflection. An exception is WAME (World Association of Medical Editors), which warns that “the mere fact that AI is capable of helping generate erroneous ideas makes it unscientific and unreliable, and hence should have editors worried.”
These policies focus not only on text accuracy, but also on text readability. Indeed, the main role they support for generative AI like ChatGPT is to enhance the readability of the text. For example, C&C’s policy explains, “authors should only use these [generative AI] technologies to improve readability and language” (p. 5). While readability connects to issues of style and can thereby connect to aesthetics, these policies address readability in terms of comprehension, of making the text understood by its audience. At its most extreme, this suggested use of generative AI offers style as something that can be outsourced, privileging writing as completing a transaction. As computers and writing scholars know, such a separation of style from content is deeply ingrained in web authoring principles (e.g., the separation of HTML and CSS in web development) but is not always easy or possible in practice.
That these are policies for academic journals and publishers clearly leads, in part, to their focus on transactional writing. Literary, creative writing, or other kinds of journals might place more emphasis on the expressive or poetic. Moreover, I, like Britton (1982), may be guilty of separating these functions too artificially. Writing is never just transactional or just expressive or just poetic. Still, these policies perhaps unwittingly perpetuate the problem to which Britton responded: the need to recognize that writing also has expressive and poetic functions. Misinformation is not the only consequence of ChatGPT writing for people. Consequences also include a limited notion of writing itself and of the ways in which the tools and technologies of writing inevitably shape the text that is produced.
Though not included in the corpus for this study, as it is a statement from a professional organization rather than a policy of a publisher or journal, the Association for Writing across the Curriculum (AWAC) published a statement that is noteworthy in offering a response to ChatGPT that discusses writing differently than most of the journal and publisher policies. AWAC explains the loss to learning when generative AI writes for people: writing is “a fundamental means to create deep learning and foster cognitive development”; it is “an intellectual activity that is crucial to the cognitive and social development of learners and writers.” In this way, AWAC presents the stakes of generative AI’s intervention in writing differently. Its concern is less the possibility of error in textual products and more the possibility of a loss in learning and knowledge production when people spend less time doing the “intellectual activity” of writing. AWAC and the policies analyzed for this study have somewhat different purposes, of course. AWAC focuses more directly on pedagogy; the journal and publisher policies in the study corpus focus more directly on scholarship. Still, AWAC provides an alternative response to generative AI like ChatGPT that journal and publisher policies might consider. These policies, in other words, might lament less the possibility of getting in trouble for publishing flawed content and more the possibility of outsourcing the intellectual work of scholarly writing to generative AI.