The Construction of Authorship and Writing in Journal and Publisher AI Policy Statements

James P. Purdy
Duquesne University


Introduction

To say ChatGPT and similar generative artificial intelligence (AI) chatbots have captured the attention of academia is a vast understatement. Indeed, since Microsoft-funded U.S. company OpenAI released ChatGPT to the public on November 30, 2022, popular postsecondary education publications like Inside Higher Ed and The Chronicle of Higher Education have, as of this writing, published an article or editorial about ChatGPT in nearly every issue. Not since Wikipedia has there been such panic about a digital technology’s potentially negative effect on education, especially on writing. ChatGPT has even been called a “plague” comparable to COVID-19, a disease that killed millions, and its release has been characterized as a “superspreader event” (Weissman, 2023).

Stakeholders have probed ChatGPT’s impacts on pedagogy (Geher, 2023; Heaven, 2023), cheating and plagiarism (Cotton et al., 2023; Dehouche, 2021), privacy (Cuthbertson, 2023; Satariano, 2023), labor (Chen et al., 2023), and other areas. Given ChatGPT’s accessibility to students, universities have scrambled to update their academic integrity policies (Barnett, 2023), and, in turn, software developers have hurried to create tools for identifying texts written by generative AI (Heikkilä, 2023; Newton, 2023). Some academic journals and publishers likewise have been quick to draft new publication policies, or update existing ones, in response to generative AI. These policies themselves, however, have yet to receive critical attention. This chapter addresses that gap. After briefly identifying affordances and constraints of generative AI like ChatGPT, it situates policy responses to ChatGPT in relation to existing computers and writing scholarship, including Baron (2000), Burns (1983), and Herrington and Moran (2001). The chapter then describes the study’s method and explains its results, including what ChatGPT itself says about publisher and journal AI policies.

These AI policies merit careful attention for two reasons. First, concerns about generative AI center on its capacity to write prose that reads as if written by a human. We worry about generative AI, in other words, because of its potential to masquerade as human and to create intellectual property. Second, all of the AI policies analyzed for this study forbid listing AI as an author on the grounds that generative AI does not meet the definition of authorship. That is, in arguing that generative AI cannot be listed as an author, these policies define what authors, and by extension writing, are and should be. We should care about these definitions because they are at the core of our work as computers and writing scholars. Based on content analysis and close reading of ten journal and publisher AI policies published within six months of ChatGPT’s public release, this study reveals that while these policies establish that authors must be human, they construct writing as transactional, in James Britton’s (1982) terms, and as a product to be assessed. Missing from these policies is any consideration of the intellectual growth and knowledge production lost when AI writes the prose that circulates in academic publications.