Journal and Publisher AI Policy Statements

James P. Purdy

AI as Author

All policies analyzed for this study agree that AI should not be listed as an author of an academic publication. All base this decision on how they define authorship. These policies assert that authors are people defined by three characteristics: authors possess integrity, they can assume responsibility for content accuracy, and they can be held accountable for research-writing decisions. These policies argue that because AI chatbots like ChatGPT cannot fulfill these criteria, they cannot be authors.

The first characteristic is that authors are honest. The policies associate authorial activities with a desire for a text to have “integrity,” a term used by six policies: Elsevier, JAMA, Nature, PNAS, Taylor & Francis, and WAME. They associate authorship with being “accurate” (e.g., Elsevier), “original” (e.g., Science), and “aware” (e.g., ACL). These policies suggest that authors are people who care about the quality of the texts they write.

The other two characteristics these policies associate with authorship extend from this idea. Because authors have integrity, they take responsibility for the quality of the text and can be punished for failing to do so. The second characteristic is that authors assume accountability for the accuracy of textual material. In other words, according to these policies, authors not only desire to behave with integrity but also actively take responsibility for doing so. For example, arXiv explains that authors must be able to take “full responsibility” for textual content. Similarly, Elsevier indicates that authors have the responsibility to ensure the “accuracy or integrity” and originality of their work. Computers and Composition, an Elsevier journal, follows this policy in contending that “authors are ultimately responsible and accountable for the contents of the work.” JAMA likewise affirms that authors must be able to “take responsibility for the integrity of the content” generated by other tools. ACL explains that authors are people who have “cognition, perception, agency, and awareness” (p. 7). PNAS agrees, defining authorship by the ability to take “responsibility for the paper.”

The third characteristic follows from the second. Several policies maintain that authors are not only people who can be held responsible for their decisions but also people who can be punished for failing to meet those responsibilities. For Elsevier, being an author requires the “ability to approve” the text. That is, authors must have the capability to make a judgment about the suitability of the text for publication. According to Nature, taking “responsibility for the content and integrity of scientific papers” includes taking “legal responsibility” for those decisions. In other words, authors must be entities who can suffer legal consequences for including inaccurate content or for violating sanctioned source-use practices or standards of integrity. Similarly, PNAS explains that authors must be able to “[b]e held accountable for the integrity of the data reported.” For Taylor & Francis, being an author means being “accountable for the originality, validity and integrity” of a publication. The ability to be held accountable starts with the ability to consent to publication and ends with suffering punishment for publishing flawed content, which these policies assert ChatGPT cannot do.

Though not all publisher and journal AI policies explicitly mention all three characteristics, the characteristics are deeply intertwined in the policies. According to these policies, because authors have integrity, they take responsibility for the accuracy and originality of their writing and can be punished for violations. As WAME puts it, authors must ultimately have the ability to “understand” what it means to be an author. Taken together, these policies suggest that being an author requires a level of metacognition: authors must know they are authors and understand the ramifications of their authorial decisions.