Journal and Publisher AI Policy Statements

James P. Purdy

ChatGPT Weighs In

As has become typical of scholarship about ChatGPT, I asked ChatGPT to write content germane to the topic of this article. I prompted ChatGPT² to “write a policy for academic publishers about accepting text written by ChatGPT.” Appendix A provides its full response. In section 2 of this policy, ChatGPT notes that it should not be listed as an author. Like the policies analyzed for this study, it explains that authors are the people who use ChatGPT. Also like most of the policies analyzed for this study, ChatGPT indicates that authors should disclose what text was written by ChatGPT. Presciently anticipating the policy of WAME, ChatGPT’s response indicates that academic publishers may require authors to provide the prompts submitted to ChatGPT and the responses it generated.

I also asked ChatGPT directly, “Should ChatGPT be listed as an author on academic articles?” Its full response comprises Appendix B. Again, ChatGPT agrees with the policies analyzed for this study that ChatGPT should not be listed as an author on academic publications. Again like these policies, ChatGPT bases its decision on the definition of an author. It indicates ChatGPT “does not possess authorship capabilities.” ChatGPT goes on to identify these capabilities as “the ability to conceive ideas, conduct research, or contribute to the scholarly content of an article.” Even for ChatGPT, then, being an author means generating ideas, researching, and creating content. Perhaps ironically, it focuses more on process and less on product.

Conclusion

This analysis of journal and publisher AI policy statements in response to ChatGPT reveals that we as computers and writing teacher-scholars still have work to do to circulate more broadly the notion that the technology of writing makes meaning. The policy statements analyzed for this study reinforce that writing is an ethical, human activity. They also construct writing primarily as a transactional activity and as a product to be assessed. These policies center on the concern that generative AI like ChatGPT will fabricate data or introduce errors that human authors will not review, find, and correct; in other words, that the written product will be flawed. This concern is, to be sure, well founded.

But that is not the only, or even the most concerning, problem. With a few exceptions, these policies do not address what intellectual growth and knowledge production are lost when AI writes our prose. They focus on what happens to the writing we create rather than on what happens when we no longer create our writing.

It is incumbent on us as computers and writing teacher-scholars to seize this opportunity to evangelize what we already know well: that writing makes meaning in the world and that this meaning is shaped by the tools and technologies of writing. It is the process of writing that generates new knowledge and the product of writing that shares that knowledge with others. Academic journals and publishers would do well to promote this view in their policies on generative AI. Along with lamenting the possibility of being faulted for publishing flawed content, they should lament the possibility that the intellectual work of scholarly writing will be outsourced to generative AI, with losses for ourselves, our discipline, and our intellectual property.

Furthermore, given their classification of chatbot text as generated by a writing tool rather than a human author, these policies might also reinforce that all writing tools, including but not limited to generative AI, should be identified or cited in a text. For instance, such tools might be referenced in methods sections, particularly in scientific writing that conventionally lists the materials used for research. Though members of the computers and writing community often position writing tools as objects of analysis in their work, they rarely describe explicitly what word processing software, citation managers, image editors, apps, or other writing tools and technologies they used to compose, create, deliver, and circulate their texts. Perhaps they should. Doing so would make such technologies more visible and would reinforce that writing itself is one of many technologies on which textual production depends. Generative AI will now increasingly be added to that list.

This study is limited by its attention to a convenience sample of a small number of publisher and journal policies published shortly after ChatGPT’s public release. Conclusions cannot be drawn about all such policies. Future work could compare and contrast later journal and publisher AI policies with the early policies studied here to identify the extent to which they have changed and to consider what those changes mean for our evolving understanding of authorship in a world of generative AI. Future work might also study additional policies to determine whether the views identified in this chapter represent prevailing perspectives. Still, this chapter provides a starting place for understanding the policies that regulate academic publication and what they say about the major foci of our subfield, computers and writing: writing and writers.

² I used the freely available ChatGPT-3 to write this content.