Journal and Publisher AI Policy Statements

James P. Purdy

Inclusion of Generated Content

Similar to their consistent position that generative AI does not qualify as an author, the policies analyzed for this study widely agree about when including AI-generated material is acceptable. At the time of this research, all but one policy allowed AI-generated content in submitted texts. Only Science Journals forbade the use of AI when writing an article for its journals, declaring that authors must be capable of producing work that is “original” and that generative AI chatbots like ChatGPT do not produce original content. Science has since joined the other publication venues: all policies now permit authors to submit articles with content generated by AI, provided that use is cited appropriately.

While not all policies specified how use of AI should be cited, those that did offered guidelines for appropriate citation. These guidelines entailed explicitly acknowledging use of generative AI, usually in the methods section or acknowledgements of an article or chapter. Table 3 provides particulars from the policies studied.

Table 3. Policy Guidelines for How to Cite Generative AI Use
Publisher / Journal: How to Acknowledge Use of AI

Association for Computational Linguistics (ACL) Conference: “[E]laborate on the scope and nature of their use.”

arXiv: Report use in ways “consistent with subject standards for methodology.”

Computers and Composition (C&C): Disclose use by “adding a statement at the end of their manuscript in the core manuscript file, before the References” titled “Declaration of Generative AI and AI assisted technologies in the writing process.”

Elsevier: “[I]nsert a statement at the end of their manuscript, immediately above the references, entitled ‘Declaration of AI and AI-assisted technologies in the writing process.’”

Journal of the American Medical Association (JAMA): Provide a “clear description of the content that was created and the name of the model or tool, version and extension numbers, and manufacturer.”

Nature: Document use “properly” in the methods section or elsewhere, if the text has no methods section.

Oxford University Press: “[D]isclose” use in the methods or acknowledgements section and in the cover letter to the editors.

Proceedings of the National Academy of Sciences (PNAS): “[C]learly acknowledge” use in the materials and methods section or elsewhere if no such section is part of the text.

Science Journals: AI-generated text is not allowed.

Taylor & Francis: “[A]cknowledge” and “document” use “appropriately.”

World Association of Medical Editors (WAME): “[D]eclare this fact and provide full technical specifications of the chatbot used (name, version, model, source) and method of this application in the paper they are submitting (query structure, syntax).”

Taken together, these policies dictate that readers be made aware of exactly what textual content was written by AI, what kind of AI wrote that content, what input led to that content, and why it was generated. Rather than provide specific guidelines, Taylor & Francis and arXiv defer to disciplinary conventions for what these citation gestures entail and where they appear. This approach suggests that AI documentation practices may come to differ across fields as those conventions develop, an important consideration for postsecondary institutions working to craft or update their own academic integrity policies in response to generative AI.