Reconsidering Writing Pedagogy in the Era of ChatGPT

Lee-Ann Kastman Breuch, Asmita Ghimire, Kathleen Bolander, Stuart Deets, Alison Obright, and Jessica Remcheck

The Rise of AI and ChatGPT

Generative AI technologies have advanced and become more accessible, drawing the attention of writing instructors across the world. Generative AI technologies such as ChatGPT are built on “pre-trained” large language models (LLMs). These models are trained on large data sets to predict the next words in phrases and sentences based on language patterns. While generative AI feels new, Duin and Pedersen (2021) noted that these technologies have been around for some time. And, as Anson and Straume (2022) explained, generative AI recalls a long list of technological innovations that have given educators pause, from the Internet and web back to technologies such as the telegraph (p. 1). In short, we have experienced the shockwaves of new technologies before. Yet ChatGPT offers a capability we have not seen before: producing complete texts in a very short time in response to a prompt. Writing scholars are grappling with this latest technological innovation.
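
For readers unfamiliar with how such prediction works in practice, the short sketch below illustrates the underlying "predict the next word" mechanism using the open-source Hugging Face transformers library and the small GPT-2 model; the library, model, prompt, and settings are our illustrative assumptions rather than a description of ChatGPT itself, which relies on much larger proprietary models.

    # Minimal sketch of next-word prediction with a pre-trained language model.
    # Assumes the open-source Hugging Face "transformers" library and the small
    # GPT-2 model (illustrative assumptions; ChatGPT itself uses far larger,
    # proprietary models accessed through an interface).
    from transformers import pipeline

    # Load a small pre-trained model that generates text one predicted token at a time.
    generator = pipeline("text-generation", model="gpt2")

    # A hypothetical prompt, used only for illustration.
    prompt = "The writing process begins when a writer"

    # Ask the model to continue the prompt by predicting the next several words.
    result = generator(prompt, max_new_tokens=20, num_return_sequences=1)
    print(result[0]["generated_text"])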

The emergence of ChatGPT has raised new questions about writing, writing pedagogy, and writing processes. Some scholars understand this advancement as a new milestone in the history of writing and composition, while others consider it a glimpse into the future. Scholars in writing studies seem to embrace this future, outlining how our writing theory and practice might shift and change while also noting caution and ethical considerations. For example, in Writing Futures: Collaborative, Algorithmic, Autonomous, Duin and Pedersen (2021) outlined a framework for future writing that involves collaborative, algorithmic, and autonomous technologies. They described ways in which autonomous AI agents can develop a co-constitutive relationship with human agents by learning from them and mining their data (pp. 89-90). However, they also raised questions about AI writing through technologies such as GPT-3.5, GPT-4, and chatbots, especially regarding literacy and human agency. They wrote, “How will writers contextualize future uses of digital-assistant platforms throughout writing? How will literacy practices change with the uses of autonomous agents? What affordances of autonomous agents lend themselves to more ethical, personal, professional, global, and pedagogical deployment?” (Duin and Pedersen, 2021, p. 86). Similarly, the Association for Writing Across the Curriculum articulated both hope and concern about generative AI in its draft statement “Artificial Intelligence Writing Tools in Writing Across the Curriculum Settings” (Hesse et al., 2023). They asked, “how might the act of critiquing, rewriting, or discussing AI-generated text foster growth? Are there scenarios where student writing could productively be complemented, supplemented or assisted by AI language generators? Could this happen in ways that do not preempt student learning and growth?” (Hesse et al., 2023, para. 6).

Another perspective in writing scholarship forwards the idea that technologies such as ChatGPT do not fundamentally change our pedagogy but rather offer writing scholars and teachers an opportunity to reexamine writing, taking account of both the benefits and drawbacks of these technologies. In “Amazement and Trepidation: Implications of AI-Based Natural Language Production for the Teaching of Writing,” Anson and Straume (2022) opined that ChatGPT would not replace the fundamental writing processes of thinking critically, weaving ideas and experiences, engaging with peers and their feedback, and developing communication in response to rhetorical situations (p. 6). For them, ChatGPT invites us to rethink the fundamental skills of writing while encouraging us to engage with the tool. In other words, Anson and Straume suggested that tools such as ChatGPT can help us reexamine the pedagogical theory of writing. They wrote,

Perhaps we should take a step back and ask ourselves if the greatest problem is whether students write per se. For are there not larger issues at play here? … Rather than trying to combat or extinguish tools that we fear will subvert students’ learning, instructors could bring them into class, have students work with them, and analyze their outputs. By creating awareness, not least among the students as a group, ethical and practical dilemmas could be addressed. (p. 7)

Akin to this suggestion, Baidoo-Anu and Owusu Ansah (2023) assessed the pros and cons of ChatGPT in their article “Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning.” Baidoo-Anu and Owusu Ansah identified benefits of ChatGPT including personalized tutoring, automated essay grading, language translation, interactive learning, and adaptive learning (pp. 7-9). They also identified downsides of ChatGPT such as the lack of human interaction, limited understanding, bias in training data, lack of creativity, and lack of contextual understanding (pp. 9-12).

Elsewhere, scholars contributing to the Composition Studies special issue on AI (2023) have suggested that ChatGPT has not entirely changed the dynamics of writing, seeing this new technological innovation instead as a needed impetus to rethink our pedagogical theory and practices. As an example, in “Don’t Act Like You Forgot: Approaching Another Literacy Crisis by (Re)Considering What We Know about Teaching Writing with and through Technology,” Johnson (2023) argued that the emergence of ChatGPT has invited us to rewrite our threshold concepts of writing. He suggested that earlier principles in threshold concepts of writing might be rewritten in the following ways:

  1. Technologies are never neutral and always political, material, and rhetorical;
  2. New technologies build on and expand from old technologies;
  3. Technologies must be taught;
  4. Policy is not pedagogy. (Johnson, 2023, pp. 170-173)

In another instance of “re-seeing” our pedagogy through generative AI technologies, Scott Graham (2023) reexamined the idea of post-process pedagogy and proposed that ChatGPT provides an opportunity to consider writing as a multidimensional recursive process. His idea of a multidimensional recursive process would integrate fact-checking, curating, prompting, and revising as important stages in writing (pp. 165-167). In a similar vein, in “Large Language Models Write Answers,” Annette Vee (2023) noted how writing tasks change through the use of LLMs; that is, writing becomes a quest to answer a specific question or prompt. While this may indeed be a new way of viewing writing, Vee reminded us that writing is still about critical inquiry, meaning that writing is a process-based inquiry into productive uncertainty. We need not abandon this understanding of writing, even though tools like ChatGPT allow us to structure writing tasks differently. Said differently, whereas technologies like ChatGPT are very good at producing answers to prompts, they do not necessarily replicate the ways writing is a means to productively locate oneself in the world.

Scholars have also begun discussing ethical concerns about generative AI technology in relation to plagiarism. While scholars have noted the obvious concerns about plagiarism (Anson and Straume, 2022), few have taken strong stances on prohibiting the use of ChatGPT or related AI technologies. Sidney Dobrin (2023) addressed plagiarism through the concept of authenticity, noting that technologies such as ChatGPT challenge notions of authenticity and authorship in writing. While he did not outline a policy regarding ChatGPT and plagiarism, he suggested that in using ChatGPT, students will need to be self-directed and consider authenticity, academic integrity, and learning simultaneously (p. 12). He cautioned against using technologies to police or surveil student writing and encouraged students to think critically through situations in which generative AI might (or might not) be useful. He further added that questions of authenticity and plagiarism are also related to culture: “It’s important to note as well that concerns about GenAI and plagiarism are very much Western worries, as the very idea of plagiarism is not inherently universal…instructors, researchers, and administrators will need to reassess and redefine plagiarism and academic honesty both for student work and for their own research” (p. 15). Dobrin offered helpful starting points but ultimately suggested that generative AI technologies need to be considered contextually.

Combined, these scholarly arguments provide a backdrop for our study. Writing scholars have outlined helpful questions about writing and AI, benefits and drawbacks of the technology, ways our threshold concepts might change, ethical implications of using the technology, and reminders of how we might sustain writing as a practice of critical inquiry. Overall, we sense that scholars in writing studies advocate embracing generative AI technology through a constructive yet critical lens.