Reconsidering Writing Pedagogy in the Era of ChatGPT
Lee-Ann Kastman Breuch, Asmita Ghimire, Kathleen Bolander, Stuart Deets, Alison Obright, and Jessica Remcheck
Discussion
Our sessions with undergraduate students yielded many rich insights. To help make sense of our findings, we return to our overall research questions, which included the following:
- How are undergraduate students understanding ChatGPT as an academic writing tool?
- To what extent are students incorporating ChatGPT into their writing product(s)?
- How are students thinking about ChatGPT in their writing process?
How are undergraduate students understanding ChatGPT as an academic writing tool?
Our findings suggest that students in our study saw both the pros and cons of ChatGPT as an academic writing tool, and they also had several questions. In addition, as our filter analysis demonstrated, students had a multidimensional response to ChatGPT. Regarding pros, students in this study consistently rated ChatGPT texts on the high end of the scale when asked about expectations and satisfaction. Students also rated ChatGPT highly in terms of the “relevance” of information provided in ChatGPT texts. Moreover, students used overwhelmingly positive words to describe their experience with ChatGPT texts, but often in terms of ChatGPT functionality (noted as “function”), citing words such as “time-saving,” “fast,” “convenient,” and “efficient.” Similarly, qualitative student comments coded in the filter of “low order concerns” demonstrated the ways students might view ChatGPT texts uncritically, noting the tool’s ability to produce texts quickly, with relevant content, correct grammar and mechanics, and overall clear writing. Perhaps the biggest benefit students noted was the way ChatGPT texts might help them at various points in the writing process, whether getting started with writing assignments, organizing ideas, or even editing content. Overall, students were impressed with the initial capabilities of ChatGPT in multiple areas: error-free prose, logical organization, relevant content and ideas, and fast production. Said differently, students’ first impressions of ChatGPT texts might reflect an overly positive response to prose that is quickly and clearly produced.
However, these initial reactions to ChatGPT changed as students read ChatGPT texts more closely. As qualitative comments demonstrated, students also pointed out a number of concerns and questions about ChatGPT texts. Through qualitative coding, our study categorized these concerns across eight filters, suggesting that students had a multidimensional response to ChatGPT as they considered the tool within larger rhetorical contexts of academic writing. The most frequent issue noted by students was a concern about “information literacy,” which was expressed through critiques about the information included in the texts: whether it was credible (or even real), where it came from, and whether citations were present or absent. Students also noted the basic nature of ChatGPT texts and the lack of depth or idea development, which we coded through the filter of “logic and organization.” Several students questioned whether ChatGPT texts actually reflected quality college-level writing. Students were also concerned about the ethics of using ChatGPT texts, which they expressed through questions or assertions about plagiarism and the ways in which ChatGPT displaced the voices of students as authors. In addition, regarding ethics, some students expressed concerns about the ways ChatGPT might stifle the learning process, especially if writing is seen as a learning activity in the academy. As one student asked, “what does [ChatGPT] mean for research?” and what does ChatGPT do to the future of education? These ethical concerns were connected to student comments coded in the “self and experience” filter, in which students expressed ways they would want to revise or change ChatGPT texts to include more of their individual ideas and thinking. Many students said they would not use ChatGPT-produced texts for academic assignments simply because the texts were not their own work. They emphasized the importance of ownership in their work and even said it would be easier to do the work themselves than to edit ChatGPT-produced texts. And, as our findings showed, students had a number of questions about ChatGPT, including how it worked and whether it was acceptable for use.
To what extent are students incorporating ChatGPT into their writing product(s)?
Our study could not adequately answer this question about using ChatGPT for homework because the sample prompts it presented were hypothetical and not placed in the context of real classes. We had hoped to learn from students whether they would be inclined to integrate ChatGPT texts as their academic homework; however, because most of the prompts were not connected to any actual class contexts, students could not answer this question. One task came closer to academic homework: it asked students to write a prompt addressing a writing assignment they might complete for their major. While we did not require students to add a prompt from an actual class for this task, many of them wrote prompts reflecting work they had done for a class in their major. We did ask students to rate the likelihood that they would use ChatGPT texts “unaltered” as their academic homework; average ratings from students for each of the five ChatGPT texts ranged between 1.5 and 2.5 on a scale of 1 to 5, with 1 being not very likely and 5 being very likely to use texts unaltered. These were the lowest ratings our study recorded, suggesting that students were not overly enthusiastic about using ChatGPT texts for their own homework, or at the very least, that they had some big questions about doing so. While these ratings reflect student caution, again, we cannot be sure of these results given the overall hypothetical nature of the usability test. We also note that students may have reacted negatively to this question because all research team members were faculty and graduate students from the Writing Studies Department. This reality of the usability sessions may have influenced student answers to this question.
While we cannot draw any conclusions about whether students would use ChatGPT texts unaltered based on this study, our data showed that students have many questions about using ChatGPT texts for their academic homework. As noted in earlier sections, students articulated questions coded in the “ethics as filter” category. Those questions included concerns about plagiarism and the ethics of using auto-generated texts instead of one’s individual ideas and expression.
How are students thinking about ChatGPT in their writing process?
Of all of our research questions, our study yielded the most information on this question about writing process. Through ratings of ChatGPT texts, we learned that students highly rated the likelihood that they would use ChatGPT to generate ideas as part of their academic writing process. In addition, these results were supplemented by our coding of qualitative comments in the “process as filter” category, which was the most frequently coded category in our data set.
In response to many of the prompts, students articulated ways that the ChatGPT texts were useful as starting points as illustrated by student comments such as: “I would definitely take a look at those sources and maybe use those sources as a jumping point” (Participant G, ST5 PT3). Similarly, another student responded to the literacy narrative text produced by ChatGPT and said “seeing this as an overall like as a story kind of gave me ideas of . . . what I would need to do and then I could apply that to my own my own life for a different story that I wanted to tell” (Participant L, ST1 PT2). Students also commented on ways that ChatGPT would be helpful for creating an outline or initial organization of ideas: “If I was like really struggling, I can see how it'd be helpful just to get an idea of like an outline” (Participant QQ, ST1 PT2). Students were also aware of the line between idea generation and copying ChatGPT: “I think this is like a good way to generate an idea for like a paper or something. but not necessarily copying and pasting” (Participant WW, ST1 PT2). Students were impressed by the ways ChatGPT generated ideas, included relevant content, provided initial outlines of organization, and provided a springboard of sorts for further writing.
The findings of this study are important because they go against the depiction of ChatGPT as a technology that promotes writing product over the notion of writing process. As Sid Dobrin (2023) noted, ChatGPT naturally brings up the oft-cited binary in writing pedagogy of “product versus process” (p. 22). This binary means that writing can be described both in terms of a finished writing product, such as a report, a poster, a tweet, or a memo, and in terms of a writing process that includes activities such as prewriting, writing, and rewriting that lead to the development of written products. ChatGPT raises this binary because the technology produces a product, literally in seconds, in response to a specific prompt, and this function immediately raises the question of whether students would use ChatGPT as a substitute for the writing process, much like calculators have been said to replace the thinking processes involved in working through complex calculations. It is tempting to think of ChatGPT as a “writing calculator.” As we have conducted this study, we have often heard the questions “has ChatGPT replaced writing?” and “is writing over as we know it?” The results of our study, while exploratory and limited to a small sample, support the idea that ChatGPT does not replace writing, that it is not a writing calculator, and that it could even be useful in one’s writing process. Student responses noted the usefulness of seeing a sample text from ChatGPT as a way to generate ideas and to see an initial organizational structure of ideas.
Limitations
Reliability and validity are key concerns for all research. With that in mind, we caution against extending the results we found in this study to other populations of students for a few key reasons, and we suggest further areas for research and exploration. First, it is possible that students were not likely to reveal their true thoughts and responses to the texts because we shared that, as moderators, we were graduate students and a faculty member in the Writing Studies department. The students may have been unconsciously intimidated, and this may have biased our results. While all moderators were instructed carefully and followed the script we developed, this potential bias is difficult to correct for given the nature of the study. Second, our sample of students was heavily weighted towards students in the liberal arts at a large public university in the Midwest, although we did work with some students in other categories of major. Further studies may reveal different likelihoods, satisfaction, and responses to using ChatGPT and LLMs, given different academic backgrounds. While we think that the results we found here are reasonably extendable to other groups of students, we caution against extending this work too far. Moreover, there may be different attitudes at small private universities, which may have a different make-up of students, and geographic differences across the country may also influence the results. And third, we chose to use only one particular LLM: the freely available ChatGPT 3.5. At the time of the study, 4.0 had been released, but as a team we agreed to focus on the 3.5 model because it was freely available to students and it is most likely that students would not pay the $20 monthly subscription fee for access to 4.0. Both the 3.5 and 4.0 models are regularly updated, and changes since then may influence and change results. We suggest that one future direction for research is to use more advanced models, and models from other companies such as Google Bard, LLaMA, and PaLM, among many other tools, when doing research on students’ attitudes towards using LLMs.