Making It So: What Students Actually Think about Generative Artificial Intelligence Use in Their Academic and Personal Writing Lives
Jeanne Law, Kennesaw State University
James Blakely, Kennesaw State University
John C. Havard, Kennesaw State University
Laura Palmer, Kennesaw State University
Introduction
Academic responses to the emergence of generative artificial intelligence have followed a perhaps predictable pattern, similar to responses to previous technologies that disrupted writing instruction, such as the word processor and the World Wide Web. Tech evangelists such as Bill Gates observed that concerns about artificial intelligence have historical precedent and that although the technology would require adaptation, those challenges could be navigated (Gates). Early academic responses, however, tended to be more histrionic. For instance, in the widely circulated Chronicle article “Will Artificial Intelligence Kill College Writing?”, Jeff Schatten succinctly summarized the early fears provoked by the technology by asking, “If anyone can produce a high-quality essay using an AI system, then what’s the point of spending four years (and often a lot of money) getting a degree?”
Prompted by these fears, early discussions often centered on the danger that student use of artificial intelligence would erode student writing. The Association for Writing Across the Curriculum, for instance, was compelled to stress the dangers to learning posed by student reliance on artificial intelligence, stating that “As scholars in the discipline of writing studies more fully explore the practical and ethical implications of AI language generators in classroom and other settings, we underscore this: Writing to learn is an intellectual activity that is crucial to the cognitive and social development of learners and writers. This vital activity cannot be replaced by AI language generators.” As such, much early discussion focused on policing student AI use through the development of AI detectors and other means. However, while detectors may be useful as a starting point for identifying inappropriate AI usage, they have thus far proven unreliable, producing false positives (Nelson) and exhibiting bias against non-native English speakers (Liang et al.). It may therefore be impossible to police artificial intelligence use entirely at this time, and in any event it is pedagogically problematic to respond to students’ adoption of a technology that will be widely used outside education by surveilling, disciplining, and punishing them.
With this growing realization, scholars have turned to more nuanced models for asking students to engage with artificial intelligence. This shift is reflected in the statements of the MLA-CCCC Joint Task Force on Writing and AI. In a correspondence statement on National Priorities for Artificial Intelligence, for instance, the Task Force articulates a number of concerns regarding AI, calling for an AI Bill of Rights ensuring citizens and consumers are apprised when writing is AI-sourced; consideration of linguistic diversity in the regulation of AI so that AI does not further diminish endangered languages; and transparency around the corpora used to train large language models. However, they also acknowledge writing instruction as a critical site for ensuring that students are educated on such issues. For instance, they stress that given the potential challenges posed to democracy by the proliferation of AI-generated content, writing instructors have a responsibility to teach students how AI writing works and how to recognize it. Accordingly, they call for support for teachers at all levels as they adapt their teaching methods and materials: if the United States wants to prepare an educated citizenry to interact critically with AI-generated content, it must provide resources to educators. However, we think it essential to recognize that, given the scale of the time and resource commitment required of teachers and institutions, existing educational institutions and funding streams are not adequate to support the rapid development of curricula for critical AI literacy to supplement existing digital literacy curricula.
Similarly, in their first published working paper, while the Joint Task Force affirms “that higher education’s specific institutional role of credentialing the achievements of students as individuals means that generative AI cannot simply be used in colleges and universities as it might be in other organizations for efficiency or other purposes” (2), they also stress that the technology “has the promise to democratize writing, allowing almost anyone, regardless of educational background, socioeconomic advantages, and specialized skills, to participate in a wide range of discourse communities. These technologies, for example, provide potential benefits to student writers who are disabled, who speak languages other than English, who are first-generation college students unfamiliar with the conventions of academic writing, or who struggle with anxiety about beginning a writing project. They also augment the drafting and revising processes of writers for a variety of purposes” (8). They similarly point out a number of pedagogical uses, from applications in the writing process to classroom activities such as using the technology as a proxy for dictionaries (9-10).
Further evidence of this trend may be found in the introduction to the WAC Clearinghouse’s TextGenEd repository of AI assignments, in which Tim Laquintano, Annette Vee, and Carly Schnitzler explain the different paradigms composition instructors use to approach AI in their classes: “prohibition,” “leaning in,” and “critical exploration.” Of prohibition, they explain that “We are skeptical this will be a viable model,” and moreover that “complete prohibition might very well lead to an eventual de-skilling of students.” At the same time, they argue that “leaning in” is “an uncritical stance that accepts the discourse of inevitability” and “is unlikely to empower students or educators.” They therefore advance a third model, “critical exploration,” in which instructors teach students the ethics of using AI and AI literacy regarding how the technology works, and develop strategic assignments meant to help students understand those issues while also ensuring the integrity of writing instruction in those courses. The TextGenEd collection provides a uniquely comprehensive repository of assignments pertaining to “rhetorical engagements,” “AI literacy,” “ethical considerations,” “creative explorations,” and “professional writing.” They ultimately observe that “As Big Tech rushes ahead in its AI arms race with the intention of having large language models (LLMs) mediate most of our written communication, writers and teachers are forced to consider issues of prompt engineering, alignment, data bias, and even such technical details as language model temperature alongside issues of style, tone, genre and audience.” The growing presence of such repositories suggests how readily writing teachers have begun to accommodate AI in their teaching practices.
While the authors of this paper concur with Laquintano, Vee, and Schnitzler that it is important for faculty to think about how to engage students with AI, we find that the foregoing scholarly discourse has paid limited systematic attention to the experiences of students. Yet students bring with them preconceptions and assumptions about AI that will inevitably shape their reception of, and engagement with, AI assignments. The present study intervenes in this discussion by providing data on student perceptions of AI at a large state R2 institution and by discussing the pedagogical implications of those findings.