Drafting a Policy for Critical Use of AI

Daniel Frank and Jennifer K. Johnson

Taking it to the Classroom

Jennifer:
In addition to spurring my involvement in the development of the policy statement, Dan’s presentation ignited a turning point in my pedagogical approach to the rise of LLMs. While I was initially committed to showing students how these models would not meet their needs, my thinking quickly evolved to embrace what Dan had been peddling all along. As of Summer 2023, I began encouraging my students to use ChatGPT or similar tools at any or all stages of their writing processes, so long as they commit to later reflecting on how they used it, what it yielded, what revisions they did or did not make to AI-generated text, and so on. I’ve come to believe that the real learning happens in students’ articulation of these questions and that as they write about their experiences with ensuring the effectiveness of their texts, they are honing their rhetorical skills and their understanding of their writing and what makes it work. In this process, they are becoming better and more engaged writers.

I have embraced two key principles about students’ use of AI text generators. First, I have accepted that our students will be entering a workforce where these tools will be readily available and where they will be expected to use them effectively. It then stands to reason that I would be remiss in not encouraging them to begin using the tools now, while they are still in school. Second, I have realized that in order for these tools to be used effectively, users must develop solid prompt engineering skills, which may not come as naturally as the more intuitive Google search has to most people. As such, students need opportunities to develop these skills, and those opportunities should be built into their writing classes.

It is ironic to me that the chat mechanism that enabled ChatGPT to explode in popularity—thus inciting panic in many educators—is also a means of enabling students’ critical thinking and engagement about writing in new and fruitful ways. Whereas previous technologies such as browser search bars require users to simply input key words or phrases, ChatGPT works best when users engage it in a dialogue. Doing so effectively requires critical engagement, as users must consider how to prompt it to yield something valuable and then carefully consider that output to determine how rhetorically effective it might be, given the rhetorical situation. This process is exactly the sort of engagement that I hope for my students to participate in, as I want them to develop an ability to determine what does and does not work in a given text and to be able to articulate why. In this way, LLMs like ChatGPT have the capacity to support my teaching goals, rather than to detract from them. As I recently overheard someone say at a conference, “In the age of LLMs, writing will likely become rewriting.” While both the veracity and the efficacy of this statement can be debated, it does seem clear that, at least for now, ChatGPT requires much more engagement and authorial intent to yield results than does, say, a Google search.

And while some students may intuitively recognize and be prepared to engage in a fruitful dialogue with tools like ChatGPT, others may need support in learning how to prompt these tools to produce useful output. If my role is to prepare students for the writing they will do personally, professionally, and civically once they leave the university, then arguably the advent of tools such as ChatGPT has expanded my responsibility to help students prepare to do this work effectively in a society that will undoubtedly feature AI tools and expect those entering the workforce to utilize them and engage with them in productive ways.

Moreover, as someone who has taught upper-division business writing courses for the past 20-plus years, I have realized that ChatGPT and tools like it can help solve a conundrum I have long wrestled with in those courses. My business writing courses have been the site of dual but conflicting messages: on the one hand, students should have a right to their own languages, as the National Council of Teachers of English argued in 1974; on the other hand, business writing requires adherence to Standard American English conventions. Until recently, I had three not-so-great avenues for handling these conflicting principles: (1) I could give these students a poor grade for their inability to generate texts with “flawless” Standard American English; (2) I could offer them extra help (read: I could work with them line-by-line to show them the grammatical issues in their texts) in their efforts to meet these standards; or (3) I could send them to the writing center in the hopes that the tutors there could help them identify patterns of error and edit their texts. ChatGPT gives me a fourth, and I now think far better, option: inviting them to run their prose through the technology for help with grammar and style conventions. These models thus hold the potential to increase equity, as students—particularly those for whom English is not their first language—can utilize them to “clean up” their prose and make it conform to academic and professional expectations.

Dan:
Such interesting pedagogical potential in that. Every student can now have a personal tutor, if they know how to ask for it. But knowing what to ask for, and how, is the key here. Students can ask an LLM to rewrite, restructure, or modify a paragraph, or a sentence, or even revise a single word. Or they can take this one step further and ask for multiple revisions. Or they can ask for multiple revisions alongside a paragraph that reflects on what choices were made in the construction of each revision and why. Each approach falls at a different point on a spectrum of student agency, reflection, and potential for growth. Our position as teachers, then, is to push students along that spectrum to get them to really interact with the tool, think through the choices at play, and critically engage. In my classes, I present a version of the same information I produced for my colleagues with the AI Writing Primer and the Policy. These are the three golden rules I drill into each class:

Here are a few guidelines to help you make the most of AI writing tools while maintaining your own voice and rhetorical agency:

Think Critically: AI writing tools such as ChatGPT do not “know” things; they just produce language patterns. They are prone to “hallucinating” information that may be convincing yet is often incorrect and/or uncitable. It is essential to critically evaluate the output and cross-check any facts, claims, or sources provided by the tool. In general, you shouldn't rely exclusively on these tools for research or content. You need to personally provide the intent, the ideas, and the research.

Acknowledge the Use of AI: If you incorporate ideas, text, or inspiration from AI writing tools in your work, you must acknowledge their use in your author's notes or attributions. How did you use the tool? To what extent? In what part of the writing process? How did it help? This will help you develop your reflection and metacognition, and will promote a culture of reflective, responsible, transparent AI usage.

Protect your Rhetorical Sovereignty: It is crucial to ensure that your own voice, creativity, and innovation are not surrendered to the AI. Remember that rhetorical utterances are powerful: a statement can shape our view of reality. If the tool creates a sentence, you will be influenced by it, and you may never know what network of thinking could have been created if you had built the sentence yourself. We write to learn and we write to think; don't let these tools control your thinking.

After frontloading the areas of critical concern for them, I try to guide students into thinking through the technology conversationally, iteratively, and rhetorically. I tell my students the following:

ChatGPT tends to produce repetitive, formulaic text. It won't necessarily innovate or surprise you. It also can't do real research—it may even fabricate information. It doesn't actually “know” anything on its own. It just mimics patterns. However, ChatGPT can help you brainstorm ideas, play with language, and iterate on drafts. Ask it for multiple revisions to improve responses. Give it explicit instructions and source material to produce better results. Treat it as a tool for generating language, not facts. Verify anything it says through outside research.

Don’t stop at just one input and output: ChatGPT is a conversation. Go back and forth. Iterate with it. Use ChatGPT to experiment with different voices, genres, styles, hooks, and argument flows. Ask it for word choice suggestions and metacommentary on your writing. Just don't let it override your own thinking and goals. Make sure to attribute any language you get from ChatGPT. The key is using ChatGPT as a mediated tool to augment your skills, not replace them. Our discussions will focus on using AI ethically and effectively to support your writing process.

In my classes I’ve run multiple discussions with my students about this tool, asking them about their experiences with it and running through its critical points, cautions, and potential uses. I then break the students up into groups and ask them to play with the tool themselves, creating and then discussing, evaluating, and revising (either through prompt iteration or by hand) the writing that it produces.