Rhetorical DIKW

Patrick Love

What’s in a Prompt? Narrowing Ecologies to Situations for ChatGPT

Interacting with ChatGPT is a rhetorical situation in which the prompt serves as the situation ChatGPT draws on to inform its responses. As such, one can shape ChatGPT’s output by being more direct about the situation one wants it to write in: 1) giving ChatGPT an identity, 2) making a request, and 3) specifying the output (“As a freshman college student,” + “write a discussion post on rhetoric” + “in a paragraph of at least 200 words”). After seeing results, one can further adjust, provide examples to emulate, or add other miscellany by conversing with ChatGPT about the identity, task, or output. Detail and specificity in prompts tune the response ChatGPT provides; prompts narrow the data ChatGPT will include in its response and specify the information the writer wants. Prompts, in other words, narrow the ecology-as-data down to a rhetorical situation for ChatGPT’s work, meaning the writer needs a sophisticated command of that situation to get the ‘best’ results. The writer, therefore, needs to know about the ecology and situation to judge whether ChatGPT has produced something useful or valid; the writer needs an idea of the data necessary to produce the information wanted, and lived experience to effectively agree or disagree with the information ChatGPT assembles. For instance, consider the differences between starting a draft with “write an explanation of rhetoric” (figure 8) and “as a writing program administrator with over 20 years of experience who is known by your faculty for being approachable and accessible both in person and in writing, write a paragraph introducing the concept of rhetoric, highlighting the connection between its classical existence and the digital writing landscape of today in plain English without filler words” (figure 9).

Figure 8: Screenshot of prompt and response from ChatGPT captured October 13, 2023, depicting ChatGPT describing Rhetoric with short paragraphs and lists in response to a generic prompt.
Figure 9: Screenshot of prompt and response from ChatGPT captured October 8, 2023, depicting ChatGPT describing Rhetoric from the perspective of an experienced and friendly Writing Program Administrator, as directed.
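
The same identity-request-output structure can be made concrete for readers who reach ChatGPT programmatically rather than through the web interface. The sketch below is a minimal illustration, not a prescription: it assumes the openai Python package (v1.x) and an API key available in the environment, and the model name and prompt wording are placeholders standing in for the writer’s own rhetorical choices.

    # A minimal sketch: composing an identity + request + output-specification
    # prompt for ChatGPT. Assumes the openai Python package (v1.x) and an
    # OPENAI_API_KEY environment variable; the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    identity = (
        "You are a writing program administrator with over 20 years of "
        "experience, known by your faculty for being approachable and "
        "accessible both in person and in writing."
    )
    request = (
        "Write a paragraph introducing the concept of rhetoric, highlighting "
        "the connection between its classical existence and the digital "
        "writing landscape of today."
    )
    output_spec = "Write in plain English without filler words."

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-capable model could stand in
        messages=[
            {"role": "system", "content": identity},                  # identity
            {"role": "user", "content": f"{request} {output_spec}"},  # request + output
        ],
    )
    print(response.choices[0].message.content)

Whether the identity travels in a system message or sits at the front of a single typed prompt, the rhetorical work of specifying it remains the writer’s.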

ChatGPT will respond affably to both because it is programmed to comply with most requests, but the user always supplies their own assurance that the difference or nuance between the two responses is significant. The lived experience of the user determines how they will perceive differences in the responses and decide to trust one over the other. If the user has no more insight into the rhetorical situation of the second prompt than the first, they may treat it as a coin flip or choose the response to the second prompt because it appears more precisely formed (though how would one form the second prompt without the rhetorical insight to construct that situation?). If one has more insight or knowledgeable lived experience to draw on concerning the second prompt, that user will bring a more critical eye based on the data and information they have incorporated through lived experience as part of their professional knowledge: what characteristics does “over 20 years of experience” imbue? “Approachable” to what faculty and under what assurance? What subject position does the writing program administrator (WPA) occupy, either to themself or to other faculty? Why not “approachable” by students? What authority does ChatGPT derive from “20 years of experience,” “approachability,” and social “accessibility”? Are these the best traits for a WPA? Is there an assumed whiteness or masculinity in the image of the WPA ChatGPT is conjuring? Would we be more or less pleased with the result if we asked ChatGPT to write as a person of color or a woman specifically? Would ChatGPT insist that people of color or women have no distinguishable language uses? What does “plain English” mean? Someone without the lived experiences (and the data and information informing them) to know these questions are important would be equally ill-equipped to judge the reliability of ChatGPT’s response.

Using ChatGPT for research presents a different set of issues. Users asking ChatGPT to explain something or answer a question may expect an interpretation of the ‘world’ rather than a synthesis of spatiotemporally contingent ‘data,’ without attending to the difference. In the expanded DIKW pyramid from Kitchin (2014) (figure 2), the ‘World’ forms the base of the pyramid as a reminder that studying and trying to understand the ‘world’ is a nonstop process. The rhetorical DIKW pyramid (figure 7) carries this notion forward because the ‘world’s’ spatiotemporal unfolding (Gries 2013) is unending, so there is no end to what we can ‘know’ about the world (socially, biologically, ecologically). DIKW rests on the World because it assumes knowledge-production reconciles people’s lived experiences through gathering data and making informative arguments, creating a check on mis- or disinformation. With ChatGPT we have, in essence, a (generative artificial) intelligence whose conception of the world comes entirely from collected data; it has no lived experience of its own. All of its ‘intelligence’ comes exclusively, it seems, from writing scraped from the internet and other digital sources (then coached by untold OpenAI workers and wrapped in a black box). ChatGPT is, in this sense, a being of pure circulation. The implications of this are manifold, particularly in light of examinations of mis- and disinformation on the internet and calls for renewed information literacy pedagogy over the last ten years.

Figure 2: Expanded DIKW Pyramid from Kitchin (2014) adding the World as the expanded base of the pyramid and offering verbs and descriptions at each level to flesh out the relationships.
Figure 7: Full DIKW Pyramid in the style of Kitchin’s from figure 2, with new verbs and descriptions that emphasize the ecological and rhetorical processes that connect the same levels (World up through Data, Information, Knowledge, and Wisdom).

In the context of this chapter, Rhetorical DIKW metalanguage helps introduce ChatGPT’s capabilities and limitations in assisting with research, since its ability to summarize data (a conversion of data to information in DIKW parlance) is one of its immediately attractive use-cases. ChatGPT’s connection to the world is only through data, whereas humans live in the world and draw inspiration from it (Suchman 2007), so users must be prepared to compare ChatGPT’s results to their lived experience and to check ChatGPT’s work (finding data and information and learning the lived experiences of others to confirm or correct its output).

ChatGPT likely fits exploratory research best, similar to how one would use Wikipedia or Google to explore a new topic, learn discourses, and develop research questions. Most likely, ChatGPT draws from the same pages Wikipedia editors and Google can access anyway. In research, GenAIs offer more expedient usability than wikis or search engines, but their lack of ‘knowledge’ famously produces ‘hallucinations’ or ‘dreams’ in their output: such earnest commitment to fulfilling user requests makes the user, again, responsible for believing ChatGPT. Hence, ChatGPT can assist exploration, but it still requires precise prompts and user verification as part of the writing/research process.

While it is most expedient to say to ChatGPT “tell me about X,” one can also direct ChatGPT to filter data and information through an identity, task/purpose, and output/genre. Identities may include: “You are a (SUBJECT) expert with 30 years of experience and lots of awards for excellence,” or “You are an expert (PROFESSION). You are highly experienced at (SUBJECT) research and finding valuable insights.” Again, how ChatGPT constructs the identity is rhetorical. How does one approximate this through data? Does ChatGPT have access to the writing of these people? Is that where the totality of their techne is captured (Van Ittersum 2014)? Who are we picturing in these identities? As before, to know if ChatGPT has captured the identity, one needs to know about it, too. ChatGPT may end up the basis for one’s own exploratory research as a point of comparison. Adding “with citations of real sources in APA/MLA/etc.” can make using ChatGPT approximate Wikipedia: a way to farm scholarly sources.
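
As a rough sketch of how those identity templates might be filled in and varied for exploratory research, the fragment below assembles a prompt from identity, task, and output cues and appends the citation request; the function name, its fields, and the example topics are illustrative assumptions rather than a tested recipe.

    # A rough sketch (plain Python, no API calls): filling the identity
    # templates above and appending a citation request so the same
    # exploratory-research question can be posed from different identities.

    def research_prompt(profession: str, subject: str, question: str,
                        citation_style: str = "APA") -> str:
        identity = (
            f"You are an expert {profession}. You are highly experienced at "
            f"{subject} research and finding valuable insights."
        )
        task = f"Explain the current state of research on {question}."
        output_spec = (
            f"Respond in a few paragraphs with citations of real sources "
            f"in {citation_style}."
        )
        return " ".join([identity, task, output_spec])

    # Example: the same question posed through two identities, so the writer
    # can compare how the constructed situation shapes the responses.
    print(research_prompt("writing program administrator", "rhetoric",
                          "rhetoric and generative AI"))
    print(research_prompt("classicist", "rhetoric",
                          "rhetoric and generative AI"))

Any citations such a prompt returns still have to be checked against real databases; the template only makes the identity and output expectations explicit enough to compare and critique.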

What Rhetorical DIKW adds to this is language to describe the tasks and labor necessary to use ChatGPT effectively this way. Command of the rhetorical situation is the difference-making factor between users when it comes to how well they can use the output of a prompt in or as a draft, or whether they can form an effective prompt in the first place. Invention with ChatGPT, therefore, involves gathering together data and information (observations, experiences, and patterns) to judge whether ChatGPT’s products pass the sociability test: whether they are worth agreeing with or, more productively, what tweaking and modification is required to fit the situation more effectively. If ChatGPT is a viable way to generate rough drafts, in either school or the workforce, the difference-making labor a user will do with it is building and maintaining an ongoing understanding of the rhetorical ecology and the specific situations in which they will consult ChatGPT, keeping their own data and information at hand to provide social approval or critique of their AI’s proposals. DIKW-informed composition and communication classes can therefore teach students the role rhetorical data and information play in constructing rhetorical situations from ecologies before presenting them to ChatGPT. In doing so, composition classes have an opportunity to stress the relationship between individual and society and to introduce the importance of ‘wisdom,’ the consequences of worldviews on the ecology. Writing classes, then, need to emphasize rhetorical situation and ecology more as ways to produce discourse and to engage students in invention activities that build their understanding of writing projects as always the product of spatiotemporal situations in larger ecologies.