Rhetorical DIKW
Patrick Love
Rhetorical DIKW Pedagogy with GenAI
The language of (Rhetorical) DIKW translates the core value proposition of GenAI/ChatGPT (used interchangeably hereafter) writing as follows: ChatGPT remixes data (stored writing) to produce information (new writing) on-demand. Without lived experience of the world, ChatGPT needs a user/human to ‘make knowledge’ with, so it can’t autonomously stray upward past information or downward past data. From the viewpoint of the user, ChatGPT may regularly produce novel information, but since ChatGPT only deals with data and information, that information likely (like a used car) is new-to-you. Overlaid on the DIKW pyramid, the user decides if what ChatGPT produces is ‘knowledge’—that is, a pattern aligned with their own experience that could facilitate action/wisdom. That moment, where the user has the opportunity to decide whether to agree with ChatGPT and why, is the pedagogical moment the remainder of this chapter focuses on. The tradition of institutional knowledge-making that DIKW translates for machines models this decision-to-agree interaction (i.e., information-to-knowledge) as happening between people: the Enlightenment found use for rhetoric in using communication to convey an argument about the world and convince people to adopt it, for example (Bacon 1605). ChatGPT prospectively offers the possibility that another human is not strictly necessary at-scale, with ChatGPT implicitly positioned as the user's multifaceted partner: librarian, tutor, secretary, and copyeditor all in one.
An overarching way to view ChatGPT’s impact, informed by Rhetorical DIKW, is its manipulation of time for the user. Marche implies that ChatGPT solves the problem of “write a paper” for students by having incomprehensible amounts of data at its disposal to remix; hence this chapter has argued that Marche positions writing as a past- and present-oriented affair, converting data into information. Data abstracts the world to preserve things outside the moment of collection, meaning ‘data’ represents its spatiotemporal moment. It may seem pedantic to claim all data is of the past, but there’s a practicality to it that cannot be ignored, particularly when collected data is modeled to predict the future based on past performance. Therefore, when ChatGPT takes a user prompt, picks data, and remixes it into an informative response, ChatGPT fundamentally applies abstractions of the past to the present concern to help someone with their future. Granted, ChatGPT will comply with requests to predict the future, but it, too, uses past data (training data relevant to the prompt) to predict future performance.1 No matter what kind of information ChatGPT makes for the user, both ChatGPT and the user will be accepting that the past contains an answer, as the myth of transience dictates. ChatGPT, as an information technology, signals a truth about knowledge: all new knowledge is personal until it’s not—knowledge-making is a process of social acceptance.
This makes ChatGPT an interesting and potentially highly valuable tool to accelerate the social acceptance of information as knowledge because it produces information in record time and at-scale, as an uncanny conversation partner with a technocratic, expedient ethos (Katz 1992).
The ultimate issue with this view of time, the world, and knowledge is that it tends to presume 1) that history is (or can be) complete and commonly understood and 2) that the past is ‘right,’ and along with it the history of Eurocentric imperialism and colonialism, inequality, exploitation, and ecological destruction that produced it. There is no running from the past, but we must consider how remixing it as a de facto starting point will help us break from those traditions and, in fact, overcome them, with or without GenAI. As argued above, attention to the future we wish to inhabit better promotes our role in producing it, contra the myth of transience. ChatGPT’s meditation on past and present cannot promote our role alone; we must assert our role. Hence, ‘skilled’ use of GenAI (whether in school, work, or other pursuits) will likely be influenced by command of one’s own lived experience, along with data, information, and knowledge that informs one’s understanding of ecological conditions, so as to adequately interrogate and mold what ChatGPT produces. As Star and Strauss note, new technology displaces work rather than reducing it, and the user takes on new tasks (1999, p. 20).
In that spirit, this chapter ends with an analysis of using ChatGPT in drafting and research, as these (along with revision) are use cases students will likely try and workplaces will likely expect: drafting shorter work or parts of larger work, assembling information for easier consumption, and revising existing writing for readability or for different audiences. The chapter will use DIKW language frequently to further illustrate the metalanguage in action. These use cases are, barring regulation or labor agreements like the Writers Guild of America’s with movie producers, some of the likely new work we will do (Star and Strauss 1999). Ultimately, because ChatGPT displaces liability for itself onto users, users at all levels have more responsibility for the writing they produce with ChatGPT: we are ChatGPT’s managers, not its students, regardless of the feeling of wonder and discovery the product (and its media advocates) hopes to engender.
1O’Neil’s work on data-driven policing perpetuating asymmetrical policing of non-white and low-socioeconomic neighborhoods, and Noble’s work on search engines perpetuating discrimination by shaping available information, demonstrate this (O’Neil 2016; Noble 2018).