Composing New Urban Futures with AI
Jamie Littlefield
Working with the Past to Compose the Future
While the purpose of generative AI is to produce something new, the process itself is inextricably bound to the old. Generating a new text or image through AI is an act of engaging with recorded history. In practice, Large Language Models (LLMs) like ChatGPT and text-to-image synthesis tools such as Midjourney are fundamentally rooted in the past, trained on vast datasets of historical texts, images, and multimedia. This historical orientation allows AI systems to draw on a rich mosaic of human conversation and creativity, generating output that resonates with cultural, historical, stylistic, discursive, or artistic contexts. These tools have the potential to amplify the intelligence of previous generations, offering a synthesized form of collective human experience. They allow for interaction with a broad swathe of human history and culture, potentially making the past more readily accessible and interpretable to the present.
However, AI systems also present new risks: they may inadvertently perpetuate the prejudices and limitations of past data by parroting "encoded biases without understanding the significant harm of the language it produces" (Johnson, 2023, p. 170). Historical stereotypes, outdated norms, and factual inaccuracies can be unwittingly regurgitated, potentially reinforcing harmful social constructs or misleading users (Anderson, 2023). Generative AI tools trained on extensive datasets have been shown to exhibit bias towards the people and subjects most prevalent in those data (Byrd, 2023; Getchell et al., 2022). The sheer size of a dataset is no guarantee of its proportionality or inclusivity.
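To make the point about prevalence concrete, consider a minimal sketch (illustrative only; the corpus, labels, and counts below are invented for the example): a toy unigram "model" that samples words in proportion to their frequency in its training data. Multiplying the corpus tenfold changes nothing about what the model reproduces; prevalence, not size, governs the output.

```python
import random
from collections import Counter

def train(corpus: list[str]) -> Counter:
    """'Train' a toy unigram model: simply count word frequencies."""
    return Counter(corpus)

def generate(model: Counter, n: int) -> list[str]:
    """Sample n words in proportion to their frequency in the training data."""
    words = list(model.keys())
    weights = list(model.values())
    return random.choices(words, weights=weights, k=n)

# A hypothetical, deliberately imbalanced corpus: ninety mentions of an
# older framing of an issue against ten mentions of a newer framing.
corpus = ["older-framing"] * 90 + ["newer-framing"] * 10

small_model = train(corpus)
large_model = train(corpus * 10)  # ten times more data, identical proportions

print(Counter(generate(small_model, 1000)))  # roughly 90% "older-framing"
print(Counter(generate(large_model, 1000)))  # the same skew; scale did not help
```

Real LLMs are vastly more sophisticated than frequency counts, but the underlying dynamic holds: whatever dominates the training data tends to dominate the generated output.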
As it re-mixes meaning from the past, generative AI does not compose on a blank slate. Instead, it generates content atop a virtual palimpsest: a layered canvas of data, code, and pre-existing human input. This palimpsest is not neutral; it carries the biases, assumptions, and limitations of its human developers as well as of the datasets on which it was trained. The output of AI is therefore a complex interplay between machine learning and the socio-cultural contexts from which its training data originates. Both humans and generative AI are engaged in a continual process of layering new information over old, negotiating between past influences and present intentions. Understanding the complexities of AI-generated content requires recognizing it as part of a broader communicative ecosystem, in which human and machine alike are influenced by layers of pre-existing context.
AI's entrenchment in the past generates a sort of algorithmic resistance to change, creating synthetic output that is subject to "value lock." Bender et al. (2021) describe "value lock" as the way in which "the LM-reliant technology reifies older, less-inclusive understandings" (p. 614). Rapidly spreading shifts in the way people discuss an issue, such as the discourse surrounding the Black Lives Matter movement, the #MeToo movement's focus on sexual harassment, or the global conversation on energy production, may not be fully captured by LLMs reliant on training data gathered years earlier. Accelerated changes in public opinion, such as the shift in Western attitudes towards same-sex marriage between the late 1990s and the mid-2010s, may be attributed, in part, to what Ridolfo and DeVoss (2009) refer to as "rhetorical velocity": significant social shifts can occur as texts are rapidly circulated, re-mixed, and re-circulated throughout a population.
Faced with a kairotic situation, a human communicator may choose to give more weight to a new concept in the creation of content, while an algorithm may favor the replication of patterns more widely represented in decades of past training materials. Rather than facilitating a change in discourse or even remaining neutral, AI systems may actively obstruct social change by generating content that is untouched by the ideological shifts spreading rapidly through human networks.
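The divergence between human and algorithmic weighting can be sketched in code (again, a deliberately simplified illustration; the framings, counts, and the kairotic_boost parameter are invented for the example). Both draw on the same historical record, but only the human deliberately re-weights an emerging concept in response to the present moment.

```python
from collections import Counter

# Hypothetical historical counts for two competing framings of an issue.
historical_counts = Counter({"established-framing": 950, "emerging-framing": 50})

def machine_emphasis(counts: Counter) -> dict[str, float]:
    """An algorithm frozen at training time: emphasis mirrors past frequency."""
    total = sum(counts.values())
    return {term: n / total for term, n in counts.items()}

def human_emphasis(counts: Counter, kairotic_boost: dict[str, float]) -> dict[str, float]:
    """A human communicator can re-weight an emerging concept in response to
    the rhetorical moment, however rare it is in the historical record."""
    boosted = {term: n * kairotic_boost.get(term, 1.0) for term, n in counts.items()}
    total = sum(boosted.values())
    return {term: w / total for term, w in boosted.items()}

print(machine_emphasis(historical_counts))
# {'established-framing': 0.95, 'emerging-framing': 0.05}

print(human_emphasis(historical_counts, {"emerging-framing": 40.0}))
# the emerging framing now dominates (about 0.68) despite its sparse history
```

The model's emphasis cannot move until someone retrains it on new data; the human's can move the moment the situation demands it.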
Essentially, when it generates synthetic content, AI re-assembles units of meaning drawn from the materials of the past. While the historical orientation of generative AI tools offers certain benefits, deliberate approaches are needed to address their limitations. At this critical moment, we might stop to ask ourselves: If new texts continue stochastically (probabilistically) parroting the meanings found in the past, what kinds of futures will result? When generative AI is designed to shoulder the weight of history, how does it shape discourse and design? What interventional strategies can we use to change course now?