Introduction: Foundry AI Paradigm Series

by Jaimey Fisher


“Besides warning against the smoke-and-mirrors obfuscations of what seems merely a marketing pitch for a technological miscellany, the contributions all elucidate the broad as well as fundamental character of AI’s impacts.”

In May 2024, UCHRI hosted the “AI Paradigm: Between Personhood and Power” conference, co-sponsored by UC Irvine’s School of Humanities, its Digital Humanities Exchange, and its Critical Data Studies Initiative. The conference brought together UC scholars such as Qian Du (UC Irvine), Peter Krapp (UC Irvine), Colin Milburn (UC Davis), Lisa Parks (UC Santa Barbara), Todd Presner (UC Los Angeles), and Rita Raley (UC Santa Barbara) with external experts such as Catherine D’Ignazio (MIT) and Lauren Goodlad (Rutgers) to probe the past, present, and future of the technologies traveling under the rubric of “AI,” an overly elastic term I use advisedly herein. Now UCHRI is extending, broadening, and deepening the conference’s pioneering work with this Foundry series, in which the contributors develop their presentations into sustained contemplations of the current state and future prospects of generative AI and machine learning. To that end, UCHRI has also recently convened a task force on critical AI and media literacy, coordinating the work (in research and teaching) of over twenty UC scholars on the impact of the new technologies. The task force will connect researchers from across the UC system as well as explore and issue statements on AI’s impact on teaching and on universities in general. The essays collected here point such inquiries in intriguing yet highly divergent directions, probing these technologies and their socio-economic and educational impacts.

Besides warning against the smoke-and-mirrors obfuscations of what seems merely a marketing pitch for a technological miscellany, the contributions all elucidate the broad as well as fundamental character of AI’s impacts. Manifesting a multifaceted engagement with these phenomena, the pieces raise key questions about AI and machine learning from humanistic perspectives, emphasizing how important both syncretic and historical analyses are for technologies so ubiquitously and vociferously celebrated. They probe whence the technologies come, on what foundations they are built, and whither they might, deliberately or not, be leading us. Of special interest is not only the history of the technologies but also the centrality of language and linguistics, of reading and writing, all conventional concerns of the humanities. With their myriad humanistic modalities, the analyses all conjure unlikely connections, surprising insights, and shocking correspondences—above all insisting that these technologies be analyzed and explored in their complex unfolding, in their sundry contexts, and for their uncertain futurities.

Lauren Goodlad’s provocative piece reads against the authoritarian grain by suggesting that Orwell’s Nineteen Eighty-Four (1949) invokes a mode of refuge that generative AI increasingly forecloses, namely, the “writing subject.” The editor-in-chief and co-founder of the Critical AI journal, Goodlad explores how humanities scholars might better approach the issues around language and writing raised by the massively scaled, mimetic mechanisms of large language models (LLMs). For her, these machine-based manipulations of language occasion revisiting post-structuralism, especially Ferdinand de Saussure’s separation of parole (roughly: everyday speech) and langue (the broader, underlying signifying system), a duality that Jacques Derrida deliberately probed and problematized. But, for Goodlad, much of post-structuralist (and even post-post-structuralist) humanities has drifted into a New-Criticism-inflected fixation on close reading of “the text” that has only served to reinforce this distinction between langue and parole. These tendencies have, in fact, led much of the field to minimize the complex conditions and contexts of writing and language, conditions and contexts made all the more salient now by the way that LLMs operate and are impacting society generally and education specifically. The operations and influence of LLMs among our students, universities, and society demand a rethinking of how we deploy reading, writing, and the humanities broadly.

In her discussion of what she terms “AI mania”—that is, both the wonder and dread ubiquitously associated with AI—Lisa Parks ventures into two sectors already central to AI’s impacts: military technologies and the entertainment industry. The former is a well-known research focus for Parks, and here she homes in on Palantir, the suspect darling of the financial markets founded by Peter Thiel and Alex Karp. Palantir’s Gotham system—with its curious comic-book connotations—promises (alleged) “situational awareness,” sensor-based data that (allegedly) determine the specific environment requisite for military decisions. Situational awareness has always been important for military endeavors, but the sheer scale of the data and their computational analysis render such awareness, Parks argues, a new “way of being.” Parks also provocatively engages with a divergent direction for “AI,” namely, the existing and potential impact of entertainment “content generation.” Here the mimetic trick is not a sentence or even a paper—a problem instructors run into all too often—but an entire episode of the celebrated animated satire South Park, created by Trey Parker and Matt Stone. These two highly divergent case studies suggest the broad but suspect application of technologies under the guise of AI.

“Humanities perspectives on the phenomena mushrooming under the catch-all of ‘artificial intelligence’ can effectively query the bombastic boosterism of the technologies’ relentless promoters.”

In analyzing the “AI paradigm,” Ricky Crano reminds us that the old-new shibboleth of “artificial intelligence” is a “fundamental misnomer” that may just be, in most instances, a marketing strategy for relentlessly expansionist corporate overlords. The phrase is used, imprecisely, to capture highly divergent phenomena (such as LLMs and generative adversarial networks, or GANs) that do very different things, to varying degrees of success—whatever “success” might mean in these varied contexts (usually generating profit or, in more financial terms, dealing equity and leveraging loans). By foregrounding not so much the misused term as what he emphasizes as the AI paradigm, Crano exhorts us to a good dose of discourse skepticism, here eloquently unfolded in the senses of Thomas Kuhn, Michel Foucault, and Giorgio Agamben. A discourse approach recalls, Crano argues, how the early, heady days of the internet promised greater “freedom” and a more efficacious democracy but ended up only highlighting the contradiction of a “neoliberal fantasy of individual freedom with a techno-optimist obliviousness to structural and material harm.” In fact, in Crano’s hands, AI arcs from the misnomer “Artificial Intelligence” to the AI of an “Authoritarian Internet,” which increasingly spells the end of the liberal subject—a development on which he quotes Horkheimer and Adorno, who watched the sunset of said subject in the Europe of the 1930s and 1940s.

Raley and Milburn’s essay is more site-specific in this age of AI-facilitated deterritorialization: they focus on the particular relationships, both historical and contemporary, of universities to AI and to the technologies required to create it. The key term relating to AI for them is not so much “mania” (Parks) or “paradigm” (Crano) as “appropriate,” as in AI’s “appropriate” use in higher education. They note that this vague adjective appears regularly in universities’ policies around the technology, which often permit, or even cheerlead, the “appropriate” and/or “responsible” use of AI—without adequately defining either term. With such guidelines for AI in a higher-education context, the technological devil is definitely in the university details. Raley and Milburn point out the telling proximity of “appropriate” to “appropriation,” which underscores the way that AI technologies have been built on the intellectual property, as well as on the broader technological innovations, that universities have provided the societies around them. They note, finally, the costly irony of universities’ licensing technologies that have extracted so much from the products and labor of higher education.

These essays all underscore how humanities perspectives on the phenomena mushrooming under the catch-all of “artificial intelligence” can effectively query the bombastic boosterism of the technologies’ relentless promoters. But none of the pieces rejects out of hand these technologies, which already dominate the headlines, on which US financial markets are already highly dependent, and on which many of our students already (over)rely. Rather, by marshalling critical analyses of generative AI and machine learning’s broad industrial applications (Parks’ essay), their discursive and economic histories (Crano’s), their site-specific impacts on higher education (Raley and Milburn’s), and the ways humanities scholars can productively engage with them (Goodlad’s), the collection illuminates AI’s highly variegated impacts on culture and society. Together, the essays highlight the fruitfulness of the humanities’ historical, contextual, and syncretic approaches, all tending, we hope, toward something akin to critical AI more generally.

Banner Image Credit: Geronimo Gigueaux