AI Mania

by Lisa Parks


“‘AI mania’ might be an apt term for the current conjuncture of endless cultural processing and speculation about the sociotechnical phenomenon of AI.”

“AI mania” refers to the persistent intensity and whirl of anxiety and excitement, dread and glee, cynicism and wonder surrounding AI as it circulates in and, increasingly, constitutes media culture. It is present in press headlines and on social media platforms, in content streams and on LCD screens. If the contemporary symptomatology of “mania” includes increased talkativeness, rapid speech, racing thoughts, erratic behavior, and an unusually high level of physical and mental activity, then “AI mania” might be an apt term for the current conjuncture of endless cultural processing and speculation about the sociotechnical phenomenon of AI. To register and respond to the uncertainties and grandiose proclamations related to AI, humanities scholars have analyzed the historical emergence of AI and written about its technocultural precursors, complex materialisms, hallucinations, biases, and ethics, bringing forth an array of critical perspectives.1

In their introduction to a “Critical AI” special issue of American Literature, Rita Raley and Jennifer Rhee approach AI “as an assemblage of technological arrangements and sociotechnical practices, as concept, ideology, and dispositif” and suggest that critical AI involves being “situated in proximity to the thing itself, cultivating some degree of participatory and embodied expertise, whether archival, ethnographic, or applied.” They draw on AI artworks by Anna Ridler and Stephanie Dinkins to demonstrate what this “embodied expertise” might entail. In a more literary vein, Dennis Yi Tenen’s Literary Theory for Robots historicizes “how computers learned to write” by detailing early prophecy wheels, probability analyses, and cipher systems. Tenen’s account emphasizes the collective dimensions of “intelligence,” and provocatively asks: “How does a mere tool (AI) move into the subject position of the sentence—where it detects, devises, and masters—gaining a sense of agency and interiority in the process?”

In this short essay, I want to think not only about how AI has become the subject of the sentence but also about the kinds of paradigms and mediations that this subject’s leaky and speedy circulations point us to. AI mania can be a trope for specifying and working through historically shifting cultural anxieties around an emergent AI assemblage of large language models (LLMs) and natural language processors (NLPs), chatbots, image, story, and video generators, facial recognition and recommender systems, situational awareness interfaces, and more. The anxieties that suffuse AI mania have tended to coalesce around basic questions: What can AI do (for/to me/them/us)? How fast and when will AI replace human workers? How can AI be designed to be ethical, responsible, unbiased, and safe? And what is the relationship between AI and creativity? Scholars, journalists, and corporate leaders have written countless articles in response to these questions, engaging with age-old issues of accuracy, fidelity, and credibility, exploitation and obsolescence, and safety and security.

Humanities scholars are also often tapped to address questions of AI and ethics. Yet as Wendy Chun provocatively asks, why would people think consulting humanities scholars would necessarily make AI projects more ethical, given the humanities’ own historical and ongoing biases and exclusions? Chun further observes that when humanities scholars participate in AI ethics projects, they often go straight to Aristotle rather than to scholars such as Gayatri Spivak or Achille Mbembe. Such moves result in what Chun calls “humanities bleaching”—a situation in which critical efforts in the humanities to confront major issues of ethics, power, and justice are suddenly erased in conversations about AI.2 Some scholars have explicitly analyzed race and AI to confront these tendencies as well. In Artificial Whiteness, Yarden Katz draws on the work of Toni Morrison, Cedric Robinson, and Edward Said to analyze “how AI—as a concept, a field, a set of practices and discourses—works to flexibly serve a social order premised on white supremacy.” Like whiteness, Katz explains, AI “aspires to be totalizing: to say something definitive about the limits and potential of human life based on racialized and gendered models of the self that are falsely presented as universal.” In different ways, Chun and Katz critique the tendency to neutralize and universalize humanity in relation to AI, raising crucial points about its underlying racialized assumptions.

There is much more to say and debate about the myriad ways humanities scholars are conceptualizing and critiquing AI. As a media scholar I am interested in both the broad paradigms and the specific AI tools that are being used to reshape the materialities, industries, and power relations of media cultures. In thinking about these issues, I recognize that I am immersed in the AI mania that I have described, and that the kinds of research questions humanists like me are able to ask about AI are, in many ways, inseparable from a broader media ecology of hurried AI speculations. Rather than deny this, I want to probe and push into AI manias and further explore the kinds of critical inquiries that AI’s emergence demands. Toward that end, I briefly describe two paradigms that inform and inflect the current AI mania—situational awareness and creative content generation. By focusing on military and entertainment paradigms I hope to suffuse “AI mania” with a critical awareness of the logics that reverberate in the shaping of AI tools.

“Situational awareness emerges from a military context, but the ongoing production of myriad massive data sets and AI tools transforms it into a way of being.”

The first is exemplified by Palantir’s Gotham system, a situational awareness platform used to monitor and manage conflict zones and security concerns by military organizations around the world. Palantir was co-founded in 2003 by Peter Thiel (co-founder of PayPal) and neoliberal eccentric Alex Karp. Much of Palantir’s work uses AI for surveillance, weapons deployment, and warfare in support of the West and its allies. In 2007, Palantir launched its Gotham platform, described as an “operating system for global decision-making” that can achieve “AI-driven combat superiority” in areas from “space to mud.” Palantir’s website insists: “Gotham enables the autonomous tasking of sensors, from drones to satellites, based on AI-driven rules or manual inputs for human-in-the-loop control. Gotham empowers you to make informed decisions, maximizing the effectiveness of your assets in even the most dynamic operational environments.” The system relies on the publicly funded US Global Positioning System (GPS) and decades’ worth of US and commercial aerial and satellite image data to shape its interactive interface, as well as Palantir’s proprietary software and AI tools. Gotham’s geospatial interface is layered with dynamic graphic displays: flying icons, flashing circles and targets, and emerging menus that list possible responses to conditions in view. These graphics index the availability of datasets and coordination of AI tools on the backend of Gotham’s interface.

In the current conjuncture of climate disruption, natural disasters, pandemics, school shootings, racial violence, surging authoritarianisms, and resulting uprisings, situational awareness AIs like Gotham are becoming paradigms for daily life. Smartphone users will not only be guided to locations by GPS and virtual assistants; their movements will also be correlated with any number of geo-annotated datasets that notify users of the relative likelihood of things like allergy attacks, mosquito bites, flash floods, viral encounters, heat exhaustion, gun owners, shark attacks, flirtations, and so on, and that generate possible responses. Individual apps already operate as a kind of unbundled situational awareness system, anticipating a civilian version of Palantir’s Gotham (think Purple Air, Windy, Google Maps, Waze, allergy maps, lightning apps, charging stations, heat mapping). Situational awareness emerges from a military context, but the ongoing production of myriad massive data sets and AI tools transforms it into a way of being.

The second paradigm, creative content generation, is related to commercial entertainment and involves the use of AI tools to automatically generate entire television episodes from script to screen. In 2023, Fable Studio’s software developers used a combination of LLMs, including GPT-4, and custom diffusion models to train a generative story system to produce an episode of the TV series South Park, originally created by Trey Parker and Matt Stone.3 Paramount had paid $900 million for the rights to South Park in 2021. Despite this, transcriptions of most of the show’s scripts were already part of GPT-4’s training set, and developers used images of 1,200 characters and 600 background images from South Park to train diffusion models. The developers then delivered a story prompt to their Showrunner AI, which they call “SHOW-1.” It generated an episode that imitated the show’s characters, voices, structure, and comedy with 60% accuracy, using no human screenwriters, voices, animators, or editors.

“How do humanities scholars tackle the question of what is researchable or unresearchable relative to AI, especially given the practices of corporate blackboxing as well as limits in expertise?”

The results catalyzed enormous buzz and extensive press coverage about the implications of the Showrunner AI. Developers of SHOW-1 published an article called “To Infinity and Beyond,” indicating that “with the right guidance users will be able to rewrite entire TV seasons.” Suddenly, the vaults of media companies seemed to have brighter economic futures as television reruns were recast as giant training datasets. The experiment prompts questions not only about intellectual property but also about the future of content creation. SHOW-1 allows TV production to be based not only on lucrative generic formulas but literally on formerly aired content. Such practices have been tested by multiple companies and stand to violate the contracts and copyrights of writers and actors, which is one reason AI became a concern in the 2023 Writers Guild and Screen Actors Guild strikes.

In closing, I want to return to AI mania as a trope for intensive speculation about AI’s potentials and raise a few questions for consideration: What sites, objects, or processes should critical AI scholars be focusing on and why? Relatedly, why do codes and algorithms attract humanities scholars’ attention more than motherboards and microchips? How do humanities scholars tackle the question of what is researchable or unresearchable relative to AI, especially given the practices of corporate blackboxing as well as limits in expertise? How can learning about how an AI system works be linked to broader concerns in the university and the humanities, policy interventions, or social issues? Finally, how do we not efface the critical work that has been done in the humanities when analyzing AI?

Image Banner Credit: Geronimo Gigueaux

Notes

  1. See, for instance, Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: New York University Press, 2018); Trevor Paglen, “Invisible Images,” The New Inquiry, Dec. 8, 2016; Kate Crawford and Vladan Joler, “Anatomy of an AI System,” 2018; Jonathan Cohn, The Burden of Choice: Recommendations, Subversion, and Algorithmic Culture (New Brunswick, NJ: Rutgers University Press, 2019); Joanna Zylinska, AI Art: Machine Visions and Warped Dreams (Open Humanities Press, 2020); Yarden Katz, Artificial Whiteness: Politics and Ideology in Artificial Intelligence (New York: Columbia University Press, 2020); Kate Crawford, Atlas of AI (New Haven: Yale University Press, 2021); Jonathan Roberge and Michael Castelle, eds., The Cultural Life of Machine Learning (Palgrave Macmillan, 2021); Wendy Hui Kyong Chun, Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition (Cambridge, MA: MIT Press, 2021); Dennis Yi Tenen, Literary Theory for Robots: How Computers Learned to Write (New York: Norton, 2024); and Laurent Dubreuil, Humanities in the Time of AI (University of Minnesota Press, 2025).
  2. Wendy Chun, paper presented at the Society for Cinema and Media Studies conference, 2024.
  3. A custom diffusion model is a generative AI model that has been fine-tuned to produce outputs tailored to a specific style, subject, or dataset (such as an animated series). For instance, it can learn the visual patterns, colors, and motion cues unique to that style, enabling it to generate consistent animated frames or key visuals. Such models allow for the rapid prototyping of scenes, characters, or backgrounds that match a specific aesthetic. Technically, a diffusion model creates images by starting with random noise and gradually turning that noise into a recognizable picture through many small steps. It learns how to do this by being trained on lots of real images and figuring out how to “reverse” the process of adding noise to them.
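For readers who want a more concrete sense of the noise-then-denoise process described in note 3, the following is a minimal, illustrative sketch in Python. It is not Fable’s SHOW-1 pipeline or any production system: the `denoise_step` function is a hypothetical stand-in for the trained neural network that a real diffusion model would use to remove noise at each step.

```python
# Toy illustration of the diffusion idea: degrade an image with noise step by
# step, then recover a picture by taking many small "denoising" steps in
# reverse. In a real diffusion model, the denoiser is a trained neural
# network; here it is a simple stand-in function for demonstration only.
import numpy as np

rng = np.random.default_rng(0)
steps = 50
image = rng.random((64, 64))          # stand-in for a "real" training image

# Forward process: gradually add noise until the image is unrecognizable.
noisy = image.copy()
for t in range(steps):
    noisy = noisy + rng.normal(scale=0.1, size=noisy.shape)

def denoise_step(x, t, target):
    # Stand-in denoiser: nudge the sample slightly toward the target image.
    # A trained model would instead predict the noise added at step t.
    return x + 0.05 * (target - x)

# Reverse process: start from pure random noise and denoise step by step.
sample = rng.normal(size=(64, 64))
for t in reversed(range(steps)):
    sample = denoise_step(sample, t, image)

print("mean distance from the original image:", np.abs(sample - image).mean())
```

The point of the sketch is simply that generation runs the noising process “in reverse”: many small learned corrections turn random noise into something that resembles the training material, which is why a model fine-tuned on a show’s frames can output images in that show’s style.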