Rocks, Water, Words: Life in the AI Paradigm
Since its origins in the early Cold War, AI has never been easy to define, appearing in popular culture and public discourse in the guise of myriad technologies and evoking a vast range of affective responses. With the launch of ChatGPT in late 2022, the divergence has grown starker than ever. Hollywood, for its part, predominantly proffers a near future riven by anthropocidal compute, as in the recent Mission: Impossible – The Final Reckoning (2025), reported to be among the most expensive films ever made, largely owing to its well-known renunciation of digital subterfuge and simulation. News headlines over these years provide ample evidence to justify such bleak cinematic visions—with the myopic pursuit of profits by data capitalists to blame for, among other things, increasingly fractious political discourse and a deepening disregard for facts, truth, and expertise; popular acquiescence to totalitarian policing and mass-surveillance methods; proliferating cases of psychotic depersonalization and derealization; and catastrophic levels of resource consumption and environmental degradation.1 Advertisers meanwhile blanket our smaller screens with sleek, upbeat, and often gentle assurances about an AI future that is already here, an all-empowering AI present that gives everyone, anyone at all, the tools to self-optimize by offloading everything from tedious work tasks to weekly meal planning to academic research to amorous composition.2
Behind this push is an unprecedented market bubble; between 2024 and 2025 alone, Meta, Amazon, Microsoft, Google, and Tesla—five of the so-called “magnificent seven” US tech kingpins—will see their combined capital expenditures on AI surpass $560 billion while their combined AI revenues top out around $35 billion, amounting to a negative return on investment of over ninety-three percent. Despite its increasing ubiquity, artificial intelligence is a fundamental misnomer. In its au courant manifestations of large language models (LLMs) and diffusion-based image generators, AI has little if anything to do with either artificiality or intelligence. It has everything to do with basic materials and byzantine probability calculations. A nebulous outgrowth of the Cold War military-scientific-industrial complex, AI has come to encompass computational advances in fields as disparate as economic modeling, algorithmic recommendation, and vehicular automation. To give these all the same name feels arbitrary and heedless, at best.
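Taking those reported totals at face value, the arithmetic behind that figure is simple enough; what follows is just the standard return-on-investment ratio applied to the two sums above, nothing peculiar to these firms:

$$\mathrm{ROI} \;=\; \frac{\text{revenue} - \text{expenditure}}{\text{expenditure}} \;=\; \frac{35 - 560}{560} \;\approx\; -0.94$$

That is, roughly ninety-four cents lost for every dollar of AI capital expenditure.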
Profitless, yet omnipresent; flippant and fatuous, yet posturing profundity at every turn: We might verily call this an “AI paradigm.” As theorized by the science historian Thomas Kuhn, a paradigm defines what counts as “normal” science. To think of AI as “normal science” is to spurn the standard parlance and recognize the essential boundary work a paradigm performs, delimiting the questions to be asked, the methods to be deployed, and the results to be sought in any given discipline at any given time.3 The proverbial “shifts” in paradigms are not gradual evolutions but tectonic ruptures. For Kuhn, the history of science moves not linearly but laterally, not continually but in fits and starts. “Revolutions” occur, but knowledge does not progress; rather, it gets reconstituted whenever something intractable and utterly non-conforming comes along. Michel Foucault gives a political twist to Kuhn’s epistemological heuristic, with the paradigm accounting for everything that can be said or shown sensibly within a bounded cultural epoch.4 Giorgio Agamben explicates this Foucault effect with a nod to the Roman distinction between exemplar and exemplum. Not only does a paradigm, as exemplar, provide a particular model to follow; as exemplum, it “allows statements and discursive practices to be gathered into a new intelligible ensemble and in a new problematic context.” These two functions often sit in tension, for the exemplum “calls into question the dichotomous opposition between the particular and the universal” upon which the epistemology of the exemplar depends, “instead… present[ing] a singularity irreducible to any of the dichotomy’s two terms.” A paradigm, in turn, is equally the thing and its powers, both substance and force. Paradigma—originally a technical term designating a “pattern or table” in Greek and then Latin grammars—combines the prefix for “alongside” (παρα-, para-), a verbal base meaning “to show” or “point out” (δεικνύναι, whence “deictic”), and the result-forming suffix -μα, which in its medical descendant -oma has come to denote a “swelling” or “tumorous growth.”5 Altogether, it makes a powerful heuristic for today’s algorithmically fueled moment of post-truth politics, unabashed narcissism, and manic investment.
It is worth recalling that the internet was initially promoted as a collaborative, consensus-building tool and an inevitably democratizing force.6 The major digital cultural innovations thereafter—secure online payments, social media, smartphones, blockchains—have all been ushered in alongside similar promises that marry a neoliberal fantasy of individual freedom with a techno-optimist obliviousness to structural and material harm. The AI of American Big Tech, with its fetishes for ever more data, speed, and compute, operationalizes this version of the internet, prioritizing neither community nor communication but vast corporate and governmental control of information, attention, consumption, and social behavior, breathing new life into the old “web” metaphor: Without qualification, the internet is a trap.
If anything, AI should be understood to mark a precipitous cultural—and arguably civilizational—embrace of an Authoritarian Internet. Despite its pretenses of empowerment and inclusion, AI extends the worst universalizing impulses of the European Enlightenment, enhancing neocolonial extractive techniques and rendering economic violence and social domination maximally efficient.7 Ironically for a technology so commonly associated with calculated judgment and cool rationality, LLMs advance by way of a haphazard concatenation of logical blunder, moralizing jargon, and quasi-religious devotion, all motivated by a singular commitment to the bigger-is-better AI paradigm of the builders, where language is a problem to be “solved,” no different from infrastructure or transportation.
The entire epistemology of today’s popular LLMs is rotten to the core. In the AI paradigm, not just human communication and exchange but the entire human sensorium has been abstracted and appropriated for profit by the largest corporations ever to stalk the Earth. With LLMs, everything we’ve ever said online becomes grist for the language mill, which also incorporates data tagging of images, sounds, and anything else we humans can (typically) perceive but a computer cannot. Most of this work is routed through atomizing, anonymizing crowdwork platforms like Amazon’s Mechanical Turk and done cheaply and in poor conditions in the Global South or cheaply and in poor conditions in the Global North; much of it is also done for free by internet users everywhere in the form of reCAPTCHAs. Google has absorbed some 819 million hours of free human labor “solving” reCAPTCHAs to train its machine vision models (the equivalent of more than ninety thousand years of uninterrupted human effort). With this particularly crystalline example of the vast expropriation of human capacity and communication, we might think of reCAPTCHAs as a “colonization” of visual culture, of image space, of leisure time, and of course of language, but I find a metabolic framework even more accurate, and I do not intend it to be the least bit metaphorical. LLMs consume human language and turn it into bland corporate “speech.” They also eat rocks, drink water, and disgorge a lot of excrement, polluting the Earth, its atmosphere, and its inhabitants at every step of the way, culminating, for now at least, in new geological layers of e-waste and global climate catastrophe. The forecast is dim: While the data is admittedly limited, many projections have generative AI’s energy consumption matching the world’s current total energy output somewhere around midcentury, should trends continue apace. One is tempted to see the AI buildout of 2024–25 as one massive energy play, at a historical moment when the influence of fossil fuels over markets and minds was just beginning to wane.8
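Whatever benchmark such projections adopt (total energy, electricity generation), their shape is the same: a compound-growth extrapolation. Here is a minimal sketch of that arithmetic; the starting figure, growth rate, and benchmark below are illustrative assumptions chosen for the exercise, not sourced estimates:

```python
import math

# Illustrative assumptions, not sourced estimates:
ai_energy_twh = 500.0       # hypothetical annual energy use of generative AI today, in TWh
growth_rate = 0.25          # hypothetical 25 percent year-over-year growth
world_output_twh = 30_000.0 # rough order of magnitude for global electricity generation, TWh/yr

# Uninterrupted exponential growth: ai_energy * (1 + r)^t = world_output
# Solving for t: t = ln(world_output / ai_energy) / ln(1 + r)
years_to_parity = math.log(world_output_twh / ai_energy_twh) / math.log(1 + growth_rate)

print(f"Parity after roughly {years_to_parity:.0f} years, i.e., around {2025 + round(years_to_parity)}")
# Prints: Parity after roughly 18 years, i.e., around 2043
```

Gentler growth assumptions, or a larger benchmark, push the crossover out toward midcentury; the exercise also shows how sensitive such forecasts are to a single assumed rate, which is precisely why the limits of the underlying data matter.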
This, apparently, is what “superintelligence” looks like. A banal old tweet, authored by the man who now owns the platform, exemplifies the paradigm: “The percentage of intelligence that is not human is increasing. And eventually, we will represent a very small percentage of intelligence.”9 In an ugly yet increasingly common rhetorical maneuver that cloaks hubris in profundity, “not human” here means “artificial intelligence”—that is, intelligence created by humans, trained on human data, screened and scrubbed and reinforced by humans, and baked through with human biases. It does not take a primatologist, an entomologist, or a marine biologist to understand the damage done by this stunningly limited understanding of “not human intelligence,” which today underpins the entire Big Tech AI space, its image on Wall Street, and much of its uptake in the broader public discourse. We commonly hear how general-purpose AI will surpass human capacity within a decade, a year, or even a matter of months, but how far can we possibly get with such an utterly boorish, all-too-human intelligence as the benchmark for AI success? For one, this anthropocentrism is quietly (and often not so quietly) premised on patriarchy, racial hierarchy, and religious order. Moreover, in the AI paradigm, it is more common to see AI likened to godly or even “alien” intelligence than to the diverse and wildly varied intelligences that actually exist terrestrially, right under our noses and before our eyes: animal intelligences, plant intelligences, mycorrhizal intelligences—none of which has evolved to burn coal, trade slaves, or drop bombs, and none of which appears in any way interested in self-, species-, or planetary extermination.
In our current state of stunted intellection, we have been keen to taste the “AI snake oil.” Corporate executives, higher-education administrators, and the popular media have largely come to accept as “AI” systems that simply augment and expand the same old all-too-human “intelligence” that has justified some five hundred years of insatiable imperialism, colonialism, patriarchy, and eugenics. With the Authoritarian Internet comes, finally, the wholesale collapse of the liberal subject and, with it, the collapse of the whole project of liberal governmentality into a regime of ambient authoritarianism, exemplified not by any sort of political class but by our asymmetrical relationship with the corporate platforms and networks that house our data, host our sites, control our communications channels, and extract rents on our various memories and projects. Max Horkheimer and Theodor Adorno, referring to Hitler and Mussolini, may well have been discussing social media and its extensions into AI in sizing up what they called “the fascist masters of today,” who “are not so much supermen as functions of their own publicity apparatus, intersections of the identical reactions of countless people.” The “publicity apparatuses” of Mussolini and Hitler were the newspaper and the radio, respectively, which helped to foment the various joys and hatreds that sustained the affective infrastructures of their fascist regimes. Today, it is AI, alongside crypto, the metaverse, Mars, and whatever else the Musks, Zuckerbergs, Altmans, and Trumps of this world happen to be selling—so many targets for human investment, so many purchasers of human attention. Insofar as we exist for the “supermen” of today’s Authoritarian Internet, it is as producers of data, both data monetized for advertising and data to train AI. Participatory digital culture, once upheld as a guarantor of free thought and the apotheosis of Western civilization, now means that every social media imprint, every online engagement, every click, tap, and swipe variously and complexly “intersects” to produce today’s most nebulous and diffuse “fascist masters.”
Among other weapons for us humanists to bring to this fight, psychoanalysis might prove helpful in further dissecting the au courant obsessions with “scaling up” this irrefutably White intelligence that Silicon Valley has on offer, particularly given the eager and abundant prognoses of soon-to-be AI selfhood, consciousness, and rights. Jacques Lacan’s linguistically tuned analysis—specifically, his articulation of the irreconcilable heterogeneity between the Imaginary, Symbolic, and Real registers of relationality and experience—seems singularly apt. Lacan emphasized how, in the clinical setting, the analysand struggles to square their memory of an event (the Imaginary) with the event itself (the Real) and to translate that memory into words (the Symbolic).10 While statistical prediction of word sequences is hardly the same as a subject’s being-in-language, Lacan’s investigations of psychosis and the ways trauma gets scored into the psyche are nevertheless instructive for understanding how Big Tech’s monstrous accumulations of data are mobilized in the transmigration of value from rocks to words.
Conjuring sentences and images and sounds from precise arrangements of ultra-refined silicon and glass, LLMs like ChatGPT obviously operate along the Symbolic register, and they are unmistakably constituted by Real material components, expenditures, and effects. But, with Lacan in mind, one might argue that they have no access to the Imaginary register—that is, no access to ego or to a sense of oneself as an individual, embodied organism, and everything that follows from occupying space and having a point of view. AI is nothing and has nothing without our sensory impressions, our sense of others, our language, our imagination, our capacities and powers. That generative AI itself has no Imaginary register can be observed, somewhat counterintuitively, in the notorious “hallucinations” of these systems, which evince a sort of psychosis stemming not from an excess of imagination, as the term would have us think, but from an excessive entanglement in a world of signs, with the subject, in this case the model, incapable of escaping the refracted, distorted shapes of its own outputs or the sounds of its own voice. Lacan’s ISR schema thus offers a way to both disentangle and see the connections between (1) the reams of data and discourse fed into and issuing forth from LLMs (the Symbolic); (2) the way humans understand their relationships with technology and their relationships with others through technological mediation (the Imaginary); and (3) the material extraction, exorbitant energy costs, and toxic legacies at all stages of the AI consumption cycle (the Real).
But we do not need Lacan to understand how our imaginative capacity is being strip-mined to serve the data monsters of Meta, Google, and OpenAI as they consume the Earth and spit out words that are themselves but hieroglyphs of energy cost and emissions, consumption and waste. In the AI paradigm, the human imaginary becomes the means of AI’s social reproduction, “feeding” AI in at least two distinct stages: first, in tagging the training data, work performed by vast legions of Turkers, taskers, and subcontractors mainly in the Global South; and second, in propelling the techno-cultural fantasy that something more than accelerated exploitation and amplified inequality is taking shape. No matter how we imagine it, the real trauma of AI (its original sin, in an alternate key) is the same wreckage of people and ecosystems that stands as the hallmark of all settler-colonial and racial capitalist development. But we have to believe that it is not too late to regroup and project a better, healthier, more just collective sociotechnological future. What might our consumer-facing computational tools and systems achieve if oriented not around service and self-optimization but around something like liberation, care, or beneficence?11 What might our societies achieve if their popular eco-technics sought not efficient extraction but something like social justice, universal compassion, or anarchic rewilding? How might the planet and its many forms of life thrive if we were to implement more capacious, imaginative, and adventurous ideas of “artifice” and “intelligence”?
Notes
1. David Eliot and Rod Bantjes, “Climate Science vs Denial Machines: How AI Could Manufacture Scientific Authority for Far-Right Disinformation,” in Political Ecologies of the Far Right: Fanning the Flames, ed. Irma Kinga Allen, Kristoffer Ekberg, Ståle Holgersen, and Andreas Malm (Manchester University Press, 2024); Mirca Madianou, Technocolonialism: When Technology for Good Is Harmful (Polity, 2024); Kashmir Hill, “They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling,” The New York Times, June 13, 2025; Miles Klee, “People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies,” Rolling Stone, May 4, 2025; Marlynn Wei, “The Emerging Problem of ‘AI Psychosis,’” Psychology Today, July 21, 2025; Qiong Chen, Jinghui Wang, and Jialun Lin, “Generative AI Exacerbates the Climate Crisis,” Science 387, no. 6734 (2025): 587; Wacuka Ngata et al., “The Cloud Next Door: Investigating the Environmental and Socioeconomic Strain of Datacenters on Local Communities,” arXiv preprint arXiv:2506.03367 (2025).
2. See, e.g., Asana, ClickUp, Grammarly; MealMate, Eatr, or Meal AI; Consensus, ResearchRabbit, or Semantic Scholar; HyperWrite or UPDF AI.
3. Arvind Narayanan and Sayash Kapoor, “AI as Normal Technology: An Alternative to the Vision of AI as a Potential Superintelligence,” Knight First Amendment Institute at Columbia University, April 15, 2025.
4. Michel Foucault, The Order of Things (Routledge, 2002); Michel Foucault, The Archaeology of Knowledge (Knopf Doubleday, 2012).
5. R. W. Burchfield, The Compact Edition of the Oxford English Dictionary: Complete Text Reproduced Micrographically (Clarendon Press, 1987).
6. A number of telecom and chip advertisements from the era likewise exemplify this belief, such as MCI Communications Corporation, “NetworkMCI Commercial (No Race, No Gender, No Infirmities… Only Minds),” television commercial (1997).
7. Paola Ricaurte, “Algorithmic Assemblages of Power: AI Harm and the Question of Responsibility,” Teknokultura: Revista de Cultura Digital y Movimientos Sociales 22, no. 2 (2025): 201–208; Emily M. Bender et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?🦜,” in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (2021).
8. Sharon Kelly and Bailey Chambers, “AI Energy Demand Can Keep Fossil Fuels Alive, Tech Backers Promise World’s Two Biggest Oil Producers,” DeSmog, April 22, 2025; Cornelia C. Walther, “The Hidden Cost of AI Energy Consumption,” Knowledge at Wharton, November 12, 2024; Shoko Oda, Mark Chediak, and Josh Saul, “AI to Prop Up Fossil Fuels and Slow Emissions Decline, BNEF Says,” Bloomberg News, April 15, 2025.
9. The tweet, originally from 2018, was discussed at length during Elon Musk’s appearance on The Joe Rogan Experience (episode #1169) but has since been deleted.
10. Jacques Lacan, The Psychoses: The Seminar of Jacques Lacan, Book III, ed. Jacques-Alain Miller (Taylor & Francis, 2013); Jacques Lacan, Formations of the Unconscious: The Seminar of Jacques Lacan, Book V, ed. Jacques-Alain Miller, trans. Russell Grigg (Polity Press, 2017).
11. The latter was suggested by the eminent computer scientist and AI pioneer Stuart Russell during a 2024 lecture at UCI, “How Not to Destroy the World with AI—On Second Thoughts,” February 15, 2024.