Toward a Critical Theory of Active Language in the Age of “Generative AI”

by Lauren Goodlad


“One of the world’s most lucrative, heavily capitalized, and monopolistic corporations has set out to interpellate me into a new subject position: that of a generative AI user.”

In the opening chapter of George Orwell’s Nineteen Eighty-Four (1949), Winston Smith contemplates a momentous speech act: a “thoughtcrime” that could end in a death sentence “or at least twenty-five years in a forced labour camp.” As he steels himself for the “decisive act” of putting pen to paper to begin a diary, Smith’s bowels react and he experiences “a sense of complete helplessness.” “It was curious,” the narrator relays, that “he seemed not merely to have lost the power of expressing himself, but even to have forgotten what it was that he had originally intended to say… He was conscious of nothing except the blankness of the page in front of him.”

Many readers of this passage will recognize Smith’s experience as a species of writer’s block. However, unlike any version of that phenomenon that the novel’s implied readers are likely to undergo, Smith must hide his efforts to write from the two-way telescreen blaring in the background. Through such intense defamiliarization (in Viktor Shklovsky’s sense of that term), the quotidian act of starting a diary has become a bowel-tremoring capital offense. 

No doubt Orwell’s dystopic opening tableau compels readers to notice their comparative freedom—even if they are conscious (as many may be at the present moment) of living at a time of rising autocracy. But I have in mind a somewhat different context for Orwell’s primal writing scene. The opening of Nineteen Eighty-Four, I contend, sparks the recognition that—unlike Smith—the novel’s implied readers are writing subjects: active users of (written) language whose “power of expressing” themselves remains largely intact. 

Writing subjects thus defined use language in something like the way that Jacques Derrida first articulated at a 1971 conference when he argued that “communication” in both speech and writing depends on people who wish to say “something meaningful.” The dilemma of that embodied “wish to communicate,” he proposed, is the post-metaphysical condition best expressed by the situation of writing—a speech act that foregrounds the “nonpresence” between “my intention of saying something meaningful [mon vouloir-dire, mon intention-de-signification]” and “the emission or production of the mark.” From this view, Nineteen Eighty-Four is about the precarity of writing subjects. The novel’s opening chapter, in concert with the evocation of Newspeak it anticipates, dramatizes the contingent speech acts through which human users of language turn something they have “intended to say” into intelligible marks on a “blank page.” 

Now let us contrast Orwell’s fictional scene of writing to my own encounter with a blank “page” as I opened Microsoft Word via Windows 11 to compose these remarks (see Figure 1).


Figure 1. Screenshot from HP PC running Word on Microsoft Windows 11, August 1, 2025

Microsoft, a company whose bloated software I have grudgingly purchased for some thirty years, has now embedded its Copilot chatbot into the Office software that runs on my laptop—an invasive gambit, undertaken without my consent, to hail me in a wholly new way. One of the world’s most lucrative, heavily capitalized, and monopolistic corporations has set out to interpellate me into a new subject position: that of a generative AI user.

To say the obvious, Big Tech is watching me. 

In sharing these thoughts from a book project titled The Lifecycle of Writing Subjects: On the Futures of Human Poiesis in an Age of Generative AI, I’m aware of the paradox of invoking Orwell—who despised hackneyed language—when “Orwellian” long ago became a byword for surveillance and the authoritarian manipulation of language. Already common during the first Trump administration, reflections on Nineteen Eighty-Four took on new relevance when “Make Orwell fiction again” became a common refrain of the “No Kings” protests in October 2025. Nonetheless, whatever “Orwellian” and “Newspeak” may once have signified, their potential valence has transformed with the advent of generative AI (gen AI). In this essay, I explore this dynamic condition while sharing thoughts on how scholars in the humanities can respond, both theoretically and practically. 

In a pioneering essay from 1983, the political theorist Langdon Winner described technologies as “forms of life” that, when misrecognized, may smuggle in new “social contracts” and “vast alterations” to a “common world” as if they were mere technical updates. As such, a new technology’s potential impact on a common lifeworld should alert critical theorists (among other citizens of the world) to two possible pitfalls: a bleary-eyed technological somnambulism that amounts to “sleepwalking” through paradigm shifts as tech companies move fast and break things; and an equally damaging technological determinism that endorses “permissionless innovation” as if it were a force of nature. The literary critic David Golumbia, nodding to Winner, describes the latter mentality as “cyberlibertarianism”: “the belief that digital technology is or should be beyond the oversight of democratic governments.”

“Literary criticism has surprisingly little to say about the onto-epistemic and socio-technical conditions for writing and for the cultivation of writing subjects—that is, active users of (written) language.”

Paradoxically, gen AI’s claim to the status of nature-like inevitability rests on a manufactured resource intensity so staggering that it is right now colonizing the world’s supply of energy, water, and chip-making materials in plain sight. This “generational infrastructure buildout”—a multi-trillion-dollar expenditure that currently drives a speculative bubble in an otherwise lackluster US economy—has been estimated at 17 times the size of the dot-com bubble and 4 times that of the subprime mortgage debacle. Sprung from the monopolized concentration of politico-economic power in lucrative modes of surveillance and data extraction, the “hyper-scalers” who dominate this world-historical expansion have been greenlit by the Trump administration and are toying with the potential for federal guarantees. According to antitrust expert Matthew Stoller, “between ten and twenty men” are leading this vast spending regime—an oligarchic time-bomb inextricable from the array of social, political, cognitive, and environmental harms that gen AI exacerbates and entrenches. The toll in question includes the extraordinary drain on energy, water, and rare earth resources; the amplification of stereotypes, biases, and dominant languages like English; the looting of intellectual property and pollution of the digital commons; the all but invisible exploitation of human labor; the proliferation of false, malicious, and propagandistic content; the deliberate leveraging of the ELIZA effect to manipulate and further monetize users’ engagement with little care for harms to people and the social fabric; and the aspiration to subordinate creative, professional, and knowledge work to an AI-mediated, winner-takes-all gig economy.1

That this evolving paradigm points to extraordinary circumstances that exceed the question of how generative AI affects language and culture does not obviate the urgency of that inquiry. Especially since the release of ChatGPT in November 2022, humanists and interpretive social scientists have increasingly begun to answer the call for “humanities in the loop.” Literary scholars—the audience I particularly hope to address in this essay—have begun to contend with this elephant in their classrooms. But inquiry into how complexly engineered and personified chatbots operate—including their effects on human socio-cultural, cognitive, and emotional wellbeing—can easily stumble over deep-seated interdisciplinary challenges.

As I elaborate in a forthcoming essay, when literary critics contemplate the “language” in Large Language Models (LLMs), they often apply insights from the field’s own linguistic turn.2 This focus on the “reading” of texts (as critics often style their interpretive practice) is, however, far from the only legitimate framework for studying a technology that seeks to reshape human writing, research, and creative practices. Indeed, as writing studies and digital humanities scholar Annette Vee observes, literary critics only “rarely” discuss “the process of writing,” even though that very process undergirds their scholarship and teaching.

“The legacy of poststructuralism sustains a persistent tendency to understand “language” primarily through texts, while regarding active theories of language use as topics for communication scholars, linguists, or scholars of writing studies.”

In my forthcoming work, I trace this lacuna to the particular way that late twentieth-century literary scholars assimilated Derrida’s revision of the structural linguistics of Ferdinand de Saussure as well as how that poststructuralist legacy has come to stand in for the field’s most salient theoretical underpinnings on questions of language. As most literary critics learn in graduate school, Saussure hived off parole (the dynamic speech of people using a particular language) from his study of langue (the synchronic sign system that, in theory, structures that language). The idea that “language is an internally organized system of signs that are arbitrarily linked to concepts that are distantly, if at all, related to things in themselves,” as Rafael Alvarado explains, was a preeminent insight of the late twentieth century. That insight took shape in part through a “distributional hypothesis” that became influential in linguistics and Natural Language Processing (NLP).3 
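To make concrete what “distributional” means here, consider the following minimal sketch (my own illustration, not drawn from Harris, Alvarado, or any actual LLM pipeline; the toy corpus, window size, and function names are invented for the purpose). It represents each word by the raw counts of the words that co-occur with it and then measures “similarity” over those counts alone, with no reference to speakers, intentions, or referents:

```python
from collections import Counter, defaultdict
from math import sqrt

# A toy corpus (invented for illustration) standing in for the vast text
# collections on which distributional methods -- and, at far greater scale,
# LLMs -- rely.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

def cooccurrence_vectors(sentences, window=2):
    """Represent each word by counts of the words appearing within `window`
    positions of it: Harris's wager that meaning-like structure can be read
    off distribution alone."""
    vectors = defaultdict(Counter)
    for sentence in sentences:
        tokens = sentence.split()
        for i, word in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vectors[word][tokens[j]] += 1
    return vectors

def cosine(v1, v2):
    """Similarity between two count vectors: a function of co-occurrence
    statistics alone, not of anyone's wish to say something meaningful."""
    dot = sum(v1[w] * v2[w] for w in set(v1) & set(v2))
    norm1 = sqrt(sum(c * c for c in v1.values()))
    norm2 = sqrt(sum(c * c for c in v2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

vectors = cooccurrence_vectors(corpus)
# "cat" and "dog" turn out distributionally "similar" because they appear in
# similar contexts -- the only notion of similarity the procedure has.
print(round(cosine(vectors["cat"], vectors["dog"]), 3))
```

In a model of this kind (and, at vastly greater scale and with far more elaborate machinery, in an LLM), “cat” and “dog” come out as similar simply because they occur in similar contexts; nothing in the procedure corresponds to anyone’s intention of saying something meaningful.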

But it is important to remember that late-twentieth-century literary critics registered important exceptions to—and tensions within—the Saussurean privileging of langue—debates that have been poorly integrated into the commonplace precepts of legacy poststructuralism. Mikhail Bakhtin and his circle, for example, argued that Saussure’s splitting off of langue from parole was an “objectivist” decoupling of language from the dialogic situation of human communication. In The Prison-House of Language (1972), Fredric Jameson argued that language makes “its existence felt at every moment of our thought, in every act of speech” even though language is “nowhere present at once, nowhere taking the form of an object or substance.” Thus, Saussure’s failure to reunite parole and langue, according to Jameson, pointed to the stubborn “contradictions” in his thought. Finally, Derrida’s response to Saussure’s untenable split between langue and parole was to overturn these emphases. Against the supposed fixity attributed to langue and the mythic immediacy associated with live speech, Derrida’s pivot to parole in its written form emphasized the contingency, indeterminacy, and unpredictability of active language. That is why deconstruction’s signal inspiration is the “nonpresence” that haunts writing.

If these complications seldom come to the fore in today’s conversations over language and gen AI, that is partly because, in the 1980s, literary deconstructionists leaned into langue. The austere textual hermeneutics that came to dominate literary criticism perversely cemented the conceptual split between langue and parole that Derrida had set out to reject. In treating select literary texts as if they met the systemic criteria for langue, such “deconstruction” ignored Derrida’s premise of writing as parole. Influenced by the pre-existing habits of mid-century New Criticism, this doctrinaire “reading” practice divorced the textual products of meaning-making in literature from their enactment in diverse material and embodied processes. Although literary scholars long ago moved on from this “idiosyncratic deconstruction,” the legacy poststructuralism left in its wake sustains a persistent tendency to understand “language” primarily through texts, while regarding active theories of language use as topics for communication scholars, linguists, or scholars of writing studies. It follows that the need to grapple with LLM-based technologies offers an ideal opportunity for literary critics to revisit forgotten faultlines in dialogue with scholars of writing and in conversations that cross humanities disciplines and AI-adjacent fields such as linguistics and NLP.4  

To be clear: I am neither dismissing the importance of “reading” texts nor ignoring the dynamism of contemporary literary studies. I am, rather, emphasizing that despite such vibrant eclecticism, literary criticism has surprisingly little to say about the onto-epistemic and socio-technical conditions for writing and for the cultivation of writing subjects—that is, active users of (written) language. Instead, inured to the simplifications of legacy poststructuralism and mindful of a chastening “post-critique,” today’s literary scholars remain notably detached from their own commitments to active language (parole).5 Hence, while critics doubtless recognize themselves as embodied creatures whose written and spoken words support their “intention of saying something meaningful,” they typically lack the means to theorize or account for their own investments in an active writing process. At the same time, when literary critics theorize “language” they often appeal to the legacy of text-centric deconstructive practices without remarking on Derrida’s emphasis on “active thought and dialogue.”

“The need to grapple with LLM-based technologies offers an ideal opportunity for literary critics to revisit forgotten faultlines in dialogue with scholars of writing and in conversations that cross humanities disciplines and AI-adjacent fields such as linguistics and NLP.”

These misgivings toward action are especially lamentable at a time when literary critics could be among those humanists to develop new critical theories of active language that speak to their own experiences in the classroom. As educators encountering the dilemmas of cognitive debt, learning loss, and deteriorating academic integrity, literary scholars could, for example, speak up for the importance of reading and writing as the necessary building blocks for a wide range of creative and critical thought processes and practices. They could call out the need for institutional support to develop the teaching of critical AI literacies. Yet, in comparison to scholars of writing studies, few seem equipped to do so.6 

Note too that the downsides of a pervasive focus on language as textual product, unleavened by any theory of active use, can show up in a range of research endeavors. For example, even as strong proponents of decolonial critique, literary critics may lack the sociolinguistic insights necessary to recognize that Western thought (including its invention of a eugenics-inflected and anthropomorphized discourse of “AI”) occludes the materiality of speech acts—especially the “bodily and material coordinating practices” through which language “comes into being.” The lack of such insights can be especially damaging in the face of text-generating chatbots: for it can inhibit the ability to theorize this new socio-technical paradigm in ways that encompass its diverse affordances; its increasing impact on human writing and creative cultures; and its dependence on the politico-economic conditions of an extractive, highly monopolistic, and technofeudal industry that demands strict adherence to the dubious doctrines of cyberlibertarianism and permissionless innovation.

“Literary scholars could, for example, speak up for the importance of reading and writing as the necessary building blocks for a wide range of creative and critical thought processes and practices. They could call out the need for institutional support to develop the teaching of critical AI literacies. Yet, in comparison to scholars of writing studies, few seem equipped to do so.”

As a baseline, literary critics today might look to the twin perils Winner laid out in “Technologies as Forms of Life”: technological somnambulism and technological determinism. To avoid those pitfalls, they might at minimum undertake: 

1) a long overdue reckoning with critical theory’s implicit commitments to action and a corresponding readiness to explore the “language” in LLMs through fresh eyes, in conjunction with 

2) the cultivation of in-depth understandings (potentially explored through cross-disciplinary collaborations) of chatbot technologies, their material conditions of possibility, and their myriad effects; and

3) the recognition that the tech industry’s quasi-theistic fixation on scale reduces “language” to positivistic abstractions of “data” such that data-driven technologies are both the product and amplifier of a pervasive and growing data positivism.7

Equipped with such knowledge, literary critics can be among those humanists to propose an array of collective political, technical, and professional (teaching- and research-adjacent) practices suitable to the current conjuncture. 

The risk of doing otherwise, I contend, is to tether critics to the under-examined tenets of legacy poststructuralism—the effects of which, right now, tend toward effective erasure of human action in writing and other communicative practices; a reluctance to engage in critique; and the conflation of “theory” with the products of data positivism and probabilistic mimicry. By contrast, the goal should be a field alert to the dangers of an unwitting or complaisant sanction of permissionless innovation. 

Image Banner Credit: Geronimo Gigueaux

Notes

  1. For an expanded discussion of and resources about these topics, see Goodlad and Stone and the “Teaching Critical AI Literacies: Living Document.”
  2. See Goodlad (forthcoming in Representations).
  3. The idea of the “distributional hypothesis” derives from the mid-century linguistic theory of Zellig Harris (Noam Chomsky’s teacher). In a longer version of the current essay I highlight the difference between Harris’s concept and Saussure’s langue and build on Alvarado in explaining the importance of the former to understanding what state-of-the-art LLMs do and how scholars today should view them. In brief, I argue that LLMs are a closer approximation to Harris’s “distributional hypothesis” than they are to Saussure’s langue. That is, as an idealizing conceptual abstraction of the systemic totality of a single language, Saussure’s langue derives its salience precisely through its opposition to parole (the dynamic and unpredictable terrain of language use). This is why Derrida’s theories of language turned toward parole and, in doing so, toward the indeterminacies of speech acts. It is also why literary scholars should consider LLMs as large-scale statistical products that are heavily fine-tuned by human workers—not as empirical instantiations of langue or, still less, socio-technical implements for integrating langue and parole.
  4. My understanding of language use derives partly from Derrida and partly from the influential “action tradition” of the psycholinguist Herbert H. Clark, whose Using Language (1996) builds on the philosophy of J.L. Austin. According to Clark’s paraphrase of Austin, language (which includes speech, gesture, and writing) involves the actions people take when they want “addressees to recognize what they mean” (133). Computational linguists working in the Clarkean tradition, along with sociolinguists and like-minded AI researchers, have been among the most vocal in arguing that LLMs do not meet the criteria for human-like language understanding and use. Discussions from the 1980s that paired Derrida and Austin (e.g., Mitchell) could also help to enrich hybrid theories of language use that scholars might bring to their engagements with Clark’s Austin-inflected action tradition as well as a broad spectrum of interdisciplinary researchers studying the design, development, and implementation of LLM-based chatbots. These inquiries might also consider a Wittgensteinian tradition of ordinary language that has been important to AI research for some time (e.g., Wilks)—though ideally such conversations would avoid the post-critical tendency to offer ordinary language as a cure for the (tendentious) proposition that “suspicion is the only possible attitude for a serious literary critic” and/or as a radical alternative to theory.
  5. See the linked article for discussion of “post-critique,” a Great Recession-era set of arguments that charged critical theory (with some justice) with problems of excessive suspicion, narrow historicism, human exceptionalism, and critical arrogance. Space does not permit a fuller elaboration of how “post-critique” plays a role in recent critical engagements with generative AI. 
  6. For exemplary work from scholars trained in writing studies, see, for example, Halm (forthcoming in Critical AI 3.2), Losh, MacArthur (forthcoming in Critical AI 3.2), and Vee. To be sure, literary critics have begun to endorse comparable positions in Critical AI; in journals such as American Literature and PMLA; and venues such as The Atlantic Monthly.
  7. As I explain in a longer version of this essay, when industry enthusiasts augur the coming of supposedly human-level “AGI” (artificial general intelligence) by invoking pseudo-scientific scaling “laws,” they indulge in a rhetoric of data positivism that papers over the distinctions between language and training sets by erasing the origins of this data in live speech acts—including the embodied, spatio-temporally situated, relationally activated, affectively charged, and socio-technically mediated dimensions particular to parole. According to the logic of data positivism, as Katherine Bode and I explained, the more data, the more certain the findings, regardless of any onto-epistemic reductions, biases, stereotypes, exclusions, and errors that arise along the way. Moreover, with the popularization of user-friendly chatbots, this positivistic monoculture has begun to permeate academic research as scholars, keen to accelerate productivity, turn to LLMs in place of best practices in knowledge production as typically conceived (e.g., enlisting autogenerated data and/or autogenerated models of data to replace painstaking data collection and analysis). When the culture of data positivism upholds quantitative efficiency—more output in less time—as a victory for human progress, it misconceives the active requirements for critical thinking and learning (e.g., Bastani et al., Cheng et al., Kosmyna et al.) and propagates what Stone and I have called the “myth of frictionless knowing.”
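For readers curious about the form such scaling “laws” take, one widely cited formulation (from Kaplan et al.’s 2020 paper “Scaling Laws for Neural Language Models,” which is not among the sources discussed in this essay) models a language model’s test loss as a simple power law in its parameter count:

\[ L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N} \]

Here N is the number of (non-embedding) parameters, and N_c and α_N are empirically fitted constants; analogous expressions cover dataset size and compute. Whatever such curves capture, nothing in them registers where the training data came from or what anyone meant by it: “language” enters the formula only as a quantity to be scaled.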