Published January 24, 2026
Part One: Literacy Workers, Unite!
Most of what is considered to be AI is other people’s data. In the same way that the “Cloud” metaphor abstracts and hides infrastructure, the term “AI” abstracts and hides the origins of information.
—Adam Harvey (qtd. in Smith)
The problem with the transformation of work today is less that new technologies could eventually replace workers than that they are used to degrade working conditions, keep wages stagnant, and mount a major flexibilization of working time.
—Juan Sebastian Carbonell (Broder)
It’s my last year as a PhD student, so I’ve been applying for jobs. Just for fun, I recently asked ChatGPT to generate a 500-word Teaching Philosophy Statement for a tenure-track composition studies job application. The result wasn’t earth-shattering, but it did persuasively parrot the formal and substantive elements of the genre. This probably says more about how formulaic and predictable we humans are than it does about how ‘intelligent’ computers have become.
In this essay, I argue that tools like ChatGPT should be actively and collectively resisted. Such tools merely entrench the unjust, exploitative systems and structures that make them possible in the first place. What good is a tool that helps you get a job if that same tool helps bosses undermine labor conditions and worker autonomy?
As many readers probably know, a tool like ChatGPT works at all only because it runs statistical analysis over vast troves of human-produced data and spits out the most probable response. These technologies rely on elaborate “neural networks,” which Gideon Lewis-Kraus describes as “pliant digital lattices based loosely on the architecture of the brain” and which enable “classifications or predictions based on [a machine’s] ability to discover patterns in data” (“The Great A.I. Awakening”).
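To make that mechanism concrete, here is a minimal sketch of next-token prediction, using a toy bigram model of my own devising rather than anything from OpenAI. Real LLMs use neural networks over subword tokens, with billions of learned parameters, but the basic operation is the same in spirit: given what came before, emit the statistically likeliest continuation.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then greedily emit the most probable next word at each step.
corpus = (
    "my teaching philosophy centers student learning . "
    "my teaching philosophy centers active learning . "
    "my research informs my teaching ."
).split()

# Tally bigram frequencies: how often does each word follow each word?
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_continuation(word, length=6):
    """Greedily append the statistically likeliest next word."""
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # argmax over observed counts
    return " ".join(out)

print(most_probable_continuation("my"))
# -> "my teaching philosophy centers student learning ."
```

The point of the toy is the shape of the operation: more data and vastly bigger models sharpen the mimicry, but nothing in the loop resembles understanding.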
Companies like OpenAI scrape this data from the internet without compensating or crediting the ordinary users who generated the data in the first place. And ChatGPT’s outputs are impressive not because the machine itself is so smart, but because a high enough percentage of the texts in its training data have so much in common—i.e., are so alike and unoriginal that it becomes relatively easy to mimic them without doing very much thinking at all. It’s for this reason that Jason Farago suggests that “we call our dissembling chatbots and insta-kitsch image engines what they are: mirrors of our diminished expectations.”
But what is it, exactly, that accounts for the often-unremarkable, algorithmic quality of so much human-generated text—from academic articles and legal briefs to workplace memos, cover letters, movie scripts, news reports, song lyrics, and even aspects of this very essay? Why do so many humans talk, write, and act like machines?
ChatGPT’s outputs are possible, in part, because all texts are intertextual, and all writing is, to some extent, pastiche (Halliday; Jackson; Porter). But its outputs are also fundamentally enabled by 21st-century genre and activity systems (disciplines, professions, markets, nation-states, etc.) that tightly circumscribe the horizon of possibility for communication and interaction (Luke; Russell). It’s primarily these systems that explain why so much human-produced text is so often bland and lifeless: our writing can only be as good as our writing conditions allow.
Because the business model of companies like OpenAI is premised on large-scale, shameless data theft, their products should be avoided like the plague. To be clear, I’m not saying that students should be shamed or penalized for using ChatGPT or similar tools. When my own students resort to such tools, they often do so out of desperation, or because they’re unaware of the tools’ faults, or because I haven’t made a persuasive case for the value of process (writing as a series of recursive steps), not just product. I insist to them that I would much rather read messy, disorganized writing that is full of grammatical and syntactic errors but that also exhibits creativity, curiosity, risk-taking, personality, and a conscience. No matter how “good” or “bad” such writing might be, it always signals something important about students’ learning: their moral-emotional-intellectual development, their maturing capacity for open-ended inquiry, trial-and-error, and so on.
I’m suggesting, instead, that the pervasiveness of Big Tech hype and propaganda around AI writing creates new openings for talking and teaching about the social, economic, and geopolitical forces—racial capitalism, imperialism, patriarchy, etc.—that constrain and confine human interaction and meaning-making practices in such profound ways.
After all, it’s these forces and structures that make a predictive, pattern-recognition tool like ChatGPT even possible in the first place. They are what explain the abundance of human writing that’s been purged of all personality, pleasure, or conscience, reduced to the merely transactional (Tarnoff). In this sort of textual landscape, unconventional, experimental writing looks ‘incorrect,’ ‘indulgent,’ or ‘out of place.’ But if a better, more humane world will require the full range of our linguistic and communicative capacities, then that’s exactly the kind of writing we need more of (Mieszkowski; Olsen and Dodge).
Teachers, students, and scholars of writing are well-positioned to contribute to the ongoing, never-finished work of imagining alternative, more humane norms and institutions and constructing more just, democratic, and egalitarian genre and activity systems—in contrast to the ones we have now, many of which are premised on conformity, hierarchy, commodification, exploitation, and alienation (Wright). As the MLA-CCCC Joint Task Force on Writing and AI Working Paper points out, not only do large language models like ChatGPT “promote an uncritical normative reproduction of standardized English usage that aligns with dominant racial and economic power structures” (7), but these technologies also create strong incentives for bosses, managers, and high-level administrators “to increase class sizes or modify workloads based on perceived efficiencies created by AI” (9).
Indeed, the danger here isn’t that tools like ChatGPT will replace writers or teachers. The real danger—one that can’t be overstated—is that such tools will be weaponized by the powerful in order to more ruthlessly exploit the less powerful (including contingent faculty and graduate students), further de-skilling and casualizing an already precarious workforce (Bousquet; Broder; see also Marc Maron’s interview with Writers Guild of America labor organizer Adam Conover; the relevant part of the conversation starts at the one-hour, ten-minute mark [1:10:00]).
For this reason and others, many of us can’t afford—literally—to fall prey to self-interested Big Tech narratives that want us to treat these technologies as somehow inevitable. Few things are really inevitable. Active, organized refusal and resistance—even outright sabotage—are, in fact, viable options, and in some cases, existential imperatives. The concrete forms that organized resistance and sabotage might take will, of course, depend on the context; labor unions certainly have an essential role to play, though (Berry and Worthen; Samuels). As Elizabeth Gurley Flynn writes in “Sabotage: On the Conscious Withdrawal of Industrial Workers’ Efficiency”:
I am not going to attempt to justify sabotage on any moral ground. If the workers consider that sabotage is necessary, that in itself makes sabotage moral. Its necessity is its excuse for existence. And for us to discuss the morality of sabotage would be as absurd as to discuss the morality of the strike or the morality of the class struggle itself.
Class struggle is no less urgent now than it was when Flynn wrote those words (Therborn; Vasudevan). And if Flynn’s emphasis on “industrial workers” strikes you as outdated, let’s update it. This is easy to do: after all, in supposedly post-industrial sectors of the 21st-century global economy—sectors dominated by services (like education and health care), finance, and ICT—writing and literacy have become “a dominant form of manufacturing,” with “texts serv[ing] as a chief means of production and a chief output of production” (Brandt, The Rise of Writing 3).
Industrial workers might be in the minority, but the majority of people are still workers. Many of us are literacy workers (the “proLITariat,” to borrow Ryan McHale’s term). And we must unite—partly by recognizing our own strategic location at the point of economic (textual) production, a location that entails significant leverage and makes us more powerful than we might usually feel. As Vivek Chibber explains in “Why the Working Class?,”
progressive reform efforts have to find a source of leverage, a source of power that will enable them to overcome the resistance of the capitalist class and its political functionaries. The working class has this power, for a simple reason—capitalists can only make their profits if workers show up to work every day, and if they refuse to play along, the profits dry up overnight . . . Actions like strikes don’t just have the potential to bring particular capitalists to their knees, they can have an impact far beyond, on layer after layer of other institutions that directly or indirectly depend on them—including the government. This ability to crash the entire system, just by refusing to work, gives workers a kind of leverage that no other group in society has, except capitalists themselves.
The 19th-century Luddite movement, of course, offers an instructive historical precedent. Contrary to popular belief, the movement’s English textile workers weren’t, in fact, anti-tech. They were anti-exploitation: “They just wanted machines that made high-quality goods . . . and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns” (Conniff).
In the age of AI—an age in which technocratic, extractive capitalism has become an outmoded, bankrupt model for organizing social, political, and institutional life—we, too, must be staunchly anti-exploitation—and, therefore, anti-capitalist. Contrary to what the dominant neoliberal perspective would have us believe, the relentless cost-cutting and profit-maximizing strategies of Big Tech are incompatible with the public good.
Capitalism creates human needs far more quickly than it satisfies them (Lebowitz). For this reason and many others, meaningfully addressing urgent human needs won’t happen within capitalism’s confines—we need radically new systems and structures. As Johan Galtung writes in “Towards a New International Technological Order,” “For techniques that create different structures to come into their own, a very clear perception of the interlocking of technology and structures is needed. Also needed is the political will to use alternative technologies as an instrument to bring about a structural change” (277, emphases added).
The role of critical teachers and scholars of writing, rhetoric, and technology is to strengthen such perceptions and help constitute such political wills.
Anything less will be bland and lifeless—just another soulless transaction.
Part Two: Resisting Robo-Thought
Large language models (LLMs) like ChatGPT don’t just recognize patterns in data. They also replicate and riff on those patterns. This makes LLMs the quintessential “reproductive technology,” to use Sara Ahmed’s term: a technology that cements the status quo, and all the ideological values and assumptions it entails.
In its relation to society and culture, an LLM is basically an inertia machine, uncritically and unconsciously recycling the past to predict the future. And, unlike a person, it lacks the capacity to care about this fact: to empathize with its users or to agonize about its effects. It is, quite literally, blissfully unaware—and thus far from intelligent.
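To see the inertia in miniature, consider a toy sampler along the same lines as the bigram sketch above (again my own crude stand-in for an LLM, not how production systems are built). By construction, every word pair it emits already occurs in its training corpus: it can recombine the past, but it cannot exceed it.

```python
import random
from collections import Counter, defaultdict

# Fit the toy bigram model to a small corpus.
corpus = "the old norms produce the old texts and the old texts train the model".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def sample_continuation(word, length=8):
    """Sample next words in proportion to their observed frequencies."""
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        words, weights = zip(*options.items())
        out.append(random.choices(words, weights=weights)[0])
    return out

generated = sample_continuation("the")
# Every generated bigram was already present in the training data.
train_bigrams = set(zip(corpus, corpus[1:]))
assert set(zip(generated, generated[1:])) <= train_bigrams
print(" ".join(generated))
```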
Intelligence can’t be computed or quantified, as Stephen Jay Gould and other incisive critics of eugenicist ideas about IQ have argued (Stoval). Human reasoning and judgment are not reducible to computation or calculation. As Matthew Cobb writes: “By viewing the brain as a computer that passively responds to inputs and processes data, we forget that it is an active organ, part of a body that is intervening in the world, and which has an evolutionary past that has shaped its structure and function.”
To counter the one-sided but influential narratives peddled by Big Tech and its enablers, teachers and scholars must do what we can to foreground the limitations of computation metaphors, AI ideologies, and proprietary LLMs. In addition to the drawbacks mentioned so far, these limitations include the following:
- LLMs excel at producing boilerplate text that’s been “channeled into its flattest possible version so as to be useful to those who mainly use language as liability control” (Weatherby).
- LLMs make it even more tempting to substitute the company of machines for the company of people, thus exacerbating the feelings of isolation and lack of social belonging that can predispose people to fascist, totalitarian, and other anti-social ideologies, as Hannah Arendt and others have argued (Jaffer; see also Turkle’s “Stop Googling. Let’s Talk”).
- Because of the amount of money, expertise, and resources required to develop LLMs in the first place, we face the prospect of “a new oligopoly that concentrates language technologies in the hands of a few private companies,” with profoundly anti-democratic implications for the public sphere, the parameters of which will increasingly be set by a powerful few at the expense of the many (Bajohr).
- LLMs enable the proliferation of disinformation and propaganda (Marcus).
- The conflation of computation and judgment can encourage us to think of human beings in rote, mechanistic, instrumental terms, thereby allowing influential decision-makers to keep a safe distance from the consequences of their decisions: “Powerful figures in government and business [can] outsource decisions to computer systems as a way to perpetuate certain practices while absolving themselves of responsibility. Just as the bomber pilot ‘is not responsible for burned children because he never sees their village’, Weizenbaum [an early critic of AI] wrote, software afforded generals and executives a comparable degree of psychological distance from the suffering they caused” (Tarnoff; see also Dress, “Inside America’s Plans for an Autonomous, AI-powered Military,” and McCaney, “AI Lie Detectors Could Soon Police the Borders”).
I certainly don’t want to feed into the fearmongering doomsday scenarios preferred by Big Tech, which conveniently distract from the very real, present-tense risks and costs—non-apocalyptic though they might be—of LLMs and other so-called “AI” tools.
But I do want to emphasize, more modestly, that we can’t be naive about these things. Nor is it helpful to adopt a wishy-washy, faux neutrality that pretends LLMs’ costs and abuses are somehow balanced out by the perceived upsides. No tool or technology is neutral, nor are people (Winner). At every stage of technological development, the public must ask:
- Why are we doing this?
- Is this really the best use of our time, money, energy, and resources?
- Will this technology help us address big social and environmental problems like poverty, inequality, war, hunger, disease, and climate change, or will it make these problems worse?
- Who controls this technology?
- Whose interests does it serve?
- What enabling conditions make it possible? (What are the social, political, and economic forces that underwrite its existence?)
- What ideological values and premises are encoded in it?
- What does history tell us about the technology’s likely uses and abuses?
- Are the likely abuses preventable, or are they somehow incentivized by the technology’s enabling conditions?
Silicon Valley’s self-serving, pre-packaged answers to any of these questions must be flatly rejected. Instead, we must cultivate spaces and dispositions that allow for open-ended, democratic deliberation and critical, cooperative problem-posing.
My own views on the subject, I hope, are clear. But I don’t expect others to automatically agree. Beliefs, judgments, and commitments should never be automatic, or automated. When it comes to the uses, abuses, and meanings of so-called “generative artificial intelligence” (more accurately called “degenerative artificial obliviousness”), we need generative dialogue, debate, and struggle among real thinking, feeling people.
* Teaser image photo by Kyle Head on Unsplash.
Further Reading
Bhalla, Jag and Nathan J. Robinson. “‘Techno-Optimism’ Is Not Something You Should Believe In.” Current Affairs. Oct. 20, 2023. https://www.currentaffairs.org/2023/10/techno-optimism-is-not-something-you-should-believe-in
boyd, danah and Kate Crawford. “Critical Questions for Big Data: Provocations for a Cultural, Technological, and Scholarly Phenomenon.” Information, Communication, & Society, vol. 15, no. 5, 2012, pp. 662–679.
Dhaliwal, Ranjodh Singh, Théo Lepage-Richer, and Lucy Suchman. Neural Networks. University of Minnesota Press, 2024. https://www.upress.umn.edu/book-division/books/neural-networks
Fuchs, Christian. “Communicative Socialism/Digital Socialism.” tripleC: Communication, Capitalism, & Critique, vol. 18, no. 1, 2020. https://www.triple-c.at/index.php/tripleC/article/view/1144/1308
Hornborg, Alf. “Technology as Fetish: Marx, Latour, and the Cultural Foundations of Capitalism.” Theory, Culture & Society, vol. 31, no. 4, 2014, pp. 119–140.
Pasquinelli, Matteo. The Eye of the Master: A Social History of Artificial Intelligence. Verso, 2023. https://www.versobooks.com/products/735-the-eye-of-the-master
Paur, Stephen. “Infopower and Ideologies of Extraction.” Electronic Book Review. 2024. https://electronicbookreview.com/essay/infopower-and-the-ideology-of-extraction/
Prashad, Vijay. “Can the Global South Build a New World Information and Communication Order?” TriContinental. May 18, 2023. https://thetricontinental.org/newsletterissue/new-world-information-and-communication-order/
Therborn, Goran. “Class in the 21st Century.” New Left Review, vol. 2, no. 78, 2012. https://newleftreview.org/issues/ii78/articles/goran-therborn-class-in-the-21st-century
Thoreau, Henry David. “Nothing to Say.” Lapham’s Quarterly. 1854. https://www.laphamsquarterly.org/communication/nothing-say
Tucker, Ian. “Signal’s Meredith Whittaker: ‘These Are the People Who Could Actually Pause AI If They Wanted To.’” The Guardian. June 11, 2023. https://www.theguardian.com/technology/2023/jun/11/signals-meredith-whittaker-these-are-the-people-who-could-actually-pause-ai-if-they-wanted-to
Tulathimutte, Tony. “Proposals Toward the End of Writing.” The Believer. Feb. 9, 2016. https://www.thebeliever.net/logger/2016-02-09-proposals-toward-the-end-of-writing-2/
West, Mark. “An Ed-Tech Tragedy? Educational Technologies and School Closures in the Time of COVID-19.” UNESCO, 2023. https://unesdoc.unesco.org/ark:/48223/pf0000386701
Zhu, Elizabeth. “The Dangers of Techno-Optimism.” The Stanford Daily, Nov. 29, 2022. https://stanforddaily.com/2022/11/29/opinion-the-dangers-of-techno-optimism/
Works Cited
Ahmed, Sara. “Making Feminist Points.” feministkilljoys. Sept. 11, 2013. https://feministkilljoys.com/2013/09/11/making-feminist-points/
Bajohr, Hannes. “Whoever Controls Language Models Controls Politics.” Apr. 8, 2023. https://hannesbajohr.de/en/2023/04/08/whoever-controls-language-models-controls-politics/
Berry, Joe and Helena Worthen. Power Despite Precarity: Strategies for the Contingent Faculty Movement in Higher Education. Pluto Press, 2021.
Bousquet, Marc. “The Waste Product of Graduate Education: Toward a Dictatorship of the Flexible.” Social Text, vol. 20, no. 1, 2002, pp. 81–104.
Brandt, Deborah. The Rise of Writing: Redefining Mass Literacy. Cambridge University Press, 2015.
Broder, David. “No, Automation Isn’t Going to Make Work Disappear.” Jacobin. Mar. 28, 2022. https://jacobin.com/2022/03/automation-technology-precarity-employment-working-class-logistics
Chibber, Vivek. “Why the Working Class?” Jacobin. Mar. 13, 2016. https://jacobin.com/2016/03/working-class-capitalism-socialists-strike-power/
Cobb, Matthew. “Why Your Brain Is Not a Computer.” The Guardian. Feb. 27, 2020. https://www.theguardian.com/science/2020/feb/27/why-your-brain-is-not-a-computer-neuroscience-neural-networks-consciousness
Conniff, Richard. “What the Luddites Really Fought Against.” Smithsonian. Mar. 2011. https://www.smithsonianmag.com/history/what-the-luddites-really-fought-against-264412/
Dress, Brad. “Inside America’s Plans for an Autonomous, AI-powered Military.” The Hill. Sept. 27, 2023. https://thehill.com/policy/defense/4224631-inside-americas-plans-for-an-autonomous-ai-powered-military/
Farago, Jason. “A.I. Can Make Art That Feels Human. Whose Fault Is That?” The New York Times. Dec. 28, 2023. https://www.nytimes.com/2023/12/28/arts/design/artists-artificial-intelligence.html
Flynn, Elizabeth Gurley. “Sabotage: On the Conscious Withdrawal of Industrial Workers’ Efficiency.” Marxists.org. 1917. https://www.marxists.org/subject/women/authors/flynn/1917/sabotage.htm
Galtung, Johan. “Towards a New International Technological Order.” Alternatives: Global, Local, Political, vol. 4, no. 3, 1979, pp. 277–300.
Halliday, M. A. K. Explorations in the Functions of Language. Edward Arnold, 1973.
Jackson, Shelley. “Stitch Bitch: The Patchwork Girl.” MIT. Nov. 4, 1997. https://web.mit.edu/m-i-t/articles/jackson.htm
Jaffer, Nabeelah. “In Extremis.” Aeon. July 19, 2018. https://aeon.co/essays/loneliness-is-the-common-ground-of-terror-and-extremism
Lebowitz, Michael A. “Capital and the Production of Needs.” Science and Society, vol. 41, no. 4, 1977, pp. 430–447.
Lewis-Kraus, Gideon. “The Great A.I. Awakening.” The New York Times Magazine. Dec. 14, 2016. https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html
Luke, Allan. “Genres of Power: Literacy Education and the Production of Capital.” In Critical Literacy, Schooling, and Social Justice: The Collected Works of Allan Luke. Routledge, 2018.
Marcus, Gary. “AI Platforms like ChatGPT Are Easy to Use but Also Potentially Dangerous.” Scientific American. Dec. 19, 2022. https://www.scientificamerican.com/article/ai-platforms-like-chatgpt-are-easy-to-use-but-also-potentially-dangerous/
Maron, Marc. “Adam Conover.” WTF with Marc Maron. Aug. 10, 2023. https://www.wtfpod.com/podcast/episode-1460-adam-conover
McCaney, Kevin. “AI Lie Detectors Could Soon Police the Borders.” GovCIO Media & Research. Dec. 18, 2018. https://www.governmentciomedia.com/ai-lie-detectors-could-soon-police-borders
McHale, Ryan. Email to the Author. Feb. 5, 2024.
Mieszkowski, Jan. “Here Come the Prose Police.” The Chronicle of Higher Education. Oct. 11, 2019. https://www.chronicle.com/article/here-come-the-prose-police/
“MLA-CCCC Joint Task Force on Writing and AI Working Paper: Overview of the Issues, Statement of Principles, and Recommendations.” July 2023. https://hcommons.org/app/uploads/sites/1003160/2023/07/MLA-CCCC-Joint-Task-Force-on-Writing-and-AI-Working-Paper-1.pdf
Olsen, Lance and Trevor Dodge. Architectures of Possibility: After Innovative Writing. Guide Dog Books, 2012.
Porter, James. “Intertextuality and the Discourse Community.” Rhetoric Review, vol. 5, no. 1, 1986, pp. 34–47.
Russell, David R. “Rethinking Genre in School and Society: An Activity Theory Analysis.” Written Communication, vol. 14, no. 4, 1997, pp. 504–554.
Samuels, Robert. A Working Model for Contingent Faculty. The WAC Clearinghouse; University Press of Colorado, 2023. https://wac.colostate.edu/books/precarity/working/
Smith, Patrick. “Counter(media) Visioning and AI: Patrick Brian Smith Interviews Adam Harvey on Uses, Misuses, and the Possibility of Subversion.” Heliotrope. Sept. 14, 2022. https://www.heliotropejournal.net/helio/countermedia-visioning-and-ai
Stoval, Natasha. “Eugenics Powers IQ and AI.” Public Books. Mar. 24, 2021. https://www.publicbooks.org/eugenics-powers-iq-and-ai/
Tarnoff, Ben. “Weizenbaum’s Nightmares: How the Inventor of the First Chatbot Turned Against AI.” The Guardian. July 25, 2023. https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai
Turkle, Sherry. “Stop Googling. Let’s Talk.” The New York Times. Sept. 26, 2015. https://www.nytimes.com/2015/09/27/opinion/sunday/stop-googling-lets-talk.html
Vasudevan, Ramaa. “The Global Class War.” Catalyst, vol. 3, no. 1, 2019. https://catalyst-journal.com/2019/07/the-global-class-war
Weatherby, Leif. “ChatGPT Is an Ideology Machine.” Jacobin. Apr. 17, 2023. https://jacobin.com/2023/04/chatgpt-ai-language-models-ideology-media-production
Winner, Langdon. “Do Artifacts Have Politics?” Daedalus, vol. 109, no. 1, 1980, pp. 121–136.
Wright, Erik Olin. “Transforming Capitalism through Real Utopias.” American Sociological Review, vol. 78, no. 1, 2013, pp. 1–25.
Stephen Paur is a PhD student in Rhetoric, Composition, and the Teaching of English at the University of Arizona, where he teaches first-year writing and second-language writing. His research areas are techno-critical literacy, environmental rhetoric, ecological Marxism, and dialogic pedagogy.