Existential Risk from AI Is Ethnographic, Not Speculative
The most pressing dangers of AI are not confined to some future cataclysm; they are here now, embedded in the systems that shape our lives.
The public discourse on artificial intelligence is dominated by dystopian fears—scenarios in which superintelligent machines break free from human control, wreak havoc, and threaten our very existence. These hypothetical futures capture imaginations and headlines, but they obscure the more immediate, insidious risks that AI already poses. Because attention fixates on these dramatic, far-off threats, the real and pervasive dangers of AI—those embedded in our everyday systems, from workplaces to societal governance—go largely unaddressed. But who benefits from this lack of nuance? And why is the conversation being steered away from the harms AI is creating right now?
Tech Companies and the Obfuscation of Present Harms
The first beneficiaries of this oversimplified narrative are tech companies, the main drivers of AI's rapid development and deployment. By keeping the public focused on apocalyptic futures, these corporations distract from how they already use AI to control, monitor, and exploit workers and consumers. Consider the workplace: AI systems are increasingly used to enforce Key Performance Indicators (KPIs) and metrics that dictate productivity and efficiency. These metrics often clash with workers' well-being, ethics, and job satisfaction, yet AI-driven systems coerce workers into pursuing goals that many of them fundamentally reject. The result is a dehumanizing workplace in which employees are treated as tools to be optimized rather than as autonomous individuals with intrinsic values and rights.
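To make this mechanism concrete, here is a minimal sketch of how such an algorithmic performance system might work. Every metric name, weight, target, and cutoff below is a hypothetical assumption for illustration, not the design of any real product; the point is how naturally such a system reduces a person to a weighted sum of numbers.

```python
from dataclasses import dataclass

# Hypothetical KPI targets and weights -- illustrative assumptions only.
TARGETS = {"tasks_closed": 40, "avg_handle_seconds": 300, "idle_minutes": 45}
WEIGHTS = {"tasks_closed": 0.5, "avg_handle_seconds": 0.3, "idle_minutes": 0.2}

@dataclass
class WorkerWeek:
    """One week of a worker's labor, as the system sees it: three numbers."""
    name: str
    tasks_closed: int
    avg_handle_seconds: float
    idle_minutes: float

def kpi_score(w: WorkerWeek) -> float:
    """Collapse a week of human work into a single composite score."""
    parts = {
        # More tasks closed than the target counts as "better"...
        "tasks_closed": w.tasks_closed / TARGETS["tasks_closed"],
        # ...while faster handling and less idle time also count as "better".
        "avg_handle_seconds": TARGETS["avg_handle_seconds"] / max(w.avg_handle_seconds, 1),
        "idle_minutes": TARGETS["idle_minutes"] / max(w.idle_minutes, 1),
    }
    return sum(WEIGHTS[k] * v for k, v in parts.items())

def flag_underperformers(weeks: list[WorkerWeek], cutoff: float = 0.9) -> list[str]:
    """Anyone whose score falls below the cutoff is queued for 'coaching' -- automatically."""
    return [w.name for w in weeks if kpi_score(w) < cutoff]

if __name__ == "__main__":
    crew = [
        WorkerWeek("ana", tasks_closed=44, avg_handle_seconds=280, idle_minutes=50),
        WorkerWeek("ben", tasks_closed=31, avg_handle_seconds=360, idle_minutes=70),
    ]
    for w in crew:
        print(w.name, round(kpi_score(w), 2))
    print("flagged:", flag_underperformers(crew))
```

Notice what this loop cannot represent: whether the targets are reasonable, whether the work is meaningful, or whether the worker consents to being measured this way. Dissent is simply illegible to it.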
By focusing public attention on future risks, tech companies can position AI as an innovative force for good while sidestepping uncomfortable truths about how AI is currently amplifying unethical practices. As long as the narrative remains focused on speculative doomsday scenarios, tech companies can avoid accountability for the concrete effects their technologies have on labor practices, consumer privacy, and social inequality. This deflection helps these companies maintain power and profit while the subtle, ongoing dangers of AI remain largely invisible to the public.
Governments and the Deferral of Regulation
Governments and policymakers also benefit from the lack of nuance in AI discourse. When the public perceives AI risks as distant and hypothetical, there is less pressure to regulate its current applications. AI is already used in law enforcement, public services, and other domains where ethical concerns—surveillance, bias, the erosion of civil liberties—should be at the forefront of the conversation. Yet these concerns are overshadowed by fantastical doomsday scenarios, enabling governments to delay meaningful regulation. By keeping the focus on future risks, governments can continue expanding AI's use in ways that enhance their control over citizens without engaging in the difficult conversations about protecting civil liberties, privacy, and ethical governance today.
This convenient deferral of regulatory responsibilities allows AI's integration into public and governmental systems to proceed largely unchecked. The focus on far-off dangers ensures that the current, more immediate harms—such as the expansion of surveillance states, algorithmic bias in criminal justice, and the erosion of democratic accountability—remain out of sight and out of mind.
Investors and Capitalist Growth
Investors and the broader capitalist system are also served by this lack of nuance. AI represents a massive opportunity for profit, especially in industries like finance, healthcare, and retail, where automation can streamline processes, reduce labor costs, and optimize decision-making. A nuanced conversation that critically addresses the negative impacts of AI on labor markets, privacy, and social inequality would challenge the ethics of pursuing profit at the expense of human dignity. However, the dominant narrative of AI as a distant existential threat downplays these immediate concerns, allowing investors to continue fueling the AI boom without facing ethical scrutiny.
In this way, the simplified discourse on AI helps protect the interests of those who stand to gain the most from AI's rapid and unchecked development. Investors can pour money into AI technologies that optimize profits—whether through increased productivity or reduced labor costs—without addressing the significant social, economic, and ethical costs of doing so.
The Media’s Role in Perpetuating the Narrative
The media, too, plays a role in sustaining this oversimplified narrative. Sensational stories about AI-driven apocalypses capture more attention, and therefore more revenue, than critical examinations of how AI is already transforming society. By focusing on dramatic future scenarios, the media drives engagement while contributing to a distorted public understanding of AI's risks. This focus on speculative dangers allows the quieter, more immediate harms—labor exploitation, the erosion of privacy, algorithmic bias—to fly under the radar. As a result, the public is more fearful of distant, hypothetical threats than of the pressing dangers AI poses to workers, consumers, and citizens right now.
Capitalist Control and Technological Rationality
This lack of nuance serves to maintain the status quo, reinforcing the power of tech companies, governments, investors, and the media. These powerful entities benefit from keeping the focus on speculative, future-oriented AI risks while avoiding accountability for the current harms AI is creating. Drawing from critical theory, we can better understand this dynamic and the deeper implications of AI under capitalism.
Herbert Marcuse, in his work One-Dimensional Man, critiqued technological rationality—the idea that technology, under capitalism, is designed not to liberate but to control and suppress. AI, particularly in the workplace, exemplifies this. AI-driven KPIs force workers into dehumanizing roles, not for their benefit, but for the continuous expansion of capital. This technological rationality ensures conformity, suppresses dissent, and prioritizes efficiency over autonomy.
Michel Foucault's concepts of biopolitics and governmentality extend this analysis by showing how modern power operates through subtle mechanisms of control that shape individual behavior and identity. AI's role in monitoring workers, enforcing productivity metrics, and guiding consumer choices is biopolitical governance in practice: it makes individuals complicit in their own regulation, normalizing efficiency and optimization as core societal values.
Gilles Deleuze’s notion of control societies further extends this critique. AI systems exemplify Deleuze’s idea of diffuse, continuous power: they constantly monitor, assess, and adjust human behavior. AI quietly shapes workplace dynamics and social governance not through direct commands but through pervasive surveillance and calibration, molding human action across all domains of life.
Hegemony and the Manipulation of Discourse
Antonio Gramsci’s theory of cultural hegemony offers insight into how the AI narrative is manipulated to benefit powerful actors. Gramsci argued that dominant groups maintain control not just through coercion, but by shaping cultural narratives to present their interests as universal. The simplified narrative of AI as a distant threat is a hegemonic strategy, diverting attention from how AI currently reinforces inequality, surveillance, and labor exploitation.
Mark Fisher’s concept of capitalist realism complements this by showing how capitalism perpetuates itself by foreclosing alternative futures. The focus on AI’s speculative dangers, rather than its present harms, is a form of capitalist realism. It makes it easier to imagine dystopian futures than to envision AI being used for ethical, human-centered purposes today. This narrative preserves capitalism's control over AI, ensuring that its integration into society is framed as inevitable, rather than something that can be contested and reshaped for the public good.
Alienation and Bureaucratic Violence
Karl Marx’s analysis of alienation and the commodification of labor is central to understanding AI's role within capitalism. Marx argued that capitalism alienates workers by turning their labor into a commodity, reducing them to mere instruments of production. AI intensifies this alienation by turning workers into data points to be optimized, evaluated, and controlled, further erasing their autonomy and humanity.
David Graeber’s critique of bureaucratic violence and the absurdity of modern work helps illuminate how AI-driven systems trap workers in meaningless roles, forcing them to conform to absurd, dehumanizing standards. AI reinforces these bureaucratic logics, reducing human creativity and ethical decision-making to mere compliance with algorithmic metrics.
Shifting the Conversation
The conversation around AI needs to shift. The most pressing dangers of AI are not confined to some future cataclysm; they are here now, embedded in the systems that shape our work, our governance, and our daily lives. Tech companies, governments, investors, and the media benefit from keeping the focus on far-off risks, allowing AI’s present harms—harms deeply intertwined with capitalist structures of control and exploitation—to go unchecked.
To confront these dangers, we must reframe the AI discourse. Instead of speculating about AI’s distant risks, we need to address how AI is already deepening inequality, eroding privacy, and reinforcing exploitative labor practices. By holding the institutions that deploy AI accountable for its current impact, we can begin to mitigate these risks and push for a future where AI serves not just capitalist interests, but the well-being and dignity of all.
In sum, the hidden dangers of AI are not lurking on the horizon—they are here now, quietly shaping the systems we depend on. To challenge them, we must confront the ways in which AI is being used to deepen capitalist control and exploitation, and we must reimagine a future where AI serves human liberation rather than domination.