Ethics Is What Gets You Fired
Stop searching for an ethical AI CEO. The job is impossible. The problem isn't a lack of individual virtue, but a system that makes virtue a liability.
In the conversation around AI, we keep searching for a hero: the ethical leader. This is the conscientious CEO who can navigate the tension between innovation and humanity, steering their company toward profit without sacrificing the public good.
Doing right by the public looks like negligence to shareholders.
The problem isn’t that this leader is hard to find. It’s that the current system is designed to make their success impossible. The role of a truly ethical AI leader is a structural contradiction, a job whose core requirements run directly counter to the rules of survival in today’s market.
This isn’t about a lack of good intentions. It’s about a system that actively filters for, and rewards, a very different kind of leadership.
The Unwinnable Game
For an AI leader, the conflict between ethics and survival is absolute. The system doesn't just discourage ethical choices; it punishes them.
An ethical leader must absorb liability for harms. But the system demands pushing risk onto users to protect margins. Anything else is framed as a breach of fiduciary duty.
An ethical leader must guarantee human alternatives to their products. But the system demands total user lock-in to create a competitive moat and sustain valuations.
An ethical leader must accept external veto power over their launches. But the system demands a confident, simple story of inevitable growth to secure capital and satisfy the board.
An ethical leader must build systems that can be forked or governed as a public commons. But the system rewards the creation of private monopolies and punishes anything that threatens that model.
In this game, a leader who does the right thing is immediately outcompeted by a rival who doesn't.
The board calls it negligence, investors call it a lack of vision, and executive search firms call it a failed tenure.
The Litmus Test
Want to see these constraints in real time? Ask any AI CEO five direct questions:
Will you support strict liability for harms caused by your models?
Will you guarantee legally required non-AI alternatives for essential services?
Will you permit your models to be forked under open standards for the public good?
Will you ship top-tier privacy and safety protections to everyone, not just enterprise clients?
Will you accept independent, binding stop-authority over your launches?
The only possible responses are jargon, evasion, or silence. Each is a form of silence, and that silence is the system speaking its truth: under current conditions, ethics and survival are mutually exclusive.
The Real Fix
The failure isn’t a moral failing of individuals. It’s a design flaw in the system. We’ve built rules where the most profitable path is extractive, and the most ethical path leads to a dead end.
The work, then, isn’t to keep auditioning saints for an impossible role. It’s to change the rules of the game so that ethics and survival are no longer contradictions.