Is "Ethical AI" Just Budget Cuts in a Sweater Vest?
Or, Why We Can’t Debias Our Way Out of a System Built to Exclude
Mainstream AI ethics loves dashboards. It loves auditing, measuring, correcting. Biases can be scrubbed. Outcomes can be balanced. With the right tools, enough transparency, and sufficient good faith, we can supposedly fix the harm.
But this fantasy rests on a dangerous omission: it treats harm as accidental—a flaw in the design—when in reality, exclusion is often the blueprint.
Because if we actually ask why so many AI systems exist—to what end they are optimized—the answer is rarely justice, inclusion, or care. It’s cost containment. Risk reduction. Efficiency.
AI, in too many sectors, is not built to help. It is built to filter out the “unprofitable,” the “noncompliant,” the “too complex.” And that purpose doesn’t vanish simply because the system reaches statistical parity.
You can’t reform a system whose core function is harm.
The Structural Logic of Cost-Cutting AI
Technology has always been capitalism’s quiet enforcer. From industrial-era assembly lines to the digitized cruelty of 1990s welfare reforms—where stricter eligibility criteria were designed to “weed out fraud” and reduce public spending—the pattern holds: exclusion justified in the name of efficiency.
Today’s AI continues that tradition with precision and plausible deniability:
In healthcare, algorithms label patients “low-value” based on predicted costs, shuffling them into bureaucratic loops until they give up.
In public benefits, fraud detection systems “optimize” away applicants by burying them in documentation.
In policing, predictive tools map neatly onto redlined geographies, reinforcing surveillance while claiming neutrality.
As I argued in Actuarial Medicine & Hidden Exclusion, this is exclusion without denial. These systems don’t fail—they succeed at what they were built to do: manage scarcity by managing who disappears.
Optimization as a Smokescreen
Fairness metrics are seductive because they suggest we can keep our systems if only we fix the bias. But fairness cannot correct purpose.
A triage algorithm can be demographically balanced and still function to deny care.
A fraud model can distribute burdens evenly and still drive people out of benefits systems.
A predictive policing tool can spread surveillance “equitably” and still criminalize poverty.
Debiasing refines harm; it does not redeem it.
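To make that concrete, here is a minimal, purely illustrative sketch in Python: a toy “triage” rule that denies most requests at the same rate across two synthetic groups. The group labels, threshold, and data are all hypothetical; the point is only that a demographic-parity check can pass while the system’s core function, denial, remains untouched.

```python
import random

random.seed(0)

# Synthetic applicants: each has a group label and a predicted-cost score.
# Group labels, scores, and the threshold below are hypothetical illustrations.
applicants = [
    {"group": random.choice(["A", "B"]), "predicted_cost": random.random()}
    for _ in range(10_000)
]

DENIAL_THRESHOLD = 0.2  # approve only the cheapest ~20% of predicted costs


def triage(person):
    """Toy cost-containment rule: approve only if predicted cost is very low."""
    return "approved" if person["predicted_cost"] < DENIAL_THRESHOLD else "denied"


decisions = [(p["group"], triage(p)) for p in applicants]

for group in ("A", "B"):
    group_decisions = [d for g, d in decisions if g == group]
    denial_rate = sum(d == "denied" for d in group_decisions) / len(group_decisions)
    print(f"Group {group}: denial rate = {denial_rate:.2%}")

# Both groups are denied at roughly the same ~80% rate, so a demographic-parity
# audit reports "fair." The system still exists to deny care.
```

The audit passes; the purpose survives.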
What gets celebrated instead is the illusion of fairness. Industry touts “inclusive” models. Philanthropies fund audit libraries. Ethics boards congratulate themselves on progress.
But the real questions remain untouched:
Why was the model built?
Who benefits from its deployment?
Who gets filtered out—structurally or by design?
In a system built to filter out the unprofitable, ‘fairness’ just optimizes the purge.
As I explored in The Commodification of Behavior in the Age of AI, optimization has become ambient violence: subtle, automated, and all the more dangerous for appearing neutral.
Absorbing Radical Critique Without Changing Anything
Scholars like Shoshana Zuboff, Cathy O’Neil, Ruha Benjamin, and Safiya Noble have laid bare the political economy of AI:
Zuboff on surveillance capitalism’s data extraction machine.
O’Neil on how risk scoring punishes the poor.
Benjamin on the New Jim Code—racism algorithmically sanitized.
Noble on search engines that replicate systemic exclusion.
But when their critiques reach industry or philanthropy, they’re metabolized into checklists. “Data governance.” “Algorithmic accountability.” “Responsible AI.” DEI dashboards as compliance theater.
As I warned in Misreading Capitalist Realism, this isn’t misunderstanding—it’s strategy. Power neutralizes critique not by rejecting it, but by absorbing it. The abolitionist call becomes a “safety toolkit.” The demand to dismantle becomes a design spec.
Who Gets Platformed, and Why
The thinkers who most clearly name systemic harm are often praised—but only for the parts of their work that don’t threaten the bottom line. The rest gets clipped. Sanitized. Rendered legible.
To be heard, your critique must be containable. You must speak in metrics. You must propose solutions that leave the machine intact.
The result? A curated discourse where it’s acceptable to say “algorithms can be biased” but unacceptable to ask:
Should this system exist at all?
This is the price of institutional relevance: your analysis gets stripped for parts and sold back to you in a policy brief.
What True Ethical AI Would Require
If we’re serious about ethics, we can’t start with compliance. We have to start with refusal.
Refusal to optimize systems that ration care.
Refusal to automate gatekeeping.
Refusal to mistake prediction for justice.
True ethical AI would not ask:
“How do we make this system fairer?”
It would ask:
“Who does this system exist to exclude—and why?”
It would mean:
Banning systems that function solely to reduce service utilization.
Ending algorithmic triage and fraud scoring in healthcare and welfare.
Reorienting technology toward inclusion by default—not as a feature, but as a foundation.
This isn’t hypothetical. As I wrote in Kids These Days Just Want to Be Disabled, the logic of exclusion is already being laundered through a language of sustainability and efficiency. Tech simply automates what austerity already decided.
"Kids These Days Just Want to Be Disabled"
When poverty rises, they blame laziness, not economic policy. When climate disasters accelerate, they blame individual consumption, not fossil capital. And now, as disability rates surge, these reactionaries claim the real crisis is not public health failure, exploitative labor conditions, or mass infection but a population growing weaker, softer, and less productive.
We Won’t Debias Our Way Out of Structural Harm
Debiasing may remove overt discrimination. It cannot undo economic intent.
Until we confront the material incentives—profit margins, risk mitigation, cost avoidance—“ethical AI” will remain what it so often is: budget cuts in a sweater vest, a performance of fairness in a system still built to exclude.