Racism as a Business Model
How Denial Became a Revenue Stream—and What It Will Take to Break the Machine
Racism isn’t just a misunderstanding or a moral failing, and it isn’t simply a relic we will eventually outgrow. Again and again, it operates as a business model, one built to profit from exclusion, extraction, and denial.
This logic isn’t unique to algorithms or Silicon Valley. It’s a throughline that runs wherever risk, scarcity, and eligibility are managed for margin.
What would it take for exclusion not just to become less visible, but less valuable?
When a job application vanishes into silence, was it individual bias—or a system quietly filtering for “fit”?
Why do tenant-screening portals and utilization reviews feel so neutral, yet fall hardest on those already made precarious by race, disability, or poverty?
Is this the machinery of efficiency, or something more: a relentless search for justifiable, scalable ways to say “no,” always for a price?
If you want to see how denial itself becomes the commodity, read No Unprofitable People.
How Are These Lines Drawn?
Institutions—housing, healthcare, finance, education—routinely outsource risk and cost, usually in the name of efficiency or fairness. But who profits when denial becomes routine, when human review gives way to algorithmic or bureaucratic sorting?
Each layer of automation promises neutrality, but the map of who gets filtered out never feels random.
Does each “innovation” simply replace the last generation’s exclusion with a new proxy, or is there an opening for upstream change?
For a granular breakdown of how universality rewires incentives, see Universality Disincentivizes Surveillance.
Can Reform Ever Stop the Proxy Game?
Why does every banned variable spawn a new workaround? Redlining morphs into ZIP-risk scoring. Disability questions become utilization rates and prescription histories.
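A deliberately simplified sketch, assuming nothing about any real lender’s model (the ZIP codes, penalty values, and field names below are hypothetical), can make the proxy mechanic concrete: the scoring rule never reads the banned variable, yet its decisions track it, because the “neutral” input it does read was shaped by the same history.

```python
# Hypothetical illustration of proxy scoring: the decision rule never uses the
# protected attribute, but a correlated input reproduces the same exclusions.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    zip_code: str          # placeholder "neutral" input, correlated with history
    protected_class: str   # the variable the rule is forbidden to use

# Hypothetical penalty table: values were "calibrated" on outcomes that already
# reflect decades of unequal treatment, so the map does the excluding.
ZIP_RISK_PENALTY = {
    "00001": 40,   # placeholder for a historically disfavored tract
    "00002": 5,    # placeholder for a historically favored tract
}

def score(applicant: Applicant, base: int = 100) -> int:
    """Compute a score from 'neutral' inputs only; protected_class is never read."""
    return base - ZIP_RISK_PENALTY.get(applicant.zip_code, 20)

def decide(applicant: Applicant, cutoff: int = 70) -> str:
    """Approve or deny purely on the proxy-driven score."""
    return "approve" if score(applicant) >= cutoff else "deny"

if __name__ == "__main__":
    applicants = [
        Applicant("A", "00001", "group_x"),
        Applicant("B", "00002", "group_y"),
    ]
    for a in applicants:
        # The printout shows outcomes sorting along the proxy, not the banned field.
        print(a.name, decide(a), score(a))
```

The point of the toy example is not the arithmetic but the audit trail: strike the banned column and the pattern survives, because the workaround was never in the column; it was in the map.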
Is this just a policy arms race, or does it expose a deeper truth about which incentives run through every contract and product?
What would it take for reform to actually stop, rather than rebrand, this cycle?
My critique of engineered proxies is in Emergence Is an Excuse: Toward a Forensic Ethics of System Design.
It’s striking how quickly critique is absorbed. DEI trainings, audits, and “equity” dashboards become recurring budget lines.
Does this reflect organizational learning, or a system that metabolizes resistance as operational expense?
If every critique spawns a new industry—consultants, audit firms, policy reviews—does anything upstream really shift? Or does critique just reprice the same gate?
Who Decides Who Gets to Belong?
Paywalls, “memberships,” and eligibility screens are often justified as survival, but who quietly disappears when access is rationed by speed, money, or compliance?
When platforms and services demand relentless engagement, who is being left behind, and by what logic?
If slowness, absence, refusal, or care are penalized, what does that reveal about the real terms of belonging?
See the logic of engineered disposability and attention economies in Accept All / Reject All.
Can the Incentives Shift?
Maybe the most important question is not whether anti-racist efforts are sincere, but whether they make exclusion costly.
Does the premium on denial finally disappear? Or does the machine just get new branding and subtler filters?
What would it take to abolish pay-per-denial contracts, or make universal provision the norm—housing, healthcare, education as rights, not rationed perks?
How would real transparency and veto power for those affected actually work, not as box-ticking, but as genuine, contestable governance?
For a design blueprint of how universality flips the incentive, see Universality Disincentivizes Surveillance.
What Are We Missing?
When new policies or platforms launch, what changes if we ask:
Who profits every time someone is denied?
Who sets the scoring rules?
Can people bypass the gate through a universal or cooperative alternative?
What opens up when the burden of proof—and the financial risk—shifts back to the system, not the individual trying to get through?
For a systemic history of how profit logic treats life as waste, see Edge-Case Medicine: How Profit Logic Treats Life as Waste.
What would it take for exclusion not just to become less visible, but less valuable?
Who gets to decide when the machine has truly changed, and when it’s just rebranded its output?
Maybe abolition’s next horizon isn’t just the morality of inclusion, but the economics of refusal.
The Curious Ethics of Soft Diagnosis
As artificial intelligence weaves itself deeper into the fabric of modern life, it subtly reshapes our perceptions of behavior, identity, and belonging. In spaces from education to employment, AI quietly influences who is understood, who is seen as “normal,” and who, by implication, is not. But as these systems seek to “understand” us, a critical question emerges: Whose understanding do they advance? And what happens when AI’s lens—so often designed around neurotypical norms—encounters those whose cognitive experiences diverge from conventional standards?