The Cowardice of Inference
Predictive modeling is always an exercise of political power.
“The three gibbering, fumbling creatures, with their enlarged heads and wasted bodies, were contemplating the future.”
“The Minority Report” by Philip K. Dick
People who build predictive systems often describe their work as analysis: discovering patterns, improving accuracy, reducing error. The language suggests distance from the consequences—as though the model is simply revealing how the world works.
But predictive systems don’t just find patterns. They establish them.1
And that is where the responsibility lies.
When a model denies someone a loan, it’s not uncovering a natural truth about risk. It’s enforcing a rule that treats certain people as risky.
When a fraud detector flags someone, it’s not identifying an objective category. It’s operationalizing institutional judgments about which behaviors deserve scrutiny.
When an engagement model shapes what people see, it’s not exposing their preferences. It’s steering their attention.
Every model you ship draws boundaries: who gets access, who gets delayed; who is trusted, who is watched; whose errors we tolerate, whose we punish.
Those choices are not technical. They are decisions about how people will be governed.
The Comfort of Indirection
Most of the profession is organized around avoiding that fact.2 Fairness metrics, explainability tools, model cards, differential privacy—all valuable, but all operating at the same safe distance. They assume the categories are legitimate, the objectives are appropriate, and the model’s place in the institution is a given.
But these tools don’t exist despite the field’s need for plausible deniability. They exist because of it. The entire apparatus of responsible AI lets you feel ethical while sidestepping the foundational question: do you have the authority to govern people’s lives through automated categorization?3
They help you optimize a system without forcing you to ask whether the system should exist. They let you document choices without defending them. They allow institutions to claim oversight while maintaining that decisions are technical rather than political.
And when something goes wrong, it’s easy to fall back on the standard line: “The system decided.”
Harm becomes a technical anomaly instead of a consequence of design decisions.
Patterns Are Not Neutral
Data is not neutral history. Labels are not natural categories.
Disability studies and trans studies have extensively documented how algorithmic categorization systems rely on rigid, institutional definitions that erase lived experience and enforce normative assumptions about bodies and identities.4
Features reflect institutional priorities.5 Objective functions encode value judgments.
The model is not discovering what the world “really is.” It is producing a particular vision of how the world should be organized.6
Even deciding which errors matter more, false positives or false negatives, is a political choice: it determines whose harm the system is built to accept.7
These decisions require justification, not just documentation.
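To make that trade-off concrete, here is a minimal sketch using synthetic scores and invented labels (nothing here corresponds to a real lending model or dataset). Moving the cutoff does not make the model more "correct"; it redistributes error between people who are wrongly flagged and cases that are wrongly cleared.

```python
# Illustrative only: synthetic scores and a hypothetical "flag above threshold" rule.
# The point is that the threshold does not find the "right" answer; it allocates error.
import numpy as np

rng = np.random.default_rng(0)

# Invented ground-truth outcomes (1 = adverse outcome) and risk scores for
# 1,000 hypothetical applicants. In a real system neither is neutral: both
# come from institutional recording practices.
y_true = rng.integers(0, 2, size=1_000)
scores = np.clip(0.6 * y_true + rng.normal(0.3, 0.2, size=1_000), 0, 1)

def error_profile(threshold: float) -> tuple[float, float]:
    """Return (false positive rate, false negative rate) at a given cutoff."""
    flagged = scores >= threshold
    fpr = float(np.mean(flagged[y_true == 0]))   # people wrongly treated as risky
    fnr = float(np.mean(~flagged[y_true == 1]))  # risk the institution absorbs
    return fpr, fnr

for t in (0.3, 0.5, 0.7):
    fpr, fnr = error_profile(t)
    print(f"threshold={t:.1f}  wrongly flagged={fpr:.2f}  wrongly cleared={fnr:.2f}")
```

Nothing in the data selects the threshold. Someone chooses it, and the choice determines who absorbs the mistakes.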
Why Accountability Feels Uncomfortable
Imagine explaining to someone denied a loan why this threshold is legitimate.
Or defending to someone flagged as fraudulent why these features capture wrongdoing.
Or telling someone whose content was removed why that boundary is appropriate for speech.
Not with “the model said so” or “the data demanded it,” but with reasons you are willing to stand behind.
If that feels uncomfortable, it’s because the system was never designed for you to occupy that role. Technical vocabulary allows you to stay adjacent to responsibility while avoiding its weight.
But predictive systems are now deeply embedded in decisions that shape access, opportunity, and dignity. If you build these systems, you are participating in governance—whether you name it or not.
The Path That Responsibility Requires
A responsible practice would begin with acknowledging, plainly, that:
Modeling is rule-making.
Feature selection and labeling are political acts.
Thresholds and objectives embed values.
Deployment choices determine who will bear the burden of error.
And acting on that acknowledgment would require:
making the system’s value judgments explicit;
creating real avenues for people to challenge how they’ve been categorized;
accepting responsibility for harms rather than attributing them to “complexity”;
treating model behavior as something institutions are accountable for, not something they can hide behind.
This isn’t a call to abandon predictive systems. Many can reduce error or mitigate bias when carefully designed. But they can only be legitimate if the humans behind them are willing to stand behind the choices they encode.
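One way to act on the first acknowledgment is simply to write the value judgments down where they can be read and contested. The sketch below is purely illustrative (every name and field is hypothetical, not a standard from any library or framework): it treats the threshold, the preferred error, and the appeal route as declared policy with a named owner, rather than as incidental model configuration.

```python
# Purely illustrative; every field and value here is hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionPolicy:
    """Records the normative choices a deployed model encodes."""
    decision: str           # what the model is being used to decide
    threshold: float        # the cutoff someone chose, not discovered
    preferred_error: str    # which mistake the institution has decided to tolerate
    rationale: str          # why that trade-off is claimed to be justified
    accountable_owner: str  # a person or body, not "the algorithm"
    appeal_channel: str     # how an affected person can contest the outcome

loan_policy = DecisionPolicy(
    decision="small-business loan approval",
    threshold=0.62,
    preferred_error="false denials over false approvals",
    rationale="board chose to cap portfolio risk, accepting more wrongful denials",
    accountable_owner="credit policy committee (example)",
    appeal_channel="human re-review within 10 business days of a written request",
)
print(loan_policy.preferred_error)
```

The structure is trivial on purpose. Its only function is to make each choice visible to, and challengeable by, people outside the modeling team.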
Beyond Deservingness
There’s a peculiar contradiction at the heart of modern public policy: Propose universal provision—unconditional meals, healthcare, housing, or cash—and you’re told it’s utopian, “unrealistic,” unaffordable. Yet the very same systems that balk at universality are perfectly willing to spend billions each year on eligibility audits, means-testing software, and bureaucratic hurdles designed to keep resources scarce. The logic animating these choices is not a rational allocation of limited means, but a deep-rooted fear: that someone, somewhere, might get help they didn’t “deserve.” Predictive eligibility systems are that fear made operational: models built to sort, at scale, the deserving from the undeserving.
What This Costs
Accepting this responsibility means giving up something the field depends on: the professional claim that technical training grants you the authority to make these decisions without political justification.
It means your status as an expert doesn’t protect you from having to defend, in public, why this categorization serves justice rather than institutional convenience.
It means you can’t work on “interesting problems” without first establishing that you have legitimate authority to reshape people’s lives through those problems.
It means “data scientist” stops being a job title that lets you govern without being named as a governor.
The field won’t accept this easily. The entire professional structure is built on the premise that you’re discovering rather than deciding, analyzing rather than ruling. Your employment, your authority, your claim to neutrality all depend on maintaining that fiction.
The Real Question
Predictive systems are political systems.
The only question is whether the people building them will accept that—and take responsibility for their decisions—or continue letting the machinery absorb the blame.
Because the danger isn’t automation.
The danger is decisions that govern people’s lives with no one willing to answer for them.
1. On how prediction actively organizes social behavior and expectations, see Jenna Burrell & Marion Fourcade, “The Society of Algorithms,” Annual Review of Sociology (2021), which describes prediction as a form of anticipatory governance rather than neutral inference.
2. Ben Green’s 2021 essay “Data Science as Political Action” argues that practitioners must recognize themselves as political actors engaged in normative constructions of society, rather than as neutral researchers.
3. On how AI ethics frameworks function as legitimation rather than accountability mechanisms, see Brent Mittelstadt, “Principles Alone Cannot Guarantee Ethical AI,” Nature Machine Intelligence (2019); and Jacob Metcalf et al., “Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics,” Social Research (2019).
4. Jackie Leach Scully & Gemma van Toorn, “Automating Misrecognition: The Case of Disability,” Big Data & Society (2025), examine how algorithmic categorization systems fail to recognize the diversity and context-dependency of disability experience.
5. Os Keyes, “The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition,” Proceedings of the ACM on Human-Computer Interaction (2018), documents how gender recognition systems encode cisnormative assumptions about gender as stable, binary, and appearance-based, treating technical feature selection as a political act that erases trans and non-binary existence.
6. On how ML systems instantiate normative worldviews through seemingly technical design decisions, see Madeleine Clare Elish & danah boyd, “Situating Methods in the Magic of Big Data and AI,” Communication Monographs (2018), which examines how data practices encode institutional perspectives into system outputs.
7. Jason J.G. White, “Fairness of AI for People with Disabilities: Problem Analysis and Interdisciplinary Collaboration,” ACM SIGACCESS (2023), argues that error trade-offs in algorithmic systems disproportionately disadvantage disabled people and cannot be resolved through simple harm-reduction frameworks.