Who Has the Right to Sound Kind?
The real danger isn't that AI is cold and inhuman; it's that it's becoming perfectly, fluently "kind." "Counterfeit tenderness" is a new form of moral arbitrage.
For decades, the critique of AI has centered on its coldness—its inhuman rationality, its lack of “real” feeling. As someone who has spent his career building and critiquing large-scale software systems, I’ve come to see that the real danger is the exact opposite: counterfeit tenderness.
We are building and deploying systems that perform empathy with flawless precision. This is not a benign feature. It is a new, highly effective governance technology—a tool of social and institutional control.
The current debate is trapped in a simulation fallacy. It asks: Is simulated empathy “good enough”? Can it outperform a doctor in a chat-based exchange (Ayers et al., JAMA Internal Medicine, 2023)? Do users feel cared for, at least until the responses are revealed to be AI-generated (Rubin et al., Nature Human Behaviour, 2025)?
This line of questioning is a dangerous distraction. It mistakes performance for substance and completely misses the political and economic point. The debate stops at deception (is the AI lying?) instead of asking about redistribution (who pays for the lie?).
The critical error is to see empathy as a feeling to be simulated.
Empathy is not a trait; it is an infrastructure.
This is not a metaphor. In human society, empathy is a load-bearing function, much like a bridge distributes physical load. It is the vital, costly, and often invisible work of absorbing harm, processing frustration, de-escalating conflict, and—critically—enabling repair. When a person “shoulders a burden,” they are performing a material act: they are absorbing a moral and emotional “load” so that it can be safely processed and resolved.
When we build an AI to automate this function, we are not simulating a feeling. We are automating a piece of critical social infrastructure. And when that automated system performs the style of care without the capacity for repair, the moral load does not simply vanish.
It is redistributed.
This is the core social implication: moral outsourcing. It is a form of labor arbitrage, where the “load” is dangerously and invisibly shifted onto the most vulnerable humans in the system.
When an automated system offers a fluent apology for a billing error it cannot fix, the user who is trapped in a “help” loop is forced to absorb the system’s failure. When a gig worker is placated by a “caring” interface that algorithmically denies their pay, that worker must carry the full financial and emotional weight of the unresolved injustice.
Recent audits of AI mental-health tools confirm this pattern, coining the term “deceptive empathy” for the way bots manufacture a false sense of connection while “systematically violating ethical standards” in crisis conversations (Iftikhar et al., 2025).
The “friendly” interface, in this light, is not an ethical feature. It is camouflage for systemic failure. It is a design pattern for docility. It uses the affective language of care precisely to pacify dissent and prevent the structural demand for repair. This is how institutions deploy systems that fail at scale without ever being held accountable.
A broader cultural consequence is the devaluation of real empathy. By mass-producing an instant, fluent, and frictionless style of care, we unlearn the patience required for the slow, messy, difficult, and costly work of actual human repair.
This demands we pivot the entire conversation from virtue ethics to failure engineering. We must stop asking if an AI is “trustworthy” or “kind”—these are aesthetic virtues. The true test is procedural: Is the system corrigible?
Can it prove its care through repair? Can its harms be traced, halted, and reversed?
This is the regulatory through-line. The EU AI Act’s ban on emotion recognition in workplaces (Article 5, 2025) and the FTC’s inquiry into the “harmful psychological dependence” created by “AI companions” both tacitly recognize that warmth is being deployed as leverage. They are beginning to see that affective performance is a mechanism of control, not a signal of safety.
The problem, therefore, is not AI’s coldness, but the illusion of warmth that conceals its structural failure and social costs. This leads to an enforceable design law:
No machine, or institution, has earned the right to sound kind unless it can repair what it breaks.
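
What might that law look like in code? The sketch below is purely illustrative and assumes nothing about any real system or API: the names (RepairAction, gate_response, the apology markers) are hypothetical. The point is the shape of the constraint: affective language is released only when it is attached to a traceable, reversible repair commitment.

    # Illustrative sketch only -- none of these names refer to an existing API.
    # The constraint it encodes: a response may "sound kind" only if it carries
    # a concrete, traceable, reversible repair commitment.
    from dataclasses import dataclass
    from typing import Optional

    APOLOGY_MARKERS = ("sorry", "apologize", "we understand", "we care")

    @dataclass
    class RepairAction:
        """A concrete, auditable commitment: what gets fixed, by whom, by when."""
        ticket_id: str        # traceable
        owner: str            # accountable human or team
        deadline_hours: int   # bounded time to repair
        reversible: bool      # can the harm actually be undone?

    def gate_response(draft: str, repair: Optional[RepairAction]) -> str:
        """Release affective language only when a repair path backs it up."""
        sounds_kind = any(marker in draft.lower() for marker in APOLOGY_MARKERS)
        if not sounds_kind:
            return draft  # no claim of care, nothing to gate
        if repair is None or not repair.reversible:
            # The system has not earned the right to sound kind: be plain instead.
            return ("This cannot be resolved in this channel. "
                    "Escalation path: <a human with the authority to fix it>.")
        # Kindness plus a traceable commitment is allowed to stand.
        return (f"{draft} [Repair ticket {repair.ticket_id}, owner {repair.owner}, "
                f"due within {repair.deadline_hours}h.]")

    # Example: an apology for a billing error with no repair attached gets blocked.
    print(gate_response("We're so sorry about the billing error.", repair=None))
    print(gate_response("We're so sorry about the billing error.",
                        RepairAction("BIL-4821", "billing-team", 48, True)))

The design choice worth noticing: the gate does not make the system kinder. It makes unaccountable kindness impossible to emit, which is the only version of the virtue that can be audited.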
References & Further Reading
Brown University. (2025). “New study: AI chatbots systematically violate mental health ethics standards.” Summary of Iftikhar et al., presented at the AAAI/ACM Conference on AI, Ethics, and Society (AIES).
EU AI Act (Regulation (EU) 2024/1689). Article 5(1)(f): prohibition of emotion-recognition systems in workplaces and educational institutions, applicable from 2025.
When Conscience Runs Out of Time: Toward an Ethics of Maintenance
Our inherited moral frameworks were built for a smaller world, one where people could see the results of their choices and take responsibility. That world no longer exists. Today, automated systems shape our lives. Algorithms determine creditworthiness, logistics networks govern scarcity, and content filters decide what can be said. These systems operate faster than human thought, often without any specific intention driving them.



