As artificial intelligence weaves itself deeper into the fabric of modern life, it subtly reshapes our perceptions of behavior, identity, and belonging. In spaces from education to employment, AI quietly influences who is understood, who is seen as “normal,” and who, by implication, is not. But as these systems seek to “understand” us, a critical question emerges: Whose understanding do they advance? And what happens when AI’s lens—so often designed around neurotypical norms—encounters those whose cognitive experiences diverge from conventional standards?
For neurodivergent individuals, including those with ADHD, autism, or dyslexia, AI’s presumed inclusivity can feel oddly confining. Despite its intent to “personalize,” AI risks reducing complex human diversity to simplified data points for ease of categorization. I see this as a call for a more transformative framework: cognitive justice. But what would it mean for AI to genuinely honor neurodivergent ways of being as equally valid expressions of humanity? Can technology, grounded in averages and statistical norms, truly respect the full spectrum of cognitive diversity?
A critical ethical issue at the intersection of AI and neurodiversity is soft diagnosis—when algorithms infer neurodivergent traits, such as autism or ADHD, from observed behaviors without clinical context or user consent.
Unlike formal diagnoses, which require comprehensive evaluation by a licensed clinician, soft diagnosis stems from algorithms trained to detect deviations from neurotypical patterns. An educational app, for instance, might label a student’s behavior as “ADHD-like” based on their interactions with the software. The app may then adjust its interface without informing the student or their guardians. Or consider a hiring algorithm that deprioritizes candidates who display “non-normative” communication styles, quietly categorizing them as “unsuitable.”
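To make the mechanism concrete, here is a deliberately simplified, purely hypothetical sketch of how such a pipeline might work. The class names, metrics, and thresholds are invented for illustration; this is not a description of any real product.

```python
from dataclasses import dataclass, field


@dataclass
class InteractionLog:
    """Behavioral signals an educational app might passively collect."""
    task_switches_per_hour: float
    mean_focus_seconds: float


@dataclass
class StudentProfile:
    student_id: str
    inferred_labels: set = field(default_factory=set)  # never surfaced to the student


def soft_diagnose(profile: StudentProfile, log: InteractionLog) -> None:
    """Attach an 'ADHD-like' tag from crude thresholds on observed behavior.

    Note what is absent: clinical context, a consent prompt, and any way for the
    student or their guardians to see, contest, or remove the label.
    """
    if log.task_switches_per_hour > 20 and log.mean_focus_seconds < 90:
        profile.inferred_labels.add("ADHD-like")


def adapt_interface(profile: StudentProfile) -> dict:
    """Silently reshape the experience around whatever label was inferred."""
    if "ADHD-like" in profile.inferred_labels:
        return {"session_length_minutes": 10, "gamified_prompts": True}
    return {"session_length_minutes": 25, "gamified_prompts": False}


# The student only ever sees a different interface; the label itself stays hidden.
student = StudentProfile(student_id="s-001")
soft_diagnose(student, InteractionLog(task_switches_per_hour=27, mean_focus_seconds=45))
print(adapt_interface(student))  # {'session_length_minutes': 10, 'gamified_prompts': True}
```

Even in this toy version, the ethical problem is visible in the structure itself: the label is written to a profile the student never sees, and everything downstream changes without anyone being asked.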
These scenarios raise urgent ethical questions around consent and autonomy. If a system categorizes individuals based on behaviors they’re unaware of, what happens to their agency? For neurodivergent individuals navigating the complexities of masking (concealing traits to align with social expectations) or unmasking (revealing these traits), soft diagnosis can intensify pressures to conform.
When AI rewards “neurotypical” behaviors and subtly penalizes deviations, it risks reinforcing a quiet message: that only certain ways of being are acceptable.
Surveillance, Reductionism, and Control
Soft diagnosis can seem like a personalized feature, but it often amounts to a form of surveillance. When AI categorizes behaviors without explicit user consent, it surveils under the guise of support. In education and hiring, algorithms routinely monitor behaviors to adapt settings or adjust assessments. Yet, who defines what is “appropriate”? By nudging neurodivergent users toward neurotypical norms, these systems may end up prioritizing conformity over authentic self-expression.
For neurodivergent people accustomed to masking as a survival strategy, this pressure can be particularly disempowering. The act of masking is complex and often exhausting, a response to a world that may not understand or accommodate neurodivergent differences. When AI interprets masked behavior as “normal,” it reinforces an expectation that neurodivergent individuals must continue to suppress parts of themselves to “fit in.” Are these systems supportive, or are they subtle enforcers of a narrow standard of acceptability?
The Perils of Reductionism
The reductionist nature of soft diagnosis is equally troubling. When AI labels certain behaviors as “autistic” or “ADHD-like,” it risks flattening complex, dynamic identities into predefined categories.
For an individual newly coming to terms with a neurodivergent identity, an automated tag assigned by an algorithm cannot capture the nuances of their experience.
This reductionism has a parallel in Applied Behavior Analysis (ABA), a controversial practice that many in the autism community critique for focusing on conformity over self-acceptance. When AI systems adopt a similar logic—adjusting responses to encourage “correct” behaviors—they may inadvertently prioritize compliance over authenticity. For neurodivergent individuals, this can feel dehumanizing, reducing lived experience to traits that the system can recognize and respond to.
The Commodification of Neurodivergent Traits
In a world driven by data, neurodivergent traits themselves have become commodities. Social platforms and algorithms frequently categorize users based on behavioral markers such as “attention to detail” or “creativity,” repurposing these traits for targeted advertising. For neurodivergent individuals still grappling with self-acceptance, seeing their cognitive differences commodified may feel invasive and objectifying.
But what does it mean to monetize someone’s cognitive identity without their informed consent? When neurodivergent traits are reduced to data points for profit, we must ask who benefits and at what cost to personal autonomy. AI’s commodification of cognitive diversity raises fundamental questions about ownership and exploitation. Who, ultimately, has the right to profit from one’s identity?
Bias and Exclusion in High-Stakes Contexts
The implications of soft diagnosis become even more severe in high-stakes areas like hiring, insurance, and immigration. Algorithms that infer neurodivergent traits can subtly penalize individuals, presenting decisions as neutral or data-driven. An insurance algorithm might flag a neurodivergent applicant as “high-risk,” or a hiring system might automatically deprioritize candidates based on non-normative communication styles.
Such biases are rarely neutral, despite AI’s claims to objectivity. Rather, they encode ableist assumptions within the systems that shape our access to crucial resources and opportunities. For neurodivergent individuals navigating opaque systems that quietly label them as “deviant,” the lack of transparency denies them recourse, reinforcing structural exclusions under the guise of neutrality.
Principles for Neurodivergent Autonomy in AI
If AI is to truly honor neurodivergent individuals, it must go beyond superficial inclusivity toward ethical principles that uphold autonomy, transparency, and self-determination. Neurodiverse AI Ethics suggests four foundational principles, though each is challenging to implement in a data-driven society:
Continuous Consent: Neurodivergent users should have the right to view, challenge, or delete AI-generated labels (a sketch of what such controls might look like follows these principles). However, ensuring meaningful consent in a world of constant data flows is a formidable task. Can consent be anything more than a checkbox if the mechanisms for real agency are not in place?
Data Sovereignty: Neurodivergent individuals should have control over how they’re represented digitally, with the right to delete or correct misrepresentative labels. But as data flows seamlessly between platforms, can individuals realistically hope to reclaim control once categorized?
Ending Commodification: Neurodivergent traits should not be monetized without informed consent. Yet in an economy where data is currency, can we expect companies to prioritize ethics over profit? This principle raises questions about how, or if, a more respectful approach to data is possible within current commercial frameworks.
Abolition of Algorithmic Gatekeeping: AI systems should avoid exclusion based on neurotypical standards of “fit” or “suitability.” However, redefining suitability in ways that embrace cognitive diversity would challenge not only AI design but deeply embedded societal values about competence and worth.
These principles demand a rethinking of AI’s role and responsibilities—not just technical fixes but an ethical realignment that prioritizes respect over control.
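To suggest that the first three principles are at least expressible in software, here is a minimal, purely hypothetical sketch of a “label registry” that a person could inspect and control. The class names, fields, and methods are my own invention, not a description of any existing system.

```python
from dataclasses import dataclass, field


@dataclass
class InferredLabel:
    name: str                 # e.g. "ADHD-like"
    source: str               # which model or feature pipeline produced it
    consented: bool = False   # no downstream use until the person explicitly opts in
    contested: bool = False   # the person disputes the label's accuracy


@dataclass
class LabelRegistry:
    """A person-facing ledger of every label an AI system has attached to them."""
    labels: dict = field(default_factory=dict)

    def add(self, label: InferredLabel) -> None:
        self.labels[label.name] = label

    def view(self) -> list:
        """Continuous consent begins with visibility: the person can always see what exists."""
        return list(self.labels.values())

    def contest(self, name: str) -> None:
        """Challenge a label; downstream systems should treat contested labels as unreliable."""
        if name in self.labels:
            self.labels[name].contested = True

    def delete(self, name: str) -> None:
        """Data sovereignty: the person, not the platform, decides whether a label persists."""
        self.labels.pop(name, None)

    def shareable(self) -> list:
        """Against commodification: only consented, uncontested labels may leave the registry."""
        return [l.name for l in self.labels.values() if l.consented and not l.contested]


registry = LabelRegistry()
registry.add(InferredLabel(name="ADHD-like", source="engagement_model_v2"))
registry.contest("ADHD-like")
print(registry.shareable())  # [] -- nothing flows downstream without consent
```

The fourth principle resists this kind of sketch: abolishing algorithmic gatekeeping is not an interface to be designed but a question of which systems we choose to build at all.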
Towards a Framework of Cognitive Justice
At the heart of Neurodiverse AI Ethics lies the concept of cognitive justice—an ethic that values neurodivergent perspectives not as deviations to manage, but as integral to human diversity. Cognitive justice calls for an AI that does not simply accommodate differences but respects them as fundamental to our shared humanity.
Yet achieving cognitive justice within AI poses profound challenges. AI, rooted in statistical norms, is ill-suited to capture the full breadth of human experience. This limitation is not merely technical; it is philosophical. Can a system designed to standardize ever truly honor the richness of diverse identities? In a field so accustomed to seeing patterns, can AI learn to let diversity exist without attempting to classify, label, or control?
An Invitation to Question
For those interested in exploring these themes further, I've written a range of articles that delve into the ethical implications of digital identity and neurodiversity:
"What is Autistic Hazing?": Explores societal pressures on neurodivergent individuals to conform.
"Dataism and Its Discontents": Critiques the commodification of identity in data economies.
"Another Day in Hell": Reflects on the erosion of autonomy in an age of pervasive digital surveillance.
All of this is less a set of definitive answers than an invitation to question the ethics of how we “know” and “understand” through AI. As we confront these questions, perhaps AI’s greatest challenge lies not in managing diversity but in learning to let it exist freely—in all its depth, variability, and beauty.