Could AI Explainability Reinforce Existing Power Structures?
I argue that by creating bureaucratic hurdles and reducing ethics to metrics, transparency becomes a tool for maintaining corporate power while diverting attention from the deeper questions of control and governance.
Transparency in artificial intelligence (AI) is often heralded as the key to ensuring fairness, trust, and accountability. The premise is that making AI systems more explainable will resolve their ethical problems. However, this perspective is dangerously misleading.
The rise of transparency requirements across various sectors, including healthcare, education, and public services, has led to a surge in algorithmic audits, explainability tools, and compliance reports. While these measures are presented as necessary for ethical AI, they have spawned an industry of consultants and technocrats who profit from managing compliance—without addressing the core ethical issues.
Public institutions, already stretched by budget constraints, must divert resources from essential services to meet transparency standards, often relying on expensive external firms. Yet these measures seldom address the most fundamental ethical questions: Who controls the data? Who profits from its collection? Who decides how AI systems are deployed? These questions are buried under layers of bureaucratic procedure, making transparency an end in itself.
A clear example is the 2016 data-sharing partnership between Google’s DeepMind and the Royal Free London NHS Foundation Trust. Although the collaboration was accompanied by assurances of openness, it involved the transfer of vast amounts of sensitive patient data to a Google subsidiary. The transparency measures focused on how the data was processed, while ignoring the more important questions of who controlled the transfer and who stood to benefit from it. The appearance of accountability disguised the real ethical issue: ownership and control of public health data by a private corporation.
In practice, transparency often serves to maintain the status quo. It creates elaborate oversight mechanisms that simulate ethical responsibility while leaving the actual dynamics of power and control unchallenged, and it obscures the deeper structures driving inequality. Transparency thus becomes a tool for managing, rather than addressing, systemic harm.
The Problem with Metrics
One of the most concerning aspects of the transparency movement is the reduction of ethics to quantifiable metrics. Corporations now emphasize how many audits they conduct or how many compliance reports they file. This emphasis on metrics turns ethical oversight into a superficial exercise, diverting attention from the real human impacts of AI systems.
For instance, AI systems used in hiring are routinely audited to confirm they are free of bias. However, these audits tend to focus on surface-level adjustments, such as removing specific biased variables, without questioning whether these systems should be making such critical decisions in the first place. In the case of Amazon’s experimental AI recruiting tool, which was abandoned after it was found to be biased against women, much of the ensuing debate focused on how to make such tools fairer and more transparent, rather than on whether automated systems should be making hiring decisions at all.
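To see why removing a biased variable is such a shallow fix, consider a minimal sketch (synthetic data, hypothetical feature names, scikit-learn assumed). Even when the protected attribute is excluded from training, a correlated proxy feature lets the model reproduce much of the original disparity, which is exactly the kind of outcome a checkbox audit can miss.

```python
# Illustrative sketch only: why dropping a protected attribute rarely
# "debiases" a hiring model. Data and feature names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

gender = rng.integers(0, 2, n)              # 1 = woman, 0 = man (protected attribute)
# A seemingly neutral feature that correlates with gender in practice,
# e.g. membership of a women's professional society listed on a resume.
proxy = (gender == 1) & (rng.random(n) < 0.7)
experience = rng.normal(5, 2, n)

# Biased historical labels: past hiring favoured men at equal experience.
hired = (experience + 1.5 * (gender == 0) + rng.normal(0, 1, n)) > 5.5

# The "audit fix": exclude the gender column and retrain.
X = np.column_stack([experience, proxy])    # gender itself is not a feature
model = LogisticRegression().fit(X, hired)
preds = model.predict(X)

# The disparity survives, because the proxy lets the model reconstruct gender.
print(f"selection rate, women: {preds[gender == 1].mean():.2f}")
print(f"selection rate, men:   {preds[gender == 0].mean():.2f}")
```

The point of the sketch is not the specific numbers but the structure of the failure: an audit that only checks which columns are present can certify the model as “bias-free” while the outcome gap persists.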
Similarly, Facebook’s 2019 civil rights audit revealed biases in its content moderation algorithms but failed to address whether algorithms should wield such immense power over public discourse in the first place. By reducing ethics to compliance checklists, organizations avoid confronting the deeper questions about AI’s role in perpetuating social and economic inequalities.
This reduction of ethics to metrics allows corporations to appear ethical without confronting the exploitation their systems perpetuate. In these cases, transparency distracts from the concrete harms AI imposes on marginalized groups, producing a superficial form of accountability that does little to address the structural inequalities these systems often reinforce.
The Costs of Compliance: Burdening the Vulnerable
The financial and social costs of transparency-driven compliance are not borne by the tech giants creating AI systems, but by public institutions, workers, and marginalized communities. Hospitals, schools, and social services must divert funds from essential programs to comply with transparency mandates, reducing access to critical care and support for those most in need.
In the private sector, transparency sometimes justifies intensified surveillance and algorithmic management of workers. AI systems that monitor worker productivity in minute detail are often marketed as “transparent” and “accountable.” In practice, these systems enforce dehumanizing working conditions, pushing employees toward unrealistic targets and driving stress and burnout. Consumers also bear the costs of compliance, as companies pass on these expenses through higher prices, while smaller competitors struggle to keep up with costly regulatory demands.
Ultimately, transparency-driven bureaucracy reinforces the dominance of large corporations while shifting the burden of compliance onto those least able to bear it: public institutions, workers, and marginalized communities.
Selective Transparency: Manipulating Openness to Conceal Power
A particularly troubling aspect of the transparency agenda is its selective nature. Corporations often disclose just enough information to appear transparent without relinquishing real control. By sharing superficial information, such as high-level descriptions of algorithmic processes, while withholding essential details, such as who owns the data or how decisions are actually made, companies manipulate public perception without addressing the underlying power dynamics.
For instance, tech companies involved in extensive data collection may disclose how they gather data but remain silent on how that data is shared with third parties or used for profit. Uber, for example, has shared information about how its AI optimizes driver routes but is less forthcoming about how its algorithm determines driver wages or deactivates drivers based on performance metrics.
This selective disclosure creates the illusion of openness while allowing corporations to maintain control over their AI systems and prevent meaningful scrutiny. By deciding what is revealed and what stays hidden, corporations evade real accountability while continuing to profit from systems that entrench inequality and exploit vulnerable populations.
Reimagining Accountability: Empowering Communities and Decentralizing Control
If transparency serves to reinforce existing power structures, what should be done to achieve genuine accountability in AI? One crucial step is shifting focus from transparency to control—specifically, who has the authority to design, implement, and govern AI systems.
Empowering communities by involving them in decision-making processes can help ensure that AI technologies serve the public interest, rather than corporate agendas. This might involve participatory design methods, open-source development, and policies that promote collective ownership and stewardship of AI systems. By decentralizing control, we can create technological systems that are responsive to the needs and values of diverse groups, rather than perpetuating a one-size-fits-all model imposed by powerful entities.
The DECODE project in Barcelona provides an example of how decentralized governance can empower communities. The project lets citizens control their own data, deciding who can access it and for what purposes, and points toward a more equitable model of data governance. It suggests that by decentralizing control over data and the systems built on it, we can develop technologies that answer to collective needs rather than reinforcing corporate power.
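As a purely illustrative sketch of what citizen-held consent could look like, the snippet below models a personal data “wallet” that records who may access which data, for what purpose, and for how long. The names and structure are hypothetical and do not describe DECODE’s actual architecture or interfaces.

```python
# Hypothetical sketch of citizen-controlled data access, in the spirit of
# projects like DECODE; not a description of DECODE's real design or API.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ConsentGrant:
    requester: str        # who may access the data (e.g. a research project)
    category: str         # which category of data the grant covers
    purpose: str          # the purpose the citizen actually agreed to
    expires: datetime     # consent is time-limited and revocable

@dataclass
class CitizenDataWallet:
    owner: str
    grants: list = field(default_factory=list)

    def grant(self, requester, category, purpose, days=30):
        self.grants.append(
            ConsentGrant(requester, category, purpose,
                         datetime.now() + timedelta(days=days)))

    def revoke(self, requester, category):
        self.grants = [g for g in self.grants
                       if not (g.requester == requester and g.category == category)]

    def may_access(self, requester, category, purpose):
        # Access requires an unexpired grant matching requester, data, and purpose.
        return any(g.requester == requester and g.category == category
                   and g.purpose == purpose and g.expires > datetime.now()
                   for g in self.grants)

wallet = CitizenDataWallet(owner="resident-042")
wallet.grant("city-noise-study", "noise-sensor readings", "urban planning research")
print(wallet.may_access("city-noise-study", "noise-sensor readings",
                        "urban planning research"))                          # True
print(wallet.may_access("ad-broker", "noise-sensor readings", "marketing"))  # False
```

The design choice worth noticing is that the permission record lives with the citizen rather than with the data collector: access is scoped to a stated purpose and expires by default, inverting the usual arrangement in which the platform decides what disclosure is appropriate.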
This approach also requires questioning whether AI is always the best solution. Rather than automatically relying on technological fixes, we should consider alternatives that prioritize human judgment and community input over algorithmic decision-making. By resisting AI’s encroachment into areas where it may cause more harm than good, we can promote fairness and equity in how technology is used.
Beyond Transparency: Building a Just and Equitable Future
Focusing solely on transparency is insufficient to address the ethical challenges posed by AI. To build systems that genuinely serve the public good, we must confront and dismantle the power structures that govern AI development and deployment. This requires rethinking how technology is governed, moving away from centralized corporate control toward models that prioritize social justice, equity, and collective well-being.
By challenging existing hierarchies and advocating for the democratization of AI, we can work toward a future where technology empowers the many, rather than reinforcing the privileges of the few. This involves not only critiquing current practices but also actively developing and supporting alternatives that align with principles of fairness and community empowerment.