Research challenges idea that bias is a technical flaw


Researchers challenge the widespread belief that AI-induced bias is a technical flaw, arguing instead that AI is deeply influenced by societal power dynamics. It learns from historical data shaped by human biases, absorbing and perpetuating discrimination in the process. This means that, rather than creating inequality, AI reproduces and reinforces it.

The research, led by Professor Tuba Bircan of the Free University of Brussels, is published in the journal Technological Forecasting and Social Change.

“Our study highlights real-world examples where AI has reinforced existing biases,” Prof. Bircan says. “One striking case is Amazon’s AI-driven hiring tool, which was found to favor male candidates, ultimately reinforcing gender disparities in the job market.

“Similarly, government AI fraud detection systems have wrongly accused families, particularly migrants, of fraud, leading to severe consequences for those affected. These cases demonstrate how AI, rather than eliminating bias, can end up amplifying discrimination when left unchecked.

“Without transparency and accountability, AI risks becoming a tool that entrenches existing social hierarchies rather than challenging them.”

AI is developed within a broader ecosystem where companies, developers, and policymakers make critical decisions about its design and use. These choices determine whether AI reduces or worsens inequality. When trained on data reflecting societal biases, AI systems replicate discrimination in high-stakes areas like hiring, policing, and welfare distribution.

Professor Bircan’s research stresses that AI governance must extend beyond tech companies and developers. Given that AI relies on user-generated data, there must be greater transparency and inclusivity in how it is designed, deployed, and regulated. Otherwise, AI will continue to deepen the digital divide and widen socio-economic disparities.

Despite the challenges, the study also offers hope. “Rather than accepting AI’s flaws as inevitable, our work advocates for proactive policies and frameworks that ensure AI serves social justice rather than undermining it. By embedding fairness and accountability into AI from the start, we can harness its potential for positive change rather than allowing it to reinforce systemic inequalities,” Prof. Bircan concludes.

More information:
Tuba Bircan et al, Unmasking inequalities of the code: Disentangling the nexus of AI and inequality, Technological Forecasting and Social Change (2024). DOI: 10.1016/j.techfore.2024.123925

Provided by
Free University of Brussels


Citation:
Unmasking inequalities in AI: Research challenges idea that bias is a technical flaw (2025, February 27)
retrieved 27 February 2025
from https://techxplore.com/news/2025-02-unmasking-inequalities-ai-idea-bias.html
