
Bias and Fairness in Artificial Intelligence

Artificial Intelligence (AI) is at the forefront of contemporary technology, shaping decision-making in major industries such as healthcare, criminal justice, finance, and hiring. Its mass deployment has raised serious ethical concerns, particularly around bias and fairness. Biased AI systems can perpetuate and even amplify social disparities, so identifying the sources of bias and investigating ways to build algorithmic fairness is imperative.


Understanding Algorithmic Bias


Bias in artificial intelligence systems occurs when algorithms produce systematically unjust outcomes for particular groups. Training data is one of the primary sources of such bias: historical data can encode real-world prejudices, which are then absorbed into the model's predictions (Ferrara, 2024). Facial recognition software, for example, has been shown to have much higher error rates for women and people with darker skin tones as a result of skewed datasets (Buolamwini & Gebru, 2018).


Bias may also be the result of algorithmic design. Developers often optimize models for aggregate performance metrics such as accuracy without considering how those metrics differ across demographic groups. Conscious and unconscious human biases can also shape decisions made during dataset curation and model selection (Fred, 2025).


Types of AI Fairness


AI fairness has no single universal definition, but most formal notions fall into three measurable categories:


  1. Demographic parity: Equal selection rates across groups.

  2. Equalized odds: Equal true positive and false positive rates across groups.

  3. Individual fairness: Similar individuals should be treated similarly.


These concepts can conflict with one another. For instance, enforcing demographic parity can violate individual fairness by treating dissimilar individuals identically (Fred, 2025).
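
To make these definitions concrete, the short Python sketch below computes a demographic parity gap and an equalized odds gap for a toy set of hiring decisions. The predictions and group labels are invented purely for illustration; in this particular example the two groups have equal selection rates but unequal error rates, echoing the tension described above.

```python
import numpy as np

# Toy example: binary hiring decisions (y_pred), true suitability (y_true),
# and a sensitive attribute (group). All values are invented for illustration.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def rates_by_group(y_true, y_pred, group):
    """Selection rate, true positive rate, and false positive rate per group."""
    out = {}
    for g in np.unique(group):
        m = group == g
        out[g] = {
            "selection": y_pred[m].mean(),
            "tpr": y_pred[m & (y_true == 1)].mean(),
            "fpr": y_pred[m & (y_true == 0)].mean(),
        }
    return out

rates = rates_by_group(y_true, y_pred, group)

# Demographic parity gap: difference in selection rates between groups.
dp_gap = abs(rates["A"]["selection"] - rates["B"]["selection"])
# Equalized odds gap: worst-case difference in TPR or FPR between groups.
eo_gap = max(abs(rates["A"]["tpr"] - rates["B"]["tpr"]),
             abs(rates["A"]["fpr"] - rates["B"]["fpr"]))

print(f"Demographic parity gap: {dp_gap:.2f}")  # 0.00 in this toy data
print(f"Equalized odds gap:     {eo_gap:.2f}")  # 0.33 in this toy data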


Real-World Implications


A few prominent cases demonstrate the real-world risks of unfair AI:


In criminal justice, the COMPAS risk-assessment tool disproportionately labeled Black defendants as high-risk compared with similarly situated white defendants (Angwin et al., 2016).


In healthcare, a widely used patient-prioritization algorithm assigned lower priority to Black patients than to equally sick white patients because it relied on historical spending as a proxy for care needs (Obermeyer et al., 2019).


In hiring, resume-screening systems trained on historical hiring data have exhibited gender bias, penalizing women candidates, especially for technical roles (Dastin, 2018).


Such evidence underscores the need for rigorous monitoring and mitigation of AI bias.


Avoiding Bias


Avoiding bias demands intervention at multiple points along the AI pipeline:


1. Pre-processing: Adjusting training data to reduce built-in biases, e.g., re-sampling or augmenting underrepresented groups (a short sketch follows this list).


2. In-processing: Direct incorporation of fairness constraints into the training objectives of the model.


3. Post-processing: Adjusting model outputs to improve equity, e.g., balancing error rates across groups.
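
As a minimal illustration of the pre-processing stage, the sketch below oversamples an underrepresented group so that both groups contribute equally to training. The dataset, group labels, and group sizes are hypothetical; real mitigation work would use a validated re-sampling or re-weighting strategy rather than this bare-bones version.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: features X, labels y, and a sensitive attribute.
# Group "B" is underrepresented (20 of 120 rows); values are illustrative only.
X = rng.normal(size=(120, 4))
y = rng.integers(0, 2, size=120)
group = np.array(["A"] * 100 + ["B"] * 20)

def oversample_minority(X, y, group, minority="B"):
    """Duplicate randomly chosen rows from the minority group until group sizes match."""
    minority_idx = np.where(group == minority)[0]
    majority_count = int((group != minority).sum())
    extra = rng.choice(minority_idx, size=majority_count - len(minority_idx), replace=True)
    keep = np.concatenate([np.arange(len(y)), extra])
    return X[keep], y[keep], group[keep]

X_bal, y_bal, group_bal = oversample_minority(X, y, group)
print({g: int((group_bal == g).sum()) for g in np.unique(group_bal)})  # {'A': 100, 'B': 100}
```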


Open-source toolkits, including Fairlearn and IBM's AI Fairness 360, assist practitioners in identifying and mitigating bias (Bellamy et al., 2019).
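For instance, Fairlearn exposes group fairness metrics directly. The snippet below, based on Fairlearn's documented metrics module and using made-up predictions and group labels, reports the selection rate per group and the demographic parity difference; treat it as a sketch of the API shape rather than a complete audit.

```python
import numpy as np
from fairlearn.metrics import MetricFrame, demographic_parity_difference, selection_rate

# Made-up predictions and a made-up sensitive attribute, purely to show usage.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
sex = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])

# Selection rate broken down by group, plus the overall demographic parity gap.
frame = MetricFrame(metrics=selection_rate, y_true=y_true, y_pred=y_pred,
                    sensitive_features=sex)
print(frame.by_group)
print(demographic_parity_difference(y_true, y_pred, sensitive_features=sex))
```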


Conclusion


While technical solutions are needed, they are insufficient by themselves. Ensuring AI fairness also involves transparency, public discourse, and strong legal foundations. Policies such as the EU's AI Act seek to establish standards for the ethical use of AI.


Since AI mirrors larger societal patterns, fairness calls for an integrated approach that combines technical, ethical, and policy perspectives. Through collaborative work and careful design, it is feasible to develop AI systems that are effective, fair, and just.


References


Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica.


Bellamy, R. K. E., et al. (2019). AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. IBM Journal of Research and Development, 63(4/5), 4:1–4:15.


Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability, and Transparency, 77–91.


Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.


Ferrara, G. (2024). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. Journal of Intelligence and Robotics, 6(1), 1–15.


Fred, N. (2025). Bias and fairness in AI algorithms. ResearchGate.


Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.
