AI Bias as a Human Rights Violation: Legal Standards and Judicial Remedies in Automated Decision Systems

Authors

    Amelia Lawson, Department of Law, University of Sydney, Sydney, Australia
    Daniel Tremblay *, Department of Political Science, University of Toronto, Toronto, Canada, daniel.tremblay@utoronto.ca

Keywords:

Algorithmic bias, automated decision-making, human rights, discrimination, AI governance, judicial remedies, transparency, accountability, equality, due process

Abstract

The rapid integration of automated decision-making systems into public and private governance has intensified global concern over the human rights implications of algorithmic bias. As machine learning tools increasingly shape outcomes in criminal justice, welfare administration, migration control, employment screening, healthcare triage, and financial services, evidence shows that these systems often reproduce and scale structural inequalities embedded within historical data and institutional practices. This narrative review synthesizes current scholarship to analyze how biased algorithms undermine core human rights principles, including equality, non-discrimination, due process, transparency, privacy, and freedom from arbitrary decision-making. The article examines the conceptual foundations of AI-driven discrimination, highlighting how technical, societal, and structural biases interact during data collection, model development, and deployment. It then evaluates international, regional, and sector-specific legal frameworks governing automated decision systems, identifying significant gaps and inconsistencies that hinder effective accountability. Judicial approaches and case law are assessed to illustrate both the potential and limitations of litigation as a mechanism for addressing algorithmic harm. The review also explores existing and emerging remedies—such as injunctions, algorithmic audits, impact assessments, algorithmic affirmative action, and mandated transparency—and considers the challenges courts face in regulating opaque and technically complex systems. Finally, the article outlines governance models that integrate state responsibility, corporate due diligence, civil society participation, and international norm-setting, emphasizing the importance of preventive, lifecycle-based regulation over reactive judicial intervention. 
The findings underscore the urgent need for harmonized, rights-based governance structures capable of mitigating discriminatory outcomes and ensuring that automated decision systems operate in alignment with democratic values and human dignity.



Published: 2023-01-01

Submitted: 2022-11-17

Revised: 2022-12-17

Accepted: 2022-12-27

How to Cite

Lawson, A., & Tremblay, D. (2023). AI Bias as a Human Rights Violation: Legal Standards and Judicial Remedies in Automated Decision Systems. Legal Studies in Digital Age, 2(1), 53-67. https://jlsda.com/index.php/lsda/article/view/302
