Revista de Estudios Sociales

rev. estud. soc. | eISSN 1900-5180 | ISSN 0123-885X

Avoiding the Formalism Trap: A Critical Evaluation and Selection of Statistical Fairness Metrics in Public Algorithms

No. 93 (2025-07-25)
  • Alberto Coddou Mc Manus
    Pontificia Universidad Católica de Chile
    ORCID iD: https://orcid.org/0000-0003-2041-2304
  • Mariana Germán Ortiz
    Universidad Adolfo Ibáñez, Chile
    ORCID iD: https://orcid.org/0009-0002-7360-1336
  • Reinel Tabares Soto
    Universidad de Caldas, Colombia
    ORCID iD: https://orcid.org/0000-0003-3639-4147

Abstract

This article examines the different statistical fairness metrics used to evaluate the performance of artificial intelligence (AI) models and proposes criteria for selecting them based on context and legal implications. It focuses in particular on how these metrics can help safeguard the right to equality and non-discrimination in algorithmic systems implemented by the state. Its core contribution is the development of an analytical framework for choosing fairness metrics according to the purpose of the automated system, the nature of the project, and the rights at stake. For example, in the criminal justice system—where individual liberty is at risk—the emphasis is on minimizing false positives. In contrast, for algorithms designed to protect victims of gender-based violence, the priority is to reduce false negatives. In areas like public procurement, group fairness is assessed using metrics such as disparate impact or demographic parity. In sectors like tax enforcement or medical diagnostics, the focus is on predictive accuracy and efficiency. Taking an interdisciplinary approach, the article puts forward a sociotechnical perspective that brings together technical and legal insights. It highlights the need to avoid the “formalism trap,” where fairness is reduced to abstract metrics without accounting for the broader social and political context. Finally, it argues that selecting appropriate metrics not only helps identify and mitigate algorithmic bias but also contributes to building AI systems that are fairer and more transparent, and that align with fundamental principles of equality and non-discrimination.
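The metrics the abstract contrasts (false positive and false negative rates, demographic parity, disparate impact) can be made concrete with a short sketch. The following is an illustrative example, not taken from the article: it computes these statistics in plain Python over hypothetical binary predictions for two demographic groups, labeled A and B.

```python
# Illustrative sketch (hypothetical data, not from the article): computing
# three of the statistical fairness metrics discussed in the abstract.

def rates(y_true, y_pred):
    """Return (false positive rate, false negative rate, positive prediction rate)."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    neg = sum(1 for t in y_true if t == 0)
    pos = sum(1 for t in y_true if t == 1)
    fpr = fp / neg if neg else 0.0
    fnr = fn / pos if pos else 0.0
    ppr = sum(y_pred) / len(y_pred)  # share of individuals predicted positive
    return fpr, fnr, ppr

# Hypothetical ground truth and model predictions for two groups.
true_a, pred_a = [1, 0, 1, 0, 0, 1], [1, 0, 1, 1, 0, 1]
true_b, pred_b = [1, 0, 0, 1, 0, 0], [0, 0, 0, 1, 0, 0]

fpr_a, fnr_a, ppr_a = rates(true_a, pred_a)
fpr_b, fnr_b, ppr_b = rates(true_b, pred_b)

# Demographic parity asks the gap between positive prediction rates to be
# small; the disparate impact ratio compares those rates directly (the US
# "80% rule" flags ratios below 0.8).
parity_gap = abs(ppr_a - ppr_b)
di_ratio = min(ppr_a, ppr_b) / max(ppr_a, ppr_b)

print(f"Group A: FPR={fpr_a:.2f} FNR={fnr_a:.2f} positive rate={ppr_a:.2f}")
print(f"Group B: FPR={fpr_b:.2f} FNR={fnr_b:.2f} positive rate={ppr_b:.2f}")
print(f"Demographic parity gap: {parity_gap:.2f}")
print(f"Disparate impact ratio: {di_ratio:.2f}")
```

The sketch also illustrates the context-dependence the article argues for: a criminal justice deployment would scrutinize the per-group false positive rates, a victim-protection algorithm the false negative rates, and a procurement audit the parity gap or disparate impact ratio.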

Keywords: algorithmic bias, algorithmic discrimination, algorithmic justice, artificial intelligence, public algorithms, statistical fairness

References

Bekkum, Marvin van. 2024. “Using Sensitive Data to Debias AI Systems: Article 10 (5) of the EU AI Act”. SSRN. https://dx.doi.org/10.2139/ssrn.4992036

Binns, Reuben. 2018. “Fairness in Machine Learning: Lessons from Political Philosophy”. In Proceedings of Machine Learning Research 81: 149-159. Conference on Fairness, Accountability, and Transparency, February 23-24, New York, United States. https://proceedings.mlr.press/v81/binns18a/binns18a.pdf

Buijsman, Stefan. 2024. “Navigating Fairness Measures and Trade-Offs”. AI Ethics 4: 1323-1334. https://doi.org/10.1007/s43681-023-00318-0

Buyl, Maarten and Tijl De Bie. 2024. “Inherent Limitations of AI Fairness”. Communications of the ACM 67 (2): 48-55. https://doi.org/10.1145/3624700

Caton, Simon and Christian Haas. 2024. “Fairness in Machine Learning: A Survey”. ACM Computing Surveys 56 (7): 1-38. https://doi.org/10.1145/3616865

Chacón, Andrés, Eduardo E. Kausel and Tamara Reyes. 2022. “A Longitudinal Approach for Understanding Algorithm Use”. Journal of Behavioral Decision Making 35 (4): e2275. https://doi.org/10.1002/bdm.2275

Coddou McManus, Alberto, Isabel Ruiz-Esquide, Dominique Bergeret, Camila Loyola and Valentina Sánchez. In press. “IA y violencia contra las mujeres: la automatización de la evaluación del riesgo”. Política Criminal.

Consejo para la Transparencia. 2023. Decisión amparo ROL C12867-23, November 29, 2023. Public entity: Servicio de Impuestos Internos. Requester: Mauricio Álvarez Vega. https://jurisprudencia.cplt.cl/Paginas/ResultadoBusqueda.aspx?data=86C88158961C

Contreras, Pablo. 2024. “Convergencia internacional y caminos propios: regulación de la inteligencia artificial en América Latina”. Actualidad Jurídica Iberoamericana 21: 468-491. https://revista-aji.com/convergencia-internacional-y-caminos-propios-regulacion-de-la-inteligencia-artificial-en-america-latina/

Corbett-Davies, Sam, Emma Pierson, Avi Feller, Sharad Goel and Aziz Huq. 2017. “Algorithmic Decision Making and the Cost of Fairness”. In KDD '17: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 797-806. August 13-17, Halifax, Canada. https://doi.org/10.1145/3097983.3098095

Giffen, Benjamin van, Dennis Herhausen and Timo Fahse. 2022. “Overcoming the Pitfalls and Perils of Algorithms: A Classification of Machine Learning Biases and Mitigation Methods”. Journal of Business Research 144: 93-106. https://doi.org/10.1016/j.jbusres.2022.01.076

Green, Ben. 2022. “Escaping the Impossibility of Fairness: From Formal to Substantive Algorithmic Fairness”. Philosophy & Technology 35 (90): online. https://doi.org/10.1007/s13347-022-00584-6

Harris, Lynette and Carley Foster. 2010. “Aligning Talent Management with Approaches to Equality and Diversity: Challenges for UK Public Sector Managers”. Equality, Diversity and Inclusion: An International Journal 29 (5): 422-435. https://doi.org/10.1108/02610151011052753

Hellman, Deborah. 2020. “Measuring Algorithmic Fairness”. Virginia Law Review 106 (4): 811-866. https://virginialawreview.org/wp-content/uploads/2020/06/Hellman_Book.pdf

Hermosilla, María Paz and Mariana Germán. 2024. “Implementación responsable de algoritmos e inteligencia artificial en el sector público de Chile”. Revista Chilena de la Administración del Estado 11: 101-122. https://doi.org/10.57211/revista.v11i11.185

Lapostol Piderit, José Pablo, Romina Garrido Iglesias and María Paz Hermosilla Cornejo. 2023. “Algorithmic Transparency from the South: Examining the State of Algorithmic Transparency in Chile’s Public Administration Algorithms”. In FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 227-235. June 12-15, Chicago, United States. https://doi.org/10.1145/3593013.3593991

Larson, Jeff, Surya Mattu, Lauren Kirchner and Julia Angwin. 2016. “How We Analyzed the COMPAS Recidivism Algorithm”. ProPublica. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm

Lippert-Rasmussen, Kasper. 2013. Born Free and Equal?: A Philosophical Inquiry into the Nature of Discrimination. Oxford: Oxford University Press.

Martínez Placencia, Victoria. 2023. “Funciones de las categorías de discriminación en el derecho laboral chileno”. Revista Chilena de Derecho 50 (2): 1-31. https://doi.org/10.7764/R.502.1

Moreau, Sophia. 2010. “What Is Discrimination?”. Philosophy & Public Affairs 38 (2): 143-179. https://doi.org/10.1111/j.1088-4963.2010.01181.x

Palma, Aníbal. 2024. “Data and Data Governance (Article 10)”. In The EU Regulation on Artificial Intelligence: A Commentary, 183-206. Rome: Wolters Kluwer.

Bill of May 7, 2024. “Regula los sistemas de inteligencia artificial” [Regulates Artificial Intelligence Systems]. Cámara de Diputadas y Diputados, Chile. https://www.camara.cl/legislacion/ProyectosDeLey/tramitacion.aspx?prmID=17429&prmBOLETIN=16821-19

RAE (Real Academia Española). 2025. Diccionario de la lengua española. 23rd ed. https://dle.rae.es

Rawls, John. 2012. Justicia como equidad: una reformulación. Barcelona: Paidós.

Regulation (EU) 2024/1689 of the European Parliament and of the Council. https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng

Ruf, Boris and Marcin Detyniecki. 2021. “Towards the Right Kind of Fairness in AI”. arXiv preprint. https://doi.org/10.48550/arXiv.2102.08453

Selbst, Andrew D., Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian and Janet Vertesi. 2019. “Fairness and Abstraction in Sociotechnical Systems”. In FAT* '19: Proceedings of the Conference on Fairness, Accountability, and Transparency, 59-68. January 29-31, Atlanta, United States. https://doi.org/10.1145/3287560.3287598

Ungern-Sternberg, Andreas von. 2022. “Discriminatory AI and the Law: Legal Standards for Algorithmic Profiling”. In Responsible Artificial Intelligence, edited by Silja Vöneky, Philipp Kellmeyer, Oliver Müller and Wolfram Burgard, 252-278. Cambridge: Cambridge University Press.

US Equal Employment Opportunity Commission. 2022. “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees”. EEOC-NVTA-2022-2. https://perma.cc/LG25-D53T

Vandevelde, Kenneth J. 2010. “A Unified Theory of Fair and Equitable Treatment”. New York University Journal of International Law and Politics 43 (1): 43-106. https://ssrn.com/abstract=2357642

Verma, Sahil and Julia Rubin. 2018. “Fairness Definitions Explained”. In FairWare '18: Proceedings of the International Workshop on Software Fairness, 1-7. May 29, Gothenburg, Sweden. https://doi.org/10.1145/3194770.3194776

Wachter, Sandra, Brent Mittelstadt and Chris Russell. 2018. “Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR”. Harvard Journal of Law & Technology 31 (2): 841-887. https://doi.org/10.48550/arXiv.1711.00399

Wachter, Sandra, Brent Mittelstadt and Chris Russell. 2020. “Bias Preservation in Machine Learning: The Legality of Fairness Metrics under EU Non-Discrimination Law”. West Virginia Law Review 123 (3): 3-51. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3792772

Wachter, Sandra, Brent Mittelstadt and Chris Russell. 2021. “Why Fairness Cannot Be Automated: Bridging the Gap between EU Non-Discrimination Law and AI”. Computer Law & Security Review 41: 105567. https://doi.org/10.1016/j.clsr.2021.105567

License

Copyright (c) 2025 Alberto Coddou Mc Manus, Mariana Germán Ortiz, Reinel Tabares Soto
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.