Algorithmic authority: how can we start thinking about the protection of human rights in the era of "big data"?
No. 2 (2019-01-01)
Author(s)
René Urueña, Facultad de Derecho, Universidad de los Andes (Bogotá). rf.uruena21@uniandes.edu.co
Abstract
Recent technological changes have created the conditions for the emergence of a new form of exercising power, "algorithmic authority", with particular and novel characteristics. This text is a contribution to a legal vocabulary that makes it possible, first, to describe this new exercise of power as a legal phenomenon and, second, to think normatively about different models for its regulation, with the aim of promoting the protection of human rights. Four central notions are proposed: (1) algorithmic authority, (2) lex algorithmica, (3) autonomy, and (4) transparency. These four concepts lay the groundwork for argumentative modes and legal doctrines suited to the technological process that is to be regulated.
License
Copyright 2019 René Urueña

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.