Toward Human-Centered Artificial Intelligence: Contributions from the Social Sciences
No. 93 (2025-07-25)

Author(s)
- Andrés Páez, Universidad de los Andes, Colombia. ORCID iD: https://orcid.org/0000-0002-4602-7490
- Juan David Gutiérrez, Universidad de los Andes, Colombia. ORCID iD: https://orcid.org/0000-0002-7783-4850
- Diana Acosta-Navas, Loyola University Chicago, United States. ORCID iD: https://orcid.org/0000-0002-5351-5820
Abstract
This introduction to the Revista de Estudios Sociales dossier, which explores the intersection of artificial intelligence (AI) and the social sciences, is based on the idea that the social sciences are not only crucial for analyzing and understanding the major changes AI has brought to human life, but also—more fundamentally—for guiding the future development of AI in ways that support human well-being and individual growth. Ensuring that AI systems are designed, developed, used, and evaluated with a human-centered focus requires the conceptual frameworks, empirical data, and community-based expertise that the social sciences offer. Human-centered AI—aligned with human values and interests—depends on these contributions to take shape. This introduction highlights three key areas where we believe the social sciences can provide meaningful guidance for the future direction of AI. First, we examine how AI is reshaping interpersonal relationships—transforming how people communicate, form bonds, construct identities, cooperate, and manage conflict. Second, we consider the impact of AI on social equality. Because models are often trained on historical data, they risk reproducing the inequalities and biases embedded in that data, which can lead to outcomes that disadvantage marginalized groups. Finally, we explore how the development and application of AI influence political regimes, policy-making, bureaucracy, and the role of the state.
References
Acemoglu, Daron y Pascual Restrepo. 2018. “Artificial Intelligence, Automation, and Work”. En The Economics of Artificial Intelligence: An Agenda, editado por Ajay Agrawal, Joshua Gans y Avi Goldfarb, 197-236. Chicago: University of Chicago Press.
Acosta-Navas, Diana. 2025a. “On Foundations and Foundation Models: What AI and Philanthropy can Learn from One Another”. En The Routledge Handbook of Artificial Intelligence and Philanthropy, editado por Giuseppe Ugazio y Milos Maricic, 393-407. Nueva York; Londres: Routledge.
Acosta-Navas, Diana. 2025b. “In Moderation: Automation in the Digital Public Sphere”. Journal of Business Ethics. https://doi.org/10.1007/s10551-024-05912-8
Akpudo, Ugochukwu Ejike, Jake Okechukwu Effoduh, Jude Dzevela Kong y Yongsheng Gao. 2024. “Unveiling AI Concerns for Sub-Saharan Africa and its Vulnerable Groups”. En 2024 International Conference on Intelligent and Innovative Computing Applications (ICONIC), editado por Sameerchand Pudaruth y Kingsley Ogudo, 45-55. https://doi.org/10.59200/ICONIC.2024.007
Angwin, Julia, Jeff Larson, Surya Mattu y Lauren Kirchner. 2016. “Machine Bias”. ProPublica, 23 de mayo. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Anthis, Jacy Reese, Ryan Liu, Sean M. Richardson, Austin C. Kozlowski, Bernard Koch, James Evans, et al. 2025. “LLM Social Simulations Are a Promising Research Method”. arXiv. https://arxiv.org/abs/2504.02234
Araujo, Theo, Natali Helberger, Sanne Kruikemeier y Claes H. De Vreese. 2020. “In AI We Trust? Perceptions about Automated Decision-Making by Artificial Intelligence”. AI & Society 35: 611-623. https://doi.org/10.1007/s00146-019-00931-w
Badarovski, Marjan. 2024. “The Role of AI in Enhancing Survey Data Quality: How AI Can Help Detect Fraudulent Responses and Improve Panel Management”. Lifepanel Blog, 27 de septiembre. https://lifepanel.eu/blog/the-role-of-ai-in-enhancing-survey-data-quality-how-ai-can-help-detect-fraudulent-responses-and-improve-panel-management/
Barocas, Solon y Andrew D. Selbst. 2016. “Big Data’s Disparate Impact”. California Law Review 104 (3): 671-732. http://dx.doi.org/10.2139/ssrn.2477899
Barocas, Solon, Moritz Hardt y Arvind Narayanan. 2023. Fairness and Machine Learning: Limitations and Opportunities. Cambridge, MA: MIT Press.
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major y Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”. En FAccT ‘21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610-623. 3 al 10 de marzo, virtual, Canadá. https://doi.org/10.1145/3442188.3445922
Bianchi, Federico, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, et al. 2023. “Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale”. En FAccT ‘23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1493-1504. 12 al 15 de junio, Chicago, Estados Unidos. https://doi.org/10.1145/3593013.3594095
Bommasani, Rishi, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, et al. 2021. “On the Opportunities and Risks of Foundation Models”. ArXiv. https://doi.org/10.48550/arXiv.2108.07258
Brynjolfsson, Erik. 2022. “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence”. Dædalus 151 (2): 272-287. https://www.amacad.org/publication/daedalus/turing-trap-promise-peril-human-artificial-intelligence
Chen, Zhisheng. 2023. “Ethics and Discrimination in Artificial Intelligence-Enabled Recruitment Practices”. Humanities and Social Sciences Communications 10: 1-12. https://doi.org/10.1057/s41599-023-02079-x
Corrêa, Nicholas Kluge, Camila Galvão, James William Santos, Carolina Del Pino, Edson Pontes Pinto, Camila Barbosa, et al. 2023. “Worldwide AI Ethics: A Review of 200 Guidelines and Recommendations for AI Governance”. Patterns 4 (10): 1-14. https://doi.org/10.1016/j.patter.2023.100857
Danaher, John y Neil McArthur, eds. 2017. Robot Sex: Social and Ethical Implications. Cambridge, MA: MIT Press.
Devlin, Kate. 2018. Turned On: Science, Sex and Robots. Londres: Bloomsbury.
Döring, Nicola, Thuy Dung Le, Laura M. Vowels, Matthew J. Vowels y Tiffany L. Marcantonio. 2025. “The Impact of Artificial Intelligence on Human Sexuality: A Five-Year Literature Review 2020-2024”. Current Sexual Health Reports 17: 1-39. https://doi.org/10.1007/s11930-024-00397-y
Effoduh, Jake Okechukwu, Ugochukwu Ejike Akpudo y Jude Dzevela Kong. 2024. “Toward a Trustworthy and Inclusive Data Governance Policy for the Use of Artificial Intelligence in Africa”. Data & Policy 6: 1-14. https://doi.org/10.1017/dap.2024.26
Eubanks, Virginia. 2018. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. Nueva York: St. Martin’s Press.
European Commission. 2019. “Liability for Artificial Intelligence and other Emerging Technologies”. European Commission [edición digital]. https://data.europa.eu/doi/10.2838/573689
Flórez Rojas, María Lorena y Juliana Vargas Leal. 2020. “El impacto de herramientas de inteligencia artificial: Un análisis en el sector público en Colombia”. En Inteligencia artificial en América Latina y el Caribe. Ética, gobernanza y políticas, editado por Carolina Aguerre, 206-238. Buenos Aires: CETyS; Universidad de San Andrés.
Floridi, Luciano. 2023. The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford: Oxford University Press.
GobLab UAI. 2022. “Repositorio de algoritmos públicos de Chile. Primer informe de estado de uso de algoritmos en el sector público”. Santiago de Chile: Universidad Adolfo Ibáñez (UAI). https://goblab.uai.cl/wp-content/uploads/2022/02/Primer-Informe-Repositorio-Algoritmos-Publicos-en-Chile.pdf
Goldberg, Beth, Diana Acosta-Navas, Michiel Bakker, Ian Beacock, Matt Botvinick, Prateek Buch, et al. 2024. “AI and the Future of Digital Public Squares”. arXiv. https://arxiv.org/abs/2412.09988
Gray, Mary L. y Siddharth Suri. 2019. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Nueva York: Harper Business.
Gutiérrez, Juan David. 2024a. “Critical Appraisal of Large Language Models in Judicial Decision-Making”. En Handbook on Public Policy and Artificial Intelligence, editado por Regine Paul, Emma Carmel y Jennifer Cobbe, 323-338. Londres: Edward Elgar Publishing.
Gutiérrez, Juan David. 2024b. Consultation Paper on AI Regulation: Emerging Approaches across the World. París: Unesco. https://unesdoc.unesco.org/ark:/48223/pf0000390979
Gutiérrez, Juan David y Michelle Castellanos-Sánchez. 2023. “Transparencia algorítmica y Estado abierto en Colombia”. Reflexión Política 25 (52): 6-21. https://doi.org/10.29375/01240781.4789
Gutiérrez, Juan David y Sarah Muñoz-Cadena. 2023. “Adopción de sistemas de decisión automatizada en el sector público: Cartografía de 113 sistemas en Colombia”. GIGAPP Estudios Working Papers 10 (270): 365-395. https://www.gigapp.org/ewp/index.php/GIGAPP-EWP/article/view/329
Gutiérrez, Juan David y Sarah Muñoz-Cadena. 2025. “Artificial Intelligence in Latin America’s Public Policy Cycles”. En Handbook of Public Policy in Latin America, editado por Leonardo Secchi y César N. Cruz-Rubio, 403-420. Londres: Edward Elgar Publishing.
Gündüz, Uğur. 2017. “The Effect of Social Media on Identity Construction”. Mediterranean Journal of Social Sciences 8 (5): 85-92. http://archive.sciendo.com/MJSS/mjss.2017.8.issue-5/mjss-2017-0026/mjss-2017-0026.pdf
Hall, Rachel y Claire Wilmot. 2025. “Meta Faces Ghana Lawsuits over Impact of Extreme Content on Moderators”. The Guardian, 27 de abril. https://www.theguardian.com/technology/2025/apr/27/meta-faces-ghana-lawsuits-over-impact-of-extreme-content-on-moderators
Hermosilla, María Paz, Romina Garrido y Daniel Loewe. 2020. “Transparencia y responsabilidad algorítmica para la inteligencia artificial”. Santiago de Chile: GobLab UAI / Escuela de Gobierno, Universidad Adolfo Ibáñez. https://goblab.uai.cl/wp-content/uploads/2020/04/Paper-Transparencia-GobLab.pdf
Hermosilla, María Paz y José Pablo Lapostol. 2022. “The Limits of Algorithmic Transparency”. En Survey on the Use of Information and Communication Technologies in the Brazilian Public Sector: ICT Electronic Government 2021, editado por el Núcleo de Informação e Coordenação do Ponto BR, 289-295. Brasil: Comitê Gestor da Internet no Brasil.
ISC (International Science Council). 2023. A Framework for Evaluating Rapidly Developing Digital and Related Technologies: AI, Large Language Models and Beyond. París: ISC. https://council.science/publications/framework-digital-technologies/
Jacobs, Jane. 1961. The Death and Life of Great American Cities. Nueva York: Random House.
Kleinman, Zoe. 2024. “Why Google’s ‘Woke’ AI Problem Won’t Be an Easy Fix”. BBC News, 24 de febrero. https://www.bbc.com/news/technology-68412620
Krägeloh, Christian U., Mohsen M. Alyami y Oleg N. Medvedev. 2023. “AI in Questionnaire Creation: Guidelines Illustrated in AI Acceptability Instrument Development”. En International Handbook of Behavioral Health Assessment, 1-23. Cham: Springer.
Lapostol, José Pablo, Romina Garrido y María Paz Hermosilla. 2023. “Algorithmic Transparency from the South: Examining the State of Algorithmic Transparency in Chile’s Public Administration Algorithms”. En FAccT ‘23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 227-235. 12 al 15 de junio, Chicago, Estados Unidos. https://doi.org/10.1145/3593013.3593991
Lebrun, Benjamin, Sharon Temtsin, Andrew Vonasch y Christoph Bartneck. 2024. “Detecting the Corruption of Online Questionnaires by Artificial Intelligence”. Frontiers in Robotics and AI 10: 1-25. https://doi.org/10.3389/frobt.2023.1277635
Liang, Percy, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, et al. 2023. “Holistic Evaluation of Language Models”. Transactions on Machine Learning Research. https://openreview.net/pdf?id=iO4LZibEqW
Lindgren, Simon y Jonny Holmström. 2020. “A Social Science Perspective on Artificial Intelligence: Building Blocks for a Research Agenda”. Journal of Digital Social Research 2 (3): 1-15. https://doi.org/10.33621/jdsr.v2i3.65
Lu, Donna. 2025. “We Tried Out DeepSeek. It Worked Well, until we Asked It about Tiananmen Square and Taiwan”. The Guardian, 28 de enero. https://www.theguardian.com/technology/2025/jan/28/we-tried-out-deepseek-it-works-well-until-we-asked-it-about-tiananmen-square-and-taiwan
Marcuse, Herbert. 1993. El hombre unidimensional. Barcelona: Planeta DeAgostini.
Marx, Karl. 2014. “Tesis sobre Feuerbach”. En Karl Marx, La ideología alemana, 499-502. Madrid: Akal.
Medaglia, Rony y Luca Tangi. 2022. “The Adoption of Artificial Intelligence in the Public Sector in Europe: Drivers, Features, and Impacts”. En ICEGOV ‘22: Proceedings of the 15th International Conference on Theory and Practice of Electronic Governance, 10-18. 4 al 7 de octubre, Guimarães, Portugal. https://doi.org/10.1145/3560107.3560110
Mhlambi, Sabelo. 2020. “Q&A: Sabelo Mhlambi on What AI Can Learn from Ubuntu Ethics”. People + AI Research Blog, Medium, 6 de mayo. https://medium.com/people-ai-research/q-a-sabelo-mhlambi-on-what-ai-can-learn-from-ubuntu-ethics-4012a53ec2a6
Misuraca, Gianluca, Colin van Noordt y Anys Boukli. 2020. “The Use of AI in Public Services: Results from a Preliminary Mapping across the EU”. En ICEGOV ‘20: Proceedings of the 13th International Conference on Theory and Practice of Electronic Governance, 90-99. 23 al 25 de septiembre, Atenas, Grecia. https://doi.org/10.1145/3428502.3428513
Mumford, Lewis. 1992. Técnica y civilización. Madrid: Alianza Editorial.
Nguyen, C. Thi. 2020. “Echo Chambers and Epistemic Bubbles”. Episteme 17 (2): 141-161. https://doi.org/10.1017/epi.2018.32
Ovadya, Aviv y Luke Thorburn. 2023. “Bridging Systems: Open Problems for Countering Destructive Divisiveness across Ranking, Recommenders, and Governance”. arXiv. https://arxiv.org/abs/2301.09976
Páez, Andrés. 2021a. “Robot Mindreading and the Problem of Trust”. En AISB Convention 2021: Communication and Conversation, 140-143. Londres: AISB.
Páez, Andrés. 2021b. “Negligent Algorithmic Discrimination”. Law and Contemporary Problems 84 (3): 19-33. https://scholarship.law.duke.edu/lcp/vol84/iss3/3
Park, Joon Sung, Carolyn Q. Zou, Aaron Shaw, Benjamin Mako Hill, Carrie Cai, Meredith Ringel Morris, et al. 2024. “Generative Agent Simulations of 1,000 People”. arXiv. https://arxiv.org/abs/2411.10109
Pasquale, Frank. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.
Perrigo, Billy. 2023. “Exclusive: OpenAI Used Kenyan Workers on Less than $2 per Hour to Make ChatGPT Less Toxic”. Time, 18 de enero. https://time.com/6247678/openai-chatgpt-kenya-workers/
Perrigo, Billy. 2024. “Inside OpenAI’s Plan to Make AI more ‘Democratic’”. Time, 5 de febrero. https://time.com/6684266/openai-democracy-artificial-intelligence/
Raghavan, Prabhakar. 2024. “Gemini Image Generation Got It Wrong. We’ll Do Better”. Google The Keyword, 23 de febrero. https://blog.google/products/gemini/gemini-image-generation-issue/
Ramírez-Bustamante, Natalia y Andrés Páez. 2024. “Análisis jurídico de la discriminación algorítmica en los procesos de selección laboral”. En Derecho, poder y datos. Aproximaciones críticas al derecho y las nuevas tecnologías, editado por René Urueña Hernández y Natalia Ángel-Cabo, 203-230. Bogotá: Ediciones Uniandes.
Régis, Catherine, Florian Martin-Bariteau, Okechukwu Jake Effoduh, Juan David Gutiérrez, Gina Neff, Carlos Affonso Souza, et al. 2025. “AI in the Ballot Box: Four Actions to Safeguard Election Integrity and Uphold Democracy”. IVADO. https://www.uottawa.ca/research-innovation/sites/g/files/bhrskd326/files/2025-02/gpb-ai_ai_and_elections.pdf
Reich, Rob. 2018. Just Giving: Why Philanthropy Is Failing Democracy and How It Can Do Better. Princeton: Princeton University Press.
Richardson, Kathleen. 2016. “The Asymmetrical ‘Relationship’: Parallels Between Prostitution and the Development of Sex Robots”. ACM SIGCAS Computers and Society 45 (3): 290-293. https://doi.org/10.1145/2874239.2874281
Risse, Mathias. 2019. “Human Rights and Artificial Intelligence”. Human Rights Quarterly 41 (1): 1-16. https://doi.org/10.1353/hrq.2019.0000
Roberts, Sarah T. 2019. Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven: Yale University Press.
Saunders-Hastings, Emma. 2022. Private Virtues, Public Vices: Philanthropy and Democratic Equality. Chicago: University of Chicago Press.
Shin, Minkyu, Jin Kim y Jiwoong Shin. 2023. “The Adoption and Efficacy of Large Language Models: Evidence from Consumer Complaints in the Financial Industry”. arXiv. https://arxiv.org/abs/2311.16466
Sistemas de Algoritmos Públicos. 2025. “Informe de los Repositorios 1.0”. Bogotá: Escuela de Gobierno / Universidad de los Andes. https://sistemaspublicos.tech/wp-content/uploads/Informe-1-Repositorios-Proyecto-SAP-032025-PLATAFORMA.pdf
Spence, Andrew Michael. 2022. “Automation, Augmentation, Value Creation & the Distribution of Income & Wealth”. Dædalus 151 (2): 244-255. https://www.amacad.org/publication/daedalus/automation-augmentation-value-creation-distribution-income-wealth
Sunstein, Cass R. 2017. #Republic: Divided Democracy in the Age of Social Media. Princeton: Princeton University Press.
Tocqueville, Alexis de. (1835) 2000. Democracy in America. Chicago: University of Chicago Press.
Tyson, Laura D. y John Zysman. 2022. “Automation, AI & Work”. Dædalus 151 (2): 256-271. https://www.amacad.org/publication/daedalus/automation-ai-work
Valderrama, Matías, María Paz Hermosilla y Romina Garrido. 2023. “State of the Evidence: Algorithmic Transparency”. Open Government Partnership, 24 de mayo. https://www.opengovpartnership.org/documents/state-of-the-evidence-algorithmic-transparency/
Valle-Cruz, David, J. Ignacio Criado, Rodrigo Sandoval-Almazán y Edgar A. Ruvalcaba-Gomez. 2020. “Assessing the Public Policy-Cycle Framework in the Age of Artificial Intelligence: From Agenda-Setting to Policy Evaluation”. Government Information Quarterly 37 (4): 101509. https://doi.org/10.1016/j.giq.2020.101509
Wankhade, Mayur, Annavarapu Chandra Sekhara Rao y Chaitanya Kulkarni. 2022. “A Survey on Sentiment Analysis Methods, Applications, and Challenges”. Artificial Intelligence Review 55 (7): 5731-5780. https://doi.org/10.1007/s10462-022-10144-1
Weber, Max. 1984. Economía y sociedad. Ciudad de México: FCE.
Winner, Langdon. 1980. “Do Artifacts Have Politics?”. Dædalus 109 (1): 121-136. https://www.jstor.org/stable/20024652?seq=1
Winters, Jutta y Jonathan Latner. 2025. “Does Automation Replace Experts or Augment Expertise? The Answer Is Yes”. IAB-Forum, 9 de enero. https://www.iab-forum.de/en/does-automation-replace-experts-or-augment-expertise-the-answer-is-yes/
Wirtz, Bernd W. y Wilhelm M. Müller. 2019. “An Integrated Artificial Intelligence Framework for Public Management”. Public Management Review 21 (7): 1076-1100. https://doi.org/10.1080/14719037.2018.1549268
Yang, Zeyi. 2022. “There’s no Tiananmen Square in the New Chinese Image-Making AI”. MIT Technology Review, 14 de septiembre. https://www.technologyreview.com/2022/09/14/1059481/baidu-chinese-image-ai-tiananmen/
Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism. Nueva York: PublicAffairs.
License
Copyright (c) 2025 Andrés Páez, Juan David Gutiérrez, Diana Acosta-Navas

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.