Revista de Estudios Sociales

rev. estud. soc. | eISSN 1900-5180 | ISSN 0123-885X

Toward a Human-Centered Artificial Intelligence: Contributions from the Social Sciences

No. 93 (2025-07-25)
  • Andrés Páez
    Universidad de los Andes, Colombia
    ORCID iD: https://orcid.org/0000-0002-4602-7490
  • Juan David Gutiérrez
    Universidad de los Andes, Colombia
    ORCID iD: https://orcid.org/0000-0002-7783-4850
  • Diana Acosta-Navas
    Loyola University Chicago, United States
    ORCID iD: https://orcid.org/0000-0002-5351-5820

Abstract

This introduction to the dossier of the Revista de Estudios Sociales, devoted to the relationship between the social sciences and artificial intelligence (AI), starts from the premise that the social sciences not only play a fundamental role in analyzing and understanding the profound transformations in human life brought about by the emergence of AI, but also, and above all, have an essential function in guiding the future development of AI so that it is aligned with human well-being and individual development. To ensure that the design, development, use, and evaluation of AI systems are human-centered, we need the conceptual tools, the experimental data, and the experience of working with diverse communities that the social sciences can provide. A human-centered AI, aligned with human interests, cannot exist without the social sciences. In this introduction we focus on three areas in which we believe the social sciences can play that guiding role in the future development of AI. First, we consider that one of AI's most significant effects on human life is the way it is reconfiguring interpersonal relationships. AI has changed how people communicate, form relationships, construct their identities, cooperate, and deal with disagreement. Second, we examine AI's impact on social equality. Models trained on historical data have the potential to perpetuate the social inequalities and biases reflected in those data and to produce outcomes that disadvantage members of marginalized groups. Finally, we are interested in understanding how the development and use of different AI applications influence political regimes, public policy, bureaucracy, and the state.

Keywords: machine learning algorithms, algorithmic discrimination, AI and democracy, interpersonal relationships, use of AI in the social sciences

License

Copyright 2025 Andrés Páez, Juan David Gutiérrez, Diana Acosta-Navas

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.