EducAtIon: Opportunities and Challenges for Teaching in the Era of Artificial Intelligence*
David Orlando Niño Muñoz
Universidad de los Andes, Bogotá (Colombia)
https://orcid.org/0009-0009-7997-1945
Reception: October 31, 2024 | Acceptance: February 7, 2025 | Publication: May 31, 2025
DOI: http://doi.org/10.18175/VyS16.2.2025.8
ABSTRACT
In the age of artificial intelligence (AI), the educational field faces the need to adopt tools like large language models (LLMs), which bring both opportunities and significant challenges. This article explores the benefits that generative AI provides to higher education, while also acknowledging the risks and ethical issues involved. Using specific experiences and usage examples, practical applications of AI are presented, including creating assessment rubrics, customizing chatbots for feedback, generating questions, and managing content on learning management system (LMS) platforms, among other things. Concerns about algorithmic biases and the reliability of generated information are also discussed, offering ethical and critical strategies for the responsible use of these tools in the classroom. This text encourages educators to consider their role in a world where AI is already a permanent element, suggesting that the key to unlocking its potential is guiding students toward informed and creative use that enhances, rather than replaces, the learning process.
KEYWORDS
artificial intelligence, digital educational tools, pedagogy.
PedagogIA: oportunidades y desafíos para enseñar en la era de la inteligencia artificial
RESUMEN
En la era de la inteligencia artificial (IA), el ámbito educativo se enfrenta a la necesidad de integrar herramientas como los modelos extensos de lenguaje (LLM), los cuales representan tanto promesas como desafíos significativos. Este artículo explora las oportunidades que la IA generativa brinda en la educación superior, al tiempo que reconoce los riesgos y dilemas éticos inherentes. A través de experiencias concretas y ejemplos de uso, se presentan aplicaciones prácticas de la IA en la creación de rúbricas de evaluación, personalización de chatbots para retroalimentación, generación de preguntas, administración de plataformas LMS (sistemas de gestión de aprendizaje), entre otros. A su vez, se abordan preocupaciones sobre los sesgos algorítmicos y la fiabilidad de la información generada, frente a lo cual se proponen enfoques críticos y éticos para la adopción responsable de estas herramientas en el aula. Este texto invita a los educadores a una reflexión sobre su rol en un contexto donde la IA es ya un componente permanente y sugiere que la clave para aprovechar su potencial reside en guiar a los estudiantes hacia un uso informado y creativo que enriquezca, más que reemplace, el proceso de aprendizaje.
PALABRAS CLAVE
herramientas educativas digitales, inteligencia artificial, pedagogía.
PedagogIA: oportunidades e desafios para ensinar na era da inteligência artificial
RESUMO
Na era da inteligência artificial (IA), o campo educacional enfrenta a necessidade de integrar ferramentas como os LLMs (large language models), que oferecem tanto promessas quanto desafios significativos. Este artigo explora as oportunidades que a IA generativa traz para o ensino superior, ao mesmo tempo em que reconhece os riscos e dilemas éticos inerentes. Através de experiências concretas e exemplos de uso, são apresentadas aplicações práticas da IA na criação de rubricas de avaliação, personalização de chatbots para feedback, geração de perguntas, gestão de conteúdo em plataformas LMS (learning management system), entre outros usos. Também são abordadas preocupações sobre os vieses algorítmicos e a confiabilidade das informações geradas, propondo abordagens críticas e éticas para a adoção responsável dessas ferramentas em sala de aula. Este texto convida os educadores a refletirem sobre seu papel em um contexto em que a IA já é um componente permanente, sugerindo que a chave para aproveitar seu potencial reside em guiar os alunos para um uso informado e criativo que enriqueça, em vez de substituir, o processo de aprendizagem.
PALAVRAS-CHAVE
ferramentas educacionais digitais, inteligência artificial, pedagogia.
Introduction
A few weeks ago, while browsing social media, I came across a fascinating website called The Library of Babel,1 inspired by the famous short story of the same name by Jorge Luis Borges (2012). In this story, part of the collection The Garden of Forking Paths, Borges describes an infinite library—a universe of books that contains absolutely everything that has been and could ever be written. This challenging and philosophical idea was brought to life in The Library of Babel by Jonathan Basile (n.d.), who created this digital version of the library following Borges’s original rules. “Each wall of each hexagon is furnished with five bookshelves; each bookshelf holds thirty-two books identical in format; each book contains four hundred and ten pages; each page, forty lines; each line, approximately eighty black letters” (Borges, 1998, p. 113). Thus, Basile’s infinite library, if completed, “would contain every possible combination of 1,312,000 existing characters, including lower case letters, space, comma, and period. Thus, it would contain every book that ever has been written, and every book that could ever be – including every play, every song, every scientific paper, every legal decision, every constitution, every piece of scripture, and so on” (Basile, n.d., para. 1). In fact, the online version includes all possible pages of 3,200 characters, which is roughly equivalent to 10^4677 books. To give an idea of the scope of this site, it is enough to say that you can search for an exact phrase, and the system will tell you where to find it among the huge number of books and digital shelves.
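A quick back-of-envelope check makes that figure plausible. Assuming a 29-character set (the lowercase letters plus space, comma, and period) and books of 410 pages, as the site describes, the number of distinct 3,200-character pages, grouped into 410-page volumes, lands right around 10^4677:

```python
import math

ALPHABET_SIZE = 29      # 26 lowercase letters + space, comma, period
CHARS_PER_PAGE = 3200
PAGES_PER_BOOK = 410

# 29^3200 distinct pages; work in log10 to avoid astronomically large integers.
log10_pages = CHARS_PER_PAGE * math.log10(ALPHABET_SIZE)    # ≈ 4679.7
log10_books = log10_pages - math.log10(PAGES_PER_BOOK)      # ≈ 4677.1

print(f"distinct pages ≈ 10^{log10_pages:.0f}")
print(f"books of 410 pages ≈ 10^{log10_books:.0f}")         # roughly 10^4677
```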
Can you imagine a library containing everything ever written and ever to be written? This idea immediately made me think of artificial intelligence (AI), especially large language models (LLMs). In a way, these models have a conceptual similarity to The Library of Babel: while Basile’s library is based on mathematical combinations, LLMs depend on probabilities to generate coherent responses and sentences. Like the library, these models can evoke any possible language combination and often produce responses that seem to come from an immense and unfathomable knowledge.
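To make the probabilistic side of this parallel concrete, the toy sketch below samples each next word from a small, hand-written probability table. It is not a real language model (the vocabulary and probabilities are invented for illustration); it only shows the sampling step that LLMs perform, token by token, at an enormously larger scale:

```python
import random

# Invented "model": for each word, a probability distribution over possible
# next words. A real LLM learns distributions like these from massive corpora.
NEXT_WORD_PROBS = {
    "the": {"library": 0.5, "student": 0.3, "model": 0.2},
    "library": {"contains": 0.7, "of": 0.3},
    "contains": {"every": 0.6, "books": 0.4},
}

def generate(start: str, max_steps: int = 3) -> str:
    words = [start]
    for _ in range(max_steps):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:                       # no known continuation
            break
        options, weights = zip(*dist.items())
        words.append(random.choices(options, weights=weights)[0])
    return " ".join(words)

print(generate("the"))   # e.g. "the library contains every"
```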
This parallel led me to reflect on a fundamental question: if everything in The Library of Babel is already written, can using generative AI in educational settings still be considered plagiarism? Can we still debate authorship in this context? And even if I write without using a language model, could my work not equally be considered plagiarism, given that it already exists somewhere in The Library of Babel? These seemingly philosophical questions directly shape how we perceive and use AI in education.
I remember, for example, the panic that arose as tools like OpenAI’s ChatGPT gained popularity. New York City public schools banned the tool’s use in classrooms soon after its media buzz began (Korn & Kelly, 2023), though the ban was eventually lifted months later (Faguy, 2023). However, this panic was not unique to New York; similar concerns appeared in many other parts of the world, driven by fears that artificial intelligence (AI) would replace or undermine the learning process (Nolan, 2023).
On the other hand, the rise of ChatGPT also sparked a trend of using it as a test subject for standardized exams, assessing whether the model could pass the university entrance exam, the bar exam, or other complex tests (Alexander, 2024; Arredondo, 2023; Newton & Xiromeriti, 2024). Even today, when new AI models are released, they are compared to human academic performance: can they outperform PhD students? Can they respond like an experienced professional? When OpenAI released its latest model, o1-preview, it noted: “In our tests, the [new] model update performs similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology. We also found that it excels in math and coding” (OpenAI, 2024, section “How it works?”, para. 2). Interestingly, these models are evaluated using academic benchmarks; perhaps this suggests that the educational environment offers the best opportunities to leverage these tools for transforming learning.
In the following pages, I will share my experience with using generative AI—specifically, ChatGPT, although many of the insights and prompts suggested here can also be applied to other models—in teaching contexts in higher education at the undergraduate level, especially in my area of expertise, law.
Although my examples will come from this discipline, the scenarios I describe could easily be applied to other fields. I’ll give you a preview of my position: I believe AI is a powerful tool and that, as educators, we should explore its potential to enhance our lessons and enrich the learning process. This text will not be limited to personal experiences; I will also include prompts2 and potential classroom applications, as well as counterarguments to the most common criticisms of using AI in educational settings.
Contemporary Discussions
When you explore the world of AI—especially related to LLMs—you notice that much of the literature, even the non-technical types, tends to use complex descriptions of how these models operate. However, this article does not aim to explain in detail how an LLM works, but rather to examine its potential and implications in education. That said, I find it important to highlight certain concerns frequently mentioned in the literature on AI use. These cautionary points have formed the basis for principles such as the informed, transparent, ethical, responsible, diverse, appropriate, and critical use of AI (Gutiérrez, 2023).
Certainly, any technology used in education must be accompanied by an analysis of its potential risks. One of the most notable risks—which, as a supporter of this technology, I have also identified several times—is the presence of biases in AI systems. Algorithmic biases mainly stem from using inadequate databases during model training and can lead to biased or problematic results (Flórez Rojas & Vargas León, 2020). A well-known example of this issue is Joy Buolamwini’s work with the Algorithmic Justice League, where she demonstrated how some facial recognition systems failed to correctly identify dark-skinned individuals, highlighting biases in the data selection and processing used to train these models (Buolamwini, 2016).
Another common example arises with generative AI image creation tools. Sometimes, when asking AI to produce an image of a group of lawyers, the result includes only men or people of a specific ethnicity, revealing biases about gender and ethnicity in certain professions. The solution? Continuous vigilance. It is essential that, as teachers, students, or researchers, we recognize these biases and know how to spot them quickly. For example, if I ask an AI system to help me generate an image of a group of lawyers and the output lacks diversity, I need to question why and work to reduce that bias.
In the educational field, Prabha Kannan’s (2024) article, published by Stanford University Human-Centered Artificial Intelligence, shows how biases can still be present even in learning situations or scenarios. Kannan describes an experiment where the PaLM 2 model is asked to create a story about a star student helping a classmate who is struggling with English and other languages. The model chose names like Jamal and Carlos for the students needing support, names that, in the American context, are often linked to migrant populations (Kannan, 2024). What could happen if a professor uses an LLM in class to craft a challenging scenario, for example, within the framework of problem-based learning, without realizing these underlying biases? This would not necessarily be problematic if the professor is aware of the biases and adjusts the AI output or, if needed, uses tools or models that are better suited for their purpose. In any case, while professors can calibrate their level of caution, including by considering whether the tool will be used at the undergraduate or graduate level, it is also important to foster students’ critical analysis skills without underestimating them or assuming they are incapable of recognizing these risks on their own.
It should be understood that the biases present in AI are not the “fault” of the technology itself. Often, these biases simply mirror existing societal discrimination, which is reflected in the data used to train the model or, in some cases, in the prejudices that the programmer may introduce (López Baroni, 2019; Sharma, 2018). The responsibility to keep these biases from being perpetuated is one we, as educators, must uphold whether or not we use AI.
Another common concern with using generative AI is the risk of receiving false, inaccurate, or outdated information. Imagine a student who uses an AI like ChatGPT to find academic sources. It is not always clear whether the references provided are truthful or suitable. Like the issue of bias, the solution is to approach the generated results critically. Verifying information and assessing its quality are responsibilities that, in academia, must be carried out regardless of whether AI is used. In fact, some AI tools already try to decrease this risk by providing specific references. Co-STORM, for example, a tool developed by Stanford University (n.d.), generates articles with references to specific, verifiable websites, but also shows the sources used and highlights relevant sections from each.
That said, addressing this concern again depends on being able to identify which AI tool is best suited for the task at hand and verifying that the information provided is accurate and relevant. As educators, our role is not only to be aware of these limitations but also to guide our students on how to use these tools responsibly.
Other risks associated with using AI include managing personal data and the potential infringement of intellectual property rights. For example, there are ongoing debates about whether AI can be considered an author when used to create works or whether the data used during training had proper permissions (Tremblay v. OpenAI, Inc., 2023). Inequity in access to technology is also a concern, especially when AI is introduced into classrooms without addressing issues related to device availability or students’ digital literacy. Automated decision-making can also raise ethical concerns, such as grading papers without adequate human oversight.
In educational settings, we could continue exploring the numerous risks documented in the literature, but that would go beyond the scope of this article. The main point is that all these risks can be mitigated as long as teachers, students, and researchers know which tools to use, what their limitations are, and how to minimize the associated dangers. By considering these factors and acting carefully, we can effectively harness AI’s potential in the classroom.
Beyond the Virtual Assistant
To continue, I want to share some opportunities I have specifically explored using ChatGPT Plus, the paid version of OpenAI’s ChatGPT. Although I discuss my particular experiences with this version in this article, I want to highlight that many of the prompts and uses I mention are also valid for other LLMs with similar features. My goal is to show how AI can create various opportunities in educational settings and help transform different teaching and learning practices.
Creating evaluation rubrics
One of the first uses I want to mention, though not the first I discovered, is creating assessment rubrics for activities with students. ChatGPT Plus has become a valuable tool for generating analytical rubrics, especially when I need to clarify the criteria and achievement levels for a specific task. By providing the model with the desired number of achievement levels and the specific criteria I want to assess, the AI creates detailed descriptions for each level. It is important to note that, to get good results, I usually give the model the full activity instructions the students received, which helps it better understand the context and the nature of the task. This adaptability and precision have saved me a lot of time and allowed me to focus on teaching strategies rather than the logistics of assessment.
Custom chatbots for feedback
Another use I have found highly enriching is creating custom chatbots to give feedback before students submit their work. The paid version of ChatGPT allows for the creation of custom GPT bots (OpenAI, 2023b), configured with specific instructions. In my case, I developed a chatbot that provides suggestions to improve the writing and clarity of students’ texts. Students can then paste their text into the chatbot and receive detailed feedback on specific areas I have predefined, such as punctuation, potential contradictions, unclear examples, and more.
To ensure that the use of this system is both formative and ethical, I give the chatbot two clear instructions: (1) not to directly correct students’ texts; that is, the chatbot should refuse to rewrite the text if asked and should only provide suggestions that help students improve on their own; and (2) to indicate the expected level of rigor based on the student’s academic level. This way, the feedback aligns with the course standards.
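For readers who would rather script this kind of reviewer than configure a custom GPT in the ChatGPT interface, the same two rules can be approximated as a system message sent through the OpenAI API. The sketch below is only illustrative: the model name and the exact wording of the instructions are my own assumptions, not the configuration used in my course.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = (
    "You review undergraduate student texts and give formative feedback on "
    "punctuation, possible contradictions, and unclear examples. "
    "Rule 1: never rewrite or correct the text, even if asked; only point out "
    "issues and explain why they matter. "
    "Rule 2: calibrate your level of rigor to an undergraduate course."
)

def feedback(student_text: str) -> str:
    """Return formative comments on a student's draft."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; any chat-capable model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": student_text},
        ],
    )
    return response.choices[0].message.content

print(feedback("Paste the student's draft here."))
```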
Using documents in custom GPTs
A variation of personalized chats involves adding documents. ChatGPT Plus allows GPTs to work directly with user-uploaded documents, expanding the range of possibilities. For example, in a literature review activity, I pre-selected several key texts in both Spanish and English and uploaded them to the chatbot. Using a link I provided, students could access this chatbot without needing to be paid subscribers themselves (OpenAI, n.d.-b), and it gave them specific answers to questions about the texts and their theses.
This not only allows activities like quick literature reviews based on pre-selected sources but also eliminates language barriers and offers a more guided learning experience. The chatbot can even be configured to cite the specific article and page where it retrieved the information, improving transparency and proper academic citation.
Exploring GPT bots for course management
In a less formal experiment, I uploaded a complete course syllabus into a custom chatbot to answer students’ common questions about the course. This included queries such as: What happens if I miss class? When is the next midterm? Or what is the reading due tomorrow? Although the chatbot was never used in an actual class, the results were surprising. The system was able to respond accurately most of the time, even advising that some information should be discussed directly with the professor or instructor, as I had directed. These kinds of tools could be especially helpful in reducing the administrative workload in large courses.
Generating test-type questions
Another important use has been creating multiple-choice questions for assessments. While I personally prefer other types of evaluations, I understand that in certain situations—such as self-contained courses or diagnostic tests—these can be useful. ChatGPT allows you to generate question banks from specific readings and customize the number of options, difficulty level, and topics to be tested. This has helped me quickly develop a question database that aligns with the content being taught.
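If a scriptable workflow is preferred over the chat interface, the same idea can be sketched as an API call that returns the question bank as JSON, ready to be imported into an LMS or a spreadsheet. This is a hypothetical sketch: the model name, the JSON field names, and the file name are my own assumptions.

```python
import json
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

PROMPT = """Based on the reading below, write 5 multiple-choice questions that test
analysis and deep comprehension. Reply only with JSON shaped like
{{"questions": [{{"question": "...", "options": ["a", "b", "c", "d"], "answer": 0}}]}}.

READING:
{reading}"""

def question_bank(reading: str) -> list[dict]:
    """Ask the model for a small bank of multiple-choice questions."""
    response = client.chat.completions.create(
        model="gpt-4o",                            # assumed model name
        response_format={"type": "json_object"},   # request parseable JSON
        messages=[{"role": "user", "content": PROMPT.format(reading=reading)}],
    )
    return json.loads(response.choices[0].message.content)["questions"]

with open("reading.txt", encoding="utf-8") as f:   # hypothetical source text
    for q in question_bank(f.read()):
        print(q["question"])
```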
Supporting teacher creativity
Another key benefit of this technology is its ability to enhance teacher creativity. When I need to organize a class session or plan activities that evaluate specific learning outcomes, ChatGPT has consistently offered new ideas. Sometimes, when creativity is limited by routine or workload, this tool can provide a fresh perspective that improves my own ideas and enriches the students’ experience.
Standardizing instructions and organizing the LMS
When it comes to organizing routine tasks like updating the LMS or creating instructions, ChatGPT has helped me standardize the structure of activities and ensure that instructions are consistent throughout the course. For example, I can ask the model to write instructions for an activity following a specific style and structure, inspired by previous activities I have personally curated. This kind of assistance, though seemingly minor, makes a big difference in the flow and clarity of the course.
Practicing oral skills and reviewing content
As a student—in a continuous learning effort—and also as a recommendation for my students, I have explored other uses. The mobile version of ChatGPT has a conversational voice chatbot feature (OpenAI, 2023a), useful for practicing speaking skills. For example, you can practice pronouncing a foreign language or rehearsing a speech. Additionally, in preparation for exams, sharing notes with the chatbot and asking it to generate quizzes has been an effective way to review concepts and ensure I fully understand the content.
These are just a few of the many opportunities AI provides in education. Additionally, it is important to highlight that the United Nations Educational, Scientific and Cultural Organization (UNESCO) has also recognized and promoted AI’s potential in education and research. Its Guidance for Generative AI in Education and Research (2024b) explores various applications, from research project consulting to course and curriculum co-design. This comprehensive approach emphasizes the importance of implementing AI technologies ethically, equitably, and in a human-centered way. This stance complements and expands the strategies outlined here to improve and transform teaching practices.
In line with this effort, UNESCO (2024a) developed the AI Competency Framework for Students, a key initiative aimed at preparing both educators and students to integrate these tools into learning. The framework outlines twelve competencies divided into four main areas: a human-centered mindset, AI ethics, AI techniques and applications, and AI system design. It also defines three levels of progression—understand, apply, and create—that enable students to move from basic understanding to innovative use of these technologies. This approach offers a structured foundation for educational institutions to effectively and contextually foster the development of AI skills.
In short, this initial overview shows how versatile AI systems like ChatGPT can be, transforming practices and enhancing processes in the classroom and beyond. Later, I will share some specific examples of prompts and configurations that can help implement these strategies in various educational settings.
Do we urgently need a paradigm shift?
One of my biggest concerns, both as a professor and as a student—and a common worry among many colleagues—is whether generative AI threatens critical thinking. In particular, I have always maintained a clear stance: AI should not replace activities that require ethical judgment and deep reflection. It is a powerful tool, but its use is not unlimited, nor should it be applied carelessly. We must be cautious when using it, avoiding the temptation to underestimate or overestimate its capabilities.
As one of my mentors, María Lorena Flórez Rojas (who, incidentally, introduced me to the world of AI, something I am grateful for), used to say, echoing one of her own mentors: neither techno-fascination nor techno-phobia. This phrase resonates deeply with me, especially now, as technology seems to advance so quickly that it often causes us to react in extreme ways.
It is undeniable that AI cannot replace the development of soft skills in students; however, we should not underestimate the potential of these tools to help us improve those skills. In fact, their potential can be significant if we know how to use them correctly and guide our students to utilize them in a critical and reflective manner.
On the other hand, I believe we should not view tools like ChatGPT simply as “magicians” that automatically generate responses without any thought. Although AI often seems almost magical, the reality is that it also requires a human “magician” to guide it. Achieving good results with generative AI inevitably involves responsibility: selecting the right tool for the task, verifying if the model has up-to-date data, and understanding its limitations (Equipo proyecto IA-Uniandes, 2024; Flórez Rojas, 2023; Universidad de los Andes, 2023, n.d.). Even more importantly, our instructions need to be clear and precise, demonstrating an understanding of the context and the potential of the tool we are using. If our instructions are vague or poorly formulated, the results will fall short. In other words, the value of the output largely depends on the quality of the input we provide.
We cannot, as the saying goes, cover the sun with a finger. The professional world is already turning to AI tools to make processes more efficient, and the academic sector cannot remain oblivious to this. In conversations with colleagues outside of academia, they tell me their companies are already using AI tools to automate tasks like contract drafting and case law review—tools such as Harvey AI (n.d.), Vincent AI (vLex, 2024), and SilvIA (Legis Colombia, 2023). Even in the Colombian judicial system, there are ongoing discussions about whether judges should incorporate generative AI into their decisions (Consejo Superior de la Judicatura, 2024; Sentencia T-323, 2024). These trends make us consider the need for our students to develop skills to effectively use these tools. If we want them to meet future job market demands, understanding the critical use of generative AI is becoming an essential skill.
In a recent talk, Raquel Bernal, rector of the Universidad de los Andes, shared a phrase that still resonates with me. I apologize if my paraphrasing is not perfectly accurate, but the main idea was clear: she urged us to reflect on our role as teachers in the age of AI. If this technology can, in many cases, provide better explanations on a specific topic, then what is our role as educators? This is a complex and challenging question, but I believe it opens the door to an important exercise in self-reflection. It encourages us to adopt a curious attitude toward AI, learn to use these tools effectively, and explore how we can keep serving as guides and facilitators of learning in a rapidly changing environment.
I often get questions about how to make assessments “AI-proof” or how to tell if an assignment was created with the help of a tool like ChatGPT. However, I wonder if these are truly the right questions to ask. I propose we consider others: Should we be designing assessment activities that a language model can easily answer? Why are we so concerned about whether a student used AI to complete an assignment, rather than focusing on how to use these tools for meaningful learning goals?
These reflections were essential before beginning work on the prompts I will share below. Each was crafted with the intention of using AI as an ally to enhance learning, not to replace or diminish it. I believe that when we find the right balance, AI can be a powerful tool that not only simplifies certain processes but also fosters new ways of thinking, learning, and creating.
Experiences: Prompts
In this section, I want to share a series of prompt examples that I find useful in educational settings to enhance teaching and learning. These prompts have been developed and tested under specific conditions: in ChatGPT sessions with the memory feature disabled, so the model does not retain conversation history. This setup provides greater flexibility, reproducibility, and confidence in the outcomes of each interaction.
The prompts below come from my personal research into prompt engineering techniques (OpenAI, n.d.-a; Universidad de los Andes, 2024). This involved an in-depth study of how to effectively craft questions and requests to maximize AI’s potential in educational settings. Ultimately, despite these strategies, I have found that creating good prompts is mainly a process of trial and error.
It is important to emphasize that these examples should not be viewed as strict formulas but as starting points that each teacher, researcher, or student can adapt and explore based on their own needs and educational environment. Flexibility is essential in this field, and the power of AI lies in its ability to be customized for each specific situation.
The following prompts are designed to spark creativity, critical thinking, and personalized learning. I hope they serve as a helpful tool and inspire readers to experiment and carve their own paths in the expansive world of AI-powered education. Feel free to customize and adapt them to meet the specific needs of your classrooms and environments. AI is a versatile and powerful partner, and these prompts offer a foundation for fully exploring its potential.
Creating rubrics
Hello, Chat! I want you to act as an expert in education and learning assessment, specializing in [the topic in question]. I need your help in creating a detailed analytical rubric for an activity where students must [clear instructions for activity3].
The rubric must include four achievement levels: excellent, high, medium, and low. The criteria evaluated in this activity are as follows:
Criterion 1: [brief description of specific expectations].
Criterion 2: [brief description of specific expectations].
Criterion 3: [brief description of specific expectations].
I request that you include clear and specific descriptions for each level of achievement in each criterion so that the expected performance level is clear.
Do you need any additional information before we begin?
Generation of test-type questions
Hello, Chat! Assume the role of an expert in pedagogy and learning assessment. I will attach a file with a text that I need you to review. Help me create five multiple-choice questions for a reading comprehension activity aimed at verifying analysis and deep understanding of the content.
Instructions for the questions:
[Specify the number of answer options per question, the difficulty level, and the topics or sections of the text to be assessed.]
Do you need any additional information before starting?
Organizing the LMS
Hello, Chat! Act as an expert in pedagogy, interface design, and LMS platform development. I need you to organize the information for week [week number] of the course [course name], following the style and structure of previous weeks.
Instruct students to review the materials below before the session. Leave space for the links to each reading.
Students are required to finish the activity [name of activity] and turn it in by [deadline].
[Include any other sections or elements you find helpful for organizing the LMS, according to your pedagogical practices.]
To follow the same format and structure as previous weeks, here is the content for week 1 as an example:
[Previously designed text for week 1]
Do you have any questions about these instructions or notice any missing important details?
Creating instructions
Hello, Chat! I want you to act as an expert in education and instructional design. I need you to write clear and detailed instructions for an activity I will share with my students.
In this activity, I expect students to [description of what they should do: group organization, reading materials, specific actions, timelines or deadlines, assessment criteria, etc.].
It is important that you follow the same style and structure as the previous instructions so that students can recognize the format. Below, I provide an example of earlier instructions, so you can maintain consistency in tone, language, and format.
[Text of previous instructions]
Do you have any questions about the activity or need more details to create the instructions?
Personalized feedback for students
I used the following prompt when creating custom GPTs. As you can see, it has a different structure, tailored to ChatGPT’s custom GPT feature. However, a similar prompt could be used with the basic version of the model, followed by the student’s work. This way, it could be shared with students so they can submit their own exercises for feedback.
This GPT will assist in reviewing and commenting on [brief description of the activity]. Its purpose is to [description of expectations for what the GPT should do]. It should be both critical and constructive, emphasizing opportunities for improvement in a professional and encouraging manner. It’s important that it does not give instructions on how to improve the written text; only potential issues and reasons for necessary adjustments will be noted.
Special focus will be on providing clear and specific feedback. Feel free to ask for clarification if any part of the deliverable is unclear or if more context is needed for a thorough evaluation.
This GPT will assess the work based on the following criteria:
[Include the points you want to emphasize in the feedback or evaluation rubric]
The GPT will provide its comments in a detailed and comprehensive manner. It will organize its feedback according to all previous criteria and present it in bullet points. If any criterion is satisfied, the GPT will acknowledge this.
Before starting, it will always greet the author and provide a general comment on the text. In this initial section, it will include a warning in italics stating that its role is purely advisory and that it offers suggestions which the author may choose to accept or ignore in their final submission. It will also mention that the course teaching team will perform the final review and may consider other elements not seen in the chat.
If the author asks for suggestions for improvements or examples, the GPT should reply that its role is limited to identifying opportunities for improvement and explaining why, but it will not offer rewriting suggestions. It’s important that the GPT never provides specific advice on how to rewrite the text; its additional guidance will only include general guidelines for the authors to follow on their own.
Some Conclusions
Throughout this article, I have aimed to present a balanced view on the use of AI in educational settings: its challenges, risks, and, most importantly, its opportunities. AI, especially LLMs like ChatGPT, is a powerful tool that can transform the classroom and the teaching-learning process when used critically, knowledgeably, and ethically. While AI has raised concerns, particularly regarding authorship and critical thinking development, it also offers a unique opportunity to improve learning and teaching if we learn how to use it effectively.
One of the main points I want to emphasize is that AI should not be seen as a replacement for creativity or academic effort, but rather as a supplement. Although there is a risk of losing skills like critical thinking, I firmly believe the true danger is in not guiding our students to use these tools thoughtfully and intentionally. When teachers adopt a proactive and curious attitude instead of a reactive or restrictive one, we can integrate AI in ways that boost our students’ abilities rather than hinder them. AI allows us to achieve levels of personalization and support that were once only possible in small groups, and this has great transformative potential.
I also want to clarify that the role of educators is not diminished by the presence of AI; instead, it evolves and may even be strengthened. AI’s ability to provide detailed explanations or automatically generate content does not make us less essential. On the contrary, it enables us to focus on parts of learning that truly need empathy, ethics, and human guidance. The challenge is not to compete with AI; it is to leverage it to become better teachers, more effective facilitators of learning, and to help our students focus on the skills that make us uniquely human.
Ultimately, the integration of AI into education should begin with the understanding that it is not about replacing or automating learning, but about enhancing it. To achieve this, educators need to act with judgment, constructive skepticism, and creativity. We must learn to stop viewing AI as a shortcut or a threat and instead see it as an ally. The uses and prompts I have shared aim to show how we can achieve this integration practically and thoughtfully. I do not have all the answers, but I hope I have shared some ideas that can inspire further exploration and innovation in the classroom.
The ability of LLMs to create unique language combinations is similar to what Borges described in “The Library of Babel” (2012); however, the true difference lies in our ability to ask meaningful questions. This is the main role of the teacher: to teach students how to ask critical and transcendent questions in a world where, in theory, all information is already accessible. AI has enormous transformative potential, but that potential is only realized when humans—especially educators—guide it toward clear and contextualized goals.
The examples mentioned in this text, such as creating personalized chatbots, generating rubrics, or developing prompts to promote critical thinking, aim to put into practice a core belief: education is about more than just transferring information; it is about inspiring the development of skills that help students interpret and understand that information. This is where AI shows its transformative power by helping us personalize learning experiences, providing detailed feedback, and enabling students to explore new methods while enhancing metacognitive skills.
As Buolamwini (2016), Flórez Rojas (2023), Gutiérrez (2023), and other researchers have highlighted, it is essential not to lose sight of the biases inherent in AI systems. Teachers’ decisions about how, when, and for what purposes to use these tools should be informed and deliberate. We must avoid both uncritical fascination and outright rejection of the technology; instead, a balanced approach that involves critical and reflective thinking is necessary. As educators, we have a responsibility to examine how AI affects educational processes and to develop strategies to mitigate associated risks, such as reinforcing biases or disseminating misinformation.
To conclude, I want to revisit the question raised by Raquel Bernal in her discussion about new educational models (Universidad de los Andes, 2024): If AI can provide better explanations in some cases, what is the role of the teacher?
We are experiencing a profound change that, like any major shift, brings uncertainty. Still, I believe that if we approach AI with a mindset of curiosity, respect, and a willingness to learn, we will be able to adapt. Moreover, we can lead this change toward an education better suited to the challenges of today and tomorrow. AI is here to stay; the important thing now is how we will use it to improve our teaching practices and our students’ learning.
References
Alexander, D. (2024, April 16). Why ChatGPT-4’s score on the bar exam may not be so impressive. New York State Bar Association. https://nysba.org/why-gpt-4s-score-on-the-bar-exam-may-not-be-so-impressive/
Arredondo, P. (2023, April 19). GPT-4 passes the bar exam: What that means for artificial intelligence tools in the legal profession. Stanford Law School: Legal Aggregate. https://law.stanford.edu/2023/04/19/gpt-4-passes-the-bar-exam-what-that-means-for-artificial-intelligence-tools-in-the-legal-industry/
Basile, J. (n.d.). About. The Library of Babel. https://libraryofbabel.info/About.html
Borges, J. L. (1998). “The Library of Babel.” In Jorge Luis Borges. Collected Fictions, trans. A. Hurley (pp. 112–118). Viking.
Buolamwini, J. (2016, November). How I’m fighting bias in algorithms [video]. TEDxBeaconStreet. https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms?subtitle=en&lng=es
Consejo Superior de la Judicatura. (2024). ABC: Sentencia T-323. Colombia. https://www.ramajudicial.gov.co/documents/10635/155269343/ABC_SentenciaIA_T323De2024.pdf/a2006b6d-58f1-beb0-31f8-1b04f398bf68?t=1727383419804
Equipo Proyecto IA-Uniandes. (2024). Lineamientos para el uso de inteligencia artificial generativa (IAG) en la Universidad de los Andes. Universidad de los Andes. https://secretariageneral.uniandes.edu.co/images/documents/lineamientos-uso-inteligencia-artificial-generativa-IAG-uniandes.pdf
Faguy, A. (2023, May 18). New York City public schools reverses ChatGPT ban. Forbes. https://www.forbes.com/sites/anafaguy/2023/05/18/new-york-city-public-schools-reverses-chatgpt-ban/
Flórez Rojas, M. L. (2023). Pensamiento de diseño y marcos éticos para la inteligencia artificial: una mirada a la participación de las múltiples partes interesadas. Desafíos, 35(1), 1-31. https://doi.org/10.12804/revistas.urosario.edu.co/desafios/a.12183
Flórez Rojas, M. L., & Vargas León, J. (2020). El impacto de herramientas de inteligencia artificial: Un análisis en el sector público en Colombia. In C. Aguerre (ed.), Inteligencia artificial en América Latina y el Caribe: ética, gobernanza y política (pp. 206–225). Centro de Estudios en Tecnología y Sociedad (CETyS), Universidad de San Andrés.
Gutiérrez, J. D. (2023). Lineamientos para el uso de inteligencia artificial en contextos universitarios (v 5.0). GIGAPP Estudios Working Papers, 10(272), 416–434. https://www.gigapp.org/ewp/index.php/GIGAPP-EWP/article/view/331
Harvard University Information Technology. (2023, August 30). Getting started with prompts for text-based generative AI tools. Harvard University. https://www.huit.harvard.edu/news/ai-prompts
Harvey AI [Generative AI platform]. (n.d.). https://harvey.ai
Kannan, P. (2024, October 3). How harmful are AI’s biases on diverse student populations? Stanford University Human-Centered Artificial Intelligence. https://hai.stanford.edu/news/how-harmful-are-ais-biases-diverse-student-populations
Korn, J., & Kelly, S. (2023, January 6). New York City public schools ban access to AI tool that could help students cheat. CNN Business. https://edition.cnn.com/2023/01/05/tech/chatgpt-nyc-school-ban/index.html
Legis Colombia. (2023, May 5). Legis presenta a SilvIA el primer chat jurídico con inteligencia artificial [video]. YouTube. https://www.youtube.com/watch?v=PYBvGgYrHpk
López Baroni, M. J. (2019). Las narrativas de la inteligencia artificial. Revista de Bioética y Derecho, 46, 5–28. https://doi.org/10.1344/rbd2019.0.27280
Newton, P., & Xiromeriti, M. (2024). ChatGPT performance on multiple choice question examinations in higher education: A pragmatic scoping review. Assessment & Evaluation in Higher Education, 49(6), 781–798. https://doi.org/10.1080/02602938.2023.2299059
Nolan, B. (2023, January 30). Here are the schools and colleges that have banned the use of ChatGPT over plagiarism and misinformation fears. Business Insider. https://www.businessinsider.com/chatgpt-schools-colleges-ban-plagiarism-misinformation-education-2023-1
OpenAI. (2023a, September 25). ChatGPT can now see, hear, and speak. https://openai.com/index/chatgpt-can-now-see-hear-and-speak/
OpenAI. (2023b, November 6). Introducing GPTs. https://openai.com/index/introducing-gpts/
OpenAI. (2024, September 12). Introducing OpenAI o1-preview. https://openai.com/index/introducing-openai-o1-preview/
OpenAI. (n.d.-a). Text generation and prompting. OpenAI Platform. Accessed November 2024. https://platform.openai.com/docs/guides/text#prompt-engineering
OpenAI. (n.d.-b). Using GPTs on our free tier - FAQ. OpenAI Help. Accessed November 2024. https://help.openai.com/en/articles/9300383-using-gpts-on-our-free-tier-faq
Sentencia T-323 de 2024. (Constitutional Court, 2024). Sala Segunda de Revisión, Juan Carlos Cortés González [magistrado sustanciador]. Colombia. https://www.corteconstitucional.gov.co/relatoria/2024/T-323-24.htm
Sharma, K. (2018, March). How to keep human bias out of AI [video, TEDxWarwick]. TED. https://www.ted.com/talks/kriti_sharma_how_to_keep_human_bias_out_of_ai?subtitle=en&lng=es&geo=es
Stanford University. (n.d.). Co-STORM [Generative AI platform]. https://storm.genie.stanford.edu
Tremblay v. OpenAI, Inc., 3:23-cv-03223. (District Court, N.D. California, 2023). United States. https://www.courtlistener.com/docket/67538258/tremblay-v-openai-inc/
United Nations Educational, Scientific and Cultural Organization (UNESCO). (2024a). AI Competency Framework for Students. https://doi.org/10.54675/JKJB9835
United Nations Educational, Scientific and Cultural Organization (UNESCO). (2024b). Guidance for Generative AI in Education and Research. https://unesdoc.unesco.org/ark:/48223/pf0000386693
Universidad de los Andes. (2023, March 18). Inteligencia artificial en la educación [video]. YouTube. https://www.youtube.com/watch?v=fDiK8awDOCU
Universidad de los Andes. (2024). Chateando con la IAG. Dirección de Innovación y Desarrollo Académico (DIDACTA). https://view.genially.com/667f11db4f9504001433187f
Universidad de los Andes. (n. d.). ¿Cuáles son las implicaciones de usar la IAG en procesos formativos? Dirección de Innovación y Desarrollo Académico (DIDACTA). https://didacta.uniandes.edu.co/inteligencia-artificial-en-uniandes/
vLex. (2024, October 31). Vincent AI [Generative AI platform]. https://vlex.com/products/vincent-ai
David Orlando Niño Muñoz
Lawyer and professional in Government and Public Affairs from Universidad de los Andes, with an academic background in Psychology and Financial Administration. He is currently pursuing a specialization in Commercial Law at the same university, and his interests include education, the use of technology in law, legal design, and private law. He recently published the article “La concesión mercantil de espacio, la membresía coworking y ¿otras formas de saltarse las protecciones al arrendatario de local comercial?” (2023) in the Boletín de Actualidad of the Francesco Galgano Contract Law Seedbed, Universidad de los Andes.
* This article is not part of any thesis or similar document; it was written exclusively for the “Artificial Intelligence in Education” call for papers published by Voces y Silencios: Revista Latinoamericana de Educación. No funding was received, and there are no conflicts of interest to disclose. All correspondence regarding this work should be addressed to David Orlando Niño Muñoz (do.nino@uniandes.edu.co). The article was translated with funding from the Vice President for Research and Creation at Universidad de los Andes (Colombia). This article was first published in Spanish as: Niño Muñoz, D. O. (2025). PedagogIA: oportunidades y desafíos para enseñar en la era de la inteligencia artificial. Voces y Silencios: Revista Latinoamericana de Educación, 16(2), 151-168.
2 These are pieces of information, phrases, or questions entered into a generative AI tool that greatly impact the quality of responses. When given a prompt, the AI model analyzes the input and generates a response based on patterns it learned during training. More detailed prompts often enhance the quality of the results (Harvard University Information Technology, 2023).
3 If the model allows it, attach the document with the instructions.