Despite its power, artificial intelligence (AI) has its limits. Yes, it can accelerate data analysis, streamline processes and personalise learning. But there are also critical moments – often involving creativity, nuance, ethics and empathy – where human judgement and involvement remain irreplaceable.
Teaching: why AI can’t ‘read the classroom’
Teaching is one area where the human element remains central, which makes it crucial to recognise AI's limitations in this domain. While AI-powered platforms can customise lesson plans, adapt to students' learning speeds and provide instant feedback on assignments, they fall short in addressing the emotional and relational aspects of education. For example, while these tools may assist neurodivergent students by presenting information in different formats and supporting executive functions such as planning and organising, they cannot replicate the deeper human connection that defines effective teaching.
Teaching is a fundamentally human endeavour rooted in empathy, intuition and inspiration. An educator does more than deliver content; they interpret the unspoken dynamics of a classroom, responding to subtle cues such as student engagement, confusion or emotional needs. This ability to “read the room” enables teachers to adapt their approach in real time, creating an environment where students feel seen, heard and motivated.
In moments when students require encouragement, guidance or understanding, AI falls short. Machines cannot discern the complexities of a student's emotional state or provide the reassurance that comes from human interaction. Educating, at its core, is about stimulating human growth as much as transferring knowledge – a task that requires the emotional intelligence, creativity and relational depth only humans can provide. While AI can enhance the mechanics of learning, setting it aside becomes vital when the focus shifts to cultivating personal connections, inspiring curiosity and nurturing students' holistic development.
Research: why AI fails the ‘what if’ test
In research, AI can process vast datasets and identify patterns far beyond human capacity, but it is less effective in abstract “thinking” and defining new directions. AI can suggest correlations, but it doesn’t understand the broader context or ask the “what if?” questions that lead to paradigm shifts. For instance, the conceptual leaps that led to quantum mechanics or the structure of DNA required human imagination and philosophical questioning that AI, bound by existing paradigms, cannot replicate.
In my work, I maintain a cautious approach, validating AI-generated results rigorously to uphold research integrity. Ethical considerations, particularly regarding bias in AI models and the responsible use of data, are integral to our practice as researchers.
Consider this: as an associate editor of two peer-reviewed academic journals, I spend much of my time assessing new research projects that fellow researchers have proposed and conducted. It is standard academic practice for researchers to do a literature search before launching a research project, to learn what has already been discovered about the topic they're studying. If they rely on AI to do this search, they may miss papers that would be pivotal to their research, because AI may rely on outdated or incomplete datasets. What's more, AI tools are not value-neutral; they may have biases embedded in their data or algorithms. Finally, how we phrase our questions when formulating AI queries may skew the results.
There are definite times when using AI in research can be dangerous or unethical or constitute bad practice. These scenarios typically arise when AI:
- compromises research integrity (for example, when AI-generated content is presented as original work)
- exacerbates biases (using AI tools that are biased to begin with or that exclude certain populations)
- infringes on privacy (using AI for analysis without participants' consent)
- leads to harm (deploying AI-recommended treatment plans without ensuring their efficacy).
Ethical decision-making: AI lacks empathy and human context
Ethical decision-making is an area where AI faces significant limitations. Recent advancements have demonstrated that AI can assist in diagnosis or support clinical decision-making, but these systems function best when complementing human expertise rather than replacing it. In highly sensitive areas, such as end-of-life care or organ allocation, decisions often involve ethical considerations that transcend data-driven logic. For instance, determining whether to prioritise a patient’s quality of life over the likelihood of extending it may require weighing emotional, cultural and philosophical factors. AI lacks the capacity for empathy, introspection or understanding of these broader human contexts, which are often pivotal in making compassionate and morally sound decisions.
AI is a powerful partner, not a replacement. Its ability to process vast amounts of data and perform repetitive tasks with precision is unmatched, but it lacks the emotional intelligence, creativity and moral reasoning that define humanity. As we integrate AI into more facets of life, let’s acknowledge its strengths while recognising its limitations.
Qin Zhu is an associate professor in the department of engineering education at Virginia Tech. He is a subject matter expert on the ethics and policy of computing technologies and robotics.