Note: This is the second piece in a series aimed at helping leaders identify and build the human skills we need to successfully navigate the AI era. The first is here.
In 1962, President John F. Kennedy spoke at Rice University and made a heartfelt case for landing on the moon: “We set sail on this new sea because there is new knowledge to be gained, and new rights to be won, and they must be won and used for the progress of all people.”
But his enthusiasm came with a warning: “Space science, like nuclear science and all technology, has no conscience of its own. Whether it will become a force for good or ill depends on man.”
Today, AI is our race to the moon. While companies scramble to invest, workers are bracing for impact. A recent Pew survey revealed that American workers are more worried than hopeful about how AI will affect their jobs. According to the World Economic Forum, two-thirds of employers plan to hire AI-skilled workers, while 40% expect to shrink their workforce through automation. WEF predicts that 90 million global jobs will disappear in the next five years.
Whether AI becomes a force for good or ill depends, as Kennedy put it, on us. We must be firmly in control, making the right decisions, maintaining the moral high ground, and ensuring AI doesn’t go too far. And the only way to do that is by strengthening the human skills we need to safeguard our future.
Critical Thinking: The Baloney Detector
A teacher I once knew had a singular goal for his students—not just to teach them facts, but to supercharge what he called their “baloney detectors.” He wanted them to question, to analyze, to push past surface answers. He pushed them to become instinctive skeptics with a hunger for truth.
Never has there been a greater need for a well-honed baloney detector. AI generates information with stunning fluency—but without citation, logic, or accountability. The World Economic Forum ranks AI-driven misinformation as the number one global risk over the next two years. And yet, a sobering study from Microsoft and Carnegie Mellon suggests that relying on AI weakens our critical thinking skills. When we abdicate our thinking to AI, our very ability to think atrophies. Left unchecked, AI doesn’t just answer our questions—it erodes our ability to ask the right ones.
If we’re not vigilant, we risk a future where the ease of AI makes us intellectually complacent. The world doesn’t need more passive consumers of information—it needs skeptics, investigators, and thinkers who challenge AI’s output instead of blindly accepting it.
Ethical Judgment: AI Doesn’t Have A Moral Compass
AI can be programmed to align with human values. But whose values? Which culture? What history? Alignment isn’t as simple as flipping a switch. AI doesn’t arrive neutral—it absorbs the biases, blind spots, and inequities embedded in its training data.
We’ve already seen the consequences: A healthcare AI prioritizing certain demographics for life-saving treatments. A mortgage AI systematically rejecting applicants from specific neighborhoods. An AI-powered surveillance system in Italy that failed to anonymize its data, exposing private citizens. These systems weren’t designed to discriminate. But without human oversight, bias isn’t just possible—it’s inevitable.
We already know AI can be used for harm—deepfakes, AI-driven scams, and mass disinformation campaigns are on the rise. But even well-intentioned AI can have devastating consequences. The only safeguard is human ethical judgment—people who can intervene, correct, and question AI’s decisions before they shape our lives in irreversible ways. AI shouldn’t get the final say. We should.
Empathy: The Last Line of Defense
The recent murder of the UnitedHealthcare CEO brought renewed attention to disturbing allegations about AI’s role in the company’s insurance claim denials. The system was reportedly designed to reject an outsized number of legitimate claims, relying on the fact that many people wouldn’t dispute them.
This isn’t an anomaly. Companies are increasingly using AI for hiring screens, employee assistance programs, and lower-level management—and the results aren’t always fair. AI-driven automation can widen inequality, creating a world where the wealthy get human attention while the poor are triaged by chatbots. And the more we automate, the more we risk losing touch with human suffering. As AI increasingly powers these critical decisions affecting people who can’t fight back, human empathy is the last line of defense.
In an era of polarization, war, and crisis, stress shrinks our capacity for empathy. But all is not lost: researchers believe empathy can be learned. In nearly all cases, it starts with exposure to people whose experiences differ from our own. And remember that empathy is nuanced and multi-layered, involving both thinking and feeling. Learning to be more empathetic is about learning to care.
It doesn’t happen overnight. But expanding our empathy is a critical step to ensuring that technology works, in Kennedy’s words, for “the progress of all people.”
[Image: Apollo 11 astronaut Edwin E. Aldrin, Jr. on the moon, July 20, 1969. Credit: Getty Images]
Sixty-three years after Kennedy’s speech, technology has gotten so advanced it feels like the moon has come to us. But what hasn’t changed is our hunger to extend human prowess with new technologies.
Yet these new technologies could get away from us if we don’t hold them in check. Just because AI can do something doesn’t mean it should. For so many applications, AI is easy, fast, and thorough. But it’s not always right. Deciding when AI is right—and when it has gone too far—is not a technical question. It’s a human one. And it demands critical thinking, ethical judgment, and empathy.
We don’t need to race AI. We need to lead it. The future won’t be decided by algorithms, but by the choices we make. It’s up to us to sharpen the skills that keep AI in check so that, in the months and years ahead, we make the right—human—calls.