Computer scientist Geoffrey Hinton believes there could be a one in five chance that humanity will eventually be taken over by artificial intelligence.
Hinton, a Nobel laureate in physics who’s been dubbed the ‘godfather of AI’, made the startling prediction in an April 1 interview with CBS News that was aired on Saturday morning.
‘I’m in the unfortunate position of happening to agree with Elon Musk on this, which is that there’s a 10 to 20 percent chance that these things will take over, but that’s just a wild guess,’ Hinton said.
Besides his cost-cutting responsibilities in the federal government, Musk is the chief executive of xAI, the company that made the AI chatbot Grok.
Musk has said AI will become smarter than the entire human race by 2029. He’s also described a future where everyone will be pushed out of their jobs by AI that can do the tasks more efficiently.
Hinton agreeing with Musk’s warnings is alarming, largely because Hinton has arguably contributed more to the birth of artificial intelligence than anyone else.
‘The best way to understand it emotionally is we are like somebody who has this really cute tiger cub. Unless you can be very sure that it’s not gonna want to kill you when it’s grown up, you should worry,’ Hinton said.
The 77-year-old researcher won his Nobel Prize last year for his decades of extraordinary work on neural networks, machine learning models that mimic the way the human brain processes information.
He helped pioneer the approach in landmark 1986 research, and it has since been integrated into the most popular AI products. That’s why, when you converse with ChatGPT or any other AI model, it can feel eerily like you’re talking to another human being.
For the most part, AI models remain disembodied tools trapped in people’s phones and computers that exist only to answer our mundane questions.
But now, some scientists are making the additional leap of lending robot bodies to AI, so these systems can carry out physical tasks in the real world rather than just serving as an online repository of knowledge.
Chinese automaker Chery designed a humanoid robot with the appearance of a young woman and showed it off at Auto Shanghai 2025 on Thursday.
The robot was seen pouring orange juice into a glass at the event. It is designed to consult with people buying cars and give entertainment performances, according to Chinese state media.
And Hinton believes AI will soon be able to do a lot more than serve drinks. Like Bill Gates, he thinks it will revolutionize the fields of education and medicine.
‘In areas like healthcare, they will be much better at reading medical images, for example,’ he said. ‘I made a prediction some years ago that they’d be better by now and they’re about comparable with the experts. They’ll soon be considerably better.’
‘One of these things can look at millions of X-rays and learn from them. And a doctor can’t,’ he said.
He went as far as to say that AI models will eventually be ‘much better family doctors’ that will be able to learn from patients’ familial medical history and diagnose them with greater accuracy.
When it comes to education, Hinton said AI will at some point become the best tutor money can buy.
‘We know that if you have a private tutor, you can learn stuff about twice as fast,’ he said.
‘These things, eventually, will be extremely good private tutors who know exactly what it is you misunderstand and exactly what example to give you to clarify it so you understand. So maybe you’ll be able to learn things three or four times as fast,’ he added. ‘It’s bad news for universities, but good news for people.’
Hinton also believes AI will have a role in mitigating climate change by designing better batteries and contributing to carbon capture technology.
For any of this to come to fruition, AI will need to reach a threshold that experts typically call artificial general intelligence (AGI).
Max Tegmark, a physicist at MIT who’s been studying AI for about eight years, told DailyMail.com in February that AGI is defined as an artificial intelligence that is vastly smarter than humans and can do all work that was previously done by people.
Tegmark thinks humans will be able to make an AGI model before the end of the Trump presidency. Hinton has a more conservative estimate, putting it between five and 20 years from now.
Despite the possible benefits of attaining AGI, the threat remains of what such an independently intelligent creation could be capable of.
Hinton criticized companies like Google, xAI and OpenAI for prioritizing profits over safety.
‘If you look at what the big companies are doing right now, they’re lobbying to get less AI regulation. There’s hardly any regulation as it is, but they want less,’ he said.
Hinton believes AI companies should be devoting far more of their resources to safety research, up to a third of their computing power.
The heads of all three of those companies have acknowledged the danger of AI in one form or another, but Hinton said simply stating their concerns and not taking action won’t cut it.
Hinton was particularly disappointed in Google, where he used to work, for going back on its word to never support military applications for AI.
Beyond discarding its pledge not to use AI for weapons of war, Google also provided Israel’s Defense Forces with greater access to its AI tools after the attacks of October 7, 2023, The Washington Post reported in January.
There are some who are aware of AI’s destructive potential, and many of them have signed the ‘Statement on AI Risk’ open letter.
The 2023 statement reads: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’
Hinton is the top signatory on that letter, alongside OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei and Google DeepMind CEO Demis Hassabis.