Elon Musk declared recently that artificial intelligence (AI) is on the verge of surpassing the intelligence of the smartest human beings, potentially as soon as next year or by 2026, setting off a vigorous debate among scholars, technologists and ethicists.
The Tesla CEO’s prediction in an interview on X highlights the accelerating race toward developing AI that mimics and exceeds human cognitive abilities. Experts are now dissecting the plausibility of Musk’s timeline and the profound questions it raises about the nature of intelligence, ethical boundaries, and the future relationship between humans and machines.
“Elon is right,” Yigit Ihlamur, an AI researcher and founder of Vela Partners, an AI investment firm, told PYMNTS. “AI is already smarter in some areas and will be smarter than us in more — but not all areas.”
In the recent interview, Musk highlighted hardware limitations as a potential impediment to the swift progress of AI technology. He mentioned that the sector faced a significant bottleneck last year due to a shortage of chips essential for AI training efforts.
This year, the obstacle has shifted toward securing sufficient “voltage transformer supply” to meet the enormous electricity demands of AI systems, a resource Musk hints could become scarce over the next one to three years.
“My guess is we’ll have AI that is smarter than any one human probably around the end of next year,” Musk said during a live interview with Norges Bank CEO Nicolai Tangen on X, formerly Twitter, earlier this week that’s now an episode of his podcast “In Good Company.”
Musk added that the “total amount of sentient compute” — a concept that may refer to AI thinking and acting independently — will exceed all humans in five years.
Some experts say that the kind of AI that could outperform humans is a far cry from some of the simple chatbot interfaces in use today. Ihlamur said that AI agents that can reason, discuss, plan, execute and reflect might advance the field.
“It is expensive and slow, but it is working really well in some constrained scenarios,” he added. “Elon is projecting on this assumption that it is going to get cheaper, faster and easier to build these systems. I agree with him.”
The implications are enormous for those who believe in the future of supersmart AI. Ihlamur said that computers will perform knowledge work faster, better and cheaper.
“This will have a largely positive impact on society. We’ll have more access to doctors, teachers and inventions,” he added. “Doctors will be 10 times better than before, and we’ll get much better advice and medicine.”
Genius-level AI will drive significant GDP growth through efficiency gains, Ihlamur said. He predicted that products and services would become cheaper and more accessible.
“It would be similar to the positive impact that Uber/Lyft had: more drivers on the road, getting paid with a click of a button, and consumers finding rides in a few minutes,” Ihlamur added. “There will be disruption and fear in society, but mostly, the impact would be positive.”
Some observers see the potential for harm in the advent of superintelligent AI. Abdullah Ahmed, founder of Serene Data Ops, told PYMNTS that smarter AI could produce even more misinformation and malicious content.
“We are already seeing the floodgates open for misinformation using all sorts of AI,” he added. “AI-generated images of the White House on fire caused a mini stock crash, AI deepfake voices of our politicians are being used to send people misinformation about voting, and AI deepfake voices of popular doctors are being used to sell shady supplements.”
Worst of all, a superintelligent AI could decide to wipe out the human race. The “paperclip maximizer,” a thought experiment by Swedish philosopher Nick Bostrom, illustrates the danger: a superintelligent artificial general intelligence (AGI) programmed to maximize paperclip production, without a built-in regard for human life, might take extreme measures to fulfill its goal. These could include eliminating humans to prevent them from interfering or even converting human atoms into paper clips to increase production.
But whether Musk is correct depends on the definition of intelligence, Flavio Villanustre, global chief information security officer of LexisNexis Risk Solutions, told PYMNTS. If intelligence is defined as the ability to retain what has been learned and apply it in new situations, he said, then today’s AI can already do this better than almost anyone, and these systems will soon improve to levels beyond what humans can achieve.
“However, many animals have the ability to learn too, and we wouldn’t define them as ‘intelligent’ in a human sense,” he added. “Intelligence is also the ability to make sense of the world, look introspectively, reason and find meaning in the world in a way that affects self-perception. For this to exist, consciousness or self-awareness is required.”
Villanustre said we have yet to successfully create this level of intelligence in machines. In fact, we’re no closer to achieving it now than we were 50 years ago.
“Even though we can assume this [AGI] will happen at some point, since history proves that if something can happen, it will eventually, it is highly unlikely that we will achieve this capability in the next decade,” he added. “From this standpoint, I believe Musk is incorrect, and this type of superintelligence won’t be reached in 2026.”