Anthropic CEO Dario Amodei: AI Likely Smarter Than Humans This Decade

Artificial intelligence (AI) is likely to be smarter than most Nobel Prize winners before the end of this decade, Anthropic CEO Dario Amodei said.

Speaking with Bloomberg about his essay of predictions about AI, Amodei said he thinks 2026 is the earliest, though not necessarily the most likely, time that "powerful AI" could arrive, per a Friday report.

Amodei uses the term "powerful AI" instead of the commonly used "artificial general intelligence" (AGI), according to the report.

“I can tell you a story where things get blocked and it doesn’t happen for 100 years. That is possible,” Amodei said in the report. “But I would certainly bet in favor of this decade.”

AGI refers to machines that can think and reason like humans, adapt to new challenges and learn from experience, PYMNTS reported in April.

Most AI systems in use today are termed narrow or weak AI, designed to excel at specific tasks such as facial recognition or language translation.

Elon Musk said in April that he believes AI could surpass the intelligence of the smartest human beings as soon as 2025 or by 2026.

Musk’s prediction in an interview on X highlighted the accelerating race toward developing AI that mimics and exceeds human cognitive abilities, and raised questions about the nature of intelligence, ethical boundaries, and the future relationship between humans and machines, PYMNTS reported at the time.

In July, OpenAI board member and Quora CEO and founder Adam D’Angelo said the advent of AGI is likely to happen “within five to 15 years” and will be a “very, very important change in the world when we get there.”

D’Angelo’s comments followed a report from earlier in July that OpenAI had developed a five-level classification system to track its progress toward building AGI.

Meta’s chief AI scientist, Yann LeCun, said in May that large language models (LLMs) have a limited grasp of logic and will never reach human intelligence.

These models “do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan … hierarchically,” LeCun said.