To improve artificial intelligence (AI), it might pay to mimic the human brain.
Researchers have developed a novel training method called “Quiet-STaR” that improves the reasoning abilities of AI systems. The approach involves instructing AI to generate multiple internal rationales before responding to conversational prompts, mirroring how humans often think before speaking. It’s among several innovative strategies to enhance AI by incorporating reasoning similar to that of humans.
“The performance of AI will significantly improve if it can think like a human,” Venky Yerrapotu, the CEO of 4CRisk, which makes AI products for the compliance and risk sector, told PYMNTS in an interview. “Human-like thinking is unique and complex and communicates with context, nuance and implied meanings. AI with the capability to seamlessly understand human intent (and we are seeing LLMs [large language models] getting to this stage) can execute complex queries.”
Unlike conventional AI chatbots such as ChatGPT, which generate responses without considering various possibilities for the next steps in a conversation, Quiet-STaR enables AI systems to anticipate future discussions and learn from ongoing ones, according to a new paper that has not yet been peer-reviewed. The method works by having the AI generate a mix of predictions with and without rationales, select the best answer, and discard the rationales that did not help.
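To make that procedure concrete, here is a minimal Python sketch of the keep-or-discard loop described above. It is not the authors’ code: the two model functions are stand-in stubs (a real run would use an LLM such as Mistral 7B to generate the rationales and score the answers), and all of the names are hypothetical.

import random

def generate_rationale(prompt):
    # Stub: a real LLM would produce a short hidden "thought" continuing the prompt.
    return f"<rationale for: {prompt[:25]}...>"

def answer_logprob(prompt, answer, rationale=None):
    # Stub: a real LLM would return the log-probability of the answer tokens,
    # optionally conditioned on the generated rationale.
    return random.uniform(-5.0, -1.0) + (random.uniform(0.0, 1.5) if rationale else 0.0)

def quiet_star_style_step(prompt, answer, num_thoughts=4):
    # Predict once without a rationale, then with several candidate rationales,
    # and keep only the rationales that made the correct answer more likely.
    baseline = answer_logprob(prompt, answer)
    kept = []
    for _ in range(num_thoughts):
        thought = generate_rationale(prompt)
        scored = answer_logprob(prompt, answer, rationale=thought)
        if scored > baseline:
            kept.append((thought, scored - baseline))  # helpful rationale
    return kept  # unhelpful rationales are simply discarded

print(quiet_star_style_step("Ann has 3 apples and buys 2 more. How many now?", "5"))

In the paper, that keep-or-discard signal becomes a training reward, so the model gradually learns to generate rationales that actually improve its predictions.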
When applied to the LLM Mistral 7B, the Quiet-STaR-trained version achieved a 47.2% score on a reasoning test, improving from its initial score of 36.3%. Although the AI still struggled with a grade-school math test, scoring only 10.9%, that was nearly double its starting score of 5.9% before training.
The researchers believe this inner monologue training method could lead to more advanced AI systems capable of engaging in more natural and effective conversations. As AI continues to evolve, such advancements in reasoning and learning could have far-reaching implications across various industries and applications.
The lead author, Eric Zelikman, wrote on X, “Excitingly, self-teaching reasoning on diverse web text automatically improves other reasoning! Mistral self-taught by training on web data increases its zero-shot commonsense reasoning accuracy by a third and nearly doubles its zero-shot direct grade-school-math accuracy.”
Humans often rely on intuition, quickly grasping basic concepts like simple math based on past experiences, Binny Gill, founder and CEO of Kognitos, an AI startup that allows workers to automate complex processes using plain English, told PYMNTS in an interview.
AI is beginning to adopt this intuitive approach but faces similar challenges, such as making mistakes. For complex tasks, like multiplying large numbers, people turn to pen and paper to organize their thoughts and keep track of the details.
Gill said the latest AI models, built on the transformer architecture introduced in the “Attention Is All You Need” paper, mimic our ability to concentrate on specific tasks.
“There is a limit to how many things a human brain can pay attention to — the same is true with AI models today,” he said. “Hence humans write calculations down on a piece of paper, and yet get things wrong sometimes. AI models, when asked to ‘think step by step’ or when encouraged to have an inner monologue, are also doing the exact same thing.”
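As a rough illustration of the contrast Gill describes, the sketch below shows a direct prompt next to an “inner monologue” prompt. The ask_model function is a hypothetical placeholder, not a specific vendor API; in practice it would send the prompt to whatever LLM is being used.

def ask_model(prompt: str) -> str:
    # Placeholder: a real implementation would send the prompt to an LLM and return its reply.
    return f"(model response to a {len(prompt)}-character prompt)"

question = "A store sells pens in packs of 12. How many pens are in 7 packs?"

# Direct prompt: the model answers in one shot, leaning on its "intuition."
direct_answer = ask_model(question)

# Inner-monologue prompt: the model is asked to write out intermediate steps,
# much as a person would work 12 x 7 out on paper before stating 84.
stepwise_answer = ask_model(
    question + "\nThink step by step and show each calculation before giving the final answer."
)

print(direct_answer)
print(stepwise_answer)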
Quiet-STaR is only one of several approaches to using human-like thought for AI. Last year, Microsoft researchers introduced a novel AI training technique named “Algorithm of Thoughts” (AoT), which aims to enhance the efficiency and human-like reasoning capabilities of LLMs such as ChatGPT. The method guides the language model on a more effective path to solving problems by using “in-context learning,” much like humans do.
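In broad strokes, that in-context approach looks like the sketch below: the prompt carries a worked example whose exploratory, backtracking style the model can imitate when it reaches the new problem. The exemplar text is invented for illustration and is not taken from the Microsoft paper.

# A hand-written exemplar demonstrating exploration with backtracking.
EXEMPLAR = (
    "Problem: combine 6, 2, 4 and 8 with +, - and * to reach 24, using each number once.\n"
    "Exploration: try 8 * 6 = 48, too large with 4 and 2 left over -> backtrack; "
    "try 6 - 2 = 4, then 4 * 4 = 16, then 16 + 8 = 24 -> every number used.\n"
    "Answer: (6 - 2) * 4 + 8 = 24\n"
)

def build_aot_style_prompt(new_problem: str) -> str:
    # In-context learning: prepend the worked exploration so the model mimics
    # that search pattern when it tackles the new problem in the same response.
    return EXEMPLAR + "\nProblem: " + new_problem + "\nExploration:"

print(build_aot_style_prompt("combine 6, 4, 3 and 2 with +, - and * to reach 24, using each number once."))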
Researchers are striving to equip AI with human-like reasoning partly to address a fundamental challenge in the field: explainable AI. Even the people who build machine-learning models don’t fully understand how those models reach their outputs. The hope is that AI that reasons like a human may offer more transparent insight into its decision-making process.
In a recent study published in Nature Computational Science, researchers from the University of Texas Southwestern Medical Center developed a novel AI approach inspired by the human brain. This innovative method, called “deep distilling,” combines insights from brain network studies with traditional AI techniques that use explainable building blocks.
The AI system operates in a manner reminiscent of a child’s learning process, condensing various types of information into “hubs.” These hubs are then translated into easy-to-understand coding guidelines for human programmers, serving as a simplified explanation of the algorithm’s findings and patterns within the data.
Human-like reasoning for AI might also improve robots’ performance. Covariant recently released RFM-1, a Robotics Foundation Model, which it claims gives commercial robots the ability to reason like humans. The model uses generative AI to give robots a deeper understanding of language and the physical world.
Yerrapotu envisioned that AI, with advancements in human-like reasoning, would soon be able to independently learn and adapt to new environments with little human input.
“AI will develop enhanced reasoning and problem-solving skills,” he added. “It will continue to improve at natural language understanding.”