DeepMind, Deep Learning And The Dopamine Effect

For the inside scoop on why humans learn so much faster than algorithms, look to dopamine.

A paper in Nature Neuroscience by DeepMind, the Google unit, suggests that the neurotransmitter may be what gives humans an edge over computers. According to the findings, algorithms can learn to play video games of an era long gone: think Pong and other Atari classics.

“But as impressive as this performance is, AI still relies on the equivalent of thousands of hours of gameplay to reach and surpass the performance of human video game players,” said DeepMind on its site. “In contrast, we can usually grasp the basics of a video game we have never played before in a matter of minutes.”

Delving into just how humans can learn so much faster has spawned a theory known as meta-learning, or, as DeepMind put it, "learning to learn." In that process, there are two timescales: the short term, where individuals learn from specific examples, and the longer run, where people absorb the abstract "rules" that tie tasks together. DeepMind further noted that AI is able to approximate meta-learning, and yet "the specific mechanisms that allow this process to take place in the brain are still largely unexplained in neuroscience."
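To make the two timescales concrete, here is a toy sketch in Python. It is an illustration of the idea, not the paper's model, and every name and number in it is invented: a slow, cross-episode update plays the role of gradual weight learning, while a fast, within-episode update adapts to the task at hand from just a few observations.

```python
import random

# Illustrative sketch of two learning timescales, not DeepMind's model.
# Slow timescale: across many episodes, estimate how often "option A"
# tends to be the rewarded one (an abstract "rule" about the task family).
# Fast timescale: within one episode, update a belief after every trial.

random.seed(0)
prior_a = 0.5                 # slow, cross-episode estimate
slow_lr = 0.01                # slow learning rate (like weight updates)

for episode in range(1000):
    rewarded = 'A' if random.random() < 0.8 else 'B'  # in this family, A usually pays
    belief_a = prior_a        # the fast process starts from the learned prior
    for trial in range(5):
        choice = 'A' if belief_a >= 0.5 else 'B'
        reward = 1.0 if choice == rewarded else 0.0
        # Fast within-episode update (like recurrent activations adapting)
        evidence = reward if choice == 'A' else 1.0 - reward
        belief_a = 0.5 * belief_a + 0.5 * evidence
    # Slow across-episode update (like gradient descent on weights)
    prior_a += slow_lr * ((1.0 if rewarded == 'A' else 0.0) - prior_a)

print(round(prior_a, 2))  # drifts toward ~0.8, the task family's regularity
```

The point of the sketch is the division of labor: the slow variable captures what is common across tasks, so the fast variable needs only a handful of trials to lock onto any particular task.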

The paper explores the role of dopamine in learning, likening it to what is known as the "reward prediction error signal" used in algorithms. Yet dopamine does more than register the anticipation and receipt of rewards: drawing on past experience, it helps the brain learn with efficiency and flexibility.
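In reinforcement learning, the reward prediction error is usually formalized as a temporal-difference error: the gap between the outcome the agent actually received and the value it predicted. A minimal sketch in Python, with illustrative numbers rather than anything taken from the paper:

```python
def reward_prediction_error(reward, value_next, value_current, gamma=0.9):
    """Temporal-difference (TD) error: how much better or worse the
    outcome was than the agent's prediction. Positive values mean
    "better than expected" -- the signal dopamine is thought to carry."""
    return reward + gamma * value_next - value_current

# Illustrative numbers only: an unexpected reward of 1.0 when the agent
# predicted a value of 0.2 yields a large positive prediction error.
delta = reward_prediction_error(reward=1.0, value_next=0.0, value_current=0.2)
print(delta)  # 0.8
```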

The researchers "virtually recreated" six meta-learning experiments from neuroscience using a model known as a recurrent neural network, which can draw on past experiences. A reward prediction error signal, analogous to dopamine, was used to train the network. The network's performance was then compared against animals' performance across the experiments.
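In the meta-RL literature, this kind of network typically receives its own previous action and previous reward as inputs, so adaptation can happen in the recurrent activations even while the weights stay fixed. The sketch below shows only that input/state structure, in bare numpy, untrained and with invented dimensions; it is not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyMetaRNN:
    """Sketch of the meta-RL interface: the network sees its previous
    action and reward alongside the observation, so its hidden state can
    carry within-episode learning even with frozen weights."""
    def __init__(self, obs_dim, n_actions, hidden=32):
        in_dim = obs_dim + n_actions + 1  # obs + one-hot prev action + prev reward
        self.W_in = rng.normal(0, 0.1, (hidden, in_dim))
        self.W_h = rng.normal(0, 0.1, (hidden, hidden))
        self.W_out = rng.normal(0, 0.1, (n_actions, hidden))
        self.h = np.zeros(hidden)
        self.n_actions = n_actions

    def step(self, obs, prev_action, prev_reward):
        a_onehot = np.zeros(self.n_actions)
        a_onehot[prev_action] = 1.0
        x = np.concatenate([obs, a_onehot, [prev_reward]])
        self.h = np.tanh(self.W_in @ x + self.W_h @ self.h)  # fast timescale
        return self.W_out @ self.h                           # action preferences

net = TinyMetaRNN(obs_dim=4, n_actions=2)
logits = net.step(obs=np.ones(4), prev_action=0, prev_reward=1.0)
print(logits.shape)  # (2,)
```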

In one example, known as the Harlow Experiment, the algorithm chose between two randomly generated images, with one of the pair tied to a reward, mirroring the original setup in which the animals, in this case a group of monkeys, chose between two objects, one of which concealed food. As noted in VentureBeat, the algorithm performed on par with the animals, "making reward associated choices from new images it had not seen before." The learning was centered in the recurrent neural network, which the site and the paper said supports the idea that dopamine is a component in meta-learning.
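The behavioral signature of the Harlow task is one-shot learning: after a single trial with a brand-new pair of images, a well-meta-trained agent picks the rewarded one almost every time. The toy simulation below is not the paper's trained network; it simply hard-codes the abstract "win-stay, lose-shift" rule such an agent is thought to acquire, and reproduces that signature:

```python
import random

random.seed(1)

def harlow_episode(n_trials=6):
    """One episode: two novel images, one secretly rewarded. The agent
    guesses on trial 1, then applies win-stay / lose-shift -- the kind
    of abstract rule meta-learning is thought to deliver."""
    rewarded = random.choice([0, 1])
    choice = random.choice([0, 1])  # trial 1: pure guess
    correct = []
    for trial in range(n_trials):
        hit = (choice == rewarded)
        correct.append(hit)
        if not hit:
            choice = 1 - choice     # lose-shift (win-stay otherwise)
    return correct

results = [harlow_episode() for _ in range(10000)]
for t in range(6):
    acc = sum(ep[t] for ep in results) / len(results)
    print(f"trial {t + 1}: {acc:.2f}")  # ~0.50 on trial 1, ~1.00 afterwards
```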

The site noted that, in the past, Google's DeepMind built a "partial anatomical model" of the human brain: a neural network that shadowed the brain's own activity and could operate more efficiently than most neural nets.

The researchers said the fact that insights tied to the most recent tests "can be applied to explain findings in neuroscience and psychology highlights the value each field can offer the other. Going forward, we anticipate that much benefit can be gained in the reverse direction, by taking guidance from specific organization of brain circuits in designing new models for learning in reinforcement learning agents."
