Google Debuts Improved Version of Gemini AI Tool

One week after its rebrand as “Gemini,” Google’s artificial intelligence tool received a new update dubbed “Gemini 1.5.”

“Gemini 1.5 delivers dramatically enhanced performance,” Demis Hassabis, CEO of Google DeepMind, wrote in a company blog post Thursday (Feb. 15). “It represents a step change in our approach, building upon research and engineering innovations across nearly every part of our foundation model development and infrastructure.”

Google is releasing a model known as Gemini 1.5 Pro for early testing, which includes a “breakthrough experimental feature in long-context understanding,” Hassabis said in the post.

An AI model’s “context window” is composed of tokens, the building blocks it uses to process information. The bigger the context window, the more information a model can ingest and process at once.

Gemini 1.5 can process up to 1 million tokens, which means it can handle “1 hour of video, 11 hours of audio, codebases with over 30,000 lines of code or over 700,000 words,” the post said.
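To make those figures concrete, the short sketch below shows how a fixed token budget translates into a rough word count. The tokens-per-word ratio here is an assumption inferred from the post’s own pairing of “1 million tokens” with “over 700,000 words”; it is an illustration, not a published Gemini specification.

```python
# Illustration only: rough arithmetic behind a 1-million-token context window.
# The tokens-per-word ratio is an assumption derived from the blog post's figures
# (1,000,000 tokens ~ 700,000 words), not an official Gemini tokenizer value.

CONTEXT_WINDOW_TOKENS = 1_000_000
TOKENS_PER_WORD = 1_000_000 / 700_000  # roughly 1.43 tokens per word, by assumption


def fits_in_context(word_count: int) -> bool:
    """Estimate whether a document of `word_count` words fits in the window."""
    estimated_tokens = word_count * TOKENS_PER_WORD
    return estimated_tokens <= CONTEXT_WINDOW_TOKENS


print(fits_in_context(700_000))    # True: about the upper bound cited in the post
print(fits_in_context(1_000_000))  # False: would exceed the 1-million-token window
```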

Google said the model can process more information than the AI models from OpenAI and Anthropic, Bloomberg reported Thursday.

The tool can also perform “highly-sophisticated understanding and reasoning tasks for different modalities, including video,” the blog post said.

“For instance, when given a 44-minute silent Buster Keaton movie, the model can accurately analyze various plot points and events, and even reason about small details in the movie that could easily be missed,” wrote Hassabis.

Gemini had been known as “Bard” before the rebrand announced last week. The renaming came with the debut of Gemini Advanced, the new Google One AI Premium Plan, and a mobile experience for Gemini and Gemini Advanced, delivered through a new Gemini app on Android and within the Google app on iOS.

The company earlier this week acknowledged the shortcomings of generative AI systems and pledged to address them.

One major concern is hallucinations, in which AI systems confidently produce incorrect statements.

Eli Collins, vice president of product management at Google DeepMind, proposed a solution to allow users to easily determine the sources of information provided by AI systems, and thus verify the information and determine its reliability.
