Google unveiled a range of artificial intelligence (AI) advancements at its annual developer conference Tuesday (May 14), including more sophisticated analysis powered by Gemini, smarter assistants such as Project Astra, and an infusion of AI into its dominant search engine. The changes are among the most dramatic to the company’s core product since its inception, and they could trigger a sea change in web browsing habits while introducing new risks to an internet ecosystem heavily dependent on digital advertising.
“The Gemini upgrades could potentially revolutionize online commerce by providing a more fluid browsing experience,” Sunil Rao, CEO and co-founder at the AI company Tribble, told PYMNTS. “With AI browsers becoming mainstream, agents could potentially recommend specific brands or solutions, possibly as digital store associates.”
Google’s AI technology will generate web search summaries when it determines they are the most effective way to answer a user’s query, particularly for complex topics or when users brainstorm or plan. For more straightforward searches, such as finding local businesses or checking weather forecasts, the search engine will continue to prioritize traditional website links and ads.
The competition in generative AI is intensifying. OpenAI launched a new AI model and desktop version of ChatGPT on Monday, along with a new user interface. According to the company, the latest model, GPT-4o, is twice as fast as GPT-4 Turbo at half the cost.
Google’s latest move to integrate AI-generated summaries into its search results could change how online commerce functions, reshaping advertising strategies and the flow of web traffic.
The company said AI-powered overviews will now frequently appear at the top of search results, often displacing traditional website links. The shift aims to give users faster access to information but could alter how businesses approach search engine optimization (SEO) and online advertising. It was one of many AI announcements at the conference, which covered everything from video creation tools to a new compact language model.
The AI summaries have been tested with a small group of users over the past year. They will initially roll out to U.S. search results before expanding to other countries. Google expects the new feature to be part of the search experience for about 1 billion users worldwide by the end of the year.
As Google incorporates more AI-generated content into its search results, businesses may have to adapt their online strategies to maintain visibility and reach their target audiences. This change could also stimulate innovation in the advertising industry as companies reevaluate their ad placement and targeting approaches in light of the new search result format.
“Every company wants to go ‘upstream’ in the customer journey,” David Nicholson, chief research officer at The Futurum Group, an advisory firm, told PYMNTS. “Gemini now makes it easier for Google to capture online buyers into a closed market ecosystem. It has the potential to mirror the ‘Amazon Effect,’ where commerce becomes increasingly centralized.”
The new AI-powered search will have a “profound impact on organic search traffic,” Natalie Lambert, founder and managing partner at GenEdge Consulting, an AI consulting company, told PYMNTS.
“For example, when researching a new topic or a new desired item, people tend to ask basic questions like, ‘What is a mirrorless camera?’ or ‘What is the best mirrorless camera for traveling?’” Lambert added. “There is enough data on the internet to enable AI to answer these questions with authority.
“Unfortunately, it is these questions that companies have built their SEO practices around. This is why [research and advisory firm] Gartner has stated that ‘by 2028, brands’ organic search traffic will decrease by 50% or more as consumers embrace generative AI-powered search.’ I think it could be higher than this if Google starts rolling this out now for all users.”
Lambert emphasized that companies must update their strategies due to changes in search behavior. They need to ensure that search engines have the latest information, possibly using AI to refresh content summaries regularly. To stand out in searches, businesses should target precise, long-tail keywords that closely match specific consumer needs, like “The best mirrorless camera for beginners with great autofocus for wildlife.” Moreover, since visitors now arrive at websites better informed, companies should focus on converting these well-prepared prospects into customers.
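As a rough illustration of the long-tail approach Lambert describes, a content team could enumerate candidate phrases programmatically before validating them against real search data. The sketch below is a minimal, hypothetical example; the seed terms and modifiers are placeholders, not data from the article.

```python
# Minimal sketch: expanding seed product terms into long-tail keyword candidates.
# Seed terms, audiences and qualifiers are hypothetical placeholders.
from itertools import product

seeds = ["mirrorless camera"]
audiences = ["for beginners", "for travel", "for wildlife photography"]
qualifiers = ["with great autofocus", "under $1,000", "with in-body stabilization"]

# Combine every seed with every audience and qualifier to produce specific,
# intent-rich phrases of the kind Lambert suggests targeting.
candidates = [
    f"best {seed} {audience} {qualifier}"
    for seed, audience, qualifier in product(seeds, audiences, qualifiers)
]

for phrase in candidates:
    print(phrase)
```

In practice, a list like this would only be a starting point; the phrases would still need to be checked against actual query volume and the company’s own content.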
Google announced several additions to its Gemma family of open AI models. The new offerings, which include the 27-billion-parameter Gemma 2 and the vision-language model PaliGemma, aim to compete with similar products from Meta and Mistral.
In a previous interview with PYMNTS, Sam Mugel, the CTO of Multiverse Computing, explained that smaller models are easier to deploy in various scenarios, such as remote operations or on devices with limited storage, because of their portability. He also mentioned that reducing the size of these models helps lower the energy required for their operation.
Gemma 2, set to launch in June, represents a significant upgrade from the current 2-billion- and 7-billion-parameter versions of the Gemma models released earlier this year. The increased parameter count is expected to boost the model’s performance and capabilities.
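For developers who want a feel for how the openly available Gemma weights are used, the sketch below loads the already-released instruction-tuned 7B model through Hugging Face’s transformers library. The checkpoint name for the forthcoming Gemma 2 27B model is not yet public, so the identifier shown is the current one and would need to be swapped once the new model ships.

```python
# Minimal sketch: running an existing open Gemma checkpoint with Hugging Face transformers.
# "google/gemma-7b-it" is the instruction-tuned 7B model released earlier this year;
# the Gemma 2 checkpoint name is an unknown at this point (assumption).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-7b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain in one sentence what a vision-language model does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```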
Alongside Gemma 2, Google introduced PaliGemma, a pretrained variant designed for image-related tasks such as captioning, labeling, and visual Q&A. PaliGemma is the first vision-language model in the Gemma family, expanding the scope of the ecosystem.
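A hedged sketch of the kind of image task PaliGemma targets is shown below, using the transformers support announced alongside the model. The checkpoint and class names reflect that announcement and may differ as the release settles; "photo.jpg" is a placeholder file.

```python
# Minimal sketch: image captioning with a PaliGemma checkpoint via transformers.
# Checkpoint and class names are based on the announced support and may change (assumption).
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-mix-224"
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

image = Image.open("photo.jpg")  # placeholder image path
inputs = processor(text="caption en", images=image, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output[0], skip_special_tokens=True))
```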
While the Gemma models are open for public use, they are not fully open source, allowing Google to maintain some control over the technology. This approach balances accessibility and oversight, as the company seeks to promote innovation while managing potential risks.
The launch of Gemma 2 and PaliGemma underscores Google’s commitment to advancing its open AI offerings and competing with established players in the field. As developers and researchers work with these new tools, the impact on various industries and applications will become more apparent.
Google is entering the AI-generated video market with Veo, a new model designed to challenge OpenAI’s Sora in the race to create high-quality, minute-long clips from simple text prompts.
Unveiled at the conference on Tuesday, Veo can generate 1080p video in various visual and cinematic styles, from sweeping landscape shots to artfully crafted time-lapses. The model also lets users edit and adjust previously generated footage, offering flexibility and control that could set it apart from its competitors.
“The text-to-video announcement will change how companies build videos, making it less costly and simpler,” Lambert said. “More immediately, solutions like this, Sora and Runway, will become the new B-roll as teams can specify exactly the scenes they want and have it on their screen in minutes.”
The move marks a big step for Google as it seeks to assert its dominance in the rapidly evolving field of AI-powered video creation. With OpenAI’s Sora already making waves in the industry, Google’s Veo is poised to ignite a fierce battle for supremacy in this cutting-edge domain.
As businesses and content creators turn to AI tools to streamline their video production processes, the success of models like Veo and Sora could have far-reaching implications for the future of multimedia creation. By democratizing access to high-quality video generation, these AI-driven solutions are set to reshape the landscape of digital content, from marketing and advertising to entertainment and beyond.
However, the rise of AI-generated video also raises questions about the potential for misuse, such as the creation of deepfakes or the spread of misinformation. As these technologies continue to advance, it will be crucial for companies like Google and OpenAI to prioritize responsible development and deployment, ensuring that their innovations enhance, rather than undermine, the integrity of online content.
With Veo’s launch, the stage is set for a showdown between two AI heavyweights, each vying to redefine what’s possible in video creation. As the competition heats up, one thing is sure: The future of multimedia will never be the same.
For now, those eager to try Veo will have to wait in line. The company plans to keep the model behind a waitlist on Google Labs, its hub for experimental technology, for the foreseeable future.
Google unveiled an enhancement to its Gemini AI system: the ability to create a virtual teammate. This AI-powered team member is designed to integrate with Google Workspace and perform a range of tailored tasks, including monitoring and managing projects, organizing data, providing insights, detecting trends, and aiding collaboration.
The AI virtual teammate can participate in multiple chats and then be queried to summarize or take action on information across channels.
“This will superpower teams, as less information will fall through the cracks, more work will get done, and better decisions will be made,” Lambert said. “The productivity implication for teams and individuals is huge, as it gives every worker a personal assistant to ensure they don’t forget action items, have access to the most recent information, and, in many cases, have an assistant who can complete the repetitive tasks of their day.”
This summer, Google plans to unveil a feature that could revolutionize how users manage extensive photo collections. Dubbed “Ask Photos,” the tool employs Gemini to navigate users’ Google Photos libraries in response to specific inquiries, extending well beyond basic searches for pet snapshots.
In a demonstration, Google CEO Sundar Pichai used the feature to locate his car’s license plate number, which Gemini promptly provided along with a corresponding image for confirmation.
In what may be one of Google’s most ambitious and futuristic AI endeavors to date, Demis Hassabis, the head of Google DeepMind, unveiled Project Astra at the conference: a real-time, multimodal AI assistant that seems to have sprung straight from the pages of a science fiction novel.
“I’ve had this idea in my mind for quite a while,” said Hassabis, a longtime AI visionary. “We would have this universal assistant. It’s multimodal, it’s with you all the time. It’s that helper that’s just useful. You get used to it being there whenever you need it.”
If it comes to fruition, Project Astra will be able to perceive its surroundings, identify objects, remember their locations, and assist with various tasks.
In a demo video that Hassabis said was not doctored, an Astra user in Google’s London office put the system through its paces. From identifying specific components of a speaker to locating misplaced glasses and even reviewing code, Astra tackled the tasks quickly, responding in near real time as if in a scene from a futuristic movie.
“Previous versions start feeling primitive,” Nicholson said. “It’s exciting. It’s impossible to predict how these tools will be used at the margins. Finally, OpenAI’s ChatGPT 4o and Gemini announcements back to back are mind-blowing.”