Google, Anthropic Working to Address Limitations of GenAI

Google and Anthropic are working to address the limitations of their generative artificial intelligence (AI) systems.

These limitations, including issues around hallucinations, copyright and sensitive data, have raised concerns among potential customers, The Wall Street Journal (WSJ) reported Tuesday (Feb. 13).

Speaking Monday (Feb. 12) during The Wall Street Journal CIO Network Summit, representatives of Google and Anthropic acknowledged the limitations of generative AI systems, according to the report.

One major concern is hallucinations, in which AI systems confidently produce incorrect statements, the report said. Lawrence Fitzpatrick, chief technology officer of OneMain Financial, raised this concern at the event, emphasizing the need for accurate information in highly regulated industries.

Google aims to address these limitations and build trust with customers, per the report. Eli Collins, vice president of product management at Google DeepMind, proposed enabling users to easily identify the sources of the information AI systems provide. This transparency lets users verify the information and judge its reliability.
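The sketch below (in Python) is one rough way to illustrate that idea: each answer is returned together with the passages it was drawn from, so a reader can check the claims against their sources. The SourcedAnswer type, the answer_with_sources function and the sample passages are all hypothetical, not a description of Google's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SourcedAnswer:
    """An answer paired with the sources it was drawn from."""
    text: str
    sources: list = field(default_factory=list)  # (title, url) pairs

def answer_with_sources(question: str, passages: list) -> SourcedAnswer:
    # Toy stand-in for a grounded-generation step: compose the answer
    # only from passages marked relevant, and surface every passage
    # used so the user can verify the claim against its source.
    # (In a real system, `question` would drive retrieval and a model
    # would write the answer; here it is unused.)
    used = [p for p in passages if p["relevant"]]
    text = " ".join(p["snippet"] for p in used)
    return SourcedAnswer(
        text=text or "No supported answer found.",
        sources=[(p["title"], p["url"]) for p in used],
    )

passages = [
    {"title": "FY2023 annual report", "url": "https://example.com/report",
     "snippet": "Revenue grew 8% year over year.", "relevant": True},
    {"title": "Unrelated memo", "url": "https://example.com/memo",
     "snippet": "The office move is scheduled for May.", "relevant": False},
]
result = answer_with_sources("How did revenue change?", passages)
print(result.text)                 # Revenue grew 8% year over year.
for title, url in result.sources:  # each claim is traceable
    print(f"  source: {title} ({url})")
```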

Anthropic, a startup specializing in AI, is actively working on reducing hallucinations and improving accuracy, according to the report. Jared Kaplan, co-founder and chief science officer of Anthropic, mentioned the development of data sets in which the AI model responds with “I don’t know” when it lacks sufficient information. The aim is for the AI system to answer only when it can cite sources to support its responses.

Kaplan emphasized the importance of striking the right balance between caution and usefulness, per the report. While reducing hallucinations is crucial, overly cautious AI models may respond with “I don’t know” to everything, rendering them less useful.
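To make that trade-off concrete, here is a minimal Python sketch of an abstention policy with a tunable evidence threshold: a low threshold answers almost anything, while a high one abstains on almost everything. The overlap_score heuristic, the answer_or_abstain function and the toy knowledge base are illustrative assumptions, not Anthropic's method.

```python
def overlap_score(question: str, entry: dict) -> float:
    # Crude relevance score: fraction of question words that also
    # appear in the evidence text (a real system would use a model).
    q_words = set(question.lower().split())
    e_words = set(entry["text"].lower().split())
    return len(q_words & e_words) / max(len(q_words), 1)

def answer_or_abstain(question: str, kb: list, threshold: float) -> str:
    # Answer only when the best evidence clears `threshold` and can
    # be cited; otherwise abstain with "I don't know".
    if not kb:
        return "I don't know."
    best = max(kb, key=lambda e: overlap_score(question, e))
    if overlap_score(question, best) < threshold:
        return "I don't know."
    return f'{best["text"]} (source: {best["source"]})'

kb = [{"text": "The loan approval policy requires two forms of ID.",
       "source": "policy handbook, sec. 4"}]
q = "What does the loan approval policy require?"
# A permissive threshold answers with a citation; a strict one abstains
# on the same question, trading usefulness for caution.
print(answer_or_abstain(q, kb, threshold=0.2))
print(answer_or_abstain(q, kb, threshold=0.9))
```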

The issue of model training data provenance has also come to the forefront, the report said. The New York Times filed a lawsuit alleging unauthorized use of its content by Microsoft and OpenAI to train their AI models. Kaplan explained the difficulty of removing specific content from trained models; training diffuses information across a model’s parameters rather than storing it as discrete, deletable records.

Both Google and Anthropic recognize the importance of hardware in building powerful AI models, per the report. They are working on improving the availability, capacity, and cost-effectiveness of AI chips used for training. Google’s in-house chips, Tensor Processing Units (TPUs), have been developed to enhance efficiency and reduce costs.

This news came on the same day it was reported that business users have given mixed reviews to Microsoft’s AI assistant, Copilot for Microsoft 365.

While the AI upgrade has been deemed useful, users have raised concerns about its price and limited functionality.