Meta is reportedly working on a new artificial intelligence (AI) system to compete with OpenAI.
According to a Sunday (Sept. 10) report by The Wall Street Journal (WSJ) citing sources familiar with the matter, the Facebook parent hopes to launch its new AI system next year and make it substantially more powerful than its Llama 2 AI model, released this summer.
Sources told the WSJ that Meta is building out the data centers required for the job and acquiring more H100s, the most advanced of the Nvidia chips used for AI training.
PYMNTS has contacted Meta for comment but has not yet received a reply.
Although Meta worked with Microsoft on the rollout of Llama 2, it apparently aims to train the new model on its own infrastructure, some of the sources said. Meta expects to start training the new AI system early next year, according to the report.
Sources also say Meta CEO Mark Zuckerberg wants to make the new model open source, just like Llama 2, meaning companies would be able to use it for free to build their own AI tools.
“When software is open, more people can scrutinize it to identify and fix potential issues,” Zuckerberg wrote earlier this year on his personal Facebook page.
The WSJ report notes that the new model is part of Zuckerberg’s plan to make Meta a force within the AI world after falling behind companies like Google and Microsoft in a crowded race by Big Tech firms to add AI to their offerings.
“While businesses — and in particular tech firms — have integrated AI features into both the front and back-end of their products for years, many incumbent giants like Apple were caught sleeping late last year as Microsoft and OpenAI launched their buzzy generative chatbot, ChatGPT, as a standalone solution kickstarting a new AI arms race,” PYMNTS wrote in July.
Apple, meanwhile, isn’t standing still. As noted here last week, the company’s voice assistant Siri is apparently getting an upgrade as part of a major increase in AI investment.
A report by The Information said Apple plans to incorporate large language models into its devices to let users automate more complicated tasks.
For example, someone could tell Siri to create a GIF from the last five pictures in their camera roll and send it to a contact, a task that currently requires multiple manual steps.