Artificial intelligence startup SandboxAQ is reportedly planning a fundraise that would value the company at $5 billion.
The company, which spun off from Google parent company Alphabet, is in talks with possible investors about the equity funding round, Bloomberg reported Friday (Oct. 18), citing unnamed sources.
SandboxAQ, which develops AI and cybersecurity services, said two years ago that it had raised $500 million without revealing a valuation, per the report. The company's technology combines AI and quantum physics, training its AI on large numerical datasets rather than on language, the way chatbots such as ChatGPT are trained.
The largest of the company’s three business units builds simulation software based on AI and quantum algorithms to speed drug discovery and to develop new chemistries for improved products such as batteries, the report said.
PYMNTS explored the company’s efforts earlier this year in an interview with Chris Hume, senior director of business operations for SandboxAQ.
“The physical world is defined by quantum mechanics,” he said. “The more effectively we can understand those interactions and then model those interactions, the more efficiently and effectively you can build predictive models. With the algorithms that we’re developing combined with the classical computer hardware that’s available today, you can build better predictive models, and that’s the exciting part. And that’s the opportunity at hand.”
In other AI news, PYMNTS wrote Friday about research at Apple that casts doubt on the mathematical abilities of large language models, challenging the idea that AI is on the brink of human-like reasoning.
“Any real-world application that requires reasoning of the sort that can be definitively verified (or not) is basically impossible for an LLM to get right with any degree of consistency,” Selmer Bringsjord, professor at Rensselaer Polytechnic Institute, told PYMNTS, while making a distinction between AI and traditional computing.
“What a calculator can do on your smartphone is something an LLM can’t do — because if someone really wanted to make sure that the result of a calculation you called for from your iPhone is correct, it would be possible, ultimately and invariably, for Apple to verify or falsify that result,” he said.
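To illustrate the distinction Bringsjord draws, here is a minimal sketch in Python. The function name, the figures and the sample "LLM answer" are illustrative inventions, not from the article or the Apple research; the point is only that a calculator-style result can be definitively verified by exact recomputation, while a model's free-form output has no comparable built-in check.

```python
# Minimal sketch of the verifiability gap described above.
# check_calculator and the numbers below are hypothetical, for illustration only.

def check_calculator(a: int, b: int, reported_product: int) -> bool:
    """A calculator's arithmetic can be verified (or falsified) by exact recomputation."""
    return a * b == reported_product

# Deterministic: the same inputs always yield the same, checkable answer.
print(check_calculator(12345, 6789, 83810205))  # True  (12345 * 6789 == 83810205)
print(check_calculator(12345, 6789, 83810204))  # False (off by one, definitively wrong)

# An LLM, by contrast, returns free-form text. There is no analogous exact oracle
# to verify or falsify its answer, and the same prompt may yield different results.
llm_answer = "The product is roughly 83.8 million."  # hypothetical model output
```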