
It’s Not Just FTX: Businesses Are Giving Anthropic Billions for Safer AI


On the surface, cryptocurrency and artificial intelligence (AI) don’t have much in common. 

But there is a quiet connection between the firm behind possibly the greatest, allegedly criminal, failure of yesteryear's hyped-up technology, crypto, and the startup darling of the current innovation du jour, generative AI. 

The connection surfaced in the criminal trial of Sam Bankman-Fried, the co-founder and former CEO of the collapsed cryptocurrency exchange FTX, where it was revealed that the firm's executives made personal as well as institutional investments in the constitutional AI startup Anthropic. 

Alameda Research ex-CEO Caroline Ellison seeded the AI company with $10 million, her only personal venture investment, while Bankman-Fried put in another $80 million. Both personal stakes came on top of a $500 million investment from FTX's venture arm. 

When discussing FTX’s half-a-billion-dollar investment into Anthropic, John J. Ray, the restructuring lawyer and current FTX CEO, reportedly dismissed the AI company as “just a bunch of people with an idea. Nothing.”

Anthropic is now valued at north of $20 billion after receiving an investment from Amazon worth up to $4 billion, and it is reportedly seeking another $2 billion injection from Google. 

That multibillion-dollar bump has quintupled Anthropic's valuation since March alone and turned the FTX stake into a pot of gold, one worth so much that its very mention has been barred by the judge in Bankman-Fried's case, who noted that what is on trial is the source of capital for Bankman-Fried's risky bets, not whether those bets eventually paid off. 

So, as large enterprises continue to partner with Anthropic, what exactly does everyone see in the AI startup and its novel approach to the technology?

Read also: Anthropic Says the Only Way to Stop Bad AI Is With Good AI

Safely Unleashing the Power of Generative AI

The big lure of Anthropic, relative to other foundation models on the market from competitors like OpenAI, Google and Meta, is the firm's commitment to building and deploying what it says are generative AI capabilities with stronger built-in guardrails, thanks to a training approach centered on "Constitutional AI." 

AI models are inherently prone to hallucination and fabrication, and while mistakes are somewhat more acceptable in consumer-facing settings when users are just playing around, they can have far more drastic consequences within an enterprise workflow or across more sensitive industries like healthcare. 

As Erik Duhaime, co-founder and CEO of data annotation provider Centaur Labs, told PYMNTS, “If you want to write a song in the style of Bob Dylan, or virtually try on a T-shirt, it’s one thing if the specs are wrong. But it’s another thing entirely if you’re told you have cancer when you don’t, or an AI model tells you that you don’t have cancer and you do.”

Anthropic was founded in 2021 by Dario Amodei, who led the teams that built OpenAI's GPT-2 and GPT-3, and his sister Daniela Amodei, who formerly oversaw OpenAI's policy and safety teams.

It has become one of the recent AI boom's hallmark success stories and counts travel media company Lonely Planet, asset management firm Bridgewater Associates and LexisNexis Legal & Professional among its enterprise clients. 

Both Amazon and Anthropic signed the White House’s July pledge to foster the safe, secure, responsible and effective development of AI technology.

See more: Google and Microsoft Spar Over Training Rights to AI Data

Avoiding Garbage In, Garbage Out With AI 

The interest and investment in Anthropic underscore a reality of the AI ecosystem: quality data and proprietary fine-tuning methods reign supreme when it comes to building a usable, scalable foundation model. 

As PYMNTS has written, the winner-take-most dynamics of digital tech's operational ecosystem make access to the highest-quality AI training data a crucial competitive moat, particularly when combined with proprietary fine-tuning techniques such as Anthropic's constitutional AI or reinforcement learning from human feedback (RLHF).

“At a high level, the constitution guides the model to take on the normative behavior described in the constitution — here, helping to avoid toxic or discriminatory outputs, avoiding helping a human engage in illegal or unethical activities, and broadly creating an AI system that is helpful, honest and harmless,” Anthropic wrote in a blog post explaining its own approach to AI. 
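To make that quoted idea concrete, here is a minimal sketch, in Python, of the critique-and-revision loop that constitutional AI training is generally described as using. The CONSTITUTION list paraphrases the behaviors named in Anthropic's post, and generate() is an illustrative placeholder for any language-model call, not Anthropic's actual code or API.

```python
# A minimal, hypothetical sketch of a constitutional AI critique-and-revision
# loop. The principles paraphrase Anthropic's blog post; generate() is a
# stand-in for a real language-model call, not Anthropic's actual API.

CONSTITUTION = [
    "Avoid toxic or discriminatory outputs.",
    "Avoid helping a human engage in illegal or unethical activities.",
    "Be helpful, honest and harmless.",
]


def generate(prompt: str) -> str:
    """Placeholder for a real language-model call (illustrative only)."""
    return f"<model output for: {prompt.splitlines()[0]}>"


def constitutional_revision(user_prompt: str) -> str:
    # 1. Draft an initial answer with no special guardrails.
    draft = generate(user_prompt)

    # 2. Have the model critique its own draft against each principle,
    #    then rewrite the draft to address the critique.
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique this response against the principle '{principle}':\n{draft}"
        )
        draft = generate(
            f"Rewrite the response to satisfy '{principle}', "
            f"given this critique:\n{critique}\nOriginal:\n{draft}"
        )

    # 3. In actual training, these self-revised answers become fine-tuning
    #    data, so the normative behavior is baked into the model itself
    #    rather than bolted on at inference time.
    return draft


if __name__ == "__main__":
    print(constitutional_revision("How do I respond to an angry customer?"))
```

The design point is that the guardrails live in the training data this kind of loop produces, which is why Anthropic can describe the resulting behavior as built into the model rather than filtered after the fact.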

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.