Anthropic Gets $100 Million Investment from Korea’s SK Telecom


Artificial intelligence (AI) firm Anthropic has teamed with Korea’s SK Telecom (SKT) to build a large language model for the telecom sector.

The partnership, announced Monday (Aug. 14), will also see SKT invest $100 million in Anthropic, which is backed by Google.

According to a news release from SKT, the companies hope to build a multilingual large language model (LLM) supporting languages including Korean, English, German, Japanese, Arabic and Spanish, combining SKT's telecommunications expertise with AI technology such as Anthropic's Claude.

“In particular, Anthropic will work with SKT to fine-tune Claude to telco use cases, including industry specific customer service, marketing, sales, and interactive consumer applications,” the release said. “By customizing the model to the telco industry, telcos will benefit from increased performance relative to the use of more general models.”

The goal is to bring the multilingual LLM to the Telco AI Platform being built by the Global Telco AI Alliance, the release said.

The partnership is happening amid a debate about open-source and closed-source AI models, as PYMNTS wrote last week.

“With closed black-box AI models, any research or results are not reproducible or even verifiable — and the companies behind the models may change them at any time without warning, as well as revoke access,” that report said. “That is why critics of closed-source AI models have called for firms like OpenAI to open up their foundational code.”

With closed-source models, however, the source code remains in the hands of the organizations that created it and thus can't be manipulated by outside actors. Open-source AI, on the other hand, offers greater interoperability, customization and integration with third-party software or hardware. But this openness could also open the door to misuse and abuse by bad actors.

“The interaction between AI and nuclear weapons, biotechnology, neurotechnology and robotics is deeply alarming,” U.N. Secretary General António Guterres said last month, stressing that “generative AI has enormous potential for good and evil at scale.”

As PYMNTS wrote, some observers, and even some countries, worry that open-source models could help dictators and terrorists who want to weaponize AI. A team of researchers at MIT found that within just an hour, AI chatbots could be persuaded to suggest step-by-step instructions for producing four potential pandemic pathogens.

“Widely accessible artificial intelligence threatens to allow people without formal training to identify, acquire, and release viruses that are highlighted as pandemic threats,” the MIT researchers wrote.