
Google-Anthropic Partnership Catches Eye of UK Watchdog

The U.K.’s competition regulator is examining Google’s partnership with artificial intelligence (AI) startup Anthropic.

The Competition and Markets Authority (CMA) said Tuesday (July 30) it is “considering whether it is or may be the case that this partnership has resulted in the creation of a relevant merger situation under the merger provisions of the Enterprise Act 2002.”

If so, the CMA said in its announcement, the regulator will look into “whether the creation of that relevant merger situation has resulted, or may be expected to result, in a substantial lessening of competition within any market or markets in the United Kingdom for goods or services.”

The CMA is giving interested parties until Aug. 13 to comment on the situation.

Google last year invested $500 million in Anthropic, with the promise of significantly greater investments still to come. Similar AI-focused partnerships, such as the one between Microsoft and OpenAI, have also drawn scrutiny from regulators.

And last week, antitrust bodies from the U.S., U.K. and the European Union (EU) issued a rare joint statement outlining their concerns about market concentration and anti-competitive practices in the field of generative AI — the technology powering popular chatbots such as OpenAI’s ChatGPT.

“There are risks that firms may attempt to restrict key inputs for the development of AI technologies,” the watchdogs warned, stressing the need for action in a rapidly evolving field.

The regulators pointed to three key risks — control of critical resources, market power entrenchment and harmful partnerships — and said they were especially worried about how existing digital market leaders might leverage their positions.

The statement argued that “the AI ecosystem will be better off the more that firms engage in fair dealing,” underscoring principles of interoperability and choice.

“While the authorities can’t create unified regulations, their alignment suggests a coordinated approach to oversight,” PYMNTS wrote. “In the coming months, this could mean a closer examination of AI-related mergers, partnerships and business practices.”

Meanwhile, PYMNTS wrote Tuesday about the burgeoning field of AI ethics, which asks a critical question: How do we make sure intelligent machines serve humanity’s best interests? This field grapples with the moral implications of an increasingly automated world, from job displacement to existential risks.

“AI ethics encompasses a wide range of concerns, including privacy, bias, transparency, accountability, and the long-term societal impacts of artificial intelligence,” that report said. “As AI systems become more sophisticated and autonomous, the ethical questions surrounding their development and deployment grow increasingly complex and urgent.”