EU, UK and US Unite in GenAI Concerns

Competition authorities from the United States, European Union and United Kingdom have joined forces to address potential antitrust issues in artificial intelligence (AI), while tech companies like Meta express concerns over stringent regulations in Europe.

In a rare joint statement, top officials from the three regions on Tuesday (July 23) outlined concerns about market concentration and anti-competitive practices in generative AI — the technology behind popular chatbots like ChatGPT.

“There are risks that firms may attempt to restrict key inputs for the development of AI technologies,” the regulators warned, highlighting the need for swift action in a rapidly evolving field.

This move comes as AI development accelerates, with major tech companies pouring billions into the technology. Microsoft’s $10 billion investment in OpenAI and Google’s push with its Gemini chatbot (formerly Bard) underscore the stakes.

The regulators identified three main risks: control of critical resources, market power entrenchment, and potentially harmful partnerships. They’re particularly wary of how existing digital market leaders might leverage their positions.

The statement asserted that “the AI ecosystem will be better off the more that firms engage in fair dealing,” emphasizing principles of interoperability and choice.

While the authorities can’t create unified regulations, their alignment suggests a coordinated approach to oversight. In the coming months, this could mean a closer examination of AI-related mergers, partnerships and business practices.

The tech industry, already grappling with increased regulatory pressure, now faces a new front in the AI arms race. As the joint statement clarifies, regulators are committed to addressing potential risks “before they become entrenched or irreversible harms.”

EU’s AI Regulation Sparks Concern From Meta

Meta, Facebook’s parent company, has raised alarm over the European Union’s approach to regulating artificial intelligence. Rob Sherman, Meta’s deputy privacy officer and vice president of policy, warned in a Financial Times interview that current regulatory efforts could potentially isolate Europe from accessing cutting-edge AI services.

Sherman confirmed that Meta received a request from the EU’s privacy watchdog to voluntarily pause AI model training that uses European data. The company is complying with the request but is concerned about the growing “gap in technologies available in Europe versus the rest of the world,” he said.

The EU’s regulatory stance, including the new Artificial Intelligence Act, aims to govern the development of powerful AI models and services. However, Sherman cautioned that a lack of regulatory clarity could hinder the deployment of advanced technologies in Europe.

This situation highlights the delicate balance between fostering innovation and ensuring responsible AI development. As tech companies race to commercialize AI products, they face constraints from the EU’s digital rules, including data protection regulations like GDPR.

Meta has already delayed the rollout of its AI assistant in Europe due to regulatory concerns. As the AI landscape evolves, the tech industry and EU regulators must find common ground to ensure Europe remains competitive in the global AI market while safeguarding user privacy and safety.

UK Labour Government Treads Carefully on AI Regulation

Prime Minister Keir Starmer’s new Labour government has signaled a measured approach to artificial intelligence regulation in Britain. The King’s Speech, which outlined the government’s legislative agenda, included plans to explore effective AI regulation without committing to specific laws.

The government aims to establish appropriate legislation for developers of powerful AI models, building on the previous administration’s efforts to position the U.K. as a leader in AI safety. This includes continuing support for the world’s first AI Safety Institute, which focuses on “frontier” AI models such as those behind ChatGPT.

While Starmer has promised new AI laws, his government is taking a careful, deliberate approach to their development. This strategy aims to balance innovation with responsible AI development, maintaining the U.K.’s attractiveness as a hub for AI research and investment.

AI Regulation: Congress Urges Caution in the Financial Sector

Republican lawmakers and industry experts advocated for a measured approach to AI regulation in finance during a House Financial Services Committee hearing. The four-hour session explored the complex intersection of artificial intelligence with the banking, capital markets and housing sectors.

Committee Chair Patrick McHenry set the tone, emphasizing the need for careful consideration over hasty legislation. “It’s far better we get this right rather than to be first,” McHenry said, reflecting a sentiment echoed throughout the hearing.

The discussion built upon a recent bipartisan report examining federal regulators’ relationship with AI and its impact across various financial domains. Participants highlighted that existing regulations are largely “technology neutral,” with many favoring a targeted, risk-based approach over sweeping changes.

Industry representatives, including Nasdaq’s John Zecca, praised the National Institute of Standards and Technology’s AI risk management framework. However, concerns were raised about overly restrictive approaches, such as the European Union’s upcoming AI Act, which some fear could stifle innovation.
