Battle Lines Being Drawn Between Open- and Closed-Source Goals for AI Regulation


Artificial Intelligence (AI) systems have added a new layer to the ongoing and increasingly philosophical debate about whether open-source or proprietary systems are better for society, the marketplace and even national security.

Given that the European Union (EU) is so far the only major Western economy to take concrete steps toward regulating AI with its Artificial Intelligence Act, the open-versus-closed-source debate puts governments in an interesting position when it comes to developing their own policy frameworks to govern AI technology.

This, as industry letters sent to the U.K. House of Lords Communications and Digital Select Committee regarding the importance of open-source AI offer crucial insight into the policy outcomes both startups and big companies developing open-source AI models are hoping for.

“We believe governments should not deviate from longstanding technology policy principles supporting open source computing that have been widely accepted and legally enshrined for decades, since the advent of the Internet,” wrote venture capital firm Andreessen Horowitz (a16z) in its letter.

“It is critical to realize that restricting the ability to develop open-source software will undermine the competitive AI landscape and harm, rather than enhance, cyber-security,” the VC firm added.

Already, the EU is reportedly considering exempting open-source AI models from some of the AI Act’s guardrails unless they are determined to be high risk or used for purposes that have already been banned.

But just what role do open-source proponents, including Big Tech incumbent Meta and French AI unicorn Mistral AI, see governments playing when it comes to regulating AI — and how does their point of view differ from proprietary-source champions like OpenAI, Google and Microsoft?

Read more: AI Adds Fresh Parameters to Open- vs Closed-Source Software Debate

Driving Breakthroughs in Productivity and Scientific Discovery

Open-source models are AI systems in which, as the moniker suggests, the source code is shared openly, letting users voluntarily improve their function and design while creating a permanent, accessible record of how they were built.

The White House’s earlier Executive Order on AI tasked the National Telecommunications and Information Administration (NTIA) with studying the open-source question and recommending actions. The agency is set to provide its recommendations in a report due in July of this year.

The open-source debate is not unique to AI; open-source and proprietary-source proponents have been pushing their disparate approaches forward for primacy since the very invention of computing technology.

As PYMNTS has written, businesses have traditionally preferred closed source because it protects their trade secrets, while academics and researchers prefer open source because it allows for democratized tinkering and exploration.

And the battle lines are increasingly being drawn.

In the U.S., a group of more than 50 founding members, including NASA, Oracle, CERN, Intel and the Linux Foundation, spearheaded by Meta and IBM, launched the AI Alliance last month (Dec. 5, 2023) to support “open innovation and open science in AI.”

At the same time, leading companies that prefer to build their AI systems with limited outside access to the underlying algorithms and data, including OpenAI, Amazon, Anthropic, Microsoft and Google, have created their own industry group, the Frontier Model Forum, to promote proprietary-centric legislation.

For example, rather than open-sourcing its models, Google publishes “model cards” that provide information on training parameters and other forms of transparent documentation around the company’s various AI systems.
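For readers unfamiliar with the format, here is a minimal sketch in Python of the kind of fields such documentation typically covers. Every field name and value below is an illustrative assumption for this example, not Google’s actual model card schema.

```python
import json

# Purely illustrative sketch: these fields approximate what a "model card"
# documents. They are hypothetical and do not reproduce any vendor's schema.
model_card = {
    "model_name": "example-text-model",  # hypothetical model name
    "version": "1.0",
    "intended_use": "General-purpose text generation for research and prototyping.",
    "training_data": "High-level summary of the training corpus (not the data itself).",
    "training_parameters": {"parameter_count": "7B", "context_window": 4096},
    "evaluation": {
        "benchmarks": ["held-out test accuracy", "toxicity screening"],
        "known_limitations": ["may produce factual errors", "English-centric"],
    },
    "ethical_considerations": "Notes on bias testing, safety review and misuse risks.",
}

# Publishing the card (as JSON, Markdown or a webpage) offers transparency
# about how a model was built and evaluated, without releasing its weights
# or source code.
print(json.dumps(model_card, indent=2))
```

The point of such documentation is disclosure without release: outsiders can scrutinize how a model was trained and tested while the model itself stays closed.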

See also: Tech Experts Share What to Ask When Adding AI to Business

The Role Policy Can Play in Supporting an Innovative AI Landscape

While Microsoft, Google, a16z and others all call for national AI policies that implement “democratic values” in their written feedback, just what those democratic values might look like differs slightly depending on the ingoing preferences of the company recommending them.

Per a16z’s letter, open-source AI should be allowed to freely proliferate and compete with both big AI companies and startups, and development of open-source code should remain unregulated, as it is today. The VC firm also noted that the use of open-source code by bad actors for illicit activity is already heavily regulated and criminally prohibited, and that those standards should apply to the use of open-source AI.

“It is commonly recognized that the approach of ‘security through obscurity’ has been a failure,” the company wrote.

Many companies believe that while the use cases of AI should be regulated, the development of the foundational models underlying them should not be, except in high-risk environments.

“At Google we support a proportionate, risk-based approach to AI regulation. It is far too important not to regulate and too important not to regulate well,” the company wrote.

“[AI] is the most likely general-purpose technology to lead to massive productivity growth,” Avi Goldfarb, Rotman Chair in Artificial Intelligence and Healthcare and a professor of marketing at the Rotman School of Management, University of Toronto, told PYMNTS in an interview posted Monday (Dec. 11). “The important thing to remember in all discussions around AI is that when we slow it down, we slow down the benefits of it, too.”

PYMNTS CEO Karen Webster wrote that one of the key trends to watch in 2024 will be the role of AI-native businesses in marginalizing incumbents, and as the open-versus-closed-source debate plays out, its impact on the AI marketplace will surely be one to watch.