Artificial intelligence (AI) offers a new tool to genuinely help advance human learning and thought.
We are the first generation in the history of humanity to create machines that can make decisions that previously could only be made by people.
But government moves at a snail’s pace compared with the commercially accelerated adoption of tech innovations, and these AI models and machines are currently operating and scaling absent any regulation or policy guardrails.
In previous discussions with PYMNTS, industry insiders have compared the purpose of AI regulation to a car’s airbags and brakes, as well as to the role of a restaurant health inspector.
So, what are tech giants telling global governments?
Frequently, observers say, the giants behind AI’s recent advances, including IBM, Google, Microsoft and OpenAI, are telling lawmakers in the U.S., China and the EU each what they want to hear.
As the public-private battle lines shaping global AI development are drawn, three increasingly distinct viewpoints are emerging.
Different regulatory paradigms are already emerging in Washington, Beijing, and Brussels, each rooted in distinct values and incentives.
The EU has arguably moved the fastest with regulation that adheres to the rights-driven approach popularized by its existing tech-focused digital regulation policies.
The U.S., which has made little policy progress compared with China and the EU, looks to be following a market-driven approach, while China, to the surprise of few, is enforcing a state-centric regulatory roadmap.
A framework published by the White House in October 2022, the “Blueprint for an AI Bill of Rights,” offers guidance on how to safeguard the American public’s rights in the emergent AI age, but ultimately places trust in tech companies’ own self-regulation.
Top AI companies have asked U.S. lawmakers to push forward AI oversight, claiming getting rules on the books is necessary to guarantee user safety while protecting the ability to compete effectively with foreign frenemies like China.
Meanwhile, in the EU, the same AI leaders are pushing back against data-privacy regulations they view as needlessly restrictive, with OpenAI even threatening briefly to leave the bloc.
“It’s an interesting Rorschach to figure out, you know, what is important to the EU versus what is important to the United States,” Shaunt Sarkissian, founder and CEO at AI-ID, told PYMNTS. “If you look at all rules that come out of the EU, generally they tend to be very consumer privacy-oriented and less fixated on how this is going to be used in commerce.”
“There needs to be clear demarcation lines of what is considered generative and output-based AI and what is just running analytics at existing systems,” added Sarkissian.
“One of the most effective ways to accelerate government action is to build on existing or emerging governmental frameworks to advance AI safety,” wrote Brad Smith, vice chair and president of Microsoft, in a blog post Thursday (June 29).
Smith was talking, of course, about Europe. The U.S. hasn’t passed a new tech law in over two decades.
Until regulation does appear, “Pandora’s box has been opened. AI is really powerful … The tail wags the dog now: Things happen online first, and then trickle down to real life,” Wasim Khaled, CEO and co-founder of intelligence platform Blackbird.AI, told PYMNTS.
But tech leaders know that the EU’s incumbent policies may turn out to be the tail that wags the dog, too.
In what’s known as the “Brussels Effect,” tech companies frequently globalize adherence to EU regulations across most of their businesses to standardize their operations. And under the proposed EU AI Act, expected to go into effect within the next 12 to 24 months, AI developers hoping to use data from the 27-member bloc to train their algorithms will be bound by the EU’s regulatory constraints even beyond the EU’s borders.
As PYMNTS has previously written, policymakers are currently contemplating several approaches to regulating AI, which broadly can be categorized across AI-specific regulations (EU AI Act), data-related regulations (GDPR, CCPA, COPPA), existing laws and legislation (antitrust and anti-discrimination law), and domain or sector-specific regulations (HIPAA and SR 11-7).
Observers believe that the U.S. and EU’s shared concern over China’s growing global digital influence could lead to closer transatlantic cooperation.
This shared approach to policymaking could temper the techno-optimism and pursuit of innovation promoted by the U.S. with the EU’s user-centric privacy protections.
And while China leads the world in AI-driven surveillance and facial recognition technology, the country lags behind other nations in developing cutting-edge generative AI systems due to its censorship rules that limit the data that can be used to train foundation models.
The next few years will see major steps taken as separate digital empires emerge and compete for control over the future of AI technology, with Washington, Brussels and Beijing increasingly looked to for interoperability guidance as other countries consider their own AI legislation.
That’s why collaboration between industry and regulators is crucial for the growth of the industry and the ongoing expansion of spheres of digital influence.