The European Parliament has approved the world’s first comprehensive artificial intelligence (AI) regulations, a move hailed by some and criticized by others who fear it may stifle business innovation.
The new law will govern both high-impact, general-purpose AI models and high-risk AI systems, requiring them to meet detailed transparency obligations and comply with EU copyright rules. It also restricts governments’ use of real-time biometric surveillance in public spaces to specific scenarios, such as preventing certain crimes, countering genuine threats like terrorist acts, and locating individuals suspected of major offenses. Once in effect, the law could limit options for companies developing and using AI.
“Regulations, when thoughtfully crafted, can serve as a catalyst for trust and reliability in AI applications, which is paramount for their integration into commerce,” Timothy E Bates, a professor at the University of Michigan who teaches about AI, told PYMNTS in an interview.
“However, there’s a caveat: Overly prescriptive or rigid regulations might hamper the pace of innovation and the competitive edge of businesses, especially smaller entities that might lack the resources to navigate complex regulatory landscapes. It’s crucial for regulations to offer guidance and standards without becoming a barrier to innovation.”
Introduced in 2021, the EU AI Act categorizes AI technologies by level of risk, from “unacceptable” risks that lead to an outright ban down through high-, medium- and low-risk tiers. The legislation, on which negotiators reached a provisional agreement in December, was overwhelmingly approved by the parliament, with 523 votes in favor, 46 against and 49 abstentions.
Thierry Breton, the European commissioner for internal markets, wrote on X: “I welcome the overwhelming support from the European Parliament for our AI Act — the world’s 1st comprehensive, binding rules for trusted AI. Europe is NOW a global standard-setter in AI. We are regulating as little as possible — but as much as needed.”
Since 2021, EU officials have been working on measures to mitigate the risks of the swiftly evolving technology while protecting citizens and encouraging innovation across Europe. The push to enact the new regulation gained momentum following the launch of OpenAI’s Microsoft-backed ChatGPT in late 2022, which sparked an international race in AI development.
The rules are expected to be officially adopted by May, after final assessments and the European Council’s approval, and will be phased in starting in 2025.
This new legislation represents just one aspect of the broader tightening of AI regulations.
On Thursday, the European Commission issued inquiries to eight platforms and search engines, such as Microsoft’s Bing, Instagram, Snapchat, YouTube, and X (previously Twitter), about their strategies to mitigate generative AI risks. Leveraging the Digital Services Act (DSA), introduced last year to regulate online platforms and protect users, the EU is now exercising its newfound authority to impose substantial penalties for noncompliance.
While awaiting implementation of the AI Act, which has received legislative endorsement but won’t fully apply to generative AI until next year, the EU is using the DSA and other existing laws to manage AI challenges. Among the concerns are AI-generated false information, or “hallucinations,” and the use of AI to manipulate services in ways that could deceive voters.
With the tightening of AI regulations, businesses will need to enhance their security measures to ensure compliance, Donny White, CEO of Satisfi Labs, a provider of AI agents, told PYMNTS in an interview.
“This adds another layer to development that could slow some of the projects that can roll out today,” he added. “It could also create a barrier of entry for small companies that want to jump into the AI pool.”
While regulations play a crucial role in controlling harmful AI practices, they are not a standalone solution, Jonas Jacobi, CEO and co-founder of the AI company ValidMind, argued in an interview with PYMNTS.
The regulations set standards for companies to follow, but those rules will only be effective if strictly enforced. Moreover, given the fast pace of AI development and its expanding applications, it’s doubtful that regulations can consistently keep up with the rate of advancement.
“Hence, the responsibility to curb dangerous AI rests mainly with the enterprise tech companies and emerging innovators at the forefront of this new era,” Jacobi added. “Regulating the internet didn’t prevent bad actors from taking advantage of society’s most vulnerable populations, and regulations are unlikely to stop bad actors from maliciously using AI.”
Industry observers are watching closely to see whether the U.S. passes its own AI bill. Bates said that as AI and online businesses go global, recognition is growing that countries must work together on rules for AI. Even if the U.S. doesn’t adopt the EU’s exact rules, there is a trend toward convergence on basic principles.
“My interactions with policymakers and industry leaders, especially during initiatives aimed at bridging the gap between technology and policy, suggest a growing awareness of the need for AI governance,” Bates added. “However, the U.S. approach may lean towards a more sector-specific regulatory framework rather than the broad, comprehensive approach seen in the EU.”