Europe is leading the world in the push to regulate artificial intelligence (AI), with the European Union (EU) taking a proactive approach to ensuring the technology is developed and deployed responsibly and ethically.
On Wednesday (June 14), the European Parliament passed a draft law known as the AI Act, considered the world’s first comprehensive set of rules for AI technology; it is expected to be approved by the end of this year or in early 2024 at the latest.
The EU’s proposed legislation seeks to limit certain uses of the technology and would classify AI systems into four levels of risk, from minimal to unacceptable. This approach concentrates regulatory scrutiny on the applications with the greatest potential for human harm, much as drug approval scales oversight with a treatment’s risk.
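As a rough illustration of how such a tiered scheme works in practice, here is a minimal sketch in Python. The four tier names reflect the draft Act, but the specific use-case mapping and the one-line obligation summaries are simplified assumptions for illustration, not the Act’s own definitions:

```python
from enum import Enum

class RiskTier(Enum):
    """The draft Act's four risk tiers, lowest to highest."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

# Illustrative mapping of hypothetical use cases to tiers. The Act defines
# these categories in its annexes; this list is a simplified assumption.
USE_CASE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,     # transparency duties apply
    "cv_screening": RiskTier.HIGH,            # employment context
    "exam_scoring": RiskTier.HIGH,            # education context
    "social_scoring": RiskTier.UNACCEPTABLE,  # prohibited outright
}

# One-line obligation summaries per tier (paraphrased, not the Act's text).
TIER_OBLIGATIONS = {
    RiskTier.MINIMAL: "no new obligations beyond existing law",
    RiskTier.LIMITED: "transparency obligations (disclose that AI is in use)",
    RiskTier.HIGH: "strict requirements: data quality, transparency, oversight",
    RiskTier.UNACCEPTABLE: "banned from the EU market",
}

def obligations(use_case: str) -> str:
    """Look up a use case's tier and return its obligation summary."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {TIER_OBLIGATIONS[tier]}"

if __name__ == "__main__":
    for case in USE_CASE_TIERS:
        print(obligations(case))
```

The point of the tiered design is that a spam filter and a CV-screening tool are not regulated alike: obligations attach to the use case, not to the underlying technology.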
High-risk AI systems, such as those used in employment and education, where decisions can significantly affect a person’s life, will face strict requirements, including transparency and accuracy in the data they use.
Violations of these requirements could result in fines of up to €30 million ($33 million) or 6% of a company’s annual global revenue.
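To make the scale of those penalties concrete, here is a minimal sketch of the fine ceiling in Python. It assumes the higher of the two figures applies, mirroring the GDPR’s fining structure; the article itself does not specify which prong governs:

```python
def max_fine_eur(annual_global_revenue_eur: float) -> float:
    """Ceiling on a fine under the headline provision: EUR 30 million or 6%
    of annual global revenue. Assumes the higher of the two applies
    (an assumption modeled on the GDPR, not quoted from the Act)."""
    return max(30_000_000.0, 0.06 * annual_global_revenue_eur)

# For a firm with EUR 10 billion in annual global revenue, the 6% prong
# dominates: the ceiling is EUR 600 million, not EUR 30 million.
print(f"EUR {max_fine_eur(10_000_000_000):,.0f}")  # EUR 600,000,000
```

The revenue-based prong is what gives the law teeth against large tech companies, for whom a flat €30 million would be a rounding error.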
The EU’s proposed high-risk list also includes AI in critical infrastructure, education, human resources, public order, and migration management.
Predictive policing tools are banned outright, and the ban on real-time remote facial recognition and biometric identification in public spaces has been widened as part of efforts to guard against AI threats to health and safety.
The companies leading the charge in AI development are the same tech companies that have faced scrutiny over the past decade for antitrust violations, breaches of existing laws, and informational harms. OpenAI CEO Sam Altman, for example, has even threatened to pull the company out of Europe if the AI Act overregulates AI.
However, as Margrethe Vestager, the European Commissioner for Competition, said, “AI should serve people, society, and the environment, not the other way around.”