In a bid to address the burgeoning concerns surrounding the ethical and societal implications of artificial intelligence (AI), Chile has introduced pioneering legislation aimed at regulating AI systems within its borders. Drawing inspiration from the European Union’s regulatory framework, the proposed legislation delineates various obligations contingent upon the perceived risks posed by AI applications.
The bill, which was presented to the lower house of the Chilean Congress on May 7, distinguishes between categories of AI systems based on their potential risks and impacts. At the forefront of the legislation are measures designed to safeguard fundamental rights, mitigate potential harms to health, safety and the environment, and ensure consumer protection.
Under Chile’s proposed legislation, AI systems are classified into distinct risk categories:
- Unacceptable Risk Systems: These are AI systems deemed incompatible with the respect and guarantee of individuals’ fundamental rights, thereby warranting their outright prohibition. Notably, this category encompasses manipulative systems, with exceptions made for those serving authorized therapeutic purposes and with informed consent. Additionally, systems that exploit vulnerabilities to instigate harmful behaviors fall under this classification.
- High-Risk Systems: This category encompasses AI systems posing significant risks to health, safety, fundamental rights, the environment and consumer rights. Stringent regulations are envisaged to govern the development, deployment and usage of such systems to mitigate potential adverse outcomes.
- Limited-Risk Systems: AI systems presenting an insignificant risk of manipulation, deception or error during interaction with individuals are categorized as limited-risk systems. These systems are subject to lighter regulatory scrutiny compared to higher-risk counterparts.
- AI Systems without Obvious Risk: Finally, the legislation recognizes AI systems that do not exhibit discernible risks, signifying a lower regulatory burden for such applications.
Crucially, the bill outlines several prohibitions and obligations aimed at addressing specific concerns associated with AI deployment:
- Biometric Identification Systems: The use of AI for real-time biometric identification in public spaces is prohibited, except for purposes of public security and criminal investigations.
- Facial Recognition Systems: AI systems that use scraping techniques to extract facial images from public sources or closed-circuit television footage are explicitly banned.
- Emotional State Evaluation Systems: Systems designed to evaluate an individual’s emotional state using AI algorithms are prohibited.
- Data Governance and Cybersecurity: The legislation imposes obligations on AI developers and operators to adhere to robust data governance and cybersecurity standards to mitigate the risk of data breaches and misuse.
Source: BN Americas