To address growing concerns about the ethical and societal implications of artificial intelligence (AI), Chile has introduced pioneering legislation aimed at regulating AI systems within its borders. Drawing inspiration from the European Union’s regulatory framework, the proposed legislation sets out obligations that scale with the risks posed by AI applications.
The bill, which was presented to the lower house of the Chilean Congress on May 7, distinguishes between categories of AI systems based on their potential risks and impacts. At the forefront of the legislation are measures designed to safeguard fundamental rights, mitigate potential harms to health, safety and the environment, and ensure consumer protection.
Under Chile’s proposed legislation, AI systems are classified into distinct risk categories:
- Unacceptable Risk Systems: These are AI systems deemed incompatible with the respect and guarantee of individuals’ fundamental rights, warranting their outright prohibition. Notably, this category encompasses manipulative systems, with exceptions for those used for authorized therapeutic purposes with informed consent. Systems that exploit vulnerabilities to induce harmful behaviors also fall under this classification.
- High-Risk Systems: This category encompasses AI systems posing significant risks to health, safety, fundamental rights, the environment and consumer rights. Stringent regulations are envisaged to govern the development, deployment and usage of such systems to mitigate potential adverse outcomes.
- Limited-Risk Systems: AI systems presenting an insignificant risk of manipulation, deception or error during interaction with individuals are categorized as limited-risk systems. These systems are subject to lighter regulatory scrutiny compared to higher-risk counterparts.
- AI Systems without Obvious Risk: Finally, the legislation recognizes AI systems that do not exhibit discernible risks, which carry the lightest regulatory burden.
Crucially, the bill outlines several prohibitions and obligations aimed at addressing specific concerns associated with AI deployment:
- Biometric Identification Systems: The use of AI for real-time biometric identification in public spaces is prohibited, except in cases involving public security or criminal investigations.
- Facial Recognition Systems: AI systems that scrape facial images from public sources or closed-circuit television are explicitly banned.
- Emotional State Evaluation Systems: Systems designed to evaluate an individual’s emotional state using AI algorithms are prohibited.
- Data Governance and Cybersecurity: The legislation imposes obligations on AI developers and operators to adhere to robust data governance and cybersecurity standards to mitigate the risk of data breaches and misuse.
Source: BN Americas