Governments around the world are racing to regulate artificial intelligence (AI) tools as the technology rapidly advances, posing new challenges and risks. From Australia to the United Nations, national and international governing bodies are taking steps to establish guidelines and laws for AI, Reuters reported.
Let’s take a closer look at the latest developments in different countries and organizations:
Australia plans to require search engines to draft new codes preventing the sharing of child sexual abuse material created by AI and the production of deepfake versions of that material. The country’s internet regulator aims to curb the misuse of AI in these sensitive areas.
In Britain, the Financial Conduct Authority is consulting with the Alan Turing Institute and other legal and academic institutions to enhance its understanding of AI. The competition regulator is also examining the impact of AI on consumers, businesses, and the economy to determine if new controls are necessary.
China has implemented temporary regulations requiring service providers to undergo security assessments and obtain clearance before releasing mass-market AI products. Several Chinese tech firms, including Baidu Inc and SenseTime Group, have launched AI chatbots to the public after receiving government approval.
The European Union (EU) is planning regulations and has called for a global panel to assess the risks and benefits of AI, similar to the Intergovernmental Panel on Climate Change (IPCC). EU lawmakers have agreed to changes in a draft of the bloc’s AI Act, with the biggest sticking point being facial recognition and biometric surveillance: some lawmakers advocate a total ban, while EU countries seek exceptions for national security, defense, and military purposes.
France’s privacy watchdog, CNIL, is investigating possible breaches related to ChatGPT, an AI chatbot. France’s National Assembly has approved the use of AI video surveillance during the 2024 Paris Olympics, despite concerns raised by civil rights groups.
The Group of Seven (G7) leaders have acknowledged the need for governance of AI and immersive technologies. They have agreed to have ministers discuss the technology as part of the “Hiroshima AI process” and report the results by the end of 2023. G7 digital ministers have also recommended adopting “risk-based” regulation on AI.
Ireland’s data protection chief has said that generative AI needs to be regulated properly but that governing bodies should not rush into prohibitions, instead striking the right balance between innovation and the preservation of human rights.
Israel is also seeking input on AI regulations to strike a balance between innovation and human rights. The country has published a draft AI policy and is collecting public feedback before making a final decision.
Italy’s data protection authority plans to review artificial intelligence platforms and hire AI experts to ensure compliance with privacy rules. ChatGPT was temporarily banned in Italy over suspected privacy breaches.
Japan expects to introduce AI regulations by the end of 2023 that are likely to be closer to the U.S. approach than to the stricter rules planned in the EU. The country’s privacy watchdog has warned OpenAI, the developer of ChatGPT, not to collect sensitive data without people’s permission.
Spain’s data protection agency is investigating potential data breaches by ChatGPT and has requested the EU’s privacy watchdog to evaluate privacy concerns surrounding the AI tool.
The United Nations (UN) has recognized the importance of regulating AI and held its first formal discussion on the topic. UN Secretary-General Antonio Guterres supports the creation of an AI watchdog modeled on the International Atomic Energy Agency and has announced plans to establish a high-level AI advisory body to review AI governance arrangements.
In the United States, Congress has held hearings on AI, and the White House has announced voluntary AI commitments signed by companies including Adobe, IBM, and Nvidia, covering steps such as watermarking AI-generated content. The U.S. Federal Trade Commission has opened an investigation into OpenAI over claims that it violated consumer protection laws. Separately, a Washington, D.C. district judge has ruled that artwork generated by AI without human input cannot be copyrighted under U.S. law.
As governments worldwide grapple with the regulation of AI, it is clear that the technology’s impact on society and the need for responsible governance are at the forefront of discussions. The race to regulate AI tools reflects the complexity of the task and the importance of striking the right balance between innovation and protecting individuals’ rights.
Source: Reuters