Sixteen prominent companies leading Artificial Intelligence (AI) development have pledged to global leaders to prioritize the safe advancement of the transformative technology. The commitment comes as rapid innovation outpaces regulatory frameworks, raising concerns about emerging risks.
According to a report by Reuters, the pledge was made during a global meeting, where industry giants such as Google, Meta, Microsoft and OpenAI, alongside firms from China, South Korea and the United Arab Emirates, joined forces.
This coalition was supported by a broader declaration from influential entities including the Group of Seven (G7) major economies, the European Union (EU), Singapore, Australia and South Korea. The virtual meeting, hosted by British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol, served as a platform to underscore the importance of AI safety, innovation and inclusivity.
Emphasizing the urgency of the matter, President Yoon highlighted how AI safety is essential to societal wellbeing and democracy, citing concerns over risks such as deepfake technology. According to South Korea’s presidential office, the agreement reached at the meeting made those three priorities — safety, innovation and inclusivity — its centerpiece.
Participants stressed the significance of interoperability between governance frameworks, proposed the establishment of a network of safety institutes and advocated for engagement with international bodies to strengthen collective efforts in addressing AI-related risks effectively.
Among the companies committing to ensuring AI safety were notable names such as Zhipu.ai, supported by China’s tech giants Alibaba, Tencent, Meituan and Xiaomi, as well as the UAE’s Technology Innovation Institute, Amazon, IBM and Samsung Electronics, as reported by Reuters. These entities pledged to publish safety frameworks for assessing risks, steer clear of models where risks couldn’t be adequately mitigated and uphold principles of governance and transparency.
Commenting on the declaration, Beth Barnes, founder of METR, a group dedicated to promoting AI model safety, underscored the necessity of international consensus to define “red lines” beyond which AI development could pose unacceptable risks to public safety, according to Reuters.
Source: Reuters