A UAE official said artificial intelligence (AI) needs the same level of oversight as weapons-grade uranium.
UAE Minister of State for Artificial Intelligence, Digital Economy and Remote Work Applications Omar Al Olama said there is a need for a global coalition to oversee AI, The National reported Friday (May 19).
“Even if we were the most progressive, most proactive country on Earth and put in place the best guardrails and safeguards, if [AI] goes off on the wrong tangent in China, or the U.S., or the U.K. — or anywhere else — because of our interconnectedness, it is going to harm our people,” Al Olama said during The National’s Connectivity Forum.
Al Olama said the global community needs the same sort of mechanisms for AI that it has for detecting whether a country is enriching uranium, even when that country does not disclose it, according to the report.
“We need to have the same level of rigor, the same level of oversight on AI,” Al Olama said.
Other government officials and industry leaders have also said there is a need for regulation around AI.
Related: FTC Monitoring Competition In The Artificial Intelligence Field
OpenAI CEO Sam Altman told a U.S. Senate subcommittee Tuesday (May 16) that the technology needs oversight to prevent possible harm.
“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” Altman said. “We want to work with the government to prevent that from happening.”
About two weeks earlier, on May 4, the White House underscored the importance of ensuring AI products are safe and secure as it announced new initiatives around the technology and as officials met with CEOs of leading companies in the field.
Vice President Kamala Harris said in a statement released after the meeting: “As I shared today with CEOs of companies at the forefront of American AI innovation, the private sector has an ethical, moral and legal responsibility to ensure the safety and security of their products.”
In March, Tesla, Twitter and SpaceX owner Elon Musk was among the first signatories to an open letter published by AI watchdog group Future of Life Institute on the potential dangers of AI.