A UAE official said artificial intelligence (AI) needs the same level of oversight as weapons-grade uranium.
UAE Minister of State for Artificial Intelligence, Digital Economy and Remote Work Applications Omar Al Olama said there is a need for a global coalition to oversee AI, The National reported Friday (May 19).
“Even if we were the most progressive, most proactive country on Earth and put in place the best guardrails and safeguards, if [AI] goes off on the wrong tangent in China, or the U.S., or the U.K. — or anywhere else — because of our interconnectedness, it is going to harm our people,” Al Olama said during The National’s Connectivity Forum.
Al Olama said the world community needs the same sort of mechanisms for AI that it has for uranium enrichment — mechanisms that enable it to determine whether a country is enriching uranium even if that country does not disclose it, according to the report.
“We need to have the same level of rigor, the same level of oversight on AI,” Al Olama said.
Other government officials and industry leaders have also said there is a need for regulation around AI.
OpenAI CEO Sam Altman told a U.S. Senate subcommittee Tuesday (May 16) that the technology needs oversight to prevent possible harm.
“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” Altman said. “We want to work with the government to prevent that from happening.”
About two weeks earlier, on May 4, the White House underscored the importance of ensuring AI products are safe and secure as it announced new initiatives around the technology and as officials met with CEOs of leading companies in the field.
Vice President Kamala Harris said in a statement released after the meeting: “As I shared today with CEOs of companies at the forefront of American AI innovation, the private sector has an ethical, moral and legal responsibility to ensure the safety and security of their products.”
In March, Tesla, Twitter and SpaceX owner Elon Musk was among the first signatories to an open letter published by AI watchdog group Future of Life Institute on the potential dangers of AI.