In a move to regulate the expanding use of artificial intelligence (AI) in federal agencies, the White House has announced stringent measures aimed at safeguarding Americans’ rights and ensuring safety.
The directive, issued by the Office of Management and Budget (OMB) on Thursday, requires federal agencies to adopt concrete safeguards by December 1, as reported by Reuters.
Under the new guidelines, agencies utilizing AI technologies are obligated to monitor, assess, and test the impacts of AI on the public. Additionally, efforts must be made to mitigate the risks of algorithmic discrimination while providing transparent insights into the government’s AI usage. This entails conducting thorough risk assessments and establishing operational and governance metrics to ensure accountability and transparency.
President Joe Biden had previously signed an executive order in October, invoking the Defense Production Act to compel developers of AI systems posing risks to national security, economy, public health, or safety to share safety test results with the U.S. government prior to public release.
The White House emphasized that implementing these safeguards is crucial, particularly where AI deployment could affect Americans’ rights or safety. The government will also publish detailed public disclosures of its AI usage to ensure transparency and accountability.
Notable provisions include the ability for air travelers to opt out of Transportation Security Administration (TSA) facial recognition screenings without facing delays, and the requirement for human oversight in federal healthcare systems where AI supports diagnostic decisions.
Generative AI, which has raised both excitement and concerns, particularly regarding job displacement and potential societal upheavals, is also addressed in the directive. Government agencies are now mandated to release inventories of AI use cases, report metrics on AI usage, and disclose government-owned AI code, models, and data, provided they do not pose significant risks.
The Biden administration underscored the ongoing utilization of AI across various federal agencies. For instance, the Federal Emergency Management Agency (FEMA) employs AI to assess structural hurricane damage, while the Centers for Disease Control and Prevention (CDC) utilizes AI for disease spread prediction and opioid use detection. Additionally, the Federal Aviation Administration (FAA) leverages AI to enhance air traffic management in major metropolitan areas, ultimately improving travel efficiency.
Source: Reuters