In response to President Biden’s October executive order on artificial intelligence (AI), the Biden administration announced on Tuesday its first steps toward establishing essential standards and guidance for the secure deployment of generative AI. The initiative, led by the Commerce Department’s National Institute of Standards and Technology (NIST), aims to shape industry standards around AI safety, security, and trust.
Commerce Secretary Gina Raimondo emphasized the importance of developing guidelines that would position America as a leader in the responsible development and use of rapidly evolving AI technology. The effort invites public input until February 2, focusing on the testing methods essential to ensuring the safety of AI systems.
NIST’s undertaking involves creating comprehensive guidelines for evaluating AI, facilitating the development of industry standards, and establishing testing environments for AI systems. The agency’s request for input extends to both AI companies and the public, focusing particularly on generative AI risk management and mitigating the risks associated with AI-generated misinformation.
Generative AI, capable of producing text, photos, and videos in response to open-ended prompts, has sparked both excitement and concerns in recent months. The technology’s potential to render certain jobs obsolete, influence elections, and surpass human capabilities raises significant questions about its responsible deployment.
President Biden’s executive order directed agencies to set standards for testing, addressing not only AI-related risks but also chemical, biological, radiological, nuclear, and cybersecurity risks. NIST is actively working on guidelines for testing, exploring areas where external “red-teaming” – a practice borrowed from cybersecurity – can be most beneficial for AI risk assessment and management.
Red-teaming has been employed for years in cybersecurity, using simulated attacks to surface new risks. NIST’s focus on setting best practices for red-teaming aligns with the broader goal of establishing a robust framework for the safe and responsible development of generative AI technology.
This initiative marks a significant stride toward ensuring that AI deployment aligns with ethical considerations and national security imperatives while guarding against potential risks. It reflects the administration’s commitment to staying at the forefront of AI development while prioritizing safety and responsibility.
Source: Reuters