A Look at U.S. Initiatives on AI Use and Its Development

President Biden issued an executive order on artificial intelligence (AI) on Monday (Oct. 30), aiming to ensure that America leads in harnessing the potential of AI while managing its risks.

Two days later, on Wednesday (Nov. 1), the Biden Administration followed up the executive order with guidance on how the federal government is to use AI, along with plans to fight AI-enabled fraud and to establish an AI safety institute.

The executive order and guidance come as nations worldwide are scrambling to set regulatory frameworks in place for AI and its smarter sibling, generative AI. PYMNTS breaks down both U.S. announcements.

AI Safety and Security

The executive order mandates that developers of AI systems share safety test results and critical information with the U.S. government. Companies developing AI models with serious risks to national security, economic security, or public health must notify the government and share red-team safety test results.

The National Institute of Standards and Technology will set rigorous standards for safety testing, and the Department of Homeland Security will apply these standards to critical infrastructure sectors. Measures will also be taken to protect against the risks of using AI to engineer dangerous biological materials and to detect AI-generated fraudulent content.

Protecting Americans’ Privacy

The executive order emphasizes the protection of Americans’ privacy in the context of AI. It calls on Congress to pass bipartisan data privacy legislation and prioritizes federal support for privacy-preserving techniques, including those that use AI. The order aims to strengthen privacy-preserving research and technologies, evaluate how agencies collect and use commercially available information, and develop guidelines for evaluating the effectiveness of privacy-preserving techniques.

The president’s order provides clear guidance to prevent discrimination in AI algorithms used by landlords, federal benefits programs and federal contractors. 

Consumer Protection and Healthcare

The executive order promotes the responsible use of AI in healthcare, including the development of affordable and safe AI-enabled healthcare practices. It also aims to leverage AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools.

Supporting Workers and Promoting Innovation

The order calls for a report on AI’s potential labor-market impacts and options for strengthening federal support for workers facing labor disruptions caused by AI.

Global Collaboration and Responsible Government Use of AI

Recognizing the global nature of AI challenges and opportunities, the Biden-Harris Administration will collaborate with other nations to support safe, secure and trustworthy AI deployment. This includes expanding engagements with international partners, accelerating the development of vital AI standards and promoting the responsible development and deployment of AI abroad. 

Lastly, the order focuses on the responsible and effective government use of AI. It provides guidance for agencies’ use of AI, streamlines AI product and service acquisition and accelerates the hiring of AI professionals across the government.

Vice President Kamala Harris also announced a series of new U.S. initiatives to advance the safe and responsible use of AI, building upon the executive order.

Establishing the United States AI Safety Institute (US AISI)

The United States AI Safety Institute (US AISI) will be established inside the Department of Commerce. It will develop guidelines, tools, benchmarks, and best practices for evaluating and mitigating dangerous capabilities of AI. The US AISI will collaborate with peer institutions internationally and partner with outside experts from civil society, academia, and industry.

Draft Policy Guidance on Government Use of AI

The Biden-Harris Administration is releasing its first draft policy guidance on the use of AI by the U.S. government. This draft policy outlines concrete steps to advance responsible AI innovation, increase transparency and accountability, protect federal workers, and manage risks from sensitive uses of AI. It includes specific safeguards for uses of AI that impact the rights and safety of the public.

Funders Initiative With AI-Related Philanthropic Organizations

Vice President Harris announced a new funders initiative with philanthropic organizations related to AI. Ten leading foundations have collectively committed more than $200 million in funding toward initiatives that align with the Vice President’s priorities. The initiative aims to ensure AI protects democracy and rights, drives AI innovation in the public interest, empowers workers, improves transparency and accountability of AI, and supports international rules and norms on AI.

Countering AI-Generated Fraud and Deception

The administration will work to counter fraudsters who use AI-generated voice models to target vulnerable individuals, and it will host a virtual hackathon to build AI models that can detect and block unwanted robocalls and robotexts. It also calls on all nations to support the development and implementation of international standards for content authentication to increase global resilience against deceptive or harmful AI-generated or manipulated media.

Developing Responsible and Rights-Respecting Practices

The Biden-Harris Administration intends to work with the Freedom Online Coalition to develop a pledge incorporating responsible and rights-respecting practices in government development, procurement and use of AI. The pledge aims to ensure that AI systems are developed and used in a manner consistent with applicable international law and with democratic institutions and processes.
