Department of Homeland Security First Gov’t Agency to Embrace AI

The Department of Homeland Security has become the first federal agency to embrace artificial intelligence (AI).

The DHS on Monday (March 18) unveiled a “roadmap” for its planned use of AI in three pilot projects.

In one, Homeland Security Investigations (HSI) will test AI to detect fentanyl and fight child sexual exploitation. In the second, the Federal Emergency Management Agency (FEMA) will use AI to help communities develop hazard mitigation plans. In the third, U.S. Citizenship and Immigration Services (USCIS) will use AI to train immigration officers.

“One cannot ignore it,” DHS Secretary Alejandro Mayorkas told The New York Times in a Monday report.

“And if one isn’t forward-leaning in recognizing and being prepared to address its potential for good and its potential for harm, it will be too late and that’s why we’re moving quickly.”

The report said the DHS will work with companies like OpenAI, Anthropic and Meta on the pilot programs.

According to the report, DHS plans to hire 50 AI experts to protect critical infrastructure from AI-generated attacks and to stop the use of the technology to create things like child sexual abuse material or biological weapons.

In addition, the department will use chatbots to train immigration officials to conduct interviews with refugees and asylum seekers. It will also use chatbots to glean information about communities across the country to help them come up with disaster relief plans.

DHS will report on the results of its pilots by year’s end, Eric Hysen, the department’s chief information officer and head of AI, told the Times. He said the agency chose OpenAI, Anthropic and Meta to experiment with a variety of tools and will also use cloud providers Microsoft, Google and Amazon for the pilots.

“We cannot do this alone,” said Hysen. “We need to work with the private sector on helping define what is responsible use of a generative AI.”

The efforts by the DHS follow last year’s launch of a range of White House initiatives designed to govern the use of AI, including the United States AI Safety Institute (AISI), a Department of Commerce program developing technical guidelines for regulators.

Last year also saw President Joe Biden issue an executive order designed to promote safe AI development, requiring the developers of the “most powerful AI systems” to share their safety test results and other crucial information with the government.

Meanwhile, PYMNTS last week examined the debate about the threats posed by AI, noting that some AI specialists think gloomy headlines about the technology are overblown.

“Simply put, the machine needs humans — and will for quite some time,” Shawn Daly, a professor at Niagara University, told PYMNTS in an interview.

“We provide not only the infrastructure but also critical guidance the machine can’t do without. As for evil influences utilizing AI to nefarious ends, we’ve managed the nuclear age pretty well, which I find encouraging.”