Proposed Federal AI Oversight Plan Faces Hurdles, Experts Say

Sen. Mitt Romney’s call for enhanced federal oversight of artificial intelligence (AI) faces hurdles, according to experts, who say the technology’s rapid advancement and wide-ranging applications pose significant challenges to effective regulation.

Many observers agree that AI can pose risks, but they note how difficult it is to determine which AI systems warrant strict monitoring. They stress the need to balance encouraging new technological development with keeping those risks in check.

“The evidence is clear — existing safeguards for widely available LLMs [large language models] can be easily bypassed,” Daniel Christman, co-founder of the AI cybersecurity firm Cranium, told PYMNTS. “Red teams and malicious actors have repeatedly exploited vulnerabilities to create and spread threats to safety and security. For instance, there have been cases where LLMs were manipulated to output instructions for creating harmful devices.”

Currently, the United States has limited federal legislation specifically addressing AI. In March, the European Parliament passed the Artificial Intelligence Act, the world’s first comprehensive horizontal legal framework for AI. The legislation establishes uniform EU rules on data quality, transparency, human oversight and accountability.

Call for Action on AI

Romney and his colleagues, Sens. Jerry Moran, Jack Reed and Angus S. King Jr., sent a bipartisan letter to congressional leaders on Tuesday (April 16). In it, they acknowledged AI’s benefits, such as improving Americans’ quality of life, but warned that the technology could fuel disinformation, fraud, bias and privacy problems and disrupt elections and jobs.

To manage these risks, the senators proposed four options for regulating AI, including establishing a committee to align efforts across government agencies by drawing on the existing resources and expertise of the Department of Commerce and the Department of Energy. The lawmakers also suggested creating a new agency dedicated to AI oversight.

The plan could face bureaucratic obstacles. Nicholas Reese, a professor at the New York University Center for Global Affairs, told PYMNTS that the proposal’s key challenge will be clearly defining which types of AI and which specific use cases it covers. He noted that AI spans a wide range of technologies, making it difficult to determine how each should be regulated.

Reese explained that the plan would give agencies new oversight powers, which Congress would have to establish through new legislation.

“Commerce and NIST [National Institute of Standards and Technology] are not set up as ‘oversight’ agencies as the plan envisions, and they would require adjustments in their authority, which have to happen in statute,” he said.

“Second, creating a specific new federal government agency for national security risks to AI is an extreme step,” Reese added. “It will mean that AI (however it is defined) implementation for the DOD [Department of Defense] and the IC [intelligence community] will now have an additional layer of bureaucracy for approval. It is going to add complexity to organizations that need to be more agile.”

Reese said that although AI will improve sectors such as biotechnology, oversight should not fall to a newly established U.S. government agency. As an alternative, he pointed to the Department of Homeland Security’s existing Countering Weapons of Mass Destruction Office.

“They would be ideally positioned to oversee and mitigate risks of AI convergence with Weapons of Mass Destruction,” he said. 

Jon Clay, vice president of threat intelligence at the cybersecurity firm Trend Micro, told PYMNTS he believes a thoughtful balance is necessary for regulation.

“Government oversight should not hinder technology advances unless those advances pose a material threat to humanity or U.S. critical infrastructure,” Clay said. “But there also shouldn’t be a total hands-off approach and allow private industry to develop anything it wants. … There needs to be a balance to allow technological advances to occur without hindering or limiting them too much.”

Clay said global competition in AI development must be considered: “Other nation-states are likely to be moving forward quickly with this same technology, and the U.S. should not restrict its own developments to ensure parity of progress.”

Is AI Really a Threat?

As PYMNTS has reported, experts vary significantly in their assessment of the risks AI might pose, highlighting the active debate over AI’s potential effects on humanity and business.

For example, a March study by the Forecasting Research Institute surveyed researchers, AI experts and elite predictors known as “superforecasters” to gather their opinions on the dangers of AI.

The study found that AI experts are generally more concerned about AI risks than superforecasters. Despite grave warnings about an impending AI takeover, many AI professionals maintain a more measured perspective on the technology.

When asked about the current threat posed by AI, Clay maintained a pragmatic perspective.

“We seem to be in the early stages of this technology, and the threat at this point is mostly adversaries’ use in enhancing cyberattacks like phishing, deepfakes and misinformation campaigns,” Clay said.

He acknowledged that while “Skynet isn’t a reality at this time,” the potential benefits of AI still seem to outweigh the risks.

AI is rapidly shaping various sectors, but it also poses unique threats that could impact global security, according to Christman.

“AI is indeed a dual-edged sword,” Christman explained. “On one side, we see remarkable benefits — improvements in healthcare diagnostics, financial forecasting and much more. However, on the flip side, there’s a darker aspect where AI can be misused in ways that can amplify traditional security threats.”

Just as the internet revolutionized communication and commerce while introducing new forms of cyberattacks and security breaches, AI carries similarly transformative yet potentially hazardous capabilities, Christman said.

“Think about cyberattacks today and then imagine them powered by AI. They could be more sophisticated, faster, and harder to detect,” he said.

Christman emphasized the importance of preemptive action in the form of robust regulatory frameworks to address these concerns.

“As the technology evolves, so does the potential for its misuse. We need frameworks that are not only stringent but also adaptable to preemptively mitigate these risks,” he said.