As businesses embrace generative artificial intelligence (AI), regulators and industry groups are scrambling to keep pace, calling for balanced oversight that fosters innovation while managing risks. From financial services to aviation, experts argue that existing regulatory powers could be leveraged to govern AI applications, potentially sidestepping the need for sweeping new legislation in the short term.
Executives at companies rapidly adopting generative AI (GenAI) are bracing for a wave of new regulations that could reshape the AI landscape, according to a recent KPMG survey of 225 senior business leaders.
The study found that 83% of companies plan to increase their GenAI investments over the next three years, but they’re doing so with a wary eye on the evolving regulatory environment. A significant 63% of executives anticipate more stringent data privacy requirements in the near future, prompting 60% to actively review and update their data handling practices.
“With the growing adoption of GenAI, prioritizing risk management and governance, with a focus on cybersecurity and data privacy, is crucial for innovation and retaining stakeholder trust,” said Emily Frolick, trusted imperative leader at KPMG, in a news release.
The specter of increased regulation is not without consequence. More than half (54%) of the surveyed leaders expect AI regulation to drive up organizational costs. Businesses are already grappling with implementation challenges, with only 16% reporting they are well equipped to deploy GenAI.
Despite these concerns, companies are forging ahead with GenAI integration. The technology is already shaping competitive positioning for 52% of respondents and opening new revenue opportunities for 47%. However, the push for innovation is tempered by risk mitigation efforts, with 79% focusing on cybersecurity and 66% on data quality.
As legislators worldwide grapple with the rapid advancement of AI technology, businesses are taking proactive steps. The survey found that 60% of organizations are implementing stringent data privacy measures, while others are deploying ethical AI frameworks in anticipation of potential regulation.
The American Bankers Association (ABA) and 21 state banking associations have called for a balanced approach to regulating AI in the financial sector. In a letter to the U.S. Treasury Department, the groups emphasized the need for federal preemption of state AI laws and updated regulatory guidance.
The letter, authored by ABA Vice President Ryan T. Miller, argued that while AI offers significant opportunities for the banking industry, its deployment “should only take place in an environment that carefully considers potential risks with appropriate mechanisms in place to manage those risks.”
The associations made two recommendations. First, they suggested that “any new horizontal federal law pertaining to AI preempt state requirements and clearly exclude banks from any duplicative obligations.” This approach aims to avoid the “inconsistent levels of consumer protection and significant compliance burden” seen in the privacy landscape.
Second, the letter called for “updated model risk management guidance from the prudential regulators to clarify expectations in the wake of changes to the ecosystem, but only after an appropriate notice and comment period.”
The letter noted that banks have used AI responsibly for decades within mature risk management frameworks. However, it said that the rapid emergence of GenAI presents new challenges, particularly in cybersecurity and third-party risk management.
The associations argued that any new regulations should be “industry-focused, risk-based, and tied to use case” rather than focusing on the technology itself. They also stressed the importance of international cooperation in developing AI governance frameworks to ensure interoperability across sectors and jurisdictions.
As AI rapidly evolves, policy experts are calling on U.S. federal agencies to use their existing regulatory authorities to govern the technology rather than wait for new legislation.
In a recent blog post for the think tank Council on Foreign Relations, Jack Corrigan, a senior research analyst, and Owen J. Daniels, an Andrew W. Marshall fellow, both at Georgetown University’s Center for Security and Emerging Technology (CSET), argued that many regulators already have the statutory powers to oversee AI systems in their respective domains.
“Using existing authorities can help regulators address what are likely to be a range of highly sector-specific AI applications,” the authors wrote. This approach could be faster and more effective than creating a new AI-specific agency or waiting for comprehensive legislation.
Corrigan and Daniels pointed to the Federal Aviation Administration as an example, noting its authority to set safety standards for aircraft systems, including those incorporating AI.
However, challenges remain. The post acknowledged that “regulators will need to overhaul software assurance procedures, testing and evaluation standards, and other processes to accommodate AI’s unique challenge.”
While some agencies, like the Department of Health and Human Services, have already assessed their AI governance capabilities, the authors urged others to do the same.
“Armed with this knowledge, policymakers can start setting guardrails for the AI systems under their purview,” they said.