Summer is over, but the generative artificial intelligence (AI) revolution continues to march on.
Many of the bleeding-edge large language models (LLMs) from firms like OpenAI, Google and Anthropic were updated in August, yet outside of China, little real progress has been made on enacting regulatory guardrails around the new technology’s applications and capabilities.
However, that might be about to change.
Senate Majority Leader Chuck Schumer (D-NY) is kicking off his series of bipartisan “AI Insight Forums” next Wednesday (Sept. 13), reportedly promising a “supercharged” highway to AI regulation when the U.S. Senate returns from its summer recess, per a Monday (Sept. 4) Fox News report.
“In the twenty-first century, we cannot behave like ostriches in the sand when it comes to AI. We must treat AI with the same level of seriousness as national security, job creation and our civil liberties,” Schumer said.
The AI forums are designed to educate American lawmakers about the realities of AI’s rapidly advancing potential. The first meeting next week is slated to include tech sector leaders such as Elon Musk of Tesla and X, Meta CEO Mark Zuckerberg, Google CEO Sundar Pichai, NVIDIA CEO Jensen Huang, OpenAI CEO Sam Altman, Microsoft co-founder Bill Gates and current Microsoft CEO Satya Nadella, as well as others with high-level knowledge of both tech and policy.
This forum comes as Britain on Monday laid out the five objectives of its upcoming AI Safety Summit, taking place Nov. 1-2.
Those objectives are:

- To create a shared understanding of the risks posed by frontier AI and the need for action.
- To establish a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks.
- To develop appropriate measures that individual organizations should take to increase frontier AI safety.
- To identify areas for potential collaboration on AI safety research, including evaluating model capabilities and developing new standards to support governance.
- To showcase how ensuring the safe development of AI will enable AI to be used for good globally.
“Let’s get ahead of [AI], rather than trying to react after it’s already … full speed ahead,” Senator Mike Rounds, a South Dakota Republican, said earlier this year.
His statement underscores the fact that, historically speaking, U.S. lawmakers take their time when responding with policy frameworks meant to manage the applications of revolutionary technologies.
The U.S. has not passed a major comprehensive framework regulating the technology sector in decades, and observers believe the tech industry, for its part, would like to keep it that way.
But as increasingly sophisticated generative AI systems continue to propagate and grow at a hyper-rapid clip, experts, academics, lawmakers and even some tech executives are urging the U.S. government to move as fast as it can in drafting regulatory policy.
Still, as PYMNTS has written, tackling a problem as broad, dynamic and rapidly evolving as AI will require a deep level of knowledge.
“I don’t think that we can expect any one single institution to have the kind of knowledge and capacity to address the varied problems [of AI regulation],” Cary Coglianese, founding director of the Penn Program on Regulation, told PYMNTS.
That’s because generative AI’s myriad use cases present an equal number of challenges for regulators.
Looking to the past might help. Television guardrails, for example, were based on previous ones in place for radio and telephones, and regulation typically happens faster when new innovations resemble older ones. At the same time, railroads were the first U.S. industry to be subject to federal regulation, and it took about 60 years for lawmakers to rein them in.
“The nature of AI’s uses vary widely, and many of those uses fall into categories that, first of all, already have regulators. … There’s no question that the National Highway Traffic Safety Administration is going to be a better regulator of autonomous automobile technology than some kind of new startup AI regulator would be,” Coglianese told PYMNTS.
Television and nuclear energy were the fastest innovations to become federally regulated, at five and four years into their respective lifespans. We are not even 11 months into the commercialization of generative AI.
Shaunt Sarkissian, founder and CEO at AI-ID, told PYMNTS that industry players should approach lawmakers with the attitude of, “We know this is new, we know it’s a little bit spooky, let’s work together on rules, laws and regulations, and not just ask for forgiveness later, because that will help us grow as an industry.”
While it is possible that U.S. lawmakers will defy history and pass a comprehensive framework for generative AI, it is more likely that an effective legal framework will bubble up from the approaches taken by individual states.
At least 25 states have introduced AI-related legislation this year, with measures passing in 14 of them. As PYMNTS reported, by allowing states to experiment with regulations tailored to their specific needs, the federal government can gain valuable insights into the real-world implications of different regulatory approaches to inform the creation of cohesive national regulations that balance safety, innovation and economic growth.