Jennifer Huddleston, technology policy research fellow at the Cato Institute, told PYMNTS that regulators must take a careful approach as they examine artificial intelligence (AI).
“We’ve seen a sudden advance in AI — and a sudden advance, too, in the conversations about policy surrounding artificial intelligence and machine learning technologies,” she said. The conversation has taken shape, she added, as ChatGPT and AI assistants used in online search functions have risen to prominence.
The approaches to policymaking have been bifurcated, she noted. In the EU, for example, there’s been more emphasis on a “regulatory approach” to technology, where permission (of sorts) from the government needs to be in place before technological innovation gets underway.
In the U.S., generally speaking, “we’ve seen the opposite,” said Huddleston. There’s a “permissionless approach” that lets innovators put forth their ideas, their products and services, and the market determines whether they succeed or fail.
Those approaches are crystallizing, in both the U.S. and Europe, when it comes to AI policy. In Europe, of course, there’s the AI Act, and the U.S. has been home to a lighter-touch approach, at least so far.
Asked by PYMNTS whether we might be headed toward a future where the U.S. Congress becomes the de facto regulator of the AI industry as a whole, Huddleston cautioned that “we have to take a step back first because these are very broad categories [of technology]. I have a lot of caution around calls to say ‘regulate AI writ large.’” AI already figures in everyday activities, she noted, from chatbot interactions to returning an item through a retailer’s website, so curbing its reach entirely could have a significant impact. Regulation, she added, needs to focus on “specific … clear-cut harm” to ensure that AI’s beneficial aspects are not adversely affected. The concept of harm, and the collection and use of data, may look different in agriculture, for example, than it does in education.
Turning to data and its use, Huddleston said key questions arise over how companies use that information and whether they should be held liable for how it might be used, especially when bias produces “misinformation.” Many of the harms people are concerned about may already have existing legal remedies and may not require additional regulation.
“If there is going to be any kind of AI regulation, there should at least be some kind of formal delegation and some guardrails from Congress,” said Huddleston, who added that “given the wide range of applications there, you could see several different agencies trying to rush to claim authority over these new ideas.”
The question is whether AI is going to be biased, said Huddleston, “whether implicitly or explicitly, because of the data it is trained on.” One approach to correcting bias is to train models on larger and larger data sets so that they are less likely to make certain presumptions about, say, images or text.
“There’s the question of whether AI could actually improve data privacy and security,” she noted to PYMNTS, “as it may be able to identify behavior that is incongruent with someone’s usual password practices.”
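To make that idea concrete, here is a minimal sketch, not drawn from the article or any specific product, of how anomaly detection might flag login behavior that is incongruent with a user’s usual password practices. It uses scikit-learn’s IsolationForest as one off-the-shelf technique; the features, values and thresholds are entirely hypothetical.

```python
# Illustrative sketch only; all feature names and values are hypothetical.
# An IsolationForest learns a user's typical login pattern and flags
# attempts that deviate sharply from it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical historical logins: [hour of day, seconds spent typing the
# password, failed attempts before success]. Real data would come from logs.
usual_logins = np.column_stack([
    rng.normal(9, 1.5, 500),    # usually logs in around 9 a.m.
    rng.normal(4.0, 0.8, 500),  # typically types the password in ~4 seconds
    rng.poisson(0.2, 500),      # rarely mistypes
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(usual_logins)

# A 3 a.m. login with near-instant entry and repeated failures is
# incongruent with the learned pattern on every feature.
new_attempts = np.array([
    [9.2, 4.1, 0],   # consistent with past behavior
    [3.0, 0.5, 6],   # anomalous
])
print(model.predict(new_attempts))  # 1 = normal, -1 = flagged as anomalous
```

A flagged attempt would not prove compromise; in practice a system like this would trigger a secondary check, such as step-up authentication, rather than an outright block.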
She told PYMNTS, “If we only look at regulation from the point of view of the technologies that we have, rather than considering how innovation may come into play, we might accidentally place roadblocks to things that could wind up being better long term when it comes to data privacy and security.”