Navigating the Multi-Headed Opportunity That Is Gen AI

Generative artificial intelligence (AI) is a tool — not a sentient sci-fi creature.

But as the tool is increasingly commercialized across today’s operating landscape and integrated into the workflows and outputs of various sectors, the risk that AI will run afoul of existing technology laws is growing as well.

That’s why nations around the world are racing to enact effective legal frameworks that can both contain and support the innovative technology’s game-changing potential.

“One of the questions that is immediately raised [around AI] is how do you draw the line between human-generated and AI-generated content,” John Villasenor, professor of electrical engineering, law, public policy and management at UCLA and faculty co-director of the UCLA Institute for Technology, Law and Policy, explained to PYMNTS as part of “The Grey Matter,” the monthly TechReg TV series presented by AI-ID.

Yet implementing disclosure requirements around the provenance of generated content is not without its complexities.

“At the extremes, it’s easy to classify things as one or the other,” Villasenor said. “But if you look at, for example, grammar suggestions on an academic paper that somebody’s writing, to the extent that those grammar suggestions might be enabled by AI, I think most of us would agree that that shouldn’t convert the paper into an AI-generated paper.”

That’s why, if there are going to be required disclosures around content generated by AI, as the European Union hopes to mandate with its AI Act, there must also be clear rules or procedures for deciding what does and does not meet the legal definition of AI-generated content.

“I haven’t seen a clear, easily implementable proposal that would define [AI-generated content] across a wide range of media types,” he said.

Navigating the Dynamic Challenges of Regulating AI

The fact that disruptive technologies can be used for many purposes, some good and some bad, is “not new to generative AI,” Villasenor explained.

That’s why, as nations and nation blocs like the EU move forward with their own distinct regulatory approaches, it is important to step back and recognize that even the most well-intentioned prohibitions, once encoded into law, might unintentionally impede beneficial uses of AI.

As an example, Villasenor pointed to the latest version of the AI Act’s ban on emotion recognition in settings such as educational institutions, noting that it could be “very useful” if an AI tutoring system could recognize whether somebody is confused or puzzled as they grapple with a new concept.

“I think all reasonable solutions should be on the table in terms of how to engage in the dialogue on these very complicated technologies,” he said. “And I would add that with AI, it’s particularly complicated because, by definition, AI systems learn and adapt their behavior as they get more data. Even the creators of AI systems won’t necessarily know all the details of exactly what sort of computations are going on inside the code after the AI has evolved.”

Experimenting Within Regulatory Sandboxes

Long-established patterns of exclusion have produced biases in data involving things like access to financial services, and the use of that biased data for AI training is one of the more pressing concerns around scaling up the technology.

“I think almost everybody who works in AI is aware, at least to some extent, of bias,” Villasenor said. “And I think, without in any way discounting the importance of addressing it, the good news is that designers of AI systems are very well aware of the need to be cognizant of it and try to address it.”

Beyond AI developers, regulators must also strike a balance between fostering innovation and safeguarding against AI’s potential risks and biases. To navigate these challenges, regulators and companies have explored constructing regulatory sandboxes.

These sandboxes provide a controlled environment in which companies can test AI applications under regulatory supervision, letting regulators and companies collaborate on developing effective rules.

However, as Villasenor explained, regulatory sandboxes may not always be sufficient to address the diverse concerns of regulators, companies and consumers, who may be motivated by different and even competing incentives when it comes to AI’s development.

There are also ongoing concerns about regulatory capture, in which powerful entities shape the implementation of AI regulations to their own advantage, further complicating the regulatory landscape.

With the potential for AI to reshape industries and society, robust and adaptable regulatory frameworks must be established to ensure responsible and equitable use of the technology, Villasenor said. It is the “how” of getting there that remains the tricky part.