Technological innovations bring with them vast economic opportunities, but also an equal number of questions.
These include questions about their most effective use, their impact both inside and outside an organization, their regulation and their capabilities, to name just a few.
For no innovation is this more true than generative artificial intelligence (AI), which continues to evolve at a rapid pace.
And as financial enterprises and payment players alike look to define their implementation strategies for the revolutionary — and still unregulated — technology, several fundamental questions surrounding AI and its mechanics remain unanswered.
PYMNTS has identified six, the first of which is: how exactly should businesses prepare to introduce AI systems into their organizations?
See also: Tailoring AI Solutions by Industry Key to Scalability
Just a handful of giant tech companies, and a few well-funded startups, are behind the vast majority of advancements in AI — but the limited field doesn’t mean that choosing a foundational model or application is an easy decision.
The optimal way to integrate AI systems into workflows is different from the way other software tools are embedded into existing tech stacks, but the first step is the same: undergoing a self-assessment to define the desired use case.
AI tools, like businesses themselves, are goal-oriented, meaning they need to be pointed toward a real business problem, not just kept on the shelf as a shiny object or leveraged for PR.
“You don’t want to boil the ocean and try to solve for everything at once,” Corcentric CEO Matt Clark told PYMNTS in June. “Firms need to look at [transforming their existing processes] as a kind of crawl-walk-run mentality to get to where they need to go.”
It is also crucial to erect guardrails around the AI system so that it adheres to governance and compliance standards, and so that the results produced are auditable and verifiable.
Once an AI system has been deployed, the next unanswered question raises its head: just what will AI’s impact on the workforce be?
Read also: AI’s Future Is Becoming Indistinguishable From the Future of Work
As PYMNTS has reported, AI’s fundamental phase shift in the productivity gains available to businesses has already spurred a reallocation of resources, including staffing cuts, across sectors.
The Japanese government has called on its domestic tech companies to be human-centric when developing or using generative AI, while 25% of CEOs believe that AI will lead to job cuts, with leaders in media and entertainment, banking, insurance and logistics the most likely to expect them.
“In 2024, we’re going to shift from a world where it was a risk to try using generative AI to become more efficient, into a world where there is actually a bigger risk of being left behind if you don’t try it,” James Clough, chief technology officer and co-founder of Robin AI, told PYMNTS during a conversation for the “AI Effect” series.
But beyond enterprise use cases, many questions remain unanswered about the fundamentals of the AI models themselves.
“We always overestimate the first three years of a technology, and severely underestimate the 10-year time horizon,” Bushel CEO Jake Joraanstad told PYMNTS in December.
Read more: Nothing Transformative About OpenAI’s Copyright Abuses, Says New York Times Lawsuit
Generative AI systems are only as good as the data they’re trained on, and the largest players are in an arms race to acquire the best training data sets — at times relying on the tech sector ethos of moving fast and breaking things.
And while the data landscape is becoming increasingly competitive and expensive as tech firms large and small alike ramp up their efforts to build differentiated content libraries, companies are increasingly being sued by the copyright holders of the content they are using to train their models.
The most prominent lawsuit is the one filed by The New York Times against both Microsoft and OpenAI’s family of operating subsidiaries.
“I think [the lawsuit is] going to put a shot across the bow of all platforms on how they’ve trained their data, but also on how they flag data that comes out and package data in such a way that they can compensate the organizations behind the training data,” Shaunt Sarkissian, founder and CEO at AI-ID, told PYMNTS.
“The era of the free ride is over,” he said.
But the questions around AI systems’ data provenance, and their legal weight, are only beginning to be answered.
And those questions lead to the next issue adding opacity to the AI operating landscape: how the innovation will be regulated.
Read more: Is the EU’s AI Act Historic or Prehistoric?
While there are calls for global regulation coming from all quarters, so far the European Union (EU) and China are the only major market economies to get the ball rolling on regulating AI.
But concerns around AI’s capacity to spread false information are adding urgency for governments to act.
The shape that global regulation takes, or any framework passed by the U.S., will have huge ramifications for the AI sector going forward.
Any such frameworks will also need to be shaped around, and designed to shape, the answers to the two remaining questions hanging over AI’s future. Will the technology ever become, as many of its proponents claim, smarter than humans, and if so, how will we control it? And, fundamental to a notoriously expensive-to-develop technology, how will it be monetized?
“There’s a long way to go before there’s a futuristic version of AI where machines think and make decisions. … Humans will be around for quite a while,” Tony Wimmer, head of data and analytics at J.P. Morgan Payments, told PYMNTS in March. “And the more that we can write software that has payments data at the heart of it to help humans, the better payments will get.”