Artificial intelligence (AI) may not be as mysterious as people think — but it is as pervasive.
Generative AI capabilities, i2c CEO and chairman Amir Wain tells PYMNTS, have “brought in another level of abstraction” to existing predictive AI and machine learning (ML) capabilities.
“The power of good technology is its ability to hide all the complexities and simplify the interface for the user,” Wain said, emphasizing that as generative AI becomes more commercialized, this “simplification” will allow the technology to become “easier to implement and more efficient.”
AI tools now advance in the span of days and weeks, but behind the flywheel effect of generative AI suddenly being integrated everywhere is a long history of experimentation, research and investment.
It mirrors the broader environment of digitization: both have reached an inflection point where the tools can be seamlessly integrated.
“This didn’t happen overnight,” said Wain. “There’s been a lot of work going on in AI, and now the product is at a stage where it can be deployed commercially across various applications.”
Yet Wain cautioned against rushing full speed ahead and embracing the new technology, particularly for financial services businesses.
“Because of the quality [and regulatory] requirements, the financial services sector should use an augmented AI approach,” he said.
A Profound Change in the Way Information Is Activated
Generative AI has already spurred a profound change in how end users access, activate and interact with information across data-rich environments.
Wain sees specific areas where the technology can drive efficiencies and accelerate process optimization beyond historical limitations.
“Compliance is a sweet spot,” he said. “It is difficult to ensure 100% compliance [within financial services], but by leveraging generative AI capabilities, businesses can both reduce the cost of ensuring compliance and increase the quality of compliance.”
He gave as an example i2c’s customer contact centers, where complaints and disputes are regulated.
By tapping future-fit AI solutions, including natural language processing (NLP) tools, his firm can process millions of calls and identify key problem areas in real time, rather than having to parse through an immense amount of data manually.
Still, he said, there remains a human in the loop who handles quality control and ensures that the process remains compliant. That’s because if bad data becomes the source of an AI response, it can be further propagated by serving as an informational foundation for future AI responses.
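To make that pattern concrete, the minimal Python sketch below shows one way a human-in-the-loop step can sit between an NLP classifier and any compliance outcome. The classifier logic, class names and confidence thresholds are purely illustrative assumptions, not a description of i2c’s systems.

```python
# Hypothetical sketch: NLP-labeled call transcripts pass through a human
# review queue before anything is recorded as a compliance outcome.
# All names and thresholds here are illustrative placeholders.
from dataclasses import dataclass
from typing import List


@dataclass
class CallRecord:
    transcript: str
    label: str = "unlabeled"      # e.g. "complaint", "dispute", "routine"
    confidence: float = 0.0
    needs_human_review: bool = True


def classify(call: CallRecord) -> CallRecord:
    """Toy stand-in for an NLP model scoring a call transcript."""
    text = call.transcript.lower()
    if "dispute" in text or "chargeback" in text:
        call.label, call.confidence = "dispute", 0.90
    elif "complain" in text or "unhappy" in text:
        call.label, call.confidence = "complaint", 0.85
    else:
        call.label, call.confidence = "routine", 0.95

    # Human in the loop: anything flagged as a complaint or dispute, or
    # scored with low confidence, is routed to a reviewer.
    call.needs_human_review = (
        call.label in ("complaint", "dispute") or call.confidence < 0.80
    )
    return call


calls = [
    CallRecord("I want to dispute this charge on my statement"),
    CallRecord("Just checking my balance, thanks"),
]
classified = [classify(c) for c in calls]
review_queue: List[CallRecord] = [c for c in classified if c.needs_human_review]
print([(c.label, c.needs_human_review) for c in classified])
```

The point of the sketch is the routing rule, not the toy classifier: automated labeling does the bulk triage across millions of calls, while regulated categories always land in front of a person.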
Augmenting, Not Replacing, Human Outputs
As PYMNTS has reported, firms looking to capture an innovative edge face a further complication: at the center of many business concerns around integrating generative AI solutions lie ongoing questions about the integrity of the data and information fed to AI models, as well as the provenance and security of those inputs.
“There is a new skillset that’s required,” Wain said. “If you have two or three people ask ChatGPT the same question, they may get different answers depending on the nuances of their queries.”
He explained that similar to how novice Excel users often take “17 steps” to get somewhere, whereas an advanced user more fluent in the program’s macros can do the task in seconds, activating the full potential of AI tools will require a similar grasp of the technology’s subtleties.
“If not a skillset, it will at least be an exposure that people need to understand — how to best prompt the AI to get the appropriate answer,” Wain said.
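Wain’s point about query nuance is easy to see in practice. The short sketch below assumes the OpenAI Python client (v1.x) with an API key in the environment; the prompts and the model name are placeholders chosen for illustration, not recommendations.

```python
# Minimal sketch of how prompt phrasing changes what an LLM returns.
# Assumes the OpenAI Python client (v1.x); model name is a placeholder.
from openai import OpenAI

client = OpenAI()

vague_prompt = "Tell me about chargebacks."
scoped_prompt = (
    "In two sentences, explain to a cardholder what a chargeback is "
    "and the first step to file one with their issuing bank."
)

for prompt in (vague_prompt, scoped_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    # The same model, prompted differently, returns answers that differ
    # in scope, length and usefulness to the end user.
    print(prompt, "->", response.choices[0].message.content[:120], "...")
```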
He said that AI already offers ready-to-use advances across fraud detection and prevention, customer acquisition and retention, and real-time personalization of experiences.
“You want to make sure that it’s cost efficient, so you can add more value to the product, and that at the end of the day, integrating AI remains convenient,” Wain said.
He also flagged generative AI’s tendency to “hallucinate,” or provide fabricated results, as a particular problem area for financial services firms.
“Based on the quality standard and compliance requirements in financial services, we’ve got to be careful how we use this technology in a compliant manner,” he said. “We cannot be at the bleeding edge of technology dealing with people’s money and funds … we need to put a compliant framework around the tool.”
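One common way such a “compliant framework” is sketched is a grounding check that keeps unverified generative output away from customers. The example below is a hypothetical illustration under that assumption; the function, source names and sample policy text are invented for the sketch and do not describe i2c’s approach.

```python
# Hypothetical guardrail sketch: a generative answer is released only if it
# can be grounded in an approved source; otherwise it goes to a human.
# Source names and policy text are made-up illustrations.
from typing import Optional

APPROVED_SOURCES = {
    "dispute_window": "Cardholders may dispute a transaction within 60 days of the statement date.",
    "fee_schedule": "The monthly maintenance fee is waived with a minimum balance on deposit.",
}


def ground_answer(draft_answer: str) -> Optional[str]:
    """Return the draft only if it overlaps an approved source; else None."""
    draft_words = set(draft_answer.lower().split())
    for source_text in APPROVED_SOURCES.values():
        shared = draft_words & set(source_text.lower().split())
        if len(shared) >= 5:  # crude overlap check standing in for real verification
            return draft_answer
    return None  # no grounding found: escalate to a human reviewer


draft = "Cardholders may dispute a transaction within 60 days of the statement date."
released = ground_answer(draft)
print(released if released else "Escalated to compliance review")
```

The crude word-overlap test is only a stand-in; the design choice it illustrates is that the generative tool drafts, a verification layer grounds, and anything that cannot be grounded defaults to human review rather than reaching the customer.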
Wain noted that, as with all emergent technologies, it’s important not to let the “rogue users hurt the technology” and for businesses to stay ahead of the risks.
Still, he said, the next decade is shaping up to be “super exciting.”