The story of artificial intelligence to this point – at least when it comes to financial institutions – is arguably more about failure than success, even as the promise of the technology continues to grow. But success cannot come without stumbles, and as Akli Adjaoute, CEO of Brighterion (a Mastercard company), discussed with Karen Webster, the lessons learned from AI's failures – and from those shortcomings and mistakes – could lead to future gains in customer service, fraud prevention and revenue growth.
As documented by research into artificial intelligence from PYMNTS and Brighterion, among the main AI problems for financial institutions is a lack of understanding. Executives and managers often confuse artificial intelligence (which is capable of unsupervised learning) with its less sophisticated but close cousin, machine learning (which is capable of supervised learning). In fact, as Webster and Adjaoute discussed, confusion about what AI really is can lead to delays in deployments: some 15 percent of financial institutions that have not yet implemented AI (but want to do so) report difficulties getting buy-in from executives.
Journey to Yes
That journey to yes, Adjaoute said, is hampered because “there is a lot of confusion about AI. Executives today need to see it to believe it.” And it’s more than just that: “A lot of people confuse AI with just the algorithms,” he told Webster. “But it’s way more than that.”
Indeed, it is.
Many companies still think AI is mostly about a machine learning algorithm. The truth is that AI is all about engineering and deploying scalable, resilient, intelligent applications into real-time production environments.
Additionally, most financial institutions seem to think AI is mostly about fraud prevention. But as other research from PYMNTS and Brighterion has shown, artificial intelligence can also provide hyper-personalized, real-time customer service – along with security – via the deployment of so-called smart agents.
The larger promise of AI’s capabilities does seem to be getting out, however. The PYMNTS-Brighterion research shows that 41.1 percent of commercial banks are “very” or “extremely” interested in adopting smart agents. Not only that, but 45 percent of decision makers working in fraud detection are interested in adopting smart agents.
Data Concerns
Questions about data – the fuel for any system based on artificial intelligence – can also lead to confusion about AI, and to delays in deployments. The assumption that data earmarked for use in AI systems must be perfectly clean and labeled can create anxiety within an organization about having to sift through multiple databases and get all that information in order – a massive undertaking that could even require hiring temporary workers or diverting employees from other tasks. But another lesson is that the data does not have to be perfect to work. As Adjaoute put it: “You should use the data the way it is.”
A lack of transparency also hinders the deployment of artificial intelligence, and can be considered a source of failure – at least so far in the AI story as it relates to financial institutions. PYMNTS research found that 42 percent of FIs said AI models are not transparent enough.
Think of it this way: That means supporters of AI within an organization cannot find solid ways to explain the advantages and benefits the technology can bring. Sure, probably everyone working in digital commerce and payments knows about AI from science fiction, to say nothing of whitepapers, conferences, seminars and college classes – but that’s not enough, according to Adjaoute.
Human Factor
“You need to provide something that a normal human being who does not have a computer background [can understand]” when trying to sell the AI idea to higher-ups, he said. It goes deeper than that, however. The big appeal of artificial intelligence, of course, is its intelligence – its ability to make decisions and spot potential problems (such as fraud) from a distance. But human nature is human nature, and “we as human beings always like to understand why a decision has been made, why I am saying ‘yes’ to this and ‘no’ to that,” Adjaoute told Webster.
Settling concerns over transparency requires serious consideration of AI's return on investment. That seems obvious, but it stands as a valuable lesson, given how new technology can sometimes dazzle people enough that they forget the hard financial questions. Whether or not that was the case in 2019 when it came to financial institutions' use of AI, the next lesson certainly applies: AI needs high-level supporters, including executives and board members, Adjaoute noted.
Artificial intelligence has a long way to go before it becomes a daily part of FI operations – but AI is coming one way or another. That much seems about as certain as anything can get. The failures so far – the confusion, the lack of transparency and other factors – will serve to pave the way toward future deployments and use cases for AI.