In the age of machine learning, when algorithms consume, process and ultimately deliver a verdict on large amounts of data, it can be challenging to understand and explain what’s going on behind the scenes — and why the outcome is what it is.
Featurespace Product Manager Richard Graham told PYMNTS that model explainability is critical in financial services, and indeed in any other vertical that relies on advanced technologies to reach everyday decisions that affect people’s daily lives. At a high level, model explainability is simply the concept of being able to understand what is happening as inputs are, eventually, turned into outputs.
Many artificial intelligence (AI) solutions produce a decision, but they often don’t shed any light on the logic behind it, said Graham.
Take applying for a job. You submit a resume online with a broad range of details that paint a picture of career progression and aspirations. Then comes an e-mail from HR stating simply that the application will not be moving forward. That’s it; there’s no explanation of why the decision was made.
“Model explainability takes inputs, processes the data, and then is able to give outputs about how it came to the conclusion,” Graham told PYMNTS. “This is the useful and valuable information businesses can then use to justify the accuracy of the technology’s decision making.”
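To make that concrete, the short sketch below shows one common way a score can be traced back to its inputs: a linear risk model that returns not only the score but each feature’s contribution to it. The feature names, weights and sigmoid scoring here are illustrative assumptions for the sake of the example, not Featurespace’s actual model.

```python
# A minimal sketch of explainable scoring: a linear fraud-risk model whose
# output decomposes into per-feature contributions. All names and weights
# below are hypothetical, chosen only to illustrate the idea.
import math

WEIGHTS = {
    "logins_from_new_locations": 1.8,   # hypothetical learned weights
    "spend_vs_typical_ratio": 0.9,
    "failed_auth_attempts": 1.2,
}
BIAS = -4.0

def score_with_explanation(features: dict) -> dict:
    """Return a risk score plus the contribution each input made to it."""
    contributions = {
        name: WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS
    }
    logit = BIAS + sum(contributions.values())
    risk = 1.0 / (1.0 + math.exp(-logit))  # squash to a 0-1 score
    return {"risk_score": risk, "contributions": contributions}

if __name__ == "__main__":
    result = score_with_explanation({
        "logins_from_new_locations": 2,
        "spend_vs_typical_ratio": 3.5,
        "failed_auth_attempts": 1,
    })
    print(f"risk score: {result['risk_score']:.2f}")
    # Sorting the contributions answers the "why," not just the "what"
    for name, value in sorted(result["contributions"].items(), key=lambda kv: -kv[1]):
        print(f"  {name}: {value:+.2f}")
```

Production systems use far richer models and explanation techniques, but the principle is the same: the decision arrives together with the evidence that produced it.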
This level of transparency — understanding the “why” and not just the “what” — is critical in financial services, where banks and other financial institutions (FIs) take in billions of data points across millions of customers, tied to hundreds of millions of transactions.
An Effective Safeguard
Increased attempts at fraud and money laundering make this more important than ever. Legacy technology is good at spitting out explainable rules, Graham said — but all FIs, amid the great digital shift, are seeking to introduce more machine learning and behavioral analytics to better serve their customers.
That’s easier said than done.
“Some of the barriers that I’ve seen for FIs as they are adopting new technology are tied to trust,” he said. “Can you trust these new models and algorithms to derive meaningful insights from the significant information coming in — and importantly, can you trust that they’re better than the existing rules that are already in the legacy technology?”
Explainability can boost those trust levels and reduce some of the false positives that are a hallmark of the existing, legacy processes, he said.
The Goals of Model Explainability
A well-designed model will show the user all the data it used to arrive at a conclusion, Graham said. For instance, it will examine whether a bank app user has logged in multiple times from different locations — and determine whether spending patterns have changed. Digging deeper into the cascading volumes of pandemic-era online payments demands that FIs more carefully weigh evidence tied to red flags.
That “will give the investigator lower false positives, and they are going to better understand the different types of fraudulent activity that is coming through, and they are getting more complete information to base their decisions on,” Graham said.
Ultimately, this will benefit end customers too. For instance, beyond flagging account behavior that is “several standard deviations” outside typical spending, a bank can point out to that customer that someone masquerading as them has logged in from a location from which they’ve never transacted.
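As a rough illustration of that kind of evidence, the sketch below checks a transaction against a customer’s own history for exactly those two signals: a spend amount several standard deviations above their typical spending, and activity from a location the account has never transacted from. The data, threshold and function name are hypothetical, not drawn from Featurespace’s product.

```python
# Illustrative only: explain why a transaction looks suspicious by comparing
# it against the customer's own spending history and known locations.
from statistics import mean, stdev

def explain_alert(past_amounts, past_locations, txn_amount, txn_location, z_threshold=3.0):
    """Return plain-language reasons a transaction was flagged, if any."""
    reasons = []
    mu, sigma = mean(past_amounts), stdev(past_amounts)
    if sigma > 0:
        z = (txn_amount - mu) / sigma
        if z > z_threshold:
            reasons.append(
                f"Spend of {txn_amount:.2f} is {z:.1f} standard deviations above your typical {mu:.2f}"
            )
    if txn_location not in past_locations:
        reasons.append(f"Activity from {txn_location}, where this account has never transacted")
    return reasons

if __name__ == "__main__":
    history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0]
    print(explain_alert(history, {"London", "Cambridge"}, 900.0, "Lagos"))
```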
In 2022 and beyond, banks will be able to leverage risk and fraud management as a competitive and strategic advantage, he said.
“Fraud will force every single bank to fight for its reputation, and FIs are already starting to convey that they have the fraud controls in place to better protect customers,” he said. “Fraud prevention technology is going to be a differentiator in 2022.”