An AI-driven model offers users great power when it comes to handling data, making sense of seemingly far-flung information. But great responsibility also falls on those who build and use those models. Visa's Melissa McSherry, global head of credit and data products, tells Karen Webster about the most vital considerations for AI practitioners. First, do no harm.
To steal a line from the Marvel Universe, “with great power comes great responsibility.”
To steal another line from the Hippocratic oath, penned centuries ago, “first, do no harm.”
Those two maxims extend into the world of artificial intelligence (AI): to the models built on machine learning and AI, and to the humans who design those models in the first place. After all, technology without guiding principles, without scrutiny of its initial goals and final outputs, is just data in and data out, and unintended consequences may accrue.
As Melissa McSherry, senior vice president and global head of credit and data products at Visa, told Karen Webster, the responsible use of AI that examines huge swathes of data (such as those housed in the payment giant’s own databases), “is a vital consideration for AI practitioners.”
The ultimate goal, as McSherry sees it, is applications in which the consumer benefits from the collection and use of that data. The benefits can materialize in any number of ways, she added. Perhaps the data sets are being examined to protect individuals and businesses from fraud. Perhaps the data might be useful in shortening eCommerce transaction times, smoothing the online checkout experience, or speeding underwriting decisions, where, as McSherry said, greater financial inclusion or more relevant consumer offers can result.
But with the complex models that take shape with the aid of machine learning and AI, ensuring those benefits do make the leap from goal to reality is no easy task. As McSherry told Webster, “When you are dealing with a relatively simple (modeling) system, it can be quite easy to understand what it is doing.” Within the labyrinthine constructs of AI-driven models, “it’s important that we really understand what our systems and our tools and our models are doing.”
That means, she said, making sure the outcomes and decisions of those systems are logical, reasonable and, just as important, unbiased. Against that backdrop, the person or group building those models must ensure the outcomes are interpretable. The ease and automation that mark these models and the technological tools housed within them, along with the creation of new tools, said McSherry, still require human understanding (and insight and control) along the way.
Within her own department, McSherry told Webster, AI finds its way into combating application fraud and aiding issuers in making credit-underwriting decisions, such as in credit line increases.
McSherry and her team, for example, conducted a disparate impact analysis (in which some groups of borrowers may face unintentional discrimination) to gain confidence that the net effect of these tools would be positive.
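To make the idea concrete: one common form of disparate impact analysis compares a model's approval rates across groups and flags any group whose rate falls below a set fraction of the best-treated group's rate (the "four-fifths rule" used in fair-lending and employment contexts). The sketch below is purely illustrative and is not a description of Visa's actual methodology; the group names and decisions are hypothetical.

```python
def disparate_impact_check(outcomes_by_group, threshold=0.8):
    """outcomes_by_group maps a group name to a list of 0/1 decisions
    (1 = approved). Returns the groups whose approval rate falls below
    `threshold` times the highest group's approval rate."""
    rates = {g: sum(d) / len(d) for g, d in outcomes_by_group.items()}
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Hypothetical model decisions for two borrower groups:
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 1],  # 7 of 8 approved (0.875)
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # 3 of 8 approved (0.375)
}
flagged = disparate_impact_check(decisions)
# group_b's rate (0.375) is below 0.8 * 0.875 = 0.70, so it is flagged
```

Checks like this examine the model's actual outputs, which is exactly McSherry's point: good modeling practice alone is not enough; the outcomes themselves must be audited.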
“It’s not sufficient to just demonstrate good practice in the building of the models — you have to also look at the actual outcome of the models,” she said.
Used judiciously, McSherry said, the huge boost in computing power and the algorithms that serve as building blocks of the modeling process can help users expose model bias, and avoid introducing bias in the first place.
As she noted, putting a biased data set into almost any algorithm will produce a biased outcome. And the person who is building the model has to adjust the data set, or adjust the model to compensate for bias in that data set.
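One simple, widely used way to "adjust the data set" along the lines McSherry describes is to reweight training examples so that each group contributes equally, offsetting skewed data collection. The sketch below is an assumed, generic approach (the same formula scikit-learn uses for "balanced" class weighting), not a statement of Visa's practice; the group labels are hypothetical.

```python
from collections import Counter

def balancing_weights(groups):
    """Given each training example's group label, return per-example
    weights so that every group's total weight is equal."""
    counts = Counter(groups)
    n_groups = len(counts)
    total = len(groups)
    # weight = total / (n_groups * group_count): overrepresented groups
    # are down-weighted, underrepresented groups are up-weighted
    return [total / (n_groups * counts[g]) for g in groups]

groups = ["a", "a", "a", "b"]  # group "b" is underrepresented
weights = balancing_weights(groups)
# each "a" example gets weight 4/6, the lone "b" example gets 2.0,
# so each group's total weight is 2.0
```

These weights would then be passed to the training algorithm (most libraries accept per-sample weights), so the model no longer simply mirrors the imbalance in the raw data.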
The Data Considerations
McSherry noted, too, that as technology has evolved, these models require a great deal of data to feed them, and now, more than ever, lots of data is moving through various ecosystems and changing hands.
A lot of time has been spent, and is still being spent, on de-identifying data so that it is not, in McSherry's words, "moving in the clear." With the growth in AI, she continued, there is also a growing economic incentive to share data. There is room, she said, for a more robust discussion, and for the development of frameworks, on how data can be shared across organizations. Based on Visa's consumer research, she added, there is a reasonably large gap between common practice and what consumers expect.
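To illustrate what de-identification can look like in practice: a common technique is keyed-hash tokenization, in which a raw identifier is replaced with a stable, non-reversible token before data is shared. The sketch below is a minimal example of that general technique; the key, field names, and record are hypothetical and do not describe any Visa system.

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me-regularly"  # held by the data owner, never shared

def tokenize(identifier: str) -> str:
    """Replace a raw identifier with a keyed-hash (HMAC-SHA256) token.
    The same input always yields the same token, so records can still be
    joined across data sets without exposing the underlying value."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"card_number": "4111111111111111", "amount": 42.50}
shared = {"card_token": tokenize(record["card_number"]), "amount": record["amount"]}
# `shared` can move between organizations for modeling;
# the raw card number never leaves the data owner
```

Because the hash is keyed, a recipient cannot brute-force identifiers back out of the tokens without the secret key, which is what keeps the data from "moving in the clear."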
In the end, AI, when used responsibly, can help make better predictions across any number of settings, well beyond the confines of battling fraud or underwriting credit. Picture the firm that wants to approach a consumer with an offer for international travel. Knowing that someone has traveled recently, and making informed predictions about whether they might be inclined to travel again, can make all the difference between a merchant or bank’s targeted offer or promotion being helpful or merely annoying.
Said McSherry: “I have yet to find an application where, if you have the data, AI doesn’t make it better.”