It won’t be medicine — or manufacturing, human resources, commerce or logistics. No, when it comes to the biggest immediate impact that artificial intelligence (AI) will have upon the world, financial services will likely take that honor — a looming prize that provides fresh opportunity to consider how machines will overcome bias.
Anyone who reads the news or follows science fiction knows how important AI is becoming to business, and how much change it will almost certainly bring to civilization. However, Sunil Madhu, founder and chief strategy officer of digital identity verification and predictive analytics firm Socure, told Karen Webster during a recent podcast interview that bias remains a vital issue for the future of AI and machine learning.
After all, he said, “human bias cannot be trained out,” especially since the majority of human decision making, according to the latest scientific findings, is governed by the biological, chemical and electrical mechanics of the subconscious. People don’t even know what they know, or how they end up knowing what they know. Nor are humans fully capable of understanding how certain computers arrive at decisions.
“You really don’t know what happens inside the layers of a neural network,” Madhu said, referring to computing technology that mimics the operations of the human brain. “If you are not aware of the biases in training data, then you really don’t know what happens inside. That’s where you get unintended consequences.”
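To see how that can happen, consider a minimal sketch in Python (entirely hypothetical, and no relation to Socure's actual models): when the data used to train a lending model is gathered in a skewed way, the model learns to lean on a proxy attribute that has no causal connection to the outcome at all.

```python
# Hypothetical illustration: a biased collection process teaches a model
# to rely on a proxy attribute, even though only income drives repayment.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)        # a demographic proxy, 0 or 1
income = rng.normal(0, 1, n)
repaid = rng.random(n) < 1 / (1 + np.exp(-2.0 * income))  # income-only truth

# Skew the training sample: mostly good outcomes from group 1,
# mostly bad outcomes from group 0, plus a thin random slice of the rest.
keep = ((group == 1) & repaid) | ((group == 0) & ~repaid) | (rng.random(n) < 0.3)
X = np.column_stack([income, group])[keep].astype(float)
y = repaid[keep].astype(float)

# Plain logistic regression, fit by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

print("weight on income:", round(float(w[0]), 2))
print("weight on group :", round(float(w[1]), 2))  # large, despite no causal role
```

The math here does exactly what it was asked to do; the bias rides in with the data, which is precisely the “unintended consequences” Madhu warns about.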
That’s hardly a theoretical concern — not anymore. Not with the services provided by banks and other players in the financial industry used for a host of life-changing decisions, not the least of which involve gaining access to affordable mortgages and business loans.
So, what can be done, short of throwing up our hands and surrendering to the growing wisdom of the machines? Madhu takes a view that seems more pragmatic and optimistic than those of many other observers: that constant testing, data accumulation and the refinement of the mathematical rules governing those smart machines can protect against unwanted, illegal or destructive biases.
“The issue of bias is a well-known problem,” he told Webster. “It’s a well-researched problem. It’s not like people don’t know there’s bias. At the end of the day, any bias can be eliminated depending on how large the [data] sample gets over time.”
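One way to read that claim is statistical: the error that random sampling introduces into an estimate shrinks roughly as one over the square root of the sample size. A few lines of Python (using an invented 7 percent default rate, purely for illustration) make the point:

```python
# Sampling error fades as the sample grows, roughly like 1/sqrt(n).
import numpy as np

rng = np.random.default_rng(1)
true_rate = 0.07  # an assumed, illustrative default rate

for n in (100, 10_000, 1_000_000):
    estimate = (rng.random(n) < true_rate).mean()
    print(f"n={n:>9,}  estimate={estimate:.4f}  error={abs(estimate - true_rate):.4f}")
```

The caveat, and it dovetails with the constant testing Madhu describes, is that sheer volume washes out noise; a skew in how the data was gathered still has to be hunted down deliberately.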
He also puts faith in the ability of machines to learn how to operate without human biases. He offers the example of the movie “WarGames” — the 1983 Cold War classic in which the supercomputer responsible for launching nuclear weapons goes more than a bit off track — as a demonstration of his idea.
As you may recall (the movie is now more than three decades old, after all), the computer in the movie finally learns that global thermonuclear war is unwinnable after playing round upon round of tic-tac-toe as the countdown to World War III continues. Any kid stuck indoors during a rainstorm learns within a few tries that victory in that game is all but impossible against an attentive opponent, and the defense computer arrives at the same lesson.
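That playground intuition is easy to verify by brute force. The short minimax sketch below (a textbook game-search algorithm, not anything from the film) checks every possible line of play and confirms that tic-tac-toe is a draw when neither side blunders:

```python
# Minimax over all of tic-tac-toe: a root value of 0 means that
# perfect play by both sides can only end in a draw.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """+1 if X wins under perfect play, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w == "X": return 1
    if w == "O": return -1
    if "." not in board: return 0
    results = [value(board[:i] + player + board[i + 1:],
                     "O" if player == "X" else "X")
               for i, cell in enumerate(board) if cell == "."]
    return max(results) if player == "X" else min(results)

print(value("." * 9, "X"))  # prints 0: nobody wins, the film's lesson
```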
In the real world, DeepMind’s AlphaGo Zero program mastered the famously difficult game of Go after being given nothing but the rules and left to its own devices. According to Madhu, such a technique, letting the machine learn on its own instead of submitting to human supervision, renders moot the danger of “human bias creeping in.” The machine, after all, “trained itself.”
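A miniature version of that self-play idea fits in a couple of dozen lines. The sketch below uses generic tabular Q-learning (a vastly simpler technique than anything behind the Go result) on a toy Nim game: the program is handed nothing but the rules and, by playing against itself, discovers the optimal strategy without a single human example.

```python
# Self-play sketch: 10 sticks, each turn take 1-3, whoever takes the
# last stick wins. The learner is given only these rules.
import random

random.seed(0)
Q = {s: {a: 0.0 for a in range(1, min(3, s) + 1)} for s in range(1, 11)}
ALPHA, EPS = 0.1, 0.2

for _ in range(50_000):
    s = 10
    while s > 0:
        actions = list(Q[s])
        a = (random.choice(actions) if random.random() < EPS
             else max(actions, key=Q[s].get))
        if s - a == 0:
            target = 1.0                      # took the last stick: a win
        else:
            target = -max(Q[s - a].values())  # the opponent moves next
        Q[s][a] += ALPHA * (target - Q[s][a])
        s -= a                                # hand the position over

for s in range(1, 11):
    best = max(Q[s], key=Q[s].get)
    print(f"{s:2d} sticks left -> take {best} (value {Q[s][best]:+.2f})")
```

Run it, and the learner reports what game theorists already know: leave your opponent a multiple of four sticks and the win is yours.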
That may be true, but such luminaries as Elon Musk and the late Stephen Hawking have warned about the dangers of AI — might not a machine that gains sentience decide that computers and robots are better off in charge of the planet instead of humans?
Well, if one looks past the fact that people are pretty good at messing up their own lives, politics, economies and societies with little help from AI, one might find reason to be optimistic — at least for now, and at least according to Madhu.
For one, sentience for machines has not yet arrived. AI is being used to build expertise in various sectors — medicine, law, transportation — and that trend looks likely to continue. Granted, the same machine learning that can lead to more efficient and profitable financial services is being used in military technology.
However, so far, the humans remain in charge — and can still apply rules that, say, prevent drones from killing civilians.
“I’m not worried that AI will kill us anytime soon,” said Madhu.
In fact, when talking about AI and financial services, he is pretty sunny. Banks, he told Webster, are “as large as software development companies,” and financial trades live and die by algorithms. Greater financial inclusion and broader access to financial services, he said, are certain to follow from the sharper decision making that machine learning and AI make possible.
“Those are things that are going to have global effects,” he said.