Zest AI, which is developing software that it claims will help wring the bias out of lending, just got a multimillion-dollar vote of confidence.
Zest on Tuesday (Oct. 20) announced that the global venture capital and private equity firm Insight Partners will invest $15 million in the firm.
The Los Angeles-based software startup said it will use the capital infusion for research and development, as well as for marketing its Model Management System, which is designed to remove racial, gender and other bias from the credit and loan approval process.
In particular, Zest, formerly known as ZestFinance before rebranding last year, contends that its software can help correct for unintended bias in loan approval algorithms, an issue that has become a major concern in the industry.
In one of the most high-profile cases of machine bias, Goldman Sachs was hit with a major backlash after it launched the new Apple credit card. Applications by women were disproportionately rejected, including an application from the wife of Apple co-founder Steve Wozniak.
“The Zest team has deep domain expertise in both lending and explainable AI [artificial intelligence], and this has led to strong product-market fit; however, what has excited us is the opportunity to tilt the lending landscape toward equity and inclusion,” said Insight Partners Managing Director Deven Parekh in a press release.
Zest contends that its fair lending software can significantly boost loan approval rates while also reducing risk.
One lender was able to reduce the approval gap between white and minority borrowers by nearly a third without any increase in portfolio risk, the company noted in a press release.
Zest’s lending approval software goes well beyond the traditional method of relying on credit scores. The company has developed a way to vet AI loan approval models through what it calls “adversarial debiasing,” which involves pitting two machine learning models against each other: one predicts creditworthiness, while the other tries to infer protected attributes such as race and gender from the first model’s outputs.
“Competition in this game drives both models to improve their methods until the predictor can no longer distinguish the race or gender outputs of the first model, resulting in a final model that is accurate and fair,” Zest noted in an explainer piece on its website.
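That description can be made concrete with a small sketch. The Python example below (using PyTorch) is a hypothetical illustration of adversarial debiasing in general, not Zest’s actual system: the synthetic data, the network sizes and the penalty weight `lam` are all assumptions made for the demo. A credit model is trained to predict repayment, an adversary is trained to guess a protected attribute from the credit model’s score, and the credit model is penalized whenever the adversary succeeds, pushing it toward scores that reveal little about the protected attribute.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: 8 features, a repayment label y, and a protected attribute z.
# (All hypothetical; a real lender would use its own features and labels.)
n, d = 2000, 8
X = torch.randn(n, d)
z = (torch.rand(n, 1) > 0.5).float()                               # protected attribute
y = ((X[:, :1] + 0.5 * z + 0.3 * torch.randn(n, 1)) > 0).float()   # label correlated with z

credit_model = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_credit = torch.optim.Adam(credit_model.parameters(), lr=1e-2)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # weight on the fairness penalty (an assumption for this sketch)

for step in range(500):
    # 1) Train the adversary to predict the protected attribute from the credit score.
    score = credit_model(X).detach()
    adv_loss = bce(adversary(score), z)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Train the credit model to be accurate AND to fool the adversary:
    #    penalizing a successful adversary strips information about z from the score.
    score = credit_model(X)
    credit_loss = bce(score, y) - lam * bce(adversary(score), z)
    opt_credit.zero_grad()
    credit_loss.backward()
    opt_credit.step()

print(f"adversary loss after training (higher means z is harder to recover): {adv_loss.item():.3f}")
```

In a setup like this, the strength of the fairness penalty (here `lam`) controls the trade-off between predictive accuracy and how little the final score reveals about protected attributes, which is what Zest’s “accurate and fair” framing refers to.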