The threat of a data breach is now an ever-present part of life for customers and the banks that serve them.
Some 3,813 data breaches were reported across a range of industries in the first six months of 2019, for example, collectively exposing 4.1 billion customer records.
This has become a growing problem for financial institutions (FIs), which must not only protect existing customers from fraud but also guard against bad actors armed with those 4.1 billion stolen credentials. Account opening fraud is a favorite tactic among such cybercriminals, many of whom rely on these credentials to pose as legitimate customers.
Banks thus need to accurately determine potential customers’ legitimacy as soon as those customers attempt to sign up. Most importantly, they must do so without creating friction for legitimate users.
FIs are also offering more ways for customers to enroll, such as via mobile: 36 percent of banks and credit unions now provide end-to-end mobile account opening for their users. Fraud tends to follow users wherever they go, however, and overall mobile fraud rose 117 percent in 2018.
Customers can easily become frustrated when asked to provide additional verification in mobile channels, however. Banks cannot afford to ignore this frustration if they wish to remain competitive, especially with 76 percent of users preferring to open accounts on mobile, according to one recent study.
FIs are thus turning to a host of technologies to protect customers from account opening fraud and other mobile channel schemes. Machine learning (ML) and artificial intelligence (AI) have been used by banks for fraud protection for several years, but these technologies are only now beginning to take on leading roles, displacing traditional knowledge-based authentication (KBA) methods like PINs and passwords.
AI, ML Innovations Necessary to Stop Account Opening Fraud
Account opening fraud can be difficult to detect because it is often tied to data breaches at other banks or FinTechs, meaning banks do not yet have the data necessary to recognize illegitimate customers. This is where AI and ML technologies can come in handy.
Banks can use ML and AI to better analyze and understand customer behavior, giving them more robust views of how legitimate customers act. This approach relies on algorithms to identify users based on available data, and the amount and type of data FIs can collect for fraud protection are critical to these technologies’ success.
Such algorithms can also be used to assess fraud risk before a customer’s application is accepted or declined. Banks often pair these tools with human review, providing a final check on applicants who have been flagged as high risk, as illustrated in the sketch below.
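That workflow can be made concrete with a small example. The following is a minimal sketch, not any bank’s or vendor’s actual system: it trains a toy model on synthetic data and routes each application to auto-approval, manual review, or decline based on a predicted fraud probability. Every feature name, threshold, and data value is a hypothetical stand-in.

```python
# Illustrative only: a minimal account opening risk-scoring sketch.
# Feature names, thresholds, and the synthetic training data are
# hypothetical; a real deployment would use the bank's own labeled history.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical behavioral/device signals captured during sign-up,
# normalized to the 0-1 range to match the toy training data.
FEATURES = ["email_age", "device_seen_before", "typing_speed", "applications_from_ip_24h"]

# Stand-in training data: rows of feature values, 1 = confirmed fraud, 0 = legitimate.
rng = np.random.default_rng(0)
X_train = rng.random((500, len(FEATURES)))
y_train = (X_train[:, 3] > 0.8).astype(int)  # synthetic label rule for demo purposes

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

def route_application(application: dict) -> str:
    """Score one application and decide how to route it.

    Low-risk applicants pass silently (no extra friction), high-risk
    applicants go to a human analyst, and extreme scores are declined.
    """
    x = np.array([[application[f] for f in FEATURES]])
    risk = model.predict_proba(x)[0, 1]  # predicted probability of fraud
    if risk < 0.30:
        return "auto-approve"
    if risk < 0.80:
        return "manual review"
    return "decline"

print(route_application({
    "email_age": 0.02,                # brand-new email address
    "device_seen_before": 0.0,        # unknown device
    "typing_speed": 0.9,              # pasted credentials, very fast entry
    "applications_from_ip_24h": 0.95  # many sign-ups from the same IP
}))
```

In practice the thresholds would be tuned against the bank’s own fraud losses and review capacity, but the routing logic, silent for low-risk applicants and escalated for high-risk ones, mirrors the human-in-the-loop approach described above.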
These tools offer benefits to both banks and customers, including speed and convenience. Users can often pass through these identity checks without even being aware of them, sparing them from providing additional information and reducing the time it takes to authenticate new accounts.
Banks, meanwhile, can better keep fraudsters out while increasing the number of customers they can onboard. In one particular case, an unnamed bank used a third-party AI service from fraud technology provider Feedzai and saw a 70 percent increase in customer onboarding — without a corresponding increase in fraud losses.
These technologies still have a few hurdles to overcome, though. As with any anti-fraud method, AI and ML algorithms are only as good as the data to which they have access.
AI, ML and Future Fraud Protection Challenges
Banks must also make sure the AI and ML tools they use are unique to the type of fraud they are trying to prevent. This task is especially difficult with account opening fraud, as FIs have limited data on potential customers, and fraudsters are relying on synthetic identity theft or stolen credentials to slip through initial identification checks. Bad actors can do so even if FIs are using AI and ML, simply because the systems do not yet have enough data to differentiate between fraudsters and real customers.
One solution is to increase the amount of data AI- and ML-enabled systems can access before customers open their accounts, but challenges exist here as well.
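To make that idea concrete before turning to those constraints, here is a minimal, purely illustrative sketch of enriching a sparse application with external signals ahead of risk scoring. The data sources, field names, and returned values are assumptions made for the example, not any specific vendor’s API.

```python
# Illustrative only: enriching a thin account opening application with
# hypothetical external signals before it reaches a risk model.
from dataclasses import dataclass

@dataclass
class Application:
    email: str
    phone: str
    device_id: str

def lookup_email_age_days(email: str) -> float:
    """Placeholder for a third-party email intelligence lookup."""
    return 4.0  # hypothetical: address first seen four days ago

def lookup_device_reputation(device_id: str) -> float:
    """Placeholder for a device reputation service (0 = bad, 1 = good)."""
    return 0.2  # hypothetical: device linked to prior abuse

def enrich(application: Application) -> dict:
    """Combine sparse application fields with external signals so the
    risk model has more than the applicant's self-reported data."""
    return {
        "email_age_days": lookup_email_age_days(application.email),
        "device_reputation": lookup_device_reputation(application.device_id),
        "phone_length_ok": float(len(application.phone) >= 10),
    }

print(enrich(Application(email="new.user@example.com", phone="5551234567", device_id="abc123")))
```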
FIs in certain markets, such as the European Union and U.K., are limited in the customer data they can use thanks to the General Data Protection Regulation (GDPR) and revised Payment Services Directive (PSD2). Even in markets without explicit regulations on data collection, new scrutiny on data privacy among both regulators and consumers is forcing many FIs to reevaluate data usage.
It remains to be seen how this will affect data-driven fraud protection methods, especially as both account opening fraud and synthetic identity fraud continue to gain momentum. The only certainty is that banks can no longer rely on KBA for new account openings, as such static information is too easily compromised.
AI and ML offer better opportunities to protect legitimate customers as they open new accounts, but FIs will need to keep innovating in how they apply these technologies if they want to stay a step ahead in the fraud game.