Fraudsters are deploying tactics ranging from impersonating tax officials to selling fake PPE on peer-to-peer (P2P) payment apps amid the COVID-19 pandemic. Fighting these threats requires an equally wide-ranging defense, says Jamie Armistead, vice president of Zelle. In this month’s Preventing Financial Crime Playbook, Armistead discusses how artificial intelligence (AI) can offer a bird’s eye view of suspicious transactions and stop fraudsters from exploiting legitimate customers’ vulnerabilities.
Peer-to-peer (P2P) payment apps have gained ubiquity over the past decade, supplanting cash, personal checks and wire transfers for payments between individuals and even small businesses. Gone are the days of diners handing waiters a stack of credit cards to split a bill; seamless money transfers now occur in a matter of seconds. It is no wonder that payment apps like Cash App, Venmo and Zelle are used by more than 70 percent of Americans.
These apps are not without danger, though, and fraudsters are looking to cheat and scam their way to paydays at innocent users’ expense. Venmo users alone lost more than $40 million in 2018. Meeting this fraud threat will require in-depth knowledge of fraudsters’ techniques, advanced technology and customer vigilance, according to Jamie Armistead, vice president and business line leader for banking app Zelle.
“We split out scams and frauds,” he explained in a recent interview with PYMNTS. “Fraud means someone is trying to access your device, while scams consist of people being tricked into sending money.”
Armistead recently offered PYMNTS an inside look into the different techniques that fraudsters leverage as well as the initiatives Zelle undertakes to protect itself and its users.
The Scams And Frauds Facing P2P Payment Apps
Zelle largely sorts the threats it faces into two broad categories: scams that swindle users into sending bad actors money and frauds that use more technical means to infiltrate user accounts. The former encompasses a wide range of schemes, and fraudsters go to great lengths to hide their identities and present a sympathetic face to their victims.
“There are extremely elaborate scams with puppies for sale, for example,” Armistead explained. “There are really elaborate but completely bogus websites of puppies, with pictures of the puppies, links to bogus pet transport services, et cetera. Unfortunately, they’re highly effective at getting people to pay hundreds of dollars for a puppy that doesn’t exist.”
Scams have taken on a new life amid the COVID-19 pandemic, according to Armistead, with scammers impersonating tax officials, personal protective equipment (PPE) sellers or bank personnel to trick victims into sending them money. The economic anxiety associated with the pandemic has made victims especially vulnerable.
Frauds typically involve perpetrators infiltrating user accounts directly. Cybercriminals deploy a variety of tactics, with phishing the most popular: fraudsters send out mass emails that trick users into surrendering their app login details, which they then use to take over accounts and either send money to their own bank accounts or steal personal data.
Securing both of these risk avenues is a difficult undertaking that Zelle does not conduct alone. The app relies on both in-house analytics systems and its financial institution (FI) partners’ security systems to fight these frauds and scams.
Leveraging AI To Fight Cybercrime
Fraudsters often work together in organized crime rings, so payment apps and their bank partners collaborate, too. Zelle’s first line of defense against fraudsters is its bank partners’ security systems, meaning that any potential bad actor must first clear the onboarding system of the bank associated with a given Zelle account.
“It starts at the financial institution level, as they already have a number of tools and technologies in place,” Armistead explained. “But as you go through that enrollment process, we are doing a number of things behind the scenes to validate you are who you say you are. We can access and look at the risk profile of an email address, for example.”
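Armistead did not detail how those behind-the-scenes checks are built, but a minimal sketch of an enrollment-time email risk check might look like the following. The disposable-domain list, scoring weights and the `lookup_domain_age_days` helper are illustrative assumptions for this example, not Zelle’s or any bank’s actual logic.

```python
# Hypothetical enrollment-time email risk check; rules and weights are
# assumptions for illustration, not a real provider's scoring model.
DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.io"}  # example throwaway providers


def lookup_domain_age_days(domain: str) -> int:
    """Placeholder: a real system would query WHOIS or a domain-reputation service."""
    return 365


def email_risk_score(email: str) -> float:
    """Return a rough 0-1 risk score for an enrolling email address."""
    local_part, _, domain = email.lower().partition("@")
    score = 0.0
    if domain in DISPOSABLE_DOMAINS:                      # throwaway mailbox provider
        score += 0.5
    if sum(ch.isdigit() for ch in local_part) >= 4:       # machine-generated-looking name
        score += 0.2
    if lookup_domain_age_days(domain) < 30:               # newly registered domain
        score += 0.3
    return min(score, 1.0)


print(email_risk_score("jane.doe@example.com"))
```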
Zelle also relies on an AI-based pattern recognition system to locate suspicious transactions that could indicate fraud. These anomalies vary by customer type, location and numerous other variables, all of which the system takes into account.
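Zelle has not published how its system works, but anomaly detection over per-transaction features is one common way this kind of pattern recognition is built. The sketch below, with made-up feature choices, fits scikit-learn’s IsolationForest to historical transactions and flags outliers; it illustrates the general technique only, not Zelle’s model.

```python
# Illustrative anomaly detection over transaction features; the feature set
# and data are fabricated assumptions, not Zelle's actual system.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [amount_usd, hour_of_day, days_since_enrollment, payees_last_24h]
history = np.array([
    [45.0, 19, 820, 1],
    [12.5, 12, 640, 1],
    [80.0, 20, 410, 2],
    [25.0, 18, 950, 1],
    [60.0, 21, 300, 1],
    [30.0, 13, 700, 2],
])

model = IsolationForest(contamination="auto", random_state=0).fit(history)

new_txn = np.array([[950.0, 3, 2, 6]])  # large amount, 3 a.m., brand-new account
if model.predict(new_txn)[0] == -1:     # -1 means the model considers it anomalous
    print("flag for manual review")
```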
“We try to identify patterns associated with fraud or scams because we see all the transactions at the network level,” he said. “Maybe they’re using one account to run a scam and using another account to move the money, or they’re just temporarily holding the funds and they’re liquidating it once they get it to a third point in that equation. That’s the type of stuff that we would sometimes use AI to try to examine.”
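One way to look for the multi-account money movement Armistead describes is to treat transfers as a directed graph and flag accounts that collect funds from several counterparties and quickly pass nearly everything along to a single downstream account. The sketch below uses networkx with fabricated account IDs and a simple pass-through rule purely to illustrate that idea.

```python
# Illustrative network-level check for funds hopping across accounts;
# account IDs and the pass-through threshold are assumptions for the example.
import networkx as nx

transfers = [                      # (sender, receiver, amount_usd)
    ("victim_1", "acct_A", 500),
    ("victim_2", "acct_A", 450),
    ("acct_A", "acct_B", 900),     # consolidation hop
    ("acct_B", "acct_C", 900),     # cash-out hop
]

graph = nx.DiGraph()
for sender, receiver, amount in transfers:
    graph.add_edge(sender, receiver, amount=amount)

# Flag accounts that receive from multiple senders and forward ~all of it onward.
for node in graph.nodes:
    received = sum(d["amount"] for _, _, d in graph.in_edges(node, data=True))
    sent = sum(d["amount"] for _, _, d in graph.out_edges(node, data=True))
    if graph.in_degree(node) >= 2 and sent >= 0.9 * received > 0:
        print(f"{node} looks like a pass-through (mule) account")
```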
Scams are a little trickier, as they can often appear legitimate to automated systems. Identifying them relies much more on customer awareness, and arming users with as much information as possible significantly reduces risk.
“We do things to help people ensure they’re sending money to the right place; for example, making sure that when they enter a phone number, it comes back with a name prompt to let them know who they’re sending money to,” Armistead said. “We also have warning messages that remind people to only send money to people they know and trust.”
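A confirm-before-send safeguard of the kind Armistead describes can be sketched very simply: look up the name registered to the entered phone number and ask the sender to confirm it before the transfer goes through. The directory, phone number and names below are hypothetical placeholders, not Zelle’s actual API.

```python
# Minimal sketch of a confirm-before-send prompt; the enrollment directory
# and recipient data are hypothetical examples.
ENROLLMENT_DIRECTORY = {"+15551234567": "Jordan S."}  # phone number -> registered name


def confirm_recipient(phone: str) -> bool:
    """Show the registered name and ask the sender to confirm before paying."""
    name = ENROLLMENT_DIRECTORY.get(phone)
    if name is None:
        print("This number is not enrolled. Only send money to people you know and trust.")
        return False
    answer = input(f"Send money to {name}? (y/n) ")
    return answer.strip().lower() == "y"


if confirm_recipient("+15551234567"):
    print("proceed with transfer")
```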
Payment apps like Zelle ultimately rely on a combination of preventive measures and customer vigilance to stop scams. Just as fraudsters work together on cybercrime, banks and payment apps collaborate on fraud prevention, and apps and their customers can likewise cooperate to drive fraud attempts and scams down to manageable levels.