Behavioral Analytics Turn the Tables on Fake Accounts

You would think that decades into life lived online, we would have figured out an easier, more accurate way to authenticate someone.

For brands — pick a vertical, any vertical — one of the most aspirational goals is getting as many individuals signed up to the app, or the platform, or clicking to buy, as possible. But promotions that are designed to encourage consumers to open new accounts can have unintended consequences.

It happened Sunday (Feb. 13) during the Super Bowl, when low-fidelity QR codes popped up on our TVs without any explanation. Curious viewers who followed the link discovered that Coinbase was offering them $15 worth of bitcoin to start trading on its app. It proved popular enough to attract scads of consumers — and crash the site.

PayPal’s latest earnings release, which disclosed that 4.5 million accounts belonging to illegitimate users had been identified and shut down, shows that even a digital giant with sophisticated checks in place can fall victim to bad actors wielding bot farms as weapons.

Neuro-ID CEO Jack Alton told Karen Webster that there’s a digital identity crisis confronting any firm that relies on internet interactions to keep operations humming (and so that means pretty much everyone). Every time companies seek to cut out friction and strive for a “lightweight, frictionless process, what happens is that the rates of fraud, and specifically identity fraud, increase.”

It’s been a thorny problem, this tradeoff between friction and fraud, and one that’s bedeviled companies that have moved from doing business in person to doing business online.

“We’re still struggling,” said Alton.

That struggle continues even though many of the attack methodologies remain the same. Alton noted that many of the attacks occurring today are “rinse and repeat” attacks, in which criminals use the same techniques and move from company to company as they choose new victims. Once they find a weakness to exploit, they ratchet up the attacks.

“We’re not seeing anything new other than the use of more bots and increasing use of automated attacks with the same techniques,” said Alton. “The fraudsters are either harvesting personally identifiable information at scale or inputting it at scale.”

Why We’re Still Struggling

The struggle is rooted in the fact that most identity checks today look backward — at where you lived, what you earned or who your current employer might be.

Most companies rely on huge databases from credit bureaus and other companies for verification. So, while they may have access to reams of data about their customers over the years, they don’t have the tools on hand to detect fraud when it’s happening at scale — right now as they invite individuals to open accounts and join platforms.

The fact is, most of that data has already been compromised, Alton said. That means the bad actors are increasingly able to use what at first (or second or third) glance appear to be “good” consumer profiles — at scale, as evidenced in the PayPal news.

And knowledge-based authentication (KBA) is imperfect in practice, certainly from a standpoint of consumer experience. We’ve all drawn a momentary blank when we’ve been prompted to recall the exact make and model of our first car or the name of our favorite college hangout.

And although no defense is permanent, said Alton, brands are looking for new layers of protection to help them stop insulting their good customers — interrupting transactions to challenge the consumer — while preventing bad actors from opening accounts at scale.

Parsing ‘Digital Body Language’

Alton said analytics platforms such as Neuro-ID’s can assess identities by parsing 300 million digital onboarding journeys, along with behavioral data, to separate legitimate customers from would-be fraudsters.

“If you are who you say you are, and you’re pulling that information from your long-term memory, you’re not going to make a lot of mistakes,” he said.

The subtle tells are what make a difference, giving clues as people navigate their devices and send signals just as surely as someone sitting across a desk. Alton said that unlocking behavioral insight before users submit data is akin to observing “digital body language” as consumers seek to open new accounts or apply for credit.
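To make the idea concrete, here is a minimal, hypothetical sketch of session-level “digital body language” scoring. The signal names, weights and thresholds are illustrative assumptions, not Neuro-ID’s actual model; the point is simply that fluent recall of one’s own information looks different from pasted or scripted input.

```python
# Hypothetical behavioral scoring of a form-filling session.
# All features and weights are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class FormSession:
    keystrokes: int        # total keys pressed in identity fields
    corrections: int       # backspaces / deletes
    paste_events: int      # clipboard pastes into identity fields
    field_revisits: int    # times the user returned to edit a field
    fill_time_secs: float  # time spent completing the form

def familiarity_score(s: FormSession) -> float:
    """Higher = behavior consistent with recalling familiar data."""
    score = 1.0
    # Heavy correction of "known" data (your own name, address) is a tell.
    if s.keystrokes:
        score -= 0.5 * min(s.corrections / s.keystrokes, 1.0)
    # Pasting identity data suggests it was harvested, not remembered.
    score -= 0.2 * min(s.paste_events, 3)
    # Repeatedly revisiting fields hints at uncertainty.
    score -= 0.1 * min(s.field_revisits, 3)
    # Implausibly fast completion suggests automation.
    if s.fill_time_secs < 2.0:
        score -= 0.4
    return max(score, 0.0)

genuine = FormSession(keystrokes=120, corrections=3, paste_events=0,
                      field_revisits=0, fill_time_secs=45.0)
scripted = FormSession(keystrokes=120, corrections=0, paste_events=4,
                       field_revisits=0, fill_time_secs=1.2)
print(familiarity_score(genuine) > familiarity_score(scripted))  # True
```

In practice a production system would learn such weights from labeled onboarding journeys rather than hand-tuning them, but the shape of the signal — hesitation, correction, pasting, speed — is the same.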

That digital body language can be writ large to look at crowd-level behavior, to find those bot attacks before they do real damage. It’s a real-time check rather than a backward-looking one that rests on static information.

“Now we provide portfolio-level protection,” he told Webster. “Taking into account everybody that comes through that digital onboarding journey, we’re looking at what the crowd did yesterday, what the crowd did today and what the crowd will probably do tomorrow.”
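The crowd-level idea can be sketched as a simple anomaly check: compare today’s rate of suspicious onboarding sessions against a recent baseline and flag spikes that look like a coordinated bot attack. The window size, rates and threshold below are illustrative assumptions, not any vendor’s actual parameters.

```python
# Hypothetical crowd-level monitor: alert when today's fraction of
# suspicious onboarding sessions is far above the recent baseline.

from statistics import mean, stdev

def bot_attack_alert(history: list[float], today: float,
                     z_thresh: float = 3.0) -> bool:
    """history: daily fractions of suspicious sessions on prior days."""
    mu = mean(history)
    sigma = stdev(history) or 1e-9  # guard against a zero-variance baseline
    return (today - mu) / sigma > z_thresh

baseline = [0.02, 0.03, 0.025, 0.02, 0.03, 0.028, 0.022]  # a quiet week
print(bot_attack_alert(baseline, today=0.03))  # ordinary day: no alert
print(bot_attack_alert(baseline, today=0.40))  # spike: likely automated attack
```

A z-score against a rolling window is the simplest possible version of “what the crowd did yesterday versus today”; real portfolio-level systems would track many behavioral features at once and model seasonality.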

That broad analysis can help brands and enterprises fend off bot attacks in real time. Another benefit accrues from such proactivity, he said. The alerts ferret out the bad actors, yes, but they also ensure that good customers get through.

Looking ahead, he said there will be continued investment across the identity and verification space, but there will be increasing value in examining what happens with consumers before they hit the “submit” button.

“There will be a continuous cat-and-mouse game of staying one step ahead of the fraudsters and having some sort of proactive protection,” he told Webster. “Once people see the power that behavioral data has in stopping fraud, they can’t unsee it.”