For the Twitter bid a la Elon Musk, the lesson is: Easy come, easy go.
Well, beyond that assessment, of course, lies the reason Musk has put the $44 billion deal on hold.
For Twitter — and for other platform firms, too, we note — there is the ever-present concern over fake accounts. By extension, that impacts the number of monetizable users.
Where there are fake accounts, there is also the potential for fraud.
Twitter is not the only company that has trained a spotlight on the problem. PayPal said earlier this year that it had identified and removed 4.5 million illegitimate accounts.
During commentary on its earnings call back in February, PayPal said that an incentive program had been effectively hacked by bot farms. As CFO John Rainey said on the call, “we regularly assess our active account base to ensure the accounts are legitimate,” adding that “this is particularly important during incentive campaigns that can be targets for bad actors attempting to reap the benefit from these offers without ever having an intent to be a legitimate customer on our platform.”
Now, 4.5 million accounts out of a then-reported base of roughly 426 million users may not seem like fraud on a massive scale. Twitter, for its part, has reported that fake accounts represent something "less than" 5% of users. That is a larger relative share — one that speaks to the fact that bad actors are finding profitable low-hanging fruit as they seek to create accounts, lie low, and then scam legitimate users and companies.
Verification, With Speed, Security and High Tech
It’s unrealistic to think that any enterprise, even one as large as Twitter or PayPal, can eliminate all fraud, especially where payments are concerned. But the millions upon millions of fake accounts show that the means and methods many companies employ to battle the fraudsters are falling short.
As a slew of recent PYMNTS data, reports and interviews have shown, a multi-layered approach works best in beating the bots.
The Federal Trade Commission (FTC) reported that consumers lost $5.8 billion to fraud in 2021, a 70% increase from the previous year. Biometrics are one arrow in the quiver; used in tandem with multi-factor authentication, they can help companies verify that legitimate users are logging on.
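To make the layered idea concrete, here is a minimal sketch of what requiring several independent factors at login can look like. The helper names, the in-memory user record and the biometric threshold are illustrative assumptions, not any company's actual API or policy.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: each factor is checked independently, and all must pass.

def verify_password(stored_hash: bytes, salt: bytes, attempt: str) -> bool:
    """First factor: something the user knows."""
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(stored_hash, candidate)

def verify_otp(expected_code: str, submitted_code: str) -> bool:
    """Second factor: a one-time code from something the user has."""
    return hmac.compare_digest(expected_code, submitted_code)

def verify_biometric(match_score: float, threshold: float = 0.9) -> bool:
    """Third factor: a biometric match score reported by a device-side check."""
    return match_score >= threshold

def authenticate(user: dict, attempt: str, otp: str, biometric_score: float) -> bool:
    # Require every layer; a single stolen credential is not enough on its own.
    return (
        verify_password(user["pw_hash"], user["salt"], attempt)
        and verify_otp(user["otp"], otp)
        and verify_biometric(biometric_score)
    )

if __name__ == "__main__":
    salt = secrets.token_bytes(16)
    user = {
        "pw_hash": hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 100_000),
        "salt": salt,
        "otp": "492031",  # in practice, generated by a TOTP app or sent to a device
    }
    print(authenticate(user, "correct horse", "492031", biometric_score=0.97))  # True
    print(authenticate(user, "correct horse", "000000", biometric_score=0.97))  # False
```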
See also: PYMNTS Intelligence: Leveraging Behavioral Analytics to Complement Other Fraud Prevention Measures
The costs of not taking a proactive, tech-driven approach, especially at the point of onboarding, can be devastating. As PYMNTS has found, customers are likely to abandon eTailers entirely after experiencing data theft or fraud. In fact, 65% of consumers in a recent PYMNTS study said they would be “slightly” or “not at all” likely to continue using merchants after having their data stolen.
Companies such as Neuro-ID leverage real-time behavioral analytics to monitor the onboarding process — examining individual as well as crowd-level actions — to pinpoint bot attacks that might otherwise be hidden in website traffic and account sign-up surges.
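A rough sketch of what crowd-level screening during onboarding could look like follows. The session fields, the surge multiplier and the uniformity thresholds are assumptions made for illustration only; they are not Neuro-ID's actual signals or model.

```python
import statistics
from dataclasses import dataclass
from typing import List

@dataclass
class SignupSession:
    form_fill_seconds: float   # how long the applicant spent completing the form
    keystroke_variance: float  # variability in typing cadence during entry

def looks_like_bot_surge(sessions: List[SignupSession],
                         baseline_rate: float,
                         window_minutes: float) -> bool:
    """Flag a sign-up window whose crowd arrives too fast and behaves too uniformly."""
    if not sessions:
        return False

    # Volume signal: sign-ups arriving well above the historical baseline rate.
    observed_rate = len(sessions) / window_minutes
    surge = observed_rate > 3 * baseline_rate  # assumed multiplier

    # Uniformity signal: humans vary; scripted sessions tend to cluster tightly.
    fill_spread = statistics.pstdev(s.form_fill_seconds for s in sessions)
    avg_cadence_variance = statistics.fmean(s.keystroke_variance for s in sessions)
    too_uniform = fill_spread < 1.0 and avg_cadence_variance < 0.05  # assumed cutoffs

    return surge and too_uniform
```

The point of combining the two signals is that neither alone is conclusive: a marketing campaign can legitimately spike sign-ups, and a single robotic session can be a fluke, but a surge whose sessions all behave nearly identically is the crowd-level pattern that tends to give bot farms away.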
We’re likely to see a convergence of the private and public sectors when it comes to verification — giving platforms some additional ammunition. As Jumio Chief of Digital Identity Philipp Pointner remarked recently, PYMNTS and Jumio research showed that 73% of respondents support using a government digital ID solution to access public online services. We’re headed toward a form of ID with government-grade security, tied to individuals’ unique data, that holders can also use to access private-sector services.
“That data can be shared with the click of a button and maybe the scan of your face to make sure that it is actually you that is using that identity,” Pointner said. “Imagine all the places online and in the real world you can go as you use that identity — from one vendor to another, from one service to another, from one country to another.”