As AI Sharpens Its Act, Tech’s New Tools Scout Out Fakes

In a world increasingly flooded with artificial intelligence-generated content, tech behemoths are now locked in a high-stakes battle to develop cutting-edge tools that distinguish between what’s real and what’s not.

From OpenAI’s launch of a new system to identify images created by its own DALL-E 3 text-to-image generator to Amazon’s deployment of AI to spot fraudulent reviews, companies are attempting to stay one step ahead of bad actors seeking to exploit their platforms. As AI use grows across industries, developing robust fake detection tools is becoming a priority, with far-reaching implications for fields ranging from eCommerce to journalism to politics.

“Retailers can leverage AI to combat fraud more effectively, meeting bad actors where they are and, even better, beating them at their own game,” Sophia Carlton, a fraud transformation executive at Accenture, told PYMNTS.

PYMNTS Intelligence’s “Fraud Management in Online Transactions” found last year that 82% of U.S. eCommerce merchants with international sales had faced cyber breaches, that nearly half lost customers and revenue as a result, and that 68% struggled to balance security with customer satisfaction.

AI Tools to Fight Fraud

When fraud attacks succeed, retailers suffer reputational harm, Christophe Van de Weyer, CEO of identity solutions company Telesign, told PYMNTS. According to Telesign’s Annual Trust Index, nearly half of the fraud victims surveyed blamed the affected brand, and 64% reported a negative impact on their perception of the brand, a 36% increase from the previous year.

“This negative perception can rapidly spread as victims share their experiences,” he said. “Thirty-seven percent advised friends and family to steer clear of the brand, and 34% posted about the fraud incident on social media, potentially reaching a wide global audience. This data emphasizes the need for AI-enabled solutions to protect both customer identities and revenue streams. By utilizing AI fraud protection solutions, organizations can detect and respond to fraudulent activities in real time, thereby reducing financial losses and preserving customer trust in an ever more competitive retail environment.”

Companies are taking note of the growing fraud problem and working to prevent it. OpenAI announced Tuesday (May 7) that it is introducing a tool designed to identify content produced by its DALL-E 3 text-to-image generator. The company is also opening applications for an initial group of testers of its image detection classifier, which assesses the probability that an image was created by DALL-E 3.

“Our goal is to enable independent research that assesses the classifier’s effectiveness, analyzes its real-world application, surfaces relevant considerations for such use, and explores the characteristics of AI-generated content,” OpenAI said in a statement.

Typically, when machine learning spots suspicious behavior, it doesn’t immediately take action, Keegan Keplinger, senior threat researcher at security firm eSentire, told PYMNTS. Instead, it uses specialized models to score activities based on how unusual they are and whether they match known fraud patterns. This scoring helps human analysts decide which cases to investigate first based on the level of risk. The machine learning algorithms analyze various aspects of each transaction, such as the time, place, amount and the parties involved, including whether those parties have a history of transactions that suggests fraudulent behavior.

“A common example is when your credit card company blocks one of your purchases or calls you after you’ve attempted to make a transaction while traveling,” he said. “Your spending habits, prior to travel, establish a baseline in your transaction history, but the sudden change in location and perhaps higher-value transaction amounts, associated with travel, can be a sign of fraud, such as when credit cards are, for example, skimmed from eCommerce sites by hackers, then sold and used by criminals.”
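To make that concrete, here is a minimal Python sketch of the kind of scoring Keplinger describes, with a new transaction compared against a cardholder’s baseline. The features, weights and thresholds are invented for illustration and do not represent eSentire’s models.

```python
from statistics import mean, stdev

# Hypothetical transaction records for one card: (amount, country) pairs.
history = [(42.10, "US"), (18.75, "US"), (63.00, "US"), (27.40, "US"), (51.20, "US")]

def risk_score(amount, country, history):
    """Score a new transaction against the cardholder's baseline.

    Higher scores mean more unusual; an analyst (or a downstream rule)
    decides what to investigate or block first.
    """
    amounts = [a for a, _ in history]
    baseline_mean, baseline_std = mean(amounts), stdev(amounts)

    # How far above the customer's usual spend is this purchase?
    amount_anomaly = max(0.0, (amount - baseline_mean) / baseline_std)

    # Simple rule-style signal: a country never seen in the history.
    new_location = 0.0 if country in {c for _, c in history} else 1.0

    # Weighted blend of the anomaly and rule signals (weights are illustrative).
    return 0.6 * min(amount_anomaly / 3.0, 1.0) + 0.4 * new_location

# A large purchase from an unfamiliar country scores high and gets triaged first.
print(round(risk_score(480.00, "FR", history), 2))  # 1.0
print(round(risk_score(35.00, "US", history), 2))   # 0.0
```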

OpenAI said the tool accurately detected about 98% of images created by DALL-E 3 and falsely identified less than 0.5% of human-made images as AI-generated. The company further noted that common image alterations such as compression, cropping and changes to saturation minimally affected the tool’s effectiveness. However, it admitted that other modifications could impair its performance. The classifier also showed reduced effectiveness in distinguishing between images produced by DALL-E 3 and those from other AI models.
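For reference on how figures like these are typically calculated, the short sketch below derives a detection rate and a false-positive rate from labeled test data. The sample data is made up; it is not OpenAI’s evaluation set.

```python
# Hypothetical ground-truth labels and classifier verdicts (True = "AI-generated").
labels      = [True, True, True, True, False, False, False, False]
predictions = [True, True, True, False, False, False, False, True]

ai_made    = [p for l, p in zip(labels, predictions) if l]
human_made = [p for l, p in zip(labels, predictions) if not l]

detection_rate      = sum(ai_made) / len(ai_made)        # share of AI images caught
false_positive_rate = sum(human_made) / len(human_made)  # share of human images misflagged

print(f"detection rate: {detection_rate:.0%}, false positives: {false_positive_rate:.0%}")
# detection rate: 75%, false positives: 25%  (OpenAI reports ~98% and <0.5% for its classifier)
```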

Carlton explained that generative AI can assist in generating new programming code, completing partially written code, and translating code between programming languages. These applications can lead to “more effective fraud models, quicker model development for emerging schemes, or more efficient fraud model tuning and management,” she said.

By harnessing AI’s capabilities, retailers can improve fraud management, reduce losses and costs, and better protect their customers, Carlton said.

“AI is positioned to revolutionize fraud prevention and detection for retailers,” she said, adding that it will enable them to tackle fraud “more efficiently and effectively than ever before.”

Amazon’s Fraud-Blocking Efforts

Amazon is also intensifying its use of AI to combat fraudulent reviews. The company reported that its AI systems blocked over 200 million suspected fake reviews worldwide in 2022. This is part of Amazon’s strategy to preserve the integrity of its review system, which is essential for consumers who rely on these evaluations for purchasing decisions and businesses that depend on genuine customer feedback.

“Fake reviews intentionally mislead customers by providing information that is not impartial, authentic or intended for that product or service,” Josh Meek, senior data science manager on Amazon’s Fraud Abuse and Prevention team, said in an April blog post. “Not only do millions of customers count on the authenticity of reviews on Amazon for purchase decisions, but millions of brands and businesses count on us to accurately identify fake reviews and stop them from ever reaching their customers. We work hard to responsibly monitor and enforce our policies to ensure reviews reflect the views of real customers and protect honest sellers who rely on us to get it right.”

Andrew Sellers, head of technology strategy at Confluent, told PYMNTS that “AI can build highly granular models for assessing fraud risk that incorporate many features of end customer behavior such as transaction time, amount, location and purchase/claim history.” These models can be based on rules defined by experts or patterns learned by machines from transaction data.

“Doing this kind of analysis at scale is only possible with automation,” Sellers added.
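A minimal sketch of the two approaches Sellers describes, expert-defined rules combined with a pattern learned from transaction data, might look like the following. The features and thresholds are illustrative, and scikit-learn’s IsolationForest stands in for whatever model a production system would actually use.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features per transaction: [hour of day, amount, distance from home in km].
history = np.array([
    [12, 40.0, 3], [18, 25.5, 5], [9, 60.0, 2],
    [20, 33.0, 8], [13, 48.0, 4], [11, 29.0, 6],
])

# "Patterns learned by machines": an unsupervised model of what is normal for this customer.
model = IsolationForest(random_state=0).fit(history)

def expert_rules(hour, amount, distance_km):
    """Rules defined by experts, e.g. large purchases far from home at odd hours."""
    return amount > 500 or (distance_km > 1000 and not 8 <= hour <= 22)

def is_suspicious(transaction):
    hour, amount, distance_km = transaction
    learned_flag = model.predict([transaction])[0] == -1  # -1 marks an outlier
    return learned_flag or expert_rules(hour, amount, distance_km)

# A purchase that breaks the expert rules and the learned baseline is flagged.
print(is_suspicious([3, 900.0, 4200]))  # True (the amount rule alone is enough)
```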

When it comes to preventing fraud in the retail sector, Sellers noted that “AI models can assess fraud risk in real time when transactions occur at the point of sale. If a creditor is using data streaming, this fraud risk characterization can happen in line with the purchase approval process. This real-time assessment ensures that highly suspicious transactions are declined even if the account appears to be in good standing.”
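The sketch below illustrates that kind of in-line check using Confluent’s Python Kafka client. The broker address, topic name and scoring function are placeholders for illustration, not a reference implementation of any vendor’s system.

```python
import json
from confluent_kafka import Consumer

# Connection details are placeholders; a real deployment would use its own cluster config.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "fraud-scoring",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["pos-transactions"])  # hypothetical topic of point-of-sale events

def score(txn):
    """Stand-in for a real risk model: returns a fraud risk between 0 and 1."""
    return min(txn.get("amount", 0) / 10_000, 1.0)

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        txn = json.loads(msg.value())
        # Decline highly suspicious transactions in line with the approval flow,
        # even if the account otherwise appears to be in good standing.
        decision = "DECLINE" if score(txn) > 0.9 else "APPROVE"
        print(f"txn {txn.get('id')}: {decision}")
finally:
    consumer.close()
```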

Looking to the future, Sellers predicted that “the AI algorithms will continue to get more accurate in their assessments and more precise when considering the individual circumstances of consumers.”
