A lie gets halfway around the world before the truth has a chance to get its pants on.
That quip is often attributed to Winston Churchill, who never lived to see the internet, much less today’s connected digital economy. But it rings truer than ever, particularly given the scale and speed at which misinformation and false narratives can now spread.
Disinformation-driven stock market losses, for example, cost publicly traded companies about $39 billion annually, part of an estimated $78 billion lost across the global economy each year.
Misinformation is increasingly becoming a favorite tactic in the cybercriminal’s toolkit, a deceptive means of compromising organizations, individuals, financial institutions and governments.
The U.S. Securities and Exchange Commission (SEC), for example, saw its official account on X, formerly known as Twitter, compromised by a security breach Tuesday (Jan. 9). The hack turned the prospect of Wall Street’s top regulator approving a bitcoin exchange-traded fund (ETF) into a major cybersecurity incident, with a fake post claiming the SEC had done just that.
“The @SECGov twitter account was compromised, and an unauthorized tweet was posted,” SEC Chair Gary Gensler tweeted. “The SEC has not approved the listing and trading of spot bitcoin exchange-traded products.”
But the misinformation had already been shared, and before the truth could put its pants on, it had spread like wildfire across the cryptocurrency community.
Of course, a day later on Wednesday (Jan. 10), the SEC did approve spot bitcoin exchange-traded products after all.
Still, underscoring the urgency of the threat posed by synthetic content, the World Economic Forum (WEF) labeled misinformation and disinformation the top risk facing the world over the next two years in its newly published Global Risks Report 2024.
Staying ahead of the curve in countering misinformation is not only a technological challenge but also a crucial component of safeguarding the trust and integrity of organizations in the digital era. That’s why, for the “Attack Vectors 2024” series, PYMNTS is unpacking what organizations need to know about misinformation’s rising threat, as well as how to develop a targeted defense.
Read also: Misinformation Fears Surge as Increasing Gen AI Use in Finance Amplifies Risks
The digital age has ushered in unprecedented connectivity and convenience, but it has also paved the way for new and sophisticated forms of cyber threats. While traditional attack vectors such as malware and phishing remain prevalent, misinformation has gained traction as an attack vector that can work to destabilize and undermine targets.
While innovative tools like generative artificial intelligence offer efficiencies, they also increasingly allow fraudsters to scale false-information attacks and disseminate them across global media networks.
“The disruptive capabilities of manipulated information are rapidly accelerating, as open access to increasingly sophisticated technologies proliferates and trust in information and institutions deteriorates,” the WEF report stated. “In the next two years, a wide set of actors will capitalize on the boom in synthetic content…”
“No longer requiring a niche skill set, easy-to-use interfaces to large-scale artificial intelligence (AI) models have already enabled an explosion in falsified information and so-called ‘synthetic’ content, from sophisticated voice cloning to counterfeit websites,” the report added. “…Synthetic content will manipulate individuals, damage economies and fracture societies in numerous ways over the next two years.”
Given the scale and speed at which misinformation spreads, traditional methods of detection are often inadequate.
Machine learning systems can play a role in identifying and combating misinformation. These systems can analyze large datasets in real time, identifying patterns and anomalies that may indicate the presence of false information. Natural language processing models are particularly effective in discerning subtle nuances in textual content, helping to flag potentially misleading information.
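As a minimal sketch of that approach, the toy classifier below flags suspect headlines with a TF-IDF model in scikit-learn; the handful of labeled examples is invented for illustration, and production systems typically rely on transformer-based language models trained on large, curated corpora.

```python
# A minimal sketch of the kind of text classifier described above,
# using scikit-learn. The tiny labeled dataset is invented for
# illustration only; real systems train on large, curated corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: headlines labeled 1 (misleading) or 0 (reliable).
headlines = [
    "Regulator secretly approves product, insiders say",
    "Shocking leak proves bank collapse imminent",
    "Agency publishes quarterly report on market activity",
    "Company files annual statement with regulator",
]
labels = [1, 1, 0, 0]

# TF-IDF features feed a simple linear classifier; production systems
# would swap in a transformer-based language model here.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

# Score an incoming post and flag it for human review above a threshold.
post = "Leaked memo proves regulator secretly approves new product"
prob_misleading = model.predict_proba([post])[0][1]
if prob_misleading > 0.5:
    print(f"Flag for review (score={prob_misleading:.2f})")
```

In practice, such automated scores serve as a triage layer, routing borderline content to human fact-checkers rather than making final judgments on their own.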
See also: Attack Vectors 2024: Protecting Against What’s Next in Deepfake Fraud
Misinformation spreads rapidly through social media platforms and other online channels, making timely detection and response critical. Attackers also use a variety of content formats, including text, images and videos, which makes it challenging to develop one-size-fits-all detection methods.
As noted in PYMNTS Intelligence’s latest “Generative AI Tracker®,” nearly 80% of consumers are concerned about the spread of misinformation facilitated by generative AI, underscoring how the very tools designed to enhance experiences and streamline access to knowledge also hold the power to amplify and disseminate misleading or inaccurate content.
“Misinformation and disinformation can be a company killer,” Wasim Khaled, CEO and co-founder of intelligence platform Blackbird.AI, told PYMNTS in June.
“Threat intelligence solutions and cybersecurity measures need to account for the growing impact of a new generation of audience manipulation capabilities.”
Educating employees and stakeholders about the tactics used in misinformation campaigns can empower them to critically evaluate information and reduce their susceptibility. Establishing networks for sharing threat intelligence among organizations, meanwhile, can enhance collective resilience against such campaigns.
Basic cyber hygiene can also play an effective role. The SEC’s X account, for example, was reportedly compromised because two-factor authentication had not been enabled.
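For a sense of what that control involves, the sketch below shows the server side of time-based one-time-password (TOTP) verification using the pyotp library, with the account name and issuer invented for illustration; a real deployment would add secret storage, rate limiting and backup codes on top of this.

```python
# A minimal sketch of TOTP-based two-factor authentication (RFC 6238)
# using pyotp; the account name and issuer below are placeholders.
import pyotp

# Enrollment: generate a per-account secret and hand it to the user's
# authenticator app, usually rendered as a QR code.
secret = pyotp.random_base32()
print("Provisioning URI:", pyotp.TOTP(secret).provisioning_uri(
    name="ops@example.com", issuer_name="ExampleCorp"))

# Login: after the password check, require the current six-digit code.
totp = pyotp.TOTP(secret)
submitted_code = totp.now()  # stand-in for the code the user types in
if totp.verify(submitted_code, valid_window=1):
    print("Second factor verified; grant access.")
else:
    print("Invalid code; deny access.")
```

Simple as it is, a check like this would have forced the SEC attackers to present a second, time-limited credential rather than relying on a hijacked login alone.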