Data Breaches Are Surging: What That Means for Enterprise LLMs

cybersecurity

This has been the year of enterprise artificial intelligence (AI).

From healthcare and financial services to government agencies, critical sectors around the globe are embracing the benefits that large language models (LLMs) and other AI systems can provide when it comes to driving efficiencies, enabling data-driven decision-making and powering innovative products and services.

But 2024 has also been the year of the data breach and the cyberattack, with high-profile disruptions downing critical sectors — like healthcare, finance, retail and even government agencies.

As AI technologies become increasingly integrated into our enterprise operations, the potential for misuse and abuse is growing, necessitating robust strategies to safeguard against malicious use.

And with the news Monday (Aug. 2) that Meta has released a new suite of security benchmarks for LLMs, CYBERSECEVAL 3, to empirically measure LLM cybersecurity risks and capabilities, the fundamental need for protecting data privacy in the development and deployment of AI technologies is top of mind for businesses processing sensitive information for algorithmic development and deployment.

Read more: At Your Service: Generative AI Arrives in Travel and Hospitality

Advancing the Evaluation of Cybersecurity Risks in LLMs

Data breaches, like AI systems, are not a new phenomenon, but their scale and impact have grown exponentially in recent years as digital transformation has swept the business world and the cost of computing power has significantly decreased relative to its capabilities.

Against this backdrop, the increasing sophistication of cybercriminals, coupled with the vast amount of data being generated and stored by businesses to train purpose-built AI models for enterprise use, has created a perfect storm for data breaches.

“AI is vulnerable to hackers due to its complexity and the vast amounts of data it can process,” Jon Clay, vice president of threat intelligence at cybersecurity company Trend Micro, told PYMNTS in an earlier discussion. “AI is software, and as such, vulnerabilities are likely to exist which can be exploited by adversaries.”

One of the key risks associated with LLMs is the possibility of data leakage. If a breach occurs during the training phase, sensitive information could be inadvertently exposed within the model itself. For instance, if an LLM is trained on email communications that include sensitive information, such as contracts or financial data, that information could be retrievable from the model even after the training process is complete.

For enterprises using LLMs trained on sensitive data, the implications of a data breach are far-reaching. First and foremost, there is the risk of regulatory noncompliance. In many jurisdictions, companies are required to adhere to strict data protection laws, such as the General Data Protection Regulation (GDPR) in Europe or the California Consumer Privacy Act (CCPA) in the United States. A breach involving sensitive data could result in significant fines and legal action, not to mention damage to the company’s reputation.

PYMNTS Intelligence finds that over a quarter of surveyed firms (27%) use AI for high-risk, complex tasks, while nearly 90% have at least one high-impact use case for the innovative technology.

Read more: Most CFOs See Limited ROI From GenAI, but Boost Its Investment

The Intersection of Data Breaches and AI Security

According to the research paper published by Meta, key strategies in mitigating the risks associated with powerful AI tools include red teaming; adversarial training; robustness checks; transparency in AI development, with comprehensive documentation of models, datasets, and methodologies; and engaging with the broader AI community.

“CYBERSECEVAL 3 assesses 8 different risks across two broad categories: risk to third parties, and risk to application developers and end users,” the researchers wrote.

PYMNTS explored the business impact of Meta’s Llama 3.1 in July, noting that businesses are weighing the implications of access to powerful, cost-free AI against the difficulties related to implementation and security.

And as enterprises increasingly rely on AI and LLMs to drive innovation and growth, the risks associated with data breaches cannot be ignored. The sensitive nature of the data used in training these models, combined with the growing threat of cyberattacks, makes securing AI systems a top priority for businesses across all industries.

But by implementing robust security measures, adopting ethical AI practices and preparing for potential breaches, enterprises can protect their valuable data assets and maintain the trust of their customers in an era where data breaches are everywhere.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.