AI Chatbots Could Be A Trojan Horse in the Office


As artificial intelligence (AI) chatbots enter workplaces, they’re not just boosting productivity; they’re also opening digital back doors to corporate secrets, with more than a third of employees unwittingly holding those doors open.

A Sept. 24 survey by the National Cybersecurity Alliance revealed a startling trend: 38% of employees share sensitive work information with AI tools without their employer’s permission. The problem is particularly acute among younger workers, with 46% of Generation Z and 43% of millennials admitting to the practice, compared to 26% of Generation X and 14% of baby boomers.

Dinesh Besiahgari, a frontend engineer at Amazon Web Services (AWS) with expertise in AI and healthcare, warned of the dangers behind seemingly innocuous AI interactions. 

“What stands out most is the scenario where employees use chatbots to make payments or make any form of financial transactions where they have to give out payment details and other account information,” Besiahgari told PYMNTS. 

The Invisible Data Leak

Despite warnings from AI companies like OpenAI, whose ChatGPT user guide states, “We are not able to delete specific prompts from your history. Please don’t share any sensitive information in your conversations,” the average worker may find it hard to weigh data exposure risks with every prompt.

“People tend to share information with chatbots the same way they would with another person or a secure system,” Akmammet Allakgayev, CEO of the AI company MyChek, which helps immigrants navigate the immigration process, told PYMNTS. “This can lead to some serious security issues … Employees might unknowingly share things like personal information, sensitive company data or even financial information.”

The scope of the problem is significant.

Citing the 2021 IBM Security X-Force Threat Intelligence Index, Besiahgari said most organizations have reported user data breaches tied to one use of AI or another, a sign that security practices around AI still leave much to be desired.

Recent data from data management firm Veritas Technologies further underscores the urgency of the issue. In a survey of 11,500 office workers, 22% reported using public generative AI tools at work daily. More alarmingly, 17% believe there is value in inputting confidential company information into these tools, while 25% see no issue with sharing personally identifiable information such as names, email addresses and phone numbers.

Perhaps most concerning is how little awareness some employees have of the risks. The Veritas survey found that 16% of respondents believe there are no risks to their business when using public generative AI tools in the workplace. That perception gap is compounded by a lack of clear guidance from employers: 36% of workers reported that their company has never communicated any policy on using these tools at work.

Battling the AI Security Threat

To combat these risks, experts recommend a multi-pronged approach. Allakgayev shared insights from MyChek’s integration of a chatbot with Google Gemini:

“Encrypt everything. Make sure the data being shared with the chatbot is encrypted both while it’s being sent and after it’s stored. This keeps prying eyes away,” he advised. “Limit access; don’t give the chatbot access to every system in the company. Make sure it only gets to see and process what’s necessary.”
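That advice maps to a few lines of code. Below is a minimal sketch in Python of the two ideas together, assuming the open-source cryptography package; the record layout, field allowlist and key handling are illustrative assumptions, not details of MyChek’s actual Gemini integration.

```python
# Illustrative only: the record layout, field allowlist and key handling
# are assumptions, not details of MyChek's actual system.
import json
from cryptography.fernet import Fernet  # pip install cryptography

ALLOWED_FIELDS = {"ticket_id", "question"}  # the chatbot never sees more

def minimize(record: dict) -> dict:
    """Strip every field the chatbot does not strictly need."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def encrypt_for_transit(record: dict, key: bytes) -> bytes:
    """Encrypt the minimized record before it is sent or stored."""
    return Fernet(key).encrypt(json.dumps(record).encode())

key = Fernet.generate_key()  # in production, load from a secrets manager
raw = {"ticket_id": "T-1042", "question": "How do I reset my badge?",
       "ssn": "123-45-6789"}  # sensitive field that must never reach the bot
token = encrypt_for_transit(minimize(raw), key)
print(Fernet(key).decrypt(token))  # the SSN was stripped before encryption
```

The ordering matters: minimizing the record before it is encrypted and sent means that even a compromised chatbot log exposes only what the bot strictly needed.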

A new threat on the horizon is the rise of “shadow AI” — the unauthorized use of AI tools by employees without organizational approval.

“This is when employees start using AI tools without the IT department even knowing about it,” Allakgayev said. “People often turn to these tools because they’re convenient and help them get work done faster, but if IT isn’t aware, they can’t manage the risks.”

The consequences of failing to manage shadow AI can be severe.

“Companies could face massive fines for violating data privacy laws,” Allakgayev warned. “There’s also the risk of damaging trust with customers or losing valuable company information to competitors.”

To avoid these pitfalls, companies need to create clear rules about which AI tools can be used, provide secure alternatives for employees and monitor AI activity closely inside the company. This approach not only mitigates risks but also allows organizations to harness the power of AI safely and effectively.
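In code, the “clear rules plus monitoring” idea can be as simple as an egress check. The sketch below, with made-up host names and a deliberately simplified one-line policy, permits traffic only to an approved AI endpoint and logs everything else for IT review.

```python
# Illustrative only: the host names and one-line policy are assumptions,
# not any specific vendor's product.
import logging
from urllib.parse import urlparse

APPROVED_AI_HOSTS = {"internal-llm.example.com"}  # the sanctioned alternative

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-egress")

def allow_request(url: str, user: str) -> bool:
    """Permit traffic only to approved AI tools; log everything else."""
    host = urlparse(url).hostname or ""
    if host in APPROVED_AI_HOSTS:
        return True
    log.warning("blocked shadow-AI request: user=%s host=%s", user, host)
    return False

print(allow_request("https://internal-llm.example.com/v1/chat", "alice"))  # True
print(allow_request("https://chat.example-ai.net/api", "bob"))  # False, and logged
```

A real deployment would enforce this at a network proxy or secure web gateway rather than in application code, but the principle is the same: the sanctioned tool is easy to reach, and shadow-AI traffic becomes visible instead of invisible.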

“AI is powerful, but without the right safeguards, it can easily lead to unintended data exposure,” Allakgayev said. “In the race to embrace AI, we’re inadvertently building digital Trojan horses — and the price of letting them in could be higher than we ever imagined.”