AI Video Tool Scams Target Content Creators


Cybersecurity researchers have uncovered a sophisticated malware campaign using fake AI video generation software to steal sensitive data from Windows and Mac users, highlighting new risks as businesses rush to adopt artificial intelligence tools.

Security experts warn that the campaign, first reported by BleepingComputer, employs stolen code-signing certificates and professional-looking websites. It represents an emerging threat vector as organizations embrace AI content tools. Victims are advised to immediately reset compromised credentials and enable multi-factor authentication on sensitive accounts.

“A recent rise of fake AI video generation tools is a worrying development that shows how cybercriminals take advantage of newly emerging trends,” Ed Gaudet, CEO and founder of Censinet, told PYMNTS. “With AI video creation becoming popular, companies must have measures to verify tools, set up security protocols, and protect their creative teams from scams.”

The surge in AI-related scams threatens to undermine consumer confidence in legitimate eCommerce platforms selling artificial intelligence content tools, potentially slowing adoption among online shoppers and merchants. Small businesses and content creators who fall victim to these scams face severe disruption to their online operations, as compromised payment credentials and authentication tokens can lead to fraudulent transactions and account takeovers on major eCommerce platforms.

Fake Videos

The scam revolves around “EditProAI,” a fraudulent video editing application promoted through social media with deepfake political videos. When downloaded, the software installs information-stealing malware that harvests passwords, cryptocurrency wallets and authentication tokens — creating potential entry points for corporate network breaches.

The scammers promote the malicious software through targeted social media ads featuring attention-grabbing deepfake content, like fabricated videos of political figures, that link to convincing copycat websites. These sites mimic legitimate artificial intelligence platforms with standard website elements like cookie consent banners and professional design, making them difficult to distinguish from authentic services.

When victims click “Get Now,” they download malware tailored to their operating system — Lumma Stealer for Windows or AMOS for macOS. These programs masquerade as AI video editing software while covertly collecting stored browser data, which attackers then aggregate through a control panel and sell on cybercrime marketplaces or use to breach corporate networks.

New Breed of Cybercrime

Malware campaigns built around AI-generated video are becoming more sophisticated and dangerous. For instance, cybercriminals have produced YouTube tutorials promising free access to popular software like Photoshop and Premiere Pro, with links leading to information stealers such as Vidar, RedLine and Raccoon, which harvest passwords and payment data. In one case, malware disguised as a cracked copy of such software infected thousands of devices, extracting sensitive details from unsuspecting users. Because this AI-generated content is often professionally produced and mimics legitimate tutorials, it exploits users’ trust and makes the campaigns harder to detect and combat.

“Downloading niche software exposes users to risks like ransomware, info stealers, crypto miners, and the like, which used to be at the top of security professionals’ minds years ago,” Tirath Ramdas, founder and CEO of Chamomile.ai, told PYMNTS. “But I don’t think these problems will reemerge to the same extent as before because protection has genuinely improved.”

Ramdas said endpoint detection software has improved. Today, all antivirus solutions benefit from artificial intelligence technology to provide improved detection capabilities. Browsers have also become better at preventing the installation of potentially unwanted applications (PUAs).

“Mac and Windows operating systems have become hardened by default,” he added. “And for enterprises, a shift to zero trust architecture means that even if someone in marketing is tricked into installing malware, the impact is better isolated than before.”

Gaudet said that when under tight deadlines, creative teams become more susceptible to scams that promise fast results.

“To combat this, companies need to make cybersecurity awareness training specific to the creative team’s unique challenges,” he said. “It is very important to educate employees to recognize phishing attempts and software authenticity and report any suspicious activities.”

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.