
National Bank of Romania Warns Consumers About Deepfake Videos


Romania’s central bank governor, Mugur Isarescu, found himself unwittingly associated with false financial recommendations after being targeted by a deepfake video.

The video, which utilized Isarescu’s image and voice, portrayed him promoting fraudulent investments, Bloomberg reported Monday (Feb. 5).

This incident has raised concerns about the rising number of deepfake attacks, which involve the use of artificial intelligence (AI) to manipulate audio and video content, according to the report.

The National Bank of Romania responded to the deepfake video by issuing a warning to consumers, emphasizing that neither Isarescu nor the central bank provides investment advice, the report said. A central bank spokesperson expressed concern over the incident and urged citizens to exercise caution in their financial transactions.

The deepfake video coincided with a surge in interest in equity investments in Romania, according to the report. Last year, the country saw its largest initial public offering (IPO), and the Bucharest Stock Exchange has been delivering above-average returns. Cybercriminals, however, have taken advantage of Romania’s relatively low level of financial intermediation to further their fraudulent schemes.

Deepfake attacks are expected to intensify this year, the report said. Romania’s upcoming rounds of elections, including parliamentary, presidential, European Union and local ballots, create fertile ground for disinformation campaigns and cyberattacks.

Romanian Prime Minister Marcel Ciolacu has also faced cybersecurity challenges, per the report. He had to replace his ID card after hackers stole a copy and posted it on the dark web, an attack that has been linked to Russia’s invasion of Ukraine.

AI is reducing the effort required to generate deepfakes, PYMNTS reported Jan. 9. The ability to generate human-like text in an instant, clone voices from just snippets of audio and scale behavior-driven attacks with the click of a button has democratized access to cybercrimes that were previously the domain of only the most sophisticated bad actors.

OpenAI said in a Jan. 15 blog post that it is taking steps to prevent the misuse or exploitation of its AI technology in upcoming global elections. The company works to anticipate and prevent potential abuses such as deepfakes, according to the post.