How Russia is Using AI to Fuel Disinformation in Europe

AI and Deepfakes: The New Weapons of Disinformation

by Faruk Imamovic
© Getty Images/Ian Walton

The digital landscape has become fertile ground for false information, with artificial intelligence and technologies such as deepfakes leading the charge. Morgan Wright of SentinelOne, an American cybersecurity firm, sheds light on this escalating threat. SentinelOne, in collaboration with EU DisinfoLab, has identified a Russia-based influence network known as “Doppelgänger,” active in Europe since 2022. The network has created clone sites of major European media outlets, such as the UK’s Guardian and Germany’s Bild, and used these platforms to spread false and misleading information.

The primary focus of these fabrications has been contentious issues such as the wars in Ukraine and Gaza, but more recently the climate crisis has also emerged as a significant target. One false narrative, for instance, detailed the tragic death of a teenage cyclist in Germany after streetlights were switched off to save electricity, a decision falsely attributed to an energy crisis caused by sanctions on Russia. Despite being debunked by numerous credible sources, the story continued to spread on social media platforms such as Facebook.

Paula Gori, secretary general of the European Digital Media Observatory (EDMO), explains that spreading such deceitful narratives serves Russia’s geopolitical interests, particularly as it faces sanctions affecting its oil and gas sector. Disinformation campaigns claim that the European Union (EU) is floundering because of these sanctions and that its shift to renewable energy sources is inadequate, contradicting official statistics that reported renewables accounted for 23% of the EU's energy consumption in 2022.

Disinformation: A Tool for Division

The strategic dissemination of false information appears to aim more at sowing discord and confusion than at promoting substantive policy change. “Russia has been very opportunistic. It’s looking for controversy and strife, and any current issues that they can exploit,” Wright comments on the motivations behind these actions. The overarching goal is to incite conflict and prevent any unified approach to addressing pressing global issues like climate change.

Additionally, Russia aims to undermine the EU's climate advocacy globally, especially in regions like Africa and Asia, where it competes with the West for influence and business. By portraying European climate policies as exploitative, Russia seeks to hinder these nations' industrial progress under the guise of environmental concern.

This tactic of exploiting divisions isn't new but rather a continuation of a century-old strategy of Russian intelligence, dating back to the creation of the Cheka, the Bolsheviks' first intelligence agency, in 1917. The tools have evolved—now leveraging AI and social media—but the essence of the tactics remains unchanged.

UN Climate Change Summit in Copenhagen © Getty Images/Miguel Villagran

The Real-world Impacts of Online Disinformation

These online campaigns have tangible effects, influencing political narratives and legislative actions across Europe. False stories have stoked fears that EU climate policies will lead to extreme measures like "climate lockdowns" or force citizens to switch to insect-based diets. Such narratives have permeated national parliaments, where populist politicians amplify these baseless claims, impacting public opinion and policy-making.

The EU’s ambitious climate goals, such as cutting carbon emissions by 90% by 2040 relative to 1990 levels, are threatened not only by internal political shifts but also by these pervasive misinformation campaigns. Misinformation has also sparked protests, such as those by farmers across several EU states who mistakenly believed they would lose their land to solar farms because of misreported agricultural policies.

To combat this rising tide of digital disinformation, the EU has implemented measures such as the Digital Services Act, which requires social media platforms to mitigate the spread of illegal content and disinformation. Despite these efforts, challenges remain, as illustrated by a recent incident in which a fake AI-generated audio recording nearly swayed parliamentary elections in Slovakia.

The Role of Social Media and Government Responses

Social media platforms have been central to the spread of disinformation, and the EU has attempted to address this through legislation. The Digital Services Act targets illegal content, misleading advertising, and disinformation. It compels social media companies to clean up their platforms, and the European Commission recently opened formal proceedings against Meta, the parent company of Facebook and Instagram, over disinformation targeting the European elections.

Last month, the EU also imposed sanctions on the Prague-based Voice of Europe, accusing it of running a pro-Russian influence operation. However, these measures are not enough to combat the scale of the problem. Climate Action Against Disinformation, an international coalition, has criticized the response from social media companies and governments as insufficient.

Moving Forward

The battle against disinformation is complex and multifaceted. While legislative measures such as the Digital Services Act represent a step in the right direction, they are not sufficient to address the problem comprehensively. Social media platforms must enhance their policies and enforcement mechanisms to effectively counteract the spread of disinformation. Additionally, increased public awareness and media literacy are crucial in helping individuals discern fact from fiction in an era where false narratives can spread rapidly online. Only through a combined effort from governments, social media companies, and the public can the pervasive threat of disinformation be mitigated.
