EU Commission Proposes Guidelines to Combat AI Misinformation in Elections



The European Commission is targeting election security with new guidelines for tech platforms, designed to mitigate the risks of AI-generated misinformation ahead of the European Parliament elections in June.

Public Consultation on Draft Guidelines

The Commission has initiated a public consultation process for its proposed election security guidelines, targeting very large online platforms (VLOPs) and very large online search engines (VLOSEs).

This consultation invites feedback on measures to address the democratic threats posed by the use of generative AI technologies that can create and spread synthetic content, potentially misleading voters and manipulating the electoral process.

The public consultation is open across the European Union until March 7, reflecting the Commission's commitment to inclusive policy development.

The draft guidelines suggest several measures to safeguard the electoral process, including the detection of AI-generated content, risk mitigation planning, and directing users to authoritative information sources.

These recommendations underscore the importance of alerting users to potential inaccuracies in AI-produced content and of preventing the generation of misleading information that could sway voter behavior.

Guiding Principles and Best Practices

A notable aspect of the guidelines is the recommendation for platforms to disclose the concrete sources of information used as input data for AI-generated text, enabling users to verify the reliability and context of the information.

This approach aligns with the EU's broader legislative framework on digital services, drawing from the principles outlined in the AI Act and the non-binding AI Pact.

The urgency of these guidelines reflects growing concerns over the capabilities of advanced AI systems, such as large language models, which have drawn intense attention since generative AI tools like OpenAI’s ChatGPT went viral in late 2022.

While the European Commission has not specified when companies will be required to label manipulated content under the EU’s content moderation law, the Digital Services Act, tech giants are already responding to the call for greater transparency.

Meta, for example, has announced forthcoming guidelines to label AI-generated content on its platforms, including Facebook, Instagram, and Threads, ensuring that such content is visibly marked whether identified through metadata or invisible watermarking.