
Strengthening online platforms' responsibility

Overview

As part of the European Union's ongoing efforts to tackle online disinformation, the Commission has introduced a range of policy measures to increase the responsibility of online platforms. 

These measures are designed to ensure that online platforms take proactive steps to combat the spread of false or misleading information, creating a safer and more reliable digital environment for all.

The EU Code of Practice on Disinformation

The EU Code of Practice on Disinformation, established in 2018, is the world's first voluntary self-regulatory instrument for online platforms. The Code commits the platforms that sign it to a set of standards and commitments to fight disinformation.

The Code was strengthened in June 2022, with 34 signatories agreeing to increase the transparency and accountability of their platforms' actions. The Transparency Centre provides information on the Code and the actions implemented under it.

The Digital Services Act

In August 2023, the Digital Services Act (DSA) became legally enforceable for designated Very Large Online Platforms and Very Large Online Search Engines. 

These platforms must now share their annual risk assessments of illegal content disseminated through their services and adjust their mitigation measures accordingly.

The Artificial Intelligence (AI) Act

The AI Act is the world's first-ever legal framework on AI. It addresses the risks of AI and positions Europe to play a leading role globally.

The aim of the new rules is to foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles, and by addressing the risks of very powerful and impactful AI models.

Transparency of political advertising

On 9 April 2024, the new Regulation on the transparency and targeting of political advertising entered into force. The Regulation aims to ensure that political advertising is provided in full respect of fundamental rights and that voters are better placed to make well-informed choices.

Under the new rules, political adverts must be clearly labelled as such and must indicate who paid for them, how much was paid, which elections, referendums or regulatory processes they are linked to, and whether they are targeted.

Countering illegal hate speech online

To prevent and counter the spread of illegal hate speech online, the Commission created a Code of Conduct, which several of the largest online platforms have signed up to.

Under the Code of Conduct, companies must remove content flagged as hate speech from their platforms within 24 hours.

Addressing the dissemination of terrorist content online

The Regulation to address the dissemination of terrorist content online has applied since 7 June 2022. Under the Regulation, terrorist content must be taken down within one hour of being identified online. This applies to online platforms offering services in the EU, to ensure the safety and security of citizens.