IRDT Conference on 13 and 14 October 2022 in Trier
Digital platforms like Facebook, YouTube, TikTok and Twitter play a key role in both private and public communication. The platforms enable everyone to contribute to matters of public debate without the need to pass classic gatekeepers such as newspapers, radio or TV stations. This realises the idea of a very large and open marketplace of ideas, which is at the heart of democracy. Gradually, however, and increasingly in recent years, the dark side of unfiltered online content has become visible: Hate speech mobilises against minorities, incites crime and intimidates politicians. Fake news threatens the fairness of electoral campaigns and the integrity of election results, helps spread conspiracy theories and hampers an effective fight against the pandemic. Other forms of problematic content include child pornography, violent videos, copyright infringement and leaks of private data.
Up to this point, digital platforms have resorted to self-regulation based on community standards or other forms of private standard setting in order to draw the fine line between forbidden content and free speech. Thus, private companies effectively define the scope and the limits of freedom of opinion within their digital spheres. But European states and the European Union have also started legislating, as the German Network Enforcement Act (2017), the French Law on the fight against the manipulation of information (2018), and the EU Regulation on the dissemination of terrorist content online (2021) show.
In addition to sector-specific rules, the EU’s proposed Digital Services Act (COM(2020) 825 final) introduces an updated horizontal framework for all categories of content, products, services and activities on intermediary services and aims to harmonise the fragmented rules of platform liability. This raises several questions. How far do national and European free speech guarantees go? If hate speech and defamation can be outlawed to protect the victims’ rights, how can the prohibition of fake news be justified? What scope remains for the platforms’ private content moderation? Who is responsible for fighting and taking down illegal content? How can the victims of de-platforming, content takedowns or shadow banning assert their right to freedom of opinion? And how will these legal responsibilities be enforced?
At our international conference, we aim to discuss the impact of the proposed Digital Services Act on the regulation of hate speech, fake news and other content on digital platforms in Europe.
The programme of the conference will be available here soon.