IRDT Conference on 13 and 14 October 2022 in Trier
Digital platforms like Facebook, YouTube, TikTok and Twitter play a key role in both private and public communication. They enable everyone to contribute to matters of public debate without having to pass classic gatekeepers such as newspapers, radio or TV stations. This realises the idea of a very large and open marketplace of ideas, which lies at the heart of democracy. Gradually, however, and increasingly in recent years, the dark side of unfiltered online content has become visible: Hate speech mobilises against minorities, incites crime and intimidates politicians. Fake news threatens the fairness of electoral campaigns and the integrity of election results, helps spread conspiracy theories and hampers an effective fight against the pandemic. Other forms of problematic content include child pornography, violent videos, copyright infringement and leaks of private data.
Until now, digital platforms have resorted to self-regulation based on community standards or other forms of private standard setting in order to draw the fine line between forbidden content and free speech. Thus, private companies effectively define the scope and limits of freedom of opinion within their digital spheres. But European states and the European Union have also started legislating, as the German Network Enforcement Act (2017), the French Law on the fight against the manipulation of information (2018) and the EU Regulation on the dissemination of terrorist content online (2021) show.
In addition to sector-specific rules, the EU’s proposed Digital Services Act (COM(2020) 825 final) introduces an updated horizontal framework for all categories of content, products, services and activities on intermediary services and aims to harmonise the fragmented rules of platform liability. This raises several questions. How far do national and European free speech guarantees go? If hate speech and defamation can be outlawed to protect the victims’ rights, how can a prohibition of fake news be justified? What scope remains for the platforms’ private content moderation? Who is responsible for fighting and taking down illegal content? How can the victims of de-platforming, content takedowns or shadow banning claim their right to freedom of opinion? And how will these legal responsibilities be enforced?
In our international conference, we aim to discuss the impact of the proposed Digital Services Act on the regulation of hate speech, fake news and other content on digital platforms in Europe.
Information about data processing
Your personal data will be processed on the basis of this agreement in accordance with Art. 6(1) sentence 1 lit. b and lit. f GDPR. The data will be stored for as long as is necessary for the implementation of the event.
We use the personal data collected only for this purpose and pass it on to third parties, in particular to the University of Trier, only in order to provide the information service and follow up on the registration.
You have the right at any time to obtain information about the data stored about you, to have incorrect data corrected, to have your data deleted and to have processing restricted. You also have the right to object informally at any time to the processing with effect for the future; an e-mail to firstname.lastname@example.org is sufficient for this purpose. Furthermore, you have the right to lodge a complaint about the data processing with the competent supervisory authority.
Contact details: Prof. Dr. Antje von Ungern-Sternberg, c/o University of Trier | Institute for Law and Digitalization (IRDT) | Behringstraße 21 | 54296 Trier