The emergence of online platforms and social media has transformed modern communication. Online platforms provide many opportunities to express opinions and participate in public and political discussions. However, just as offline discussions are replicated or amplified online, so are expressions of hate. This is of increasing concern.
The EU has updated its laws and implemented policies to tackle illegal content online, notably through the Digital Services Act (DSA), which regulates online content, including hate speech, more effectively. However, these changes are relatively recent. In addition, uncertainties remain about how to better protect human rights online, namely how to combat online hate while protecting freedom of expression, and how to implement existing and newly developed laws efficiently.
This report aims to better understand whether standard tools to address online hate speech, hereafter referred to as ‘online hate’, are effective, by looking at manifestations of online hate after social media platforms have applied their content moderation controls. It presents findings covering four social media platforms – Telegram, X (formerly Twitter), Reddit and YouTube. The platforms were selected based on their accessibility for research purposes, their popularity (i.e. audience reach) and the assumed magnitude of hate speech on them.
The report aims to achieve the following:
The study focuses specifically on online hate in social media posts targeted at women, people of African descent, Jews and Roma to explore the limits of online content moderation and the extent of hate speech against these groups. The study collected social media posts over 6 months, using specific keywords that could indicate potential online hate against these target groups.
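To make the collection step concrete, the following is a minimal sketch of keyword-based filtering over a fixed collection window, written in Python. The keyword lists, post structure and function names are illustrative assumptions only; the study's actual keyword lists and platform-specific collection pipelines are not reproduced here.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable

@dataclass
class Post:
    platform: str      # e.g. "Telegram", "X", "Reddit", "YouTube"
    text: str
    created: datetime

# Placeholder indicator keywords per target group; the study's real
# keyword lists are deliberately not reproduced here.
KEYWORDS = {
    "women": ["keyword_w1", "keyword_w2"],
    "people_of_african_descent": ["keyword_a1"],
    "jews": ["keyword_j1"],
    "roma": ["keyword_r1"],
}

def matching_groups(post: Post, start: datetime, end: datetime) -> list[str]:
    """Return the target groups whose keywords appear in a post
    published within the collection window."""
    if not (start <= post.created < end):
        return []
    text = post.text.lower()
    return [group for group, words in KEYWORDS.items()
            if any(word in text for word in words)]

def collect(posts: Iterable[Post], start: datetime, end: datetime):
    """Keep only posts matching at least one keyword list."""
    for post in posts:
        groups = matching_groups(post, start, end)
        if groups:
            yield post, groups
```

Note that a keyword match marks a post only as potentially hateful; each match still requires assessment before it can be counted as online hate.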
In this report:
Online platforms should have specific regard to protected characteristics of users in the context of their terms and conditions, content moderation practices and monitoring policies, including addressing sexist online hate. Performance indicators should be in place to record the volume of misogyny online and the effectiveness of content moderation, looking at developments over time.
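As one possible shape for such indicators, the sketch below aggregates hypothetical moderation records into a monthly time series of the volume of detected misogynistic content and the share of it that was acted on. The record fields and the ‘actioned rate’ are illustrative assumptions, not metrics prescribed by the DSA.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date

@dataclass
class ModerationRecord:
    detected_on: date   # when the item was flagged as misogynistic
    actioned: bool      # whether the platform removed or restricted it

def monthly_indicators(records: list[ModerationRecord]) -> dict[str, dict]:
    """Volume of detected misogynistic content per month, plus the
    share that was actioned (a simple moderation-effectiveness proxy)."""
    buckets: dict[str, list[bool]] = defaultdict(list)
    for record in records:
        buckets[record.detected_on.strftime("%Y-%m")].append(record.actioned)
    return {
        month: {
            "volume": len(flags),
            "actioned_rate": sum(flags) / len(flags),
        }
        for month, flags in sorted(buckets.items())
    }
```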
For very large online platforms (VLOPs), such as X and YouTube, misogyny should be one of the systemic risks considered in the context of the risk assessment and risk mitigation measures required by Articles 34 and 35 of the DSA.
The Council of Europe Convention on Preventing and Combating Violence against Women creates a coherent legal framework for preventing violence against women and for supporting and protecting victims – both offline and online. The EU is now party to this convention. EU Member States that have not yet signed and ratified it are urged to do so.
The European Commission and national governments should support, practically and financially, the creation of a wide and heterogeneous network of organisations acting as trusted flaggers to ensure that different forms of online hate are widely and reliably detected. Organisations representing groups with limited resources should not be put at a disadvantage in combating online hate. Users need to be made aware of easy ways to notify companies of hate speech, in line with Articles 16 and 22 of the DSA.
Given that views on what constitutes online hate may differ, a variety of measures to detect and report hate speech are needed. These include training for police, content moderators and trusted flaggers on the legal thresholds for identifying online hate. Such training could also help ensure that platforms do not over-remove content.
The threshold for and description of what constitutes illegal online hate need more clarity. The changing landscape of expressions of hate and the magnitude of online hate mean that those involved in detecting and those responsible for addressing hate speech should not be left in any doubt regarding the rules. The EU and national legislators should consider clearer guidance and rules on what kind of online hate is illegal.
It must be ensured that AI-supported online content moderation decisions are not discriminatory. Providers and users (i.e. platforms) must assess the fundamental rights compliance of any AI system in line with the DSA and current and developing standards regulating the use of AI.
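A minimal sketch of one element of such an assessment follows: comparing a moderation model's false positive rates across posts referring to different protected groups, since systematic over-flagging of some groups' speech is one form that discriminatory moderation can take. The data fields, labels and tolerance threshold are illustrative assumptions; a real fundamental rights assessment would involve far more than a single metric.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class LabelledDecision:
    group: str        # protected group referenced in the post (annotated)
    model_flag: bool  # whether the AI system flagged the post as hate
    is_hate: bool     # ground-truth label from human review

def false_positive_rates(decisions: list[LabelledDecision]) -> dict[str, float]:
    """False positive rate per group: the share of non-hateful posts
    that the model wrongly flagged. Large gaps between groups suggest
    the system over-removes speech by or about some groups."""
    counts = defaultdict(lambda: [0, 0])  # group -> [false positives, negatives]
    for d in decisions:
        if not d.is_hate:
            counts[d.group][1] += 1
            if d.model_flag:
                counts[d.group][0] += 1
    return {g: fp / n for g, (fp, n) in counts.items() if n}

def needs_review(rates: dict[str, float], tolerance: float = 0.05) -> bool:
    """Flag the system for further review if per-group false positive
    rates diverge beyond a (hypothetical) tolerance."""
    return max(rates.values()) - min(rates.values()) > tolerance
```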
The EU should ensure that applicable EU law, such as the DSA, appropriately addresses potential discrimination through the use of AI content moderation and requires that these systems are not used in a discriminatory way.
The European Commission should ensure that risk assessments under the DSA – including with regard to online hate – are complemented by extensive independent research using a variety of methods to ensure the accuracy and diversity of assessments. This is necessary, as any single method for analysing online hate remains limited. Only a variety of approaches and tests will provide a fuller picture of the challenges linked to identifying and combating online hate. Independent research can offer critical views and the appropriate methodologies required to further the understanding of the complex and fast-changing landscape of online hate and content moderation.
The European Commission should ensure that independent research institutes and academic researchers can access the data of online platforms without burdensome administrative procedures or other potential obstacles and in line with data protection safeguards. This will allow researchers to better investigate: