
gr.euronews.com
ECRI Urges Stronger Action Against Persistent Online Hate Speech in Europe
The ECRI urged four European countries to strengthen measures against online hate speech targeting minorities. Meanwhile, the EOOH reported consistently moderate levels of online toxicity since early 2025, with antisemitic speech being the most toxic (average score 0.34) and sexist speech the most frequent (nearly 3 million instances).
- How do the differing average toxicity scores (e.g., antisemitism at 0.34 vs. sexism at 0.19) reflect the different forms and impacts of online hate speech?
- The European Observatory of Online Hate (EOOH) has recorded a consistently 'moderate' level of online toxicity since early 2025, peaking in April 2025 at 0.22. Analysis of over 2.5 million messages across six social media platforms found antisemitic speech to be the most toxic (0.34), followed by anti-Roma (0.30), anti-LGBTQI+ (0.29), and anti-Muslim (0.28) speech. The platform X accounted for the vast majority of these messages.
- What specific actions are needed to curb online hate speech targeting vulnerable groups in Europe, given the persistently moderate toxicity levels reported by the EOOH?
- The European Commission against Racism and Intolerance (ECRI) urged Sweden, Portugal, Croatia, and Latvia to strengthen measures against hate speech targeting migrants, Roma, LGBTQI+ individuals, and Black citizens. Online hate speech, including antisemitic, anti-Roma, anti-LGBTQI+, and anti-Muslim content, remains prevalent, with the platform X (formerly Twitter) being the main source.
- What are the long-term consequences of consistently moderate levels of online toxicity, and what innovative strategies can effectively counter its spread and mitigate its societal impact?
- The high prevalence of hate speech on platforms like X underscores the need for stronger platform accountability and regulatory action. The persistence of online toxicity, despite the EOOH's monitoring, signals a systemic challenge requiring comprehensive strategies beyond technical solutions. The disproportionate impact on marginalized groups highlights the urgency for targeted interventions.
Cognitive Concepts
Framing Bias
The report frames the issue primarily as a problem of online hate speech, particularly on platform X. While this is a significant aspect, the emphasis may overshadow other important factors or solutions. A headline, if one was present, could have steered reader interpretation toward the severity of the situation rather than the need for complex, multifaceted solutions. The report's introduction directly links the ECRI warning to the online hate speech data, potentially shaping the reader's understanding of causality.
Language Bias
The language used is generally neutral and objective, using specific data points and quantifiable metrics to support claims. While terms like "toxic speech" are used, they are consistently connected to specific behaviors and metrics, mitigating the potential for subjective interpretation.
Bias by Omission
The report focuses heavily on online hate speech, particularly on platforms like X (formerly Twitter), but omits analysis of offline hate speech and of contributing factors beyond online platforms. While limitations of scope are understandable, the lack of broader context may limit the reader's understanding of the overall problem and potential solutions. The report also omits discussion of how effective existing hate speech policies have been on different platforms.
False Dichotomy
The report doesn't present a false dichotomy, but it could benefit from acknowledging the complexities of addressing online hate speech. For example, the simple categorization of toxicity levels may not fully capture the nuanced and evolving nature of hate speech online.
Gender Bias
The report mentions sexist and misogynistic speech as a prevalent form of online hate speech, acknowledging its high frequency despite a lower average toxicity score than other categories. This shows an awareness of the issue, but further analysis of gendered forms of hate speech and their impact would be beneficial. There is no evidence of gender bias in the selection or presentation of data.
Sustainable Development Goals
The article highlights the prevalence of online hate speech targeting marginalized groups such as migrants, Roma, LGBTQ+ individuals, and Black citizens. This fuels discrimination and inequality, hindering progress towards a more inclusive and equitable society. The high volume of hate speech on platforms like X further exacerbates the issue.