Harmful content spreads through the social media channels of news outlets and the comment sections of news sites. We contribute to the Innovative Monitoring Systems and Prevention Policies of Online Hate Speech (IMSyPP) project, which tackles hateful speech in a multidisciplinary fashion, combining machine learning, computational social science, and linguistic approaches.


At a time when media consumption shapes public opinion, the spread of harmful content through news outlets and their comment sections is increasingly concerning. The IMSyPP project (2020–2022) addresses this problem by combining state-of-the-art technologies with established research methodologies.
IMSyPP employs a multifaceted strategy to monitor and mitigate harmful content. By integrating machine learning with linguistic techniques, we detect harmful content in both English and Dutch. The initiative also extends beyond detection to crafting and deploying impactful counter-narratives. Working alongside research teams and social media platforms, we develop machine learning models that adhere to sustainable technology principles. These models not only process large datasets to pinpoint trends and triggers, but also support studies focused on counter-narratives.
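To make the detection step concrete, the sketch below shows the general shape of supervised harmful-content classification: a model is trained on comments labelled "acceptable" or "harmful" and then scores unseen comments. This is a minimal toy illustration using a naive Bayes classifier on invented example data, not IMSyPP's actual models, labels, or corpora; the project's real systems are far larger and multilingual.

```python
from collections import Counter
import math

# Toy training data (hypothetical examples, NOT from the IMSyPP corpus):
# each comment is labelled "acceptable" or "harmful".
TRAIN = [
    ("great reporting thank you", "acceptable"),
    ("interesting article well written", "acceptable"),
    ("i disagree but respect your view", "acceptable"),
    ("you people are worthless garbage", "harmful"),
    ("get out of our country you vermin", "harmful"),
    ("they should all be silenced forever", "harmful"),
]

def train(examples):
    """Fit a multinomial naive Bayes model: per-class word counts and priors."""
    word_counts = {}          # label -> Counter of word frequencies
    label_counts = Counter()  # label -> number of training comments
    for text, label in examples:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest log-posterior (Laplace smoothing)."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, counts in word_counts.items():
        score = math.log(label_counts[label] / total)  # class prior
        denom = sum(counts.values()) + len(vocab)      # smoothed denominator
        for word in text.split():
            score += math.log((counts[word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train(TRAIN)
print(classify("thank you for the article", word_counts, label_counts))   # → acceptable
print(classify("worthless vermin get out", word_counts, label_counts))    # → harmful
```

Production systems replace the word counts with pretrained multilingual language models, but the pipeline is the same: labelled data in, a scoring function out, applied at scale to comment streams.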
The IMSyPP project provides valuable insights and strategies to tackle harmful content online. By translating our findings into policy recommendations for European communication regulators, we aim to foster a more informed and balanced media environment.