AI for Social Good

Trustworthy, transparent and explainable AI

For Good

We develop AI systems that are trustworthy, transparent and explainable (XAI). Instead of a black box, you get a glass box, built with careful attention to user privacy and algorithmic bias, that holds up in ethically challenging applications.

For Society

Our team of AI experts and sociologists has experience with online challenges. We focus on projects involving social media, societal conflict, ethics and privacy. We work with NGOs, the press, government, law enforcement and the European Commission.

Open Source

Ancient thinkers already asked: “Quis custodiet ipsos custodes?” (Who watches the watchers?). Our technology is free and open source. You can take it, use it, inspect it, or simply ask our experts to build a glass box for you.

Pattern: Python toolkit for data mining, Natural Language Processing (NLP), Machine Learning (ML) and network analytics. Now curated by the University of Antwerp and Google Summer of Code. https://github.com/clips/pattern

Grasp: faster, smaller and easier than Pattern, with new XAI tools. We’re still writing the docs, but feel free to check out the code and learn from these powerful and demystified algorithms. https://github.com/textgain/grasp
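To illustrate what a “glass box” looks like in practice, here is a minimal, self-contained sketch (not Grasp’s actual API) of an explainable text classifier: a toy log-odds model whose every prediction can be broken down into per-word contributions that the user can inspect. The training data and function names are hypothetical.

```python
from collections import Counter
import math

# Toy training data: (text, label) pairs. Purely illustrative.
train = [
    ("you are wonderful", "ok"),
    ("what a kind person", "ok"),
    ("you are awful", "toxic"),
    ("what a hateful person", "toxic"),
]

# Count word frequencies per label.
counts = {"ok": Counter(), "toxic": Counter()}
for text, label in train:
    counts[label].update(text.split())

vocab = set(counts["ok"]) | set(counts["toxic"])

def weight(word):
    # Smoothed log-odds of a word:
    # positive = more "toxic", negative = more "ok".
    p_toxic = (counts["toxic"][word] + 1) / (sum(counts["toxic"].values()) + len(vocab))
    p_ok = (counts["ok"][word] + 1) / (sum(counts["ok"].values()) + len(vocab))
    return math.log(p_toxic / p_ok)

def explain(text):
    # The "glass box": every prediction is just the sum of
    # per-word contributions, which anyone can inspect.
    return {w: round(weight(w), 2) for w in text.split() if w in vocab}

def predict(text):
    return "toxic" if sum(explain(text).values()) > 0 else "ok"
```

Unlike a black-box model, `explain()` shows exactly which words drove the decision and by how much, so a prediction can be audited, contested and corrected.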

Open Data

Some of our core datasets are freely available for commercial or research use:

  • 8chan embeddings: a unique resource by Google Summer of Code student Pierre Voué, trained on 30M+ toxic messages from 8chan/pol/, for studying online polarization and radicalization. https://textgain.com/8chan
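Word embeddings like these map each word to a vector, so that related words end up close together; researchers query them by cosine similarity to find, say, which terms a community uses near a loaded word. A minimal sketch, with hypothetical 4-dimensional toy vectors (real embeddings are typically 100-300 dimensions):

```python
import math

# Hypothetical toy vectors; not taken from the 8chan dataset.
vectors = {
    "anger":  [0.9, 0.1, 0.4, 0.0],
    "hate":   [0.8, 0.2, 0.5, 0.1],
    "garden": [0.0, 0.9, 0.1, 0.8],
}

def cos(v1, v2):
    # Cosine similarity: ~1.0 = same direction, ~0.0 = unrelated.
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    return dot / (n1 * n2)

def nearest(word, n=2):
    # Rank the other words by similarity to the given word.
    w = vectors[word]
    sims = [(cos(w, v), other) for other, v in vectors.items() if other != word]
    return [other for _, other in sorted(sims, reverse=True)][:n]
```

With real embeddings trained on a polarized forum, the nearest neighbors of everyday words reveal how that community’s vocabulary shifts, which is what makes such a dataset useful for studying radicalization.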

Open Science

We can build it for you, but we’d rather explain it to you. All of our team members are part-time lecturers with 15+ years of experience. Some of our free study reports:

  • On sexism: Online hatred of women in the Incels.me forum: Linguistic analysis and automatic detection. Sylvia Jaki, Tom De Smedt, Maja Gwóźdź, Rudresh Panchal, Alexander Rossa & Guy De Pauw (2019). JLAC.
  • On extremism: Multilingual Cross-domain perspectives on online hate speech. Tom De Smedt, Sylvia Jaki, Eduan Kotzé, Leïla Saoud, Maja Gwóźdź, Guy De Pauw & Walter Daelemans (2018). CTRS.
  • On jihadism: Automatic detection of online jihadist hate speech. Tom De Smedt, Guy De Pauw & Pieter Van Ostaeyen (2018). CTRS.
  • On NLP & ML: Pattern for Python. Tom De Smedt & Walter Daelemans (2012). JMLR.

Societal Projects

Here are some of the AI for Good projects that we are working on. All project teams are gender-balanced and represent different ethnicities and ideologies. Reach out if you want to join our community:

  • Project Grey (2019-2021) is co-funded by the Internal Security Fund (ISF) of the European Commission to raise awareness about online polarization.
  • RHETORiC (2019-2021) is co-funded by IMEC.ICON and investigates tools for news editors and consumers to detect and counter polarization on social media and support civil discourse.
  • DeTact (2019-2021) is co-funded by the Rights, Equality and Citizenship fund (REC) of the European Commission to investigate technology for conflict resolution.
  • Factcheck Vlaanderen (2019) was funded by the Flemish Journalism Fund (VJF) to establish the first press fact-checking platform in Belgium.
  • Africa’s Voices (2018) was privately funded to develop cutting-edge Swahili language technology for social media monitoring and listen to Africa’s voices.

Non-profit Projects

Driven by empathy and a vision of a better world, our most prized resource is time. Here’s an overview of projects that we voluntarily engage in outside of Textgain:

WANT TO KNOW MORE?
Schedule a meeting