embeddings (TGTR-4)

We collected over 8M messages from the controversial Dutch websites GeenStijl and Dumpert and used them to train a word embedding model that captures the toxic language representations contained in the dataset. The trained word embeddings (±150MB) are released freely and may be useful for further study of toxic online discourse.
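As a minimal sketch of how such released embeddings can be consumed, the snippet below parses the common plain-text word2vec format (a header line with vocabulary size and dimensionality, then one word and its vector per line) and compares two words by cosine similarity. The format and the toy Dutch words are assumptions for illustration, not details confirmed by the release.

```python
import math

def load_word2vec_text(lines):
    """Parse plain-text word2vec format: the first line is 'vocab_size dim',
    each following line is 'word v1 v2 ... vdim'."""
    it = iter(lines)
    vocab_size, dim = map(int, next(it).split())
    vectors = {}
    for line in it:
        parts = line.rstrip().split(" ")
        word, vec = parts[0], [float(x) for x in parts[1:]]
        if len(vec) == dim:  # skip malformed rows
            vectors[word] = vec
    return vectors

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Toy example standing in for the released file (hypothetical words/values):
toy = [
    "2 3",
    "goed 1.0 0.0 0.0",
    "slecht -1.0 0.0 0.0",
]
vecs = load_word2vec_text(toy)
similarity = cosine(vecs["goed"], vecs["slecht"])
```

In practice the same loader would read the released file line by line; libraries such as gensim offer equivalent loaders for this format.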

  • Pierre Voué
  • Elizabeth Cappon
  • Tom De Smedt