Textgain is developing new web services for very specific problems, including identifying hate speech and depression on social media. Recently, we have made notable progress in identifying hate speech, specifically Islamic State (IS/ISIS/ISIL/Daesh) tweets.
In a recent announcement (February 2016), Twitter spoke out against the use of its microblogging platform to promote terrorism. The company reports having suspended over 125,000 profiles for threatening or promoting terrorist acts, primarily related to Islamic State, using a combination of manual review and proprietary anti-spam technology.
Twitter’s mission is challenging: for every subversive profile it suspends, a new one appears. Profiles that have not yet been suspended then broadcast the existence of the new one, and so on, in an endless game of cat and mouse.
Instead of relying on a fixed notion of what hate speech looks like, our machine learning model fits itself to the available data as the rhetoric evolves. In lab conditions it is over 80% accurate. In the real world, however, automatically identifying one inflammatory tweet among a million others is very difficult: at such a low base rate, even an accurate classifier will flag many harmless tweets for every genuinely subversive one. We must use such tools with caution, since they produce false positives and false negatives alike. Still, Textgain believes that our technology can be a valuable aid to the manual review of subversive text.
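To make the idea of a model that fits itself to the data more concrete, here is a minimal sketch of the general technique in Python. It assumes a scikit-learn pipeline with character n-gram features and a logistic regression classifier; the library choice, features and placeholder tweets are our illustration here, not a description of the production system.

```python
# Illustrative sketch only: a generic supervised text classifier that can be
# retrained whenever newly labelled tweets become available. This is NOT
# Textgain's actual model; the tiny inline dataset is placeholder data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: (tweet text, label), 1 = subversive, 0 = benign.
training_tweets = [
    ("join the fight brothers, the caliphate calls", 1),
    ("the crusaders will pay for their crimes", 1),
    ("victory is near, pledge yourselves now", 1),
    ("lovely weather in Brussels today", 0),
    ("new paper on text mining accepted, very happy", 0),
    ("watching the football match with friends", 0),
]
texts, labels = zip(*training_tweets)

# Character n-grams are fairly robust to spelling variation and obfuscation,
# which matters when the rhetoric (and its spelling) keeps changing.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(texts, labels)

# Rank unseen tweets by predicted probability so that human reviewers can
# inspect the most suspicious ones first, rather than acting on raw labels.
unseen = [
    "had a great day at the beach",
    "the caliphate calls all brothers to fight",
]
scores = model.predict_proba(unseen)[:, 1]
for tweet, score in sorted(zip(unseen, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {tweet}")
```

In this setup, adapting to evolving rhetoric simply means refitting the pipeline on an updated labelled collection, and the probability ranking is only meant to prioritise tweets for human reviewers, in line with the caution about false positives above.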
We will gladly discuss and freely share our technology with affected platforms such as Twitter and with recognized security agencies. Ask us about it!