Detecting Toxic Content

Natural Language Processing allows us to detect online harassment.
Discussing things you care about can be difficult. The threat of abuse and harassment online means that many people stop expressing themselves and give up on seeking different opinions. Platforms struggle to effectively facilitate conversations, leading many communities to limit or completely shut down user comments.
The challenge was to build a model that can detect different types of toxicity, such as threats, obscenity, insults, and identity-based hate, and in doing so help make online discussions more productive and respectful.
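Because a single comment can exhibit several of these toxicity types at once, the task is naturally framed as multi-label classification. Below is a minimal sketch of such a classifier, assuming a TF-IDF plus one-vs-rest logistic regression baseline and hypothetical toy data; the project's actual model may differ (see the repository linked below).

```python
# A minimal multi-label toxicity sketch, assuming a TF-IDF +
# logistic regression baseline; labels and data are illustrative only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import Pipeline

# Hypothetical toy labels mirroring the toxicity types named above.
LABELS = ["toxic", "threat", "obscene", "insult", "identity_hate"]

# Each comment can carry several labels at once (rows align with `y`).
comments = [
    "I will find you and hurt you",         # toxic, threat
    "What a lovely, thoughtful post!",      # clean
    "You are a complete idiot",             # toxic, insult
    "What the hell is this crap",           # toxic, obscene
    "Go back to your own country",          # toxic, identity_hate
    "Thanks for sharing your perspective",  # clean
]
y = np.array([
    [1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0],
    [1, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
])

# One independent binary classifier per toxicity type, all sharing
# a single word- and bigram-level TF-IDF representation.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", OneVsRestClassifier(LogisticRegression(max_iter=1000))),
])
model.fit(comments, y)

# Per-label probabilities for a new comment.
probs = model.predict_proba(["you people are all stupid"])[0]
for label, p in zip(LABELS, probs):
    print(f"{label}: {p:.2f}")
```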
Check it on GitHub