Demoting Racial Bias in Hate Speech Detection [article]

Mengzhou Xia, Anjalie Field, Yulia Tsvetkov
2020, arXiv pre-print
In current hate speech datasets, there exists a high correlation between annotators' perceptions of toxicity and signals of African American English (AAE). This bias in annotated training data and the tendency of machine learning models to amplify it cause AAE text to often be mislabeled as abusive/offensive/hate speech with a high false positive rate by current hate speech classifiers. In this paper, we use adversarial training to mitigate this bias, introducing a hate speech classifier that learns to detect toxic sentences while demoting confounds corresponding to AAE texts. Experimental results on a hate speech dataset and an AAE dataset suggest that our method is able to substantially reduce the false positive rate for AAE text while only minimally affecting the performance of hate speech classification.
arXiv:2005.12246v1
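
The abstract describes adversarial training that demotes AAE-correlated confounds while learning toxicity. Below is a minimal sketch of one common way to realize such a setup: a shared text encoder feeding a hate-speech head and an adversarial AAE head through a gradient-reversal layer. The encoder, layer sizes, and class names here are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient flows into the encoder; no gradient for lambd.
        return -ctx.lambd * grad_output, None


class AdversarialToxicityClassifier(nn.Module):
    """Hypothetical sketch: a shared encoder with a toxicity head and an
    adversarial AAE head. The gradient reversal pushes the encoder to discard
    AAE signals (the confound) while the main head keeps learning toxicity."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.hate_head = nn.Linear(hidden_dim, 2)  # toxic vs. non-toxic
        self.aae_head = nn.Linear(hidden_dim, 2)   # adversary: AAE vs. non-AAE

    def forward(self, token_ids):
        _, (h, _) = self.encoder(self.embed(token_ids))
        rep = h[-1]  # final hidden state as the sentence representation
        hate_logits = self.hate_head(rep)
        aae_logits = self.aae_head(GradReverse.apply(rep, self.lambd))
        return hate_logits, aae_logits


# Joint objective (sketch): minimize the hate-speech loss; the reversed
# gradient makes the encoder maximize the adversary's loss, demoting the
# AAE confound from the shared representation.
#   loss = ce(hate_logits, hate_labels) + ce(aae_logits, aae_labels)
```

In this kind of setup, the scaling factor `lambd` trades off toxicity accuracy against how aggressively the AAE signal is removed; tuning it would govern the reported balance between a lower false positive rate on AAE text and overall classification performance.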