A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
Does Robustness Improve Fairness? Approaching Fairness with Word Substitution Robustness Methods for Text Classification
[article] · 2021 · arXiv pre-print
Existing bias mitigation methods to reduce disparities in model outcomes across cohorts have focused on data augmentation, debiasing model embeddings, or adding fairness-based optimization objectives during training. Separately, certified word substitution robustness methods have been developed to decrease the impact of spurious features and synonym substitutions on model predictions. While their end goals are different, they both aim to encourage models to make the same prediction for certain …
arXiv:2106.10826v1
fatcat:fhgqzd2ssngn3d4ok4oc7gr6ui
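The shared objective noted in the abstract, that a model should make the same prediction when certain words in the input are substituted, can be illustrated with a minimal sketch. The snippet below is a hypothetical illustration, not code from the paper: the classifier, the synonym table, and the helper names are all assumed for demonstration.

from typing import Callable

# Hypothetical synonym table; illustrative only, not from the paper.
SYNONYMS = {"movie": "film", "great": "excellent"}

def substitute(text: str) -> str:
    # Replace each word with its listed synonym, if any.
    return " ".join(SYNONYMS.get(w, w) for w in text.split())

def is_invariant(classify: Callable[[str], int], text: str) -> bool:
    # Both fairness and certified-robustness methods, in effect,
    # push this check toward True for their respective substitution sets.
    return classify(text) == classify(substitute(text))

# Toy classifier standing in for a trained model.
toy_model = lambda s: int("great" in s or "excellent" in s)
print(is_invariant(toy_model, "a great movie"))  # True: label preserved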