Debiasing representations by removing unwanted variation due to protected attributes

Amanda Bower, Laura Niss, Yuekai Sun, Alexander Vargo
2018 · arXiv pre-print
We propose a regression-based approach to removing implicit biases in representations. On tasks where the protected attribute is observed, the method is statistically more efficient than known approaches. Further, we show that this approach yields debiased representations that satisfy a first-order approximation of conditional parity. Finally, we demonstrate the efficacy of the proposed approach by reducing racial bias in recidivism risk scores.
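The core regression step can be illustrated with a generic linear residualization: regress the representation on the protected attribute and keep the residuals, which are orthogonal to the attribute. This is a minimal sketch of that idea, not the paper's exact estimator; the function name `debias_linear`, the simulated data, and the binary attribute `a` are illustrative assumptions.

```python
import numpy as np

def debias_linear(X, Z):
    """Remove the component of X explained linearly by Z.

    X: (n, d) representation matrix.
    Z: (n, k) protected-attribute design matrix (e.g. intercept + one-hot).
    Returns the least-squares residuals of X regressed on Z.
    """
    # Fit each representation column on Z by least squares.
    coef, *_ = np.linalg.lstsq(Z, X, rcond=None)
    # Residuals are orthogonal to the column space of Z.
    return X - Z @ coef

rng = np.random.default_rng(0)
n = 500
a = rng.integers(0, 2, size=n)                   # binary protected attribute
Z = np.column_stack([np.ones(n), a])             # intercept + attribute
X = rng.normal(size=(n, 3)) + 2.0 * a[:, None]   # representation shifted by a
Xd = debias_linear(X, Z)

# After residualization, a is linearly uninformative about Xd.
print(np.abs(Xd.T @ (a - a.mean())).max())
```

Because the residuals are orthogonal to every column of `Z`, the centered protected attribute has (numerically) zero covariance with each debiased coordinate.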
arXiv:1807.00461v1