Towards a Measure of Individual Fairness for Deep Learning

Krystal Maughan, Joseph P. Near
2020, arXiv pre-print
Deep learning has produced big advances in artificial intelligence, but trained neural networks often reflect and amplify bias in their training data, and thus produce unfair predictions. We propose a novel measure of individual fairness, called prediction sensitivity, that approximates the extent to which a particular prediction is dependent on a protected attribute. We show how to compute prediction sensitivity using standard automatic differentiation capabilities present in modern deep learning frameworks, and present preliminary empirical results suggesting that prediction sensitivity may be effective for measuring bias in individual predictions.
arXiv:2009.13650v1
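The abstract says prediction sensitivity approximates how strongly a single prediction depends on a protected attribute and that it can be computed with the automatic differentiation built into modern deep learning frameworks. Below is a minimal sketch of that idea, not the authors' reference implementation: it takes the gradient of a model's output with respect to the input features and reads off the component for the protected attribute. The model architecture, feature layout, and the `protected_idx` parameter are hypothetical assumptions for illustration.

```python
# Sketch: a gradient-based "prediction sensitivity" via reverse-mode autodiff in PyTorch.
# All names and shapes here are illustrative assumptions, not the paper's exact definition.
import torch
import torch.nn as nn

def prediction_sensitivity(model: nn.Module, x: torch.Tensor, protected_idx: int) -> float:
    """Approximate how strongly model(x) depends on the feature at protected_idx.

    Computes |d output / d x[protected_idx]| for a single example x using autograd.
    """
    x = x.clone().detach().requires_grad_(True)   # track gradients with respect to the input
    output = model(x.unsqueeze(0)).squeeze()      # single scalar prediction (assumed)
    grad, = torch.autograd.grad(output, x)        # gradient of the prediction w.r.t. all features
    return grad[protected_idx].abs().item()       # magnitude of sensitivity to the protected attribute

# Hypothetical usage: a small classifier over 10 tabular features,
# where feature index 3 is assumed to encode the protected attribute.
if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
    example = torch.randn(10)
    print(prediction_sensitivity(model, example, protected_idx=3))
```

Because the gradient is available for free from the same backward pass machinery used in training, a measure of this form can be evaluated per prediction at essentially no extra engineering cost, which is the practical appeal the abstract points to.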