Fairness without Harm: Decoupled Classifiers with Preference Guarantees
2019
International Conference on Machine Learning
In domains such as medicine, it can be acceptable for machine learning models to include sensitive attributes such as gender and ethnicity. In this work, we argue that when there is this kind of treatment disparity, it should be in the best interest of each group. Drawing on ethical principles such as beneficence ("do the best") and non-maleficence ("do no harm"), we show how to use sensitive attributes to train decoupled classifiers that satisfy preference guarantees. These guarantees [...]
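The core construction the abstract describes, training a classifier per group alongside a pooled classifier that ignores the sensitive attribute, and keeping a group-specific model only when that group benefits from it, can be sketched in a few lines. The sketch below is illustrative only: the helper names (decoupled_fit, decoupled_predict), the use of scikit-learn logistic regression, and the held-out-accuracy "prefers" test are assumptions for exposition, not the paper's actual procedure or its formal preference criteria.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def decoupled_fit(X, y, groups):
    """Fit a pooled model plus one model per group; assign each group
    whichever model scores better for that group on held-out data."""
    # Stratify on the group label so every group appears in both splits.
    X_tr, X_va, y_tr, y_va, g_tr, g_va = train_test_split(
        X, y, groups, test_size=0.3, random_state=0, stratify=groups)
    pooled = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    assignment = {}
    for g in np.unique(groups):
        tr, va = g_tr == g, g_va == g
        own = LogisticRegression(max_iter=1000).fit(X_tr[tr], y_tr[tr])
        # "Do no harm" check: keep the group-specific model only if the
        # group weakly prefers it (here: higher held-out accuracy) to the
        # pooled model that ignores the sensitive attribute.
        if own.score(X_va[va], y_va[va]) >= pooled.score(X_va[va], y_va[va]):
            assignment[g] = own
        else:
            assignment[g] = pooled
    return assignment

def decoupled_predict(assignment, X, groups):
    """Route each example to the classifier assigned to its group."""
    preds = np.empty(len(X), dtype=int)
    for g, model in assignment.items():
        mask = groups == g
        if mask.any():
            preds[mask] = model.predict(X[mask])
    return preds

Held-out accuracy is only one possible benefit measure for the preference test; the same routing scheme works with calibrated risk or any group-specific utility.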
dblp:conf/icml/UstunLP19
fatcat:z3y5fia7n5bntdrddda37oceui