Learning Fair Representations via an Adversarial Framework

Rui Feng, Yang Yang, Yuehan Lyu, Chenhao Tan, Yizhou Sun, Chunping Wang
2019, arXiv preprint
Fairness has become a central issue for our research community as classification algorithms are adopted in societally critical domains such as recidivism prediction and loan approval. In this work, we consider potential bias based on protected attributes (e.g., race and gender), and tackle this problem by learning latent representations of individuals that are statistically indistinguishable between protected groups while preserving enough other information for classification. To do that, we develop a minimax adversarial framework with a generator that captures the data distribution and produces latent representations, and a critic that ensures the distributions of those representations are similar across protected groups. Our framework provides theoretical guarantees with respect to statistical parity and individual fairness. Empirical results on four real-world datasets also show that the learned representations can be used effectively for classification tasks such as credit risk prediction while obstructing information related to protected groups, especially when removing the protected attributes alone is not sufficient for fair classification.
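The generator–critic minimax idea described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the linear encoder, logistic critic and classifier, the hyperparameters, and the synthetic data are all illustrative assumptions. The encoder is trained to lower the classifier's loss on the task label while raising the critic's loss at predicting the protected group from the representation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(p, t):
    # binary cross-entropy with a small epsilon for numerical safety
    eps = 1e-9
    return -np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))

rng = np.random.default_rng(0)
n, d, k = 400, 5, 3
s = rng.integers(0, 2, n).astype(float)          # protected group (illustrative)
X = rng.normal(size=(n, d)) + 1.5 * s[:, None]   # features that leak the group
y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0.75).astype(float)  # task label

W = rng.normal(scale=0.1, size=(d, k))  # "generator": linear encoder z = X @ W
v = rng.normal(scale=0.1, size=k)       # critic: predicts group s from z
u = rng.normal(scale=0.1, size=k)       # downstream classifier: predicts y from z
lam, lr = 1.0, 0.05                     # adversarial weight and step size (assumed)

loss_start = bce(sigmoid((X @ W) @ u), y)

for step in range(300):
    z = X @ W
    # critic step: improve at distinguishing protected groups from z
    p = sigmoid(z @ v)
    v -= lr * (z.T @ (p - s)) / n
    # classifier step: improve at predicting the task label from z
    q = sigmoid(z @ u)
    u -= lr * (z.T @ (q - y)) / n
    # encoder step: help the classifier, fool the (updated) critic
    p = sigmoid(z @ v)
    grad_z = np.outer(q - y, u) / n - lam * np.outer(p - s, v) / n
    W -= lr * X.T @ grad_z

z = X @ W
loss_end = bce(sigmoid(z @ u), y)
```

After training, the representation `z` should still support predicting `y` (the classifier loss drops from its initial value) while the encoder's adversarial term pushes the two groups' representation distributions toward each other; the paper's actual framework replaces these linear maps with learned networks and gives formal guarantees.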
arXiv:1904.13341v1