
Robust Optimization for Fairness with Noisy Protected Groups [article]

Serena Wang, Wenshuo Guo, Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, Michael I. Jordan
2020 arXiv   pre-print
Many existing fairness criteria for machine learning involve equalizing some metric across protected groups such as race or gender.  ...  Second, we introduce two new approaches using robust optimization that, unlike the naive approach of only relying on Ĝ, are guaranteed to satisfy fairness criteria on the true protected groups G while  ...  We also thank Stefania Albanesi and Domonkos Vámossy for an inspiring early discussion of practical scenarios when noisy protected groups occur.  ... 
arXiv:2002.09343v3 fatcat:7tunjpkm4be73kxvr3wyi3szrm
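As an illustration of the robust-optimization idea described in this abstract, the sketch below checks a demographic-parity constraint on noisy group labels Ĝ and tightens the tolerance to hedge against label flips. All names are invented here, and the slack formula is a crude placeholder, not the paper's construction.

```python
import numpy as np

def dp_gap(preds, groups):
    """Demographic-parity gap: difference in positive-prediction rates
    between the two (possibly noisy) groups."""
    return abs(preds[groups == 0].mean() - preds[groups == 1].mean())

def robust_dp_satisfied(preds, noisy_groups, eps, flip_rate):
    """Check a demographic-parity constraint at tolerance eps, tightened
    to account for group labels flipped with probability flip_rate.
    The slack term 2 * flip_rate stands in for whatever noise-dependent
    bound the analysis supplies; it is illustrative only."""
    slack = 2.0 * flip_rate
    return dp_gap(preds, noisy_groups) <= max(eps - slack, 0.0)
```

Enforcing the tightened constraint on Ĝ is how such approaches can certify the original constraint on the true groups G.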

Fairness without Demographics through Adversarially Reweighted Learning [article]

Preethi Lahoti, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost, Nithum Thain, Xuezhi Wang, Ed H. Chi
2020 arXiv   pre-print
Our results show that ARL improves Rawlsian Max-Min fairness, with notable AUC improvements for worst-case protected groups in multiple datasets, outperforming state-of-the-art alternatives.  ...  Therefore we ask: How can we train an ML model to improve fairness when we do not even know the protected group memberships?  ...  Recent research [19, 5, 30, 9] has identified fairness concerns in several ML systems, especially toward protected groups that are under-represented in the data.  ... 
arXiv:2006.13114v3 fatcat:gqfqggubcjertibeaqlsfui3w4
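The adversarial-reweighting idea in this abstract alternates two players: an adversary upweights high-loss examples, and the learner minimizes the reweighted loss, improving worst-case (Rawlsian Max-Min) performance without group labels. The sketch below is a simplified stand-in: in ARL the adversary is a learned network over features, whereas here it is just a softmax over current losses.

```python
import numpy as np

def adversary_weights(losses, temperature=1.0):
    """Adversary step: upweight high-loss examples via a softmax over
    losses, normalized so the mean weight is 1."""
    w = np.exp(losses / temperature)
    return w / w.sum() * len(losses)

def reweighted_logreg_step(w_vec, X, y, weights, lr=0.1):
    """Learner step: one gradient update of logistic regression on the
    adversarially reweighted loss (the other side of the minimax game)."""
    p = 1.0 / (1.0 + np.exp(-X @ w_vec))
    grad = X.T @ (weights * (p - y)) / len(y)
    return w_vec - lr * grad
```

Iterating the two steps concentrates learning effort on whichever (unknown) subpopulation the model currently serves worst.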

Bias-Tolerant Fair Classification [article]

Yixuan Zhang, Feng Zhou, Zhidong Li, Yang Wang, Fang Chen
2021 arXiv   pre-print
Most algorithmic fairness approaches perform an empirical risk minimization with predefined fairness constraints, which tends to trade off accuracy for fairness.  ...  However, such methods would achieve the desired fairness level at the sacrifice of the benefits (receiving positive outcomes) for individuals affected by the bias.  ...  Then we will show how B-FARL is designed by decomposing gp for the protected and unprotected groups. First, we take the expectation of gp w.r.t.  ... 
arXiv:2107.03207v1 fatcat:juapmifjvzeolmnz2aanxte4ju

Fairness for Robust Learning to Rank [article]

Omid Memarrast, Ashkan Rezaei, Rizal Fathony, Brian Ziebart
2021 arXiv   pre-print
To achieve this type of group fairness for ranking, we derive a new ranking system based on the first principles of distributional robustness.  ...  While conventional ranking systems focus solely on maximizing the utility of the ranked items to users, fairness-aware ranking systems additionally try to balance the exposure for different protected attributes  ...  This work addresses the problem of providing more robust fairness given a chosen fairness criterion, but does not answer the broader question of which fairness criterion is appropriate for a particular  ... 
arXiv:2112.06288v1 fatcat:wkx64symqrerpo57gm25quvc6a

Can Less be More? When Increasing-to-Balancing Label Noise Rates Considered Beneficial [article]

Yang Liu, Jialu Wang
2021 arXiv   pre-print
Then we present a method to insert label noise properly for the task of learning with noisy labels, either without or with a fairness constraint.  ...  We propose a detection method that informs us which group of labels might suffer from higher noise without using ground truth labels.  ...  Acknowledgments This work is partially supported by the National Science Foundation (NSF) under grant IIS-2007951 and the NSF FAI program in collaboration with Amazon under grant IIS-2040800.  ... 
arXiv:2107.05913v2 fatcat:cffmpkhjdjf7hll6gyeibn3eum
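The "increasing-to-balancing" idea in this abstract can be illustrated with a toy routine that flips labels in the cleaner group until both groups share the higher noise rate. The function name, the dict-based interface, and the assumption of known symmetric noise rates are all illustrative, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def balance_label_noise(labels, groups, noise_rates):
    """Flip binary labels in the lower-noise group so every group reaches
    the highest existing noise rate. noise_rates[g] is the (estimated)
    current flip probability for group g."""
    target = max(noise_rates.values())
    noisy = labels.copy()
    for g, rate in noise_rates.items():
        # extra flips needed so the total flip probability reaches target:
        # rate + (1 - rate) * extra = target
        extra = (target - rate) / (1.0 - rate)
        idx = np.where(groups == g)[0]
        flip = idx[rng.random(len(idx)) < extra]
        noisy[flip] = 1 - noisy[flip]
    return noisy
```

Equalized noise rates can make a downstream noise-robust or fairness-constrained learner behave more uniformly across groups, which is the regime the paper analyzes.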

Fair Classification with Noisy Protected Attributes: A Framework with Provable Guarantees [article]

L. Elisa Celis, Lingxiao Huang, Vijay Keswani, Nisheeth K. Vishnoi
2021 arXiv   pre-print
We present an optimization framework for learning a fair classifier in the presence of noisy perturbations in the protected attributes.  ...  Compared to prior work, our framework can be employed with a very general class of linear and linear-fractional fairness constraints, can handle multiple, non-binary protected attributes, and outputs a  ...  [59] propose a robust optimization approach to solve the noisy fair classification problem.  ... 
arXiv:2006.04778v3 fatcat:kkuzsnhe35dnvnf6sm5lxo6iem

Fair Classification with Group-Dependent Label Noise [article]

Jialu Wang, Yang Liu, Caleb Levy
2020 arXiv   pre-print
function for a protected subgroup.  ...  This work examines how to train fair classifiers in settings where training labels are corrupted with random noise, and where the error rates of corruption depend both on the label class and on the membership  ...  Our work contributes to the fair classification literature by introducing robust methods for dealing with heterogeneous label noise.  ... 
arXiv:2011.00379v1 fatcat:w3ihvffs4fd4bdsrgwjm5ah3cu
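A standard tool for the setting this abstract describes is loss correction with a per-group noise-transition matrix; the sketch below shows the classic backward correction (in the style of Natarajan et al. / Patrini et al.), offered here as context rather than as this paper's specific method.

```python
import numpy as np

def backward_corrected_loss(loss_vec, T):
    """Backward loss correction for binary labels.

    loss_vec = [l(f(x), y=0), l(f(x), y=1)] are losses under each
    possible observed label, and T[i, j] = P(observed j | true i) is the
    flip matrix. The corrected losses T^{-1} @ loss_vec are unbiased for
    the clean loss, since E over observed labels recovers loss_vec.
    With group-dependent noise, each protected subgroup uses its own T."""
    return np.linalg.inv(T) @ loss_vec
```

Unbiasedness follows because averaging the corrected losses over the noisy label distribution multiplies by T, which cancels the T^{-1}.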

Learning from Noisy Labels with Deep Neural Networks: A Survey [article]

Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, Jae-Gil Lee
2022 arXiv   pre-print
Next, we provide a comprehensive review of 62 state-of-the-art robust training methods, all of which are categorized into five groups according to their methodological difference, followed by a systematic  ...  As noisy labels severely degrade the generalization performance of deep neural networks, learning from noisy labels (robust training) is becoming an important task in modern deep learning applications.  ...  Accordingly, the goal of fair training is building a model that satisfies such fairness criteria for the true protected groups.  ... 
arXiv:2007.08199v7 fatcat:c5ztk4jfpfddrhqvf6phcy32de

FLEA: Provably Fair Multisource Learning from Unreliable Training Data [article]

Eugenia Iofinova, Nikola Konstantinov, Christoph H. Lampert
2022 arXiv   pre-print
Fairness-aware learning aims at constructing classifiers that not only make accurate predictions, but also do not discriminate against specific groups.  ...  In this work we address the problem of fair learning from unreliable training data in the robust multisource setting, where the available training data comes from multiple sources, a fraction of which  ...  The authors would like to thank Bernd Prach, Elias Frantar, Alexandra Peste, Mahdi Nikdan, and Peter Sukenik for their helpful feedback.  ... 
arXiv:2106.11732v3 fatcat:4pxbvh5wbbb4fdcgxz2wgrkk7i

On Adversarial Bias and the Robustness of Fair Machine Learning [article]

Hongyan Chang, Ta Duy Nguyen, Sasi Kumar Murakonda, Ehsan Kazemi, Reza Shokri
2020 arXiv   pre-print
Optimizing prediction accuracy can come at the expense of fairness.  ...  We analyze data poisoning attacks against group-based fair machine learning, with the focus on equalized odds.  ...  Learning Fair Models from Noisy Training Data In most practical scenarios, the training data used for learning models might be biased (underrepresentation bias) and/or noisy (with mis-labeling).  ... 
arXiv:2006.08669v1 fatcat:4pr2fyd4nbdonkxp4kupdwxouy

Self-Paced Deep Regression Forests with Consideration on Ranking Fairness [article]

Lili Pan, Mingming Meng, Yazhou Ren, Yali Zheng, Zenglin Xu
2022 arXiv   pre-print
with each example, and tackle the fundamental ranking problem in SPL from a new perspective: fairness.  ...  To the best of our knowledge, our work is the first paper in the literature of SPL that considers ranking fairness for self-paced regime construction.  ...  The notion of fairness is originally defined with respect to a protected attribute such as gender, race or age.  ... 
arXiv:2112.06455v6 fatcat:43r7b6kifbadzjevbkrlmeid5m

On the Intrinsic and Extrinsic Fairness Evaluation Metrics for Contextualized Language Representations [article]

Yang Trista Cao and Yada Pruksachatkun and Kai-Wei Chang and Rahul Gupta and Varun Kumar and Jwala Dhamala and Aram Galstyan
2022 arXiv   pre-print
These metrics can be roughly divided into two categories: 1) extrinsic metrics for evaluating fairness in downstream applications and 2) intrinsic metrics for estimating fairness in upstream contextualized  ...  such as experiment configuration for extrinsic metrics.  ...  ., 2021) can introduce additional bias, and thus should be optimized to be robust.  ... 
arXiv:2203.13928v1 fatcat:c3z6xau625gapo3iyc5u5scrua

Ensuring Fairness Beyond the Training Data [article]

Debmalya Mandal, Samuel Deng, Suman Jana, Jeannette M. Wing, Daniel Hsu
2020 arXiv   pre-print
We first reduce this problem to finding a fair classifier that is robust with respect to the class of distributions.  ...  In this work, we develop classifiers that are fair not only with respect to the training distribution, but also for a class of distributions that are weighted perturbations of the training samples.  ...  Acknowledgments: We thank Shipra Agrawal and Roxana Geambasu for helpful preliminary discussions. DM was supported through a Columbia Data Science Institute Post-Doctoral Fellowship.  ... 
arXiv:2007.06029v2 fatcat:n5ajt33uaffbrnthp7aeoz7hda

Fairness and Robustness in Invariant Learning: A Case Study in Toxicity Classification [article]

Robert Adragna, Elliot Creager, David Madras, Richard Zemel
2020 arXiv   pre-print
In light of recent work suggesting an intimate connection between fairness and robustness, we investigate whether algorithms from robust ML can be used to improve the fairness of classifiers that are trained  ...  Robustness is of central importance in machine learning and has given rise to the fields of domain generalization and invariant learning, which are concerned with improving performance on a test distribution  ...  protected group membership in fairness applications (for instance, whether the comment is about a particular race or gender).  ... 
arXiv:2011.06485v2 fatcat:qujlfapvwzfkbkk7nif2bbyxqm

Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [article]

Cuong Tran, Ferdinando Fioretto, Pascal Van Hentenryck
2020 arXiv   pre-print
A critical concern in data-driven decision making is to build models whose outcomes do not discriminate against some demographic groups, including gender, ethnicity, or age.  ...  To address this challenge, this paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.  ...  The protected attribute associated with the group membership is age and the experiments consider the following three different group membership sizes: -Bank, in which the number of protected groups |A|  ... 
arXiv:2009.12562v1 fatcat:objnqxrywves5pgqxbfk25fgci
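The Lagrangian-dual framing named in this title can be sketched as a generic primal-dual update: descend on the model parameters for the penalized objective, ascend on the multiplier for the fairness violation. The function signature is invented for illustration, and the sketch omits the differential-privacy mechanism entirely.

```python
import numpy as np

def lagrangian_dual_step(theta, lam, grad_loss, violation, grad_violation,
                         lr_theta=0.1, lr_lam=0.1):
    """One primal-dual update for min_theta max_{lam >= 0}
    L(theta) + lam * c(theta), where c measures fairness violation.

    Gradient descent on theta for the Lagrangian; projected gradient
    ascent on the multiplier lam (clipped at 0)."""
    theta = theta - lr_theta * (grad_loss + lam * grad_violation)
    lam = max(0.0, lam + lr_lam * violation)
    return theta, lam
```

When the constraint is satisfied (violation <= 0) the multiplier decays toward zero; when it is violated, lam grows and the penalty pushes theta back toward feasibility.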
Showing results 1 — 15 out of 6,567 results