
Fairness for Robust Log Loss Classification [article]

Ashkan Rezaei, Rizal Fathony, Omid Memarrast, Brian Ziebart
2019 arXiv   pre-print
Following the first principles of distributional robustness, we derive a new classifier that incorporates fairness criteria into its worst-case logarithmic loss minimization.  ...  Developing classification methods with high accuracy that also avoid unfair treatment of different groups has become increasingly important for data-driven decision making in social applications.  ...  The Fair Robust Log-Loss Predictor, P̂, minimizes the worst-case log loss, as chosen by an adversary P̌ constrained to reflect training statistics, while providing empirical fairness guarantees: min_{P̂ ∈ Δ∩Γ} max  ...
arXiv:1903.03910v3 fatcat:u7ldbmx74zhrfjqmmja7vfonfe
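For readability, here is the min-max program the snippet above truncates, reconstructed from the snippet's own description (the hat/check notation and the exact expectation structure are our inference from the abstract, not copied from the paper):

```latex
\min_{\hat P \in \Delta \cap \Gamma}\;
\max_{\check P \in \Delta \cap \Xi}\;
\mathbb{E}_{x \sim \tilde P,\; \check y \mid x \sim \check P}
  \left[ -\log \hat P(\check y \mid x) \right]
```

Here Δ is the conditional probability simplex, Γ the set of fairness-constrained predictors, Ξ the set of adversaries matching training statistics, and P̃ the empirical input distribution.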

Fairness for Robust Log Loss Classification

Ashkan Rezaei, Rizal Fathony, Omid Memarrast, Brian Ziebart
2020 Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence
We instead re-derive a new classifier from the first principles of distributional robustness that incorporates fairness criteria into a worst-case logarithmic loss minimization.  ...  Developing classification methods with high accuracy that also avoid unfair treatment of different groups has become increasingly important for data-driven decision making in social applications.  ...  The Fair Robust Log-Loss Predictor, P, minimizes the worst-case log loss, as chosen by an approximator Q constrained to reflect training statistics (denoted by set Ξ of Eq. (6)), while providing empirical fairness  ...
doi:10.1609/aaai.v34i04.6002 fatcat:m5ivkh2v5va2lln4cwknd4fyda

Robust Fairness Under Covariate Shift

Ashkan Rezaei, Anqi Liu, Omid Memarrast, Brian D. Ziebart
2021 Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence
Making predictions that are fair with regard to protected attributes (race, gender, age, etc.) has become an important requirement for classification algorithms.  ...  We propose an approach that obtains a predictor that is robust to worst-case test performance while satisfying target fairness requirements and matching statistical properties of the source data  ...  Acknowledgements: This work was supported by the National Science Foundation Program on Fairness in AI in collaboration with Amazon under award No. 1939743.  ...
doi:10.1609/aaai.v35i11.17135 fatcat:sr74ft6jxbbqre6s5nvknmi5wa
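As a rough sketch of our reading of this abstract (not the paper's exact formulation), the covariate-shift variant keeps the same min-max shape: the adversary must match statistics on the source distribution while the log loss is evaluated under the target input distribution:

```latex
\min_{\hat P \in \Delta \cap \Gamma_{\mathrm{tgt}}}\;
\max_{\check P \in \Delta \cap \Xi_{\mathrm{src}}}\;
\mathbb{E}_{x \sim P_{\mathrm{tgt}},\; \check y \mid x \sim \check P}
  \left[ -\log \hat P(\check y \mid x) \right]
```

with Γ_tgt encoding the target fairness requirements and Ξ_src the source-statistic matching constraints; the precise constraint sets are defined in the paper.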

Robust Fairness under Covariate Shift [article]

Ashkan Rezaei, Anqi Liu, Omid Memarrast, Brian Ziebart
2021 arXiv   pre-print
Making predictions that are fair with regard to protected group membership (race, gender, age, etc.) has become an important requirement for classification algorithms.  ...  We propose an approach that obtains a predictor that is robust to the worst case in terms of target performance while satisfying target fairness requirements and matching statistical properties of the  ...  Acknowledgements: This work was supported by the National Science Foundation Program on Fairness in AI in collaboration with Amazon under award No. 1939743.  ...
arXiv:2010.05166v3 fatcat:qihc65gmxrf5bbm6t7qn3qwj6q

Long Term Fairness for Minority Groups via Performative Distributionally Robust Optimization [article]

Liam Peet-Pare, Nidhi Hegde, Alona Fyshe
2022 arXiv   pre-print
Fairness researchers in machine learning (ML) have coalesced around several fairness criteria that provide formal definitions of what it means for an ML model to be fair.  ...  We identify four key shortcomings of these formal fairness criteria, and aim to help address them by extending performative prediction to include a distributionally robust objective.  ...  Too often, however, this is done without adequate concern for the fairness and robustness of these ML models.  ...
arXiv:2207.05777v1 fatcat:3jzto34dprdo5iazephvego6ua
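For context, performative prediction (Perdomo et al., 2020) scores a model on the distribution its own deployment induces, via the performative risk

```latex
\mathrm{PR}(\theta) \;=\; \mathbb{E}_{Z \sim \mathcal{D}(\theta)}\bigl[\ell(Z;\theta)\bigr]
```

Our reading of this abstract is that the proposed extension replaces that expectation with a worst case over an ambiguity set around D(θ); the exact construction is defined in the paper.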

Towards Measuring Fairness in AI: the Casual Conversations Dataset [article]

Caner Hazirbas, Joanna Bitton, Brian Dolhansky, Jacqueline Pan, Albert Gordo, Cristian Canton Ferrer
2021 arXiv   pre-print
In addition, we evaluate state-of-the-art apparent age and gender classification methods.  ...  Our experiments provide a thorough analysis of these models in terms of fair treatment of people from various backgrounds.  ...  We would like to thank Ida Cheng and Tashrima Hossain for their help in annotating the dataset for Fitzpatrick skin type.  ...
arXiv:2104.02821v2 fatcat:lqzobkgnmfd6zebivla3gk52ky
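A disaggregated evaluation of the kind this dataset supports reduces to comparing a metric across subgroups; a minimal, generic sketch (all names hypothetical, not code from the paper):

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group_label, y_true, y_pred) triples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        correct[group] += int(y_true == y_pred)
    return {g: correct[g] / total[g] for g in total}

# Example: per-group accuracy and the worst-case gap between groups.
accs = accuracy_by_group([("A", 1, 1), ("A", 0, 1), ("B", 1, 1), ("B", 0, 0)])
print(accs, max(accs.values()) - min(accs.values()))
```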

Fair Classification with Instance-dependent Label Noise

Songhua Wu, Mingming Gong, Bo Han, Yang Liu, Tongliang Liu
2022 Conference on Causal Learning and Reasoning  
For statistical fairness notions, we rewrite the classification risk and the fairness metric in terms of noisy data and thereby build robust classifiers.  ...  For the causality-based fairness notion, we exploit the internal causal structure of data to model the label noise and counterfactual fairness simultaneously.  ...  We thank anonymous reviewers for their constructive comments.  ... 
dblp:conf/clear2/WuGHLL22 fatcat:nsvkhbw2knaunm44deuusfrz2i
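The "rewrite the classification risk in terms of noisy data" step is in the spirit of classical loss correction. For the simpler class-conditional case (not the paper's instance-dependent setting), the unbiased estimator of Natarajan et al. (2013) can be sketched as:

```python
def corrected_loss(loss, y_noisy, score, rho_pos, rho_neg):
    """Unbiased surrogate for binary labels y in {+1, -1} observed under
    class-conditional flip rates rho_pos = P(flip | y = +1) and
    rho_neg = P(flip | y = -1). `loss(score, y)` is any base loss;
    requires rho_pos + rho_neg < 1."""
    rho_y = rho_pos if y_noisy == +1 else rho_neg        # flip rate of observed class
    rho_other = rho_neg if y_noisy == +1 else rho_pos    # flip rate of opposite class
    return ((1 - rho_other) * loss(score, y_noisy)
            - rho_y * loss(score, -y_noisy)) / (1 - rho_pos - rho_neg)
```

In expectation over the noisy labels, this surrogate equals the clean-label loss, which is what makes risk minimization on noisy data consistent in that setting.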

Scalable and Stable Surrogates for Flexible Classifiers with Fairness Constraints

Harry Bendekgey, Erik B. Sudderth
2021 Neural Information Processing Systems  
We analyze an easy-to-use and robust way of imposing fairness constraints when training, and through this framework prove that some prior fairness surrogates exhibit degeneracies for non-convex models.  ...  We investigate how fairness relaxations scale to flexible classifiers like deep neural networks for images and text.  ...  [1] use the 0-1 loss for both the predictive loss and the constraint. These losses linearly combine via the KKT conditions, inducing a classification problem with reweighted data.  ... 
dblp:conf/nips/BendekgeyS21 fatcat:ukaat4qojfaurpncmd5nwaax5i
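The remark that the losses "linearly combine via the KKT conditions" corresponds to the familiar Lagrangian form: training loss plus a multiplier times a differentiable fairness surrogate. A generic sketch with a covariance-style demographic-parity surrogate (in the spirit of Zafar et al.; not the authors' code):

```python
import torch

def penalized_loss(logits, y, group, lam=1.0):
    """Binary cross-entropy plus a covariance-based fairness surrogate.
    `group` holds protected-attribute indicators in {0, 1}."""
    bce = torch.nn.functional.binary_cross_entropy_with_logits(logits, y.float())
    # Surrogate: covariance between group membership and the soft decision.
    g = group.float() - group.float().mean()
    surrogate = (g * torch.sigmoid(logits)).mean().abs()
    return bce + lam * surrogate
```

The paper's degeneracy analysis concerns exactly this kind of surrogate when the underlying model is non-convex.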

A Distributionally Robust Approach to Fair Classification [article]

Bahar Taskesen and Viet Anh Nguyen and Daniel Kuhn and Jose Blanchet
2020 arXiv   pre-print
We demonstrate that the resulting classifier improves fairness at a marginal loss of predictive accuracy on both synthetic and real datasets.  ...  We propose a distributionally robust logistic regression model with an unfairness penalty that prevents discrimination with respect to sensitive attributes such as gender or ethnicity.  ...  We use the following accuracy thresholds at the validation step to tune ρ for DR-FLR and C for DOB+: 95% for the Drug, Adult, and Arrhythmia datasets and 70% for the COMPAS dataset.  ...
arXiv:2007.09530v1 fatcat:cng3b3sylfa4phhf7lgdonez44
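One plausible way to write the kind of objective described here (our paraphrase of the abstract; the paper defines the ambiguity set and penalty precisely):

```latex
\min_{\theta}\;
\sup_{\mathbb{Q} \in \mathbb{B}_{\rho}(\hat{\mathbb{P}}_N)}
\left\{ \mathbb{E}_{\mathbb{Q}}\!\left[\ell_{\mathrm{log}}(\theta; X, Y)\right]
        + \eta\, \mathcal{U}(\theta, \mathbb{Q}) \right\}
```

where B_ρ is a radius-ρ ball around the empirical distribution, ℓ_log the logistic loss, U an unfairness penalty, and η its weight.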

Training individually fair ML models with Sensitive Subspace Robustness [article]

Mikhail Yurochkin, Amanda Bower, Yuekai Sun
2020 arXiv   pre-print
We formalize this notion of algorithmic fairness as a variant of individual fairness and develop a distributionally robust optimization approach to enforce it during training.  ...  For example, the performance of a resume screening system should be invariant under changes to the gender and/or ethnicity of the applicant.  ...  Fairness through (distributional) robustness: To motivate our approach, imagine an auditor investigating an ML model for unfairness.  ...
arXiv:1907.00020v2 fatcat:xiufezrkuzhopnmshkcyjsm56m
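The "sensitive subspace" idea can be sketched as adversarial training where perturbations are confined to directions that encode protected information (e.g., a learned gender direction). A minimal illustration with gradient ascent in that subspace (hypothetical helper, not the authors' implementation):

```python
import torch

def worst_case_in_subspace(model, loss_fn, x, y, A, step=0.1, n_steps=10):
    """Find a worst-case 'comparable' input x + c @ A.T by ascending the loss
    over coefficients c, so perturbations stay in span(A); A has shape (d, k)
    with columns spanning the sensitive directions."""
    c = torch.zeros(x.shape[0], A.shape[1], requires_grad=True)
    for _ in range(n_steps):
        loss = loss_fn(model(x + c @ A.T), y)
        (g,) = torch.autograd.grad(loss, c)
        with torch.no_grad():
            c += step * g  # gradient ascent: maximize loss within the subspace
    # The outer training loop would then minimize the loss at this input.
    return (x + c @ A.T).detach()
```

Minimizing the loss at these worst-case inputs pushes the model toward the invariance the snippet describes, e.g., predictions that do not change when only sensitive directions of a resume change.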

Realizable Learning is All You Need [article]

Max Hopkins, Daniel Kane, Shachar Lovett, Gaurav Mahajan
2021 arXiv   pre-print
This includes models with no known characterization of learnability, such as learning with arbitrary distributional assumptions or general loss, as well as a host of other popular settings such as robust learning, partial learning, fair learning, and the statistical query model.  ...  Acknowledgements: The authors would like to thank Shay Moran, Russell Impagliazzo, and Omar Montasser for enlightening discussions.  ...
arXiv:2111.04746v1 fatcat:h3kx6pf6azeyfcd5yqn3cqtijq

Learning Fair Representation via Distributional Contrastive Disentanglement [article]

Changdae Oh, Heeji Won, Junhyuk So, Taero Kim, Yewon Kim, Hosik Choi, Kyungwoo Song
2022 arXiv   pre-print
Learning fair representations is crucial for achieving fairness or debiasing sensitive information.  ...  We provide a new type of contrastive loss motivated by Gaussian and Student-t kernels for distributional contrastive learning with theoretical analysis.  ...  Appendix A, Experimental Setup (A.1 Fair Classification): For fair classification, we consider three benchmark datasets previously used in [32].  ...
arXiv:2206.08743v1 fatcat:fhnytgfe7va2xb37w52eph3tbu
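The two kernels named in the snippet are standard; a small sketch of how each could score similarity between representation pairs (illustrative only; the paper's actual contrastive loss is more involved):

```python
import torch

def gaussian_kernel(z1, z2, sigma=1.0):
    # exp(-||z1 - z2||^2 / (2 sigma^2)), computed rowwise
    return torch.exp(-((z1 - z2) ** 2).sum(-1) / (2 * sigma ** 2))

def student_t_kernel(z1, z2, nu=1.0):
    # (1 + ||z1 - z2||^2 / nu)^(-(nu + 1) / 2); heavier tails than the Gaussian
    return (1 + ((z1 - z2) ** 2).sum(-1) / nu) ** (-(nu + 1) / 2)
```

The heavier tails of the Student-t kernel make it less sensitive to large embedding distances, which is one common motivation for preferring it in contrastive similarity scoring.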

Metric-Fair Active Learning

Jie Shen, Nan Cui, Jing Wang
2022 International Conference on Machine Learning  
In this paper, we study metric-fair active learning of homogeneous halfspaces, and show that under the distribution-dependent PAC learning model, fairness and label efficiency can be achieved  ...  Active learning has become a prevalent technique for designing label-efficient algorithms, where the central principle is to query and fit only "informative" labeled instances.  ...  Acknowledgements: We thank the anonymous reviewers and meta-reviewer for valuable comments on improving the notation and proof structure.  ...
dblp:conf/icml/ShenCW22 fatcat:gwk6lq5f45hcdi3x7yvbpa4z2m

CertiFair: A Framework for Certified Global Fairness of Neural Networks [article]

Haitham Khedr, Yasser Shoukry
2022 arXiv   pre-print
We propose a fairness loss that can be used during training to enforce fair outcomes for similar individuals. We then provide provable bounds on the fairness of the resulting NN.  ...  The first is to construct a verifier that checks whether the fairness property holds for a given NN in a classification task, or provides a counterexample if it is violated, i.e., the model is fair if all  ...  with a modified loss that enforces fair outcomes for similar individuals.  ...
arXiv:2205.09927v1 fatcat:vqouwfxzfjcujdzdkx76h63uva
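A fairness loss "for similar individuals" generally penalizes output gaps between inputs that differ only in ways the fairness notion deems irrelevant; a minimal sketch (a hypothetical form, not CertiFair's exact loss):

```python
import torch

def individual_fairness_loss(model, x, x_similar, eps=0.05):
    """Penalize prediction gaps between each input and a 'similar' counterpart
    (e.g., the same features with the protected attribute flipped) beyond eps."""
    gap = (torch.sigmoid(model(x)) - torch.sigmoid(model(x_similar))).abs()
    return torch.clamp(gap - eps, min=0).mean()
```

Adding such a term to the training loss is what the snippet means by training "with a modified loss"; the verifier then certifies whether the property holds globally.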

Tilted Empirical Risk Minimization [article]

Tian Li, Ahmad Beirami, Maziar Sanjabi, Virginia Smith
2021 arXiv   pre-print
We provide several interpretations of the resulting framework: we show that TERM can increase or decrease the influence of outliers to enable fairness or robustness, respectively; has variance-reduction  ...  Finally, we demonstrate that TERM can be used for a multitude of applications, such as enforcing fairness between subgroups, mitigating the effect of outliers, and handling class imbalance.  ...  Acknowledgements: We are grateful to Arun Sai Suggala and Adarsh Prasad (CMU) for their helpful comments on robust regression; to Zhiguang Wang, Dario Garcia Garcia, Alborz Geramifard, and other members  ...
arXiv:2007.01162v2 fatcat:bkwwhpdesvdabiebaydgh3r4ry
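The tilted objective itself is compact enough to state: TERM replaces the empirical mean of losses with a log-mean-exp tilted by t, emphasizing high losses for t > 0 (fairness across examples) and de-emphasizing them for t < 0 (robustness to outliers). A sketch matching the published formula:

```python
import torch

def tilted_loss(losses, t):
    """t-tilted empirical risk: (1/t) * log(mean(exp(t * losses))).
    t -> 0 recovers the mean; t > 0 stresses high losses; t < 0 downweights them."""
    if t == 0:
        return losses.mean()
    # log(mean(exp(x))) = logsumexp(x) - log(n), computed stably
    n = torch.tensor(float(losses.numel()))
    return (torch.logsumexp(t * losses, dim=0) - torch.log(n)) / t

# Example: an outlier loss of 5.0 dominates for t = 2 but is muted for t = -2.
losses = torch.tensor([0.1, 0.2, 5.0])
print(tilted_loss(losses, 2.0), tilted_loss(losses, -2.0))
```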
Showing results 1–15 of 39,829