112,299 hits in 5.1 sec

Robust Fairness-aware Learning Under Sample Selection Bias [article]

Wei Du, Xintao Wu
2021 arXiv   pre-print
However, how to achieve fairness in the learned classification models is under-explored. In this paper, we propose a framework for robust and fair learning under sample selection bias.  ...  However, this assumption is often violated in the real world due to sample selection bias between the training and test data.  ...  This is a serious concern in the many applications where achieving fairness is critical and imperative. In this paper, we develop a framework for robust and fair learning under sample selection bias.  ...
arXiv:2105.11570v1 fatcat:d6hpr2dck5cyha7pmngifgh4re

Enhancing Model Robustness and Fairness with Causality: A Regularization Approach [article]

Zhao Wang, Kai Shu, Aron Culotta
2021 arXiv   pre-print
In this paper, we propose a simple and intuitive regularization approach to integrate causal knowledge during model training and build a robust and fair model by emphasizing causal features and de-emphasizing  ...  We conduct experiments to evaluate model robustness and fairness on three datasets with multiple metrics.  ...  We would also like to thank the anonymous reviewers for useful feedback.  ...
arXiv:2110.00911v1 fatcat:fqtfewtkwbg6lpanonxng3nubm
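
The snippet above does not spell out the regularizer. A minimal sketch of the general idea, assuming a plain logistic-regression model and a hypothetical list spurious_idx of feature indices flagged as non-causal (neither is taken from the paper):

    import numpy as np

    def train_causally_regularized(X, y, spurious_idx, lam=1.0, lr=0.1, steps=500):
        # Logistic regression with an extra L2 penalty on the weights of
        # features flagged as spurious, nudging the model toward causal ones.
        n, d = X.shape
        w = np.zeros(d)
        mask = np.zeros(d)
        mask[spurious_idx] = 1.0              # 1 where a feature is spurious
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-X @ w))  # sigmoid predictions
            grad = X.T @ (p - y) / n          # logistic-loss gradient
            grad += lam * mask * w            # penalize spurious weights only
            w -= lr * grad
        return w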

On Adversarial Bias and the Robustness of Fair Machine Learning [article]

Hongyan Chang, Ta Duy Nguyen, Sasi Kumar Murakonda, Ehsan Kazemi, Reza Shokri
2020 arXiv   pre-print
Adversarial sampling and adversarial labeling attacks can also worsen the model's fairness gap on test data, even though the model satisfies the fairness constraint on the training data.  ...  An adversary who can control sampling or labeling for a fraction of the training data can reduce test accuracy significantly beyond what is achievable against unconstrained models.  ...  For the fair models trained on the COMPAS dataset, we evaluate on a poisoning dataset selected using Algorithm 2 with λ = 100 and λ = .  ...
arXiv:2006.08669v1 fatcat:4pr2fyd4nbdonkxp4kupdwxouy

Self-Paced Deep Regression Forests with Consideration on Ranking Fairness [article]

Lili Pan, Mingming Meng, Yazhou Ren, Yali Zheng, Zenglin Xu
2022 arXiv   pre-print
This tackles the fundamental ranking and selection problem in SPL from a new perspective: fairness. Our idea is fundamental and can easily be combined with a variety of DDMs.  ...  Then, a natural question arises: can SPL lead DDMs to more robust and less biased solutions?  ...  This work is partially supported by the National Key R&D Program of China AI2021ZD0112000, the National Natural Science Foundation of China Nos. 62171111, 61806043, 61971106 and 61872068, and the Special Science  ...
arXiv:2112.06455v7 fatcat:jhu37kbynjgylm5mlwm5hld6yi

Fairness Testing of Deep Image Classification with Adequacy Metrics [article]

Peixin Zhang, Jingyi Wang, Jun Sun, Xinyu Wang
2021 arXiv   pre-print
domain sampling at the semantic level for image classification applications; 2) functionality, i.e., they generate unfair samples without providing a testing criterion to characterize the model's fairness  ...  2) a set of multi-granularity adequacy metrics to evaluate the model's fairness; 3) a test selection algorithm for fixing fairness issues efficiently.  ...  The training images are generated by the gradient-based method from the original training data, and the validation set is composed of 5,000 unfair samples, each selected from the original testing set  ...
arXiv:2111.08856v2 fatcat:4pkjv7smwnaetcvquwbx2mo3lu

Dawn of the transformer era in speech emotion recognition: closing the valence gap [article]

Johannes Wagner, Andreas Triantafyllopoulos, Hagen Wierstorf, Maximilian Schmitt, Felix Burkhardt, Florian Eyben, Björn W. Schuller
2022 arXiv   pre-print
However, existing works have not evaluated the influence of model size and pre-training data on downstream performance, and have paid limited attention to generalisation, robustness, fairness, and efficiency  ...  Furthermore, our investigations reveal that transformer-based architectures are more robust to small perturbations compared to a CNN-based baseline and fair with respect to biological sex groups, but not  ...  We name it the sex fairness score, which can be formulated as $\text{Sex fairness score} = \mathrm{CCC}_{\mathrm{female}} - \mathrm{CCC}_{\mathrm{male}}$ (1), where $\mathrm{CCC}_{\mathrm{female}}$ is the CCC over all female samples and $\mathrm{CCC}_{\mathrm{male}}$ the CCC over all male samples  ...
arXiv:2203.07378v2 fatcat:xpitp6wsa5c2rcbgicyakblpqi
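
For reference, equation (1) above could be computed as follows, assuming Lin's concordance correlation coefficient (CCC) and hypothetical per-group arrays of labels and predictions:

    import numpy as np

    def ccc(y_true, y_pred):
        # Lin's concordance correlation coefficient.
        mu_t, mu_p = y_true.mean(), y_pred.mean()
        cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
        return 2 * cov / (y_true.var() + y_pred.var() + (mu_t - mu_p) ** 2)

    def sex_fairness_score(yt_f, yp_f, yt_m, yp_m):
        # Equation (1): CCC on female samples minus CCC on male samples.
        return ccc(yt_f, yp_f) - ccc(yt_m, yp_m)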

Fair Regression under Sample Selection Bias [article]

Wei Du, Xintao Wu, Hanghang Tong
2021 arXiv   pre-print
In this paper, we develop a framework for fair regression under sample selection bias, where the dependent-variable values of a set of training samples are missing as a result of another hidden process.  ...  This assumption is often violated in the real world due to sample selection bias between the training and testing data.  ...  Acknowledgments and Disclosure of Funding: This work was supported in part by NSF 1920920, 1939725, 1946391 and 2137335.  ...
arXiv:2110.04372v1 fatcat:x5itp5f26zb3jhngrrev4g5bfq
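
The setting described, where a hidden process decides which training samples have observed labels, matches the classic sample-selection formulation; a sketch of that standard model (not necessarily the authors' exact formulation), in LaTeX:

    y_i = x_i^\top \beta + \varepsilon_i,
        \quad \text{observed only if } s_i = 1, \\
    s_i = \mathbb{1}\!\left[ z_i^\top \gamma + u_i > 0 \right]
        \quad \text{(hidden selection process)},

with correlation between \varepsilon_i and u_i inducing the bias that a fair regression method must correct for.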

Improving Robustness and Efficiency in Active Learning with Contrastive Loss [article]

Ranganath Krishnan, Nilesh Ahuja, Alok Sinha, Mahesh Subedar, Omesh Tickoo, Ravi Iyer
2021 arXiv   pre-print
We propose efficient query strategies in active learning to select unbiased and informative data samples with diverse feature representations.  ...  We also evaluate robustness to dataset shift and out-of-distribution data in the active learning setup, and demonstrate that our proposed SCAL method outperforms high-performing, compute-intensive methods by a bigger  ...  Sampling bias in deep neural network training can cause undesired behavior with respect to fairness, robustness and trustworthiness when models are deployed in real-world situations (Buolamwini and Gebru 2018; Bhatt  ...
arXiv:2109.06873v1 fatcat:dqkojoeibbaodcvzm42nlprofe

Poisoning Attacks on Fair Machine Learning [article]

Minh-Hao Van, Wei Du, Xintao Wu, Aidong Lu
2021 arXiv   pre-print
All three attacks effectively and efficiently produce poisoning samples via sampling, labeling, or modifying a fraction of the training data in order to reduce test accuracy.  ...  Our attacking framework can target fair machine learning models trained with a variety of group-based fairness notions such as demographic parity and equalized odds.  ...  Acknowledgments: This work was supported in part by NSF 1564250, 1564039, 1946391, and 2137335.  ...
arXiv:2110.08932v1 fatcat:wfgxgppsyrfbxgizqic6rvfvp4

Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems [article]

Gaurav Kumar Nayak, Ruchit Rawal, Rohit Lal, Himanshu Patil, Anirban Chakraborty
2022 arXiv   pre-print
We observe that the student network trained with the subset of samples selected using our combined metric performs better than both competing baselines, viz., where samples are selected randomly or  ...  We therefore propose a holistic approach to quantifying the adversarial vulnerability of a sample by combining these different perspectives, i.e., the degree of the model's reliance on high-frequency features and  ...  120, 150) for the model trained via our Trust Score based sample-selection strategy.  ...
arXiv:2205.02604v1 fatcat:b2cgqowdgjadde3reddmjcjkh4

Bandit-based Communication-Efficient Client Selection Strategies for Federated Learning [article]

Yae Jee Cho, Samarth Gupta, Gauri Joshi, Osman Yağan
2020 arXiv   pre-print
We also demonstrate how client selection can be used to improve fairness.  ...  Due to communication constraints and intermittent client availability in federated learning, only a subset of clients can participate in each training round.  ...  While π_{pow-d} is able to achieve higher fairness than π_{ucb-cs}, π_{ucb-cs} shows a significant improvement in fairness even with low communication cost and robustness to the error floor in the training  ...
arXiv:2012.08009v1 fatcat:7ula3fgkujdstgcta3gaqk625a
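
A minimal sketch of the UCB-style selection rule such bandit policies build on, offered as a generic illustration rather than the paper's exact π_{ucb-cs}:

    import math

    def select_client(rewards, counts, t, c=2.0):
        # Pick the client with the highest upper confidence bound on its
        # average observed reward (e.g., local loss reduction) at round t.
        best, best_score = 0, -float("inf")
        for k, (r, n) in enumerate(zip(rewards, counts)):
            if n == 0:
                return k                  # explore unseen clients first
            score = r / n + math.sqrt(c * math.log(t) / n)
            if score > best_score:
                best, best_score = k, score
        return best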

Mitigating Sampling Bias and Improving Robustness in Active Learning [article]

Ranganath Krishnan, Alok Sinha, Nilesh Ahuja, Mahesh Subedar, Omesh Tickoo, Ravi Iyer
2021 arXiv   pre-print
This paper presents simple and efficient methods to mitigate sampling bias in active learning while achieving state-of-the-art accuracy and model robustness.  ...  We propose an unbiased query strategy that selects informative data samples of diverse feature representations with our methods: supervised contrastive active learning (SCAL) and deep feature modeling  ...  Buolamwini & Gebru, 2018; Bhatt et al., 2021) with respect to fairness, robustness and trustworthiness.  ... 
arXiv:2109.06321v1 fatcat:3e4stqr6breqhpucen56kl2pb4
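
The snippets do not detail the query strategy; a generic sketch of picking informative yet diverse samples (an uncertainty shortlist followed by a greedy farthest-point pass over feature embeddings), offered as an illustration rather than the SCAL/DFM algorithms themselves:

    import numpy as np

    def query(features, uncertainty, budget, pool_factor=5):
        # Shortlist the most uncertain pool samples, then greedily pick a
        # subset that is spread out in feature space.
        shortlist = np.argsort(-uncertainty)[: budget * pool_factor]
        chosen = [shortlist[0]]
        for _ in range(budget - 1):
            dists = np.min(
                [np.linalg.norm(features[shortlist] - features[c], axis=1)
                 for c in chosen], axis=0)
            chosen.append(shortlist[int(np.argmax(dists))])
        return chosen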

Could situational judgement tests be used for selection into dental foundation training?

F. Patterson, V. Ashworth, S. Mehra, H. Falcon
2012 British Dental Journal  
In this respect it will be even more important to ensure there is a fair, robust and objective scoring system that allows ranking of candidates, in order to comply with a meritorious system of selection and allocation  ...  Most candidates perceived the situational judgement test as relevant to dentistry, appropriate for their training level, and fair.  ...  Patterson. Authorship and contributorship: F. Patterson conceived of the original study and design, analysed and interpreted the data, and wrote the paper. V.  ...
doi:10.1038/sj.bdj.2012.560 pmid:22790752 fatcat:52mkazksknhdpkvmx6ehwkgqbi

Adversarial and Natural Perturbations for General Robustness [article]

Sadaf Gulshad, Jan Hendrik Metzen, Arnold Smeulders
2020 arXiv   pre-print
, it leads to a drop in performance on naturally perturbed samples as well as clean samples.  ...  Different from previous works, which mainly focus on studying the robustness of neural networks against adversarial perturbations, we also evaluate their robustness to natural perturbations before and after  ...  the wave distortion is also scaled uniformly at random for each image; similarly, the saturation factor is selected uniformly, and finally the intensity of the Gaussian blur is sampled uniformly at random for each image  ...
arXiv:2010.01401v1 fatcat:jqyluwl5ezgdjndozaq6szulv4
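
A minimal sketch of sampling such natural perturbations with torchvision; the parameter ranges here are made-up assumptions, not the paper's settings, and the wave distortion is omitted since it is not a stock transform:

    import torchvision.transforms as T

    # Each transform draws its intensity uniformly at random per image,
    # mirroring the uniform sampling described above.
    natural_perturb = T.Compose([
        T.ColorJitter(saturation=(0.5, 1.5)),             # saturation factor
        T.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),  # blur intensity
    ])
    # perturbed = natural_perturb(img)   # img: PIL image or tensor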

Self-Paced Deep Regression Forests with Consideration on Underrepresented Examples [article]

Lili Pan, Shijie Ai, Yazhou Ren, Zenglin Xu
2020 arXiv   pre-print
It tackles the fundamental ranking and selection problem in SPL from a new perspective: fairness.  ...  Most existing methods pursue robust and unbiased solutions either by learning discriminative features or by reweighting samples.  ...  In the first pace, the 50% of samples that are easy or underrepresented were selected for training.  ...
arXiv:2004.01459v4 fatcat:qawk4t3l35fenb5wesfjg4v4h4
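
A minimal sketch of the standard self-paced selection step this family of methods builds on (hard-weight SPL with a growing pace threshold; the fairness-aware weighting proposed in the papers above is not reproduced here):

    import numpy as np

    def self_paced_weights(losses, lam):
        # Classic hard SPL: a sample is admitted (weight 1) only if its
        # current loss is below the pace parameter lam.
        return (losses < lam).astype(float)

    losses = np.array([0.2, 1.5, 0.7, 3.0])   # toy per-sample losses
    lam = np.quantile(losses, 0.5)            # admit the easiest ~50% first
    v = self_paced_weights(losses, lam)       # raise lam in later paces
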
Showing results 1–15 of 112,299