967 Hits in 4.6 sec

Training individually fair ML models with Sensitive Subspace Robustness [article]

Mikhail Yurochkin, Amanda Bower, Yuekai Sun
2020 arXiv   pre-print
We consider training machine learning models that are fair in the sense that their performance is invariant under certain sensitive perturbations to the inputs.  ...  We formalize this notion of algorithmic fairness as a variant of individual fairness and develop a distributionally robust optimization approach to enforce it during training.  ...  Second, DRF considers differences  ...  FAIR TRAINING WITH SENSITIVE SUBSPACE ROBUSTNESS: We cast the fair training problem as training supervised learning systems that are robust to sensitive perturbations  ... 
arXiv:1907.00020v2 fatcat:xiufezrkuzhopnmshkcyjsm56m
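To make the distributionally robust training idea in this entry concrete, here is a minimal numpy sketch of training a classifier against worst-case perturbations confined to a sensitive subspace. Everything here is an illustrative assumption (the synthetic data, the subspace matrix A, the single sign-gradient ascent step); it is not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

A = np.eye(d)[:, :1]               # assumed "sensitive subspace": the first coordinate
eps, lr, steps = 0.5, 0.1, 200     # perturbation budget, learning rate, iterations
w = np.zeros(d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(steps):
    # inner maximization (one crude ascent step): move each point within span(A)
    # in the direction that increases its logistic loss
    p = sigmoid(X @ w)
    g_x = np.outer(p - y, w)           # d(loss_i)/d(x_i) for the logistic loss
    delta = eps * np.sign(g_x @ A) @ A.T
    X_adv = X + delta
    # outer minimization: ordinary gradient step on the perturbed batch
    p_adv = sigmoid(X_adv @ w)
    w -= lr * (X_adv.T @ (p_adv - y)) / n
```

The point of the construction is that accuracy is only required to be robust to movement inside the sensitive subspace, not to arbitrary adversarial noise.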

SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness [article]

Mikhail Yurochkin, Yuekai Sun
2021 arXiv   pre-print
Our theoretical results guarantee the proposed approach trains certifiably fair ML models.  ...  In this paper, we cast fair machine learning as invariant machine learning. We first formulate a version of individual fairness that enforces invariance on certain sensitive sets.  ...  Training individually fair ML models with sensitive subspace robustness. In International Conference on Learning Representations, Addis Ababa, Ethiopia, 2020.  ... 
arXiv:2006.14168v2 fatcat:2exdr4hhifbmnmpfulua2tdw6q
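Read literally, "invariance on certain sensitive sets" suggests a regularized objective of roughly the following shape; the notation (sensitive set S(x), output discrepancy d_Y, weight ρ) is mine and only meant to convey the idea, not the paper's exact formulation:

$$\min_{\theta}\;\; \mathbb{E}\big[\ell\big(f_\theta(x),\,y\big)\big] \;+\; \rho\,\mathbb{E}\Big[\sup_{x' \in S(x)} d_Y\big(f_\theta(x),\,f_\theta(x')\big)\Big],$$

i.e. the model is penalized whenever some point in the sensitive set of x receives a noticeably different output than x itself.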

Individually Fair Gradient Boosting [article]

Alexander Vargo, Fan Zhang, Mikhail Yurochkin, Yuekai Sun
2021 arXiv   pre-print
Unlike prior approaches to individual fairness that only work with smooth ML models, our approach also works with non-smooth models such as decision trees.  ...  At a high level, our approach is a functional gradient descent on a (distributionally) robust loss function that encodes our intuition of algorithmic fairness for the ML task at hand.  ...  We show that the method converges globally and leads to ML models that are individually fair. We also show that it is possible to certify the individual fairness of the models a posteriori.  ... 
arXiv:2103.16785v1 fatcat:brvulcg5hzcqjpuioo423ple3u
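As a sketch of what "functional gradient descent on a robust loss" can look like with trees, the toy loop below boosts against whichever of a point or its assumed "comparable" counterpart currently incurs the higher loss. The data, the single flipped-feature pairing, and the tree settings are all placeholders, not the paper's method.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n, d = 300, 4
X = rng.normal(size=(n, d))
y = (X[:, 1] + X[:, 2] > 0).astype(float)
X_cmp = X.copy()
X_cmp[:, 0] *= -1                     # "comparable" points: flip an assumed sensitive feature

F, F_cmp = np.zeros(n), np.zeros(n)   # ensemble scores on X and on the comparable points
lr, trees = 0.1, []

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logloss(scores, y):
    p = np.clip(sigmoid(scores), 1e-12, 1 - 1e-12)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

for _ in range(50):
    # robust loss: at each point, boost against whichever of (x, x_cmp) is currently worse
    use_cmp = logloss(F_cmp, y) > logloss(F, y)
    X_worst = np.where(use_cmp[:, None], X_cmp, X)
    F_worst = np.where(use_cmp, F_cmp, F)
    residual = y - sigmoid(F_worst)   # negative functional gradient of the log loss
    tree = DecisionTreeRegressor(max_depth=3).fit(X_worst, residual)
    trees.append(tree)
    F += lr * tree.predict(X)
    F_cmp += lr * tree.predict(X_cmp)
```

Because each stage only needs pointwise gradients and a regression tree, nothing in the recipe requires the model itself to be smooth, which is the property the abstract highlights.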

Two Simple Ways to Learn Individual Fairness Metrics from Data [article]

Debarghya Mukherjee, Mikhail Yurochkin, Moulinath Banerjee, Yuekai Sun
2020 arXiv   pre-print
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.  ...  Despite its benefits, it depends on a task-specific fair metric that encodes our intuition of what is fair and unfair for the ML task at hand, and the lack of a widely accepted fair metric for many ML  ...  Training individually fair ML models with sensitive subspace robustness. In International Conference on Learning Representations, Addis Ababa, Ethiopia, 2020.  ... 
arXiv:2006.11439v1 fatcat:kp5factjafa3xoi4ie6hkw3h6q
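One intuitive recipe for learning a fair metric from data (again a sketch, not necessarily either of the paper's two methods): estimate a direction along which a sensitive attribute varies, then use a Mahalanobis-style metric that ignores movement along that direction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n, d = 500, 6
X = rng.normal(size=(n, d))
s = (X[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)  # synthetic sensitive attribute

v = LogisticRegression().fit(X, s).coef_.ravel()
v /= np.linalg.norm(v)                # estimated unit "sensitive direction"
Sigma = np.eye(d) - np.outer(v, v)    # metric matrix that projects that direction out

def fair_dist(x1, x2, Sigma=Sigma):
    diff = x1 - x2
    return float(np.sqrt(max(diff @ Sigma @ diff, 0.0)))

x = X[0]
x_flipped = x - 2 * (x @ v) * v       # reflect x across the sensitive direction
print(fair_dist(x, x_flipped))        # ~0: the metric treats the pair as comparable
```

A metric learned this way can then be plugged into the fair-training procedures cited in the snippet.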

Towards A Holistic View of Bias in Machine Learning: Bridging Algorithmic Fairness and Imbalanced Learning [article]

Damien Dablain, Bartosz Krawczyk, Nitesh Chawla
2022 arXiv   pre-print
increases in both model accuracy and fairness.  ...  ML models inform decisions in criminal justice, the extension of credit in banking, and the hiring practices of corporations.  ...  SSR induces individual fairness based on sensitive perturbations of inputs. It casts fairness in the form of robustness to sensitive perturbations of the training data.  ... 
arXiv:2207.06084v1 fatcat:wyl2wkdmmbfwjgqfgq2okbpbsu

Does enforcing fairness mitigate biases caused by subpopulation shift? [article]

Subha Maity, Debarghya Mukherjee, Mikhail Yurochkin, Yuekai Sun
2021 arXiv   pre-print
For example, ML models often perform worse on demographic groups that are underrepresented in the training data.  ...  In this paper, we study whether enforcing algorithmic fairness during training improves the performance of the trained model in the target domain.  ...  CRP imposes a similar condition on the risk of the ML model; i.e. the risk of the ML model must be independent of the sensitive attribute conditioned on the discriminative attribute (with label as a special  ... 
arXiv:2011.03173v2 fatcat:xvunyynbajc3tarmb2eyowfbmu
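The conditional risk parity (CRP) condition quoted at the end of the snippet can be written compactly. In notation I am introducing here, with loss ℓ, sensitive attribute A, and discriminative attribute D:

$$\mathbb{E}\big[\ell\big(f(X),\,Y\big)\,\big|\,A,\,D\big] \;=\; \mathbb{E}\big[\ell\big(f(X),\,Y\big)\,\big|\,D\big] \quad \text{almost surely},$$

i.e. once the discriminative attribute is fixed, the model's risk carries no information about the sensitive attribute.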

High-accuracy classification of attention deficit hyperactivity disorder with l2,1-norm linear discriminant analysis and binary hypothesis testing

Yibin Tang, Xufei Li, Ying Chen, Yuan Zhong, Aimin Jiang, Chun Wang
2020 IEEE Access  
The FCs of test data (without seeing their labels) are used for training and thus affect the subspace learning of training data under binary hypotheses.  ...  On the other hand, the l2,1-norm LDA model generates a subspace to represent ADHD features, aiming to overcome noise disturbance.  ...  Unfortunately, the above l2-norm LDA model is sensitive to outliers.  ... 
doi:10.1109/access.2020.2982401 fatcat:gnbejssuzrbnfpgkn7j3pglvcq
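For orientation, a common generic form of an l2,1-norm LDA criterion (paraphrased; not necessarily the exact objective of this paper) replaces squared scatter terms with sums of unsquared projected distances, which is what blunts the influence of outliers:

$$\min_{W^\top W = I}\;\; \sum_{i=1}^{n}\big\|W^\top(x_i - \mu_{y_i})\big\|_2 \;-\; \lambda \sum_{c} n_c\,\big\|W^\top(\mu_c - \mu)\big\|_2,$$

where μ_{y_i} is the mean of sample i's class, μ_c and n_c are the mean and size of class c, μ is the global mean, and the columns of W span the learned subspace.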

Secure and Robust Machine Learning for Healthcare: A Survey

Adnan Qayyum, Junaid Qadir, Muhammad Bilal, Ala Al-Fuqaha
2020 IEEE Reviews in Biomedical Engineering  
Notwithstanding the impressive performance of ML/DL, there are still lingering doubts regarding the robustness of ML/DL in healthcare settings (which is traditionally considered quite challenging due to  ...  In addition, we present potential methods to ensure secure and privacy-preserving ML for healthcare applications.  ...  Fair and Accountable ML: The literature on analyzing the security and robustness of ML/DL approaches reveals that the outcomes of these models lack fairness and accountability [163].  ... 
doi:10.1109/rbme.2020.3013489 pmid:32746371 fatcat:wd2flezcjng4jjsn46t24c5yb4

Secure and Robust Machine Learning for Healthcare: A Survey [article]

Adnan Qayyum, Junaid Qadir, Muhammad Bilal, Ala Al-Fuqaha
2020 arXiv   pre-print
Notwithstanding the impressive performance of ML/DL, there are still lingering doubts regarding the robustness of ML/DL in healthcare settings (which is traditionally considered quite challenging due to  ...  In addition, we present potential methods to ensure secure and privacy-preserving ML for healthcare applications.  ...  Fair and Accountable ML: The literature on analyzing the security and robustness of ML/DL approaches reveals that the outcomes of these models lack fairness and accountability [140].  ... 
arXiv:2001.08103v1 fatcat:u6obszbeajcp5asciz5z5unmlq

On the Global Optima of Kernelized Adversarial Representation Learning

Bashir Sadeghi, Runyi Yu, Vishnu Boddeti
2019 2019 IEEE/CVF International Conference on Computer Vision (ICCV)  
Adversarial representation learning is a promising paradigm for obtaining data representations that are invariant to certain sensitive attributes while retaining the information necessary for predicting  ...  Numerical experiments on UCI, Extended Yale B and CIFAR-100 datasets indicate that (a) practically, our solution is ideal for "imparting" provable invariance to any biased pre-trained data representation  ...  The target is to classify the credit of individuals as good or bad, with the sensitive attribute being age.  ... 
doi:10.1109/iccv.2019.00806 dblp:conf/iccv/SadeghiYB19 fatcat:3o6rdzwr3nevzefkdotkkrkpp4
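Adversarial representation learning of the kind described here is usually posed as a min-max problem of roughly this shape (generic notation of mine; the paper's contribution is characterizing the global optima of a kernelized/linear instance):

$$\min_{E,\,g}\;\max_{h}\;\; \mathbb{E}\big[\ell_{\text{task}}\big(g(E(x)),\,y\big)\big] \;-\; \lambda\,\mathbb{E}\big[\ell_{\text{adv}}\big(h(E(x)),\,s\big)\big],$$

where E is the encoder, g predicts the target y from the representation, h is an adversary trying to recover the sensitive attribute s, and λ trades prediction accuracy against invariance.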

Auditing ML Models for Individual Bias and Unfairness [article]

Songkai Xue, Mikhail Yurochkin, Yuekai Sun
2020 arXiv   pre-print
We consider the task of auditing ML models for individual bias/unfairness. We formalize the task in an optimization problem and develop a suite of inferential tools for the optimal value.  ...  Training individually fair ML models with sensitive subspace robustness. In International Conference on Learning Representations, Addis Ababa, Ethiopia, 2020.  ...  Combining FaiTH with prior ideas used for group fairness may lay out a path for training ML systems with strong guarantees for both individual and group fairness.  ... 
arXiv:2003.05048v1 fatcat:5zj3bbbot5e4rl45nv6ro7z6eq
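One way to picture the audit problem formalized in this entry (a schematic of mine, not the paper's exact statistic): compare a model's empirical loss with its worst-case loss when each test point may be swapped for a "comparable" point at small fair-metric cost,

$$\hat{T} \;=\; \sup_{\frac{1}{n}\sum_i d_{\text{fair}}(x_i,\,x_i') \,\le\, \epsilon}\;\; \frac{1}{n}\sum_{i=1}^{n}\Big[\ell\big(f(x_i'),\,y_i\big) \;-\; \ell\big(f(x_i),\,y_i\big)\Big],$$

so a large value of $\hat{T}$ is evidence of individual bias, and the inferential tools mentioned in the abstract would attach uncertainty estimates to it.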

Multi-View Learning-Based Data Proliferator for Boosting Classification Using Highly Imbalanced Classes

Olfa Graa, Islem Rekik
2019 Journal of Neuroscience Methods  
However, the performance of machine learning methods depends heavily on the size of the training samples from both classes.  ...  For this reason, we compared MV-LEAP with methods that use the dimension reduction of PCA to have a fair comparison.  ...  +SVM), (c) ADASYN with SVM (ADASYN+SVM), and (d) ADASYN based on manifold learning (ADASYN+ML+SVM).  ... 
doi:10.1016/j.jneumeth.2019.108344 pmid:31421161 fatcat:nmwm7qva5bexzfgldzst3f77zq
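The ADASYN+SVM baseline named in the snippet can be reproduced in a few lines with imbalanced-learn and scikit-learn; the synthetic data below is a stand-in, and this is the comparison baseline, not the MV-LEAP method itself.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from imblearn.over_sampling import ADASYN

# Imbalanced two-class toy problem (roughly 9:1)
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# ADASYN synthesizes extra minority-class samples before fitting the SVM
X_bal, y_bal = ADASYN(random_state=0).fit_resample(X_tr, y_tr)
clf = SVC(kernel="rbf").fit(X_bal, y_bal)
print("held-out accuracy:", clf.score(X_te, y_te))
```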

Anatomizing Bias in Facial Analysis

Richa Singh, Puspita Majumdar, Surbhi Mittal, Mayank Vatsa
2022 Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22)  
Due to its impact on society, it has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals.  ...  Our main contributions include a systematic review of algorithms proposed for understanding bias, along with a taxonomy and extensive overview of existing bias mitigation algorithms.  ...  A decorrelation loss is proposed to align the overall information into each subspace, instead of removing the information of sensitive attributes.  ... 
doi:10.1609/aaai.v36i11.21500 fatcat:lbuwkwaganfkxlzy55ir5lyngi
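A "decorrelation loss" between two feature subspaces, in the spirit of the sentence above, can be as simple as penalizing the cross-covariance between the two branches; this is an illustrative penalty, not the paper's exact loss.

```python
import numpy as np

def decorrelation_loss(Z_a, Z_b):
    """Squared Frobenius norm of the cross-covariance between two batches of
    features (rows = samples); driving it to zero makes the two subspaces
    linearly uncorrelated."""
    Za = Z_a - Z_a.mean(axis=0, keepdims=True)
    Zb = Z_b - Z_b.mean(axis=0, keepdims=True)
    C = Za.T @ Zb / (len(Z_a) - 1)
    return float(np.sum(C ** 2))

rng = np.random.default_rng(3)
Z_attr = rng.normal(size=(64, 8))                        # e.g. sensitive-attribute features
Z_id = 0.5 * Z_attr[:, :4] + rng.normal(size=(64, 4))    # identity features, partly correlated
print(decorrelation_loss(Z_attr, Z_id))                  # > 0: the branches share information
```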

Towards the Science of Security and Privacy in Machine Learning [article]

Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, Michael Wellman
2016 arXiv   pre-print
Key insights resulting from works in both the ML and security communities are identified, and the effectiveness of approaches is related to structural elements of ML algorithms and the data used to train  ...  We articulate a comprehensive threat model for ML, and categorize attacks and defenses within an adversarial framework.  ...  showed that fairness could be achieved by learning in competition with an adversary trying to predict the sensitive variable from the fair model's prediction [110].  ... 
arXiv:1611.03814v1 fatcat:wg3hla7vpnedpjx6rwnvjodsba
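The adversarial fairness idea quoted at the end of this snippet differs from adversarial representation learning in that the adversary sees the model's prediction rather than an intermediate representation; schematically (notation mine, not the surveyed papers' exact formulation):

$$\min_{\theta}\;\max_{\phi}\;\; \mathbb{E}\big[\ell\big(f_\theta(x),\,y\big)\big] \;-\; \lambda\,\mathbb{E}\big[\ell_{\text{adv}}\big(a_\phi(f_\theta(x)),\,s\big)\big],$$

so the predictor is rewarded when the adversary $a_\phi$ cannot recover the sensitive variable $s$ from its outputs.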

SoK: Security and Privacy in Machine Learning

Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, Michael P. Wellman
2018 2018 IEEE European Symposium on Security and Privacy (EuroS&P)  
Key insights resulting from works in both the ML and security communities are identified, and the effectiveness of approaches is related to structural elements of ML algorithms and the data used to train  ...  We articulate a comprehensive threat model for ML, and categorize attacks and defenses within an adversarial framework.  ...  showed that fairness could be achieved by learning in competition with an adversary trying to predict the sensitive variable from the fair model's prediction [110].  ... 
doi:10.1109/eurosp.2018.00035 dblp:conf/eurosp/PapernotMSW18 fatcat:zt6iwhj5vbdfhh4c3kvwwkupv4
Showing results 1–15 out of 967 results