68,176 Hits in 9.4 sec

Two Simple Ways to Learn Individual Fairness Metrics from Data [article]

Debarghya Mukherjee, Mikhail Yurochkin, Moulinath Banerjee, Yuekai Sun
2020 arXiv   pre-print
In this paper, we present two simple ways to learn fair metrics from a variety of data types.  ...  We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.  ...  Table 3 lists code names for the association tests (e.g., FLvINS: Flowers vs. insects).  ...
arXiv:2006.11439v1 fatcat:kp5factjafa3xoi4ie6hkw3h6q
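
For context on what a learned "fair metric" can look like, a common parametric choice in this line of work is a Mahalanobis-type distance whose matrix is estimated from data; the form below is an illustrative sketch rather than a statement of this paper's exact construction:

    d_{\Sigma}(x, x') = \sqrt{(x - x')^{\top} \Sigma \, (x - x')}, \qquad \Sigma \succeq 0,

where directions of the feature space associated with sensitive information (e.g., gender or race) receive little or no weight in \Sigma, so individuals who differ mainly along those directions count as close.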

The Frontiers of Fairness in Machine Learning [article]

Alexandra Chouldechova, Aaron Roth
2018 arXiv   pre-print
Despite this interest and the volume and velocity of work that has been produced recently, the fundamental science of fairness in machine learning is still in a nascent state.  ...  Along the way, it surveys recent theoretical work in the field and points towards promising directions for research.  ...  We are grateful to Helen Wright and Ann Drobnis, who are instrumental in making the workshop happen.  ... 
arXiv:1810.08810v1 fatcat:nf7ddavgsfavxbe5nqgkkxftiu

Towards a Measure of Individual Fairness for Deep Learning [article]

Krystal Maughan, Joseph P. Near
2020 arXiv   pre-print
We propose a novel measure of individual fairness, called prediction sensitivity, that approximates the extent to which a particular prediction is dependent on a protected attribute.  ...  Deep learning has produced big advances in artificial intelligence, but trained neural networks often reflect and amplify bias in their training data, and thus produce unfair predictions.  ...  ACKNOWLEDGEMENTS We thank David Darais and Kristin Mills for their contributions to the development of this work, and the Mechanism Design for Social Good reviewers for their helpful comments.  ... 
arXiv:2009.13650v1 fatcat:lgn4x5agfjchfpnjqnkf7jnvmq
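
As a rough intuition for what a sensitivity-style measure can compute, one option is to ask how strongly the network output changes as the protected input is perturbed. The gradient-based sketch below is only an illustration under that assumption, with hypothetical names; it is not necessarily the construction proposed in the paper above.

    # Hedged sketch: approximate how much a single prediction depends on a
    # protected attribute via the magnitude of the output gradient with
    # respect to that input feature. Illustrative only.
    import torch

    def prediction_sensitivity(model, x, protected_idx):
        """Gradient magnitude of the model output w.r.t. the protected feature."""
        x = x.clone().detach().requires_grad_(True)
        output = model(x).sum()      # scalar output for a single example
        output.backward()
        return x.grad[protected_idx].abs().item()

    # Hypothetical usage with a toy network and one 4-feature example;
    # feature index 2 plays the role of the protected attribute.
    model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(),
                                torch.nn.Linear(8, 1), torch.nn.Sigmoid())
    x = torch.tensor([0.5, 1.0, 0.0, 0.3])
    print(prediction_sensitivity(model, x, protected_idx=2))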

Fairness Sample Complexity and the Case for Human Intervention [article]

Ananth Balashankar, Alyssa Lees
2019 arXiv   pre-print
We look at two commonly explored UCI datasets under this lens and suggest human interventions for data collection for specific subgroups to achieve approximate individual fairness for linear hypotheses  ...  In this paper, we present lower bounds on the subgroup sample complexity of metric-fair learning, based on the theory of Probably Approximately Metric Fair Learning.  ...  In order to overcome this tradeoff, we look to the rich literature on "individual fairness", which defines fairness with respect to a similarity metric between two individuals and enforces that individuals  ...
arXiv:1910.11452v1 fatcat:eiqyas7ilnfwnkrzqxa4lxj7mi

Developing a Philosophical Framework for Fair Machine Learning: The Case of Algorithmic Collusion and Market Fairness [article]

James Michelson
2022 arXiv   pre-print
This contribution ties the development of fairness metrics to specifically scoped normative principles. This enables fairness metrics to reflect different concerns from discrimination.  ...  Fair machine learning research has been primarily concerned with classification tasks that result in discrimination.  ...  Acknowledgements I would like to thank Sina Fazelpour, David Danks, Nil-Jana Akpinar, Annie Wang, and Giovanna Vitelli.  ... 
arXiv:2208.06308v1 fatcat:bqfyup5nhreslm7au7vljdk6t4

Learning Fair Representations

Richard S. Zemel, Yu Wu, Kevin Swersky, Toniann Pitassi, Cynthia Dwork
2013 International Conference on Machine Learning  
We formulate fairness as an optimization problem of finding a good representation of the data with two competing goals: to encode the data as well as possible, while simultaneously obfuscating any information  ...  ., transfer learning is possible); secondly, we take a step toward learning a distance metric which can find important dimensions of the data for classification.  ...  ., 2011) , aim to achieve the first goal, group fairness, by adapting standard learning approaches in novel ways, primarily through a form of fairness regularizer, or by re-labeling the training data to  ... 
dblp:conf/icml/ZemelWSPD13 fatcat:hk2sarwzjjcm5djg4s5u5so44m
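
The "two competing goals" in this abstract are typically combined into a single weighted objective for the learned representation. The decomposition below is a hedged sketch of such a trade-off (term names and weights are illustrative, not the paper's exact objective):

    L = A_x \, L_{\text{reconstruct}} + A_y \, L_{\text{predict}} + A_z \, L_{\text{parity}},

where L_reconstruct rewards encoding the data faithfully, L_predict keeps the representation useful for the target label, L_parity penalizes the representation for revealing protected-group membership, and the coefficients A_x, A_y, A_z set the balance between them.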

Probably Approximately Metric-Fair Learning [article]

Guy N. Rothblum, Gal Yona
2018 arXiv   pre-print
We show that this can lead to computational intractability even for simple fair-learning tasks.  ...  In the context of machine learning, however, individual fairness does not generalize from a training set to the underlying population.  ...  Indeed, we exhibit a simple learning task that, while easy to learn without fairness  ...
arXiv:1803.03242v2 fatcat:emljqsjst5cbnfajebukjdj3ci
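
The "probably approximately" relaxation referenced in the title means, roughly, that the metric-fairness constraint may be violated by a small margin on a small fraction of pairs. A hedged paraphrase of such an (alpha, gamma)-style condition, for a predictor h and similarity metric d over a distribution \mathcal{D} of individuals, is

    \Pr_{x,\, x' \sim \mathcal{D}} \big[\, |h(x) - h(x')| > d(x, x') + \gamma \,\big] \le \alpha,

which recovers exact metric fairness as \alpha and \gamma tend to zero; the precise definition should be taken from the paper itself.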

End-To-End Bias Mitigation: Removing Gender Bias in Deep Learning [article]

Tal Feldman, Ashley Peake
2021 arXiv   pre-print
We find that our end-to-end bias mitigation framework outperforms the baselines with respect to several fairness metrics, suggesting its promise as a method for improving fairness.  ...  To provide readers with the tools to assess the fairness of machine learning models and mitigate the biases present in them, we discuss multiple open source packages for fairness in AI.  ...  These are, in effect, "online" ways to improve fairness, producing nondiscriminatory results from biased data.  ... 
arXiv:2104.02532v3 fatcat:vlkprfcylzeu5j5lzmwmgyamny

Algorithmic fairness in computational medicine

Jie Xu, Yunyu Xiao, Wendy Hui Wang, Yue Ning, Elizabeth A. Shenkman, Jiang Bian, Fei Wang
2022 EBioMedicine  
Specifically, we overview the different types of algorithmic bias, fairness quantification metrics, and bias mitigation methods, and summarize popular software libraries and tools for bias evaluation and  ...  However, recent research has shown that machine learning techniques may result in potential biases when making decisions for people in different subgroups, which can lead to detrimental effects on the  ...  The concept of individual fairness potentially alleviates the issues of group fairness metrics by requiring that any two individuals who are similar with respect to a given task be classified similarly.  ...
doi:10.1016/j.ebiom.2022.104250 pmid:36084616 pmcid:PMC9463525 fatcat:uhr43hhdznd6tcy7ghludw44da
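
The individual-fairness notion invoked in the last snippet is usually formalized as a Lipschitz condition in the sense of Dwork et al.: for a classifier f, a task-specific similarity metric d_X on individuals, and a distance d_Y on output distributions,

    d_Y\big(f(x), f(x')\big) \le L \cdot d_X(x, x') \quad \text{for all individuals } x, x',

so that individuals who are similar for the task at hand receive similar predictions.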

Themis-ml: A Fairness-aware Machine Learning Interface for End-to-end Discrimination Discovery and Mitigation [article]

Niels Bantilan
2017 arXiv   pre-print
In this paper we specify, implement, and evaluate a "fairness-aware" machine learning interface called themis-ml, which is intended for use by individual data scientists and engineers, academic research  ...  On the other hand, the responsible use of machine learning can help us measure, understand, and mitigate the implicit historical biases in socially sensitive data by expressing implicit decision-making  ...  For instance, what might be some reasonable ways to aggregate utility and fairness metrics in order to find the optimal set of hyperparameters?  ... 
arXiv:1710.06921v1 fatcat:le5cawayv5eannaf3ekyl4gqny
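
One simple answer to the closing question of this abstract (how to aggregate utility and fairness metrics when selecting hyperparameters) is a weighted scalarization of the two. The helper below is a hypothetical sketch and is not part of the themis-ml API:

    # Hypothetical helper (not themis-ml API): score a candidate model by
    # accuracy minus a weighted fairness penalty, here the absolute gap in
    # positive-prediction rates between protected and unprotected groups.
    import numpy as np

    def fairness_utility_score(y_true, y_pred, protected, lam=1.0):
        y_true, y_pred, protected = map(np.asarray, (y_true, y_pred, protected))
        accuracy = np.mean(y_pred == y_true)
        rate_protected = y_pred[protected == 1].mean()
        rate_rest = y_pred[protected == 0].mean()
        parity_gap = abs(rate_protected - rate_rest)   # 0 means perfect parity
        return accuracy - lam * parity_gap

    # Toy usage: rank hyperparameter candidates by the combined score.
    y_true    = [1, 0, 1, 1, 0, 0, 1, 0]
    candidate = [1, 0, 1, 0, 0, 0, 1, 1]
    protected = [1, 1, 1, 1, 0, 0, 0, 0]
    print(fairness_utility_score(y_true, candidate, protected, lam=0.5))

The weight lam makes the utility/fairness trade-off explicit; it is a modelling choice rather than something the data determines.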

AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias [article]

Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilovic, Seema Nagar, Karthikeyan Natesan Ramamurthy, John Richards, Diptikalyan Saha (+4 others)
2018 arXiv   pre-print
The package includes a comprehensive set of fairness metrics for datasets and models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.  ...  Fairness is an increasingly important concern as machine learning models are used to support decision making in high-stakes applications such as mortgage lending, hiring, and prison sentencing.  ...  Such metrics are used, e.g., by Calmon et al. (2017), to quantify individual fairness.  ...
arXiv:1810.01943v1 fatcat:5f2ud4crbfhnrld2qfqj6ihpqa
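
A minimal sketch of the workflow the toolkit supports (compute a dataset-level fairness metric, apply a pre-processing mitigation algorithm, and re-check) is shown below. The class and method names are recalled from the aif360 Python package and may differ between versions, so treat this as illustrative rather than definitive:

    # Hedged AIF360-style sketch: measure statistical parity on a toy dataset,
    # then reweigh examples (a pre-processing mitigation) and measure again.
    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    # Toy data: 'sex' is the protected attribute, 'label' the binary outcome.
    df = pd.DataFrame({
        "sex":   [0, 0, 0, 0, 1, 1, 1, 1],
        "score": [0.2, 0.4, 0.6, 0.8, 0.3, 0.5, 0.7, 0.9],
        "label": [0, 0, 0, 1, 0, 1, 1, 1],
    })
    dataset = BinaryLabelDataset(df=df, label_names=["label"],
                                 protected_attribute_names=["sex"])

    priv, unpriv = [{"sex": 1}], [{"sex": 0}]
    metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unpriv,
                                      privileged_groups=priv)
    print("statistical parity difference:",
          metric.statistical_parity_difference())

    rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
    reweighted = rw.fit_transform(dataset)
    metric_rw = BinaryLabelDatasetMetric(reweighted,
                                         unprivileged_groups=unpriv,
                                         privileged_groups=priv)
    print("after reweighing:", metric_rw.statistical_parity_difference())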

Algorithmic Fairness in Computational Medicine [article]

Jie Xu, Yunyu Xiao, Wendy Hui Wang, Yue Ning, Elizabeth A Shenkman, Jiang Bian, Fei Wang
2022 medRxiv   pre-print
Specifically, we overview the different types of algorithmic bias, fairness quantification metrics, and bias mitigation methods, and summarize popular software libraries and tools for bias evaluation and  ...  However, recent research has shown that machine learning techniques may result in potential biases when making decisions for people in different subgroups, which can lead to detrimental effects on the  ...  Preprocessing methods allow discrimination to be removed from data sets more effectively than simple methods, such as removing sensitive attributes from training data.  ...
doi:10.1101/2022.01.16.21267299 fatcat:26fp56upvfgabozuqowrrun3j4

Fairness across Network Positions in Cyberbullying Detection Algorithms [article]

Vivek Singh, Connor Hofenbitzer
2019 arXiv   pre-print
The results pave the way for more accurate and fair cyberbullying detection algorithms.  ...  Recently, researchers have created automated machine learning algorithms to detect cyberbullying using social and textual features.  ...  To quantify the "fairness" of algorithms, we survey the recent literature on fairness in machine learning (e.g., [1, 2, 3]) and focus on the comparisons based on three different metrics: difference in  ...
arXiv:1905.03403v1 fatcat:235rmzgxavbsxkpxj5oa2vor2a
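
The third metric in the last snippet is cut off, but "difference in"-style comparisons generally reduce to computing a performance statistic per subgroup and reporting the gap. The sketch below is generic and hedged; the subgroup labels (e.g., network positions such as "core" vs. "periphery") are hypothetical:

    # Generic sketch of "difference in <metric> across groups": per-group
    # accuracy and the largest pairwise gap. Group labels are hypothetical.
    from collections import defaultdict

    def per_group_accuracy(y_true, y_pred, groups):
        totals, correct = defaultdict(int), defaultdict(int)
        for t, p, g in zip(y_true, y_pred, groups):
            totals[g] += 1
            correct[g] += int(t == p)
        return {g: correct[g] / totals[g] for g in totals}

    def max_accuracy_gap(y_true, y_pred, groups):
        acc = per_group_accuracy(y_true, y_pred, groups)
        return max(acc.values()) - min(acc.values())

    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
    groups = ["core", "core", "core", "periphery",
              "periphery", "periphery", "periphery", "periphery"]
    print(per_group_accuracy(y_true, y_pred, groups))
    print(max_accuracy_gap(y_true, y_pred, groups))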

Quantum Fair Machine Learning [article]

Elija Perrier
2021 arXiv   pre-print
We extend canonical Lipschitz-conditioned individual fairness criteria to the quantum setting using quantum metrics.  ...  metrics and remediation strategies when quantum algorithms are subject to fairness constraints.  ...  This means translating fair machine learning metrics to the quantum realm requires a different (albeit related) metric formalism.  ...
arXiv:2102.00753v2 fatcat:wpvmnxfyx5bebbqw3z5ckdqt6e

Fairness in Federated Learning for Spatial-Temporal Applications [article]

Afra Mashhadi, Alex Kyllo, Reza M. Parizi
2022 arXiv   pre-print
Federated learning can be viewed as a unique opportunity to bring fairness and parity to many existing models by enabling model training to happen on a diverse set of participants and on data that is generated  ...  We propose how these metrics and approaches can be re-defined to address the challenges that are faced in the federated learning setting.  ...  There are various ways of measuring distance between people's trajectories, such as simple Manhattan distance, mean squared error, etc.  ...
arXiv:2201.06598v2 fatcat:t3q2wwv77jetfell5sacv4oawu
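
The distance measures named in the last snippet are straightforward to compute for two aligned, equal-length trajectories; the sketch below assumes trajectories given as (T, 2) coordinate arrays, with names chosen for illustration:

    # Hedged sketch of two simple trajectory distances for equal-length,
    # aligned trajectories stored as (T, 2) arrays of coordinates.
    import numpy as np

    def manhattan_distance(traj_a, traj_b):
        return np.abs(traj_a - traj_b).sum()

    def mean_squared_error(traj_a, traj_b):
        return np.mean((traj_a - traj_b) ** 2)

    a = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
    b = np.array([[0.0, 1.0], [1.5, 1.0], [2.0, 3.0]])
    print(manhattan_distance(a, b), mean_squared_error(a, b))
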
Showing results 1 — 15 out of 68,176 results