
Does enforcing fairness mitigate biases caused by subpopulation shift? [article]

Subha Maity, Debarghya Mukherjee, Mikhail Yurochkin, Yuekai Sun
2021 arXiv   pre-print
Many instances of algorithmic bias are caused by subpopulation shifts. For example, ML models often perform worse on demographic groups that are underrepresented in the training data.  ...  On one hand, we conceive scenarios in which enforcing fairness does not improve performance in the target domain. In fact, it may even harm performance.  ...  Acknowledgments and Disclosure of Funding: This note is based upon work supported by the National Science Foundation (NSF) under grants no. 1916271, 2027737, and 2113373.  ... 
arXiv:2011.03173v2 fatcat:xvunyynbajc3tarmb2eyowfbmu

Ctrl-Shift: How Privacy Sentiment Changed from 2019 to 2021 [article]

Angelica Goetzen, Samuel Dooley, Elissa M. Redmiles
2022 arXiv   pre-print
Our results offer insight into how privacy attitudes may have been impacted by recent events and allow us to identify potential predictors of changes in privacy attitudes during times of geopolitical or  ...  For each survey, they sample a subpopulation of the larger panel.  ...  (e.g., genetic data) for public safety by specific entities (i.e., law enforcement).  ... 
arXiv:2110.09437v2 fatcat:qvyaawjgzzh3fccekslhgghybi

WILDS: A Benchmark of in-the-Wild Distribution Shifts [article]

Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David (+11 others)
2021 arXiv   pre-print
This gap remains even with models trained by existing methods for tackling distribution shifts, underscoring the need for new methods for training models that are more robust to the types of distribution shifts that arise in practice.  ...  Algorithmic fairness: Distribution shifts which degrade model performance on minority subpopulations are frequently discussed in the algorithmic fairness literature.  ... 
arXiv:2012.07421v3 fatcat:bsohmukpszajxeadeo25oxmbs4

Learning from Positive and Unlabeled Data with Arbitrary Positive Shift [article]

Zayd Hammoudeh, Daniel Lowd
2020 arXiv   pre-print
This assumption rarely holds in practice due to temporal drift, domain shift, and/or adversarial manipulation.  ...  Since the test distributions are never biased, PN_te is unaffected by shift.  ...  data on a positive subpopulation in the original dataset).  ... 
arXiv:2002.10261v4 fatcat:makah7ddqjcmdmwa326nwv4yia

Radio Galaxy Zoo: using semi-supervised learning to leverage large unlabelled data sets for radio galaxy classification under data set shift

Inigo V Slijepcevic, Anna M M Scaife, Mike Walmsley, Micah Bowles, O Ivy Wong, Stanislav S Shabala, Hongming Tang
2022 Monthly Notices of the Royal Astronomical Society
Additionally, we show that SSL does not improve model calibration, regardless of whether classification is improved.  ...  to provide the labelled and unlabelled data sets required for SSL, a significant drop in classification performance is observed, highlighting the difficulty of applying SSL techniques under data set shift  ...  This work has been made possible by the participation of more than 12 000 volunteers in the Radio Galaxy Zoo Project.  ... 
doi:10.1093/mnras/stac1135 fatcat:d7zuy6xs3ratjfwnt4tnlrvvp4

Algorithm Fairness in AI for Medicine and Healthcare [article]

Richard J. Chen, Tiffany Y. Chen, Jana Lipkova, Judy J. Wang, Drew F.K. Williamson, Ming Y. Lu, Sharifa Sahai, Faisal Mahmood
2022 arXiv   pre-print
In this perspective article, we summarize the intersectional field of fairness in machine learning through the context of current issues in healthcare, outline how algorithmic biases (e.g., image acquisition  ...  Lastly, we also review emerging technology for mitigating bias via federated learning, disentanglement, and model explainability, and their role in AI-SaMD development.  ...  In training algorithms on data that have internalized historical biases, algorithms may be learning from these biases and causing disproportionate harm to certain groups of individuals.  ... 
arXiv:2110.00603v2 fatcat:pspb6bqqxjh45an5mhqohysswu

Domain Adaptation meets Individual Fairness. And they get along [article]

Debarghya Mukherjee, Felix Petersen, Mikhail Yurochkin, Yuekai Sun
2022 arXiv   pre-print
Many instances of algorithmic bias are caused by distributional shifts.  ...  domain adaptation methods (for overcoming distribution shifts) can mitigate algorithmic biases.  ...  This complements the recent results by Maity et al. [32], which show that enforcing group fairness can mitigate algorithmic biases caused by subpopulation shifts.  ... 
arXiv:2205.00504v1 fatcat:gnusztq6ubgc3pbzz3qwu5zdim

Algorithmic Fairness in Business Analytics: Directions for Research and Practice [article]

Maria De-Arteaga and Stefan Feuerriegel and Maytal Saar-Tsechansky
2022 arXiv   pre-print
This paper offers a forward-looking, BA-focused review of algorithmic fairness. We first review the state-of-the-art research on sources and measures of bias, as well as bias mitigation algorithms.  ...  Finally, we chart a path forward by identifying opportunities for business scholars to address impactful, open challenges that are key to the effective and responsible deployment of BA.  ...  A utility-fairness trade-off may also arise if the source of the bias is differential subgroup validity (see Section 3.1), and the available data does not contain information that is predictive for a subpopulation  ... 
arXiv:2207.10991v1 fatcat:wu4u3eq6xvab3jnt3xpit5n45q

Inducing bias is simpler than you think [article]

Stefano Sarao Mannelli, Federica Gerace, Negar Rostamzadeh, Luca Saglietti
2022 arXiv   pre-print
By allowing each model to specialise on a different community within the data, we find that multiple fairness criteria and high accuracy can be achieved simultaneously.  ...  Finally, we analyse the issue of bias mitigation: by reweighing the various terms in the training loss, we indirectly minimise standard unfairness metrics and highlight their incompatibilities.  ...  We acknowledge that defining or addressing biases in a social system only from a technical standpoint may risk overlooking the root causes of issues and even amplify other types of biases that are dismissed.  ... 
arXiv:2205.15935v1 fatcat:gmvccuysyveazog3paekzg5h2i

TARA: Training and Representation Alteration for AI Fairness and Domain Generalization [article]

William Paul, Armin Hadzic, Neil Joshi, Fady Alajaji, Phil Burlina
2021 arXiv   pre-print
We propose a novel method for enforcing AI fairness with respect to protected or sensitive factors.  ...  This method uses a dual strategy performing training and representation alteration (TARA) for the mitigation of prominent causes of AI bias by including: a) the use of representation learning alteration  ...  Much of the difficulty of ensuring model fairness has to do with resilience to distributional shift and domain shift, which call for approaches to domain adaptation.  ... 
arXiv:2012.06387v4 fatcat:q27pshiwnnae7jkbqxusttd24q

Addressing Artificial Intelligence Bias in Retinal Disease Diagnostics [article]

Philippe Burlina, Neil Joshi, William Paul, Katia D. Pacheco, Neil M. Bressler
2020 arXiv   pre-print
This study evaluated generative methods to potentially mitigate AI bias when diagnosing diabetic retinopathy (DR) resulting from training data imbalance, or domain generalization which occurs when deep  ...  The public domain Kaggle-EyePACS dataset (88,692 fundi and 44,346 individuals, originally diverse for ethnicity) was modified by adding clinician-annotated labels and constructing an artificial scenario  ...  Criteria for fairness can vary, and bias in AI can also have different causes, manifestations, and definitions.  ... 
arXiv:2004.13515v4 fatcat:r2bm7pwtgbe6lnluqyjjshpkga

Studying Up Machine Learning Data: Why Talk About Bias When We Mean Power? [article]

Milagros Miceli, Julian Posada, Tianling Yang
2021 arXiv   pre-print
Research in machine learning (ML) has primarily argued that models trained on incomplete or biased datasets can lead to discriminatory outputs.  ...  In this commentary, we propose moving the research focus beyond bias-oriented framings by adopting a power-aware perspective to "study up" ML datasets.  ...  For instance, one of the questions in Datasheets for Datasets asks "does the dataset identify any subpopulations?" (e.g. by race, age, or gender).  ... 
arXiv:2109.08131v1 fatcat:lyur2rofbrgfrmqtk4u6tyy2cq

Incorporating Priors with Feature Attribution on Text Classification

Frederick Liu, Besim Avci
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics  
To demonstrate the effectiveness of our technique, we apply it to two tasks: (1) mitigating unintended bias in text classifiers by neutralizing identity terms; (2) improving classifier performance in a scarce data setting by forcing the model to focus on toxic terms.  ...  Our main intuition is that undesirable correlations between toxicity labels and instances of identity terms cause the model to learn unfair biases which can be corrected by incorporating priors on these  ... 
doi:10.18653/v1/p19-1631 dblp:conf/acl/LiuA19 fatcat:xwjz46gsgfh2xftmrhkehyczwe

fairmodels: A Flexible Tool For Bias Detection, Visualization, And Mitigation [article]

Jakub Wiśniewski, Przemysław Biecek
2022 arXiv   pre-print
The implemented set of functions and fairness metrics enables model fairness validation from different perspectives.  ...  Moreover, complex predictive models are really eager to learn social biases present in historical data that can lead to increasing discrimination.  ...  Acknowledgements Work on this package was financially supported by the NCN Sonata Bis-9 grant 2019/34/E/ST6/00052.  ... 
arXiv:2104.00507v2 fatcat:wq2nxbyk5vfkvchkkran6dkhle

An Ethical Highlighter for People-Centric Dataset Creation [article]

Margot Hanley, Apoorv Khandelwal, Hadar Averbuch-Elor, Noah Snavely, Helen Nissenbaum
2020 arXiv   pre-print
Our work is informed by a review and analysis of prior works and highlights where such ethical challenges arise.  ...  Therefore, in the case of potentially sensitive data, dataset creators should consider reviewing requests on a case-by-case basis, putting forth a good faith effort to reject requests whose intent does  ...  is no way to readily enforce research-only usage.  ... 
arXiv:2011.13583v1 fatcat:ovm7odlwjjg7lei3dsypa3obwe
Showing results 1 — 15 out of 667 results