77 Hits in 16.4 sec

Debiasing representations by removing unwanted variation due to protected attributes [article]

Amanda Bower, Laura Niss, Yuekai Sun, Alexander Vargo
2018 arXiv   pre-print
We propose a regression-based approach to removing implicit biases in representations.  ...  Further, we show that this approach leads to debiased representations that satisfy a first order approximation of conditional parity.  ...  By treating the variation that is present in the data due to protected attributes (e.g. race) as unwanted, we devise a method to remove this unwanted variation based on the factor model (and thus debias  ... 
arXiv:1807.00461v1 fatcat:45vbuzr2efb3necbvcpf23znvy
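
The entry above treats variation attributable to a protected attribute as unwanted and removes it by regression against a factor model. A minimal sketch of the simplest version of that idea, residualizing each feature on one-hot indicators of the protected attribute, follows; it is an illustration under those assumptions, not the authors' exact factor-model procedure.

```python
# Minimal sketch: remove linear variation due to a protected attribute by
# residualizing features on its one-hot encoding. Illustrative only; the
# paper's factor-model formulation is more involved.
import numpy as np

def residualize(X, protected):
    """Return X minus its least-squares projection onto protected-attribute indicators."""
    groups = np.unique(protected)
    # Design matrix: intercept plus indicators for all but one group (avoids collinearity).
    Z = np.column_stack([np.ones(len(protected))] +
                        [(protected == g).astype(float) for g in groups[:-1]])
    beta, *_ = np.linalg.lstsq(Z, X, rcond=None)   # per-feature regression coefficients
    return X - Z @ beta                            # residuals carry no linear protected signal

rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=200)            # e.g. a binary protected attribute
X = rng.normal(size=(200, 5)) + protected[:, None]  # features leak the attribute
X_debiased = residualize(X, protected)
print(np.corrcoef(X_debiased[:, 0], protected)[0, 1])  # near zero after residualization
```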

De-biasing facial detection system using VAE [article]

Vedant V. Kandge, Siddhant V. Kandge, Kajal Kumbharkar, Prof. Tanuja Pattanshetti
2022 arXiv   pre-print
...  which are there due to bias in the system.  ...  The bias can be due to the algorithm being used or to the dataset, which may have some features over-represented in it.  ...  We use an adversarial debiasing method to remove unwanted features corresponding to protected variables from intermediate representations in the DNN and provide a detailed analysis of its effectiveness  ... 
arXiv:2204.09556v1 fatcat:oh4t7ianwvdixjyktw2sdk754e

Adversarial Scrubbing of Demographic Information for Text Classification [article]

Somnath Basu Roy Chowdhury, Sayan Ghosh, Yiyuan Li, Junier B. Oliva, Shashank Srivastava, Snigdha Chaturvedi
2021 arXiv   pre-print
We aim to scrub such undesirable attributes and learn fair representations while maintaining performance on the target task.  ...  We extend previous evaluation techniques by evaluating debiasing performance using Minimum Description Length (MDL) probing.  ...  Our work is most similar to Elazar and Goldberg (2018), which achieves fairness by blindness by learning intermediate representations which are oblivious to a protected attribute.  ... 
arXiv:2109.08613v1 fatcat:evjk4v3ubzaljnl5p65ypcjjda
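
The scrubbing setup described above pits a task classifier against a demographic discriminator over a shared representation. The PyTorch sketch below shows one common way such adversarial training is wired up; the module sizes, learning rates, and the single trade-off weight are placeholder assumptions, not the authors' architecture, and the MDL probing evaluation is not reproduced here.

```python
# Hedged sketch of adversarial attribute scrubbing (generic recipe, not the
# paper's exact model): the encoder and task head solve the task while making
# the protected attribute hard for an adversary to recover.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(300, 128), nn.ReLU())   # assumed 300-d text features
task_head = nn.Linear(128, 2)                          # target task (e.g. sentiment)
adversary = nn.Linear(128, 2)                          # tries to predict the protected attribute
ce = nn.CrossEntropyLoss()
opt_main = torch.optim.Adam(list(enc.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
lam = 1.0  # assumed trade-off weight

def train_step(x, y_task, y_prot):
    # 1) Adversary step: learn to recover the protected attribute from a detached representation.
    z = enc(x).detach()
    adv_loss = ce(adversary(z), y_prot)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # 2) Main step: do the task and *fool* the adversary. Any stale gradients this
    #    backward leaves on the adversary are cleared by opt_adv.zero_grad() above.
    z = enc(x)
    main_loss = ce(task_head(z), y_task) - lam * ce(adversary(z), y_prot)
    opt_main.zero_grad(); main_loss.backward(); opt_main.step()
    return main_loss.item(), adv_loss.item()

x = torch.randn(32, 300)
print(train_step(x, torch.randint(0, 2, (32,)), torch.randint(0, 2, (32,))))
```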

Algorithm Fairness in AI for Medicine and Healthcare [article]

Richard J. Chen, Tiffany Y. Chen, Jana Lipkova, Judy J. Wang, Drew F.K. Williamson, Ming Y. Lu, Sharifa Sahai, Faisal Mahmood
2022 arXiv   pre-print
...  (e.g. image acquisition, genetic variation, intra-observer labeling variability) arise in current clinical workflows and their resulting healthcare disparities.  ...  To remove protected attributes in the representation space of structured data modalities such as images and text data, deep learning algorithms can be additionally supervised with the protected attribute  ...  Model constraints via regularization penalty terms are designed to remove unwanted confounders that would leak protected subgroup identity [43, 47, 48].  ... 
arXiv:2110.00603v2 fatcat:pspb6bqqxjh45an5mhqohysswu
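
The last fragment above refers to regularization penalty terms that keep a model from leaking protected subgroup identity. A minimal, generic form of such a penalty is a squared-correlation term between the model's scores and the protected attribute; the weight, shapes, and stand-in task loss below are illustrative assumptions, not any specific method cited in the survey.

```python
# Hedged sketch: a decorrelation regularizer that penalizes linear dependence
# between a model's scores and a binary protected attribute.
import torch

def correlation_penalty(scores, protected):
    """Squared Pearson correlation between 1-D scores and a 0/1 protected label."""
    s = scores - scores.mean()
    a = protected.float() - protected.float().mean()
    corr = (s * a).sum() / (s.norm() * a.norm() + 1e-8)
    return corr ** 2

scores = torch.randn(64, requires_grad=True)   # in practice these come from the model
protected = torch.randint(0, 2, (64,))
task_loss = scores.pow(2).mean()               # stand-in for the real task loss
loss = task_loss + 5.0 * correlation_penalty(scores, protected)
loss.backward()                                # penalty gradients flow back into the model
```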

Learning Disentangled Representation for Fair Facial Attribute Classification via Fairness-aware Information Alignment

Sungho Park, Sunhee Hwang, Dohyung Kim, Hyeran Byun
2021 AAAI Conference on Artificial Intelligence  
A popular approach to this issue is to remove protected attribute information from the decision process.  ...  To overcome the limitation, we propose Fairness-aware Disentangling Variational Auto-Encoder (FD-VAE) that disentangles the data representation into three subspaces: 1) Target Attribute Latent (TAL), 2) Protected  ...  By adding a Gradient Reversal Layer (GRL) (Ganin et al. 2016) to the protected attribute classification branch, we remove protected attribute information from the data representation.  ... 
dblp:conf/aaai/ParkH0B21 fatcat:qh3nagka75hivn7gs6cazahkeq
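
The last fragment above removes protected-attribute information by placing a Gradient Reversal Layer (Ganin et al. 2016) before the protected-attribute classification branch. A GRL is compact to state: identity on the forward pass, negated and scaled gradient on the backward pass. The sketch below is a generic PyTorch rendering of that layer only, not the full FD-VAE.

```python
# Generic Gradient Reversal Layer (Ganin et al. 2016): forward = identity,
# backward = -lambda * grad, so the branch behind it is trained adversarially.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None   # None: no gradient for lam

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

z = torch.randn(8, 16, requires_grad=True)
grad_reverse(z, lam=0.5).sum().backward()
print(z.grad[0, 0])   # equals -0.5: the reversed, scaled gradient
```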

Achieving Utility, Fairness, and Compactness via Tunable Information Bottleneck Measures [article]

Adam Gronowski, William Paul, Fady Alajaji, Bahman Gharesifard, Philippe Burlina
2022 arXiv   pre-print
...  of representation, and apply it to image classification.  ...  Designing machine learning algorithms that are accurate yet fair, not discriminating based on any sensitive attribute, is of paramount importance for society to accept AI for critical applications.  ...  First proposed by [4], fair representation learning consists of mapping input data to an intermediate representation that remains informative but discards unwanted information that could reveal the protected  ... 
arXiv:2206.10043v1 fatcat:zkvqdelmuzewdcoo36kwjilphq

In-Processing Modeling Techniques for Machine Learning Fairness: A Survey

Mingyang Wan, Daochen Zha, Ninghao Liu, Na Zou
2022 ACM Transactions on Knowledge Discovery from Data  
...  mitigate fairness issues in outputs and representations.  ...  In recent years, various techniques have been developed to mitigate unfairness in machine learning models.  ...  ACKNOWLEDGEMENTS This work is in part supported by NSF grants IIS-1939716 and IIS-1900990. The authors would like to thank Dr. Xia Hu and Dr. Mengnan Du for their constructive feedback.  ... 
doi:10.1145/3551390 fatcat:eu7vuvv7nvejjgs4uzl3lk3w4a

Modeling Techniques for Machine Learning Fairness: A Survey [article]

Mingyang Wan, Daochen Zha, Ninghao Liu, Na Zou
2022 arXiv   pre-print
...  mitigate fairness issues in outputs and representations.  ...  In recent years, various techniques have been developed to mitigate unfairness in machine learning models.  ...  This work focuses on learning text-driven representations that are blind to the attributes we wish to protect.  ... 
arXiv:2111.03015v2 fatcat:didcuo2yabbcrb2fuhveqgng3y

Renyi Fair Information Bottleneck for Image Classification [article]

Adam Gronowski and William Paul and Fady Alajaji and Bahman Gharesifard and Philippe Burlina
2022 arXiv   pre-print
We consider two different fairness constraints - demographic parity and equalized odds - for learning fair representations and derive a loss function via a variational approach that uses Renyi's divergence  ...  with its tunable parameter α and that takes into account the triple constraints of utility, fairness, and compactness of representation.  ...  The new representation Z must simultaneously preserve information from X relevant to predicting Y while removing sensitive information that could lead to bias.  ... 
arXiv:2203.04950v2 fatcat:4c7oq4wuyffwbig4rwipepnsnm
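
The two constraints named above, demographic parity and equalized odds, have simple empirical counterparts: the parity gap compares positive-prediction rates across groups, and the equalized-odds gap compares true- and false-positive rates. The sketch below computes those gaps for binary predictions; it is a metric illustration only and does not reproduce the paper's Renyi-divergence loss derivation.

```python
# Empirical demographic parity and equalized odds gaps for binary predictions
# and a binary protected attribute (metric sketch only).
import numpy as np

def demographic_parity_gap(y_pred, a):
    return abs(y_pred[a == 0].mean() - y_pred[a == 1].mean())

def equalized_odds_gap(y_true, y_pred, a):
    gaps = []
    for y in (0, 1):   # y=0 compares false-positive rates, y=1 compares true-positive rates
        r0 = y_pred[(a == 0) & (y_true == y)].mean()
        r1 = y_pred[(a == 1) & (y_true == y)].mean()
        gaps.append(abs(r0 - r1))
    return max(gaps)

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 1000)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
print(demographic_parity_gap(y_pred, a), equalized_odds_gap(y_true, y_pred, a))
```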

A Survey on Bias and Fairness in Machine Learning [article]

Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, Aram Galstyan
2022 arXiv   pre-print
We hope that this survey will motivate researchers to tackle these issues in the near future by observing existing work in their respective fields.  ...  In addition, we examined different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and how they have tried to  ... 
arXiv:1908.09635v3 fatcat:fygrqs3sing6zdsg53t7awhih4

Through a fair looking-glass: mitigating bias in image datasets [article]

Amirarsalan Rajabi, Mehdi Yazdani-Jahromi, Ozlem Ozmen Garibay, Gita Sukthankar
2022 arXiv   pre-print
Our architecture includes a U-net to reconstruct images, combined with a pre-trained classifier which penalizes the statistical dependence between the target attribute and the protected attribute.  ...  With the recent growth in computer vision applications, the question of how fair and unbiased these systems are remains largely unexplored.  ...  The authors provide an algorithm to remove multiple sources of variation from the feature representation of a network.  ... 
arXiv:2209.08648v1 fatcat:f2q7f4tkjvhp7jq2l6xqsic73i
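
The entry above pairs a U-net reconstructor with a pre-trained classifier that penalizes statistical dependence between the target attribute and the protected attribute in the reconstructed images. One hedged reading of such an objective, reconstruction error plus the absolute covariance between a frozen classifier's two scores, is sketched below; the networks, shapes, and the covariance penalty are illustrative assumptions rather than the paper's exact loss.

```python
# Hedged sketch: reconstruction loss plus a penalty on the covariance between a
# frozen classifier's target-attribute and protected-attribute scores on the
# reconstructed images. All components are illustrative stand-ins.
import torch
import torch.nn as nn

reconstructor = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))            # stand-in for a U-Net
frozen_clf = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))     # [target_score, protected_score]
for p in frozen_clf.parameters():
    p.requires_grad_(False)                                              # pre-trained and frozen

def fairness_recon_loss(images, lam=1.0):
    recon = reconstructor(images)
    scores = frozen_clf(recon)                                           # shape (N, 2)
    t = scores[:, 0] - scores[:, 0].mean()
    s = scores[:, 1] - scores[:, 1].mean()
    dependence = (t * s).mean().abs()                                    # |cov| between the two scores
    recon_err = (recon - images).pow(2).mean()
    return recon_err + lam * dependence

x = torch.randn(16, 3, 32, 32)
fairness_recon_loss(x).backward()   # only the reconstructor receives gradients
```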

Local Data Debiasing for Fairness Based on Generative Adversarial Training

Ulrich Aïvodji, François Bidet, Sébastien Gambs, Rosin Claude Ngueveu, Alain Tapp
2021 Algorithms  
...  based on a sensitive attribute by removing the attribute itself as well as the existing correlations with the remaining attributes.  ...  In contrast to prior work, one of the strengths of our approach is that the sanitization is performed in the same space as the original data, modifying the other attributes as little as possible  ...  Therefore, this is similar to the previously mentioned body of work that protects the sensitive attribute by changing the space of representation.  ... 
doi:10.3390/a14030087 fatcat:pu3vmcdbgvaijjy2qyzil47fum

The Geometry of Distributed Representations for Better Alignment, Attenuated Bias, and Improved Interpretability [article]

Sunipa Dev
2020 arXiv   pre-print
As a result, invalid associations (such as different races and their association with a polar notion of good versus bad) are made and propagated by the representations, leading to unfair outcomes in different  ...  These representations have different degrees of interpretability, with efficient distributed representations coming at the cost of losing the feature-to-dimension mapping.  ...  In this dissertation, we define bias in language representation to be the invalid and stereotypical associations made by the representations about the aforementioned protected attributes.  ... 
arXiv:2011.12465v1 fatcat:fa6zc6rjw5btxjcylf6vawgdzq

A benchmark study on methods to ensure fair algorithmic decisions for credit scoring [article]

Darie Moldovan
2022 arXiv   pre-print
However, automatic decisions may lead to different treatment of groups or individuals, potentially causing discrimination.  ...  ACKNOWLEDGMENTS This work was supported in part by the 2021 Development Fund of the UBB. It was also partly supported by the Romanian National Authority for Scientific Research Grant no.  ...  We reduced the number of attributes by selecting only the top 5 attributes by information value, along with the protected attribute and the target attribute, in order to create a testing environment  ... 
arXiv:2209.07912v1 fatcat:k4cg2w4bwbcord5pzb7eijgbyq
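
The benchmark above ranks predictors by information value before selecting the top 5. Information value is a standard credit-scoring statistic built from per-bin weight of evidence; a small sketch of its computation follows, with the quantile binning and smoothing constant as assumptions.

```python
# Hedged sketch of the information value (IV) statistic used to rank predictors
# in credit scoring: bin a variable, compute weight of evidence per bin, and sum
# (%good - %bad) * WoE. Binning and smoothing are illustrative choices.
import numpy as np

def information_value(x, y, n_bins=10, eps=0.5):
    """x: numeric predictor, y: 1 = bad (default), 0 = good."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    iv = 0.0
    for b in range(n_bins):
        good = (y[bins == b] == 0).sum() + eps      # eps avoids log(0) in sparse bins
        bad = (y[bins == b] == 1).sum() + eps
        pct_good = good / ((y == 0).sum() + n_bins * eps)
        pct_bad = bad / ((y == 1).sum() + n_bins * eps)
        iv += (pct_good - pct_bad) * np.log(pct_good / pct_bad)
    return iv

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = (rng.random(5000) < 1 / (1 + np.exp(-x))).astype(int)   # default risk rises with x
print(round(information_value(x, y), 3))                     # informative predictor gives IV well above 0
```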

Deconfounding and Causal Regularization for Stability and External Validity [article]

Peter Bühlmann, Domagoj Ćevid
2020 arXiv   pre-print
In this sense, we provide additional thoughts on the issue of concept drift, raised by Efron (2020), when the data-generating distribution is changing.  ...  We review some recent work on removing hidden confounding and on causal regularization from a unified viewpoint.  ...  By doing so, we remove bias thanks to spectral transformations, hence the term "doubly debiased".  ... 
arXiv:2008.06234v1 fatcat:ny7axfvicbcczmh24wuaofg2km
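
The entry above attributes the debiasing to spectral transformations: shrinking the leading singular values of the design matrix attenuates the dense signature of hidden confounders before a sparse regression is fit. The sketch below applies a generic trim-style transform and then a lasso; the cap-at-the-median rule and penalty level are illustrative choices, not the paper's tuning.

```python
# Hedged sketch of a spectral "trim" transform for deconfounding: cap the large
# singular values of X at their median, apply the same map to y, then fit a
# sparse regression on the transformed data.
import numpy as np
from sklearn.linear_model import Lasso

def trim_transform(X, y):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    tau = np.median(s)
    scale = np.minimum(1.0, tau / s)          # shrink only the leading singular values
    F = U @ np.diag(scale) @ U.T              # same linear map applied to X and y
    return F @ X, F @ y

rng = np.random.default_rng(0)
n, p = 300, 50
H = rng.normal(size=(n, 2))                   # hidden confounder
X = rng.normal(size=(n, p)) + H @ rng.normal(size=(2, p))
beta = np.zeros(p); beta[:3] = 1.0
y = X @ beta + H @ np.array([2.0, -2.0]) + rng.normal(size=n)

X_t, y_t = trim_transform(X, y)
print(np.round(Lasso(alpha=0.1).fit(X_t, y_t).coef_[:5], 2))  # the three true nonzero coefficients should dominate
```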
Showing results 1 — 15 out of 77 results