Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations
[article] · 2019 · arXiv pre-print
To mitigate this, we adopt an adversarial approach to remove unwanted features corresponding to protected variables from intermediate representations in a deep neural network -- and provide a detailed ...
We show that trained models significantly amplify the association of target labels with gender beyond what one would expect from biased datasets. ...
We also acknowledge fruitful discussions with members of the Human-Machine Intelligence group through the Institute for the Humanities and Global Cultures at the University of Virginia. ...
arXiv:1811.08489v4
fatcat:f34iwjfusfeqdfdsbyumx7yir4
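The snippet above describes the common recipe for adversarial removal of protected variables from intermediate representations. One standard implementation is a gradient-reversal layer: the task head and the adversary head each minimize their own loss, while the encoder receives the adversary's gradient with its sign flipped, so it learns features from which the protected attribute cannot be predicted. The sketch below is a minimal, hypothetical NumPy illustration of a single such backward pass; the linear encoder, both heads, the synthetic data, and the weight `lam` are all illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def bce_grad(p, t):
    # gradient of mean binary cross-entropy with respect to the logits
    return (p - t) / len(t)

rng = np.random.default_rng(0)
n, d, h = 512, 4, 3
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(float)   # task label (synthetic)
g = (X[:, 1] > 0).astype(float)   # protected attribute (synthetic)

We = rng.normal(scale=0.1, size=(d, h))  # linear encoder
wt = rng.normal(scale=0.1, size=h)       # task head
wa = rng.normal(scale=0.1, size=h)       # adversary head
lam = 1.0                                # reversal strength

Z = X @ We                  # intermediate representation
pt = sigmoid(Z @ wt)        # task prediction
pa = sigmoid(Z @ wa)        # adversary's protected-attribute prediction

# gradients flowing back into the representation Z from each head
dZ_task = np.outer(bce_grad(pt, y), wt)
dZ_adv = np.outer(bce_grad(pa, g), wa)

# gradient reversal: the encoder receives the *negated* adversary gradient,
# pushing it to make the protected attribute unpredictable
dWe = X.T @ (dZ_task - lam * dZ_adv)

# the heads themselves follow their own (non-reversed) gradients
dwt = Z.T @ bce_grad(pt, y)
dwa = Z.T @ bce_grad(pa, g)
```

In a full training loop these gradients would drive ordinary SGD updates; deep learning frameworks implement the sign flip as a custom autograd operation so the rest of the network trains unmodified.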
Jointly De-biasing Face Recognition and Demographic Attribute Estimation
[article] · 2020 · arXiv pre-print
Adversarial learning is adopted to minimize correlation among feature factors so as to abate bias influence from other factors. ...
We present a novel de-biasing adversarial network (DebFace) that learns to extract disentangled feature representations for both unbiased face recognition and demographics estimation. ...
Given this assumption, one common way to remove demographic information from face representations is to perform feature disentanglement via adversarial learning (Fig. 1b) . ...
arXiv:1911.08080v4
fatcat:2cavhrnfezggjh6jffhpxzhfoy
Linear Adversarial Concept Erasure
[article] · 2022 · arXiv pre-print
When evaluated in the context of binary gender removal, the method recovers a low-dimensional subspace whose removal mitigates bias by intrinsic and extrinsic evaluation. ...
We formulate the problem of identifying and erasing a linear subspace that corresponds to a given concept, in order to prevent linear predictors from recovering the concept. ...
Figure 4: Application of R-LACE on the raw pixels of image data; from top to bottom, the original images and the same images after a rank-1 projection. ...
arXiv:2201.12091v1
fatcat:qfcltkbdlrayhhk6bj6y7ix55e
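The linear-erasure idea in this entry reduces, in the simplest case, to finding a concept direction and applying the orthogonal projection that removes it. The toy sketch below estimates the direction from a class-mean difference rather than the paper's relaxed adversarial optimization (R-LACE), so it illustrates the rank-1 projection itself, not the authors' method; the data and variable names are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 400, 16
g = rng.integers(0, 2, size=n)   # binary protected label (synthetic)
X = rng.normal(size=(n, d))
X[:, 0] += 2.0 * g               # plant the concept along one direction

# estimate a 1-D concept subspace from the class-mean difference
w = X[g == 1].mean(axis=0) - X[g == 0].mean(axis=0)
w /= np.linalg.norm(w)

# rank-1 orthogonal projection that erases that subspace: P = I - w w^T
P = np.eye(d) - np.outer(w, w)
X_clean = X @ P

# after projection the group means no longer differ along w
gap = X_clean[g == 1].mean(axis=0) - X_clean[g == 0].mean(axis=0)
```

In practice the subspace is found by optimizing against linear adversaries, and a higher-rank projection removes a multi-dimensional concept subspace; the projection step itself is the same.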
SensitiveNets: Learning Agnostic Representations with Application to Face Images
[article] · 2020 · arXiv pre-print
Our method is based on an adversarial regularizer that introduces a sensitive information removal function in the learning objective. ...
Instead of existing approaches aimed directly at fairness improvement, the proposed feature representation enforces the privacy of selected attributes. ...
... removing the sensitive information (gender and ethnicity in this case) from the embeddings. ...
arXiv:1902.00334v3
fatcat:d57brut44vg2zoeppqwx24b5cq
Fairness in Deep Learning: A Computational Perspective
[article] · 2020 · arXiv pre-print
We provide a review covering recent progress in tackling algorithmic fairness problems of deep learning from the computational perspective. ...
We also discuss fairness mitigation approaches categorized according to three stages of deep learning life-cycle, aiming to push forward the area of fairness in deep learning and build genuinely fair and ...
Thus prediction outcome discrimination could be detected and removed from the deep representation perspective. ...
arXiv:1908.08843v2
fatcat:kaaevm64fbctpjfdycv5uz3dhi
Privacy-Preserving Image Template Sharing Using Contrastive Learning
2022 · Entropy
For the attribute inference attack, we aim to provide a representation of data that is independent of the sensitive attribute. ...
Furthermore, an obfuscator module is trained in an adversarial manner to preserve the privacy of sensitive attributes while maintaining the classification performance on the target attribute. ...
Deep and private feature sharing: With the recent advancements of deep models, a new line of work has been introduced to share deep private and obfuscated feature representations of images. Osia et al ...
doi:10.3390/e24050643
fatcat:z6svnfowzbe3zas4jftbpzy46m
Privacy-Preserving Image Acquisition Using Trainable Optical Kernel
[article] · 2021 · arXiv pre-print
In this work, for the first time, we propose a trainable image acquisition method that removes the sensitive identity revealing information in the optical domain before it reaches the image sensor. ...
We show that this method can reduce 65.1% of sensitive content when it is selected to be the gender and it only loses 7.3% of the desired content. ...
After training, the visual representation inverter can be used to reconstruct the original images from the output of the optical convolution. ...
arXiv:2106.14577v1
fatcat:o5hzef4sejcovlptldclvpy4t4
Toward Privacy and Utility Preserving Image Representation
[article] · 2020 · arXiv pre-print
Multiple methods have been proposed to protect an individual's privacy by perturbing the images to remove traces of identifiable information, such as gender or race. ...
In this paper, we study the novel problem of creating privacy-preserving image representations with respect to a given utility task by proposing a principled framework called the Adversarial Image Anonymizer ...
For example, a well-trained gender classifier could predict the gender of a person using the corresponding image representation. ...
arXiv:2009.14376v2
fatcat:euznouscvvarjcubasn3ma5qz4
Training privacy-preserving video analytics pipelines by suppressing features that reveal information about private attributes
[article]
2022
arXiv
pre-print
... attributes (e.g. age or gender). ...
Deep neural networks are increasingly deployed for scene analytics, including to evaluate the attention and reaction of people exposed to out-of-home advertisements. ...
The ARM network aims to remove the distortion from the edges of the image on the features, referred to as albino erosion. ...
arXiv:2203.02635v1
fatcat:vwofqa4b3jetvjao6g7chxpwlq
Deep Identity-aware Transfer of Facial Attributes
[article] · 2018 · arXiv pre-print
This paper presents a Deep convolutional network model for Identity-Aware Transfer (DIAT) of facial attributes. ...
Our DIAT can provide a unified solution for several representative facial attribute transfer tasks, e.g., expression transfer, accessory removal, age progression, and gender transfer, and can be extended ...
arXiv:1610.05586v2
fatcat:l4khpvjrnbgh3k6gy7csfyzfse
Towards Reducing Bias in Gender Classification
[article] · 2019 · arXiv pre-print
This work aims at addressing the racial bias present in many modern gender recognition systems. We learn race invariant representations of human faces with an adversarially trained autoencoder model. ...
We show that such representations help us achieve less biased performance in gender classification. ...
In particular, we remove race information from the representation of each image and try to do gender classification using this disentangled representation. ...
arXiv:1911.08556v1
fatcat:7nptsrjej5blrdgiaaopfhza24
Representation Learning with Statistical Independence to Mitigate Bias
[article] · 2020 · arXiv pre-print
Such challenges range from spurious associations between variables in medical studies to the bias of race in gender or face recognition systems. ...
We apply our method to synthetic data, medical images (containing task bias), and a dataset for gender classification (containing dataset bias). ...
e.g., gender prediction from face images. ...
arXiv:1910.03676v4
fatcat:yf6amgpfivgarmfunohtx3l2ry
Don't Judge Me by My Face: An Indirect Adversarial Approach to Remove Sensitive Information From Multimodal Neural Representation in Asynchronous Job Video Interviews
[article] · 2021 · arXiv pre-print
Recently, adversarial methods have proven effective at removing sensitive information from the latent representation of neural networks. ...
In this article, we propose a new adversarial approach to remove sensitive information from the latent representation of neural networks without the need to collect any sensitive variable. ...
We present a new adversarial approach, available in two forms, to remove personal information of AVI candidates from the latent layer of a deep network ...
arXiv:2110.09424v1
fatcat:iknfspsudbdvjdqfwvpvvcg5t4
TIPRDC: Task-Independent Privacy-Respecting Data Crowdsourcing Framework for Deep Learning with Anonymized Intermediate Representations
[article] · 2020 · arXiv pre-print
The success of deep learning partially benefits from the availability of various large-scale datasets. ...
The emerging privacy concerns from users on data sharing hinder the generation or use of crowdsourcing datasets and lead to a shortage of training data for new deep learning applications. ...
This work was supported in part by NSF-1822085 and NSF IUCRC for ASIC membership from Ergomotion. ...
arXiv:2005.11480v6
fatcat:jgppdb5fanby5ocdjrjkvpeoeq
DISCO: Dynamic and Invariant Sensitive Channel Obfuscation for deep neural networks
[article] · 2021 · arXiv pre-print
Recent deep learning models have shown remarkable performance in image classification. ...
Finally, we also release an evaluation benchmark dataset of 1 million sensitive representations to encourage rigorous exploration of novel attack schemes. ...
A key idea of DISCO is to disentangle representation learning from privacy via the learned pruning filter. ...
arXiv:2012.11025v2
fatcat:dqg6gkx7mvh7dka3sqdcufcwd4
Showing results 1 — 15 out of 8,882 results