4,179 Hits in 4.6 sec

Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [article]

Joseph P Robinson and Can Qin and Yann Henon and Samson Timoner and Yun Fu
2022 arXiv   pre-print
Our Balanced Faces In the Wild (BFW) dataset serves as a proxy to measure bias across ethnicity and gender subgroups, allowing one to characterize FR performance per subgroup.  ...  A benefit of the proposed approach is to preserve identity information in facial features while decreasing the demographic information they contain.  ...  To address the imbalanced data (i.e., (1)), we propose a proxy dataset, Balanced Faces In the Wild (BFW), to measure subgroup biases in FR (Fig. 5).  ... 
arXiv:2103.09118v3 fatcat:66yi76sklzb5fcrsw5cpcw2rbm

SensitiveNets: Learning Agnostic Representations with Application to Face Images [article]

Aythami Morales, Julian Fierrez, Ruben Vera-Rodriguez, Ruben Tolosana
2020 arXiv   pre-print
In addition, we present a new face annotation dataset with balanced distribution between genders and ethnic origins.  ...  In our approach, privacy and discrimination are related to each other.  ...  The methods, based on adversarial learning, reported encouraging privacy-preserving results, but at the cost of a non-negligible impact on the primary task performance. The methods proposed in [10] [11  ... 
arXiv:1902.00334v3 fatcat:d57brut44vg2zoeppqwx24b5cq
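The adversarial-learning idea this snippet refers to (keeping identity information in a face representation while suppressing sensitive attributes) can be illustrated with a gradient-reversal sketch. The following is a minimal, hypothetical PyTorch example, not the SensitiveNets implementation; the class names, embedding dimension, and single linear adversary are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips (and scales) the gradient on backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class PrivacyAwareHeads(nn.Module):
    """One identity head (primary task) plus an adversarial sensitive-attribute head."""
    def __init__(self, emb_dim=512, n_ids=1000, n_attrs=2, lam=1.0):
        super().__init__()
        self.lam = lam
        self.id_head = nn.Linear(emb_dim, n_ids)      # primary task: identity
        self.attr_head = nn.Linear(emb_dim, n_attrs)  # adversary: sensitive attribute

    def forward(self, emb):
        id_logits = self.id_head(emb)
        # The adversary sees the embedding through gradient reversal, so minimizing
        # its cross-entropy pushes the upstream encoder to *remove* attribute cues.
        attr_logits = self.attr_head(GradReverse.apply(emb, self.lam))
        return id_logits, attr_logits

# Conceptual usage: the total loss is CE(id_logits, id_labels) + CE(attr_logits, attr_labels);
# the reversed gradient trains the shared embedding to stay uninformative about the attribute.
heads = PrivacyAwareHeads()
id_logits, attr_logits = heads(torch.randn(8, 512))
```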

Face Recognition: Too Bias, or Not Too Bias?

Joseph P Robinson, Gennady Livitz, Yann Henon, Can Qin, Yun Fu, Samson Timoner
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
We reveal critical insights into problems of bias in state-of-the-art facial recognition (FR) systems using a novel Balanced Faces In the Wild (BFW) dataset: data balanced for gender and ethnic groups.  ...  We show variations in the optimal scoring threshold for face-pairs across different subgroups.  ...  Conclusion We introduce the Balanced Faces In the Wild (BFW) dataset with eight subgroups balanced across gender and ethnicity.  ... 
doi:10.1109/cvprw50498.2020.00008 dblp:conf/cvpr/RobinsonLHQ0T20 fatcat:g5dkzhys2jcc7iasys3qsdxli4
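The observation that the optimal scoring threshold varies across subgroups amounts to a simple per-group calibration: pick a verification threshold per subgroup rather than one global value. The sketch below is a hypothetical illustration with synthetic cosine-similarity scores and an assumed target false match rate of 1e-3; it is not the BFW evaluation protocol.

```python
import numpy as np

def threshold_at_fmr(scores, is_genuine, target_fmr=1e-3):
    """Threshold at which roughly `target_fmr` of impostor pairs would be accepted."""
    impostor = scores[~is_genuine]
    return float(np.quantile(impostor, 1.0 - target_fmr))

def per_subgroup_thresholds(scores, is_genuine, subgroup, target_fmr=1e-3):
    """Calibrate one verification threshold per demographic subgroup."""
    return {
        g: threshold_at_fmr(scores[subgroup == g], is_genuine[subgroup == g], target_fmr)
        for g in np.unique(subgroup)
    }

# Synthetic demo: subgroup "B" impostor scores run higher, so a single global
# threshold would give it an inflated false match rate; per-subgroup calibration
# assigns it a stricter (higher) threshold instead.
rng = np.random.default_rng(0)
n = 20000
subgroup = np.repeat(np.array(["A", "B"]), n // 2)
is_genuine = rng.random(n) < 0.1
means = np.where(is_genuine, 0.7, 0.3) + np.where(subgroup == "B", 0.05, 0.0)
scores = rng.normal(means, 0.1)
print(per_subgroup_thresholds(scores, is_genuine, subgroup))
```

With one global threshold, the subgroup whose impostor scores run higher would suffer a higher false match rate; that disparity is exactly what per-subgroup analysis exposes.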

Beyond Identity: What Information Is Stored in Biometric Face Templates? [article]

Philipp Terhörst, Daniel Fährmann, Naser Damer, Florian Kirchbuchner, Arjan Kuijper
2020 arXiv   pre-print
Knowing the encoded information in face templates helps to develop bias-mitigating and privacy-preserving face recognition technologies.  ...  Since face recognition systems aim to be robust against these variations, future research might build on this work to develop more understandable privacy preserving solutions and build robust and fair  ...  the National Research Center for Applied Cybersecurity (ATHENE), and in part by the German Federal Ministry of Education and Research (BMBF) through the Software Campus project.  ... 
arXiv:2009.09918v1 fatcat:zguyooc36bavdgc5ubbzbquyya

Face Recognition: Too Bias, or Not Too Bias? [article]

Joseph P Robinson, Gennady Livitz, Yann Henon, Can Qin, Yun Fu, Samson Timoner
2020 arXiv   pre-print
We reveal critical insights into problems of bias in state-of-the-art facial recognition (FR) systems using a novel Balanced Faces In the Wild (BFW) dataset: data balanced for gender and ethnic groups.  ...  We show variations in the optimal scoring threshold for face-pairs across different subgroups.  ...  Conclusion We introduce the Balanced Faces In the Wild (BFW) dataset with eight subgroups balanced across gender and ethnicity.  ... 
arXiv:2002.06483v4 fatcat:464dhibgjvfzvaz2f3wlk2blgm

Learning Emotional-Blinded Face Representations [article]

Alejandro Peña and Julian Fierrez and Agata Lapedriza and Aythami Morales
2020 arXiv   pre-print
in terms of fairness and privacy.  ...  The results demonstrate that it is possible to reduce emotional information in the face representation while retaining competitive performance in other face-based artificial intelligence tasks.  ...  Problem Formulation We employ the privacy-preserving learning framework shown in Fig. 2 and detailed in [24].  ... 
arXiv:2009.08704v1 fatcat:l3ro5hluujhlbi62q3s2r2ykhu

A Comparative Study on the Privacy Risks of Face Recognition Libraries

István Fábián, Gábor György Gulyás
2021 Acta Cybernetica  
Our experiments were conducted on a balanced face image dataset of different sexes and races, allowing us to discover biases in our results.  ...  We consider risks related to the processing of face embeddings, which are floating point vectors representing the human face in an identifying way.  ...  Acknowledgments The authors would also like to thank Kenéz Csiktusnádi-Kiss for his work and support in this research.  ... 
doi:10.14232/actacyb.289662 fatcat:iej6sm3osbfbriowumdoit7nty
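To make the risk discussed above concrete, a face embedding is just a float vector, yet it can be linked back to an enrolled identity by nearest-neighbour search, so leaking embeddings is roughly as sensitive as leaking the images themselves. The toy example below uses random data; the dimensions, threshold, and identifiers are illustrative assumptions, not values from the study.

```python
import numpy as np

def reidentify(leaked_emb, gallery_embs, gallery_ids, threshold=0.5):
    """Link a leaked embedding to the closest enrolled identity via cosine similarity."""
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    q = leaked_emb / np.linalg.norm(leaked_emb)
    sims = g @ q                       # cosine similarity to every enrolled template
    best = int(np.argmax(sims))
    return (gallery_ids[best], float(sims[best])) if sims[best] >= threshold else None

rng = np.random.default_rng(1)
gallery = rng.normal(size=(100, 512))              # 100 enrolled identities (random stand-ins)
ids = [f"person_{i:03d}" for i in range(100)]
leaked = gallery[42] + 0.1 * rng.normal(size=512)  # noisy copy of one enrolled template
print(reidentify(leaked, gallery, ids))            # -> ('person_042', ~0.99)
```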

On Soft-Biometric Information Stored in Biometric Face Embeddings

Philipp Terhörst, Daniel Fährmann, Naser Damer, Florian Kirchbuchner, Arjan Kuijper
2021 IEEE Transactions on Biometrics Behavior and Identity Science  
This raises privacy and bias concerns in face recognition.  ...  We hope our findings will guide future works to develop more privacy-preserving and bias-mitigating face recognition technologies.  ...  This might lead to biased decisions in face recognition systems and raises major privacy issues.  ... 
doi:10.1109/tbiom.2021.3093920 fatcat:e3sddmvl55fvxducyonoecfkxe
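Studies of this kind typically quantify soft-biometric leakage with an attribute probe: freeze the recognition embeddings and train a simple classifier to predict the attribute from them; high probe accuracy means the attribute is encoded in the template even though the network was trained only for recognition. The sketch below uses synthetic data and a logistic-regression probe as a stand-in; the 512-dimensional embeddings and the attribute construction are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for face-recognition embeddings and a binary attribute that
# happens to be (partly) encoded in one embedding dimension.
rng = np.random.default_rng(2)
embeddings = rng.normal(size=(2000, 512))
attribute = (embeddings[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(embeddings, attribute, test_size=0.25, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"attribute probe accuracy: {probe.score(X_te, y_te):.2f}")  # well above 0.5 chance
```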

Rethinking Common Assumptions to Mitigate Racial Bias in Face Recognition Datasets [article]

Matthew Gwilliam, Srinidhi Hegde, Lade Tinubu, Alex Hanson
2021 arXiv   pre-print
In our experiments, training on only African faces induced less bias than training on a balanced distribution of faces, and distributions skewed to include more African faces produced more equitable models  ...  Exceptions to this are BUPT-Balancedface/RFW and Fairface, but these works assume that primarily training on a single race or not racially balancing the dataset are inherently disadvantageous.  ...  Acknowledgments: Alex Hanson was supported by the NDSEG fellowship.  ... 
arXiv:2109.03229v4 fatcat:nlfe7653dzdunjx7uokprrvyvu

Anatomizing Bias in Facial Analysis

Richa Singh, Puspita Majumdar, Surbhi Mittal, Mayank Vatsa
2022 Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-22)  
We also discuss open challenges in the field of biased facial analysis.  ...  This has led to research in the identification and mitigation of bias in AI systems. In this paper, we encapsulate bias detection/estimation and mitigation algorithms for facial analysis.  ...  Most face recognition databases collected in the wild lack annotation information for protected attributes such as race and gender.  ... 
doi:10.1609/aaai.v36i11.21500 fatcat:lbuwkwaganfkxlzy55ir5lyngi

Privacy–Enhancing Face Biometrics: A Comprehensive Survey

Blaz Meden, Peter Rot, Philipp Terhorst, Naser Damer, Arjan Kuijper, Walter J. Scheirer, Arun Ross, Peter Peer, Vitomir Struc
2021 IEEE Transactions on Information Forensics and Security  
Face recognition technology, in particular, has been in the spotlight, and is now seen by many as posing a considerable risk to personal privacy.  ...  The goal of this overview paper is to provide a comprehensive introduction into privacy-related research in the area of biometrics and review existing work on Biometric Privacy-Enhancing Techniques (B-PETs  ... 
doi:10.1109/tifs.2021.3096024 fatcat:z5kvij6g7vgx3b24narxdyp2py

Challenges and Opportunities for Machine Learning Classification of Behavior and Mental State from Images [article]

Peter Washington, Cezmi Onur Mutlu, Aaron Kline, Kelley Paskov, Nate Tyler Stockham, Brianna Chrisman, Nick Deveau, Mourya Surhabi, Nick Haber, Dennis P. Wall
2022 arXiv   pre-print
Here, we discuss the challenges and corresponding opportunities in this space, including handling heterogeneous data, avoiding biased models, labeling massive and repetitive data sets, working with ambiguous  ...  or compound class labels, managing privacy concerns, creating appropriate representations, and personalizing models.  ...  Acknowledgements This work was supported in part by funds to DPW from the National Institutes of Health (1R01EB025025-01, 1R21HD091500-01, 1R01LM013083  ... 
arXiv:2201.11197v1 fatcat:fvhzvvctn5drtooemem4u6qloi

Jointly De-biasing Face Recognition and Demographic Attribute Estimation [article]

Sixue Gong, Xiaoming Liu, Anil K. Jain
2020 arXiv   pre-print
We address the problem of bias in automated face recognition and demographic attribute estimation algorithms, where errors are lower on certain cohorts belonging to specific demographic groups.  ...  The proposed network consists of one identity classifier and three demographic classifiers (for gender, age, and race) that are trained to distinguish identity and demographic attributes, respectively.  ...  Moreover, since the features are disentangled into the demographic and identity, our face representations also contribute to privacy-preserving applications.  ... 
arXiv:1911.08080v4 fatcat:2cavhrnfezggjh6jffhpxzhfoy
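The architecture this snippet describes, a shared representation feeding one identity classifier and three demographic classifiers (gender, age, race), can be outlined in a few lines. The following is a hypothetical PyTorch skeleton of that multi-head layout only; it omits the paper's disentanglement and adversarial training, and the backbone, dimensions, and class counts are placeholders.

```python
import torch
import torch.nn as nn

class MultiHeadFaceModel(nn.Module):
    """Shared encoder with one identity head and three demographic heads."""
    def __init__(self, emb_dim=512, n_ids=10000, n_genders=2, n_age_bins=8, n_races=4):
        super().__init__()
        self.encoder = nn.Sequential(                 # stand-in for a real CNN backbone
            nn.Flatten(), nn.Linear(3 * 112 * 112, emb_dim), nn.ReLU()
        )
        self.id_head = nn.Linear(emb_dim, n_ids)
        self.gender_head = nn.Linear(emb_dim, n_genders)
        self.age_head = nn.Linear(emb_dim, n_age_bins)
        self.race_head = nn.Linear(emb_dim, n_races)

    def forward(self, x):
        z = self.encoder(x)                           # shared face representation
        return {
            "identity": self.id_head(z),
            "gender": self.gender_head(z),
            "age": self.age_head(z),
            "race": self.race_head(z),
        }

model = MultiHeadFaceModel()
out = model(torch.randn(4, 3, 112, 112))              # batch of 4 aligned 112x112 face crops
print({k: tuple(v.shape) for k, v in out.items()})
```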

PASS: Protected Attribute Suppression System for Mitigating Bias in Face Recognition [article]

Prithviraj Dhar, Joshua Gleason, Aniket Roy, Carlos D. Castillo, Rama Chellappa
2021 arXiv   pre-print
Such encoding has two major issues: (a) it makes the face representations susceptible to privacy leakage, and (b) it appears to contribute to bias in face recognition.  ...  We show the efficacy of PASS to reduce gender and skintone information in descriptors from SOTA face recognition networks like Arcface.  ...  information in face descriptors, and thus considerably reduce the associated biases.  ... 
arXiv:2108.03764v1 fatcat:eiehasmayrfnlfdxjzi7wgqu3y

Assessing Dataset Bias in Computer Vision [article]

Athiya Deviyani
2022 arXiv   pre-print
These biases have the tendency to propagate to the models that train on them, often leading to a poor performance in the minority class.  ...  This signifies that the model was also able to mitigate the biases present in the baseline model that was trained on the original training set.  ...  The UTKFace dataset [80] consists of 20K+ face images in the wild which are readily cropped and aligned, with the respective age, gender, and ethnicity labels.  ... 
arXiv:2205.01811v1 fatcat:nerm3uxlbngqfbi7fsll6zjtre
Showing results 1 — 15 out of 4,179 results