21,875 Hits in 4.7 sec

Multi-Dimensional Gender Bias Classification [article]

Emily Dinan, Angela Fan, Ledell Wu, Jason Weston, Douwe Kiela, Adina Williams
2020 arXiv   pre-print
NLP models can inadvertently learn socially undesirable patterns when training on gender biased text.  ...  In this work, we propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions: bias from the gender of the person being spoken about, bias from the gender  ...  In this work, we make four main contributions: we propose a multi-dimensional framework (ABOUT, AS, TO) for measuring and mitigating gender bias in language and NLP models, we introduce an evaluation  ... 
arXiv:2005.00614v1 fatcat:o3lgzjeouvhepmp6bkmw2jk7jm

Multi-Dimensional Gender Bias Classification

Emily Dinan, Angela Fan, Ledell Wu, Jason Weston, Douwe Kiela, Adina Williams
2020 Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)   unpublished
NLP models can inadvertently learn socially undesirable patterns when training on gender biased text.  ...  the gender of the person being spoken to, and bias from the gender of the speaker.  ...  This paper makes four novel contributions: (i) we propose a multi-dimensional framework (ABOUT, AS, TO) for measuring and mitigating gender bias in language and NLP models, (ii) we introduce an evaluation  ... 
doi:10.18653/v1/2020.emnlp-main.23 fatcat:y5h6zpjlinb2dif3nk5j64uxgq
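
The ABOUT/AS/TO decomposition suggests a classifier with one prediction head per dimension over a shared text encoder. The sketch below is an illustrative PyTorch-style module under that assumption; the encoder, label sizes, and equal loss weighting are hypothetical choices, not the authors' released implementation.

    # Illustrative sketch only: shared text encoder with one classification head
    # per bias dimension (ABOUT / AS / TO); names and label sizes are assumed.
    import torch
    import torch.nn as nn

    class MultiDimGenderBiasClassifier(nn.Module):
        def __init__(self, encoder: nn.Module, hidden_dim: int, n_labels: int = 3):
            super().__init__()
            self.encoder = encoder  # any text encoder producing (batch, hidden_dim)
            self.heads = nn.ModuleDict({
                dim: nn.Linear(hidden_dim, n_labels)  # one head per dimension
                for dim in ("about", "as", "to")
            })

        def forward(self, inputs):
            h = self.encoder(inputs)  # (batch, hidden_dim)
            return {dim: head(h) for dim, head in self.heads.items()}

    def total_loss(logits, labels):
        # Equal-weighted sum of per-dimension cross-entropy losses (assumption).
        ce = nn.CrossEntropyLoss()
        return sum(ce(logits[d], labels[d]) for d in logits)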

Towards Socially Responsible AI: Cognitive Bias-Aware Multi-Objective Learning [article]

Procheta Sen, Debasis Ganguly
2020 arXiv   pre-print
In contrast, our proposed bias-aware multi-objective learning methodology is shown to reduce such biases in the predicted emotions.  ...  To alleviate this problem, we propose a bias-aware multi-objective learning framework that, given a set of identity attributes (e.g., gender, ethnicity, etc.) and a subset of sensitive categories of the possible  ...  'Bias-Awr-Joint' (Bias-aware Multi-objective Joint Learning): In this variant, we use both the ethnicity-emotion and the gender-emotion pairs to define two sets of biased response generation variables,  ... 
arXiv:2005.06618v2 fatcat:2y7i4vybsje67pwmgtylai5zky

Towards Socially Responsible AI: Cognitive Bias-Aware Multi-Objective Learning

Procheta Sen, Debasis Ganguly
2020 Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-20)  
In contrast, our proposed bias-aware multi-objective learning methodology is shown to reduce such biases in the predicted emotions.  ...  To alleviate this problem, we propose a bias-aware multi-objective learning framework that, given a set of identity attributes (e.g., gender, ethnicity, etc.) and a subset of sensitive categories of the possible  ...  'Bias-Awr-Joint' (Bias-aware Multi-objective Joint Learning): In this variant, we use both the ethnicity-emotion and the gender-emotion pairs to define two sets of biased response generation variables,  ... 
doi:10.1609/aaai.v34i03.5654 fatcat:rmaguvhr6fc2dhqtkrkusztd6e
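
The 'Bias-Awr-Joint' variant described above pairs sensitive emotion categories with both gender and ethnicity attributes inside one multi-objective loss. The snippet does not give the exact formulation, so the sketch below shows only one plausible way to combine a primary emotion loss with per-attribute group-gap penalties; the penalty form and its weight are assumptions.

    # Minimal sketch (not the paper's exact objective): primary cross-entropy plus
    # a penalty on how far each group's mean probability mass on the sensitive
    # emotion categories drifts from the overall mean, summed over attributes.
    import torch
    import torch.nn.functional as F

    def bias_aware_loss(logits, targets, groups, sensitive_classes, lam=0.1):
        # logits: (batch, n_emotions); targets: (batch,)
        # groups: {'gender': (batch,) group ids, 'ethnicity': (batch,) group ids}
        primary = F.cross_entropy(logits, targets)
        sens = logits.softmax(dim=-1)[:, sensitive_classes]  # (batch, |sensitive|)
        penalty = logits.new_zeros(())
        overall = sens.mean()
        for ids in groups.values():
            for g in ids.unique():
                penalty = penalty + (sens[ids == g].mean() - overall).abs()
        return primary + lam * penalty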

Marked Attribute Bias in Natural Language Inference [article]

Hillary Dawkins
2021 arXiv   pre-print
We present a new observation of gender bias in a downstream NLP application: marked attribute bias in natural language inference.  ...  However, focusing on biased word embeddings is potentially the most impactful first step due to their universal nature.  ...  We propose a new measure for quantifying intrinsic bias on the embedding space: Multi-dimensional Information-weighted Direct Bias (MIDB).  ... 
arXiv:2109.14039v1 fatcat:ph27cczaqbcuxjazkrsjdvlvk4
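
MIDB is described only as a multi-dimensional, information-weighted extension of direct-bias-style measures, and the snippet does not specify the weighting. For orientation, the sketch below computes the classic single-direction Direct Bias of Bolukbasi et al. (2016), which such intrinsic measures build on; it is not MIDB itself.

    # Classic Direct Bias: mean |cos(w, g)|^c over gender-neutral word vectors w
    # and a gender direction g. MIDB's weighting and extra dimensions are omitted.
    import numpy as np

    def direct_bias(neutral_vectors: np.ndarray, gender_direction: np.ndarray,
                    c: float = 1.0) -> float:
        g = gender_direction / np.linalg.norm(gender_direction)
        W = neutral_vectors / np.linalg.norm(neutral_vectors, axis=1, keepdims=True)
        return float(np.mean(np.abs(W @ g) ** c))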

Debiasing Embeddings for Reduced Gender Bias in Text Classification [article]

Flavien Prost, Nithum Thain, Tolga Bolukbasi
2019 arXiv   pre-print
(Bolukbasi et al., 2016) demonstrated that pretrained word embeddings can inherit gender bias from the data they were trained on.  ...  We investigate how this bias affects downstream classification tasks, using the case study of occupation classification (De-Arteaga et al., 2019).  ...  For the overall performance of these models, we will use the standard accuracy metric of multi-class classification.  ... 
arXiv:1908.02810v1 fatcat:ifiqiiuxlzcivdgzxtivtcqywe
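
The debiasing baseline most commonly associated with this line of work is the "neutralize" projection step of Bolukbasi et al. (2016), which removes the component of each embedding along a gender direction. The sketch below shows that step only; it is a generic baseline, not necessarily the exact debiasing variant evaluated in this paper.

    # Neutralize step: subtract each vector's projection onto the gender direction.
    import numpy as np

    def neutralize(vectors: np.ndarray, gender_direction: np.ndarray) -> np.ndarray:
        # vectors: (n_words, dim); gender_direction: (dim,), e.g. v('he') - v('she')
        g = gender_direction / np.linalg.norm(gender_direction)
        projection = (vectors @ g)[:, None] * g[None, :]
        return vectors - projection  # gender-neutralized embeddings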

Finding Good Representations of Emotions for Text Classification [article]

Ji Ho Park
2018 arXiv   pre-print
We address the issue of gender bias in various neural network models by conducting experiments to measure and reduce those biases in the representations in order to build more robust classification models  ...  toward certain identities like different genders.  ...  Reducing Gender Bias: We experiment with various methods to reduce the gender biases identified in Section 5.3.1.  ... 
arXiv:1808.07235v1 fatcat:uusufxzxdbgtjkxthxf3pbd65y

Deeply Learned Rich Coding for Cross-Dataset Facial Age Estimation

Zhanghui Kuang, Chen Huang, Wei Zhang
2015 2015 IEEE International Conference on Computer Vision Workshop (ICCVW)  
CNN training is supervised by rich binary codes and is thus modeled as a multi-label classification problem.  ...  The codes represent different age group partitions at multiple granularities, and also gender information.  ...  Our rich codes not only characterize multi-source information of gender and age groups at several granularities, but also alleviate the problem caused by the labeling bias of multiple training datasets  ... 
doi:10.1109/iccvw.2015.52 dblp:conf/iccvw/KuangHZ15 fatcat:7zrgjkzsefgtfifaubcwsjnm6u
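
The snippet says the CNN is supervised with rich binary codes that encode age-group partitions at multiple granularities together with gender, turning training into multi-label classification. The sketch below shows one plausible way to build such a code; the specific partitions are assumptions for illustration, not the paper's.

    # Hypothetical rich binary code for one face: coarse and fine age-group
    # memberships plus a gender bit, usable as a multi-label CNN target.
    import numpy as np

    AGE_PARTITIONS = [
        [(0, 20), (20, 40), (40, 60), (60, 120)],                                 # coarse
        [(0, 10), (10, 20), (20, 30), (30, 40), (40, 50), (50, 60), (60, 120)],   # fine
    ]

    def rich_binary_code(age: int, is_male: bool) -> np.ndarray:
        bits = []
        for partition in AGE_PARTITIONS:
            bits.extend(1.0 if lo <= age < hi else 0.0 for lo, hi in partition)
        bits.append(1.0 if is_male else 0.0)
        return np.array(bits, dtype=np.float32)

    # rich_binary_code(34, is_male=False) -> 12-dimensional 0/1 target vector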

Multi-Objective Few-shot Learning for Fair Classification [article]

Ishani Mondal, Procheta Sen, Debasis Ganguly
2021 arXiv   pre-print
., race, gender etc.).  ...  Our proposed method involves learning a multi-objective function that in addition to learning the primary objective of predicting the primary class labels from the data, also employs a clustering-based  ...  For Bias-Aware Supervised (BAS) methods, we employ the multi-tasking architecture of [11] along with the attribute annotations of gender and ethnicity categories.  ... 
arXiv:2110.01951v1 fatcat:rmjcv33vw5do3bdb667rn5luky

Unsupervised Domain Adaptation in Speech Recognition using Phonetic Features [article]

Rupam Ojha, C Chandra Sekhar
2021 arXiv   pre-print
In this paper, we propose a technique to perform unsupervised gender-based domain adaptation in speech recognition using phonetic features.  ...  recognition because several sources of variability exist in the speech input like the channel variations, the input might be clean or noisy, the speakers may have different accents and variations in the gender  ...  In King and Taylor [3] a 13-dimensional SPE classification system is used. Silence is also taken as one of the phonetic features.  ... 
arXiv:2108.02850v1 fatcat:mp4prfyp4repvbjtaf2qwy7qmu

Representation Learning with Statistical Independence to Mitigate Bias [article]

Ehsan Adeli, Qingyu Zhao, Adolf Pfefferbaum, Edith V. Sullivan, Li Fei-Fei, Juan Carlos Niebles, Kilian M. Pohl
2020 arXiv   pre-print
We apply our method to synthetic data, medical images (containing task bias), and a dataset for gender classification (containing dataset bias).  ...  Such challenges range from spurious associations between variables in medical studies to the bias of race in gender or face recognition systems.  ...  We evaluated our bias-resilient neural network (BR-Net) on synthetic, medical diagnosis, and gender classification datasets.  ... 
arXiv:1910.03676v4 fatcat:yf6amgpfivgarmfunohtx3l2ry
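
BR-Net is described as enforcing statistical independence between the learned representation and the bias variable. The snippet does not give its exact objective, so the sketch below shows a generic adversarial-debiasing setup with a gradient-reversal layer, one common way to approximate such independence; it should not be read as BR-Net's actual loss.

    # Generic adversarial debiasing: an auxiliary head predicts the bias variable
    # from the representation; gradient reversal trains the encoder to remove it.
    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    class BiasResilientClassifier(nn.Module):
        def __init__(self, encoder, hidden_dim, n_classes, n_bias_groups, lam=1.0):
            super().__init__()
            self.encoder, self.lam = encoder, lam
            self.task_head = nn.Linear(hidden_dim, n_classes)
            self.bias_head = nn.Linear(hidden_dim, n_bias_groups)

        def forward(self, x):
            h = self.encoder(x)
            # Train both outputs with cross-entropy; reversal debiases the encoder.
            return self.task_head(h), self.bias_head(GradReverse.apply(h, self.lam))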

Unintended Bias in Language Model-driven Conversational Recommendation [article]

Tianshu Shen, Jiaru Li, Mohamed Reda Bouadjenek, Zheda Mai, Scott Sanner
2022 arXiv   pre-print
However, pretrained LMs are well-known to be prone to intrinsic biases in their training data, which may be exacerbated by biases embedded in domain-specific language data (e.g., user reviews) used to fine-tune  ...  We study a recently introduced LM-driven recommendation backbone (termed LMRec) of a CRS to investigate how unintended bias, i.e., language variations such as name references or indirect indicators of sexual  ...  Figure 5: Two-dimensional scatter plot of the association score between item categories and each bias dimension.  ... 
arXiv:2201.06224v2 fatcat:ulcgerg33jgljcij33xtnfjoda

Multi-Task Semi-Supervised Adversarial Autoencoding for Speech Emotion Recognition [article]

Siddique Latif, Rajib Rana, Sara Khalifa, Raja Jurdak, Julien Epps, Björn W. Schuller
2020 arXiv   pre-print
In particular, we use gender identifications and speaker recognition as auxiliary tasks, which allow the use of very large datasets, e.g., speaker classification datasets.  ...  The proposed model is rigorously evaluated for categorical and dimensional emotion, and cross-corpus scenarios.  ...  sum of multi-task classification errors.  ... 
arXiv:1907.06078v5 fatcat:drhb3dfjsraxxkt73dnncqhsqa
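
The snippet indicates the supervised part of the objective is a sum of multi-task classification errors, with gender identification and speaker recognition as auxiliary tasks alongside emotion. The sketch below writes that sum explicitly; the weights are assumptions and the adversarial autoencoding terms are omitted.

    # Weighted sum of multi-task classification errors (adversarial AE terms omitted).
    import torch.nn.functional as F

    def multi_task_loss(emotion_logits, emotion_y,
                        gender_logits, gender_y,
                        speaker_logits, speaker_y,
                        w_emotion=1.0, w_gender=0.5, w_speaker=0.5):
        return (w_emotion * F.cross_entropy(emotion_logits, emotion_y)
                + w_gender * F.cross_entropy(gender_logits, gender_y)
                + w_speaker * F.cross_entropy(speaker_logits, speaker_y))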

Multi-view Gender Classification Using Local Binary Patterns and Support Vector Machines [chapter]

Hui-Cheng Lian, Bao-Liang Lu
2006 Lecture Notes in Computer Science  
In this paper, we present a novel approach to multi-view gender classification considering both shape and texture information to represent facial image.  ...  In addition, the simplicity of the proposed method leads to very fast feature extraction, and the regional histograms and global description of the face allow for multi-view gender classification.  ...  gender classification.  ... 
doi:10.1007/11760023_30 fatcat:m4gj5duedrawzk433msseirfga
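
The approach combines regional texture descriptors (LBP histograms) with an SVM classifier. The sketch below is a minimal pipeline in that spirit using scikit-image and scikit-learn; the grid size, LBP parameters, and kernel are assumptions rather than the paper's exact settings, and the shape information it also uses is omitted.

    # Regional uniform-LBP histograms over a grid of face patches, fed to an SVM.
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    P, R, N_BINS, GRID = 8, 1, 10, 4   # uniform LBP with 8 neighbours -> 10 codes

    def lbp_feature(gray_face: np.ndarray) -> np.ndarray:
        lbp = local_binary_pattern(gray_face, P, R, method="uniform")
        h, w = lbp.shape
        feats = []
        for i in range(GRID):
            for j in range(GRID):
                patch = lbp[i * h // GRID:(i + 1) * h // GRID,
                            j * w // GRID:(j + 1) * w // GRID]
                hist, _ = np.histogram(patch, bins=N_BINS, range=(0, N_BINS))
                feats.append(hist / max(hist.sum(), 1))  # normalized regional histogram
        return np.concatenate(feats)

    # X = np.stack([lbp_feature(img) for img in faces]); clf = SVC().fit(X, genders)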

Mitigating Demographic Bias in Facial Datasets with Style-Based Multi-attribute Transfer

Markos Georgopoulos, James Oldfield, Mihalis A. Nicolaou, Yannis Panagakis, Maja Pantic
2021 International Journal of Computer Vision  
In facial datasets, this particularly relates to attributes such as skin tone, gender, and age. In this work, we address the problem of mitigating bias in facial datasets by data augmentation.  ...  Clearly, deploying biased systems under real-world settings can have grave consequences for affected populations.  ...  As such, we can only address gender bias within this imposed binary classification paradigm.  ... 
doi:10.1007/s11263-021-01448-w fatcat:uqmhtfblmrc5bflzbvby36uqaa
Showing results 1 — 15 out of 21,875 results