A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
An Information-Theoretic Approach and Dataset for Probing Gender Stereotypes in Multilingual Masked Language Models
2022
Findings of the Association for Computational Linguistics: NAACL 2022
Warning: This work deals with statements of a stereotypical nature that may be upsetting. Bias research in NLP is a rapidly growing and developing field. Similar to CrowS-Pairs (Nangia et al., 2020), we assess gender bias in masked language models (MLMs) by studying pairs of sentences that are identical except that the individuals referred to have different genders. Most bias research focuses on, and is often specific to, English. Using a novel methodology for creating sentence pairs that is …
doi:10.18653/v1/2022.findings-naacl.69
fatcat:t6ajyykawvahzcndcsgpqu67ba