PowerTransformer: Unsupervised Controllable Revision for Biased Language Correction

Xinyao Ma, Maarten Sap, Hannah Rashkin, Yejin Choi
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Unconscious biases continue to be prevalent in modern text and media, calling for algorithms that can assist writers with bias correction. For example, a female character in a story is often portrayed as passive and powerless ("She daydreams about being a doctor") while a man is portrayed as more proactive and powerful ("He pursues his dream of being a doctor"). We formulate Controllable Debiasing, a new revision task that aims to rewrite a given text to correct implicit and potentially undesirable bias in character portrayals. We then introduce PowerTransformer, an approach that debiases text through the lens of connotation frames (Sap et al., 2017), which encode pragmatic knowledge of implied power dynamics with respect to verb predicates. One key challenge of our task is the lack of parallel corpora. To address this challenge, we adopt an unsupervised approach that combines auxiliary supervision from related tasks such as paraphrasing with self-supervision based on a reconstruction loss, building on pretrained language models. Through comprehensive experiments based on automatic and human evaluations, we demonstrate that our approach outperforms ablations and existing methods from related tasks. Furthermore, we demonstrate the use of PowerTransformer as a step toward mitigating the well-documented gender bias in character portrayal in movie scripts.
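The intuition behind connotation-frame-guided revision can be sketched with a toy example. The agency scores and paraphrase table below are invented for illustration; the actual system relies on the connotation frames of Sap et al. (2017) and a pretrained language model rather than a hand-written lookup:

```python
# Toy sketch: rewrite low-agency verb phrases into higher-agency ones.
# AGENCY and PARAPHRASES are hypothetical stand-ins for a connotation
# lexicon and a learned paraphrase model, not the paper's real resources.

AGENCY = {
    "daydreams about": -1.0,  # passive, low power
    "pursues": +1.0,          # proactive, high power
    "wishes for": -0.5,
    "fights for": +1.0,
}

PARAPHRASES = {
    # low-agency phrase -> higher-agency paraphrase
    "daydreams about": "pursues",
    "wishes for": "fights for",
}

def boost_agency(sentence: str) -> str:
    """Replace any known low-agency verb phrase with a higher-agency one."""
    for low, high in PARAPHRASES.items():
        if low in sentence and AGENCY[high] > AGENCY[low]:
            sentence = sentence.replace(low, high)
    return sentence

print(boost_agency("She daydreams about being a doctor"))
# -> "She pursues being a doctor"
```

In the real model, the choice of replacement is made by a generator trained without parallel data, so the revision preserves meaning while shifting the implied power of the character.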
doi:10.18653/v1/2020.emnlp-main.602