Do not let the history haunt you – Mitigating Compounding Errors in Conversational Question Answering [article]

Angrosh Mandya, James O'Neill, Danushka Bollegala, Frans Coenen
2020 arXiv pre-print
The Conversational Question Answering (CoQA) task involves answering a sequence of inter-related conversational questions about a contextual paragraph. Although existing approaches employ human-written ground-truth answers for answering conversational questions at test time, in a realistic scenario, the CoQA model will not have any access to ground-truth answers for the previous questions, compelling the model to rely upon its own previously predicted answers for answering the subsequent questions. In this paper, we find that compounding errors occur when using previously predicted answers at test time, significantly lowering the performance of CoQA systems. To solve this problem, we propose a sampling strategy that dynamically selects between target answers and model predictions during training, thereby closely simulating the situation at test time. Further, we analyse the severity of this phenomenon as a function of the question type, conversation length and domain type.
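The dynamic selection between target answers and model predictions described above can be sketched as a scheduled-sampling-style history builder. This is a minimal illustration, not the authors' exact implementation; the function name `build_history` and the fixed per-turn probability `gold_prob` are assumptions for the sketch (in practice the probability would typically be annealed over training).

```python
import random

def build_history(gold_answers, predicted_answers, gold_prob, rng=None):
    """Assemble the conversation-history answers for one training example.

    For each previous turn, keep the gold answer with probability
    `gold_prob`, otherwise substitute the model's own prediction.
    Decaying `gold_prob` over training exposes the model to its own
    (possibly erroneous) answers, simulating test-time conditions and
    mitigating compounding errors.
    """
    rng = rng or random.Random()
    history = []
    for gold, pred in zip(gold_answers, predicted_answers):
        history.append(gold if rng.random() < gold_prob else pred)
    return history
```

With `gold_prob=1.0` this reduces to standard teacher forcing on ground-truth history, while `gold_prob=0.0` trains entirely on the model's own predictions.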
arXiv:2005.05754v1 fatcat:52r5qatel5eudp5n6gemeudr7a