A Linguistic Analysis of Visually Grounded Dialogues Based on Spatial Expressions

Takuma Udagawa, Takato Yamazaki, Akiko Aizawa
2020 arXiv pre-print
Recent models achieve promising results in visually grounded dialogues. However, existing datasets often contain undesirable biases and lack sophisticated linguistic analyses, which makes it difficult to understand how well current models recognize their precise linguistic structures. To address this problem, we make two design choices: first, we focus on OneCommon Corpus, a simple yet challenging common grounding dataset which contains minimal bias by design. Second, we analyze their structures based on spatial expressions and provide comprehensive and reliable annotation for 600 dialogues. We show that our annotation captures important linguistic structures including predicate-argument structure, modification and ellipsis. In our experiments, we assess the models' understanding of these structures through reference resolution. We demonstrate that our annotation can reveal both the strengths and weaknesses of baseline models at essential levels of detail. Overall, we propose a novel framework and resource for investigating fine-grained language understanding in visually grounded dialogues.
arXiv:2010.03127v1