ArraMon: A Joint Navigation-Assembly Instruction Interpretation Task in Dynamic Environments

Hyounghun Kim, Abhaysinh Zala, Graham Burri, Hao Tan, Mohit Bansal
Findings of the Association for Computational Linguistics: EMNLP 2020
For embodied agents, navigation is an important ability but not an isolated goal. Agents are also expected to perform specific tasks after reaching the target location, such as picking up objects and assembling them into a particular arrangement. We combine Vision-and-Language Navigation, the assembly of collected objects, and object referring expression comprehension to create a novel joint navigation-and-assembly task, named ARRAMON. During this task, the agent (similar to a PokéMON GO player) is asked to find and collect different target objects one-by-one by navigating based on natural language (English) instructions in a complex, realistic outdoor environment, but then also to ARRAnge the collected objects part-by-part in an egocentric grid-layout environment. To support this task, we implement a 3D dynamic environment simulator and collect a dataset with human-written navigation and assembling instructions, and the corresponding ground-truth trajectories. We also filter the collected instructions via a verification stage, leading to a total of 7.7K task instances (30.8K instructions and paths). We present results for several baseline models (integrated and biased) and metrics (nDTW, CTC, rPOD, and PTC), and the large model-human performance gap demonstrates that our task is challenging and presents a wide scope for future work.
doi:10.18653/v1/2020.findings-emnlp.348
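
The abstract lists nDTW (normalized Dynamic Time Warping; Ilharco et al., 2019) among the evaluation metrics. As a rough illustration of how that path-similarity score is typically computed, the following is a minimal Python sketch of the standard nDTW formulation, nDTW = exp(-DTW(R, Q) / (|R| * d_th)); the Euclidean distance function and the success_threshold value are assumptions from the original nDTW paper, not necessarily the exact variant used for ArraMon.

    import math

    def ndtw(reference, query, success_threshold=3.0):
        """nDTW between a reference path and a predicted path,
        each given as a list of (x, y) positions.

        Sketch of Ilharco et al. (2019):
        nDTW = exp(-DTW(R, Q) / (|R| * d_th)),
        where d_th is the success distance threshold (assumed value here).
        """
        n, m = len(reference), len(query)
        # dtw[i][j] = minimal cumulative cost of aligning
        # reference[:i] with query[:j]
        dtw = [[math.inf] * (m + 1) for _ in range(n + 1)]
        dtw[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = math.dist(reference[i - 1], query[j - 1])
                dtw[i][j] = cost + min(dtw[i - 1][j],      # skip a query point
                                       dtw[i][j - 1],      # skip a reference point
                                       dtw[i - 1][j - 1])  # match both
        return math.exp(-dtw[n][m] / (n * success_threshold))

    # Usage: identical paths have zero DTW cost, so nDTW = 1.0
    print(ndtw([(0, 0), (1, 0), (2, 0)], [(0, 0), (1, 0), (2, 0)]))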