Learning Representations Specialized in Spatial Knowledge: Leveraging Language and Vision

Guillem Collell, Marie-Francine Moens
2018 Transactions of the Association for Computational Linguistics  
Spatial understanding is crucial in many real-world problems, yet little progress has been made towards building representations that capture spatial knowledge. Here, we move one step forward in this direction and learn such representations by leveraging a task that consists of predicting continuous 2D spatial arrangements of objects given object-relationship-object instances (e.g., "cat under chair") and a simple neural network model that learns the task from annotated images. We show that the model succeeds in this task and, furthermore, that it is capable of predicting correct spatial arrangements for unseen objects if either CNN features or word embeddings of the objects are provided. The differences between visual and linguistic features are discussed. Next, to evaluate the spatial representations learned in the previous task, we introduce a task and a dataset consisting of crowdsourced human ratings of spatial similarity for object pairs. We find that both CNN (convolutional neural network) features and word embeddings predict human judgments of similarity well, and that these vectors can be further specialized in spatial knowledge if we update them while training the model that predicts spatial arrangements of objects. Overall, this paper paves the way towards building distributed spatial representations, contributing to the understanding of spatial expressions in language.
doi:10.1162/tacl_a_00010
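The abstract describes a "simple neural network" that maps an object-relationship-object triple, represented by vectors (CNN features or word embeddings), to a continuous 2D spatial arrangement. The sketch below is a hypothetical minimal instance of such a model, not the paper's actual architecture: a one-hidden-layer MLP over the concatenated triple vectors that outputs the predicted 2D center of the object relative to the subject. The embedding size, hidden size, and output parameterization are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB = 50  # illustrative embedding size; the paper uses CNN features or word embeddings

def init_mlp(in_dim, hidden, out_dim):
    """Randomly initialize a one-hidden-layer MLP (a plausible 'simple neural network')."""
    return {
        "W1": rng.normal(0.0, 0.1, (in_dim, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(0.0, 0.1, (hidden, out_dim)),
        "b2": np.zeros(out_dim),
    }

def predict_arrangement(params, subj_vec, rel_vec, obj_vec):
    """Map an (object, relationship, object) triple to a continuous 2D arrangement,
    here parameterized (as an assumption) as the object's (x, y) center offset
    relative to the subject."""
    x = np.concatenate([subj_vec, rel_vec, obj_vec])
    h = np.maximum(0.0, x @ params["W1"] + params["b1"])  # ReLU hidden layer
    return h @ params["W2"] + params["b2"]                # predicted 2D coordinates

# Example triple "cat under chair" with stand-in random embeddings.
params = init_mlp(3 * EMB, 64, 2)
cat, under, chair = (rng.normal(size=EMB) for _ in range(3))
xy = predict_arrangement(params, cat, under, chair)
```

Training such a model on annotated images (and back-propagating into the input vectors) is what, per the abstract, specializes the embeddings in spatial knowledge; the forward pass above only illustrates the input-output shape of the task.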