A copy of this work was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2019.
The file type is application/pdf.
Toward Interactive Grounded Language Acquisition
2013
Robotics: Science and Systems IX
This paper addresses the problem of enabling robots to interactively learn visual and spatial models from multi-modal interactions involving speech, gesture, and images. Our approach, called Logical Semantics with Perception (LSP), provides a natural and intuitive interface by significantly reducing the amount of supervision that a human is required to provide. This paper demonstrates LSP in an interactive setting. Given speech and gesture input, LSP is able to learn object and relation […]
doi:10.15607/rss.2013.ix.005
dblp:conf/rss/KollarKS13
fatcat:l2g2set4zzak7mgaaofctie5xi