A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
Creating Multimodal Interactive Agents with Imitation and Self-Supervised Learning
[article]
2022
arXiv
pre-print
A common vision from science fiction is that robots will one day inhabit our physical spaces, sense the world as we do, assist with our physical labours, and communicate with us through natural language. Here we study how to design artificial agents that can interact naturally with humans, using the simplification of a virtual environment. We show that imitation learning of human-human interactions in a simulated world, in conjunction with self-supervised learning, is sufficient to produce a […]
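The core technique named in the abstract, imitation learning from human demonstrations, amounts to supervised learning of a policy on recorded (observation, action) pairs, i.e. behavioral cloning. The sketch below illustrates that idea only; it is not the paper's actual agent (MIA uses a large multimodal architecture), and every name and the synthetic demonstration data are hypothetical.

```python
import numpy as np

# Behavioral-cloning sketch: fit a policy to imitate a demonstrator's
# action choices from observations. Purely illustrative; the paper's
# agent is far richer than this linear model.

rng = np.random.default_rng(0)

# Synthetic "human demonstrations": the demonstrator's action is the
# index of the largest of the first n_actions observation features,
# so a linear policy can represent it exactly.
n_demos, obs_dim, n_actions = 500, 8, 4
observations = rng.normal(size=(n_demos, obs_dim))
actions = observations[:, :n_actions].argmax(axis=1)

# Linear softmax policy trained with cross-entropy on the
# demonstration data (supervised imitation).
W = np.zeros((obs_dim, n_actions))
lr = 0.5
one_hot = np.eye(n_actions)[actions]
for _ in range(300):
    logits = observations @ W
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    grad = observations.T @ (probs - one_hot) / n_demos
    W -= lr * grad

accuracy = ((observations @ W).argmax(axis=1) == actions).mean()
print(f"imitation accuracy on demonstrations: {accuracy:.2f}")
```

Because the demonstrator here is linearly realizable, the cloned policy recovers it almost perfectly; real human-human interaction data, as in the paper, is far noisier and motivates combining imitation with self-supervised objectives.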
arXiv:2112.03763v2
fatcat:xegc5kw4cncmtdnxlzlflfbaau