Using natural language feedback in a neuro-inspired integrated multimodal robotic architecture

Johannes Twiefel, Xavier Hinaut, Marcelo Borghetti, Erik Strahl, Stefan Wermter
2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)
In this paper we present a multimodal human-robot interaction architecture that combines information coming from different sensory inputs and generates feedback for the user, which implicitly teaches him/her how to interact with the robot. The system combines vision, speech, and language with inference and feedback. The system environment consists of a Nao robot that has to learn objects situated on a table solely by understanding absolute and relative object locations uttered by the user, and that afterwards points at a desired object to show what it has learned. The results of a user study and performance test show the usefulness of the feedback produced by the system and justify its use in real-world applications, as its classification accuracy on multimodal input is around 80.8%. In the experiments, the system was able to detect inconsistent input coming from different sensory modules in all cases and could generate useful feedback for the user from this information.
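To make the abstract's core mechanism concrete, here is a minimal sketch of the general idea of fusing hypotheses from independent sensory modules and flagging inconsistencies so that corrective feedback can be generated for the user. This is not the authors' implementation; all names (`Hypothesis`, `fuse`) and the feedback wording are hypothetical illustrations of the approach described above.

```python
# A hedged sketch of multimodal fusion with inconsistency detection.
# Not the paper's architecture; names and messages are illustrative only.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One sensory module's guess about which object the user means."""
    object_id: str      # e.g. "red_cup"
    confidence: float   # in [0, 1]

def fuse(vision: Hypothesis, speech: Hypothesis) -> tuple[str | None, str]:
    """Combine vision and speech hypotheses; return (object, feedback)."""
    if vision.object_id == speech.object_id:
        # Modules agree: accept the object and confirm the action.
        return vision.object_id, f"Pointing at the {vision.object_id}."
    # Modules disagree: reject the input and produce feedback that
    # implicitly teaches the user how to rephrase the request.
    return None, (
        f"I saw the {vision.object_id}, but I understood "
        f"'{speech.object_id}'. Could you repeat the object's location?"
    )

# Usage example: conflicting vision and speech hypotheses.
obj, feedback = fuse(Hypothesis("red_cup", 0.9), Hypothesis("blue_ball", 0.7))
print(obj, "->", feedback)
```

In the paper's setting, such disagreement detection is what allows the system to catch inconsistent input across modalities in all cases and to turn each conflict into user-directed feedback rather than a silent misclassification.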
doi:10.1109/roman.2016.7745090 dblp:conf/ro-man/TwiefelHBSW16 fatcat:bd2d7vdq2fenfmjavbg426gtai