Multi-modal human robot interaction for map generation

S.S. Ghidary, Y. Nakata, H. Saito, M. Hattori, T. Takamori
Proceedings 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems. Expanding the Societal Role of Robotics in the Next Millennium (Cat. No.01CH37180)
This paper describes an interface for multi-modal human-robot interaction that enables people to teach a newcomer robot the attributes of objects and places in a room through speech commands and hand gestures. The robot builds an environment map of the room from the knowledge learned through this communication and uses the map for navigation. The developed system consists of several components, including natural language processing, posture recognition, object localization, and map generation. The system combines multiple sources of information with model matching to detect and track the human hand, so that the user can point toward an object of interest and either guide the robot toward it or register the object's position in the room. Object positions are determined with a monocular camera using a depth-from-focus method.
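The abstract does not detail the depth-from-focus step; a minimal sketch of the general technique, not the authors' implementation, is given below. It scores each frame of a focal stack with a sharpness measure (variance of the Laplacian) and reads depth off the focus setting of the sharpest frame. The `focus_distances` calibration table and the synthetic focal stack are assumptions for illustration only.

```python
import numpy as np
import cv2


def focus_measure(gray):
    """Sharpness score for one image: variance of the Laplacian.
    Higher values indicate the region is closer to being in focus."""
    lap = cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F)
    return lap.var()


def depth_from_focus(focal_stack, focus_distances):
    """Estimate the distance of a target region from a focal stack.

    focal_stack     : grayscale images of the same region, each taken
                      at a different lens focus setting
    focus_distances : object distance (metres) at which each focus
                      setting renders a sharp image (hypothetical
                      calibration data, assumed known)
    """
    scores = [focus_measure(img) for img in focal_stack]
    best = int(np.argmax(scores))        # sharpest frame in the stack
    return focus_distances[best]


if __name__ == "__main__":
    # Toy usage: synthetic stack in which the middle frame is sharpest.
    rng = np.random.default_rng(0)
    sharp = (rng.random((64, 64)) * 255).astype(np.uint8)
    stack = [
        cv2.GaussianBlur(sharp, (9, 9), 0),
        cv2.GaussianBlur(sharp, (5, 5), 0),
        sharp,
        cv2.GaussianBlur(sharp, (5, 5), 0),
        cv2.GaussianBlur(sharp, (9, 9), 0),
    ]
    distances = [0.5, 1.0, 1.5, 2.0, 2.5]  # metres, one per focus setting
    print("estimated object distance:", depth_from_focus(stack, distances))
```

In practice the region scored would be the image patch around the object the user points at, and the calibration table would come from the camera's lens characteristics; both are outside the scope of this sketch.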
doi:10.1109/iros.2001.976404 dblp:conf/iros/GhidaryNSHT01