Where to look: a study of human-robot engagement

Candace L. Sidner, Cory D. Kidd, Christopher Lee, Neal Lesh
Proceedings of the 9th International Conference on Intelligent User Interfaces (IUI '04), 2004
This paper reports on a study of human subjects with a robot designed to mimic human conversational gaze behavior in collaborative conversation. The robot and the human subject together performed a demonstration of an invention created at our laboratory; the demonstration lasted 3 to 3.5 minutes. We briefly discuss the robot architecture and then focus the paper on a study of the effects of the robot operating in two different conditions. We offer some conclusions, based on the study, about the importance of engagement for 3D IUIs. We will present video clips of the subject interactions with the robot at the conference. Our architecture for collaborative interactions uses several different systems and algorithms, largely developed at MERL. The architecture is illustrated in Figure 2. The conversational and collaborative capabilities of our robot are provided by the Collagen(TM) middleware for collaborative agents [15, 16] and commercially available speech recognition software (IBM ViaVoice). We use a face detection algorithm [20], a sound location algorithm, a speech detection algorithm, and an object recognition algorithm [1], and fuse the sensory data before
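The fusion step described above could take many forms; the abstract does not specify MERL's actual algorithm. As a minimal sketch only, the following assumes each modality (face, sound, speech, object) reports a confidence and a bearing, and fuses them by a confidence-weighted average; the `Detection` type, the weights, and the `fuse` function are all hypothetical illustrations, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    modality: str        # e.g. "face", "sound", "speech", "object"
    confidence: float    # detector confidence in [0, 1]
    bearing_deg: float   # estimated direction of the stimulus

def fuse(detections, weights=None):
    """Fuse per-modality detections into one attention target.

    Returns (bearing_deg, score): a confidence-weighted average bearing
    and the combined evidence score, or None if there is no evidence.
    Hypothetical sketch; the per-modality weights are made up.
    """
    if weights is None:
        weights = {"face": 0.4, "speech": 0.3, "sound": 0.2, "object": 0.1}
    total = 0.0
    bearing_sum = 0.0
    for d in detections:
        w = weights.get(d.modality, 0.0) * d.confidence
        total += w
        bearing_sum += w * d.bearing_deg
    if total == 0.0:
        return None
    return bearing_sum / total, total

# Example: a confident face detection at 10 degrees plus a weaker
# sound localization at 20 degrees pulls the fused bearing toward the face.
result = fuse([Detection("face", 1.0, 10.0), Detection("sound", 0.5, 20.0)])
```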
doi:10.1145/964456.964458