Real-time adaptive behaviors in multimodal human-avatar interactions

Hui Zhang, Damian Fricker, Thomas G. Smith, Chen Yu
International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction (ICMI-MLMI '10), 2010
Multimodal interaction in everyday life seems effortless. A closer look, however, reveals that such interaction is in fact complex and comprises multiple levels of coordination, from high-level linguistic exchanges to low-level couplings of momentary bodily movements, both within an agent and across multiple interacting agents. A better understanding of how these multimodal behaviors are coordinated can provide principles to guide the development of intelligent multimodal interfaces.
In light of this, we propose and implement a research framework in which human participants interact with a virtual agent in a virtual environment. Our platform allows the virtual agent to track the user's gaze and hand movements in real time and adjust its own behaviors accordingly. We designed and conducted an experiment to investigate adaptive user behaviors in a human-agent joint attention task. Multimodal data streams, including speech, eye gaze, hand movements, and head movements from both the human user and the virtual agent, were collected and then analyzed to discover behavioral patterns. These patterns show that human participants are highly sensitive to the momentary multimodal behaviors generated by the virtual agent and rapidly adapt their own behaviors in response. Our results underscore the importance of studying and understanding real-time adaptive behaviors in human-computer multimodal interaction.
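The abstract describes the platform only at a high level, but the core idea, an agent that monitors the user's gaze and hand streams frame by frame and redirects its own behavior to support joint attention, can be sketched briefly. The Python below is purely illustrative: the class and field names (`UserState`, `AdaptiveAvatar`, `fixation_threshold_s`) and the dwell-time heuristic are assumptions, not the authors' implementation.

```python
import time
from dataclasses import dataclass
from typing import Optional


@dataclass
class UserState:
    """One frame of (hypothetical) multimodal observations of the human user."""
    gaze_target: Optional[str]   # id of the object the user is fixating, if any
    hand_target: Optional[str]   # id of the object the user's hand is moving toward, if any
    timestamp: float             # seconds


class AdaptiveAvatar:
    """Minimal real-time adaptation loop: follow the user's attention once it has
    dwelled on an object long enough, treating gaze+hand agreement as a stronger
    cue than gaze alone (illustrative heuristic)."""

    def __init__(self, fixation_threshold_s: float = 0.3):
        self.fixation_threshold_s = fixation_threshold_s
        self.current_gaze: Optional[str] = None
        self._candidate: Optional[str] = None
        self._candidate_since: float = 0.0

    def update(self, state: UserState) -> Optional[str]:
        """Call once per sensor frame; returns the object the avatar should look at."""
        target = state.gaze_target
        # Gaze and hand converging on the same object is a stronger signal,
        # so require less dwell time before following it.
        if target is not None and target == state.hand_target:
            dwell_needed = self.fixation_threshold_s / 2
        else:
            dwell_needed = self.fixation_threshold_s

        if target != self._candidate:
            # New candidate: start timing how long the user's attention stays on it.
            self._candidate = target
            self._candidate_since = state.timestamp
        elif target is not None and state.timestamp - self._candidate_since >= dwell_needed:
            # The user has attended to this object long enough: redirect the avatar's gaze.
            self.current_gaze = target

        return self.current_gaze


# Usage: feed the avatar one UserState per frame from the eye- and hand-tracking streams.
avatar = AdaptiveAvatar()
now = time.time()
avatar.update(UserState(gaze_target="red_block", hand_target="red_block", timestamp=now))
avatar.update(UserState(gaze_target="red_block", hand_target="red_block", timestamp=now + 0.2))
print(avatar.current_gaze)  # "red_block" once the dwell threshold is exceeded
```

In the actual study, a loop of this kind would also have to drive the agent's speech and head movements and log both the user's and the agent's streams for later analysis; the sketch covers only the gaze-following step.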
doi:10.1145/1891903.1891909 dblp:conf/icmi/ZhangFSY10