1,081 Hits in 5.2 sec

Leveraging the robot dialog state for visual focus of attention recognition

Samira Sheikhi, Vasil Khalidov, David Klotz, Britta Wrede, Jean-Marc Odobez
2013 Proceedings of the 15th ACM International Conference on Multimodal Interaction - ICMI '13  
In this paper, we investigate the use of the robot conversational state, which the robot is aware of, as contextual information to improve VFOA recognition from head pose.  ...  The Visual Focus of Attention (what or whom a person is looking at) or VFOA is a fundamental cue in non-verbal communication and plays an important role when designing effective human-machine interaction  ...  However, to our knowledge, while estimating the VFOA is considered by several systems [2, 4], the use of the robot dialog context to improve the recognition of a user's attention (VFOA) has not been explored  ... 
doi:10.1145/2522848.2522881 dblp:conf/icmi/SheikhiKKWO13 fatcat:k2zthmtxbbdztehnns46l43sfi
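
This entry combines head-pose observations with the robot's conversational state. A minimal sketch of the general idea, not the authors' exact model: treat the dialog state as a prior over candidate gaze targets and the head pose as a likelihood, then take the maximum a posteriori target. All names, vectors, and the concentration parameter below are illustrative assumptions.

```python
import numpy as np

def vfoa_map_estimate(head_pose, targets, dialog_prior, kappa=8.0):
    """Pick the most likely visual focus-of-attention (VFOA) target.

    head_pose    : unit 3-D head direction vector from the tracker
    targets      : dict name -> unit direction vector toward that target
    dialog_prior : dict name -> P(target | robot dialog state)
    kappa        : concentration of a von Mises-Fisher head-pose likelihood
    """
    scores = {}
    for name, direction in targets.items():
        # Likelihood: the head pose tends to align with the attended target.
        likelihood = np.exp(kappa * float(np.dot(head_pose, direction)))
        # Contextual prior supplied by the robot's conversational state.
        scores[name] = likelihood * dialog_prior.get(name, 1e-6)
    total = sum(scores.values())
    return max(scores, key=scores.get), {k: v / total for k, v in scores.items()}

# Example: the robot has just addressed "person_A", so the dialog state
# makes mutual gaze with that person more probable a priori.
targets = {"person_A": np.array([1.0, 0.0, 0.0]),
           "robot":    np.array([0.0, 1.0, 0.0])}
prior = {"person_A": 0.7, "robot": 0.3}
head = np.array([0.9, 0.1, 0.0]) / np.linalg.norm([0.9, 0.1, 0.0])
best, posterior = vfoa_map_estimate(head, targets, prior)
```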

Managing Human-Robot Engagement with Forecasts and...um... Hesitations

Dan Bohus, Eric Horvitz
2014 Proceedings of the 16th International Conference on Multimodal Interaction - ICMI '14  
We report results from a study of the proposed approach with a directions-giving robot deployed in the wild.  ...  We explore methods for managing conversational engagement in open-world, physically situated dialog systems.  ...  ACKNOWLEDGMENTS We thank Rebecca Hanson for assistance with data annotation, Anne Loomis Thompson and Nick Saw for project contributions, and the anonymous reviewers for their feedback.  ... 
doi:10.1145/2663204.2663241 dblp:conf/icmi/BohusH14 fatcat:hjqxvomafjderkzdhimjbstiim

Intuitive Multimodal Interaction with Communication Robot Fritz [chapter]

Maren Bennewitz, Felix Faber, Dominik Joho, Sven Behnke
2007 Humanoid Robots, Human-like Machines  
Depending on the audio-visual input, our robot shifts its attention between different persons in order to involve them in an interaction.  ...  One of the most important motivations for many humanoid robot projects is that robots with a human-like body and human-like senses could in principle be capable of intuitive multimodal communication with  ...  Focus of Attention In order to determine the focus of attention of the robot, we compute an importance value for each person in the belief.  ... 
doi:10.5772/4826 fatcat:pjbombcnm5cgvieanauozvoznm
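
The Fritz snippet mentions computing an importance value for each person in the robot's belief. A toy sketch of such a scoring rule, assuming (hypothetically) that recency of speech and proximity drive importance; the weights and time constants are made up for illustration, not taken from the paper:

```python
import time

def importance(person, now=None, speech_decay=10.0, max_range=4.0):
    """Toy importance score for one tracked person.

    person: dict with 'last_spoke' (unix timestamp) and 'distance' (meters).
    Recent speakers and nearby people score higher.
    """
    now = now if now is not None else time.time()
    recency = max(0.0, 1.0 - (now - person["last_spoke"]) / speech_decay)
    proximity = max(0.0, 1.0 - person["distance"] / max_range)
    return 0.6 * recency + 0.4 * proximity  # illustrative weights

def focus_of_attention(belief, now=None):
    """Return the person in the belief the robot should attend to."""
    return max(belief, key=lambda p: importance(p, now))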

Fritz - A Humanoid Communication Robot

Maren Bennewitz, Felix Faber, Dominik Joho, Sven Behnke
2007 RO-MAN 2007 - The 16th IEEE International Symposium on Robot and Human Interactive Communication  
Depending on the audio-visual input, our robot shifts its attention between different persons in order to involve them in the conversation.  ...  To express its emotional state, the robot generates facial expressions and adapts the speech synthesis. We discuss experiences gathered during two public demonstrations of our robot.  ...  ACKNOWLEDGMENT This project is supported by the DFG (Deutsche Forschungsgemeinschaft), grant BE 2556/2-1,2.  ... 
doi:10.1109/roman.2007.4415240 dblp:conf/ro-man/BennewitzFJB07 fatcat:rdmd3fogczd5texylkdfxawqpi

Social Robotics for Nonsocial Teleoperation: Leveraging Social Techniques to Impact Teleoperator Performance and Experience

Daniel J. Rea, Stela H. Seo, James E. Young
2020 Current Robotics Reports  
Purpose of Review Research has demonstrated the potential for robotic interfaces to leverage human-like social interaction techniques, for example, autonomous social robots as companions, as professional  ...  Recent Findings The core benefit of social robotics is to leverage human-like and thus familiar social techniques to communicate effectively or shape people's mood and behavior.  ...  It could even support the transition by providing state summaries of the new robot and environment or draw attention to important information through actions such as focusing intently on a map to indicate  ... 
doi:10.1007/s43154-020-00020-7 fatcat:is6iqj577vhitffygxyxybdxvi

Dialog in the open world

Dan Bohus, Eric Horvitz
2009 Proceedings of the 2009 international conference on Multimodal interfaces - ICMI-MLMI '09  
We outline a set of core competencies for open-world dialog, and describe three prototype systems.  ...  Like the multi-participant aspect, the often implicit, yet powerful physicality of situated interaction provides opportunities for making ongoing inferences in open-world dialog systems, and challenges  ...  The pose tracker provides 3D head orientation information for each engaged agent, which is in turn used to infer the focus of attention (see below). Focus of attention.  ... 
doi:10.1145/1647314.1647323 dblp:conf/icmi/BohusH09 fatcat:fb34jadrjrdi5frv5nuzjsp3fy
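
The snippet notes that 3D head orientation from the pose tracker is used to infer each engaged agent's focus of attention. One common geometric reading of this, sketched below as an assumption rather than the paper's implementation, assigns the target whose direction lies within an angular threshold of the head orientation:

```python
import numpy as np

def infer_focus(head_dir, target_dirs, max_angle_deg=20.0):
    """Assign focus of attention by angular distance.

    head_dir    : unit vector of the tracked 3-D head orientation
    target_dirs : dict name -> unit vector from the head to each target
    Returns the closest target within max_angle_deg, else None ("elsewhere").
    """
    best, best_angle = None, np.deg2rad(max_angle_deg)
    for name, d in target_dirs.items():
        angle = np.arccos(np.clip(np.dot(head_dir, d), -1.0, 1.0))
        if angle < best_angle:
            best, best_angle = name, angle
    return best
```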

Head gestures for perceptual interfaces: The role of context in improving recognition

Louis-Philippe Morency, Candace Sidner, Christopher Lee, Trevor Darrell
2007 Artificial Intelligence  
Lexical, prosodic, timing, and gesture features can be used to predict a user's visual feedback during conversational dialog with a robotic or virtual agent.  ...  In this paper we describe how contextual information can be used to predict visual feedback and improve recognition of head gestures in human-computer interfaces.  ...  The authors thank Charles Rich for his support in the use of Collagen in this effort.  ... 
doi:10.1016/j.artint.2007.04.003 fatcat:bgz2rs676fezlnzy42jk3pcxpq

Execution memory for grounding and coordination

Stephanie Rosenthal, Sarjoun Skaff, Manuela Veloso, Dan Bohus, Eric Horvitz
2013 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI)  
memories for use in state inference and policy execution.  ...  In this work, we define execution memory as the capability of saving interaction event information and recalling it for later use.  ...  In addition to dialog, the Assistant uses face recognition and tracking to remember the salient features of times when visitors come and go from its proximity.  ... 
doi:10.1109/hri.2013.6483577 fatcat:sk4ork7q3zhabnicseozs2s65a
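
Execution memory, as defined in this entry, means saving interaction event information and recalling it later for state inference and policy execution. A toy event store illustrating that idea; all class and field names are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InteractionEvent:
    timestamp: float
    kind: str        # e.g. "visitor_arrived", "face_recognized"
    payload: dict

@dataclass
class ExecutionMemory:
    events: List[InteractionEvent] = field(default_factory=list)

    def save(self, event: InteractionEvent) -> None:
        self.events.append(event)

    def recall(self, kind: str, since: float = 0.0) -> List[InteractionEvent]:
        """Retrieve past events of a given kind for grounding/coordination."""
        return [e for e in self.events
                if e.kind == kind and e.timestamp >= since]
```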

Contextual recognition of head gestures

Louis-Philippe Morency, Candace Sidner, Christopher Lee, Trevor Darrell
2005 Proceedings of the 7th international conference on Multimodal interfaces - ICMI '05  
We investigate how dialog context from an embodied conversational agent (ECA) can improve visual recognition of user gestures.  ...  Integrated Recognition Framework To recognize visual gestures in the context of the current dialog state, we fuse the output of the context predictor with the output of the visual head gesture recognizer.  ... 
doi:10.1145/1088463.1088470 dblp:conf/icmi/MorencySLD05 fatcat:xayawdeyobdmdgtfetqybv3cde
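
The integrated recognition framework described here fuses a dialog-context predictor with a vision-based head-gesture recognizer. A minimal late-fusion sketch under assumed weights and a threshold that are illustrative, not the paper's learned parameters:

```python
def fuse(vision_score, context_score, w_vision=0.7, w_context=0.3,
         threshold=0.5):
    """Late fusion of two [0, 1] confidence scores for a head gesture.

    vision_score  : confidence from the visual head-gesture recognizer
    context_score : P(gesture is likely now | dialog state) from the
                    context predictor
    Returns (accepted, fused score).
    """
    fused = w_vision * vision_score + w_context * context_score
    return fused >= threshold, fused

# A weak visual detection is accepted when the dialog context (e.g. the
# agent just asked a yes/no question) makes a nod likely.
accepted, score = fuse(vision_score=0.45, context_score=0.9)
```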

Report from the NSF Future Directions Workshop, Toward User-Oriented Agents: Research Directions and Challenges [article]

Maxine Eskenazi, Tiancheng Zhao
2020 arXiv   pre-print
This USER Workshop was convened with the goal of defining future research directions for the burgeoning intelligent agent research community and communicating them to the National Science Foundation.  ...  Any opinions, findings and conclusions or future directions expressed in this document are those of the authors and do not necessarily reflect the views of the National Science Foundation.  ...  visual objects for novel visual activity recognition (Eum et al., 2019).  ... 
arXiv:2006.06026v1 fatcat:h2hbe4cr6vajvca4oqkfrmf5oa

Combining dynamic head pose–gaze mapping with the robot conversational state for attention recognition in human–robot interactions

Samira Sheikhi, Jean-Marc Odobez
2015 Pattern Recognition Letters  
The ability to recognize the Visual Focus of Attention (VFOA, i.e. what or whom a person is looking at) of people is important for robots or conversational agents interacting with multiple people, since  ...  This is a neglected aspect of previous works but essential for recognition.  ...  Approach Overview Our objective is to monitor the visual attention of people in a given environment relying on head pose.  ... 
doi:10.1016/j.patrec.2014.10.002 fatcat:qp4xykvk2bfhpb7punvid7e3eu
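
The title refers to a dynamic head pose-to-gaze mapping. In this line of work, the head typically rotates only a fraction of the full gaze shift away from a reference direction, so gaze can be extrapolated from head pose. A sketch of such a linear mapping; the coefficient value is an assumed placeholder, not the paper's estimate:

```python
def gaze_from_head(head_pan, reference_pan, alpha=0.5):
    """Predict the gaze pan angle from the head pan angle (radians).

    Assumed model: head = reference + alpha * (gaze - reference), with
    alpha < 1 because the head covers only part of the gaze shift, so
        gaze = reference + (head - reference) / alpha.
    """
    return reference_pan + (head_pan - reference_pan) / alpha
```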

ChildBot: Multi-Robot Perception and Interaction with Children [article]

Niki Efthymiou, Panagiotis P. Filntisis, Petros Koutras, Antigoni Tsiami, Jack Hadfield, Gerasimos Potamianos, Petros Maragos
2020 arXiv   pre-print
The system, called ChildBot, features multimodal perception modules and multiple robotic agents that monitor the interaction environment, and can robustly coordinate complex Child-Robot Interaction use-cases  ...  In order to validate the effectiveness of the system and its integrated modules, we have conducted multiple experiments with a total of 52 children.  ...  Georgia Chalvatzaki for her help during the experiments, and the members of the NTUA IRAL.  ... 
arXiv:2008.12818v1 fatcat:au33jpbqpnfr5foaiivmd76n3y

The humanoid museum tour guide Robotinho

Felix Faber, Maren Bennewitz, Clemens Eppner, Attila Gorog, Christoph Gonsior, Dominik Joho, Michael Schreiber, Sven Behnke
2009 RO-MAN 2009 - The 18th IEEE International Symposium on Robot and Human Interactive Communication  
Most of the previous tour guide robots, however, focused more on the involved navigation task than on natural interaction with humans.  ...  The multimodal interaction capabilities of Robotinho have been designed and enhanced according to the questionnaires filled out by the people who interacted with the robot at previous public demonstrations  ...  To determine the focus of attention of the robot, we compute an importance value for each person in the belief, which is based on the time when the person has last spoken, on the distance of the person  ... 
doi:10.1109/roman.2009.5326326 dblp:conf/ro-man/FaberBEGGJSB09 fatcat:wngz35iopvefhda4wuimkdzv7u

Chat-Capsule: A Hierarchical Capsule for Dialog-level Emotion Analysis [article]

Yequan Wang, Xuying Meng, Yiyi Liu, Aixin Sun, Yao Wang, Yinhe Zheng, Minlie Huang
2022 arXiv   pre-print
These models hence are not optimized for dialog-level emotion detection, i.e. to predict the emotion category of a dialog as a whole.  ...  Many studies on dialog emotion analysis focus on utterance-level emotion only.  ...  Datcu and Rothkrantz fuse acoustic information with visual cues for emotion recognition (Datcu and Rothkrantz, 2014).  ... 
arXiv:2203.12254v1 fatcat:iehqcfggtzfsxo2s2osno7vlxi

Natural Communication about Uncertainties in Situated Interaction

Tomislav Pejsa, Dan Bohus, Michael F. Cohen, Chit W. Saw, James Mahoney, Eric Horvitz
2014 Proceedings of the 16th International Conference on Multimodal Interaction - ICMI '14  
We present methods for estimating and communicating about different uncertainties in situated interaction, leveraging the affordances of an embodied conversational agent.  ...  The approach harnesses a representation that captures both the magnitude and the sources of uncertainty, and a set of policies that select and coordinate the production of nonverbal and verbal behaviors  ...  ACKNOWLEDGMENTS We thank Anne Loomis Thompson for her contributions to the project.  ... 
doi:10.1145/2663204.2663249 dblp:conf/icmi/PejsaBCSMH14 fatcat:shcgedujvnabdgi4qfo6od2mv4
Showing results 1 — 15 out of 1,081 results