

Shuzhi Sam Ge, Oussama Khatib
2017 International Journal of Social Robotics  
A cooperative planning method has been proposed such that agents select actions using a combination of planning and team reasoning.  ...  In the second paper, "Cooperative Human-Robot Planning with Team Reasoning" (by Raul Hakli), the author studies the connections between philosophical action theory and planning methods in artificial intelligence  ... 
doi:10.1007/s12369-017-0438-3 fatcat:we4drzcx5ben5pqnlzdhegfarq

Goal Recognition for Deceptive Human Agents through Planning and Gaze

Thao Le, Ronal Singh, Tim Miller
2021 The Journal of Artificial Intelligence Research  
In this paper, we present new models for goal recognition under deception using a combination of gaze behaviour and observed movements of the agent.  ...  Eye gaze has the potential to provide insight into the minds of individuals, and this idea has been used in prior research to improve human goal recognition by combining humans' actions and gaze.  ...  Singh et al. (2018) propose a model that combines gaze input with a model-based goal recognition approach for intention and goal recognition.  ... 
doi:10.1613/jair.1.12518 fatcat:4c5nsdep2jcp7netqb3s3hawwe

Gaze-Based Intention Estimation for Shared Autonomy in Pick-and-Place Tasks

Stefan Fuchs, Anna Belardinelli
2021 Frontiers in Neurorobotics  
Shared autonomy aims at combining robotic and human control in the execution of remote, teleoperated tasks.  ...  This cooperative interaction cannot be brought about without the robot first recognizing the current human intention in a fast and reliable way so that a suitable assisting plan can be quickly instantiated  ...  Figure 9 sketches the online intention recognition approach.  ... 
doi:10.3389/fnbot.2021.647930 pmid:33935675 pmcid:PMC8085393 fatcat:pzfq7wqp5je25bo6f25kg6ts4u

Gaze-Based Interaction Intention Recognition in Virtual Reality

Xiao-Lin Chen, Wen-Jun Hou
2022 Electronics  
With the increasing need for eye tracking in head-mounted virtual reality displays, the gaze-based modality has the potential to predict user intention and unlock intuitive new interaction schemes.  ...  In the present work, we explore whether gaze-based data and hand-eye coordination data can predict a user's interaction intention with the digital world, which could be used to develop predictive interfaces  ...  Van-Horenbeke and Peer [38] explore human behavior, planning, and goal (intent) recognition as a holistic problem.  ... 
doi:10.3390/electronics11101647 fatcat:mf7ukpcvvbfpjnhs7hjjp2xacm

Gaze-Based Proactive User Interface for Pen-Based Systems

Çağla Çığ
2014 Proceedings of the 16th International Conference on Multimodal Interaction - ICMI '14  
In typical human-computer interaction, users convey their intentions through traditional input devices (e.g. keyboards, mice, joysticks) coupled with standard graphical user interface elements.  ...  In this paper, I describe the roadmap for my PhD research which aims at using eye gaze movements that naturally occur during pen-based interaction to reduce dependency on explicit mode selection mechanisms  ...  monitoring user's eye gaze and pen input to detect the intention to switch modes in an online setting, and act accordingly.  ... 
doi:10.1145/2663204.2666287 dblp:conf/icmi/Cig14 fatcat:5i5ymnxadzc3xkj6yobgicoppa

Intention based Comparative Analysis of Human-Robot Interaction

Muhammad Awais, Muhammad Yahya Saeed, Muhammad Sheraz Arshad Malik, Muhammad Younas, Rao Sohail Iqbal Asif
2020 IEEE Access  
[36] considered that combining gaze with model-based human intention recognition could improve recognition accuracy.  ...  The work generalizes existing approaches to model-based human intention recognition; the gaze and model components are combined using a Bayesian approach.  ...  He has been working as a lecturer at GC University for the last seven years, analysing the design and verification of different models.  ... 
doi:10.1109/access.2020.3035201 fatcat:cjsfymmmjfdwpmguw6j2aeeknu
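The Bayesian combination of gaze-based and model-based evidence mentioned in the snippet above can be illustrated with a minimal sketch. This is not code from the paper: the goal set, the likelihood values, and the conditional-independence assumption are all invented for illustration.

```python
import numpy as np

def fuse_posteriors(prior, gaze_likelihood, model_likelihood):
    """Posterior over candidate goals, assuming gaze evidence and
    model-based (action) evidence are conditionally independent
    given the goal: P(g | e) ∝ P(g) * P(gaze | g) * P(actions | g)."""
    unnorm = prior * gaze_likelihood * model_likelihood
    return unnorm / unnorm.sum()

# Hypothetical numbers: three candidate goals with a uniform prior.
prior = np.array([1/3, 1/3, 1/3])
gaze = np.array([0.7, 0.2, 0.1])    # P(gaze evidence | goal)
model = np.array([0.5, 0.4, 0.1])   # P(observed actions | goal)

posterior = fuse_posteriors(prior, gaze, model)
print(posterior.round(3))  # goal 0 dominates after fusion
```

Because both evidence sources favour the first goal, the fused posterior concentrates on it more sharply than either source alone would.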

Exploiting Three-Dimensional Gaze Tracking for Action Recognition During Bimanual Manipulation to Enhance Human–Robot Collaboration

Alireza Haji Fathaliyan, Xiaoyu Wang, Veronica J. Santos
2018 Frontiers in Robotics and AI  
Human-robot collaboration could be advanced by facilitating the intuitive, gaze-based control of robots, and enabling robots to recognize human actions, infer human intent, and plan actions that support human goals.  ...  3D gaze saliency maps, generated with high spatial resolution for six subtasks, appeared to encode action-relevant information.  ... 
doi:10.3389/frobt.2018.00025 pmid:33500912 pmcid:PMC7805858 fatcat:szyhefjgyfel7pj7uuiipiveeq

Towards Conversational Agents That Attend to and Adapt to Communicative User Feedback [chapter]

Hendrik Buschmeier, Stefan Kopp
2011 Lecture Notes in Computer Science  
A comprehensive conceptual and architectural model for this is proposed and first steps of its realisation are described. Results from a prototype implementation are presented.  ...  We would like to credit Benjamin Dosch with developing the concepts and mechanisms that make the SPUD NLG microplanner adaptive to the 'attributed listener state'.  ...  The production of elicitation cues for feedback is also specified in intent planning and currently only 'translated' to either an explicit request for acknowledgement, a short pause, or a change of gaze  ... 
doi:10.1007/978-3-642-23974-8_19 fatcat:ve75k6x2jvc43aow2vcw3jedte

Conversational gaze aversion for humanlike robots

Sean Andrist, Xiang Zhi Tan, Michael Gleicher, Bilge Mutlu
2014 Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction - HRI '14  
The results of a human-robot interaction study with 30 participants show that gaze aversions implemented with our approach are perceived as intentional, and robots can use gaze aversions to appear more  ...  We present a system that addresses the challenges of adapting human gaze aversion movements to a robot with very different affordances, such as a lack of articulated eyes.  ...  We would like to thank Faisal Khan, Brandi Hefty, Ross Luo, Brandon Smith, Catherine Steffel, and Chien-Ming Huang for their contributions to this work.  ... 
doi:10.1145/2559636.2559666 dblp:conf/hri/AndristTGM14 fatcat:gdgbgfgkrnc7jbbgarkdupztse

Toward Shared Autonomy Control Schemes for Human-Robot Systems: Action Primitive Recognition Using Eye Gaze Features

Xiaoyu Wang, Alireza Haji Fathaliyan, Veronica J. Santos
2020 Frontiers in Neurorobotics  
In this work, eye gaze was leveraged as a natural way to infer human intent and advance action recognition for shared autonomy control schemes.  ...  For a representative activity (making a powdered drink), the average recognition accuracy was 77% for the verb and 83% for the target object.  ...  ACKNOWLEDGMENTS The authors thank Daniela Zokaeim, Aarranon Bharathan, Kevin Hsu, and Emma Suh for assistance with data analysis.  ... 
doi:10.3389/fnbot.2020.567571 pmid:33178006 pmcid:PMC7593660 fatcat:pphb3b7xifezrddruiwylmnpda
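The idea in this entry, inferring the target object of an action from eye-gaze features, can be sketched in a few lines. The object names, dwell times, and the dwell-time heuristic are invented for illustration; the paper's actual pipeline uses richer 3D gaze features and a trained recognizer for both the verb and the target object.

```python
from collections import Counter

def target_from_fixations(fixations):
    """Pick the object that accumulates the most fixation time.
    `fixations` is a list of (object_label, duration_ms) pairs."""
    dwell = Counter()
    for obj, duration_ms in fixations:
        dwell[obj] += duration_ms
    return dwell.most_common(1)[0][0]

# Hypothetical fixation sequence during a pick-and-place action.
fixations = [("cup", 420), ("spoon", 150), ("cup", 310), ("jar", 90)]
print(target_from_fixations(fixations))  # prints "cup"
```

Accumulated dwell time is a common first-pass gaze feature; real systems typically combine it with fixation order, saccade dynamics, and hand-eye coordination cues.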

Symmetric Multimodality Revisited: Unveiling Users' Physiological Activity

Helmut Prendinger, Mitsuru Ishizuka
2007 IEEE Transactions on Industrial Electronics  
He is a coeditor (with Mitsuru Ishizuka) of a book on lifelike characters that appeared in the Cognitive Technologies series of Springer.  ...  His research interests include artificial intelligence, affective computing, and human-computer interaction, in which areas he has published more than 75 papers in international journals and conference  ...  We want to contrast our approach to understanding humans and their intentions to traditional plan recognition, which refers to the task of inferring the plan or plans of humans from observations of their  ... 
doi:10.1109/tie.2007.891646 fatcat:wae7gulvu5gb7aotaolhxjok4m

Goal-recognition-based adaptive brain-computer interface for navigating immersive robotic systems

Mohammad Abu-Alqumsan, Felix Ebert, Angelika Peer
2017 Journal of Neural Engineering  
the Institute for Advanced Study, Technical University of Munich (TUM).  ...  Acknowledgments This work is supported in part by the VERE project within the 7th Framework Programme of the European Union, FET-Human Computer Confluence Initiative, contract number ICT-2010-257695 and  ...  Related Work: Goal/Intention Recognition The term plan recognition has been defined by Schmidt et al.  ... 
doi:10.1088/1741-2552/aa66e0 pmid:28294109 fatcat:qf2zdm5hwrgnvnufvpzgea5msu

Exploring Temporal Dependencies in Multimodal Referring Expressions with Mixed Reality [article]

Elena Sibirtseva, Ali Ghadirzadeh, Iolanda Leite, Mårten Björkman, Danica Kragic
2019 arXiv   pre-print
For human-robot interaction to run smoothly and naturally, a robot should be equipped with the ability to robustly disambiguate referring expressions.  ...  In collaborative tasks, people rely both on verbal and non-verbal cues simultaneously to communicate with each other.  ...  Recent studies focused on intent recognition by combining different features from speech with gaze fixations [1] , head movements [25] , and gestures [6] .  ... 
arXiv:1902.01117v1 fatcat:ogxbtqxbz5bqrm6gqsx33pi6ve

Ubiquitous Gaze Sensing and Interaction (Dagstuhl Seminar 18252)

Lewis Chuang, Andrew Duchowski, Pernilla Qvarfordt, Daniel Weiskopf, Michael Wagner
2019 Dagstuhl Reports  
Therefore, this Dagstuhl Seminar brought together experts in computer graphics, signal processing, visualization, human-computer interaction, data analytics, pattern analysis and classification along with  ...  researchers who employ eye tracking across a diverse set of disciplines: geo-information systems, medicine, aviation, psychology, and neuroscience, to explore future applications and to identify requirements for  ...  Gaze-touch: combining gaze with multi-touch for interaction on the same surface.  ... 
doi:10.4230/dagrep.8.6.77 dblp:journals/dagstuhl-reports/ChuangDQW18 fatcat:7nfkzimerrb3hpcb25bwy7sf5e

Congruency of gaze metrics in action, imagery and action observation

Joe Causer, Sheree A. McCormick, Paul S. Holmes
2013 Frontiers in Human Neuroscience  
Suggestions are made for how researchers and practitioners can structure action observation and movement imagery interventions to maximize (re)learning.  ...  Furthermore, the paper highlights aspects of congruency in gaze metrics between these states.  ...  For example, Decety (1996) reported that the neural profile was altered depending on whether the task was to "recognize" the action or to "observe the action with the intent to imitate."  ... 
doi:10.3389/fnhum.2013.00604 pmid:24068996 pmcid:PMC3781353 fatcat:q2r6tdemc5ahdj3q6hg3ya4ssq