Recognizing gaze aversion gestures in embodied conversational discourse

Louis-Philippe Morency, C. Mario Christoudias, Trevor Darrell
2006 Proceedings of the 8th international conference on Multimodal interfaces - ICMI '06  
We analyze eye gestures during interaction with an animated embodied agent and propose a non-intrusive vision-based approach to estimate eye gaze and recognize eye gestures.  ...  While a large body of research results exists to document the use of gaze in human-to-human interaction, and in animating realistic embodied avatars, recognition of conversational eye gestures, distinct eye  ...  Our approach integrates eye detection results with a monocular 3D head pose tracker that computes the 3D position and orientation of the head at each frame, to achieve smooth and robust eye tracking.  ... 
doi:10.1145/1180995.1181051 dblp:conf/icmi/MorencyCD06 fatcat:c23bp7uu2ja45fendocgrcijby
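
The fusion step this abstract describes, combining a monocular 3D head pose estimate with a detected eye direction, can be sketched roughly as rotating an eye-in-head vector by the head orientation. The following is a minimal illustration, not the authors' code; the Euler-angle convention and all names are assumptions:

```python
import numpy as np

def head_rotation(yaw, pitch, roll):
    """Rotation matrix from Euler angles in radians (yaw about y,
    pitch about x, roll about z); this convention is an assumption."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Ry @ Rx @ Rz

def world_gaze(head_pose, eye_dir_in_head):
    """Rotate an eye-in-head unit vector into the camera frame."""
    g = head_rotation(*head_pose) @ np.asarray(eye_dir_in_head)
    return g / np.linalg.norm(g)

# Head turned 20 deg to the side, eyes 15 deg back toward the camera axis.
eye_dir = np.array([np.sin(np.radians(-15)), 0.0, np.cos(np.radians(-15))])
print(world_gaze((np.radians(20), 0.0, 0.0), eye_dir))
```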

Nonverbal communication with a humanoid robot via head gestures

Salah Saleh, Karsten Berns
2015 Proceedings of the 8th ACM International Conference on PErvasive Technologies Related to Assistive Environments - PETRA '15  
Simultaneously, eye gaze is also detected to ensure correct interpretation of the head gestures. In order to recognize the human head gestures, head poses have been tracked over time.  ...  A stream of images with their corresponding depth information, acquired from a Kinect sensor, is used to find, track, and estimate the head pose of the human.  ...  ACKNOWLEDGMENTS The authors would like to thank DAAD, Germany and the Ministry of Higher Education and Scientific Research of Iraq for the funding of Salah Saleh.  ... 
doi:10.1145/2769493.2769543 dblp:conf/petra/SalehB15 fatcat:vsdgwl4jejf2xaolfm3oflxzd4
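
Recognizing a nod versus a head shake from head poses tracked over time, as this paper does, amounts to spotting oscillation in pitch versus yaw. Below is a rough sketch under that framing, with an invented amplitude threshold and no claim to match the paper's classifier:

```python
import numpy as np

def classify_head_gesture(pitch, yaw, min_amplitude=5.0):
    """pitch, yaw: head angles in degrees over a short time window.
    A nod shows up as pitch oscillation, a shake as yaw oscillation."""
    pitch_range = np.ptp(pitch)   # peak-to-peak pitch movement
    yaw_range = np.ptp(yaw)
    if max(pitch_range, yaw_range) < min_amplitude:
        return "none"
    return "nod" if pitch_range > yaw_range else "shake"

# One second of tracked poses at 30 fps: pitch oscillates, yaw is still.
t = np.linspace(0, 1, 30)
print(classify_head_gesture(8 * np.sin(2 * np.pi * 3 * t), np.zeros(30)))
```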

NUI framework based on real-time head pose estimation and hand gesture recognition

Hyunduk Kim, Sang-Heon Lee, Myoung-Kyu Sohn
2016 MATEC Web of Conferences  
The first module is a real-time head pose estimation module using random forests, and the second is a hand gesture recognition module, named Hand Gesture Key Emulation Toolkit (HandGKET).  ...  Moreover, using the hand gesture recognition module, we can also control the computer with the user's hand gestures, without a mouse or keyboard.  ...  It was also supported by the Ministry of Culture, Sports and Tourism (MCST) and Korea Creative Content Agency (KOCCA) in the Culture Technology (CT) Research & Development Program (Immersive Game Contents  ... 
doi:10.1051/matecconf/20165602011 fatcat:wip2bqualrbjjpveannk6dufum
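
The random-forest head pose estimation named here can be illustrated with a toy regressor mapping image features to (yaw, pitch, roll). The sketch below uses synthetic stand-in data and scikit-learn rather than anything from HandGKET:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))     # stand-in for flattened depth/image patches
true_w = rng.normal(size=(64, 3))
y = X @ true_w                     # stand-in for (yaw, pitch, roll) labels

# Multi-output random forest: one model predicts all three pose angles.
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X[:400], y[:400])
print(model.predict(X[400:401]))   # predicted (yaw, pitch, roll)
```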

Associating Facial Expressions and Upper-Body Gestures with Learning Tasks for Enhancing Intelligent Tutoring Systems

Ardhendu Behera, Peter Matthew, Alexander Keidel, Peter Vangorp, Hui Fang, Susan Canning
2020 International Journal of Artificial Intelligence in Education  
We found that there is a significant increase in head and eye movements as time progresses, as well as with increasing difficulty level.  ...  In this paper, we explore the automatic detection of learners' nonverbal behaviors involving hand-over-face gestures, head and eye movements, and emotions via facial expressions during learning.  ... 
doi:10.1007/s40593-020-00195-2 fatcat:clj7hxinwfbgvpqtk5tmlgvtmq
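
The reported finding (head and eye movement increasing over time) implies aggregating movement magnitude per time window and checking the trend. A small sketch of that kind of measurement follows; the window size, fabricated trace, and helper name are illustrative only:

```python
import numpy as np

def movement_per_window(angles, window=30):
    """Sum of absolute frame-to-frame head-angle changes per window."""
    diffs = np.abs(np.diff(angles))
    n = len(diffs) // window
    return diffs[: n * window].reshape(n, window).sum(axis=1)

# Fake pitch trace whose jitter grows over the session.
rng = np.random.default_rng(1)
scale = np.linspace(1.0, 2.0, 900)
pitch = np.cumsum(rng.normal(0, 0.5, 900) * scale)

per_win = movement_per_window(pitch)
slope = np.polyfit(np.arange(len(per_win)), per_win, 1)[0]
print(f"movement trend per window: {slope:+.3f}")   # positive -> increasing
```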

Multimodal human–computer interaction: A survey

Alejandro Jaimes, Nicu Sebe
2007 Computer Vision and Image Understanding  
In particular, we focus on body, gesture, gaze, and affective interaction (facial expression recognition and emotion in audio).  ...  In this paper we review the major approaches to multimodal human-computer interaction from a computer vision perspective.  ...  interact naturally with computers the way face-to-face human-human interaction takes place.  ... 
doi:10.1016/j.cviu.2006.10.019 fatcat:gzaoce4i2zedxclvpu77z5ndry

Multimodal Human Computer Interaction: A Survey [chapter]

Alejandro Jaimes, Nicu Sebe
2005 Lecture Notes in Computer Science  
In particular, we focus on body, gesture, gaze, and affective interaction (facial expression recognition and emotion in audio).  ...  In this paper we review the major approaches to multimodal human-computer interaction from a computer vision perspective.  ...  interact naturally with computers the way face-to-face human-human interaction takes place.  ... 
doi:10.1007/11573425_1 fatcat:ale6vmjoungs3gh7xx4dksigva

MotionInput v2.0 supporting DirectX: A modular library of open-source gesture-based machine learning and computer vision methods for interacting and controlling existing software with a webcam [article]

Ashild Kummen, Guanlin Li, Ali Hassan, Teodora Ganeva, Qianying Lu, Robert Shaw, Chenuka Ratwatte, Yang Zou, Lu Han, Emil Almazov, Sheena Visram, Andrew Taylor (+5 others)
2021 arXiv   pre-print
The user can choose their own preferred way of interacting from a series of motion types, including single and bi-modal hand gesturing, full-body repetitive or extremities-based exercises, head and facial  ...  of facial motions such as mouth motions, winking, and head direction with rotation.  ...  Users with motor impairments would be able to use head movements or eye tracking to interact with their computer, and full-body rehabilitative exercises can be performed by playing games with skeletal  ... 
arXiv:2108.04357v1 fatcat:mvgfynyxefetlecn3i5kbqdhou
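
In the spirit of this library's webcam-only approach, head-direction control can be sketched with facial landmarks. The example below uses MediaPipe FaceMesh as a stand-in detector (an assumption; MotionInput's own pipeline may differ), treats landmark 1 as the nose tip, and uses an arbitrary dead-zone threshold:

```python
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        nose = results.multi_face_landmarks[0].landmark[1]  # assumed nose tip
        dx, dy = nose.x - 0.5, nose.y - 0.5    # offset from frame centre
        if abs(dx) > 0.08 or abs(dy) > 0.08:   # outside the dead zone
            print("head direction:", "right" if dx > 0 else "left",
                  "/", "down" if dy > 0 else "up")
    if cv2.waitKey(1) & 0xFF == 27:            # Esc to quit
        break
cap.release()
```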

Realtime AAM based user attention estimation

Sebastian Hommel, Uwe Handmann
2011 2011 IEEE 9th International Symposium on Intelligent Systems and Informatics  
For that reason, this paper examines a simple gaze estimation with the help of an ordinary webcam.  ...  In this paper, a method for automatic, real-time-capable visual estimation of user attention in face-to-face human-machine interaction is described.  ...  This paper focuses on improving human-robot interaction and therefore applies attention and head gesture estimation, which uses the AAM shape parameters to estimate the user's head pose.  ... 
doi:10.1109/sisy.2011.6034322 fatcat:6flsvhvh3jealctn73lloyrbty
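
The stated idea, feeding AAM shape parameters into an attention estimate, can be sketched as a classifier over the shape vector. The synthetic data and logistic-regression choice below are illustrative stand-ins, not the paper's model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
shape_params = rng.normal(size=(300, 10))          # stand-in AAM shape vectors
attending = (shape_params[:, 0] > 0).astype(int)   # fake "frontal pose" label

# Train on most of the data, report accuracy on a held-out slice.
clf = LogisticRegression().fit(shape_params[:250], attending[:250])
print("held-out accuracy:", clf.score(shape_params[250:], attending[250:]))
```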

From conversational tooltips to grounded discourse

Louis-Philippe Morency, Trevor Darrell
2004 Proceedings of the 6th international conference on Multimodal interfaces - ICMI '04  
Head pose and gesture offer several key conversational grounding cues and are used extensively in face-to-face interaction among people.  ...  We further describe the integration of our module in two systems where animated and robotic characters interact with users based on rich discourse and semantic models.  ...  These new interfaces integrate information from different sources such as speech, eye gaze and body gestures.  ... 
doi:10.1145/1027933.1027940 dblp:conf/icmi/MorencyD04 fatcat:35gfhg4hjfefdg7k5kiyb57i2i

Cascading Hand and Eye Movement for Augmented Reality Videoconferencing

Istvan Barakonyi, Helmut Prendinger, Dieter Schmalstieg, Mitsuru Ishizuka
2007 2007 IEEE Symposium on 3D User Interfaces  
The virtual objects are manipulated using a novel interaction technique that cascades bimanual tangible interaction and eye tracking.  ...  and virtual objects.  ...  the University of Tokyo (FY2005), and the Austrian Science Fund FWF (contract No.  ... 
doi:10.1109/3dui.2007.340777 dblp:conf/3dui/BarakonyiPSI07 fatcat:suum5rkvjvb7xj3472ddr7z4u4

The humanoid museum tour guide Robotinho

Felix Faber, Maren Bennewitz, Clemens Eppner, Attila Gorog, Christoph Gonsior, Dominik Joho, Michael Schreiber, Sven Behnke
2009 RO-MAN 2009 - The 18th IEEE International Symposium on Robot and Human Interactive Communication  
Most of the previous tour guide robots, however, focused more on the involved navigation task than on natural interaction with humans.  ...  A key requirement for successful tour guide robots is to interact with people and to entertain them.  ...  These include speech, emotional expressions, eye-gaze, and a set of human-like, symbolic as well as unconscious arm and head gestures.  ... 
doi:10.1109/roman.2009.5326326 dblp:conf/ro-man/FaberBEGGJSB09 fatcat:wngz35iopvefhda4wuimkdzv7u

Hands-Free Human-Robot Interaction Using Multimodal Gestures and Deep Learning in Wearable Mixed Reality

Kyeong-Beom Park, Sung Ho Choi, Jae Yeol Lee, Yalda Ghasemi, Mustafa Mohammed, Heejin Jeong
2021 IEEE Access  
This study proposes a novel hands-free interaction method for human-robot interaction (HRI) in mixed reality (MR) environments, using multimodal gestures such as eye gazing and head gestures together with deep learning  ...  Eye gazing-based interaction is used for coarse interactions such as searching and previewing of target objects, and head gesture interactions are used for fine interactions such as selection and 3D manipulation  ...  tasks using eye gazing and head gestures.  ... 
doi:10.1109/access.2021.3071364 fatcat:uqxc464gerblzcay2vgwdlaskm
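
The coarse/fine split described above lends itself to a small mode-switching sketch. This is not the paper's MR pipeline; the class, event names, dwell threshold, and printed output are all invented for illustration:

```python
class GazeHeadInteraction:
    """Coarse gaze selection, then fine head-gesture manipulation."""

    def __init__(self):
        self.mode = "search"      # coarse phase: gaze browsing
        self.target = None

    def on_gaze(self, obj, dwell_s):
        # Dwelling on an object long enough selects it as the target.
        if self.mode == "search" and dwell_s > 1.0:
            self.mode, self.target = "manipulate", obj

    def on_head_motion(self, d_yaw, d_pitch):
        # In the fine phase, small head motions adjust the selected object.
        if self.mode == "manipulate":
            print(f"rotate {self.target} by ({d_yaw:+.2f}, {d_pitch:+.2f})")

ui = GazeHeadInteraction()
ui.on_gaze("cube", dwell_s=1.2)
ui.on_head_motion(0.05, -0.02)
```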

AN OVERVIEW OF HUMAN COMPUTER INTERACTION AND ROLE OF GESTURES

Zainab Iqbal, Mohammad Amjad
2020 International Journal of Engineering Applied Sciences and Technology  
Also, we present a brief overview of the evolution of HCI and how gestures provide a reliable means of interaction, facilitating an increasingly natural technique for interacting with our computers.  ...  This paper gives an overview of the existing literature regarding gesture recognition for communication between a human and a computer, classifying it according to hand motions and eye motions.  ...  with the environment" [4]. They can include identification of hand poses, fingers, arms, gestures in virtual reality, eye gestures such as eye tracking through the direction of eye gaze, head and face  ... 
doi:10.33564/ijeast.2020.v05i05.025 fatcat:takkhbs5jjazzoysduqbxpm2fu

Integration of gestures and speech in human-robot interaction

Raveesh Meena, Kristiina Jokinen, Graham Wilcock
2012 2012 IEEE 3rd International Conference on Cognitive Infocommunications (CogInfoCom)  
We present an approach to enhance the interaction abilities of the Nao humanoid robot by extending its communicative behavior with non-verbal gestures (hand and head movements, and gaze following).  ...  We found that open arm gestures, head movements and gaze following could significantly enhance Nao's ability to be expressive and appear lively, and to engage human users in conversational interactions  ...  Often verbal expressions are accompanied by nonverbal expressions, such as gestures (e.g., hand, head and facial movements) and eye-gaze.  ... 
doi:10.1109/coginfocom.2012.6421936 fatcat:7rca5qmfqzhyzdebyaq3c7shoa

Context-based understanding of interaction intentions

Joao Quintas, Luis Almeida, Miguel Brito, Gustavo Quintela, Paulo Menezes, Jorge Dias
2012 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication  
This paper focuses on the importance of context awareness and intention understanding capabilities in modern robots when faced with different situations.  ...  Combining the head pose estimation with the eye tracker, we end up with an estimate of the user's gaze.  ...  The head pose estimation with gaze tracking is implemented in C, on a normal laptop, using its own webcam.  ... 
doi:10.1109/roman.2012.6343803 dblp:conf/ro-man/QuintasABQMD12 fatcat:y26ntqfehzac5ahvogcirgaktu
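
As a bare-bones illustration of the combination step this snippet mentions, the sketch below intersects a gaze ray (eye position plus a direction obtained from head pose and eye tracking) with a screen plane to get a point of regard. The coordinate frame, units, and function name are assumptions, not the paper's implementation:

```python
import numpy as np

def point_of_regard(eye_pos, gaze_dir, screen_z=0.0):
    """Intersect the gaze ray with the plane z = screen_z (camera frame)."""
    t = (screen_z - eye_pos[2]) / gaze_dir[2]
    return eye_pos + t * gaze_dir

eye = np.array([0.0, 0.0, 0.6])             # eye 60 cm from the screen plane
direction = np.array([0.1, -0.05, -1.0])    # ray from head pose + eye tracker
print(point_of_regard(eye, direction)[:2])  # (x, y) on the screen, in metres
```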