12,923 Hits in 5.5 sec

Understanding the impact of multimodal interaction using gaze informed mid-air gesture control in 3D virtual objects manipulation

Shujie Deng, Nan Jiang, Jian Chang, Shihui Guo, Jian J. Zhang
2017 International Journal of Human-Computer Studies  
Understanding the impact of multimodal interaction using gaze informed mid-air gesture control in 3D virtual objects manipulation. Shujie Deng a, Nan Jiang b, Jian Chang a, Shihui Guo c,*, Jian J. Zhang a. a National Centre for Computer Animation, Bournemouth University, Poole, United Kingdom; b Department of Computing and Informatics, Bournemouth University
doi:10.1016/j.ijhcs.2017.04.002 fatcat:hpz47x2ekfeipetppl5t7ii4ry

Survey on Effect of Multimodal Interface on Senior Citizen

Aleena Susan Mathew, Vidya N.
2018 International Journal Of Engineering And Computer Science  
A multimodal interface is designed within CAMI, in which an Artificial Intelligence ecosystem integrates the main functionalities of AAL (Ambient Assisted Living) systems for senior citizens, which are its  ...  It can process both gesture and speech commands. It must work on different devices and adapt to any screen size.  ...  Gesture recognition is used for computers to understand human body language, interpreting those gestures via mathematical algorithms. Hassani et al.  ... 
doi:10.18535/ijecs/v7i3.08 fatcat:ro3govi7l5ecflpdy6dill5yja

Multimodal interaction: A review

Matthew Turk
2014 Pattern Recognition Letters  
Finally, we list challenges that lie ahead for research in multimodal human-computer interaction.  ...  Multimodal human-computer interaction has sought for decades to endow computers with similar capabilities, in order to provide more natural, powerful, and compelling interactive experiences.  ...  human-human multimodal interaction.  ... 
doi:10.1016/j.patrec.2013.07.003 fatcat:xhbzycgarbd3vjnptybvdoezcy

Multimodal Systems: Taxonomy, Methods, and Challenges [article]

Muhammad Zeeshan Baig, Manolya Kavakli
2020 arXiv   pre-print
Empowering computers with the capability to process input multimodally is a major domain of investigation in Human-Computer Interaction (HCI).  ...  The modalities are processed both sequentially and in parallel for communication in the human brain; this changes when humans interact with computers.  ...  Introduction The interaction between humans and the world is multimodal [1] . Humans utilize multiple senses to get an understanding of the environment.  ... 
arXiv:2006.03813v1 fatcat:qenme7xocjeede374ck46ucx4u

Multimodal Fusion Algorithm and Reinforcement Learning-Based Dialog System in Human-Machine Interaction

Hanif Fakhrurroja, Carmadi Machbub, Ary Setijadi Prihatmanto, Ayu Purwarianti (Institut Teknologi Bandung, School of Electrical Engineering and Informatics, Indonesia)
2020 International Journal on Electrical Engineering and Informatics  
It involved several stages, including a multimodal activation system, methods for recognizing speech modalities, gestures, face detection and skeleton tracking, multimodal fusion strategies, understanding  ...  The level of user satisfaction towards the multimodal recognition-based human-machine interaction system developed was 95%.  ...  Activation of human-machine interaction system developed with four modality inputs in the form of skeleton tracking, face detection, speech recognition, and gesture in humans to understand the context  ... 
doi:10.15676/ijeei.2020.12.4.19 fatcat:tun3mqo3a5cn7d5sui7bdd2o6y

Face and Body Gesture Analysis for Multimodal HCI [chapter]

Hatice Gunes, Massimo Piccardi, Tony Jan
2004 Lecture Notes in Computer Science  
Multimodal interfaces allow humans to interact with machines through multiple modalities such as speech, facial expression, gesture, and gaze.  ...  Accordingly, in this paper we present a vision-based framework that combines face and body gesture for multimodal HCI.  ...  way in various human-computer interaction applications [1] , [24] .  ... 
doi:10.1007/978-3-540-27795-8_59 fatcat:ib3d3ek6fjbyzceinv73k5baxi

Data fusion methods in multimodal human computer dialog

Ming-Hao YANG, Jian-Hua TAO
2019 Virtual Reality & Intelligent Hardware  
Finally, some practical examples of multimodal information fusion methods are introduced and the possible and important breakthroughs of the data fusion methods in future multimodal human-computer interaction  ...  This paper presents a review of data fusion methods in multimodal human computer dialog.  ...  Multimodal information fusion in human computer dialog The general framework of multimodal human computer dialog In human's daily life, various interaction channels, such as speech, gesture, body movement  ... 
doi:10.3724/sp.j.2096-5796.2018.0010 dblp:journals/vrih/YangT19 fatcat:jltufn3o6fd4pjzv4pnqd6wmvy

Diana's World: A Situated Multimodal Interactive Agent

Nikhil Krishnaswamy, Pradyumna Narayana, Rahul Bangar, Kyeongmin Rim, Dhruva Patil, David McNeely-White, Jaime Ruiz, Bruce Draper, Ross Beveridge, James Pustejovsky
2020 PROCEEDINGS OF THE THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE AND THE TWENTY-EIGHTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE  
To facilitate true peer-to-peer communication with a computer, we present Diana, a situated multimodal agent who exists in a mixed-reality environment with a human interlocutor, is situation- and context-aware  ...  State of the art unimodal dialogue agents lack some core aspects of peer-to-peer communication: the nonverbal and visual cues that are a fundamental aspect of human interaction.  ...  The gestures Diana understands were gathered from human-to-human elicitation studies conducted to better understand communicative gestures used by humans in the course of a collaborative task.  ... 
doi:10.1609/aaai.v34i09.7096 fatcat:rgthkixro5bilfmtrxayzudv6u

Analyzing Multimodal Communication around a Shared Tabletop Display [chapter]

Anne Marie Piper, James D. Hollan
2009 ECSCW 2009  
We also describe extensions of this communication technology, discuss how multimodal analysis techniques are useful in understanding the effects of multiuser multimodal tabletop systems, and briefly allude  ...  We compare communication mediated by a multimodal tabletop display and by a human sign language interpreter.  ...  Analyzing Multimodal Communication around a Shared Tabletop Display  ... 
doi:10.1007/978-1-84882-854-4_17 dblp:conf/ecscw/PiperH09 fatcat:i7dlputr6rbppltmb7lqhxtxcm

A Study on Potential of Integrating Multimodal Interaction into Musical Conducting Education [article]

Gilbert Phuah Leong Siang, Nor Azman Ismail, Pang Yee Yong
2010 arXiv   pre-print
The purpose of this paper is to analyze the possibility of integrating multimodal interaction, such as vision-based hand gesture and speech interaction, into musical conducting education.  ...  With the rapid development of computer technology, computer music has begun to appear in the laboratory. Many potential uses of computer music are gradually increasing.  ...  It is possible to use the idea of hand gesture interaction and speech interaction as the main input to conduct computers.  ... 
arXiv:1005.4014v1 fatcat:ar2gt6ok3bet7mpd2ap43fyg3i

Multimodal interaction for distributed collaboration

Levent Bolelli, Guoray Cai, Hongmei Wang, Bita Mortazavi, Ingmar Rauschert, Sven Fuhrmann, Rajeev Sharma, Alan MacEachren
2004 Proceedings of the 6th international conference on Multimodal interfaces - ICMI '04  
Decision makers in front of large screen displays and/or desktop computers, and emergency responders in the field with tablet PCs, can engage in collaborative activities for situation assessment and emergency  ...  Our system enables distributed spatial decision-making by providing a multimodal interface to team members.  ...  In the EOC setting, the user's speech and free-hand gestures are captured by a multimodal interface platform, GeoMIP [1] , while first responders in the field use pen-based gestures and speech for interaction  ... 
doi:10.1145/1027933.1027990 dblp:conf/icmi/BolelliCWMRFSM04 fatcat:pdbfjw55tvefpnynxtqyseeu3e

Gestures in Human-Computer Interaction – Just Another Modality? [chapter]

Antti Pirhonen
2010 Lecture Notes in Computer Science  
The role of physical gestures in human-computer interaction has mostly been neglected. In this paper, we argue that gestures are not just one input modality.  ...  Rather, they should be seen as a unifying phenomenon, in terms of which all interaction with physical reality could be conceptualised.  ...  Introduction The tradition of analysing multimodality in the field of human-computer interaction is strongly anchored to a somewhat superficial conception of modality.  ... 
doi:10.1007/978-3-642-12553-9_25 fatcat:qbj7krtxczfzrnnq563j5lvqfq

Giving interaction a hand

Stefan Kopp
2013 Proceedings of the 15th ACM on International conference on multimodal interaction - ICMI '13  
Humans frequently join words and gestures for multimodal communication.  ...  How can we develop computational models for processing and generating natural speech-gesture behavior, in a flexible, fast and adaptive manner similar to humans?  ...  Social and interactional nature The main difference between movements used in gesture-based interfaces and human gestures in natural dialogue, is that the latter are not fixed but dynamically evolving.  ... 
doi:10.1145/2522848.2532201 dblp:conf/icmi/Kopp13 fatcat:ttawpowqzncfxl5esmkjh3jfhi

Multimodal Interaction System for Home Appliances Control

Hanif Fakhrurroja, Carmadi Machbub, Ary Setijadi Prihatmanto, Ayu Purwarianti
2020 International Journal of Interactive Mobile Technologies  
This paper proposes a way to control home appliances using a multimodal interaction system such as speech, gestures, and smartphone applications.  ...  The sensor used to capture speech, in the Indonesian language, and gestures from users is Kinect v2.  ...  In the future, face detection and skeleton tracking can be added to multimodal interaction so human and machine interactions can run more naturally. Fig. 1. Multimodal Interaction System Design  ... 
doi:10.3991/ijim.v14i15.13563 fatcat:cwxr7yv7vbbklpqtrtwxjs3qaq

Multimodal human–computer interaction: A survey

Alejandro Jaimes, Nicu Sebe
2007 Computer Vision and Image Understanding  
In this paper we review the major approaches to multimodal human computer interaction from a computer vision perspective.  ...  In particular, we focus on body, gesture, gaze, and affective interaction (facial expression recognition, and emotion in audio).  ...  interact naturally with computers the way face-to-face human-human interaction takes place.  ... 
doi:10.1016/j.cviu.2006.10.019 fatcat:gzaoce4i2zedxclvpu77z5ndry
Showing results 1 – 15 out of 12,923