1,585 Hits in 6.8 sec

Survey on Effect of Multimodal Interface on Senior Citizen

Aleena Susan Mathew, Vidya N.
2018 International Journal Of Engineering And Computer Science  
A multimodal interface is designed within CAMI, an artificially intelligent ecosystem that integrates the main functionalities of AAL (Ambient Assisted Living) systems for senior citizens, which are its  ...  It can process both gesture and speech commands. It must work on different devices and adapt to any screen size.  ...  Since unimodal interfaces do not provide a natural way of interaction, a multimodal interface is introduced. Multimodal interfaces are user friendly, platform independent, scalable and robust.  ... 
doi:10.18535/ijecs/v7i3.08 fatcat:ro3govi7l5ecflpdy6dill5yja

Masterpiece: Physical Interaction and 3D Content-Based Search in VR Applications

K. Moustakas, D. Tzovaras, M.G. Strintzis, S. Carbini, O. Bernier, J.E. Viallet, S. Raidt, M. Mancas, M. Dimiccoli, E. Yagci, S. Balci, E.I. Leon
2006 IEEE Multimedia  
The user can generate and manipulate simple 3D objects with a sketch-based approach that integrates a multimodal gesture-speech interface.  ...  In particular, our multimodal interface consists of the following modules: speech recognition for specific commands; gesture recognition for efficiently handling 3D objects using 3D hand motions;  ...  Acknowledgments We conducted most of this work during the Enterface 2005 Summer Workshop on Multimodal Interfaces (http://www.enterface.net), which is supported by the EU-funded SIMILAR Network of Excellence  ... 
doi:10.1109/mmul.2006.65 fatcat:ovtvwlfgxfau7jxylul75qbgn4
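
As a rough illustration of the gesture-speech pairing this entry describes, the sketch below combines a recognized speech keyword (the action) with a tracked 3D hand position (the location). All names and the scene representation are assumptions for illustration, not the Masterpiece system's actual API.

```python
# Hypothetical fusion of a speech command with a tracked hand position;
# the spoken word picks the action, the hand supplies the 3D location.

def dist(a, b):
    """Euclidean distance between two 3D points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def apply_multimodal_command(speech_cmd, hand_pos, scene):
    """speech_cmd: recognized keyword; hand_pos: (x, y, z); scene: list of objects."""
    if speech_cmd == "create":
        scene.append({"type": "sphere", "pos": hand_pos})
    elif speech_cmd == "delete" and scene:
        # Remove the object nearest to the current hand position.
        scene.remove(min(scene, key=lambda o: dist(o["pos"], hand_pos)))
    elif speech_cmd == "move" and scene:
        min(scene, key=lambda o: dist(o["pos"], hand_pos))["pos"] = hand_pos
    return scene

scene = apply_multimodal_command("create", (0.1, 0.5, 0.2), [])
```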

Multimodal Interaction with Multiple Co-located Drones in Search and Rescue Missions [article]

Jonathan Cacace, Alberto Finzi, Vincenzo Lippiello
2016 arXiv pre-print
We present a multimodal interaction framework suitable for a human rescuer that operates in proximity with a set of co-located drones during search missions.  ...  In this work, we illustrate the domain and the proposed multimodal interaction framework discussing the system at work in a simulated case study.  ...  Instead, a robust gesture recognition system based on the armband acceleration measures requires an independent classification method.  ... 
arXiv:1605.07316v1 fatcat:2z2svolumnas3hwljt5c527iee
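
The snippet above mentions classifying gestures from armband acceleration measurements. Below is a minimal sketch of that idea, assuming windowed statistical features and a k-nearest-neighbour classifier on synthetic data; the paper's actual classifier is not reproduced here.

```python
# Toy armband-gesture classifier: fixed-length features per accelerometer
# window, then k-NN. Features, labels and data are illustrative only.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def window_features(window):
    """window: (n_samples, 3) accelerometer readings -> 9 summary features."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           np.abs(np.diff(window, axis=0)).sum(axis=0)])

rng = np.random.default_rng(0)
# Two synthetic gestures that differ in motion energy.
X = [window_features(rng.normal(0, s, (50, 3))) for s in (0.2, 1.0) for _ in range(20)]
y = ["hover"] * 20 + ["wave"] * 20

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([window_features(rng.normal(0, 0.9, (50, 3)))]))  # likely ['wave']
```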

Multimodal Human Computer Interaction: A Survey [chapter]

Alejandro Jaimes, Nicu Sebe
2005 Lecture Notes in Computer Science  
In this paper we review the major approaches to multimodal human computer interaction from a computer vision perspective.  ...  In particular, we focus on body, gesture, gaze, and affective interaction (facial expression recognition, and emotion in audio).  ...  For example, a prior model of emotional expression recognition trained on a certain user can be used as a starting point for learning a model for another user, or for the same user in a different  ... 
doi:10.1007/11573425_1 fatcat:ale6vmjoungs3gh7xx4dksigva
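
The closing snippet sketches user adaptation: reuse a model trained on one user as the starting point for another. A minimal illustration via incremental training follows, where the data, model choice and dimensions are toy assumptions.

```python
# Warm-starting a classifier across users via incremental training.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
# User A: ample labeled feature vectors (toy two-class data).
Xa = np.vstack([rng.normal(-1, 1, (200, 8)), rng.normal(1, 1, (200, 8))])
ya = np.array([0] * 200 + [1] * 200)

clf = SGDClassifier(random_state=0)
clf.partial_fit(Xa, ya, classes=[0, 1])   # prior model learned from user A

# User B: only a few samples from a slightly shifted distribution.
Xb = np.vstack([rng.normal(-0.5, 1, (10, 8)), rng.normal(1.5, 1, (10, 8))])
yb = np.array([0] * 10 + [1] * 10)
clf.partial_fit(Xb, yb)                   # adapt the prior model, no retraining
```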

Spatial Programming for Industrial Robots through Task Demonstration

Jens Lambrecht, Martin Kleinsorge, Martin Rosenstrauch, Jörg Krüger
2013 International Journal of Advanced Robotic Systems  
We present an intuitive system for the programming of industrial robots using markerless gesture recognition and mobile augmented reality in terms of programming by demonstration.  ...  A 3D motion tracking system and a handheld device establish the basis for the presented spatial programming system.  ...  In 2009, an approach combining marker-cube-based teaching of trajectories with stereo-vision-based methods for virtual object registration was used to program virtual and real robots [19].  ... 
doi:10.5772/55640 fatcat:wjth66ybhfdexbf6dwqwbjt3ja
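
To make the programming-by-demonstration idea concrete, here is a minimal sketch that reduces a demonstrated hand trajectory to sparse robot waypoints by distance thresholding. The threshold and data are assumptions, not values from the paper.

```python
# Reduce a dense tracked hand path to waypoints a robot program can replay.
import numpy as np

def trajectory_to_waypoints(points, min_dist=0.05):
    """points: (n, 3) hand positions in metres -> sparse (m, 3) waypoints."""
    waypoints = [points[0]]
    for p in points[1:]:
        if np.linalg.norm(p - waypoints[-1]) >= min_dist:
            waypoints.append(p)
    return np.array(waypoints)

t = np.linspace(0, np.pi, 200)
demo = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]   # demonstrated arc
print(trajectory_to_waypoints(demo, min_dist=0.1).shape)
```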

Multimodal human–computer interaction: A survey

Alejandro Jaimes, Nicu Sebe
2007 Computer Vision and Image Understanding  
In this paper we review the major approaches to multimodal human computer interaction from a computer vision perspective.  ...  In particular, we focus on body, gesture, gaze, and affective interaction (facial expression recognition, and emotion in audio).  ...  For example, a prior model of emotional expression recognition trained on a certain user can be used as a starting point for learning a model for another user, or for the same user in a different  ... 
doi:10.1016/j.cviu.2006.10.019 fatcat:gzaoce4i2zedxclvpu77z5ndry

Vision-based hand-gesture applications

Juan Pablo Wachs, Mathias Kölsch, Helman Stern, Yael Edan
2011 Communications of the ACM  
Gesture interfaces for gaming based on hand/body gesture technology must be designed to achieve social and commercial success. No single method for automatic hand-gesture recognition is suitable for every  ...  Control of home devices and appliances for people with physical handicaps and/or elderly users with impaired mobility; and exploring big data.  ...  of the recognition is a well-known obstacle in the design of gesture-based interfaces.  ... 
doi:10.1145/1897816.1897838 fatcat:rg7757vtp5djbmk5wranfpug2y

Speech-gesture driven multimodal interfaces for crisis management

R. Sharma, M. Yeasin, N. Krahnstoever, I. Rauschert, Guoray Cai, I. Brewer, A.M. MacEachren, K. Sengupta
2003 Proceedings of the IEEE  
Dialogue-enabled devices, based on natural, multimodal interfaces have the potential of making a variety of information technology tools accessible during crisis management.  ...  This paper establishes the importance of multimodal interfaces in various aspects of crisis management and explores many issues in realizing successful speech-gesture driven, dialog-enabled interfaces  ...  Multimodal interfaces allow users to interact via a combination of modalities, for instance, speech, gesture, pen, touch screen, displays, keypads, pointing devices, and tactile sensors.  ... 
doi:10.1109/jproc.2003.817145 fatcat:flbaisvreresla7wufztzpnvfq
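
A typical building block of such speech-gesture interfaces is binding a deictic word ("here") to a concurrent pointing gesture. The sketch below does this by timestamp alignment; the event formats and the one-second skew limit are assumptions, not the paper's architecture.

```python
# Bind a spoken deictic reference to the pointing gesture nearest in time.

def resolve_deictic(speech_event, gesture_events, max_skew=1.0):
    if "here" not in speech_event["text"]:
        return speech_event                      # nothing to resolve
    best = min(gesture_events, key=lambda g: abs(g["time"] - speech_event["time"]))
    if abs(best["time"] - speech_event["time"]) > max_skew:
        return None                              # no temporally close gesture
    return {**speech_event, "target": best["map_coords"]}

speech = {"text": "zoom in here", "time": 12.4}
gestures = [{"time": 11.1, "map_coords": (40.79, -77.86)},
            {"time": 12.6, "map_coords": (40.80, -77.85)}]
print(resolve_deictic(speech, gestures))         # binds to the 12.6 s gesture
```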

Multimodal user interface for the communication of the disabled

Savvas Argyropoulos, Konstantinos Moustakas, Alexey A. Karpov, Oya Aran, Dimitrios Tzovaras, Thanos Tsakiris, Giovanna Varni, Byungjun Kwon
2008 Journal on Multimodal User Interfaces  
The integration of the multimodal interfaces into a game application serves as both an entertainment and a pleasant education tool for the users.  ...  In this paper, a novel system is proposed to provide alternative tools and interfaces to blind and deaf-and-mute people and enable their communication and interaction with the computer.  ...  Conclusions In this paper, a novel system for the communication between disabled users and their effective interaction with the computer was presented, based on multimodal user interfaces.  ... 
doi:10.1007/s12193-008-0012-2 fatcat:nb6whqmzqnhhznplafv3nlm5ya

A real-time system for hand gesture controlled operation of in-car devices

M. Zobl, M. Geiger, B. Schuller, M. Lang, G. Rigoll
2003 2003 International Conference on Multimedia and Expo. ICME '03. Proceedings (Cat. No.03TH8698)  
In this paper a video-based real-time hand gesture recognition system for in-car use is presented. It was developed in the course of extensive usability studies.  ...  In combination with a gesture-optimized HMI it allows intuitive and effective operation of a variety of in-car multimedia and infotainment devices with hand poses and dynamic hand gestures.  ...  A gesture-controlled HMI should only be part of a multimodal HMI in which the user is allowed to control every functionality with the optimal modality (haptics, speech, gestures).  ... 
doi:10.1109/icme.2003.1221368 dblp:conf/icmcs/ZoblGSLR03 fatcat:fq3dqof6nbekjlaaikcftt3ewi
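
Dynamic hand gestures like those used here are often recognized by comparing an observed trajectory against stored templates. The sketch below uses dynamic time warping (DTW) as a stand-in recognizer; it is not the recognizer the paper itself describes.

```python
# Classify a 1-D gesture feature sequence by DTW distance to templates.
import numpy as np

def dtw(a, b):
    """Dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

templates = {"swipe_left": np.linspace(1, -1, 30),
             "swipe_right": np.linspace(-1, 1, 30)}
observed = np.linspace(-0.9, 1.1, 25)            # e.g. horizontal hand position
print(min(templates, key=lambda k: dtw(observed, templates[k])))  # swipe_right
```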

Designing a human-centered, multimodal GIS interface to support emergency management

Ingmar Rauschert, Pyush Agrawal, Rajeev Sharma, Sven Fuhrmann, Isaac Brewer, Alan MacEachren
2002 Proceedings of the tenth ACM international symposium on Advances in geographic information systems - GIS '02  
Speech and gesture recognition is coupled with a knowledge-based dialogue management system for storing and retrieving geospatial data.  ...  A large screen display is used for data visualization, and collaborative, multi-user interactions in emergency management are supported through voice and gesture recognition.  ...  In order to break through this cycle, a prototype multimodal user interface for GIS was created that serves as an initial system for the user studies.  ... 
doi:10.1145/585147.585172 dblp:conf/gis/RauschertASFBM02 fatcat:xo3hjpbapfg5ra75zmzqetzgwy

PathWord

Hassoumi Almoctar, Pourang Irani, Vsevolod Peysakhovich, Christophe Hurter
2018 Proceedings of the 2018 on International Conference on Multimodal Interaction - ICMI '18  
We present PathWord (PATH passWORD), a multimodal digit entry method for ad-hoc authentication based on known digit shapes and the user's relative eye movements.  ...  We envision PathWord as a method to foster confidence while unlocking a system through gaze gestures.  ...  CONCLUSION We presented PathWord, a novel multimodal PIN input approach that exploits back-of-device activation and the user's gaze for digit selection.  ... 
doi:10.1145/3242969.3243008 dblp:conf/icmi/HassoumiIPH18 fatcat:ppqntmw26rg5zj2gfuys6wboya
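
The core matching step in a gaze-gesture scheme like this one can be approximated by resampling and normalizing both the gaze path and a digit's stroke template, then comparing them point-wise (in the spirit of $1-style template matchers). This is an illustrative stand-in, not PathWord's published algorithm.

```python
# Compare a gaze path to a digit stroke template after resampling and
# translation/scale normalization.
import numpy as np

def normalize(path, n=32):
    """Resample a 2-D path to n points, centre it, scale to unit size."""
    path = np.asarray(path, dtype=float)
    t = np.linspace(0, 1, len(path))
    ti = np.linspace(0, 1, n)
    out = np.c_[np.interp(ti, t, path[:, 0]), np.interp(ti, t, path[:, 1])]
    out -= out.mean(axis=0)
    scale = np.abs(out).max()
    return out / scale if scale > 0 else out

def path_distance(gaze, template):
    """Mean point-wise distance; lower means a better shape match."""
    return np.linalg.norm(normalize(gaze) - normalize(template), axis=1).mean()

digit_7 = [(0, 1), (1, 1), (0.4, 0)]             # hypothetical stroke for "7"
gaze = [(0.02, 0.98), (0.5, 1.0), (0.97, 0.99), (0.6, 0.5), (0.42, 0.03)]
print(path_distance(gaze, digit_7))
```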

Multimodal Interaction Recognition Mechanism by Using Midas Featured By Data-Level and Decision-Level Fusion

Muhammad Habib, Noor ul Qamar
2017 Lahore Garrison University Research Journal of Computer Science and Information Technology  
Natural User Interfaces (NUIs) dealing with gestures are an alternative to traditional input devices on multi-touch panels.  ...  The language as a base interface deals with minimum-complexity issues, like controlling inversion and intermediary states, by means of data fusion, data processing and data selection, provisioning high-level  ...  gesture trajectory.  ... 
doi:10.54692/lgurjcsit.2017.010227 fatcat:cqvkqnfafzf6fkz2bkmjcfybwq
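
Since this entry contrasts data-level and decision-level fusion, here is a minimal sketch of both on toy two-modality data; the classifiers and features are assumptions, not the Midas implementation.

```python
# Data-level fusion: concatenate features, train one model.
# Decision-level fusion: one model per modality, average the posteriors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
touch = rng.normal(0, 1, (100, 4))       # modality 1 features
motion = rng.normal(0, 1, (100, 6))      # modality 2 features
y = (touch[:, 0] + motion[:, 0] > 0).astype(int)

early = LogisticRegression().fit(np.hstack([touch, motion]), y)   # data level

m1 = LogisticRegression().fit(touch, y)                           # decision level
m2 = LogisticRegression().fit(motion, y)
fused = (m1.predict_proba(touch) + m2.predict_proba(motion)) / 2
decisions = fused.argmax(axis=1)
```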

A Systematic Review of Data Exchange Formats in Advanced Interaction Environments

Celso A. S. Santos, Estêvão B. Saleme, Juliana C. S. de Andrade
2015 International Journal of Multimedia and Ubiquitous Engineering  
The advent of advanced user interface devices has raised the interest of industry and academia in finding new modes of Human-Computer Interaction.  ...  Advanced interfaces employ gesture recognition, as well as motion and voice capturing to enable humans to interact naturally with interactive environments without utilizing any of the traditional devices  ...  The authors developed a proof of concept using the framework Candescent NUI for the recognition part, transforming the acquired information to the MPEG-U format.  ... 
doi:10.14257/ijmue.2015.10.5.13 fatcat:vs4yotonnrfvtblm46m2si46c4
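
As a concrete, entirely hypothetical example of the kind of device-neutral interchange message such formats standardize, a gesture event serialized to JSON might look like this; the field names are illustrative and are not drawn from MPEG-U or any surveyed format.

```python
# Serialize an interaction event for exchange between recognizer and app.
import json

event = {
    "source": "hand_tracker_01",          # producing device/recognizer
    "modality": "gesture",
    "type": "swipe_left",
    "confidence": 0.87,
    "timestamp_ms": 1718031022345,
    "payload": {"hand": "right", "velocity": [-0.42, 0.03, 0.0]},
}
message = json.dumps(event)               # what goes over the wire
assert json.loads(message)["type"] == "swipe_left"
```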

Showing results 1 — 15 out of 1,585 results