
Integrated vision-based robotic arm interface for operators with upper limb mobility impairments

Hairong Jiang, Juan P. Wachs, Bradley S. Duerstock
2013 2013 IEEE 13th International Conference on Rehabilitation Robotics (ICORR)  
An integrated, computer vision-based system was developed to operate a commercial wheelchair-mounted robotic manipulator (WMRM).  ...  The gesture recognition interface incorporated hand detection, tracking and recognition algorithms to obtain a high recognition accuracy of 97.5% for an eight-gesture lexicon.  ...  We are grateful for the assistance of Jamie Nolan from the Institute for Accessible Science and Mithun Jacob from the Intelligent System and Assistive Technology (ISAT) lab at Purdue University.  ... 
doi:10.1109/icorr.2013.6650447 pmid:24187264 dblp:conf/icorr/JiangWD13 fatcat:3ex6hdsepjfbdgd62xbbhz5mka
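The entry above describes a hand detection, tracking, and recognition pipeline reaching 97.5% accuracy on an eight-gesture lexicon. As a rough illustration of the kind of first stage such a vision-based interface needs (not the authors' published algorithm), here is a minimal skin-color hand-candidate detector in OpenCV; the HSV bounds are assumptions that a real system would calibrate per user and lighting.

```python
# Minimal sketch of a skin-color hand-detection first stage; NOT the
# published WMRM pipeline, just an illustration of the kind of step it needs.
import cv2
import numpy as np

def detect_hand_candidate(frame_bgr):
    """Return the largest skin-colored contour as a rough hand candidate."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Illustrative skin-tone bounds (assumed); calibrate per user and lighting.
    mask = cv2.inRange(hsv, np.array([0, 40, 60]), np.array([25, 180, 255]))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```

Tracking and the eight-gesture classifier would sit downstream of a detector like this.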

Multimodal Human Hand Motion Sensing and Analysis - A Review

Yaxu Xue, Zhaojie Ju, Kui Xiang, Jing Chen, Honghai Liu
2018 IEEE Transactions on Cognitive and Developmental Systems  
directions.  ...  Firstly, the nature of human hand motions is discussed in terms of simple motions, such as grasps and gestures, and complex motions, e.g. in-hand manipulations and re-grasps; secondly, different techniques  ...  Rashid et al. presented a framework for the integration of gesture and posture recognition systems at the decision level to extract multiple meanings.  ... 
doi:10.1109/tcds.2018.2800167 fatcat:ojznwvn3gzg7rgnfq7yg3wf4jm

A Graph Modeling Strategy for Multi-touch Gesture Recognition

Zhaoxin Chen, Eric Anquetil, Harold Mouchere, Christian Viard-Gaudin
2014 2014 14th International Conference on Frontiers in Handwriting Recognition  
We believe that our research points to the possibility of integrating raw ink, direct manipulation, and indirect commands in complex gesture-based applications such as sketch drawing  ...  We evaluated our multi-touch recognition system on a set of 18 different multi-touch gestures. With this graph embedding method and an SVM classifier, we achieve a 94.50% recognition rate.  ...  ACKNOWLEDGMENT The authors would like to thank Somia Rahmoun and Boussad Ghedamsi for fundamental research during their master's internships early in this study.  ... 
doi:10.1109/icfhr.2014.51 dblp:conf/icfhr/ChenAMV14 fatcat:mohlbcwhmvaudlmbapgrwb6hju
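The excerpt names a two-stage recipe: embed each multi-touch gesture's graph into a vector, then classify with an SVM. Below is a sketch of the second stage only, using scikit-learn and assuming the graph embeddings have already been flattened into fixed-length vectors; the dimension and the synthetic data are placeholders, not the paper's.

```python
# Classify graph-embedded multi-touch gestures with an SVM (sketch).
# The graph embedding itself is the paper's contribution and is not reproduced here.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(360, 64))       # 64-dim embedding vectors (assumed size)
y = rng.integers(0, 18, size=360)    # 18 gesture classes, as in the paper

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")  # meaningless on random data
```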

Mutual disambiguation of 3D multimodal interaction in augmented and virtual reality

Ed Kaiser, Alex Olwal, David McGee, Hrvoje Benko, Andrea Corradini, Xiaoguang Li, Phil Cohen, Steven Feiner
2003 Proceedings of the 5th international conference on Multimodal interfaces - ICMI '03  
The resulting multimodal system fuses symbolic and statistical information from a set of 3D gesture, spoken language, and referential agents.  ...  We discuss the means by which the system supports mutual disambiguation of these modalities and information sources, and show through a user study how mutual disambiguation accounts for over 45% of the  ...  ACKNOWLEDGEMENTS Work conducted at Columbia University was supported by ONR Contracts N00014-99-1-0394, N00014-99-1-0249, and N00014-99-  ... 
doi:10.1145/958432.958438 dblp:conf/icmi/KaiserOMBCLCF03 fatcat:4bvbjx74pbdg7otb7aebstgpn4
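The mutual-disambiguation idea in this abstract can be illustrated with a toy rescoring over n-best lists: cross-modal consistency can promote a hypothesis that neither modality ranked first on its own. Every label, score, and compatibility value below is invented for illustration; the actual system fuses symbolic and statistical information from its agents rather than consulting a hand-built table.

```python
# Toy mutual disambiguation over two modalities' n-best lists (all values invented).
speech_nbest = {"move the table": 0.48, "move the label": 0.52}
gesture_nbest = {"points_at_table": 0.55, "points_at_lamp": 0.45}

# Hypothetical cross-modal compatibility of (speech, gesture) hypothesis pairs.
compatible = {("move the table", "points_at_table"): 1.0,
              ("move the label", "points_at_table"): 0.1,
              ("move the table", "points_at_lamp"): 0.1,
              ("move the label", "points_at_lamp"): 0.1}

joint = {(s, g): ps * pg * compatible[(s, g)]
         for s, ps in speech_nbest.items()
         for g, pg in gesture_nbest.items()}

# Fusion picks "move the table" even though speech alone ranked it second.
print(max(joint, key=joint.get))
```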

Translation and rotation of virtual objects in Augmented Reality: A comparison of interaction devices

Stefan Reifinger, Florian Laquai, Gerhard Rigoll
2008 Conference Proceedings / IEEE International Conference on Systems, Man and Cybernetics  
All kinds of manipulation showed that gesture recognition provides the most immersive and intuitive interaction with the lowest mental workload.  ...  For a combination of translation and rotation, gesture-recognition-based interaction turned out to be the fastest way of interacting.  ...  Translation in the Z direction and rotation around the Z axis were started by pressing the right mouse button in addition to the left button used for selection. 3) Gesture recognition system: We used the gesture  ... 
doi:10.1109/icsmc.2008.4811662 dblp:conf/smc/ReifingerLR08a fatcat:dvhpdlnolnfrtommptvsvkgma4

Extending an existing user interface toolkit to support gesture recognition

James A. Landay, Brad A. Myers
1993 INTERACT '93 and CHI '93 conference companion on Human factors in computing systems - CHI '93  
Gestures are a powerful way to specify both objects and operations with a single mark of a stylus or mouse.  ...  We have extended an existing user interface toolkit to support gestures as a standard type of interaction so that researchers can easily explore this technology.  ...  Although the above examples show that gestural input can be quite useful in direct-manipulation interfaces, developers must still overcome the problem of having to build tailored gesture recognition algorithms  ... 
doi:10.1145/259964.260123 dblp:conf/chi/LandayM93 fatcat:3nu6knuaebh2bhle7nj5tyntxq
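To make "gestures as a standard type of interaction" concrete at the toolkit level, here is a minimal stroke-template matcher. Note the hedge: Landay and Myers' toolkit used a trainable, feature-based recognizer in the style of Rubine, so this nearest-template scheme is only an illustrative stand-in, not their algorithm.

```python
# Minimal stroke-template matcher (illustrative stand-in, not Rubine's recognizer).
import numpy as np

def resample(stroke, n=32):
    """Resample an (m, 2) sequence of stylus points to n evenly spaced points."""
    p = np.asarray(stroke, dtype=float)
    seg = np.linalg.norm(np.diff(p, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    u = np.linspace(0.0, t[-1], n)
    return np.column_stack([np.interp(u, t, p[:, 0]), np.interp(u, t, p[:, 1])])

def classify(stroke, templates):
    """Label a stroke by its nearest template (mean point-to-point distance)."""
    s = resample(stroke)
    s -= s.mean(axis=0)  # translation invariance; scale/rotation handling omitted
    best, best_d = None, np.inf
    for label, tpl in templates.items():
        q = resample(tpl)
        q -= q.mean(axis=0)
        d = np.mean(np.linalg.norm(s - q, axis=1))
        if d < best_d:
            best, best_d = label, d
    return best

templates = {"line": [(0, 0), (100, 0)],
             "caret": [(0, 100), (50, 0), (100, 100)]}
print(classify([(2, 3), (95, 1)], templates))  # -> "line"
```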

A real-time system for hand gesture controlled operation of in-car devices

M. Zobl, M. Geiger, B. Schuller, M. Lang, G. Rigoll
2003 2003 International Conference on Multimedia and Expo. ICME '03. Proceedings (Cat. No.03TH8698)  
In combination with a gesture-optimized HMI, it allows intuitive and effective operation of a variety of in-car multimedia and infotainment devices with hand poses and dynamic hand gestures.  ...  The integration of more and more functionality into the human machine interface (HMI) of vehicles increases the complexity of device handling.  ...  [Flattened comparison table: rows rate "dynamic gesture recog." and "direct manipulation" with +/- marks; column headers not recoverable.]  ... 
doi:10.1109/icme.2003.1221368 dblp:conf/icmcs/ZoblGSLR03 fatcat:fq3dqof6nbekjlaaikcftt3ewi

Multimodal interaction for 2D and 3D environments [virtual reality]

P. Cohen, D. McGee, S. Oviatt, L. Wu, J. Clow, R. King, S. Julier, L. Rosenblum
1999 IEEE Computer Graphics and Applications  
QuickSet consists of a collection of "agents" including speech recognition, gesture recognition, natural language understanding, multimodal integration, a map-based user interface, and a database, running  ...  Direct voice and gesture interaction with the 3D scene offers benefits analogous to those discussed above for 2D visualizations.  ... 
doi:10.1109/38.773958 fatcat:5bn2pyqjhjhclb2q5jeavtwlcy

Speech/gesture interface to a visual computing environment for molecular biologists

R. Sharma, T.S. Huang, V.I. Pavlovic, Yunxin Zhao, Zion Lo, S. Chu, K. Schul
1996 Proceedings of 13th International Conference on Pattern Recognition  
In this paper we describe the use of visual hand gesture analysis and speech recognition for developing a speech/gesture interface for controlling a 3-D display.  ...  The free hand gestures are used for manipulating the 3-D graphical display together with a set of speech commands.  ...  and speech, (b) Interaction of gesture and the virtual scene, (c) Interaction of gesture and gaze direction, and (d) Interaction of gesture and graphical display.  ... 
doi:10.1109/icpr.1996.547311 dblp:conf/icpr/SharmaHPZLCS96 fatcat:ukxtl5sr6bgfbbellhjyz45fue

Mobile manipulator control through gesture recognition using IMUs and Online Lazy Neighborhood Graph search

Padmaja Vivek Kulkarni, Boris Illing, Bastian Gaspers, Bernd Brüggemann, Dirk Schulz
2019 ACTA IMEKO  
A software framework is developed to control a robotic platform by integrating our gesture recognition algorithm with the Robot Operating System (ROS), which is in turn used to trigger predefined robot  ...  In this paper, we present and evaluate a framework for gesture recognition using four wearable Inertial Measurement Units (IMUs) to indirectly control a mobile robot.  ...  [20] devised a control architecture for gesture-based control of UAVs using the ASUS Xtion camera. They defined nine gestures and used ROS for integrating gesture recognition with UAVs.  ... 
doi:10.21014/acta_imeko.v8i4.677 fatcat:byhj6bamlfaf7ixkgkz2ajtndi
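The control path the abstract describes (recognized gesture label in, predefined robot command out, via ROS) can be sketched as a small rospy node. The topic names, gesture lexicon, and velocities below are assumptions for illustration, not the authors' interface.

```python
# Sketch of a rospy node mapping recognized gesture labels to velocity commands.
# Topics ("/gesture_label", "/cmd_vel") and the lexicon are hypothetical.
import rospy
from std_msgs.msg import String
from geometry_msgs.msg import Twist

def make_twist(lin_x, ang_z):
    t = Twist()
    t.linear.x = lin_x
    t.angular.z = ang_z
    return t

GESTURE_TO_CMD = {                 # hypothetical gesture lexicon
    "forward": make_twist(0.3, 0.0),
    "turn_left": make_twist(0.0, 0.5),
    "stop": make_twist(0.0, 0.0),
}

def on_gesture(msg):
    cmd = GESTURE_TO_CMD.get(msg.data)
    if cmd is not None:
        pub.publish(cmd)

rospy.init_node("gesture_teleop")
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
rospy.Subscriber("/gesture_label", String, on_gesture)
rospy.spin()
```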

Intention, Context and Gesture Recognition for Sterile MRI Navigation in the Operating Room [chapter]

Mithun Jacob, Christopher Cange, Rebecca Packer, Juan P. Wachs
2012 Lecture Notes in Computer Science  
with context integration at similar recognition rates.  ...  The system incorporates contextual cues and intent of the user to strengthen the gesture recognition process.  ...  This project was supported by grant number R03HS019837 from the Agency for Healthcare Research and Quality (AHRQ).  ... 
doi:10.1007/978-3-642-33275-3_27 fatcat:itgxuwl2avfmbh4tao5dwt5z54
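A toy version of the "contextual cues strengthen recognition" idea in this entry: reweight the recognizer's per-gesture scores with a context-dependent prior, so that context can flip a near-tie. The command set and all numbers are invented; the paper's actual context and intent models are richer than a fixed prior.

```python
# Toy context reweighting (sketch): context prior rescales recognizer scores.
# Commands and probabilities are invented, not the paper's models.
import numpy as np

gestures = ["next_image", "prev_image", "zoom_in", "zoom_out"]
likelihood = np.array([0.28, 0.30, 0.24, 0.18])      # gesture recognizer alone
context_prior = np.array([0.40, 0.20, 0.30, 0.10])   # hypothetical browsing context

posterior = likelihood * context_prior
posterior /= posterior.sum()
# Alone, the recognizer would pick "prev_image"; with context, "next_image" wins.
print(gestures[int(np.argmax(posterior))], posterior.round(3))
```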

Integrating context-free and context-dependent attentional mechanisms for gestural object reference

Gunther Heidemann, Robert Rae, Holger Bekel, Ingo Bax, Helge Ritter
2004 Machine Vision and Applications  
To evaluate hand movements for pointing gestures and to recognise object references, an approach to integrating bottom-up generated feature maps and top-down propagated recognition results is introduced  ...  This method facilitates both the integration of different modalities and the generation of auditory feedback.  ...  This work was supported within the project VAMPIRE (Visual Active Memory Processes and Interactive REtrieval), which is part of the IST programme (IST-2001-34401).  ... 
doi:10.1007/s00138-004-0157-2 fatcat:hiebvzudzrclzlglksrufu5tku
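The integration step named in this abstract (and in the conference version below) can be caricatured as fusing a normalized bottom-up feature map with a top-down map propagated from recognition results. The convex-combination weighting here is an assumption for illustration, not the paper's scheme.

```python
# Sketch: fuse bottom-up saliency with a top-down recognition-driven map.
# The alpha-weighted combination is an assumed stand-in for the paper's method.
import numpy as np

rng = np.random.default_rng(2)
bottom_up = rng.random((48, 64))        # e.g. a contrast/color feature map
top_down = np.zeros((48, 64))
top_down[20:30, 30:44] = 1.0            # region a recognized object predicts

def fuse(bu, td, alpha=0.6):
    """Convex combination of min-max-normalized maps."""
    norm = lambda m: (m - m.min()) / (np.ptp(m) + 1e-9)
    return alpha * norm(bu) + (1 - alpha) * norm(td)

# The argmax of the fused map gives the next attended image location.
attended = np.unravel_index(np.argmax(fuse(bottom_up, top_down)), bottom_up.shape)
print(attended)
```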

Integrating Context-Free and Context-Dependent Attentional Mechanisms for Gestural Object Reference [chapter]

Gunther Heidemann, Robert Rae, Holger Bekel, Ingo Bax, Helge Ritter
2003 Lecture Notes in Computer Science  
To evaluate hand movements for pointing gestures and to recognise object references, an approach to integrating bottom-up generated feature maps and top-down propagated recognition results is introduced  ...  This method facilitates both the integration of different modalities and the generation of auditory feedback.  ...  This work was supported within the project VAMPIRE (Visual Active Memory Processes and Interactive REtrieval), which is part of the IST programme (IST-2001-34401).  ... 
doi:10.1007/3-540-36592-3_3 fatcat:vqemcvw5ere4xmvitcxex73nwy

Speech/gesture interface to a visual-computing environment

R. Sharma, M. Zeller, V.I. Pavlovic, T.S. Huang, Z. Lo, S. Chu, Y. Zhao, J.C. Phillips, K. Schulten
2000 IEEE Computer Graphics and Applications  
DAAL01-96-2-0003); Sumitomo Electric Industries; the National Institutes of Health (PHS 5 P41 RR05969-04); and the Roy J. Carver Charitable Trust.  ...  Gestures for manipulating the 3D display: We also developed an AGR system based on HMMs to recognize basic manipulative hand gestures.  ...  automatic speech recognition (ASR), aided by a microphone, to recognize voice commands; two strategically positioned cameras to detect hand gestures; and automatic gesture recognition (AGR), a set  ... 
doi:10.1109/38.824531 fatcat:wn3xuvidrfaffepp62o6yfkgji
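The AGR system mentioned in this excerpt follows the classic one-HMM-per-gesture pattern: train one model per class, then label a new trajectory by maximum log-likelihood. Here is a sketch with hmmlearn on synthetic 2-D trajectories; the library, features, and class names are stand-ins for the authors' implementation.

```python
# One-HMM-per-gesture classification (sketch, using hmmlearn, not the authors' code).
# Training data is synthetic; real input would be hand-trajectory features.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)

def train_class_hmm(sequences, n_states=4):
    """Fit one Gaussian HMM on all training sequences of a gesture class."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    return model.fit(X, lengths)

# Hypothetical classes: two synthetic 2-D trajectory distributions.
train = {"rotate": [rng.normal(0.0, 1.0, size=(30, 2)) for _ in range(10)],
         "grasp":  [rng.normal(3.0, 1.0, size=(30, 2)) for _ in range(10)]}
models = {label: train_class_hmm(seqs) for label, seqs in train.items()}

# Classify a new sequence by the highest per-model log-likelihood.
test_seq = rng.normal(3.0, 1.0, size=(30, 2))   # should score as "grasp"
print(max(models, key=lambda lbl: models[lbl].score(test_seq)))
```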
Showing results 1 — 15 out of 39,870 results