934 Hits in 5.8 sec

Facial Feature Tracking and Occlusion Recovery in American Sign Language

2006 6th International Workshop on Pattern Recognition in Information Systems (unpublished)
Facial features play an important role in expressing grammatical information in signed languages, including American Sign Language (ASL).  ...  Therefore, identification of these facial gestures is essential to sign language recognition. One problem with detection of such grammatical indicators is occlusion recovery.  ...  We would like to thank Stan Sclaroff and Vassilis Athitsos for helping to collect the ASL videos.  ... 
doi:10.5220/0002471700810090 fatcat:34wlafhdazdsjfazkkyv4yt2dm

A review of motion analysis methods for human Nonverbal Communication Computing

Dimitris Metaxas, Shaoting Zhang
2013 Image and Vision Computing  
It uses image sequences to detect and track people.  ...  They include face tracking, expression recognition, body reconstruction, and group activity analysis.  ...  We also would like to thank our long-time collaborators Judee Burgoon (UA), David Dinges (UPENN), and Carol Neidle (BU). Metaxas would like to thank his previous PhD students Ioannis Kakadiaris  ... 
doi:10.1016/j.imavis.2013.03.005 fatcat:ylxt5bph2jfgrfd5a4c22qn66u

Sign Language Recognition [chapter]

Helen Cooper, Brian Holt, Richard Bowden
2011 Visual Analysis of Humans  
Classifying the manual aspects of sign (similar to gestures) is then discussed from a tracking and non-tracking viewpoint before summarising some of the approaches to the non-manual aspects of sign languages  ...  This chapter covers the key aspects of Sign Language Recognition (SLR), starting with a brief introduction to the motivations and requirements, followed by a précis of sign linguistics and their impact  ...  Studies of American Sign Language (ASL) by Liddell and Johnson [64] model sign language on the movement-hold system.  ... 
doi:10.1007/978-0-85729-997-0_27 fatcat:tc4ubxuh6zfplau5p5zbomohmy

The Best of Both Worlds: Combining 3D Deformable Models with Active Shape Models

Christian Vogler, Zhiguo Li, Atul Kanaujia, Siome Goldenstein, Dimitris Metaxas
2007 2007 IEEE 11th International Conference on Computer Vision  
We demonstrate the strength of the framework in experiments that include automated 3D model fitting and facial expression tracking for a variety of applications, including sign language.  ...  The ASMs, in contrast, provide the majority of reliable 2D image features over time, and aid in recovering from drift and total occlusions.  ...  Acknowledgments The research in this paper was supported by NSF CNS-0427267 and CNS-0428231, research scientist funds by the Gallaudet Research Institute, CNPq PQ-301278/2004-0, and FAPESP 07/50040-8.  ... 
doi:10.1109/iccv.2007.4409015 dblp:conf/iccv/VoglerLKGM07 fatcat:uaclxgp6kbastihqoytsiz2qwq

Gesture Recognition: A Survey

Sushmita Mitra, Tinku Acharya
2007 IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)  
In this paper, we provide a survey on gesture recognition with particular emphasis on hand gestures and facial expressions.  ...  The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality.  ...  Connectionist Approach to Hand Gesture Recognition TDNN has been applied in [27] to recognize gestures related to American Sign Language (ASL).  ... 
doi:10.1109/tsmcc.2007.893280 fatcat:iywyfj465zgfnp4n2o7usgcsne

Real-Time American Sign Language Recognition from Video Using Hidden Markov Models [chapter]

Thad Starner, Alex Pentland
1997 Computational Imaging and Vision  
Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as are found in sign language.  ...  We describe a real-time HMM-based system for recognizing sentence-level American Sign Language (ASL) which attains a word accuracy of 99.2% without explicitly modeling the fingers.  ...  In this paper we describe an extensible system which uses a single color camera to track hands in real time and interprets American Sign Language (ASL) using Hidden Markov Models (HMMs).  ... 
doi:10.1007/978-94-015-8935-2_10 fatcat:wb756l47snc27cuvpoij6xjtsi

Automated Face Analysis for Affective Computing [chapter]

Rafael Calvo, Sidney D'Mello, Jonathan Gratch, Arvid Kappas, Jeffrey F. Cohn, Fernando De La Torre
2015 The Oxford Handbook of Affective Computing  
We review 1) human-observer based approaches to measurement that inform AFA; 2) advances in face detection and tracking, feature extraction, registration, and supervised learning; and 3) applications in  ...  Facial expression communicates emotion, intention, and physical state, and regulates interpersonal behavior.  ...  Acknowledgements Research reported in this chapter was supported in part by the National Institutes of Health (NIH) under Award Number MHR01MH096951 and by the US Army Research Laboratory (ARL) under the  ... 
doi:10.1093/oxfordhb/9780199942237.013.020 fatcat:qrhzadx6y5bxzjo6nnfshhwoze

Classification of extreme facial events in sign language videos

Epameinondas Antonakos, Vassilis Pitsikalis, Petros Maragos
2014 EURASIP Journal on Image and Video Processing  
We propose a new approach for Extreme States Classification (ESC) on feature spaces of facial cues in sign language (SL) videos.  ...  The method is built upon Active Appearance Model (AAM) face tracking and feature extraction of global and local AAMs.  ...  It was also partially supported by the EU research projects Dicta-Sign (FP7-231135) and DIRHA (FP7-288121).  ... 
doi:10.1186/1687-5281-2014-14 fatcat:iewpb7k4gbbvfgdmi6tlkkpq5y

Looking at people: sensing for ubiquitous and wearable computing

A. Pentland
2000 IEEE Transactions on Pattern Analysis and Machine Intelligence  
Abstract: The research topic of looking at people, that is, giving machines the ability to detect, track, and identify people and, more generally, to interpret human behavior, has become a central topic in machine vision research.  ...  Examples of more complex phenomena are words in American Sign Language and pedestrian walking patterns within a plaza.  ... 
doi:10.1109/34.824823 fatcat:emk266gdc5eudj5olzzfxfclmy

Hand Gesture Recognition Based on Computer Vision: A Review of Techniques

Munir Oudah, Ali Al-Naji, Javaan Chahl
2020 Journal of Imaging  
In other words, the hand sign can be classified under many headings, such as posture and gesture, as well as dynamic and static, or a hybrid of the two.  ...  Hand gestures are a form of nonverbal communication that can be used in several fields such as communication between deaf-mute people, robot control, human–computer interaction (HCI), home automation and  ...  Acknowledgments: The authors would like to thank the staff in Electrical Engineering Technical College, Middle Technical University, Baghdad, Iraq and the participants for their support to conduct the  ... 
doi:10.3390/jimaging6080073 pmid:34460688 fatcat:zmid23k67vbozb54sfji4nlfiy

Challenges in Multi-modal Gesture Recognition [chapter]

Sergio Escalera, Vassilis Athitsos, Isabelle Guyon
2017 Gesture Recognition  
using multimodal data in this area of application.  ...  We published papers using this technology and other more conventional methods, including regular video cameras, to record data, thus providing a good overview of uses of machine learning and computer vision  ...  Acknowledgments This work has been partially supported by ChaLearn Challenges in Machine Learning http://chalearn.org, the Human Pose Recovery and Behavior Analysis Group, the Pascal2 network of excellence  ... 
doi:10.1007/978-3-319-57021-1_1 fatcat:vfeijghqtvffllogw2tium3pwa

Challenges in multimodal gesture recognition

Sergio Escalera, Vassilis Athitsos, Isabelle Guyon
2016 Journal of Machine Learning Research  
using multimodal data in this area of application.  ...  We published papers using this technology and other more conventional methods, including regular video cameras, to record data, thus providing a good overview of uses of machine learning and computer vision  ...  Acknowledgments This work has been partially supported by ChaLearn Challenges in Machine Learning http://chalearn.org, the Human Pose Recovery and Behavior Analysis Group, the Pascal2 network of excellence  ... 
dblp:journals/jmlr/EscaleraAG16 fatcat:r4q2iywy7balhjlh2vpknltrde

ChaLearn multi-modal gesture recognition 2013

Sergio Escalera, Cristian Sminchisescu, Richard Bowden, Stan Sclaroff, Jordi Gonzàlez, Xavier Baró, Miguel Reyes, Isabelle Guyon, Vassilis Athitsos, Hugo Escalante, Leonid Sigal, Antonis Argyros
2013 Proceedings of the 15th ACM International Conference on Multimodal Interaction (ICMI '13)  
More than 54 teams participated in the challenge and a final error rate of 12% was achieved by the winner of the competition.  ...  This includes multi-modal descriptors, multi-class learning strategies for segmentation and classification of temporal data, as well as relevant applications in the field, including multi-modal Social  ...  de Barcelona, the Computer Vision Center, the Universitat Oberta de Catalunya, and the Human Pose Recovery and Behavior Analysis Group.  ... 
doi:10.1145/2522848.2532597 dblp:conf/icmi/EscaleraGBRGAESASBS13 fatcat:tx5jk4bdjjaohk3n4brrj62e4y

Survey on Emotional Body Gesture Recognition

Fatemeh Noroozi, Dorota Kaminska, Ciprian Corneanu, Tomasz Sapinski, Sergio Escalera, Gholamreza Anbarjafari
2019 IEEE Transactions on Affective Computing  
We introduce person detection and comment on static and dynamic body pose estimation methods, both in RGB and 3D.  ...  We first introduce emotional body gestures as a component of what is commonly known as "body language" and comment on general aspects such as gender differences and culture dependence.  ...  Audiovisual information of three different expressions, i.e. intensities, of 23 emotions is included, as well as tracking of facial features and skeletal tracking. 10 professional actors participated  ... 
doi:10.1109/taffc.2018.2874986 fatcat:zjnr2w4orje7vj2bhmia4f5qki

A Survey of Applications and Human Motion Recognition with Microsoft Kinect

Roanna Lun, Wenbing Zhao
2015 International Journal of Pattern Recognition and Artificial Intelligence  
On the applications front, we review the applications of the Kinect technology in a variety of areas, including healthcare, education and performing arts, robotics, sign language recognition, retail services  ...  As such, it has commanded intense interest in research and development on the Kinect technology.  ...  Acknowledgments The authors wish to sincerely thank the anonymous reviewers, and the editor, for their invaluable suggestions in improving an earlier version of this article.  ... 
doi:10.1142/s0218001415550083 fatcat:7mwojm7lirc3dpyye2zui2kioy
Showing results 1–15 of 934