15,673 Hits in 2.2 sec

Multi-Task and Multi-Modal Learning for RGB Dynamic Gesture Recognition

Dinghao Fan, Hengjie Lu, Shugong Xu, Shan Cao
2021 IEEE Sensors Journal  
Our framework is trained to learn a representation for multi-task learning: gesture segmentation and gesture recognition.  ...  Existing multi-modal gesture recognition systems take multi-modal data as input to improve accuracy, but such methods require more modality sensors, which greatly limits their application scenarios.  ...  CONCLUSION: This paper proposes an efficient end-to-end multi-task and multi-modal learning 2D CNN-based framework for RGB dynamic gesture recognition.  ... 
doi:10.1109/jsen.2021.3123443 fatcat:4biyoph3xbe6dksji53pzpcc6i

Margin-constrained multiple kernel learning based multi-modal fusion for affect recognition

Shizhi Chen, Yingli Tian
2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG)  
for application of multi-modal feature fusion.  ...  We validate the proposed MCMKL method for affect recognition from face and body gesture modalities on the FABO dataset.  ...  MULTI-MODAL FUSION FOR AFFECT RECOGNITION Affect recognition from multiple modalities is a challenging problem.  ... 
doi:10.1109/fg.2013.6553810 dblp:conf/fgr/ChenT13 fatcat:yufqjid6dfcqld3roe6k5ew24u

Multi-modal Fusion for Single-Stage Continuous Gesture Recognition [article]

Harshala Gammulle, Simon Denman, Sridha Sridharan, Clinton Fookes
2021 arXiv   pre-print
This approach learns the natural transitions between gestures and non-gestures without the need for a pre-processing segmentation step to detect individual gestures.  ...  In contrast, we introduce a single-stage continuous gesture recognition framework, called Temporal Multi-Modal Fusion (TMMF), that can detect and classify multiple gestures in a video via a single model  ...  However, the models in [9], [10] are unsuitable for a multi-modal problem. Hence we design our gesture recognition model to exploit multi-modal data.  ... 
arXiv:2011.04945v2 fatcat:q4z7xkt22vbdjlavwkglom33gy

Improving Dynamic Hand Gesture Recognition on Multi-views with Multi-modalities

Huong-Giang Doan (Control and Automation Faculty, Electrical Power University, Hanoi, Vietnam), Van-Toi Nguyen
2019 International Journal of Machine Learning and Computing  
The topic of hand gesture recognition has been researched for many recent decades because it can be used in many fields such as sign language, virtual games, human-robot interaction, entertainment and so on.  ...  We consider methods for extracting features of different data sources (RGB images and depth images) based on both manifold learning and deep learning techniques.  ...  APPROACH FOR MULTI-MODALITIES HAND GESTURE RECOGNITION  ... 
doi:10.18178/ijmlc.2019.9.6.875 fatcat:5xcpf7qac5bl7ornq2h6y6buua

ChaLearn multi-modal gesture recognition 2013

Sergio Escalera, Cristian Sminchisescu, Richard Bowden, Stan Sclaroff, Jordi Gonzàlez, Xavier Baró, Miguel Reyes, Isabelle Guyon, Vassilis Athitsos, Hugo Escalante, Leonid Sigal, Antonis Argyros
2013 Proceedings of the 15th ACM on International conference on multimodal interaction - ICMI '13  
A total of 9 relevant papers on multi-modal gesture recognition were accepted for presentation.  ...  We organized a Grand Challenge and Workshop on Multi-Modal Gesture Recognition.  ...  We thank the Kaggle submission website for wonderful support, together with the committee members and participants of the ICMI 2013 Multi-modal Gesture Recognition workshop for their support, reviews and  ... 
doi:10.1145/2522848.2532597 dblp:conf/icmi/EscaleraGBRGAESASBS13 fatcat:tx5jk4bdjjaohk3n4brrj62e4y

Relational Graph Learning on Visual and Kinematics Embeddings for Accurate Gesture Recognition in Robotic Surgery [article]

Yonghao Long, Jie Ying Wu, Bo Lu, Yueming Jin, Mathias Unberath, Yun-Hui Liu, Pheng Ann Heng, Qi Dou
2021 arXiv   pre-print
Next, we identify multi-relations in these multi-modal embeddings and leverage them through a hierarchical relational graph learning module.  ...  kinematics data to boost gesture recognition accuracies.  ...  Fig. 1: Overview of our proposed multi-modal relational graph network for surgical gesture recognition in robot-assisted surgery.  ... 
arXiv:2011.01619v2 fatcat:5io4a2qwtrfhpk2skxgoqu4m6i

Searching Multi-Rate and Multi-Modal Temporal Enhanced Networks for Gesture Recognition [article]

Zitong Yu, Benjia Zhou, Jun Wan, Pichao Wang, Haoyu Chen, Xin Liu, Stan Z. Li, Guoying Zhao
2020 arXiv   pre-print
for gesture recognition.  ...  In this paper, we propose the first neural architecture search (NAS)-based method for RGB-D gesture recognition.  ...  For RGB-D based gesture recognition, complementary feature learning from different data modalities is beneficial.  ... 
arXiv:2008.09412v1 fatcat:vphe2saxbjhxtee2twdkbby3yi

Multi-Modal Cross Learning for an FMCW Radar Assisted by Thermal and RGB Cameras to Monitor Gestures and Cooking Processes

Marco Altmann, Peter Ott, Nicolaj C. Stache, Christian Waldschmidt
2021 IEEE Access  
The multi-modal cross learning approach considerably outperforms single-modal approaches on that challenging classification task.  ...  This paper proposes a multi-modal cross learning approach to augment the neural network training phase by additional sensor data.  ...  Reference [7] extends these modalities with a pose estimation and audio using a multi-modal dropout approach for gesture recognition.  ... 
doi:10.1109/access.2021.3056878 fatcat:gxf6ucqlmfcr5ktejfntjke5ii

ChaLearn Looking at People: A Review of Events and Resources [article]

Sergio Escalera, Xavier Baró, Hugo Jair Escalante, Isabelle Guyon
2017 arXiv   pre-print
We started in 2011 (with the release of the first Kinect device) to run challenges related to human action/activity and gesture recognition.  ...  /moving backgrounds or cameras; large-scale gesture recognition; multi-modal features for gesture recognition, including non-conventional input sources such as inertial, depth or thermal data; integrating  ...  recognition from still images or image sequences, often including multi-modal data.  ... 
arXiv:1701.02664v2 fatcat:hpcusvcofvhrtby6ypdfdqyksy

ChaLearn Looking at People Challenge 2014: Dataset and Results [chapter]

Sergio Escalera, Xavier Baró, Jordi Gonzàlez, Miguel A. Bautista, Meysam Madadi, Miguel Reyes, Víctor Ponce-López, Hugo J. Escalante, Jamie Shotton, Isabelle Guyon
2015 Lecture Notes in Computer Science  
for multi-modal gesture recognition, making it feasible to be applied in real applications.  ...  The competition was split into three independent tracks: human pose recovery from RGB data, action and interaction recognition from RGB data sequences, and multi-modal gesture recognition from RGB-Depth  ...  Special thanks to Pau Rodríguez for annotating part of the multi-modal gestures. We thank Microsoft Codalab submission website and researchers who joined the PC and reviewed for the workshop.  ... 
doi:10.1007/978-3-319-16178-5_32 fatcat:arlegvujt5c2fb3fmou4fx5iwu

Bayesian Co-Boosting for Multi-modal Gesture Recognition [chapter]

Jiaxiang Wu, Jian Cheng
2017 Gesture Recognition  
In this paper, we propose a novel Bayesian Co-Boosting framework for multi-modal gesture recognition.  ...  With the development of data acquisition equipment, more and more modalities become available for gesture recognition.  ...  Model Learning: In the task of multi-modal gesture recognition, two or more modalities (in this paper, we constrain the number of modalities to two) are simultaneously available for describing gestures  ... 
doi:10.1007/978-3-319-57021-1_14 fatcat:3j4adpazx5fgvhxkolryezmv2u

Gesture Recognition Based on 3D Human Pose Estimation and Body Part Segmentation for RGB Data Input

Ngoc-Hoang Nguyen, Tran-Dac-Thinh Phan, Guee-Sang Lee, Soo-Hyung Kim, Hyung-Jeong Yang
2020 Applied Sciences  
This paper presents a novel approach for dynamic gesture recognition using multi-features extracted from RGB data input.  ...  In this paper, we develop a gesture recognition approach by hybrid deep learning where RGB frames, 3D skeleton joint information, and body part segmentation are used to overcome such problems.  ...  In this paper, we propose a multi-modal gesture recognition method for RGB data input with a multi-modal algorithm.  ... 
doi:10.3390/app10186188 fatcat:sdygawbngncjtinuek225vgzoy

Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [article]

Xueying Shi, Yueming Jin, Qi Dou, Jing Qin, Pheng-Ann Heng
2021 arXiv   pre-print
We extensively evaluate our method for gesture recognition using the DESK dataset with the peg transfer procedure.  ...  It remedies the domain gap with enhanced transferable features by using temporal cues in videos, and inherent correlations in multi-modal data towards recognizing gestures.  ...  Several works propose to develop multi-modal learning methods to leverage the complementary cues contained in the video vision data and kinematics for accurate gesture recognition [19], [20].  ... 
arXiv:2103.04075v2 fatcat:w2agc7ygjzhk5lyieyfxdvqip4

Multi-modal Gesture Recognition Using Skeletal Joints and Motion Trail Model [chapter]

Bin Liang, Lihong Zheng
2015 Lecture Notes in Computer Science  
The proposed approach is evaluated on the 2014 ChaLearn Multi-modal Gesture Recognition Challenge dataset.  ...  This paper proposes a novel approach to multi-modal gesture recognition by using skeletal joints and motion trail model. The approach includes two modules, i.e. spotting and recognition.  ...  In this paper, we propose to use multi-modal data for gesture recognition.  ... 
doi:10.1007/978-3-319-16178-5_44 fatcat:3byivkbqizhvhb72d5lm6vtzkm

Multi-modal Gesture Recognition Using Integrated Model of Motion, Audio and Video
(Original title in Japanese: 身体運動・音声・映像の特徴を用いた統合モデルによるマルチモーダルジェスチャー認識)

Yusuke GOUTSU, Takaki KOBAYASHI, Junya OBARA, Ikuo KUSAJIMA, Kazunari TAKEICHI, Wataru TAKANO, Yoshihiko NAKAMURA
2015 Transactions of the Society of Instrument and Control Engineers  
All the experiments are conducted on the dataset provided by the competition organizer of MMGRC, a workshop for the Multi-Modal Gesture Recognition Challenge.  ...  With increasing motion sensor development, multiple data sources have become available, which leads to the rise of multi-modal gesture recognition.  ...  Fig. 1: Overview of the multi-modal gesture recognition system; motion, audio and video data are captured by Kinect.  ... 
doi:10.9746/sicetr.51.390 fatcat:rxofp2wbhzcrzlzlstfwrpfkvi
Showing results 1 — 15 of 15,673