
Depth Data and Fusion of Feature Descriptors for Static Gesture Recognition

Prachi Sharma, R S Anand
2020 IET Image Processing  
In this study, the authors propose a novel methodology for static gesture recognition in a complex background using only the depth map from Microsoft's Kinect camera.  ...  Four different types of features are extracted and analysed on two public static gesture datasets.  ...  Acknowledgments The authors are thankful to the Ministry of Human Resource Development (MHRD), Government of India, for financial support in pursuing the proposed work.  ... 
doi:10.1049/iet-ipr.2019.0230 fatcat:b3oh5mxsnjcadotvnlx65tsjum

Multi-modal Gesture Recognition Using Skeletal Joints and Motion Trail Model [chapter]

Bin Liang, Lihong Zheng
2015 Lecture Notes in Computer Science  
Finally, a fusion scheme incorporates the probability weights of each classifier for gesture recognition.  ...  For depth maps and user masks, we employ 2D Motion Trail Model (2DMTM) for gesture representation to capture motion region information.  ...  Inspired by the great success of silhouette based methods developed for visual data, Jalal et al. [10] extract depth silhouettes to construct feature vectors. HMM is then utilized for recognition.  ... 
doi:10.1007/978-3-319-16178-5_44 fatcat:3byivkbqizhvhb72d5lm6vtzkm
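The probability-weighted fusion scheme this entry mentions can be sketched as follows. This is a minimal illustration of weighted late fusion, not the paper's implementation; the gesture names, classifier outputs, and weights are invented for the example.

```python
# Weighted late fusion of per-classifier gesture probabilities:
# each classifier contributes its class posteriors scaled by a
# reliability weight, and the fused scores are renormalized.

def fuse_probabilities(per_classifier_probs, weights):
    """per_classifier_probs: list of {gesture: probability} dicts.
    weights: per-classifier reliability weights (same length).
    Returns a normalized fused {gesture: probability} dict."""
    fused = {}
    for probs, w in zip(per_classifier_probs, weights):
        for gesture, p in probs.items():
            fused[gesture] = fused.get(gesture, 0.0) + w * p
    total = sum(fused.values())
    return {g: p / total for g, p in fused.items()}

# Illustrative outputs from a skeleton-based and a motion-trail classifier.
skeleton = {"wave": 0.7, "push": 0.3}
motion_trail = {"wave": 0.4, "push": 0.6}
fused = fuse_probabilities([skeleton, motion_trail], weights=[0.6, 0.4])
best = max(fused, key=fused.get)
```

With these toy numbers, the fused score for "wave" is 0.6 * 0.7 + 0.4 * 0.4 = 0.58, so the fused decision follows the more reliable skeleton classifier.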

Using Appearance-Based Hand Features for Dynamic RGB-D Gesture Recognition

Xi Chen, Markus Koskela
2014 2014 22nd International Conference on Pattern Recognition  
We extract multiple hand features with the assistance of body and hand masks from RGB and depth frames, and full-body features from the skeleton data.  ...  In this paper we propose an online gesture recognition method for multimodal RGB-D data.  ...  In our work, we use multimodal data from the skeleton model, RGB, and depth through fusion in a common gesture recognition framework.  ... 
doi:10.1109/icpr.2014.79 dblp:conf/icpr/ChenK14 fatcat:4nxyie4p3zbndby4pok2dd3ekq

Real-Time Hand Gesture Recognition Using Fine-Tuned Convolutional Neural Network

Jaya Prakash Sahoo, Allam Jaya Prakash, Paweł Pławiak, Saunak Samantray
2022 Sensors  
Hand gesture recognition is one of the most effective modes of interaction between humans and computers due to being highly flexible and user-friendly.  ...  Due to the unavailability of large labeled image samples of static hand gestures, it is a challenging task to train deep CNNs such as AlexNet, VGG-16 and ResNet from scratch.  ...  The general steps for vision-based static hand gesture recognition are data acquisition, segmentation of the hand region, feature extraction and gesture classification based on identified features [7,  ... 
doi:10.3390/s22030706 pmid:35161453 pmcid:PMC8840381 fatcat:kcoz2emcovcxvijy2fgzy6mih4
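The fine-tuning strategy this entry describes — reusing a pretrained backbone when labeled gesture images are scarce — can be sketched in miniature: freeze the feature extractor and train only a new classification head. The extractor below is a toy stand-in, not AlexNet/VGG-16/ResNet, and the data are invented.

```python
# Transfer-learning sketch: a frozen feature extractor plus a small
# logistic-regression head trained on the few labeled samples.
import math

def frozen_extractor(image):
    # Stand-in for pretrained CNN features: mean intensity and contrast.
    return [sum(image) / len(image), max(image) - min(image)]

def train_head(samples, labels, lr=0.5, epochs=200):
    # Train only the head weights; the extractor stays fixed.
    w = [0.0, 0.0]; b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            f = frozen_extractor(x)
            z = w[0] * f[0] + w[1] * f[1] + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # cross-entropy gradient w.r.t. the logit z
            w[0] -= lr * g * f[0]; w[1] -= lr * g * f[1]; b -= lr * g
    return w, b

def predict(w, b, image):
    f = frozen_extractor(image)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

# Toy "images": flat gestures (class 0) vs. high-contrast gestures (class 1).
flat = [[0.5] * 8 for _ in range(5)]
contrast = [[0.0, 1.0] * 4 for _ in range(5)]
w, b = train_head(flat + contrast, [0] * 5 + [1] * 5)
```

Only the two head weights and the bias are updated, which is why this approach works with far fewer labeled samples than training a deep network from scratch.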

HSFE Network and Fusion Model based Dynamic Hand Gesture Recognition

2020 KSII Transactions on Internet and Information Systems  
With the growth of hand-pose estimation and 3D depth sensors, depth and hand-skeleton datasets have been proposed, prompting much research into depth-based and 3D hand-skeleton approaches.  ...  Fusion of the two models achieves the best accuracy on the dynamic hand gesture (DHG) dataset.  ...  Unlike static hand gesture recognition (s-HGR), which detects the hand region and extracts hand features from hand segmentation at a specific time, dynamic hand gesture recognition (d-HGR) needs to  ... 
doi:10.3837/tiis.2020.09.020 fatcat:tmn74iy5ujfmlip5zfki7w3dim

A light-weight real-time applicable hand gesture recognition system for automotive applications

Thomas Kopinski, Stephane Magand, Alexander Gepperth, Uwe Handmann
2015 2015 IEEE Intelligent Vehicles Symposium (IV)  
A sophisticated temporal fusion technique boosts the overall robustness of recognition by taking into account data coming from previous classification steps.  ...  We present a novel approach for improved hand-gesture recognition by a single time-of-flight (ToF) sensor in an automotive environment.  ...  The next steps consist of adding a disambiguation module for the most difficult cases, as well as using our fusion technique to extend our recognition from static hand poses to dynamic hand gestures.  ... 
doi:10.1109/ivs.2015.7225708 dblp:conf/ivs/KopinskiMGH15 fatcat:4vzyfcmo5nbj3cuqm6yr3fxy3a

Spatial-Temporal Shape and Motion Features for Dynamic Hand Gesture Recognition in Depth Video

Vo Hoai Viet, Nguyen Thanh Thien Phuc, Pham Minh Hoang, Liu Kim Nghia
2018 International Journal of Image Graphics and Signal Processing  
In this work, we propose a set of features extracted from depth maps for dynamic hand gesture recognition. We extract HOG2 for the shape and appearance of the hand in gesture representation.  ...  With the birth of depth sensors, many new techniques have been developed and gained a lot of achievements.  ...  In this part, we create a compact feature for dynamic gesture representation by early fusion of the two mentioned descriptors, HOG2 and HOF2: h_fusion = [h_HOG2 ; h_HOF2] (10). The order of histogram components  ... 
doi:10.5815/ijigsp.2018.09.03 fatcat:rs2e7dxvsbgmpplhqlx434eumi
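The early-fusion step in equation (10) is simply the concatenation of the shape (HOG2) and motion (HOF2) histograms into one descriptor before classification. A minimal sketch, with each part L2-normalized first (a common convention, though the paper may differ); the toy histograms below are illustrative placeholders, not real HOG2/HOF2 outputs.

```python
# Early fusion of two descriptor histograms by concatenation:
# h_fusion = [h_HOG2 ; h_HOF2].

def l2_normalize(h, eps=1e-9):
    # Scale the histogram to unit L2 norm so neither part dominates.
    norm = sum(v * v for v in h) ** 0.5
    return [v / (norm + eps) for v in h]

def early_fuse(h_hog2, h_hof2):
    # Normalize each descriptor independently, then concatenate.
    return l2_normalize(h_hog2) + l2_normalize(h_hof2)

h_hog2 = [3.0, 4.0]        # toy 2-bin shape histogram
h_hof2 = [1.0, 0.0, 0.0]   # toy 3-bin motion histogram
h_fusion = early_fuse(h_hog2, h_hof2)
```

The fused descriptor's length is the sum of the two parts' lengths, which is why the order of histogram components, noted in the snippet above, must be fixed consistently across all samples.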

Hand Posture Recognition Using Skeletal Data and Distance Descriptor

Tomasz Kapuściński, Dawid Warchoł
2020 Applied Sciences  
In this paper, a method for the recognition of static hand postures based on skeletal data was presented. A novel descriptor was proposed.  ...  The experiments were performed using three challenging datasets of gestures from Polish and American Sign Languages. The proposed method was compared with other approaches found in the literature.  ...  The recognition rate of 97.7% for a fusion of features and 97.1% for a fusion of classifiers was obtained.  ... 
doi:10.3390/app10062132 fatcat:muvjgz23o5gahgx27ilozdfvfm

Vision Based Hand Gesture Recognition Using Fourier Descriptor for Indian Sign Language

Archana Ghotkar, Pujashree Vidap, Santosh Ghotkar
2016 Signal & Image Processing An International Journal  
In this paper, a methodology for recognition of static ISL manual alphabets, numbers and static symbols is given. The ISL alphabet consists of single-handed and two-handed signs.  ...  Among different human modalities, the hand is the primary modality for any sign language interpretation system, so hand gestures were used for recognition of manual alphabets and numbers.  ...  Currently, research work is ongoing on 3-D image data, which includes depth information [16], for gesture processing with various depth cameras such as the Leap Motion controller [17] , Kinect sensor  ... 
doi:10.5121/sipij.2016.7603 fatcat:dntkiqpf7zdv5a4w4g2lexvrxi

Probability-based Dynamic Time Warping and Bag-of-Visual-and-Depth-Words for Human Gesture Recognition in RGB-D

Antonio Hernández-Vela, Miguel Ángel Bautista, Xavier Perez-Sala, Víctor Ponce-López, Sergio Escalera, Xavier Baró, Oriol Pujol, Cecilio Angulo
2014 Pattern Recognition Letters  
State-of-the-art RGB and depth features, including a newly proposed depth descriptor, are analysed and combined in a late fusion form.  ...  We present a methodology to address the problem of human gesture segmentation and recognition in video and depth image sequences.  ...  Acknowledgments This work has been partially supported by the ''Comissionat per a Universitats i Recerca del Departament d'Innovació, Universitats i Empresa de la Generalitat de Catalunya'' and the following  ... 
doi:10.1016/j.patrec.2013.09.009 fatcat:jtgkoj25kfhezagzfokgq4wyqq
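This entry builds on dynamic time warping for gesture segmentation. As background, plain DTW between two 1-D descriptor sequences can be sketched as below; this is the classic algorithm, a stand-in for the probability-based variant the paper proposes, and the sequences are invented.

```python
# Dynamic time warping: cost of the best monotone alignment between
# two sequences, tolerant to differences in execution speed.

def dtw_distance(a, b):
    # dp[i][j] = cost of the best alignment of a[:i] with b[:j].
    inf = float("inf")
    dp = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    dp[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # step in a only
                                  dp[i][j - 1],      # step in b only
                                  dp[i - 1][j - 1])  # step in both
    return dp[len(a)][len(b)]

# The same waving gesture performed slowly and quickly.
slow_wave = [0.0, 1.0, 2.0, 1.0, 0.0]
fast_wave = [0.0, 2.0, 0.0]
d = dtw_distance(slow_wave, fast_wave)
```

Because the warping path may repeat or skip frames, a gesture performed at a different speed still yields a small distance, which is what makes DTW attractive for matching gesture sequences of varying duration.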

A comparative study of color and depth features for hand gesture recognition in naturalistic driving settings

Eshed Ohn-Bar, Mohan M. Trivedi
2015 2015 IEEE Intelligent Vehicles Symposium (IV)  
In order to provide a common experimental setup for previously proposed space-time features, we study a color and depth naturalistic hand gesture benchmark.  ...  The dataset allows for evaluation of descriptors under settings of common self-occlusion and large illumination variation.  ...  Analysis of fusion techniques for the color and depth descriptors is also provided. The final proposed feature set is fast to extract, allowing for real-time hand gesture recognition. II.  ... 
doi:10.1109/ivs.2015.7225790 dblp:conf/ivs/Ohn-BarT15a fatcat:uaqer5yeanhpzjpnkfu5zx5jbq

Hand Gesture Recognition in Automotive Human–Machine Interaction Using Depth Cameras

Nico Zengeler, Thomas Kopinski, Uwe Handmann
2018 Sensors  
In this review, we describe current Machine Learning approaches to hand gesture recognition with depth data from time-of-flight sensors.  ...  We investigated several sensor data fusion techniques in a deep learning framework and performed user studies to evaluate our system in practice.  ...  [6] [7] [8] used the Kinect camera for hand gesture recognition purposes, operating simultaneously on RGB and depth data. On a minimal example of 75 data points, Ref.  ... 
doi:10.3390/s19010059 fatcat:pg7pvn3kozc4pi572h34iqhf54

Performance Improvement of Data Fusion Based Real-Time Hand Gesture Recognition by Using 3-D Convolution Neural Networks With Kinect V2

2019 Information and Knowledge Management  
In this paper, we propose data-fusion-based real-time hand gesture recognition using 3-D convolutional neural networks and Kinect V2 to achieve accurate segmentation and tracking.  ...  Hand gestures are a natural and intuitive way for human beings to interact with their environment.  ...  Algorithm 2 describes the recognition process with the FD descriptor. The descriptors improve recognition in terms of accuracy and the time required.  ... 
doi:10.7176/ikm/9-1-02 fatcat:x3sb6olpszh4xcjjmbhtszfwje

PRAXIS: Towards automatic cognitive assessment using gesture recognition

Farhood Negin, Pau Rodriguez, Michal Koperski, Adlen Kerboua, Jordi Gonzàlez, Jeremy Bourgeois, Emmanuelle Chapoulie, Philippe Robert, Francois Bremond
2018 Expert systems with applications  
In this paper, we propose a novel framework to investigate the potential of static and dynamic upper-body gestures based on the Praxis test, and their use in a medical framework to automatize the  ...  The experiments show the effectiveness of our deep learning based approach in gesture recognition and performance assessment tasks.  ...  First of all, using the depth and RGB camera intrinsics and their extrinsic relation, the depth data are registered on the RGB images.  ... 
doi:10.1016/j.eswa.2018.03.063 fatcat:3olyusgaqrbnrohmdgeppzziku

Gesture-based human-machine interaction for assistance systems

Thomas Kopinski, Stefan Geisler, Uwe Handmann
2015 2015 IEEE International Conference on Information and Automation  
We register any movement in a nearby driver area and crop the data efficiently by means of PCA, transforming it into so-called feature vectors which form the input to our multi-layer perceptrons (MLPs).  ...  This contribution demonstrates the efficient embedding of a single depth-camera into the automotive environment, making mid-air gesture interaction for mobile applications viable in such a scenario.  ...  Depth data is cropped and transformed into a feature vector capturing the shape of the hand and classified by a sophisticated fusion technique optimized for this problem.  ... 
doi:10.1109/icinfa.2015.7279341 dblp:conf/icinfa/KopinskiGH15 fatcat:egcg7krmqncjtdgbsjwlf6uxke
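The PCA step this entry describes — cropping depth data and turning it into a compact feature vector — can be illustrated in 2-D. This is a toy sketch using power iteration to find the dominant principal axis, not the authors' pipeline; the point cloud is invented.

```python
# PCA sketch: project 2-D depth points onto their dominant principal
# axis to obtain a compact 1-D feature vector.

def principal_axis(points, iters=100):
    # Power iteration on the 2x2 covariance matrix of the points.
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    cxx = sum(x * x for x, _ in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v, (mx, my)

def project(points, axis, mean):
    # Signed coordinates along the principal axis form the feature vector.
    return [(x - mean[0]) * axis[0] + (y - mean[1]) * axis[1]
            for x, y in points]

# Toy depth points spread mainly along the x direction.
points = [(0.0, 0.0), (1.0, 0.1), (2.0, -0.1), (3.0, 0.0)]
axis, mean = principal_axis(points)
features = project(points, axis, mean)
```

Projecting onto the leading principal axes keeps the directions of greatest variance (here, the hand's elongation) while discarding noise, yielding a fixed-length vector suitable as MLP input.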