
Learning a joint discriminative-generative model for action recognition

Ioannis Alexiou, Tao Xiang, Shaogang Gong
2015 International Conference on Systems, Signals and Image Processing (IWSSIP)
In this work, a novel approach is proposed to explore the best of both worlds by discriminatively learning a generative action model.  ...  Specifically, our approach is based on discriminative Fisher kernel learning, which learns a dynamic generative model so that the distance between the log-likelihood gradients induced by two actions of  ...  In this work, we propose a new method for action recognition that explores both discriminative feature learning and generative temporal modelling, that is, to discriminatively learn a dynamic generative  ... 
doi:10.1109/iwssip.2015.7313922 dblp:conf/iwssip/AlexiouXG15 fatcat:a6d3wlqaejcsxf3cvlvomf5hmq
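
The snippet above describes the Fisher-kernel idea: an action is represented by the gradient of a generative model's log-likelihood with respect to its parameters, and two actions are compared by the distance between those gradient vectors. The sketch below only illustrates that general idea with a toy isotropic-Gaussian model; the model, function names, and normalisation are assumptions, not the authors' dynamic generative model.

```python
import numpy as np

def fisher_score(frames, mu, sigma=1.0):
    # Gradient of the log-likelihood of an isotropic Gaussian w.r.t. its mean,
    # averaged over frames so sequences of different length compare fairly.
    # frames: (T, D) array of per-frame features for one action sequence.
    return np.sum(frames - mu, axis=0) / (sigma ** 2 * len(frames))

# toy generative model (a mean pose) and two action sequences
rng = np.random.default_rng(0)
mu = np.zeros(6)
action_a = rng.normal(0.0, 1.0, size=(40, 6))
action_b = rng.normal(0.5, 1.0, size=(55, 6))

# distance between the log-likelihood gradients induced by the two actions
dist = np.linalg.norm(fisher_score(action_a, mu) - fisher_score(action_b, mu))
print(f"Fisher-score distance: {dist:.3f}")
```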

Combining unsupervised learning and discrimination for 3D action recognition

Guang Chen, Daniel Clarke, Manuel Giuliani, Andre Gaschler, Alois Knoll
2015 Signal Processing  
We propose an ensemble approach using a discriminative learning algorithm, where each base learner is a discriminative multi-kernel-learning classifier, trained to learn an optimal combination of joint-based  ...  Furthermore, we analyze the efficiency of our approach in a 3D action recognition system.  ...  The aim of our method is to learn a discriminative subset of joints for each action class.  ... 
doi:10.1016/j.sigpro.2014.08.024 fatcat:eyqktxop6fe2ldnmgmmkuvu5ra
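
The snippet mentions base learners that are multi-kernel-learning classifiers combining joint-based features. A minimal sketch of that combination step: two base kernels built from hypothetical joint groups are mixed with a non-negative weight and fed to a precomputed-kernel SVM. The crude grid search and all variable names are assumptions, not the paper's optimisation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(1)
# toy data: 60 sequences described by two hypothetical joint groups
X_arms = rng.normal(size=(60, 10))
X_legs = rng.normal(size=(60, 8))
y = rng.integers(0, 3, size=60)                  # three action classes
train, test = np.arange(45), np.arange(45, 60)

K_arms = rbf_kernel(X_arms)                      # one base kernel per joint group
K_legs = rbf_kernel(X_legs)

best_w, best_acc = 0.0, -1.0
for w in np.linspace(0.0, 1.0, 11):              # crude search over the mixing weight
    K = w * K_arms + (1 - w) * K_legs            # convex combination of base kernels
    clf = SVC(kernel="precomputed").fit(K[np.ix_(train, train)], y[train])
    acc = clf.score(K[np.ix_(test, train)], y[test])
    if acc > best_acc:
        best_w, best_acc = w, acc
print(f"best mixing weight {best_w:.1f}, held-out accuracy {best_acc:.2f}")
```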

Learning Weighted Joint-based Features for Action Recognition using Depth Camera

Guang Chen, Daniel Clarke, Alois C. Knoll
2014 Proceedings of the 9th International Conference on Computer Vision Theory and Applications  
Human action recognition based on joints is a challenging task. The 3D positions of the tracked joints are very noisy if occlusions occur, which increases the intra-class variations in the actions.  ...  To capture the intra-class variance, a multiple kernel learning approach is employed to learn the skeleton structure that combines these joint-based features.  ...  It is generally agreed that knowing the 3D joint position is helpful for action recognition.  ... 
doi:10.5220/0004735705490556 dblp:conf/visapp/ChenCK14 fatcat:vdlgxpo63rdmtmglah6hlhjf74

Deep Learning Based Human Activity Recognition Using Spatio-Temporal Image Formation of Skeleton Joints

Nusrat Tasnim, Mohammad Khairul Islam, Joong-Hwan Baek
2021 Applied Sciences  
In this paper, we suggest a spatio-temporal image formation (STIF) technique of 3D skeleton joints by capturing spatial information and temporal changes for action discrimination.  ...  However, there is still a challenging problem of providing an effective and efficient method for human action discrimination using a 3D skeleton dataset.  ...  Acknowledgments: We would like to acknowledge Korea Aerospace University with much appreciation for its ongoing support to our research.  ... 
doi:10.3390/app11062675 fatcat:okj6lvqksnenxc7ka4oo6fj5oy
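
The snippet describes mapping 3D skeleton joints to a spatio-temporal image so that an ordinary 2D CNN can classify actions. Below is a minimal sketch under the common convention of putting joints on one image axis, frames on the other, and the (x, y, z) coordinates in the three colour channels; the normalisation and layout are assumptions and not necessarily the paper's STIF encoding.

```python
import numpy as np

def skeleton_to_image(sequence):
    # sequence: (T frames, J joints, 3 coords) -> (J, T, 3) pseudo-image.
    # Each coordinate is min-max normalised to [0, 255] and stored in one
    # colour channel, so joints run down the rows and time runs across columns.
    seq = np.asarray(sequence, dtype=np.float64)
    lo = seq.min(axis=(0, 1), keepdims=True)
    hi = seq.max(axis=(0, 1), keepdims=True)
    norm = (seq - lo) / np.maximum(hi - lo, 1e-8)
    img = (255 * norm).astype(np.uint8)           # (T, J, 3)
    return img.transpose(1, 0, 2)                 # (J, T, 3)

# toy sequence: 80 frames of a 25-joint skeleton
rng = np.random.default_rng(0)
image = skeleton_to_image(rng.normal(size=(80, 25, 3)))
print(image.shape)   # (25, 80, 3), ready for a standard 2D CNN
```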

Improving Skeleton-based Action Recognition with Robust Spatial and Temporal Features [article]

Zeshi Yang, Kangkang Yin
2020 arXiv   pre-print
Recently, skeleton-based action recognition has made significant progress in the computer vision community.  ...  In this paper, we propose a novel mechanism to learn more robust discriminative features in space and time.  ...  Skeleton-based action recognition, however, works with extracted position and/or orientation of skeletal joints to model the dynamics of human motion.  ... 
arXiv:2008.00324v1 fatcat:tyiuhmd54nhrfcqg36sbcie6me

Mining actionlet ensemble for action recognition with depth cameras

Jiang Wang, Zicheng Liu, Ying Wu, Junsong Yuan
2012 IEEE Conference on Computer Vision and Pattern Recognition
The proposed approach is evaluated on two challenging action recognition datasets captured by commodity depth cameras, and another dataset captured by a MoCap system.  ...  In this paper, an actionlet ensemble model is learnt to represent each action and to capture the intra-class variance. In addition, novel features that are suitable for depth data are proposed.  ...  It is generally agreed that knowing the 3D joint position is helpful for action recognition.  ... 
doi:10.1109/cvpr.2012.6247813 dblp:conf/cvpr/WangLWY12 fatcat:oqin2xmo3zcg3cbomhg6iesy5u

Leveraging Hierarchical Parametric Networks for Skeletal Joints Based Action Segmentation and Recognition

Di Wu, Ling Shao
2014 IEEE Conference on Computer Vision and Pattern Recognition
We propose a hierarchical dynamic framework that first extracts high-level skeletal joint features and then uses the learned representation for estimating emission probability to infer action sequences  ...  Over the last few years, with the immense popularity of the Kinect, there has been renewed interest in developing methods for human gesture and action recognition from 3D skeletal data.  ...  Acknowledgments: The authors would like to thank Ben Glocker, Antonio Criminisi and Sebastian Nowozin for their helpful suggestions and discussions about temporal modeling.  ... 
doi:10.1109/cvpr.2014.98 dblp:conf/cvpr/WuS14 fatcat:m6fsoxu2ivffrju3h2eynociby

An End-to-End Spatio-Temporal Attention Model for Human Action Recognition from Skeleton Data [article]

Sijie Song, Cuiling Lan, Junliang Xing, Wenjun Zeng, Jiaying Liu
2016 arXiv   pre-print
In this work, we propose an end-to-end spatial and temporal attention model for human action recognition from skeleton data.  ...  Furthermore, to ensure effective training of the network, we propose a regularized cross-entropy loss to drive the model learning process and develop a joint training strategy accordingly.  ...  We then introduce a regularized learning objective of our model and a joint training strategy, which help overcome the difficulty of model learning for the highly coupled network.  ... 
arXiv:1611.06067v1 fatcat:tgerulkcbnbxfkf2xkv6qsjf5a
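
The snippet describes spatial (per-joint) and temporal (per-frame) attention learned end to end together with a regularised loss. The toy module below shows only the attention-weighting pattern; the layer sizes and the simple spread-out regulariser are assumptions and do not reproduce the paper's regularised cross-entropy or joint training strategy.

```python
import torch
import torch.nn as nn

class SpatioTemporalAttention(nn.Module):
    """Toy spatial and temporal soft attention over skeleton features."""

    def __init__(self, feat_dim):
        super().__init__()
        self.spatial = nn.Linear(feat_dim, 1)    # one score per joint
        self.temporal = nn.Linear(feat_dim, 1)   # one score per frame

    def forward(self, x):
        # x: (batch, frames, joints, feat)
        s = torch.softmax(self.spatial(x).squeeze(-1), dim=2)    # (B, T, J)
        x = (x * s.unsqueeze(-1)).sum(dim=2)                     # weighted sum over joints -> (B, T, F)
        t = torch.softmax(self.temporal(x).squeeze(-1), dim=1)   # (B, T)
        pooled = (x * t.unsqueeze(-1)).sum(dim=1)                # weighted sum over frames -> (B, F)
        # adding this term to the loss discourages attention from collapsing onto one frame
        reg = (t ** 2).sum(dim=1).mean()
        return pooled, reg

model = SpatioTemporalAttention(feat_dim=16)
features, reg = model(torch.randn(4, 30, 25, 16))
print(features.shape, float(reg))
```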

Action recognition using ensemble weighted multi-instance learning

Guang Chen, Manuel Giuliani, Daniel Clarke, Andre Gaschler, Alois Knoll
2014 IEEE International Conference on Robotics and Automation (ICRA)
In this paper, we propose a novel 3.5D representation of a depth video for action recognition. A 3.5D graph of the depth video consists of a set of nodes that are the joints of the human body.  ...  To address this problem, we propose the Ensemble Weighted Multi-Instance Learning approach (EnwMi) for the action recognition task. It considers the class imbalance and intra-class variations.  ...  It is generally agreed that knowing the 3D joint position of a human subject is helpful for action recognition. Wang et al.  ... 
doi:10.1109/icra.2014.6907519 dblp:conf/icra/ChenGCGK14 fatcat:ite72s77ebakhlpavd6g6uewcm

PA3D: Pose-Action 3D Machine for Video Recognition

An Yan, Yali Wang, Zhifeng Li, Yu Qiao
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
for action recognition.  ...  Recent studies have witnessed the successes of using 3D CNNs for video action recognition.  ...  This indicates that our PA3D can learn the discriminative pose dynamics for action recognition.  ... 
doi:10.1109/cvpr.2019.00811 dblp:conf/cvpr/YanWLQ19 fatcat:hjoteum4knfijg33lqgkj34p6a

An Attention Enhanced Graph Convolutional LSTM Network for Skeleton-Based Action Recognition [article]

Chenyang Si, Wentao Chen, Wei Wang, Liang Wang, Tieniu Tan
2019 arXiv   pre-print
In this paper, we propose a novel Attention Enhanced Graph Convolutional LSTM Network (AGC-LSTM) for human action recognition from skeleton data.  ...  Skeleton-based action recognition is an important task that requires the adequate understanding of movement characteristics of a human action from the given skeleton sequence.  ...  For skeleton based action recognition, the existing methods explore different models to learn spatial and temporal features. Song et al.  ... 
arXiv:1902.09130v2 fatcat:v5my74xbsbcbbes5vnxhgellie

Moving Poselets: A Discriminative and Interpretable Skeletal Motion Representation for Action Recognition

Lingling Tao, Rene Vidal
2015 IEEE International Conference on Computer Vision Workshop (ICCVW)
In contrast, our goal is to develop a principled feature learning framework to learn discriminative and interpretable skeletal motion patterns for action recognition.  ...  We also propose a simple algorithm for jointly learning Moving Poselets and action classifiers.  ...  The features learned by generic CNN models are usually hard to interpret.  ... 
doi:10.1109/iccvw.2015.48 dblp:conf/iccvw/TaoV15 fatcat:yn3x4gi2xrejjngbiv2qx5afbu

Unsupervised Feature Learning of Human Actions as Trajectories in Pose Embedding Manifold [article]

Jogendra Nath Kundu, Maharshi Gor, Phani Krishna Uppala, R. Venkatesh Babu
2018 arXiv   pre-print
Further, we use the pose embeddings generated by EnGAN to model human actions using a bidirectional RNN auto-encoder architecture, PoseRNN.  ...  We demonstrate state-of-the-art transferability of the learned representation against other supervisedly and unsupervisedly learned motion embeddings for the task of fine-grained action recognition on  ...  Acknowledgements: This work was supported by a CSIR Fellowship (Jogendra), and a project grant from Robert Bosch Centre for Cyber-Physical Systems, IISc.  ... 
arXiv:1812.02592v1 fatcat:37jnz4444faudnvaiao5d6sjym

Action recognition using dynamics features

Al Mansur, Yasushi Makihara, Yasushi Yagi
2011 IEEE International Conference on Robotics and Automation
In this paper, we propose a method of action recognition using dynamics features based on a physics model.  ...  These features are more discriminative than the kinematics features, and they result in a low dimensional representation of a human action which preserves much information of the original high dimensional  ...  Learning the parameters of these distributions corresponds to maximizing the joint probability P(O, S). For each action class, we learn a separate HMM model λ_i.  ... 
doi:10.1109/icra.2011.5979900 dblp:conf/icra/MansurMY11 fatcat:avzt2zj73netxm7hsqayzltptq
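
The snippet mentions fitting one HMM λ_i per action class by maximising the joint probability P(O, S) and recognising a new sequence with the most likely model. A minimal sketch of that classification scheme using hmmlearn is given below; the toy features, the number of hidden states, and the Gaussian emissions are assumptions, not the paper's dynamics features.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # pip install hmmlearn

rng = np.random.default_rng(0)

def toy_sequences(offset, n=20, t=30, d=4):
    # n toy feature sequences of length t for one action class
    return [rng.normal(offset, 1.0, size=(t, d)) for _ in range(n)]

# one HMM per action class (lambda_i for class i), as in the snippet
classes = {"walk": toy_sequences(0.0), "jump": toy_sequences(2.0)}
models = {}
for name, seqs in classes.items():
    X = np.concatenate(seqs)                       # stack sequences for hmmlearn
    lengths = [len(s) for s in seqs]               # per-sequence lengths
    models[name] = GaussianHMM(n_components=3, n_iter=20).fit(X, lengths)

# classify a new sequence by the model with the highest log-likelihood
query = rng.normal(2.0, 1.0, size=(30, 4))
pred = max(models, key=lambda name: models[name].score(query))
print("predicted action:", pred)
```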

Learning discriminative trajectorylet detector sets for accurate skeleton-based action recognition [article]

Ruizhi Qiao, Lingqiao Liu, Chunhua Shen, Anton van den Hengel
2015 arXiv   pre-print
Devising a representation suitable for characterising actions on the basis of noisy skeleton sequences remains a challenge, however. We here provide two insights into this challenge.  ...  The introduction of low-cost RGB-D sensors has promoted the research in skeleton-based human action recognition.  ...  In our second contribution, a novel framework is proposed to generate robust and discriminative representation for action instances from a set of learned template trajectorylet detectors.  ... 
arXiv:1504.04923v1 fatcat:wbcwjpedejgcrgdkrx5dklqtae