A Dataset for Persistent Multi-target Multi-camera Tracking in RGB-D
2017
2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
In addition to raw data, we provide identity annotation for benchmarking, and tracking results from a contemporary RGB-D tracker, thus allowing focus on the higher-level monitoring problems. ...
To reflect the challenges of a realistic practical application, the dataset includes clothes changes and visitors to ensure the global reasoning is a realistic open-set problem. ...
in an unconstrained manner. ...
doi:10.1109/cvprw.2017.189
dblp:conf/cvpr/LayneHCHHXMD17
fatcat:4eqjoeslffbmzmcbqe66rw4dqm
Guest editorial: web multimedia semantic inference using multi-cues
2015
World wide web (Bussum)
The paper "From constrained to unconstrained datasets an evaluation of local action descriptors and fusion strategies for interaction recognition" introduce a new unconstrained video dataset for interaction ...
The results show the potential of the dataset to promote practical methods on interaction video recognition. ...
doi:10.1007/s11280-015-0360-2
fatcat:vc4plge5qvg7hfmza3dffmawki
High-level event recognition in unconstrained videos
2012
International Journal of Multimedia Information Retrieval
However, due to the fast growing popularity of such videos, especially on the Web, solutions to this problem are in high demand and have attracted great interest from researchers. ...
In this paper, we review current technologies for complex event recognition in unconstrained videos. ...
Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. ...
doi:10.1007/s13735-012-0024-2
fatcat:mfzttic3svb4tho2xb6aczgp4y
A Fine Grained Research Over Human Action Recognition
2019
International Journal of Innovative Technology and Exploring Engineering (IJITEE), Volume 8, Issue 10, August 2019
Human Action Recognition from videos has been an active research area in computer vision due to its significant applicability in various real-time applications like video retrieval, human-robot interactions ...
Unlike the earlier ones, this paper provides a detailed survey according to the basic working methodology of Human action recognition system. ...
of subjects, they are categorized into two classes: constrained and unconstrained. ...
doi:10.35940/ijitee.a4677.119119
fatcat:tacsukuctjehde4vzub5gzvfqu
Histogram of Oriented Gradient-Based Fusion of Features for Human Action Recognition in Action Video Sequences
2020
Sensors
Human Action Recognition (HAR) is the classification of an action performed by a human. The goal of this study was to recognize human actions in action video sequences. ...
The proposed approach is performed and compared with the state-of-the-art methods for action recognition on two publicly available benchmark datasets (KTH and Weizmann) and for cross-validation on the ...
Acknowledgments: The authors would like to thank the reviewers for their valuable suggestions which helped in improving the quality of this paper. ...
doi:10.3390/s20247299
pmid:33353248
pmcid:PMC7766717
fatcat:n6b5qzdqd5cehcrwfkko3ywjnq
Effective Codebooks for Human Action Representation and Classification in Unconstrained Videos
2012
IEEE transactions on multimedia
Recognition and classification of human actions for annotation of unconstrained video sequences has proven to be challenging because of the variations in the environment, appearance of actors, modalities ...
It improves on previous contributions through the definition of a novel local descriptor that uses image gradient and optic flow to respectively model the appearance and motion of human actions at interest ...
Each action is represented by a histogram H of codewords w obtained according to k-means (an illustrative sketch follows this entry).
Figs. 3 and 5: Two fusion strategies: early-fusion (at the descriptor level) and late-fusion (at the codebook ...
doi:10.1109/tmm.2012.2191268
fatcat:f7r3w7clofgn3akdoxkmiszs4m
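The codeword snippet in this entry describes a standard bag-of-visual-words pipeline: local descriptors are clustered with k-means to form a codebook, and each action video is then represented by a normalized histogram of codeword assignments. The following is a minimal illustrative sketch of that general representation in Python with scikit-learn; it is not code from the cited paper, and the descriptor dimensionality, codebook size, and function names are assumptions made here for illustration.

# Illustrative sketch (assumed, not from the cited paper): k-means codebook
# over local action descriptors and a per-video codeword histogram H.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(descriptors, n_codewords=200, seed=0):
    # Cluster pooled local descriptors (N x D) into n_codewords cluster centers.
    return KMeans(n_clusters=n_codewords, random_state=seed, n_init=10).fit(descriptors)

def action_histogram(codebook, video_descriptors):
    # Assign each descriptor of one video to its nearest codeword and count.
    words = codebook.predict(video_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)  # L1-normalized histogram H

rng = np.random.default_rng(0)
pooled = rng.normal(size=(5000, 64))      # stand-in for gradient/optic-flow descriptors
codebook = build_codebook(pooled, n_codewords=50)
H = action_histogram(codebook, rng.normal(size=(300, 64)))
print(H.shape, round(H.sum(), 3))         # (50,) 1.0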
Actions in the Eye: Dynamic Gaze Datasets and Learnt Saliency Models for Visual Recognition
2015
IEEE Transactions on Pattern Analysis and Machine Intelligence
Systems based on bag-of-words models from image features collected at maxima of sparse interest point operators have been used successfully for both computer visual object and action recognition tasks. ...
of visual action and scene context recognition tasks. ...
vision descriptors and fusion methods, leads to state-of-the-art results in the Hollywood-2 and UCF-Sports action datasets. ...
doi:10.1109/tpami.2014.2366154
pmid:26352449
fatcat:i7estk2krzcvbhem3hfhq5vqjq
Being the center of attention: A Person-Context CNN framework for Personality Recognition
[article]
2019
arXiv
pre-print
From a given scenario, we extract spatio-temporal motion descriptors from every individual in the scene, spatio-temporal motion descriptors encoding social group dynamics, and proxemics descriptors to ...
Experiments on two public datasets demonstrate the effectiveness of jointly modeling the mutual Person-Context information, outperforming the state-of-the-art results for personality recognition in two ...
Acknowledgments: This work has been funded by the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No. 690090 (ICT4Life project). ...
arXiv:1910.06690v1
fatcat:ic6y3awyofeblmi5bfo2fre564
Recognizing Human Actions by Using Effective Codebooks and Tracking
[chapter]
2013
Advanced Topics in Computer Vision
Recognition and classification of human actions for annotation of unconstrained video sequences has proven to be challenging because of the variations in the environment, appearance of actors, modalities ...
This variability reflects in the difficulty of defining effective descriptors and deriving appropriate and effective codebooks for action categorization. ...
Local descriptors have shown better performance and are in principle better suited for videos taken in both constrained and unconstrained contexts. ...
doi:10.1007/978-1-4471-5520-1_3
dblp:series/acvpr/BallanSS13
fatcat:jcneb56tarbhtb7zelmarkddqe
A Review on Computer Vision-Based Methods for Human Action Recognition
2020
Journal of Imaging
Next, the most common datasets of human action recognition are presented. ...
Human action recognition targets recognising different actions from a sequence of observations and different environmental conditions. ...
In addition, DBNs based methods were used by [164] to learn features from an unconstrained video stream for human action recognition. ...
doi:10.3390/jimaging6060046
pmid:34460592
pmcid:PMC8321068
fatcat:eyp2pu6egzcunagferl7dhffay
Unconstrained Biometric Recognition: Summary of Recent SOCIA Lab. Research
[article]
2020
arXiv
pre-print
The development of biometric recognition solutions able to work in visual surveillance conditions, i.e., in unconstrained data acquisition conditions and under covert protocols has been motivating growing ...
This report summarises the research works published by members of the SOCIA Lab. over the last decade in the scope of biometric recognition in unconstrained conditions. ...
for extracting face descriptors from the LFW, IJB-A and MegaFace datasets. ...
arXiv:2001.09703v2
fatcat:hugkig4wxvgaldscwbobn6yhuy
The Automatic Detection of Chronic Pain-Related Expression: Requirements, Challenges and the Multimodal EmoPain Dataset
2016
IEEE Transactions on Affective Computing
First, through literature reviews, an overview of how pain is expressed in chronic pain and the motivation for detecting it in physical rehabilitation is provided. ...
Natural unconstrained pain related facial expressions and body movement behaviours were elicited from people with chronic pain carrying out physical exercises. ...
on a pre-segmented constrained dataset. ...
doi:10.1109/taffc.2015.2462830
pmid:30906508
pmcid:PMC6430129
fatcat:uvpdc6jzo5gibf5yu57uxos7wm
A Review on Human Activity Recognition Using Vision-Based Method
2017
Journal of Healthcare Engineering
Human activity recognition (HAR) aims to recognize activities from a series of observations on the actions of subjects and the environmental conditions. ...
For the representation methods, we sort out a chronological research trajectory from global representations to local representations, and recent depth-based representations. ...
Science and Technology Development Plan (no. 16-5-1-13-jch); and The Aoshan Innovation Project in Science and Technology of Qingdao National Laboratory for Marine Science and Technology (no. 2016ASKJ07 ...
doi:10.1155/2017/3090343
pmid:29065585
pmcid:PMC5541824
fatcat:g6qbbbjpcref3p54kvquu5rltq
Joint Sparsity-Based Representation and Analysis of Unconstrained Activities
2013
2013 IEEE Conference on Computer Vision and Pattern Recognition
We demonstrate the efficacy of our approach for activity classification and clustering by reporting competitive results on standard datasets such as HMDB, UCF-50, Olympic Sports, and KTH. ...
We then present modeling strategies based on subspace-driven manifold metrics to characterize patterns among these components, across other videos in the system, to perform subsequent video analysis. ...
While there has been a gamut of feature representations ranging from spatio-temporal volumes [13, 3] and trajectories [32, 42] to local interest point descriptors [36, 20] and action attributes ...
doi:10.1109/cvpr.2013.353
dblp:conf/cvpr/Gopalan13a
fatcat:hdzjdfml7jfapaauu3vhlluyde
Face Recognition Using Smoothed High-Dimensional Representation
[chapter]
2015
Lecture Notes in Computer Science
In this work, we propose application specific learning to train a separate BSIF descriptor for each of the local face regions. ...
In detail, we provide a thorough evaluation on FERET and LFW benchmarks comparing our face representation method to the state-of-the-art in face recognition showing enhanced performance on FERET and promising ...
FERET [14] is a standard dataset for benchmarking face recognition methods in constrained imaging conditions. ...
doi:10.1007/978-3-319-19665-7_44
fatcat:3to3xz35gbflho3secgo4bdelu
Showing results 1 — 15 out of 339 results