
Spatio-temporal relationship match: Video structure comparison for recognition of complex human activities

M S Ryoo, J K Aggarwal
2009 IEEE 12th International Conference on Computer Vision
We introduce a novel matching, spatio-temporal relationship match, which is designed to measure structural similarity between sets of features extracted from two videos.  ...  Our match hierarchically considers spatio-temporal relationships among feature points, thereby enabling detection and localization of complex non-periodic activities.  ...  In this paper, we propose a novel video matching approach, spatio-temporal relationship match, which enables the recognition of complex human activities from realistic videos.  ... 
doi:10.1109/iccv.2009.5459361 dblp:conf/iccv/RyooA09 fatcat:p5ocdrp3wfebrg3g44wxwvr57i
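
As a rough illustration of the general idea behind relationship matching (not the authors' hierarchical algorithm), the sketch below scores two videos by comparing histograms of pairwise temporal and spatial relations between labelled interest points; the point format, relation thresholds, and the histogram-intersection score are all assumptions made for this example.

# Illustrative sketch: compare two videos by histograms of pairwise
# spatio-temporal relations between labelled interest points.
import numpy as np
from collections import Counter

def relation(p, q, t_tol=5, d_near=50.0):
    # p, q: (x, y, t, codeword_label) tuples
    dt = q[2] - p[2]
    temporal = "equals" if abs(dt) <= t_tol else ("before" if dt > 0 else "after")
    spatial = "near" if np.hypot(q[0] - p[0], q[1] - p[1]) <= d_near else "far"
    return temporal, spatial

def relation_histogram(points):
    hist = Counter()
    for i, p in enumerate(points):
        for q in points[i + 1:]:
            temporal, spatial = relation(p, q)
            hist[(p[3], q[3], temporal, spatial)] += 1
    return hist

def match_score(points_a, points_b):
    # Histogram intersection, normalised by the smaller pair count.
    ha, hb = relation_histogram(points_a), relation_histogram(points_b)
    inter = sum(min(ha[k], hb[k]) for k in ha.keys() & hb.keys())
    denom = min(sum(ha.values()), sum(hb.values())) or 1
    return inter / denom

# Hypothetical usage: points are (x, y, t, codeword) tuples from a feature detector.
video_a = [(10, 20, 0, 3), (15, 22, 8, 7), (40, 60, 20, 3)]
video_b = [(12, 18, 2, 3), (17, 25, 11, 7), (80, 90, 30, 5)]
print(match_score(video_a, video_b))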

Middle-Level Representation for Human Activities Recognition: The Role of Spatio-Temporal Relationships [chapter]

Fei Yuan, Véronique Prinet, Junsong Yuan
2012 Lecture Notes in Computer Science  
We tackle the challenging problem of human activity recognition in realistic video sequences.  ...  To further exploit the interdependencies of the moving parts, we then define spatio-temporal relationships between pairwise components.  ...  This work is supported by the 863 program of the Ministry of Science and Technology of China, and is supported in part by the Nanyang Assistant Professorship (SUG M58040015) and the French National Research  ... 
doi:10.1007/978-3-642-35749-7_13 fatcat:tqfqoiszgngczil2wlbnv4sgka

What are they doing?: Collective activity classification using spatio-temporal relationship among people

Wongun Choi, Khuram Shahid, Silvio Savarese
2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops
We present a local spatio-temporal descriptor effective in capturing the spatial distribution of pedestrians over time as well as their pose.  ...  Our proposed solution employs extended Kalman filtering for tracking of detected pedestrians in 2D 1/2 scene coordinates as well as camera parameter and horizon estimation for tracker filtering and stabilization  ...  A spatio-temporal descriptor is constructed using these tracking results [Sec. 3.5] and employed for the ensuing classification stage where activities of individuals  ...
doi:10.1109/iccvw.2009.5457461 dblp:conf/iccvw/ChoiSS09 fatcat:kesnsdkssnevzolh7thyeaflzu
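
The tracking step mentioned above can be approximated, for illustration only, by a plain constant-velocity Kalman filter over ground-plane positions; the state model, noise settings, and measurement format below are generic assumptions, not the paper's extended Kalman formulation.

# Minimal constant-velocity Kalman filter sketch for smoothing a pedestrian track.
import numpy as np

class ConstantVelocityKF:
    def __init__(self, dt=1.0, process_var=1.0, meas_var=4.0):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)   # state transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)   # only position is observed
        self.Q = process_var * np.eye(4)
        self.R = meas_var * np.eye(2)
        self.x = np.zeros(4)        # state: [x, y, vx, vy]
        self.P = 100.0 * np.eye(4)

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the measured ground-plane position z = [x, y].
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

kf = ConstantVelocityKF()
for z in [(0.0, 0.0), (1.1, 0.9), (2.0, 2.1), (3.2, 2.9)]:
    print(kf.step(z))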

Recognizing and Localizing Individual Activities through Graph Matching

Anh-Phuong Ta, Christian Wolf, Guillaume Lavoue, Atilla Baskurt
2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance
In contrast to most previous methods which classify entire video sequences, we design a video matching method from two sets of ST-points for human activity recognition.  ...  spatio-temporal interest points (STIP), and therefore ignore the spatio-temporal relationships between them.  ...  Acknowledgments This work was partly financed through two French National grants: ANR-CaNaDA Comportements Anormaux : Analyse, Détection, Alerte, No. 128, which is part of the call for projects CSOSG 2006  ... 
doi:10.1109/avss.2010.81 dblp:conf/avss/TaWLB10 fatcat:37ghjjx6kbhzlc57oozvdcphii

Semantic Video-to-Video Search Using Sub-graph Grouping and Matching

Tae Eun Choe, Hongli Deng, Feng Guo, Mun Wai Lee, Niels Haering
2013 IEEE International Conference on Computer Vision Workshops
Videos are analyzed semantically and represented by a graphical structure. Now the problem is to match the graph with other graphs of events in the database.  ...  After grouping and indexing subgraphs, the complex graph matching problem becomes simple vector comparison in reduced dimension.  ...  For activity recognition, the video is represented by visual features (Spatio-temporal HOG or SIFT) and a complex event is learned from those sets of features, called topics (or themes).  ...
doi:10.1109/iccvw.2013.108 dblp:conf/iccvw/ChoeDGLH13 fatcat:nmyt5ftvo5gybhvngxhu6aigli
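
To make the "graph matching becomes vector comparison" idea concrete, the following sketch hashes labelled node-edge-node triples of a small event graph into a fixed-length count vector and compares graphs by cosine similarity; the encoding and the toy labels are illustrative assumptions, not the sub-graph grouping and indexing scheme of the paper.

# Hedged sketch: turn event sub-graphs into fixed-length vectors so that
# graph comparison reduces to vector comparison.
import numpy as np

def graph_to_vector(nodes, edges, dim=64):
    # nodes: {node_id: label}, edges: [(u, v, edge_label), ...]
    vec = np.zeros(dim)
    for u, v, elabel in edges:
        triple = (nodes[u], elabel, nodes[v])
        vec[hash(triple) % dim] += 1.0   # count hashed label triples
    return vec

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Hypothetical event graphs: "person enters car" vs. "person approaches car".
g1 = ({0: "person", 1: "car"}, [(0, 1, "enters")])
g2 = ({0: "person", 1: "car"}, [(0, 1, "approaches")])
print(cosine(graph_to_vector(*g1), graph_to_vector(*g2)))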

Action Recognition in Video by Covariance Matching of Silhouette Tunnels

Kai Guo, Prakash Ishwar, Janusz Konrad
2009 XXII Brazilian Symposium on Computer Graphics and Image Processing
Action recognition is a challenging problem in video analytics due to event complexity, variations in imaging conditions, and intra- and inter-individual action-variability.  ...  In this paper, an action is viewed as a temporal sequence of local shape-deformations of centroid-centered object silhouettes, i.e., the shape of the centroid-centered object silhouette tunnel.  ...  [5], [6], and spatio-temporal features extracted from space-time video volume [7], [8], [9].  ...
doi:10.1109/sibgrapi.2009.29 dblp:conf/sibgrapi/GuoIK09 fatcat:zatnfqhkpfctjogdbx2dj4kvtm
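
A minimal sketch of covariance-style matching for a silhouette tunnel, assuming only the (x, y, t) coordinates of silhouette pixels as per-pixel features and a log-Euclidean distance between the resulting covariance matrices; the descriptor in the paper uses a richer feature set, so this is an illustration of the technique rather than the authors' exact method.

# Covariance descriptor of a binary silhouette tunnel and a log-Euclidean distance.
import numpy as np

def tunnel_covariance(tunnel):
    # tunnel: binary array of shape (T, H, W); True where the silhouette is.
    t, y, x = np.nonzero(tunnel)
    feats = np.stack([x, y, t], axis=1).astype(float)     # one row per pixel
    feats -= feats.mean(axis=0)                           # centre on the centroid
    cov = feats.T @ feats / max(len(feats) - 1, 1)
    return cov + 1e-6 * np.eye(cov.shape[0])              # keep it positive definite

def logm_spd(c):
    # Matrix logarithm of a symmetric positive-definite matrix via eigendecomposition.
    w, v = np.linalg.eigh(c)
    return (v * np.log(w)) @ v.T

def log_euclidean_distance(c1, c2):
    return float(np.linalg.norm(logm_spd(c1) - logm_spd(c2), "fro"))

# Hypothetical tunnels: two tiny random silhouette volumes.
rng = np.random.default_rng(0)
a = rng.random((8, 16, 16)) > 0.7
b = rng.random((8, 16, 16)) > 0.7
print(log_euclidean_distance(tunnel_covariance(a), tunnel_covariance(b)))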

Spatio-Temporal Representation Matching-based Open-set Action Recognition by Joint Learning of Motion and Appearance

Yongsang Yoon, Jongmin Yu, Moongu Jeon
2019 IEEE Access  
In this paper, we propose the spatio-temporal representation matching (STRM) for video-based action recognition under the open-set condition.  ...  INDEX TERMS Action recognition, open-set recognition, spatio-temporal representation, joint learning of motion and appearance.  ...  SPATIO-TEMPORAL REPRESENTATION MATCHING (STRM) In this section, we describe the learning and action recognition processes in the STRM method.  ... 
doi:10.1109/access.2019.2953455 fatcat:knlfdwfefvgv3d4cuwzssneuiu
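
Open-set matching of learned representations can be illustrated, under simplifying assumptions, as nearest-neighbour matching against class prototypes with a rejection threshold for unknown actions; this is a generic sketch of the open-set idea, not the STRM method itself, and the prototypes, threshold, and feature dimensionality are made up for the example.

# Thresholded nearest-neighbour matching with an "unknown" rejection option.
import numpy as np

def open_set_match(probe, gallery, threshold=1.0):
    # probe: (d,) feature vector; gallery: {class_name: (d,) prototype vector}
    names = list(gallery)
    dists = np.array([np.linalg.norm(probe - gallery[n]) for n in names])
    best = int(np.argmin(dists))
    if dists[best] <= threshold:
        return names[best], float(dists[best])
    return "unknown", float(dists[best])

gallery = {"walk": np.array([1.0, 0.0]), "wave": np.array([0.0, 1.0])}
print(open_set_match(np.array([0.9, 0.1]), gallery))   # -> ('walk', ...)
print(open_set_match(np.array([5.0, 5.0]), gallery))   # -> ('unknown', ...)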

Activity recognition from videos with parallel hypergraph matching on GPUs [article]

Eric Lombardi, Christian Wolf, Oya Celiktutan, Bülent Sankur
2015 arXiv pre-print
In this paper, we propose a method for activity recognition from videos based on sparse local features and hypergraph matching.  ...  We benefit from special properties of the temporal domain in the data to derive a sequential and fast graph matching algorithm for GPUs.  ...  Acknowledgement This work has been partially funded by the ANR project SoLStiCe (ANR-13-BS02-0002-01), a project of the grant program "ANR blanc".  ... 
arXiv:1505.00581v1 fatcat:4sl353qkpjghjl2pkls3geyavy

Knowledge Graph Driven Approach to Represent Video Streams for Spatiotemporal Event Pattern Matching in Complex Event Processing [article]

Piyush Yadav, Dhaval Salwala, Edward Curry
2020 arXiv pre-print
We defined a set of nine event pattern rules for two domains (Activity Recognition and Traffic Management), which act as queries and are applied over VEKG graphs to discover complex event patterns.  ...  This work introduces a graph-based structure for continuous evolving video streams, which enables the CEP system to query complex video event patterns.  ...  We also thank Dibya Prakash Das from the Indian Institute of Technology (IIT) Kharagpur for his initial contribution as an intern during the project.  ...
arXiv:2007.06292v1 fatcat:kmqdo22wi5axne4vysqvo4ctua
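
A toy example of a spatiotemporal event-pattern rule evaluated over per-frame detections ("a person stays near a car for at least N consecutive frames"); the frame representation, labels, and thresholds are assumptions for illustration and do not reproduce the VEKG graph model or its query language.

# CEP-style pattern over per-frame object detections.
import numpy as np

def near(a, b, dist=40.0):
    return np.hypot(a["x"] - b["x"], a["y"] - b["y"]) <= dist

def person_near_car(frames, min_frames=3):
    run = 0
    for i, objects in enumerate(frames):
        people = [o for o in objects if o["label"] == "person"]
        cars = [o for o in objects if o["label"] == "car"]
        hit = any(near(p, c) for p in people for c in cars)
        run = run + 1 if hit else 0          # count consecutive matching frames
        if run >= min_frames:
            return {"event": "person_near_car", "end_frame": i}
    return None

frames = [
    [{"label": "person", "x": 10, "y": 10}, {"label": "car", "x": 30, "y": 15}],
    [{"label": "person", "x": 12, "y": 11}, {"label": "car", "x": 30, "y": 15}],
    [{"label": "person", "x": 14, "y": 12}, {"label": "car", "x": 30, "y": 15}],
]
print(person_near_car(frames))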

Matching Trajectories of Anatomical Landmarks Under Viewpoint, Anthropometric and Temporal Transforms

Alexei Gritai, Yaser Sheikh, Cen Rao, Mubarak Shah
2009 International Journal of Computer Vision  
The fact that the human body has approximate anthropometric proportion allows innovative use of the machinery of epipolar geometry to provide constraints for analyzing actions performed by people of different  ...  An approach is presented to match imaged trajectories of anatomical landmarks (e.g. hands, shoulders and feet) using semantic correspondences between human bodies.  ...  An approach based on the statistical features of spatio-temporal gradient direction is used for classifying human activities, e.g. walking, running, and jumping (Caspi and Irani 2000) .  ... 
doi:10.1007/s11263-009-0239-8 fatcat:uzf57mlbsncfjlbodiycjdahm4
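
The epipolar-geometry machinery referred to above ultimately reduces to residuals of the form x'^T F x for corresponding points; the sketch below evaluates a Sampson-style residual for a made-up fundamental matrix, whereas the paper derives its constraints from anthropometric correspondences between bodies, which are not reproduced here.

# Epipolar constraint residual (Sampson distance) for a pair of corresponding points.
import numpy as np

def sampson_distance(F, x, x_prime):
    # x, x_prime: homogeneous 2D points, shape (3,)
    Fx = F @ x
    Ftxp = F.T @ x_prime
    num = float(x_prime @ F @ x) ** 2
    den = Fx[0] ** 2 + Fx[1] ** 2 + Ftxp[0] ** 2 + Ftxp[1] ** 2
    return num / den

F = np.array([[0.0, -1e-4, 0.01],
              [1e-4, 0.0, -0.03],
              [-0.01, 0.03, 0.0]])       # toy rank-2 (skew-symmetric) fundamental matrix
x = np.array([120.0, 80.0, 1.0])
x_prime = np.array([130.0, 78.0, 1.0])
print(sampson_distance(F, x, x_prime))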

Matching shape sequences in video with applications in human movement analysis

A. Veeraraghavan, A. K. Roy-Chowdhury, R. Chellappa
2005 IEEE Transactions on Pattern Analysis and Machine Intelligence  
We also show the efficacy of this algorithm by its application to gait-based human recognition.  ...  these models to capture the nature of shape deformations using experiments on gait-based human recognition.  ...  to the understanding of human activities from videos.  ...
doi:10.1109/tpami.2005.246 pmid:16355658 fatcat:zdasbo6iwjb2vmcnqiyvzkxweq
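
Sequence matching of per-frame shape features can be illustrated with plain dynamic time warping, as sketched below; the features here are random stand-ins, and the paper's shape-space distances and dynamical models are not reproduced.

# Dynamic time warping over two sequences of per-frame feature vectors.
import numpy as np

def dtw_distance(seq_a, seq_b):
    # seq_a: (n, d), seq_b: (m, d) arrays of per-frame feature vectors.
    n, m = len(seq_a), len(seq_b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return float(acc[n, m])

rng = np.random.default_rng(1)
walk_a = rng.random((20, 10))                         # hypothetical shape features, 20 frames
walk_b = walk_a[::2] + 0.01 * rng.random((10, 10))    # same action at half the speed
print(dtw_distance(walk_a, walk_b))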

Human Interaction Recognition Based on the Co-occurrence of Visual Words

Khadidja Nour el Houda Slimani, Yannick Benezeth, Feriel Souami
2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops
A 3-D XYT spatio-temporal volume is generated for each interacting person and a set of visual words is extracted to represent his activity.  ...  This paper describes a novel methodology for automated recognition of high-level activities.  ...  Encoding the spatio-temporal structure of visual words is of great importance for the recognition of human interaction over video sequences.  ... 
doi:10.1109/cvprw.2014.74 dblp:conf/cvpr/SlimaniBS14 fatcat:wdhvjpnnjrg4lddyxtgmjrjxte
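
The visual-word pipeline mentioned above can be sketched as codebook quantisation of local descriptors followed by histogram comparison; the descriptors below are random stand-ins for STIP-style features, scikit-learn's KMeans is assumed available, and the paper's co-occurrence modelling of interacting persons is not reproduced.

# Bag-of-visual-words sketch: quantise local descriptors and compare word histograms.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
train_descriptors = rng.random((500, 64))      # stand-in for local spatio-temporal descriptors
codebook = KMeans(n_clusters=32, n_init=10, random_state=0).fit(train_descriptors)

def word_histogram(descriptors):
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    return float(np.minimum(h1, h2).sum())

video_a = rng.random((80, 64))
video_b = rng.random((60, 64))
print(histogram_intersection(word_histogram(video_a), word_histogram(video_b)))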

A Multi-Scale Hierarchical Codebook Method for Human Action Recognition in Videos Using a Single Example

Mehrsan Javan Roshtkhari, Martin D. Levine
2012 Ninth Conference on Computer and Robot Vision
The algorithm was applied to three available video datasets for action recognition with different complexities (KTH, Weizmann, and MSR II) and the results were superior to other approaches, especially  ...  Given a single example of an activity as a query video, the proposed method finds similar videos to the query in a video dataset.  ...  ACKNOWLEDGMENTS The authors would like to acknowledge the financial support of the Natural Sciences and Engineering Research Council of Canada (NSERC) and the McGill International Doctoral Awards (MIDA  ... 
doi:10.1109/crv.2012.32 dblp:conf/crv/RoshtkhariL12 fatcat:woqfp4exlbbytbpkydgsjfni5i

Human activity recognition in videos using a single example

Mehrsan Javan Roshtkhari, Martin D. Levine
2013 Image and Vision Computing  
This paper presents a novel approach for action recognition, localization and video matching based on a hierarchical codebook model of local spatio-temporal video volumes.  ...  The hierarchical algorithm codes a video as a compact set of spatio-temporal volumes, while considering their spatio-temporal compositions in order to account for spatial and temporal contextual information  ...  Table 1: Action recognition comparison with the state-of-the-art for single video action matching (percentage of the average recognition rate).  ...
doi:10.1016/j.imavis.2013.08.005 fatcat:oyvytzkhnfalnlxqdp2zkqtwr4
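
A much-simplified sketch of single-example matching with local spatio-temporal volumes: overlapping 3D patches are sampled from the query and a target clip, and the target is scored by the mean distance of its patches to their nearest query patches. This is an illustrative simplification under assumed patch sizes and strides, not the hierarchical codebook with contextual compositions described in the paper.

# Score a target clip against a single query example via local 3D patch matching.
import numpy as np

def sample_volumes(video, size=(4, 8, 8), stride=4):
    t, h, w = video.shape
    dt, dy, dx = size
    vols = [video[i:i + dt, j:j + dy, k:k + dx].ravel()
            for i in range(0, t - dt + 1, stride)
            for j in range(0, h - dy + 1, stride)
            for k in range(0, w - dx + 1, stride)]
    return np.array(vols)

def single_example_score(query_video, target_video):
    q = sample_volumes(query_video)
    t = sample_volumes(target_video)
    # Distance of each target volume to its nearest query volume; lower mean = better match.
    d = np.linalg.norm(t[:, None, :] - q[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

rng = np.random.default_rng(3)
query = rng.random((12, 32, 32))
target = query + 0.05 * rng.random((12, 32, 32))
print(single_example_score(query, target))    # small value: target resembles the query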

Structured feature-graph model for human activity recognition

Wanru Xu, Zhenjiang Miao, Xiao-Ping Zhang
2015 IEEE International Conference on Image Processing (ICIP)
In this paper, the activity is represented as a string of structured feature-graphs (SFGs) which models spatial structures and temporal structures simultaneously.  ...  Recent works have shown that extracting and learning mid-level features lead to significant improvement in human activity recognition.  ...  Thus, by combining local spatial matching with global temporal matching, we are able to match videos respecting their spatio-temporal structure simultaneously.  ...
doi:10.1109/icip.2015.7350999 dblp:conf/icip/XuMZ15 fatcat:ndfrmdlxifgwvaresdi75lwnia
Showing results 1–15 out of 4,103 results