
Contextualized Spatio-Temporal Contrastive Learning with Self-Supervision [article]

Liangzhe Yuan, Rui Qian, Yin Cui, Boqing Gong, Florian Schroff, Ming-Hsuan Yang, Hartwig Adam, Ting Liu
2022 arXiv   pre-print
While very effective at learning holistic image and video representations, such an objective becomes sub-optimal for learning spatio-temporally fine-grained features in videos, where scenes and instances  ...  In this paper, we present Contextualized Spatio-Temporal Contrastive Learning (ConST-CL) to effectively learn spatio-temporally fine-grained video representations via self-supervision.  ... 
arXiv:2112.05181v2 fatcat:2gjl5kojwrh6dph4y642gvonxi
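
The snippet above centers on a contrastive, self-supervised objective over region-level (spatio-temporally fine-grained) features. As a rough, hypothetical illustration of that kind of objective (not the actual ConST-CL loss), an InfoNCE-style contrastive loss over paired region embeddings might look like this:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Toy InfoNCE loss: each anchor region embedding should match its own
    positive (the same region in another view/frame) against all other
    positives in the batch.

    anchors, positives: (N, D) L2-normalised embeddings; row i of `positives`
    is the positive for row i of `anchors`.
    """
    logits = anchors @ positives.T / temperature            # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)             # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                      # positives lie on the diagonal

# Hypothetical usage with random, normalised region embeddings.
rng = np.random.default_rng(0)
a = rng.normal(size=(8, 128)); a /= np.linalg.norm(a, axis=1, keepdims=True)
p = a + 0.05 * rng.normal(size=a.shape); p /= np.linalg.norm(p, axis=1, keepdims=True)
print(float(info_nce(a, p)))
```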

Fast Visual Tracking via Dense Spatio-temporal Context Learning [chapter]

Kaihua Zhang, Lei Zhang, Qingshan Liu, David Zhang, Ming-Hsuan Yang
2014 Lecture Notes in Computer Science  
In this paper, we present a simple yet fast and robust algorithm which exploits the dense spatio-temporal context for visual tracking.  ...  The Fast Fourier Transform (FFT) is adopted for fast learning and detection in this work, which requires only 4 FFT operations.  ... 
doi:10.1007/978-3-319-10602-1_9 fatcat:unqq53pwvfel5g3rq4fxnkmw2e
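
The entry above emphasizes that both learning and detection are carried out with the FFT. The following is only a minimal sketch of that general idea (correlation-style detection and model learning as element-wise operations in the Fourier domain); it omits the weighting, confidence-map design, and temporal model update of the published method, and the function names are illustrative:

```python
import numpy as np

def learn_context_model_fft(confidence_map, context_prior, eps=1e-6):
    """Learn a context model in the Fourier domain.

    Illustrative only: the confidence map is 'deconvolved' by the context
    prior with simple regularisation, standing in for the closed-form
    Fourier-domain learning step of STC-style trackers.
    """
    return np.fft.fft2(confidence_map) / (np.fft.fft2(context_prior) + eps)

def detect_fft(context_prior, model_fft):
    """Score every location of the search window with one forward and one
    inverse FFT, then return the peak position of the response map."""
    response = np.real(np.fft.ifft2(model_fft * np.fft.fft2(context_prior)))
    return np.unravel_index(np.argmax(response), response.shape)

# Hypothetical usage on a 64x64 search window.
rng = np.random.default_rng(0)
prior = rng.random((64, 64))
conf = rng.random((64, 64))
model = learn_context_model_fft(conf, prior)
print(detect_fft(prior, model))
```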

Robust Visual Tracking Integrating Spatio-Temporal Model

Min Jiang, Jiao Wu, Jun Kong, Chenhua Liu, Shengwei Tian
2016 International Journal of Signal Processing, Image Processing and Pattern Recognition  
To solve this issue, a robust target tracking method that integrates a spatio-temporal model to constrain the search area is proposed in this paper.  ...  By integrating the spatio-temporal model, the proposed method is more robust in dealing with appearance variation, background clutter, and abrupt motion.  ...  the target from drifting to the background. The Fast Fourier Transform is adopted to learn the spatio-temporal model, and a very sparse measurement matrix is adopted to efficiently extract the features  ... 
doi:10.14257/ijsip.2016.9.10.19 fatcat:px6qt6hmd5d2finukkdpr4yihi
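
This entry mentions a "very sparse measurement matrix" for efficient feature extraction. A minimal sketch of one common choice, a very sparse random projection in the style of compressive-tracking features, is shown below; the exact matrix and features used in the paper may differ, and all sizes here are made up:

```python
import numpy as np

def sparse_measurement_matrix(n_out, n_in, s=3, seed=0):
    """Very sparse random projection: entries take the values
    +sqrt(s), 0, -sqrt(s) with probabilities 1/(2s), 1-1/s, 1/(2s),
    so most of the matrix is zero and the projection is cheap to apply."""
    rng = np.random.default_rng(seed)
    return rng.choice(
        [np.sqrt(s), 0.0, -np.sqrt(s)],
        size=(n_out, n_in),
        p=[1.0 / (2 * s), 1.0 - 1.0 / s, 1.0 / (2 * s)],
    )

# Hypothetical usage: compress a 1024-dimensional image feature to 50 numbers.
R = sparse_measurement_matrix(n_out=50, n_in=1024)
x = np.random.rand(1024)          # high-dimensional appearance feature
compressed = R @ x                # low-dimensional feature fed to the tracker
print(compressed.shape)
```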

Visual learning and recognition of a probabilistic spatio-temporal model of cyclic human locomotion

M. Peternel, A. Leonardis
2004 Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004.  
We analyze a PCA representation of a set of cyclic curves, pointing out properties of the representation which can be used for spatio-temporal alignment in tracking and recognition tasks.  ...  We present a novel representation of cyclic human locomotion based on a set of spatio-temporal curves of tracked points on the surface of a person.  ...  The main contribution of this paper is a method for learning and recognition of the spatio-temporal distribution of a set of spatio-temporal curves over a number of iterations of cyclic motion.  ... 
doi:10.1109/icpr.2004.1333725 dblp:conf/icpr/PeternelL04 fatcat:5vqfx5auqbhnncjihavoaw2ipm

An adaptive learning method for target tracking across multiple cameras

Kuan-Wen Chen, Chih-Chuan Lai, Yi-Ping Hung, Chu-Song Chen
2008 IEEE Conference on Computer Vision and Pattern Recognition  
Two visual cues are usually employed for tracking targets across cameras: the spatio-temporal cue and the appearance cue.  ...  This paper proposes an adaptive learning method for tracking targets across multiple cameras with disjoint views.  ...  Tracking targets across multiple cameras with disjoint views is generally a correspondence problem dependent on two visual cues: the spatio-temporal cue and the appearance cue.  ... 
doi:10.1109/cvpr.2008.4587505 dblp:conf/cvpr/ChenLHC08 fatcat:brm6vssy6ngype2h35j4xlzeki
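
The snippet describes correspondence across disjoint camera views as driven by a spatio-temporal cue and an appearance cue. A toy sketch of fusing the two cues as independent likelihoods follows; the paper's contribution is an adaptive way of learning such models, which this sketch does not reproduce, and every parameter below is hypothetical:

```python
import numpy as np

def correspondence_score(transit_time, appearance_dist,
                         transit_mean=12.0, transit_std=3.0, appearance_scale=0.5):
    """Toy score for matching an exit from camera A with an entry into
    camera B.  All parameters are hypothetical.

    transit_time:    observed gap in seconds between the exit and the entry.
    appearance_dist: distance between the two appearance descriptors.
    """
    # Spatio-temporal cue: Gaussian model of the transit time between views.
    p_st = np.exp(-0.5 * ((transit_time - transit_mean) / transit_std) ** 2)
    # Appearance cue: similarity decays exponentially with descriptor distance.
    p_app = np.exp(-appearance_dist / appearance_scale)
    # Treat the two cues as independent and combine them multiplicatively.
    return p_st * p_app

print(correspondence_score(transit_time=11.0, appearance_dist=0.3))
```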

Fast Tracking via Spatio-Temporal Context Learning [article]

Kaihua Zhang and Lei Zhang and Ming-Hsuan Yang and David Zhang
2013 arXiv   pre-print
In this paper, we present a simple yet fast and robust algorithm which exploits the spatio-temporal context for visual tracking.  ...  The Fast Fourier Transform is adopted for fast learning and detection in this work.  ... 
arXiv:1311.1939v1 fatcat:l2v5u5g6eja3vgpmautrndakj4

A Real-time Visual Tracking System with Delivery Robot

Chao LI, Chu-qing CAO, Yun-feng GAO
2017 DEStech Transactions on Computer Science and Engineering  
In this paper, we propose a real-time visual tracking system for a delivery robot. RGB-D data are collected for environmental perception.  ...  In the system design, an FFT-based tracking algorithm is used, and the experimental results show that this algorithm is both robust and efficient and thus very appropriate for the luggage delivery robotic  ...  Next, the learned spatial context model is used to update a spatio-temporal context model for the next frame.  ... 
doi:10.12783/dtcse/aita2016/7554 fatcat:3zw22ulyobfbhfgzzufbswxg7y

Saliency Map for Object Tracking

Dongping Zhang, Wenting Li, Min Sun, Haibin Yu
2015 International Journal of Signal Processing, Image Processing and Pattern Recognition  
This paper investigates the contribution of saliency maps to object tracking, and proposes a saliency detection method combined with location information.  ...  Most state-of-the-art tracking algorithms rely on either intensity or color information.  ...  Our approach is based on the STC tracker [17], which exploits spatio-temporal context learning for object tracking.  ... 
doi:10.14257/ijsip.2015.8.10.25 fatcat:faalw27csvgnzgbreaq6a7fjhy

View-Invariant Action Recognition [chapter]

Yogesh Singh Rawat, Shruti Vyas
2020 Computer Vision  
We have seen a lot of research exploring these dynamics of spatio-temporal appearance for learning a visual representation of human actions.  ...  The varying pattern of spatio-temporal appearance generated by human action is key for identifying the performed action.  ...  The tracking of joints along motion trajectories can be effective for action representation.  ... 
doi:10.1007/978-3-030-03243-2_878-1 fatcat:up2g5kc6h5borf56tvg3vj4ljm

A perceptually based spatio-temporal computational framework for visual saliency estimation

Petros Koutras, Petros Maragos
2015 Signal Processing: Image Communication  
The purpose of this paper is to demonstrate a perceptually based spatio-temporal computational framework for visual saliency estimation.  ...  We have developed a new spatio-temporal visual front-end based on biologically inspired 3D Gabor filters, which is applied to both the luminance and the color streams and produces spatio-temporal energy  ... 
doi:10.1016/j.image.2015.08.004 fatcat:kha3povnbna5lbecznl2syefqm
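
The snippet describes a front-end of biologically inspired 3D Gabor filters producing spatio-temporal energies. As a simplified illustration (a 2D x-t quadrature pair rather than the paper's full 3D, luminance-and-color filter bank), the energy computation might look like:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_energy_xt(video_xt, f_x=0.1, f_t=0.2, sigma=4.0, size=15):
    """Spatio-temporal energy from a quadrature pair of Gabor filters applied
    to an x-t slice of a video (one image row tracked over time).

    video_xt: (T, X) array.  f_x and f_t are the spatial and temporal
    frequencies of the filter in cycles per sample.
    """
    r = np.arange(size) - size // 2
    t, x = np.meshgrid(r, r, indexing="ij")
    envelope = np.exp(-(x ** 2 + t ** 2) / (2 * sigma ** 2))
    phase = 2 * np.pi * (f_x * x + f_t * t)
    even, odd = envelope * np.cos(phase), envelope * np.sin(phase)

    e = convolve2d(video_xt, even, mode="same")
    o = convolve2d(video_xt, odd, mode="same")
    return e ** 2 + o ** 2        # phase-invariant spatio-temporal energy

# Hypothetical usage on a random 100-frame, 200-pixel-wide slice.
print(gabor_energy_xt(np.random.rand(100, 200)).shape)
```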

2019 Index IEEE Transactions on Circuits and Systems for Video Technology Vol. 29

2019 IEEE Transactions on Circuits and Systems for Video Technology (Print)  
Action Recognition With Spatio-Temporal Visual Attention on Skeleton Image Sequences, TCSVT July 2019, 2126-2137.  ...  Revisiting Jump-Diffusion Process for Visual Tracking: A Reinforcement Learning Approach, TCSVT Oct. 2019, 2941-2959.  ... 
doi:10.1109/tcsvt.2019.2959179 fatcat:2bdmsygnonfjnmnvmb72c63tja

Human-centric Spatio-Temporal Video Grounding With Visual Transformers [article]

Zongheng Tang, Yue Liao, Si Liu, Guanbin Li, Xiaojie Jin, Hongxu Jiang, Qian Yu, Dong Xu
2021 arXiv   pre-print
We tackle this task by proposing an effective baseline method named Spatio-Temporal Grounding with Visual Transformers (STGVT), which utilizes Visual Transformers to extract cross-modal representations for video-sentence matching and temporal localization.  ...  The single-stream models [14]-[18] employ a single transformer to learn cross-modal features from the visual and the textual modalities.  ... 
arXiv:2011.05049v2 fatcat:lfgpc7gsxvbbzdwqhhv3qgv4b4

Aligning Videos in Space and Time [article]

Senthil Purushwalkam, Tian Ye, Saurabh Gupta, Abhinav Gupta
2020 arXiv   pre-print
Obtaining training data for such a fine-grained alignment task is challenging and often ambiguous.  ...  In this paper, we focus on the task of extracting visual correspondences across videos. Given a query video clip from an action class, we aim to align it with training videos in space and time.  ...  As a first step for spatio-temporal alignment, we retrieve the best video to align.  ... 
arXiv:2007.04515v1 fatcat:6tdjecdhyrfzrgcblhsbydn4iy

15 Keypoints Is All You Need [article]

Michael Snower, Asim Kadav, Farley Lai, Hans Peter Graf
2020 arXiv   pre-print
Then, a Transformer-based network makes a binary classification as to whether one pose temporally follows another.  ...  However, existing pose tracking methods are unable to accurately model temporal relationships and require significant computation, often computing the tracks offline.  ...  Instead, in a supervised setting, KeyTrack uses transformers to learn spatio-temporal keypoint relationships for the visual problem of pose tracking.  ... 
arXiv:1912.02323v2 fatcat:ogyl37mfazeynb4o66hh6caji4
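
The snippet states that a Transformer-based network classifies whether one pose temporally follows another, using only 15 keypoints. The following toy PyTorch module illustrates that idea; its tokenisation, dimensions, and pooling are guesses for illustration, not the KeyTrack architecture:

```python
import torch
import torch.nn as nn

class PoseSuccessionClassifier(nn.Module):
    """Toy model: given two poses of 15 keypoints each, output a logit for
    'pose B temporally follows pose A'."""

    def __init__(self, num_keypoints=15, dim=64):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.embed = nn.Linear(2, dim)        # (x, y) coordinate -> token
        self.pose_id = nn.Embedding(2, dim)   # which pose a token came from
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, 1)

    def forward(self, pose_a, pose_b):
        # pose_a, pose_b: (B, num_keypoints, 2) keypoint coordinates.
        tokens = torch.cat([self.embed(pose_a), self.embed(pose_b)], dim=1)
        ids = torch.cat([
            torch.zeros(self.num_keypoints, dtype=torch.long),
            torch.ones(self.num_keypoints, dtype=torch.long),
        ]).to(tokens.device)
        tokens = tokens + self.pose_id(ids)            # broadcast over the batch
        encoded = self.encoder(tokens)                 # (B, 2 * num_keypoints, dim)
        return self.head(encoded.mean(dim=1)).squeeze(-1)

# Hypothetical usage with random keypoints.
model = PoseSuccessionClassifier()
print(model(torch.randn(4, 15, 2), torch.randn(4, 15, 2)).shape)   # torch.Size([4])
```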

STC: Spatio-Temporal Contrastive Learning for Video Instance Segmentation [article]

Zhengkai Jiang, Zhangxuan Gu, Jinlong Peng, Hang Zhou, Liang Liu, Yabiao Wang, Ying Tai, Chengjie Wang, Liqing Zhang
2022 arXiv   pre-print
To improve instance association accuracy, a novel bi-directional spatio-temporal contrastive learning strategy for tracking embeddings across frames is proposed.  ...  Moreover, an instance-wise temporal consistency scheme is utilized to produce temporally coherent results.  ...  Next, we present a novel spatio-temporal contrastive learning strategy for tracking embeddings to achieve accurate and robust instance association.  ... 
arXiv:2202.03747v1 fatcat:wdj6puyv35cgpe2fad2vrzsrny
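
The snippet proposes bi-directional spatio-temporal contrastive learning of tracking embeddings for instance association. A toy sketch of bi-directional (mutual) soft matching between per-frame instance embeddings is given below; it only illustrates why a two-way constraint helps association and is not the paper's training loss:

```python
import numpy as np

def bidirectional_match(emb_t, emb_t1, temperature=0.07):
    """Toy bi-directional association of instance embeddings between frame t
    and frame t+1: softmax over similarities in each direction, multiplied so
    that a match must be preferred both ways.

    emb_t: (N, D) and emb_t1: (M, D) L2-normalised instance embeddings.
    Returns the best match in frame t+1 for every instance in frame t,
    together with the full (N, M) score matrix.
    """
    sim = emb_t @ emb_t1.T / temperature
    sim -= sim.max()                                             # numerical stability
    fwd = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)   # t -> t+1
    bwd = np.exp(sim) / np.exp(sim).sum(axis=0, keepdims=True)   # t+1 -> t
    scores = fwd * bwd
    return scores.argmax(axis=1), scores

# Hypothetical usage with random normalised embeddings.
e0 = np.random.randn(5, 32); e0 /= np.linalg.norm(e0, axis=1, keepdims=True)
e1 = np.random.randn(6, 32); e1 /= np.linalg.norm(e1, axis=1, keepdims=True)
print(bidirectional_match(e0, e1)[0])
```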
Showing results 1 — 15 out of 13,621 results