STResNet_CF Tracker: The deep spatiotemporal features learning for correlation filter based robust visual object tracking

Zhengyu Zhu, Bing Liu, Yunbo Rao, Qiao Liu, Rui Zhang
2019 IEEE Access  
This paper presents a novel method for visual object tracking based on spatiotemporal features combined with correlation filters.  ...  INDEX TERMS Spatiotemporal residual network, correlation filter, visual object tracking, deep learning, convolutional neural networks.  ...  that the appearance model based on deep CNNs can achieve more excellent performance on the visual object tracking task [10].  ... 
doi:10.1109/access.2019.2903161 fatcat:uas76gauobcspkjwvuvy6wpevi
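
The correlation-filter component mentioned in the entry above is usually trained by ridge regression in the Fourier domain. A minimal single-channel sketch in NumPy (illustrative only; the function names and the Gaussian label are assumptions, not the STResNet_CF implementation):

import numpy as np

def gaussian_label(h, w, sigma=2.0):
    # Desired correlation response: a Gaussian peak at the target centre,
    # shifted so the peak sits at the origin, then taken to the Fourier domain.
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
    return np.fft.fft2(np.fft.ifftshift(g))

def train_filter(patch, label_f, lam=1e-2):
    # Closed-form ridge regression: filter = conj(X) * Y / (conj(X) * X + lambda).
    xf = np.fft.fft2(patch)
    return np.conj(xf) * label_f / (np.conj(xf) * xf + lam)

def detect(filter_f, patch):
    # Correlate the filter with a new search patch; the peak location gives
    # the estimated translation of the target relative to the previous frame.
    response = np.real(np.fft.ifft2(filter_f * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(response), response.shape)

Deep-feature variants of this scheme replace the raw patch with multi-channel CNN activations and sum the per-channel responses before locating the peak.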

Spatially Supervised Recurrent Convolutional Neural Networks for Visual Object Tracking [article]

Guanghan Ning, Zhi Zhang, Chen Huang, Zhihai He, Xiaobo Ren, Haohong Wang
2016 arXiv   pre-print
In this paper, we develop a new approach of spatially supervised recurrent convolutional neural networks for visual object tracking.  ...  Our extensive experimental results and performance comparison with state-of-the-art tracking methods on challenging benchmark video tracking datasets show that our tracker is more accurate and robust  ...  In this work, we propose to develop a new visual tracking approach based on recurrent convolutional neural networks, which extends the neural network learning and analysis into the spatial and temporal  ... 
arXiv:1607.05781v1 fatcat:o3qgs5ggvvbnfeyw3qnung7gyi
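
The recurrent formulation described above regresses a bounding box per frame from CNN features passed through an LSTM. A toy PyTorch sketch of that idea (layer sizes and names are hypothetical, not the authors' network):

import torch
import torch.nn as nn

class RecurrentBoxTracker(nn.Module):
    # Per-frame CNN features are fed to an LSTM that regresses a normalised
    # bounding box (cx, cy, w, h) for every frame of the clip.
    def __init__(self, feat_dim=256, hidden=128):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in for a pretrained CNN
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 4)

    def forward(self, frames):                  # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1)).view(b, t, -1)
        hidden, _ = self.lstm(feats)            # temporal modelling across frames
        return torch.sigmoid(self.head(hidden)) # (B, T, 4)

boxes = RecurrentBoxTracker()(torch.randn(2, 8, 3, 64, 64))  # -> shape (2, 8, 4)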

Deep Motion-Appearance Convolutions for Robust Visual Tracking

Haojie Li, Sihang Wu, Shuangping Huang, Kin-Man Lam, Xiaofen Xing
2019 IEEE Access  
In this paper, we propose a deep neural network for visual tracking, namely the Motion-Appearance Dual (MADual) network, which employs a dual-branch architecture using deep two-dimensional (2D) and  ...  To further improve the tracking precision, an extra ridge-regression model is trained for the tracking process, based not only on the bounding box given in the first frame, but also on its synchro-frame-cube  ...  based on spatial-temporal information.  ... 
doi:10.1109/access.2019.2958405 fatcat:777v6zmdobgk5njbvsxsuv5x4i

An Analytical Review on Some Recent Advances in Deep Learning Object Tracking Approaches

K. Nani Kumar, M. James Stephen, P. V. G. D. Prasad Reddy
2020 International Journal of Engineering Research and Technology (IJERT)  
This paper presents a detailed review of some of the recent advances in deep learning based visual object tracking approaches from a wide variety of algorithms often cited in the research literature.  ...  Visual object tracking in real-world, real-time application scenarios is a complex problem; therefore, it remains one of the most active areas of research in computer vision.  ...  Over recent years, visual object tracking has come to involve both the spatial and temporal information of video frames.  ... 
doi:10.17577/ijertv9is010309 fatcat:e7wny2gl35cuvfxrcfec3zxn7y

2019 Index IEEE Transactions on Circuits and Systems for Video Technology Vol. 29

2019 IEEE Transactions on Circuits and Systems for Video Technology (Print)  
The matching index entries reference Video Saliency Prediction Based on Spatial-Temporal Two-Stream Network (TCSVT, 2019); the surrounding author and page fragments are truncated in this snippet.  ... 
doi:10.1109/tcsvt.2019.2959179 fatcat:2bdmsygnonfjnmnvmb72c63tja

Joint Group Feature Selection and Discriminative Filter Learning for Robust Visual Object Tracking [article]

Tianyang Xu, Zhen-Hua Feng, Xiao-Jun Wu, Josef Kittler
2019 arXiv   pre-print
We propose a new Group Feature Selection method for Discriminative Correlation Filters (GFS-DCF) based visual object tracking.  ...  By design, specific temporal-spatial-channel configurations are dynamically learned in the tracking process, highlighting the relevant features, and alleviating the performance degrading impact of less  ...  It is indisputable that recent advances in DCF-based tracking owe to a great extent to the use of robust deep CNN features.  ... 
arXiv:1907.13242v2 fatcat:ky4ro3y6xbd4zb3edee7uatd2a

Saliency-Enhanced Robust Visual Tracking [article]

Caglar Aytekin, Francesco Cricri, Emre Aksu
2018 arXiv   pre-print
Discrete correlation filter (DCF) based trackers have shown considerable success in visual object tracking.  ...  Deep salient object detection is one example of such high-level features, as it makes use of semantic information to highlight the important regions in the given scene.  ...  In this work, we select this feature as the decision of a deep salient object detection network.  ... 
arXiv:1802.02783v1 fatcat:bqsbx4rmbzetdlfzpxnmnkhsva
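
One simple way to fold a deep saliency map into a DCF decision, in the spirit of the entry above (the fusion rule and the alpha parameter are assumptions, not necessarily the paper's exact scheme):

import numpy as np

def saliency_weighted_peak(response, saliency, alpha=0.5):
    # Blend the correlation-filter response with a saliency map before locating
    # the target, so peaks that also fall on salient regions are preferred.
    # Both maps are (H, W); alpha controls the influence of saliency.
    sal = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
    fused = (1.0 - alpha) * response + alpha * response * sal
    return np.unravel_index(np.argmax(fused), fused.shape)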

Joint Group Feature Selection and Discriminative Filter Learning for Robust Visual Object Tracking

Tianyang Xu, Zhen-Hua Feng, Xiao-Jun Wu, Josef Kittler
2019 IEEE/CVF International Conference on Computer Vision (ICCV)  
We propose a new Group Feature Selection method for Discriminative Correlation Filters (GFS-DCF) based visual object tracking.  ...  In contrast to the widely used spatial regularisation or feature selection methods, to the best of our knowledge, this is the first time that channel selection has been advocated for DCF-based tracking  ...  It is indisputable that recent advances in DCF-based tracking owe to a great extent to the use of robust deep CNN features.  ... 
doi:10.1109/iccv.2019.00804 dblp:conf/iccv/XuFWK19 fatcat:ukz6kxi6yrcr3aj5qec6t6ll4q
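
Channel selection of the kind advocated above is learned jointly with the filter in GFS-DCF via group-sparsity regularisation; a much simpler energy-based stand-in (NumPy; the keep ratio and function name are illustrative assumptions, not the paper's optimisation) conveys the idea:

import numpy as np

def select_channels(features, keep_ratio=0.25):
    # Group (channel-wise) feature selection: score each channel by its L2
    # energy over the spatial support and keep only the strongest groups.
    # features: (C, H, W) deep feature map for the search region.
    energy = np.sqrt((features ** 2).sum(axis=(1, 2)))
    keep = max(1, int(keep_ratio * features.shape[0]))
    idx = np.argsort(energy)[::-1][:keep]
    mask = np.zeros(features.shape[0], dtype=bool)
    mask[idx] = True
    return features[mask], mask

feats = np.random.randn(512, 31, 31).astype(np.float32)
selected, mask = select_channels(feats)   # selected has shape (128, 31, 31)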

Learning Spatial–Temporal Background-Aware Based Tracking

Peiting Gu, Peizhong Liu, Jianhua Deng, Zhi Chen
2021 Applied Sciences  
In this paper, a spatial-temporal regularization module based on the BACF (background-aware correlation filter) framework is proposed, which introduces a temporal regularization term to deal effectively  ...  Discriminative correlation filter (DCF) based tracking algorithms have achieved prominent speed and accuracy, which has attracted extensive attention and research.  ...  based on deep learning networks.  ... 
doi:10.3390/app11188427 fatcat:ecwy6lflg5bkppm2byyanuoquy
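
Temporal regularisation of a correlation filter typically adds a penalty that pulls the current filter toward the previous frame's filter while keeping a closed-form, element-wise Fourier-domain update. A single-channel sketch (illustrative; not the paper's solver for the BACF cropping operator):

import numpy as np

def update_filter(patch, label_f, prev_filter_f, lam=1e-2, mu=0.1):
    # Ridge regression with a temporal term:
    #   min_f ||X f - y||^2 + lam ||f||^2 + mu ||f - f_prev||^2
    # solved element-wise in the Fourier domain for one feature channel.
    xf = np.fft.fft2(patch)
    num = np.conj(xf) * label_f + mu * prev_filter_f
    den = np.conj(xf) * xf + lam + mu
    return num / den

Setting mu = 0 recovers the plain per-frame ridge regression; larger mu smooths the filter over time at the cost of slower adaptation.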

Real Time Visual Tracking using Spatial-Aware Temporal Aggregation Network [article]

Tao Hu, Lichao Huang, Xianming Liu, Han Shen
2019 arXiv   pre-print
More powerful feature representations derived from deep neural networks broadly benefit visual tracking algorithms.  ...  This paper proposes a correlation filter based tracking method which aggregates historical features in a spatially aligned and scale-aware paradigm.  ...  Related Work In this section, we give a brief overview of CF-based visual tracking and spatial-temporal fusion approaches related to our method.  ... 
arXiv:1908.00692v1 fatcat:acwywix2jbgkbicu6ysgordjfu
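
A bare-bones way to aggregate historical templates over time, ignoring the paper's spatial alignment and scale handling (the decay weighting and shapes here are assumptions):

import numpy as np

def aggregate_templates(history, decay=0.8):
    # Weighted sum of historical feature templates with exponentially decaying
    # weights; newest template last and most heavily weighted.
    # history: list of (C, H, W) arrays.
    weights = np.array([decay ** (len(history) - 1 - i) for i in range(len(history))])
    weights /= weights.sum()
    return sum(w * t for w, t in zip(weights, history))

history = [np.random.randn(64, 16, 16) for _ in range(5)]
template = aggregate_templates(history)   # -> (64, 16, 16)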

Multiple Traffic Target Tracking with Spatial-Temporal Affinity Network

Yamin Sun, Yue Zhao, Sirui Wang, Lorenzo Putzu
2022 Computational Intelligence and Neuroscience  
In this study, a spatial-temporal encoder-decoder affinity network is proposed for multiple traffic targets tracking, aiming to utilize the power of deep learning to learn a robust spatial-temporal affinity  ...  spatial and temporal information from the detections and the tracklets of the tracked targets.  ...  Acknowledgments The authors want to express their great gratitude to the National Natural Science Foundation of China (Grant no. 52108246) and the China Postdoctoral Science Foundation (Grant no. 2020M673608XB)  ... 
doi:10.1155/2022/9693767 pmid:35655505 pmcid:PMC9152393 fatcat:6nkbbtg64beenpfo5kqhu2psfy
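
Once a spatial-temporal affinity matrix between tracklets and detections is available, multi-target tracking reduces to an assignment problem. A minimal association step using SciPy's Hungarian solver (the random affinity matrix is a placeholder for the learned network's output):

import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(affinity, min_affinity=0.3):
    # Match tracklets (rows) to detections (columns) by maximising total affinity;
    # pairs whose affinity falls below min_affinity are treated as unmatched.
    rows, cols = linear_sum_assignment(-affinity)   # negate to maximise
    matches = [(r, c) for r, c in zip(rows, cols) if affinity[r, c] >= min_affinity]
    unmatched_tracks = set(range(affinity.shape[0])) - {r for r, _ in matches}
    unmatched_dets = set(range(affinity.shape[1])) - {c for _, c in matches}
    return matches, unmatched_tracks, unmatched_dets

aff = np.random.rand(4, 5)   # e.g. 4 existing tracklets vs 5 new detections
print(associate(aff))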

TRAT: Tracking by Attention Using Spatio-Temporal Features [article]

Hasan Saribas, Hakan Cevikalp, Okan Köpüklü, Bedirhan Uzun
2020 arXiv   pre-print
In this paper, we propose a two-stream deep neural network tracker that uses both spatial and temporal features.  ...  Robust object tracking requires knowledge of tracked objects' appearance, motion and their evolution over time.  ...  [54] proposed a deep neural network that includes temporal and spatial networks, where the temporal network collects key historical temporal samples by solving a sparse optimization problem.  ... 
arXiv:2011.09524v1 fatcat:u32znmfjrzae5dmj7rtbjqzon4
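
A generic stand-in for attention-based fusion of a spatial (appearance) stream and a temporal (motion) stream, sketched in PyTorch (channel sizes and the gating design are assumptions, not the TRAT architecture):

import torch
import torch.nn as nn

class TwoStreamAttentionFusion(nn.Module):
    # Fuse appearance and motion feature maps with learned attention weights:
    # a small gate predicts one weight per stream and the streams are blended.
    def __init__(self, channels=256):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(2 * channels, 2), nn.Softmax(dim=1),
        )

    def forward(self, spatial_feat, temporal_feat):   # both (B, C, H, W)
        w = self.gate(torch.cat([spatial_feat, temporal_feat], dim=1))  # (B, 2)
        w = w.view(-1, 2, 1, 1, 1)
        stacked = torch.stack([spatial_feat, temporal_feat], dim=1)     # (B, 2, C, H, W)
        return (w * stacked).sum(dim=1)                                 # (B, C, H, W)

fused = TwoStreamAttentionFusion()(torch.randn(1, 256, 22, 22),
                                   torch.randn(1, 256, 22, 22))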

2020 Index IEEE Transactions on Image Processing Vol. 29

2020 IEEE Transactions on Image Processing  
Legible entries in this index snippet include: Deep Reinforcement Learning for Weak Human Activity Localization, Xu, W., +, TIP 2020, 1522-1535; and Fan, J., Deep Spatial and Temporal Network for Robust Visual Object Tracking, TIP 2020, 1762-1775.  ... 
doi:10.1109/tip.2020.3046056 fatcat:24m6k2elprf2nfmucbjzhvzk3m

Graph Convolutional Tracking

Junyu Gao, Tianzhu Zhang, Changsheng Xu
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
To comprehensively leverage the spatial-temporal structure of historical target exemplars and benefit from context information, in this work we present a novel Graph Convolutional Tracking (GCT  ...  Tracking by siamese networks has achieved favorable performance in recent years.  ...  A Spatial-Temporal GCN (ST-GCN) is employed to learn a robust target appearance model on this graph and generate a spatial-temporal feature (ST-Feature).  ... 
doi:10.1109/cvpr.2019.00478 dblp:conf/cvpr/GaoZX19 fatcat:gbvsjl2szjccnciwhajkynipwe
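
The ST-GCN mentioned above builds on the standard graph convolution H' = sigma(D^{-1/2} (A + I) D^{-1/2} H W) over a graph of historical exemplar parts. A tiny NumPy rendition (the graph, feature shapes, and weights are placeholders, not GCT's actual construction):

import numpy as np

def gcn_layer(features, adjacency, weights):
    # One graph-convolution layer with symmetric normalisation and ReLU.
    # features: (N, F) node features, adjacency: (N, N), weights: (F, F_out).
    a_hat = adjacency + np.eye(adjacency.shape[0])        # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(norm_adj @ features @ weights, 0.0)

nodes = np.random.randn(9, 64)                            # 9 exemplar parts, 64-d features
adj = (np.random.rand(9, 9) > 0.5).astype(float)
adj = np.maximum(adj, adj.T)                              # make the graph undirected
out = gcn_layer(nodes, adj, np.random.randn(64, 32))      # -> (9, 32)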

Toward human-centric deep video understanding

Wenjun Zeng
2020 APSIPA Transactions on Signal and Information Processing  
We show that semantic models, view-invariant models, and spatial-temporal visual attention mechanisms are important building blocks. We also discuss future perspectives of video understanding.  ...  In this paper, we share our views on why and how to use a human-centric approach to address challenging video understanding problems.  ...  Fig. 3. Spatial-temporal attention network. Both attention networks use a one-layer LSTM network.  ... 
doi:10.1017/atsip.2019.26 fatcat:rtrqzokr6bc4lj6vs6megf5xru
Showing results 1 — 15 out of 38,001 results