1,487 Hits in 6.9 sec

Learning Target-oriented Dual Attention for Robust RGB-T Tracking [article]

Rui Yang, Yabin Zhu, Xiao Wang, Chenglong Li, Jin Tang
2019 arXiv   pre-print
In this paper, we propose two visual attention mechanisms for robust RGB-T object tracking.  ...  Existing RGB-T trackers fuse different modalities by robust feature representation learning or adaptive modal weighting.  ...  Different from these works, we propose a novel local and global attention mechanism for robust RGB-T tracking.  ...
arXiv:1908.04441v1 fatcat:qx22gd4yjnfpfhbit72x65667y

2021 Index IEEE Signal Processing Letters Vol. 28

2021 IEEE Signal Processing Letters  
The Author Index contains the primary entry for each item, listed under the first author's name.  ...  Liu, H., +, LSP 2021 653-657; A Dual Rank-Constrained Filter Pruning Approach for Convolutional Neural ..., C., +, LSP 2021 419-423; TSFNet: Two-Stage Fusion Network for RGB-T Salient Object Detection, LSP 2021 46-50.  ...
doi:10.1109/lsp.2022.3145253 fatcat:a3xqvok75vgepcckwnhh2mty74

2021 Index IEEE Transactions on Image Processing Vol. 30

2021 IEEE Transactions on Image Processing  
The Author Index contains the primary entry for each item, listed under the first author's name.  ...  Yang, K., +, TIP 2021 1866-1881; Jointly Modeling Motion and Appearance Cues for Robust RGB-T Tracking.  ...  Fu, C., +, TIP 2021 1608-1622; Fast Hyperspectral Image Recovery of Dual-Camera Compressive Hyper- ...; Sparse Learning-Based Correlation Filter for Robust Tracking.  ...
doi:10.1109/tip.2022.3142569 fatcat:z26yhwuecbgrnb2czhwjlf73qu

Table of Contents

2021 IEEE Signal Processing Letters  
TSFNet: Two-Stage Fusion Network for RGB-T Salient Object Detection . . . Q. Guo, W. Zhou, J. Lei, and L.  ...  Learning Dynamic Spatial-Temporal Regularization for UAV Object Tracking . . . C. Deng, S. He, Y. Han, and B.  ...
doi:10.1109/lsp.2021.3134549 fatcat:m6obtl7k7zdqvd62eo3c4tptfy

Temporal Aggregation for Adaptive RGBT Tracking [article]

Zhangyong Tang, Tianyang Xu, Xiao-Jun Wu
2022 arXiv   pre-print
In this paper, we propose an RGBT tracker which takes spatio-temporal clues into account for robust appearance model learning and, simultaneously, constructs an adaptive fusion sub-network for cross-modal  ...  Visual object tracking with RGB and thermal infrared (TIR) spectra available, abbreviated as RGBT tracking, is a novel and challenging research topic which is drawing increasing attention.  ...  Taking 'Tir->Rgb' as an example, as shown in Fig. 5(a), an attention map is first obtained from the RGB modality to guide the learning of the TIR modality.  ...
arXiv:2201.08949v2 fatcat:shfvqqqixncltcbfaaofbhfi5a
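The Temporal Aggregation entry above describes obtaining an attention map from one modality to guide the learning of the other. As a rough illustration of that idea only, and not the paper's actual architecture, the following PyTorch sketch computes a spatial attention map from RGB features and uses it to re-weight TIR features; the module and tensor names are assumptions made here for clarity.

```python
# Hedged sketch of cross-modal attention guidance: a spatial map derived from the
# RGB features re-weights the TIR features. Names and layer choices are illustrative
# assumptions, not the tracker described in the entry above.
import torch
import torch.nn as nn

class CrossModalSpatialAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolution collapses the guiding modality to a single-channel map
        self.to_map = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, guide_feat: torch.Tensor, target_feat: torch.Tensor) -> torch.Tensor:
        # guide_feat, target_feat: (B, C, H, W) feature maps from the two modalities
        attn = torch.sigmoid(self.to_map(guide_feat))   # (B, 1, H, W), values in [0, 1]
        return target_feat * (1.0 + attn)               # residual re-weighting

# Example: RGB features guide the TIR branch
rgb_feat = torch.randn(1, 64, 32, 32)
tir_feat = torch.randn(1, 64, 32, 32)
guided_tir = CrossModalSpatialAttention(64)(rgb_feat, tir_feat)
```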

Exploring Fusion Strategies for Accurate RGBT Visual Object Tracking [article]

Zhangyong Tang
2022 arXiv   pre-print
We address the problem of multi-modal object tracking in video and explore various options for fusing the complementary information conveyed by the visible (RGB) and thermal infrared (TIR) modalities, including  ...  Feature-level fusion is performed by an attention mechanism with optional channel excitation.  ...  Tracking with Multiple Modalities: Nowadays, tracking with multi-modal inputs, such as RGBT [41] and RGB-Depth tracking [42], is drawing increasing attention.  ...
arXiv:2201.08673v1 fatcat:iktlnaplubd7joaxbkbl2bk4t4
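The entry above mentions feature-level fusion through an attention mechanism with optional channel excitation. Below is a minimal squeeze-and-excitation style sketch of such a fusion block; it is a generic illustration under assumed tensor shapes, not the thesis' exact design.

```python
# Hedged sketch of feature-level RGB/TIR fusion with channel excitation
# (squeeze-and-excitation style); an illustration only, not the thesis' fusion block.
import torch
import torch.nn as nn

class ChannelExcitedFusion(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                              # squeeze: global spatial average
            nn.Conv2d(2 * channels, (2 * channels) // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d((2 * channels) // reduction, 2 * channels, kernel_size=1),
            nn.Sigmoid(),                                         # per-channel excitation weights
        )
        self.reduce = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb: torch.Tensor, tir: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb, tir], dim=1)   # concatenate modality features along channels
        x = x * self.gate(x)               # excite the more informative channels
        return self.reduce(x)              # fuse back to a single feature map
```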

Table of Contents

2021 IEEE Signal Processing Letters  
Learning Dynamic Spatial-Temporal Regularization for UAV Object Tracking . . . C. Deng, S. He, Y. Han, and B.  ...  TSFNet: Two-Stage Fusion Network for RGB-T Salient Object Detection . . . Q. Guo, W. Zhou, J. Lei, and L.  ...
doi:10.1109/lsp.2021.3134551 fatcat:ab4b4tb5rrcu5cq6aifdekrizq

Full-Angle Quaternions for Robustly Matching Vectors of 3D Rotations

Stephan Liwicki, Minh-Tri Pham, Stefanos Zafeiriou, Maja Pantic, Bjorn Stenger
2014 2014 IEEE Conference on Computer Vision and Pattern Recognition  
For the latter, we incorporate online subspace learning with the proposed FAQ representation to highlight the benefits of the new representation.  ...  We apply the distance to the problems of 3D shape recognition from point clouds and 2D object tracking in color video.  ...  We conclude that the FAQ representation can be employed for fast and robust online subspace learning for object tracking.  ... 
doi:10.1109/cvpr.2014.21 dblp:conf/cvpr/LiwickiPZPS14 fatcat:gcsnugktqvdthcknwnmdheklcq

Critical Overview of Visual Tracking with Kernel Correlation Filter

Srishti Yadav, Shahram Payandeh
2021 Technologies  
matrices) for training and tracking in real time.  ...  It is unlike deep learning, which is data intensive.  ...
doi:10.3390/technologies9040093 fatcat:e7klqu6545dp5mqbozy4a5cbau
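Both this overview and the Occlusion Aware KCF entry further down point to the circulant-matrix structure that lets correlation filters be trained and applied in real time via the FFT. The sketch below shows only the single-channel linear case (closed-form ridge regression per frequency bin) as a simplified stand-in for KCF's kernelized, multi-channel formulation; the label construction and variable names are assumptions made here.

```python
# Minimal single-channel linear correlation filter: circulant structure lets ridge
# regression be solved element-wise in the Fourier domain. Illustrative sketch only.
import numpy as np

def gaussian_labels(h, w, sigma=2.0):
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.roll(g, (-cy, -cx), axis=(0, 1))      # move the peak to the origin

def train_filter(patch, labels, lam=1e-2):
    X, Y = np.fft.fft2(patch), np.fft.fft2(labels)
    return np.conj(X) * Y / (np.conj(X) * X + lam)  # closed-form ridge solution per bin

def detect(filt, patch):
    response = np.real(np.fft.ifft2(filt * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(response), response.shape)  # peak = translation

# Toy usage: train on one grayscale patch, then locate a shifted copy of the target
patch = np.random.rand(64, 64)
filt = train_filter(patch, gaussian_labels(64, 64))
dy, dx = detect(filt, np.roll(patch, (3, 5), axis=(0, 1)))  # recovers the (3, 5) shift
```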

ADTrack: Target-Aware Dual Filter Learning for Real-Time Anti-Dark UAV Tracking [article]

Bowen Li, Changhong Fu, Fangqiang Ding, Junjie Ye, Fuling Lin
2021 arXiv   pre-print
Specifically, ADTrack adopts dual regression, where the context filter and the target-focused filter restrict each other for dual filter learning.  ...  The target-aware mask can be applied to jointly train a target-focused filter that assists the context filter for robust tracking.  ...  trains dual filters w_g and w_o by learning context information and target information separately.  ...
arXiv:2106.02495v1 fatcat:snsn6cilbfatlkdc7c6xwdwhum
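The ADTrack entry above describes dual regression in which a context filter w_g and a target-focused filter w_o restrict each other during training. The toy NumPy sketch below couples two ridge regressions with a term mu * ||w_g - w_o||^2 and solves them by alternating closed-form updates; it is a schematic of the idea only, and the masking, features, and objective of the actual tracker differ.

```python
# Hedged toy illustration of "dual regression": two ridge regressions coupled by
# mu * ||w_g - w_o||^2 so the filters restrict each other. Not ADTrack's formulation.
import numpy as np

def dual_ridge(Xg, Xo, y, lam=1.0, mu=0.5, iters=20):
    d = Xg.shape[1]
    wg, wo = np.zeros(d), np.zeros(d)
    I = np.eye(d)
    for _ in range(iters):
        # each sub-problem is a ridge regression pulled toward the other filter
        wg = np.linalg.solve(Xg.T @ Xg + (lam + mu) * I, Xg.T @ y + mu * wo)
        wo = np.linalg.solve(Xo.T @ Xo + (lam + mu) * I, Xo.T @ y + mu * wg)
    return wg, wo

# Toy data: context samples Xg (full patch) vs. masked, target-focused samples Xo
rng = np.random.default_rng(0)
Xg = rng.normal(size=(200, 16))
mask = (rng.random(16) > 0.5).astype(float)   # stand-in for a target-aware mask
Xo = Xg * mask
y = rng.normal(size=200)
w_g, w_o = dual_ridge(Xg, Xo, y)
```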

Deep Learning on Monocular Object Pose Detection and Tracking: A Comprehensive Overview [article]

Zhaoxin Fan, Yazhi Zhu, Yulin He, Qi Sun, Hongyan Liu, Jun He
2022 arXiv   pre-print
Among methods for object pose detection and tracking, deep learning is the most promising one, having shown better performance than others.  ...  Object pose detection and tracking has recently attracted increasing attention due to its wide applications in many areas, such as autonomous driving, robotics, and augmented reality.  ...  Monocular object pose tracking: given a series of monocular RGB/RGBD images I_{n-k}, I_{n-k+1}, ..., I_n from time steps T_{n-k} to T_n and the initial pose P_0 of the target object, the poses of the target  ...
arXiv:2105.14291v2 fatcat:2kxd4owthvf7tbcbnlqlqu4r3m
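The monocular pose tracking setting quoted above can be restated compactly as below, where f_theta is a generic placeholder for a tracking function (an assumption for illustration, not a specific model from the survey):

```latex
% Schematic restatement of the quoted tracking setting; f_\theta is a placeholder.
\hat{P}_n = f_\theta\!\left(I_{n-k}, I_{n-k+1}, \ldots, I_n;\; P_0\right)
```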

A Graphical Social Topology Model for Multi-Object Tracking [article]

Shan Gao, Xiaogang Chen, Qixiang Ye, Junliang Xing, Arjan Kuijper, Xiangyang Ji
2017 arXiv   pre-print
Tracking multiple objects is a challenging task when objects move in groups and occlude each other.  ...  Experiments on both RGB and RGB-D datasets confirm that the proposed multi-object tracker improves on the state of the art, especially in crowded scenes.  ...  Algorithm 1 (Group learning, Birth step): Input: E = {∅}; Output: G = {G_k}. Step 1: calculate T = {T_ij} via Eq. 1; for each connection T_ij, if T_ij < τ then E = E ∪ {(n_i, n_j)}; end for.  ...
arXiv:1702.04040v2 fatcat:x6sllwwdknf63l3tjbpiry3qma
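The snippet above quotes the birth step of the group-learning routine (Algorithm 1): compute pairwise quantities T_ij and add an edge (n_i, n_j) whenever T_ij < τ. The Python sketch below mirrors that step; since the paper's Eq. 1 for T_ij is not reproduced here, a plain Euclidean distance between track positions is used as a stand-in placeholder.

```python
# Hedged sketch of the quoted "birth" step: pairwise T_ij is computed for all object
# pairs and an edge is added when T_ij < tau. The distance used here is a placeholder.
import itertools
import numpy as np

def birth_step(positions, tau):
    """positions: dict mapping node id -> 2D position of a tracked object."""
    edges = set()
    for i, j in itertools.combinations(positions, 2):
        T_ij = np.linalg.norm(np.asarray(positions[i]) - np.asarray(positions[j]))
        if T_ij < tau:                      # condition as quoted in Algorithm 1
            edges.add((i, j))
    return edges  # E; connected components of (nodes, E) would yield the groups G_k

# Toy usage: two nearby objects form an edge, the distant one stays isolated
E = birth_step({0: (1.0, 1.0), 1: (1.5, 1.2), 2: (9.0, 9.0)}, tau=2.0)
```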

Occlusion Aware Kernel Correlation Filter Tracker using RGB-D [article]

Srishti Yadav
2021 arXiv   pre-print
Unlike deep learning, which requires large training datasets, correlation filter-based trackers like the Kernelized Correlation Filter (KCF) use implicit properties of tracked images (circulant matrices) for  ...  Despite their practical application in tracking, there remains a need for a better theoretical, mathematical, and experimental understanding of the fundamentals of KCF.  ...  of the tracked object for robustness.  ...
arXiv:2105.12161v1 fatcat:kemrvvyqlba6dg3gwxnbcvfjdi

Robust Tracking with Discriminative Ranking Middle-Level Patches

Hong Liu, Zilin Liang, Qianru Sun
2014 International Journal of Advanced Robotic Systems  
The appearance model has been shown to be essential for robust visual tracking since it is the basic criterion for locating targets in video sequences.  ...  Bottom-up features are defined at the pixel level, and each feature gets its discrimination score through a selective feature attention mechanism.  ...  (such as RGB, HSV; not used for grey images); intensity, texture, orientation, and colour (Red/Green and Blue/Yellow) contrast channels as calculated by Itti and Koch's saliency method [21]; graph-based  ...
doi:10.5772/58430 fatcat:keavvuxtonbspdxrxpllo56s2i
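The entry above lists pixel-level channels including intensity and Red/Green and Blue/Yellow colour-contrast channels in the spirit of Itti and Koch's saliency model [21]. The sketch below computes such opponency channels using the common textbook formulas, which are an assumption here and may differ in detail from the features actually used in the paper.

```python
# Hedged sketch of intensity and colour-opponency (RG, BY) channels in the spirit of
# Itti and Koch's saliency model; formulas are the usual textbook ones, assumed here.
import numpy as np

def opponency_channels(img):
    """img: float RGB image in [0, 1], shape (H, W, 3). Returns (intensity, RG, BY)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    intensity = (r + g + b) / 3.0
    R = r - (g + b) / 2.0                          # broadly tuned red
    G = g - (r + b) / 2.0                          # broadly tuned green
    B = b - (r + g) / 2.0                          # broadly tuned blue
    Y = (r + g) / 2.0 - np.abs(r - g) / 2.0 - b    # broadly tuned yellow
    return intensity, R - G, B - Y                 # RG and BY opponency maps

# Toy usage on a random image
intensity, rg, by = opponency_channels(np.random.rand(48, 48, 3))
```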

Interactions Between Specific Human and Omnidirectional Mobile Robot Using Deep Learning Approach: SSD-FN-KCF

Chih-Lyang Hwang, Ding-Sheng Wang, Fan-Chen Weng, Sheng-Lin Lai
2020 IEEE Access  
INDEX TERMS Deep learning, human detection, face recognition, visual tracking, omnidirectional mobile robot, adaptive finite-time hierarchical constraint control, human following.  ...  Based on the image processing result, the required pose for searching for or tracking a (specific) human is achieved by the image-based adaptive finite-time hierarchical constraint control.  ...  In [24], both KCF and the dual correlation filter outperform top-ranking trackers such as structured output tracking with kernels or tracking-learning-detection on a 50-video benchmark, despite running at  ...
doi:10.1109/access.2020.2976712 fatcat:66aoevfbbnbz7heoed5ohk2dzi
Showing results 1 — 15 out of 1,487 results