
Online Visual Tracking with One-Shot Context-Aware Domain Adaptation [article]

Hossein Kashiani, Amir Abbas Hamidi Imani, Shahriar Baradaran Shokouhi, Ahmad Ayatollahi
2021 arXiv   pre-print
An online learning policy makes visual trackers more robust to various distortions by learning domain-specific cues.  ...  However, trackers adopting this policy fail to fully leverage the discriminative context of the background areas.  ...  Visual object tracking is a fundamental task for human-machine interaction, autonomous driving, visual sports analysis, virtual reality, and human motion analysis.  ... 
arXiv:2008.09891v2 fatcat:2o57dm6rozggrin6pun6fwwtpq

Tracking by Joint Local and Global Search: A Target-aware Attention based Approach [article]

Xiao Wang, Jin Tang, Bin Luo, Yaowei Wang, Yonghong Tian, Feng Wu
2021 arXiv   pre-print
In the tracking procedure, we integrate the target-aware attention with multiple trackers by exploring candidate search regions for robust tracking.  ...  In this paper, we propose a novel and general target-aware attention mechanism (termed TANet) and integrate it with a tracking-by-detection framework to conduct joint local and global search for robust tracking.  ...  In addition to the aforementioned quantitative analysis, we also give some visualizations of the learned target-aware attention.  ... 
arXiv:2106.04840v1 fatcat:3pktzebc6vfzrmr7ueu7l7qpd4
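
The abstract only names the mechanism, so the following is a minimal sketch, under our own assumptions rather than TANet's actual design, of a target-aware attention map over a global search area: score every search-feature location by cosine similarity to a pooled template descriptor, then keep the top-scoring locations as candidate search regions. All function names here are illustrative.

```python
import torch
import torch.nn.functional as F

def target_aware_attention(template_feat, search_feat):
    """Score each search-feature location by similarity to the target.

    template_feat: (C, Ht, Wt) features of the target template.
    search_feat:   (C, Hs, Ws) features of the global search image.
    Returns an (Hs, Ws) attention map.
    """
    target = template_feat.mean(dim=(1, 2))            # pooled target descriptor (C,)
    target = F.normalize(target, dim=0)
    search = F.normalize(search_feat, dim=0)           # per-location unit vectors
    attn = torch.einsum("c,chw->hw", target, search)   # cosine-similarity map
    return attn.clamp(min=0.0)

def top_candidate_regions(attn, k=3):
    """Pick the k highest-scoring locations as candidate search centers."""
    scores, idx = attn.flatten().topk(k)
    w = attn.shape[1]
    ys = torch.div(idx, w, rounding_mode="floor")
    xs = idx % w
    return list(zip(ys.tolist(), xs.tolist(), scores.tolist()))
```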

GraspARL: Dynamic Grasping via Adversarial Reinforcement Learning [article]

Tianhao Wu, Fangwei Zhong, Yiran Geng, Hongchen Wang, Yongjian Zhu, Yizhou Wang, Hao Dong
2022 arXiv   pre-print
In this work, we introduce an adversarial reinforcement learning framework for dynamic grasping, namely GraspARL.  ...  Conventional approaches rely on a set of manually defined object motion patterns for training, resulting in poor generalization to unseen object trajectories.  ...  We would also like to thank our lab mates for the helpful discussions.  ... 
arXiv:2203.02119v2 fatcat:h7rbwcv2jrg7fozipm5ypk4s5u
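
The adversarial framing, an object "mover" trained against the grasping policy so that motion patterns need not be hand-designed, can be skeletonized as alternating zero-sum updates. Everything below (the env and agent interfaces, the reward sign convention) is hypothetical, not GraspARL's code:

```python
def train_adversarial(env, grasper, mover, episodes=10_000):
    """Alternating mover-vs-grasper training loop (hypothetical sketch)."""
    for _ in range(episodes):
        obs = env.reset()
        done = False
        while not done:
            # The mover controls the object trajectory, trying to make
            # grasping hard; the grasper controls the arm.
            a_move = mover.act(obs["mover"])
            a_grasp = grasper.act(obs["grasper"])
            obs, reward, done, _ = env.step({"mover": a_move, "grasper": a_grasp})
            # Zero-sum shaping: the mover is rewarded when the grasper fails.
            grasper.store(reward)
            mover.store(-reward)
        grasper.update()
        mover.update()
```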

MFGNet: Dynamic Modality-Aware Filter Generation for RGB-T Tracking [article]

Xiao Wang, Xiujun Shu, Shiliang Zhang, Bo Jiang, Yaowei Wang, Yonghong Tian, Feng Wu
2022 arXiv   pre-print
... convolutional kernels for various input images in practical tracking.  ...  A spatial and temporal recurrent neural network is used to capture the direction-aware context for accurate global attention prediction.  ...  We adopt the DFN for robust feature learning to mine modality-aware context information for robust RGB-T tracking.  ... 
arXiv:2107.10433v2 fatcat:3mxe5iidvrgbvbxdna4pwwlv74
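
"Dynamic filter generation" here means predicting convolution kernels from the input itself instead of using fixed learned weights. A generic dynamic-filter layer in that spirit, not MFGNet's actual design, can be implemented with PyTorch's grouped-convolution trick:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicFilterConv(nn.Module):
    """Generate input-conditioned depthwise conv kernels and apply them."""

    def __init__(self, channels, k=3):
        super().__init__()
        self.k = k
        # Predict one k x k kernel per channel from a global descriptor.
        self.gen = nn.Linear(channels, channels * k * k)

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        desc = x.mean(dim=(2, 3))                  # (B, C) global descriptor
        kernels = self.gen(desc).view(b * c, 1, self.k, self.k)
        # Grouped conv applies each sample's own kernels to its own channels.
        out = F.conv2d(x.reshape(1, b * c, h, w), kernels,
                       padding=self.k // 2, groups=b * c)
        return out.view(b, c, h, w)
```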

You Only Demonstrate Once: Category-Level Manipulation from Single Visual Demonstration [article]

Bowen Wen, Wenzhao Lian, Kostas Bekris, Stefan Schaal
2022 arXiv   pre-print
For the latter part, a category-level behavior cloning (CatBC) method leverages motion tracking to perform closed-loop control.  ...  Extensive experiments demonstrate its efficacy in a range of challenging industrial tasks in high-precision assembly, which involve learning complex, long-horizon policies.  ...  This work utilizes 6-DoF object motion tracking for two purposes.  ... 
arXiv:2201.12716v2 fatcat:qbgchamnlvfvhhu6avtulp6yq4
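
The closed-loop control that CatBC builds on, replaying demonstrated gripper waypoints relative to a continuously re-estimated 6-DoF object pose, reduces to one frame transform per step. A sketch with hypothetical tracker/controller interfaces and 4x4 homogeneous transforms (numpy arrays):

```python
def replay_relative_to_object(tracker, controller, demo_grip_in_obj):
    """Closed-loop replay of a demonstrated trajectory (sketch).

    demo_grip_in_obj: list of 4x4 gripper poses expressed in the object
    frame, recorded once from a demonstration. `tracker` and `controller`
    are hypothetical interfaces, not the paper's API.
    """
    for T_grip_in_obj in demo_grip_in_obj:
        # Re-estimate the object pose every step so the replayed waypoint
        # follows the object even if it moves or was placed imprecisely.
        T_obj_in_world = tracker.current_pose()      # 4x4 world-frame pose
        T_goal = T_obj_in_world @ T_grip_in_obj      # world-frame gripper goal
        controller.move_to(T_goal)
```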

Graph-Structured Visual Imitation [article]

Maximilian Sieb, Zhou Xian, Audrey Huang, Oliver Kroemer, Katerina Fragkiadaki
2020 arXiv   pre-print
... multiple visual entity detectors for each demonstration, without human annotations or robot interactions.  ...  Our robotic agent is rewarded when its actions result in better matching of the relative spatial configurations of corresponding visual entities detected in its workspace and in the teacher's demonstration.  ...  We proposed encoding video frames in terms of visual entities and their spatial relationships, and we used this encoding to compute a perceptual cost function for visual imitation.  ... 
arXiv:1907.05518v2 fatcat:2rkz4bj3prgc5cgkhhjcrpfpmm
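
The reward described, better matching of relative spatial configurations between corresponding visual entities, amounts to comparing pairwise offsets across the two scenes. A rough rendering of that cost, not the paper's exact formulation:

```python
import numpy as np

def relative_config_cost(entities_robot, entities_demo):
    """Mismatch of pairwise relative spatial configurations.

    entities_robot, entities_demo: (N, 2) arrays of corresponding visual
    entity locations in the robot's workspace and in the demonstration.
    """
    cost = 0.0
    n = len(entities_robot)
    for i in range(n):
        for j in range(i + 1, n):
            rel_r = entities_robot[j] - entities_robot[i]  # robot-scene offset
            rel_d = entities_demo[j] - entities_demo[i]    # demo-scene offset
            cost += np.linalg.norm(rel_r - rel_d)
    return cost
```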

Table of Contents

2021 IEEE Robotics and Automation Letters  
... Xie 5573 · A Robust Optical Flow Tracking Method Based On Prediction Model for Visual-Inertial Odometry  ...  Ren 4923 · Learning Barrier Functions With Memory for Robust Safe Navigation, K. Long, C. Qian, J. Cortés, and N.  ... 
doi:10.1109/lra.2021.3095987 fatcat:uyk6vlvv45hifbzj4ruzdi6w54

Deep Drone Racing: Learning Agile Flight in Dynamic Environments [article]

Elia Kaufmann, Antonio Loquercio, Rene Ranftl, Alexey Dosovitskiy, Vladlen Koltun, Davide Scaramuzza
2018 arXiv   pre-print
We demonstrate our method in autonomous agile flight scenarios, in which a vision-based quadrotor traverses drone-racing tracks with possibly moving gates.  ...  The CNN directly maps raw images into a robust representation in the form of a waypoint and desired speed.  ...  The system combines the robust perceptual awareness of modern machine learning pipelines with the stability and speed of well-known control algorithms.  ... 
arXiv:1806.08548v3 fatcat:qolj5xr5svf6ta4wqvpz7xr55e
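
The CNN's output interface, a waypoint and a desired speed regressed directly from a raw image, is easy to sketch. The layer sizes below are illustrative, not the published architecture:

```python
import torch
import torch.nn as nn

class WaypointNet(nn.Module):
    """Map a raw image to a 2D waypoint (normalized image coordinates)
    and a desired speed, echoing the paper's output interface."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.waypoint = nn.Linear(64, 2)   # (x, y) in [-1, 1]
        self.speed = nn.Linear(64, 1)      # normalized desired speed

    def forward(self, img):                # img: (B, 3, H, W)
        z = self.backbone(img)
        return torch.tanh(self.waypoint(z)), torch.sigmoid(self.speed(z))
```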

Deep Learning for Visual Tracking: A Comprehensive Survey [article]

Seyed Mojtaba Marvasti-Zadeh, Li Cheng, Hossein Ghanei-Yakhdan, and Shohreh Kasaei
2019 arXiv   pre-print
It also extensively evaluates and analyzes the leading visual tracking methods.  ...  Visual target tracking is one of the most sought-after yet challenging research topics in computer vision.  ...  Kamal Nasrollahi (Visual Analysis of People Lab (VAP), Aalborg University) for his beneficial comments.  ... 
arXiv:1912.00535v1 fatcat:v5ikqi2cpbblhgtkiu6z6l5anq

Deep Drone Racing: From Simulation to Reality With Domain Randomization

Antonio Loquercio, Elia Kaufmann, Rene Ranftl, Alexey Dosovitskiy, Vladlen Koltun, Davide Scaramuzza
2019 IEEE Transactions on robotics  
A racing drone must traverse a track with possibly moving gates at high speed.  ...  We enable this functionality by combining the performance of a state-of-the-art planning and control system with the perceptual awareness of a convolutional neural network (CNN).  ...  As a result, the system combines the robust perceptual awareness of modern machine learning pipelines with the precision and speed of well-known control algorithms.  ... 
doi:10.1109/tro.2019.2942989 fatcat:u65own2qmvcmxkglwlmduyedfi

Learning to Jump from Pixels [article]

Gabriel B. Margolis, Tao Chen, Kartik Paigwar, Xiang Fu, Donghyun Kim, Sangbae Kim, Pulkit Agrawal
2021 arXiv   pre-print
The requirement for agility and terrain awareness in this setting reinforces the need for robust control.  ...  Such dynamic motion results in significant motion of onboard sensors, which introduces a new set of challenges for real-time visual processing.  ...  We are grateful to Elijah Stanger-Jones for his support in working with the robot hardware and electronics.  ... 
arXiv:2110.15344v1 fatcat:fr3wyu4udfhvjbqod6hog33l6u

Socially-Aware Multi-Agent Following with 2D Laser Scans via Deep Reinforcement Learning and Potential Field [article]

Yuxiang Cui, Xiaolong Huang, Yue Wang, Rong Xiong
2021 arXiv   pre-print
In this paper, we propose a multi-agent method for an arbitrary number of robots to follow a target in a socially-aware manner using only 2D laser scans.  ...  Specifically, with the help of laser scans in an obstacle-map representation, the learning-based policy can help the robots avoid collisions with both static obstacles and dynamic obstacles like pedestrians.  ...  Imitation-learning-based methods learn the tracking policy from expert demonstrations.  ... 
arXiv:2109.01874v1 fatcat:4nbqyrdqkzbtzid46bfy5aob5i
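
The potential-field half of this hybrid is classical: attraction toward the target plus repulsion from scan-derived obstacles inside an influence radius. A minimal sketch with illustrative gains (the learned-policy side is not shown):

```python
import numpy as np

def potential_field_velocity(robot, target, obstacles,
                             k_att=1.0, k_rep=0.5, d0=1.0):
    """Attractive/repulsive potential field for target following.

    robot, target: (2,) positions; obstacles: (M, 2) positions (e.g. points
    extracted from 2D laser scans); d0 is the repulsion influence radius.
    """
    v = k_att * (target - robot)                  # attraction toward target
    for obs in obstacles:
        d = np.linalg.norm(robot - obs)
        if 1e-6 < d < d0:                         # repulsion inside radius d0
            v += k_rep * (1.0 / d - 1.0 / d0) * (robot - obs) / d**3
    return v
```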

2019 Index IEEE Robotics and Automation Letters Vol. 4

2019 IEEE Robotics and Automation Letters  
Guo, Y., +, LRA July 2019, 2801-2806: Deep Visual MPC-Policy Learning for Navigation.  ...  +, LRA Oct. 2019, 4094-4101: Provably Robust Learning-Based Approach for High-Accuracy Tracking Control of Lagrangian Systems.  ...  Permanent magnets: Adaptive Dynamic Control for Magnetically Actuated Medical Robots.  ... 
doi:10.1109/lra.2019.2955867 fatcat:ckastwefh5chhamsravandtnx4

Catch Carry: Reusable Neural Controllers for Vision-Guided Whole-Body Tasks [article]

Josh Merel, Saran Tunyasuvunakool, Arun Ahuja, Yuval Tassa, Leonard Hasenclever, Vu Pham, Tom Erez, Greg Wayne, Nicolas Heess
2020 arXiv   pre-print
We demonstrate the utility of our approach for several tasks, including goal-conditioned box carrying and ball catching, and we characterize its behavioral robustness.  ...  We develop an integrated neural-network-based approach consisting of a motor primitive module, human demonstrations, and an instructed reinforcement learning regime with curricula and task variations.  ...  studio visit, and Audiomotion Studios for services related to motion capture collection and cleanup.  ... 
arXiv:1911.06636v2 fatcat:zjtpjueewbafjgqi5grgr6dcra

Real-time Perception meets Reactive Motion Generation [article]

Daniel Kappler, Jim Mainprice, Vincent Berenz, and Jeannette Bohg (Autonomous Motion Department, MPI for Intelligent Systems, Tübingen, Germany; CLMC Lab, University of Southern California, Los Angeles, CA, USA; Lula Robotics Inc., Seattle, WA, USA; Dept. of Computer Science & Engineering, Univ. of Washington) (+1 others)
2017 arXiv   pre-print
We also report on the lessons learned for system building.  ...  All architectures rely on the same components for real-time perception and reactive motion generation to allow a quantitative evaluation.  ...  Therefore, we opted for model-based visual tracking and querying SDFs only for a small subset of points on the robot.  ... 
arXiv:1703.03512v3 fatcat:y3y25efvw5fsjeqw6j3xh7glve
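
The last fragment, querying SDFs only for a small subset of points on the robot, is a cheap collision proxy: look up a precomputed signed-distance grid at a few body points. A minimal nearest-voxel version with hypothetical grid parameters:

```python
import numpy as np

def sdf_at(points, sdf_grid, origin, voxel_size):
    """Nearest-voxel lookup of a signed distance field at robot points.

    points: (N, 3) world coordinates; sdf_grid: (X, Y, Z) array of signed
    distances; origin (3,) and voxel_size (scalar) place the grid in the
    world. Negative return values indicate penetration.
    """
    idx = np.round((points - origin) / voxel_size).astype(int)
    idx = np.clip(idx, 0, np.array(sdf_grid.shape) - 1)  # stay inside grid
    return sdf_grid[idx[:, 0], idx[:, 1], idx[:, 2]]
```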
Showing results 1 — 15 out of 11,741 results