
EVO: A Geometric Approach to Event-Based 6-DOF Parallel Tracking and Mapping in Real Time

Henri Rebecq, Timo Horstschaefer, Guillermo Gallego, Davide Scaramuzza
2017 IEEE Robotics and Automation Letters  
The implementation runs in real time on a standard CPU and outputs up to several hundred pose estimates per second.  ...  We believe that this work makes significant progress in SLAM by unlocking the potential of event cameras.  ...  However, contrary to [12], our geometric approach is computationally efficient (it runs in real time on the CPU for moderate motions) and does not need to recover image intensity to estimate depth  ...
doi:10.1109/lra.2016.2645143 dblp:journals/ral/RebecqHGS17 fatcat:ezlx5eejdngqhbatygoyedmeha

Robust camera localisation with depth reconstruction for bronchoscopic navigation

Mali Shen, Stamatia Giannarou, Guang-Zhong Yang
2015 International Journal of Computer Assisted Radiology and Surgery  
It outperforms existing vision-based registration methods, resulting in a smaller pose estimation error of the bronchoscopic camera.  ...  The pose of the bronchoscopic camera is estimated by maximising the similarity between the depth recovered from a video image and that captured from a virtual camera projection of the CT model.  ...  Fani Deligianni, who provided details of the pq-space based registration approach originally proposed in [7] and the data used for validation in that publication.  ...
doi:10.1007/s11548-015-1197-y pmid:25903774 fatcat:fgyq544khngntnmu64azx4zt4u

Robust Computer Vision Techniques for High-Quality 3D Modeling

Joon-Young Lee, Jiyoung Jung, Yunsu Bok, Jaesik Park, Dong-Geol Choi, Yudeog Han, In So Kweon
2013 2nd IAPR Asian Conference on Pattern Recognition
The second method, performing shape-from-shading with a Kinect sensor, estimates the shape of an object under uncalibrated natural illumination.  ...  Subsequently, we summarize a calibration algorithm for a time-of-flight (ToF) sensor and camera fusion system using a 2.5D pattern.  ...  Summary: In conclusion, we have presented an extrinsic calibration method to estimate the pose of a 3D ToF camera with respect to a color camera.  ...
doi:10.1109/acpr.2013.215 dblp:conf/acpr/LeeJBPCHK13 fatcat:d7kzhi6emfc75pft7ccgwsno4u

Event-based Stereo Visual Odometry [article]

Yi Zhou, Guillermo Gallego, Shaojie Shen
2021 arXiv   pre-print
Specifically, the mapping module builds a semi-dense 3D map of the scene by fusing depth estimates from multiple local viewpoints (obtained by spatio-temporal consistency) in a probabilistic fashion.  ...  Our system follows a parallel tracking-and-mapping approach, where novel solutions to each subproblem (3D reconstruction and camera pose estimation) are developed with two objectives in mind: being principled  ...  Siqi Liu for the help in data collection, and Dr. Alex Zhu for providing the CopNet baseline [21], [63], and assistance in using the dataset [56].  ...
arXiv:2007.15548v2 fatcat:xc5z5zzosvfubabzltwdkm4s4a

Semi-direct EKF-based monocular visual-inertial odometry

Petri Tanskanen, Tobias Naegeli, Marc Pollefeys, Otmar Hilliges
2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
are used simultaneously during camera pose estimation.  ...  rotation, and not depending on the presence of corner-like features in the scene.  ...  The algorithm optimizes the camera pose and the patch depth by minimizing the intensity residual.  ...
doi:10.1109/iros.2015.7354242 dblp:conf/iros/TanskanenNPH15 fatcat:gfltsfpvhfczhlaazr22tdca44

EMVS: Event-Based Multi-View Stereo—3D Reconstruction with an Event Camera in Real-Time

Henri Rebecq, Guillermo Gallego, Elias Mueggler, Davide Scaramuzza
2017 International Journal of Computer Vision  
Despite its simplicity (it can be implemented in a few lines of code), our algorithm is able to produce accurate, semi-dense depth maps, without requiring any explicit data association or intensity estimation  ...  Acknowledgements: This research was supported by the National Centre of Competence in Research Robotics (NCCR) and the UZH Forschungskredit.  ...
doi:10.1007/s11263-017-1050-6 fatcat:vycogvcggngyhjzmos4hvmvxva
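The "few lines of code" in the snippet above refer to the space-sweep idea at the heart of EMVS: every event back-projects to a viewing ray, rays are accumulated into a discretised volume of candidate depths (a disparity space image), and depth is read off where many rays agree. Below is a minimal, unoptimised sketch of that idea, assuming a pinhole camera with intrinsics K and, for each event, a known pose (R, t) mapping the event camera's frame into a chosen reference frame; all function and variable names are illustrative, not taken from the authors' code.

```python
import numpy as np

def emvs_space_sweep(events, poses, K, depth_planes, grid_hw):
    """Accumulate event viewing rays into a ray-density volume (DSI) defined
    over depth planes of a reference view; per-pixel depth is taken where the
    ray count peaks. Brute-force sketch: O(#events x #planes), no smoothing,
    no adaptive thresholding of the confidence map."""
    H, W = grid_hw
    K_inv = np.linalg.inv(K)
    dsi = np.zeros((len(depth_planes), H, W))
    for (x, y), (R, t) in zip(events, poses):
        d = R @ (K_inv @ np.array([x, y, 1.0]))   # event ray direction in the reference frame
        if abs(d[2]) < 1e-9:
            continue
        for i, z in enumerate(depth_planes):
            lam = (z - t[2]) / d[2]               # intersect the ray with the plane Z = z
            if lam <= 0:
                continue
            u, v, w = K @ (t + lam * d)           # project the intersection into the reference view
            u, v = int(round(u / w)), int(round(v / w))
            if 0 <= u < W and 0 <= v < H:
                dsi[i, v, u] += 1.0               # one vote per ray crossing this cell
    best = dsi.argmax(axis=0)                     # index of the most-voted depth plane per pixel
    conf = dsi.max(axis=0)                        # vote count, usable as a confidence map
    return np.asarray(depth_planes)[best], conf
```

Thresholding the returned confidence keeps only pixels where enough rays intersect, which is what makes the resulting depth map semi-dense (concentrated at edges, where events are generated).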

Event-Based, 6-DOF Camera Tracking from Photometric Depth Maps

Guillermo Gallego, Jon E.A. Lund, Elias Mueggler, Henri Rebecq, Tobi Delbruck, Davide Scaramuzza
2018 IEEE Transactions on Pattern Analysis and Machine Intelligence  
With these applications in mind, this paper tackles the problem of accurate, low-latency tracking of an event camera from an existing photometric depth map (i.e., intensity plus depth information) built  ...  Our approach tracks the 6-DOF pose of the event camera upon the arrival of each event, thus virtually eliminating latency.  ...  The pose tracker is based on edge-map alignment and the scene depth is estimated without intensity reconstruction, thus allowing the system to run in real-time on the CPU.  ... 
doi:10.1109/tpami.2017.2769655 pmid:29990121 fatcat:i3jmrunyanefrjvintm2ckgfz4

NID-SLAM: Robust Monocular SLAM Using Normalised Information Distance

Geoffrey Pascoe, Will Maddern, Michael Tanner, Pedro Pinies, Paul Newman
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
a vehicle-mounted camera.  ...  In contrast to current state-of-the-art direct methods based on photometric error minimisation, our information-theoretic NID metric provides robustness to appearance variation due to lighting, weather  ...  In contrast to existing feature-based and direct photometric methods, NID-SLAM uses a global appearance metric to solve for camera pose relative to a key-frame depth map even in the presence of significant  ... 
doi:10.1109/cvpr.2017.158 dblp:conf/cvpr/PascoeMTP017 fatcat:omw32nxg5vgghaoc5flca54lli
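For reference, the normalised information distance in the title is the standard entropy-normalised variation of information between the discretised appearance distributions of the reference key-frame and the live image; in generic notation (the paper's exact histogram binning and interpolation weighting are omitted):

```latex
\mathrm{NID}(I_r, I_\ell)
  = \frac{H(I_r, I_\ell) - I(I_r; I_\ell)}{H(I_r, I_\ell)}
  = \frac{2\,H(I_r, I_\ell) - H(I_r) - H(I_\ell)}{H(I_r, I_\ell)},
\qquad
T^{\ast} = \arg\min_{T}\ \mathrm{NID}\bigl(I_r,\ I_\ell(w(T))\bigr),
```

where H denotes (joint) entropy, I mutual information, and w(T) warps reference pixels into the live frame via the key-frame depth map and the candidate pose T. NID lies in [0, 1], and because it depends only on the co-occurrence statistics of intensities rather than their differences, it is far less sensitive to lighting and appearance change than a photometric residual.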

Egocentric Real-time Workspace Monitoring using an RGB-D camera

Dima Damen, Andrew Gee, Walterio Mayol-Cuevas, Andrew Calway
2012 IEEE/RSJ International Conference on Intelligent Robots and Systems
A prototype on-body system developed in the context of work-flow analysis for industrial manipulation and assembly tasks is described.  ...  The approach is egocentric, facilitating full flexibility, and operates in real-time, providing object detection and recognition, and 3D trajectory estimation whilst the user undertakes tasks in the workspace  ...  estimated camera pose.  ... 
doi:10.1109/iros.2012.6385829 dblp:conf/iros/DamenGMC12 fatcat:badaqizpfrfuzdg3lt6fwftzca

Towards Urban 3D Reconstruction from Video

A. Akbarzadeh, J.-M. Frahm, P. Mordohai, B. Clipp, C. Engels, D. Gallup, P. Merrell, M. Phelps, S. Sinha, B. Talton, L. Wang, Q. Yang (+6 others)
2006 Third International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT'06)  
Besides high quality in terms of both geometry and appearance, we aim at real-time performance.  ...  We present the main considerations in designing the system and the steps of the processing pipeline. We show results on real video sequences captured by our system.  ...  Acknowledgement: This work is partially supported by DARPA under the UrbanScape project, which is led by the Geo-Spatial Technologies Information Division of SAIC.  ...
doi:10.1109/3dpvt.2006.141 dblp:conf/3dpvt/AkbarzadehFMCEGMPSTWYSYWTNP06 fatcat:mcwdxl47rvfbzerp3zv43afexq

REMODE: Probabilistic, monocular dense reconstruction in real time

Matia Pizzoli, Christian Forster, Davide Scaramuzza
2014 IEEE International Conference on Robotics and Automation (ICRA)
In this paper, we solve the problem of estimating dense and accurate depth maps from a single moving camera.  ...  We demonstrate that our method outperforms state-of-the-art techniques in terms of accuracy, while exhibiting high efficiency in memory usage and computing power.  ...  Camera pose estimation: At every time step k, the pose of the camera T_{k,r} in the depth-map reference frame r is computed by a visual odometry routine based on recent advances in semi-direct  ...
doi:10.1109/icra.2014.6907233 dblp:conf/icra/PizzoliFS14 fatcat:7vjx2dhbcnemtkccw4q5xtc4du
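The per-pixel depth estimation that accompanies this pose estimation is a recursive Bayesian filter. The sketch below shows only the Gaussian part of such a filter, fusing each new triangulated measurement by a product of Gaussians; it is a deliberate simplification of the kind of filter REMODE builds on (the paper's full model additionally tracks an inlier probability for outlier rejection, which is omitted here), and all names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class DepthFilter:
    mu: float       # current inverse-depth estimate
    sigma2: float   # variance of the estimate

    def update(self, z: float, tau2: float) -> None:
        """Fuse one triangulated inverse-depth measurement z with variance tau2
        (product of two Gaussians). Omits any inlier/outlier modelling."""
        s2 = 1.0 / (1.0 / self.sigma2 + 1.0 / tau2)
        self.mu = s2 * (self.mu / self.sigma2 + z / tau2)
        self.sigma2 = s2

    def converged(self, thresh: float = 1e-4) -> bool:
        return self.sigma2 < thresh

# Illustrative usage: seed with a vague prior, fuse a few noisy measurements.
f = DepthFilter(mu=0.5, sigma2=1.0)
for z in (0.42, 0.45, 0.44):
    f.update(z, tau2=0.01)
print(f.mu, f.sigma2, f.converged())
```

Pixels whose variance never shrinks below the threshold are discarded, which is how such a filter trades completeness for accuracy in the final depth map.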

CNN-SLAM: Real-Time Dense Monocular SLAM with Learned Depth Prediction

Keisuke Tateno, Federico Tombari, Iro Laina, Nassir Navab
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
We demonstrate the use of depth prediction for estimating the absolute scale of the reconstruction, hence overcoming one of the major limitations of monocular SLAM.  ...  Given the recent advances in depth prediction from Convolutional Neural Networks (CNNs), this paper investigates how predicted depth maps from a deep neural network can be deployed for accurate and dense  ...  Every stage of the framework is detailed in the following subsections. Camera pose estimation: The camera pose estimation is inspired by the key-frame approach in [4].  ...
doi:10.1109/cvpr.2017.695 dblp:conf/cvpr/TatenoTLN17 fatcat:szx7xilnzjeovlhgbuarzlig24
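One simple way to see how a metric CNN depth prediction can pin down the absolute scale of an otherwise up-to-scale monocular reconstruction is a robust ratio between predicted and estimated depths over a key-frame. This is an illustrative recipe only, not the specific key-frame initialisation and small-baseline refinement scheme used in CNN-SLAM; the function name and arguments are assumptions.

```python
import numpy as np

def absolute_scale(depth_slam: np.ndarray, depth_cnn: np.ndarray) -> float:
    """Median ratio of CNN-predicted metric depth to up-to-scale SLAM depth,
    computed over pixels where both are valid; the median is robust to
    prediction outliers. Multiplying the SLAM map and camera translations by
    this factor expresses the reconstruction in metres."""
    valid = (depth_slam > 0) & (depth_cnn > 0) \
            & np.isfinite(depth_slam) & np.isfinite(depth_cnn)
    return float(np.median(depth_cnn[valid] / depth_slam[valid]))
```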

CNN-SLAM: Real-time dense monocular SLAM with learned depth prediction [article]

Keisuke Tateno, Federico Tombari, Iro Laina, Nassir Navab
2017 arXiv   pre-print
We demonstrate the use of depth prediction for estimating the absolute scale of the reconstruction, hence overcoming one of the major limitations of monocular SLAM.  ...  Given the recent advances in depth prediction from Convolutional Neural Networks (CNNs), this paper investigates how predicted depth maps from a deep neural network can be deployed for accurate and dense  ...  Every stage of the framework is detailed in the following subsections. Camera pose estimation: The camera pose estimation is inspired by the key-frame approach in [4].  ...
arXiv:1704.03489v1 fatcat:7oiapw4zxbbmxlj4tjolqwxzzm

DI-Fusion: Online Implicit 3D Reconstruction with Deep Priors [article]

Jiahui Huang, Shi-Sheng Huang, Haoxuan Song, Shi-Min Hu
2021 arXiv   pre-print
With such deep priors, we are able to perform online implicit 3D reconstruction achieving state-of-the-art camera trajectory estimation accuracy and mapping quality, while achieving better storage efficiency  ...  In this paper, we present DI-Fusion (Deep Implicit Fusion), based on a novel 3D representation, i.e.  ...  Intensity Term. The intensity term used in our camera tracking (Sec. 3.2) also influences the camera pose estimation accuracy.  ... 
arXiv:2012.05551v2 fatcat:ec52lm6e4bhsheoiku34i6mrzq

Keyframe-based monocular SLAM: design, survey, and future directions

Georges Younes, Daniel Asmar, Elie Shammas, John Zelek
2017 Robotics and Autonomous Systems  
Although filter-based monocular SLAM systems were common for some time, the more efficient keyframe-based solutions are becoming the de facto methodology for building a monocular SLAM system.  ...  Second, it presents a survey that covers the various keyframe-based monocular SLAM systems in the literature, detailing the components of their implementation and critically assessing the specific strategies  ...  Pose optimization: Direct and feature-based methods estimate the camera pose by minimizing a measure of error between frames; direct methods measure the photometric error, modeled as the intensity difference  ...
doi:10.1016/j.robot.2017.09.010 fatcat:b6ilgshsinckjdnlsozmjppkly
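In generic notation, the distinction drawn in the last excerpt can be written as follows: direct methods minimise a photometric (intensity) residual over reference-frame pixels, while feature-based methods minimise a geometric reprojection residual over matched landmarks (individual systems differ in robust weighting and the exact warp used):

```latex
E_{\text{photo}}(T) = \sum_{\mathbf{u} \in \Omega}
  \bigl( I_k\!\bigl(\pi\bigl(T\,\pi^{-1}(\mathbf{u}, d_\mathbf{u})\bigr)\bigr) - I_r(\mathbf{u}) \bigr)^2,
\qquad
E_{\text{reproj}}(T) = \sum_{i}
  \bigl\lVert \mathbf{u}_i - \pi\bigl(T\,\mathbf{X}_i\bigr) \bigr\rVert^2,
```

where \pi projects a 3D point to pixel coordinates, \pi^{-1}(\mathbf{u}, d_\mathbf{u}) back-projects pixel \mathbf{u} with depth d_\mathbf{u}, I_r and I_k are the reference and current images, and \mathbf{X}_i are landmarks matched to observations \mathbf{u}_i; in both cases the pose T is found by minimising the chosen residual.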