
Fusing Time-of-Flight Depth and Color for Real-Time Segmentation and Tracking [chapter]

Amit Bleiweiss, Michael Werman
2009 Lecture Notes in Computer Science  
We present an improved framework for real-time segmentation and tracking by fusing depth and RGB color data.  ...  We are able to solve common problems seen in tracking and segmentation of RGB images, such as occlusions, fast motion, and objects of similar color.  ...  The authors would like to thank Omek Interactive for providing the sequence data.  ... 
doi:10.1007/978-3-642-03778-8_5 fatcat:zrbnlcqqbbdyzcexavltrxpfje
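
To make the fusion idea concrete: a common way to combine depth with RGB is to treat normalized depth as an extra feature channel per pixel and cluster in the joint space. The sketch below uses k-means (scikit-learn) as a generic stand-in for the paper's segmentation backend; the depth weight w_d is an illustrative assumption, not a value from the paper.

    # Minimal sketch: fuse RGB and depth into one feature vector per pixel and
    # cluster. K-means is a stand-in backend; the depth weight w_d is assumed.
    import numpy as np
    from sklearn.cluster import KMeans

    def segment_rgbd(rgb, depth, n_segments=4, w_d=2.0):
        """rgb: (H, W, 3) uint8; depth: (H, W) float, meters."""
        h, w, _ = rgb.shape
        color = rgb.reshape(-1, 3).astype(np.float32) / 255.0
        d = depth.reshape(-1, 1).astype(np.float32)
        d = (d - d.min()) / (np.ptp(d) + 1e-6)        # normalize depth to [0, 1]
        feats = np.hstack([color, w_d * d])           # weight depth against color
        labels = KMeans(n_clusters=n_segments, n_init=4).fit_predict(feats)
        return labels.reshape(h, w)

Pixels that share a color but differ in depth fall into different clusters, which is how depth helps with the similar-color failure case the abstract mentions.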

FAST-Dynamic-Vision: Detection and Tracking Dynamic Objects with Event and Depth Sensing [article]

Botao He, Haojia Li, Siyuan Wu, Dong Wang, Zhiwei Zhang, Qianli Dong, Chao Xu, Fei Gao
2021 arXiv pre-print
Finally, we propose an optimization-based approach that asynchronously fuses event and depth cameras for trajectory prediction.  ...  However, dodging fast-moving objects in flight remains a challenge, limiting the further application of unmanned aerial vehicles (UAVs).  ...  Meanwhile, a real-time detection and tracking algorithm is also indispensable.  ...
arXiv:2103.05903v2 fatcat:bx4gx76ntrhczdhmvceh3336oa
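
The abstract's trajectory prediction can be illustrated with a much simpler stand-in: fit a constant-acceleration (ballistic) model to timestamped 3D detections by least squares and extrapolate. This is a hypothetical sketch of the general idea, not the paper's asynchronous event/depth optimization.

    # Sketch: fit p(t) = a*t^2 + b*t + c per axis to 3D observations, then
    # extrapolate. Illustrative stand-in, not the paper's formulation.
    import numpy as np

    def fit_ballistic(ts, points):
        """ts: (N,) seconds; points: (N, 3) positions. Returns (3, 3) coeffs."""
        A = np.stack([ts**2, ts, np.ones_like(ts)], axis=1)  # design matrix
        coeffs, *_ = np.linalg.lstsq(A, points, rcond=None)  # quadratic per axis
        return coeffs

    def predict(coeffs, t):
        return np.array([t**2, t, 1.0]) @ coeffs             # predicted position

    ts = np.linspace(0.0, 0.3, 10)                           # synthetic throw
    pts = np.stack([2 - 3*ts, 0.5*ts, 1 + 4*ts - 4.9*ts**2], axis=1)
    print(predict(fit_ballistic(ts, pts), 0.5))              # position at t = 0.5 s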

Real-Time Foreground Segmentation with Kinect Sensor [chapter]

Luigi Cinque, Alessandro Danani, Piercarlo Dondi, Luca Lombardi
2015 Lecture Notes in Computer Science  
The best known of these devices is certainly the Microsoft Kinect, which provides a color image and a depth map of the scene at the same time.  ...  This paper presents an alternative and more general solution for foreground segmentation and a comparison with the standard background subtraction algorithm of the Kinect.  ...  input color/depth sequence and not for its real-time capability.  ...
doi:10.1007/978-3-319-23234-8_6 fatcat:lsypxe2p6bhkpc5kfqmusnmgcm

Co-fusion: Real-time segmentation, tracking and fusion of multiple objects

Martin Runz, Lourdes Agapito
2017 2017 IEEE International Conference on Robotics and Automation (ICRA)  
...  tracking and reconstructing their 3D shape in real time.  ...  In contrast, we enable the robot to maintain 3D models for each of the segmented objects and to improve them over time through fusion.  ...  This work has been supported by the SecondHands project, funded by the EU Horizon 2020 Research and Innovation programme under grant agreement No 643950.  ...
doi:10.1109/icra.2017.7989518 dblp:conf/icra/RunzA17 fatcat:tzlfylrf5baxjnse77fkokjmiy

Hand Tracking based on Hierarchical Clustering of Range Data [article]

Roberto Cespi, Andreas Kolb, Marvin Lindner
2011 arXiv pre-print
In this paper we present a real-time hand segmentation and tracking algorithm using Time-of-Flight (ToF) range cameras and intensity data.  ...  Fast and robust hand segmentation and tracking is an essential basis for gesture recognition and thus an important component for contact-less human-computer interaction (HCI).  ...  Gesture-based real-time human-computer interaction requires a fast and robust segmentation of the human hands [13]. Classical approaches are based on 2D intensity or color images.  ...
arXiv:1110.5450v1 fatcat:gnkvvfszdrdnzlndqeuisxrqpm
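
The title's hierarchical clustering step can be sketched directly with SciPy: agglomerate the valid range samples in 3D and take the nearest sufficiently large cluster as the hand candidate. The 5 cm linkage cutoff, the 50-point minimum, and the nearest-cluster heuristic are assumptions for illustration; single-link clustering on raw points is O(N^2), so this only suits subsampled data.

    # Sketch: hierarchical clustering of ToF range data; the nearest big
    # cluster is taken as the hand. Cutoff and size threshold are assumed.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def hand_cluster(points, cutoff=0.05, min_pts=50):
        """points: (N, 3) valid range measurements in meters."""
        Z = linkage(points, method='single')          # single-link agglomeration
        labels = fcluster(Z, t=cutoff, criterion='distance')
        best, best_z = None, np.inf
        for lbl in np.unique(labels):
            cluster = points[labels == lbl]
            if len(cluster) < min_pts:                # reject small noise blobs
                continue
            z = cluster[:, 2].mean()                  # mean distance to camera
            if z < best_z:                            # hands are usually nearest
                best, best_z = cluster, z
        return best                                   # (M, 3) candidate or None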

Real-time Scalable Dense Surfel Mapping [article]

Kaixuan Wang, Fei Gao, Shaojie Shen
2019 arXiv pre-print
The performance of urban-scale and room-scale reconstruction is demonstrated using the KITTI dataset and autonomous aggressive flights, respectively.  ...  The code is available for the benefit of the community.  ...  Our system uses state-of-the-art sparse visual SLAM systems to track camera poses and fuses intensity images and depth images into a globally consistent model.  ...
arXiv:1909.04250v1 fatcat:y6icd3zgi5agjd27hk5etyq6pq
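
For readers unfamiliar with surfel maps: each map element is a small oriented disk whose attributes are refined by confidence-weighted averaging as new depth measurements arrive. The field layout and weight cap below are illustrative assumptions, not the paper's exact scheme.

    # Sketch: confidence-weighted surfel fusion update.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Surfel:
        position: np.ndarray   # (3,) world position
        normal: np.ndarray     # (3,) unit normal
        radius: float          # extent of the surface patch
        weight: float          # accumulated measurement confidence

    def fuse(s: Surfel, pos, nrm, w=1.0):
        total = s.weight + w
        s.position = (s.weight * s.position + w * pos) / total
        n = s.weight * s.normal + w * nrm
        s.normal = n / np.linalg.norm(n)         # re-normalize averaged normal
        s.weight = min(total, 100.0)             # cap so the map stays adaptive
        return s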

Free LSD: Prior-Free Visual Landing Site Detection for Autonomous Planes [article]

Timo Hinzmann, Thomas Stastny, Cesar Cadena, Roland Siegwart, Igor Gilitschenski
2018 arXiv pre-print
The proposed framework has been successfully tested on photo-realistic synthetic datasets and in challenging real-world environments.  ...  Full autonomy for fixed-wing unmanned aerial vehicles (UAVs) requires the capability to autonomously detect potential landing sites in unknown and unstructured terrain, allowing for self-governed mission  ...  The authors wish to thank Felix Renaut for an initial implementation of the presented frontend, Lucas Teixeira (Vision for Robotics Lab, ETH Zurich) for sharing scripts that bridge the gap between Blender  ... 
arXiv:1802.09043v1 fatcat:4wzih3txjrgazdnweuqzgv5ybu
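
A bare-bones version of landing site detection on an elevation grid: fit a plane to a candidate patch and threshold its slope and residual roughness. The thresholds and patch parametrization here are assumptions, not values from the paper.

    # Sketch: accept a landing patch if the fitted plane is nearly level and
    # the residuals are small. Thresholds are illustrative assumptions.
    import numpy as np

    def is_landable(elev, cell=0.5, max_slope_deg=5.0, max_rough=0.10):
        """elev: (H, W) elevation patch, meters; cell: grid spacing, meters."""
        h, w = elev.shape
        ys, xs = np.mgrid[0:h, 0:w]
        A = np.stack([xs.ravel() * cell, ys.ravel() * cell,
                      np.ones(h * w)], axis=1)           # plane z = ax + by + c
        (a, b, c), *_ = np.linalg.lstsq(A, elev.ravel(), rcond=None)
        slope = np.degrees(np.arctan(np.hypot(a, b)))    # plane inclination
        rough = np.std(elev.ravel() - A @ np.array([a, b, c]))
        return slope <= max_slope_deg and rough <= max_rough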

Real-time foreground segmentation via range and color imaging

Ryan Crabb, Colin Tracey, Akshaya Puranik, James Davis
2008 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops  
This paper describes a real-time method for foreground/background segmentation of a color video sequence based primarily on the range data of a time-of-flight sensor.  ...  This method uses depth information from a ToF sensor paired with a high-resolution color video camera to efficiently segment foreground from background in a two-step process.  ...  In this paper, we present a method of real-time background substitution based primarily on depth, for use with a time-of-flight depth sensor and paired color video camera, which can be performed against  ...
doi:10.1109/cvprw.2008.4563170 dblp:conf/cvpr/CrabbTPD08 fatcat:qhu6tkqac5hc3c7pqqlhdhreke
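
The two-step process described above can be sketched as: depth thresholds give confident foreground and background, then the remaining ambiguous pixels are assigned by color similarity to the two regions. The thresholds and the mean-color test are assumptions; a real system would use histograms or a matting step.

    # Sketch: depth first, color second. Assumes both confident regions are
    # non-empty; threshold values are illustrative.
    import numpy as np

    def segment(depth, rgb, near=1.2, far=1.6):
        """depth: (H, W) meters; rgb: (H, W, 3) float in [0, 1]."""
        fg = depth < near                       # confidently foreground
        bg = depth > far                        # confidently background
        unknown = ~fg & ~bg                     # ambiguous or noisy depth
        mu_fg = rgb[fg].mean(axis=0)            # mean foreground color
        mu_bg = rgb[bg].mean(axis=0)
        d_fg = np.linalg.norm(rgb - mu_fg, axis=2)
        d_bg = np.linalg.norm(rgb - mu_bg, axis=2)
        return fg | (unknown & (d_fg < d_bg))   # resolve ambiguity by color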

Live User-Guided Intrinsic Video for Static Scenes

Abhimitra Meka, Gereon Fox, Michael Zollhofer, Christian Richardt, Christian Theobalt
2017 IEEE Transactions on Visualization and Computer Graphics  
We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor.  ...  Our approach runs at real-time frame rates, and we apply it to applications such as relighting, recoloring and material editing.  ...  ACKNOWLEDGMENTS We thank the anonymous reviewers for their helpful feedback. This work was supported by the ERC Starting Grant CapReal (335545).  ...
doi:10.1109/tvcg.2017.2734425 pmid:28809688 fatcat:2ieprbfyt5cqdaq57ywy5ngrc4
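
Intrinsic decomposition factors an image into reflectance and shading, I = R * S. As a point of reference (emphatically not the paper's user-guided RGB-D method), the classic Retinex-style baseline treats a blurred log-luminance as shading:

    # Sketch: crude Retinex-style split I = R * S with smoothed log-luminance
    # as shading. Baseline for intuition only; sigma is an assumed value.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def intrinsic_split(rgb, sigma=15.0):
        """rgb: (H, W, 3) float in (0, 1]. Returns (reflectance, shading)."""
        lum = rgb.mean(axis=2)
        log_s = gaussian_filter(np.log(np.maximum(lum, 1e-4)), sigma)
        shading = np.exp(log_s)                     # smooth, slowly varying
        reflectance = rgb / np.maximum(shading, 1e-4)[..., None]
        return reflectance, shading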

Time-of-Flight Sensors in Computer Graphics [article]

Andreas Kolb, Erhardt Barth, Reinhard Koch, Rasmus Larsen
2009 Eurographics State of the Art Reports  
A lower-priced, fast, and robust alternative for distance measurements is the Time-of-Flight (ToF) camera.  ...  The upcoming generation of ToF sensors, however, will be even more powerful and will have the potential to become "ubiquitous real-time geometry devices" for gaming, web-conferencing, and numerous other  ...  The depth is upscaled and fused as in [YYDN07, ZWYD08], and a layered depth and color map is constructed for each image frame.  ...
doi:10.2312/egst.20091064 fatcat:kqiyxkza3zdt7bvslkgel4zmr4
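
The upscale-and-fuse step mentioned in the last snippet is typically some variant of joint bilateral upsampling: low-resolution ToF depth is interpolated with weights that respect edges in the high-resolution color image. Below is a plain-loop sketch of that idea; the window radius and sigmas are assumed values, and the implementation favors readability over speed.

    # Sketch: joint bilateral upsampling of low-res depth guided by hi-res color.
    import numpy as np

    def jbu(depth_lo, rgb_hi, s, r=2, sig_s=1.0, sig_c=0.1):
        """depth_lo: (h, w); rgb_hi: (h*s, w*s, 3) in [0, 1]; s: integer scale."""
        H, W = depth_lo.shape[0] * s, depth_lo.shape[1] * s
        out = np.zeros((H, W))
        for y in range(H):
            for x in range(W):
                cy, cx = y // s, x // s               # coarse-grid coordinates
                acc = wsum = 0.0
                for v in range(max(cy - r, 0), min(cy + r + 1, depth_lo.shape[0])):
                    for u in range(max(cx - r, 0), min(cx + r + 1, depth_lo.shape[1])):
                        ws = np.exp(-((v - cy)**2 + (u - cx)**2) / (2 * sig_s**2))
                        dc = rgb_hi[y, x] - rgb_hi[v * s, u * s]  # guidance diff
                        wc = np.exp(-np.dot(dc, dc) / (2 * sig_c**2))
                        acc += ws * wc * depth_lo[v, u]
                        wsum += ws * wc
                out[y, x] = acc / wsum                # edge-aware weighted mean
        return out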

Improving Video Segmentation by Fusing Depth Cues and the Visual Background Extractor (ViBe) Algorithm

Xiaoqin Zhou, Xiaofeng Liu, Aimin Jiang, Bin Yan, Chenguang Yang
2017 Sensors  
In this paper, we propose a new fusion method that combines depth and color information for foreground segmentation based on an advanced color-based algorithm.  ...  Background subtraction algorithms can be improved by fusing color and depth cues, thereby allowing many issues encountered in classical color segmentation to be solved.  ...  It is inefficient and time-consuming, especially for real-time tracking.  ...
doi:10.3390/s17051177 pmid:28531134 pmcid:PMC5470922 fatcat:3rw3kpuh3bbf7jttmx37h2j5hm
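
The fusion the paper describes can be approximated with off-the-shelf parts: run one background model on the color stream and one on the depth stream, then combine the masks. ViBe itself is not bundled with OpenCV, so MOG2 stands in for both subtractors below, and the OR-combination is an assumption; depth rescues color camouflage while color covers depth holes.

    # Sketch: dual background subtraction on color and depth, masks OR-ed.
    import cv2

    bg_color = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    bg_depth = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

    def foreground_mask(bgr_frame, depth_8u):
        """bgr_frame: color image; depth_8u: depth rescaled to 8-bit."""
        m_color = bg_color.apply(bgr_frame)
        m_color = cv2.threshold(m_color, 200, 255,
                                cv2.THRESH_BINARY)[1]    # drop shadow pixels
        m_depth = bg_depth.apply(depth_8u)
        fused = cv2.bitwise_or(m_color, m_depth)
        return cv2.medianBlur(fused, 5)                  # clean up speckle noise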

Object-Level Semantic Map Construction for Dynamic Scenes

Xujie Kang, Jing Li, Xiangtao Fan, Hongdeng Jian, Chen Xu
2021 Applied Sciences  
This paper introduces a method for robust dense object-level SLAM in dynamic environments that takes a live stream of RGB-D frame data as input, detects moving objects, and segments the scene into different  ...  According to the camera pose accuracy and instance segmentation results, an object-level semantic map representation was constructed for the world map.  ...  All authors have read and agreed to the published version of the manuscript.  ...
doi:10.3390/app11020645 fatcat:afpvspjjvrftnbi65dl6fujeei
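
A common way to detect moving objects in an RGB-D SLAM front end, sketched below, is to warp the previous depth frame into the current view with the estimated camera motion and flag pixels whose depth residual is large. The intrinsics K, the pose convention, and the 5 cm threshold are assumptions for illustration.

    # Sketch: dynamic-pixel mask from depth reprojection residuals.
    import numpy as np

    def dynamic_mask(depth_prev, depth_cur, T, K, thresh=0.05):
        """depth_*: (H, W) meters; T: (4, 4) prev->cur pose; K: (3, 3)."""
        H, W = depth_prev.shape
        v, u = np.mgrid[0:H, 0:W]
        z = depth_prev
        x = (u - K[0, 2]) * z / K[0, 0]           # back-project to 3D
        y = (v - K[1, 2]) * z / K[1, 1]
        pts = np.stack([x, y, z, np.ones_like(z)]).reshape(4, -1)
        pc = (T @ pts)[:3]                        # points in the current frame
        uu = np.round(K[0, 0] * pc[0] / pc[2] + K[0, 2]).astype(int)
        vv = np.round(K[1, 1] * pc[1] / pc[2] + K[1, 2]).astype(int)
        ok = (uu >= 0) & (uu < W) & (vv >= 0) & (vv < H) & (pc[2] > 0)
        mask = np.zeros((H, W), bool)
        resid = np.abs(depth_cur[vv[ok], uu[ok]] - pc[2][ok])
        mask[vv[ok], uu[ok]] = resid > thresh     # large residual => moving
        return mask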

Employing a RGB-D sensor for real-time tracking of humans across multiple re-entries in a smart environment

Jungong Han, E. J. Pauwels, P. M. de Zeeuw, P. H. N. de With
2012 IEEE Transactions on Consumer Electronics
In this paper, we intend to tackle the problems of detecting and tracking humans in a realistic home environment by exploiting the complementary nature of (synchronized) color and depth images produced  ...  data clustering, (2) human re-entry identification based on comparing visual signatures extracted from the color (RGB) information, and (3) human tracking based on the fusion of both depth and RGB data  ...  In [13] , one fuses depth and color data to segment the foreground pixels in a video sequence.  ... 
doi:10.1109/tce.2012.6227420 fatcat:72vtxgkbzba4ngixm55cguehm4
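
The visual-signature comparison in step (2) is commonly done with color histograms. The sketch below builds an HSV hue/saturation histogram per person crop and compares signatures with the Bhattacharyya distance; the bin counts and the 0.4 acceptance threshold are assumed values, not the paper's.

    # Sketch: color-signature re-identification via histogram comparison.
    import cv2

    def signature(bgr_crop):
        hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
        return cv2.normalize(hist, hist)          # scale-invariant signature

    def same_person(sig_a, sig_b, thresh=0.4):
        d = cv2.compareHist(sig_a, sig_b, cv2.HISTCMP_BHATTACHARYYA)
        return d < thresh                         # 0 means identical histograms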

Toward automated power line corridor monitoring using advanced aircraft control and multisource feature fusion

Zhengrong Li, Troy S. Bruggemann, Jason J. Ford, Luis Mejias, Yuee Liu
2011 Journal of Field Robotics  
Airborne automation is achieved by using a novel approach that provides improved lateral control for tracking corridors and automatic real-time dynamic turning for flying between corridor segments, we  ...  Improved object recognition is achieved by fusing information from multi-sensor (LiDAR and imagery) data and multiple visual feature descriptors (color and texture).  ...  Powerline Tracking Automatic Guidance System (PTAGS) The Powerline Tracking Automatic Guidance System (PTAGS) provides improved lateral control for tracking corridors and automatic real-time dynamic turning  ... 
doi:10.1002/rob.20424 fatcat:b7i2ukpctjchbl6rge6thkfoma
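
As a toy illustration of "improved lateral control for tracking corridors": a proportional-derivative law mapping cross-track error and its rate to a banked-turn command. The gains and roll limit are made-up values; the snippet does not describe PTAGS in enough detail to reproduce it.

    # Sketch: PD lateral guidance for corridor tracking. Gains are assumed.
    import numpy as np

    def lateral_roll_cmd(e, e_dot, kp=2.0, kd=4.0, max_roll=25.0):
        """e: cross-track error (m); e_dot: its rate (m/s); returns roll (deg)."""
        return float(np.clip(-(kp * e + kd * e_dot), -max_roll, max_roll))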

Improving depth perception with motion parallax and its application in teleconferencing

Cha Zhang, Zhaozheng Yin, Dinei Florencio
2009 2009 IEEE International Workshop on Multimedia Signal Processing  
We also propose a novel foreground/background segmentation and matting algorithm with a time-of-flight camera, which is robust to moving background, lighting variations, moving camera, etc.  ...  Stereopsis and motion parallax are two of the most important cues for depth perception. Most of the 3D displays today rely on stereopsis to create 3D perception.  ...  In this paper, we explore the segmentation capability of a 3D image sensor (ZCam [2]) which provides real-time color and depth information, as shown in Fig. 4.  ...
doi:10.1109/mmsp.2009.5293309 dblp:conf/mmsp/ZhangYF09 fatcat:mv6a4l3mazeiddvxkfjc736mze
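
Motion parallax from a single RGB-D view can be faked by shifting pixels in proportion to the viewer's head offset and inverse depth, compositing near-over-far. The gain constant and this simple forward warp (which leaves disocclusion holes unfilled) are illustrative assumptions.

    # Sketch: head-coupled parallax rendering from one RGB-D frame.
    import numpy as np

    def parallax_view(rgb, depth, head_dx, gain=40.0):
        """rgb: (H, W, 3); depth: (H, W) meters; head_dx: head offset (m)."""
        H, W, _ = rgb.shape
        out = np.zeros_like(rgb)
        zbuf = np.full((H, W), np.inf)
        shift = (gain * head_dx / np.maximum(depth, 0.1)).astype(int)
        for y in range(H):                        # forward warp with a z-buffer
            for x in range(W):
                xx = min(max(x + shift[y, x], 0), W - 1)
                if depth[y, x] < zbuf[y, xx]:     # nearer pixel wins collisions
                    zbuf[y, xx] = depth[y, x]
                    out[y, xx] = rgb[y, x]
        return out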
Showing results 1–15 of 3,766.