
3D environment modelling using laser range sensing

Vítor Sequeira, João G. M. Gonçalves, M. Isabel Ribeiro
1995 Robotics and Autonomous Systems  
This paper describes a 3D scene analysis system capable of modelling real-world scenes from data acquired by a Laser Range Finder on board a mobile robot.  ...  Laser range images directly provide access to three-dimensional information, as opposed to intensity images.  ...  In the present case, range images were captured using a time-of-flight Laser Range Finder (LRF).  ... 
doi:10.1016/0921-8890(95)00036-f fatcat:n2rnpvzq45ebto5ztqz6txhkbm

Monocular vision and calculation of regular three-dimensional target pose based on Otsu and Haar-feature AdaBoost classifier

Yuanhong Li, Hongjun Wang, Weiliang Zhou, Zehao Xue (1. Department of Engineering, South China Agricultural University, Guangzhou 510642, China; 2. Southern Key Laboratory of Agricultural Equipment Machinery, South China Agricultural University, Guangzhou 510642, China; 3. Department of Biological and Agricultural Engineering, Texas A&M University, College Station, TX 77843, USA)
2020 International Journal of Agricultural and Biological Engineering  
It shows that the Otsu and Haar-feature based AdaBoost algorithm is feasible within certain error ranges and meets the engineering requirements for solving the poses of automated regular three-dimensional  ...  The strong classifier was formed by weighted combination, and the Hough contour transformation algorithm was used to correct the normal vector between the plane coordinate system and the camera coordinate system.  ...  Solving for the A_1 coordinate: L_1 is the measured result of the laser range finder, α_1 is the deflection angle of the laser range finder, and H_1 is the vertical distance from the laser point to coordinate  ... 
doi:10.25165/j.ijabe.20201305.5013 fatcat:nl4uqinb6vc27olfat6sdbrlam

A survey of computer vision methods for locating fruit on trees

A. R. Jiménez, R. Ceres, J. L. Pons
2000 Transactions of the ASAE  
The approaches using range images and shape analysis were capable of detecting fruit of any color, did not generate false alarms, and gave precise information about the fruit's three-dimensional position.  ...  The majority of these works use CCD cameras to capture the images and use local or shape-based analysis to detect the fruit.  ...  and ii) scanning a local laser range-finder over the scene to get direct range and reflectance information without any processing.  ... 
doi:10.13031/2013.3096 fatcat:3hr7xy3fjfe7jfcnblyudncdy4

Automatic fruit recognition: a survey and new results using Range/Attenuation images

A.R. Jiménez, A.K. Jain, R. Ceres, J.L. Pons
1999 Pattern Recognition  
The sensor used is a laser range-finder giving range/attenuation data of the sensed surface.  ...  The recognition system uses a laser range-finder model and a dual color/shape analysis algorithm to locate the fruit.  ...  ACKNOWLEDGEMENTS This research was done at the PRIP laboratory of the Computer Science Department at Michigan State University and was sponsored by the Spanish National Programme PN93 (CICYT-TAP93-0583  ... 
doi:10.1016/s0031-3203(98)00170-8 fatcat:jxj4nihakfdxjm6svqgtupfdka

Novel Low Cost 3D Surface Model Reconstruction System for Plant Phenotyping

Suxing Liu, Lucia Acosta-Gamboa, Xiuzhen Huang, Argelia Lorence
2017 Journal of Imaging  
Accurate high-resolution three-dimensional (3D) models are essential for a non-invasive analysis of phenotypic characteristics of plants.  ...  We present an image-based 3D plant reconstruction system that uses a single camera and a rotation stand.  ...  Based on the analysis above, theories about the relationship between multi-view data capture and the quality of the rendered virtual view were applied to find a balance between multi-view data capture  ... 
doi:10.3390/jimaging3030039 fatcat:pg3lwiomdvfkbizdpboit2codu

RGB-DI Images and Full Convolution Neural Network-Based Outdoor Scene Understanding for Mobile Robots

Zengshuai Qiu, Yan Zhuang, Fei Yan, Huosheng Hu, Wei Wang
2018 IEEE Transactions on Instrumentation and Measurement  
Index Terms: Full convolution neural network (FCN), mobile robots, multisensor data fusion, outdoor scene understanding, semantic segmentation.  ...  The 3-D semantic segmentation in RGB-DI point clouds is, therefore, transformed into semantic segmentation in RGB-DI images.  ...  The key technique in this paper is to transform the estimation of the laser range finder pose into a simplified perspective-three-point problem.  ... 
doi:10.1109/tim.2018.2834085 fatcat:gpeb7qzeije2rk4l4w4eyhj4mq

Flatland: A Tool for Transforming Historical Sites into Archival Drawings [article]

Rattasak Srisinroongruang, Eric Sinzinger, Glenn Hill
2007 Eurographics State of the Art Reports  
Laser range finders can be used to produce highly accurate geometric representations of historical sites, and high-resolution images provide vital detail.  ...  Flatland provides the missing link for archaeologists between three-dimensional representations and archival drawings.  ...  By tracing in 3D, sampling the data down to a 2D space is not required, preserving the dimensionality of the captured data.  ... 
doi:10.2312/egch.20071005 fatcat:4j5ozcycbretrgb4sv5uvejhwu

Parameters Separated Calibration Based on Particle Swarm Optimization for a Camera and a Laser-Rangefinder Information Fusion

Kaijun Zhou, Lingli Yu
2014 Mathematical Problems in Engineering  
First, the mapping relationship among the world coordinate system, the camera coordinate system, and the image plane is discussed, and then the calibration of camera intrinsic parameters is achieved.  ...  Furthermore, particle swarm optimization is proposed for extrinsic parameter estimation with different objectives, and Gaussian elimination is utilized for the initial particle swarm.  ...  the laser range finders can give more accurate depth information [2].  ... 
doi:10.1155/2014/291461 fatcat:f35m54rl7zdjxb3uoiabc64ply

Toward dynamic recalibration and three-dimensional reconstruction in a structured light system

Y. F. Li, B. Zhang
2007 Journal of the Optical Society of America A: Optics, Image Science, and Vision  
Computer simulation and real data experiments were carried out to validate the method.  ...  In the latter method, a single image is sufficient for the whole process of calibration and reconstruction. Thus a hand-held camera can be used.  ...  ACKNOWLEDGMENT The work described in this paper was fully supported by grants from the Research Grants Council of Hong Kong (projects CityU1206/04E and CityU117605).  ... 
doi:10.1364/josaa.24.000785 pmid:17301867 fatcat:5oy5dptrxjhgdlizl43gqftlie

Identifying, visualizing, and comparing regions in irregularly spaced 3D surface data

Matthew J. Thurley, Kim C. Ng
2005 Computer Vision and Image Understanding  
Mathematical morphology and image segmentation algorithms have been extended from greyscale-image-based definitions and applied to irregularly spaced 3D coordinate surface data.  ...  3D coordinate surface data  ...  and size distribution calculation, utilisation of 3D data to overcome various limitations of photographic-based image analysis, and the capacity to use 3D fragment data to eliminate the misclassification  ...  Let the set S = ∪_i S_i represent a set comprising multiple scans of this scene using the same coordinate system, so that all of these views are registered.  ... 
doi:10.1016/j.cviu.2003.12.002 fatcat:uqr6tzwyere7ripzuwpzovpute

Automatic reconstruction of textured 3D models

Benjamin Pitzer, Sören Kammel, Charles DuHadway, Jan Becker
2010 2010 IEEE International Conference on Robotics and Automation  
We also propose a calibration procedure for this system that determines the internal and external calibration which is necessary to transform data from one sensor into the coordinate system of another  ...  Next, we present solutions for the multi-view data registration problem, which is essentially the problem of aligning the data of multiple 3D scans into a common coordinate system.  ...  This external calibration enables subsequent processing steps to transform data points between sensor coordinate systems and fuse color with range data.  ... 
doi:10.1109/robot.2010.5509568 dblp:conf/icra/PitzerKDB10 fatcat:ve3ds7774jcuren7ycgjl6xlsm

Reflectance Analysis for 3D Computer Graphics Model Generation

Yoichi Sato, Katsushi Ikeuchi
1996 Graphical Models and Image Processing  
Then, the object shape is obtained as a collection of triangular patches by merging multiple range images.  ...  This paper describes one approach to create a three dimensional object model with physically correct reflectance properties by observing a real object. The approach consists of three steps.  ...  Then, range and color images are captured by the range finder at fixed angle steps of object orientation.  ... 
doi:10.1006/gmip.1996.0036 fatcat:bl6gbsy4cjc5xbdb6ndtiromvu

Visual Based Navigation of Mobile Robots [article]

Shailja, Soumabh Bhowmick, Jayanta Mukhopadhyay
2017 arXiv   pre-print
Using multiple images taken by a simple webcam, obstacle detection and avoidance algorithms have been developed.  ...  Simple Linear Iterative Clustering (SLIC) has been used for segmentation to reduce the memory and computation cost.  ...  It involves an analysis based on homogeneous coordinates and the perspective transformation matrix.  ...  Homogeneous coordinates are simply a way of representing N-dimensional  ... 
arXiv:1712.05482v1 fatcat:yhkwwfzjbrclde6ivvaicu2wwm

Augmented scene modeling and visualization by optical and acoustic sensor integration

A. Fusiello, V. Murino
2004 IEEE Transactions on Visualization and Computer Graphics  
To this end, the use of multiple sensors is typically necessary, but the related data integration is critical.  ...  The main idea is to integrate multiple-sensor data by geometrically registering such data to a model.  ...  In [3], [4], and [5], a laser range-finder is used to acquire a depth map of a real object.  ... 
doi:10.1109/tvcg.2004.38 pmid:15527045 fatcat:a26luyhut5gfzaepmuzltf5bza

Spatial uncertainty management for a mobile robot

Ronald C. Arkin
1991 International Journal of Approximate Reasoning  
The mobile robot must contend with a minimum of three degrees of freedom (with significant uncertainty in each): two degrees of translation, which can be represented as x and y coordinates in a Cartesian  ...  The vision algorithms developed include adaptive region segmentation, fast line finding, depth-from-motion, temporal activity detection, Hough transform-based recognition, and texture-based methods [1,  ...  Both the three-dimensional world coordinates and the matched two-dimensional image plane coordinates are used in position estimation.  ... 
doi:10.1016/0888-613x(91)90033-i fatcat:cepy4fbeg5fgdl4jroqrujanqi
Showing results 1–15 of 856