FVNet: 3D Front-View Proposal Generation for Real-Time Object Detection from Point Clouds [article]
2019 · arXiv pre-print
In this paper, we propose a novel framework called FVNet for 3D front-view proposal generation and object detection from point clouds. ...
3D object detection from raw and sparse point clouds has been far less treated to date, compared with its 2D counterpart. ...
B. 3D Object Detection Based on Point Clouds: Projection-Based Methods. ...
arXiv:1903.10750v3 · fatcat:htpvcfs5qrbo5bdkan6lckqagq
PIXOR: Real-time 3D Object Detection from Point Clouds [article]
2019 · arXiv pre-print
We address the problem of real-time 3D object detection from point clouds in the context of autonomous driving. Computation speed is critical as detection is a necessary component for safety. ...
We utilize the 3D data more efficiently by representing the scene from the Bird's Eye View (BEV), and propose PIXOR, a proposal-free, single-stage detector that outputs oriented 3D object estimates decoded ...
Meyer for suggesting the decoding loss, Andrei Pokrovsky for GPU implementation of oriented NMS, and the anonymous reviewers for their insightful suggestions. ...
arXiv:1902.06326v3 · fatcat:enhz3kdw7rgjra6uioh7yajmpe
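The PIXOR abstract hinges on encoding the scene as a bird's-eye-view tensor and then running a proposal-free, single-stage 2D convolutional detector on it. The sketch below shows a generic BEV rasterization of a point cloud into an occupancy volume; the ranges, resolution, and number of height slices are illustrative assumptions, not PIXOR's actual input configuration.

```python
import numpy as np

def bev_occupancy(points, x_range=(0.0, 70.0), y_range=(-40.0, 40.0),
                  z_range=(-2.5, 1.0), res=0.1, z_slices=35):
    """Rasterize a point cloud (N, 3) into a BEV occupancy volume.

    Illustrative sketch: each voxel (x, y, z-slice) is marked occupied
    if at least one point falls inside it. Ranges and resolution are
    assumed values, not the ones used by any particular detector.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    mask = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]) &
            (z >= z_range[0]) & (z < z_range[1]))
    x, y, z = x[mask], y[mask], z[mask]

    xi = ((x - x_range[0]) / res).astype(int)
    yi = ((y - y_range[0]) / res).astype(int)
    zi = ((z - z_range[0]) / ((z_range[1] - z_range[0]) / z_slices)).astype(int)

    grid = np.zeros((int((x_range[1] - x_range[0]) / res),
                     int((y_range[1] - y_range[0]) / res),
                     z_slices), dtype=np.float32)
    grid[xi, yi, np.clip(zi, 0, z_slices - 1)] = 1.0
    return grid  # (W, H, C): height slices become channels for a 2D CNN
```

A 2D CNN can then treat the height slices as channels and predict oriented boxes densely over the BEV plane, which is the single-stage setup the abstract describes.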
Fusing Bird View LIDAR Point Cloud and Front View Camera Image for Deep Object Detection [article]
2018 · arXiv pre-print
A corresponding deep CNN is designed and tested on the KITTI bird view object detection dataset, which produces 3D bounding boxes from the bird view map. ...
The fusion method shows particular benefit for detection of pedestrians in the bird view compared to other fusion-based object detection networks. ...
The KITTI 3D object and bird's eye view evaluation are used. ...
arXiv:1711.06703v3 · fatcat:y2gkx54iqndzzp2qxj7llitbgq
PIXOR: Real-time 3D Object Detection from Point Clouds
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
We address the problem of real-time 3D object detection from point clouds in the context of autonomous driving. Speed is critical as detection is a necessary component for safety. ...
We utilize the 3D data more efficiently by representing the scene from the Bird's Eye View (BEV), and propose PIXOR, a proposal-free, single-stage detector that outputs oriented 3D object estimates decoded ...
In this paper, we propose an accurate real-time 3D object detector, which we call PIXOR (ORiented 3D object detection from PIXel-wise neural network predictions), that operates on 3D point clouds. ...
doi:10.1109/cvpr.2018.00798 · dblp:conf/cvpr/YangLU18 · fatcat:adyhkrjxjfgmdjdzs5cgdri76e
Frustum PointNets for 3D Object Detection from RGB-D Data [article]
2018 · arXiv pre-print
Evaluated on KITTI and SUN RGB-D 3D detection benchmarks, our method outperforms the state of the art by remarkable margins while having real-time capability. ...
While previous methods focus on images or 3D voxels, often obscuring natural 3D patterns and invariances of 3D data, we directly operate on raw point clouds by popping up RGB-D scans. ...
... ONR MURI grant N00014-13-1-0341, NSF grants DMS-1546206 and IIS-1528025, a Samsung GRO award, and gifts from Adobe, Amazon, and Apple. ...
arXiv:1711.08488v2 · fatcat:aatdkha3gzgcnoe4rpem6fggaa
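The Frustum PointNets abstract describes "popping up" RGB-D scans into point clouds before operating on the raw points. That lifting step is the standard pinhole back-projection; a minimal sketch follows, with the camera intrinsics (fx, fy, cx, cy) taken as given from calibration. The frustum extraction from 2D detections and the PointNet stages themselves are not shown.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (H, W), in meters, to an (N, 3) point cloud.

    Standard pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    Intrinsics fx, fy, cx, cy come from the camera calibration.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # u: column index, v: row index
    z = depth.reshape(-1)
    valid = z > 0                                   # drop pixels with missing depth
    u, v, z = u.reshape(-1)[valid], v.reshape(-1)[valid], z[valid]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```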
Free Space Detection Using Camera-LiDAR Fusion in a Bird's Eye View Plane
2021 · Sensors
Our result ranks 22nd on the KITTI leaderboard and shows real-time performance. ...
This study proposes a convolutional neural network architecture that processes data transformed to a bird's eye view plane. ...
Our transformation uses a rotation matrix based on homogeneous coordinates and a look-up table (LUT) to fuse the images and point clouds in the bird's eye view. ...
doi:10.3390/s21227623 · pmid:34833698 · pmcid:PMC8619025 · fatcat:tinnoixr3bel7jmr4sn7smhou4
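The Sensors paper above fuses images and point clouds on a bird's-eye-view plane using a rotation matrix in homogeneous coordinates plus a look-up table (LUT). A generic version of that idea is sketched below: an assumed 3x3 homogeneous transform from image pixels to the BEV plane is baked into a per-pixel LUT once, so the per-frame warp reduces to an index-and-scatter. The homography H_img2bev, cell size, and grid extent are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def build_bev_lut(H_img2bev, img_w, img_h):
    """Precompute a look-up table mapping each image pixel to BEV coordinates.

    H_img2bev is an assumed 3x3 homography (homogeneous coordinates) from
    image pixels to the bird's-eye-view plane, e.g. derived from extrinsic
    calibration under a flat-ground assumption.
    """
    u, v = np.meshgrid(np.arange(img_w), np.arange(img_h))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(u.size)], axis=0)  # 3 x N
    bev = H_img2bev @ pix
    bev = bev[:2] / bev[2]                        # de-homogenize
    return bev.reshape(2, img_h, img_w)           # LUT[:, v, u] = (x_bev, y_bev)

def warp_to_bev(image, lut, bev_shape=(800, 800), cell=0.05):
    """Scatter image pixels into a BEV grid of bev_shape cells using the LUT."""
    xb, yb = lut[0].ravel(), lut[1].ravel()
    xi, yi = (xb / cell).astype(int), (yb / cell).astype(int)
    ok = (xi >= 0) & (xi < bev_shape[0]) & (yi >= 0) & (yi < bev_shape[1])
    bev = np.zeros(bev_shape + image.shape[2:], dtype=image.dtype)
    bev[xi[ok], yi[ok]] = image.reshape(-1, *image.shape[2:])[ok]
    return bev
```

Precomputing the LUT is what buys real-time performance here: the geometry is evaluated once, and each frame only gathers pixels into BEV cells.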
Frustum PointNets for 3D Object Detection from RGB-D Data
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Evaluated on KITTI and SUN RGB-D 3D detection benchmarks, our method outperforms the state of the art by remarkable margins while having real-time capability. ...
While previous methods focus on images or 3D voxels, often obscuring natural 3D patterns and invariances of 3D data, we directly operate on raw point clouds by popping up RGB-D scans. ...
... ONR MURI grant N00014-13-1-0341, NSF grants DMS-1546206 and IIS-1528025, a Samsung GRO award, and gifts from Adobe, Amazon, and Apple. ...
doi:10.1109/cvpr.2018.00102 · dblp:conf/cvpr/QiLWSG18 · fatcat:za5o64qpcrgqvijmioux6clnpq
RUHSNet: 3D Object Detection Using Lidar Data in Real Time [article]
2021 · arXiv pre-print
In this work, we address the problem of 3D object detection from point cloud data in real time. ...
We propose a novel neural network architecture along with the training and optimization details for detecting 3D objects in point cloud data. ...
Our detector accurately regresses bounding boxes around objects in real time in the bird's eye view. ...
arXiv:2006.01250v6 · fatcat:th3nzzxkqrhdrccwcvrh4d5iau
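RUHSNet, like the other BEV detectors above, regresses oriented boxes densely over a bird's-eye-view grid. Below is a minimal sketch of decoding such dense per-cell predictions into boxes; the 7-channel layout (score, cos, sin, dx, dy, log_w, log_l) is a common single-stage parameterization assumed here for illustration, not the exact head of this paper.

```python
import numpy as np

def decode_bev_boxes(pred, score_thresh=0.5, cell=0.1):
    """Decode dense per-cell predictions into oriented BEV boxes.

    `pred` is assumed to be an (H, W, 7) map with channels
    (score, cos, sin, dx, dy, log_w, log_l); offsets dx, dy and the
    decoded sizes are in meters, `cell` is the grid resolution.
    """
    score = pred[..., 0]
    ys, xs = np.nonzero(score > score_thresh)
    boxes = []
    for y, x in zip(ys, xs):
        s, c, sn, dx, dy, lw, ll = pred[y, x]
        cx = x * cell + dx                    # cell center + regressed offset
        cy = y * cell + dy
        heading = np.arctan2(sn, c)           # recover yaw from (cos, sin)
        boxes.append((cx, cy, np.exp(lw), np.exp(ll), heading, s))
    return boxes                              # (cx, cy, w, l, yaw, score)
```

In practice an oriented non-maximum suppression step would follow to remove duplicate detections.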
Deep Learning on Radar Centric 3D Object Detection [article]
2020 · arXiv pre-print
Even though many existing 3D object detection algorithms rely mostly on camera and LiDAR, camera and LiDAR are prone to be affected by harsh weather and lighting conditions. ...
To the best of our knowledge, we are the first ones to demonstrate a deep learning-based 3D object detection model with radar only that was trained on the public radar dataset. ...
... For the bird's eye view detection network, we exploit Complex-YOLO [2], a state-of-the-art real-time one-stage 3D object detection network. ...
arXiv:2003.00851v1 · fatcat:fabq5stsffcuvnsgeqoewxakf4
3D Fast Object Detection Based on Discriminant Images and Dynamic Distance Threshold Clustering
2020 · Sensors
However, most algorithms are often based on a large amount of point cloud data, which makes real-time detection difficult. ...
To solve this problem, this paper proposes a 3D fast object detection method based on three main steps: First, the ground segmentation by discriminant image (GSDI) method is used to convert point cloud ...
Introduction: Real-time and accurate object detection is essential for the safe driving of autonomous vehicles. ...
doi:10.3390/s20247221 · pmid:33348559 · fatcat:tvkcscvmlrd5xbka6d4avrlz7q
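The title of the entry above points at clustering with a dynamic distance threshold: LiDAR returns get sparser with range, so a fixed Euclidean radius tends to merge nearby objects or split distant ones. The sketch below illustrates the general idea with a region-growing clustering whose merge radius grows linearly with point range; the growth rule and constants are assumptions, not the formulation from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def cluster_dynamic_threshold(points, base=0.3, gain=0.01):
    """Greedy Euclidean clustering where the merge radius grows with range.

    threshold(p) = base + gain * ||p||  (illustrative rule only).
    Returns one integer cluster label per point.
    """
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            i = stack.pop()
            radius = base + gain * np.linalg.norm(points[i])
            for j in tree.query_ball_point(points[i], r=radius):
                if labels[j] == -1:          # grow the cluster to unvisited neighbors
                    labels[j] = current
                    stack.append(j)
        current += 1
    return labels
```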
Improving Map Re-localization with Deep 'Movable' Objects Segmentation on 3D LiDAR Point Clouds [article]
2019 · arXiv pre-print
In this paper we propose the use of a deep learning architecture to segment movable objects from 3D LiDAR point clouds in order to obtain longer-lasting 3D maps. ...
This will in turn allow for better, faster and more accurate re-localization and trajectory estimation on subsequent days. ...
We build on this approach to construct our maps, as it provides a real-time and accurate representation and is more robust when using real raw LiDAR data. ...
arXiv:1910.03336v1 · fatcat:jpw4x6vitzamjnpdlwtezgq454
SegVoxelNet: Exploring Semantic Context and Depth-aware Features for 3D Vehicle Detection from Point Cloud [article]
2020 · arXiv pre-print
3D vehicle detection based on point cloud is a challenging task in real-world applications such as autonomous driving. ...
A semantic context encoder is proposed to leverage the free-of-charge semantic segmentation masks in the bird's eye view. ...
[8] encoded the point cloud as bird's eye view feature maps and projected the 3D proposals to different views (e.g. bird's eye view for point cloud and front view for image) to crop object features from ...
arXiv:2002.05316v1 · fatcat:fotbzjfgpfeb5brvhfimxrjmva
Improving Map Re-localization with Deep 'Movable' Objects Segmentation on 3D LiDAR Point Clouds
2019 IEEE Intelligent Transportation Systems Conference (ITSC)
In this paper we propose the use of a deep learning architecture to segment movable objects from 3D LiDAR point clouds in order to obtain longer-lasting 3D maps. ...
This will in turn allow for better, faster and more accurate re-localization and trajectory estimation on subsequent days. ...
We build on this approach to construct our maps, as it provides a real-time and accurate representation and is more robust when using real raw LiDAR data. ...
doi:10.1109/itsc.2019.8917390 · dblp:conf/itsc/VaqueroFMSM19 · fatcat:oti33brggbg27obhhylawxvcte
RangeRCNN: Towards Fast and Accurate 3D Object Detection with Range Image Representation [article]
2021 · arXiv pre-print
We present RangeRCNN, a novel and effective 3D object detection framework based on the range image representation. Most existing methods are voxel-based or point-based. ...
Experiments show that RangeRCNN achieves state-of-the-art performance on the KITTI dataset and the Waymo Open dataset, and provides more possibilities for real-time 3D object detection. ...
For real-time 3D object detection, [14] proposes pillar-based voxels to significantly improve efficiency. ...
arXiv:2009.00206v2 · fatcat:pdj6yvcayffgpflmexmfpaotue
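RangeRCNN is built on the range image representation, i.e. the point cloud arranged back into the sensor's native spherical (elevation x azimuth) grid. A generic spherical projection is sketched below; the image size and vertical field of view are typical 64-beam values used only for illustration, not the paper's settings.

```python
import numpy as np

def to_range_image(points, width=2048, height=64, v_fov=(-24.9, 2.0)):
    """Project an (N, 3) LiDAR point cloud to a (height, width) range image.

    Rows correspond to elevation, columns to azimuth; each pixel stores the
    range of the point mapped to it. Sizes and FOV are assumed typical
    values for a 64-beam spinning LiDAR.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-6

    yaw = np.arctan2(y, x)                          # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                        # elevation angle

    col = ((1.0 - yaw / np.pi) * 0.5 * width).astype(int) % width
    fov_lo, fov_hi = np.radians(v_fov[0]), np.radians(v_fov[1])
    row = ((fov_hi - pitch) / (fov_hi - fov_lo) * (height - 1)).astype(int)

    img = np.zeros((height, width), dtype=np.float32)
    ok = (row >= 0) & (row < height)                # keep points inside the vertical FOV
    order = np.argsort(-r[ok])                      # write nearer points last so they win
    img[row[ok][order], col[ok][order]] = r[ok][order]
    return img
```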
BEV-Seg: Bird's Eye View Semantic Segmentation Using Geometry and Semantic Point Cloud [article]
2020 · arXiv pre-print
Bird's-eye-view (BEV) is a powerful and widely adopted representation for road scenes that captures surrounding objects and their spatial locations, along with overall context in the scene. ...
In this work, we focus on bird's eye semantic segmentation, a task that predicts pixel-wise semantic segmentation in BEV from side RGB images. ...
MVP [26] predicts a bird's eye view by detecting 3D objects, but it disregards other features such as roads, road lanes, and buildings. ...
arXiv:2006.11436v2 · fatcat:3k7doi2bejgylfryf7xgrskrua
Showing results 1 — 15 out of 1,249 results