CSPC-Dataset: New LiDAR Point Cloud Dataset and Benchmark for Large-scale Semantic Segmentation

Guofeng Tong, Yong Li, Dong Chen, Qi Sun, Wei Cao, Guiqiu Xiang
2020 IEEE Access  
Large-scale point clouds scanned by light detection and ranging (LiDAR) sensors provide detailed geometric characteristics of scenes through 3D structural data. The semantic segmentation of large-scale point clouds is a crucial step toward an in-depth understanding of complex scenes. Although a large number of point cloud semantic segmentation algorithms have been proposed recently, these methods remain far from satisfactory in terms of precision and efficiency on large-scale point clouds. For machine learning (ML) and deep learning (DL) methodologies, semantic segmentation quality depends heavily on both the training sets and the methods themselves. Therefore, we construct a new point cloud dataset, the CSPC-Dataset (Complex Scene Point Cloud Dataset), for large-scale scene semantic segmentation. The CSPC-Dataset point clouds are acquired by a wearable laser mobile mapping robot. The dataset covers five complex urban and rural scenes and mainly includes six types of objects: ground, car, building, vegetation, bridge, and pole. It provides large-scale outdoor scenes with color information and offers advantages such as more complete scenes, relatively uniform point density, diverse and complex objects, and high discrepancy between different scenes. Based on the CSPC-Dataset, we construct a new benchmark, which includes approximately 68 million points with explicit semantic labels. To extend the dataset to a wide range of applications, this paper provides semantic segmentation results and a comparative analysis of 7 baseline methods on the CSPC-Dataset. In the experimental part, three groups of experiments are conducted for benchmarking, which offers an effective way to compare different point-labeling algorithms. The labeling results show that the highest Intersection over Union (IoU) of pole, ground, building, car, vegetation, and bridge across all benchmarks is 36.0%, 97.8%, 93.7%, 65.6%, 92.0%, and 69.6%, respectively.
INDEX TERMS: LiDAR, benchmark, point clouds, large-scale datasets, scene understanding.
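Since the benchmark is reported as per-class Intersection over Union (IoU), the following minimal Python sketch illustrates how per-class IoU is typically computed from point-level predicted and ground-truth labels. This is not the authors' evaluation code; the class list ordering, function name, and dummy label arrays are illustrative assumptions.

# Minimal sketch (not the authors' code): per-class IoU for point-level semantic labels.
import numpy as np

# Illustrative class list matching the six CSPC-Dataset object types; ordering is assumed.
CLASSES = ["ground", "car", "building", "vegetation", "bridge", "pole"]

def per_class_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> np.ndarray:
    """Return an array of IoU values, one per class, from integer label arrays of equal length."""
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        pred_c = pred == c
        gt_c = gt == c
        intersection = np.logical_and(pred_c, gt_c).sum()
        union = np.logical_or(pred_c, gt_c).sum()
        if union > 0:
            ious[c] = intersection / union  # classes absent from both stay NaN
    return ious

# Toy usage with dummy labels for a seven-point cloud.
pred = np.array([0, 0, 1, 2, 3, 3, 5])
gt   = np.array([0, 1, 1, 2, 3, 4, 5])
for name, iou in zip(CLASSES, per_class_iou(pred, gt, len(CLASSES))):
    print(f"{name}: {iou:.3f}" if not np.isnan(iou) else f"{name}: n/a")

In practice, the mean over per-class IoU values (mIoU) is often reported alongside the per-class scores when comparing point-labeling algorithms.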
doi:10.1109/access.2020.2992612