
V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer [article]

Runsheng Xu, Hao Xiang, Zhengzhong Tu, Xin Xia, Ming-Hsuan Yang, Jiaqi Ma
2022 arXiv   pre-print
Extensive experimental results demonstrate that V2X-ViT sets new state-of-the-art performance for 3D object detection and achieves robust performance even under harsh, noisy environments.  ...  We present a robust cooperative perception framework with V2X communication using a novel vision Transformer.  ...  This material is supported in part by the Federal Highway Administration Exploratory Advanced Research (EAR) Program and by the US National Science Foundation through Grant CMMI #1901998.  ...
arXiv:2203.10638v3 fatcat:frot3yl4kzhshipwevhld3s6lm

A Review of Research on Traffic Conflicts Based on Intelligent Vehicles

Lin Hu, Jian Ou, Jing Huang, Yimin Chen, Dongpu Cao
2020 IEEE Access  
The intelligent vehicles can perceive the surrounding environment, extract road condition information, and detect obstacles for avoiding collisions or mitigating accidents.  ...  The major challenges are accurately perceiving the road traffic environment, detecting the potential traffic conflicts, and proposing the alternative driving strategies.  ...  [51] proposed a multi-modal vehicle detection system combining 3D lidar and color camera data, using convolutional neural network fusion strategy to make vehicle detection more accurate.  ... 
doi:10.1109/access.2020.2970164 fatcat:rd44et55sfgffb2dakzcat6wmq

Cooperative Perception Technology of Autonomous Driving in the Internet of Vehicles Environment: A Review

Guangzhen Cui, Weili Zhang, Yanqiu Xiao, Lei Yao, Zhanpeng Fang
2022 Sensors  
Based on the analysis of the related literature on the Internet of vehicles (IoV), this paper summarizes the multi-sensor information fusion method, information sharing strategy, and communication technology  ...  Firstly, cooperative perception information fusion methods, such as image fusion, point cloud fusion, and image–point cloud fusion, are summarized and compared according to the approaches of sensor information  ...  Acknowledgments: This paper and research are inseparable from the joint efforts and help of the advisors and team members. Conflicts of Interest: The authors declare no conflict of interest.  ... 
doi:10.3390/s22155535 pmid:35898039 pmcid:PMC9332497 fatcat:x2mghmdhafatjboshktjtpjxxa

V2X-Sim: Multi-Agent Collaborative Perception Dataset and Benchmark for Autonomous Driving [article]

Yiming Li, Dekun Ma, Ziyan An, Zixun Wang, Yiqi Zhong, Siheng Chen, Chen Feng
2022 arXiv   pre-print
V2X-Sim provides: (1) multi-agent sensor recordings from the road-side unit (RSU) and multiple vehicles that enable collaborative perception, (2) multi-modality sensor streams that facilitate multi-modality  ...  To fill this gap, we present V2X-Sim, a comprehensive simulated multi-agent perception dataset for V2X-aided autonomous driving.  ...  The authors would like to thank anonymous reviewers for their helpful suggestions, and NYU high performance computing (HPC) for the support.  ... 
arXiv:2202.08449v2 fatcat:yv2jbv76nbgyvetwxqlaricocm

Going Beyond RF: How AI-enabled Multimodal Beamforming will Shape the NextG Standard [article]

Debashri Roy, Batool Salehi, Stella Banou, Subhramoy Mohanti, Guillem Reus-Muns, Mauro Belgiovine, Prashant Ganesh, Carlos Bocanegra, Chris Dick, Kaushik Chowdhury
2022 arXiv   pre-print
This survey presents a thorough analysis of the different approaches used for beamforming today, focusing on mmWave bands, and then proceeds to make a compelling case for considering non-RF sensor data  ...  from multiple modalities, such as LiDAR, Radar, and GPS, for increasing beamforming directional accuracy and reducing processing time.  ...  The vehicle is assumed to be equipped with GPS and LiDAR sensors that enable the vehicle to acquire its location and detect blocking objects nearby.  ...
arXiv:2203.16706v1 fatcat:44pger2flveondbtachzhcdgam

MmWave Radar and Vision Fusion for Object Detection in Autonomous Driving: A Review

Zhiqing Wei, Fengkai Zhang, Shuo Chang, Yangyang Liu, Huici Wu, Zhiyong Feng
2022 Sensors  
In addition, we introduce three-dimensional (3D) object detection, the fusion of lidar and vision in autonomous driving, and multimodal information fusion, which are promising for the future.  ...  Millimeter wave (mmWave) radar and vision fusion is a mainstream solution for accurate obstacle detection.  ...  [21] summarized the multi-sensor fusion and vehicle communication technology for autonomous driving, involving the fusion of cameras, mmWave radar, lidar, global positioning system (GPS), and other  ...
doi:10.3390/s22072542 pmid:35408157 pmcid:PMC9003130 fatcat:ekeca2ul2fgb7jkurgwpt3fbh4
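A common building block in the radar–vision and lidar–vision fusion work surveyed in the entries above is projecting 3D sensor returns into the camera image so detections can be associated across modalities. A minimal sketch of that projection step (the intrinsics, extrinsics, and example points below are illustrative assumptions, not taken from any of the listed papers):

```python
import numpy as np

def project_to_image(points_xyz, T_cam_from_sensor, K):
    """Project N x 3 sensor-frame points into pixel coordinates.

    T_cam_from_sensor: 4x4 rigid transform (extrinsics).
    K: 3x3 pinhole intrinsics.
    Returns (uv, in_front): pixel coords for points in front of the
    camera, plus a boolean mask over the input points.
    """
    # Homogenize the points and move them into the camera frame.
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    pts_cam = (T_cam_from_sensor @ pts_h.T).T[:, :3]
    # Discard points behind the image plane before projecting.
    in_front = pts_cam[:, 2] > 0
    uv_h = (K @ pts_cam[in_front].T).T
    uv = uv_h[:, :2] / uv_h[:, 2:3]  # perspective divide
    return uv, in_front

# Toy example: identity extrinsics, simple pinhole intrinsics.
K = np.array([[100.0, 0.0, 64.0],
              [0.0, 100.0, 64.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
pts = np.array([[0.0, 0.0, 10.0],   # straight ahead -> principal point
                [1.0, 0.0, 10.0],   # 1 m to the right at 10 m depth
                [0.0, 0.0, -5.0]])  # behind the camera -> masked out
uv, mask = project_to_image(pts, T, K)
```

Projected pixels can then be matched against image-space detections (e.g. by checking whether they fall inside a 2D bounding box), which is the association step these fusion pipelines build on.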

Compound Positioning Method for Connected Electric Vehicles Based on Multi-Source Data Fusion

Lin Wang, Zhenhua Li, Qinglan Fan
2022 Sustainability  
Firstly, Dempster-Shafer (D-S) evidence theory is used to fuse the position probability in multi-sensor detection information and to screen vehicle-existence information.  ...  multi-source data fusion technology, which can provide data support for the CVIS.  ...  Wu and L. Gao for their technical assistance with the experiments and analyses. Conflicts of Interest: The authors declare no conflict of interest.  ...
doi:10.3390/su14148323 fatcat:ckepzrs4njhgzfhtkjvr3azf54
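The D-S fusion step mentioned in this entry combines evidence from several sensors with Dempster's rule of combination. A minimal sketch of that rule over a two-hypothesis frame (the mass assignments and the "radar"/"camera" sources below are invented for illustration, not the paper's actual data):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset hypothesis -> mass)
    with Dempster's rule: intersect focal elements, discard the
    conflicting (empty-intersection) mass, and renormalize."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict; sources are incompatible")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Frame of discernment: a cell holds a vehicle (V) or is free (F);
# mass on {V, F} represents each sensor's ignorance.
V, F = frozenset("V"), frozenset("F")
either = V | F
radar = {V: 0.6, F: 0.1, either: 0.3}
camera = {V: 0.7, F: 0.2, either: 0.1}
fused = dempster_combine(radar, camera)
```

Because both sources lean toward "vehicle", the fused belief in V exceeds either input's, which is exactly the screening effect such schemes exploit for vehicle-existence decisions.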

A Survey of Autonomous Vehicles: Enabling Communication Technologies and Challenges

M. Nadeem Ahangar, Qasim Z. Ahmed, Fahd A. Khan, Maryam Hafeez
2021 Sensors  
This situation stresses the need for an intelligent transport system (ITS) that improves road safety and security by avoiding human errors with the use of autonomous vehicles (AVs).  ...  First, the paper discusses various sensors and their role in AVs. Then, various communication technologies for AVs to facilitate vehicle-to-everything (V2X) communication are discussed.  ...  LiDAR is used for positioning, obstacle detection, and environmental reconstruction [15]. 3D LiDAR sensors are playing an increasingly significant role in the AV system [62].  ...
doi:10.3390/s21030706 pmid:33494191 fatcat:drhtm37j7vbvvhvzoapr6t6s4u

Autonomous Driving in Adverse Weather Conditions: A Survey [article]

Yuxiao Zhang, Alexander Carballo, Hanting Yang, Kazuya Takeda
2021 arXiv   pre-print
Automated Driving Systems (ADS) open up a new domain for the automotive industry and offer new possibilities for future transportation with higher efficiency and comfortable experiences.  ...  This paper assesses the influences and challenges that weather brings to ADS sensors in an analytic and statistical way, and surveys the solutions against inclement weather conditions.  ...  with multi-echo LiDAR sensor only.  ... 
arXiv:2112.08936v1 fatcat:hmgjhywy7rgx3fgrk6yxnu56ie

A Grid-Based Framework for Collective Perception in Autonomous Vehicles

Jorge Godoy, Víctor Jiménez, Antonio Artuñedo, Jorge Villagra
2021 Sensors  
The proposed framework was validated on a set of experiments using real vehicles and infrastructure sensors for sensing static and dynamic objects.  ...  For each sensor, including V2X, independent grids are calculated from sensor measurements and uncertainties and then fused in terms of both occupancy and confidence.  ...  Acknowledgments: The authors would like to acknowledge Carolina Sainz, Sergio Martín and Arturo Medela, from TST Sistemas, for their valuable help with the UWB sensors setup.  ... 
doi:10.3390/s21030744 pmid:33499331 fatcat:2k246i46rbfcjdr6s3fzicvr2i
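The grid-based framework in this entry computes an independent occupancy grid per sensor (including V2X) and then fuses them. Under an independent-sensor assumption, the textbook combination is a sum in log-odds space; a minimal sketch of that operator (the example grids and the plain log-odds rule are illustrative assumptions, not the paper's exact occupancy-plus-confidence scheme):

```python
import numpy as np

def fuse_occupancy(grids, prior=0.5):
    """Fuse per-sensor occupancy-probability grids by summing log-odds,
    the standard combination rule for independent occupancy evidence."""
    l0 = np.log(prior / (1.0 - prior))  # prior log-odds (0 for p=0.5)
    log_odds = sum(np.log(g / (1.0 - g)) - l0 for g in grids) + l0
    return 1.0 / (1.0 + np.exp(-log_odds))  # back to probabilities

# Two 2x2 grids: both sensors see cell (0,0) as occupied, they
# disagree symmetrically on (1,0), and are uninformative elsewhere.
g_lidar = np.array([[0.9, 0.5], [0.3, 0.5]])
g_v2x   = np.array([[0.8, 0.5], [0.7, 0.5]])
fused = fuse_occupancy([g_lidar, g_v2x])
```

Agreeing evidence reinforces (cell (0,0) ends up well above either input), symmetric disagreement cancels back to the prior, and uninformative cells contribute nothing, which is why log-odds grids are convenient for fusing many heterogeneous sources.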

Cloud Control System Architectures, Technologies and Applications on Intelligent and Connected Vehicles: a Review

Wenbo Chu, Qiqige Wuniri, Xiaoping Du, Qiuchi Xiong, Tai Huang, Keqiang Li
2021 Chinese Journal of Mechanical Engineering  
Intelligent and connected vehicles (ICV) cloud control system (CCS) has been introduced as a new concept as it is a potentially synthetic solution for high-level automated driving to improve safety and  ...  However, vehicles equipped with on-board sensors still have limitations in acquiring necessary environmental data for optimal driving decisions.  ...  The road-side infrastructure uses fixed sensors to form a multi-sensor network for sensor fusion, which has stable range and results [169].  ...
doi:10.1186/s10033-021-00638-4 fatcat:32pl6levhfbsrprxfwct76vfzu

Deep Multi-modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges [article]

Di Feng, Christian Haase-Schuetz, Lars Rosenbaum, Heinz Hertlein, Claudius Glaeser, Fabian Timm, Werner Wiesbeck, Klaus Dietmayer
2020 arXiv   pre-print
This review paper attempts to systematically summarize methodologies and discuss challenges for deep multi-modal object detection and semantic segmentation in autonomous driving.  ...  To this end, we first provide an overview of on-board sensors on test vehicles, open datasets, and background information for object detection and semantic segmentation in autonomous driving research.  ...  We also thank Bill Beluch, Rainer Stal, Peter Möller and Ulrich Michael for their suggestions and inspiring discussions.  ... 
arXiv:1902.07830v4 fatcat:or6enjxktnamdmh2yekejjr4re

COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [article]

Jiaxun Cui, Hang Qiu, Dian Chen, Peter Stone, Yuke Zhu
2022 arXiv   pre-print
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.  ...  Optical sensors and learning algorithms for autonomous vehicles have dramatically advanced in the past few years.  ...  V2V Communication We use the WINNER II wireless channel model [25] in our simulator and use the measured C-V2X radio capacity and packet loss rate in the channel model.  ...
arXiv:2205.02222v1 fatcat:sg7rbexumngivn3acn2etpd4pm

Autonomous Driving with Deep Learning: A Survey of State-of-Art Technologies [article]

Yu Huang, Yue Chen
2020 arXiv   pre-print
Due to the limited space, we focus the analysis on several key areas, i.e., 2D and 3D object detection in perception, depth estimation from cameras, multiple sensor fusion on the data, feature and task  ...  We investigate the major fields of self-driving systems, such as perception, mapping and localization, prediction, planning and control, simulation, V2X, and safety, etc.  ...  [136] use the same idea and perform the 3D detection using a PointNet backbone net to obtain objects' 3D locations, dimensions, and orientations, with a multi-modal feature-fusion module to embed the  ...
arXiv:2006.06091v3 fatcat:nhdgivmtrzcarp463xzqvnxlwq