Showing results 1-15 of 6,526 (4.8 sec)

Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [article]

Aditya Prakash, Kashyap Chitta, Andreas Geiger
2021 arXiv   pre-print
Therefore, we propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention. ... Geometry alone may therefore be insufficient for effectively fusing representations in end-to-end driving models. ... Related Work (Multi-Modal Autonomous Driving): Recent multi-modal methods for end-to-end driving [58, 65, 51, 3] have shown that complementing RGB images with depth and semantics has the potential to improve ...
arXiv:2104.09224v1 fatcat:au3nqx7kwfds7kqymx2jceq5ga
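
The attention-based fusion described in this abstract can be pictured with a minimal sketch: image and LiDAR feature maps are flattened into tokens, concatenated, and passed through a standard transformer encoder layer so each modality can attend to the other. This only illustrates the general pattern under assumed shapes and module names; it is not the TransFuser implementation.

```python
# Minimal sketch of attention-based image/LiDAR feature fusion (PyTorch).
# Module and tensor names are illustrative, not taken from the TransFuser code.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        # One transformer encoder layer attends over the joint token set, so
        # image tokens can attend to LiDAR tokens and vice versa.
        self.encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)

    def forward(self, img_feat, lidar_feat):
        # img_feat:   (B, C, Hi, Wi) CNN feature map of the RGB image
        # lidar_feat: (B, C, Hl, Wl) CNN feature map of the LiDAR BEV grid
        b, c, hi, wi = img_feat.shape
        _, _, hl, wl = lidar_feat.shape
        img_tok = img_feat.flatten(2).transpose(1, 2)      # (B, Hi*Wi, C)
        lidar_tok = lidar_feat.flatten(2).transpose(1, 2)  # (B, Hl*Wl, C)
        fused = self.encoder(torch.cat([img_tok, lidar_tok], dim=1))
        # Split the fused tokens back into per-modality feature maps.
        img_out = fused[:, :hi * wi].transpose(1, 2).reshape(b, c, hi, wi)
        lidar_out = fused[:, hi * wi:].transpose(1, 2).reshape(b, c, hl, wl)
        return img_out, lidar_out

fusion = AttentionFusion()
img_f, lidar_f = fusion(torch.randn(2, 256, 8, 8), torch.randn(2, 256, 8, 8))
```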

Software/Hardware Co-design for Multi-modal Multi-task Learning in Autonomous Systems [article]

Cong Hao, Deming Chen
2021 arXiv   pre-print
... multi-modal data from different sensors, requiring diverse data preprocessing, sensor fusion, and feature aggregation. ... Therefore, autonomous systems essentially require multi-modal multi-task (MMMT) learning, which must be aware of hardware performance and implementation strategies. ... [7] propose an end-to-end model for multi-modal sensor fusion with visual and depth information from images. Multi-task learning (MTL). ...
arXiv:2104.04000v1 fatcat:vf673pujtvhg7nyqrlppmsk2fa
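
As a rough illustration of the multi-modal multi-task (MMMT) pattern the abstract refers to, the sketch below feeds modality-specific encoders into a shared fused representation consumed by several task heads. All layer sizes, head definitions, and names are hypothetical, not taken from the paper.

```python
# Illustrative MMMT sketch: modality-specific encoders, a shared fused
# representation, and multiple task heads. Sizes and names are assumptions.
import torch
import torch.nn as nn

class MMMTModel(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.rgb_enc = nn.Sequential(nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.depth_enc = nn.Sequential(nn.Conv2d(1, dim, 3, stride=2, padding=1), nn.ReLU(),
                                       nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.fuse = nn.Linear(2 * dim, dim)        # simple concat + projection fusion
        self.detect_head = nn.Linear(dim, 10)      # e.g., object-class logits
        self.control_head = nn.Linear(dim, 2)      # e.g., steering and throttle

    def forward(self, rgb, depth):
        z = torch.relu(self.fuse(torch.cat([self.rgb_enc(rgb), self.depth_enc(depth)], dim=1)))
        return self.detect_head(z), self.control_head(z)

model = MMMTModel()
det, ctrl = model(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))
```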

Navigating an Automated Driving Vehicle via the Early Fusion of Multi-Modality

Malik Haris, Adam Glowacz
2022 Sensors  
This paper focuses on the early fusion of multi-modality and demonstrates how it outperforms a single modality using the CARLA simulator. ... The latter is less well-studied but is becoming more popular since it is easier to use. This article focuses on end-to-end autonomous driving, using RGB pictures as the primary sensor input data. ... Conclusions: This paper presents a comparison of single- and multi-modal perception data for end-to-end driving. ...
doi:10.3390/s22041425 pmid:35214327 pmcid:PMC8878300 fatcat:g7ti567yn5h6jbxbazzeoprnt4
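
The "early fusion" contrasted here with single-modality input can be sketched as channel-wise concatenation of the raw modalities before the first convolution; the channel counts, shapes, and output dimension below are assumptions for illustration only.

```python
# Early fusion sketch: modalities are concatenated along the channel dimension
# before the first convolution, so one network sees all raw inputs together.
# Channel counts and shapes are assumptions, not taken from the paper.
import torch
import torch.nn as nn

def make_backbone(in_channels: int) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, 3),  # e.g., steering / throttle / brake
    )

rgb = torch.randn(4, 3, 128, 128)     # camera image
depth = torch.randn(4, 1, 128, 128)   # depth map
seg = torch.randn(4, 1, 128, 128)     # semantic map (one channel for simplicity)

single_modality = make_backbone(3)(rgb)                               # RGB only
early_fusion = make_backbone(5)(torch.cat([rgb, depth, seg], dim=1))  # fused input
```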

Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer [article]

Hao Shao, Letian Wang, RuoBing Chen, Hongsheng Li, Yu Liu
2022 arXiv   pre-print
In this paper, we propose a safety-enhanced autonomous driving framework, named Interpretable Sensor Fusion Transformer (InterFuser), to fully process and fuse information from multi-modal multi-view sensors ... However, reasoning from a global context requires access to sensors of multiple types and adequate fusion of multi-modal sensor signals, which is difficult to achieve. ... Conclusion: We present InterFuser, a new design for autonomous driving based on an interpretable sensor fusion transformer. ...
arXiv:2207.14024v2 fatcat:y7n2cr2tl5gyhofmfj2ap2rruq

3D Object Detection for Autonomous Driving: A Review and New Outlooks [article]

Jiageng Mao, Shaoshuai Shi, Xiaogang Wang, Hongsheng Li
2022 arXiv   pre-print
Autonomous driving, in recent years, has been receiving increasing attention for its potential to relieve drivers' burdens and improve the safety of driving.  ...  In modern autonomous driving pipelines, the perception system is an indispensable component, aiming to accurately estimate the status of surrounding environments and provide reliable observations for prediction  ...  End-to-end learning for autonomous driving.  ... 
arXiv:2206.09474v1 fatcat:3skws77uqngjtpo6mycpo4dhny

Deep Multi-modal Object Detection for Autonomous Driving

Amal Ennajar, Nadia Khouja, Remi Boutteau, Fethi Tlili
2021 2021 18th International Multi-Conference on Systems, Signals & Devices (SSD)  
In this paper, we present the different deep multi-modal perception techniques that have been proposed in the literature. ... Robust perception in autonomous vehicles is a huge challenge; it is the main tool for detecting and tracking the different kinds of objects around the vehicle. ... CARLA (Intel): an open-source simulator for autonomous driving research. CARLA is a platform for the evaluation of autonomous urban driving systems. ...
doi:10.1109/ssd52085.2021.9429355 fatcat:ek3qpf3dzbed7iyvkcwk6mv4hu

TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving [article]

Kashyap Chitta, Aditya Prakash, Bernhard Jaeger, Zehao Yu, Katrin Renz, Andreas Geiger
2022 arXiv   pre-print
How should we integrate representations from complementary sensors for autonomous driving? Geometry-based fusion has shown promise for perception (e.g., object detection, motion forecasting). ... However, in the context of end-to-end driving, we find that imitation learning based on existing sensor fusion methods underperforms in complex driving scenarios with a high density of dynamic agents. ... Related Work (Multi-Modal Autonomous Driving): Recent multi-modal methods for end-to-end driving [35], [42]-[45] have shown that complementing RGB images with depth and semantics has the potential ...
arXiv:2205.15997v1 fatcat:nyteapdbr5dqbadxliletwnzcy

Deep Multi-modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges [article]

Di Feng, Christian Haase-Schuetz, Lars Rosenbaum, Heinz Hertlein, Claudius Glaeser, Fabian Timm, Werner Wiesbeck, Klaus Dietmayer
2020 arXiv   pre-print
This review paper attempts to systematically summarize methodologies and discuss challenges for deep multi-modal object detection and semantic segmentation in autonomous driving. ... To this end, we first provide an overview of on-board sensors on test vehicles, open datasets, and background information for object detection and semantic segmentation in autonomous driving research. ... Acknowledgment: We thank Fabian Duffhauss for collecting literature and reviewing the paper. ...
arXiv:1902.07830v4 fatcat:or6enjxktnamdmh2yekejjr4re

FUTR3D: A Unified Sensor Fusion Framework for 3D Detection [article]

Xuanyao Chen, Tianyuan Zhang, Yue Wang, Yilun Wang, Hang Zhao
2022 arXiv   pre-print
FUTR3D employs a query-based Modality-Agnostic Feature Sampler (MAFS), together with a transformer decoder with a set-to-set loss for 3D detection, thus avoiding late-fusion heuristics and post-processing ... In this work, we propose the first unified end-to-end sensor fusion framework for 3D detection, named FUTR3D, which can be used in (almost) any sensor configuration. ... systems for autonomous driving. ...
arXiv:2203.10642v1 fatcat:6ggsokiqoves3pzqsybi3hikuq
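
The query-based, modality-agnostic sampling idea can be illustrated with a toy version in which each object query holds a reference point, samples a feature from every modality's feature map at that point, and sums the samples before a transformer decoder step. This simplification (2D BEV reference points, two BEV feature maps, made-up module names) is not the FUTR3D implementation.

```python
# Toy sketch of query-based, modality-agnostic feature sampling.
# Names, shapes, and the 2D-only reference points are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryFeatureSampler(nn.Module):
    def __init__(self, num_queries=100, dim=256):
        super().__init__()
        self.query_embed = nn.Embedding(num_queries, dim)
        self.ref_points = nn.Embedding(num_queries, 2)   # normalized (x, y) in [-1, 1]
        self.decoder_layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)

    def sample(self, feat_map, ref):
        # feat_map: (B, C, H, W); ref: (Q, 2) in [-1, 1] -> sampled features (B, Q, C)
        b = feat_map.size(0)
        grid = ref.view(1, -1, 1, 2).expand(b, -1, -1, -1)        # (B, Q, 1, 2)
        out = F.grid_sample(feat_map, grid, align_corners=False)  # (B, C, Q, 1)
        return out.squeeze(-1).transpose(1, 2)                    # (B, Q, C)

    def forward(self, lidar_bev, camera_bev):
        b = lidar_bev.size(0)
        ref = torch.tanh(self.ref_points.weight)                  # keep refs in [-1, 1]
        # Sum per-query samples across modalities: the sampler is modality-agnostic.
        fused = self.sample(lidar_bev, ref) + self.sample(camera_bev, ref)
        queries = self.query_embed.weight.unsqueeze(0).expand(b, -1, -1)
        # Queries attend to their fused per-modality samples (used here as memory).
        return self.decoder_layer(queries, fused)

sampler = QueryFeatureSampler()
out = sampler(torch.randn(2, 256, 32, 32), torch.randn(2, 256, 32, 32))
```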

Multi-Modality Cascaded Fusion Technology for Autonomous Driving [article]

Hongwu Kuang, Xiaodong Liu, Jingwei Zhang, Zicheng Fang
2020 arXiv   pre-print
Multi-modality fusion guarantees the stability of autonomous driving systems. ... Last, the proposed step-by-step cascaded fusion framework is more interpretable and flexible compared to end-to-end fusion methods. ... In autonomous driving, thanks to the rise of deep learning, not only has single-sensor research (e.g., camera, LiDAR) made great progress [10, 13, 11, 18], but multi-modality fusion has also gained increasing ...
arXiv:2002.03138v1 fatcat:szse6ak5ffemvmmc6mrwc6ucly

Autonomous Navigation in Complex Environments with Deep Multimodal Fusion Network [article]

Anh Nguyen, Ngoc Nguyen, Kim Tran, Erman Tjiputra, Quang D. Tran
2020 arXiv   pre-print
We further show that the use of multiple modalities is essential for autonomous navigation in complex environments. ... We then propose a Navigation Multimodal Fusion Network (NMFNet), which has three branches to effectively handle three visual modalities: laser, RGB images, and point cloud data. ... In [27], Bojarski et al. proposed the first end-to-end navigation system for autonomous cars using 2D images. Smolyanskiy et al. ...
arXiv:2007.15945v1 fatcat:aoyrmjgue5fl5jpnx7xsr3jsgm
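
A three-branch fusion network of the kind described (laser, RGB, and point-cloud encoders feeding a joint control head) can be sketched as below; input sizes, layer widths, and the two-dimensional control output are assumptions, not details from NMFNet.

```python
# Toy three-branch fusion: separate encoders for a laser scan, an RGB image,
# and a point cloud, concatenated before a control head. Sizes are assumptions.
import torch
import torch.nn as nn

class ThreeBranchFusion(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.laser_enc = nn.Sequential(nn.Linear(360, dim), nn.ReLU())          # 1D range scan
        self.rgb_enc = nn.Sequential(nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                     nn.Linear(16, dim), nn.ReLU())
        self.point_enc = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, dim))  # per point
        self.head = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, 2))

    def forward(self, laser, rgb, points):
        # laser: (B, 360); rgb: (B, 3, H, W); points: (B, N, 3)
        p = self.point_enc(points).max(dim=1).values     # PointNet-style symmetric pooling
        z = torch.cat([self.laser_enc(laser), self.rgb_enc(rgb), p], dim=1)
        return self.head(z)                              # e.g., linear and angular velocity

net = ThreeBranchFusion()
cmd = net(torch.randn(2, 360), torch.randn(2, 3, 96, 96), torch.randn(2, 1024, 3))
```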

Multi-Modal Fusion for Sensorimotor Coordination in Steering Angle Prediction [article]

Farzeen Munir, Shoaib Azam, Byung-Geun Lee, Moongu Jeon
2022 arXiv   pre-print
This work explores the fusion of frame-based RGB and event data for learning end-to-end lateral control by predicting the steering angle. ... To this end, we propose DRFuser, a novel convolutional encoder-decoder architecture for learning end-to-end lateral control. ... However, the vision modality, i.e., frame-based RGB cameras, has shown impressive results for end-to-end driving [3]-[6].
arXiv:2202.05500v1 fatcat:yk7e4qskxfashdbpawbkt27up4
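
The frame-plus-event fusion for steering prediction can be pictured as two convolutional encoders (one for RGB frames, one for a simple two-channel event frame of positive/negative polarity counts) whose embeddings are concatenated and regressed to a single angle. This is a hedged sketch under assumed input representations, not the DRFuser encoder-decoder.

```python
# Minimal sketch of RGB + event-frame fusion for steering-angle regression.
# The 2-channel event representation and all layer sizes are assumptions.
import torch
import torch.nn as nn

class SteeringFusionNet(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        def enc(in_ch):
            return nn.Sequential(nn.Conv2d(in_ch, 32, 5, stride=4), nn.ReLU(),
                                 nn.Conv2d(32, dim, 3, stride=2), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rgb_enc = enc(3)
        self.event_enc = enc(2)
        self.regressor = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, rgb, event_frame):
        z = torch.cat([self.rgb_enc(rgb), self.event_enc(event_frame)], dim=1)
        return self.regressor(z).squeeze(-1)   # predicted steering angle per sample

net = SteeringFusionNet()
angle = net(torch.randn(2, 3, 128, 128), torch.randn(2, 2, 128, 128))
```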

Robust Sensor Fusion Algorithms Against Voice Command Attacks in Autonomous Vehicles [article]

Jiwei Guan, Xi Zheng, Chen Wang, Yipeng Zhou, Alireza Jolfaei
2021 arXiv   pre-print
With recent advances in autonomous driving, Voice Control Systems have become increasingly adopted as human-vehicle interaction methods. ... To this end, we propose a novel multimodal deep learning classification system to defend against inaudible command attacks. ... On the other hand, multi-modality sensor fusion algorithms have been widely adopted for Unmanned Aerial Vehicle (UAV) landing and autonomous driving needs, to provide the required robustness and safety assurance ...
arXiv:2104.09872v3 fatcat:orgbuv7b4reyzpv76s5amtnxzy
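
A hypothetical sketch of the multimodal defense idea: an audio feature vector and an image of the driving scene are embedded separately and fused for a benign-versus-attack decision. Every name, dimension, and the binary output below are assumptions used only to show the fusion pattern, not the paper's actual model.

```python
# Hypothetical multimodal classifier for voice-command attack detection.
# Audio features plus a scene image are fused; all details are assumptions.
import torch
import torch.nn as nn

class CommandAttackClassifier(nn.Module):
    def __init__(self, audio_dim=40, dim=64):
        super().__init__()
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, dim), nn.ReLU())
        self.image_enc = nn.Sequential(nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
                                       nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                       nn.Linear(16, dim), nn.ReLU())
        self.classifier = nn.Linear(2 * dim, 2)   # 0 = benign command, 1 = attack

    def forward(self, audio_feat, image):
        z = torch.cat([self.audio_enc(audio_feat), self.image_enc(image)], dim=1)
        return self.classifier(z)

clf = CommandAttackClassifier()
logits = clf(torch.randn(2, 40), torch.randn(2, 3, 96, 96))
```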

MMFN: Multi-Modal-Fusion-Net for End-to-End Driving [article]

Qingwen Zhang, Mingkai Tang, Ruoyu Geng, Feiyi Chen, Ren Xin, Lujia Wang
2022 arXiv   pre-print
Inspired by the fact that humans use diverse sensory organs to perceive the world, sensors with different modalities are deployed in end-to-end driving to obtain the global context of the 3D scene.  ...  In previous works, camera and LiDAR inputs are fused through transformers for better driving performance.  ...  We also thank the anonymous reviewers for their constructive comments.  ... 
arXiv:2207.00186v2 fatcat:7fa3oo54ybc5dpsgoy5533hhtq

BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation [article]

Zhijian Liu, Haotian Tang, Alexander Amini, Xinyu Yang, Huizi Mao, Daniela Rus, Song Han
2022 arXiv   pre-print
Multi-sensor fusion is essential for an accurate and reliable autonomous driving system. Recent approaches are based on point-level fusion: augmenting the LiDAR point cloud with camera features.  ...  In this paper, we break this deeply-rooted convention with BEVFusion, an efficient and generic multi-task multi-sensor fusion framework.  ...  We would like to thank Xuanyao Chen and Brady Zhou for their guidance on detection and segmentation evaluation, and Yingfei Liu and Tiancai Wang for their helpful discussions.  ... 
arXiv:2205.13542v2 fatcat:qtunylgozjcvrdrjzdk23xjpve
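
The BEV-space fusion idea (as opposed to point-level fusion) can be sketched by bringing camera and LiDAR features into a shared bird's-eye-view grid, concatenating per cell, and attaching multiple task heads. The camera-to-BEV view transform, which is the difficult part in practice, is stubbed out here with a 1x1 convolution and interpolation; all names and sizes are illustrative, not the BEVFusion implementation.

```python
# Schematic BEV-space fusion: camera and LiDAR features meet in one BEV grid
# shared by multiple task heads. The view transform is a placeholder.
import torch
import torch.nn as nn

class BEVFusionSketch(nn.Module):
    def __init__(self, cam_ch=64, lidar_ch=64, dim=128, bev=(128, 128)):
        super().__init__()
        self.bev = bev
        # Placeholder "view transform": real systems use depth estimation and
        # camera geometry to splat image features onto the BEV grid.
        self.cam_to_bev = nn.Conv2d(cam_ch, dim, 1)
        self.lidar_proj = nn.Conv2d(lidar_ch, dim, 1)
        self.fuse = nn.Conv2d(2 * dim, dim, 3, padding=1)
        self.det_head = nn.Conv2d(dim, 7, 1)   # e.g., per-cell box parameters
        self.seg_head = nn.Conv2d(dim, 4, 1)   # e.g., per-cell map classes

    def forward(self, cam_feat, lidar_bev):
        cam_bev = nn.functional.interpolate(self.cam_to_bev(cam_feat), size=self.bev)
        fused = torch.relu(self.fuse(torch.cat([cam_bev, self.lidar_proj(lidar_bev)], dim=1)))
        return self.det_head(fused), self.seg_head(fused)

model = BEVFusionSketch()
det, seg = model(torch.randn(1, 64, 32, 88), torch.randn(1, 64, 128, 128))
```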