1,032 Hits in 12.4 sec

Why did the Robot Cross the Road? - Learning from Multi-Modal Sensor Data for Autonomous Road Crossing [article]

Noha Radwan, Wera Winterhalter, Christian Dornhege, Wolfram Burgard
2017 arXiv   pre-print
In this work, we propose a novel multi-modal learning approach for the problem of autonomous street crossing.  ...  Our approach solely relies on laser and radar data and learns a classifier based on Random Forests to predict when it is safe to cross the road.  ...  Meissner et al. use a multi-sensor tracking system for classification of relevant objects [16].  ...
arXiv:1709.06039v1 fatcat:5cpxarxpnbguxolqny6hjmljry
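The abstract above mentions a Random Forest classifier over laser and radar features for the crossing decision. As a minimal sketch only, not the authors' code: the feature names and the toy labeling rule below are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-frame features: nearest obstacle range (m),
# its radial velocity (m/s, negative = approaching), radar track count.
X = rng.uniform(low=[0.5, -15.0, 0.0], high=[50.0, 15.0, 8.0], size=(500, 3))

# Toy stand-in for the real "safe to cross" annotations: safe when the
# nearest object is far away or not closing in quickly.
y = ((X[:, 0] > 20.0) | (X[:, 1] > -1.0)).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
print(clf.predict([[30.0, -0.5, 2.0]]))  # -> [1], i.e. "safe" under the toy rule
```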

ROAD: The ROad event Awareness Dataset for Autonomous Driving [article]

Gurkirt Singh, Stephen Akrigg, Manuele Di Maio, Valentina Fontana, Reza Javanmard Alitappeh, Suman Saha, Kossar Jeddisaravi, Farzad Yousefi, Jacob Culley, Tom Nicholson, Jordan Omokeowa, Salman Khan (+4 others)
2021 arXiv   pre-print
To this purpose, we introduce the ROad event Awareness Dataset (ROAD) for Autonomous Driving, to our knowledge the first of its kind.  ...  ROAD comprises 22 videos, originally from the Oxford RobotCar Dataset, annotated with bounding boxes showing the location in the image plane of each road event.  ...  The latest generation of robot-cars is equipped with a range of different sensors (i.e., laser rangefinders, radar, cameras, GPS) to provide data on what is happening on the road [6].  ...
arXiv:2102.11585v2 fatcat:k25yvvjkonf33clpioyyf5eola
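The ROAD entry describes its annotations as image-plane bounding boxes per road event. A purely illustrative container for one such annotation follows; the field names, recording id, and label vocabulary are assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass

@dataclass
class RoadEvent:
    video: str                                # source recording id (placeholder)
    frame: int                                # frame index within the video
    bbox: tuple[float, float, float, float]   # (x1, y1, x2, y2) in pixels
    label: str                                # annotated event type (hypothetical)

event = RoadEvent(video="video_001", frame=120,
                  bbox=(412.0, 188.0, 501.0, 377.0),
                  label="pedestrian-crossing")
print(event)
```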

Color Vision for Road Following [chapter]

Jill D. Crisman, Charles E. Thorpe
1990 The Kluwer International Series in Engineering and Computer Science  
Reflectance also changes from place to place along the road, as the road surface goes from dirty to clean or from wet to dry.  ...  This report describes progress in vision and navigation for outdoor mobile robots at the Carnegie Mellon Robotics Institute during 1988.  ...  Keith Gremban ported and demonstrated the road following program on the Martin Marietta ALV.  ...
doi:10.1007/978-1-4613-1533-9_2 fatcat:lqsollwsc5c2lbov2gwlmqkyry

Road and Railway Smart Mobility: A High-definition Ground Truth Hybrid Dataset

Redouane Khemmar, Antoine Mauri, Camille Dulompont, Jayadeep Gajula, Vincent Vauchey, Madjid Haddad, Rémi Boutteau
2022 Sensors  
A robust visual understanding of complex urban environments using passive optical sensors is an onerous and essential task for autonomous navigation.  ...  For this purpose, in order to improve the level of instances in datasets used for the training and validation of Autonomous Vehicles (AV), Advanced Driver Assistance Systems (ADAS), and autonomous driving  ...  This is still less time-consuming and expensive than building a multi-modal dataset, which often includes range data from LiDAR or radar [3, 24].  ...
doi:10.3390/s22103922 pmid:35632331 fatcat:gcilmijdtfa35p3scz6bfizd5e

Cost-Efficient Global Robot Navigation in Rugged Off-Road Terrain

T. Braun
2011 Künstliche Intelligenz  
This thesis addresses the problem of finding a global robot navigation strategy for rugged off-road terrain which is robust against inaccurate self-localization and scalable to large environments  ...  And finally, I am very thankful for the unwavering support received from my girlfriend Christiane, who tolerated the many late nights at the lab and constantly provided comfort during difficult times when  ...  A probabilistic online learning framework for autonomous off-road robot navigation is presented in [Erkan 07].  ...
doi:10.1007/s13218-011-0088-9 fatcat:u4ssainmcvc3jmqdon6gpjg674

Vulnerable road users and the coming wave of automated vehicles: Expert perspectives

Wilbert Tabone, Joost de Winter, Claudia Ackermann, Jonas Bärgman, Martin Baumann, Shuchisnigdha Deb, Colleen Emmenegger, Azra Habibovic, Marjan Hagenzieker, P.A. Hancock, Riender Happee, Josef Krems (+6 others)
2021 Transportation Research Interdisciplinary Perspectives  
Acknowledgements This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860410.  ...  These come in different modalities, including LED strips and screens, robotic attachments, projections on the road, and auditory signals, amongst others.  ...  For public transport or robot taxis, you can train a vehicle to drive on specific routes. For the general public, this will be interpreted as fully automated or autonomous driving.  ... 
doi:10.1016/j.trip.2020.100293 fatcat:rwtx64g63ngk5ps6m4elho6qge

Autonomous Driving in Adverse Weather Conditions: A Survey [article]

Yuxiao Zhang, Alexander Carballo, Hanting Yang, Kazuya Takeda
2021 arXiv   pre-print
However, autonomous driving under adverse weather conditions has long been the problem that keeps autonomous vehicles (AVs) from reaching level 4 or higher autonomy.  ...  This paper assesses the influences and challenges that weather brings to ADS sensors in an analytic and statistical way, and surveys the solutions against inclement weather conditions.  ...  “Multi-Modal Sensor Fusion-Based Semantic Segmentation for Snow ...”; [176] Haiyan Wu et al., “Contrastive Learning for Compact Single Image Dehazing”.  ...
arXiv:2112.08936v1 fatcat:hmgjhywy7rgx3fgrk6yxnu56ie

Joint Attention in Driver-Pedestrian Interaction: from Theory to Practice [article]

Amir Rasouli, John K. Tsotsos
2018 arXiv   pre-print
The interaction between road users is a form of negotiation in which the parties involved have to share their attention regarding a common objective or a goal (e.g. crossing an intersection), and coordinate  ...  More specifically, we will discuss the theoretical background behind joint attention, its application to traffic interaction and practical approaches to implementing joint attention for autonomous vehicles  ...  In autonomous driving, different sensor modalities can be used to improve the performance of detection. For instance, Lange et al.  ... 
arXiv:1802.02522v2 fatcat:nzeq5eleajcktjl32m2kyqu7rq

Pedestrian Models for Autonomous Driving Part II: high level models of human behaviour [article]

Fanta Camara, Nicola Bellotto, Serhan Cosar, Florian Weber, Dimitris Nathanael, Matthias Althoff, Jingyuan Wu, Johannes Ruenz, André Dietrich, Gustav Markkula, Anna Schieben, Fabio Tango, Natasha Merat (+1 others)
2020 arXiv   pre-print
Autonomous vehicles (AVs) must share space with human pedestrians, both in on-road cases such as cars at pedestrian crossings and off-road cases such as delivery vehicles navigating through crowds on high-streets  ...  low-level image detection to high-level psychological models, from the perspective of an AV designer.  ...  The authors in [120] present a multi-modal dataset for obstacle detection in agriculture.  ...
arXiv:2003.11959v1 fatcat:acjjwohahvdlxgy56j45fjtkdq

Deep Learning-Based Frameworks for Semantic Segmentation of Road Scenes

Haneen Alokasi, Muhammad Bilal Ahmad
2022 Electronics  
To overcome the lack of sufficient data required for the training process, data augmentation techniques and their experimental results are reviewed.  ...  This paper presents a detailed review of deep learning-based frameworks used for semantic segmentation of road scenes, highlighting their architectures and tasks.  ...  [32] and Ros and Alvarez [46] generated the ground truth from the road detection challenge for 323 images with three classes: sky, vertical, and road. Ros et al.  ...
doi:10.3390/electronics11121884 fatcat:ekykzfnqtjcbla3fh4vlwhkvgu
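Since this review covers data augmentation for segmentation training, here is a minimal sketch of the core constraint involved: geometric transforms must be applied to the image and its label mask in lockstep, while photometric jitter touches the image only. Everything here (shapes, parameters, the three-class mask) is illustrative, not taken from the paper.

```python
import numpy as np

def augment(image: np.ndarray, mask: np.ndarray, rng: np.random.Generator):
    """Random horizontal flip (paired) plus brightness jitter (image only)."""
    if rng.random() < 0.5:
        image = image[:, ::-1]   # flip the width axis of the HxWxC image
        mask = mask[:, ::-1]     # identical flip keeps labels aligned
    image = np.clip(image * rng.uniform(0.8, 1.2), 0, 255)
    return image, mask

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64, 3)).astype(np.float32)
msk = rng.integers(0, 3, (64, 64))      # e.g. sky / vertical / road classes
aug_img, aug_msk = augment(img, msk, rng)
```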

Autonomous Vehicles That Interact With Pedestrians: A Survey of Theory and Practice

Amir Rasouli, John K. Tsotsos
2019 IEEE transactions on intelligent transportation systems (Print)  
To make it a reality, autonomous vehicles require the ability to communicate with other road users and understand their intentions.  ...  We will also review the practical applications aimed at solving the interaction problem, including design approaches for autonomous vehicles that communicate with pedestrians and visual perception and  ...  The authors use a pre-recorded video of the pedestrians who were instructed to engage in various activities with the robot (e.g. approaching the robot for interaction or simply blocking its way), to learn  ... 
doi:10.1109/tits.2019.2901817 fatcat:anhktkdjx5hnphwvoksfxthdfi

Collaborative Multi-Robot Search and Rescue: Planning, Coordination, Perception and Active Vision

Jorge Pena Queralta, Jussi Taipalmaa, Bilge Can Pullinen, Victor Kathan Sarker, Tuan Nguyen Gia, Hannu Tenhunen, Moncef Gabbouj, Jenni Raitoharju, Tomi Westerlund
2020 IEEE Access  
ACKNOWLEDGMENT This research work is supported by the Academy of Finland's AutoSOS project (Grant No. 328755).  ...  data from different modalities to a joint space, alignment, i.e., how to understand the relations of the elements of data from different modalities, for example, which parts of the data describe the same  ...  development of multi-modal sensor fusion algorithms.  ... 
doi:10.1109/access.2020.3030190 fatcat:exigopjplzgfzlghxvr7s3l3di
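The snippet above touches on mapping data from different modalities into a joint space and on alignment between modalities. Below is a toy linear sketch of that idea; the modalities, dimensions, and random projections are placeholders, not any method surveyed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W_cam = rng.normal(size=(128, 32))       # camera features  -> joint space
W_thermal = rng.normal(size=(64, 32))    # thermal features -> joint space

cam_feat = rng.normal(size=128)          # per-detection camera descriptor
thermal_feat = rng.normal(size=64)       # per-detection thermal descriptor

z_cam = cam_feat @ W_cam                 # both now live in the same 32-D space
z_th = thermal_feat @ W_thermal

# Alignment question: do these two embeddings describe the same element?
cos = z_cam @ z_th / (np.linalg.norm(z_cam) * np.linalg.norm(z_th))
print(f"cross-modal similarity: {cos:.3f}")
```

In practice the projections would be learned (for instance contrastively) rather than drawn at random; the sketch only shows what a joint space buys: a common geometry in which cross-modal elements can be compared directly.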

Collaborative Multi-Robot Systems for Search and Rescue: Coordination and Perception [article]

Jorge Peña Queralta, Jussi Taipalmaa, Bilge Can Pullinen, Victor Kathan Sarker, Tuan Nguyen Gia, Hannu Tenhunen, Moncef Gabbouj, Jenni Raitoharju, Tomi Westerlund
2020 arXiv   pre-print
In this paper, we review and analyze the existing approaches to multi-robot SAR support, from an algorithmic perspective, putting an emphasis on the methods enabling collaboration among the robots as  ...  multi-robot SAR systems.  ...  ACKNOWLEDGMENT This research work is supported by the Academy of Finland's AutoSOS project (Grant No. 328755).  ...
arXiv:2008.12610v1 fatcat:hq5lqtnsoreapjm4dpgg4z5xki

A2D2: Audi Autonomous Driving Dataset [article]

Jakob Geyer, Yohannes Kassahun, Mentar Mahmudi, Xavier Ricou, Rupesh Durgesh, Andrew S. Chung, Lorenz Hauswald, Viet Hoang Pham, Maximilian Mühlegg, Sebastian Dorn, Tiffany Fernandez, Martin Jänicke (+7 others)
2020 arXiv   pre-print
Research in machine learning, mobile robotics, and autonomous driving is accelerated by the availability of high quality annotated data.  ...  In addition, we provide 392,556 sequential frames of unannotated sensor data for recordings in three cities in the south of Germany. These sequences contain several loops.  ...  in checking the quality of the dataset.  ... 
arXiv:2004.06320v1 fatcat:oirikiaxjbb6zn2pieu3qmhkou

Explainable artificial intelligence for autonomous driving: An overview and guide for future research directions [article]

Shahin Atakishiyev, Mohammad Salameh, Hengshuai Yao, Randy Goebel
2022 arXiv   pre-print
First, we provide a thorough overview of the state-of-the-art studies on XAI for autonomous driving.  ...  We then propose an XAI framework that considers all the societal and legal requirements for explainability of autonomous driving systems.  ...  ACKNOWLEDGMENT We acknowledge support from the Alberta Machine Intelligence Institute (Amii), from the Computing Science Department of the University of Alberta, and the Natural Sciences and Engineering  ... 
arXiv:2112.11561v2 fatcat:zluqlvmtznh25eihtouubib3ba
Showing results 1 — 15 out of 1,032 results