1,437 results (showing 1–15)

Evaluating the Robustness of Deep Reinforcement Learning for Autonomous and Adversarial Policies in a Multi-agent Urban Driving Environment [article]

Aizaz Sharif, Dusica Marijan
<span title="2022-05-27">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Deep reinforcement learning is actively used for training autonomous and adversarial car policies in a simulated driving environment. ... A benchmarking framework for comparing deep reinforcement learning approaches in vision-based autonomous driving will open up possibilities for training better autonomous car driving policies. ... Reinforcement learning (RL) is mainly modeled as a Markov Decision Process (MDP), where the desired goal of the agents in a certain ... (a minimal sketch of the MDP objective follows this entry)
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.11947v2">arXiv:2112.11947v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/xfmgcgfnrbhe3cimnd4eedoqjq">fatcat:xfmgcgfnrbhe3cimnd4eedoqjq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220531042926/https://arxiv.org/pdf/2112.11947v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/c8/82/c88262c0b73a6e57b5fd6fa196c140a71785221e.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.11947v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Adversarial Deep Reinforcement Learning for Improving the Robustness of Multi-agent Autonomous Driving Policies [article]

Aizaz Sharif, Dusica Marijan
<span title="2022-05-27">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Our methodology supports testing ACs in a multi-agent environment, where we train and compare adversarial car policies on two custom reward functions to test the driving control decisions of autonomous cars ... Our results show that adversarial testing can be used for finding erroneous autonomous driving behavior, followed by adversarial training for improving the robustness of deep reinforcement learning based ... Fig. 2: Illustration of the MAD-ARL framework for improving the robustness of AC driving policies in a multi-agent environment. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.11937v2">arXiv:2112.11937v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/74tsuibo2jeqnpefl7lazo7o2a">fatcat:74tsuibo2jeqnpefl7lazo7o2a</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220531043535/https://arxiv.org/pdf/2112.11937v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/29/17/291716577772d9455afb42f009be940122ce40a6.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.11937v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning [article]

Praveen Palanisamy
<span title="2019-11-11">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We provide a taxonomy of multi-agent learning environments based on the nature of tasks, the nature of agents, and the nature of the environment to help categorize the various autonomous driving problems that ... Deep Reinforcement Learning (RL) provides a promising and scalable framework for developing adaptive learning-based solutions. ... In the formulation presented in Section 2, the goal of each agent is, formally, to maximize the expected value of its long-term future ... (the per-agent objective is sketched after this entry)
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1911.04175v1">arXiv:1911.04175v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/rb6uf5mvwfhntiilsdqo5uw7ze">fatcat:rb6uf5mvwfhntiilsdqo5uw7ze</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200823112053/https://arxiv.org/pdf/1911.04175v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/4e/9c/4e9ce5c446213d73157f3c84f42683d79cb3918f.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1911.04175v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Multi-Agent Vulnerability Discovery for Autonomous Driving with Hazard Arbitration Reward [article]

Weilin Liu, Ye Mu, Chao Yu, Xuefei Ning, Zhong Cao, Yi Wu, Shuang Liang, Huazhong Yang, Yu Wang
<span title="2021-12-12">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
To this end, this work proposes a Safety Test framework by finding AV-Responsible Scenarios (STARS) based on multi-agent reinforcement learning. ... On the one hand, the probability of naturally encountering hazardous scenarios is low when testing a well-trained autonomous driving strategy. ... Acknowledgment: The authors gratefully acknowledge the support from TOYOTA. This work was also supported by the Beijing National Research Center for Information Science and Technology (BNRist). ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.06185v1">arXiv:2112.06185v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/fdava3cuyzezdksbqeifo3ohwm">fatcat:fdava3cuyzezdksbqeifo3ohwm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211215010547/https://arxiv.org/pdf/2112.06185v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/40/ee/40ee510f3e897462894b36cfec55312e197ae9b8.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.06185v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Deep Reinforcement Learning for Autonomous Driving: A Survey [article]

B Ravi Kiran, Ibrahim Sobh, Victor Talpaert, Patrick Mannion, Ahmad A. Al Sallab, Senthil Yogamani, Patrick Pérez
<span title="2021-01-23">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework, now capable of learning complex policies in high-dimensional environments ... in real-world deployment of autonomous driving agents. ... In multi-agent reinforcement learning (MARL), multiple RL agents are deployed into a common environment. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2002.00444v2">arXiv:2002.00444v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/axj3ohhjwzdrxp6dgpfqvctv2i">fatcat:axj3ohhjwzdrxp6dgpfqvctv2i</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210226121634/https://arxiv.org/pdf/2002.00444v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/3f/73/3f7310e1eda49597e96b09eeaad6fe4d16abd3c4.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2002.00444v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Survey of Deep Reinforcement Learning Algorithms for Motion Planning and Control of Autonomous Vehicles [article]

Fei Ye, Shen Zhang, Pin Wang, Ching-Yao Chan
<span title="2021-06-01">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this survey, we systematically summarize the current literature on studies that apply reinforcement learning (RL) to the motion planning and control of autonomous vehicles. ... Finally, the remaining challenges of applying deep RL algorithms to autonomous driving are summarized, and future research directions are presented to tackle these challenges. ... Deep reinforcement learning has shown great success in vehicle behavioral decision making, especially in highway scenarios and at urban intersections. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2105.14218v2">arXiv:2105.14218v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/27glt4i4lfhg3j4ozjrlsq6i3e">fatcat:27glt4i4lfhg3j4ozjrlsq6i3e</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210611032731/https://arxiv.org/pdf/2105.14218v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/e1/0c/e10c268be73a7e14fded6bd990cd8543c2ccc8a0.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2105.14218v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Survey of Deep Learning Applications to Autonomous Vehicle Control [article]

Sampo Kuutti, Richard Bowden, Yaochu Jin, Phil Barber, Saber Fallah
<span title="2019-12-23">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Designing a controller for autonomous vehicles capable of providing adequate performance in all driving scenarios is challenging due to the highly complex environment and the inability to test the system in ... For these reasons, the use of deep learning for vehicle control is becoming increasingly popular. ... The rapid progress in the implementation of deep learning systems on autonomous vehicles has led to the availability of diverse deep learning datasets for autonomous ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1912.10773v1">arXiv:1912.10773v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/vtmdnxgt7zdadnlyn7cfyvldai">fatcat:vtmdnxgt7zdadnlyn7cfyvldai</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200909043703/https://arxiv.org/pdf/1912.10773v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/3f/2c/3f2ca324c18b36029d09a97a6dffee6911be857d.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1912.10773v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Guest Editorial Introduction to the Special Issue on Deep Learning Models for Safe and Secure Intelligent Transportation Systems

Alireza Jolfaei, Neeraj Kumar, Min Chen, Krishna Kant
<span title="">2021</span> <i title="Institute of Electrical and Electronics Engineers (IEEE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/in6o6x6to5e2dls4y2ff52dy6u" style="color: black;">IEEE transactions on intelligent transportation systems (Print)</a> </i> &nbsp;
He has published in a wide variety of areas in computer science and authored a graduate textbook on performance modeling of computer systems. ... He has a combined 40 years of experience in academia, industry, and government. ... to collect sensing data in the urban environment and a deep-learning-based offline algorithm to predict vehicle mobility in a future time period. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tits.2021.3090721">doi:10.1109/tits.2021.3090721</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/c2o2vno6bjbnxdn6y4zm7ztmvq">fatcat:c2o2vno6bjbnxdn6y4zm7ztmvq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210714191033/https://ieeexplore.ieee.org/ielx7/6979/9480662/09480797.pdf?tp=&amp;arnumber=9480797&amp;isnumber=9480662&amp;ref=" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/77/26/77266fa9382cf9a5129c29dbccb13b791f16ad20.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tits.2021.3090721"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

CIRL: Controllable Imitative Reinforcement Learning for Vision-Based Self-driving [chapter]

Xiaodan Liang, Tairui Wang, Luona Yang, Eric Xing
<span title="">2018</span> <i title="Springer International Publishing"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2w3awgokqne6te4nvlofavy5a4" style="color: black;">Lecture Notes in Computer Science</a> </i> &nbsp;
Autonomous urban driving navigation with complex multi-agent dynamics is under-explored due to the difficulty of learning an optimal driving policy. ... We present a general and principled Controllable Imitative Reinforcement Learning (CIRL) approach which successfully makes the driving agent achieve higher success rates based only on vision inputs in ... Autonomous urban driving is a long-studied and still under-explored task [27, 31], particularly in crowded urban environments [25]. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-030-01234-2_36">doi:10.1007/978-3-030-01234-2_36</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ogluc4r4jnglljbw4liklwednu">fatcat:ogluc4r4jnglljbw4liklwednu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190819021433/http://openaccess.thecvf.com:80/content_ECCV_2018/papers/Xiaodan_Liang_CIRL_Controllable_Imitative_ECCV_2018_paper.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/b7/a6/b7a61d511fea8b2feb6c9634967edef10a1c1b3c.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-030-01234-2_36"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

CIRL: Controllable Imitative Reinforcement Learning for Vision-based Self-driving [article]

Xiaodan Liang, Tairui Wang, Luona Yang, Eric Xing
<span title="2018-07-10">2018</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Autonomous urban driving navigation with complex multi-agent dynamics is under-explored due to the difficulty of learning an optimal driving policy. ... We present a general and principled Controllable Imitative Reinforcement Learning (CIRL) approach which successfully makes the driving agent achieve higher success rates based only on vision inputs in ... Autonomous urban driving is a long-studied and still under-explored task [1, 2], particularly in crowded urban environments [3]. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1807.03776v1">arXiv:1807.03776v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/2pa3slj255c77ngk76tvwtufmq">fatcat:2pa3slj255c77ngk76tvwtufmq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20191023044036/https://arxiv.org/pdf/1807.03776v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/fb/81/fb81a5a4cc374c7e037cb96b89fda0b734143de0.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1807.03776v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Recent Advances in Deep Reinforcement Learning Applications for Solving Partially Observable Markov Decision Processes (POMDP) Problems Part 2—Applications in Transportation, Industries, Communications and Networking and More Topics

Xuanchen Xiang, Simon Foo, Huanyu Zang
<span title="">2021</span> <i title="MDPI AG"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/tjwucdga6zfftlebfsmbvxjiyy" style="color: black;">Machine Learning and Knowledge Extraction</a> </i> &nbsp;
The two-part series of papers provides a survey on recent advances in Deep Reinforcement Learning (DRL) for solving partially observable Markov decision process (POMDP) problems. ... The first part of the overview introduces Markov Decision Process (MDP) problems and Reinforcement Learning, and covers applications of DRL for solving POMDP problems in games, robotics, and natural language ... Acknowledgments: The authors would like to express their appreciation to friends and colleagues who provided assistance during the preparation of this paper. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/make3040043">doi:10.3390/make3040043</a> <a target="_blank" rel="external noopener" href="https://doaj.org/article/45bf00de595c44d186fa3d200589c1c5">doaj:45bf00de595c44d186fa3d200589c1c5</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/qx4srh7qabgjvd5l6lj6nulhxa">fatcat:qx4srh7qabgjvd5l6lj6nulhxa</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220312171424/https://mdpi-res.com/d_attachment/make/make-03-00043/article_deploy/make-03-00043.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f4/21/f42157d5df03cf50bdd1d1213bfd0c1de28cb1b8.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/make3040043"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> mdpi.com </button> </a>

Imitation Learning: Progress, Taxonomies and Opportunities [article]

Boyuan Zheng, Sunny Verma, Jianlong Zhou, Ivor Tsang, Fang Chen
<span title="2021-06-23">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Its success has been demonstrated in areas such as video games, autonomous driving, robotic simulations, and object manipulation. ... However, this replicating process can be problematic: for example, performance is highly dependent on the demonstration quality, and most trained agents are limited to performing well in task-specific environments ... In this case, the evaluation approaches and focus can vary from method to method, ranging from performance in sparse-reward scenarios to the smoothness of autonomous driving in dynamic environments. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2106.12177v1">arXiv:2106.12177v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/wcvld6wvbffq5iht5z5cb563yi">fatcat:wcvld6wvbffq5iht5z5cb563yi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210625141127/https://arxiv.org/pdf/2106.12177v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/84/71/8471c74d315874d8d7964e3149bc659480c5bbdd.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2106.12177v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Towards Safer Self-Driving Through Great PAIN (Physically Adversarial Intelligent Networks) [article]

Piyush Gupta, Demetris Coleman, Joshua E. Siegel
<span title="2020-03-24">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
To this end, we introduce a "Physically Adversarial Intelligent Network" (PAIN), wherein self-driving vehicles interact aggressively in the CARLA simulation environment. ... We train two agents, a protagonist and an adversary, using dueling double deep Q networks (DDDQNs) with prioritized experience replay (a minimal sketch of the dueling/double components follows this entry). ... Acknowledgements: We thank the NVIDIA Corporation for providing a Titan Xp and Vaibhav Srivastava for providing additional resources supporting this research. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2003.10662v1">arXiv:2003.10662v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/kva3zh66fbawpdpzfxxosodvym">fatcat:kva3zh66fbawpdpzfxxosodvym</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200326024315/https://arxiv.org/pdf/2003.10662v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2003.10662v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Generative Adversarial Imitation Learning for End-to-End Autonomous Driving on Urban Environments [article]

Gustavo Claudio Karl Couto, Eric Aislan Antonelo
<span title="2021-10-16">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this work, we propose two variations of GAIL for autonomous navigation of a vehicle in the realistic CARLA simulation environment for urban scenarios. ... Autonomous driving is a complex task, which has been tackled since the first self-driving car, ALVINN, in 1989, with a supervised learning approach, or behavioral cloning (BC). ... In this work, we have proposed a GAIL-based architecture for end-to-end autonomous driving in urban environments (the standard GAIL objective is sketched after this entry for reference). ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2110.08586v1">arXiv:2110.08586v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/773rv4afifasvk46kfntfu5g2m">fatcat:773rv4afifasvk46kfntfu5g2m</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211021033659/https://arxiv.org/pdf/2110.08586v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/67/80/6780e340da00a44e583b29946fe15df3b6565e79.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2110.08586v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

SADA: Semantic Adversarial Diagnostic Attacks for Autonomous Applications

Abdullah Hamdi, Matthias Mueller, Bernard Ghanem
<span title="2020-04-03">2020</span> <i title="Association for the Advancement of Artificial Intelligence (AAAI)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/wtjcymhabjantmdtuptkk62mlq" style="color: black;">PROCEEDINGS OF THE THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE AND THE TWENTY-EIGHTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE</a> </i> &nbsp;
One major factor impeding more widespread adoption of deep neural networks (DNNs) is their lack of robustness, which is essential for safety-critical applications such as autonomous driving. ... In contrast, we present a general framework for adversarial attacks on trained agents, which covers semantic perturbations to the environment of the agent performing the task as well as pixel-level attacks ... Acknowledgments: This work was supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research under Award No. RGC/3/3570-01-01. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1609/aaai.v34i07.6722">doi:10.1609/aaai.v34i07.6722</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/2qhxyl23pffcbbjacth7lsurmu">fatcat:2qhxyl23pffcbbjacth7lsurmu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201104104826/https://aaai.org/ojs/index.php/AAAI/article/download/6722/6576" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/04/00/04005e5f0ccabd7096b4a01082981dd9e5c1d104.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1609/aaai.v34i07.6722"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>