Learning drivers for TORCS through imitation using supervised methods

Luigi Cardamone, Daniele Loiacono, Pier Luca Lanzi
2009 2009 IEEE Symposium on Computational Intelligence and Games  
In this paper, we apply imitation learning to develop drivers for The Open Racing Car Simulator (TORCS).  ...  Our approach can be classified as a direct method in that it applies supervised learning to learn car racing behaviors from the data collected from other drivers.  ...  IMITATION LEARNING IN TORCS USING SUPERVISED METHODS In this work, we applied supervised learning to develop car controllers for The Open Racing Car Simulator from the logs collected from other drivers.  ...
doi:10.1109/cig.2009.5286480 dblp:conf/cig/CardamoneLL09 fatcat:jewvxvdxzrf5plvtsrrujzevjm
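
The "direct method" described in this entry is essentially behavioral cloning: fit a supervised model that maps logged sensor readings to the recorded driver's actions. Below is a minimal sketch of that idea in PyTorch, assuming a hypothetical log of (range-finder sensors, steering) pairs; the placeholder data, column layout, and network shape are illustrative, not the authors' setup.

```python
# Minimal behavioral-cloning sketch (assumed data layout, not the paper's exact setup).
import torch
import torch.nn as nn

# Hypothetical log: 19 TORCS-style range-finder readings -> recorded steering command.
sensors = torch.randn(5000, 19)          # placeholder for logged sensor rows
steering = torch.tanh(sensors[:, 9:10])  # placeholder for the logged driver actions

policy = nn.Sequential(
    nn.Linear(19, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Tanh(),         # steering output in [-1, 1]
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(20):                  # plain supervised regression on the log
    pred = policy(sensors)
    loss = loss_fn(pred, steering)
    opt.zero_grad()
    loss.backward()
    opt.step()
```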

Imitation learning of car driving skills with decision trees and random forests

Paweł Cichosz, Łukasz Pawełczak
2014 International Journal of Applied Mathematics and Computer Science  
One common learning scenario that is often possible to apply is learning by imitation, in which the behavior of an exemplary driver provides training instances for a supervised learning algorithm.  ...  Machine learning is an appealing and useful approach to creating vehicle control algorithms, both for simulated and real vehicles.  ...  Training instances for imitation learning are generated using the Inferno bot distributed with TORCS as the exemplary driver.  ... 
doi:10.2478/amcs-2014-0042 fatcat:xnyegcwfozdadhqr54grlbsbhe
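
Since this paper learns driving by imitation with decision trees and random forests, a short scikit-learn sketch makes the training scenario concrete: regress the exemplary driver's action from logged state vectors. The synthetic data and feature dimensions below are assumptions for illustration only (the paper logs the TORCS Inferno bot).

```python
# Sketch of imitation by random-forest regression on logged (state, action) pairs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
states = rng.normal(size=(10_000, 22))           # placeholder sensor vectors
actions = np.tanh(states[:, 0] - states[:, 1])   # placeholder steering labels

X_train, X_test, y_train, y_test = train_test_split(states, actions, test_size=0.2)
model = RandomForestRegressor(n_estimators=100, max_depth=12, n_jobs=-1)
model.fit(X_train, y_train)                      # imitate the exemplary driver
print("held-out R^2:", model.score(X_test, y_test))
```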

Gaze Training by Modulated Dropout Improves Imitation Learning [article]

Yuying Chen, Congcong Liu, Lei Tai, Ming Liu, Bertram E. Shi
2019 arXiv   pre-print
Imitation learning by behavioral cloning is a prevalent method that has achieved some success in vision-based autonomous driving.  ...  We propose a method, gaze-modulated dropout, for integrating this gaze information into a deep driving network implicitly rather than as an additional input.  ...  Among various imitation learning methods, one typical solution is behavioral cloning through supervised learning.  ... 
arXiv:1904.08377v2 fatcat:yleqwqx2nfbjbab5ela26p67ne
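
The following is a hedged sketch of the gaze-modulated-dropout idea mentioned in the snippet: drop feature-map activations more aggressively where a gaze saliency map indicates the expert was not looking. It is an illustrative re-implementation with assumed tensor shapes and a made-up `base_keep` parameter, not the authors' released code.

```python
import torch

def gaze_modulated_dropout(features, gaze, base_keep=0.5, training=True):
    """features: (N, C, H, W); gaze: (N, 1, H, W) saliency in [0, 1]."""
    if not training:
        return features
    # Keep probability rises with gaze; clamp to a sensible range.
    keep = (base_keep + (1.0 - base_keep) * gaze).clamp(0.05, 1.0)
    mask = torch.bernoulli(keep.expand_as(features))
    return features * mask / keep          # inverted-dropout style rescaling

feats = torch.randn(2, 32, 20, 20)
gaze = torch.rand(2, 1, 20, 20)
out = gaze_modulated_dropout(feats, gaze)
```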

Robust player imitation using multiobjective evolution

Niels van Hoorn, Julian Togelius, Daan Wierstra, Jurgen Schmidhuber
2009 2009 IEEE Congress on Evolutionary Computation  
The problem of how to create NPC AI for videogames that believably imitates particular human players is addressed.  ...  Previous approaches to learning player behaviour are found to either not generalize well to new environments and noisy perceptions, or to not reproduce human behaviour in sufficient detail.  ...  capable of training recurrent neural networks for supervised learning tasks [14] .  ...
doi:10.1109/cec.2009.4983007 dblp:conf/cec/HoornTWS09 fatcat:ifbvsf4hxzaodpuvnb4hitgtmm

Exploring applications of deep reinforcement learning for real-world autonomous driving systems [article]

Victor Talpaert, Ibrahim Sobh, B Ravi Kiran, Patrick Mannion, Senthil Yogamani, Ahmad El-Sallab, Patrick Perez
2019 arXiv   pre-print
However, a vast majority of work on DRL is focused on toy examples in controlled synthetic car simulator environments such as TORCS and CARLA.  ...  Deep Reinforcement Learning (DRL) has become increasingly powerful in recent years, with notable achievements such as Deepmind's AlphaGo.  ...  Bootstrapping RL with imitation: the ability to learn by imitation is used by humans to teach other humans new skills.  ...
arXiv:1901.01536v3 fatcat:y3gck5rznjglvim4gem5dvb2ue
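
The "bootstrapping RL with imitation" theme this survey raises is commonly realized by pretraining the policy on expert data before RL fine-tuning. A minimal sketch under that assumption is below; the network, demonstration data, and two-phase split are illustrative, and no specific paper's pipeline is reproduced.

```python
# Sketch: pretrain the policy on expert (state, action) pairs, then hand the same
# network to an RL fine-tuning loop so exploration starts from a sensible policy.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2), nn.Tanh())
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Phase 1: behavioral-cloning pretraining on a (placeholder) demonstration set.
demo_states = torch.randn(4096, 10)
demo_actions = torch.tanh(demo_states[:, :2])
for _ in range(50):
    loss = nn.functional.mse_loss(policy(demo_states), demo_actions)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Phase 2 (not shown): plug `policy` into an actor-critic loop (e.g. DDPG/PPO)
# so RL refines a driving-competent initialization instead of random noise.
```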

End-to-End Autonomous Driving Through Dueling Double Deep Q-Network

Baiyu Peng, Qi Sun, Shengbo Eben Li, Dongsuk Kum, Yuming Yin, Junqing Wei, Tianyu Gu
2021 Automotive Innovation  
This paper puts forward an end-to-end autonomous driving method through a deep reinforcement learning algorithm Dueling Double Deep Q-Network, making it possible for the vehicle to learn end-to-end driving  ...  Thirdly, the proposed method is applied to The Open Racing Car Simulator (TORCS) to demonstrate its great performance, where it surpasses human drivers.  ...  An end-to-end autonomous driving method through Dueling Double Deep Q-Network on TORCS.  ... 
doi:10.1007/s42154-021-00151-3 fatcat:ssl63wjh4rf3bcpcloittl6yoe
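
To make the title's two ingredients concrete, here is an illustrative PyTorch sketch of a dueling Q-head and the Double-DQN target. The observation size, the number of discretized actions, and the layer widths are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, obs_dim=29, n_actions=9):    # e.g. discretized steering bins
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.value = nn.Linear(128, 1)               # state-value stream V(s)
        self.adv = nn.Linear(128, n_actions)         # advantage stream A(s, a)

    def forward(self, obs):
        h = self.body(obs)
        v, a = self.value(h), self.adv(h)
        return v + a - a.mean(dim=1, keepdim=True)   # Q(s,a) = V + A - mean(A)

online, target = DuelingQNet(), DuelingQNet()
target.load_state_dict(online.state_dict())

def double_dqn_target(reward, next_obs, done, gamma=0.99):
    # Double DQN: the online net picks the action, the target net evaluates it.
    best = online(next_obs).argmax(dim=1, keepdim=True)
    q_next = target(next_obs).gather(1, best).squeeze(1)
    return reward + gamma * (1.0 - done) * q_next

y = double_dqn_target(torch.zeros(4), torch.randn(4, 29), torch.zeros(4))
```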

Deep imitation reinforcement learning for self‐driving by vision

Qijie Zou, Kang Xiong, Qiang Fang, Bohan Jiang
2021 CAAI Transactions on Intelligence Technology  
The DIRL framework comprises two components, the perception module and the control module, using imitation learning (IL) and DDPG, respectively.  ...  In addition, a reward function for reinforcement learning is defined to improve the stability of self-driving vehicles, especially on curves.  ...  The advantage of IL is that it allows agents to quickly obtain expert driving strategies from demonstration data through supervised learning, and therefore IL is widely used for all kinds of driving tasks  ... 
doi:10.1049/cit2.12025 fatcat:d6qxys5oxjhmbfaevwj4yd3g3e
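
The snippet mentions a reward function designed to keep the vehicle stable, especially on curves. The sketch below shows a commonly used TORCS-style shaping of that kind (progress along the track axis minus drift and lateral-offset penalties); it is not necessarily this paper's exact formula.

```python
import math

def driving_reward(speed, angle, track_pos):
    """speed: car speed (m/s); angle: heading error vs. track axis (rad);
    track_pos: normalized lateral offset from the track centre (-1..1)."""
    progress = speed * math.cos(angle)       # reward forward progress
    drift = speed * abs(math.sin(angle))     # penalize sideways motion
    off_centre = speed * abs(track_pos)      # penalize leaving the centre line
    return progress - drift - off_centre

print(driving_reward(speed=30.0, angle=0.05, track_pos=0.1))
```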

The 2009 Simulated Car Racing Championship

Daniele Loiacono, Pier Luca Lanzi, Julian Togelius, Enrique Onieva, David A Pelta, Martin V Butz, Thies D Lönneker, Luigi Cardamone, Diego Perez, Yago Sáez, Mike Preuss, Jan Quadflieg
2010 IEEE Transactions on Computational Intelligence and AI in Games  
Then, the five best teams describe the methods of computational intelligence they used to develop their drivers and the lessons they learned from their participation in the championship.  ...  Finally, we summarize the championship results, followed by a discussion about what the organizers learned about 1) the development of high-performing car racing controllers and 2) the organization of  ...  tried to develop drivers by imitating human players using forms of supervised learning.  ...
doi:10.1109/tciaig.2010.2050590 fatcat:uqxuu55u6zgy7ne5cabbhrjeiy

A Survey of Deep Reinforcement Learning Algorithms for Motion Planning and Control of Autonomous Vehicles [article]

Fei Ye, Shen Zhang, Pin Wang, Ching-Yao Chan
2021 arXiv   pre-print
In this survey, we systematically summarize the current literature on studies that apply reinforcement learning (RL) to the motion planning and control of autonomous vehicles.  ...  Many existing contributions can be attributed to the pipeline approach, which consists of many hand-crafted modules, each with a functionality selected for the ease of human interpretation.  ...  Learned policies not only transfer directly to the real world (B), but also outperform state-of-the-art end-to-end methods trained using imitation learning.  ... 
arXiv:2105.14218v2 fatcat:27glt4i4lfhg3j4ozjrlsq6i3e

Comparative Study of End-to-end Deep Learning Methods for Self-driving Car

Fenjiro Youssef, Benbrahim Houda (National School of Computer Science and Systems Analysis (ENSIAS), Mohammed V University in Rabat, Morocco)
2020 International Journal of Intelligent Systems and Applications  
beginning with Deep Imitation Learning (IL), specifically the Conditional Imitation Learning (COIL) algorithm, which learns from expert-labeled demonstrations by trying to mimic their behavior, and  ...  and error, while adopting Markov decision processes (MDP) to obtain the best policy for the driver agent.  ...  Supervised Learning Methods The end-to-end model for self-driving cars trained in supervised mode is based on an imitation learning algorithm [39, 40] called behavioral cloning.  ...
doi:10.5815/ijisa.2020.05.02 fatcat:d674qhcsgveivgyy22ewxavavm
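
Conditional Imitation Learning, mentioned in the snippet, conditions the policy on a high-level navigation command by routing a shared feature vector through per-command branches. The sketch below illustrates that branched structure; the layer sizes, four-command set, and input dimensions are placeholders, not the COIL paper's network.

```python
import torch
import torch.nn as nn

class ConditionalPolicy(nn.Module):
    COMMANDS = 4                                   # e.g. follow, left, right, straight

    def __init__(self, feat_dim=128, act_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(64, feat_dim), nn.ReLU())
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))
            for _ in range(self.COMMANDS)
        ])

    def forward(self, obs, command):
        feats = self.encoder(obs)
        outs = torch.stack([b(feats) for b in self.branches], dim=1)  # (N, 4, act)
        idx = command.view(-1, 1, 1).expand(-1, 1, outs.size(-1))
        return outs.gather(1, idx).squeeze(1)      # pick the commanded branch

policy = ConditionalPolicy()
actions = policy(torch.randn(8, 64), torch.randint(0, 4, (8,)))
```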

Approximate Inverse Reinforcement Learning from Vision-based Imitation Learning [article]

Keuntaek Lee, Bogdan Vlahov, Jason Gibson, James M. Rehg, Evangelos A. Theodorou
2021 arXiv   pre-print
We use Imitation Learning as a means to do Inverse Reinforcement Learning in order to create an approximate cost function generator for a visual navigation challenge.  ...  The proposed methodology relies on Imitation Learning, Model Predictive Control (MPC), and an interpretation technique used in Deep Neural Networks.  ...  By utilizing the powerful feature extracting nature of CNNs, our method automates the generalizable cost function learning through Imitation Learning by linking the extracted features to the cost function  ... 
arXiv:2004.08051v3 fatcat:qmqa2uoirbhxdfrra3wrva5w6i

Learning Temporal Strategic Relationships using Generative Adversarial Imitation Learning [article]

Tharindu Fernando, Simon Denman, Sridha Sridharan, Clinton Fookes
2018 arXiv   pre-print
methods.  ...  This paper presents a novel framework for automatic learning of complex strategies in human decision making.  ...  The authors also thank QUT High Performance Computing (HPC) for providing the computational resources for this research.  ... 
arXiv:1805.04969v1 fatcat:c4ou6oa5hbeh3j2q5bbt56yidq
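
Since this entry builds on Generative Adversarial Imitation Learning, a short sketch of the GAIL discriminator update helps situate it: classify expert versus policy (state, action) pairs and feed -log(1 - D) back to the policy as a surrogate reward. The input dimensions and network below are assumptions for illustration.

```python
import torch
import torch.nn as nn

disc = nn.Sequential(nn.Linear(12, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(disc.parameters(), lr=3e-4)
bce = nn.BCEWithLogitsLoss()

def discriminator_step(expert_sa, policy_sa):
    # Expert pairs get label 1, policy pairs get label 0.
    logits_e, logits_p = disc(expert_sa), disc(policy_sa)
    loss = bce(logits_e, torch.ones_like(logits_e)) + \
           bce(logits_p, torch.zeros_like(logits_p))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def imitation_reward(policy_sa):
    # Higher when the discriminator mistakes policy samples for expert ones.
    with torch.no_grad():
        d = torch.sigmoid(disc(policy_sa))
    return -torch.log(1.0 - d + 1e-8)

discriminator_step(torch.randn(32, 12), torch.randn(32, 12))
```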

Generalization of TORCS car racing controllers with artificial neural networks and linear regression analysis

Kyung-Joong Kim, Jun-Ho Seo, Jung-Guk Park, Joong Chae Na
2012 Neurocomputing  
Experimental results on TORCS-based car racing simulations show that the combination of the two machine learning algorithms with the heuristic outperforms other alternatives (heuristic only, heuristic  ...  First, we predict the desired speed using only the equations derived by the linear regression analysis for both curved and straight track segments.  ...  There are several works on imitative learning for the simulated car competitions.  ...
doi:10.1016/j.neucom.2011.06.034 fatcat:3owkj75sbrcjbk4cbwtrip4zlq
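
To illustrate the linear-regression part of this entry, the sketch below fits separate linear models for straight and curved segments that map a simple geometric feature to a desired speed. The curvature feature, the 0.005 threshold, and the synthetic data are assumptions, not the paper's derived equations.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
curvature = rng.uniform(0.0, 0.05, size=2000)        # 1/m; near zero means straight
target_speed = 250.0 - 3500.0 * curvature + rng.normal(0, 5, size=2000)  # km/h

is_curve = curvature > 0.005                          # assumed segment classifier
models = {
    "straight": LinearRegression().fit(curvature[~is_curve].reshape(-1, 1),
                                       target_speed[~is_curve]),
    "curved": LinearRegression().fit(curvature[is_curve].reshape(-1, 1),
                                     target_speed[is_curve]),
}

def desired_speed(c):
    key = "curved" if c > 0.005 else "straight"
    return float(models[key].predict([[c]])[0])

print(desired_speed(0.02))
```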

Automated vehicle's behavior decision making using deep reinforcement learning and high-fidelity simulation environment [article]

Yingjun Ye, Xiaohui Zhang, Jian Sun
2018 arXiv   pre-print
In addition, theoretical analysis and experiments were conducted on setting the reward function for accelerating training using deep reinforcement learning.  ...  It indicates that our framework is effective for automated vehicles' decision-making learning.  ...  We would also like to thank PTV for supplying the VISSIM engine to accelerate our DRL training program.  ...
arXiv:1804.06264v1 fatcat:zf3uso7kv5e57i4lboa2l3sshi

Using Semantic Information to Improve Generalization of Reinforcement Learning Policies for Autonomous Driving

Florence Carton, David Filliat, Jaonary Rabarisoa, Quoc Cuong Pham
2021 2021 IEEE Winter Conference on Applications of Computer Vision Workshops (WACVW)  
Should we use the LinkNet skip connections?) and more complex data augmentation. Figure 1: Traditional vs. end-to-end learning drivers.  ...  The two most widely used simulators in the literature are TORCS [31] and CARLA [9] , and the latter is increasingly used since TORCS is a racing game while CARLA offers various urban environments.  ...  Hyperparameters for Reinforcement Learning Training In reinforcement learning, hyperparameter tuning is crucial.  ...
doi:10.1109/wacvw52041.2021.00020 fatcat:ovkmnoa2ljdddgep5l5nq53xk4