821 Hits in 9.2 sec

Reinforcement Learning based Control of Traffic Lights in Non-stationary Environments: A Case Study in a Microscopic Simulator

Denise de Oliveira, Ana L. C. Bazzan, Bruno Castro da Silva, Eduardo W. Basso, Luís Nunes
2006 European Workshop on Multi-Agent Systems  
Recently, a method was proposed that is capable of learning in non-stationary scenarios via an approach that detects context changes.  ...  The goal of the present paper is to assess the feasibility of applying the above-mentioned approach in a more realistic scenario, implemented by means of a microscopic traffic simulator.  ...  Acknowledgments The joint project "Caracterização de Estratégias de Controle em Sistemas Multiagentes Heterogêneos" is supported by the bilateral agreement between Brazil and Portugal (CAPES-GRICES).  ... 
dblp:conf/eumas/OliveiraBSBN06 fatcat:ri7vijjmgven3nerwfcqj32hjy

Development of a traffic simulation for the intelligent disposition of autonomous vehicles [article]

Käufer, Adrian Berisha, Benedikt Grau, Dimitrios Lagamtzis, Andreas Rößler
2020 Figshare  
The city of Mannheim serves as a base structure for modelling a traffic environment and this model is then used for testing different traffic situations and optimizations.  ...  In this paper, a simplified way for communities and researchers to set up a full traffic representation of an urban city in SUMO is provided.  ...  It was selected based on the following properties: • SUMO is a microscopic traffic simulation.  ... 
doi:10.6084/m9.figshare.12229337.v1 fatcat:k6vdunmlifbrlnezro34v3px4e

To Adapt or Not to Adapt – Consequences of Adapting Driver and Traffic Light Agents [chapter]

Ana L. C. Bazzan, Denise de Oliveira, Franziska Klügl, Kai Nagel
2008 Lecture Notes in Computer Science  
We use microscopic, agent-based modelling and simulation, in opposition to the classical network analysis, as this work focuses on the effect of local adaptation.  ...  In a scenario that exhibits features comparable to real-world networks, we evaluate different types of adaptation by drivers and by traffic lights, based on local perceptions.  ...  Acknowledgments The authors would like to thank CAPES (Brazil) and DAAD (Germany) for their support to the joint, bilateral project "Large Scale Agent-based Traffic Simulation for Predicting Traffic Conditions  ... 
doi:10.1007/978-3-540-77949-0_1 fatcat:wlb2mwqxeveepevrn73vizcp4e

Quantifying the Impact of Non-Stationarity in Reinforcement Learning-Based Traffic Signal Control [article]

Lucas N. Alegre, Ana L. C. Bazzan, Bruno C. da Silva
2020 arXiv   pre-print
In reinforcement learning (RL), dealing with non-stationarity is a challenging issue. However, some domains such as traffic optimization are inherently non-stationary.  ...  In particular, when dealing with traffic signal controls, addressing non-stationarity is key since traffic conditions change over time and as a function of traffic control decisions taken in other parts  ...  The traffic environment is simulated using the open source microscopic traffic simulator SUMO (Simulation of Urban MObility) [1] and models the dynamics of a 4 × 4 grid traffic network with 16 traffic  ... 
arXiv:2004.04778v1 fatcat:gcofmmh2prcfhh6ykupzxomzpe
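The context-change detection referenced across these non-stationarity papers (monitor how well the current model predicts the environment and switch when quality degrades) can be sketched minimally. This is an illustrative assumption, not the authors' exact algorithm; the class name, decay rate `beta`, and `threshold` are made up for the example:

```python
class ContextDetector:
    """Minimal sketch of context-change detection for non-stationary RL:
    keep an exponential moving average of model prediction error and flag
    a context change when it exceeds a threshold (values illustrative)."""

    def __init__(self, beta=0.1, threshold=1.0):
        self.beta = beta            # smoothing factor for the error average
        self.threshold = threshold  # error level that signals a new context
        self.err = 0.0

    def update(self, predicted_reward, observed_reward):
        # Blend the new absolute prediction error into the running average.
        self.err = (1 - self.beta) * self.err \
            + self.beta * abs(predicted_reward - observed_reward)
        return self.err > self.threshold  # True signals a context change
```

In a traffic setting, `observed_reward` would come from the simulator (e.g. negative queue length at an intersection) and a detected change would trigger switching to, or creating, a partial model for the new traffic regime.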

Quantifying the impact of non-stationarity in reinforcement learning-based traffic signal control

Lucas N. Alegre, Ana L.C. Bazzan, Bruno C. da Silva
2021 PeerJ Computer Science  
In reinforcement learning (RL), dealing with non-stationarity is a challenging issue. However, some domains such as traffic optimization are inherently non-stationary.  ...  In particular, when dealing with traffic signal controls, addressing non-stationarity is key since traffic conditions change over time and as a function of traffic control decisions taken in other parts  ...  The traffic environment is simulated using the open-source microscopic traffic simulator SUMO (Simulation of Urban MObility) (Lopez et al., 2018) and models the dynamics of a 4 × 4 grid traffic network  ... 
doi:10.7717/peerj-cs.575 pmid:34141896 pmcid:PMC8176548 fatcat:ptng2mzyfzcarkchupz4ii7che

Traffic Signal Control with Cell Transmission Model Using Reinforcement Learning for Total Delay Minimisation

Pitipong Chanloha, Jatuporn Chinrungrueng, Wipawee Usaha, Chaodit Aswakul
2015 International Journal of Computers Communications & Control  
For the practical case study conducted by the AIMSUN microscopic traffic simulator, the proposed CTM-based RL reveals that the average delay can be significantly decreased by 40%  ...  This paper proposes a new framework to control the traffic signal lights by applying an automated goal-directed learning and decision-making scheme, namely the reinforcement learning (RL) method  ...  Sorawit Narupiti at the transport engineering division of the Department of Civil Engineering, Chulalongkorn University, for technical support in transportation infrastructures and intelligence controls.  ... 
doi:10.15837/ijccc.2015.5.2025 fatcat:g3vyt4wzajhzdmbvlvae3pqb2u
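Daganzo's cell transmission model (CTM) underlying this paper advances traffic by sending, between adjacent cells, the minimum of upstream demand and downstream supply. A minimal one-lane sketch; the parameter names and boundary handling (closed upstream end, free downstream outflow) are illustrative assumptions, not the paper's exact setup:

```python
def ctm_step(n, Q, N, delta=1.0):
    """One time step of a one-way cell transmission model.
    n[i]  : vehicles currently in cell i
    Q     : max vehicles that can cross a cell boundary per step (capacity)
    N     : max vehicles a cell can hold (jam density x cell length)
    delta : ratio of backward to forward wave speed (w/v)
    """
    k = len(n)
    # y[i] = inflow into cell i from cell i-1; y[0] = 0 (closed upstream end)
    y = [0.0] * (k + 1)
    for i in range(1, k):
        # Flow = min(upstream demand, capacity, downstream supply)
        y[i] = min(n[i - 1], Q, delta * (N - n[i]))
    y[k] = min(n[k - 1], Q)  # free outflow at the downstream end
    # Conservation: each cell gains its inflow and loses its outflow.
    return [n[i] + y[i] - y[i + 1] for i in range(k)]
```

An RL signal controller in this framework would modulate the boundary flows at intersections (e.g. setting `y` to 0 during red), with total delay computed from the cell occupancies.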

Improving Urban Mobility: using artificial intelligence and new technologies to connect supply and demand [article]

Ana L. C. Bazzan
2022 arXiv   pre-print
, agent-based simulation, among others.  ...  In the present work, a survey of several works developed by our group is discussed from a holistic perspective, i.e., they cover not only the supply side (as commonly found in ITS works) but also the demand  ...  Model Based Reinforcement Learning Approach When dealing with non-stationary environments, where vehicle flow is not constant, both model-independent reinforcement learning approaches and model-based ones  ... 
arXiv:2204.03570v1 fatcat:v4geolku7bd6nb5j4in7wsdf7i

Simulation Environment for Safety Assessment of CEAV Deployment in Linden [article]

Levent Guvenc, Bilin Aksun-Guvenc, Xinchen Li, Aravind Chandradoss Arul Doss, Karina Meneses-Cime, Sukru Yaren Gelbal
2020 arXiv   pre-print
This report presents a simulation environment for pre-deployment testing of the autonomous shuttles that will operate in the Linden Residential Area.  ...  This document presents simulation testing environments in two open source simulators and a commercial simulator for this residential area route and how they can be used for model-in-the-loop and hardware-in-the-loop  ...  Acknowledgement of Support This material is based upon work supported by the U.S. Department of Transportation under Agreement No. DTFH6116H00013.  ... 
arXiv:2012.10498v1 fatcat:bocqyb4j3zhpxffzqzhfefhk2e

Computation Offloading for Vehicular Environments: A Survey

Alisson B. De Souza, Paulo A. L. Rego, Tiago Carneiro, Jardel Das C. Rodrigues, P. P. Reboucas Filho, Jose N. De Souza, Vinay Chamola, Victor Hugo C. De Albuquerque, Biplab Sikdar
2020 IEEE Access  
Some configuration parameters of road traffic simulators and mobility generators are often hard to set in simulated environments and may not provide adequate realism.  ...  Although simulators have constantly improved in reproducing realistic traffic patterns and movements, including interaction between vehicles, these simulators can still improve the microscopic modeling  ... 
doi:10.1109/access.2020.3033828 fatcat:rnnwnczolngf5bn6sfithf7ebi

Adaptive Group-Based Signal Control Using Reinforcement Learning with Eligibility Traces

Junchen Jin, Xiaoliang Ma
2015 2015 IEEE 18th International Conference on Intelligent Transportation Systems  
The simulation results demonstrate that the learning-based, adaptive group-based signal control system has an advantage in dealing with dynamic traffic environments in terms of improving traffic mobility  ...  This study, therefore, presents an adaptive group-based signal control system capable of changing control strategies with respect to non-stationary traffic demands.  ...  Traffic light indications are interpreted from the timing actions and are sent to the microscopic traffic simulator. The signal controller in the traffic simulator then executes the received indications.  ... 
doi:10.1109/itsc.2015.389 dblp:conf/itsc/JinM15 fatcat:alqneaacrvg2lg35wbgc262obu
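Eligibility traces, as used in this paper, propagate the temporal-difference error of a signal-timing decision back to recently visited state–action pairs. A minimal tabular SARSA(λ) update with accumulating traces; the dict-based tables, action names, and hyperparameter values are illustrative assumptions, not the paper's exact controller:

```python
def sarsa_lambda_update(Q, E, s, a, r, s2, a2, alpha=0.1, gamma=0.95, lam=0.9):
    """One SARSA(lambda) backup with accumulating eligibility traces.
    Q, E : dicts mapping (state, action) -> value / eligibility trace."""
    # Temporal-difference error for the observed transition.
    delta = r + gamma * Q.get((s2, a2), 0.0) - Q.get((s, a), 0.0)
    # Accumulate the trace for the pair just visited.
    E[(s, a)] = E.get((s, a), 0.0) + 1.0
    for sa in list(E):
        # Credit every recently visited pair in proportion to its trace...
        Q[sa] = Q.get(sa, 0.0) + alpha * delta * E[sa]
        # ...then decay the trace toward zero.
        E[sa] *= gamma * lam
    return Q, E
```

In a group-based controller, states would encode detector readings per signal group and actions would extend or terminate green phases; the trace decay `gamma * lam` controls how far back in the phase sequence credit flows.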

Objective-aware Traffic Simulation via Inverse Reinforcement Learning [article]

Guanjie Zheng, Hanyang Liu, Kai Xu, Zhenhui Li
2022 arXiv   pre-print
A fixed physical model tends to be less effective in a complicated environment given the non-stationary nature of traffic dynamics.  ...  In this paper, we formulate traffic simulation as an inverse reinforcement learning problem, and propose a parameter sharing adversarial inverse reinforcement learning model for dynamics-robust simulation  ...  The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies.  ... 
arXiv:2105.09560v3 fatcat:bv5w74coyreh5kdaz7rgryvlwe

Application of Deep Reinforcement Learning in Traffic Signal Control: An Overview and Impact of Open Traffic Data

Martin Gregurić, Miroslav Vujić, Charalampos Alexopoulos, Mladen Miletić
2020 Applied Sciences  
The introduction of Reinforcement Learning (RL) in ATSC has tackled those types of congestion by using on-line learning, which is based on a trial-and-error approach.  ...  Furthermore, RL is prone to the curse of dimensionality related to the size of the state–action space, from which a non-linear quality function is derived.  ...  Acknowledgments: This research was part of a project Twinning Open Data Operational that has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement  ... 
doi:10.3390/app10114011 fatcat:vnhrunfhpbgtrmky7pcvdg223a

Deterministic 3D Ray-Launching Millimeter Wave Channel Characterization for Vehicular Communications in Urban Environments

Fidel Alejandro Rodríguez-Corbo, Leyre Azpilicueta, Mikel Celaya-Echarri, Peio Lopez-Iturri, Imanol Picallo, Francisco Falcone, Ana Vazquez Alejos
2020 Sensors  
In this work, a V2X communication channel in the mmWave (28 GHz) band is analyzed by a combination of an empirical study and a deterministic simulation with an in-house 3D ray-launching algorithm.  ...  The use of these intelligent transport systems will allow the integration of efficient performance in terms of route control, fuel consumption, and traffic administration, among others.  ...  Conflicts of Interest: The authors declare no conflict of interest. The statements made herein are solely the responsibility of the authors.  ... 
doi:10.3390/s20185284 pmid:32947776 pmcid:PMC7570788 fatcat:zjvviygntngttk5bk7c76zv7j4

Large-scale traffic signal control using machine learning: some traffic flow considerations [article]

Jorge A. Laval, Hao Zhou
2019 arXiv   pre-print
This paper uses supervised learning, random search and deep reinforcement learning (DRL) methods to control large signalized intersection networks.  ...  The traffic model is Cellular Automaton rule 184, which has been shown to be a parameter-free representation of traffic flow, and is the most efficient implementation of the Kinematic Wave model with a triangular  ...  ACKNOWLEDGEMENTS This study has received funding from NSF research projects # 1562536 and # 1826162.  ... 
arXiv:1908.02673v1 fatcat:ngeffbvo2jcgbebfnbhi5zizbq
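Cellular Automaton rule 184, named in the abstract, is simple to state: a car advances one cell per step if and only if the cell ahead is empty. A minimal sketch, assuming a circular road (periodic boundaries); the function name is illustrative:

```python
def rule184_step(cells):
    """One update of elementary CA rule 184 on a ring.
    cells[i] = 1 means a car occupies cell i, 0 means empty."""
    n = len(cells)
    nxt = [0] * n
    for i in range(n):
        left, cur, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        # Rule 184: a cell becomes occupied if a car moves in from the left
        # (left=1, cur=0) or the car in it is blocked ahead (cur=1, right=1).
        nxt[i] = 1 if (left == 1 and cur == 0) or (cur == 1 and right == 1) else 0
    return nxt
```

Because each car simply moves or waits, the update conserves the number of cars, which is what makes the rule a discrete Kinematic Wave model with a triangular fundamental diagram.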

Multi-Agent Deep Reinforcement Learning for Traffic optimization through Multiple Road Intersections using Live Camera Feed

Deepeka Garg, Maria Chli, George Vogiatzis
2020 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC)  
This paper presents the first application of multi-agent deep reinforcement learning (DRL) to achieve traffic optimization through multiple road intersections solely based on raw pixel input from CCTV  ...  Instead, we propose a system of multiple, coordinating traffic signal control systems.  ...  Traffic flows form a complex spatial-temporal structure, resulting in a non-stationary MDP if the agents do not have access to any previous data to rely on.  ... 
doi:10.1109/itsc45102.2020.9294375 fatcat:zvmmbqvfzfhp3gevf3yjgtvfyi
Showing results 1 — 15 out of 821 results