OAM: An Option-Action Reinforcement Learning Framework for Universal Multi-Intersection Control

Enming Liang, Zicheng Su, Chilin Fang, Renxin Zhong
2022 AAAI Conference on Artificial Intelligence  
Efficient traffic signal control is an important means of alleviating urban traffic congestion. Reinforcement learning (RL) has shown great potential in devising optimal signal plans that adapt to dynamic traffic congestion. However, several challenges still need to be overcome. First, a paradigm for state, action, and reward design is needed, especially an optimality-guaranteed reward function. Second, the generalization of RL algorithms is hindered by the varied topologies and physical properties of intersections. Lastly, cooperation between intersections must be enhanced for large-network applications. To address these issues, the Option-Action RL framework for universal Multi-intersection control (OAM) is proposed. Based on the well-known cell transmission model, we first define a lane-cell-level state to better model traffic flow propagation. Based on these physical queuing dynamics, we propose a regularized delay as the reward to facilitate temporal credit assignment while maintaining equivalence with minimizing the average travel time. We then recapitulate phase actions as constrained combinations of lane options and design a universal neural network structure that generalizes to any intersection with any phase definition. Multiple-intersection cooperation is then rigorously discussed using potential game theory. We test the OAM algorithm on four networks with different settings, including a city-level scenario with 2,048 intersections using synthetic and real-world datasets. The results show that OAM can outperform state-of-the-art controllers in reducing the average travel time.
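The lane-cell-level state the abstract builds on comes from the cell transmission model (CTM), in which a lane is discretized into cells and occupancies evolve under capacity-limited flows. The sketch below illustrates one CTM update step for a single lane; the parameter values (cell capacity `N`, saturation flow `Q`, wave-speed ratio `wv`) are illustrative assumptions, not values from the paper.

```python
# Minimal one-lane cell transmission model (CTM) step, illustrating the
# lane-cell-level state representation the abstract refers to.
# N  = maximum vehicles a cell can hold (assumed)
# Q  = maximum flow between cells per step (assumed)
# wv = backward-wave to free-flow speed ratio w/v (assumed)

def ctm_step(n, N=20.0, Q=5.0, wv=0.5):
    """Advance cell occupancies `n` (upstream -> downstream) by one step.

    Flow into cell i:  y_i = min(n[i-1], Q, (w/v) * (N - n[i]))
    Occupancy update:  n[i] += y_i - y_{i+1}
    """
    k = len(n)
    # y[i] is the flow from cell i-1 into cell i; y[0] is the upstream
    # boundary inflow (zero here), y[k] the outflow at the stop line.
    y = [0.0] * (k + 1)
    for i in range(1, k):
        y[i] = min(n[i - 1], Q, wv * (N - n[i]))
    y[k] = min(n[k - 1], Q)  # unblocked discharge during a green option
    return [n[i] + y[i] - y[i + 1] for i in range(k)]

# A congested lane of 4 cells: the queue discharges downstream,
# and vehicles are conserved inside the lane (minus the stop-line outflow).
state = ctm_step([10.0, 4.0, 2.0, 0.0])  # -> [5.0, 5.0, 4.0, 2.0]
```

In the OAM setting, such per-cell occupancies form the state observed by the controller, and the queuing dynamics they encode underpin the regularized-delay reward described above.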