Deep Deterministic Policy Gradient Based on Double Network Prioritized Experience Replay

Chaohai Kang, Chuiting Rong, Weijian Ren, Fengcai Huo, Pengyun Liu
IEEE Access, 2021
The traditional deep deterministic policy gradient (DDPG) algorithm suffers from slow convergence and a tendency to fall into local optima. To address these two issues, this paper proposes a DDPG algorithm based on a double-network prioritized experience replay mechanism (DNPER-DDPG). First, the value function is approximated by two neural networks, and the minimum of the two action-value estimates is used to update the actor policy network, which reduces the incidence of locally optimal policies. Then, the Q values produced by the two networks and the immediate reward obtained from the environment serve as the prioritization criteria, ranking the samples in the experience replay buffer by importance to improve the algorithm's convergence speed. Finally, the improved method is evaluated on the classic control environments of OpenAI Gym; the results show that it achieves faster convergence and higher cumulative reward than the comparison algorithms.
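The double-network idea above resembles taking the minimum of two critics when forming the bootstrap target, so that an overestimating critic cannot drive the actor toward a locally optimal policy. The abstract does not give the exact update rule, so the following is a minimal sketch under that assumption; the function name and signature are hypothetical.

```python
def clipped_double_q_target(q1_next, q2_next, reward, done, gamma=0.99):
    """Hypothetical sketch of the double-network target.

    The smaller of the two critics' next-state estimates bounds the
    bootstrap target, damping the value overestimation that can trap
    plain DDPG in a local optimum.
    """
    min_q = min(q1_next, q2_next)          # pessimistic estimate
    return reward + gamma * (1.0 - done) * min_q  # zero bootstrap at terminal states
```

For example, with next-state estimates 2.0 and 1.0, reward 0.5, and a non-terminal transition, the target is 0.5 + 0.99 * 1.0 = 1.49: the larger estimate is discarded entirely.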
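The prioritization criterion combines the two networks' Q values with the immediate reward, but the abstract does not state the exact formula. A plausible sketch, assuming a TD-error-style priority built from the min-of-two target, might look as follows; `priority` and `sample_batch` are hypothetical names.

```python
import random

def priority(reward, q1_next, q2_next, q_current, gamma=0.99, eps=1e-3):
    # Hypothetical priority: absolute TD error against the
    # double-network (min-of-two) target, plus a small constant
    # so no transition gets zero sampling probability.
    target = reward + gamma * min(q1_next, q2_next)
    return abs(target - q_current) + eps

def sample_batch(buffer, priorities, k):
    # Draw k transitions with probability proportional to priority,
    # so high-error samples are replayed more often.
    return random.choices(buffer, weights=priorities, k=k)
```

Under this sketch, transitions whose current estimate disagrees most with the reward-and-double-Q target are replayed most frequently, which is the mechanism the abstract credits for the faster convergence.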
doi:10.1109/access.2021.3074535