Multi-machine collaborative path planning method based on an A*-mechanism-connected deep neural network model
The naval battlefield is expected to be one of the main arenas of future conflict between major powers. Strong target-search capability at sea is the last barrier to carrying out maritime training and operations; at the same time, the naval battlefield's complex, changeable environment and important strategic position make it the most difficult setting for joint search and rescue. To reduce the time required for target search in the maritime battlefield, a real-time path planning method for the maritime battlefield based on deep reinforcement learning is proposed. The proposed method is well suited to multi-machine collaborative path planning and meets real-time performance requirements. First, a mathematical planning model for target search in the naval battlefield is constructed and mapped to a reinforcement learning model. Then, based on the Rainbow deep reinforcement learning algorithm, the state vector, neural network structure, and algorithm framework for target-search path planning in the naval battlefield are designed. Finally, the feasibility and effectiveness of the proposed method are verified by experiments. A sequential exchange strategy and a sequential insertion strategy derived from the A* algorithm are added to refine the best path obtained in each iteration, and the improved algorithm is applied to the target path planning problem with time windows. Experiments on the Solomon data set compare the optimal solutions found before and after the improvement and show that the improved algorithm performs better. In this way, cooperative behavior among multiple agents is realized and the stability of reinforcement learning is improved. Simulation results show that, compared with the benchmark algorithm, the proposed algorithm shortens the searched path length by 19% and reduces the number of path inflection points by 58%, verifying its superiority.
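The sequential exchange and sequential insertion strategies mentioned above can be illustrated as classic local-search moves applied to a single route. The sketch below is a minimal illustration only: the Euclidean cost function, the node coordinates, and the alternating improvement loop are assumptions for demonstration, not the paper's exact design, and time-window constraints are omitted for brevity.

```python
import math


def route_length(route, points):
    """Total Euclidean length of visiting `points` in `route` order."""
    return sum(
        math.dist(points[route[i]], points[route[i + 1]])
        for i in range(len(route) - 1)
    )


def exchange_step(route, points):
    """Sequential exchange: try swapping every pair of interior
    nodes (endpoints fixed) and keep the best resulting route."""
    best, best_len = list(route), route_length(route, points)
    for i in range(1, len(route) - 1):
        for j in range(i + 1, len(route) - 1):
            cand = list(route)
            cand[i], cand[j] = cand[j], cand[i]
            cand_len = route_length(cand, points)
            if cand_len < best_len:
                best, best_len = cand, cand_len
    return best


def insertion_step(route, points):
    """Sequential insertion: remove each interior node and reinsert
    it at the position that yields the shortest route."""
    best, best_len = list(route), route_length(route, points)
    for i in range(1, len(route) - 1):
        node = route[i]
        rest = route[:i] + route[i + 1:]
        for pos in range(1, len(rest)):
            cand = rest[:pos] + [node] + rest[pos:]
            cand_len = route_length(cand, points)
            if cand_len < best_len:
                best, best_len = cand, cand_len
    return best


def improve(route, points, max_iters=10):
    """Alternate the two moves until neither improves the route."""
    for _ in range(max_iters):
        new = insertion_step(exchange_step(route, points), points)
        if route_length(new, points) >= route_length(route, points):
            break
        route = new
    return route
```

For example, on four collinear points, `improve([0, 2, 1, 3], [(0, 0), (1, 0), (2, 0), (3, 0)])` untangles the crossing and returns the ordered route `[0, 1, 2, 3]`. In the full method, such moves would additionally be checked against each customer's time window before acceptance.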