Autonomous adjustment of exploration in weakly supervised reinforcement learning

Kuniaki SATORI, Takumi KAMIYA, Tatsuji TAKAHASHI
2020
Optimization in vast search spaces can be intractable, especially in reinforcement learning and in real-world environments. On the other hand, humans seem to balance exploration and exploitation quite well in many tasks, and one reason is that they satisfice rather than optimize. That is to say, they stop exploring once a certain (aspiration) level is satisfied. Takahashi and others have introduced the risk-sensitive satisficing (RS) model, which realizes efficient satisficing in bandit problems. To enable the application of RS to general reinforcement learning tasks, the global reference conversion (GRC) was introduced. GRC allocates local aspiration levels to individual states from the global aspiration level, based on the difference between the global goal and the returns actually obtained. However, its performance depends sensitively on the scale parameter. In this paper, we propose a new algorithm that autonomously adjusts the allocation and evaluates the current degree of satisfaction accurately.
doi:10.11517/pjsai.jsai2020.0_4g2gs703 fatcat:vmygkgkxuffqffk3lqcrhjeekm