Combining Counterfactual Regret Minimization with Information Gain to Solve Extensive Games with Imperfect Information [article]

Chen Qiu, Xuan Wang, Tianzi Ma, Yaojun Wen, Jiajia Zhang
2021 arXiv   pre-print
Counterfactual Regret Minimization (CFR) is an effective algorithm for solving extensive games with imperfect information (IIEGs). However, CFR can only be applied in known environments, where the chance player's transition functions and the terminal nodes' reward functions of the IIEG are known in advance. For uncertain scenarios, such as those arising in Reinforcement Learning (RL), variational information maximizing exploration (VIME) provides a useful framework for exploring an environment using information gain. In this paper, we propose a method named VCFR that combines CFR with information gain to compute a Nash Equilibrium (NE) for IIEGs in the RL setting. By adding information gain to the reward, the average strategy computed by CFR can be used directly as an interactive strategy, and the algorithm's efficiency in exploring uncertain environments is significantly improved. Experimentally, the results demonstrate that this approach not only effectively reduces the number of interactions with the environment, but also finds an approximate NE.
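The core mechanism the abstract describes, augmenting the environment reward with an information-gain bonus before it enters a CFR-style regret update, can be illustrated with a minimal toy sketch. This is not the authors' VCFR implementation; the weighting `eta`, the example reward values, and the per-action information-gain estimates are all hypothetical placeholders.

```python
import numpy as np

def regret_matching(regrets):
    """Convert cumulative regrets into a strategy via regret matching,
    the standard strategy-update rule used inside CFR."""
    pos = np.maximum(regrets, 0.0)
    total = pos.sum()
    if total > 0:
        return pos / total
    # No positive regret: fall back to the uniform strategy.
    return np.ones_like(regrets) / len(regrets)

def augmented_reward(env_reward, info_gain, eta=0.1):
    """Reward shaping in the spirit of the paper: environment reward
    plus a scaled information-gain bonus (eta is a made-up weight)."""
    return env_reward + eta * info_gain

# Toy one-step decision with two actions: estimated environment
# rewards and hypothetical VIME-style information-gain estimates.
rewards = np.array([1.0, 0.5])
gains = np.array([0.0, 0.8])
values = augmented_reward(rewards, gains)

# One CFR-style regret update against the current (uniform) strategy:
# regret of an action = its augmented value minus the expected value.
strategy = np.ones(2) / 2
expected = strategy @ values
regrets = values - expected
print(regret_matching(regrets))
```

The design point is that the bonus only reshapes the values fed into the regret computation; the regret-matching step itself is unchanged, which is why the average strategy produced by CFR can still be used directly as the interactive strategy.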
arXiv:2110.07892v1