Cycles in adversarial regularized learning

Panayotis Mertikopoulos, Christos Papadimitriou, Georgios Piliouras
2017, arXiv pre-print
Regularized learning is a fundamental technique in online optimization, machine learning, and many other fields of computer science. A natural question in these settings is how regularized learning algorithms behave when matched against each other. We study a natural formulation of this problem by coupling regularized learning dynamics in zero-sum games. We show that the resulting system is Poincaré recurrent: almost every trajectory revisits any (arbitrarily small) neighborhood of its starting point infinitely often. This cycling behavior is robust to the agents' choice of regularization mechanism (each agent may use a different regularizer) and to positive-affine transformations of the agents' utilities, and it persists under networked competition, i.e., in zero-sum polymatrix games.
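A minimal sketch of the cycling behavior the abstract describes, using multiplicative weights (the follow-the-regularized-leader dynamic with an entropic regularizer) in Matching Pennies. The game, step size, iteration count, and starting strategies here are illustrative assumptions, not taken from the paper; the discrete update only approximates the continuous-time dynamics the recurrence result concerns.

```python
import math

# Matching Pennies: zero-sum payoff matrix for player 1 (player 2 receives -A).
A = [[1.0, -1.0], [-1.0, 1.0]]

def mwu_step(x, y, eta):
    """One multiplicative-weights update (entropic regularizer) for both players."""
    ux = [sum(A[i][j] * y[j] for j in range(2)) for i in range(2)]   # player 1 payoffs
    uy = [-sum(A[i][j] * x[i] for i in range(2)) for j in range(2)]  # player 2 payoffs
    x = [x[i] * math.exp(eta * ux[i]) for i in range(2)]
    y = [y[j] * math.exp(eta * uy[j]) for j in range(2)]
    sx, sy = sum(x), sum(y)
    return [v / sx for v in x], [v / sy for v in y]

x, y = [0.7, 0.3], [0.5, 0.5]   # initial mixed strategies, away from equilibrium
start = (x[0], y[0])
eta, traj = 0.01, []
for t in range(2000):            # roughly a few orbits around the equilibrium
    x, y = mwu_step(x, y, eta)
    traj.append((x[0], y[0]))

# Recurrence: after leaving, the trajectory passes back near its starting point
# instead of converging to the (0.5, 0.5) equilibrium.
dist = min(abs(a - start[0]) + abs(b - start[1]) for a, b in traj[300:])
print(f"closest return distance: {dist:.3f}")
```

Plotting `traj` would show the joint strategy profile orbiting the mixed equilibrium; the small return distance printed above is a finite-horizon stand-in for the "revisits any neighborhood infinitely often" statement.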
arXiv:1709.02738v1