2,935 Hits in 7.6 sec

Stability of Gradient Learning Dynamics in Continuous Games: Scalar Action Spaces [article]

Benjamin J. Chasnov, Daniel Calderone, Behçet Açıkmeşe, Samuel A. Burden, Lillian J. Ratliff
2020 arXiv   pre-print
Learning processes in games explain how players grapple with one another in seeking an equilibrium. We study a natural model of learning based on individual gradients in two-player continuous games.  ...  We provide a comprehensive understanding of scalar games and find that equilibria that are both stable and Nash are robust to variations in learning rates.  ...  In Section III, we analyze the spectral properties of two-player continuous games on scalar action spaces.  ... 
arXiv:2011.03650v1 fatcat:ogdi7ik23ba5xnuhitrapnl3hq
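The gradient-play model studied in this paper can be sketched numerically: in a two-player quadratic game on scalar action spaces, each player descends their own cost, and stability of the joint dynamics is read off the spectrum of the game Jacobian scaled by the learning rates. The quadratic costs, coupling constant, and learning rates below are assumed for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative two-player quadratic game on scalar action spaces (assumed):
#   f1(x, y) = 0.5*x**2 + c*x*y   (player 1 minimizes over x)
#   f2(x, y) = 0.5*y**2 - c*x*y   (player 2 minimizes over y)
c = 0.8

def grad_play(x, y, lr1=0.1, lr2=0.1, steps=500):
    """Simultaneous individual-gradient descent."""
    for _ in range(steps):
        gx = x + c * y          # d f1 / dx
        gy = y - c * x          # d f2 / dy
        x, y = x - lr1 * gx, y - lr2 * gy
    return x, y

# Game Jacobian of the simultaneous gradient g = (D1 f1, D2 f2).
J = np.array([[1.0, c],
              [-c, 1.0]])
# With learning rates Gamma = diag(lr1, lr2), the equilibrium (0, 0) is
# stable in discrete time when the spectral radius of I - Gamma @ J is
# below 1; here the eigenvalues of J are 1 +/- 0.8i (positive real part),
# so the equilibrium is both stable and Nash.
print(np.linalg.eigvals(J))
print(grad_play(1.0, -1.0))     # iterates contract toward (0, 0)
```

Varying `lr1` and `lr2` separately illustrates the paper's robustness claim: the stable Nash equilibrium remains attracting for a range of learning-rate ratios.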

Stability of Gradient Learning Dynamics in Continuous Games: Vector Action Spaces [article]

Benjamin J. Chasnov, Daniel Calderone, Behçet Açıkmeşe, Samuel A. Burden, Lillian J. Ratliff
2021 arXiv   pre-print
Towards characterizing the optimization landscape of games, this paper analyzes the stability of gradient-based dynamics near fixed points of two-player continuous games.  ...  We introduce the quadratic numerical range as a method to characterize the spectrum of game dynamics and prove the robustness of equilibria to variations in learning rates.  ...  STABILITY OF 2-PLAYER CONTINUOUS GAMES In this section, we give stability results for 2-player continuous games on vector action spaces. Consider a game (f₁, f₂).  ... 
arXiv:2011.05562v2 fatcat:b3pv7nhdbjdqbpfyfw4gmpo4ey

Policy-Gradient Algorithms Have No Guarantees of Convergence in Linear Quadratic Games [article]

Eric Mazumdar, Lillian J. Ratliff, Michael I. Jordan, S. Shankar Sastry
2019 arXiv   pre-print
We show by counterexample that policy-gradient algorithms have no guarantees of even local convergence to Nash equilibria in continuous action and state space multi-agent settings.  ...  In such games the state and action spaces are continuous and global Nash equilibria can be found by solving coupled Riccati equations.  ...  However, we believe that such phenomena have not yet been shown to occur in the dynamics of multi-agent reinforcement learning algorithms in continuous action and state spaces.  ... 
arXiv:1907.03712v2 fatcat:givojocp2jf67amh2ld7fn44nu

Riemannian game dynamics

Panayotis Mertikopoulos, William H. Sandholm
2018 Journal of Economic Theory  
We examine the close connections between Hessian game dynamics and reinforcement learning in normal form games, extending and elucidating a well-known link between the replicator dynamics and exponential  ...  We study a class of evolutionary game dynamics under which the population state moves in the direction that agrees most closely with current payoffs.  ...  Equivalence of continuous Hessian dynamics and reinforcement learning. We now describe a common derivation of the reinforcement learning dynamics (RLD) and (HD) in the continuous regime.  ... 
doi:10.1016/j.jet.2018.06.002 fatcat:mxkvxyxcjvdidicdabdohdno7i
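The replicator dynamics that anchor this paper's link to reinforcement learning are easy to simulate: the growth rate of each strategy's population share equals its payoff relative to the population average. The 2x2 payoff matrix, initial state, and Euler step below are assumed for illustration.

```python
import numpy as np

# Single-population replicator dynamics x_i' = x_i * ((A @ x)_i - x.T A x),
# discretized with a small Euler step (illustrative sketch).
A = np.array([[3.0, 0.0],
              [2.0, 2.0]])   # assumed coordination-style payoff matrix

def replicator(x, A, dt=0.01, steps=5000):
    for _ in range(steps):
        payoff = A @ x
        avg = x @ payoff
        x = x + dt * x * (payoff - avg)   # relative-payoff growth
        x = x / x.sum()                   # guard against numerical drift
    return x

# Starting below the mixed equilibrium x1 = 2/3, the second strategy has
# the higher payoff, so the flow converges to the pure state (0, 1).
print(replicator(np.array([0.6, 0.4]), A))
```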

Differentiable Game Mechanics [article]

Alistair Letcher and David Balduzzi and Sebastien Racaniere and James Martens and Jakob Foerster and Karl Tuyls and Thore Graepel
2019 arXiv   pre-print
The behavior of gradient-based methods in games is not well understood -- and is becoming increasingly important as adversarial and multi-objective architectures proliferate.  ...  In this paper, we develop new tools to understand and control the dynamics in n-player differentiable games. The key result is to decompose the game Jacobian into two components.  ...  Nash Convergence of Gradient Dynamics in General-Sum Games. In UAI, 2000. G Stoltz and G Lugosi. Learning correlated equilibria in games with compact sets of strategies.  ... 
arXiv:1905.04926v1 fatcat:lao2jsl7f5ewffknywz3qegd5q
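The key decomposition this paper describes — splitting the game Jacobian into a symmetric (potential-like) component and an antisymmetric (Hamiltonian-like) component — is a generalized Helmholtz decomposition and takes two lines numerically. The Jacobian matrix below is an assumed example, not one from the paper.

```python
import numpy as np

# Jacobian of the simultaneous gradient of some two-player game
# (assumed numerical example).
J = np.array([[2.0, 1.0],
              [-1.0, 3.0]])

S = 0.5 * (J + J.T)   # symmetric part: potential component
H = 0.5 * (J - J.T)   # antisymmetric part: Hamiltonian component

assert np.allclose(J, S + H)
print(S)   # diag(2, 3): a pure potential game would have H = 0
print(H)   # rotation generator: a pure Hamiltonian game would have S = 0
```

Gradient descent behaves well on the potential component and cycles on the Hamiltonian component, which is why controlling the interaction of the two parts matters for n-player training dynamics.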

Riemannian game dynamics [article]

Panayotis Mertikopoulos, William H. Sandholm
2018 arXiv   pre-print
We examine the close connections between Hessian game dynamics and reinforcement learning in normal form games, extending and elucidating a well-known link between the replicator dynamics and exponential  ...  Like these representative dynamics, all Riemannian game dynamics satisfy certain basic desiderata, including positive correlation and global convergence in potential games.  ...  There is a more surprising connection between Hessian dynamics and models of reinforcement learning in normal form games.  ... 
arXiv:1603.09173v3 fatcat:ic7ginobw5adlc36ad2qcj5fjm

Approachability in Population Games [article]

Dario Bauso, Thomas W L Norman
2014 arXiv   pre-print
Second, we develop a model of two coupled partial differential equations (PDEs) in the spirit of mean-field game theory: one describing the best-response of every player given the population distribution  ...  actions.  ...  This idea of adapting the new action to the current state of the game is common to adaptive learning and evolutionary games as well, but in regret-based dynamics the state is in payoff (rather than strategy  ... 
arXiv:1407.3910v1 fatcat:jc5wmnfusbg5folz5w3xpmedgm

Convergence of Learning Dynamics in Stackelberg Games [article]

Tanner Fiez, Benjamin Chasnov, Lillian J. Ratliff
2019 arXiv   pre-print
In the class of games we consider, there is a hierarchical game being played between a leader and a follower with continuous action spaces.  ...  This paper investigates the convergence of learning dynamics in Stackelberg games.  ...  In fact, for games on scalar action spaces, it turns out that non-Nash attracting critical points of the simultaneous gradient play dynamics at which −D₂²f(x*) > 0 must be differential Stackelberg  ... 
arXiv:1906.01217v3 fatcat:awbusd3qlbebvnhwnf2emjqtdu

Learning across games

Friederike Mengel
2012 Games and Economic Behavior  
Learning across games can destabilize strict Nash equilibria and stabilize equilibria in weakly dominated strategies as well as mixed equilibria in 2×2 Coordination games even for arbitrarily small reasoning  ...  Partitions of higher cardinality are more costly. A process of simultaneous learning of actions and partitions is presented and equilibrium partitions and action choices characterized.  ...  stabilized by learning across games.  ... 
doi:10.1016/j.geb.2011.08.020 fatcat:inzwjjbszjbrdbopeszmntu2ge

Convergence Analysis of Gradient-Based Learning in Continuous Games

Benjamin Chasnov, Lillian J. Ratliff, Eric Mazumdar, Samuel Burden
2019 Conference on Uncertainty in Artificial Intelligence  
In particular, we consider continuous games where agents learn in 1) deterministic settings with oracle access to their individual gradient and 2) stochastic settings with an unbiased estimator of their  ...  Considering a class of gradient-based multiagent learning algorithms in non-cooperative settings, we provide convergence guarantees to a neighborhood of a stable Nash equilibrium.  ...  The collection of costs (f₁, …, fₙ) on X, where fᵢ : X → ℝ is agent i's cost function and Xᵢ is their action space, defines a continuous game.  ... 
dblp:conf/uai/ChasnovRMB19 fatcat:mujzjdjyjbfhlh3cqshn3jpycy

Continuous-Time Convergence Rates in Potential and Monotone Games [article]

Bolin Gao, Lacra Pavel
2022 arXiv   pre-print
In this paper, we provide exponential rates of convergence to the interior Nash equilibrium for continuous-time dual-space game dynamics such as mirror descent (MD) and actor-critic (AC).  ...  In the first part of this paper, we provide a novel relative characterization of monotone games and show that MD and its discounted version converge with 𝒪(e^(−βt)) in relatively strongly and relatively  ...  Exponential stability of NE can also be shown for various continuous-time dynamics, such as extremum-seeking dynamics [14], gradient-type dynamics with consensus estimation [49], affine nonlinear dynamics  ... 
arXiv:2011.10682v3 fatcat:ym3igjg5bzelzm4amb5wiifplu

Stable Games

Josef Hofbauer, William H. Sandholm
2007 2007 46th IEEE Conference on Decision and Control  
Finally, we show that the set of Nash equilibria of a stable game is globally asymptotically stable under a variety of evolutionary dynamics.  ...  by the improvements in the payoffs of strategies which revising players are abandoning.  ...  Stable games whose payoffs are differentiable can be characterized in terms of the action of their derivative matrices DF(x) on the tangent space TX.  ... 
doi:10.1109/cdc.2007.4434344 dblp:conf/cdc/HofbauerS07 fatcat:y47ohgl6czdtjdyzkx4ivxpvke
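The differentiable characterization of stable games mentioned in the snippet can be checked numerically: for a linear population game F(x) = Ax, stability requires z·DF(x)·z ≤ 0 for every displacement z in the tangent space TX = {z : Σzᵢ = 0}. The payoff matrix below (standard Rock-Paper-Scissors, a null-stable game) is an assumed illustration.

```python
import numpy as np

# Linear population game F(x) = A @ x; standard Rock-Paper-Scissors.
A = np.array([[0.0, -1.0, 1.0],
              [1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])

n = A.shape[0]
P = np.eye(n) - np.ones((n, n)) / n     # orthogonal projector onto TX
S = P @ (0.5 * (A + A.T)) @ P           # symmetrized derivative restricted to TX
eigs = np.linalg.eigvalsh(S)
# All eigenvalues <= 0 (here exactly 0, since RPS is zero-sum and hence
# null stable): the stable-game condition holds.
print(eigs)
```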

A Ranking Game for Imitation Learning [article]

Harshit Sikchi, Akanksha Saran, Wonjoon Goo, Scott Niekum
2022 arXiv   pre-print
In this game, the reward agent learns to satisfy pairwise performance rankings within a set of policies, while the policy agent learns to maximize this reward.  ...  The Stackelberg game formulation allows us to use optimization methods that take the game structure into account, leading to more sample efficient and stable learning dynamics compared to existing IRL  ...  BCO (Torabi et al., 2018a) learns an inverse dynamics model, iteratively using the state-action-next state visitation in the environment and using it to predict the actions that generate the expert state  ... 
arXiv:2202.03481v1 fatcat:5eue7bjio5baflnlxlbqfbn6km

Inertial game dynamics and applications to constrained optimization [article]

Rida Laraki, Panayotis Mertikopoulos
2015 arXiv   pre-print
A similar asymptotic stability result is obtained for evolutionarily stable strategies in symmetric (single- population) games.  ...  By exploiting a well-known link between the replicator dynamics and the Shahshahani geometry on the space of mixed strategies, the dynamics are stated in a Riemannian geometric framework where trajectories  ...  Convergence and stability properties in games. We now return to game theory and examine the convergence and stability properties of (ID) with respect to Nash equilibria.  ... 
arXiv:1305.0967v2 fatcat:oduh6o5c2bca5mdclupapfll2e

Solving Zero-Sum Games through Alternating Projections [article]

Ioannis Anagnostides, Paolo Penna
2021 arXiv   pre-print
Finally, we illustrate an – in principle – trivial reduction from any game to the assumed class of instances, without altering the space of equilibria.  ...  First, we provide a precise analysis of Optimistic Gradient Descent/Ascent (OGDA) – an optimistic variant of Gradient Descent/Ascent – for unconstrained bilinear games, extending and strengthening prior results.  ...  Naturally, the stability of the learning algorithm has also emerged as a critical consideration in the more challenging and indeed, relevant case of constrained zero-sum (or simply zero-sum) games.  ... 
arXiv:2010.00109v2 fatcat:s4g7fdc6tveirb7gxwp7nmagq4
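The OGDA iteration analyzed in this paper is a one-line modification of plain Gradient Descent/Ascent: each player extrapolates with the previous gradient. On an unconstrained bilinear game min_x max_y xᵀAy, plain GDA spirals outward while OGDA's last iterate converges. The matrix, step size, and starting points below are assumed for illustration.

```python
import numpy as np

# Optimistic Gradient Descent/Ascent on the unconstrained bilinear game
# min_x max_y x.T @ A @ y, whose unique equilibrium is (0, 0).
A = np.array([[1.0, 0.5],
              [-0.5, 1.0]])    # assumed example matrix
eta = 0.1                      # step size below 1 / (2 * ||A||)

x = np.array([1.0, -1.0])
y = np.array([0.5, 1.0])
gx_prev, gy_prev = A @ y, A.T @ x      # gradients at the previous iterate
for _ in range(2000):
    gx, gy = A @ y, A.T @ x
    # optimistic step: 2 * current gradient minus the previous one
    x = x - eta * (2 * gx - gx_prev)
    y = y + eta * (2 * gy - gy_prev)
    gx_prev, gy_prev = gx, gy
print(x, y)   # both iterates contract toward the origin
```

Replacing the optimistic update with the plain one (`x - eta * gx`, `y + eta * gy`) makes the iterates diverge, which is exactly the instability the optimistic correction is designed to damp.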
Showing results 1 — 15 out of 2,935 results