Stability in Graphs and Games *

Josée Desharnais, Radha Jagadeesan (Eds.)
Leibniz International Proceedings in Informatics (LIPIcs), Schloss Dagstuhl – Leibniz-Zentrum für Informatik
We study graphs and two-player games in which rewards are assigned to states, and the goal of the players is to satisfy or falsify a given property of the generated outcome, expressed as a mean-payoff property. Since the mean payoff does not reflect possible fluctuations from the mean payoff along a run, we propose definitions and algorithms for capturing the stability of the system, and give algorithms for deciding whether a given mean-payoff and stability objective can be ensured in the system.

Finite-state graphs and games are used in formal verification as foundational models that capture behaviours of systems with controllable decisions and possibly an adversarial environment. States correspond to possible configurations of a system, and edges describe how configurations can change. In a game, each state is owned by one of two players, and the player owning the state decides which edge is taken. A graph is a game where only one of the players is present. When the choice of edges is resolved, we obtain an outcome: an infinite sequence of states and edges describing the execution of the system.

The long-run average performance of a run is measured by its mean payoff, the limit-average reward per visited state along the run. It is well known that memoryless deterministic strategies suffice to optimize the mean payoff, and the corresponding decision problem is in NP ∩ coNP for games and in P for graphs. If the rewards assigned to the states are multi-dimensional vectors of numbers, the problem becomes coNP-hard for games [21].

Although the mean payoff provides an important metric for the average behaviour of a system, by definition it neglects all information about fluctuations from the mean payoff along the run. For example, a "fully stable" run whose associated sequence of rewards is 1, 1, 1, 1, … has the same mean payoff (equal to 1) as a run producing n, 0, 0, …, n, 0, 0, …, where a state with reward n is visited once every n transitions. In many situations, the first run is much more desirable than the second one. Consider, e.g., a video streaming
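The contrast between the two runs can be made concrete with a short sketch (not from the paper): both example reward sequences have the same limit-average reward, but a simple fluctuation measure, the mean absolute deviation from the average, separates them. The deviation measure here is an illustrative stand-in, not the paper's actual stability notion.

```python
from itertools import cycle, islice

def prefix_average(rewards, n):
    """Average reward over the first n steps of an (infinite) reward sequence.

    The mean payoff of a run is the limit of this quantity as n grows.
    """
    prefix = list(islice(rewards, n))
    return sum(prefix) / n

def mean_absolute_deviation(rewards, n):
    """Average |r(i) - avg| over the first n steps: a crude fluctuation
    measure, used here only to illustrate why mean payoff alone is blind
    to instability (a stand-in for the paper's stability definitions)."""
    prefix = list(islice(rewards, n))
    avg = sum(prefix) / n
    return sum(abs(x - avg) for x in prefix) / n

N = 30        # period of the "bursty" run: N, 0, ..., 0 repeated
STEPS = 300   # ten full periods

stable_run = cycle([1])                  # 1, 1, 1, ...
bursty_run = cycle([N] + [0] * (N - 1))  # N, 0, ..., 0, N, 0, ...

# Both runs have mean payoff 1, ...
print(prefix_average(cycle([1]), STEPS))                  # 1.0
print(prefix_average(cycle([N] + [0] * (N - 1)), STEPS))  # 1.0

# ... but the stable run never deviates, while the bursty run does.
print(mean_absolute_deviation(cycle([1]), STEPS))
print(mean_absolute_deviation(cycle([N] + [0] * (N - 1)), STEPS))
```

For the bursty run the deviation approaches 2(N−1)/N ≈ 2 as N grows, even though the mean payoff stays fixed at 1.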
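For the graph case mentioned above (where optimizing the mean payoff is in P), the optimal mean payoff of a strongly connected graph equals the maximum mean reward over its cycles, which Karp's classical algorithm computes in O(n·|E|) time. The sketch below is illustrative, assuming rewards on states (collected on entering a state) and a strongly connected graph; it is not the paper's algorithm for stability objectives.

```python
def max_mean_cycle(n, edges, reward):
    """Maximum mean reward over all cycles of a strongly connected graph.

    n      -- number of states, numbered 0..n-1
    edges  -- list of (u, v) pairs
    reward -- reward[v] is collected on entering state v

    Uses Karp's theorem: the maximum cycle mean equals
        max_v min_{0 <= k < n} (D_n(v) - D_k(v)) / (n - k),
    where D_k(v) is the maximum reward of a walk with exactly k edges
    from a fixed source (state 0) to v.
    """
    NEG = float("-inf")
    # D[k][v] = best reward of a walk of exactly k edges from state 0 to v
    D = [[NEG] * n for _ in range(n + 1)]
    D[0][0] = 0.0
    for k in range(1, n + 1):
        for (u, v) in edges:
            if D[k - 1][u] != NEG:
                cand = D[k - 1][u] + reward[v]
                if cand > D[k][v]:
                    D[k][v] = cand
    best = NEG
    for v in range(n):
        if D[n][v] == NEG:
            continue
        worst = min((D[n][v] - D[k][v]) / (n - k)
                    for k in range(n) if D[k][v] != NEG)
        best = max(best, worst)
    return best

# Two states, edges 0->1, 1->0 and a self-loop at 1, rewards [2, 0]:
# the 0-1 cycle has mean (2 + 0)/2 = 1, the self-loop has mean 0.
print(max_mean_cycle(2, [(0, 1), (1, 0), (1, 1)], [2, 0]))  # 1.0
```

A memoryless strategy achieving this value simply steers the run onto a maximum-mean cycle and stays there.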