The simplex method is strongly polynomial for deterministic Markov decision processes

Ian Post, Yinyu Ye
Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2013
We prove that the simplex method with the highest-gain/most-negative-reduced-cost pivoting rule converges in strongly polynomial time for deterministic Markov decision processes (MDPs) regardless of the discount factor. For a deterministic MDP with n states and m actions, we prove the simplex method runs in $O(n^3 m^2 \log^2 n)$ iterations if the discount factor is uniform and $O(n^5 m^3 \log^2 n)$ iterations if each action has a distinct discount factor. Previously the simplex method was known to run in polynomial time only for discounted MDPs where the discount was bounded away from 1 [Ye11]. Unlike in the discounted case, the algorithm does not greedily converge to the optimum, and we require a more complex measure of progress. We identify a set of layers in which the values of the primal variables must lie and show that the simplex method always makes progress optimizing one layer, and when the upper layer is updated the algorithm makes a substantial amount of progress. In the case of nonuniform discounts, we define a polynomial number of "milestone" policies and prove that, while the objective function may not improve substantially overall, the value of at least one dual variable is always making progress towards some milestone, and the algorithm reaches the next milestone in a polynomial number of steps.
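For context, the linear program behind these results is the standard flux formulation of a discounted MDP. The sketch below uses our own notation (state set $S$ with $|S| = n$; each action $a$ has a source state $s(a)$, a target state $t(a)$, and a reward $r_a$; uniform discount $\gamma$) and is the textbook LP rather than anything quoted from the paper:

\begin{align*}
\max_{x \ge 0}\quad & \sum_a r_a\, x_a && \text{(primal; $x_a$ is the discounted flux through action $a$)}\\
\text{s.t.}\quad & \sum_{a:\, s(a)=i} x_a \;-\; \gamma \sum_{a:\, t(a)=i} x_a \;=\; 1 \qquad \forall i \in S.
\end{align*}

The dual has one value variable $v_i$ per state, with a constraint $v_{s(a)} \ge r_a + \gamma\, v_{t(a)}$ for every action $a$. The reduced cost of action $a$ is therefore $\bar c_a = r_a + \gamma\, v_{t(a)} - v_{s(a)}$, and the pivoting rule named in the abstract brings in the action maximizing $\bar c_a$.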
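On this LP the simplex method coincides with single-switch policy iteration: every basic feasible solution is a policy, and one pivot changes exactly one state's action. Below is a minimal runnable sketch under the uniform-discount assumption; the function name, data layout, and tolerance are ours, not the paper's, and we assume every state is the source of at least one action.

import numpy as np

def simplex_mdp(n, actions, gamma):
    """Simplex / single-switch policy iteration on a deterministic MDP.

    actions: list of (source, target, reward) tuples; gamma in (0, 1).
    Assumes every state 0..n-1 is the source of at least one action.
    """
    # Start from an arbitrary policy: the first listed action per state.
    policy = {}
    for idx, (s, t, r) in enumerate(actions):
        policy.setdefault(s, idx)

    while True:
        # The policy's values solve (I - gamma * P_pi) v = r_pi, where
        # P_pi is the (deterministic) transition matrix of the policy.
        P = np.zeros((n, n))
        r_pi = np.zeros(n)
        for s in range(n):
            _, t, r = actions[policy[s]]
            P[s, t] = 1.0
            r_pi[s] = r
        v = np.linalg.solve(np.eye(n) - gamma * P, r_pi)

        # Reduced cost of action (s, t, r) is r + gamma*v[t] - v[s];
        # pick the most favorable one (Dantzig's rule).
        gain, best = max((r + gamma * v[t] - v[s], idx)
                         for idx, (s, t, r) in enumerate(actions))
        if gain <= 1e-9:           # no improving column: policy optimal
            return policy, v
        # One simplex pivot: switch exactly one state's action.
        policy[actions[best][0]] = best

For example, simplex_mdp(2, [(0, 0, 1.0), (0, 1, 0.0), (1, 0, 2.0), (1, 1, 0.0)], 0.9) terminates after a small number of such switches; the paper's contribution is that the number of switches is bounded by $O(n^3 m^2 \log^2 n)$ for every $\gamma$, not just for $\gamma$ bounded away from 1.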
doi:10.1137/1.9781611973105.105 dblp:conf/soda/PostY13