Under-Approximating Expected Total Rewards in POMDPs

Alexander Nikolai Bork, Joost-Pieter Katoen, Tim Quatmann
Tools and Algorithms for the Construction and Analysis of Systems: 28th International Conference (TACAS 2022)
We consider the problem: is the optimal expected total reward to reach a goal state in a partially observable Markov decision process (POMDP) below a given threshold? We tackle this (generally undecidable) problem by computing under-approximations of these expected total rewards. This is done by abstracting finite unfoldings of the infinite belief MDP of the POMDP. The key issue is to find a suitable under-approximation of the value function. We provide two techniques: a simple technique (cut-off) that uses a good policy on the POMDP, and a more advanced technique (belief clipping) that uses minimal shifts of probabilities between beliefs. We use mixed-integer linear programming (MILP) to find such minimal probability shifts and experimentally show that our techniques scale quite well while providing tight lower bounds on the expected total reward. This work is funded by the DFG RTG 2236 "UnRAVeL".
doi:10.18154/rwth-2022-03987
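To make the cut-off idea on the finite belief-MDP unfolding more concrete, the following is a minimal Python sketch, not the authors' implementation: it unfolds the belief MDP of a tiny, made-up POMDP to a fixed depth and assigns the trivial cut-off value 0 to frontier beliefs (sound for nonnegative rewards), whereas the paper derives tighter cut-off values from a good POMDP policy and additionally applies MILP-based belief clipping, which is not shown here. All state, action, and observation names, probabilities, rewards, and the helpers obs_probability, belief_update, and lower_bound are illustrative assumptions.

from itertools import product

# Toy POMDP (all names and numbers are made up for illustration).
STATES = ["s0", "s1", "goal"]
ACTIONS = ["a", "b"]
OBSERVATIONS = ["o0", "o1"]

# P[(s, a)] = {s': probability of moving to s'}
P = {
    ("s0", "a"): {"s1": 0.8, "s0": 0.2},
    ("s0", "b"): {"s0": 1.0},
    ("s1", "a"): {"goal": 0.6, "s0": 0.4},
    ("s1", "b"): {"goal": 0.9, "s1": 0.1},
    ("goal", "a"): {"goal": 1.0},
    ("goal", "b"): {"goal": 1.0},
}
# O[s'] = {z: probability of observing z in successor state s'}
O = {
    "s0": {"o0": 0.7, "o1": 0.3},
    "s1": {"o0": 0.4, "o1": 0.6},
    "goal": {"o1": 1.0},
}
# Nonnegative reward collected when taking an action in a state.
R = {(s, a): (1.0 if s == "s1" and a == "b" else 0.0)
     for s, a in product(STATES, ACTIONS)}


def obs_probability(belief, action, obs):
    """Probability of observing `obs` after taking `action` in `belief`."""
    return sum(bs * p * O[s2].get(obs, 0.0)
               for s, bs in belief.items()
               for s2, p in P[(s, action)].items())


def belief_update(belief, action, obs):
    """Bayes update of the belief: b'(s') ~ sum_s b(s) P(s'|s,a) O(z|s')."""
    new = {}
    for s, bs in belief.items():
        for s2, p in P[(s, action)].items():
            new[s2] = new.get(s2, 0.0) + bs * p * O[s2].get(obs, 0.0)
    norm = sum(new.values())
    return {s: v / norm for s, v in new.items()}


def lower_bound(belief, depth):
    """Under-approximate the maximal expected total reward from `belief`.

    Beliefs at the cut-off frontier (depth 0) are assigned value 0, a sound
    but crude under-approximation for nonnegative rewards; the paper instead
    plugs in values of a good POMDP policy (or applies belief clipping).
    """
    if depth == 0 or belief.get("goal", 0.0) >= 1.0:
        return 0.0
    best = float("-inf")
    for a in ACTIONS:
        # Expected immediate reward plus expectation over observations.
        value = sum(bs * R[(s, a)] for s, bs in belief.items())
        for z in OBSERVATIONS:
            pz = obs_probability(belief, a, z)
            if pz > 0.0:
                value += pz * lower_bound(belief_update(belief, a, z), depth - 1)
        best = max(best, value)
    return best


if __name__ == "__main__":
    initial_belief = {"s0": 1.0}
    for k in (1, 3, 6):  # deeper unfoldings give tighter (larger) lower bounds
        print(f"depth {k}: lower bound = {lower_bound(initial_belief, k):.4f}")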