Forthcoming Papers

Artificial Intelligence (2003)
O. Madani, S. Hanks and A. Condon, On the undecidability of probabilistic planning and related stochastic optimization problems

Automated planning, the problem of how an agent achieves a goal given a repertoire of actions, is one of the foundational and most widely studied problems in the AI literature. The original formulation of the problem makes strong assumptions regarding the agent's knowledge and control over the world, namely that its information is complete and correct, and that the results of its actions are
deterministic and known. Recent research in planning under uncertainty has endeavored to relax these assumptions, providing formal and computational models wherein the agent has incomplete or noisy information about the world and has noisy sensors and effectors. This research has mainly taken one of two approaches: extend the classical planning paradigm to a semantics that admits uncertainty, or adopt another framework for approaching the problem, most commonly the Markov Decision Process (MDP) model. This paper presents a complexity analysis of planning under uncertainty. It begins with the "probabilistic classical planning" problem, showing that problem to be formally undecidable. This fundamental result is then applied to a broad class of stochastic optimization problems, in brief any problem statement where the agent (a) operates over an infinite or indefinite time horizon, and (b) has available only probabilistic information about the system's state. Undecidability is established for policy-existence problems for partially observable infinite-horizon Markov decision processes under discounted and undiscounted total reward models, average-reward models, and state-avoidance models. The results also apply to corresponding approximation problems with undiscounted objective functions. The paper answers a significant open question raised by Papadimitriou and Tsitsiklis [Math. Oper. Res. 12 (3) (1987) 441-450] about the complexity of infinite-horizon POMDPs. © 2003 Published by Elsevier Science B.V.

A. Cimatti, M. Pistore, M. Roveri and P. Traverso, Weak, strong, and strong cyclic planning via symbolic model checking

Planning in nondeterministic domains yields both conceptual and practical difficulties. From the conceptual point of view, different notions of planning problems can be devised: for instance, a plan might either guarantee goal achievement, or just have some chances of success. From the practical point of view, the problem is to devise algorithms that can effectively deal with large state spaces. In this paper, we tackle planning in nondeterministic domains by addressing conceptual and practical problems. We formally characterize different planning problems, where solutions have a chance of success …

M.L. Anderson, Embodied cognition: A field guide (Field Review)
R. Chrisley, Embodied artificial intelligence
M.L. Anderson, Representations, symbols, and embodiment
C.-J. Liau, Belief, information acquisition, and trust in multi-agent systems - A modal logic formulation
R. Ben-Eliyahu-Zohary, E. Gudes and G. Ianni, Metaqueries: Semantics, complexity, and efficient algorithms
C.B. Cross, Nonmonotonic inconsistency
M. Broxvall and P. Jonsson, Point algebras for temporal reasoning: Algorithms and complexity
P.E. Dunne and T.J.M. Bench-Capon, Two Party Immediate Response Disputes: Properties and efficiency
A.C.C. Say and H.L. Akın, Sound and complete qualitative simulation is impossible
C. Koch, N. Leone and G. Pfeifer, Enhancing disjunctive logic programming systems by SAT checkers
M. Dash and H. Liu, Consistency-based search in feature selection
J.P. Delgrande and T. Schaub, A consistency-based approach for belief change
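For orientation, the reward criteria named in the first abstract above admit a standard textbook formulation (supplied here for reference; it is not quoted from the paper): with per-step reward r_t under policy \pi and discount factor 0 \le \beta < 1, the discounted total, undiscounted total, and average-reward objectives are

\[
V_\beta(\pi) = \mathbb{E}_\pi\Big[\sum_{t=0}^{\infty} \beta^{t} r_t\Big],
\qquad
V(\pi) = \mathbb{E}_\pi\Big[\sum_{t=0}^{\infty} r_t\Big],
\qquad
\bar{V}(\pi) = \liminf_{N\to\infty} \frac{1}{N}\,\mathbb{E}_\pi\Big[\sum_{t=0}^{N-1} r_t\Big],
\]

and the policy-existence problems shown undecidable ask whether some policy for a given POMDP attains value at least a stated threshold.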
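The weak/strong distinction in the Cimatti et al. abstract can be illustrated with a small explicit-state sketch (hypothetical code, not the paper's BDD-based symbolic implementation): a strong plan is built by iterating "strong preimages" backwards from the goal, keeping only actions whose every possible outcome is already covered.

# Hypothetical explicit-state sketch of strong planning in a nondeterministic
# domain; plain Python sets stand in for the symbolic (BDD) state sets used in
# the paper. A strong plan guarantees goal achievement despite nondeterminism;
# a weak plan would only require some execution to succeed.

def strong_plan(states, actions, trans, init, goal):
    """trans(s, a) returns the set of possible successor states (may be empty)."""
    plan = {}                  # state -> action selected for that state
    covered = set(goal)        # states from which the goal is guaranteed
    while not set(init) <= covered:
        # Strong preimage: uncovered states with an action all of whose
        # possible outcomes already lie in the covered set.
        frontier = {}
        for s in states:
            if s in covered:
                continue
            for a in actions:
                succ = trans(s, a)
                if succ and succ <= covered:
                    frontier[s] = a
                    break
        if not frontier:       # fixpoint reached: no strong plan exists
            return None
        plan.update(frontier)
        covered |= frontier.keys()
    return plan

# Tiny example: from s0, "move" may slip back to s0, while "jump" surely
# reaches s1; only "jump" can appear in a strong plan for reaching s2.
def trans(s, a):
    table = {("s0", "move"): {"s0", "s1"},
             ("s0", "jump"): {"s1"},
             ("s1", "move"): {"s2"}}
    return table.get((s, a), set())

print(strong_plan({"s0", "s1", "s2"}, {"move", "jump"}, trans,
                  init={"s0"}, goal={"s2"}))
# -> {'s1': 'move', 's0': 'jump'}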
doi:10.1016/S0004-3702(03)00084-5