Decentralized control of partially observable Markov decision processes

Christopher Amato, Girish Chowdhary, Alborz Geramifard, N. Kemal Ure, Mykel J. Kochenderfer
2013 52nd IEEE Conference on Decision and Control  
Markov decision processes (MDPs) are often used to model sequential decision problems involving uncertainty under the assumption of centralized control. However, many large, distributed systems do not permit centralized control due to communication limitations (such as cost, latency, or corruption). This paper surveys recent work on decentralized control of MDPs in which control of each agent depends on a partial view of the world. We focus on a general framework where there may be uncertainty about the state of the environment, represented as a decentralized partially observable MDP (Dec-POMDP), but consider a number of subclasses with different assumptions about uncertainty and agent independence. In these models, a shared objective function is used, but plans of action must be based on a partial view of the environment. We describe the frameworks, along with the complexity of optimal control and important properties. We also provide an overview of exact and approximate solution methods as well as relevant applications. This survey provides an introduction to what has become an active area of research on these models and their solutions.
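The Dec-POMDP model the abstract describes is conventionally written as a tuple ⟨I, S, {A_i}, T, R, {Ω_i}, O⟩: a set of agents, a state space, per-agent action and observation sets, a joint transition function, a single shared reward, and a joint observation function. A minimal sketch of that structure in Python (the names, the toy "meet" problem, and the dictionary encoding are illustrative assumptions, not from the paper):

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

Joint = Tuple[str, ...]  # one entry per agent, in agent order

@dataclass
class DecPOMDP:
    """Standard Dec-POMDP tuple <I, S, {A_i}, T, R, {Omega_i}, O>."""
    agents: List[str]                                  # I: agent identifiers
    states: List[str]                                  # S: environment states
    actions: Dict[str, List[str]]                      # A_i: per-agent action sets
    transition: Dict[Tuple[str, Joint], Dict[str, float]]  # T(s' | s, a)
    reward: Dict[Tuple[str, Joint], float]             # R(s, a): one shared reward
    observations: Dict[str, List[str]]                 # Omega_i: per-agent observations
    obs_fn: Dict[Tuple[str, Joint], Dict[Joint, float]]    # O(o | s', a)

# Hypothetical two-agent "meet" problem: the team is rewarded only
# when both agents choose "move" from the "apart" state.
meet = DecPOMDP(
    agents=["a1", "a2"],
    states=["apart", "together"],
    actions={"a1": ["move", "stay"], "a2": ["move", "stay"]},
    transition={
        ("apart", ("move", "move")): {"together": 1.0},
        ("apart", ("move", "stay")): {"apart": 1.0},
        ("apart", ("stay", "move")): {"apart": 1.0},
        ("apart", ("stay", "stay")): {"apart": 1.0},
    },
    reward={
        ("apart", ("move", "move")): 1.0,
        ("apart", ("move", "stay")): 0.0,
        ("apart", ("stay", "move")): 0.0,
        ("apart", ("stay", "stay")): 0.0,
    },
    observations={"a1": ["near", "far"], "a2": ["near", "far"]},
    obs_fn={("together", ("move", "move")): {("near", "near"): 1.0}},
)
```

Note that the reward is a single team-level R(s, a), not one reward per agent: this shared objective, combined with each agent acting only on its own observation stream, is what distinguishes the Dec-POMDP from a collection of independent POMDPs.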
doi:10.1109/cdc.2013.6760239 dblp:conf/cdc/AmatoCGUK13