Achieving Goals in Decentralized POMDPs

Christopher Amato and Shlomo Zilberstein. Achieving Goals in Decentralized POMDPs. Proceedings of the Eighth International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 593-600, Budapest, Hungary, 2009.

Abstract

Coordination of multiple agents under uncertainty in the decentralized POMDP model is known to be NEXP-complete, even when the agents have a joint set of goals. Nevertheless, we show that the existence of goals can help develop effective planning algorithms. We examine an approach that models these problems as indefinite-horizon decentralized POMDPs, suitable for many practical problems that terminate after some unspecified number of steps. Our algorithm for solving these problems is optimal under two common assumptions: that terminal actions exist for each agent and that rewards for non-terminal actions are negative. We also propose an infinite-horizon approximation method that allows us to relax these assumptions while maintaining goal conditions. An optimality bound is developed for this sample-based approach, and experimental results show that it is able to exploit the goal structure effectively. Compared with the state of the art, our approach can solve larger problems and produce significantly better solutions.
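The two structural assumptions mentioned in the abstract, that every agent has a terminal action and that rewards for non-terminal actions are negative, lend themselves to a simple programmatic check. Below is a minimal Python sketch of such a check; the class and field names (GoalDecPOMDP, terminal_actions, and so on) are illustrative assumptions for this page, not identifiers from the paper.

from dataclasses import dataclass
from typing import Dict, FrozenSet, Tuple

# Hypothetical container for a Dec-POMDP with designated terminal actions.
# Agents are indexed 0..n-1; a joint action is a tuple with one entry per agent.
@dataclass
class GoalDecPOMDP:
    states: FrozenSet[str]
    actions: Dict[int, FrozenSet[str]]                  # per-agent action sets
    terminal_actions: Dict[int, FrozenSet[str]]         # per-agent terminal actions
    rewards: Dict[Tuple[str, Tuple[str, ...]], float]   # (state, joint action) -> reward

    def satisfies_indefinite_horizon_assumptions(self) -> bool:
        """Check the two assumptions under which the indefinite-horizon
        algorithm is stated to be optimal:
        (1) each agent has at least one terminal action, and
        (2) every reward for a non-terminal joint action is negative."""
        # (1) every agent owns at least one terminal action
        if not all(self.terminal_actions.get(i) for i in self.actions):
            return False
        # (2) rewards for joint actions that are not fully terminal must be negative
        for (_, joint_action), r in self.rewards.items():
            fully_terminal = all(
                a in self.terminal_actions[i] for i, a in enumerate(joint_action)
            )
            if not fully_terminal and r >= 0:
                return False
        return True

Intuitively, negative rewards for non-terminal actions push optimal policies toward eventually taking terminal actions, which is what makes the indefinite-horizon formulation well behaved; the paper's infinite-horizon approximation is described as relaxing these requirements.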

Bibtex entry:

@inproceedings{AZaamas09,
  author    = {Christopher Amato and Shlomo Zilberstein},
  title     = {Achieving Goals in Decentralized {POMDP}s},
  booktitle = {Proceedings of the Eighth International Conference on
               Autonomous Agents and Multiagent Systems},
  year      = {2009},
  pages     = {593--600},
  address   = {Budapest, Hungary},
  url       = {http://rbr.cs.umass.edu/shlomo/papers/AZaamas09.html}
}
