Monte-Carlo Expectation Maximization for Decentralized POMDPs
Feng Wu, Shlomo Zilberstein, and Nicholas R. Jennings. Monte-Carlo Expectation Maximization for Decentralized POMDPs. Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI), 397-403, Beijing, China, 2013.
Abstract
We address two significant drawbacks of state-of-the-art solvers of decentralized POMDPs (DEC-POMDPs): the reliance on complete knowledge of the model and limited scalability as the complexity of the domain grows. We extend a recently proposed approach for solving DEC-POMDPs via a reduction to the maximum likelihood problem, which in turn can be solved using EM. We introduce a model-free version of this approach that employs Monte-Carlo EM (MCEM). While a naive implementation of MCEM is inadequate in multi-agent settings, we introduce several improvements in sampling that produce high-quality results on a variety of DEC-POMDP benchmarks, including large problems with thousands of agents.
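For readers unfamiliar with Monte-Carlo EM, the sketch below illustrates the generic MCEM template the abstract refers to: the exact E-step is replaced by an expectation estimated from sampled latent variables, and the M-step re-estimates parameters from those sampled completions. This is only a toy illustration on a two-component Gaussian mixture, not the paper's DEC-POMDP algorithm; all variable names and the choice of model are assumptions made purely for exposition.

# Minimal Monte-Carlo EM sketch (illustrative only; not the paper's method).
# E-step: sample latent component assignments instead of computing them exactly.
# M-step: re-estimate component means from the sampled completions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a known two-component mixture (unit-variance components).
true_means = np.array([-2.0, 3.0])
z_true = rng.integers(0, 2, size=500)
data = rng.normal(true_means[z_true], 1.0)

means = np.array([0.0, 1.0])   # initial parameter guess
n_samples = 50                 # Monte-Carlo samples per E-step

for iteration in range(100):
    # Monte-Carlo E-step: sample assignments z from the posterior p(z | x, means).
    log_lik = -0.5 * (data[:, None] - means[None, :]) ** 2   # up to constants
    post = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)
    samples = rng.random((n_samples, data.size)) < post[:, 1]  # sampled z == 1

    # M-step: weighted mean update using the Monte-Carlo assignment frequencies.
    w1 = samples.mean(axis=0)
    w0 = 1.0 - w1
    means = np.array([
        np.sum(w0 * data) / np.sum(w0),
        np.sum(w1 * data) / np.sum(w1),
    ])

print("estimated means:", means)   # should approach roughly [-2, 3]

The paper builds on this template but replaces the latent variables with trajectories of a DEC-POMDP reformulated as a likelihood-maximization problem, and the sampling refinements it introduces are what make the approach viable in the multi-agent, model-free setting.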
Bibtex entry:
@inproceedings{WZJijcai13,
  author    = {Feng Wu and Shlomo Zilberstein and Nicholas R. Jennings},
  title     = {Monte-Carlo Expectation Maximization for Decentralized POMDPs},
  booktitle = {Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence},
  year      = {2013},
  pages     = {397--403},
  address   = {Beijing, China},
  url       = {http://rbr.cs.umass.edu/shlomo/papers/WZJijcai13.html}
}