Rollout Sampling Policy Iteration for Decentralized POMDPs
Feng Wu, Shlomo Zilberstein, and Xiaoping Chen. Rollout Sampling Policy Iteration for Decentralized POMDPs. Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI), 666-673, Catalina Island, California, 2010.
Abstract
We present decentralized rollout sampling policy iteration (DecRSPI), a new algorithm for multi-agent decision problems formalized as DEC-POMDPs. DecRSPI is designed to improve scalability and to tackle problems that lack an explicit model. The algorithm uses Monte-Carlo methods to generate a sample of reachable belief states, and then computes a joint policy for each belief state based on rollout estimations. A new policy representation allows solutions to be stored compactly. The key benefits of the algorithm are its time complexity, which is linear in the number of agents, its bounded memory usage, and its good solution quality. It can solve larger problems that are intractable for existing planning algorithms. Experimental results confirm the effectiveness and scalability of the approach.
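The core idea of rollout estimation mentioned in the abstract can be illustrated with a generic sketch: simulate taking a candidate joint action from a sampled state, then follow a fixed rollout policy for the remaining steps, and average the accumulated reward over many trials. This is a minimal, hypothetical illustration of Monte-Carlo rollout evaluation in general, not the DecRSPI algorithm itself; the names `sim_step` and `rollout_policy` are assumptions for the sketch.

```python
def rollout_estimate(sim_step, state, joint_action, rollout_policy,
                     horizon, num_rollouts=50):
    """Monte-Carlo rollout: estimate the value of executing `joint_action`
    in `state` and then following `rollout_policy` for `horizon` steps.

    `sim_step(state, joint_action)` is a generative model (simulator) that
    returns (next_state, joint_observation, reward); it supplies all the
    stochasticity, so no explicit DEC-POMDP model is required.
    """
    total = 0.0
    for _ in range(num_rollouts):
        s, ret = state, 0.0
        a = joint_action
        for t in range(horizon):
            s, _obs, reward = sim_step(s, a)
            ret += reward
            a = rollout_policy(s, t)  # heuristic policy for the remainder
        total += ret
    return total / num_rollouts


# Toy usage: a deterministic two-agent simulator with reward 1 per step,
# so a horizon-3 rollout should estimate a value of 3.0.
def sim_step(s, a):
    return s + 1, (0, 0), 1.0

policy = lambda s, t: (0, 0)
value = rollout_estimate(sim_step, 0, (0, 0), policy, horizon=3,
                         num_rollouts=10)
```

In the paper's setting, such estimates are computed per sampled belief state and per candidate joint policy, and the best-scoring choices are kept during policy iteration.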
Bibtex entry:
@inproceedings{WZCuai10,
  author    = {Feng Wu and Shlomo Zilberstein and Xiaoping Chen},
  title     = {Rollout Sampling Policy Iteration for Decentralized {POMDP}s},
  booktitle = {Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence},
  year      = {2010},
  pages     = {666--673},
  address   = {Catalina Island, California},
  url       = {http://rbr.cs.umass.edu/shlomo/papers/WZCuai10.html}
}