An Empirical Algorithm for Relative Value Iteration for Average-cost MDPs

A. Gupta, R. Jain, and P. W. Glynn

2015 IEEE 54th Annual Conference on Decision and Control (CDC), pp. 5079-5084

Infinite-horizon average-cost Markov decision processes (MDPs) are of interest in many scenarios. A dynamic programming algorithm, called relative value iteration, is used to compute the optimal value function. For large state spaces, this often becomes impractical due to its computational burden. We propose a simulation-based dynamic program called empirical relative value iteration (ERVI). The idea is very simple: replace the expectation in the Bellman operator with a sample-average estimate, and then use a projection step to ensure boundedness of the iterates. We establish that the ERVI iterates converge to the optimal value function in the span seminorm in probability as the number of samples taken goes to infinity. Simulation results show remarkably good performance even with a small number of samples.
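To make the idea concrete, here is a minimal sketch of an ERVI-style iteration for a finite MDP, assuming only sample access to the transition dynamics through a simulator. The function names, the `sample_next` interface, the reference state, and the choice of projection (clipping onto a sup-norm ball of radius `B`) are illustrative assumptions for this sketch, not the paper's exact construction.

```python
# Sketch of empirical relative value iteration (ERVI) for a finite MDP.
# Assumes sample access to the dynamics via sample_next(s, a, n), which
# returns n i.i.d. next-state samples drawn from P(. | s, a).
import numpy as np

def ervi(states, actions, cost, sample_next, n_samples,
         n_iters=100, ref_state=0, B=1e3):
    """cost[s, a]  : one-stage cost array
    sample_next    : sample_next(s, a, n) -> array of n next states
    n_samples      : samples per empirical expectation
    ref_state      : state whose value is subtracted each iteration
    B              : radius of the sup-norm ball used for projection
    """
    v = np.zeros(len(states))
    for _ in range(n_iters):
        v_new = np.empty_like(v)
        for s in states:
            q = np.empty(len(actions))
            for a in actions:
                # Replace E[v(s')] in the Bellman operator
                # with a sample-average estimate.
                nxt = sample_next(s, a, n_samples)
                q[a] = cost[s, a] + v[nxt].mean()
            v_new[s] = q.min()
        # Relative step: subtract the value at a fixed reference state.
        v_new -= v_new[ref_state]
        # Projection step: keep the iterates bounded (one simple choice).
        v = np.clip(v_new, -B, B)
    return v
```

A toy usage example, where a known two-state kernel drives the simulator purely for illustration (in practice only sample access is required):

```python
rng = np.random.default_rng(0)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # P[s, a, s']
              [[0.5, 0.5], [0.1, 0.9]]])
cost = np.array([[1.0, 2.0], [0.5, 3.0]])

def sample_next(s, a, n):
    return rng.choice(2, size=n, p=P[s, a])

v = ervi(states=range(2), actions=range(2), cost=cost,
         sample_next=sample_next, n_samples=50)
print(v)  # relative value estimates; v[ref_state] == 0
```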