Stochastic Optimization by Simulation: Convergence Proofs for the GI/G/1 Queue in Steady-State
P. L’Ecuyer and P. W. Glynn
Management Science, Vol. 40, 1562-1578 (1994)
Approaches like finite differences with common random numbers, infinitesimal perturbation analysis, and the likelihood ratio method have recently drawn a great deal of attention as ways of estimating the gradient of a performance measure with respect to continuous parameters in dynamic stochastic systems. In this paper, we study the use of such estimators in stochastic approximation algorithms to perform so-called 'single-run optimizations' of steady-state systems. Under mild conditions, for an objective function that involves the mean system time in a GI/G/1 queue, we prove that many variants of these algorithms converge to the minimizer. In most cases, however, the simulation length must be increased from iteration to iteration; otherwise, the algorithm may converge to the wrong value. One exception is a particular implementation of infinitesimal perturbation analysis, for which the single-run optimization converges to the optimum even with a fixed (and small) number of ends of service per iteration. As a by-product of our convergence proofs, we obtain some properties of the derivative estimators that could be of independent interest. Our analysis exploits the regenerative structure of the system, but our derivative estimation and optimization algorithms do not always take advantage of that structure. In a companion paper, we report numerical experiments with an M/M/1 queue, which illustrate the basic convergence properties and possible pitfalls of the various techniques.
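As a rough illustration (a sketch, not the paper's own algorithm), a single-run optimization of the kind discussed above might pair a projected Robbins-Monro update with an infinitesimal perturbation analysis (IPA) derivative estimate computed along the sample path via the Lindley recursion. The objective f(theta) = E[T(theta)] + c/theta, the step sizes, the projection interval, and all parameter values below are hypothetical choices for demonstration only.

```python
import random

def ipa_gradient(theta, num_services, rng, arrival_rate=1.0):
    """Simulate num_services ends of service of a single-server queue with
    the Lindley recursion and return the IPA estimate of d E[T]/d theta,
    where T is system time and service times are S = theta * X, X ~ Exp(1)."""
    W, dW = 0.0, 0.0          # waiting time of current customer and dW/dtheta
    grad_sum = 0.0
    for _ in range(num_services):
        X = rng.expovariate(1.0)           # unit-mean service requirement
        S = theta * X                      # service time; dS/dtheta = X
        grad_sum += dW + X                 # IPA derivative of T = W + S
        A = rng.expovariate(arrival_rate)  # next interarrival time
        if W + S - A > 0.0:
            W, dW = W + S - A, dW + X      # busy period continues
        else:
            W, dW = 0.0, 0.0               # idle period: derivative resets
    return grad_sum / num_services

def single_run_sa(iterations=500, theta0=0.5, cost=10.0, seed=1):
    """Robbins-Monro iteration on f(theta) = E[T(theta)] + cost/theta,
    projected onto [0.05, 0.95] to keep the queue stable (rho < 1).
    The queue state persists across iterations -- a 'single run'."""
    rng = random.Random(seed)
    theta = theta0
    for n in range(1, iterations + 1):
        g = ipa_gradient(theta, 20, rng) - cost / theta ** 2
        theta -= (0.5 / (n + 10.0)) * g    # decreasing step sizes
        theta = min(max(theta, 0.05), 0.95)  # projection step
    return theta
```

With unit-rate Poisson arrivals this sketch is an M/M/1 queue with rho = theta, so E[T] = theta/(1 - theta) in steady state; the hypothetical objective then has its minimum at theta* = sqrt(10)/(1 + sqrt(10)), roughly 0.76, which offers a sanity check on the iterates.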