Adaptive Spectral Methods for Simulation Output Analysis
by P. Heidelberger and P. D. Welch
This paper addresses two central problems in simulation methodology: the generation of confidence intervals for the steady-state means of the output sequences and the sequential use of these confidence intervals to control the run length. The variance of the sample mean of a covariance-stationary process is given approximately by p(0)/N, where p(f) is the spectral density at frequency f and N is the sample size. In an earlier paper we developed a method of confidence interval generation based on estimating p(0) through a least-squares fit of a quadratic to the logarithm of the periodogram. This method was applied in a run-length control procedure to a sequence of batched means. As the run length increased, the batch means were rebatched into larger batch sizes so as to limit storage requirements. Under this rebatching the shape of the spectral density of the batch means changes, gradually flattening as N increases. Quadratics were chosen as a compromise between small-sample bias and large-sample stability.
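The core idea above can be illustrated with a minimal sketch: estimate p(0) by fitting a quadratic to the log periodogram at the lowest frequencies, then form a confidence interval for the mean using var(mean) ≈ p(0)/N. This is a simplified illustration only; the paper's actual procedure (periodogram averaging, log-periodogram bias corrections, batching and rebatching logic) is omitted, and the function names and the choice of 25 frequencies are assumptions for the example.

```python
import numpy as np

def estimate_p0(x, n_freq=25):
    """Estimate the spectral density at frequency 0 by fitting a
    quadratic to the log periodogram at the lowest Fourier
    frequencies. A simplified sketch, not the paper's exact method."""
    N = len(x)
    x = x - x.mean()
    dft = np.fft.fft(x)
    j = np.arange(1, n_freq + 1)
    I = (np.abs(dft[j]) ** 2) / N      # periodogram at f_j = j/N
    f = j / N
    # Least-squares quadratic fit to log I(f); its value at f = 0
    # estimates log p(0).
    coeffs = np.polyfit(f, np.log(I), 2)
    return np.exp(np.polyval(coeffs, 0.0))

def mean_confidence_interval(x, z=1.96):
    """Approximate CI for the steady-state mean via var(mean) ~ p(0)/N."""
    N = len(x)
    half = z * np.sqrt(estimate_p0(x) / N)
    return x.mean() - half, x.mean() + half
```

For independent data the spectral density is flat, so the quadratic fit should return roughly the process variance; for autocorrelated data p(0) inflates the interval relative to the naive s²/N formula, which is the point of the spectral approach.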
In this paper we consider smoothing techniques that adapt to the changing spectral shape in an attempt to improve both the small-sample and large-sample behavior of the method. The techniques considered are polynomial smoothing with the degree selected sequentially using standard regression statistics, polynomial smoothing with the degree selected by cross validation, and smoothing splines with the amount of smoothing determined by cross validation. These techniques were evaluated empirically, both for fixed sample sizes and when incorporated into the sequential run-length control procedure. For fixed sample sizes they did not improve the small-sample behavior and only marginally improved the large-sample behavior relative to the quadratic method. Their performance in the sequential procedure was unsatisfactory. Hence, the straightforward quadratic technique recommended in the earlier paper remains our recommendation as an effective, practical technique for simulation confidence interval generation and run-length control.
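One of the adaptive alternatives described above, selecting the polynomial degree by cross validation, can be sketched as leave-one-out cross validation over candidate degrees on the (frequency, log-periodogram) pairs. The helper names and the degree range are hypothetical; the paper's sequential regression-statistic selection and smoothing-spline variants are not shown.

```python
import numpy as np

def loocv_score(f, y, degree):
    """Leave-one-out cross-validation error for a polynomial fit of
    the given degree to points (f, y), e.g. log-periodogram values."""
    errs = []
    for i in range(len(f)):
        mask = np.arange(len(f)) != i          # hold out point i
        c = np.polyfit(f[mask], y[mask], degree)
        errs.append((y[i] - np.polyval(c, f[i])) ** 2)
    return float(np.mean(errs))

def select_degree(f, y, max_degree=6):
    """Pick the polynomial degree minimizing the LOOCV error."""
    scores = [loocv_score(f, y, d) for d in range(1, max_degree + 1)]
    return 1 + int(np.argmin(scores))
```

The appeal of such a rule is that it lets the data choose a flatter fit as rebatching whitens the batch-means spectrum; the empirical finding reported above, however, is that this adaptivity did not pay off in practice.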