A common measure of the (asymptotic) optimality of a rare-event simulation estimator is the ratio of its second moment to its first moment on a logarithmic scale. The plain Monte Carlo estimator achieves a ratio of 1, whereas an efficient importance sampling scheme can achieve the maximal ratio of 2. The standard approach to selecting an optimal importance sampling measure relies on large deviations theory: large deviation techniques are employed to derive upper and lower bounds on the first and second moments of the importance sampling estimator. This approach requires the development of an appropriate large deviations principle and is therefore often too difficult to carry out in practical problems.
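To make the criterion concrete (the notation $p_n$, $\hat{p}_n$, $\rho_n$ below is illustrative and not taken from the original text): suppose $p_n = P(A_n) \to 0$ is the rare-event probability of interest and $\hat{p}_n$ is an unbiased single-sample estimator of it. The criterion compares moments on a logarithmic scale,
\[
  \rho_n \;=\; \frac{\log E\bigl[\hat{p}_n^{\,2}\bigr]}{\log E\bigl[\hat{p}_n\bigr]} \;=\; \frac{\log E\bigl[\hat{p}_n^{\,2}\bigr]}{\log p_n}.
\]
For plain Monte Carlo, $\hat{p}_n = \mathbf{1}_{A_n}$ gives $E[\hat{p}_n^{\,2}] = p_n$ and hence $\rho_n = 1$. By Jensen's inequality $E[\hat{p}_n^{\,2}] \ge p_n^2$, so (since $\log p_n < 0$) one always has $\rho_n \le 2$; an estimator is asymptotically optimal when $\lim_{n\to\infty} \rho_n = 2$.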
We develop a new technique for proving optimality properties of rare-event simulation estimators based on importance sampling. Our methods rest on the weak convergence approach to large deviations theory. In particular, we observe that the efficiency of a rare-event simulation estimator can be analyzed along convergent subsequences within a sufficiently large compact set; characterizing the behavior of the estimator on each such subsequence is simpler than under the standard approach described above. The advantages of the proposed program are as follows. (1) It fully leverages the power of large deviations theory for the efficiency analysis of rare-event estimators. (2) It facilitates the treatment of importance sampling estimators that are not derived from the classical exponential change of measure. (3) It characterizes suboptimal estimators in a way that quantifies the degree of suboptimality. We illustrate the approach on classical rare-event simulation problems involving random walks, as well as on more complex problems involving marked point processes.
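For reference, a textbook instance of the classical exponential change of measure in the random walk setting (a standard sketch; the symbols $\Lambda$, $\theta^*$, $I$, and $a$ are our notation and do not appear in the original): let $X_1, X_2, \ldots$ be i.i.d. with cumulant generating function $\Lambda(\theta) = \log E[e^{\theta X_1}] < \infty$, set $S_n = X_1 + \cdots + X_n$, and consider $p_n = P(S_n/n \ge a)$ for $a > E[X_1]$. Sampling the increments from the tilted law $dP_\theta/dP(x) = e^{\theta x - \Lambda(\theta)}$ yields the unbiased estimator
\[
  \hat{p}_n \;=\; \mathbf{1}\{S_n/n \ge a\}\, e^{-\theta S_n + n\Lambda(\theta)},
\]
computed under $P_\theta$. Choosing $\theta^*$ with $\Lambda'(\theta^*) = a$ gives, on the event $\{S_n/n \ge a\}$, the bound $e^{-\theta^* S_n + n\Lambda(\theta^*)} \le e^{-nI(a)}$ with $I(a) = \theta^* a - \Lambda(\theta^*)$, so the second moment decays like $e^{-2nI(a)}$ and the scheme is asymptotically optimal in the sense above, by Cramér's theorem. Estimators not of this exponential form are precisely where the subsequence-based analysis is intended to help.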