Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to approximate unknown quantities. They are well suited to implementation on a computer and are typically used when exact results from a deterministic algorithm are unavailable.
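As a minimal illustration of the idea, the following sketch estimates π by sampling points uniformly in the unit square and counting the fraction that land inside the quarter circle; the function name and sample count are ours, chosen only for this example.

```python
import random

def mc_pi(n_samples: int, seed: int = 0) -> float:
    """Estimate pi via repeated random sampling: the fraction of
    uniform points in the unit square that fall inside the quarter
    circle approximates pi/4."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

estimate = mc_pi(100_000)  # approaches 3.14159... as n_samples grows
```

The estimate is random, but by the law of large numbers it converges to the true value, with an error that shrinks at the canonical Monte Carlo rate of O(1/√n).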
The Monte Carlo method was developed in the 1940s by John von Neumann, Stanislaw Ulam, and Nicholas Metropolis while they were working on the Manhattan Project at Los Alamos National Laboratory. It was named after the Monte Carlo Casino in Monaco, where Ulam's uncle often gambled away his money.
In this book we deal mainly with two well-known Monte Carlo methods, importance sampling and splitting, and in particular with their applications to combinatorial optimization, counting, and the estimation of rare-event probabilities.
Importance sampling is a well-known variance reduction technique in stochastic simulation studies. The idea behind importance sampling is that certain values of the input random variables have a greater impact on the output parameters than others. If these “important” values are sampled more frequently, the variance of the output estimator can be reduced. However, sampling directly from such an alternative distribution results in a biased estimator. To eliminate the bias, the simulation output must be modified (weighted) by a likelihood ratio factor, also called the Radon-Nikodym derivative ...
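A minimal sketch of this weighting, under assumptions of our own choosing: we estimate the rare-event probability P(X > 4) for X ~ N(0, 1) by sampling from the shifted proposal N(4, 1), which places most samples in the "important" region, and then multiplying each sample's contribution by the likelihood ratio f(x)/g(x) of the original density to the proposal density to remove the bias.

```python
import math
import random

def is_tail_prob(threshold: float, n: int, seed: int = 0) -> float:
    """Estimate P(X > threshold) for X ~ N(0,1) by importance sampling.

    Samples are drawn from the shifted proposal g = N(threshold, 1);
    each sample in the event is weighted by the likelihood ratio
    f(x)/g(x) = exp(-x^2/2) / exp(-(x - threshold)^2/2).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)  # draw from the proposal g
        if x > threshold:
            # likelihood ratio factor (Radon-Nikodym derivative f/g)
            w = math.exp(-x * x / 2.0 + (x - threshold) ** 2 / 2.0)
            total += w
    return total / n

exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # true tail probability
estimate = is_tail_prob(4.0, 100_000)
```

With the naive estimator, a sample of 100,000 standard normal draws would contain only a handful of exceedances of 4 (the event has probability about 3×10⁻⁵), whereas the weighted estimator concentrates sampling effort where it matters and attains a small relative error with the same budget.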