As we saw in section 1.6, theorems 1.8 and 1.9 form the basis for statistical methods and are crucial to the validity of Monte-Carlo methods. These theorems set out the way in which empirical means converge toward statistical moments. Since a statistical moment is defined as the integral of a certain function, these theorems state, in a sense, that such an integral can be approximated by an empirical mean computed from random (or pseudo-random) sequences. From these two theorems, we see that the convergence as a function of the number *N* of samples is of the order of *N*^{−1/2}. It is therefore interesting to compare this rate with those obtained using deterministic numerical methods, such as the trapezoid method or Simpson’s method. A deterministic method of this kind has a convergence speed of the order of *N*^{−2/d}, where *d* is the dimension of the space over which the function to integrate is defined. Consequently, Monte-Carlo methods present two advantages over deterministic methods: (i) the convergence speed does not depend on the dimension *d*, and (ii) their use does not depend on the regularity of the function being integrated.
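The comparison above can be illustrated numerically. The following sketch (not from the original text; the integrand *x*² over [0, 1] is chosen only because its exact integral, 1/3, is known) estimates the same one-dimensional integral with both a Monte-Carlo mean and the composite trapezoid rule, so that the two errors can be observed side by side. In dimension *d* = 1 the trapezoid rule converges much faster (*N*^{−2} versus *N*^{−1/2}); the advantage of Monte-Carlo appears only as *d* grows.

```python
import random

def mc_integrate(f, n, seed=0):
    """Monte-Carlo estimate of the integral of f over [0, 1]:
    the empirical mean of f at n uniform random points."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

def trapezoid_integrate(f, n):
    """Composite trapezoid rule on [0, 1] with n subintervals."""
    h = 1.0 / n
    interior = sum(f(i * h) for i in range(1, n))
    return h * (0.5 * (f(0.0) + f(1.0)) + interior)

# Test integrand with a known integral: the integral of x^2 over [0, 1] is 1/3.
f = lambda x: x * x
exact = 1.0 / 3.0

for n in (100, 10_000):
    err_mc = abs(mc_integrate(f, n) - exact)
    err_tr = abs(trapezoid_integrate(f, n) - exact)
    print(f"N={n:>6}  Monte-Carlo error={err_mc:.2e}  trapezoid error={err_tr:.2e}")
```

Running this shows the trapezoid error shrinking by a factor of about 10⁴ when *N* goes from 100 to 10,000 (the *N*^{−2} rate in one dimension), while the Monte-Carlo error improves only by roughly a factor of 10 (the *N*^{−1/2} rate); in high dimension *d*, the deterministic rate *N*^{−2/d} degrades while the Monte-Carlo rate is unchanged.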

In less formal terms, the trapezoidal method can be seen as using a grid with a large number of points, many of which have a negligible effect on the calculated value of the integral; with the Monte-Carlo method, only the significant values are used. There is, however, one major ...
