Our journey is drawing to a close. Hopefully, by now you have a good grasp of R and appreciate how useful a tool it is for understanding probability.
Along the way, some important probability distributions have been introduced. In the discrete case, we examined the geometric, hypergeometric, binomial, and Poisson distributions, and, in the continuous case, the uniform, exponential, and normal distributions. With R, we were able to obtain probabilities, examine upper and lower bounds, calculate quantiles, and generate random numbers from these distributions using:
- the “d” function to obtain the density or point probabilities;
- the “p” function to calculate the cumulative probabilities or the distribution function;
- the “q” function to calculate the quantiles;
- the “r” function to generate (pseudo)random numbers.
Recall the R names for these functions: dbinom, pbinom, qbinom, and rbinom for the binomial, for example, and dnorm, pnorm, qnorm, and rnorm for the normal.
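As a quick refresher, the following sketch applies the four functions to the binomial and normal distributions; the particular parameter values (a Binomial(10, 0.5) and the standard normal) are chosen here purely for illustration.

```r
# Binomial(size = 10, prob = 0.5): the four distribution functions
dbinom(4, size = 10, prob = 0.5)     # point probability P(X = 4)
pbinom(4, size = 10, prob = 0.5)     # cumulative probability P(X <= 4)
qbinom(0.95, size = 10, prob = 0.5)  # smallest x with P(X <= x) >= 0.95
rbinom(3, size = 10, prob = 0.5)     # three pseudorandom values

# Standard normal: the same naming pattern
dnorm(1.96)    # density at 1.96
pnorm(1.96)    # cumulative probability P(X <= 1.96)
qnorm(0.975)   # 0.975 quantile
rnorm(3)       # three pseudorandom values
```

The same d/p/q/r pattern applies to every distribution met in earlier chapters, e.g. dpois/ppois/qpois/rpois for the Poisson and dexp/pexp/qexp/rexp for the exponential.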
Up to now, probabilities have been arrived at by assuming that the random variable follows a particular distribution, or by examining past data to obtain an approximate model. But what if the distribution of the random variable is unknown and there are no past data? In this, the final chapter, we introduce tools for estimating tail probabilities when the information on the random variable is limited: the Markov and Chebyshev inequalities.
20.1 Markov's Inequality
The Markov inequality ...