Appendix I: Review of Some Probability and Statistics Concepts
Probability Concepts
Probability begins with the ideas of “sample space” and “experiment.” An experiment is the observation of some phenomenon whose result cannot be perfectly predicted a priori. A sample space is the collection of all possible results (called outcomes) from an experiment. Thus, an experiment can be thought of as the observation of a result taken from a sample space. These circular definitions may be a little annoying and somewhat baffling, but they are easily illustrated. If the experiment is to observe which face of a six-sided die lands up after throwing it across a gaming table, then the sample space consists of six elements, namely, the arrays of one, two, three, four, five, or six dots, as they are typically arranged on the faces of a six-sided die. Sample spaces need not be so discrete or finite; they can be continuous and infinite, in that they can have an infinite number of outcomes. For example, if a sample space consists of all possible initial voltages generated by LiI batteries made in a battery manufacturing plant, then it would have an infinite (albeit bounded) number of possible outcomes.
A random variable is a mapping from a sample space into (usually) some subset of the real numbers (possibly over the entire real line). Think of the random variable as a “measurement” taken after the experiment is performed. Thus, the number of dots in the array showing after the die is cast, or the voltage as measured by a voltmeter, would be random variables. There are two basic classes of random variables, discrete and continuous. Discrete random variables are mapped from the sample space to a subset of the integers, and continuous random variables are mapped to subsets of the real numbers. The die example is discrete, and the voltage example is continuous.
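To make the mapping concrete, here is a minimal sketch in Python (the code and names are our own illustration; they do not appear in the text). The sample space is the set of die faces, and the random variable X maps each outcome to the number of dots showing:

    import random

    # Sample space: the six faces of the die; each face is an outcome.
    sample_space = ["one", "two", "three", "four", "five", "six"]

    # The random variable X is a mapping from outcomes to integers.
    X = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5, "six": 6}

    outcome = random.choice(sample_space)  # perform the experiment
    value = X[outcome]                     # "measure" the random variable
    print(outcome, value)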
Every random variable has a probability distribution function that
describes the chances of observing particular ranges of values for the random
variable. In the case of discrete random variables, it also makes sense to talk
about the probability of an experiment resulting in a particular value, for
example, the probability that the number of dots in the die array showing is
four. For continuous variables, it makes sense to talk about the probability of
obtaining a value in a “small” range, but the probability of obtaining a par-
ticular value is zero. This is not to say that particular values of continuous
random variables are never observed or measured; it just means that we do not have the ability to predict a particular value with any non-zero probability.
A probability distribution function describes the probability that a random
variable is less than or equal to a particular value. We will use capital
letters to represent the random variable, and lowercase letters to represent
particular values. If X is a random variable, then the probability function for
X is symbolized as
$$F_X(x) = \Pr\{X \le x\}.$$
In the case of discrete random variables, this function is a sum of probabili-
ties for particular values, p(k), up to and including the value x:
$$F_X(x) = \sum_{k \le x} p(k).$$
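For example, for a fair six-sided die, $p(k) = 1/6$ for $k = 1, 2, \ldots, 6$, so the probability that the number of dots showing is four or fewer is

$$F_X(4) = \sum_{k \le 4} p(k) = \frac{4}{6} = \frac{2}{3}.$$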
The function p(k) is referred to as the probability mass function. In the case
of continuous random variables, the summation is replaced with an integral,
and the discrete probability mass function is replaced with something called
a probability density function (usually; there are some more or less degen-
erate cases where a density function does not exist), f(x), which defines the
probability that the random variable would have values observed in a small
interval, dx:
$$f_X(x)\,dx = \Pr\{x - dx \le X \le x + dx\}.$$
So the probability distribution function is:
$$F_X(x) = \int_{-\infty}^{x} f_X(\xi)\,d\xi.$$
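As an illustration (the exponential density is our example here, not one developed in the text), if $f_X(\xi) = \lambda e^{-\lambda \xi}$ for $\xi \ge 0$ and zero otherwise, then

$$F_X(x) = \int_{0}^{x} \lambda e^{-\lambda \xi}\,d\xi = 1 - e^{-\lambda x}, \qquad x \ge 0.$$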
In general, the probability mass functions and density functions are
dened in terms of parameters that give these functions their particular
characteristics. This book involves several special classes of density func-
tions and their associated parameters.
There are some special characteristics of random variables called moments.
We will only be concerned with two such characteristics, called expecta-
tion (or mean) and variance (and its square root, called standard deviation).
Theexpectation of a random variable is given by:
$$\mu = \sum_{k} x_k\, p(x_k) \quad \text{(discrete case)} \qquad \text{or} \qquad \mu = \int_{-\infty}^{+\infty} \xi\, f(\xi)\,d\xi \quad \text{(continuous case)}.$$
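For example, for the fair six-sided die, $\mu = \sum_{k=1}^{6} k \cdot \frac{1}{6} = 3.5$, a value that is the long-run average of many throws even though it can never be observed on any single throw.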