8 Interval Estimation

CONTENTS
8.1 Confidence Intervals
8.2 Bayesian Estimation
8.3 Bayesian Confidence Intervals
8.4 Chapter Summary
8.5 Review Questions
8.6 Numerical Exercises
As we discussed when we talked about continuous distribution functions, the probability of any specific number under a continuous distribution is zero. Thus, if we conceptualize any estimator, either a nonparametric estimate of the mean or a parametric estimate of a function, the probability that the true value is equal to the estimated value is zero. For this reason, we usually talk about estimated values in terms of confidence intervals. As in the case when we discussed the probability of a continuous variable, we define some range of outcomes. However, this time we usually work the other way around, first fixing a confidence level and then stating the range of values that attains it.
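The point-probability argument above can be checked numerically. The following sketch (not from the text) builds the standard normal CDF from the standard library's `math.erf` and shows that a single point carries zero probability while an interval does not.

```python
# Sketch: under a continuous distribution, a single point has
# probability zero, so we make statements about intervals instead.
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF, Phi(x) = P(X <= x)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# "Probability" of the single point x = 1 is a zero-width interval:
print(Phi(1.0) - Phi(1.0))    # 0.0

# An interval of outcomes carries positive probability:
print(Phi(1.0) - Phi(-1.0))   # roughly 0.6827
```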
8.1 Conﬁdence Intervals
Amemiya [1, p. 160] notes a difference between confidence and probability. Most troubling is the temptation to read a confidence statement as "a probabilistic statement involving parameters." This is troublesome due to our inability, without additional (Bayesian) structure, to assign parameters probabilities: in the classical framework the parameter is a fixed constant, not a random variable.
Example 8.1. Let $X_i$ be distributed Bernoulli, $i = 1, 2, \cdots, N$. Then
$$
T = \bar{X} \overset{A}{\sim} N\left( \theta, \frac{\theta(1-\theta)}{N} \right). \tag{8.1}
$$
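A quick simulation can illustrate (8.1). This sketch is not part of the text; the parameter, sample size, and seed are arbitrary illustration values. It draws repeated Bernoulli samples with NumPy and compares the mean and variance of $T = \bar{X}$ with $\theta$ and $\theta(1-\theta)/N$.

```python
# Sketch: checking the asymptotic distribution in (8.1) by simulation.
import numpy as np

rng = np.random.default_rng(42)        # arbitrary seed
theta, N, reps = 0.3, 500, 10_000      # arbitrary illustration values

# Each row is one sample of N Bernoulli(theta) draws; T is the sample mean.
T = rng.binomial(1, theta, size=(reps, N)).mean(axis=1)

print(T.mean())   # close to theta = 0.3
print(T.var())    # close to theta * (1 - theta) / N = 0.00042
```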
184 Mathematical Statistics for Applied Econometrics
TABLE 8.1
Confidence Levels

   k       γ/2      γ
1.0000   0.1587   0.3173
1.5000   0.0668   0.1336
1.6449   0.0500   0.1000
1.7500   0.0401   0.0801
1.9600   0.0250   0.0500
2.0000   0.0228   0.0455
2.3263   0.0100   0.0200
Breaking this down a little more, we will construct the estimate of the Bernoulli parameter as
$$
T = \bar{X} = \frac{1}{N} \sum_{i=1}^{N} X_i \tag{8.2}
$$
where $T = \hat{\theta}$. If the $X_i$ are independent, then
$$
V(T) = \frac{1}{N} V(X_i) = \frac{\theta(1-\theta)}{N}. \tag{8.3}
$$
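As a concrete sketch of (8.2) and (8.3), the estimator and its variance are one-liners; the data below are hypothetical.

```python
# Sketch: the sample-mean estimator (8.2) and its variance (8.3)
# for independent Bernoulli draws.
def theta_hat(x):
    """T = X-bar = (1/N) * sum of the draws, as in (8.2)."""
    return sum(x) / len(x)

def var_T(theta, N):
    """V(T) = theta * (1 - theta) / N, as in (8.3)."""
    return theta * (1.0 - theta) / N

x = [1, 0, 1, 1, 0, 0, 1, 0]          # hypothetical draws
print(theta_hat(x))                    # 0.5
print(var_T(theta_hat(x), len(x)))     # 0.03125
```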
Therefore, we can construct a random variable $Z$ as the standardized difference between the observed estimate $T$ and the true value of the parameter $\theta$:
$$
Z = \frac{T - \theta}{\sqrt{\theta(1-\theta)/N}} \overset{A}{\sim} N(0, 1). \tag{8.4}
$$
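Continuing the simulation idea, this sketch (with arbitrary illustration values) standardizes simulated sample means as in (8.4) and checks that $Z$ looks approximately standard normal.

```python
# Sketch: the standardized statistic Z of (8.4) should be
# approximately N(0, 1) by the Central Limit Theorem.
import numpy as np

rng = np.random.default_rng(0)         # arbitrary seed
theta, N, reps = 0.4, 500, 10_000      # arbitrary illustration values

T = rng.binomial(1, theta, size=(reps, N)).mean(axis=1)
Z = (T - theta) / np.sqrt(theta * (1.0 - theta) / N)

print(Z.mean())   # close to 0
print(Z.std())    # close to 1
```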
Why? By the Central Limit Theorem. Given this distribution, we can ask questions about probabilities. Specifically, we know that if $Z$ is distributed $N(0, 1)$, then we can define
$$
\gamma_k = P(|Z| \geq k). \tag{8.5}
$$
Essentially, we can either choose a $k$ based on a target probability or compute the probability implied by our choice of $k$. The one-tailed ($\gamma/2$) and two-tailed ($\gamma$) probabilities for the standard normal distribution are presented in Table 8.1.
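Table 8.1 can be reproduced directly from the standard normal distribution. The sketch below (not from the text) uses only the standard library, via the identity $P(Z \geq k) = \tfrac{1}{2}\,\mathrm{erfc}(k/\sqrt{2})$.

```python
# Sketch: recomputing the entries of Table 8.1.
from math import erfc, sqrt

def upper_tail(k):
    """P(Z >= k) for Z ~ N(0, 1): the gamma/2 column of Table 8.1."""
    return 0.5 * erfc(k / sqrt(2.0))

print("k       gamma/2  gamma")
for k in (1.0, 1.5, 1.6449, 1.75, 1.96, 2.0, 2.3263):
    print(f"{k:6.4f}  {upper_tail(k):.4f}   {2 * upper_tail(k):.4f}")
```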
Taking a fairly standard example, suppose that we want to choose a $k$ such that $\gamma/2 = 0.025$; that is, we want to determine the value of $k$ such that the probability is 0.05 that the true value of $\theta$ will lie outside the range. The value of $k$ for this choice is 1.96. This example is comparable to the standard introductory example of a 0.95 confidence level.
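Putting the pieces together, here is a sketch of the standard 0.95 interval. The data are hypothetical, and the estimate $\hat{\theta}$ is substituted for $\theta$ in the standard error, a common practice.

```python
# Sketch: a 0.95 confidence interval for the Bernoulli parameter,
# using k = 1.96 and the variance from (8.3).
from math import sqrt

x = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0] * 10   # 100 hypothetical draws
N = len(x)
t = sum(x) / N                  # theta-hat = 0.6
se = sqrt(t * (1.0 - t) / N)    # estimated standard error
lo, hi = t - 1.96 * se, t + 1.96 * se

print(round(lo, 4), round(hi, 4))   # roughly 0.504 and 0.696
```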
The values of $\gamma_k$ can be derived from the standard normal table as
$$
P\left( \frac{|T - \theta|}{\sqrt{\theta(1-\theta)/N}} \geq k \right) = \gamma_k. \tag{8.6}
$$
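Finally, (8.6) can be checked by simulation. This sketch (arbitrary illustration values) verifies that the fraction of samples whose standardized deviation is at least $k = 1.96$ is close to $\gamma = 0.05$.

```python
# Sketch: checking (8.6) by simulation for k = 1.96.
import numpy as np

rng = np.random.default_rng(7)               # arbitrary seed
theta, N, reps, k = 0.3, 400, 20_000, 1.96   # arbitrary values

T = rng.binomial(1, theta, size=(reps, N)).mean(axis=1)
Z = np.abs(T - theta) / np.sqrt(theta * (1.0 - theta) / N)

print((Z >= k).mean())   # close to gamma = 0.05
```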
