2 Point Estimation
In this chapter we consider so-called point estimation. The problem can be described as follows. Let the distribution Pθ of a random variable y belong to a family P = (Pθ, θ ∈ Ω), Ω ⊆ R^p, p ≥ 1. With the help of a realisation Y = (Y1, Y2, …, Yn)^T of a random sample y = (y1, y2, …, yn)^T, n ≥ 1, a statement is to be given concerning the value ψ = g(θ) ∈ Z of a prescribed real function g; often g(θ) = θ. Obviously the statement about g(θ) should be as precise as possible; what this means precisely depends on the choice of the loss function defined in Section 1.5. We define a statistic M(y), the estimator, taking the value M(Y) for y = Y, where M(Y) is called the estimate of ψ = g(θ).
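For illustration, suppose, for example, that P is the family of normal distributions N(μ, σ²) with θ = (μ, σ²)^T, Ω = R × R⁺ (so p = 2), and that g(θ) = μ with Z = R; this normal example is only a sketch and is used again below. One natural choice of statistic is the sample mean,

\[
M(y) \;=\; \bar{y} \;=\; \frac{1}{n}\sum_{i=1}^{n} y_i ,
\]

so that for the realisation Y = (Y1, …, Yn)^T the estimate of ψ = μ is the number M(Y) = (1/n) Σ Yi.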
The notation ‘point estimation’ reflects the fact that each realisation M(Y) of the estimator M(y) defines a point in the space Z of possible values of g(θ).
The problem of interval estimation is discussed in Chapter 3, following the theory of testing.
By L[g(θ), M(y)] = L(ψ, M) we denote a loss function taking the value L(ψ0, M) if ψ takes the value ψ0 and y takes the value Y (i.e. the estimator M = M(y) takes the value M = M(Y)).
Although many statements in this chapter can be generalised to arbitrary convex loss functions, we mainly use the most convenient one, the quadratic loss function without costs. Unless explicitly stated otherwise, our loss function is the square of the L2-norm of the vector ψ − M, that is, L(ψ, M) = ‖ψ − M‖² = (ψ − M)^T (ψ − M).
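In the illustrative normal example sketched above, with g(θ) = μ and the sample mean M(y) = ȳ, the quadratic loss and its expected value are, for instance,

\[
L(\mu,\bar{y}) = (\mu-\bar{y})^2, \qquad
E\,L(\mu,\bar{y}) = E(\bar{y}-\mu)^2 = \operatorname{Var}(\bar{y}) = \frac{\sigma^2}{n},
\]

since ȳ is unbiased for μ; the expected quadratic loss of this estimator therefore decreases with increasing sample size n.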