6
Multivariate

Test 6.1 Multivariate Single Mean Vector (Two-Sided)
Parameters:
k = number of univariate variables being considered simultaneously
μ = mean vector (k × 1)
Σ = covariance matrix (k × k)
μ₀ = target mean vector (k × 1)
Δ = tolerable distance between μ and μ₀; that is, noninferiority is defined as:

$$ \|\mu - \mu_0\| \le \Delta $$

where:

$$ \Delta = \sqrt{d_1^2 + d_2^2 + \cdots + d_k^2} $$

and where:

$$ \delta = \begin{pmatrix} d_1 \\ d_2 \\ \vdots \\ d_k \end{pmatrix} $$

is a k × 1 column vector. 1 − β = power to reject the null hypothesis if ‖μ − μ₀‖ ≤ Δ.
Hypotheses:

$$ H_0\colon\ \|\mu - \mu_0\| > \Delta $$

$$ H_1\colon\ \|\mu - \mu_0\| \le \Delta $$
Equivalence and Noninferiority Tests
Data:
X = (n × k) matrix of observations, x_ij = the ith observation of the jth variable.

$$ \hat{\mu} = \begin{pmatrix} \bar{x}_1 \\ \bar{x}_2 \\ \vdots \\ \bar{x}_k \end{pmatrix} = \text{the sample mean vector} $$

$$ \hat{\Sigma} = \text{the sample covariance matrix.} $$
Critical value(s):
Let
Tn
ˆ
ˆ
ˆ
T
2
0
1
0
()()
−µ Σµ−µ
.
Then reject H
0
if:
F
nk
nk
TFkn kn
(1)
(, ,
ˆ
)
T2
1
1
=
−δ
Σδ
−β
where
Fknkn(, ,
ˆ
)
T
1
1
−δ
Σδ
−β
is the 100(1 − β) percentile of a noncentral F-distribution with degrees of
freedom k (numerator) and n – k (denominator) and noncentrality parameter
n
ˆ
T 1
δΣ δ
.
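The rejection rule above can be sketched in code. The book's own computations use R; the following is a minimal Python sketch under stated assumptions (the data, target vector μ₀, and tolerance vector δ here are invented for illustration), with scipy supplying the noncentral F-distribution:

```python
import numpy as np
from scipy import stats

def noninferiority_test(X, mu0, delta, beta=0.05):
    """Sketch of Test 6.1: reject H0 (conclude noninferiority) when the
    scaled Hotelling T^2 statistic falls at or below the 100*(1-beta)
    percentile of the noncentral F-distribution described in the text."""
    n, k = X.shape
    mu_hat = X.mean(axis=0)              # sample mean vector (k,)
    sigma_hat = np.cov(X, rowvar=False)  # sample covariance matrix (k x k)
    sigma_inv = np.linalg.inv(sigma_hat)
    diff = mu_hat - mu0
    t2 = n * diff @ sigma_inv @ diff               # Hotelling's T^2
    f_stat = (n - k) / (k * (n - 1)) * t2          # scaled statistic
    nc = n * delta @ sigma_inv @ delta             # noncentrality parameter
    f_crit = stats.ncf.ppf(1 - beta, k, n - k, nc) # noncentral F percentile
    return f_stat, f_crit, f_stat <= f_crit

# Hypothetical example: n = 30 observations on k = 2 variables,
# mean on target, with tolerances d1 = d2 = 0.25.
rng = np.random.default_rng(1)
X = rng.normal(loc=[10.0, 5.0], scale=[0.2, 0.1], size=(30, 2))
mu0 = np.array([10.0, 5.0])
delta = np.array([0.25, 0.25])
f_stat, f_crit, reject = noninferiority_test(X, mu0, delta)
```

Since the simulated mean sits exactly on target and the tolerances are generous relative to the variation, the statistic falls well below the critical value and noninferiority is concluded.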
Discussion:
The statistic:

$$ T^2 = n(\hat{\mu} - \mu_0)^T \hat{\Sigma}^{-1}(\hat{\mu} - \mu_0) $$

has a distribution known as Hotelling's T². It can be shown (Anderson, 1958) that

$$ F = \frac{n-k}{k(n-1)}\, T^2 $$

has a noncentral F-distribution with degrees of freedom k (numerator) and n − k (denominator), and noncentrality parameter $n\,\delta^T \Sigma_0^{-1} \delta$. Σ₀ is the population covariance matrix, which generally is unknown. Thus, the critical value for the hypothesis test is based on the sample covariance matrix. Strictly speaking, the critical value should be based in part on the population covariance matrix. However, since it is generally unknown, substituting the sample covariance matrix is a reasonable approximation.
Dening a multivariate noninferiority region presents some difculties.
For one, it is possible that every dimension could satisfy the univariate
criteria:
μ
i0
d
i
μ
i
μ
i0
+ d
i
for i = 1, k
and not satisfy the multivariate criterion:
0
µ−µ≤
.
Example:
Suppose the population mean vector is

$$ \mu = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix} = \begin{pmatrix} 0.3 \\ 0.5 \end{pmatrix} $$

and

$$ \mu_0 = \begin{pmatrix} \mu_{1,0} \\ \mu_{2,0} \end{pmatrix} = \begin{pmatrix} 0.4 \\ 0.4 \end{pmatrix} $$

so that:

$$ \delta = \begin{pmatrix} d_1 \\ d_2 \end{pmatrix} = \begin{pmatrix} 0.1 \\ 0.1 \end{pmatrix}. $$

Furthermore, suppose that for each univariate mean, as well as in a multivariate sense, the maximum tolerable difference is Δ = 0.125.
Then

$$ \mu_{1,0} - d_1 = 0.4 - 0.1 = 0.3 \le (\mu_1 = 0.3) \le \mu_{1,0} + d_1 = 0.4 + 0.1 = 0.5 $$

$$ \mu_{2,0} - d_2 = 0.4 - 0.1 = 0.3 \le (\mu_2 = 0.5) \le \mu_{2,0} + d_2 = 0.4 + 0.1 = 0.5. $$
However,

$$ \|\mu - \mu_0\| = \sqrt{(\mu_1 - \mu_{1,0})^2 + (\mu_2 - \mu_{2,0})^2} \approx 0.14142 > \Delta = 0.125. $$

Thus, each univariate mean satisfies the univariate definitions for noninferiority, but the multivariate criterion is not satisfied.
Condence interval formulation:
In a multivariate situation, the condence region is a k-dimensional object
and its interior. In the case of mean vectors, it is an ellipsoid together with its
interior. Specically, it is the set of all vectors, μ, such that
Tn
kn
nk
F
ˆ
ˆ
ˆ
(1)
T
kn k
21
,,
()()
−µ Σµ−µ
α−
where F
α, k, n − k
= the 100(1 − α) percentile of a (central) F-distribution with k
numerator degrees of freedom and n − k denominator degrees of freedom, and
µ=
=
x
x
x
ˆ
thesamplemeanvector
k
1
2
.
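Membership of a candidate vector μ in this confidence ellipsoid takes only a few lines to check. A Python sketch with invented data (scipy's central F percentile plays the role of F_{α,k,n−k}):

```python
import numpy as np
from scipy import stats

def in_confidence_region(X, mu, alpha=0.05):
    """True if mu lies inside the 100*(1-alpha)% Hotelling T^2
    confidence ellipsoid for the mean vector."""
    n, k = X.shape
    mu_hat = X.mean(axis=0)
    sigma_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = mu_hat - np.asarray(mu, dtype=float)
    t2 = n * diff @ sigma_inv @ diff
    bound = k * (n - 1) / (n - k) * stats.f.ppf(1 - alpha, k, n - k)
    return t2 <= bound

# Hypothetical sample: n = 40 observations on k = 2 variables
rng = np.random.default_rng(7)
X = rng.normal(loc=[100.0, 33.0], scale=[2.0, 0.7], size=(40, 2))
in_region = in_confidence_region(X, X.mean(axis=0))  # sample mean: T^2 = 0
```

The sample mean itself always lies inside the region (its T² is zero), while a vector far from the data, such as the origin here, falls outside.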
Computational considerations:
While it is possible to use JMP scripting language (JSL) or SAS Proc IML to compute and invert covariance matrices, it is easier to do so in R. Use the R function cov() to compute the covariance matrix, and the solve() function to invert it. Recall that in R, a statement of the form:

> x <- c(1, 2, 3)

creates a numeric vector named x; in matrix arithmetic, R treats such a vector as a column vector (or a row vector) as conformability requires.
R:
> df1 <- read.table("H:\\Personal Data\\Equivalence & Noninferiority\\Programs & Output\\d20121109_test_6_1_example.csv", header = TRUE, sep = ",")
> attach(df1)
> attach(df1)
> xmat <- as.matrix(df1)
> xmat
X1 X2 X3
[1,] 100.21 33.37 102.22
[2,] 101.22 33.79 100.33
[3,] 97.16 32.32 95.55
[4,] 98.72 33.05 97.60
81Multivariate
[5,] 97.26 32.41 97.01
[6,] 100.71 33.34 101.12
[7,] 101.30 33.75 101.62
[8,] 98.88 32.99 97.98
[9,] 100.25 33.55 99.88
[10,] 97.19 32.19 96.81
[11,] 105.16 35.07 105.63
[12,] 98.30 32.70 96.40
[13,] 100.74 33.38 101.39
[14,] 97.69 32.62 98.26
[15,] 100.02 33.12 101.32
[16,] 101.43 33.94 102.32
[17,] 95.40 31.76 96.44
[18,] 99.85 33.08 99.45
[19,] 97.43 32.43 98.82
[20,] 100.63 33.59 99.84
[21,] 102.00 33.90 101.51
[22,] 97.45 32.30 97.14
[23,] 102.03 34.03 100.69
[24,] 99.79 33.43 99.40
[25,] 98.61 33.00 97.65
[26,] 95.31 31.77 93.97
[27,] 98.34 32.57 99.32
[28,] 97.27 32.49 96.80
[29,] 98.46 32.82 99.50
[30,] 100.42 33.50 98.54
[31,] 99.90 33.39 99.34
[32,] 101.10 33.59 99.98
[33,] 98.40 32.72 96.44
[34,] 100.80 33.73 101.81
[35,] 95.56 31.62 94.11
[36,] 98.63 32.72 100.50
[37,] 99.13 32.80 98.35
[38,] 103.18 34.52 102.33
[39,] 101.08 33.64 100.17
[40,] 102.08 33.93 103.59
> cmat <- cov(xmat)
> cmat
X1 X2 X3
X1 4.578826 1.5819854 4.853564
X2 1.581985 0.5612728 1.654349
X3 4.853564 1.6543490 6.324297
> cmatinv <- solve(cmat)
> cmatinv
X1 X2 X3
X1 10.549891 -25.641264 -1.3890829
X2 -25.641264 70.101515 1.3407270
X3 -1.389083 1.340727 0.8734525
> mu1_est <- mean(X1)
> mu2_est <- mean(X2)
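As a cross-check of the matrix computations in the session above, the printed covariance matrix and its inverse can be verified to be mutually consistent. Python stands in for R here; the numbers are copied from the cov() and solve() output above:

```python
import numpy as np

# Sample covariance matrix printed by cov(xmat) above
cmat = np.array([
    [4.578826, 1.5819854, 4.853564],
    [1.581985, 0.5612728, 1.654349],
    [4.853564, 1.6543490, 6.324297],
])

# Its inverse as printed by solve(cmat) above
cmatinv = np.array([
    [10.549891, -25.641264, -1.3890829],
    [-25.641264, 70.101515,  1.3407270],
    [-1.389083,   1.340727,  0.8734525],
])

# The product should be the 3 x 3 identity, up to the rounding
# of the printed digits.
residual = np.abs(cmat @ cmatinv - np.eye(3)).max()
```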
