CHAPTER 2

SMOOTHING METHODS AND NORMALIZING TRANSFORMATIONS

In this chapter we will describe many of the fundamental techniques that will be used to perform adaptive tests of significance. We will focus our attention on a weighting procedure that is used to transform the data so that the transformed data is approximately normally distributed. We will see in subsequent chapters how this transformed data is used in the adaptive tests.

2.1 TRADITIONAL ESTIMATORS OF THE MEDIAN AND THE INTERQUARTILE RANGE

The adaptive weighting procedure relies heavily on percentile estimators. In this section we describe the traditional estimator of a percentile and in the next section we describe an improved estimator that is based on the smoothed cumulative distribution function (c.d.f.).

Let $x_1, x_2, \ldots, x_n$ be a random sample from a continuous distribution having a c.d.f. of $F(x)$, and let $x_{(1)}, x_{(2)}, \ldots, x_{(n)}$ be the set of ordered observations, in ascending order, which we will call the order statistics. If $n$ is odd, the usual estimator of the median is $x_{((n+1)/2)}$; if $n$ is even, the usual estimator is $\left( x_{(n/2)} + x_{(n/2+1)} \right)/2$.
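As a quick illustration, this estimator can be computed in R directly from the order statistics; the function name sample_median() below is ours for illustration and is not from the text.

sample_median <- function(x) {
  n  <- length(x)
  xs <- sort(x)                       # order statistics x_(1), ..., x_(n)
  if (n %% 2 == 1) {
    xs[(n + 1) / 2]                   # odd n: the middle order statistic
  } else {
    (xs[n / 2] + xs[n / 2 + 1]) / 2   # even n: average of the two middle values
  }
}

x <- c(3.1, 7.4, 2.8, 5.0, 6.3)
sample_median(x)   # 5.0, agrees with R's built-in median(x)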

Now suppose we want to estimate the $p$th percentile of the continuous distribution. Let $x_p$ be the $p$th percentile of the distribution and let $\hat{x}_p$ be the traditional estimator of $x_p$. To clearly define $\hat{x}_p$ we need ...
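As a practical note, R's quantile() function implements the conventional definitions of sample percentiles, and the sketch below uses it to obtain traditional estimates of the median and the interquartile range. The choice of type = 2, which averages adjacent order statistics and matches the median estimator above, is our assumption for illustration rather than necessarily the definition adopted in this text.

x <- c(3.1, 7.4, 2.8, 5.0, 6.3, 4.2, 8.9, 1.7)

# Traditional estimates of the median and the interquartile range
q          <- quantile(x, probs = c(0.25, 0.50, 0.75), type = 2)
median_hat <- q["50%"]              # 4.6, the average of the two middle order statistics
iqr_hat    <- q["75%"] - q["25%"]   # estimated interquartile range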
