It should be noted that an extended permutation filter is not necessarily a strict selection-type filter, since the selected output can be a function of the input samples, $F_i(\mathbf{x})$, which is not restricted to be selection type. Also, while augmenting the observation set tends to increase the cardinality of the feature space, to $(N + K)!$ in the most general case, reduction of the feature space can be accomplished through row-column selection or coloring. Thus, each of the SR ordering information augmentation and reduction methods can be used in concert to address a specific filtering application.
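To make this growth concrete, for a window of $N = 5$ samples augmented by $K = 2$ additional samples, the full feature space grows from $5! = 120$ to $7! = 5040$ elements. A minimal Python sketch of this count (the values of $N$ and $K$ are illustrative assumptions):

```python
from math import factorial

# Cardinality of the full SR feature space for a window of N samples,
# and of the extended space when K samples augment the observation set.
N, K = 5, 2  # illustrative values, not fixed by the text
print(factorial(N))      # 120 distinct features in the basic case: N!
print(factorial(N + K))  # 5040 features after augmentation: (N + K)!
```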
3.4 Optimization
Each of the SR ordering selection filters discussed in the previous section operates on the same general principle: an observation feature is defined based on full, partial, or extended SR ordering information, and a selection rule is set that partitions the SR feature space into $N$ regions; each of the $N$ regions in the SR feature space is associated with a specific order-statistic output. Since the filters all operate on the same general principle, we can define a unified optimization procedure. Numerous optimization methodologies can be adapted, and statistical optimization under the MAE has been investigated [Bar94]. The optimization methodology adopted here is the simpler, and more widely used, least $L_p$-normed error (LNE) strategy. This is a training-based optimization procedure that assumes a representative data set is available consisting of observed and desired output samples.
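To illustrate this general principle, the following minimal Python sketch applies a selection rule: the window's SR feature selects an order-statistic index $k = S(\mathbf{R}(n))$, and the filter outputs the $k$th order statistic. The helper names `sr_selection_filter` and `rank_permutation`, and the use of the full rank permutation as the feature, are illustrative assumptions, not the text's notation.

```python
import numpy as np

def sr_selection_filter(x, selection_rule, sr_feature):
    """Generic SR selection filter: map the window's SR feature to an
    order-statistic index k = S(R(n)) and output the kth order statistic."""
    feature = sr_feature(x)        # observation feature, e.g. rank permutation
    k = selection_rule[feature]    # S(.) partitions the feature space
    return np.sort(x)[k - 1]       # kth order statistic x_(k), 1-indexed

# Illustrative feature: full ordering information as the rank permutation.
def rank_permutation(x):
    return tuple(np.argsort(np.argsort(x)) + 1)

# A toy rule mapping one observed feature to the median (k = 2 for N = 3).
x = np.array([7.0, 3.0, 9.0])
rule = {rank_permutation(x): 2}
print(sr_selection_filter(x, rule, rank_permutation))  # -> 7.0 (the median)
```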
Since each of the SR order selection filters operates on the same general principle, we will address the optimization of the generic case in which the SR feature space is represented by $\Omega$. Let the (full, partial, or extended) SR matrices comprising $\Omega$ be indexed as $\mathbf{R}_1, \mathbf{R}_2, \ldots, \mathbf{R}_{|\Omega|}$, so we can write $\Omega = \{\mathbf{R}_1, \mathbf{R}_2, \ldots, \mathbf{R}_{|\Omega|}\}$. Also, let the $K$ samples from the training set be indexed in the order that they are observed. In this fashion, the observation vectors can be written as $\mathbf{x}_\ell(n_1), \mathbf{x}_\ell(n_2), \ldots, \mathbf{x}_\ell(n_K)$ and the corresponding desired outputs as $d(n_1), d(n_2), \ldots, d(n_K)$. For the SR selection filter $F(\cdot)$ defined by the selection rule $S(\cdot)$, the LNE over the training sequence is
$$\sum_{i=1}^{K} \bigl| d(n_i) - F(\mathbf{x}_\ell(n_i)) \bigr|^{p} = \sum_{i=1}^{K} \bigl| d(n_i) - x_{(S(\mathbf{R}(n_i)))} \bigr|^{p}, \qquad (3.42)$$
where $\mathbf{R}(n_i) \in \Omega$ is the SR feature at window location $n_i$. The selection rule that minimizes Eq. (3.42) is referred to as the optimal selection rule and is denoted as $S_{\mathrm{opt}}(\cdot)$.
The LNE can be partitioned according to the SR feature matrices. Let $a_i$ be the index of the feature matrix in $\Omega$ corresponding to observation vector $\mathbf{x}_\ell(n_i)$, i.e., $\mathbf{R}_{a_i} = \mathbf{R}(n_i)$. Additionally, define $\Gamma_{j,K} = \{\, i \in \{1, 2, \ldots, K\} \mid a_i = j \,\}$ to be the set of indexes that corresponds to observation samples with SR feature $\mathbf{R}_j$. The total LNE incurred over the training sequence by estimating the desired signal with the $k$th order statistic, given that the SR feature $\mathbf{R}_j$ is observed, can be written as $\sum_{i \in \Gamma_{j,K}} \bigl| d(n_i) - x_{(k)}(n_i) \bigr|^{p}$.
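Choosing, for each feature $\mathbf{R}_j$, the index $k$ that minimizes this cost yields $S_{\mathrm{opt}}(\cdot)$. A minimal training sketch under $p = 1$ follows; the synthetic data, the helper names, and the rank-permutation feature standing in for $\mathbf{R}(n)$ are assumptions for illustration.

```python
import numpy as np
from collections import defaultdict

def train_optimal_selection_rule(windows, desired, sr_feature, p=1):
    """Estimate S_opt: accumulate, for each observed SR feature R_j, the cost
    C_j(k) = sum over i in Gamma_{j,K} of |d(n_i) - x_(k)(n_i)|^p, then assign
    to R_j the order-statistic index k minimizing C_j(k)."""
    N = windows.shape[1]
    costs = defaultdict(lambda: np.zeros(N))   # C_j(k) per observed feature
    for x, d in zip(windows, desired):
        costs[sr_feature(x)] += np.abs(d - np.sort(x)) ** p  # all k at once
    return {j: int(np.argmin(c)) + 1 for j, c in costs.items()}  # k is 1-based

# Toy training set: desired signal d and N = 3 noisy observations per location.
rng = np.random.default_rng(0)
d = rng.normal(size=500)
X = d[:, None] + rng.normal(scale=0.5, size=(500, 3))
rank_perm = lambda x: tuple(np.argsort(np.argsort(x)) + 1)
S_opt = train_optimal_selection_rule(X, d, rank_perm)
print(S_opt)  # learned order-statistic index k for each SR feature
```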
