Assume we have seven features {f1, f2, f3, f4, f5, f6, and f7}, where f2 and f5 are highly similar, but not identical. We consider the following two rank lists from two different filters: list one is given by {f3, f2, f7, f5} and list two is given by {f2, f7, f3, f4, f1}.
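As a concrete illustration, the following Python sketch aggregates the two filter rankings by mean rank (a Borda-style rule). The choice of mean rank, and of assigning unranked features the worst possible position, are assumptions made for illustration only; the chapter does not fix a particular aggregation rule, so tie-breaking and the exact output may differ slightly from the ordering discussed next.

```python
# Mean-rank (Borda-style) aggregation of two partial filter rankings.
# Assumption: features missing from a list receive the worst rank (7).

features = ["f1", "f2", "f3", "f4", "f5", "f6", "f7"]
list_one = ["f3", "f2", "f7", "f5"]        # ranking produced by filter 1
list_two = ["f2", "f7", "f3", "f4", "f1"]  # ranking produced by filter 2

def ranks(ordered, universe):
    """Map each feature to its 1-based position; unranked features get the worst rank."""
    worst = len(universe)
    return {f: ordered.index(f) + 1 if f in ordered else worst
            for f in universe}

r1 = ranks(list_one, features)
r2 = ranks(list_two, features)

# Average the two rank positions and sort ascending (ties keep original order).
mean_rank = {f: (r1[f] + r2[f]) / 2 for f in features}
aggregated = sorted(features, key=lambda f: mean_rank[f])

print(aggregated)
# ['f2', 'f3', 'f7', 'f4', 'f5', 'f1', 'f6']
# f2 lands first while the highly similar f5 lands far down the list,
# which is exactly the divergence discussed below.
```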
If we have no preference for one filter over the other, then standard methods of rank aggregation may interpret the rankings in the following way: {f2, f3, f7, f5, f4, f1}. In this standard aggregation, the features f2 and f5 are given divergent rankings in spite of their high similarity, which makes this kind of aggregation unacceptable. A more acceptable ranking, one that reflects our intent and assigns closer ranks to similar features, would be { ...