2 Limitations of Individual Classifiers

Esteban Alfaro, Matías Gámez, and Noelia García

2.1 Introduction

In this chapter, some aspects of the behavior and properties of individual classifiers are analyzed. The focus is on problems that arise from the use of these classifiers, such as lack of accuracy or instability.

The chapter is structured into four sections, in addition to the introduction. First, in section 2.2 we study the generalization error of individual classifiers. This error is disaggregated into the sum of three non-negative terms: Bayes risk, bias, and variance. The first term captures the error inherent in the data set, which no classification method can reduce. The bias measures the persistent error of the classification method, that is, the error that would remain even with an infinite set of independently trained classifiers. The variance term measures the error caused by the fluctuations that occur when an individual classifier is generated. From this perspective, averaging several classifiers reduces the variance and thereby decreases the expected, or generalization, error.
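As an illustrative sketch (not reproduced from the book), the squared-error analogue of this decomposition makes the three terms explicit; exact definitions for 0-1 loss vary by author, but the structure is analogous. Assuming data generated as y = f(x) + ε, with noise variance σ²_ε, and expectations taken over training sets:

\[
\mathbb{E}\left[\left(y - \hat{f}(x)\right)^{2}\right]
= \underbrace{\sigma_{\varepsilon}^{2}}_{\text{Bayes risk}}
+ \underbrace{\left(\mathbb{E}[\hat{f}(x)] - f(x)\right)^{2}}_{\text{bias}^{2}}
+ \underbrace{\mathbb{E}\left[\left(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\right)^{2}\right]}_{\text{variance}}
\]

In this squared-error setting, averaging B independently trained models leaves the bias term unchanged while dividing the variance term by B, which is the intuition behind combining classifiers.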

Second, in section 2.3 the issue of the instability of some classifiers is addressed. A classifier is unstable if it undergoes major changes in response to small modifications of the training set. In this sense, classification trees and neural networks, for example, are considered unstable, since small perturbations of the training data can produce very different fitted models.
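As a minimal sketch of this instability (not taken from the book), the following R code fits rpart classification trees on two bootstrap resamples of the same training set and compares their structures and predictions; the rpart package and the built-in iris data are assumptions of this example.

```r
# Sketch: instability of classification trees (assumes the rpart package).
library(rpart)

set.seed(1)
n <- nrow(iris)
train <- sample(n, 100)            # base training sample
test  <- setdiff(seq_len(n), train)

# Two bootstrap resamples of the same training set.
boot1 <- sample(train, replace = TRUE)
boot2 <- sample(train, replace = TRUE)

tree1 <- rpart(Species ~ ., data = iris[boot1, ])
tree2 <- rpart(Species ~ ., data = iris[boot2, ])

# Inspect the fitted structures: small changes in the training data
# can alter the chosen split variables and cut points.
print(tree1)
print(tree2)

# Disagreement rate of the two trees on held-out observations.
p1 <- predict(tree1, iris[test, ], type = "class")
p2 <- predict(tree2, iris[test, ], type = "class")
cat("Proportion of test cases where the trees disagree:", mean(p1 != p2), "\n")
```

The same resampling device, applied many times and combined by voting, is precisely how bagging exploits instability to reduce variance.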
