Chapter 11
Support Vector Machines
Support vector machines (SVM) are a good starting point for those who enter the field of machine learning from an applied angle. As we have demonstrated in previous chapters, easy-to-use software will usually give good classification performance without any tedious parameter tuning. Thus all the effort can be put into the development of new features.
In this chapter, we will investigate the SVM algorithm in more depth, both to extend it to one-class and multi-class classification, and to cover the various components that can be generalised. In principle, SVM is a linear classifier, so we will start by exploring linear classifiers in general. The trick used to classify non-linear problems is the so-called kernel trick, which essentially maps the feature vectors into a higher-dimensional space where the problem becomes linear. This kernel trick is the second key component, and it can be generalised to other algorithms.
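As a concrete illustration of this idea (not taken from the book's own code), the following sketch assumes scikit-learn is available and compares a linear SVM with an RBF-kernel SVM on data that is not linearly separable. The kernelised classifier implicitly works in a higher-dimensional space where a linear separation exists.

```python
# A minimal sketch (assuming scikit-learn) of the kernel trick in action.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two concentric circles: a classic example of a non-linear problem.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel).fit(X_train, y_train)
    print(kernel, clf.score(X_test, y_test))
```

On such data the linear kernel typically performs close to chance, while the RBF kernel separates the classes almost perfectly; the only change is the choice of kernel, which is why the kernel is treated as a separate, generalisable component.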
11.1 Linear Classifiers
An object $i$ is characterised by two quantities: a feature vector $\mathbf{x}_i$, which can be observed, and a class label $y_i \in \{-1, +1\}$, which cannot normally be observed, but which we attempt to deduce from the observed $\mathbf{x}_i$. Thus we have two sets of points in $n$-space, namely $\{\mathbf{x}_i : y_i = -1\}$ and $\{\mathbf{x}_i : y_i = +1\}$, as illustrated in Figure 11.1. Classification aims ...
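To make this setup concrete, here is a minimal sketch (not from the book) of a linear decision rule $\hat{y} = \operatorname{sign}(\mathbf{w} \cdot \mathbf{x} + b)$; the weight vector and bias below are hypothetical, whereas in practice they are learnt from labelled training data, for instance by an SVM.

```python
# A minimal sketch of a linear classifier; w and b are hypothetical values.
import numpy as np

def linear_classify(X, w, b):
    """Assign a label in {-1, +1} to each row of X using sign(w . x + b)."""
    return np.where(X @ w + b >= 0, 1, -1)

# Example: four points in 2-space and an arbitrary separating hyperplane.
X = np.array([[2.0, 3.0], [1.0, -1.0], [-2.0, 0.5], [-1.0, -2.0]])
w = np.array([1.0, 0.5])   # normal vector of the hyperplane (hypothetical)
b = -0.5                   # offset (hypothetical)
print(linear_classify(X, w, b))   # prints [ 1  1 -1 -1]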