#### Classification

In this case, a typical choice is to project onto the half-space (Section 8.6.2) formed by the hyperplane

${y}_{n}\langle {f}_{n-1},\kappa (\cdot ,{\mathit{x}}_{n})\rangle =\rho ,$

and the corresponding projection operator becomes

${P}_{k}({f}_{n-1})={f}_{n-1}+{\beta}_{k}\kappa (\cdot ,{\mathit{x}}_{k}),$

where

${\beta}_{k}=\begin{cases}\dfrac{{y}_{k}\bigl(\rho -{y}_{k}\langle {f}_{n-1},\kappa (\cdot ,{\mathit{x}}_{k})\rangle \bigr)}{\kappa ({\mathit{x}}_{k},{\mathit{x}}_{k})},&\text{if } \rho -{y}_{k}\langle {f}_{n-1},\kappa (\cdot ,{\mathit{x}}_{k})\rangle >0,\\[4pt] 0,&\text{otherwise}.\end{cases}$

(11.110)

Recall that the corresponding half-space is the zero-level set of the hinge loss function $\mathcal{L}_{\rho}(y_{n},f(\mathit{x}_{n}))$ defined in (11.59).
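The projection step above can be sketched in code. The following is a minimal illustration, not the book's implementation: it represents $f_{n-1}$ by its kernel expansion $\sum_j \alpha_j \kappa(\cdot,\mathit{x}_j)$, evaluates the inner product via the reproducing property, and appends a new coefficient $\beta_k$ only when the half-space constraint is violated. The Gaussian kernel and all function names are illustrative assumptions.

```python
import math

def gaussian_kernel(x, z, sigma=1.0):
    # RBF kernel kappa(x, z); an illustrative choice, any positive-definite
    # kernel would serve.
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, z)) / (2 * sigma ** 2))

def project_halfspace(centers, alphas, x_k, y_k, rho=0.5, kernel=gaussian_kernel):
    """One half-space projection step, in the spirit of Eq. (11.110).

    f_{n-1} = sum_j alphas[j] * kernel(., centers[j]).
    Returns the (possibly updated) expansion (centers, alphas).
    """
    # <f_{n-1}, kappa(., x_k)> = f_{n-1}(x_k), by the reproducing property.
    f_xk = sum(a * kernel(c, x_k) for a, c in zip(alphas, centers))
    margin = rho - y_k * f_xk
    if margin > 0:
        # Constraint violated: project onto the bounding hyperplane,
        # i.e., add beta_k * kappa(., x_k) to the expansion.
        beta_k = y_k * margin / kernel(x_k, x_k)
        return centers + [x_k], alphas + [beta_k]
    # Already inside the half-space: f is left unchanged (beta_k = 0).
    return centers, alphas
```

After a projection, the updated $f$ satisfies $y_k f(\mathit{x}_k) = \rho$ exactly, so a second projection with the same sample leaves the expansion unchanged.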

Thus, for both cases, regression ...
