Chapter 17

Going a Step beyond Using Support Vector Machines

IN THIS CHAPTER

Revisiting the nonlinear separability problem

Getting into the basic formulation of a linear SVM

Grasping what kernels are and what they can achieve

Discovering how R and Python implementations of SVMs work

Sometimes ideas sprout from serendipity, and sometimes from the urgent necessity to solve relevant problems. No one can predict the next big thing in machine learning because the technology continues to evolve. However, you can discover the tools available that will help machine learning capabilities grow, allowing you to solve more problems and feed new intelligent applications.

Support vector machines (SVMs) were the next big thing two decades ago, and when they appeared, they initially left many scholars wondering whether they'd really work. Many questioned whether the kind of representation SVMs were capable of could perform useful tasks. Here, representation means the capability of a machine learning algorithm to approximate the kinds of target functions assumed to lie at the foundation of a data problem. Having a good representation of a problem implies being able to produce reliable predictions on any new data.

SVMs demonstrate not only an incredible capability of representation but also a useful one, which has allowed the algorithm to find its niche in the large (and growing) world of machine learning applications. You use SVMs to drive the algorithm, not the other way around. This chapter helps ...
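Although the details of the linear SVM formulation come later in the chapter, the core idea of finding a separating line with a wide margin can be sketched in a few lines of code. The following is a minimal, illustrative sketch (not the book's code) that trains a linear SVM by sub-gradient descent on the hinge loss; the function name, toy data, and hyperparameters are all assumptions made for the example. In practice you would use a library implementation, such as scikit-learn in Python.

```python
import numpy as np

# Minimal illustrative sketch of a linear SVM trained with sub-gradient
# descent on the hinge loss. Names and hyperparameters are assumptions,
# not the book's code.
def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """X: (n, d) feature matrix; y: labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:   # point violates the margin
                w -= lr * (lam * w - yi * xi)
                b += lr * yi
            else:                       # correct side: only shrink w
                w -= lr * lam * w
    return w, b

# Two linearly separable clusters as a toy problem
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.0, 3.0],
              [-2.0, -2.0], [-3.0, -3.0], [-2.0, -3.0]])
y = np.array([1, 1, 1, -1, -1, -1])

w, b = train_linear_svm(X, y)
predictions = np.sign(X @ w + b)
```

The regularization term `lam * w` keeps the weight vector small, which is what widens the margin; the margin check `yi * (xi @ w + b) < 1` is the hinge-loss condition that makes only boundary-violating points pull on the solution.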

Get Machine Learning For Dummies now with the O’Reilly learning platform.