## Book description

This bestseller helps students understand the algorithms of machine learning. It puts them on a path toward mastering the relevant mathematics and statistics as well as the necessary programming and experimentation. Along with improved Python code, this second edition includes two new chapters on deep belief networks and Gaussian processes. It incorporates new material on the support vector machine, random forests, the perceptron convergence theorem, filters, and more. All of the code is available on the author's website.

1. Preliminaries
2. Prologue to the Second Edition
3. Prologue to the First Edition
4. Chapter 1: Introduction
1. 1.1 If Data Had Mass, the Earth Would Be a Black Hole
2. 1.2 Learning
3. 1.3 Types of Machine Learning
4. 1.4 Supervised Learning
5. 1.5 The Machine Learning Process
6. 1.6 A Note on Programming
7. 1.7 A Roadmap to the Book
5. Chapter 2: Preliminaries
1. 2.1 Some Terminology
2. 2.2 Knowing What You Know: Testing Machine Learning Algorithms
3. 2.3 Turning Data into Probabilities
4. 2.4 Some Basic Statistics
5. Practice Questions
6. Chapter 3: Neurons, Neural Networks, and Linear Discriminants
1. 3.1 The Brain and The Neuron
2. 3.2 Neural Networks
3. 3.3 The Perceptron
4. 3.4 Linear Separability
5. 3.5 Linear Regression
6. Practice Questions
7. Chapter 4: The Multi-layer Perceptron
1. 4.1 Going Forwards
2. 4.2 Going Backwards: Back-Propagation of Error
3. 4.3 The Multi-Layer Perceptron in Practice
4. 4.4 Examples of Using the MLP
5. 4.5 A Recipe for Using the MLP
6. 4.6 Deriving Back-Propagation
7. Practice Questions
8. Chapter 5: Radial Basis Functions and Splines
1. 5.1 Receptive Fields
2. 5.2 The Radial Basis Function (RBF) Network
3. 5.3 Interpolation and Basis Functions
4. Practice Questions
9. Chapter 6: Dimensionality Reduction
1. 6.1 Linear Discriminant Analysis (LDA)
2. 6.2 Principal Components Analysis (PCA)
3. 6.3 Factor Analysis
4. 6.4 Independent Components Analysis (ICA)
5. 6.5 Locally Linear Embedding
6. 6.6 ISOMAP
7. Practice Questions
10. Chapter 7: Probabilistic Learning
1. 7.1 Gaussian Mixture Models
2. 7.2 Nearest Neighbour Methods
3. Practice Questions
11. Chapter 8: Support Vector Machines
1. 8.1 Optimal Separation
2. 8.2 Kernels
3. 8.3 The Support Vector Machine Algorithm
4. 8.4 Extensions To The SVM
5. Practice Questions
12. Chapter 9: Optimisation and Search
1. 9.1 Going Downhill
2. 9.2 Least-Squares Optimisation
3. 9.4 Search: Three Basic Approaches
4. 9.5 Exploitation and Exploration
5. 9.6 Simulated Annealing
6. Practice Questions
13. Chapter 10: Evolutionary Learning
1. 10.1 The Genetic Algorithm (GA)
2. 10.2 Generating Offspring: Genetic Operators
3. 10.3 Using Genetic Algorithms
4. 10.4 Genetic Programming
5. 10.5 Combining Sampling with Evolutionary Learning
6. Practice Questions
14. Chapter 11: Reinforcement Learning
1. 11.1 Overview
2. 11.2 Example: Getting Lost
3. 11.3 Markov Decision Processes
4. 11.4 Values
5. 11.5 Back on Holiday: Using Reinforcement Learning
6. 11.6 The Difference Between Sarsa and Q-Learning
7. 11.7 Uses of Reinforcement Learning
8. Practice Questions
15. Chapter 12: Learning with Trees
1. 12.1 Using Decision Trees
2. 12.2 Constructing Decision Trees
3. 12.3 Classification and Regression Trees (CART)
4. 12.4 Classification Example
5. Practice Questions
16. Chapter 13: Decision by Committee: Ensemble Learning
1. 13.1 Boosting
2. 13.2 Bagging
3. 13.3 Random Forests
4. 13.4 Different Ways to Combine Classifiers
5. Practice Questions
17. Chapter 14: Unsupervised Learning
1. 14.1 The K-Means Algorithm
2. 14.2 Vector Quantisation
3. 14.3 The Self-Organising Feature Map
4. Practice Questions
18. Chapter 15: Markov Chain Monte Carlo (MCMC) Methods
1. 15.1 Sampling
2. 15.2 Monte Carlo or Bust
3. 15.3 The Proposal Distribution
4. 15.4 Markov Chain Monte Carlo
5. Practice Questions
19. Chapter 16: Graphical Models
1. 16.1 Bayesian Networks
2. 16.2 Markov Random Fields
3. 16.3 Hidden Markov Models (HMMs)
4. 16.4 Tracking Methods
5. Practice Questions
20. Chapter 17: Symmetric Weights and Deep Belief Networks
1. 17.1 Energetic Learning: The Hopfield Network
2. 17.2 Stochastic Neurons — The Boltzmann Machine
3. 17.3 Deep Learning
4. Practice Questions
21. Chapter 18: Gaussian Processes
1. 18.1 Gaussian Process Regression
2. 18.2 Gaussian Process Classification
3. Practice Questions
22. Appendix A: Python
1. A.1 Installing Python and Other Packages
2. A.2 Getting Started
3. A.3 Code Basics
4. A.4 Using NumPy and Matplotlib