Grokking Machine Learning, Video Edition

Video description

In Video Editions the narrator reads the book while the content, figures, code listings, diagrams, and text appear on the screen. Like an audiobook that you can also watch as a video.

Did you think machine learning is complicated and hard to master? It’s not! Read this book! Serrano demystifies some of the best-held secrets of the machine learning society.
Sebastian Thrun, Founder, Udacity

Discover valuable machine learning techniques you can understand and apply using just high-school math.

In Grokking Machine Learning you will learn:
  • Supervised algorithms for classifying and splitting data
  • Methods for cleaning and simplifying data
  • Machine learning packages and tools
  • Neural networks and ensemble methods for complex datasets

Grokking Machine Learning teaches you how to apply ML to your projects using only standard Python code and high-school-level math. No specialist knowledge is required to tackle the hands-on exercises using Python and readily available machine learning tools. Packed with easy-to-follow Python-based exercises and mini-projects, this book sets you on the path to becoming a machine learning expert.

about the technology

Discover powerful machine learning techniques you can understand and apply using only high-school math! Put simply, machine learning is a set of techniques for data analysis based on algorithms that deliver better results as you give them more data. ML powers many cutting-edge technologies, such as recommendation systems, facial recognition software, smart speakers, and even self-driving cars. This unique book introduces the core concepts of machine learning, using relatable examples, engaging exercises, and crisp illustrations.

about the book

Grokking Machine Learning presents machine learning algorithms and techniques in a way that anyone can understand. This book skips the confusing academic jargon and offers clear explanations that require only basic algebra. As you go, you’ll build interesting projects with Python, including models for spam detection and image recognition. You’ll also pick up practical skills for cleaning and preparing data.

about the audience

No machine learning knowledge necessary, but basic Python required.

about the author

Luis G. Serrano is a research scientist in quantum artificial intelligence. Previously, he was a Machine Learning Engineer at Google and Lead Artificial Intelligence Educator at Apple.

The first step to take on your machine learning journey.
Millad Dagdoni, Norwegian Labour and Welfare Administration

A nicely written guided introduction, especially for those who want to code but feel shaky in their mathematics.
Erik D. Sapper, California Polytechnic State University

The most approachable introduction to machine learning I’ve had the pleasure to read in recent years. Highly recommended.
Kay Engelhardt, devstats

NARRATED BY MARIANNE SHEEHAN

Table of contents

  1. Chapter 1. What is machine learning? It is common sense, except done by a computer
  2. Chapter 1. What is machine learning?
  3. Chapter 1. Some examples of models that humans use
  4. Chapter 1. Example 4: More?
  5. Chapter 2. Types of machine learning
  6. Chapter 2. Supervised learning: The branch of machine learning that works with labeled data
  7. Chapter 2. Unsupervised learning: The branch of machine learning that works with unlabeled data
  8. Chapter 2. Dimensionality reduction simplifies data without losing too much information
  9. Chapter 2. What is reinforcement learning?
  10. Chapter 3. Drawing a line close to our points: Linear regression
  11. Chapter 3. The remember step: Looking at the prices of existing houses
  12. Chapter 3. Some questions that arise and some quick answers
  13. Chapter 3. Crash course on slope and y-intercept
  14. Chapter 3. Simple trick
  15. Chapter 3. The linear regression algorithm: Repeating the absolute or square trick many times to move the line closer to the points
  16. Chapter 3. How do we measure our results? The error function
  17. Chapter 3. Gradient descent: How to decrease an error function by slowly descending from a mountain
  18. Chapter 3. Real-life application: Using Turi Create to predict housing prices in India
  19. Chapter 3. Parameters and hyperparameters
  20. Chapter 4. Optimizing the training process: Underfitting, overfitting, testing, and regularization
  21. Chapter 4. How do we get the computer to pick the right model? By testing
  22. Chapter 4. A numerical way to decide how complex our model should be: The model complexity graph
  23. Chapter 4. Another example of overfitting: Movie recommendations
  24. Chapter 4. Modifying the error function to solve our problem: Lasso regression and ridge regression
  25. Chapter 4. An intuitive way to see regularization
  26. Chapter 4. Polynomial regression, testing, and regularization with Turi Create
  27. Chapter 4. Polynomial regression, testing, and regularization with Turi Create: The testing RMSE for the models follows
  28. Chapter 5. Using lines to split our points: The perceptron algorithm
  29. Chapter 5. The problem: We are on an alien planet, and we don’t know their language!
  30. Chapter 5. Sentiment analysis classifier
  31. Chapter 5. The step function and activation functions: A condensed way to get predictions
  32. Chapter 5. The bias, the y-intercept, and the inherent mood of a quiet alien
  33. Chapter 5. Error function 3: Score
  34. Chapter 5. Pseudocode for the perceptron trick (geometric)
  35. Chapter 5. Bad classifier
  36. Chapter 5. Pseudocode for the perceptron algorithm
  37. Chapter 5. Coding the perceptron algorithm using Turi Create
  38. Chapter 6. A continuous approach to splitting points: Logistic classifiers
  39. Chapter 6. The dataset and the predictions
  40. Chapter 6. Error function 3: log loss
  41. Chapter 6. Formula for the log loss
  42. Chapter 6. Pseudocode for the logistic trick
  43. Chapter 6. Coding the logistic regression algorithm
  44. Chapter 6. Classifying into multiple classes: The softmax function
  45. Chapter 7. How do you measure classification models? Accuracy and its friends
  46. Chapter 7. False positives and false negatives: Which one is worse?
  47. Chapter 7. Recall: Among the positive examples, how many did we correctly classify?
  48. Chapter 7. Combining recall and precision as a way to optimize both: The F-score
  49. Chapter 7. A useful tool to evaluate our model: The receiver operating characteristic (ROC) curve
  50. Chapter 7. The receiver operating characteristic (ROC) curve: A way to optimize sensitivity and specificity in a model
  51. Chapter 7. A metric that tells us how good our model is: The AUC (area under the curve)
  52. Chapter 7. Recall is sensitivity, but precision and specificity are different
  53. Chapter 7. Summary
  54. Chapter 8. Using probability to its maximum: The naive Bayes model
  55. Chapter 8. Sick or healthy? A story with Bayes’ theorem as the hero: Let’s calculate this probability.
  56. Chapter 8. Prelude to Bayes’ theorem: The prior, the event, and the posterior
  57. Chapter 8. What the math just happened? Turning ratios into probabilities
  58. Chapter 8. What the math just happened? Turning ratios into probabilities: Product rule of probabilities
  59. Chapter 8. What about two words? The naive Bayes algorithm
  60. Chapter 8. What about more than two words?
  61. Chapter 8. Implementing the naive Bayes algorithm
  62. Chapter 9. Splitting data by asking questions: Decision trees
  63. Chapter 9. Picking a good first question
  64. Chapter 9. The solution: Building an app-recommendation system
  65. Chapter 9. Gini impurity index: How diverse is my dataset?
  66. Chapter 9. Entropy: Another measure of diversity with strong applications in information theory
  67. Chapter 9. Classes of different sizes? No problem: We can take weighted averages
  68. Chapter 9. Beyond questions like yes/no
  69. Chapter 9. The graphical boundary of decision trees
  70. Chapter 9. Setting hyperparameters in Scikit-Learn
  71. Chapter 9. Applications
  72. Chapter 10. Combining building blocks to gain more power: Neural networks
  73. Chapter 10. Why two lines? Is happiness not linear?
  74. Chapter 10. The boundary of a neural network
  75. Chapter 10. Potential problems: From overfitting to vanishing gradients
  76. Chapter 10. Neural networks with more than one output: The softmax function
  77. Chapter 10. Training the model
  78. Chapter 10. Other architectures for more complex datasets
  79. Chapter 10. How neural networks paint paintings: Generative adversarial networks (GAN)
  80. Chapter 11. Finding boundaries with style: Support vector machines and the kernel method
  81. Chapter 11. Distance error function: Trying to separate our two lines as far apart as possible
  82. Chapter 11. Training SVMs with nonlinear boundaries: The kernel method
  83. Chapter 11. Going beyond quadratic equations: The polynomial kernel
  84. Chapter 11. A measure of how close points are: Similarity
  85. Chapter 11. Overfitting and underfitting with the RBF kernel: The gamma parameter
  86. Chapter 12. Combining models to maximize results: Ensemble learning
  87. Chapter 12. Fitting a random forest manually
  88. Chapter 12. Combining the weak learners into a strong learner
  89. Chapter 12. Gradient boosting: Using decision trees to build strong learners
  90. Chapter 12. XGBoost similarity score: A new and effective way to measure similarity in a set
  91. Chapter 12. Building the weak learners: Split at 25
  92. Chapter 12. Tree pruning: A way to reduce overfitting by simplifying the weak learners
  93. Chapter 13. Putting it all in practice: A real-life example of data engineering and machine learning
  94. Chapter 13. Using Pandas to study our dataset
  95. Chapter 13. Turning categorical data into numerical data: One-hot encoding
  96. Chapter 13. Feature selection: Getting rid of unnecessary features
  97. Chapter 13. Testing each model’s accuracy
  98. Chapter 13. Tuning the hyperparameters to find the best model: Grid search

Product information

  • Title: Grokking Machine Learning, Video Edition
  • Author(s): Luis Serrano
  • Release date: December 2021
  • Publisher(s): Manning Publications