Video description
In Video Editions the narrator reads the book while the content, figures, code listings, diagrams, and text appear on the screen. It's like an audiobook that you can also watch as a video.
Bayesian optimization helps pinpoint the best configuration for your machine learning models with speed and accuracy. Put its advanced techniques into practice with this hands-on guide.
In Bayesian Optimization in Action you will learn how to:
- Train Gaussian processes on both sparse and large data sets
- Combine Gaussian processes with deep neural networks to make them flexible and expressive
- Find the most successful strategies for hyperparameter tuning
- Navigate a search space and identify high-performing regions
- Apply Bayesian optimization to cost-constrained, multi-objective, and preference optimization
- Implement Bayesian optimization with PyTorch, GPyTorch, and BoTorch
Bayesian Optimization in Action shows you how to streamline hyperparameter tuning, A/B testing, and other aspects of the machine learning process by applying cutting-edge Bayesian techniques. Using clear language, illustrations, and concrete examples, this book proves that Bayesian optimization doesn’t have to be difficult! You’ll get in-depth insights into how Bayesian optimization works and learn how to implement it with state-of-the-art Python libraries. The book’s easy-to-reuse code samples let you hit the ground running by plugging them straight into your own projects.
About the Technology
In machine learning, optimization is about achieving the best predictions—shortest delivery routes, perfect price points, most accurate recommendations—in the fewest steps. Bayesian optimization uses the mathematics of probability to fine-tune ML functions, algorithms, and hyperparameters efficiently when traditional methods are too slow or expensive.
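The idea described above can be sketched end to end in plain Python: a tiny Gaussian process surrogate fit to a handful of observations, plus an Upper Confidence Bound rule that decides which point to evaluate next. This is a minimal illustration, not code from the book; the kernel, lengthscale, toy objective, grid, and hand-rolled linear solver are all assumptions chosen to keep the sketch dependency-free (the book itself uses PyTorch, GPyTorch, and BoTorch).

```python
import math

def rbf(x1, x2, length=0.5):
    # Squared-exponential kernel: similarity decays smoothly with distance
    return math.exp(-((x1 - x2) ** 2) / (2 * length ** 2))

def solve(A, b):
    # Naive Gaussian elimination with partial pivoting (fine for tiny systems)
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, x_star, noise=1e-6):
    # GP posterior mean and variance at x_star, given observations (xs, ys)
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    k_star = [rbf(a, x_star) for a in xs]
    mean = sum(k * w for k, w in zip(k_star, solve(K, ys)))
    var = rbf(x_star, x_star) - sum(k * w for k, w in zip(k_star, solve(K, k_star)))
    return mean, max(var, 0.0)

def objective(x):
    # A toy "expensive black box" we pretend we can only sample (max at x = 0.3)
    return -(x - 0.3) ** 2

# Bayesian optimization loop with an Upper Confidence Bound acquisition
xs, ys = [0.0, 1.0], [objective(0.0), objective(1.0)]
grid = [i / 100 for i in range(101)]
for _ in range(10):
    # Score every candidate by posterior mean plus an optimism bonus
    scores = []
    for g in grid:
        m, v = gp_posterior(xs, ys, g)
        scores.append(m + 2.0 * math.sqrt(v))
    x_next = grid[scores.index(max(scores))]
    xs.append(x_next)
    ys.append(objective(x_next))

best_x = xs[ys.index(max(ys))]  # should land near the true optimum at 0.3
```

With the exploration weight fixed at 2, each step trades off the surrogate's mean (exploitation) against its uncertainty (exploration); libraries such as BoTorch package this same loop behind optimized tensor code and more sophisticated acquisition functions.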
About the Book
Bayesian Optimization in Action teaches you how to create efficient machine learning processes using a Bayesian approach. In it, you’ll explore practical techniques for training models on large datasets, tuning hyperparameters, and navigating complex search spaces. This engaging book includes clear illustrations and fun examples like perfecting coffee sweetness, predicting the weather, and even debunking psychic claims. You’ll learn how to navigate multi-objective scenarios, account for decision costs, and tackle pairwise comparisons.
What's Inside
- Gaussian processes for sparse and large datasets
- Strategies for hyperparameter tuning
- Techniques for identifying high-performing regions
- Examples in PyTorch, GPyTorch, and BoTorch
About the Reader
For machine learning practitioners who are confident in math and statistics.
About the Author
Quan Nguyen is a research assistant at Washington University in St. Louis. He writes for the Python Software Foundation and has authored several books on Python programming.
Quotes
Using a hands-on approach, clear diagrams, and real-world examples, Quan lifts the veil off the complexities of Bayesian optimization.
- From the Foreword by Luis Serrano, Author of Grokking Machine Learning
This book teaches Bayesian optimization, starting from its most basic components. You’ll find enough depth to make you comfortable with the tools and methods and enough code to do real work very quickly.
- From the Foreword by David Sweet, Author of Experimentation for Engineers
Combines modern computational frameworks with visualizations and infographics you won’t find anywhere else. It gives readers the confidence to apply Bayesian optimization to real-world problems!
- Ravin Kumar, Google
Table of contents
- Chapter 1. Introduction to Bayesian optimization
- Chapter 1. Introducing Bayesian optimization
- Chapter 1. What will you learn in this book?
- Chapter 1. Summary
- Part 1. Modeling with Gaussian processes
- Chapter 2. Gaussian processes as distributions over functions
- Chapter 2. Modeling correlations with multivariate Gaussian distributions and Bayesian updates
- Chapter 2. Going from a finite to an infinite Gaussian
- Chapter 2. Implementing GPs in Python
- Chapter 2. Exercise
- Chapter 2. Summary
- Chapter 3. Customizing a Gaussian process with the mean and covariance functions
- Chapter 3. Incorporating what you already know into a GP
- Chapter 3. Defining the functional behavior with the mean function
- Chapter 3. Defining variability and smoothness with the covariance function
- Chapter 3. Exercise
- Chapter 3. Summary
- Part 2. Making decisions with Bayesian optimization
- Chapter 4. Refining the best result with improvement-based policies
- Chapter 4. Finding improvement in BayesOpt
- Chapter 4. Optimizing the expected value of improvement
- Chapter 4. Exercises
- Chapter 4. Summary
- Chapter 5. Exploring the search space with bandit-style policies
- Chapter 5. Being optimistic under uncertainty with the Upper Confidence Bound policy
- Chapter 5. Smart sampling with the Thompson sampling policy
- Chapter 5. Exercises
- Chapter 5. Summary
- Chapter 6. Using information theory with entropy-based policies
- Chapter 6. Entropy search in BayesOpt
- Chapter 6. Exercises
- Chapter 6. Summary
- Part 3. Extending Bayesian optimization to specialized settings
- Chapter 7. Maximizing throughput with batch optimization
- Chapter 7. Computing the improvement and upper confidence bound of a batch of points
- Chapter 7. Exercise 1: Extending TS to the batch setting via resampling
- Chapter 7. Computing the value of a batch of points using information theory
- Chapter 7. Exercise 2: Optimizing airplane designs
- Chapter 7. Summary
- Chapter 8. Satisfying extra constraints with constrained optimization
- Chapter 8. Constraint-aware decision-making in BayesOpt
- Chapter 8. Exercise 1: Manual computation of constrained EI
- Chapter 8. Implementing constrained EI with BoTorch
- Chapter 8. Exercise 2: Constrained optimization of airplane design
- Chapter 8. Summary
- Chapter 9. Balancing utility and cost with multifidelity optimization
- Chapter 9. Multifidelity modeling with GPs
- Chapter 9. Balancing information and cost in multifidelity optimization
- Chapter 9. Measuring performance in multifidelity optimization
- Chapter 9. Exercise 1: Visualizing average performance in multifidelity optimization
- Chapter 9. Exercise 2: Multifidelity optimization with multiple low-fidelity approximations
- Chapter 9. Summary
- Chapter 10. Learning from pairwise comparisons with preference optimization
- Chapter 10. Formulating a preference optimization problem and formatting pairwise comparison data
- Chapter 10. Training a preference-based GP
- Chapter 10. Preference optimization by playing king of the hill
- Chapter 10. Summary
- Chapter 11. Optimizing multiple objectives at the same time
- Chapter 11. Finding the boundary of the most optimal data points
- Chapter 11. Seeking to improve the optimal data boundary
- Chapter 11. Exercise: Multiobjective optimization of airplane design
- Chapter 11. Summary
- Part 4. Special Gaussian process models
- Chapter 12. Scaling Gaussian processes to large datasets
- Chapter 12. Automatically choosing representative points from a large dataset
- Chapter 12. Optimizing better by accounting for the geometry of the loss surface
- Chapter 12. Exercise
- Chapter 12. Summary
- Chapter 13. Combining Gaussian processes with neural networks
- Chapter 13. Capturing similarity within structured data
- Chapter 13. Using neural networks to process complex structured data
- Chapter 13. Summary
Product information
- Title: Bayesian Optimization in Action, Video Edition
- Author(s): Quan Nguyen
- Release date: December 2023
- Publisher(s): Manning Publications