Introduction to Hyperparameter Optimization
A discussion of HPO principles and hands-on collaboration on an interactive use case
Experimentation is critical to developing models but can be a messy process. Modelers often spend significant time on tasks like tracking runs, creating visualizations, and troubleshooting hyperparameter optimization jobs, all of which could be supported or automated with software.
Join expert Jim Blomo to learn best practices for tracking, training, and tuning models and using the information from these processes to make better decisions throughout model development. You’ll focus specifically on hyperparameter optimization (HPO): selecting the best method, executing tuning jobs, and analyzing the results of these jobs to select the best model for production. Along the way, you’ll see firsthand just how useful HPO is through an anomaly detection problem (based on a Kaggle financial dataset) that uses an XGBoost classification model. You’ll then use SigOpt to run your own tuning jobs and explore open source alternatives and how to implement them.
What you'll learn and how you can apply it
By the end of this live online course, you’ll understand:
- How to use training in conjunction with tuning
- Best practices for visualizing training runs and hyperparameter tuning jobs
- The basic concepts of model hyperparameters
- Model parametrization and hyperparameter optimization
- How to use experiment insights to gain knowledge about your model
And you’ll be able to:
- Track training runs
- Customize visualizations and comparisons of runs
- Parametrize your model
- Optimize your model using a tool like SigOpt
- Analyze model optimization insights
- Configure a multimetric optimization problem
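One of the abilities above, configuring a multimetric optimization problem, can be previewed with a small sketch: given tuning runs scored on two metrics (say, accuracy to maximize and latency to minimize), keep only the Pareto-efficient configurations. The metric names and run data below are illustrative placeholders, not taken from the course materials.

```python
# Hedged sketch: Pareto filtering for a two-metric tuning problem.
# "accuracy" (maximize) and "latency_ms" (minimize) are illustrative
# metric names, not part of the course materials.

def dominates(a, b):
    """Run a dominates run b if it is no worse on both metrics
    and strictly better on at least one."""
    no_worse = a["accuracy"] >= b["accuracy"] and a["latency_ms"] <= b["latency_ms"]
    strictly = a["accuracy"] > b["accuracy"] or a["latency_ms"] < b["latency_ms"]
    return no_worse and strictly

def pareto_front(runs):
    """Return the runs not dominated by any other run."""
    return [r for r in runs if not any(dominates(o, r) for o in runs)]

runs = [
    {"params": {"max_depth": 3}, "accuracy": 0.91, "latency_ms": 5.0},
    {"params": {"max_depth": 8}, "accuracy": 0.94, "latency_ms": 20.0},
    {"params": {"max_depth": 6}, "accuracy": 0.93, "latency_ms": 25.0},  # dominated
]

front = pareto_front(runs)
```

A multimetric tuning tool surfaces this front to you rather than a single "best" run, leaving the accuracy/latency trade-off as a product decision.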
This training course is for you because...
- You’re a modeler or analyst.
- You work with machine learning or deep learning models.
- You need to develop models that make it into production.
- You want to become an expert in optimizing your models’ parameters.
Prerequisites
- Familiarity with Python, the Jupyter Notebook, and XGBoost
- Kaggle financial dataset exploration
In order to follow along with the course tutorials, it is recommended (but not required) that you create a free trial account the week the course starts:
- SigOpt free trial account
About your instructor
Jim Blomo is an executive engineering leader at SigOpt. He’s achieved strong business results in technology companies by creating a culture of performance, innovation, and teamwork. Previously, Jim led data-mining efforts as a director of engineering at Yelp, operations at the startup PBWorks, and search infrastructure at Amazon’s A9 subsidiary. He enjoys speaking and travel; he’s lectured on data mining and web architecture at UC Berkeley's School of Information and presented at conferences such as AWS re:Invent, O’Reilly OSCON, Wolfram Data Summit, and RecSys. He loves exploring the food and outdoors of the Bay Area with his family.
Schedule
The time frames are only estimates and may vary according to how the class is progressing.
Data import and preprocessing (30 minutes)
- Presentation: Importing libraries; importing the dataset; defining your feature and label sets; objective metric selection; splitting the dataset
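The preprocessing steps named above can be sketched in miniature: define feature and label sets, then split the dataset into training and test portions. The rows and column names below are invented placeholders (the course uses a Kaggle financial dataset), and in the actual notebook a library helper such as scikit-learn's train_test_split would typically do the split.

```python
import random

# Hedged sketch of defining feature/label sets and splitting a dataset.
# All column names and values here are illustrative placeholders.
rows = [
    {"amount": 120.0, "n_transactions": 4, "is_anomaly": 0},
    {"amount": 9800.0, "n_transactions": 40, "is_anomaly": 1},
    {"amount": 75.5, "n_transactions": 2, "is_anomaly": 0},
    {"amount": 5100.0, "n_transactions": 31, "is_anomaly": 1},
]

feature_names = ["amount", "n_transactions"]
X = [[row[f] for f in feature_names] for row in rows]  # feature set
y = [row["is_anomaly"] for row in rows]                # label set

# Shuffled 75/25 train/test split, done by hand for illustration.
random.seed(0)
indices = list(range(len(rows)))
random.shuffle(indices)
cut = int(0.75 * len(indices))
train_idx, test_idx = indices[:cut], indices[cut:]
X_train = [X[i] for i in train_idx]
y_train = [y[i] for i in train_idx]
X_test = [X[i] for i in test_idx]
y_test = [y[i] for i in test_idx]
```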
Experiment tracking (30 minutes)
- Presentation: Experiment tracking with SigOpt: training runs and experiments; MLflow as an alternative for experiment management; setting your baseline
- Hands-on exercise: Set up SigOpt
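At its core, an experiment tracker records hyperparameters in and metrics out for every training run, so runs can be compared against a baseline. The in-memory stand-in below only shows the shape of that data; real tools such as SigOpt runs or MLflow add persistent storage, visualization, and comparison UIs. All class, parameter, and metric names here are illustrative, not any tool's actual API.

```python
# Hedged sketch of what an experiment tracker records per training run.
# Not SigOpt's or MLflow's API -- just the shape of the data they manage.
class RunTracker:
    def __init__(self):
        self.runs = []

    def log_run(self, params, metrics):
        self.runs.append({"params": params, "metrics": metrics})

    def best_run(self, metric, maximize=True):
        key = lambda r: r["metrics"][metric]
        return max(self.runs, key=key) if maximize else min(self.runs, key=key)

tracker = RunTracker()
# Baseline run with default-ish hyperparameters (values are illustrative).
tracker.log_run({"max_depth": 6, "learning_rate": 0.3}, {"auc": 0.88})
tracker.log_run({"max_depth": 4, "learning_rate": 0.1}, {"auc": 0.91})

baseline = tracker.runs[0]
best = tracker.best_run("auc")
```

Logging a baseline first, as the presentation suggests, gives every later tuning run a fixed point of comparison.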
Break (5 minutes)
Hyperparameter optimization (75 minutes)
- Presentation: Defining your parameter space; configuring your experiment; multimetric experimentation; Hyperopt as an open source alternative for hyperparameter optimization
- Hands-on exercise: Instrument your model and run your experiment
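The two steps above, defining a parameter space and running an experiment against it, can be sketched with random search, the simplest HPO method. The objective below is a synthetic stand-in for "train the model and return the validation metric"; the course instead tunes an XGBoost classifier with SigOpt (or Hyperopt), and all names and ranges here are illustrative.

```python
import random

# Hedged sketch: define a parameter space and random-search over it.
# The objective is synthetic; in practice it would train a model and
# return a validation metric such as AUC.
space = {
    "max_depth": (3, 10),          # integer range
    "learning_rate": (0.01, 0.3),  # continuous range
}

def sample(space, rng):
    return {
        "max_depth": rng.randint(*space["max_depth"]),
        "learning_rate": rng.uniform(*space["learning_rate"]),
    }

def objective(params):
    # Synthetic score peaking near max_depth=6, learning_rate=0.1.
    return (1.0
            - abs(params["max_depth"] - 6) * 0.05
            - abs(params["learning_rate"] - 0.1))

rng = random.Random(0)
best_params, best_score = None, float("-inf")
for _ in range(20):
    params = sample(space, rng)
    score = objective(params)
    if score > best_score:
        best_params, best_score = params, score
```

Bayesian optimizers like SigOpt replace the blind `sample` step with a model of the objective, so fewer trials are wasted in unpromising regions of the space.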
Wrap-up and Q&A (10 minutes)