Amazon SageMaker Best Practices

Book description

Overcome advanced challenges in building end-to-end ML solutions by leveraging the capabilities of Amazon SageMaker for developing and integrating ML models into production

Key Features

  • Learn best practices for all phases of building machine learning solutions - from data preparation to monitoring models in production
  • Automate end-to-end machine learning workflows with Amazon SageMaker and related AWS services
  • Design, architect, and operate machine learning workloads in the AWS Cloud

Book Description

Amazon SageMaker is a fully managed AWS service for building, training, deploying, and monitoring machine learning models. The book begins with a high-level overview of Amazon SageMaker capabilities that map to the various phases of the machine learning process to help set the right foundation. You'll learn efficient tactics to address data science challenges such as processing data at scale, data preparation, connecting to big data pipelines, identifying data bias, running A/B tests, and model explainability using Amazon SageMaker. As you advance, you'll understand how to tackle the challenge of training at scale, including how to use large datasets while saving costs, monitor training resources to identify bottlenecks, speed up long training jobs, and track multiple models trained for a common goal. Moving ahead, you'll find out how you can integrate Amazon SageMaker with other AWS services to build reliable, cost-optimized, and automated machine learning applications. In addition, you'll build ML pipelines integrated with MLOps principles and apply best practices to build secure and performant solutions.

By the end of the book, you'll be able to confidently apply Amazon SageMaker's wide range of capabilities to the full spectrum of machine learning workflows.

What you will learn

  • Perform data bias detection with SageMaker Data Wrangler and SageMaker Clarify
  • Speed up data processing with SageMaker Feature Store
  • Overcome labeling bias with SageMaker Ground Truth
  • Improve training time with the monitoring and profiling capabilities of SageMaker Debugger
  • Address the challenge of model deployment automation with CI/CD using the SageMaker model registry
  • Explore SageMaker Neo for model optimization
  • Implement data and model quality monitoring with Amazon SageMaker Model Monitor
  • Improve training time and reduce costs with SageMaker data and model parallelism

Who this book is for

This book is for expert data scientists responsible for building machine learning applications using Amazon SageMaker. Working knowledge of Amazon SageMaker, machine learning, deep learning, and experience using Jupyter Notebooks and Python is expected. Basic knowledge of AWS related to data, security, and monitoring will help you make the most of the book.

Table of contents

  1. Amazon SageMaker Best Practices
  2. Contributors
  3. About the authors
  4. About the reviewers
  5. Preface
    1. Who this book is for
    2. What this book covers
    3. To get the most out of this book
    4. Download the example code files
    5. Download the color images
    6. Conventions used
    7. Get in touch
    8. Share your thoughts
  6. Section 1: Processing Data at Scale
  7. Chapter 1: Amazon SageMaker Overview
    1. Technical requirements
    2. Preparing, building, training and tuning, deploying, and managing ML models
    3. Discussion of data preparation capabilities
      1. SageMaker Ground Truth
      2. SageMaker Data Wrangler
      3. SageMaker Processing
      4. SageMaker Feature Store
      5. SageMaker Clarify
    4. Feature tour of model-building capabilities
      1. SageMaker Studio
      2. SageMaker notebook instances
      3. SageMaker algorithms
      4. BYO algorithms and scripts
    5. Feature tour of training and tuning capabilities
      1. SageMaker training jobs
      2. Autopilot
      3. HPO
      4. SageMaker Debugger
      5. SageMaker Experiments
    6. Feature tour of model management and deployment capabilities
      1. Model Monitor
      2. Model endpoints
      3. Edge Manager
    7. Summary
  8. Chapter 2: Data Science Environments
    1. Technical requirements
    2. Machine learning use case and dataset
    3. Creating a data science environment
      1. Creating repeatability through IaC/CaC
      2. Amazon SageMaker notebook instances
      3. Amazon SageMaker Studio
      4. Providing and creating data science environments as IT services
      5. Creating a portfolio in AWS Service Catalog
      6. Amazon SageMaker notebook instances
      7. Amazon SageMaker Studio
    4. Summary
    5. References
  9. Chapter 3: Data Labeling with Amazon SageMaker Ground Truth
    1. Technical requirements
    2. Challenges with labeling data at scale
    3. Addressing unique labeling requirements with custom labeling workflows
      1. A private labeling workforce
      2. Listing the data to label
      3. Creating the workflow
    4. Improving labeling quality using multiple workers
    5. Using active learning to reduce labeling time
    6. Security and permissions
    7. Summary
  10. Chapter 4: Data Preparation at Scale Using Amazon SageMaker Data Wrangler and Processing
    1. Technical requirements
    2. Visual data preparation with Data Wrangler
      1. Data inspection
      2. Data transformation
      3. Exporting the flow
    3. Bias detection and explainability with Data Wrangler and Clarify
    4. Data preparation at scale with SageMaker Processing
      1. Loading the dataset
      2. Drop columns
      3. Converting data types
      4. Scaling numeric fields
      5. Featurizing the date
      6. Simulating labels for air quality
      7. Encoding categorical variables
      8. Splitting and saving the dataset
    5. Summary
  11. Chapter 5: Centralized Feature Repository with Amazon SageMaker Feature Store
    1. Technical requirements
    2. Amazon SageMaker Feature Store essentials
    3. Creating feature groups
    4. Populating feature groups
    5. Retrieving features from feature groups
    6. Creating reusable features to reduce feature inconsistencies and inference latency
    7. Designing solutions for near real-time ML predictions
    8. Summary
    9. References
  12. Section 2: Model Training Challenges
  13. Chapter 6: Training and Tuning at Scale
    1. Technical requirements
    2. ML training at scale with SageMaker distributed libraries
      1. Choosing between data and model parallelism
      2. Scaling the compute resources
      3. SageMaker distributed libraries
    3. Automated model tuning with SageMaker hyperparameter tuning
    4. Organizing and tracking training jobs with SageMaker Experiments
    5. Summary
    6. References
  14. Chapter 7: Profile Training Jobs with Amazon SageMaker Debugger
    1. Technical requirements
    2. Amazon SageMaker Debugger essentials
      1. Configuring a training job to use SageMaker Debugger
      2. Analyzing the collected tensors and metrics
      3. Taking action
    3. Real-time monitoring of training jobs using built-in and custom rules
    4. Gaining insight into the training infrastructure and training framework
      1. Training a PyTorch model for weather prediction
      2. Analyzing and visualizing the system and framework metrics generated by the profiler
      3. Analyzing the profiler report generated by SageMaker Debugger
      4. Analyzing and implementing recommendations from the profiler report
      5. Comparing the two training jobs
    5. Summary
    6. Further reading
  15. Section 3: Manage and Monitor Models
  16. Chapter 8: Managing Models at Scale Using a Model Registry
    1. Technical requirements
    2. Using a model registry
    3. Choosing a model registry solution
      1. Amazon SageMaker model registry
      2. Building a custom model registry
      3. Utilizing a third-party or OSS model registry
    4. Managing models using the Amazon SageMaker model registry
      1. Creating a model package group
      2. Creating a model package
    5. Summary
  17. Chapter 9: Updating Production Models Using Amazon SageMaker Endpoint Production Variants
    1. Technical requirements
    2. Basic concepts of Amazon SageMaker Endpoint Production Variants
    3. Deployment strategies for updating ML models with SageMaker Endpoint Production Variants
      1. Standard deployment
      2. A/B deployment
      3. Blue/Green deployment
      4. Canary deployment
      5. Shadow deployment
    4. Selecting an appropriate deployment strategy
      1. Selecting a standard deployment
      2. Selecting an A/B deployment
      3. Selecting a Blue/Green deployment
      4. Selecting a Canary deployment
      5. Selecting a Shadow deployment
    5. Summary
  18. Chapter 10: Optimizing Model Hosting and Inference Costs
    1. Technical requirements
    2. Real-time inference versus batch inference
      1. Batch inference
      2. Real-time inference
      3. Cost comparison
    3. Deploying multiple models behind a single inference endpoint
      1. Multiple versions of the same model
      2. Multiple models
    4. Scaling inference endpoints to meet inference traffic demands
      1. Setting the minimum and maximum capacity
      2. Choosing a scaling metric
      3. Setting the scaling policy
      4. Setting the cooldown period
    5. Using Elastic Inference for deep learning models
    6. Optimizing models with SageMaker Neo
    7. Summary
  19. Chapter 11: Monitoring Production Models with Amazon SageMaker Model Monitor and Clarify
    1. Technical requirements
    2. Basic concepts of Amazon SageMaker Model Monitor and Amazon SageMaker Clarify
    3. End-to-end architectures for monitoring ML models
      1. Data drift monitoring
      2. Model quality drift monitoring
      3. Bias drift monitoring
      4. Feature attribution drift monitoring
    4. Best practices for monitoring ML models
    5. Summary
    6. References
  20. Section 4: Automate and Operationalize Machine Learning
  21. Chapter 12: Machine Learning Automated Workflows
    1. Considerations for automating your SageMaker ML workflows
      1. Typical ML workflows
      2. Considerations and guidance for building SageMaker workflows and CI/CD pipelines
      3. AWS-native options for automated workflow and CI/CD pipelines
    2. Building ML workflows with Amazon SageMaker Pipelines
      1. Building your SageMaker pipeline
      2. Data preparation step
      3. Model build step
      4. Model evaluation step
      5. Conditional step
      6. Register model step(s)
      7. Creating the pipeline
      8. Executing the pipeline
      9. Pipeline recommended practices
    3. Creating CI/CD pipelines using Amazon SageMaker Projects
      1. SageMaker projects recommended practices
    4. Summary
  22. Chapter 13: Well-Architected Machine Learning with Amazon SageMaker
    1. Best practices for operationalizing ML workloads
      1. Ensuring reproducibility
      2. Tracking ML artifacts
      3. Automating deployment pipelines
      4. Monitoring production models
    2. Best practices for securing ML workloads
      1. Isolating the ML environment
      2. Disabling internet and root access
      3. Enforcing authentication and authorization
      4. Securing data and model artifacts
      5. Logging, monitoring, and auditing
      6. Meeting regulatory requirements
    3. Best practices for reliable ML workloads
      1. Recovering from failure
      2. Tracking model origin
      3. Automating deployment pipelines
      4. Handling unexpected traffic patterns
      5. Continuous monitoring of deployed model
      6. Updating model with new versions
    4. Best practices for building performant ML workloads
      1. Rightsizing ML resources
      2. Monitoring resource utilization
      3. Rightsizing hosting infrastructure
      4. Continuous monitoring of deployed model
    5. Best practices for cost-optimized ML workloads
      1. Optimizing data labeling costs
      2. Reducing experimentation costs with models from AWS Marketplace
      3. Using AutoML to reduce experimentation time
      4. Iterating locally with small datasets
      5. Rightsizing training infrastructure
      6. Optimizing hyperparameter-tuning costs
      7. Saving training costs with Managed Spot Training
      8. Using insights and recommendations from Debugger
      9. Saving ML infrastructure costs with Savings Plans
      10. Optimizing inference costs
      11. Stopping or terminating resources
    6. Summary
  23. Chapter 14: Managing SageMaker Features across Accounts
    1. Examining an overview of the AWS multi-account environment
    2. Understanding the benefits of using multiple AWS accounts with Amazon SageMaker
    3. Examining multi-account considerations with Amazon SageMaker
      1. Considerations for SageMaker features
    4. Summary
    5. References
    6. Why subscribe?
  24. Other Books You May Enjoy
    1. Packt is searching for authors like you
    2. Share your thoughts

Product information

  • Title: Amazon SageMaker Best Practices
  • Author(s): Sireesha Muppala, Randy DeFauw, Shelbee Eigenbrode
  • Release date: September 2021
  • Publisher(s): Packt Publishing
  • ISBN: 9781801070522