Deploy Machine Learning Projects in Production with Open Standard Models
Use PMML, PFA, or ONNX to make your models more manageable and tool- and language-independent
Businesses find it challenging to deploy machine learning models uniformly without tying them to specific technologies. Aside from creating complex dependencies and hard-to-maintain heterogeneous environments, this coupling often violates separation of concerns. Luckily, a solution is in sight.
Join expert Adam Breindel to learn how to use Predictive Modeling Markup Language (PMML), Portable Format for Analytics (PFA), and Open Neural Network Exchange (ONNX)—industry collaborations to create open, standard, language- and platform-neutral models that can be produced by many tools and deployed in many other tools. You’ll discover how they contribute to manageability by allowing easy versioning, diffing, and updating without complex and expensive-to-maintain dependencies and then put them to work in example projects.
What you'll learn and how you can apply it
By the end of this live online course, you’ll understand:
- The difficulties in productionizing multiple proprietary ML model “flavors”
- Tools and options for exporting models and feature pipelines as PMML, PFA, or ONNX implementations
- How to deploy models in platform-agnostic standard runtimes
And you’ll be able to:
- Evaluate the benefits of different open standard formats for your projects
- Select an appropriate open standard format and open source runtime environment for your deployment
- Create and run a service that performs inference (makes predictions) using these industry-standard models
This training course is for you because...
- You’re a data scientist, data engineer, or MLOps engineer.
- You work with ML models that need to be put into production and managed.
- You want to become an engineer, architect, or leader who can deploy and operate ML models in production.
Prerequisites
- Familiarity with the basic ideas of machine learning, such as continuous versus categorical variables, basic models like linear and logistic regression, and at least one tool for training ML models and making predictions
- A basic understanding of operations principles, especially in enterprise settings (useful but not required)
Recommended follow-up
- Dive deeper into the documentation on your platform of choice (e.g., Kubeflow, MLflow, or SeldonCore) and other cloud-specific offerings (e.g., AWS SageMaker or Azure MLOps)
About your instructor
Adam Breindel consults and teaches courses on Apache Spark, data engineering, machine learning, AI, and deep learning. He supports instructional initiatives as a senior instructor at Databricks, has taught classes on Apache Spark and deep learning for O'Reilly, and runs a business helping large firms and startups implement data and ML architectures. Adam’s first full-time job in tech was neural net–based fraud detection, deployed at North America's largest banks; since then, he's worked with numerous startups, where he’s enjoyed building things like mobile check-in for two of America's five biggest airlines years before the iPhone came out. He’s also worked in entertainment, insurance, and retail banking; on web, embedded, and server apps; and on clustering architectures, APIs, and streaming analytics.
Schedule
The timeframes are only estimates and may vary according to how the class is progressing.
Understanding the model deployment challenge (30 minutes)
- Lecture: Managing model operations independent of training platform; multiple delivery platforms (mobile, web, REST service, etc.); the challenges of model deployments
- Group discussion: Breaking the dependency between model building and model deployment
- Hands-on exercise: Explore what can go wrong with model deployments today
Break (5 minutes)
Common but (sometimes) less desirable approaches (20 minutes)
- Lecture: Amalgamation and single-product model formats
- Group discussion: Your deployment mechanisms
- Hands-on exercise: Deploy a model with TensorFlow Serving
Break (5 minutes)
An open standard approach: PMML (35 minutes)
- Lecture: The PMML origin; which products create and/or consume PMML
- Group discussion: When does PMML make sense today?
- Hands-on exercise: Create a PMML model and inspect it
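To preview the PMML exercise: a PMML model is plain XML, so it can be inspected, versioned, and diffed with standard tooling and no ML framework at all. The fragment below is a hand-written toy linear regression for illustration (real PMML would usually be exported by a converter such as sklearn2pmml or JPMML); the field names and coefficients are invented.

```python
# A minimal, hand-written PMML 4.4 fragment for the toy linear regression
# y = 0.5*x1 + 1.2*x2 + 3.0 -- illustration only; real PMML is normally
# produced by an exporter such as sklearn2pmml or JPMML.
import xml.etree.ElementTree as ET

PMML_DOC = """<?xml version="1.0" encoding="UTF-8"?>
<PMML xmlns="http://www.dmg.org/PMML-4_4" version="4.4">
  <Header description="Toy linear regression"/>
  <DataDictionary numberOfFields="3">
    <DataField name="x1" optype="continuous" dataType="double"/>
    <DataField name="x2" optype="continuous" dataType="double"/>
    <DataField name="y" optype="continuous" dataType="double"/>
  </DataDictionary>
  <RegressionModel functionName="regression">
    <MiningSchema>
      <MiningField name="x1"/>
      <MiningField name="x2"/>
      <MiningField name="y" usageType="target"/>
    </MiningSchema>
    <RegressionTable intercept="3.0">
      <NumericPredictor name="x1" coefficient="0.5"/>
      <NumericPredictor name="x2" coefficient="1.2"/>
    </RegressionTable>
  </RegressionModel>
</PMML>"""

ns = {"pmml": "http://www.dmg.org/PMML-4_4"}
root = ET.fromstring(PMML_DOC)

# Because the model is just XML, its fields and coefficients can be listed
# (and diffed across versions) with the standard library alone.
fields = [f.get("name") for f in root.findall(".//pmml:DataField", ns)]
coeffs = {p.get("name"): float(p.get("coefficient"))
          for p in root.findall(".//pmml:NumericPredictor", ns)}
intercept = float(root.find(".//pmml:RegressionTable", ns).get("intercept"))
print(fields, coeffs, intercept)

# Scoring by hand shows that the document is fully self-describing:
x = {"x1": 2.0, "x2": 1.0}
y = intercept + sum(coeffs[name] * x[name] for name in coeffs)
print(y)  # 3.0 + 0.5*2.0 + 1.2*1.0 = 5.2
```

This transparency is exactly what makes the format easy to manage: a coefficient change between model versions shows up as a one-line XML diff.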
A newer, better approach: PFA (35 minutes)
- Lecture: PFA design goals; where it works
- Group discussion: Using the open source PFA scoring reference implementations
- Hands-on exercise: Create a PFA scoring service using Hadrian (the Java scoring implementation)
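To give a feel for the format ahead of the Hadrian exercise: a PFA scoring engine is a JSON document whose "action" is an expression tree. The sketch below shows a toy PFA document (Celsius to Fahrenheit) plus a deliberately tiny, hand-rolled evaluator covering only the two operations that document uses; it is not a PFA engine, just an illustration of how little is needed to interpret the JSON. Real deployments would use a full implementation such as Hadrian (Java) or Titus (Python).

```python
import json

# A toy PFA scoring engine document: converts Celsius to Fahrenheit.
# PFA "action" expressions are JSON trees; this one is {"+": [{"*": ...}, 32]}.
PFA_DOC = json.loads("""
{
  "input": "double",
  "output": "double",
  "action": {"+": [{"*": ["input", 1.8]}, 32]}
}
""")

def evaluate(expr, datum):
    """Evaluate a tiny subset of PFA expressions.

    Handles only the symbol "input", numeric literals, and two-argument
    "+" and "*" calls -- just enough for the toy document above.
    A real engine (Hadrian, Titus) implements the full PFA spec.
    """
    if expr == "input":
        return datum
    if isinstance(expr, (int, float)):
        return expr
    (fcn, args), = expr.items()          # one function call per node
    a, b = (evaluate(arg, datum) for arg in args)
    return a + b if fcn == "+" else a * b

print(evaluate(PFA_DOC["action"], 100.0))  # 100C -> 212.0F
print(evaluate(PFA_DOC["action"], 0.0))    # 0C -> 32.0F
```

Because the whole model is data rather than code, two PFA documents can be diffed and versioned just like any other JSON artifact.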
Break (5 minutes)
Latest technology: ONNX format (45 minutes)
- Lecture: The origin of ONNX; extending ONNX beyond neural nets to ML; ONNX runtimes available today
- Group discussion: Creating an ONNX representation of a model
- Hands-on exercise: Serve predictions using Python and Microsoft’s open source onnxruntime