Video description
Get started with MLOps and GitHub Actions by packaging a container with an ONNX model that does live inferencing through a Flask application. Using Azure ML, learn how to download a large ONNX model
inside a GitHub Actions workflow, package it as a container, and push it to a container registry. For reference, use the https://github.com/alfredodeza/flask-roberta repository.
Topics include:
* Create a container that does live inferencing with Flask and the ONNX runtime
* Package the model and verify it works locally
* Set up a GitHub Action to authenticate to Azure ML and download a previously registered model
* Build the new container in a GitHub Action and authenticate to Docker Hub or GitHub Packages
* Push the new container to the GitHub Container Registry or any other registry, like Docker Hub
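The steps above can be sketched as a single GitHub Actions workflow. This is a minimal, hedged example, not the course's exact workflow: the model name (`roberta-sequence`), workspace, resource group, and secret names are placeholders you would replace with your own, and it assumes Azure credentials are stored in a repository secret.

```yaml
# Sketch of a workflow that downloads a registered Azure ML model,
# builds a container around it, and pushes to GitHub's registry.
# All names (model, workspace, resource group, secrets) are placeholders.
name: Package ONNX model container

on:
  push:
    branches: [main]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      # Authenticate to Azure using a service-principal secret
      - name: Azure login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      # Download the previously registered ONNX model into the workspace
      - name: Download registered model
        run: |
          az extension add --name ml
          az ml model download --name roberta-sequence --version 1 \
            --workspace-name demo-workspace --resource-group demo-rg

      # Authenticate to GitHub's container registry
      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      # Build the Flask + ONNX runtime image and push it
      - name: Build and push container
        run: |
          docker build -t ghcr.io/${{ github.repository }}:latest .
          docker push ghcr.io/${{ github.repository }}:latest
```

Swapping the `docker/login-action` registry and credentials is all it takes to target Docker Hub instead of GitHub Packages.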
Product information
- Title: MLOps workflow with Github Actions
- Author(s):
- Release date: March 2021
- Publisher(s): Pragmatic AI Labs
- ISBN: 50108VIDEOPAIML