Chapter 5: Running DL Pipelines in Different Environments
It is critical to have the flexibility to run a deep learning (DL) pipeline in different execution environments, whether local or remote, on-premises or in the cloud. This is because different stages of DL development impose different constraints or preferences, whether to improve development velocity or to ensure security compliance. For example, it is desirable to do small-scale model experimentation locally on a laptop, whereas a full hyperparameter tuning run needs a cloud-hosted GPU cluster to achieve a quick turnaround time. Given the diversity of execution environments in both hardware and software configurations, it used to ...