A key step in the data science workflow is rapid model development: creating, testing, and identifying the best models to put into production. However, large gaps exist in this workflow, and the data science tool set is rapidly changing to fill them. Large teams and enterprises are quickly moving from individual, siloed notebooks like Zeppelin and Jupyter toward sharing and reusing models, code, and results. Challenges also remain in deploying and serving models in production with tools like Kubeflow and TensorFlow. Moon Soo Lee and Louis Huard explore real-world examples of how companies are solving these problems, and how you can apply these best practices in your own workflow.
What you'll learn
- Learn how companies are closing the gaps in the data science workflow
This session is from the 2019 O'Reilly Artificial Intelligence Conference in San Jose, CA.
- Title: The holy grail of data science: Rapid model development and deployment (sponsored by Zepl)
- Release date: February 2020
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 0636920369936