Chapter 9: Machine Learning Life Cycle Management

In the previous chapters, we explored the basics of scalable machine learning using Apache Spark. Supervised and unsupervised learning algorithms were introduced, along with their implementation details using Apache Spark MLlib. In real-world scenarios, it is not sufficient to train just one model. Instead, multiple versions of the same model must be built on the same dataset, varying the model parameters, to arrive at the best possible model. Also, a single model might not be suitable for every application, so multiple models are trained. Thus, it is necessary to track the various experiments, their parameters, their metrics, and the version of the data they were trained on. Furthermore, ...
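To make the idea of experiment tracking concrete, the following is a minimal sketch of training several versions of the same model on the same dataset while varying a single parameter, logging each run's parameters and metrics with the MLflow tracking API. The experiment name, dataset, and parameter values are illustrative assumptions, not the book's exact example.

```python
# Sketch: track multiple training runs of the same model with MLflow.
# The experiment name, synthetic data, and regParam values are assumptions.
import mlflow
from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("experiment-tracking-sketch").getOrCreate()

# Tiny synthetic dataset so the example is self-contained
train_df = spark.createDataFrame(
    [(Vectors.dense([0.0, 1.1]), 0.0),
     (Vectors.dense([2.0, 1.0]), 1.0),
     (Vectors.dense([2.0, 1.3]), 1.0),
     (Vectors.dense([0.0, 1.2]), 0.0)],
    ["features", "label"])

evaluator = BinaryClassificationEvaluator(metricName="areaUnderROC")
mlflow.set_experiment("lifecycle-demo")  # hypothetical experiment name

# One tracked run per regularization value: same data, different parameters
for reg_param in [0.0, 0.01, 0.1]:
    with mlflow.start_run():
        lr = LogisticRegression(regParam=reg_param, maxIter=10)
        model = lr.fit(train_df)
        # Evaluated on the training data here purely to keep the sketch short
        auc = evaluator.evaluate(model.transform(train_df))
        mlflow.log_param("regParam", reg_param)
        mlflow.log_metric("areaUnderROC", auc)
```

Each run is recorded with its parameter value and resulting metric, so the runs can later be compared side by side to pick the best-performing version of the model.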
