Chapter 10: Scaling Out Single-Node Machine Learning Using PySpark
In Chapter 5, Scalable Machine Learning with PySpark, you learned how to use the power of Apache Spark's distributed computing framework to train and score machine learning (ML) models at scale. Spark's native ML library provides good coverage of the standard tasks that data scientists typically perform; however, a wide variety of functionality provided by standard single-node Python libraries was never designed to work in a distributed manner. This chapter deals with techniques for horizontally scaling out standard Python data processing and ML libraries such as pandas, scikit-learn, XGBoost, and more. It also covers scaling out typical data science tasks ...
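To give a concrete taste of the kind of technique the chapter covers, the following is a minimal sketch (not taken from the book) of one common pattern: wrapping a single-node scikit-learn model in a pandas UDF so that Spark can apply it to partitions of a DataFrame in parallel. The toy model, data, and column name are illustrative assumptions, and type-hinted pandas UDFs assume Spark 3.0 or later:

import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf
from sklearn.linear_model import LinearRegression

spark = SparkSession.builder.appName("scale-out-sketch").getOrCreate()

# Train a small scikit-learn model on the driver (single node).
# The data here is purely illustrative.
model = LinearRegression().fit([[0.0], [1.0], [2.0]], [0.0, 2.0, 4.0])

@pandas_udf("double")
def predict(feature: pd.Series) -> pd.Series:
    # Spark serializes the model with the UDF closure, so each executor
    # scores its own batch of rows with the same fitted model.
    return pd.Series(model.predict(feature.to_numpy().reshape(-1, 1)))

df = spark.createDataFrame([(0.5,), (1.5,), (2.5,)], ["feature"])
df.withColumn("prediction", predict("feature")).show()

Because scoring happens inside a vectorized UDF, Spark distributes the work across executors while the modeling code itself remains ordinary single-node scikit-learn.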