Preprocessing big data with Spark on EMR

The standard design pattern for executing models in SageMaker is to read the training data from S3. Most of the time, that data is not readily consumable in its raw form, and when the datasets are large, wrangling them inside a Jupyter notebook is impractical. In such cases, Spark running on EMR clusters can be employed to perform these operations on big data.
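As a minimal sketch of the first half of this pattern (the bucket, container image, and IAM role below are placeholders, not values from the text), a SageMaker estimator is pointed at an S3 prefix and pulls its training data from there:

```python
import sagemaker
from sagemaker.estimator import Estimator

# Placeholder values -- substitute your own role, image, and bucket.
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"
session = sagemaker.Session()

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/model-artifacts/",  # where SageMaker writes the model
    sagemaker_session=session,
)

# SageMaker copies the S3 objects under this prefix into the training container.
estimator.fit({"train": "s3://my-bucket/preprocessed/train/"})
```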

Wrangling a big dataset inside a Jupyter notebook typically results in out-of-memory errors, because the entire dataset has to fit on a single machine. Our solution is to employ AWS EMR (Elastic MapReduce) clusters to perform distributed data processing: Hadoop (HDFS) serves as the underlying distributed filesystem, while Spark serves as the distributed computing framework.
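The following is a hedged sketch of such a preprocessing job as a PySpark script (the bucket names, column names, and transformations are illustrative assumptions, not from the text). It reads a large raw CSV dataset from S3, wrangles it across the cluster, and writes a SageMaker-ready Parquet dataset back to S3:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On an EMR cluster, a SparkSession picks up the cluster's Hadoop/YARN
# configuration automatically; no master URL needs to be hard-coded.
spark = SparkSession.builder.appName("s3-preprocessing").getOrCreate()

# Read the raw dataset directly from S3 (bucket and prefix are placeholders).
raw = spark.read.csv("s3://my-bucket/raw/events/", header=True, inferSchema=True)

# Example wrangling: drop incomplete rows and aggregate events per user per day.
cleaned = (
    raw.dropna(subset=["user_id", "event_time"])
       .withColumn("event_date", F.to_date("event_time"))
       .groupBy("user_id", "event_date")
       .agg(F.count("*").alias("event_count"))
)

# Write the preprocessed output back to S3 as Parquet for SageMaker to consume.
cleaned.write.mode("overwrite").parquet("s3://my-bucket/preprocessed/events/")
```

Because Spark distributes both the reads and the transformations across the cluster's executors, no single node ever needs to hold the full dataset in memory, which is exactly the failure mode that the single-machine notebook runs into.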

Now, to run commands against the EMR cluster to process big data, AWS ...
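One common way to submit such commands to a running EMR cluster is to add a Spark step that runs a PySpark script stored in S3. Below is a minimal sketch using boto3; the cluster ID, region, and script location are placeholder assumptions, not values from the text:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Submit a spark-submit step to an existing cluster (IDs and paths are placeholders).
response = emr.add_job_flow_steps(
    JobFlowId="j-XXXXXXXXXXXXX",
    Steps=[
        {
            "Name": "preprocess-events",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",  # EMR's generic command runner
                "Args": [
                    "spark-submit",
                    "--deploy-mode", "cluster",
                    "s3://my-bucket/scripts/preprocess.py",
                ],
            },
        }
    ],
)
print(response["StepIds"])  # track these IDs to monitor step completion
```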
