Learning on massive click logs with Spark

Normally, to take full advantage of Spark, data is stored in the Hadoop Distributed File System (HDFS), a distributed filesystem designed to store large volumes of data, and computation is spread across multiple nodes of a cluster. For demonstration purposes, we keep the data on a local machine and run Spark locally; the code itself is no different from what you would run on a distributed computing cluster.