Write-only accumulator variables

The other kind of variable that can be shared across a Spark cluster is the accumulator. Accumulators are write-only variables that worker nodes can only add to, and they are typically used to implement sums or counters. Only the driver node, which is the one running the IPython Notebook, can read an accumulator's value; none of the other nodes can. Let's see how this works with an example: we want to process a text file and count how many of its lines are empty while we process it. Of course, we could do this by scanning the dataset twice (using two Spark jobs), with the first one counting the empty lines and the second one doing the real processing, but this solution is not very efficient. Following this, you will take all the steps ...
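As a minimal sketch of the single-pass approach, the snippet below counts empty lines with an accumulator while the same job does the "real" processing (here, simply lowercasing each line as a stand-in). The input path and the processing step are placeholders, not part of the original example; the accumulator API calls (`sc.accumulator`, `add`, `.value`) are standard PySpark.

```python
from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# The accumulator starts at 0; worker tasks can only add to it.
empty_lines = sc.accumulator(0)

def process(line):
    # Workers write (add) to the accumulator but cannot read it
    if len(line.strip()) == 0:
        empty_lines.add(1)
    # Placeholder for the real per-line processing
    return line.lower()

# Hypothetical input path; replace with your own file
processed = sc.textFile("file:///tmp/input.txt").map(process)
processed.count()  # an action forces the job to actually run

# Only the driver can read the accumulator's value
print("Empty lines:", empty_lines.value)
```

Note that because the accumulator is updated inside a transformation (`map`), Spark may apply an update more than once if a task is re-executed; for exact counts, updates are only guaranteed inside actions such as `foreach`.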
