Our first script reads in a text file and adds up the lengths of all its lines:
import pyspark

if 'sc' not in globals():
    sc = pyspark.SparkContext()

lines = sc.textFile("Spark File Words.ipynb")
lineLengths = lines.map(lambda s: len(s))
totalLength = lineLengths.reduce(lambda a, b: a + b)
print(totalLength)
In the script, we first initialize Spark, but only if it has not been initialized already. Spark will complain if you try to initialize it more than once, so every Spark script should start with this if guard.
The script reads in a text file (the source of this script), computes the length of every line, and then adds all the lengths together.
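The same map-then-reduce pattern can be sketched in plain Python, without Spark, using the built-in `map` and `functools.reduce`; the sample lines here are made up for illustration:

```python
from functools import reduce

# Stand-in for the lines of a text file.
lines = ["first line", "second, longer line", "third"]

# Like RDD.map: turn each line into its length.
line_lengths = map(lambda s: len(s), lines)

# Like RDD.reduce: combine all lengths pairwise into one total.
total = reduce(lambda a, b: a + b, line_lengths)

print(total)  # 10 + 19 + 5 = 34
```

The difference in Spark is that `map` and `reduce` run distributed across the cluster, and nothing is computed until `reduce` (an action) forces evaluation.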
A lambda function is an anonymous (unnamed) function that takes arguments ...
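As a quick illustration, a lambda is an expression that evaluates to a function object, equivalent to a short named function:

```python
# Anonymous function bound to a name.
add = lambda a, b: a + b

# Equivalent named function.
def add_named(a, b):
    return a + b

print(add(2, 3))        # 5
print(add_named(2, 3))  # 5
```

In the Spark script, `lambda s: len(s)` and `lambda a, b: a + b` play exactly this role: small throwaway functions passed directly to `map` and `reduce`.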