Now, let's redesign the preceding test case so that it returns only an RDD computed from the text in the document, as follows:
package com.chapter16.SparkTesting

import org.apache.spark._
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession

class wordCountRDD {
  def prepareWordCountRDD(file: String, spark: SparkSession): RDD[(String, Int)] = {
    // Read the text file, split each line into words, and count the occurrences of each word
    val lines = spark.sparkContext.textFile(file)
    lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
  }
}
So, the prepareWordCountRDD() method in the preceding class returns an RDD of (String, Int) pairs. Now, if we want to test the functionality of the prepareWordCountRDD() method, we can do it more explicitly by extending the test class with FunSuite and ...
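For example, a minimal sketch of such a FunSuite-based test might look like the following (the test class name, input file path, and assertion are illustrative assumptions rather than the final code):

package com.chapter16.SparkTesting

import org.apache.spark.sql.SparkSession
import org.scalatest.FunSuite

class wordCountTest extends FunSuite {
  // Hypothetical input file; point this at a small text file in your project
  val inputFile = "data/words.txt"

  test("prepareWordCountRDD should produce a non-empty set of word counts") {
    val spark = SparkSession
      .builder()
      .appName("wordCountTest")
      .master("local[*]")
      .getOrCreate()

    // Invoke the method under test and collect the results to the driver
    val wordCounts = new wordCountRDD().prepareWordCountRDD(inputFile, spark).collectAsMap()

    // Placeholder assertions; replace them with the counts expected for your test file
    assert(wordCounts.nonEmpty)
    assert(wordCounts.values.forall(_ >= 1))

    spark.stop()
  }
}

Running this with sbt test (or from your IDE's test runner) would execute the word count logic against a local SparkSession and verify the returned RDD's contents, which is the kind of explicit, assertion-based testing that FunSuite enables.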