January 2018
Intermediate to advanced
470 pages
11h 9m
English
Let's use Spark's textFile method to read a text file from your preferred storage, such as HDFS or the local filesystem. Note that textFile returns raw lines of text, so it's up to us to specify how to split each line into fields. After reading the input datasets, we apply groupBy first, and after the join we transform the result with a flatMap operation to get the required fields:
val TRAIN_FILENAME = "data/ua.base"
val TEST_FILENAME = "data/ua.test"
val MOVIES_FILENAME = "data/u.item"
// build a map of movie names keyed on movie id
val movies = spark.sparkContext.textFile(MOVIES_FILENAME)
  .map(line => {
    // u.item fields are pipe-separated; split takes a regex, so the pipe must be escaped
    val fields = line.split("\\|")
    (fields(0).toInt, fields(1))
  })
val movieNames = movies.collectAsMap()
// extract (userid, movieid, rating) from ratings data
val ratings = spark.sparkContext.textFile(TRAIN_FILENAME) ...
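The ratings extraction above is truncated, but the per-line parsing it relies on can be sketched independently of Spark. The following is a minimal sketch, assuming the MovieLens 100k format in which `ua.base` lines are tab-separated as `userId`, `movieId`, `rating`, `timestamp`; the `RatingParser` object and its `parseRating` helper are hypothetical names introduced here for illustration.

```scala
// Hypothetical helper: parse one tab-separated MovieLens rating line
// into a (userId, movieId, rating) triple.
object RatingParser {
  def parseRating(line: String): (Int, Int, Double) = {
    val fields = line.split("\t")
    (fields(0).toInt, fields(1).toInt, fields(2).toDouble)
  }

  def main(args: Array[String]): Unit = {
    // A sample line in the assumed ua.base layout
    val sample = "196\t242\t3\t881250949"
    println(parseRating(sample))
  }
}
```

With a function like this in hand, the truncated line would plausibly continue as a `map` over the text file, e.g. `spark.sparkContext.textFile(TRAIN_FILENAME).map(RatingParser.parseRating)`, yielding an RDD of (userid, movieid, rating) triples ready for the groupBy and join described above.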