Any data scientist worth her salt knows that one of the biggest challenges (and time sinks) in advanced analytics is preprocessing. It’s not that it’s particularly complicated programming, but rather that it requires deep knowledge of the data you are working with and an understanding of what your model needs in order to successfully leverage this data. This chapter covers the details of how you can use Spark to perform preprocessing and feature engineering. We’ll walk through the core requirements for how your data must be structured in order to train an MLlib model. We will then discuss the different tools Spark makes available for performing this kind of work.
To preprocess data for Spark’s different advanced analytics tools, you must consider your end objective. The following list walks through the requirements for input data structure for each advanced analytics task in MLlib:
- In the case of most classification and regression algorithms, you want to get your data into a column of type Double to represent the label and a column of type Vector (either dense or sparse) to represent the features.
- In the case of recommendation, you want to get your data into a column of users, a column of items (say movies or books), and a column of ratings.
- In the case of unsupervised learning, a column of type Vector (either dense or sparse) is needed to represent the features.
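To make these shapes concrete, here is a minimal sketch in plain Python (no Spark dependency) of what a single row looks like for each task. In real MLlib code, the feature column would hold `pyspark.ml.linalg.Vector` objects inside a DataFrame; the specific column names (`userId`, `movieId`, `rating`) and values below are illustrative assumptions, not requirements imposed by MLlib.

```python
# Classification/regression: a Double label plus a feature vector per row.
# A dense vector stores every value; a sparse one stores only the nonzeros
# as (size, indices, values) -- mirroring pyspark.ml.linalg.SparseVector.
dense_features = [0.0, 1.1, 0.5]                # dense: all values, in order
sparse_features = (100, [3, 17], [1.0, 2.5])    # sparse: size, indices, values
supervised_row = {"label": 1.0, "features": dense_features}

# Recommendation: a user column, an item column, and a rating column.
# Column names here are hypothetical; any names work as long as you point
# the algorithm (e.g., ALS) at the right columns.
rating_row = {"userId": 12, "movieId": 31, "rating": 4.0}

# Unsupervised learning: just a feature vector, no label column.
unsupervised_row = {"features": sparse_features}

print(supervised_row)
print(rating_row)
print(unsupervised_row)
```

The key takeaway is structural: regardless of task, preprocessing in Spark is largely about massaging whatever raw columns you start with into one of these few canonical column layouts.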