Model estimation
Once the feature sets were finalized in the previous section, the next step is to estimate the parameters of the selected models. For this we adopted a flexible approach, combining SPSS on Spark, R notebooks in the Databricks environment, and MLlib directly on Spark. To keep the workflow organized, we consolidated our code into R notebooks and also built SPSS Modeler nodes.
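As a minimal illustration of what "estimating the parameters" of a selected model involves (MLlib's linear-regression routines estimate coefficients that minimize squared error over a distributed dataset; the actual Spark pipeline is not reproduced here), the following plain-Python sketch fits a one-feature ordinary least squares model. All names and data are illustrative, not part of the project's code:

```python
# Hypothetical illustration: closed-form ordinary least squares for one feature.
# MLlib's regression models estimate coefficients like these at scale on an RDD;
# this plain-Python version only shows the underlying computation.

def fit_ols(xs, ys):
    """Estimate slope and intercept minimizing the sum of squared errors."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy data generated from y = 2x + 1 (exact, no noise)
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]
slope, intercept = fit_ols(xs, ys)
print(slope, intercept)  # → 2.0 1.0
```

On Spark, the same estimation is distributed across partitions rather than done in a single loop, which is what makes the Databricks/MLlib environment suitable for larger feature sets.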
As mentioned earlier, this project also requires some exploratory analysis, both for descriptive statistics and for visualization. The MLlib code can be applied directly for this purpose, and R code likewise gave us quick, good results.
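To make the descriptive statistics concrete: MLlib's `Statistics.colStats` returns per-column summaries (mean, variance, min, max, and more) for a dataset of vectors. The sketch below computes the same kind of column summary in plain Python rather than on Spark; the function name and data are illustrative only:

```python
# Illustrative plain-Python analogue of MLlib's Statistics.colStats, which
# summarizes each column of an RDD of vectors. Names and data here are
# examples, not part of the project's code.

def col_stats(rows):
    """Per-column count, mean, sample variance, min, and max."""
    cols = list(zip(*rows))  # transpose rows into columns
    summary = []
    for col in cols:
        n = len(col)
        mean = sum(col) / n
        # sample variance (divide by n - 1), as MLlib's summarizer reports
        var = sum((v - mean) ** 2 for v in col) / (n - 1)
        summary.append({"count": n, "mean": mean, "variance": var,
                        "min": min(col), "max": max(col)})
    return summary

rows = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]]
for i, s in enumerate(col_stats(rows)):
    print("column", i, s)
```

In the Databricks environment, the corresponding summaries can also be pulled into an R notebook for quick visualization.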
To get the best modelling results, we need to set up distributed computing, ...