Note that for the purposes of this case study, we will not be using Jupyter notebooks for development, but rather standard Python code files with the .py file extension. This case study provides a first glimpse into how a production-grade pipeline should be developed and executed; rather than instantiating a SparkContext explicitly within our code, we will instead submit our code, along with all of its dependencies (including any third-party Spark packages, such as sparkdl), to spark-submit via the Linux command line.
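As a rough illustration, such a spark-submit invocation might look like the following minimal sketch. The package coordinate and script name shown here (a spark-deep-learning release built for Spark 2.4 and Scala 2.11, and a hypothetical driver file named image_classification.py) are assumptions for the example only; substitute the versions and file names that match your own environment:

    # Illustrative only: adjust the master URL, package version, and script name to your setup
    spark-submit --master local[*] \
        --packages databricks:spark-deep-learning:1.5.0-spark2.4-s_2.11 \
        image_classification.py

The --packages option asks spark-submit to resolve the named Spark package and its transitive dependencies and distribute them with the job, while any additional Python modules of our own could be shipped in the same way using the --py-files option.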
Let's now take a look at how we can use the Inception-v3 deep CNN via PySpark to classify test images. In our Python-based image-recognition application, we perform the following steps (numbered to correspond ...