How to do it...

  1. Start a new project in IntelliJ or in an IDE of your choice. Make sure that the necessary JAR files are included.
  2. Set up the package location where the program will reside:
package spark.ml.cookbook.chapter13
  3. Import the necessary packages so that the Spark session can access the cluster and log4j.Logger can reduce the amount of output produced by Spark:
import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.SparkSession
import java.io.{BufferedOutputStream, PrintWriter}
import java.net.Socket
import java.net.ServerSocket
import java.util.concurrent.TimeUnit
import scala.util.Random
import org.apache.spark.sql.streaming.ProcessingTime
  4. Define a Scala class to generate voting data onto a client socket; a sketch of one possible implementation follows the truncated snippet below:
class CountSreamThread(socket: ...
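
The class body is truncated in this excerpt. Below is a minimal sketch of what such a generator could look like, assuming a Thread that writes one randomly chosen candidate name per line to the connected socket; the candidate names, pacing interval, and overall structure are illustrative assumptions, not the book's actual implementation:

import java.io.{BufferedOutputStream, PrintWriter}
import java.net.Socket
import scala.util.Random

// Hedged sketch: emits random "votes" (candidate names) to the client socket,
// one per line, until the socket is closed.
class CountSreamThread(socket: Socket) extends Thread {

  // Assumed candidate list, for illustration only.
  private val candidates = Seq("Candidate-A", "Candidate-B", "Candidate-C")

  override def run(): Unit = {
    val writer = new PrintWriter(new BufferedOutputStream(socket.getOutputStream))
    try {
      while (!socket.isClosed) {
        // Write one randomly selected vote per line and flush it to the client.
        writer.println(candidates(Random.nextInt(candidates.length)))
        writer.flush()
        Thread.sleep(100) // assumed pacing to simulate a live vote stream
      }
    } finally {
      writer.close()
      socket.close()
    }
  }
}

A ServerSocket accepting connections would hand each accepted Socket to an instance of this thread, and the Spark structured streaming query would then read the vote lines from that socket source.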
