
Apache Spark for Data Science Cookbook by Padma Priya Chitturi


Simulating real-time data

In this recipe, we'll see how to simulate real-time data by replaying data from files into Kafka with a producer.

Getting ready

To step through this recipe, you will need Kafka and ZooKeeper running on the cluster, with Scala and Java installed.

How to do it…

  1. Since the data is available in files, let's simulate the data in real time using a producer which writes the data into Kafka. Here is the code:

     import java.util.{Date, Properties}
     import kafka.javaapi.producer.Producer
     import kafka.producer.KeyedMessage
     import kafka.producer.ProducerConfig
     import org.apache.spark.mllib.linalg.Vectors
     import scala.io.{BufferedSource, Source}
     import scala.util.Random

     object KafkaProducer {
       def main(args: Array[String]): Unit = {
         val random: Random = new Random
         val props = new Properties
         props.put("metadata.broker.list", "172.22.128.16:9092")
         ...
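The snippet above is truncated, so as a rough guide here is a minimal, self-contained sketch of what such a file-replaying producer can look like with the same old `kafka.javaapi.producer` API. The topic name `sensor-events`, the serializer property, the random delay bound, and the use of `args(0)` as the input file path are all assumptions for illustration, not the book's exact code; it also requires a broker reachable at the address from the recipe:

     import java.util.Properties
     import kafka.javaapi.producer.Producer
     import kafka.producer.{KeyedMessage, ProducerConfig}
     import scala.io.Source
     import scala.util.Random

     object KafkaFileProducer {
       def main(args: Array[String]): Unit = {
         val props = new Properties
         props.put("metadata.broker.list", "172.22.128.16:9092") // broker from the recipe
         props.put("serializer.class", "kafka.serializer.StringEncoder") // assumed: plain string payloads
         val producer = new Producer[String, String](new ProducerConfig(props))
         val random = new Random
         // Replay each line of the input file, pausing briefly between sends
         // so the downstream consumer sees the data arrive like a live stream.
         for (line <- Source.fromFile(args(0)).getLines()) {
           producer.send(new KeyedMessage[String, String]("sensor-events", line))
           Thread.sleep(random.nextInt(500)) // up to ~0.5 s between events (assumed rate)
         }
         producer.close()
       }
     }

Sleeping a random interval between sends is what turns a static file into a plausible real-time feed; tune the bound to match the event rate you want to simulate.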
