Chapter 25. Monitoring Spark Streaming
Monitoring is required to gain operational confidence in deployed streaming applications and should include a holistic view of the resources the application uses, such as CPU, memory, and secondary storage. Because a streaming application runs distributed, the number of factors to monitor is multiplied by the number of nodes that take part in a clustered deployment.
To manage this complexity, we need a comprehensive and smart monitoring system: one that collects metrics from all the key moving parts of the streaming application's runtime and, at the same time, presents them in an understandable and consumable form.
In the case of Spark Streaming, in addition to the general indicators just discussed, we are mainly concerned with the relationship among the amount of data received, the batch interval chosen for the application, and the actual execution time of every microbatch. This relationship is key to a stable Spark Streaming job in the long run: each microbatch must, on average, complete within its batch interval, or unprocessed work will queue up indefinitely. To ensure that our job performs within stable boundaries, we need to make performance monitoring an integral part of the development and production process.
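To make that stability criterion observable from inside the application, here is a minimal sketch of a custom `StreamingListener` that flags every microbatch whose processing time exceeds the batch interval. The class name and the 10-second interval used at the registration site are illustrative assumptions; the listener API itself (`org.apache.spark.streaming.scheduler.StreamingListener`) is part of Spark Streaming:

```scala
import org.apache.spark.streaming.scheduler.{
  StreamingListener, StreamingListenerBatchCompleted
}

// Hypothetical listener: warns whenever a completed batch took longer
// to process than the configured batch interval.
class BatchStabilityListener(batchIntervalMs: Long) extends StreamingListener {
  override def onBatchCompleted(
      batchCompleted: StreamingListenerBatchCompleted): Unit = {
    val info = batchCompleted.batchInfo
    val processingMs = info.processingDelay.getOrElse(0L)
    if (processingMs > batchIntervalMs) {
      println(
        s"WARN: batch with ${info.numRecords} records took $processingMs ms, " +
          s"exceeding the $batchIntervalMs ms batch interval")
    }
  }
}

// Registration on an existing StreamingContext (ssc), assuming a
// 10-second batch interval:
// ssc.addStreamingListener(new BatchStabilityListener(10000L))
```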
Spark offers several monitoring interfaces that cater to the different stages of that process:
- The Streaming UI: A web interface that provides charts of key indicators about the running job.
- The Monitoring REST API: A set of APIs that can be consumed by an external monitoring system to obtain the metrics of the running job, as sketched after this list.
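As a quick illustration of the second interface, the sketch below polls the streaming statistics endpoint exposed under the application UI (by default on port 4040). The host, port, and application id are placeholder assumptions; the `/api/v1/applications/[app-id]/streaming/statistics` path is the one documented for recent Spark versions:

```scala
import scala.io.Source

object StreamingStatsPoller extends App {
  // Placeholder application id; in practice, obtain it from
  // /api/v1/applications or from SparkContext.applicationId.
  val appId = "app-20190101123456-0000"
  val url =
    s"http://localhost:4040/api/v1/applications/$appId/streaming/statistics"

  // The endpoint returns a JSON document with aggregate indicators such as
  // the batch duration, average input rate, and average processing time,
  // ready to be ingested by an external monitoring system.
  val json = Source.fromURL(url).mkString
  println(json)
}
```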