Chapter 8. Real-Time Intelligence
Once upon a time, handling streaming data was considered an avant-garde approach in the data processing world. From the introduction of relational database management systems in the 1970s and traditional data warehouse systems in the late 1980s, data workloads began and ended with so-called batch processing. Batch processing relies on the concept of collecting numerous tasks in a group (or batch) and processing these tasks in a single operation.
On the flip side, there is the concept of streaming data. Although streaming data is still sometimes considered a cutting-edge technology, it already has a solid history. The roots of streaming data processing go as far back as 2002, when Stanford University researchers published a paper called “Models and Issues in Data Stream Systems”. However, it wasn’t until almost a decade later, in 2011, that streaming data systems started to reach a wider audience with the release of the open source Apache Kafka platform for storing and processing streaming data. The rest, as they say, is history. Nowadays, processing streaming data is considered not a luxury but a necessity.
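The difference between the two paradigms can be sketched in a few lines of Python. This is purely illustrative, not how production systems like Apache Kafka work internally: the batch function waits until the whole group of records is available, while the streaming function handles each record the moment it arrives.

```python
def process_batch(records):
    """Batch: collect all records into a group first, then process
    them in a single operation."""
    return [r * 2 for r in records]

def process_stream(record_source):
    """Streaming: process each record as soon as it arrives,
    without waiting for the rest."""
    for record in record_source:
        yield record * 2

# Batch: the full list must exist before any processing happens.
batch_result = process_batch([1, 2, 3])

# Streaming: records are consumed one at a time from an iterator,
# which could just as well be an unbounded event source.
stream_result = list(process_stream(iter([1, 2, 3])))
```

Both calls produce the same values; the difference is *when* each record is processed, which is exactly what makes streaming attractive for data that must be acted on as it arrives.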
Microsoft recognized the growing need to process data as soon as it arrives. Microsoft Fabric doesn’t disappoint in that regard: Real-Time Intelligence is at the core of the entire Fabric platform and offers a whole range of capabilities to handle streaming data efficiently.
Before we dive deep into explaining each component of Real-Time Intelligence, ...