Let's build a descriptive statistics server. These pop up often in one way or another inside organizations: something is needed that consumes events, computes some kind of descriptive statistics over them, and then multiplexes the descriptions out to other systems. The very reason my work project, postmates/cernan (https://crates.io/crates/cernan), exists is to serve this need at scale on resource-constrained devices without tying operations staff into any particular pipeline. What we'll build here is a kind of mini-cernan, something whose flow is as follows:
                              _--> high_filter --> cma_egress
                             /
telemetry -> ingest_point ---
   (udp)                     \_--> low_filter --> ckms_egress
The idea is to take telemetry from a simple ...