268 ◾ PRAGMATIC Security Metrics
security,* they are highly mature, tracing their histories back literally thousands of years rather than mere decades. As information security professionals with an interest in metrics, we have a lot to learn from our learned colleagues in other fields.
9.1 High-Reliability Metrics
Metrics, like other processes, tools, and controls, sometimes fail. Unreliable instruments or measurement processes are annoying at best, misleading us with inaccurate, imprecise, or sporadic readings, implying that something is under control when, in fact, it is not, or failing to alert us to conditions that require our attention. At worst, they can be a liability, occasionally creating grave risks and catastrophic failures. Consequently, every metric of any importance should be considered in terms of whether, when, and how it might fail, and ideally engineered to make failure either extremely unlikely or conspicuously obvious. This section concerns the application of fail-safe and related reliability engineering concepts to information security metrics.
Safety-critical systems are the classic example. Many machines must operate within certain ranges for safety reasons: operating parameters exceeding acceptable limits would constitute a safety hazard, jeopardizing life and limb. The associated measurements are not only used to operate/manage the machines but also to confirm that they remain within safe limits, and hence, just like the machines themselves, the measurements must be more than just ordinarily reliable. Ideally, safety-critical machines and the associated measures and processes should fail safe: for example, if a nuclear reactor core temperature exceeds a limit value (indicating a control failure), or if the temperature readings don't make sense or stop altogether for some reason (indicating a measurement failure), the control rods are dropped automatically into the core to dampen the reaction. Approaches like these have developed over many decades of industrial design, applied engineering, and trial and error, learning from accidents, incidents, and near misses, and the learning process continues every day. We are presently behind the curve in information security.
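The fail-safe logic described above can be sketched in code. This is a minimal illustrative monitor, not anything from the book: the class name, thresholds, and the trip action are all assumptions chosen for the example. The key design point is that a limit breach, an implausible reading, and measurement silence are each treated as grounds for tripping, so the metric can never quietly imply "all is well" while it is actually broken.

```python
from time import monotonic

# All names and thresholds below are hypothetical, for illustration only.
MAX_TEMP_C = 350.0               # assumed safe operating limit
SENSOR_TIMEOUT_S = 5.0           # longest tolerable gap between readings
PLAUSIBLE_RANGE = (0.0, 1000.0)  # readings outside this make no sense


class FailSafeMonitor:
    """Trips (fails safe) on a limit breach, an implausible reading,
    or a measurement outage -- it never silently carries on."""

    def __init__(self, now=monotonic):
        self._now = now                      # injectable clock for testing
        self._last_reading_at = now()
        self.tripped = False
        self.reason = None

    def reading(self, temp_c: float) -> None:
        """Record a new reading, tripping on nonsense or limit breach."""
        self._last_reading_at = self._now()
        lo, hi = PLAUSIBLE_RANGE
        if not (lo <= temp_c <= hi):
            self._trip("implausible reading")   # measurement failure
        elif temp_c > MAX_TEMP_C:
            self._trip("limit exceeded")        # control failure

    def check(self) -> None:
        """Call periodically: silence is treated as failure, not as OK."""
        if self._now() - self._last_reading_at > SENSOR_TIMEOUT_S:
            self._trip("sensor timeout")        # measurement failure

    def _trip(self, reason: str) -> None:
        self.tripped = True
        self.reason = reason
        # In a reactor this would drop the control rods; for a security
        # metric it might raise an alert and flag the metric as untrusted.
```

Note the deliberate asymmetry: a healthy reading merely resets the watchdog timer, while any of the three failure modes latches the monitor into its safe (tripped) state.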
Broadly similar principles can be applied to the design of business-critical
processes, systems, controls, and measurements. High-reliability metrics are—or
rather should be—an integral part of that mix.
A few information security controls arguably fall into the safety-critical cat-
egory where the consequences of security failures are extreme hazards that threaten
* The Caesar cypher, for instance, is about 2000 years old. Hail Caesar!
Speaking as someone who once ran out of fuel on an isolated stretch of road in the depths of
winter because of the car’s fuel gauge icing up and sticking at part-full when the tank was, in
fact, empty, I have a healthy respect for the reliability of measurements and instruments.
…and metrics of no importance are about as much use as ashtrays on motorbikes.