PRAGMATIC Security Metrics by W. Krag Brotby, Gary Hinson

Appendix K: Observer Calibration
Appendix J raises the thorny issue of how to deal with observer biases in such a way that they don't derail efforts to arrive at reasonably objective, reliable, and repeatable answers. Certainly, being aware of them can help. Most of us can identify some of the tendencies and susceptibilities within us. But often we need to do more.
One possible solution comes from Hubbard (2010) in the form of training or "calibrating" observers to gauge probabilities more objectively, counteracting their tendency to be either underconfident or overconfident. Hubbard suggests trainees should practice on a series of trivial questions, providing feedback to each other to fine-tune their ability to assess probabilities. This is obviously relevant when considering the probability of information security incidents or interpreting the result of some metric, and it deals directly with the issue of uncertainty.
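The feedback loop in Hubbard's exercise can be sketched in a few lines of Python: group a trainee's answers by their stated confidence and compare each group's actual hit rate against that confidence. The confidence levels and trivia results below are invented purely for illustration.

```python
from collections import defaultdict

def calibration_report(answers):
    """Compare stated confidence with actual accuracy, per confidence level.

    `answers` is a list of (stated_confidence, was_correct) tuples, e.g.
    the results of one round of practice questions.
    """
    buckets = defaultdict(lambda: [0, 0])  # confidence -> [correct, total]
    for confidence, correct in answers:
        buckets[confidence][1] += 1
        if correct:
            buckets[confidence][0] += 1
    # A well-calibrated observer's hit rate matches the stated confidence.
    return {conf: correct / total
            for conf, (correct, total) in sorted(buckets.items())}

# A trainee who says "90% sure" but is right only 6 times in 10
# is overconfident; 5 in 10 at "50% sure" is well calibrated.
practice = ([(0.9, True)] * 6 + [(0.9, False)] * 4 +
            [(0.5, True)] * 5 + [(0.5, False)] * 5)
print(calibration_report(practice))  # {0.5: 0.5, 0.9: 0.6}
```

Repeating the exercise and reviewing the gap between stated confidence and the reported hit rate is the feedback that gradually calibrates the observer.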
Research by Hubbard and others has shown that experts tend to be overconfident in their ability to determine probabilities. Because they may be either providing the crucial metrics on which managers base vital decisions or, at least, strongly influencing those decisions, the experts are gambling with their own credibility.
Calibration is also worthwhile in situations where teams of observers, assessors,
or auditors are independently measuring relatively subjective factors in different
parts of a large organization or in separate organizations. Assuming the entire team
is supposed to be applying the same criteria (e.g., all using the same maturity metric
scales described in Appendix H), calibration can be achieved as follows:
1. First, the team assembles for training on the assessment method with plenty
of time to discuss and agree on the objectives, the process, and the scoring
criteria.
2. Next, junior team members are paired up with their more experienced col-
leagues to undertake one or more initial assessments together, discussing and,
if appropriate, adjusting the scores and learning as they go.
3. The bulk of the assessments can be performed by the assessors working alone, utilizing their training and the documented criteria to the best of their abilities and keeping notes on any issues or doubts.
4. Finally, the team reassembles to discuss, consider, and, where necessary, nor-
malize the scores prior to preparing the metricated report.
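The normalization in step 4 is ultimately a matter of team discussion, but a simple mean-offset adjustment gives a feel for what it corrects: a systematically lenient or severe assessor has their scores shifted toward the team-wide average. The assessor names and scores below are invented, and real normalization would weigh the assessors' notes, not just the numbers.

```python
def normalize_scores(scores_by_assessor):
    """Shift each assessor's scores so their mean matches the team mean,
    damping individual leniency or severity before the final report."""
    all_scores = [s for scores in scores_by_assessor.values() for s in scores]
    team_mean = sum(all_scores) / len(all_scores)
    adjusted = {}
    for assessor, scores in scores_by_assessor.items():
        offset = team_mean - sum(scores) / len(scores)  # + for severe, - for lenient
        adjusted[assessor] = [round(s + offset, 2) for s in scores]
    return adjusted

# Alice averages 80, Bob 60, against a team mean of 70: after the shift
# both rank the same underlying situations identically.
print(normalize_scores({"alice": [70, 80, 90], "bob": [50, 60, 70]}))
# {'alice': [60.0, 70.0, 80.0], 'bob': [60.0, 70.0, 80.0]}
```

Note that this only corrects each assessor's average level; it assumes the assessors were measuring comparable units, which is exactly what the joint training and paired assessments in steps 1 and 2 are meant to ensure.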