Appendix K: Observer Calibration
Appendix J raises the thorny issue of how to deal with observer biases in such a way that they don't derail efforts to arrive at reasonably objective, reliable, and repeatable answers. Certainly, being aware of them can help. Most of us can identify some of the tendencies and susceptibilities within us. But often we need to do more.
One possible solution comes from Hubbard (2010) in the form of training or "calibrating" observers to gauge probabilities more objectively, counteracting their tendency to be either underconfident or overconfident. Hubbard suggests trainees should practice on a series of trivial questions, providing feedback to each other to fine-tune their ability to assess probabilities. This is obviously relevant when considering the probability of information security incidents or interpreting the result of some metric, and it addresses the issue of uncertainty.
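As a rough illustration of the feedback loop described above, the sketch below tallies a trainee's practice answers and compares stated confidence with the actual hit rate at each confidence level. The function name, the data layout, and the sample figures are all hypothetical, not taken from Hubbard; they simply show how overconfidence becomes visible once answers are scored.

```python
# Hypothetical sketch of a calibration check on practice questions.
# Each answer is recorded as (stated_confidence, was_correct); comparing
# stated confidence with the observed hit rate at that level reveals
# over- or underconfidence.
from collections import defaultdict

def calibration_report(answers):
    """answers: list of (stated_confidence, was_correct) pairs,
    e.g. (0.9, True) means 'I am 90% sure' and the answer was right.
    Returns {stated_confidence: observed hit rate}."""
    buckets = defaultdict(list)
    for confidence, correct in answers:
        buckets[confidence].append(correct)
    return {confidence: sum(results) / len(results)
            for confidence, results in sorted(buckets.items())}

# Illustrative data only: a trainee who claims 90% confidence but is
# right on just 2 of 5 such questions is overconfident at that level.
sample = [(0.9, True), (0.9, False), (0.9, True), (0.9, False), (0.9, False),
          (0.7, True), (0.7, True), (0.7, False)]
print(calibration_report(sample))
```

A well-calibrated observer's hit rates track their stated confidence; the feedback in Hubbard's exercises is aimed at closing any gap the report exposes.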
Research by Hubbard and others has shown that experts tend to be overconfident in their ability to determine probabilities. Because they may be either providing the crucial metrics on which managers base vital decisions or, at least, strongly influencing those decisions, the experts are gambling with their own credibility.
Calibration is also worthwhile in situations where teams of observers, assessors, or auditors are independently measuring relatively subjective factors in different parts of a large organization or in separate organizations. Assuming the entire team is supposed to be applying the same criteria (e.g., all using the same maturity metric scales described in Appendix H), calibration can be achieved as follows:
1. First, the team assembles for training on the assessment method, with plenty of time to discuss and agree on the objectives, the process, and the scoring.
2. Next, junior team members are paired up with their more experienced colleagues to undertake one or more initial assessments together, discussing and, if appropriate, adjusting the scores and learning as they go.