Principle I: Risk Duality

The first principle of modern quantitative risk management is to split the analysis into two parts. For normal, common events you have plenty of data to draw reliable quantitative conclusions from historical statistics. But long-term success requires at least surviving the abnormal, unexpected, uncommon events you know will also occur, and the greatest long-term success requires exploiting them, not just surviving them. So you need to take them into account as well. You must manage them even though you don't know what might happen, you don't know how your actions will influence what happens, and you certainly can't assign probabilities.

This may seem like an obvious insight, and perhaps it is. But many people disagree. A common approach among quantitative professionals is to deal only with the first kind of analysis, on the assumption that events you know nothing about shouldn't affect your decisions. At the other extreme, Nassim Taleb in Fooled by Randomness and The Black Swan has argued that only the second kind of analysis matters; the first is fraudulent, because long-term outcomes are dominated by unexpected, high-impact events. A more popular but less intellectual version of Nassim's argument is to make choices according to whim or tradition or gut instinct, because careful analysis and planning seem to fail so often.

This idea happens to be the topic of my dissertation written in 1982 at the University of Chicago Graduate School of Business (now the ...
