Philippe Jorion
The recent derivatives disasters have focused the attention of the finance industry on the need to control financial risks better. This search has led to a uniform measure of risk called value at risk (VAR), which is the expected worst loss over a given horizon at a given confidence level. VAR numbers, however, are themselves affected by sampling variation, or “estimation risk”—thus, the risk in value at risk itself. Nevertheless, despite these limitations, VAR is an indispensable tool for controlling financial risks. This article lays out the statistical methodology for analyzing estimation error in VAR and shows how to improve the accuracy of VAR estimates.
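To make the two ideas in the abstract concrete, the sketch below (using simulated data, not any dataset from the article) computes a historical-simulation VAR as an empirical quantile of returns and then attaches a rough asymptotic standard error to that quantile, illustrating the “estimation risk” in the VAR number itself. The sample size, volatility, and confidence level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: 250 simulated daily returns with 1% volatility
# (roughly one trading year; purely illustrative).
n = 250
sigma = 0.01
returns = rng.normal(loc=0.0, scale=sigma, size=n)

confidence = 0.95          # 95% confidence level
p = 1 - confidence         # left-tail probability (5%)

# Historical-simulation VAR: the (negated) empirical p-quantile of returns,
# i.e., the expected worst loss not exceeded with 95% confidence.
q_hat = np.quantile(returns, p)
var_estimate = -q_hat

# Estimation risk: the asymptotic standard error of a sample quantile is
# sqrt(p*(1-p)/n) / f(q), where f is the density of returns at the quantile.
# Here we evaluate a normal density at q_hat, consistent with the simulation.
density_at_q = np.exp(-q_hat**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
var_std_error = np.sqrt(p * (1 - p) / n) / density_at_q

print(f"VAR estimate:  {var_estimate:.4f}")
print(f"Std. error:    {var_std_error:.4f}")
```

The standard error shrinks with the sample size only as 1/sqrt(n), and it grows as the confidence level moves further into the tail, where fewer observations pin down the quantile: this is the sampling variation in VAR that the article's methodology analyzes.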
The need to improve control of financial risks has led to a uniform measure of risk called value at risk (VAR), which the private sector is increasingly adopting as a first line of defense against financial risks. Regulators and central banks have also provided impetus behind VAR. The Basle Committee on Banking Supervision announced in April 1995 that capital adequacy requirements for commercial banks are to be based on VAR.1 In December 1995, the Securities and Exchange Commission issued a proposal that would require publicly traded U.S. corporations to disclose information about derivatives activity, with a VAR measure as one of three possible methods for making such disclosures. Thus, the unmistakable trend is toward more-transparent financial risk reporting based ...
