In this chapter, we review the main risk-measurement tool used in banking, known as value-at-risk (VaR). The review looks at the three main methodologies used to calculate VaR, as well as the key assumptions behind the calculations, including the assumption of normally distributed returns and the treatment of volatility levels and correlations. The methodology is worth understanding, irrespective of its flaws, because national regulators have adopted it as the tool with which bank regulatory capital is calculated. We also discuss the application of the VaR methodology to credit risk.
The introduction of value-at-risk as an accepted methodology for quantifying market risk, and its adoption by bank regulators, was part of the evolution of risk management. Following the launch of RiskMetrics, which JPMorgan made freely available over the Internet in October 1994, the application of VaR spread from its initial use in securities houses to commercial banks and corporates.
VaR is a measure of the quantile loss that a firm may suffer over a user-specified period of time, under normal market conditions and at a specified level of confidence. The measure may be obtained in a number of ways: from a statistical model or by computer simulation. We define VaR as follows:
VaR is a measure of market risk. It is the maximum loss that will not be exceeded with X percent confidence over a holding period of n days. Put another ...
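The definition above can be illustrated with a short sketch of the two simplest approaches mentioned in the text: a statistical (variance-covariance) model, which assumes normally distributed returns, and a historical simulation, which reads the loss quantile directly from past data. This is a minimal illustration, not a regulatory-grade implementation; the function names and parameters are chosen here for clarity and are not from the source.

```python
import numpy as np
from statistics import NormalDist


def parametric_var(returns, confidence=0.99, horizon_days=1, portfolio_value=1.0):
    """Variance-covariance VaR under the normal-returns assumption.

    Illustrative sketch only: real implementations must also handle fat
    tails, volatility clustering, and correlations across positions.
    """
    sigma = np.std(returns, ddof=1)          # daily return volatility
    z = NormalDist().inv_cdf(confidence)     # e.g. about 2.33 at 99%
    # Scale daily volatility to an n-day horizon by the square root of n
    return portfolio_value * z * sigma * np.sqrt(horizon_days)


def historical_var(returns, confidence=0.99, portfolio_value=1.0):
    """Historical-simulation VaR: the empirical loss quantile.

    Makes no distributional assumption; simply takes the (1 - confidence)
    quantile of observed returns and reports it as a positive loss.
    """
    loss_quantile = -np.quantile(returns, 1.0 - confidence)
    return portfolio_value * loss_quantile
```

For returns that really are normally distributed, the two estimates should roughly agree; in practice they diverge when the empirical distribution has fatter tails than the normal assumption allows, which is one of the flaws the chapter alludes to.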