A financial manager is often concerned with answering a question of the following form: “What is the maximum amount that I can expect to lose with a certain probability over a given horizon?” At the same time, regulators’ concern is to ensure that banks hold sufficient reserves to cover most of the material losses arising from financial risks. The concerns of both parties can be reconciled by estimating value-at-risk (VaR). In the context of operational risk, VaR is, informally speaking, the total one-year amount of capital that would be sufficient to cover all unexpected losses with a high level of confidence.
VaR is a powerful statistical tool that has gained popularity within the financial community and has become a benchmark for measuring and forecasting market, credit, operational, and other risks. This chapter discusses the notion of VaR and its alternatives and its role in quantifying and managing operational risk.
INTUITIVELY, WHAT IS VaR?
Intuitively, VaR is the worst loss that may occur with a given confidence level over a given time horizon. The (1 - α) × 100% VaR is defined as the (1 - α)th quantile of the loss distribution over a target time horizon Δt; 1 - α is called the confidence level. For example, a one-year 95% VaR is the loss amount that the total of all losses occurring over a one-year period will exceed no more than 5% of the time.
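As a rough sketch, the quantile definition above can be illustrated with simulated loss data. The lognormal distribution and its parameters here are arbitrary choices for demonstration, not a recommendation for modeling any particular risk:

```python
import numpy as np

# Hypothetical illustration: estimate a one-year 95% VaR as the 95th
# percentile of simulated annual operational losses.
rng = np.random.default_rng(seed=42)
annual_losses = rng.lognormal(mean=10.0, sigma=1.5, size=100_000)

confidence = 0.95  # 1 - alpha
var_95 = np.percentile(annual_losses, confidence * 100)
print(f"Estimated one-year 95% VaR: {var_95:,.0f}")

# By construction, simulated annual losses exceed this amount
# about 5% of the time.
exceed_frac = (annual_losses > var_95).mean()
print(f"Fraction of years exceeding VaR: {exceed_frac:.3f}")
```

Note that the empirical exceedance frequency is close to 5% only in simulation; real-world backtesting of VaR estimates is a separate exercise.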
Three parameters need to ...