VaR is typically used by firms and regulators in the financial industry to gauge the amount of assets needed to cover possible losses.
In some extreme financial events it can be impossible to determine losses, either because market prices are unavailable or because the loss-bearing institution breaks up.
Some longer-term consequences of disasters, such as lawsuits, loss of market confidence and employee morale, and impairment of brand names, can take a long time to play out and may be hard to allocate among specific prior decisions.
The system is run periodically (usually daily) and the published number is compared to the computed price movement in opening positions over the time horizon.
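As a rough sketch of this comparison step (with hypothetical input names, and assuming a one-day 1% VaR so that roughly 1% of days should show a larger loss), the published figure can be checked against the realized profit and loss on the positions that were open when it was published:

```python
# Sketch: compare each day's published VaR against the realized P&L on the
# positions held when the figure was published. Input names are hypothetical.

def count_var_breaches(published_var, realized_pnl):
    """Count days on which the realized loss exceeded the published VaR.

    published_var : positive VaR figures (e.g. in dollars), one per day
    realized_pnl  : realized P&L per day; negative values are losses
    """
    breaches = 0
    for var, pnl in zip(published_var, realized_pnl):
        if -pnl > var:      # loss larger than the published VaR
            breaches += 1
    return breaches

# For a well-calibrated one-day 1% VaR, roughly 1% of trading days
# (about 2-3 days per year) should be breaches.
```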
Essentially, trustees adopt portfolio value-at-risk metrics for the entire pooled account and for the diversified parts that are individually managed.
VaR used in this manner adds relevance and provides a way to monitor risk that is far more intuitive than the standard deviation of return.
For example, if an institution holds a loan that declines in market price because interest rates go up, but has no change in cash flows or credit quality, some systems do not recognize a loss.
Also, some try to incorporate the economic cost of harm not measured in daily financial statements, such as loss of market confidence or employee morale, impairment of brand names or lawsuits.
Supporters of VaR-based risk management claim the first and possibly greatest benefit of VaR is the improvement in systems and modeling it forces on an institution.
In 1997, Philippe Jorion wrote:[19] "[T]he greatest benefit of VAR lies in the imposition of a structured methodology for critically thinking about risk."
Positions that are reported, modeled or priced incorrectly stand out, as do data feeds that are inaccurate or late and systems that are too frequently down.[27]
A comparison of a number of strategies for VaR prediction is given in Kuester et al.[28] A McKinsey report[29] published in May 2012 estimated that 85% of large banks were using historical simulation.
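As an informal illustration of the historical-simulation approach that the report refers to, the sketch below estimates a one-day VaR as an empirical quantile of past daily portfolio P&L; the function name, the simulated data and the 1% level are assumptions for the example, not a prescribed methodology.

```python
import numpy as np

def historical_var(pnl_history, level=0.01):
    """One-day VaR by historical simulation.

    pnl_history : past daily portfolio P&L (losses negative)
    level       : tail probability, e.g. 0.01 for a 1% (99%-confidence) VaR

    Returns the loss amount (as a positive number) that historical daily
    losses exceeded on roughly `level` of the observed days.
    """
    pnl = np.asarray(pnl_history, dtype=float)
    # The `level` quantile of P&L lies on the loss (negative) side;
    # flip the sign so VaR is reported as a positive loss.
    return -np.quantile(pnl, level)

# Hypothetical example: 500 days of simulated daily P&L in dollars.
rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 1_000_000, size=500)
print(historical_var(pnl, level=0.01))
```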
Early examples of backtests can be found in Christoffersen (1998),[30] later generalized by Pajhede (2017),[31] which model a "hit sequence" of losses greater than the VaR and test whether these "hits" are independent of one another and occur with the correct probability.
A number of other backtests are available which model the time between hits in the hit sequence; see Christoffersen and Pelletier (2004),[32] Haas (2006),[33] Tokpavi et al.,[34] and Pajhede (2017).[31]
As pointed out in several of the papers, the asymptotic distribution of the test statistics is often poor when considering high levels of coverage, e.g. a 99% VaR, so the parametric bootstrap method of Dufour (2006)[35] is often used to obtain correct size properties for the tests.
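To make the hit-sequence idea concrete, a simplified sketch follows of the unconditional-coverage part of such a backtest, a Kupiec-style likelihood-ratio test that checks whether the observed breach frequency matches the stated tail probability; the independence tests and the bootstrap size correction mentioned above are omitted, and the function name is illustrative.

```python
import numpy as np
from scipy.stats import chi2

def unconditional_coverage_test(hits, level=0.01):
    """Likelihood-ratio test that VaR breaches occur with probability `level`.

    hits  : sequence of 0/1 indicators, 1 on days when the loss exceeded VaR
    level : tail probability the VaR model claims, e.g. 0.01 for a 99% VaR

    Returns (LR statistic, p-value). Under the null of correct coverage the
    statistic is asymptotically chi-squared with one degree of freedom; as
    noted above, this approximation can be poor at high coverage levels,
    which is why bootstrap versions of the test are often preferred.
    """
    hits = np.asarray(hits)
    n, x = len(hits), int(hits.sum())            # days observed, breaches seen
    pi_hat = min(max(x / n, 1e-12), 1 - 1e-12)   # observed frequency, guard log(0)
    loglik_null = (n - x) * np.log(1 - level) + x * np.log(level)
    loglik_alt = (n - x) * np.log(1 - pi_hat) + x * np.log(pi_hat)
    lr = -2.0 * (loglik_null - loglik_alt)
    return lr, chi2.sf(lr, df=1)
```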
This was the first major financial crisis in which many academically trained quants were in high enough positions to worry about firm-wide survival.[1]
The crash was so unlikely given standard statistical models that it called the entire basis of quant finance into question.
A reconsideration of history led some quants to decide there were recurring crises, about one or two per decade, that overwhelmed the statistical assumptions embedded in models used for trading, investment management and derivative pricing.
These affected many markets at once, including ones that were usually not correlated, and seldom had discernible economic cause or warning (although after-the-fact explanations were plentiful).
It was well established in quantitative trading groups at several financial institutions, notably Bankers Trust, before 1990, although neither the name nor the definition had been standardized.
J. P. Morgan CEO Dennis Weatherstone famously called for a "4:15 report" that combined all firm risk on one page, available within 15 minutes of the market close.
Development was most extensive at J. P. Morgan, which published the methodology and gave free access to estimates of the necessary underlying parameters in 1994.
[10] In 1997, the U.S. Securities and Exchange Commission ruled that public corporations must disclose quantitative information about their derivatives activity.
Major banks and dealers chose to implement the rule by including VaR information in the notes to their financial statements.[1]
Worldwide adoption of the Basel II Accord, beginning in 1999 and nearing completion today, gave further impetus to the use of VaR.
A powerful tool for professional risk managers, VaR is portrayed as both easy to misunderstand and dangerous when misunderstood.
A single-branch bank has about a 0.0004% chance of being robbed on a specific day, so the risk of robbery would not figure into one-day 1% VaR.
The whole point of insurance is to aggregate risks that are beyond individual VaR limits, and bring them into a large enough portfolio to get statistical predictability.
A sizable in-house security department is in charge of prevention and control; the general risk manager just tracks the loss like any other cost of doing business.
As portfolios or institutions get larger, specific risks change from low-probability/low-predictability/high-impact to statistically predictable losses of low individual impact.
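A small simulation, using the illustrative robbery probability from above and an assumed loss amount rather than any real figures, shows both points: an event far out in the tail never appears in a one-day 1% VaR for a single branch, while across a large network the same risk becomes a predictable cost.

```python
import numpy as np

rng = np.random.default_rng(1)
p_robbery = 0.000004        # ~0.0004% chance per branch per day, as above
loss_per_robbery = 100_000  # hypothetical loss per robbery, in dollars

# One branch, one day: the robbery probability is far below the 1% tail,
# so the 1%-worst-day loss is zero and robbery never shows up in the VaR.
one_branch_day = rng.binomial(1, p_robbery, size=1_000_000) * loss_per_robbery
print(np.quantile(one_branch_day, 0.99))      # 0.0

# A network of 10,000 branches over a 250-day year: robberies become a
# recurring, low-individual-impact loss whose annual total clusters around
# its mean, i.e. a statistically predictable cost of doing business.
annual_robberies = rng.binomial(10_000 * 250, p_robbery, size=10_000)
annual_loss = annual_robberies * loss_per_robbery
print(annual_loss.mean(), annual_loss.std())  # mean ~ $1.0m, std ~ $0.3m
```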