| Value at Risk | |
|---|---|
| Name | Value at Risk |
| Inventor | J.P. Morgan |
| Year | Late 1980s–1990s |
| Related measures | Expected shortfall, Stress testing |
Value at risk (VaR) is a statistical technique used to measure and quantify the level of financial risk within a firm, portfolio, or position over a specific time frame. It is most commonly employed by investment banks, commercial banks, and regulatory bodies such as the Basel Committee on Banking Supervision to gauge the potential for loss in trading books. The metric provides a threshold value such that, under normal market conditions, the probability of a loss exceeding that amount over the horizon equals a given probability level (for example 5%), the complement of the chosen confidence level.
Formally, value at risk is defined as the maximum loss not exceeded with a specified probability, known as the confidence level, over a given period of time. For example, a one-day 5% VaR of $1 million means that there is only a 5% chance that the portfolio will lose more than $1 million in a single day. The calculation requires three choices: the confidence level (often 95% or 99%), the time horizon (such as one day or ten days), and the method for modeling the probability distribution of returns. Common approaches include the variance-covariance method, which typically assumes returns follow a normal distribution; historical simulation, which uses actual past data; and Monte Carlo simulation, which generates a large number of hypothetical scenarios from statistical models.
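In symbols, VaR at a given confidence level is the corresponding quantile of the loss distribution over the chosen horizon. A minimal sketch of the standard formulation (the notation is a common convention, assumed here rather than taken from the text above):

```latex
% VaR at confidence level \alpha, for portfolio loss L over the horizon:
\mathrm{VaR}_{\alpha}(L) = \inf\{\, \ell \in \mathbb{R} : P(L > \ell) \le 1 - \alpha \,\}

% Under the variance-covariance method's normality assumption, with \mu and
% \sigma the mean and standard deviation of portfolio profit and loss:
\mathrm{VaR}_{\alpha} = -\mu + \sigma \, \Phi^{-1}(\alpha)
```

The second line is what makes the parametric method fast: a single standard normal quantile scales the portfolio's standard deviation, with no simulation required.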
The development of value at risk is closely associated with the financial industry's need for consolidated risk reporting in the late 20th century. Pioneering work was conducted at J.P. Morgan in the late 1980s under the leadership of Dennis Weatherstone, who demanded a single daily number summarizing the firm's overall market risk. This effort culminated in the 1994 publication of the RiskMetrics technical document, which provided free methodologies and data for calculating VaR, significantly popularizing its use. The concept was further institutionalized following major financial disasters such as the 1995 collapse of Barings Bank and the 1998 failure of Long-Term Capital Management, which highlighted the perils of unmanaged market risk.
The primary methodologies for computing value at risk are the parametric method, historical simulation, and Monte Carlo simulation. The parametric, or variance-covariance, method, popularized by RiskMetrics, assumes asset returns are normally distributed and calculates VaR using the portfolio's standard deviation and correlations. Historical simulation involves applying historical changes in market factors to the current portfolio to construct a distribution of potential outcomes, making no parametric assumptions about return distributions. Monte Carlo simulation, the most computationally intensive, involves randomly generating thousands of potential future price paths for assets based on chosen stochastic processes and then calculating the portfolio's value under each scenario to determine the loss distribution.
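The following sketch illustrates all three estimators side by side on a hypothetical portfolio. The return series, portfolio value, and confidence level are invented for illustration, and the Monte Carlo step reuses the fitted normal model purely for brevity; a production model would use a richer stochastic process.

```python
# A minimal sketch of the three standard VaR estimators described above, applied
# to a hypothetical portfolio. All inputs here are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

portfolio_value = 1_000_000                            # hypothetical $1M portfolio
alpha = 0.95                                           # 95% confidence level
daily_returns = rng.normal(0.0005, 0.01, size=1_000)   # stand-in for a real return history

# 1. Parametric (variance-covariance): fit a normal distribution to the returns
#    and read VaR off its (1 - alpha) quantile.
mu, sigma = daily_returns.mean(), daily_returns.std(ddof=1)
var_parametric = -(mu + sigma * norm.ppf(1 - alpha)) * portfolio_value

# 2. Historical simulation: take the empirical (1 - alpha) quantile of the
#    observed returns, with no distributional assumption.
var_historical = -np.quantile(daily_returns, 1 - alpha) * portfolio_value

# 3. Monte Carlo: generate many hypothetical scenarios from a chosen stochastic
#    model (here the same fitted normal, purely for brevity) and take the
#    empirical quantile of the simulated distribution.
simulated = rng.normal(mu, sigma, size=100_000)
var_monte_carlo = -np.quantile(simulated, 1 - alpha) * portfolio_value

print(f"Parametric  1-day 95% VaR: ${var_parametric:,.0f}")
print(f"Historical  1-day 95% VaR: ${var_historical:,.0f}")
print(f"Monte Carlo 1-day 95% VaR: ${var_monte_carlo:,.0f}")
```

On well-behaved data the three estimates agree closely; they diverge precisely when the normality assumption fails, which is the situation the criticisms below address.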
Value at risk is extensively used for internal risk management by institutions such as Goldman Sachs, Morgan Stanley, and Deutsche Bank to set position limits and assess the performance of trading desks. It is a cornerstone of regulatory capital requirements under international accords such as the Basel II and Basel III frameworks, which require banks to hold capital against market risk based on VaR models. Furthermore, asset managers and pension funds, such as the California Public Employees' Retirement System, use VaR to communicate risk exposures to trustees and stakeholders, while corporate treasuries apply it to hedge foreign exchange and interest rate risk.
Despite its widespread adoption, value at risk has been heavily criticized, particularly following the 2007–2008 financial crisis. A major limitation is that it says nothing about the severity of losses beyond the VaR threshold, ignoring tail risk. Critics like economist Nassim Nicholas Taleb, author of *The Black Swan*, argue that VaR gives a false sense of security because it fails to account for extreme, low-probability events. The assumption of normal market conditions and static correlations often breaks down during periods of market stress, such as the 1997 Asian financial crisis or the 2008 bankruptcy of Lehman Brothers. Additionally, VaR is not a coherent risk measure because it is not subadditive: the VaR of a combined portfolio can exceed the sum of the VaRs of its components, which can perversely discourage diversification, as the example below shows.
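A standard textbook-style counterexample makes the subadditivity failure concrete (the numbers are illustrative assumptions): take two independent positions that each lose 100 with probability 4% and otherwise lose nothing, and measure 95% VaR.

```latex
% Each position alone: P(L > 0) = 0.04 \le 0.05, so the 95% VaR is zero.
\mathrm{VaR}_{0.95}(L_A) = \mathrm{VaR}_{0.95}(L_B) = 0

% Combined: P(L_A + L_B > 0) = 1 - 0.96^2 = 0.0784 > 0.05, so the 95% VaR is 100.
\mathrm{VaR}_{0.95}(L_A + L_B) = 100 > \mathrm{VaR}_{0.95}(L_A) + \mathrm{VaR}_{0.95}(L_B)
```

Holding either position alone thus reports zero risk, while the diversified portfolio reports a positive VaR, the opposite of what a coherent risk measure would say about diversification.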
The regulatory embrace of value at risk began with the 1996 Market Risk Amendment to the Basel I Accord, which allowed banks to use internal VaR models to calculate capital charges for market risk. This was expanded under Basel II, which incorporated VaR-style models for both market and operational risk. In response to the financial crisis, the Basel 2.5 revisions and the Basel III framework introduced more stringent requirements, including stressed VaR, which must be calculated over a continuous twelve-month period of significant financial stress. Regulatory bodies like the U.S. Securities and Exchange Commission and the European Banking Authority continue to scrutinize VaR models, and complementary measures like Expected Shortfall have been adopted to address VaR's shortcomings in capturing tail risk.
Category:Financial risk