Backtesting Value-at-Risk (VaR): The Basics

Value-at-risk (VaR) is a widely used measure of downside investment risk for a single investment or a portfolio of investments. VaR gives the minimum loss, in value or percentage terms, that a portfolio or asset may sustain over a specific period of time at a certain level of confidence. The confidence level is often chosen so as to give an indication of tail risk; that is, the risk of rare, extreme market events.

For example, a VaR calculation that suggests an asset has a 5% chance of a 3% loss over a period of one day would tell an investor with $100 invested in that asset that they should expect a 5% chance that their portfolio will drop by at least $3 on any given day. The VaR ($3 in this example) can be measured using three different methodologies. Each methodology relies on creating a distribution of investment returns; put another way, all possible investment returns are assigned a probability of occurrence over a specified time period.
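
To make this concrete, here is a minimal sketch of one of those methodologies, the historical-simulation approach, in Python. The simulated return series, volatility, and confidence level are illustrative assumptions rather than data for any real asset.

```python
import numpy as np

def historical_var(returns, confidence=0.95):
    """One-day VaR from a return series using the historical-simulation method.

    Returns the loss at the given confidence level as a positive number.
    """
    # The (1 - confidence) quantile of the return distribution marks the tail cutoff.
    cutoff = np.percentile(returns, 100 * (1 - confidence))
    return -cutoff  # express the loss as a positive number

# Hypothetical example: simulated daily returns for a $100 position.
rng = np.random.default_rng(seed=42)
daily_returns = rng.normal(loc=0.0, scale=0.02, size=1000)  # assumed 2% daily volatility

var_pct = historical_var(daily_returns, confidence=0.95)
print(f"95% one-day VaR: {var_pct:.2%} of the position, or ${100 * var_pct:.2f} on $100")
```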

How Accurate Is VaR?

Once a VaR methodology is chosen, calculating a portfolio’s VaR is a fairly straightforward exercise. The challenge lies in assessing the accuracy of the measure and, thus, the accuracy of the distribution of returns. Knowing the accuracy of the measure is particularly important for financial institutions because they use VaR to estimate how much cash they need to reserve to cover potential losses. Any inaccuracies in the VaR model may mean that the institution is not holding sufficient reserves and could lead to significant losses, not only for the institution but potentially for its depositors, individual investors and corporate clients. In extreme market conditions such as those that VaR attempts to capture, the losses may be large enough to cause bankruptcy.

How to Backtest a VaR Model for Accuracy

Risk managers use a technique known as backtesting to determine the accuracy of a VaR model. Backtesting involves the comparison of the calculated VaR measure to the actual losses (or gains) achieved on the portfolio. A backtest relies on the level of confidence that is assumed in the calculation. 

For example, the investor who calculated a one-day VaR of $3 on a $100 investment with 95% confidence will expect the one-day loss on their portfolio to exceed $3 only 5% of the time. If the investor recorded the actual losses over 100 days, the loss should exceed $3 on roughly five of those days if the VaR model is accurate. A simple backtest stacks up the actual return distribution against the model return distribution by comparing the proportion of actual loss exceptions to the expected number of exceptions, as in the sketch below. The backtest must be performed over a sufficiently long period to ensure that there are enough actual return observations to create an actual return distribution. For a one-day VaR measure, risk managers typically use a minimum period of one year for backtesting.
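
A minimal sketch of this exception-count comparison is shown below; the simulated return sample and the $3 (3%) VaR figure are assumed for illustration.

```python
import numpy as np

def count_exceptions(actual_returns, var, confidence=0.95):
    """Compare observed VaR exceptions with the number expected at the confidence level."""
    actual_returns = np.asarray(actual_returns)
    exceptions = int(np.sum(actual_returns < -var))    # days the loss exceeded VaR
    expected = (1 - confidence) * len(actual_returns)  # e.g. 5% of observations
    return exceptions, expected

# Hypothetical 100-day sample of daily returns on a $100 portfolio, VaR = $3 (3%).
rng = np.random.default_rng(seed=7)
sample = rng.normal(loc=0.0, scale=0.018, size=100)

observed, expected = count_exceptions(sample, var=0.03, confidence=0.95)
print(f"Observed exceptions: {observed}, expected: {expected:.1f}")
```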

The simple backtest has a major drawback: it is dependent on the sample of actual returns used. Consider again the investor who calculated a $3 one-day VaR with 95% confidence. Suppose the investor performed a backtest over 100 days and found exactly five exceptions. If the investor uses a different 100-day period, there may be fewer or more exceptions. This sample dependence makes it difficult to ascertain the accuracy of the model. To address this weakness, statistical tests can be implemented to shed greater light on whether a backtest has failed or passed.
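
One widely used test of this kind is the Kupiec proportion-of-failures (POF) test, a likelihood-ratio test of whether the observed exception rate is statistically consistent with the rate implied by the VaR confidence level. The sketch below is a minimal Python implementation; the eight exceptions over 250 trading days are an assumed example.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof_test(exceptions, observations, confidence=0.95):
    """Kupiec proportion-of-failures (POF) likelihood-ratio test.

    Tests whether the observed exception rate is consistent with the rate
    implied by the VaR confidence level. Returns the LR statistic and its
    p-value (chi-square with 1 degree of freedom).
    """
    p = 1 - confidence            # expected exception probability, e.g. 0.05
    x, n = exceptions, observations
    phat = x / n                  # observed exception rate

    # Log-likelihood under the null (rate = p) versus under the observed rate.
    log_null = (n - x) * np.log(1 - p) + x * np.log(p)
    log_alt = (n - x) * np.log(1 - phat) + x * np.log(phat)
    lr = -2 * (log_null - log_alt)

    p_value = 1 - chi2.cdf(lr, df=1)
    return lr, p_value

# Example: 8 exceptions in 250 trading days at 95% confidence.
lr, p_value = kupiec_pof_test(exceptions=8, observations=250, confidence=0.95)
print(f"LR statistic: {lr:.2f}, p-value: {p_value:.3f}")  # reject the model if p < 0.05
```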

What to Do If the Backtest Fails

When a backtest fails, there are a number of possible causes that need to be taken into consideration:

The Wrong Return Distribution

If the VaR methodology assumes a return distribution (e.g., a normal distribution of returns), it’s possible that the model distribution is not a good fit to the actual distribution. Statistical goodness-of-fit tests can be used to check that the model distribution fits the actual observed data. Alternatively, a VaR methodology that does not require a distribution assumption can be used.
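
As an illustration, the sketch below fits a normal distribution to a hypothetical fat-tailed return sample and applies a Kolmogorov-Smirnov goodness-of-fit test. Because the mean and standard deviation are estimated from the same sample, the p-value is only approximate; a Lilliefors-type correction is often applied in practice.

```python
import numpy as np
from scipy.stats import kstest, norm

# Hypothetical sample of observed daily returns (fat-tailed by construction).
rng = np.random.default_rng(seed=3)
observed_returns = rng.standard_t(df=4, size=500) * 0.01

# Fit a normal distribution (the model assumption) to the observed data,
# then test the fit with a Kolmogorov-Smirnov test.
mu, sigma = observed_returns.mean(), observed_returns.std(ddof=1)
statistic, p_value = kstest(observed_returns, lambda x: norm.cdf(x, loc=mu, scale=sigma))

print(f"KS statistic: {statistic:.3f}, p-value: {p_value:.3f}")
# A small p-value (e.g. < 0.05) suggests the normal assumption is a poor fit.
```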

A Misspecified VaR Model

If the VaR model captures, say, only equity market risk while the investment portfolio is exposed to other risks such as interest rate risk or foreign exchange risk, the model is misspecified. In addition, if the VaR model fails to capture the correlations between the risks, it is considered to be misspecified. This can be rectified by including all the applicable risks and associated correlations in the model. It is important to reevaluate the VaR model whenever new risks are added to a portfolio.
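
As a minimal sketch of how correlations enter a parametric (variance-covariance) VaR calculation, the example below computes a one-day VaR for a hypothetical portfolio exposed to two risks. The exposures, volatilities, and correlation are assumed values chosen purely for illustration.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical two-risk portfolio: dollar exposures and daily volatilities.
exposures = np.array([60.0, 40.0])   # $60 equity exposure, $40 foreign-exchange exposure
vols = np.array([0.02, 0.01])        # assumed daily volatilities
corr = np.array([[1.0, 0.3],
                 [0.3, 1.0]])        # assumed correlation between the two risks

# Covariance matrix and portfolio standard deviation in dollar terms.
cov = np.outer(vols, vols) * corr
portfolio_std = np.sqrt(exposures @ cov @ exposures)

# Parametric (variance-covariance) one-day VaR at 95% confidence.
var_95 = norm.ppf(0.95) * portfolio_std
print(f"95% one-day VaR: ${var_95:.2f}")
```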

Measurement of Actual Losses

The actual portfolio losses must be representative of risks that can be modeled. More specifically, the actual losses must exclude any fees or other such costs or income. Losses that represent only risks that can be modeled are referred to as “clean losses.” Those that include fees and other such items are known as “dirty losses.” Backtesting must always be done using clean losses to ensure a like-for-like comparison.
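
A minimal sketch of the adjustment, using hypothetical P&L and fee figures, might look like this:

```python
import numpy as np

# Hypothetical daily P&L for a backtesting window.
dirty_pnl = np.array([1.2, -3.5, 0.8, -2.9, 4.1])     # reported P&L, includes fees
fees      = np.array([-0.1, -0.1, -0.1, -0.1, -0.1])  # assumed daily fees (negative = cost)

# Strip the non-modelable items to obtain clean P&L for the backtest.
clean_pnl = dirty_pnl - fees

var = 3.0  # one-day VaR in dollars
exceptions = int(np.sum(clean_pnl < -var))
print(f"Exceptions against clean P&L: {exceptions}")
```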

Other Considerations

It’s important not to rely on a VaR model simply because it passes a backtest. Although VaR offers useful information about worst-case risk exposure, it is heavily reliant on the return distribution employed, particularly the tail of the distribution. Since tail events are so infrequent, some practitioners argue that any attempts to measure tail probabilities based on historical observation are inherently flawed. According to Reuters, “VaR came in for heated criticism following the financial crisis as many models failed to predict the extent of the losses that devastated many large banks in 2007 and 2008.”

The reason? The markets had not experienced a similar event, so it wasn’t captured in the tails of the distributions that were used. After the 2007-2008 financial crisis, it also became clear that VaR models are incapable of capturing all risks; for example, basis risk. These additional risks are referred to as “risk not in VaR,” or RNiV.

In an attempt to address these inadequacies, risk managers supplement the VaR measure with other risk measures and other techniques such as stress testing.
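
As a simple illustration of how a stress test can complement VaR, the sketch below applies a few assumed scenario shocks to hypothetical risk-factor exposures and reports the resulting profit or loss. The exposures, scenarios, and shock sizes are illustrative assumptions only.

```python
# Hypothetical exposures (in dollars) to a few risk factors.
exposures = {"equity": 60.0, "rates": 25.0, "fx": 15.0}

# Assumed stress scenarios: fractional shocks applied to each risk factor.
scenarios = {
    "equity_crash":   {"equity": -0.30, "rates": -0.05, "fx": 0.02},
    "rate_spike":     {"equity": -0.10, "rates": -0.15, "fx": 0.00},
    "currency_shock": {"equity": -0.05, "rates": 0.00,  "fx": -0.20},
}

# P&L under each scenario = sum of exposure * shock across risk factors.
for name, shocks in scenarios.items():
    pnl = sum(exposures[factor] * shocks[factor] for factor in exposures)
    print(f"{name}: ${pnl:.2f}")
```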

The Bottom Line

Value-at-Risk (VaR) is a measure of worst-case losses over a specified time period with a certain level of confidence. The measurement of VaR hinges on the distribution of investment returns. In order to test whether or not the model accurately represents reality, backtesting can be carried out. A failed backtest means that the VaR model must be reevaluated. However, a VaR model that passes a backtest should still be supplemented with other risk measures due to the shortcomings of VaR modeling.
