
Showing papers in "Journal of Risk in 2002"



Journal ArticleDOI
TL;DR: In this article, the authors provide a balanced view on when and when not to use the Cornish-Fisher expansion in the context of Delta-Gamma-Normal approaches to the computation of value at risk.
Abstract: Qualitative and quantitative properties of the Cornish-Fisher expansion in the context of Delta-Gamma-Normal approaches to the computation of Value at Risk are presented. Some qualitative deficiencies of the Cornish-Fisher expansion – neither the monotonicity of the distribution function nor convergence is guaranteed – make it seem unattractive. In many practical situations, however, its actual accuracy is more than sufficient, and the Cornish-Fisher approximation can be computed faster (and more simply) than other methods such as numerical Fourier inversion. This paper tries to provide a balanced view on when and when not to use Cornish-Fisher in this context.
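As a rough illustration of the expansion discussed in this abstract, the sketch below applies the standard third-order Cornish-Fisher correction to a Gaussian quantile, assuming the first four moments of the portfolio P&L (for instance from a Delta-Gamma-Normal approximation) are already available; the function and its example inputs are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: third-order Cornish-Fisher quantile correction.
# Assumes the first four moments of the portfolio P&L are already known,
# e.g. from a Delta-Gamma-Normal approximation; not the paper's own code.
from scipy.stats import norm

def cornish_fisher_var(mu, sigma, skew, ex_kurt, alpha=0.01):
    """Approximate the alpha-quantile of the P&L and report VaR as a positive loss."""
    z = norm.ppf(alpha)                        # Gaussian quantile
    z_cf = (z
            + (z**2 - 1) * skew / 6            # skewness correction
            + (z**3 - 3 * z) * ex_kurt / 24    # excess-kurtosis correction
            - (2 * z**3 - 5 * z) * skew**2 / 36)
    return -(mu + sigma * z_cf)

# Hypothetical P&L moments: zero mean, fat left tail
print(cornish_fisher_var(mu=0.0, sigma=1.0e6, skew=-0.4, ex_kurt=1.2, alpha=0.01))
```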

113 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide an empirical implementation of a reduced form credit risk model that incorporates both liquidity risk and correlated defaults, where liquidity risk is modeled as a convenience yield and default correlation is modeled via an intensity process that depends on market factors.
Abstract: This paper provides an empirical implementation of a reduced form credit risk model that incorporates both liquidity risk and correlated defaults. Liquidity risk is modeled as a convenience yield and default correlation is modeled via an intensity process that depends on market factors. Various different liquidity risk and intensity process models are investigated. Firstly, the evidence supports a non-zero liquidity premium that is firm specific, reflecting idiosyncratic and not systematic risk. Secondly, the credit risk model with correlated defaults fits the data quite well, with an average R² of 0.87 and a pricing error of only 1.1 percent.
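A minimal sketch of the pricing idea, under strong simplifying assumptions that are not the paper's specification: a constant short rate, a default intensity that is affine in a single common market factor (which is what correlates defaults across firms), fractional recovery, and a firm-specific liquidity spread playing the role of the convenience yield. All names and numbers are illustrative.

```python
# Illustrative sketch under strong simplifying assumptions (constant short rate,
# affine intensity in one common factor, fractional recovery, constant liquidity
# spread); the symbols below are hypothetical, not the paper's specification.
import math

def defaultable_zero_price(r, a, b, x, delta, gamma, T):
    """Price a defaultable zero-coupon bond maturing at T."""
    lam = a + b * x                       # default intensity, affine in the market factor x
    credit_spread = lam * (1.0 - delta)   # intensity scaled by loss given default
    return math.exp(-(r + credit_spread + gamma) * T)  # gamma = liquidity (convenience yield) spread

# Two firms exposed to the same factor x have correlated default intensities.
x = 0.02
print(defaultable_zero_price(r=0.05, a=0.01, b=0.5, x=x, delta=0.4, gamma=0.003, T=5.0))
```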

68 citations


Journal ArticleDOI
TL;DR: In this article, the authors examine whether implied volatility is an unbiased, informationally efficient predictor of actual future volatility, assess its predictive power, and find that implied volatility has strong predictive power.
Abstract: This paper examines (1) whether implied volatility is an unbiased informationally efficient predictor of actual future volatility and (2) its predictive power. If markets are efficient and the option pricing model is correct, then the implied volatility calculated from option prices should be an unbiased and informationally efficient estimator of future volatility, that is, it should correctly impound all available information including the asset's price history. However, numerous studies have found that implied volatility is not informationally efficient and that historical volatilities have incremental predictive power -- often out-predicting implied volatilities. For the S&P 500 options on futures we find the following. One, at least part of the apparent inefficiency of implied volatility from past studies stems from measurement error which biases estimates of the importance of implied volatility downward and of the importance of historical volatility upward. Once we correct for this error, there is no significant inefficiency. Two, implied volatility has strong predictive power -- considerably stronger than found by previous equity index studies. Three, stock market volatility prediction results are quite sensitive to (1) the forecasting horizon and (2) whether the data period covers the October 1987 stock market crash.
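The sketch below shows the kind of encompassing regression this literature runs, with realized volatility over the forecast horizon regressed on implied and historical volatility, using placeholder data; the paper's treatment of measurement error (for example, instrumenting implied volatility) is not reproduced here.

```python
# Illustrative encompassing regression with placeholder data: realized volatility
# regressed on implied and historical volatility.  The paper's measurement-error
# correction for implied volatility is not reproduced here.
import numpy as np

def encompassing_regression(realized, implied, historical):
    X = np.column_stack([np.ones_like(implied), implied, historical])
    beta, *_ = np.linalg.lstsq(X, realized, rcond=None)  # OLS fit
    return beta  # [intercept, loading on implied vol, loading on historical vol]

# Unbiasedness and informational efficiency would imply an intercept near 0,
# an implied-volatility loading near 1, and a historical-volatility loading near 0.
rng = np.random.default_rng(0)
iv = 0.15 + 0.05 * rng.standard_normal(250)   # hypothetical implied volatilities
hv = iv + 0.02 * rng.standard_normal(250)     # hypothetical historical volatilities
rv = iv + 0.01 * rng.standard_normal(250)     # hypothetical realized volatilities
print(encompassing_regression(rv, iv, hv))
```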

63 citations



Journal ArticleDOI
TL;DR: In this article, the authors present an analysis and survey regarding the validity of VaR risk measures in comparison to traditional risk measures and conclude that although VaR is an inadequate measure within the expected utility framework, it is at least as good as other traditional risk measures.
Abstract: The article presents an analysis and survey regarding the validity of VaR risk measures in comparison to traditional risk measures. Individuals are assumed to either maximize their expected utility or possess a lexicographic utility function. The analysis is carried out for generally distributed functions and for the normal and lognormal distributions. The main conclusion is that although VaR is an inadequate measure within the expected utility framework, it is at least as good as other traditional risk measures. Moreover, it can be improved by modified versions such as the Accumulated-VaR (Mean-Shortfall). Assuming a lexicographic expected utility strengthens the argument for using AVaR as a legitimate risk measure, especially in the case of a regulated firm.
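A minimal sketch of the two quantities being compared, assuming a simple empirical estimator on a return sample: the VaR at a given confidence level and an Accumulated-VaR (mean-shortfall) style measure, i.e. the average loss beyond the VaR. The sign conventions and estimator are illustrative assumptions, not the article's derivation.

```python
# Illustrative empirical estimators of VaR and an Accumulated-VaR / mean-shortfall
# style measure from a return sample; sign conventions are assumptions.
import numpy as np

def var_and_avar(returns, alpha=0.01):
    q = np.quantile(returns, alpha)      # lower alpha-quantile of returns
    var = -q                             # VaR reported as a positive loss
    tail = returns[returns <= q]         # outcomes at least as bad as the quantile
    avar = -tail.mean()                  # average loss beyond the VaR
    return var, avar

rng = np.random.default_rng(1)
r = 0.01 * rng.standard_t(df=4, size=10_000)  # simulated fat-tailed daily returns
print(var_and_avar(r, alpha=0.01))
```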

43 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a comprehensive empirical analysis of a set of left-tail measures (LTMs) – the VAR together with the mean and standard deviation of a loss larger than the VAR (MLL and SDLL) – and investigate the empirical dynamics of these measures.
Abstract: Measures of potential losses such as value-at-risk may not contain enough, or the right, information for risk managers. This paper presents a comprehensive empirical analysis of a set of left-tail measures (LTMs): the mean and standard deviation of a loss larger than the VAR (MLL and SDLL) and the VAR. We investigate the empirical dynamics of the LTMs. We present a robust and unified framework, the ARCH quantile regression approach, for estimating the LTMs. Our Monte Carlo simulation shows that the VAR is appropriate for risk management when returns follow Gaussian processes, but the MLL strategy and strategies accounting for the SDLL are useful in reducing the risk of large losses under non-normal distributions and when there are jumps in asset prices.
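A plain empirical version of the three left-tail measures, computed here from a simulated fat-tailed return sample; the paper estimates them with the ARCH quantile regression framework, which is not reproduced in this sketch.

```python
# Illustrative computation of the three left-tail measures from a simulated
# return sample: VaR, MLL (mean loss beyond VaR) and SDLL (its standard
# deviation).  The paper's ARCH quantile regression estimator is not shown.
import numpy as np

def left_tail_measures(returns, alpha=0.01):
    q = np.quantile(returns, alpha)    # lower alpha-quantile of returns
    losses = -returns[returns < q]     # losses larger than the VaR, as positive numbers
    var = -q
    mll = losses.mean()                # mean of losses beyond the VaR
    sdll = losses.std(ddof=1)          # dispersion of losses beyond the VaR
    return var, mll, sdll

rng = np.random.default_rng(2)
r = 0.01 * rng.standard_t(df=3, size=50_000)
print(left_tail_measures(r, alpha=0.01))
```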

34 citations


Journal ArticleDOI
TL;DR: In this article, the authors explore practical methods to specify and compute the conditional value-at-risk, a coherent risk measure, in the presence of certain tractable collections of probability measures (risk measurement).
Abstract: Despite the widespread realization that risk measures depend critically on the underlying probability measure, most risk measures are based on single probability measures – asset price probabilities are assumed to be known with certainty. We explore practical methods to specify and compute the conditional value-at-risk, a coherent risk measure, in the presence of certain tractable collections of probability measures (risk measurement). Our methods "discover" the most dangerous measure in the set of measures. We also explore practical methods to compute optimal trading strategies with respect to a particular probability measure, under the constraint that CVAR is bounded under multiple probability measures (risk management).
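A minimal sketch of the "most dangerous measure" idea under the assumption of a finite scenario set: compute the CVaR of the same loss scenarios under each candidate probability measure and keep the worst value. The discrete CVaR estimator and the example measures are illustrative, not the paper's computational method.

```python
# Illustrative worst-case CVaR over a finite set of candidate probability
# measures on the same loss scenarios; the estimator and example numbers are
# assumptions, not the paper's method.
import numpy as np

def cvar(losses, probs, beta=0.95):
    """CVaR at level beta: probability-weighted average of the worst (1 - beta) tail."""
    order = np.argsort(losses)[::-1]            # sort scenarios from largest loss down
    losses, probs = losses[order], probs[order]
    tail_mass = 1.0 - beta
    cum = np.cumsum(probs)
    k = np.searchsorted(cum, tail_mass) + 1     # number of scenarios covering the tail mass
    w = probs[:k].copy()
    w[-1] -= cum[k - 1] - tail_mass             # trim the boundary scenario's weight
    return float(np.dot(w, losses[:k]) / tail_mass)

def worst_case_cvar(losses, measures, beta=0.95):
    """The 'most dangerous' measure is the one attaining the maximum CVaR."""
    return max(cvar(losses, p, beta) for p in measures)

losses = np.array([-2.0, -1.0, 0.0, 1.0, 3.0, 8.0])  # scenario losses
p1 = np.array([0.2, 0.2, 0.2, 0.2, 0.1, 0.1])        # candidate measure 1
p2 = np.array([0.1, 0.1, 0.2, 0.2, 0.2, 0.2])        # candidate measure 2: heavier weight on big losses
print(worst_case_cvar(losses, [p1, p2], beta=0.8))
```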