
Showing papers in "Journal of Risk in 2022"


Journal ArticleDOI
TL;DR: In this paper, the authors analyze the impact of oil market uncertainty on the level, slope and curvature factors derived from the term structure of US interest rates, covering maturities from 1 to 30 years.
Abstract: Using daily data from January 3, 2001 to July 17, 2020, we analyze the impact of oil market uncertainty (computed based on the realized volatility of five-minute intraday oil returns) on the level, slope and curvature factors derived from the term structure of US interest rates covering maturities from 1 to 30 years. The results of the linear Granger causality tests show no evidence of the predictive ability of oil uncertainty for the three latent factors. However, evidence of nonlinearity and structural breaks indicates misspecification of the linear model. Accordingly, we use a data-driven approach: the nonparametric causality-in-quantiles test, which is robust to misspecification due to nonlinearity and regime change. Notably, this test allows us to model the entire conditional distribution of the level, slope and curvature factors, and hence accommodate, via the lower quantiles, the zero lower bound situation observed in our sample period. Using this robust test, we find overwhelming evidence of causality from oil uncertainty for the entire conditional distribution of the three factors, suggesting the predictability of the entire US term structure based on information contained in oil market volatility. Our results have important implications for academics, investors and policy makers.

1 citation
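
For intuition, the realized-volatility measure described above can be sketched in a few lines of Python. This is a minimal illustration, assuming a DatetimeIndex-ed series of five-minute oil prices; the function name and data layout are ours, not the authors'.

```python
import numpy as np
import pandas as pd

def daily_realized_volatility(intraday_prices: pd.Series) -> pd.Series:
    """Daily realized volatility from intraday prices.

    Assumes `intraday_prices` is a DatetimeIndex-ed series of
    5-minute prices. The realized variance for a day is the sum of
    squared intraday log returns; realized volatility is its root.
    """
    log_returns = np.log(intraday_prices).diff().dropna()
    realized_variance = (log_returns ** 2).groupby(log_returns.index.date).sum()
    return np.sqrt(realized_variance)
```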


Journal ArticleDOI
TL;DR: In this paper, the authors use multivariate portfolio sorts, firm-level cross-sectional regressions and spanning tests to show that, in the cross section of stock returns, most commonly used risk measures in academia and in practice are separate return predictors with negative slopes.
Abstract: Using multivariate portfolio sorts, firm-level cross-sectional regressions and spanning tests, we show that, in the cross section of stock returns, most commonly used risk measures in academia and in practice are separate return predictors with negative slopes. That is, in contrast to what many researchers might expect, there are multiple risk anomalies that are independent of each other. This implies that, in empirical asset pricing models, even different forms of total risk can be simultaneously relevant. Further, it suggests that investors trading based on one risk measure can obtain significant gains when also trading based on another. For example, an investor selecting stocks based on volatility can earn a significant monthly alpha by also considering the information contained in the maximum drawdown.

1 citation
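
As a concrete illustration of one of the risk measures involved, a minimal maximum-drawdown computation is sketched below; the function name and input convention are ours.

```python
import numpy as np

def max_drawdown(returns: np.ndarray) -> float:
    """Maximum peak-to-trough decline of the cumulative return path."""
    wealth = np.cumprod(1.0 + np.asarray(returns))   # cumulative wealth
    running_peak = np.maximum.accumulate(wealth)     # highest level so far
    drawdowns = 1.0 - wealth / running_peak          # decline from peak
    return float(drawdowns.max())
```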



Journal ArticleDOI
TL;DR: In this article, a two-component realized exponential generalized autoregressive conditional heteroscedasticity (EGARCH) model is proposed for the joint modeling of asset returns and realized measures of volatility.
Abstract: This paper proposes a two-component realized exponential generalized autoregressive conditional heteroscedasticity (EGARCH) model – an extension of the realized EGARCH model – for the joint modeling of asset returns and realized measures of volatility. The proposed model assumes that the volatility of asset returns consists of two components: a long-run component and a short-run component. The model’s unique ability to capture the long-memory property of volatility distinguishes it from the standard realized EGARCH model. The model is convenient to implement within the framework of maximum likelihood estimation. We apply the two-component realized EGARCH model and a restricted version of the model (the two-component realized EGARCH model with only short-run leverage) to four stock indexes: the Standard & Poor’s 500 index, the Hang Seng index, the Nikkei 225 index and the Deutscher Aktienindex. The empirical study suggests that the two-component realized EGARCH model and its restricted version outperform the realized GARCH model, the realized EGARCH model and the realized heterogeneous autoregressive GARCH model in terms of in-sample fit. Further, an out-of-sample predictive analysis demonstrates that the two-component realized EGARCH model and its restricted version yield more accurate volatility forecasts than the alternatives.

1 citation
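
The intuition behind the two-component structure can be illustrated with a stylized simulation. The recursion below is a heavily simplified stand-in, not the paper's specification (which also involves realized measures, leverage functions and a measurement equation), and the parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: the long-run log-variance component is
# highly persistent, the short-run component decays quickly.
n = 2000
beta_long, beta_short = 0.99, 0.70
alpha_long, alpha_short = 0.03, 0.15
h_long = np.zeros(n)   # slow log-variance component
h_short = np.zeros(n)  # fast log-variance component
returns = np.zeros(n)
z = 0.0                # lagged standardized innovation

for t in range(1, n):
    h_long[t] = beta_long * h_long[t - 1] + alpha_long * z
    h_short[t] = beta_short * h_short[t - 1] + alpha_short * z
    sigma = np.exp(0.5 * (h_long[t] + h_short[t]))  # total volatility
    z = rng.standard_normal()
    returns[t] = sigma * z
# The sum of a near-unit-root component and a fast-decaying one makes
# autocorrelations of squared returns decay slowly (long memory).
```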


Journal ArticleDOI
TL;DR: In this paper, the authors present several methods that can be used to predict future value-at-risk, including the nested Monte Carlo empirical quantile method and quantile regression.
Abstract: Predicting future value-at-risk is an important problem in finance. It arises in the modeling of future initial margin requirements for counterparty credit risk and future market value-at-risk. We are also interested in derived quantities such as both dynamic initial margin and margin value adjustment in the counterparty risk context and risk-weighted assets and capital value adjustment for market risk. This paper describes several methods that can be used to predict future value-at-risk. We begin with the nested Monte Carlo empirical quantile method as a benchmark, but it is too computationally intensive for routine use. We review several known methods and discuss their novel applications to the problem at hand. The techniques that are considered include computing percentiles from distributions (normal and Johnson) that were matched to estimates of moments and percentiles, quantile regression methods and others with more specific assumptions or requirements. We also consider how limited inner simulations can be used to improve the performance of these techniques. The paper also provides illustrations and visualizations of intermediate and final results for the various approaches and methods.

1 citation
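
The nested Monte Carlo benchmark mentioned above can be sketched as follows. This toy version uses a geometric Brownian motion in place of a real portfolio model, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def nested_mc_future_var(s0=100.0, mu=0.02, sigma=0.2, t_outer=1.0,
                         h=10 / 252, alpha=0.99,
                         n_outer=1000, n_inner=5000):
    """Future value-at-risk by nested Monte Carlo (empirical quantile).

    Outer paths simulate the state at the future date t_outer; inner
    paths simulate the P&L over horizon h conditional on each outer
    state, and the empirical loss quantile gives that state's VaR.
    """
    z = rng.standard_normal(n_outer)
    s_t = s0 * np.exp((mu - 0.5 * sigma**2) * t_outer
                      + sigma * np.sqrt(t_outer) * z)
    var_t = np.empty(n_outer)
    for i, s in enumerate(s_t):
        zi = rng.standard_normal(n_inner)
        s_h = s * np.exp((mu - 0.5 * sigma**2) * h + sigma * np.sqrt(h) * zi)
        var_t[i] = -np.quantile(s_h - s, 1 - alpha)  # loss quantile
    return s_t, var_t  # distribution of future VaR across outer scenarios
```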



Journal ArticleDOI
TL;DR: In this article, the authors identify violations of the weak-form market efficiency hypothesis for comparable cryptoassets that are conditional on market segmentation and those conditional on benchmarks, using daily frequency data of 57 cryptocurrencies that account for more than 90% of the total market capitalization (market cap).
Abstract: With initial coin offerings and token offerings remaining at the forefront of alternative investments, the study of peer groups can be important for comparing investors’ tastes and preferences for particular classes of cryptocurrency on a more equal footing. The aim of this paper is to identify violations of the weak-form market efficiency hypothesis for comparable cryptoassets that are conditional on market segmentation and those conditional on benchmarks. We use daily frequency data of 57 cryptocurrencies that account for more than 90% of the total market capitalization (market cap). We construct seven thematic market cap indexes that are able to represent the whole cryptocurrency universe. Against this background, we test for the presence of four empirical anomalies: risk premium, leverage, regime switch and calendar effects, both across and within these benchmark indexes. The main results support the existence of a switch between two states and positive excess returns toward the end of the week for both cases. Our methodology and findings contribute to the emerging literature on introducing active and passive portfolio management strategies that track benchmark crypto indexes.
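
As a flavor of the calendar-effect analysis, a minimal day-of-week tabulation is sketched below, assuming a DatetimeIndex-ed series of daily returns for one of the thematic indexes; names are ours.

```python
import pandas as pd

def weekday_effect(index_returns: pd.Series) -> pd.Series:
    """Mean daily return by weekday for a crypto benchmark index.

    A positive end-of-week mean would be consistent with the
    calendar effect reported in the paper.
    """
    return index_returns.groupby(index_returns.index.day_name()).mean()
```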

Journal ArticleDOI
TL;DR: In this article, the authors apply a measure of statistical unusualness, the Mahalanobis distance, to assess the plausibility of the Fed's stress scenarios, and show how the Fed can minimally modify its scenarios to render them marginally plausible in a Gaussian world.
Abstract: In light of the Covid-19 crisis, the Federal Reserve (Fed) has carried out stress tests to assess whether major banks have sufficient capital to ensure their viability should a new and perhaps unprecedented crisis emerge. The Fed argues that the scenarios underpinning these stress tests are severe but plausible, yet they have not offered any evidence or framework for measuring the plausibility of their scenarios. If the scenarios are indeed plausible, it makes sense for banks to retain enough capital to withstand their occurrence. If, however, the scenarios are not reasonably plausible, banks will have deployed capital less productively than they otherwise could have, thereby impairing credit expansion and economic growth. The authors apply a measure of statistical unusualness, called the Mahalanobis distance, to assess the plausibility of the Fed’s stress scenarios. A first pass of this analysis, based on conventional statistical assumptions, reveals that the Fed’s scenarios are not even remotely plausible. However, the authors offer two modifications to their initial analysis that increase the scenarios’ plausibility. First, they show how the Fed can minimally modify their scenarios to render them marginally plausible in a Gaussian world. And second, they show how to evaluate the plausibility of the Fed’s scenarios by replacing the theoretical world of normality with a distribution that is empirically grounded.
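
A minimal sketch of the Mahalanobis-distance plausibility check, under the conventional Gaussian assumption the authors start from; the function name and interface are ours.

```python
import numpy as np
from scipy.stats import chi2

def scenario_plausibility(scenario, mean, cov):
    """Mahalanobis distance of a stress scenario and its Gaussian p-value.

    Under multivariate normality the squared distance is chi-squared
    with d degrees of freedom, so the survival function gives the
    probability of observing a scenario at least this unusual.
    """
    diff = np.asarray(scenario, dtype=float) - np.asarray(mean, dtype=float)
    d2 = diff @ np.linalg.solve(np.asarray(cov, dtype=float), diff)
    return np.sqrt(d2), chi2.sf(d2, df=len(diff))
```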

Journal ArticleDOI
TL;DR: In this paper, the authors compare the out-of-sample forecasting performance of regression tree ensembles with that of univariate linear regression models and find that the trees do not show better forecasting ability than a first-order autoregressive benchmark model or univariate linear regressions.
Abstract: This paper investigates whether classification and regression tree ensemble algorithms such as bagging, random forests and boosting improve on traditional parametric models for forecasting the equity risk premium. In particular, we work with European Monetary Union (EMU) data for the period from its foundation in 2000 to 2020. The paper first compares the monthly out-of-sample forecasting ability of multiple economic and technical variables using univariate linear regression models and regression tree techniques. The results obtained suggest that regression trees do not show better forecasting ability than a first-order autoregressive benchmark model and univariate linear regressions. The paper then analyses asset allocation strategies with regression trees and checks whether these can select the best economic predictors to form dynamic portfolios composed of two assets: a risk-free asset and an equity index. The results indicate that trading strategies built with two or three economic predictors selected with boosting and random forest algorithms can generate economic value for a risk-averse investor with a quadratic utility function.
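
A minimal sketch of an expanding-window out-of-sample forecasting exercise with a tree ensemble, in the spirit of the comparison above; the window scheme, names and hyperparameters are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def oos_forecasts(X, y, min_train=120):
    """Recursive out-of-sample forecasts of the equity risk premium.

    X holds lagged economic/technical predictors, y the realized
    premium; at each month the forest is refit on all data up to t.
    """
    preds = []
    for t in range(min_train, len(y)):
        model = RandomForestRegressor(n_estimators=300, random_state=0)
        model.fit(X[:t], y[:t])
        preds.append(model.predict(X[t:t + 1])[0])
    return np.array(preds)
```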

Journal ArticleDOI
TL;DR: In this paper, the authors shrink correlation and volatility separately and evaluate the predictive power of this approach, finding economically and statistically significant gains from applying more shrinkage to correlations than to volatilities.
Abstract: Beta is used in many applications, ranging from asset pricing tests to cost of capital estimation, investment management and risk management. Beta needs to be estimated, and shrinkage to its cross-sectional average value of 1 is often applied to reduce estimation error. Since beta is the product of the return correlation of a security with the market and its return volatility relative to that of the market, we shrink correlation and volatility separately and evaluate the predictive power of this approach. We find economically and statistically significant gains from applying more shrinkage to correlations than to volatilities.
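
The decomposition underlying the approach is easy to sketch: beta is the return correlation times the relative volatility, and each piece can be shrunk separately. The targets and weights below are placeholders; in the paper the shrinkage targets would be cross-sectional averages.

```python
import numpy as np

def shrunk_beta(r_stock, r_market, corr_target=0.3, vol_target=1.0,
                w_corr=0.5, w_vol=0.2):
    """Beta from separately shrunk correlation and relative volatility.

    beta = corr(stock, market) * (sigma_stock / sigma_market).
    Each piece is shrunk linearly toward its target, with heavier
    shrinkage on the correlation, as the paper's findings suggest.
    """
    corr = np.corrcoef(r_stock, r_market)[0, 1]
    vol_ratio = np.std(r_stock) / np.std(r_market)
    corr_shrunk = (1 - w_corr) * corr + w_corr * corr_target
    vol_shrunk = (1 - w_vol) * vol_ratio + w_vol * vol_target
    return corr_shrunk * vol_shrunk
```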

Journal ArticleDOI
TL;DR: In this paper, the authors introduce several measures of systemic fragility of euro area banks and sovereigns using a procedure for consistent estimation of individual and joint default risk; the measures not only capture the dynamic dependence between sovereigns and banks but also rank banks by their systemic risk contribution via a leave-one-out approach and quantify cross-sectoral contagion.
Abstract: This paper introduces several measures of systemic fragility of euro area banks and sovereigns using a procedure for a consistent estimation of individual and joint default risk. Our measures not only capture the dynamic dependence between sovereigns and banks but also provide rankings of banks and sovereigns according to their systemic risk contribution by applying a leave-one-out approach and quantifying cross-sectoral contagion. Our analysis documents a rise in banking systemic fragility in the euro area from the onset of the subprime crisis, which was followed by an increase in sovereign systemic risk after Lehman Brothers filed for bankruptcy in September 2008. Our ranking measure manages to detect that Greece, Portugal and Ireland made the largest systemic fragility contributions during the sovereign debt crisis in the euro area. We also find that a hypothetical default of one of the safest countries in the euro area (eg, Germany) would have far-reaching detrimental consequences for the stability of the banking system. These results have important policy implications and add to our understanding of systemic risk of sovereigns and banks.
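
The leave-one-out ranking idea can be sketched generically: an entity's contribution is the drop in the joint risk measure when that entity is removed. The risk functional below is abstract, standing in for the paper's joint default-risk measure.

```python
def leave_one_out_ranking(entities, systemic_risk):
    """Rank entities by their leave-one-out systemic risk contribution.

    `systemic_risk` maps a list of entities to a scalar risk measure;
    here it is an abstract callable, not the paper's estimator.
    """
    full = systemic_risk(entities)
    contributions = {e: full - systemic_risk([x for x in entities if x != e])
                     for e in entities}
    return sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
```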

Journal ArticleDOI
TL;DR: In this article, the authors introduce four new nonparametric estimators that are applicable given a bivariate random sample, three of which employ results on concomitants of order statistics, while the fourth is novel in the way it uses saddlepoint approximations to invert the empirical (bivariate) moment generating function in order to recover the conditional distribution.
Abstract: Two forms of CoVaR have recently been introduced in the literature for measuring systemic risk, differing on whether or not the conditioning is on a set of measure zero. We focus on the former, and make allusions to the possibility of analogous results holding for the latter. After reviewing maximum likelihood estimation (MLE) and quantile regression methods, we introduce four new nonparametric estimators that are applicable given a bivariate random sample. Three of these employ results on concomitants of order statistics, while the fourth is novel in the way it uses saddlepoint approximations to invert the empirical (bivariate) moment generating function in order to recover the conditional distribution. All estimators are shown to be consistent under mild regularity conditions, and asymptotic normality is established for the saddlepoint-based estimator using M-estimation arguments. Simulations shed light on the quality of the finite-sample-based estimators, and the methodology is illustrated on a real data set. One surprising result to emerge is that, in spite of its asymptotic optimality, the MLE does not always dominate the remaining estimators in terms of basic accuracy measures such as absolute relative error. This finding may have important implications for practitioners seeking to make accurate CoVaR inferences.
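
For reference, the familiar quantile-regression CoVaR estimator that the paper reviews (before introducing its nonparametric alternatives) can be sketched as follows; variable names are ours.

```python
import numpy as np
import statsmodels.api as sm

def covar_quantile_regression(x_inst, y_system, alpha=0.05):
    """CoVaR of the system conditional on institution distress.

    Fits the alpha-quantile of system returns as a linear function
    of the institution's return, then plugs in the institution's
    own VaR (its alpha-quantile).
    """
    X = sm.add_constant(np.asarray(x_inst))
    fit = sm.QuantReg(np.asarray(y_system), X).fit(q=alpha)
    b = np.asarray(fit.params)
    var_inst = np.quantile(x_inst, alpha)
    return b[0] + b[1] * var_inst
```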



Journal ArticleDOI
TL;DR: In this paper, the authors present a risk model built from the stock return betas and a small number of style factors and macro-sector indicator functions as explanatory variables in a cross-sectional regression, leading to a covariance structure that incorporates the stocks' characteristics and has good conditioning properties, yielding robust optimization problems.
Abstract: This paper presents a novel, practical approach to risk management for multifactor equity investment strategies. Our approach lies in the construction of a cross-sectional risk model using the stock return betas and a small number of style factors and macro-sector indicator functions as explanatory variables in a cross-sectional regression. The model leads to a covariance structure that incorporates the stocks’ characteristics in an intuitive fashion and has good conditioning properties that lead to robust optimization problems. Various portfolio constructions are analyzed in detail, and some concrete examples are provided.
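
The core estimation step, a per-date cross-sectional regression of stock returns on exposures, can be sketched as follows; the interface and names are ours.

```python
import numpy as np

def cross_sectional_factor_returns(returns_t, exposures_t):
    """One date's cross-sectional regression of stock returns on exposures.

    `exposures_t` stacks beta, style factors and sector dummies
    column-wise; the OLS coefficients are that period's factor
    returns, and the residuals feed the specific-risk estimates.
    """
    coef, *_ = np.linalg.lstsq(exposures_t, returns_t, rcond=None)
    residuals = returns_t - exposures_t @ coef
    return coef, residuals
```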

Journal ArticleDOI
TL;DR: In this article, the authors analyze the realized exit cashflows of individual portfolio companies in a joint modeling framework that describes both the exit timing and the exit performance, and apply it in a Monte Carlo simulation example to demonstrate the suitability of their approach in a risk management context.
Abstract: Risk perception in private equity is notoriously difficult, as the cashflow patterns associated with private capital funds are not well understood at the underlying deal level. This paper analyzes the realized exit cashflows of individual portfolio companies in a joint modeling framework that describes both the exit timing and the exit performance. Specifically, we choose an exit timing model suited to the interval-censored nature of private equity deal data and an approach for the exit multiple (ie, the performance) appropriate for the high numbers of company defaults observed in private equity. The corresponding parametric joint model is estimated using the maximum likelihood method for a buyout and venture capital data set and applied in a Monte Carlo simulation example to demonstrate the suitability of our approach in a risk management context. The improved insights offered by risk analysis tools that can incorporate detailed company-level information may be of particular benefit to undiversified private equity fund investors.
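
A toy version of the joint simulation idea, with a Weibull exit time and a zero-inflated lognormal multiple standing in for the paper's estimated joint model; distributions and parameters are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_deals(n, p_default=0.25, shape=1.5, scale=5.0,
                   mu=0.5, sigma=0.9):
    """Joint simulation of deal exit timing and exit multiple.

    Holding period is Weibull-distributed in years; the exit multiple
    is zero with probability p_default (a company write-off) and
    lognormal otherwise.
    """
    holding_years = scale * rng.weibull(shape, n)
    defaults = rng.random(n) < p_default
    multiple = np.where(defaults, 0.0, rng.lognormal(mu, sigma, n))
    return holding_years, multiple
```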


Journal ArticleDOI
TL;DR: In this article, a nonlinear analysis of the logarithmic returns of the S&P 500 index is performed and the moving Lyapunov exponent is used as a dynamic indicator of stability.
Abstract: Predicting major downturns in financial markets is a popular topic among researchers. Improving the models used for this could benefit individuals, investment banks and financial institutions. The latest developments in econophysics provide additional forecasting tools that may aid this endeavor. This paper introduces an innovative method to identify early warnings for major declines in the Standard & Poor’s 500 (S&P 500) index. This method performs a nonlinear analysis of the logarithmic returns of the index and then uses the moving Lyapunov exponent as a dynamic indicator of stability. The results show that the fluctuating behavior of the moving Lyapunov exponent forms spikes, which may act as warning signals since they precede all significant events that have caused major drops in the S&P 500 index over the past 20 years, including the dot-com bubble, the Great Recession and the Covid-19 pandemic.
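
A rough sketch of a largest-Lyapunov-exponent estimate in the Rosenstein spirit is given below; applied over rolling windows it yields a "moving" exponent. This is a simplification (for example, it does not exclude temporally close neighbors), not the paper's exact procedure.

```python
import numpy as np

def largest_lyapunov(x, dim=3, lag=1, horizon=10):
    """Crude largest-Lyapunov-exponent estimate from a scalar series.

    Delay-embeds the series, pairs each point with its nearest
    neighbor, and fits the average log rate at which pairs diverge.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * lag
    emb = np.column_stack([x[i * lag:i * lag + n] for i in range(dim)])
    usable = n - horizon
    dists = np.linalg.norm(emb[:usable, None] - emb[None, :usable], axis=2)
    np.fill_diagonal(dists, np.inf)          # a point is not its own neighbor
    nn = dists.argmin(axis=1)                # nearest-neighbor indices
    divergence = []
    for k in range(1, horizon + 1):
        d = np.linalg.norm(emb[np.arange(usable) + k] - emb[nn + k], axis=1)
        divergence.append(np.mean(np.log(d[d > 0])))
    # Slope of mean log-divergence versus time is the exponent estimate.
    return np.polyfit(np.arange(1, horizon + 1), divergence, 1)[0]
```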


Journal ArticleDOI
TL;DR: In this paper, the authors study the statistical problem of estimating the capture ratio based on a finite number of observations of a fund's returns, derive the asymptotic distribution of the estimator and use it to test whether one fund has a capture ratio that is statistically significantly higher than another's.
Abstract: The capture ratio is a widely used investment performance measure. We study the statistical problem of estimating the capture ratio based on a finite number of observations of a fund’s returns. We derive the asymptotic distribution of the estimator and use it for testing whether one fund has a capture ratio that is statistically significantly higher than another’s. We also perform hypothesis tests with real-world hedge fund data. Our analysis raises concerns regarding the models and sample sizes used for estimating capture ratios in practice.
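
One common definition of the capture ratio (conventions vary, for example arithmetic versus compounded means) can be sketched as follows; the function name and convention are ours.

```python
import numpy as np

def capture_ratio(fund, benchmark):
    """Capture ratio: upside capture divided by downside capture.

    Each capture is the mean fund return over periods when the
    benchmark is up (respectively down), divided by the mean
    benchmark return over those same periods.
    """
    fund, benchmark = np.asarray(fund), np.asarray(benchmark)
    up, down = benchmark > 0, benchmark < 0
    upside = fund[up].mean() / benchmark[up].mean()
    downside = fund[down].mean() / benchmark[down].mean()
    return upside / downside
```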