
Showing papers in "Journal of Financial Econometrics in 2014"


Journal ArticleDOI
TL;DR: In this article, the authors study the price impact of order book events and show that, over short time intervals, price changes are mainly driven by order flow imbalance, the imbalance between supply and demand at the best bid and ask prices, with a linear price impact whose slope is inversely proportional to market depth.
Abstract: We study the price impact of order book events - limit orders, market orders and cancelations - using the NYSE TAQ data for 50 U.S. stocks. We show that, over short time intervals, price changes are mainly driven by the order flow imbalance, defined as the imbalance between supply and demand at the best bid and ask prices. Our study reveals a linear relation between order flow imbalance and price changes, with a slope inversely proportional to the market depth. These results are shown to be robust to seasonality effects, and stable across time scales and across stocks. We argue that this linear price impact model, together with a scaling argument, implies the empirically observed "square-root" relation between price changes and trading volume. However, the relation between price changes and trade volume is found to be noisy and less robust than the one based on order flow imbalance.

225 citations
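The paper's core relation can be sketched in a few lines of numpy: simulate mid-price changes that respond linearly to order flow imbalance (OFI) with slope 1/(2·depth), then recover the slope by OLS. The depth, noise level, and sample size below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

depth = 5000.0                 # assumed average depth at the best quotes (shares)
n = 2000                       # number of short time intervals
ofi = rng.normal(0.0, 1000.0, n)              # order flow imbalance per interval
true_slope = 1.0 / (2.0 * depth)              # linear impact slope ~ 1 / (2 * depth)
dp = true_slope * ofi + rng.normal(0.0, 0.05, n)   # mid-price changes plus noise

# OLS slope of price change on OFI recovers the impact coefficient
slope_hat = np.cov(ofi, dp, ddof=1)[0, 1] / np.var(ofi, ddof=1)
```

In the paper's empirical version, the slope is estimated interval by interval and compared with the reciprocal of observed depth; here the relation is imposed by construction to show the estimator.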


Journal ArticleDOI
TL;DR: In this article, a flexible point-mass mixture model is proposed for serially dependent positive-valued variables that realize a non-trivial proportion of zero outcomes, together with a semiparametric specification test tailored to such distributions.
Abstract: We propose a novel approach to model serially dependent positive-valued variables which realize a non-trivial proportion of zero outcomes. This is a typical phenomenon in financial time series observed at high frequencies, such as cumulated trading volumes. We introduce a flexible point-mass mixture distribution and develop a semiparametric specification test explicitly tailored for such distributions. Moreover, we propose a new type of multiplicative error model based on a zero-augmented distribution, which incorporates an autoregressive binary choice component and thus captures the (potentially different) dynamics of both zero occurrences and of strictly positive realizations. Applying the proposed model to high-frequency cumulated trading volumes of both liquid and illiquid NYSE stocks, we show that the model captures the dynamic and distributional properties of the data well and is able to correctly predict future distributions. Copyright The Author, 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com, Oxford University Press.

50 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the hedging of options when the underlying asset price is exposed to the possibility of jumps of random size and derive a new, static spanning relation between a given option and a continuum of shorter-term options written on the same asset.
Abstract: We consider the hedging of options when the underlying asset price is exposed to the possibility of jumps of random size. Working in a single factor Markovian setting, we derive a new, static spanning relation between a given option and a continuum of shorter-term options written on the same asset. We implement this static relation using a finite set of shorter-term options and use Monte Carlo simulation to determine the hedging error. We compare this hedging error to that of a delta hedging strategy based on daily rebalancing in the underlying futures. The simulation results show that the two types of hedging strategies generate comparable performance under purely continuous asset price dynamics, but that our static hedge strongly outperforms delta hedging when the underlying asset price process contains random jumps. When we compare the hedging effectiveness of the two types of strategies using over six years of data on S&P 500 index options, we find that a static hedge using just five call options outperforms daily delta hedging with the underlying futures. This result lends empirical support for the existence of random jumps in the movement of the S&P 500 index.

41 citations


Journal ArticleDOI
TL;DR: In this paper, the conditional autoregressive expectile class of models, used to implicitly model ES, is generalized to incorporate information on the intraday range, leading to direct estimation and one-step-ahead forecasts of expectiles and, subsequently, of expected shortfall.
Abstract: Intraday sources of data have proved to be effective for dynamic volatility and tail risk estimation. Expected shortfall (ES) is a tail risk measure, which is now recommended by the Basel Committee, involving a conditional expectation that can be semi-parametrically estimated via an asymmetric sum of squares function. The conditional autoregressive expectile class of model, used to implicitly model ES, is generalized to incorporate information on the intraday range. An asymmetric Gaussian density model error formulation allows a likelihood to be developed that leads to direct estimation and one-step-ahead forecasts of expectiles and, subsequently, of ES. Adaptive Markov chain Monte Carlo sampling schemes are employed for estimation, while their performance is assessed via a simulation study. The proposed models compare favorably with a large range of competitors in an empirical study forecasting seven financial return series over a 10-year period.

37 citations
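The expectile underlying CARE-type models can be computed by a simple fixed-point iteration on the asymmetric least-squares first-order condition. This is a generic sketch of the expectile itself, not the article's dynamic range-based model; the sample and levels are arbitrary.

```python
import numpy as np

def expectile(x, tau, iters=100):
    # tau-expectile via fixed-point iteration on the asymmetric
    # least-squares first-order condition: e = sum(w*x)/sum(w),
    # with weight tau above e and (1 - tau) below.
    e = x.mean()
    for _ in range(iters):
        w = np.where(x > e, tau, 1.0 - tau)
        e = np.sum(w * x) / np.sum(w)
    return e

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)
e_mid = expectile(x, 0.5)    # the 0.5-expectile equals the sample mean
e_low = expectile(x, 0.05)   # a tail expectile, the object behind expectile-based ES
```

Expectiles sit between the mean and the corresponding quantile in extremity, which is why a tail expectile can be mapped into expected shortfall.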


Journal ArticleDOI
TL;DR: In this paper, an infinite hidden Markov model (iHMM) is proposed to detect, date stamp, and estimate speculative bubbles in the NASDAQ stock market, which can capture the complex nonlinear dynamics of bubble behaviors by allowing for an infinite number of regimes.
Abstract: This article proposes an infinite hidden Markov model (iHMM) to detect, date stamp, and estimate speculative bubbles. Three features make this new approach attractive to practitioners. First, the iHMM is capable of capturing the complex nonlinear dynamics of bubble behaviors because it allows for an infinite number of regimes. Second, implementing this procedure is straightforward because bubbles are detected, dated, and estimated simultaneously in a coherent Bayesian framework. Third, because the iHMM assumes hierarchical structures, it is parsimonious and superior in out-of-sample forecasts. This model and extensions of this model are applied to the NASDAQ stock market. The in-sample posterior analysis and out-of-sample predictions find evidence of explosive dynamics during the dot-com bubble period. A model comparison shows that the iHMM is strongly supported by the data compared with finite hidden Markov models.

36 citations


Journal ArticleDOI
Ai Deng
TL;DR: In this paper, a new asymptotic framework is used to provide finite sample approximations for various statistics in the spurious return predictive regression analyzed by Ferson, Sarkissian, and Simin (2003a).
Abstract: A new asymptotic framework is used to provide finite sample approximations for various statistics in the spurious return predictive regression analyzed by Ferson, Sarkissian, and Simin (2003a). Our theory explains all the findings of Ferson, Sarkissian, and Simin (2003a) and confirms the theoretical possibility of a spurious regression bias. The theory developed in the article has important implications with respect to existing inferential theories in predictive regressions. We also propose a simple diagnostic test to detect potential spurious regression bias in empirical analysis. The test is applied to four variants of the S&P 500 monthly stock returns and the six Fama-French benchmark portfolio monthly returns.

33 citations
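The spurious-regression mechanism can be illustrated with a small Monte Carlo in the spirit of (but much simpler than) the article's framework: returns contain a persistent expected-return component, the predictor is an independent but equally persistent AR(1), and the conventional OLS t-test on the predictive slope rejects far too often. The persistence, variance shares, and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
rho, T, n_sim = 0.98, 100, 500
rejections = 0
for _ in range(n_sim):
    mu = np.zeros(T)            # persistent expected-return component
    x = np.zeros(T)             # independent, persistent predictor
    for t in range(1, T):
        mu[t] = rho * mu[t - 1] + rng.standard_normal()
        x[t] = rho * x[t - 1] + rng.standard_normal()
    # returns: persistent component plus noise of comparable variance
    r = mu + np.sqrt(1.0 / (1.0 - rho ** 2)) * rng.standard_normal(T)
    y = r[1:] - r[1:].mean()
    z = x[:-1] - x[:-1].mean()  # regress returns on the lagged predictor
    b = z @ y / (z @ z)
    resid = y - b * z
    se = np.sqrt(resid @ resid / (len(y) - 2) / (z @ z))  # iid-error OLS s.e.
    if abs(b / se) > 1.96:
        rejections += 1
reject_rate = rejections / n_sim   # well above the nominal 5% level
```

The overrejection arises because the serially correlated residual meets a persistent regressor, exactly the configuration the article's diagnostic test is designed to flag.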


Journal ArticleDOI
TL;DR: In this paper, an extension of the Heterogeneous Autoregressive model is proposed that incorporates jumps into the dynamics of ex post volatility measures; the model is further extended to include the credit default swap (CDS) and the variance risk premium (VRP) in the dynamics of the jump size and intensity.
Abstract: The volatility of financial returns is characterized by rapid and large increments. We propose an extension of the Heterogeneous Autoregressive model to incorporate jumps into the dynamics of the ex post volatility measures. Using the realized range measures of 36 NYSE stocks, we show that there is a positive probability of jumps in volatility. A common factor in the volatility jumps is shown to be related to a set of financial covariates (such as variance risk premium (VRP), S&P500 volume, credit default swap (CDS), and federal fund rates). The CDS on U.S. banks and VRP have predictive power on expected jump moves, thus confirming the common interpretation that sudden and large increases in equity volatility can be anticipated by credit deterioration of the U.S. bank sector as well as changes in the market expectations of future risks. Finally, the model is extended to incorporate the CDS and the VRP in the dynamics of the jump size and intensity.

33 citations
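A minimal HAR-RV sketch (without the article's jump component or realized-range measures) shows the model's daily/weekly/monthly cascade: simulate realized variance from a HAR recursion with illustrative coefficients, then recover the structure by OLS.

```python
import numpy as np

rng = np.random.default_rng(3)
b0, bd, bw, bm = 0.1, 0.4, 0.3, 0.2   # illustrative HAR coefficients
T = 3000
rv = np.ones(T)
for t in range(22, T - 1):
    rv_w = rv[t - 4:t + 1].mean()     # weekly (5-day) average
    rv_m = rv[t - 21:t + 1].mean()    # monthly (22-day) average
    rv[t + 1] = max(b0 + bd * rv[t] + bw * rv_w + bm * rv_m
                    + 0.05 * rng.standard_normal(), 1e-4)  # keep RV positive

# Build the HAR regressors and fit by OLS
X = np.array([[1.0, rv[t], rv[t - 4:t + 1].mean(), rv[t - 21:t + 1].mean()]
              for t in range(22, T - 1)])
y = rv[23:T]
beta = np.linalg.lstsq(X, y, rcond=None)[0]   # [const, daily, weekly, monthly]
```

The article augments this baseline with a jump term whose size and intensity are driven by the CDS and VRP covariates.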


Journal ArticleDOI
TL;DR: In this article, the authors investigate how the conditional quantiles of future returns and volatility of financial assets vary with various measures of ex post variation in asset prices as well as option-implied volatility, and show for the S&P 500 and WTI Crude Oil futures contracts that simple linear quantile regressions for returns and heterogeneous quantile autoregressions for realized volatility capture the dynamics of the respective conditional distributions very well.
Abstract: This paper investigates how the conditional quantiles of future returns and volatility of financial assets vary with various measures of ex post variation in asset prices as well as option-implied volatility. We work in the flexible quantile regression framework and rely on recently developed model-free measures of integrated variance, upside and downside semivariance, and jump variation. Our results for the S&P 500 and WTI Crude Oil futures contracts show that simple linear quantile regressions for returns and heterogeneous quantile autoregressions for realized volatility perform very well in capturing the dynamics of the respective conditional distributions, both in absolute terms as well as relative to a couple of well-established benchmark models. The models can therefore serve as useful risk management tools for investors trading the futures contracts themselves or various derivative contracts written on realized volatility.

32 citations
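The building block of these models is the quantile ("pinball") loss: the τ-pinball loss is minimized at the τ-quantile. A quick numerical check, on simulated data with an arbitrary sample size and grid, makes the connection concrete.

```python
import numpy as np

def pinball_loss(u, tau):
    # Asymmetric "check" loss underlying quantile regression.
    return np.mean(np.maximum(tau * u, (tau - 1.0) * u))

rng = np.random.default_rng(4)
x = rng.standard_normal(50_000)

# Scan candidate values g and compare the pinball minimizer
# with the sample 95% quantile.
grid = np.linspace(-3.0, 3.0, 601)
losses = [pinball_loss(x - g, 0.95) for g in grid]
q_hat = grid[np.argmin(losses)]
```

A linear quantile regression simply replaces the constant candidate g with a linear function of regressors and minimizes the same loss.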


Journal ArticleDOI
Fuchun Li
TL;DR: In this article, the authors provide a nonparametric test for asymmetric comovements between stock market returns, in the sense that stock market downturns will lead to stronger comovements than market upturns.
Abstract: Based on a new approach for measuring the comovements between stock market returns, we provide a nonparametric test for asymmetric comovements in the sense that stock market downturns will lead to stronger comovements than market upturns.

32 citations
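A simple way to see the asymmetry being tested is the exceedance correlation: correlation computed only on joint tail observations, compared between the lower and upper tails. This counting version is purely illustrative; the article's contribution is a formal nonparametric test. The data-generating process below, with a common factor that loads more heavily in down markets, is an assumption chosen to produce the asymmetry.

```python
import numpy as np

def exceedance_corr(x, y, q, lower=True):
    # Correlation restricted to observations where both series are in
    # the same tail (below the q-quantile, or above the (1-q)-quantile).
    if lower:
        m = (x <= np.quantile(x, q)) & (y <= np.quantile(y, q))
    else:
        m = (x >= np.quantile(x, 1.0 - q)) & (y >= np.quantile(y, 1.0 - q))
    return np.corrcoef(x[m], y[m])[0, 1]

rng = np.random.default_rng(5)
n = 100_000
z = rng.standard_normal(n)                     # common market factor
e1, e2 = rng.standard_normal(n), rng.standard_normal(n)
load = np.where(z < 0, 1.5, 0.5)               # heavier loading in down markets
x, y = load * z + e1, load * z + e2
rho_down = exceedance_corr(x, y, 0.25, lower=True)
rho_up = exceedance_corr(x, y, 0.25, lower=False)
```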


Journal ArticleDOI
TL;DR: In this paper, the authors introduce cointegrating mixed data sampling regressions and derive asymptotic limits under substantially more general conditions than the extant theoretical literature allows, allowing for regressors and an error term with general correlation patterns, both serially and mutually.
Abstract: Parsimoniously specified distributed lag models have enjoyed a resurgence under the MIDAS moniker (Mixed Data Sampling) as a feasible way to model time series observed at very different sampling frequencies. I introduce cointegrating mixed data sampling regressions. I derive asymptotic limits under substantially more general conditions than the extant theoretical literature allows. In addition to the possibility of cointegrated series, I allow for regressors and an error term with general correlation patterns, both serially and mutually. The nonlinear least squares estimator still obtains consistency to the minimum mean-squared forecast error parameter vector, and the asymptotic distribution of the coefficient vector is Gaussian with a possibly singular variance. I propose a novel test of a MIDAS null against a more general and possibly infeasible alternative mixed-frequency specification. An empirical application to nowcasting global real economic activity using monthly financial covariates illustrates the utility of the approach.

26 citations
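A standard MIDAS ingredient is the exponential Almon lag polynomial, which compresses an arbitrary number of high-frequency lags into two parameters. The lag count and parameter values below are illustrative assumptions.

```python
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    # Exponential Almon lag polynomial: the usual parsimonious MIDAS
    # weighting scheme; weights are positive and sum to one.
    k = np.arange(1.0, n_lags + 1.0)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

# A MIDAS regressor is a weighted sum of high-frequency lags:
rng = np.random.default_rng(6)
hf_lags = rng.standard_normal((500, 20))     # e.g., 20 daily lags per period
w = exp_almon_weights(0.1, -0.05, 20)        # declining weights, illustrative values
midas_reg = hf_lags @ w                      # one low-frequency regressor
```

The article's cointegrating regressions then estimate the weight parameters jointly with the slope by nonlinear least squares.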


Journal ArticleDOI
TL;DR: In this article, a stepwise test called Step-SPA(κ) is proposed for multiple inequalities testing, which has asymptotic control of a generalized familywise error rate: the probability of at least κ false rejections.
Abstract: We propose a stepwise test, Step-SPA(κ), for multiple inequalities testing. This test is analogous to the Step-SPA test of Hsu, Hsu, and Kuan (2010, J. Empirical Econ., 17, 471–484) but has asymptotic control of a generalized familywise error rate: the probability of at least κ false rejections. This test improves Step-RC(κ) of Romano and Wolf (2007, Ann. Stat., 35, 1378–1408) by avoiding the least favorable configuration used in Step-RC(κ). We show that the proposed Step-SPA(κ) test is consistent and more powerful than Step-RC(κ) under any power notion defined in Romano and Wolf (2005, Econometrica, 73, 1237–1282). An empirical study on Commodity Trading Advisor fund performance is then provided to illustrate the Step-SPA(κ) test. Finally, we extend Step-SPA(κ) to a procedure that asymptotically controls the false discovery proportion, the ratio of the number of false rejections over the number of total rejections, and show that it is more powerful than the corresponding procedure proposed by Romano and Wolf (2007, Ann. Stat., 35, 1378–1408).

Journal ArticleDOI
TL;DR: In this paper, an econometric framework was developed to investigate the structure of dependence between random variables and to test whether it changes over time, based on the computation of the conditional probability that a random variable is lower than a given quantile, when another random variable was also lower than its corresponding quantile.
Abstract: This article develops an econometric framework to investigate the structure of dependence between random variables and to test whether it changes over time. Our approach is based on the computation—over both a test and a benchmark period—of the conditional probability that a random variable is lower than a given quantile, when another random variable is also lower than its corresponding quantile, for any set of prespecified quantiles. Time-varying conditional quantiles are modeled via regression quantiles. The conditional probability is estimated through a simple OLS regression. We illustrate the methodology by investigating the impact of the crises of the 1990s and 2000s on the major Latin American equity markets returns. Our results document significant increases in equity return comovements during crisis times. ( JEL: C14, C22, G15)
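The article's basic object, the probability that one variable falls below its quantile given that another does, can be estimated by simple counting in the unconditional case. The sketch below contrasts an independent pair (where the probability equals the quantile level) with a correlated pair; the correlation structure is an illustrative assumption, and the article's conditional version replaces sample quantiles with regression quantiles.

```python
import numpy as np

def quantile_exceedance_prob(x, y, alpha):
    # P(x below its alpha-quantile | y below its alpha-quantile),
    # estimated by counting joint tail events.
    qx, qy = np.quantile(x, alpha), np.quantile(y, alpha)
    below_y = y <= qy
    return np.mean((x <= qx)[below_y])

rng = np.random.default_rng(7)
n = 50_000
z = rng.standard_normal(n)                                 # common factor
x_ind, y_ind = rng.standard_normal(n), rng.standard_normal(n)
x_dep = 0.8 * z + 0.6 * rng.standard_normal(n)             # correlated pair
y_dep = 0.8 * z + 0.6 * rng.standard_normal(n)
p_ind = quantile_exceedance_prob(x_ind, y_ind, 0.10)   # ~ 0.10 under independence
p_dep = quantile_exceedance_prob(x_dep, y_dep, 0.10)   # well above 0.10
```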

Journal ArticleDOI
TL;DR: In this article, the authors consider three alternative sources of information about volatility potentially useful in predicting daily asset returns: past daily returns, past intraday returns, and a volatility index based on observed option prices.
Abstract: This study considers three alternative sources of information about volatility potentially useful in predicting daily asset returns: past daily returns, past intraday returns, and a volatility index based on observed option prices. For each source of information the study begins with several alternative models, and then works from the premise that all of these models are false to construct a single improved predictive distribution for daily S&P 500 index returns. The criterion for improvement is the log predictive score, equivalent to the average probability ascribed ex ante to observed returns. The first implication of the premise is that conventional models within each class can be improved. The paper accomplishes this by introducing flexibility in the conditional distribution of returns, in volatility dynamics, and in the relationship between observed and latent volatility. The second implication of the premise is that model pooling will provide prediction superior to the best of the improved models. The paper accomplishes this by constructing ex ante optimal pools, which place a premium on diversification in much the same way as optimal portfolios. All procedures are strictly out-of-sample, recapitulating one-step-ahead predictive distributions that could have been constructed for daily returns beginning January 2, 1992, and ending March 31, 2010. The prediction probabilities of the optimal pool exceed those of the conventional models by as much as 7.75%. The optimal pools place substantial weight on models using each of the three sources of information about volatility.
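The log-score pooling idea can be sketched with two deliberately misspecified Gaussian predictive densities whose scales bracket the truth; the pool weight that maximizes the average log score is interior, which is the diversification effect the study exploits. All distributions here are illustrative, not the study's models.

```python
import numpy as np

rng = np.random.default_rng(8)
r = 1.5 * rng.standard_normal(5000)            # "observed" returns, sd = 1.5

def norm_pdf(x, s):
    # Gaussian density with mean 0 and standard deviation s.
    return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

p1, p2 = norm_pdf(r, 1.0), norm_pdf(r, 2.0)    # two misspecified candidates
weights = np.linspace(0.0, 1.0, 101)
scores = [np.mean(np.log(w * p1 + (1 - w) * p2)) for w in weights]
w_star = weights[int(np.argmax(scores))]       # interior weight beats both models
```

Because the log score is concave in the weight, the grid search is a reliable stand-in for the study's optimization.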

Journal ArticleDOI
TL;DR: In this paper, a semiparametric density forecast for daily financial returns from high-frequency intraday data is proposed, which is based on the theory of unifractal processes.
Abstract: In this article we propose a new method for producing semiparametric density forecasts for daily financial returns from high-frequency intraday data. The daily return density is estimated directly from intraday observations that have been appropriately rescaled using results from the theory of unifractal processes. The method preserves information concerning both the magnitude and sign of the intraday returns and allows them to influence all properties of the daily return density via the use of nonparametric specifications for the daily return distribution. The out-of-sample density forecasting performance of the method is shown to be competitive with existing methods based on intraday data for exchange rate and equity index data.

Journal ArticleDOI
TL;DR: In this article, the authors investigate the large sample properties of general L-statistics based on weakly dependent data and apply the results to a natural L-statistic estimator of distortion risk measures, proving that the estimator is strongly consistent and asymptotically normal.
Abstract: For the class of distortion risk measures, a natural estimator has the form of L-statistics. In this article, we investigate the large sample properties of general L-statistics based on weakly dependent data and apply them to our estimator. Under certain regularity conditions, which are somewhat weaker than the ones found in the literature, we prove that the estimator is strongly consistent and asymptotically normal. Furthermore, we give a consistent estimator for its asymptotic variance using spectral density estimators of a related stationary sequence. The behavior of the estimator is examined using simulation in two examples: inverse-gamma autoregressive stochastic volatility model and GARCH(1,1).
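The natural L-statistic estimator weights the order statistics by increments of the distortion function g; with g(u) = min(u/α, 1) it reduces to expected shortfall at level α. A sketch on i.i.d. standard normal data (the article's results cover the weakly dependent case):

```python
import numpy as np

def distortion_risk(x, g):
    # L-statistic estimator of a distortion risk measure: weight the
    # ascending order statistics by increments of the distortion g.
    xs = np.sort(x)
    n = len(xs)
    w = g(np.arange(1, n + 1) / n) - g(np.arange(0, n) / n)
    return -np.sum(w * xs)      # sign convention: losses reported as positive

rng = np.random.default_rng(9)
x = rng.standard_normal(200_000)
alpha = 0.05
es_hat = distortion_risk(x, lambda u: np.minimum(u / alpha, 1.0))
# For N(0,1), expected shortfall at the 5% level is phi(1.6449)/0.05, about 2.063
```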

Journal ArticleDOI
TL;DR: In this article, a new class of conditional skewness models based on conditional quantile regressions is proposed, much inspired by the work of Hal White; to handle multiple horizons, quantile MIDAS regressions are used, which amount to direct rather than iterated forecasting.
Abstract: We study a new class of conditional skewness models based on conditional quantile regressions. The approach is much inspired by the work of Hal White. To handle multiple horizons, I consider quantile MIDAS regressions, which amount to direct, as opposed to iterated, forecasting of conditional skewness. Using this quantile-based approach, I document that the conditional asymmetry of returns varies significantly over time. The asymmetry is most relevant for the characterization of downside risk. Besides empirical evidence, I also report simulation results which highlight the costs associated with mis-specifying downside risk in the presence of conditional skewness.
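A quantile-based (Bowley-type) skewness measure illustrates the idea: build skewness from three quantiles, so that replacing sample quantiles with conditional quantile forecasts yields a conditional skewness measure in the spirit of the article. The quantile level and test distributions below are illustrative choices.

```python
import numpy as np

def quantile_skewness(x, p=0.25):
    # Robust skewness from three quantiles: zero for symmetric
    # distributions, positive for right-skewed ones.
    q_lo, q_mid, q_hi = np.quantile(x, [p, 0.5, 1.0 - p])
    return (q_hi + q_lo - 2.0 * q_mid) / (q_hi - q_lo)

rng = np.random.default_rng(10)
sym = rng.standard_normal(100_000)         # symmetric: measure near 0
skewed = rng.exponential(1.0, 100_000)     # right-skewed: measure positive
```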

Journal ArticleDOI
Nick Taylor
TL;DR: In this paper, the authors investigate the economic value of multivariate volatility forecasting ability using a testing framework that assesses the quality of competing methods from a conditional investment perspective, and find that investors are willing to pay a significant premium for knowledge of the dynamics of volatility, though the magnitude of this premium varies over time and depends on risk preferences and economic conditions.
Abstract: We investigate the economic value of multivariate volatility forecasting ability using a testing framework that assesses the quality of competing methods from a conditional investment perspective. This approach provides a novel means of assessing the benefits of using a particular set of volatility forecasts. Applying the framework to U.S. bond and stock futures markets, we find that investors are willing to pay a significant premium for knowledge of the dynamics of volatility, though the magnitude of this premium varies over time and depends on risk preferences and economic conditions. The latter variation implies that selection of appropriate forecasting methods should be a conditional exercise.

Journal ArticleDOI
TL;DR: In this article, an overview of the usefulness of the regime switching approach for building various kinds of bond pricing models and of the roles played by the regimes in these models is presented.
Abstract: This article proposes an overview of the usefulness of the regime switching approach for building various kinds of bond pricing models and of the roles played by the regimes in these models. Both default-free and defaultable bonds are considered. The regimes can be used to capture stochastic drifts and/or volatilities, to represent discrete target rates, to incorporate business cycles or crises, to introduce contagion, to reproduce zero lower bound spells, or to evaluate the impact of standard or nonstandard monetary policies. From a technical point of view, we stress the key role of Markov chains, Compound Autoregressive (Car) processes, Regime Switching Car processes and multihorizon Laplace transforms.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the impact of business cycle-related market volatility on expected returns and developed a model that enables them to decompose the market volatility into two components: business cycle related volatility and unrelated volatility.
Abstract: This article investigates the impact of business cycle-related market volatility on expected returns. We develop a model that enables us to decompose the market volatility into two components: business cycle-related volatility and unrelated volatility. Then, the risk–return relation is assessed based on these two components. Our empirical results demonstrate that business cycle-related market volatility is priced in the stock market, whereas the unrelated component is not. Furthermore, our procedure identifies a few periods of high volatility that are not related to recessions, including the 1987 crash and the 1998 Russian default.

Journal ArticleDOI
TL;DR: In this article, the authors show that continuous volatility is a key driver of medium to long-term risk-return relationships and jumps do not predict future medium or longterm excess returns.
Abstract: Realized variance can be broken down into a continuous volatility component and a jump component. We show that these two components have very different degrees of power of prediction on future long-term excess stock market returns. Namely, continuous volatility is a key driver of medium to long-term risk–return relationships. In contrast, jumps do not predict future medium or long-term excess returns. We use inference methods that are robust to persistent predictors in a multi-horizon setup. Specifically, we use a rescaled Student-t to test for significant risk–return relationship and simulate its exact behavior under the null in the case of multiple regressors with different levels of persistence. We also perform tests of equality of the risk–return relationship at multiple horizons. We do not find evidence against a proportional relationship between long-term continuous volatility and future returns.
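The continuous/jump decomposition is commonly built from realized variance and bipower variation: bipower variation is robust to jumps, so their difference isolates the jump contribution. The simulation below (one day of 1-second returns with 1% diffusive volatility and a single 0.5% jump) is an illustrative sketch, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 23_400
r = (0.01 / np.sqrt(n)) * rng.standard_normal(n)   # diffusive returns
r[n // 2] += 0.005                                  # add one jump

rv = np.sum(r ** 2)                                 # realized variance
# Bipower variation: robust to jumps, estimates integrated variance
bv = (np.pi / 2.0) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))
jump_part = max(rv - bv, 0.0)                       # ~ squared jump size
cont_part = bv                                      # ~ continuous component
```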

Journal ArticleDOI
TL;DR: In this article, a technique for online estimation of spot volatility for high-frequency data is developed, which works directly on the transaction data and updates the volatility estimate immediately after the occurrence of a new transaction.
Abstract: A technique for online estimation of spot volatility for high-frequency data is developed. The algorithm works directly on the transaction data and updates the volatility estimate immediately after the occurrence of a new transaction. Furthermore, a nonlinear market microstructure noise model is proposed that reproduces several stylized facts of high-frequency data. A computationally efficient particle filter is used that allows for the approximation of the unknown efficient prices and, in combination with a recursive EM algorithm, for the estimation of the volatility curve. We neither assume that the transaction times are equidistant nor do we use interpolated prices. We also make a distinction between volatility per time unit and volatility per transaction and provide estimators for both. More precisely, we use a model with random time change where spot volatility is decomposed into spot volatility per transaction times the trading intensity, thus highlighting the influence of trading intensity on volatility.

Journal ArticleDOI
TL;DR: In this paper, the wild bootstrap method is applied to realized volatility estimators defined on pre-averaged returns, where the preaveraging is done over all possible non-overlapping blocks of consecutive observations.
Abstract: The main contribution of this paper is to propose bootstrap methods for realized volatility-like estimators defined on pre-averaged returns. In particular, we focus on the pre-averaged realized volatility estimator proposed by Podolskij and Vetter (2009). This statistic can be written (up to a bias correction term) as the (scaled) sum of squared pre-averaged returns, where the pre-averaging is done over all possible non-overlapping blocks of consecutive observations. Pre-averaging reduces the influence of the noise and allows for realized volatility estimation on the pre-averaged returns. The non-overlapping nature of the pre-averaged returns implies that these are asymptotically independent, but possibly heteroskedastic. This motivates the application of the wild bootstrap in this context. We provide a proof of the first order asymptotic validity of this method for percentile and percentile-t intervals. Our Monte Carlo simulations show that the wild bootstrap can improve the finite sample properties of the existing first order asymptotic theory provided we choose the external random variable appropriately. We use empirical work to illustrate its use in practice.
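The wild-bootstrap idea can be sketched on a plain realized-variance statistic: resample each return as r·η with an external variable η satisfying E[η²] = 1, then form a percentile interval. This shows the flavor of the method only; the paper's construction works on pre-averaged returns with a bias correction and a carefully chosen external variable.

```python
import numpy as np

rng = np.random.default_rng(12)
r = 0.01 * rng.standard_normal(1_000)       # stand-in for (pre-averaged) returns
rv_hat = np.sum(r ** 2)                     # realized-variance-type statistic

B = 2_000
rv_boot = np.empty(B)
for b in range(B):
    eta = rng.standard_normal(len(r))       # N(0,1) external variable: E[eta^2] = 1
    rv_boot[b] = np.sum((r * eta) ** 2)     # wild-bootstrap replicate
lo, hi = np.quantile(rv_boot, [0.025, 0.975])   # percentile interval
```

Multiplying each term independently preserves the (possibly heteroskedastic) cross-sectional structure without assuming identical distributions, which is why the wild bootstrap fits the asymptotically independent pre-averaged blocks.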

Journal ArticleDOI
TL;DR: In this article, a model for the joint dynamics of the S&P 100 index and the VXO implied volatility index is proposed. But the model is based on a simple continuous-time no-arbitrage asset pricing framework that combines semi-analytic pricing with a nonlinear specification for the market price of risk.
Abstract: We introduce a new model for the joint dynamics of the S&P 100 index and the VXO implied volatility index. The nonlinear specification of the variance process is designed to simultaneously accommodate extreme persistence and strong mean reversion. This grants superior forecasting power over the standard (linear) specifications for implied variance forecasting. We obtain statistically significant predictions in an out-of-sample exercise spanning several market crashes starting 1986 and including the recent subprime crisis. The model specification is possible through a simple continuous-time no-arbitrage asset pricing framework that combines semi-analytic pricing with a nonlinear specification for the market price of risk.

Journal ArticleDOI
TL;DR: In this article, a one-step semiparametric estimator requiring no initial input is proposed, and the estimation procedure is illustrated via a generalized autoregressive conditional heteroskedasticity (GARCH) model.
Abstract: In maximum likelihood estimation, the real but unknown innovation distribution is often replaced by a nonparametric estimate, and thus the estimation procedure becomes semiparametric. These semiparametric approaches generally involve two steps: a first step that uses an initial estimate of the model parameter to produce a residual sample, and a second step that uses the residuals to estimate the likelihood, which is subsequently maximized to obtain the final estimate of the model parameter. Therefore, the characteristics of the initial input estimator may be carried over to the final semiparametric estimator, and the performance of the semiparametric estimator will be impaired if the input estimate is deficient. In this article we study a one-step semiparametric estimator where no initial input is necessary. The estimation procedure is illustrated via a generalized autoregressive conditional heteroskedasticity (GARCH) model. Asymptotic properties of the estimator are established, and finite sample performance of the estimator is evaluated via simulation. The results suggest that the proposed one-step semiparametric estimator avoids significant drawbacks of its two-step counterparts. (JEL: C02, C22, C51)

Journal ArticleDOI
TL;DR: In this paper, research support from the Spanish Secretary of Education (SEJ2011-0001) and from the Spanish Plan Nacional de I+D+I (SEJ2007-2908 and ECO2012-31748) is gratefully acknowledged.
Abstract: Research support from the Spanish Secretary of Education (SEJ2011-0001) is gratefully acknowledged. Research support from the Spanish Plan Nacional de I+D+I (SEJ2007-2908 and ECO2012-31748) is gratefully acknowledged.


Journal ArticleDOI
TL;DR: In this paper, the authors propose new tests for long memory in stationary and nonstationary time series possibly perturbed by short-run noise, all based on semiparametric estimators that exploit the self-similarity property of long memory processes.
Abstract: In this article, we propose new tests for long memory in stationary and nonstationary time series possibly perturbed by short-run noise. The tests are all based on semiparametric estimators and exploit the self-similarity property of long memory processes. We offer simulation results that show good size properties of the tests, with power against spurious long memory. To mitigate the potential size distortion in small samples that arises from temporal aggregation, we use a bootstrap procedure. An empirical study of daily log-squared return series of exchange rates and DJIA30 stocks shows that there is indeed long memory in exchange rate volatility and stock return volatility. (JEL C14, C22, C43)
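The self-similarity property the tests exploit can be illustrated with the classic aggregated-variance method: for a long-memory process the variance of m-period sample means scales like m^(2H-2). For i.i.d. noise the log-log slope recovers H = 0.5; the sample size and aggregation levels below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(13)
x = rng.standard_normal(2 ** 16)                 # i.i.d. noise: true H = 0.5

ms = np.array([2 ** k for k in range(2, 9)])     # aggregation levels 4..256
v = []
for m in ms:
    # Non-overlapping block means at aggregation level m
    blocks = x[: (len(x) // m) * m].reshape(-1, m).mean(axis=1)
    v.append(blocks.var(ddof=1))
slope = np.polyfit(np.log(ms), np.log(v), 1)[0]  # slope = 2H - 2
H_hat = 1.0 + slope / 2.0
```

A long-memory series (H > 0.5) would show a flatter decay of the aggregated variances, which is the signature the article's semiparametric tests formalize.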