Author

Eric Ghysels

Bio: Eric Ghysels is an academic researcher from the University of North Carolina at Chapel Hill. The author has contributed to research in topics: Volatility (finance) & Stochastic volatility. The author has an h-index of 66 and has co-authored 374 publications receiving 20,523 citations. Previous affiliations of Eric Ghysels include Pennsylvania State University & Université catholique de Louvain.


Papers
Book Chapter
TL;DR: The Black-Scholes model predicts a flat term structure of implied volatilities; in reality, the term structure of at-the-money implied volatilities is typically upward sloping when short-term volatilities are low and downward sloping when they are high.
Abstract: The class of stochastic volatility (SV) models has its roots in both mathematical finance and financial econometrics. In fact, several variations of SV models originated from research looking at very different issues. Volatility plays a central role in the pricing of derivative securities. The Black-Scholes model for the pricing of a European option is by far the most widely used formula, even when the underlying assumptions are known to be violated. The Black-Scholes model predicts a flat term structure of volatilities. In reality, the term structure of at-the-money implied volatilities is typically upward sloping when short-term volatilities are low and downward sloping when they are high. The Black-Scholes model is taken as a reference point from which several notions of volatility are presented. Several stylized facts regarding volatility and option prices are also presented. These sections set the scene for a formal framework defining stochastic volatility. The chapter then introduces the statistical models of stochastic volatility.
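
To make the flat-term-structure point concrete, here is a minimal Python sketch (hypothetical inputs, not code from the chapter) that prices a European call under Black-Scholes and numerically inverts the formula for implied volatility: because the model has a single constant volatility parameter, every maturity recovers the same implied volatility.

```python
# Minimal sketch, assuming standard Black-Scholes inputs; illustrative only.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    """Numerically invert the pricing formula to recover implied volatility."""
    return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-6, 5.0)

# Under the model, every maturity yields the same sigma: a flat term structure.
S, K, r, sigma = 100.0, 100.0, 0.02, 0.20
for T in (0.25, 0.5, 1.0, 2.0):
    print(T, round(implied_vol(bs_call(S, K, T, r, sigma), S, K, T, r), 4))
```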

1,466 citations

Journal Article
TL;DR: This article evaluates the role of various volatility specifications, such as multiple stochastic volatility (SV) factors and jump components, in the appropriate modeling of equity return distributions.
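
For concreteness, the following is a minimal simulation sketch of the kind of specification being compared: an Euler-discretized square-root stochastic volatility model with compound Poisson jumps in returns. All parameter values are hypothetical, and this is not the paper's estimation procedure.

```python
# Illustrative sketch: square-root SV with normally distributed jump sizes.
import numpy as np

rng = np.random.default_rng(0)
n, dt = 252, 1.0 / 252
kappa, theta, sigma_v, rho = 5.0, 0.04, 0.5, -0.5   # variance dynamics
lam, mu_j, sig_j = 0.5, -0.05, 0.10                 # jump intensity and sizes

v = np.empty(n + 1); v[0] = theta
ret = np.empty(n)
for t in range(n):
    z1 = rng.standard_normal()
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal()
    # diffusive return plus a compound Poisson jump component
    ret[t] = np.sqrt(v[t] * dt) * z1 + rng.normal(mu_j, sig_j, rng.poisson(lam * dt)).sum()
    # CIR variance dynamics, truncated at zero
    v[t + 1] = max(v[t] + kappa * (theta - v[t]) * dt
                   + sigma_v * np.sqrt(v[t] * dt) * z2, 0.0)
```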

974 citations

Journal Article
TL;DR: The authors explore mixed data sampling (henceforth MIDAS) regression models, which involve time series data sampled at different frequencies, and provide empirical evidence on microstructure noise and volatility forecasting.
Abstract: We explore mixed data sampling (henceforth MIDAS) regression models. The regressions involve time series data sampled at different frequencies. Volatility and related processes are our prime focus, though the regression method has wider applications in macroeconomics and finance, among other areas. The regressions combine recent developments regarding estimation of volatility and a not-so-recent literature on distributed lag models. We study various lag structures to parameterize the regressions parsimoniously and relate them to existing models. We also propose several new extensions of the MIDAS framework. The paper concludes with an empirical section where we provide further evidence and new results on the risk–return trade-off. We also report empirical evidence on microstructure noise and volatility forecasting.
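
A minimal sketch of the core MIDAS idea, assuming an exponential Almon lag polynomial (one parsimonious lag structure of the kind the paper studies); the data and variable names are hypothetical.

```python
# Sketch: MIDAS regression with a two-parameter exponential Almon lag.
import numpy as np
from scipy.optimize import minimize

def exp_almon(theta1, theta2, K):
    """Exponential Almon weights over K high-frequency lags; they sum to one."""
    k = np.arange(1, K + 1)
    w = np.exp(theta1 * k + theta2 * k**2)
    return w / w.sum()

def midas_fit(y, X_hf, K):
    """y: low-frequency target (T,); X_hf: (T, K), row t holding the K most
    recent high-frequency observations available at low-frequency date t."""
    def sse(params):
        beta0, beta1, t1, t2 = params
        return np.sum((y - beta0 - beta1 * (X_hf @ exp_almon(t1, t2, K))) ** 2)
    return minimize(sse, x0=[0.0, 1.0, 0.0, -0.01], method="Nelder-Mead")
```

Here y might be a monthly volatility measure and each row of X_hf the K most recent daily squared returns; the appeal is that four parameters govern an arbitrarily long high-frequency lag window.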

807 citations

Journal Article
TL;DR: In this paper, a new estimator that forecasts monthly variance with past daily squared returns, the Mixed Data Sampling (or MIDAS) approach, is introduced; using it, the authors find a significantly positive relation between risk and return in the stock market.
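
As an illustration only (the paper's exact weighting scheme may differ), here is a MIDAS-style monthly variance forecast built from past daily squared returns, using beta-polynomial lag weights, a common parameterization in this literature.

```python
# Hypothetical sketch: monthly variance forecast from K daily squared returns.
import numpy as np

def beta_weights(a, b, K):
    """Beta-density lag weights on an equally spaced grid in (0, 1)."""
    x = np.arange(1, K + 1) / (K + 1)
    w = x ** (a - 1) * (1 - x) ** (b - 1)
    return w / w.sum()

def midas_variance_forecast(daily_sq_returns, a=1.0, b=5.0, K=252):
    """Weighted sum of the K most recent daily squared returns, scaled to a
    monthly horizon of roughly 22 trading days."""
    lags = np.asarray(daily_sq_returns)[-K:][::-1]  # most recent lag first
    return 22 * beta_weights(a, b, K) @ lags
```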

703 citations

Journal Article
TL;DR: This paper revisited the relation between stock market volatility and macroeconomic activity using a new class of component models that distinguish short-run from long-run movements and found that macroeconomic fundamentals play a significant role even at short horizons.
Abstract: We revisit the relation between stock market volatility and macroeconomic activity using a new class of component models that distinguish short-run from long-run movements. We formulate models in which the long-term component is driven by inflation and industrial production growth; in terms of pseudo out-of-sample prediction, these models are at par with more traditional time series volatility models at a one-quarter horizon and outperform them at longer horizons. Hence, imputing economic fundamentals into volatility models pays off in terms of long-horizon forecasting. We also find that macroeconomic fundamentals play a significant role even at short horizons.
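
A hedged sketch of the two-component idea, assuming a multiplicative decomposition in the spirit of GARCH-MIDAS: a slow-moving long-run component driven by a MIDAS average of lagged macro growth rates, multiplied by a unit-mean short-run GARCH(1,1) component. Names and parameter values are illustrative, not the paper's code.

```python
# Illustrative two-component volatility sketch (GARCH-MIDAS-style).
import numpy as np

def long_run_component(macro_lags, weights, m, theta):
    """tau = exp(m + theta * weighted sum of lagged macro growth rates)."""
    return np.exp(m + theta * (weights @ macro_lags))

def short_run_component(returns, tau, alpha, beta):
    """Unit-mean GARCH(1,1) on returns deflated by the long-run component tau
    (an array aligned with returns, constant within each low-frequency period)."""
    g = np.ones(len(returns))
    for t in range(1, len(returns)):
        g[t] = (1 - alpha - beta) + alpha * returns[t - 1] ** 2 / tau[t - 1] \
               + beta * g[t - 1]
    return g

# Total conditional variance on day t is tau[t] * g[t]: macro fundamentals move
# the slow component while the GARCH term captures short-run fluctuations.
```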

696 citations


Cited by
Book Chapter
TL;DR: This paper provides a concise overview of time series analysis in the time and frequency domains, with numerous references for further reading.
Abstract: Any series of observations ordered along a single dimension, such as time, may be thought of as a time series. The emphasis in time series analysis is on studying the dependence among observations at different points in time. What distinguishes time series analysis from general multivariate analysis is precisely the temporal order imposed on the observations. Many economic variables, such as GNP and its components, price indices, sales, and stock returns are observed over time. In addition to being interested in the contemporaneous relationships among such variables, we are often concerned with relationships between their current and past values, that is, relationships over time.
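
To make "dependence among observations at different points in time" concrete, here is a small sketch computing the sample autocorrelation of a simulated AR(1) series (the data are purely illustrative).

```python
# Sketch: sample autocorrelation, the basic measure of temporal dependence.
import numpy as np

def sample_acf(x, k):
    """Sample autocorrelation of the series x at lag k."""
    x = np.asarray(x, dtype=float)
    xbar = x.mean()
    return np.sum((x[k:] - xbar) * (x[:-k] - xbar)) / np.sum((x - xbar) ** 2)

rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):                  # AR(1): x_t = 0.8 x_{t-1} + e_t
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()
print([round(sample_acf(x, k), 2) for k in (1, 2, 5)])  # decays roughly as 0.8**k
```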

9,919 citations

Posted Content
TL;DR: The authors describe the advantages of these studies and suggest how they can be improved; they also provide aids in judging the validity of the inferences drawn, and advocate design complications such as multiple treatment and comparison groups and multiple pre- or post-intervention observations.
Abstract: Using research designs patterned after randomized experiments, many recent economic studies examine outcome measures for treatment groups and comparison groups that are not randomly assigned. By using variation in explanatory variables generated by changes in state laws, government draft mechanisms, or other means, these studies obtain variation that is readily examined and is plausibly exogenous. This paper describes the advantages of these studies and suggests how they can be improved. It also provides aids in judging the validity of inferences they draw. Design complications such as multiple treatment and comparison groups and multiple pre- or post-intervention observations are advocated.
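
A minimal difference-in-differences sketch in the spirit of the designs the paper discusses, with one treatment and one comparison group observed before and after an intervention. The data are simulated; in practice the treatment indicator would come from, e.g., variation in state laws.

```python
# Simulated difference-in-differences example; all numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
treated = rng.integers(0, 2, n)          # group indicator
post = rng.integers(0, 2, n)             # before/after the intervention
effect = 2.0                             # true treatment effect
y = (1.0 + 0.5 * treated + 0.3 * post
     + effect * treated * post + rng.standard_normal(n))

# DiD: change over time in the treatment group minus change in the comparison group
did = ((y[(treated == 1) & (post == 1)].mean() - y[(treated == 1) & (post == 0)].mean())
       - (y[(treated == 0) & (post == 1)].mean() - y[(treated == 0) & (post == 0)].mean()))
print(did)  # close to 2.0 in large samples
```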

7,222 citations

Journal Article
TL;DR: This entry reviews P. Billingsley's monograph Convergence of Probability Measures, a standard reference on the weak convergence of probability measures.
Abstract: Convergence of Probability Measures. By P. Billingsley. Chichester, Sussex, Wiley, 1968. xii, 253 p. 9 1/4“. 117s.

5,689 citations

Report
TL;DR: In this article, explicit tests of the null hypothesis of no difference in the accuracy of two competing forecasts are developed; both asymptotic and exact finite-sample tests are proposed, evaluated, and illustrated.
Abstract: We propose and evaluate explicit tests of the null hypothesis of no difference in the accuracy of two competing forecasts. In contrast to previously developed tests, a wide variety of accuracy measures can be used (in particular, the loss function need not be quadratic and need not even be symmetric), and forecast errors can be non-Gaussian, nonzero mean, serially correlated, and contemporaneously correlated. Asymptotic and exact finite-sample tests are proposed, evaluated, and illustrated.
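
A simplified sketch of the test's large-sample statistic under squared-error loss (the paper allows much more general loss functions); this is one reading of the procedure, not the authors' code.

```python
# Sketch: Diebold-Mariano-type statistic for equal forecast accuracy.
import numpy as np
from scipy.stats import norm

def dm_test(e1, e2, h=1):
    """e1, e2: forecast error series from two competing forecasts; h: horizon.
    Squared-error loss is used here, though the test permits other losses."""
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2   # loss differential
    n = len(d)
    dbar = d.mean()
    # long-run variance with a rectangular window over h-1 autocovariances
    gammas = [np.sum((d[k:] - dbar) * (d[:-k] - dbar)) / n for k in range(1, h)]
    var_dbar = (np.sum((d - dbar) ** 2) / n + 2 * sum(gammas)) / n
    stat = dbar / np.sqrt(var_dbar)
    return stat, 2 * (1 - norm.cdf(abs(stat)))      # two-sided p-value
```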

5,628 citations

Journal Article
TL;DR: This paper found that the majority of managers would avoid initiating a positive NPV project if it meant falling short of the current quarter's consensus earnings, and more than three-fourths of the surveyed executives would give up economic value in exchange for smooth earnings.

4,341 citations