SciSpace (formerly Typeset)
Author

Francis X. Diebold

Bio: Francis X. Diebold is an academic researcher at the University of Pennsylvania. He has contributed to research in topics including Volatility (finance) and Exchange rate, has an h-index of 110, and has co-authored 368 publications receiving 74,723 citations. Previous affiliations of Francis X. Diebold include the International Monetary Fund and Duke University.


Papers
Book ChapterDOI
TL;DR: This paper provides a concise overview of time series analysis in the time and frequency domains with lots of references for further reading.
Abstract: Any series of observations ordered along a single dimension, such as time, may be thought of as a time series. The emphasis in time series analysis is on studying the dependence among observations at different points in time. What distinguishes time series analysis from general multivariate analysis is precisely the temporal order imposed on the observations. Many economic variables, such as GNP and its components, price indices, sales, and stock returns are observed over time. In addition to being interested in the contemporaneous relationships among such variables, we are often concerned with relationships between their current and past values, that is, relationships over time.

9,919 citations
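The temporal dependence emphasized in this abstract is usually summarized by the sample autocorrelation function. A minimal sketch in Python (the function name and the per-lag normalization are illustrative choices, not drawn from the chapter):

```python
import numpy as np

def sample_acf(x, max_lag=10):
    """Sample autocorrelations rho_k = gamma_k / gamma_0 for k = 1..max_lag,
    where gamma_k averages the lag-k products of deviations from the mean."""
    x = np.asarray(x, float)
    xbar = x.mean()
    gamma0 = np.mean((x - xbar) ** 2)  # lag-0 autocovariance (the variance)
    return np.array([np.mean((x[k:] - xbar) * (x[:-k] - xbar)) / gamma0
                     for k in range(1, max_lag + 1)])
```

For a series with no temporal dependence the estimated autocorrelations hover near zero; pronounced values at low lags are the signature of the dependence over time that time series analysis studies.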

Posted Content
TL;DR: The authors describe the advantages of these studies, suggest how they can be improved, and provide aids for judging the validity of the inferences they draw; design complications such as multiple treatment and comparison groups and multiple pre- or post-intervention observations are advocated.
Abstract: Using research designs patterned after randomized experiments, many recent economic studies examine outcome measures for treatment groups and comparison groups that are not randomly assigned. By using variation in explanatory variables generated by changes in state laws, government draft mechanisms, or other means, these studies obtain variation that is readily examined and is plausibly exogenous. This paper describes the advantages of these studies and suggests how they can be improved. It also provides aids in judging the validity of inferences they draw. Design complications such as multiple treatment and comparison groups and multiple pre- or post-intervention observations are advocated.

7,222 citations

ReportDOI
TL;DR: In this article, explicit tests of the null hypothesis of no difference in the accuracy of two competing forecasts are proposed; both asymptotic and exact finite-sample versions are developed, evaluated, and illustrated.
Abstract: We propose and evaluate explicit tests of the null hypothesis of no difference in the accuracy of two competing forecasts. In contrast to previously developed tests, a wide variety of accuracy measures can be used (in particular, the loss function need not be quadratic and need not even be symmetric), and forecast errors can be non-Gaussian, nonzero mean, serially correlated, and contemporaneously correlated. Asymptotic and exact finite-sample tests are proposed, evaluated, and illustrated.

5,628 citations
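The abstract's test compares mean loss differentials between two forecasts. A hypothetical sketch of the asymptotic version (the function name, the squared-error default, and the simple truncated autocovariance-sum variance estimator are illustrative assumptions, not the paper's exact construction):

```python
import numpy as np

def dm_test(e1, e2, h=1, loss=lambda e: e ** 2):
    """Sketch of a Diebold-Mariano-style statistic for equal forecast accuracy.

    e1, e2 : forecast-error arrays from two competing forecasts
    h      : forecast horizon; losses may be serially correlated up to lag h-1
    loss   : any loss function of the error (need not be quadratic or symmetric)
    """
    d = loss(np.asarray(e1, float)) - loss(np.asarray(e2, float))
    T = d.size
    dbar = d.mean()
    # Variance of the mean loss differential: autocovariances summed to lag h-1
    var = np.mean((d - dbar) ** 2)
    for k in range(1, h):
        var += 2 * np.mean((d[k:] - dbar) * (d[:-k] - dbar))
    return dbar / np.sqrt(var / T)  # approximately N(0,1) under the null
```

A large negative statistic favors the first forecast, a large positive one the second; the point of the construction is that only the loss differential series matters, so non-Gaussian, biased, and correlated errors are all accommodated.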

Posted Content
TL;DR: In this article, the authors provide a general framework for integration of high-frequency intraday data into the measurement, modeling and forecasting of daily and lower frequency volatility and return distributions.
Abstract: This paper provides a general framework for integration of high-frequency intraday data into the measurement, modeling and forecasting of daily and lower frequency volatility and return distributions. Most procedures for modeling and forecasting financial asset return volatilities, correlations, and distributions rely on restrictive and complicated parametric multivariate ARCH or stochastic volatility models, which often perform poorly at intraday frequencies. Use of realized volatility constructed from high-frequency intraday returns, in contrast, permits the use of traditional time series procedures for modeling and forecasting. Building on the theory of continuous-time arbitrage-free price processes and the theory of quadratic variation, we formally develop the links between the conditional covariance matrix and the concept of realized volatility. Next, using continuously recorded observations for the Deutschemark/Dollar and Yen/Dollar spot exchange rates covering more than a decade, we find that forecasts from a simple long-memory Gaussian vector autoregression for the logarithmic daily realized volatilities perform admirably compared to popular daily ARCH and related models. Moreover, the vector autoregressive volatility forecast, coupled with a parametric lognormal-normal mixture distribution implied by the theoretically and empirically grounded assumption of normally distributed standardized returns, gives rise to well-calibrated density forecasts of future returns, and correspondingly accurate quantile estimates. Our results hold promise for practical modeling and forecasting of the large covariance matrices relevant in asset pricing, asset allocation and financial risk management applications.

2,898 citations

Journal ArticleDOI
TL;DR: In this article, the authors provide a general framework for integration of high-frequency intraday data into the measurement, modeling, and forecasting of daily and lower frequency volatility and return distributions.
Abstract: This paper provides a general framework for integration of high-frequency intraday data into the measurement, modeling, and forecasting of daily and lower frequency volatility and return distributions. Most procedures for modeling and forecasting financial asset return volatilities, correlations, and distributions rely on restrictive and complicated parametric multivariate ARCH or stochastic volatility models, which often perform poorly at intraday frequencies. Use of realized volatility constructed from high-frequency intraday returns, in contrast, permits the use of traditional time series procedures for modeling and forecasting. Building on the theory of continuous-time arbitrage-free price processes and the theory of quadratic variation, we formally develop the links between the conditional covariance matrix and the concept of realized volatility. Next, using continuously recorded observations for the Deutschemark/Dollar and Yen/Dollar spot exchange rates covering more than a decade, we find that forecasts from a simple long-memory Gaussian vector autoregression for the logarithmic daily realized volatilities perform admirably compared to popular daily ARCH and related models. Moreover, the vector autoregressive volatility forecast, coupled with a parametric lognormal-normal mixture distribution implied by the theoretically and empirically grounded assumption of normally distributed standardized returns, gives rise to well-calibrated density forecasts of future returns, and correspondingly accurate quantile estimates. Our results hold promise for practical modeling and forecasting of the large covariance matrices relevant in asset pricing, asset allocation and financial risk management applications.

2,823 citations
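The realized volatility at the heart of both versions of this paper is model-free: square and sum the intraday log returns within a day. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def realized_vol(intraday_prices):
    """Daily realized volatility from high-frequency prices: the square root
    of the sum of squared intraday log returns, a model-free estimate of
    that day's quadratic variation."""
    logp = np.log(np.asarray(intraday_prices, float))
    r = np.diff(logp)               # intraday log returns
    return np.sqrt(np.sum(r ** 2))  # realized volatility for the day
```

The paper then models the logarithms of such daily series with a long-memory Gaussian vector autoregression; the point of the construction is that volatility becomes effectively observable, so standard time series tools apply.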


Cited by
Journal ArticleDOI
TL;DR: In this article, the authors consider pooling cross-section time series data for testing the unit-root hypothesis, and they show that the power of the panel-based unit-root test is dramatically higher than that of a separate unit-root test for each individual time series.

10,792 citations
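The pooling idea can be sketched as a hypothetical two-step procedure: run a Dickey-Fuller regression for each series and average the resulting t-statistics across the panel (an IPS-style average; the function names, the lack of lag augmentation, and the omission of panel critical values are all simplifying assumptions, not the article's exact test):

```python
import numpy as np

def df_tstat(y):
    """t-statistic on rho in the Dickey-Fuller regression
    dy_t = alpha + rho * y_{t-1} + e_t (no lag augmentation)."""
    y = np.asarray(y, float)
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (dy.size - 2)          # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)           # OLS covariance matrix
    return beta[1] / np.sqrt(cov[1, 1])

def panel_mean_tstat(panel):
    """Average of the individual Dickey-Fuller t-statistics; pooling across
    N series is what raises power relative to any single-series test."""
    return np.mean([df_tstat(y) for y in panel])
```

Strongly negative averages point away from a common unit root; in practice the average must be compared against panel-specific critical values, which this sketch omits.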

Christopher M. Bishop
01 Jan 2006
TL;DR: A textbook treatment of machine learning, covering probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, latent variables, sequential data, and combining models.
Abstract: Contents: Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: In this paper, a test of the null hypothesis that an observable series is stationary around a deterministic trend is proposed, where the series is expressed as the sum of a deterministic trend, a random walk, and a stationary error.

10,068 citations
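A statistic of the kind described, based on partial sums of residuals, can be sketched as follows (a simplified level-stationarity version with a Bartlett long-run variance; the names and lag choice are illustrative, and the trend-stationary case would replace demeaning with a trend regression):

```python
import numpy as np

def kpss_stat(y, lags=4):
    """Simplified KPSS-type statistic for stationarity around a level:
    eta = sum_t S_t^2 / (T^2 * s2_lr), where S_t are partial sums of the
    demeaned series and s2_lr is a Bartlett-weighted long-run variance."""
    e = np.asarray(y, float)
    e = e - e.mean()              # residuals from a regression on a constant
    T = e.size
    S = np.cumsum(e)              # partial-sum process
    s2 = e @ e / T                # lag-0 term of the long-run variance
    for k in range(1, lags + 1):  # Bartlett-kernel correction for serial correlation
        w = 1 - k / (lags + 1)
        s2 += 2 * w * (e[k:] @ e[:-k]) / T
    return np.sum(S ** 2) / (T ** 2 * s2)
```

Under stationarity the partial sums stay bounded and the statistic is small; a random-walk component makes the partial sums wander and the statistic large, which reverses the usual unit-root testing logic by putting stationarity under the null.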

Journal ArticleDOI
TL;DR: An ordered sequence of events or observations having a time component is called a time series; good examples are daily opening and closing stock prices, daily humidity, temperature, and pressure, and the annual gross domestic product of a country.
Abstract: Contents: Preface; 1. Difference Equations; 2. Lag Operators; 3. Stationary ARMA Processes; 4. Forecasting; 5. Maximum Likelihood Estimation; 6. Spectral Analysis; 7. Asymptotic Distribution Theory; 8. Linear Regression Models; 9. Linear Systems of Simultaneous Equations; 10. Covariance-Stationary Vector Processes; 11. Vector Autoregressions; 12. Bayesian Analysis; 13. The Kalman Filter; 14. Generalized Method of Moments; 15. Models of Nonstationary Time Series; 16. Processes with Deterministic Time Trends; 17. Univariate Processes with Unit Roots; 18. Unit Roots in Multivariate Time Series; 19. Cointegration; 20. Full-Information Maximum Likelihood Analysis of Cointegrated Systems; 21. Time Series Models of Heteroskedasticity; 22. Modeling Time Series with Changes in Regime; A. Mathematical Review; B. Statistical Tables; C. Answers to Selected Exercises; D. Greek Letters and Mathematical Symbols Used in the Text; Author Index; Subject Index.

10,011 citations
