ReportDOI

Comparing Predictive Accuracy

TL;DR: Explicit tests of the null hypothesis of no difference in the accuracy of two competing forecasts are proposed, in both asymptotic and exact finite-sample versions, and are evaluated and illustrated.
Abstract: We propose and evaluate explicit tests of the null hypothesis of no difference in the accuracy of two competing forecasts. In contrast to previously developed tests, a wide variety of accuracy measures can be used (in particular, the loss function need not be quadratic and need not even be symmetric), and forecast errors can be non-Gaussian, nonzero mean, serially correlated, and contemporaneously correlated. Asymptotic and exact finite-sample tests are proposed, evaluated, and illustrated.
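The test described in this abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's exact procedure: the published statistic uses a consistent estimate of the spectral density of the loss differential at frequency zero, which is approximated here by the sample autocovariances through lag h − 1 (a rectangular lag window). The function name and signature are illustrative.

```python
import numpy as np

def dm_test(e1, e2, loss=np.square, h=1):
    """Diebold-Mariano-style test sketch: compare two forecasts' accuracy.

    e1, e2 : forecast errors from the two competing forecasts.
    loss   : loss function applied to each error (need not be quadratic
             or symmetric, as the abstract emphasizes).
    h      : forecast horizon; autocovariances up to lag h-1 enter the
             long-run variance of the loss differential.
    """
    d = loss(np.asarray(e1)) - loss(np.asarray(e2))  # loss differential
    n = len(d)
    dbar = d.mean()
    lrv = np.var(d, ddof=0)                  # lag-0 autocovariance
    for k in range(1, h):                    # add autocovariances 1..h-1
        cov = np.mean((d[k:] - dbar) * (d[:-k] - dbar))
        lrv += 2.0 * cov
    return dbar / np.sqrt(lrv / n)           # approx. N(0,1) under H0
```

Positive values indicate the first forecast incurs larger average loss; the statistic is compared against standard normal critical values.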


Citations
Journal ArticleDOI
TL;DR: The theory of proper scoring rules on general probability spaces is reviewed and developed, and the intuitively appealing interval score is proposed as a utility function in interval estimation that addresses width as well as coverage.
Abstract: Scoring rules assess the quality of probabilistic forecasts, by assigning a numerical score based on the predictive distribution and on the event or value that materializes. A scoring rule is proper if the forecaster maximizes the expected score for an observation drawn from the distribution F if he or she issues the probabilistic forecast F, rather than G ≠ F. It is strictly proper if the maximum is unique. In prediction problems, proper scoring rules encourage the forecaster to make careful assessments and to be honest. In estimation problems, strictly proper scoring rules provide attractive loss and utility functions that can be tailored to the problem at hand. This article reviews and develops the theory of proper scoring rules on general probability spaces, and proposes and discusses examples thereof. Proper scoring rules derive from convex functions and relate to information measures, entropy functions, and Bregman divergences. In the case of categorical variables, we prove a rigorous version of the ...
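The interval score mentioned in the TL;DR has a simple closed form. The sketch below assumes the standard negatively oriented version (lower is better) for a central (1 − alpha) prediction interval: the score is the interval's width plus a penalty of 2/alpha times the distance by which the realized value falls outside the interval.

```python
def interval_score(lower, upper, x, alpha=0.05):
    """Negatively oriented interval score for a central (1-alpha) interval.

    Rewards narrow intervals (small width) but penalizes misses by
    (2/alpha) times the amount by which x lies outside [lower, upper],
    so it addresses width as well as coverage.
    """
    score = upper - lower                      # width term
    if x < lower:
        score += (2.0 / alpha) * (lower - x)   # penalty for undershooting
    elif x > upper:
        score += (2.0 / alpha) * (x - upper)   # penalty for overshooting
    return score
```

A value inside the interval scores just the width; a miss at the 95% level (alpha = 0.05) costs 40 times the overshoot distance on top of the width.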

4,644 citations

Journal ArticleDOI
TL;DR: The authors comprehensively reexamine the performance of variables that have been suggested by the academic literature to be good predictors of the equity premium and find that by and large, these models have predicted poorly both in-sample and out-of-sample (OOS) for 30 years now.
Abstract: Our article comprehensively reexamines the performance of variables that have been suggested by the academic literature to be good predictors of the equity premium. We find that by and large, these models have predicted poorly both in-sample (IS) and out-of-sample (OOS) for 30 years now; these models seem unstable, as diagnosed by their out-of-sample predictions and other statistics; and these models would not have helped an investor with access only to available information to profitably time the market.
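A common way to summarize the in-sample versus out-of-sample comparison described here is an out-of-sample R-squared: the fraction by which a predictor's squared forecast error beats a naive benchmark such as the prevailing historical mean. The sketch below is illustrative; the function name and benchmark choice are assumptions, not necessarily the article's exact statistic.

```python
import numpy as np

def oos_r2(actual, model_forecast, benchmark_forecast):
    """Out-of-sample R^2 sketch.

    Positive values mean the predictive variable improved on the
    benchmark out of sample; negative values mean it predicted worse.
    """
    actual = np.asarray(actual, dtype=float)
    sse_model = np.sum((actual - model_forecast) ** 2)
    sse_bench = np.sum((actual - benchmark_forecast) ** 2)
    return 1.0 - sse_model / sse_bench
```

A persistently negative value over an expanding evaluation window is the kind of evidence of poor and unstable out-of-sample performance the abstract describes.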

3,339 citations

Journal ArticleDOI
TL;DR: Although traditional ex-post evaluation criteria suggest that ARCH and stochastic volatility models forecast poorly, the authors show that these models in fact produce strikingly accurate interdaily forecasts of the latent volatility factor that is of interest in most financial applications.
Abstract: A voluminous literature has emerged for modeling the temporal dependencies in financial market volatility using ARCH and stochastic volatility models. While most of these studies have documented highly significant in-sample parameter estimates and pronounced intertemporal volatility persistence, traditional ex-post forecast evaluation criteria suggest that the models provide seemingly poor volatility forecasts. Contrary to this contention, we show that volatility models produce strikingly accurate interdaily forecasts for the latent volatility factor that would be of interest in most financial applications. New methods for improved ex-post interdaily volatility measurements based on high-frequency intradaily data are also discussed.
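The improved ex-post volatility measurement the abstract refers to is typically the realized variance: the sum of squared intradaily log returns over the day. A minimal sketch, assuming a series of intraday price observations:

```python
import numpy as np

def realized_variance(intraday_prices):
    """Realized variance sketch: sum of squared intradaily log returns.

    An ex-post measure of the day's latent volatility built from
    high-frequency intradaily data, against which interdaily volatility
    forecasts can be evaluated more fairly than against a single
    squared daily return.
    """
    logp = np.log(np.asarray(intraday_prices, dtype=float))
    returns = np.diff(logp)          # intradaily log returns
    return np.sum(returns ** 2)
```

Evaluating a GARCH-type forecast against this measure, rather than against the noisy squared daily return, is what reverses the "seemingly poor" verdict the abstract describes.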

3,174 citations

Journal ArticleDOI
TL;DR: In this paper, a consistent framework for conditional interval forecast evaluation is developed, which is crucial when higher-order moment dynamics are present, and is demonstrated on exchange rate forecasting procedures advocated in risk management.
Abstract: A complete theory for evaluating interval forecasts has not been worked out to date. Most of the literature implicitly assumes homoskedastic errors even when this is clearly violated, and proceeds by merely testing for correct unconditional coverage. Consequently, I set out to build a consistent framework for conditional interval forecast evaluation, which is crucial when higher-order moment dynamics are present. The new methodology is demonstrated in an application to the exchange rate forecasting procedures advocated in risk management.

2,307 citations

Journal ArticleDOI
TL;DR: In this article, the authors analyse the behaviour of two possible tests, and of modifications of these tests designed to circumvent shortcomings in the original formulations, and make a recommendation for one particular testing approach for practical applications.

1,760 citations

References
ReportDOI
TL;DR: In this article, a simple method of calculating a heteroskedasticity and autocorrelation consistent covariance matrix that is positive semi-definite by construction is described.
Abstract: This paper describes a simple method of calculating a heteroskedasticity and autocorrelation consistent covariance matrix that is positive semi-definite by construction. It also establishes consistency of the estimated covariance matrix under fairly general conditions.
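The method described here downweights higher-order sample autocovariances with Bartlett-kernel weights, which is what guarantees positive semi-definiteness by construction. A minimal univariate sketch (the matrix case weights the cross-product matrices in the same way; the function name is illustrative):

```python
import numpy as np

def newey_west_lrv(x, max_lag):
    """Newey-West-style long-run variance sketch (scalar case).

    Bartlett weights w_k = 1 - k/(max_lag+1) taper the sample
    autocovariances, keeping the estimate nonnegative by construction.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    lrv = np.mean(xc ** 2)                    # lag-0 autocovariance
    for k in range(1, max_lag + 1):
        w = 1.0 - k / (max_lag + 1.0)         # Bartlett weight
        gamma_k = np.sum(xc[k:] * xc[:-k]) / n
        lrv += 2.0 * w * gamma_k              # symmetric lags +/- k
    return lrv
```

This is the long-run variance estimator that plugs directly into serially correlated settings such as the forecast-comparison test above.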

18,117 citations

Journal ArticleDOI
TL;DR: In this article, the parameters of an autoregression are viewed as the outcome of a discrete-state Markov process, and an algorithm for drawing such probabilistic inference in the form of a nonlinear iterative filter is presented.
Abstract: This paper proposes a very tractable approach to modeling changes in regime. The parameters of an autoregression are viewed as the outcome of a discrete-state Markov process. For example, the mean growth rate of a nonstationary series may be subject to occasional, discrete shifts. The econometrician is presumed not to observe these shifts directly, but instead must draw probabilistic inference about whether and when they may have occurred based on the observed behavior of the series. The paper presents an algorithm for drawing such probabilistic inference in the form of a nonlinear iterative filter.
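The nonlinear iterative filter can be sketched for the simplest case: a two-state Markov-switching mean with Gaussian innovations. The shared innovation standard deviation and the two-state restriction are simplifying assumptions for illustration, not the paper's general autoregressive setup.

```python
import numpy as np

def hamilton_filter(y, mu, sigma, P, pi0):
    """Two-state switching-mean filter sketch.

    y     : observed series
    mu    : state-dependent means, shape (2,)
    sigma : common innovation standard deviation (simplification)
    P     : 2x2 transition matrix, P[i, j] = Pr(s_t = j | s_{t-1} = i)
    pi0   : initial state probabilities

    Returns filtered probabilities Pr(s_t = j | y_1, ..., y_t).
    """
    y, mu = np.asarray(y, float), np.asarray(mu, float)
    filt = np.zeros((len(y), 2))
    prob = np.asarray(pi0, float)
    for t, yt in enumerate(y):
        pred = prob @ P                           # predict next state
        dens = (np.exp(-0.5 * ((yt - mu) / sigma) ** 2)
                / (sigma * np.sqrt(2.0 * np.pi)))  # Gaussian likelihoods
        joint = pred * dens
        prob = joint / joint.sum()                # Bayes update
        filt[t] = prob
    return filt
```

Each iteration predicts the regime via the transition matrix, weights by the state-conditional likelihood of the new observation, and renormalizes, which is the nonlinear recursion the abstract refers to.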

9,189 citations

Journal ArticleDOI
01 Jan 1978

6,005 citations

Posted Content
TL;DR: In this article, a simple method of calculating a heteroskedasticity and autocorrelation consistent covariance matrix that is positive semi-definite by construction is described.
Abstract: This paper describes a simple method of calculating a heteroskedasticity and autocorrelation consistent covariance matrix that is positive semi-definite by construction. It also establishes consistency of the estimated covariance matrix under fairly general conditions.

5,822 citations

Journal ArticleDOI
TL;DR: In this article, the authors propose simple and directional likelihood-ratio tests for discriminating and choosing between two competing models whether the models are nonnested, overlapping or nested and whether both, one, or neither is misspecified.
Abstract: In this paper, we propose a classical approach to model selection. Using the Kullback-Leibler Information measure, we propose simple and directional likelihood-ratio tests for discriminating and choosing between two competing models whether the models are nonnested, overlapping or nested and whether both, one, or neither is misspecified. As a prerequisite, we fully characterize the asymptotic distribution of the likelihood ratio statistic under the most general conditions.
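The basic statistic for the strictly nonnested case can be sketched directly from per-observation log-likelihoods; the paper also derives variance-adjusted versions and the distributions for the nested and overlapping cases, none of which is shown here.

```python
import numpy as np

def vuong_stat(ll1, ll2):
    """Vuong-style likelihood-ratio statistic sketch (nonnested case).

    ll1, ll2 : per-observation log-likelihoods of two competing models.
    Under H0 (models equally close to the truth in Kullback-Leibler
    terms) the statistic is asymptotically standard normal; large
    positive values favor model 1, large negative values model 2.
    """
    m = np.asarray(ll1, float) - np.asarray(ll2, float)
    n = len(m)
    return np.sqrt(n) * m.mean() / m.std(ddof=0)
```

The statistic is directional, as the abstract says: it tells you not just that the models differ but which one is closer to the data-generating process.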

5,661 citations