
Forecast skill

About: Forecast skill is a research topic. Over its lifetime, 4,156 publications have been published on this topic, receiving 150,676 citations.


Papers
Journal ArticleDOI
TL;DR: In this paper, the mean absolute scaled error (MASE) was proposed as the standard measure for comparing forecast accuracy across multiple time series of different types, and was applied to data from the M-competition as well as the M3-competition.

3,870 citations
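
As a rough illustration of the measure this TL;DR refers to, here is a minimal sketch of MASE for a univariate series, assuming one-step-ahead forecasts and a naive lag-m benchmark; the function name and numbers are invented for this example.

```python
import numpy as np

def mase(y_true, y_pred, y_train, m=1):
    """Mean absolute scaled error: out-of-sample MAE scaled by the
    in-sample MAE of the naive forecast y_t = y_{t-m}, so values
    below 1 mean the forecast beats the naive method."""
    y_true, y_pred, y_train = map(np.asarray, (y_true, y_pred, y_train))
    scale = np.mean(np.abs(y_train[m:] - y_train[:-m]))  # naive in-sample MAE
    return np.mean(np.abs(y_true - y_pred)) / scale

# Hypothetical series: MASE well below 1, i.e., better than naive
train = np.array([10.0, 12.0, 11.0, 13.0, 12.5, 14.0])
print(mase([14.5, 15.0], [14.2, 15.3], train))  # ~0.21
```

Because the denominator is a scale-free property of the series itself, MASE can be averaged across series measured in different units, which is what makes it suitable for cross-series comparisons such as the M3-competition data.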

Journal ArticleDOI
TL;DR: The authors proposed a statistical method for postprocessing ensembles based on Bayesian model averaging (BMA), which is a standard method for combining predictive distributions from different sources, and demonstrated that BMA performs reasonably well when the underlying ensemble is calibrated, or even overdispersed.
Abstract: Ensembles used for probabilistic weather forecasting often exhibit a spread-error correlation, but they tend to be underdispersive. This paper proposes a statistical method for postprocessing ensembles based on Bayesian model averaging (BMA), which is a standard method for combining predictive distributions from different sources. The BMA predictive probability density function (PDF) of any quantity of interest is a weighted average of PDFs centered on the individual bias-corrected forecasts, where the weights are equal to posterior probabilities of the models generating the forecasts and reflect the models’ relative contributions to predictive skill over the training period. The BMA weights can be used to assess the usefulness of ensemble members, and hence as a basis for selecting ensemble members, which is useful given the cost of running large ensembles. The BMA PDF can be represented as an unweighted ensemble of any desired size, by simulating from the BMA predictive distribution. The BMA predictive variance can be decomposed into two components, one corresponding to the between-forecast variability, and the second to the within-forecast variability. Predictive PDFs or intervals based solely on the ensemble spread incorporate the first component but not the second. Thus BMA provides a theoretical explanation of the tendency of ensembles to exhibit a spread-error correlation and yet be underdispersive. The method was applied to 48-h forecasts of surface temperature in the Pacific Northwest in January–June 2000 using the University of Washington fifth-generation Pennsylvania State University–NCAR Mesoscale Model (MM5) ensemble. The predictive PDFs were much better calibrated than the raw ensemble, and the BMA forecasts were sharp in that 90% BMA prediction intervals were 66% shorter on average than those produced by sample climatology. As a by-product, BMA yields a deterministic point forecast, and this had root-mean-square errors 7% lower than the best of the ensemble members and 8% lower than the ensemble mean. Similar results were obtained for forecasts of sea level pressure. Simulation experiments show that BMA performs reasonably well when the underlying ensemble is calibrated, or even overdispersed.

1,649 citations
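
The weighted average described above is, in the Gaussian case, a mixture density. The sketch below is an illustrative reading of that construction, not the paper's implementation: in practice the weights, bias parameters, and spread are fitted (e.g., by EM) over a training period, and all names and values here are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def bma_pdf(y, forecasts, weights, a, b, sigma):
    """BMA predictive density: a mixture of normals centered on the
    bias-corrected member forecasts a + b * f_k, weighted by the
    posterior model probabilities w_k."""
    centers = a + b * np.asarray(forecasts)
    return sum(w * norm.pdf(y, loc=c, scale=sigma)
               for w, c in zip(weights, centers))

f = np.array([14.1, 15.0, 13.4])   # raw ensemble member forecasts (degC)
w = np.array([0.5, 0.3, 0.2])      # BMA weights, summing to 1

# Evaluate the predictive density on a grid
print(bma_pdf(np.linspace(10, 20, 5), f, w, a=0.2, b=1.0, sigma=1.5))

# "Unweighted ensemble of any desired size": simulate from the mixture
rng = np.random.default_rng(0)
members = rng.choice(len(w), size=1000, p=w)
sample = rng.normal(0.2 + 1.0 * f[members], 1.5)
```

The sampling step mirrors the abstract's point that the BMA PDF can be rendered as an unweighted ensemble of any size, and the mixture's variance splits into a between-forecast term (spread of the centers) plus a within-forecast term (sigma squared), which is the decomposition the abstract invokes.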

Journal ArticleDOI
TL;DR: This paper found that forecast accuracy is positively associated with analysts' experience and employer size, and negatively associated with the number of firms and industries followed by the analyst (measures of task complexity).

1,242 citations

Posted Content
TL;DR: The authors proposed an alternative framework for out-of-sample comparison of predictive ability based on conditional expectations of forecasts and forecast errors rather than the unconditional expectations that are the focus of the existing literature.
Abstract: We argue that the current framework for predictive ability testing (e.g., West, 1996) is not necessarily useful for real-time forecast selection, i.e., for assessing which of two competing forecasting methods will perform better in the future. We propose an alternative framework for out-of-sample comparison of predictive ability which delivers more practically relevant conclusions. Our approach is based on inference about conditional expectations of forecasts and forecast errors rather than the unconditional expectations that are the focus of the existing literature. We capture important determinants of forecast performance that are neglected in the existing literature by evaluating what we call the forecasting method (the model and the parameter estimation procedure), rather than just the forecasting model. Compared to previous approaches, our tests are valid under more general data assumptions (heterogeneity rather than stationarity) and estimation methods, and they can handle comparison of both nested and non-nested models, which is not currently possible. To illustrate the usefulness of the proposed tests, we compare the forecast performance of three leading parameter-reduction methods for macroeconomic forecasting using a large number of predictors: a sequential model selection approach, the "diffusion indexes" approach of Stock and Watson (2002), and the use of Bayesian shrinkage estimators.

1,151 citations
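
To make "conditional expectations of forecast errors" concrete, here is a stripped-down sketch in the spirit of this framework: build moment conditions from instruments times the out-of-sample loss differential and test them with a Wald statistic. A plain sample covariance stands in for the paper's full asymptotic treatment, and the function and data are illustrative, not the authors' code.

```python
import numpy as np
from scipy.stats import chi2

def conditional_pa_test(loss1, loss2):
    """Test whether instruments (a constant and the lagged loss
    differential) predict the next loss differential; rejection
    means relative performance is forecastable in real time."""
    d = np.asarray(loss1) - np.asarray(loss2)           # loss differential
    h = np.column_stack([np.ones(len(d) - 1), d[:-1]])  # instruments h_t
    z = h * d[1:, None]                                 # h_t * d_{t+1}
    n, q = z.shape
    zbar = z.mean(axis=0)
    stat = n * zbar @ np.linalg.solve(np.cov(z, rowvar=False), zbar)
    return stat, chi2.sf(stat, df=q)                    # Wald stat, p-value

rng = np.random.default_rng(1)
l1, l2 = rng.gamma(2.0, size=200), rng.gamma(2.2, size=200)
print(conditional_pa_test(l1, l2))
```

In the paper's framework the forecasts come from a fixed-size rolling estimation window, which is what lets the comparison cover the full forecasting method (model plus estimation procedure) rather than just the model; that detail is omitted in this sketch.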

Journal ArticleDOI
TL;DR: This paper used forecast combination methods to forecast output growth in a seven-country quarterly economic data set covering 1959 to 1999, with up to 73 predictors per country, and found that the most successful combination forecasts, like the mean, are the least sensitive to the recent performance of individual forecasts.
Abstract: This paper uses forecast combination methods to forecast output growth in a seven-country quarterly economic data set covering 1959–1999, with up to 73 predictors per country. Although the forecasts based on individual predictors are unstable over time and across countries, and on average perform worse than an autoregressive benchmark, the combination forecasts often improve upon autoregressive forecasts. Despite the unstable performance of the constituent forecasts, the most successful combination forecasts, like the mean, are the least sensitive to the recent performance of the individual forecasts. While consistent with other evidence on the success of simple combination forecasts, this finding is difficult to explain using the theory of combination forecasting in a stationary environment.

1,100 citations
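
As a minimal illustration of the combination idea, the sketch below contrasts the simple mean with an inverse-MSE weighting. These are generic schemes from the combination literature, not the paper's exact estimators, and the numbers are made up; the paper's finding is that schemes close to the plain mean, which ignore recent performance, tend to do best.

```python
import numpy as np

def combine(forecasts, recent_mse=None):
    """Combine competing forecasts: simple mean if no performance
    information is supplied, otherwise inverse-MSE weights."""
    forecasts = np.asarray(forecasts, dtype=float)
    if recent_mse is None:
        return forecasts.mean()                    # simple mean combination
    w = 1.0 / np.asarray(recent_mse, dtype=float)  # weight accurate models more
    return (w / w.sum()) @ forecasts

preds = [2.1, 1.8, 2.6, 2.0]   # competing output-growth forecasts (%)
mses = [0.9, 0.4, 1.5, 0.7]    # recent squared-error track records
print(combine(preds))          # 2.125
print(combine(preds, mses))    # tilted toward the historically best model
```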


Network Information

Related Topics (5)
Climate model: 22.2K papers, 1.1M citations, 91% related
Sea surface temperature: 21.2K papers, 874.7K citations, 87% related
Monsoon: 16K papers, 599.8K citations, 86% related
Stratosphere: 15.7K papers, 586.6K citations, 86% related
Troposphere: 12K papers, 458.9K citations, 86% related
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2023    178
2022    294
2021    229
2020    212
2019    239
2018    165