Open Access · Posted Content

Error Measures for Generalizing About Forecasting Methods: Empirical Comparisons

TLDR
In this article, the authors evaluated measures for making comparisons of errors across time series and found that the Root Mean Square Error (RMSE) is not reliable, and therefore inappropriate for comparing accuracy across series; they instead recommend relative-error measures such as the GMRAE and the MdRAE.
Abstract
This study evaluated measures for making comparisons of errors across time series. We analyzed 90 annual and 101 quarterly economic time series. We judged error measures on reliability, construct validity, sensitivity to small changes, protection against outliers, and their relationship to decision making. The results lead us to recommend the Geometric Mean of the Relative Absolute Error (GMRAE) when the task involves calibrating a model for a set of time series. The GMRAE compares the absolute error of a given method to that from the random walk forecast. For selecting the most accurate methods, we recommend the Median RAE (MdRAE) when few series are available and the Median Absolute Percentage Error (MdAPE) otherwise. The Root Mean Square Error (RMSE) is not reliable, and is therefore inappropriate for comparing accuracy across series.
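As a rough illustration of the measures above, the sketch below computes the RAE-based statistics and the MdAPE for one forecasting method on a single series. It assumes NumPy arrays of actual values, method forecasts, and random-walk forecasts; the function names are illustrative and do not come from the paper.

```python
import numpy as np

def rae(actual, forecast, rw_forecast):
    """Relative Absolute Error: the method's absolute error divided by
    the absolute error of the random-walk (no-change) forecast."""
    return np.abs(actual - forecast) / np.abs(actual - rw_forecast)

def gmrae(actual, forecast, rw_forecast):
    """Geometric Mean of the RAEs (recommended for calibrating a model)."""
    r = rae(actual, forecast, rw_forecast)
    return np.exp(np.mean(np.log(r)))

def mdrae(actual, forecast, rw_forecast):
    """Median RAE (recommended for selecting methods when few series are available)."""
    return np.median(rae(actual, forecast, rw_forecast))

def mdape(actual, forecast):
    """Median Absolute Percentage Error (recommended for larger sets of series)."""
    return np.median(np.abs((actual - forecast) / actual)) * 100.0
```

Note that the RAE is undefined where the random-walk error is zero, and a geometric mean is sensitive to extreme ratios; trimming or winsorizing extreme RAEs before averaging is a common safeguard, in line with the paper's concern about protection against outliers.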


Citations
Journal ArticleDOI

Another look at measures of forecast accuracy

TL;DR: In this paper, the mean absolute scaled error (MASE) is proposed as the standard measure for comparing forecast accuracy across multiple time series, and is applied to data from the M-Competition as well as the M3-Competition.
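For context, a minimal sketch of the MASE as it is commonly defined: out-of-sample errors are scaled by the in-sample mean absolute error of the one-step naive (random-walk) forecast. The function name and array inputs are assumptions for illustration, not code from the cited paper.

```python
import numpy as np

def mase(actual, forecast, training):
    """Mean Absolute Scaled Error: mean absolute forecast error divided
    by the in-sample MAE of the one-step naive forecast."""
    naive_mae = np.mean(np.abs(np.diff(training)))
    return np.mean(np.abs(actual - forecast)) / naive_mae
```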
Journal ArticleDOI

Neural networks for short-term load forecasting: a review and evaluation

TL;DR: This review examines a collection of papers (published between 1991 and 1999) that report the application of NNs to short-term load forecasting, and critically evaluates the ways in which the NNs proposed in these papers were designed and tested.
BookDOI

Forecast verification: a practitioner's guide in atmospheric science

TL;DR: Jolliffe et al., as mentioned in this paper, provide a practitioner's guide to forecast verification in atmospheric science, covering the verification of binary and categorical events as well as spatial fields.
Journal ArticleDOI

The M3-Competition: results, conclusions and implications

TL;DR: In this paper, the M3-Competition, the latest edition of the M-Competitions, is described, and its results and conclusions are compared with those of the previous two M-Competitions as well as with other major empirical studies.
Journal ArticleDOI

25 years of time series forecasting

TL;DR: A review of the past 25 years of research into time series forecasting can be found in this paper, where the authors highlight results published in journals managed by the International Institute of Forecasters.
References
Journal ArticleDOI

The accuracy of extrapolation (time series) methods: Results of a forecasting competition

TL;DR: The results of a forecasting competition are presented, providing empirical evidence about the differences among the various extrapolative (time series) methods used in the competition.
Posted Content

Rule-Based Forecasting: Development and Validation of an Expert Systems Approach to Combining Time Series Extrapolations

TL;DR: In this paper, the authors examined the feasibility of rule-based forecasting, a procedure that applies forecasting expertise and domain knowledge to produce forecasts according to features of the data, and developed a rule base consisting of 99 rules.
Journal ArticleDOI

Rule-based forecasting: development and validation of an expert systems approach to combining time series extrapolations

TL;DR: In this paper, a rule base consisting of 99 rules, drawing on 18 features of the time series, was developed to make annual extrapolation forecasts for economic and demographic series.
Journal ArticleDOI

Long-Range Forecasting.
