
Showing papers in "Journal of Forecasting in 2004"


Journal ArticleDOI
TL;DR: This paper used forecast combination methods to forecast output growth in a seven-country quarterly economic data set covering 1959 to 1999, with up to 73 predictors per country, and found that the most successful combination forecasts, like the mean, are the least sensitive to the recent performance of individual forecasts.
Abstract: This paper uses forecast combination methods to forecast output growth in a seven-country quarterly economic data set covering 1959‐1999, with up to 73 predictors per country. Although the forecasts based on individual predictors are unstable over time and across countries, and on average perform worse than an autoregressive benchmark, the combination forecasts often improve upon autoregressive forecasts. Despite the unstable performance of the constituent forecasts, the most successful combination forecasts, like the mean, are the least sensitive to the recent performance of the individual forecasts. While consistent with other evidence on the success of simple combination forecasts, this finding is difficult to explain using the theory of combination forecasting in a stationary environment. Copyright © 2004 John Wiley & Sons, Ltd.

1,100 citations
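The mechanics behind the mean combination's success can be reproduced in a few lines; the data, sizes, and noise levels below are hypothetical stand-ins, not the paper's seven-country dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 120, 10  # quarters and candidate forecasts (hypothetical sizes)
y = rng.normal(0.0, 1.0, T)  # stand-in for output growth

# Individual forecasts: unbiased but very noisy, mimicking the unstable
# single-predictor forecasts described in the paper
individual = y[None, :] + rng.normal(0.0, 1.5, (K, T))

# Equal-weight mean combination: ignores recent individual performance
mean_combo = individual.mean(axis=0)

mse_individual = ((individual - y) ** 2).mean(axis=1)
mse_combo = float(((mean_combo - y) ** 2).mean())
```

Averaging K roughly independent forecast errors shrinks the error variance by about 1/K, so the combination beats the typical constituent even though no constituent is reliable on its own.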


Journal ArticleDOI
TL;DR: In this paper, an ordered probit regression model estimated using 10 years' data is used to forecast English league football match results and a strategy of selecting end-of-season bets with a favourable expected return according to the model appears capable of generating a positive return.
Abstract: An ordered probit regression model estimated using 10 years' data is used to forecast English league football match results. As well as past match results data, the significance of the match for end-of-season league outcomes, the involvement of the teams in cup competition and the geographical distance between the two teams' home towns all contribute to the forecasting model's performance. The model is used to test the weak-form efficiency of prices in the fixed-odds betting market. A strategy of selecting end-of-season bets with a favourable expected return according to the model appears capable of generating a positive return. Copyright © 2004 John Wiley & Sons, Ltd.

183 citations
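An ordered probit maps a latent match-strength index through two cutpoints into probabilities for the three ordered outcomes. A minimal sketch (the index and cutpoints here are hypothetical; the paper estimates them from ten years of match data):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ordered_probit_probs(index, cut_lo, cut_hi):
    """Probabilities of (away win, draw, home win) given a latent
    strength index and two cutpoints cut_lo < cut_hi."""
    p_away = norm_cdf(cut_lo - index)
    p_draw = norm_cdf(cut_hi - index) - p_away
    p_home = 1.0 - norm_cdf(cut_hi - index)
    return p_away, p_draw, p_home
```

Comparing probabilities of this form with those implied by the fixed odds is how a bet with favourable expected return would be identified.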


Journal ArticleDOI
TL;DR: In this article, the authors provided quarterly real GDP estimates for these countries derived by applying the Chow-Lin related series technique to annual real GDP series and evaluated the quality of the disaggregated series through a number of indirect methods.
Abstract: The growing affluence of the East and Southeast Asian economies has come about through a substantial increase in their economic links with the rest of the world, the OECD economies in particular. Econometric studies that try to quantify these links face a severe shortage of high-frequency time series data for China and the group of ASEAN4 (Indonesia, Malaysia, Philippines and Thailand). In this paper we provide quarterly real GDP estimates for these countries derived by applying the Chow‐Lin related series technique to annual real GDP series. The quality of the disaggregated series is evaluated through a number of indirect methods. Some potential problems of using readily available univariate disaggregation techniques are also highlighted. Copyright © 2004 John Wiley & Sons, Ltd.

129 citations
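Chow-Lin proper is a GLS regression of the annual series on annual sums of related indicators; as a much cruder illustration of the same idea, one can simply distribute each annual total across its quarters in proportion to a related quarterly indicator:

```python
import numpy as np

def prorate_disaggregate(annual, indicator):
    """Pro-rata disaggregation: split each annual value over its four
    quarters in proportion to a related quarterly indicator. A naive
    stand-in for the Chow-Lin regression-based technique; it preserves
    the annual totals by construction."""
    q = np.asarray(indicator, dtype=float).reshape(-1, 4)
    shares = q / q.sum(axis=1, keepdims=True)
    return (shares * np.asarray(annual, dtype=float)[:, None]).ravel()
```

The abstract's warning about "readily available univariate disaggregation techniques" applies with even more force to shortcuts like this one, which ignore the indicator's regression relationship with GDP.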


Journal ArticleDOI
TL;DR: In this paper, the authors extend to a multiple-equation context the linearity, model selection and model adequacy tests recently proposed for univariate smooth transition regression models, and examine the nonlinear forecasting power of the Conference Board composite index of leading indicators to predict both output growth and the business cycle phases of the US economy in real time.
Abstract: In this paper, I extend to a multiple-equation context the linearity, model selection and model adequacy tests recently proposed for univariate smooth transition regression models. Using this result, I examine the nonlinear forecasting power of the Conference Board composite index of leading indicators to predict both output growth and the business-cycle phases of the US economy in real time. Copyright © 2004 John Wiley & Sons, Ltd.

110 citations


Journal ArticleDOI
TL;DR: This paper showed that out-of-sample forecast comparisons can help prevent data mining-induced overfitting, using simulations of a simple Monte Carlo design and a real data-based design similar to those used in some previous studies.
Abstract: This paper shows that out-of-sample forecast comparisons can help prevent data mining-induced overfitting. The basic results are drawn from simulations of a simple Monte Carlo design and a real data-based design similar to those used in some previous studies. In each simulation, a general-to-specific procedure is used to arrive at a model. If the selected specification includes any of the candidate explanatory variables, forecasts from the model are compared to forecasts from a benchmark model that is nested within the selected model. In particular, the competing forecasts are tested for equal MSE and encompassing. The simulations indicate most of the post-sample tests are roughly correctly sized. Moreover, the tests have relatively good power, although some are consistently more powerful than others. The paper concludes with an application, modelling quarterly US inflation. Copyright © 2004 John Wiley & Sons, Ltd.

87 citations


Journal ArticleDOI
TL;DR: In this article, an alternative measure for "true volatility" has been suggested, based upon the cumulative squared returns from intra-day data, which outperforms smoothing and moving average techniques which have been previously identified as providing superior volatility forecasts.
Abstract: Volatility plays a key role in asset and portfolio management and derivatives pricing. As such, accurate measures and good forecasts of volatility are crucial for the implementation and evaluation of asset and derivative pricing models in addition to trading and hedging strategies. However, whilst GARCH models are able to capture the observed clustering effect in asset price volatility in-sample, they appear to provide relatively poor out-of-sample forecasts. Recent research has suggested that this relative failure of GARCH models arises not from a failure of the model but a failure to specify correctly the ‘true volatility’ measure against which forecasting performance is measured. It is argued that the standard approach of using ex post daily squared returns as the measure of ‘true volatility’ includes a large noisy component. An alternative measure for ‘true volatility’ has therefore been suggested, based upon the cumulative squared returns from intra-day data. This paper implements that technique and reports that, in a dataset of 17 daily exchange rate series, the GARCH model outperforms smoothing and moving average techniques which have been previously identified as providing superior volatility forecasts. Copyright © 2004 John Wiley & Sons, Ltd.

64 citations
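The measurement point is easy to reproduce: with m intraday returns per day, the realized variance (cumulative intraday squared returns) is a far less noisy estimate of the day's true variance than the single squared daily return. A small simulation with hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
days, m = 250, 48  # trading days and intraday intervals (hypothetical)
true_var = 0.01 * np.exp(rng.normal(0.0, 0.3, days))  # latent daily variance

# Intraday returns: each day's variance is split evenly over m intervals
intraday = rng.normal(0.0, np.sqrt(true_var[:, None] / m), (days, m))

daily_sq = intraday.sum(axis=1) ** 2        # ex post daily squared return
realized = (intraday ** 2).sum(axis=1)      # cumulative intraday squared returns

err_daily = float(np.mean((daily_sq - true_var) ** 2))
err_realized = float(np.mean((realized - true_var) ** 2))
```

Both proxies are unbiased, but the realized measure's noise shrinks roughly as 1/m, which is why model rankings can flip when it replaces the squared daily return as "true volatility".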


Journal ArticleDOI
TL;DR: In this paper, a logistic function of a user-specified variable is used to model the time-varying parameter in smooth transition models, which has the potential to outperform existing adaptive methods and constant parameter methods.
Abstract: Adaptive exponential smoothing methods allow a smoothing parameter to change over time, in order to adapt to changes in the characteristics of the time series. However, these methods have tended to produce unstable forecasts and have performed poorly in empirical studies. This paper presents a new adaptive method, which enables a smoothing parameter to be modelled as a logistic function of a user-specified variable. The approach is analogous to that used to model the time-varying parameter in smooth transition models. Using simulated data, we show that the new approach has the potential to outperform existing adaptive methods and constant parameter methods when the estimation and evaluation samples both contain a level shift or both contain an outlier. An empirical study, using the monthly time series from the M3-Competition, gave encouraging results for the new approach.

61 citations
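The core idea can be sketched directly: run simple exponential smoothing, but let the smoothing weight be a logistic function of a user-specified transition variable. The coefficients b0 and b1 below are hypothetical placeholders for parameters that would normally be estimated:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def adaptive_ses(series, transition, b0=-1.0, b1=2.0):
    """One-step-ahead forecasts from simple exponential smoothing whose
    smoothing parameter is a logistic function of a user-specified
    transition variable (a sketch of the smooth-transition idea)."""
    level = series[0]
    forecasts = []
    for y, v in zip(series[1:], transition[1:]):
        forecasts.append(level)           # forecast made before observing y
        alpha = logistic(b0 + b1 * v)     # smoothing weight stays in (0, 1)
        level = alpha * y + (1.0 - alpha) * level
    return forecasts
```

Taking the transition variable to be, say, the absolute value of the last forecast error lets the method re-weight towards recent observations just after a level shift, while staying stable in quiet periods.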


Journal ArticleDOI
TL;DR: In this article, a Bayesian model averaging approach for the purpose of forecasting Swedish consumer price index inflation using a large set of potential indicators, comprising some 80 quarterly time series covering a wide spectrum of Swedish economic activity, is presented.
Abstract: We consider a Bayesian model averaging approach for the purpose of forecasting Swedish consumer price index inflation using a large set of potential indicators, comprising some 80 quarterly time series covering a wide spectrum of Swedish economic activity. The paper demonstrates how to efficiently and systematically evaluate (almost) all possible models that these indicators in combination can give rise to. The results, in terms of out-of-sample performance, suggest that Bayesian model averaging is a useful alternative to other forecasting procedures, in particular recognizing the flexibility by which new information can be incorporated. Copyright © 2004 John Wiley & Sons, Ltd.

60 citations
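The "evaluate (almost) all possible models" step can be sketched with the standard BIC approximation to posterior model probabilities; this is a common shortcut for Bayesian model averaging, and the paper's own prior structure will differ:

```python
import itertools
import math
import numpy as np

def bic_weights(y, X, max_vars=2):
    """Approximate BMA weights: fit OLS for every indicator subset of
    size <= max_vars, then weight each model by exp(-BIC/2) (a standard
    approximation to its posterior probability)."""
    n, k = X.shape
    models, bics = [], []
    for r in range(max_vars + 1):
        for cols in itertools.combinations(range(k), r):
            Z = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = float(((y - Z @ beta) ** 2).sum())
            bics.append(n * math.log(rss / n) + Z.shape[1] * math.log(n))
            models.append(cols)
    b = np.array(bics)
    w = np.exp(-(b - b.min()) / 2.0)
    return models, w / w.sum()
```

With ~80 indicators the full model space (2^80 subsets) cannot be enumerated, which is why efficient systematic evaluation is the paper's contribution; the `max_vars` truncation here is purely illustrative.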


Journal ArticleDOI
TL;DR: The authors used quantile regression to debias the quantiles of the distribution of the ensemble scenarios for a weather variable, which is used as a density forecast, which was needed for pricing weather derivatives.
Abstract: Density forecasts for weather variables are useful for the many industries exposed to weather risk. Weather ensemble predictions are generated from atmospheric models and consist of multiple future scenarios for a weather variable. The distribution of the scenarios can be used as a density forecast, which is needed for pricing weather derivatives. We consider one- to 10-day-ahead density forecasts provided by temperature ensemble predictions. More specifically, we evaluate forecasts of the mean and quantiles of the density. The mean of the ensemble scenarios is the most accurate forecast for the mean of the density. We use quantile regression to debias the quantiles of the distribution of the ensemble scenarios. The resultant quantile forecasts compare favourably with those from a GARCH model. These results indicate the strong potential for the use of ensemble prediction in temperature density forecasting.

51 citations
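Quantile forecasts of this sort are scored with the pinball (quantile) loss, and the debiasing step can be caricatured as choosing the correction that minimizes that loss on a training sample; the grid search below is a crude stand-in for the paper's quantile regression:

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Quantile (pinball) loss of forecasting the tau-quantile of y by q."""
    u = np.asarray(y, dtype=float) - q
    return float(np.mean(np.maximum(tau * u, (tau - 1.0) * u)))

def debias(q_raw, y, tau, offsets=np.linspace(-2.0, 2.0, 81)):
    """Pick the additive correction to a raw ensemble quantile that
    minimizes pinball loss on a training sample (hypothetical grid)."""
    losses = [pinball_loss(y, q_raw + d, tau) for d in offsets]
    return float(offsets[int(np.argmin(losses))])
```

Because expected pinball loss is minimized at the true quantile, a systematically biased ensemble quantile gets pulled towards the correct one by the fitted correction.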


Journal ArticleDOI
TL;DR: A test statistic for the null hypothesis that two competing models have equal density forecast accuracy is proposed and Monte Carlo simulations suggest that the test, which has a known limiting distribution, displays satisfactory size and power properties.
Abstract: A rapidly growing literature emphasizes the importance of evaluating the forecast accuracy of empirical models on the basis of density (as opposed to point) forecasting performance. We propose a test statistic for the null hypothesis that two competing models have equal density forecast accuracy. Monte Carlo simulations suggest that the test, which has a known limiting distribution, displays satisfactory size and power properties. The use of the test is illustrated with an application to exchange rate forecasting. Copyright © 2004 John Wiley & Sons, Ltd.

48 citations
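A Diebold-Mariano-style statistic on log-score differentials conveys the flavour of such a test (a sketch only: the paper's statistic and its limiting distribution may differ in detail). Under the null of equal density forecast accuracy, a statistic of this type is approximately standard normal:

```python
import math
import numpy as np

def dm_logscore_test(logf, logg):
    """t-type statistic on the mean log-score differential between two
    density forecasts evaluated at the realized values; large absolute
    values reject equal density forecast accuracy."""
    d = np.asarray(logf, dtype=float) - np.asarray(logg, dtype=float)
    return float(d.mean() / (d.std(ddof=1) / math.sqrt(len(d))))
```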


Journal ArticleDOI
TL;DR: The method, here called a 'Markov Bayesian Classifier' (MBC), is tested by forecasting turning points in the Swedish and US economies, using leading data, contrasting favourably with earlier HMM studies.
Abstract: A Hidden Markov Model (HMM) is used to classify an out-of-sample observation vector into either of two regimes. This leads to a procedure for making probability forecasts for changes of regimes in a time series, i.e. for turning points. Instead of estimating past turning points using maximum likelihood, the model is estimated with respect to known past regimes. This makes it possible to perform feature extraction and estimation for different forecasting horizons. The inference aspect is emphasized by including a penalty for a wrong decision in the cost function. The method, here called a ‘Markov Bayesian Classifier (MBC)’, is tested by forecasting turning points in the Swedish and US economies, using leading data. Clear and early turning point signals are obtained, contrasting favourably with earlier HMM studies. Some theoretical arguments for this are given. Copyright © 2004 John Wiley & Sons, Ltd.
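The classifier's decision step reduces to a cost-weighted comparison of regime posteriors. A stripped-down two-regime version with Gaussian observation densities (the Markov dynamics and feature extraction of the paper are omitted):

```python
import math

def classify_regime(x, mu, sigma, prior, cost_fp=1.0, cost_fn=1.0):
    """Return 1 (e.g. 'recession') when its cost-weighted posterior
    exceeds that of regime 0, else 0. cost_fn penalizes missing a
    regime-1 turning point; cost_fp penalizes a false alarm."""
    def lik(m, s):
        z = (x - m) / s
        return math.exp(-0.5 * z * z) / (s * math.sqrt(2.0 * math.pi))
    post0 = prior[0] * lik(mu[0], sigma[0])
    post1 = prior[1] * lik(mu[1], sigma[1])
    return 1 if cost_fn * post1 > cost_fp * post0 else 0
```

Raising cost_fn makes the rule signal turning points earlier at the price of more false alarms, which is the penalty-in-the-cost-function idea the abstract describes.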

Journal ArticleDOI
Thomas Lindh1
TL;DR: In this article, the authors show that the age structure contains information correlated to medium-term trends in growth and inflation, and use age structure based forecasts as an aid to monetary policy formation.
Abstract: Economic behaviour as well as economic resources of individuals vary with age. Swedish time series show that the age structure contains information correlated to medium-term trends in growth and inflation. GDP gaps estimated by age structure regressions are closely related to conventional measures. Monetary policy is believed to affect inflation with a lag of 1 or 2 years. Projections of the population's age structure are comparatively reliable several years ahead and provide additional information to improve on 3–5 years-ahead forecasts of potential GDP and inflation. Thus there is a potential scope for using age structure based forecasts as an aid to monetary policy formation. Copyright © 2004 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this article, the authors apply two ANN models, a back-propagation model and a generalized regression neural network model, to estimate and forecast post-war aggregate unemployment rates in the USA, Canada, UK, France and Japan.
Abstract: Asymmetry has been well documented in the business cycle literature. The asymmetric business cycle suggests that major macroeconomic series, such as a country's unemployment rate, are non-linear and, therefore, the use of linear models to explain their behaviour and forecast their future values may not be appropriate. Many researchers have focused on providing evidence for the non-linearity in the unemployment series. Only recently have there been some developments in applying non-linear models to estimate and forecast unemployment rates. A major concern of non-linear modelling is the model specification problem; it is very hard to test all possible non-linear specifications, and to select the most appropriate specification for a particular model. Artificial neural network (ANN) models provide a solution to the difficulty of forecasting unemployment over the asymmetric business cycle. ANN models are non-linear, do not rely upon the classical regression assumptions, are capable of learning the structure of all kinds of patterns in a data set with a specified degree of accuracy, and can then use this structure to forecast future values of the data. In this paper, we apply two ANN models, a back-propagation model and a generalized regression neural network model, to estimate and forecast post-war aggregate unemployment rates in the USA, Canada, UK, France and Japan. We compare the out-of-sample forecast results obtained by the ANN models with those obtained by several linear and non-linear time series models currently used in the literature. It is shown that the artificial neural network models are able to forecast the unemployment series as well as, and in some cases better than, the other univariate econometric time series models in our test. Copyright © 2004 John Wiley & Sons, Ltd.

Journal ArticleDOI
Jae H. Kim1
TL;DR: In this article, bias-corrected bootstrap prediction regions are constructed by combining bias-correction of VAR parameter estimators with the bootstrap procedure, and the backward VAR model is used to bootstrap VAR forecasts conditionally on past observations.
Abstract: This paper examines small sample properties of alternative bias-corrected bootstrap prediction regions for the vector autoregressive (VAR) model. Bias-corrected bootstrap prediction regions are constructed by combining bias-correction of VAR parameter estimators with the bootstrap procedure. The backward VAR model is used to bootstrap VAR forecasts conditionally on past observations. Bootstrap prediction regions based on asymptotic bias-correction are compared with those based on bootstrap bias-correction. Monte Carlo simulation results indicate that bootstrap prediction regions based on asymptotic bias-correction show better small sample properties than those based on bootstrap bias-correction for nearly all cases considered. The former provide accurate coverage properties in most cases, while the latter over-estimate the future uncertainty. Overall, the percentile-t bootstrap prediction region based on asymptotic bias-correction is found to provide highly desirable small sample properties, outperforming its alternatives in nearly all cases. Copyright © 2004 John Wiley & Sons, Ltd.
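The bootstrap machinery is easiest to see in a univariate AR(1): resample residuals, propagate them through the fitted recursion, and read prediction bounds off the simulated forecast distribution. This sketch omits both the bias-correction and the backward-model conditioning that the paper studies:

```python
import numpy as np

def ar1_bootstrap_interval(y, h=1, B=500, alpha=0.1, seed=0):
    """Percentile bootstrap prediction interval for the h-step-ahead
    forecast of an AR(1) fitted by least squares (no intercept)."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    x, z = y[:-1], y[1:]
    phi = float(x @ z / (x @ x))       # OLS slope estimate
    resid = z - phi * x                # in-sample residuals to resample
    paths = []
    for _ in range(B):
        f = y[-1]
        for eps in rng.choice(resid, size=h, replace=True):
            f = phi * f + eps          # simulate one future path
        paths.append(f)
    lo, hi = np.quantile(paths, [alpha / 2.0, 1.0 - alpha / 2.0])
    return float(lo), float(hi)
```

The paper's point is that the OLS-type estimate of phi is biased in small samples, so intervals built this way without a bias-correction step can have poor coverage.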

Journal ArticleDOI
TL;DR: In this paper, the information content of two survey indicators for consumption developments in the near future for eight European countries in the perio d 1985-1998 was assessed, and it was shown that combining consumer sentiment and retail trade confidence into a composite indicator leads to optimal results.
Abstract: This paper assesses the information content of two survey indicators for consumption developments in the near future for eight European countries in the period 1985-1998. Empirical work on this topic typically focuses on consumer confidence, the perceptions of buyers of consumption goods. This paper examines whether perceptions of sellers of consumption goods, measured by retail trade surveys, may also improve short-term monitoring of consumption. We find that both consumer confidence and retailer confidence embody valuable information, when analyzed in isolation. For France, Italy and Spain we conclude that adding retail confidence does not improve the indicator model once consumer confidence has been included. For the UK the reverse case is obtained. For the remaining four countries we show that combining consumer sentiment and retail trade confidence into a composite indicator leads to optimal results. Our results suggest that incorporating information from retail trade surveys may offer significant benefits for the analysis of short-term prospects of consumption.

Journal ArticleDOI
TL;DR: In this paper, a regression approach to stock market forecasting is proposed, and it is shown that the standard predictions ignoring pretesting are much less robust than naive econometrics might suggest.
Abstract: In econometrics, as a rule, the same data set is used to select the model and, conditional on the selected model, to forecast. However, one typically reports the properties of the (conditional) forecast, ignoring the fact that its properties are affected by the model selection (pretesting). This is wrong, and in this paper we show that the error can be substantial. We obtain explicit expressions for this error. To illustrate the theory we consider a regression approach to stock market forecasting, and show that the standard predictions ignoring pretesting are much less robust than naive econometrics might suggest. We also propose a forecast procedure based on the ‘neutral Laplace estimator’, which leads to an improvement over standard model selection procedures. Copyright © 2004 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, a nonlinear time series model where both the conditional mean and the conditional variance are asymmetric functions of past information is proposed, which is particularly useful for analysing fi...
Abstract: We propose a nonlinear time series model where both the conditional mean and the conditional variance are asymmetric functions of past information. The model is particularly useful for analysing fi ...

Journal ArticleDOI
TL;DR: In this paper, the authors compare daily exchange rate value at risk estimates derived from econometric models with those implied by the prices of traded options, and find that during periods of stability, the implied model tends to overestimate value-at-risk, hence over-allocating capital.
Abstract: This paper compares daily exchange rate value at risk estimates derived from econometric models with those implied by the prices of traded options. Univariate and multivariate GARCH models are employed in parallel with the simple historical and exponentially weighted moving average methods. Overall, we find that during periods of stability, the implied model tends to overestimate value at risk, hence over-allocating capital. However, during turbulent periods, it is less responsive than the GARCH-type models, resulting in an under-allocation of capital and a greater number of failures. Hence our main conclusion, which has important implications for risk management, is that market expectations of future volatility and correlation, as determined from the prices of traded options, may not be optimal tools for determining value at risk. Therefore, alternative models for estimating volatility should be sought. Copyright © 2004 John Wiley & Sons, Ltd.
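One of the simple econometric benchmarks, the exponentially weighted moving average, yields a one-day VaR in a few lines (RiskMetrics-style lambda; zero-mean normal returns are assumed here as the method's usual convention, not a claim from the paper):

```python
from statistics import NormalDist

def ewma_var(returns, lam=0.94, alpha=0.01):
    """One-day value-at-risk (reported as a positive number) from an
    EWMA variance recursion, assuming zero-mean normal returns."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2   # variance recursion
    return -NormalDist().inv_cdf(alpha) * var ** 0.5
```

The option-implied approach replaces the recursion's backward-looking variance with the market's forward-looking one; the paper's finding is that this substitution over-allocates capital in calm periods and reacts too slowly in turbulent ones.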

Journal ArticleDOI
TL;DR: In this article, a state transition-fitted residual scale ratio (ST-FRSR) model is used to predict the conditional probability of extreme events in financial market time series.
Abstract: Financial market time series exhibit high degrees of non-linear variability, and frequently have fractal properties. When the fractal dimension of a time series is non-integer, this is associated with two features: (1) inhomogeneity— extreme fluctuations at irregular intervals, and (2) scaling symmetries— proportionality relationships between fluctuations over different separation distances. In multivariate systems such as financial markets, fractality is stochastic rather than deterministic, and generally originates as a result of multiplicative interactions. Volatility diffusion models with multiple stochastic factors can generate fractal structures. In some cases, such as exchange rates, the underlying structural equation also gives rise to fractality. Fractal principles can be used to develop forecasting algorithms. The forecasting method that yields the best results here is the state transition-fitted residual scale ratio (ST-FRSR) model. A state transition model is used to predict the conditional probability of extreme events. Ratios of rates of change at proximate separation distances are used to parameterize the scaling symmetries. Forecasting experiments are run using intraday exchange rate futures contracts measured at 15-minute intervals. The overall forecast error is reduced on average by up to 7% and in one instance by nearly a quarter. However, the forecast error during the outlying events is reduced by 39% to 57%. The ST-FRSR reduces the predictive error primarily by capturing extreme fluctuations more accurately. Copyright © 2004 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: It is shown that considerable gains in efficiency based on mean-square-error-type criteria can be obtained for short-term predictions when using models based on updated disaggregated data. However, as the prediction horizon increases, the gain from using updated disaggregated data diminishes substantially.
Abstract: This article develops and extends previous investigations on the temporal aggregation of ARMA predictions. Given a basic ARMA model for disaggregated data, two sets of predictors may be constructed for future temporal aggregates: predictions based on models utilizing aggregated data, or on models constructed from disaggregated data for which forecasts are updated as soon as new information becomes available. We show that considerable gains in efficiency based on mean-square-error-type criteria can be obtained for short-term predictions when using models based on updated disaggregated data. However, as the prediction horizon increases, the gain from using updated disaggregated data diminishes substantially. In addition to theoretical results associated with forecast efficiency of ARMA models, we also illustrate our findings with two well-known time series. Copyright © 2004 John Wiley & Sons, Ltd.
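The disaggregated-model predictor in the simplest case: with a monthly AR(1), the forecast of next quarter's aggregate is the sum of the one- to three-step-ahead monthly forecasts (a sketch; the paper treats general ARMA models):

```python
def aggregate_forecast(phi, last, m=3):
    """Forecast the next temporal aggregate of a zero-mean AR(1) with
    coefficient phi: sum the 1..m step-ahead forecasts computed from
    the last disaggregated observation."""
    f, total = last, 0.0
    for _ in range(m):
        f = phi * f        # next step-ahead forecast
        total += f
    return total
```

For example, phi = 0.5 and last = 1.0 gives 0.5 + 0.25 + 0.125 = 0.875. As the horizon grows, each step-ahead forecast decays towards the unconditional mean, so the value of the latest disaggregated observation fades, matching the paper's finding that the efficiency gain diminishes with the horizon.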

Journal ArticleDOI
TL;DR: The authors showed that there is a long-run nonlinear relationship between stock prices and dividends for the US stock market during the period 1871-1996 and that the out-of-sample forecasting performance of the intrinsic bubbles model is significantly better than the performance of two alternatives, namely the random walk and the rational bubbles model.
Abstract: This paper offers strong further empirical evidence to support the intrinsic bubble model of stock prices, developed by Froot and Obstfeld (American Economic Review, 1991), in two ways. First, our results suggest that there is a long-run nonlinear relationship between stock prices and dividends for the US stock market during the period 1871-1996. Second, we find that the out-of-sample forecasting performance of the intrinsic bubbles model is significantly better than the performance of two alternatives, namely the random walk and the rational bubbles model. Copyright © 2004 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: The results indicate that none of the existing tests for SETAR-type non-linearity in time series analysis and forecasting is robust to outliers and model misspecification.
Abstract: In recent years there has been a growing interest in exploiting potential forecast gains from the non-linear structure of self-exciting threshold autoregressive (SETAR) models. Statistical tests have been proposed in the literature to help analysts check for the presence of SETAR-type non-linearities in an observed time series. It is important to study the power and robustness properties of these tests since erroneous test results might lead to misspecified prediction problems. In this paper we investigate the robustness properties of several commonly used non-linearity tests. Both the robustness with respect to outlying observations and the robustness with respect to model specification are considered. The power comparison of these testing procedures is carried out using Monte Carlo simulation. The results indicate that none of the existing tests is robust to outliers and model misspecification. Finally, an empirical application applies the tests to stock market returns of the four little dragons (Hong Kong, South Korea, Singapore and Taiwan) in East Asia. The non-linearity tests fail to provide consistent conclusions most of the time. The results in this article stress the need for a more robust test for SETAR-type non-linearity in time series analysis and forecasting. Copyright © 2004 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: While the benchmark models perform best without confirmation filters and leverage, the Gaussian mixture model outperforms all of the benchmarks when taking advantage of the possibilities offered by a combination of more sophisticated trading strategies and leverage.
Abstract: The purpose of this paper is twofold. Firstly, to assess the merit of estimating probability density functions rather than level or classification estimations on a one-day-ahead forecasting task of the EUR/USD time series. This is implemented using a Gaussian mixture model neural network, benchmarking the results against standard forecasting models, namely a naive model, a moving average convergence divergence technical model (MACD), an autoregressive moving average model (ARMA), a logistic regression model (LOGIT) and a multi-layer perceptron network (MLP). Secondly, to examine the possibilities of improving the trading performance of those models with confirmation filters and leverage. While the benchmark models perform best without confirmation filters and leverage, the Gaussian mixture model outperforms all of the benchmarks when taking advantage of the possibilities offered by a combination of more sophisticated trading strategies and leverage. This might be due to the ability of the Gaussian mixture model to successfully identify trades with a high Sharpe ratio. Copyright © 2004 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this article, the seasonal unit root properties of monthly industrial production series for 16 OECD countries were investigated in the context of a structural time series model and it was shown that when these criteria indicate that a smaller number of seasonal unit roots can be assumed and hence that some seasonal roots are stationary, the corresponding model also gives more accurate one-step-ahead forecasts.
Abstract: We investigate the seasonal unit root properties of monthly industrial production series for 16 OECD countries within the context of a structural time series model. A basic version of this model assumes that there are 11 such seasonal unit roots. We propose to use model selection criteria (AIC and BIC) to examine if one or more of these are in fact stationary. We generally find that when these criteria indicate that a smaller number of seasonal unit roots can be assumed and hence that some seasonal roots are stationary, the corresponding model also gives more accurate one-step-ahead forecasts. Copyright © 2004 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this article, the probability of rejecting the seasonal unit root tests developed by Hylleberg et al. when they are applied to fractionally integrated seasonal time series was studied.
Abstract: We study the probability of rejecting the seasonal unit root tests developed by Hylleberg et al. when they are applied to fractionally integrated seasonal time series. We find that these tests have quite low power and that they lead to a risk of over-differencing. The forecasting performance of fractionally integrated seasonal models is also examined. This approach is compared with the traditional approaches from Box–Jenkins methodology, and the HEGY-type test procedure. Forecasting results obtained from simulated series and quarterly economic time series show that the fractional approach improves the forecasting accuracy with regard to the other approaches. Copyright © 2004 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, the authors analyzed the nonlinear behavior of the information content in the spread for future real economic activity and showed that significant improvement in forecasting accuracy at least for one-step ahead forecasts can be obtained over the linear model.
Abstract: We analyze the nonlinear behavior of the information content in the spread for future real economic activity. The spread linearly predicts one-year-ahead real growth in nine industrial production sectors of the US and four of the UK over the last forty years. However, recent investigations on the spread-real activity relation have questioned both its linear nature and its time-invariant framework. Our in-sample empirical evidence suggests that the spread-real activity relationship exhibits asymmetries that allow for different predictive power of the spread when past spread values were above or below some threshold value. We then measure the out-of-sample forecast performance of the nonlinear model using predictive accuracy tests. The results show that significant improvement in forecasting accuracy, at least for one-step-ahead forecasts, can be obtained over the linear model.

Journal ArticleDOI
TL;DR: In this article, the authors evaluate the performance of two very reliable methodologies for predicting a downturn in the US economy using composite leading economic indicators (CLI) for the years 2000-01.
Abstract: On 26 November 2001, the National Bureau of Economic Research announced that the US economy had officially entered into a recession in March 2001. This decision was a surprise and did not end all the conflicting opinions expressed by economists. This matter was finally settled in July 2002 after a revision to the 2001 real gross domestic product showed negative growth rates for its first three quarters. A series of political and economic events in the years 2000-01 have increased the amount of uncertainty in the state of the economy, which in turn has resulted in the production of less reliable economic indicators and forecasts. This paper evaluates the performance of two very reliable methodologies for predicting a downturn in the US economy using composite leading economic indicators (CLI) for the years 2000-01. It explores the impact of the monetary policy on CLI and on the overall economy and shows how the gradualness and uncertainty of this impact on the overall economy have affected the forecasts of these methodologies. It suggests that the overexposure of the CLI to the monetary policy tools and a strong, but less effective, expansionary monetary policy have been the major factors in deteriorating the predictions of these methodologies. To improve these forecasts, it explores the inclusion of the CLI diffusion index as a prior in the Bayesian methodology. Copyright © 2004 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, the authors show that permanent fluctuations in the Cobweb model can be justified as being rational when reinterpreting the model in the theory of rational beliefs, even when inconsistent with a rational expectations equilibrium.
Abstract: This note shows that permanent fluctuations in the Cobweb model, though inconsistent with a rational expectations equilibrium, can be justified as rational when the model is reinterpreted within the theory of rational beliefs.
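The permanent fluctuations at issue arise in the textbook cobweb setting: producers set supply from last period's price, demand clears at the current price, and when the supply and demand slopes are equal the price oscillates forever instead of converging. The parameter values below are illustrative, not taken from the note:

```python
# Cobweb dynamics under naive expectations. Supply responds to the
# expected (last observed) price, demand to the current price. With
# equal supply and demand slopes (b = d), the price cycles permanently,
# never settling at the equilibrium p* = (c - a) / (b + d) = 4.
a, b = 2.0, 1.0    # supply:  q_s = a + b * expected price
c, d = 10.0, 1.0   # demand:  q_d = c - d * price

p = 3.0            # initial price, away from equilibrium
prices = [p]
for _ in range(10):
    expected = p                 # naive expectation: last period's price
    q_supplied = a + b * expected
    p = (c - q_supplied) / d     # price that clears the market
    prices.append(p)
print(prices)  # alternates 3.0, 5.0, 3.0, 5.0, ...
```

Under rational expectations these cycles are ruled out; the note's point is that they can nonetheless be consistent with agents holding rational beliefs.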

Journal ArticleDOI
TL;DR: In this paper, the authors compare the long-run forecasting performance of multicointegrated variables between a model that correctly imposes the "common feature" restrictions and a (univariate) model that omits these restrictions completely; the results indicate that different loss functions produce different rankings of models with respect to their infinite-horizon forecasting performance.
Abstract: In this paper long-run forecasting of multicointegrating variables is investigated. Multicointegration typically occurs in dynamic systems involving both stock and flow variables, whereby a common feature in the form of shared stochastic trends is present across different levels of multiple time series. Hence, the effect of imposing this "common feature" restriction on out-of-sample evaluation and forecasting accuracy of such variables is of interest. In particular, we compare the long-run forecasting performance of the multicointegrated variables between a model that correctly imposes the "common feature" restrictions and a (univariate) model that omits these multicointegrating restrictions completely. We employ different loss functions based on a range of mean square forecast error criteria, and the results indicate that different loss functions produce different rankings of models with respect to their infinite-horizon forecasting performance. We consider loss functions using a standard trace mean square forecast error criterion (penalizing the forecast errors of flow variables only), and a loss function evaluating forecast errors of changes in both stock and flow variables. The latter loss function is based on the triangular representation of cointegrated systems and was initially suggested by Christoffersen and Diebold (1998). It penalizes deviations from long-run relations among the flow variables through cointegrating restrictions. We present a new loss function which further penalizes deviations in the long-run relationship between the levels of stock and flow variables. It is derived from the triangular representation of multicointegrated systems. Using this criterion, system forecasts from a model incorporating multicointegration restrictions dominate forecasts from univariate models. The paper highlights the importance of carefully selecting loss functions in forecast evaluation of models involving stock and flow variables.
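The abstract's central point, that the loss function can reverse the ranking of forecasting models, can be illustrated with a toy comparison between a plain trace MSFE and a loss that also penalizes deviations from a cointegrating relation among the forecasts. The error series, cointegrating vector, and penalty form below are hypothetical stand-ins for the paper's triangular-representation criteria:

```python
import numpy as np

def trace_msfe(errors):
    """Trace MSFE: mean over time of summed squared forecast errors
    across the variables of the system."""
    return np.mean(np.sum(errors ** 2, axis=1))

def triangular_msfe(errors, coint_vec):
    """Trace MSFE plus a penalty on deviations of the forecasts from a
    long-run (cointegrating) relation -- a stylized version of a
    triangular-representation loss."""
    eq_error = errors @ coint_vec
    return trace_msfe(errors) + np.mean(eq_error ** 2)

rng = np.random.default_rng(0)
n = 500
# Model A: forecast errors independent across the two series.
e_a = rng.normal(0.0, 1.0, (n, 2))
# Model B: slightly larger errors, but identical across series, so the
# long-run relation between the forecasts is exactly preserved.
common = rng.normal(0.0, 1.1, n)
e_b = np.column_stack([common, common])

beta = np.array([1.0, -1.0])  # hypothetical cointegrating vector

# By plain trace MSFE model A looks better; once the cointegrating
# restriction enters the loss, model B is preferred.
print(trace_msfe(e_a) < trace_msfe(e_b))
print(triangular_msfe(e_a, beta) > triangular_msfe(e_b, beta))
```

This mirrors the paper's finding in miniature: a model whose forecasts respect the long-run restrictions can dominate under a cointegration-aware loss even when it loses under the standard trace criterion.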

Journal ArticleDOI
TL;DR: In this article, human judgments in New York state sales and use tax by examining the actual practices of information integration have been examined based on the social judgment theory (i.e., the lens model), and a judgment analysis exercise was designed and administered to a person from each agency to understand how information integration is processed among different agencies.
Abstract: Human judgments have become quite important in revenue forecasting processes. This paper centres on human judgments in New York state sales and use tax by examining actual practices of information integration. Based on social judgment theory (i.e., the lens model), a judgment analysis exercise was designed and administered to a person from each agency (the Division of the Budget, Assembly Ways and Means Committee Majority and Minority, and the Senate Finance Committee) to understand how information integration is processed across agencies. The results of the judgment analysis exercise indicated that revenue forecasters placed different weights on cues; in terms of relative and subjective weights, the cues were used differently even though the forecasters were presented with the same information. Copyright © 2004 John Wiley & Sons, Ltd.
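In lens-model judgment analysis, a forecaster's cue weights are typically recovered by regressing their judgments on the cues and then expressing the coefficients as relative weights. The cue names, true weights, and data below are hypothetical, a minimal sketch of that computation rather than the paper's exercise:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
# Hypothetical standardized cues a revenue forecaster might see
# (e.g., employment, retail sales, prior-year collections).
cues = rng.normal(0.0, 1.0, (n, 3))
true_weights = np.array([0.6, 0.3, 0.1])  # assumed judgment policy
judgments = cues @ true_weights + rng.normal(0.0, 0.1, n)

# Regress judgments on cues to recover the forecaster's cue weights,
# then normalize to relative weights summing to one.
X = np.column_stack([np.ones(n), cues])
beta, *_ = np.linalg.lstsq(X, judgments, rcond=None)
raw = np.abs(beta[1:])
relative = raw / raw.sum()
print(relative.round(2))
```

Comparing such relative-weight profiles across forecasters given identical cue information is how the study can show that the agencies weight the same cues differently.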