
Showing papers in "Journal of Forecasting in 2014"


Journal ArticleDOI
TL;DR: In this article, the authors develop econometric methods for using the Bayesian Lasso with time-varying parameter models, and apply them to forecasting EU-area inflation with many predictors.
Abstract: In this paper, we forecast EU-area inflation with many predictors using time-varying parameter models. The facts that time-varying parameter models are parameter-rich and the time span of our data is relatively short motivate a desire for shrinkage. In constant coefficient regression models, the Bayesian Lasso is gaining increasing popularity as an effective tool for achieving such shrinkage. In this paper, we develop econometric methods for using the Bayesian Lasso with time-varying parameter models. Our approach allows for the coefficient on each predictor to be: i) time varying, ii) constant over time or iii) shrunk to zero. The econometric methodology decides automatically which category each coefficient belongs in. Our empirical results indicate the benefits of such an approach.

98 citations
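The shrinkage idea the paper builds on can be illustrated in the constant-coefficient case. Below is a minimal sketch using scikit-learn's (frequentist) Lasso on simulated data — an assumption-laden stand-in, since the paper's actual contribution is a Bayesian Lasso extended to time-varying-parameter models:

```python
# Illustrative sketch only: plain Lasso shrinkage on a simulated
# many-predictor regression. The paper's method is a *Bayesian* Lasso
# for time-varying-parameter models; this shows the shrinkage idea in
# the constant-coefficient setting the authors start from.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
T, K = 120, 30                      # short sample, many predictors
X = rng.standard_normal((T, K))
beta = np.zeros(K)
beta[:3] = [0.8, -0.5, 0.3]         # only three predictors matter
y = X @ beta + 0.1 * rng.standard_normal(T)

fit = Lasso(alpha=0.05).fit(X, y)
n_kept = int(np.sum(fit.coef_ != 0))  # most coefficients are shrunk to zero
```

The Bayesian version replaces the fixed penalty with a hierarchical prior, and the paper's extension further lets each coefficient be time varying, constant, or zero.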


Journal ArticleDOI
TL;DR: The experimental results indicate that non-financial risk factors and a rule-based system help decrease the error rates and the proposed approach outperforms machine learning methods in assessing the risk of financial statement fraud.
Abstract: This study presents a method of assessing financial statement fraud risk. The proposed approach comprises a system of financial and non-financial risk factors, and a hybrid assessment method that combines machine learning methods with a rule-based system. Experiments are performed on data from Chinese companies using four classifiers (logistic regression, back-propagation neural network, C5.0 decision tree and support vector machine) and an ensemble of those classifiers. The proposed ensemble of classifiers outperforms each of the four classifiers individually in accuracy and composite error rate. The experimental results indicate that non-financial risk factors and a rule-based system help decrease the error rates. The proposed approach outperforms machine learning methods alone in assessing the risk of financial statement fraud. Copyright © 2014 John Wiley & Sons, Ltd.

52 citations


Journal ArticleDOI
TL;DR: In this article, the authors compare several multi-period volatility forecasting models, specifically from the MIDAS and HAR families, in terms of out-of-sample volatility forecasting accuracy, using intra-daily returns of the BOVESPA index, and calculate volatility measures such as realized variance, realized power and realized bipower variation to be used as regressors in both models.
Abstract: In this paper we compare several multi-period volatility forecasting models, specifically from MIDAS and HAR families. We perform our comparisons in terms of out-of-sample volatility forecasting accuracy. We also consider combinations of the models' forecasts. Using intra-daily returns of the BOVESPA index, we calculate volatility measures such as realized variance, realized power variation and realized bipower variation to be used as regressors in both models. Further, we use a nonparametric procedure for separately measuring the continuous sample path variation and the discontinuous jump part of the quadratic variation process. Thus MIDAS and HAR specifications with the continuous sample path and jump variability measures as separate regressors are estimated. Our results in terms of mean squared error suggest that regressors involving volatility measures which are robust to jumps (i.e. realized bipower variation and realized power variation) are better at forecasting future volatility. However, we find that, in general, the forecasts based on these regressors are not statistically different from those based on realized variance (the benchmark regressor). Moreover, we find that, in general, the relative forecasting performances of the three approaches (i.e. MIDAS, HAR and forecast combinations) are statistically equivalent. Copyright © 2014 John Wiley & Sons, Ltd.

42 citations
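The realized measures used as regressors in the MIDAS and HAR specifications have standard definitions in the realized-volatility literature. A small sketch (standard formulas, not the authors' code) of realized variance, realized bipower variation and the nonparametric jump component:

```python
# Standard realized measures from intra-day returns (sketch; these are
# textbook definitions, not the authors' implementation).
import numpy as np

def realized_variance(r):
    # RV: sum of squared intra-day returns
    return np.sum(r ** 2)

def realized_bipower(r):
    # BV: (pi/2) * sum |r_i||r_{i-1}|; robust to jumps
    return (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))

def jump_component(r):
    # nonparametric jump measure: max(RV - BV, 0)
    return max(realized_variance(r) - realized_bipower(r), 0.0)

rng = np.random.default_rng(1)
r = 0.001 * rng.standard_normal(288)   # simulated 5-minute returns, one day
r[100] += 0.02                         # inject a single jump
rv, bv = realized_variance(r), realized_bipower(r)
```

Because the jump inflates RV much more than BV, the difference RV − BV isolates the discontinuous part of quadratic variation, which is exactly the separation the paper feeds into the MIDAS and HAR regressions.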


Journal Article
TL;DR: In this article, the forecasting performance of a broad monetary aggregate (M3) in predicting euro area inflation was examined, and the results indicated that the evolution of M3 is still in line with money demand even in the period of the financial and economic crisis.
Abstract: This paper examines the forecasting performance of a broad monetary aggregate (M3) in predicting euro area inflation. Excess liquidity is measured as the difference between the actual money stock and its fundamental value, the latter determined by a money demand function. The out-of-sample forecasting performance is compared to widely used alternatives, such as the term structure of interest rates. The results indicate that the evolution of M3 is still in line with money demand, even in the period of the financial and economic crisis. Monetary indicators are useful for predicting inflation at longer horizons, especially if the forecasting equations are based on measures of excess liquidity. Owing to the stable link between money and inflation, central banks should implement exit strategies from the current policy path as soon as financial conditions are expected to return to normality.

39 citations


Journal ArticleDOI
TL;DR: In this article, the authors explore whether a direct measure of mood explains the Monday effect and find that a greater proportion of investors are more pessimistic in the early days of the week, and become more optimistic as the week progresses.
Abstract: A number of studies have explored the sources of the Monday effect, according to which returns are on average negative on Mondays. We contribute to the literature by exploring whether a direct measure of mood explains the Monday effect. In line with psychological literature, a greater proportion of investors are more pessimistic in the early days of the week, and become more optimistic as the week progresses. We use novel daily mood data from Facebook across 20 international markets to explore the impact of mood on the Monday anomaly. We find that the Monday effect disappears after controlling for mood. In line with our hypothesis that mood drives the Monday effect, we find that the effect is more prominent within small capitalization indices and within collectivist and high-uncertainty-avoidance countries. Investors could consider mood levels to forecast Mondays with more (less) pronounced negative returns. Copyright © 2014 John Wiley & Sons, Ltd.

32 citations


Journal ArticleDOI
TL;DR: In this article, a quantile regression approach to equity premium forecasting is proposed, where robust point forecasts are generated from a set of quantile forecasts using both fixed and time-varying weighting schemes, thereby exploiting the entire distributional information associated with each predictor.
Abstract: We propose a quantile regression approach to equity premium forecasting. Robust point forecasts are generated from a set of quantile forecasts using both fixed and time-varying weighting schemes, thereby exploiting the entire distributional information associated with each predictor. Further gains are achieved by incorporating the forecast combination methodology into our quantile regression setting. Our approach using a time-varying weighting scheme delivers statistically and economically significant out-of-sample forecasts relative to both the historical average benchmark and the combined predictive mean regression modeling approach.

30 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a framework to evaluate the information content of subjective expert density forecasts using micro data from the ECB's Survey of Professional Forecasters (SPF) using scoring functions which evaluate the entire predictive densities, including an evaluation of the impact of density features such as their location, spread, skew and tail risk on density forecast performance.
Abstract: In this paper, we propose a framework to evaluate the information content of subjective expert density forecasts using micro data from the ECB's Survey of Professional Forecasters (SPF). A key aspect of our analysis is the use of scoring functions which evaluate the entire predictive densities, including an evaluation of the impact of density features such as their location, spread, skew and tail risk on density forecast performance. Overall, we find considerable heterogeneity in the performance of the surveyed densities at the individual level. Relative to a set of crude benchmark alternatives, this performance is somewhat better for GDP growth than for inflation, although in the former case it diminishes substantially with the forecast horizon. In addition, relative to the proposed benchmarks, we report evidence of some improvement in the performance of expert densities during the recent period of macroeconomic volatility. However, our analysis also reveals clear evidence of overconfidence or neglected risks in the expert probability assessments, as reflected also in frequent occurrences of events which are assigned a zero probability. Moreover, higher moment features of the expert densities, such as their skew or the degree of probability mass in their tails, are shown not to contribute significantly to improvements in individual density forecast performance. JEL-Code: C220, C530.

30 citations


Journal ArticleDOI
TL;DR: In this article, a hybrid genetic algorithm-support vector regression (GA-SVR) model was proposed for economic forecasting and macroeconomic variable selection, which is applied to the task of forecasting US inflation and unemployment.
Abstract: In this paper a hybrid genetic algorithm–support vector regression (GA-SVR) model in economic forecasting and macroeconomic variable selection is introduced. The proposed algorithm is applied to the task of forecasting US inflation and unemployment. GA-SVR genetically optimizes the SVR parameters and adapts to the optimal feature subset from a feature space of potential inputs. The feature space includes a wide pool of macroeconomic variables that might affect the two series under study. The forecasting performance of GA-SVR is benchmarked with a random walk model, an autoregressive moving average model, a moving average convergence/divergence model, a multi-layer perceptron, a recurrent neural network and a genetic programming algorithm. In terms of our results, GA-SVR outperforms all benchmark models and provides evidence on which macroeconomic variables can be relevant predictors of US inflation and unemployment in the specific period under study. Copyright © 2014 John Wiley & Sons, Ltd.

28 citations
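A simplified stand-in for the joint tuning idea: the paper evolves SVR hyperparameters and the feature subset with a genetic algorithm, whereas the sketch below tunes SVR with a plain grid search over time-series splits on toy data. Everything here (grid values, data-generating process) is an illustrative assumption:

```python
# Simplified stand-in for GA-SVR: a plain grid search tunes the SVR
# hyperparameters on simulated macro-style data. The paper's genetic
# algorithm additionally searches over the feature subset.
import numpy as np
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.svm import SVR

rng = np.random.default_rng(3)
T = 150
X = rng.standard_normal((T, 5))          # candidate macro predictors
y = 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.standard_normal(T)

grid = {"C": [0.1, 1.0, 10.0], "epsilon": [0.01, 0.1]}
cv = TimeSeriesSplit(n_splits=4)         # respect temporal ordering
search = GridSearchCV(SVR(kernel="rbf"), grid, cv=cv).fit(X, y)
best_C = search.best_params_["C"]
```

A genetic algorithm replaces the exhaustive grid with crossover and mutation over a population of candidate (parameters, feature-subset) chromosomes, which scales better when the feature pool is large.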


Journal ArticleDOI
TL;DR: In this article, the authors assess the performance of boosting when forecasting a wide range of macroeconomic variables, and find that boosting mostly outperforms the autoregressive benchmark, and that K-fold cross-validation works much better as stopping criterion than the commonly used information criteria.
Abstract: The use of large datasets for macroeconomic forecasting has received a great deal of interest recently. Boosting is one possible method of using high-dimensional data for this purpose. It is a stage-wise additive modelling procedure, which, in a linear specification, becomes a variable selection device that iteratively adds the predictors with the largest contribution to the fit. Using data for the United States, the euro area and Germany, we assess the performance of boosting when forecasting a wide range of macroeconomic variables. Moreover, we analyse to what extent its forecasting accuracy depends on the method used for determining its key regularization parameter: the number of iterations. We find that boosting mostly outperforms the autoregressive benchmark, and that K-fold cross-validation works much better as stopping criterion than the commonly used information criteria. Copyright © 2014 John Wiley & Sons, Ltd.

27 citations
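The stage-wise procedure described above can be sketched directly: componentwise L2-boosting refits the current residuals with each single predictor, adds the best one with a small step, and repeats. The sketch fixes the iteration count for brevity, whereas the paper's point is precisely that this count should be chosen by K-fold cross-validation:

```python
# Minimal componentwise L2-boosting sketch. At each iteration the single
# predictor that best fits the current residuals is added with step nu.
# (The paper selects n_iter by K-fold CV; here it is fixed for brevity.)
import numpy as np

def boost(X, y, n_iter=50, nu=0.1):
    T, K = X.shape
    coef = np.zeros(K)
    resid = y - y.mean()
    for _ in range(n_iter):
        # least-squares slope of each predictor against the residuals
        b = X.T @ resid / np.sum(X ** 2, axis=0)
        sse = np.sum((resid[:, None] - X * b) ** 2, axis=0)
        j = int(np.argmin(sse))          # best-fitting predictor
        coef[j] += nu * b[j]
        resid -= nu * b[j] * X[:, j]
    return coef

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 20))
y = X @ np.r_[1.0, -0.5, np.zeros(18)] + 0.1 * rng.standard_normal(200)
coef = boost(X, y)
```

Because each iteration touches only one predictor, stopping early leaves most coefficients exactly at zero — this is what turns boosting into a variable selection device in the linear case.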


Journal ArticleDOI
TL;DR: This article showed that persistent lead-lag relationships spanning mere fractions of a seccond exist in all three possible pairings of the S&P500, FTSE100, and DAX futures contracts.
Abstract: We show that persistent lead-lag relationships spanning mere fractions of a seccond exist in all three possible pairings of the S&P500, FTSE100, and DAX futures contracts. These relationships exhibit clear intraday patterns which help us to forecast mid-quote changes in lagging contracts with directional accuracy in excess of 85%. A simple algorithmic trading strategy exploiting these relations yields economically significant profits which are robust to market impact costs and the bid-ask spread. We find that price slippage and infrastructure costs are our most important limits to arbitrage. Our results support the Grossman and Stiglitz (1976, 1980) view that informational ine?fficiencies incentivize arbitrageurs to eliminate mispricings.

25 citations
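The detection step behind such results can be sketched with a cross-correlation scan: if one series leads another, the correlation peaks at a nonzero lag. The simulated one-step lead below is illustrative; the paper works with sub-second futures data:

```python
# Sketch of lead-lag detection via cross-correlation of returns.
# Simulated data with a known one-step lead (illustrative only; the
# paper operates on sub-second futures mid-quotes).
import numpy as np

rng = np.random.default_rng(5)
n = 5000
leader = rng.standard_normal(n)
lagger = 0.6 * np.r_[0.0, leader[:-1]] + 0.8 * rng.standard_normal(n)

def xcorr(a, b, lag):
    # correlation between a_{t-lag} and b_t
    if lag > 0:
        a, b = a[:-lag], b[lag:]
    return np.corrcoef(a, b)[0, 1]

corrs = [xcorr(leader, lagger, k) for k in range(4)]
best_lag = int(np.argmax(corrs))         # recovers the one-step lead
```

In practice the lag at which the correlation peaks pins down which contract leads, and the sign and size of that peak feed the directional forecast for the lagging contract.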


Journal ArticleDOI
TL;DR: A model-free option pricing approach with neural networks, which can be applied to real-time pricing and hedging of FX options, which concludes that the performance of closed-form pricing models depends highly on the volatility estimator whereas neural networks can avoid this estimation problem but require market liquidity for training.
Abstract: High-frequency trading and automated algorithms impose high requirements on computational methods. We provide a model-free option pricing approach with neural networks, which can be applied to real-time pricing and hedging of FX options. In contrast to well-known theoretical models, an essential advantage of our approach is the simultaneous pricing across different strike prices and parsimonious use of real-time input variables. To test its ability for the purpose of high-frequency trading, we perform an empirical run-time trading simulation with a 4-week tick dataset of EUR/USD options on currency futures. In very short non-overlapping 15-minute out-of-sample intervals, theoretical option prices derived from the Black model compete against nonparametric option prices from two different neural network topologies. We show that the approximated pricing function of learning networks is suitable for generating fast run-time option pricing evaluation, as their performance is slightly better than that of theoretical prices. The derivation of the network function is also useful for performing hedging strategies. We conclude that the performance of closed-form pricing models depends highly on the volatility estimator, whereas neural networks can avoid this estimation problem but require market liquidity for training. Nevertheless, we also have to take particular enhancements into account, which give us useful hints for further research and steps. Copyright © 2014 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, the authors examined the relationship between stock prices and commodity prices and whether this can be used to forecast stock returns and found that the historical mean model outperformed the forecast models in both the static and recursive approaches.
Abstract: This paper examines the relationship between stock prices and commodity prices and whether this can be used to forecast stock returns. As both prices are linked to expected future economic performance they should exhibit a long-run relationship. Moreover, changes in sentiment towards commodity investing may affect the nature of the response to disequilibrium. Results support cointegration between stock and commodity prices, while Bai–Perron tests identify breaks in the forecast regression. Forecasts are computed using a standard fixed (static) in-sample/out-of-sample approach and by both recursive and rolling regressions, which incorporate the effects of changing forecast parameter values. A range of model specifications and forecast metrics are used. The historical mean model outperforms the forecast models in both the static and recursive approaches. However, in the rolling forecasts, those models that incorporate information from the long-run stock price/commodity price relationship outperform both the historical mean and other forecast models. Of note, the historical mean still performs relatively well compared to standard forecast models that include the dividend yield and short-term interest rates but not the stock/commodity price ratio. Copyright © 2014 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this article, a generalized method of moments (GMM) estimator was used to fit a new multifractal model for realized volatility, with forecasting performed by means of best linear forecasts derived via the Levinson-Durbin algorithm.
Abstract: Multifractal models have recently been introduced as a new type of data-generating process for asset returns and other financial data. Here we propose an adaptation of this model for realized volatility. We estimate this new model via generalized method of moments and perform forecasting by means of best linear forecasts derived via the Levinson–Durbin algorithm. Its out-of-sample performance is compared against other popular time series specifications. Using an intra-day dataset for five major international stock market indices, we find that the multifractal model for realized volatility improves upon forecasts of its earlier counterparts based on daily returns and of many other volatility models. While the more traditional RV-ARFIMA model comes out as the most successful model (in terms of the number of cases in which it has the best forecasts for all combinations of forecast horizons and evaluation criteria), the new model often performs significantly better during the turbulent times of the recent financial crisis. Copyright © 2014 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: The authors showed that the performance of alternative parametric volatility models, like EGARCH (exponential general autoregressive conditional heteroskedasticity) or GARCH, and Markov regime-switching models, can be improved if they are combined with skewed distributions of asset return innovations.
Abstract: This paper provides clear-cut evidence that the out-of-sample VaR (value-at-risk) forecasting performance of alternative parametric volatility models, like EGARCH (exponential general autoregressive conditional heteroskedasticity) or GARCH, and Markov regime-switching models, can be considerably improved if they are combined with skewed distributions of asset return innovations. The performance of these models is found to be similar to that of the EVT (extreme value theory) approach. The performance of the latter approach can also be improved if asset return innovations are assumed to be skewed distributed. The performance of the Markov regime-switching model is considerably improved if this model allows for EGARCH effects, for all different volatility regimes considered. Copyright © 2014 John Wiley & Sons, Ltd.
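The mechanical effect of swapping a symmetric for a skewed innovation distribution in a VaR forecast can be sketched in a few lines. The skew-normal below is a stand-in assumption — the paper considers its own family of skewed return-innovation distributions — and the volatility value is arbitrary:

```python
# Hedged sketch: one-day VaR from a conditional volatility, comparing a
# symmetric normal quantile with a left-skewed innovation quantile.
# scipy's skew-normal is an illustrative stand-in for the skewed
# distributions studied in the paper; sigma_t is an arbitrary value
# standing in for an EGARCH/GARCH conditional volatility.
from scipy.stats import norm, skewnorm

alpha = 0.01                   # 1% VaR level
sigma_t = 0.015                # assumed conditional volatility

q_normal = norm.ppf(alpha)
q_skewed = skewnorm.ppf(alpha, a=-3)   # left-skewed innovations
var_normal = -sigma_t * q_normal       # VaR as a positive loss number
var_skewed = -sigma_t * q_skewed
```

A left-skewed innovation distribution places more mass in the loss tail, so the skewed VaR exceeds the normal one — which is why combining parametric volatility models with skewed innovations reduces VaR violations in the paper's backtests.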

Journal ArticleDOI
TL;DR: In this article, an extension of the Euro-Sting single-index dynamic factor model is used to construct short-term forecasts of quarterly GDP growth for the euro area by accounting for financial variables as leading indicators.
Abstract: This paper uses an extension of the Euro-Sting single-index dynamic factor model to construct short-term forecasts of quarterly GDP growth for the euro area by accounting for financial variables as leading indicators. From a simulated real-time exercise, the model is used to investigate the forecasting accuracy across the different phases of the business cycle. Our extension is also used to evaluate the relative forecasting ability of the two most reliable business cycle surveys for the euro area: the PMI and the ESI. We show that the latter produces more accurate GDP forecasts than the former. Finally, the proposed model is also characterized by its great ability to capture the European business cycle, as well as the probabilities of expansion and/or contraction periods. Copyright © 2014 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, the authors consider whether the space spanned by the latent factor structure in mortality data can be adequately described by developments in gross domestic product, health expenditure and lifestyle-related risk factors using statistical techniques developed in macroeconomics and finance.
Abstract: Mortality models used for forecasting are predominantly based on the statistical properties of time series and do not generally incorporate an understanding of the forces driving secular trends. This paper addresses three research questions: Can the factors found in stochastic mortality-forecasting models be associated with real-world trends in health-related variables? Does inclusion of health-related factors in models improve forecasts? Do resulting models give better forecasts than existing stochastic mortality models? We consider whether the space spanned by the latent factor structure in mortality data can be adequately described by developments in gross domestic product, health expenditure and lifestyle-related risk factors using statistical techniques developed in macroeconomics and finance. These covariates are then shown to improve forecasts when incorporated into a Bayesian hierarchical model. Results are comparable or better than benchmark stochastic mortality models. Copyright © 2014 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this article, the authors compare several extensions and alternative regime-switching formulations, including logistic specifications of the underlying states, logistic smooth transition and finite mixture regression, and conclude that the finite mixture approach performs well in an extensive, out-of-sample forecasting comparison.
Abstract: Forecasting prices in electricity markets is a crucial activity for both risk management and asset optimization. Intra-day power prices have a fine structure and are driven by an interaction of fundamental, behavioural and stochastic factors. Furthermore, there are reasons to expect the functional forms of price formation to be nonlinear in these factors and therefore specifying forecasting models that perform well out-of-sample is methodologically challenging. Markov regime switching has been widely advocated to capture some aspects of the nonlinearity, but it may suffer from overfitting and unobservability in the underlying states. In this paper we compare several extensions and alternative regime-switching formulations, including logistic specifications of the underlying states, logistic smooth transition and finite mixture regression. The finite mixture approach to regime switching performs well in an extensive, out-of-sample forecasting comparison. Copyright © 2014 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: The aim of this research was to analyse the different results that can be achieved using support vector machines (SVM) to forecast the weekly change movement of different simulated markets.
Abstract: The aim of this research was to analyse the different results that can be achieved using support vector machines (SVM) to forecast the weekly change movement of different simulated markets. The markets are generated by a GARCH model based on the S&P index; the results are also good in trending simulated markets. Copyright © 2014 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: The results indicate that GEP and GP produce significant trading performance when applied to ASE 20 and outperform the well‐known existing methods.
Abstract: This paper presents an application of the gene expression programming (GEP) and integrated genetic programming (GP) algorithms to the modelling of ASE 20 Greek index. GEP and GP are robust evolutionary algorithms that evolve computer programs in the form of mathematical expressions, decision trees or logical expressions. The results indicate that GEP and GP produce significant trading performance when applied to ASE 20 and outperform the well-known existing methods. The trading performance of the derived models is further enhanced by applying a leverage filter. Copyright © 2014 John Wiley & Sons, Ltd.

Journal ArticleDOI
Abstract: Recent literature has suggested that macroeconomic forecasters may have asymmetric loss functions, and that there may be heterogeneity across forecasters in the degree to which they weigh under- and over-predictions. Using an individual-level analysis that exploits the Survey of Professional Forecasters respondents’ histogram forecasts, we find little evidence of asymmetric loss for the inflation forecasters. Copyright © 2013 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, the authors compared the performance of different models for forecasting the UK government bond yield curve before and after the dramatic lowering of short-term interest rates from October 2008.
Abstract: This paper compares the experience of forecasting the UK government bond yield curve before and after the dramatic lowering of short-term interest rates from October 2008. Out-of-sample forecasts for 1, 6 and 12 months are generated from each of a dynamic Nelson-Siegel model, autoregressive models for both yields and the principal components extracted from those yields, a slope regression and a random walk model. At short forecasting horizons, there is little difference in the performance of the models both prior to and after 2008. However, for medium- to longer-term horizons, the slope regression provided the best forecasts prior to 2008, while the recent experience of near-zero short interest rates coincides with a period of forecasting superiority for the autoregressive and dynamic Nelson-Siegel models. © 2014 John Wiley & Sons, Ltd.
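The dynamic Nelson-Siegel model referenced above builds on three factor loadings (level, slope, curvature) evaluated at each maturity. A sketch of the static cross-sectional step, using the Diebold-Li parameterization as an assumption and synthetic yields:

```python
# Sketch of the Nelson-Siegel factor loadings (Diebold-Li form, an
# assumed parameterization) and the cross-sectional least-squares fit
# that dynamic versions apply at each date. Data are synthetic.
import numpy as np

def ns_loadings(tau, lam=0.0609):
    # loadings at maturities tau (months): level, slope, curvature
    x = lam * tau
    slope = (1 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(tau), slope, slope - np.exp(-x)])

tau = np.array([3, 6, 12, 24, 60, 120], dtype=float)
true_beta = np.array([4.0, -2.0, 1.5])   # level, slope, curvature factors
yields = ns_loadings(tau) @ true_beta    # synthetic noise-free curve

beta_hat, *_ = np.linalg.lstsq(ns_loadings(tau), yields, rcond=None)
```

In the dynamic version, the three estimated factors form a time series that is itself modelled (e.g. with autoregressions) to produce the 1-, 6- and 12-month yield-curve forecasts the paper evaluates.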


Journal ArticleDOI
TL;DR: In this article, a two-region dynamic stochastic general equilibrium (DSGE) model of an open economy within the European Monetary Union is presented, which is built in the New Keynesian tradition and contains real and nominal rigidities such as habit formation in consumption, price and wage stickiness.
Abstract: In this paper we lay out a two-region dynamic stochastic general equilibrium (DSGE) model of an open economy within the European Monetary Union. The model, which is built in the New Keynesian tradition, contains real and nominal rigidities such as habit formation in consumption and price and wage stickiness, as well as a rich stochastic structure. The framework also incorporates a theory of unemployment, small open economy aspects and a nominal interest rate that is set exogenously by the area-wide monetary authority. As an illustration, the model is estimated on Luxembourgish data. We evaluate the properties of the estimated model and assess its forecasting performance relative to reduced-form models such as vector autoregressions (VARs). In addition, we study the empirical validity of the DSGE model restrictions by applying a DSGE-VAR approach. Copyright © 2014 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: This paper analyzed the behavior of experts who quote forecasts for monthly SKU-level sales data where they compare data before and after the moment that experts received different kinds of feedback on their behavior.
Abstract: We analyze the behavior of experts who quote forecasts for monthly SKU-level sales data, comparing data before and after the moment the experts received different kinds of feedback on their behavior. We have data for 21 experts, located in as many countries, who make SKU-level forecasts for a variety of pharmaceutical products for October 2006 to September 2007. We study the behavior of the experts by comparing their forecasts with those from an automated statistical program, and we report the forecast accuracy over these 12 months. In September 2007 these experts were given feedback on their behavior and received training at the headquarters' office, where specific attention was given to the ins and outs of the statistical program. Next, we study the behavior of the experts for the 3 months after the training session, that is, October 2007 to December 2007. Our main conclusion is that in the second period the experts' forecasts deviated less from the statistical forecasts and that their accuracy improved substantially.

Journal ArticleDOI
TL;DR: In this paper, the authors evaluate the informational content of ex post and ex ante predictors of periods of excess stock valuation for a cross-section comprising 10 OECD economies and a time span of at most 40 years.
Abstract: We evaluate the informational content of ex post and ex ante predictors of periods of excess stock (market) valuation. For a cross-section comprising 10 OECD economies and a time span of at most 40 years, alternative binary chronologies of price bubble periods are determined. Using these chronologies as dependent processes and a set of macroeconomic and financial variables as explanatory variables, panel logit regressions are carried out. With model estimates at hand, both in-sample and out-of-sample forecasts are made. The set of 13 potential predictors is classified in measures of macroeconomic or monetary performance, stock market characteristics and descriptors of capital valuation. The latter, in particular the price-to-book ratio, turn out to have strongest in-sample and out-of-sample explanatory content for the emergence of price bubbles. Copyright (c) 2013 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, the effects of disaggregation on forecast accuracy for nonstationary time series using dynamic factor models were analyzed for several European countries of the euro area and their aggregated GDP.
Abstract: This paper focuses on the effects of disaggregation on forecast accuracy for nonstationary time series using dynamic factor models. We compare the forecasts obtained directly from the aggregated series based on its univariate model with the aggregation of the forecasts obtained for each component of the aggregate. Within this framework (first obtain the forecasts for the component series and then aggregate the forecasts), we try two different approaches: (i) generate forecasts from the multivariate dynamic factor model and (ii) generate the forecasts from univariate models for each component of the aggregate. In this regard, we provide analytical conditions for the equality of forecasts. The results are applied to quarterly gross domestic product (GDP) data of several European countries of the euro area and to their aggregated GDP. This will be compared to the prediction obtained directly from modeling and forecasting the aggregate GDP of these European countries. In particular, we would like to check whether long-run relationships between the levels of the components are useful for improving the forecasting accuracy of the aggregate growth rate. We will make forecasts at the country level and then pool them to obtain the forecast of the aggregate. The empirical analysis suggests that forecasts built by aggregating the country-specific models are more accurate than forecasts constructed using the aggregated data. Copyright © 2014 John Wiley & Sons, Ltd.

Posted ContentDOI
TL;DR: In this article, the authors examined the long-run dynamics and the cyclical structure of the US stock market using fractional integration techniques and showed that the fractional cyclical model outperformed the others in a number of cases.
Abstract: This paper examines the long-run dynamics and the cyclical structure of the US stock market using fractional integration techniques. We implement a version of the tests of Robinson (1994a), which enables one to consider unit (or fractional) roots both at the zero (long-run) and at the cyclical frequencies. We examine the following series: inflation, real risk-free rate, real stock returns, equity premium and price/dividend ratio, annually from 1871 to 1993. When focusing exclusively on the long-run or zero frequency, the estimated order of integration varies considerably, but nonstationarity is found only for the price/dividend ratio. When the cyclical component is also taken into account, the series appear to be stationary but to exhibit long memory with respect to both components in almost all cases. The exception is the price/dividend ratio, whose order of integration is higher than 0.5 but smaller than 1 for the long-run frequency, and is constrained between 0 and 0.5 for the cyclical component. Also, mean reversion occurs in all cases. Finally, we use six different criteria to compare the forecasting performance of the fractional (at zero and cyclical frequencies) models with others based on fractional and integer differentiation exclusively at the zero frequency. The results show that the fractional cyclical model outperforms the others in a number of cases.

Journal ArticleDOI
TL;DR: In this article, the authors introduced a new monthly euro Area-wide Leading Indicator (ALI) for the euro area growth cycle which is composed of nine leading series and derived from a one-sided bandpass filter.
Abstract: This paper introduces a new monthly euro Area-wide Leading Indicator (ALI) for the euro area growth cycle which is composed of nine leading series and derived from a one-sided bandpass filter. The main findings are that (i) the GDP growth cycle in the euro area can be well tracked, in a timely manner and at monthly frequency, by a reference growth cycle indicator (GCI) derived from industrial production excluding construction, (ii) the ALI reliably leads turning points in the GCI by 5 months and (iii) longer leading components of the ALI are good predictors of the GCI up to 9 months ahead. A real-time case study on the ALI's capabilities for signalling turning points in the euro area growth cycle from 2007 to 2011 confirms these findings. Copyright © 2013 John Wiley & Sons, Ltd.
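The claim that the ALI leads turning points in the GCI by about 5 months can be checked with a simple lead-lag cross-correlation scan. The sketch below is not the paper's bandpass-filter construction; it only illustrates how a lead is detected, on synthetic series whose names and 5-period lead are assumptions of the example.

```python
import numpy as np

def best_lead(leading, reference, max_lead=12):
    """Lead (in periods) at which `leading` correlates most with future `reference`."""
    corrs = [np.corrcoef(leading[:-k], reference[k:])[0, 1]
             for k in range(1, max_lead + 1)]
    return int(np.argmax(corrs)) + 1

rng = np.random.default_rng(1)
ali = rng.normal(size=300)
# Construct a reference cycle that follows the leading indicator by 5 periods
gci = np.concatenate([rng.normal(size=5), ali[:-5]]) + 0.2 * rng.normal(size=300)
print(best_lead(ali, gci))  # 5
```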

Journal ArticleDOI
TL;DR: The semi‐parametric SVM method is utilized and shown to be more efficient than QML under the skewed Student's‐t distributed error; its performance is further investigated by applying separately a Gaussian kernel and a wavelet kernel.
Abstract: This paper concentrates on comparing the estimation and forecasting ability of quasi‐maximum likelihood (QML) and support vector machines (SVM) for financial data. The financial series are fitted into a family of asymmetric power ARCH (APARCH) models. As skewness and kurtosis are common characteristics of financial series, a skew‐t distributed innovation is assumed to model the fat tails and asymmetry. Prior research indicates that the QML estimator for the APARCH model is inefficient when the data distribution departs from normality, so the current paper utilizes the semi‐parametric SVM method and shows that it is more efficient than QML under the skewed Student's‐t distributed error. As the SVM is a kernel‐based technique, we further investigate its performance by applying separately a Gaussian kernel and a wavelet kernel. The results suggest that the SVM‐based method generally performs better than QML for both in‐sample and out‐of‐sample data. The outcomes also highlight the fact that the wavelet kernel outperforms the Gaussian kernel with lower forecasting error, better generalization capability and greater computational efficiency. Copyright © 2014 John Wiley & Sons, Ltd.
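The APARCH(1,1) recursion at the heart of this comparison is sigma_t^delta = omega + alpha (|eps_{t-1}| - gamma eps_{t-1})^delta + beta sigma_{t-1}^delta, where gamma captures the leverage asymmetry and delta is the power term. A minimal sketch of the recursion (illustrative parameter values, not estimates from the paper):

```python
import numpy as np

def aparch_sigma(eps, omega, alpha, gamma, beta, delta):
    """Conditional scale path of an APARCH(1,1) given a shock series eps.

    sigma_t**delta = omega + alpha*(|eps_{t-1}| - gamma*eps_{t-1})**delta
                     + beta*sigma_{t-1}**delta
    """
    sd = np.empty(len(eps))               # holds sigma_t**delta
    sd[0] = omega / (1.0 - alpha - beta)  # rough unconditional starting value
    for t in range(1, len(eps)):
        sd[t] = (omega
                 + alpha * (abs(eps[t - 1]) - gamma * eps[t - 1]) ** delta
                 + beta * sd[t - 1])
    return sd ** (1.0 / delta)

rng = np.random.default_rng(2)
eps = rng.standard_t(df=5, size=500)  # heavy-tailed shocks, as in financial returns
sig = aparch_sigma(eps, omega=0.05, alpha=0.08, gamma=0.3, beta=0.85, delta=1.5)
print(sig[:3])
```

With |gamma| < 1 the term (|eps| - gamma*eps) is nonnegative, so the power delta is well defined; negative shocks raise volatility more than positive ones when gamma > 0.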

Journal ArticleDOI
TL;DR: In this article, a discrete-time varying-coefficient forward hazard model (DVFHM) is proposed to predict forward default probabilities of firms, which is shown to be a reliable and flexible model for forward default prediction.
Abstract: For predicting forward default probabilities of firms, the discrete-time forward hazard model (DFHM) is proposed. We derive maximum likelihood estimates for the parameters in DFHM. To improve its predictive power in practice, we also consider an extension of DFHM by replacing its constant coefficients of firm-specific predictors with smooth functions of macroeconomic variables. The resulting model is called the discrete-time varying-coefficient forward hazard model (DVFHM). Through local maximum likelihood analysis, DVFHM is shown to be a reliable and flexible model for forward default prediction. We use real panel datasets to illustrate these two models. Using an expanding rolling window approach, our empirical results confirm that DVFHM has better and more robust out-of-sample performance on forward default prediction than DFHM, in the sense of yielding more accurate predicted numbers of defaults and predicted survival times. Thus DVFHM is a useful alternative for studying forward default losses in portfolios. Copyright © 2013 John Wiley & Sons, Ltd.
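In a discrete-time hazard model of this kind, each period's default hazard is typically a logistic function of covariates, and the forward default probability at horizon t is the hazard at t times the probability of having survived the earlier periods. The sketch below illustrates that bookkeeping only; the coefficient vector and covariates are hypothetical, and the paper's DFHM/DVFHM estimation is not reproduced here.

```python
import numpy as np

def forward_default_probs(x_path, beta):
    """Forward default probabilities implied by a logistic discrete-time hazard.

    x_path: (T, p) covariate path for one firm; beta: (p,) coefficients.
    Returns P(default exactly in period t), t = 1..T.
    """
    h = 1.0 / (1.0 + np.exp(-(x_path @ beta)))           # per-period hazards
    surv_prev = np.concatenate([[1.0], np.cumprod(1.0 - h)[:-1]])
    return surv_prev * h                                 # P(survive to t-1) * hazard at t

beta = np.array([-3.0, 1.2])                             # hypothetical coefficients
x = np.column_stack([np.ones(4), np.linspace(0.0, 1.0, 4)])  # intercept + one covariate
p = forward_default_probs(x, beta)
print(p, p.sum())  # per-period default probabilities; the sum stays below 1
```

The DVFHM extension in the paper lets beta vary smoothly with macroeconomic variables instead of being a fixed vector as in this sketch.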