
Showing papers in "Journal of Forecasting in 2020"


Journal ArticleDOI
TL;DR: In this paper, the authors proposed various ways of constructing two types of information flow, based on realized volatility (RV) and implied volatility (IV), in multiple international markets and used a heterogeneous autoregressive framework to forecast the future volatility of each market for 1 day to 22 days ahead.
Abstract: Inspired by the commonly held view that international stock market volatility is equivalent to cross‐market information flow, we propose various ways of constructing two types of information flow, based on realized volatility (RV) and implied volatility (IV), in multiple international markets. We focus on the RVs derived from the intraday prices of eight international stock markets and use a heterogeneous autoregressive framework to forecast the future volatility of each market for 1 day to 22 days ahead. Our Diebold‐Mariano tests provide strong evidence that information flow with IV enhances the accuracy of forecasting international RVs over all of the prediction horizons. The results of a model confidence set test show that a market's own IV and the first principal component of the international IVs exhibit the strongest predictive ability. In addition, the use of information flows with IV can further increase economic returns. Our results are supported by the findings of a wide range of robustness checks.
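
A minimal sketch may help fix ideas on the HAR framework referenced above: future realized volatility is regressed on daily, weekly, and monthly RV averages plus an implied-volatility regressor standing in for cross-market information flow. The synthetic series, window lengths, and variable names are illustrative assumptions, not the paper's data or exact specification.

```python
# Hedged sketch of a HAR-RV regression augmented with an implied-volatility term.
import numpy as np

rng = np.random.default_rng(0)
T = 1000
rv = np.abs(rng.standard_normal(T)) * 0.01   # stand-in for daily realized volatility
iv = rv + 0.002 * rng.standard_normal(T)     # stand-in for an implied-volatility series

def har_features(rv, iv, h=1):
    """Build HAR regressors: daily, weekly (5-day), monthly (22-day) RV means plus IV."""
    X, y = [], []
    for t in range(22, len(rv) - h):
        X.append([1.0,
                  rv[t],                      # daily component
                  rv[t - 4:t + 1].mean(),     # weekly component
                  rv[t - 21:t + 1].mean(),    # monthly component
                  iv[t]])                     # cross-market information flow proxy
        y.append(rv[t + h])
    return np.array(X), np.array(y)

X, y = har_features(rv, iv, h=1)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS, as is standard for HAR models
print("HAR-IV coefficients:", beta)
```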

76 citations


Journal ArticleDOI
TL;DR: In this paper, the authors introduce volatility impulse response functions (VIRF) for dynamic conditional correlation (DCC)-generalized autoregressive conditional heteroskedasticity (GARCH) models.
Abstract: This study introduces volatility impulse response functions (VIRF) for dynamic conditional correlation–generalized autoregressive conditional heteroskedasticity (DCC-GARCH) models. In addition, the implications with respect to network analysis—using the connectedness approach of Diebold and Yılmaz (Journal of Econometrics, 2014, 182(1), 119–134)—are discussed. The main advantages of this framework are (i) that the time-varying dynamics do not rely on a rolling-window approach and (ii) that it allows us to test whether the propagation mechanism is time varying or not. An empirical analysis of the volatility transmission mechanism across foreign exchange rate returns is illustrated. The results indicate that the Swiss franc and the euro are net transmitters of shocks, whereas the British pound and the Japanese yen are net receivers of volatility shocks. Finally, the findings suggest a high degree of comovement across European currencies, which has important portfolio and risk management implications.

58 citations


Journal ArticleDOI
TL;DR: A novel credit scoring model is proposed that forecasts the probability of default for each applicant and guides lenders' decision-making in P2P lending, utilizing an advanced gradient boosting decision tree technique to predict default loans.
Abstract: Peer‐to‐peer (P2P) lending is facing severe information asymmetry problems and depends highly on the internal credit scoring system. This paper provides a novel credit scoring model, which forecasts the probability of default for each applicant and guides the lenders' decision‐making in P2P lending. The proposal is expected to improve the existing credit scoring models in P2P lending from two aspects, namely the classifier and the usage of narrative data. We utilize an advanced gradient boosting decision tree technique (i.e., CatBoost) to predict default loans. Moreover, a soft information extraction technique based on keyword clustering is developed to compensate for the insufficient hard credit data. Validated on three real‐world datasets, the experimental results demonstrate that variables extracted from narrative data are powerful features, and the utilization of narrative data significantly improves the predictability relative to solely using hard information. The results of sensitivity analysis reveal that CatBoost outperforms the industry benchmark under different cluster numbers of extracted soft information; meanwhile a small number of clusters (e.g., three) is preferred for consideration of model performance, computational cost, and comprehensibility. We finally facilitate a discussion on practical implication and explanatory considerations.
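
For the classifier component, a minimal sketch of CatBoost producing default probabilities is given below; the feature names, the cluster construction, and the synthetic data are hypothetical stand-ins for the paper's hard and soft information.

```python
# Hedged sketch: CatBoost default-probability scoring with a categorical
# "soft" feature standing in for a narrative keyword-cluster id.
import numpy as np
import pandas as pd
from catboost import CatBoostClassifier

rng = np.random.default_rng(42)
n = 2000
hard = rng.standard_normal((n, 5))                  # stand-ins for hard credit variables
cluster = rng.integers(0, 3, size=n)                # stand-in keyword-cluster id (3 clusters)
X = pd.DataFrame(hard, columns=["income", "debt_ratio", "amount", "term", "rate"])
X["narrative_cluster"] = cluster.astype(str)        # soft feature treated as categorical
y = (hard[:, 1] + 0.5 * (cluster == 2) + rng.standard_normal(n) > 1).astype(int)

model = CatBoostClassifier(iterations=300, depth=6, learning_rate=0.1,
                           loss_function="Logloss", verbose=False)
model.fit(X, y, cat_features=["narrative_cluster"])
pd_hat = model.predict_proba(X)[:, 1]               # probability of default per applicant
print("mean predicted PD:", pd_hat.mean())
```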

52 citations


Journal ArticleDOI
TL;DR: A long short-term memory (LSTM) prediction model in which the actual delay time corresponds to the dependent variable was established via Python; the results indicated that the LSTM model outperformed random forest and artificial neural network models.
Abstract: Delay prediction is an important issue associated with train timetabling and dispatching. Based on real-world operation records, accurate forecasting of delays is of immense significance for train operation and dispatchers' decisions. In this study, we established a model that illustrates the interaction between train delays and their affecting factors via train describer records on a Dutch railway line. Based on the main factors that affect train delay and the time series trend, we determined the independent and dependent variables. A long short-term memory (LSTM) prediction model in which the actual delay time corresponds to the dependent variable was established via Python. Finally, its prediction accuracy was compared with that of the random forest model and the artificial neural network model. The results indicated that the LSTM model outperformed the other two models.
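
A minimal sketch of an LSTM regressor of the kind described, with delays arranged into sliding windows; the window length, layer sizes, and synthetic delay series are illustrative assumptions.

```python
# Hedged sketch: next-step delay prediction with a small Keras LSTM.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)
delays = np.cumsum(rng.standard_normal(500)) + 60.0   # stand-in delay series (seconds)

window = 10
X = np.array([delays[i:i + window] for i in range(len(delays) - window)])
y = delays[window:]
X = X[..., np.newaxis]                                # (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),                         # actual delay time as target
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("next-delay forecast:", model.predict(X[-1:], verbose=0).item())
```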

38 citations


Journal ArticleDOI
TL;DR: In this paper, a Bayesian vector autoregressive model with multivariate stochastic volatility is proposed to handle vast dimensional information sets, with simulation-based fully Bayesian inference remaining feasible when the dimensionality is large but the time series length is moderate.
Abstract: We develop a Bayesian vector autoregressive (VAR) model with multivariate stochastic volatility that is capable of handling vast dimensional information sets. Three features are introduced to permit reliable estimation of the model. First, we assume that the reduced‐form errors in the VAR feature a factor stochastic volatility structure, allowing for conditional equation‐by‐equation estimation. Second, we apply recently developed global‐local shrinkage priors to the VAR coefficients to cure the curse of dimensionality. Third, we utilize recent innovations to efficiently sample from high‐dimensional multivariate Gaussian distributions. This makes simulation‐based fully Bayesian inference feasible when the dimensionality is large but the time series length is moderate. We demonstrate the merits of our approach in an extensive simulation study and apply the model to US macroeconomic data to evaluate its forecasting capabilities.
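
The paper's global-local shrinkage priors and factor stochastic volatility require MCMC machinery; as a simplified stand-in, the sketch below shows how a ridge-type (Gaussian-prior) shrinkage posterior mean tames a VAR's coefficient count, estimated jointly across equations.

```python
# Hedged sketch: shrinkage estimation of a VAR(p) coefficient matrix.
import numpy as np

rng = np.random.default_rng(7)
T, k, p = 200, 10, 2                      # sample length, number of variables, VAR lags
Y = rng.standard_normal((T, k)).cumsum(axis=0) * 0.1

# Stack lagged regressors: X_t = [1, y_{t-1}', y_{t-2}']
X = np.hstack([np.ones((T - p, 1))] + [Y[p - l:T - l] for l in range(1, p + 1)])
Z = Y[p:]

lam = 10.0                                # prior precision: larger = more shrinkage
B = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Z)
print("coefficient matrix shape:", B.shape)   # (1 + k*p, k), one column per equation
```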

38 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the time-varying volatility patterns of some major commodities as well as the potential factors that drive their long-term volatility component using commodity futures for crude oil (WTI and Brent), gold, silver and platinum.
Abstract: This paper investigates the time-varying volatility patterns of some major commodities as well as the potential factors that drive their long-term volatility component. For this purpose, we make use of a recently proposed GARCH-MIDAS approach which typically allows us to examine the role of economic and financial variables of different frequencies. Using commodity futures for crude oil (WTI and Brent), gold, silver and platinum, our results show the necessity of disentangling the short- and long-term components in modeling and forecasting commodity volatility. They also indicate that the long-term volatility of most commodity futures is significantly driven by the level of the general real economic activity as well as the changes in consumer sentiment, industrial production, and economic policy uncertainty. However, the forecasting results are not alike across commodity futures as no single model fits all commodities.
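
A minimal sketch of the MIDAS ingredient of such models: the long-run volatility component is a weighted sum of lagged low-frequency explanatory variables under a beta-lag polynomial. The parameter values and the log specification shown are common choices, not the paper's estimates.

```python
# Hedged sketch: beta-lag weights and a GARCH-MIDAS-style long-run component.
import numpy as np

def beta_weights(K, w1=1.0, w2=5.0):
    """Beta polynomial weights phi_k(w1, w2), k = 1..K, normalized to sum to one."""
    k = np.arange(1, K + 1) / (K + 1)
    w = k ** (w1 - 1) * (1 - k) ** (w2 - 1)
    return w / w.sum()

def long_run_component(x_lags, m=0.0, theta=0.3, w1=1.0, w2=5.0):
    """log tau_t = m + theta * sum_k phi_k * X_{t-k}  (one common specification)."""
    phi = beta_weights(len(x_lags), w1, w2)
    return np.exp(m + theta * np.dot(phi, x_lags))

x_lags = np.random.default_rng(3).standard_normal(12)  # last 12 monthly observations
print("long-run variance component tau_t:", long_run_component(x_lags))
```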

37 citations


Journal ArticleDOI
TL;DR: In this paper, a new approach for the prediction of the electricity price based on forecasting aggregated purchase and sale curves is proposed, where the basic idea is to model the hourly purchase and the sale curves, to predict them and to find the intersection of the predicted curves in order to obtain the predicted equilibrium market price and volume.
Abstract: This work proposes a new approach for the prediction of the electricity price based on forecasting aggregated purchase and sale curves. The basic idea is to model the hourly purchase and the sale curves, to predict them and to find the intersection of the predicted curves in order to obtain the predicted equilibrium market price and volume. Modeling and forecasting of purchase and sale curves is performed by means of functional data analysis methods. More specifically, parametric (FAR) and nonparametric (NPFAR) functional autoregressive models are considered and compared to some benchmarks. An appealing feature of the functional approach is that, unlike other methods, it provides insights into the sale and purchase mechanism connected with the price and demand formation process and can therefore be used for the optimization of bidding strategies. An application to the Italian electricity market (IPEX) is also provided, showing that NPFAR models lead to a statistically significant improvement in the forecasting accuracy.
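
A minimal sketch of the equilibrium step described above: given hourly bid ladders, the aggregated sale and purchase curves are built and their crossing gives the market-clearing price and volume. The toy bids below are assumptions; the paper forecasts whole curves with functional autoregressions before this step.

```python
# Hedged sketch: clearing price from the crossing of cumulative bid curves.
import numpy as np

# (price, quantity) bids: supply sorted by ascending price, demand by descending price
supply = np.array([[10, 300], [25, 500], [40, 700], [60, 400]], dtype=float)
demand = np.array([[80, 600], [55, 500], [30, 450], [15, 300]], dtype=float)

s_price, s_qty = supply[:, 0], np.cumsum(supply[:, 1])   # cumulative sale curve
d_price, d_qty = demand[:, 0], np.cumsum(demand[:, 1])   # cumulative purchase curve

# Evaluate piecewise-linear approximations of both monotone curves on a price grid.
grid = np.linspace(0, 100, 2001)
s_on_grid = np.interp(grid, s_price, s_qty)                   # increasing in price
d_on_grid = np.interp(grid, d_price[::-1], d_qty[::-1])       # decreasing in price
cross = np.argmin(np.abs(s_on_grid - d_on_grid))              # closest approach = crossing
print(f"clearing price ~ {grid[cross]:.1f}, volume ~ {s_on_grid[cross]:.0f}")
```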

37 citations


Journal ArticleDOI
TL;DR: The results indicated that the prediction performance of the EEMD combined model is better than that of the individual models, especially for the 3-day forecasting horizon, and that the machine learning methods outperform the statistical methods in forecasting high-frequency volatile components.
Abstract: Improving the prediction accuracy of agricultural product futures prices is important for investors, agricultural producers, and policy makers, as it helps to evade risks and enables government departments to formulate appropriate agricultural regulations and policies. This study employs the Ensemble Empirical Mode Decomposition (EEMD) technique to decompose six different categories of agricultural futures prices. Subsequently, three models, Support Vector Machine (SVM), Neural Network (NN), and ARIMA, are used to predict the decomposition components. The final hybrid model is then constructed by comparing the prediction performance of the decomposition components. The prediction performance of the combined model was then compared with that of the benchmark individual models: SVM, NN, and ARIMA. Our main interest in this study is short-term forecasting, and thus we only consider 1-day and 3-day forecast horizons. The results indicated that the prediction performance of the EEMD combined model is better than that of the individual models, especially for the 3-day forecasting horizon. The study also concluded that the machine learning methods outperform the statistical methods in forecasting high-frequency volatile components. However, there is no obvious difference between the individual models in predicting the low-frequency components.
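
A minimal sketch of the decomposition stage, using the PyEMD package (installable as EMD-signal); the synthetic price series is an assumption. In the hybrid design, high-frequency IMFs would go to SVM or NN models and the smoother components to ARIMA.

```python
# Hedged sketch: EEMD decomposition of a price series into intrinsic mode functions.
import numpy as np
from PyEMD import EEMD

t = np.linspace(0, 1, 500)
price = 100 + 10 * t + np.sin(25 * t) + 0.3 * np.random.default_rng(5).standard_normal(500)

eemd = EEMD(trials=50)             # ensemble size; more trials gives smoother IMFs
imfs = eemd.eemd(price, t)         # rows: IMFs from highest to lowest frequency
print("number of IMFs:", imfs.shape[0])
# A hybrid forecast would model each row separately and sum the component forecasts.
```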

37 citations


Journal ArticleDOI
TL;DR: In this paper, a novel Markov regime-switching mixed-data sampling (MRS-MIDAS) model was proposed to improve the prediction accuracy of the realized variance (RV) of Bitcoin.
Abstract: The primary purpose of this paper is to investigate whether a novel Markov regime-switching mixed-data sampling (MRS-MIDAS) model we design can improve the prediction accuracy of the realized variance (RV) of Bitcoin. Moreover, to verify whether the importance of jumps for RV forecasting changes over time, we extend the standard MIDAS model to characterize two volatility regimes and introduce a jump-driven time-varying transition probability between the two regimes. Our results suggest that the proposed novel MRS-MIDAS model exhibits statistically significant improvement for forecasting the RV of Bitcoin. In addition, we find that jump occurrences significantly increase the persistence of the high-volatility regime and the switching between high- and low-volatility regimes. A wide range of checks confirm the robustness of our results. Finally, the proposed model shows significant improvement for 2-week and 1-month horizon forecasts.

36 citations



Journal ArticleDOI
TL;DR: In this paper, a hybrid empirical mode decomposition (EMD) and support vector regression (SVR) with back-propagation neural network (BPNN) model is proposed.
Abstract: Since load forecasting plays a decisive role in the safe and stable operation of power systems, it is particularly important to explore accurate forecasting methods. In this article, the hybrid empirical mode decomposition (EMD) and support vector regression (SVR) with back-propagation neural network (BPNN), namely the EMDHR-SVR-BPNN model, is proposed. Information theory is mainly used to solve the data tendency problem, and the EMD method is used to solve the data volatility problem. There is no interaction between these two methods; thus the two models can complement each other through generalized regression of orthogonal decomposition. Taking the load data from the New South Wales (NSW, Australia) market as an example, the obtained simulation results are compared with those of other models. It is concluded that the proposed EMDHR-SVR-BPNN model not only improves the forecasting accuracy but also has good fitting ability. It can reflect the changing tendency of data in a timely manner, providing a strong basis for the electricity generation of the power sector in the future, thus reducing electricity waste. The proposed EMDHR-SVR-BPNN model has potential for employment in mid- and short-term load forecasting.

Journal ArticleDOI
TL;DR: The improved ARIMA model based on deep learning not only enriches the models for the forecasting of time series, but also provides effective tools for high-frequency strategy design to reduce the investment risks of stock indices.
Abstract: Through empirical research, it is found that the traditional autoregressive integrated moving average (ARIMA) model has a large deviation in the forecasting of high-frequency financial time series. With the improvement in storage capacity and computing power for high-frequency financial time series, this paper combines the traditional ARIMA model with a deep learning model to forecast high-frequency financial time series. The hybrid not only preserves the theoretical basis of the traditional model and characterizes the linear relationship, but also characterizes the nonlinear relationship of the error term via the deep learning model. Empirical studies on Monte Carlo numerical simulations and the CSI 300 index in China show that, compared with the ARIMA, support vector machine (SVM), long short-term memory (LSTM), and ARIMA-SVM models, the improved ARIMA model based on LSTM not only improves the forecasting accuracy of the single ARIMA model in both fitting and forecasting, but also reduces the computational complexity relative to using a single deep learning model alone. The improved ARIMA model based on deep learning not only enriches the models for the forecasting of time series, but also provides effective tools for high-frequency strategy design to reduce the investment risks of stock indices.
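
A minimal sketch of the hybrid idea under stated assumptions: ARIMA captures the linear part, an LSTM is trained on the ARIMA residuals, and the two one-step forecasts are added. Orders, window length, and the synthetic series are illustrative, not the paper's CSI 300 setup.

```python
# Hedged sketch: ARIMA for the linear part, LSTM on residuals for the nonlinear part.
import numpy as np
import tensorflow as tf
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(11)
y = np.cumsum(rng.standard_normal(600)) + 0.1 * np.sin(np.arange(600) / 5)

arima_res = ARIMA(y, order=(1, 1, 1)).fit()
resid = arima_res.resid                     # error term left to the deep model

window = 10
X = np.array([resid[i:i + window] for i in range(len(resid) - window)])[..., None]
z = resid[window:]

lstm = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1),
])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(X, z, epochs=5, verbose=0)

# Hybrid one-step forecast = linear ARIMA forecast + predicted residual correction.
hybrid = arima_res.forecast(steps=1)[0] + lstm.predict(X[-1:], verbose=0).item()
print("one-step hybrid forecast:", hybrid)
```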

Journal ArticleDOI
TL;DR: In this article, the authors proposed an FDP framework to reveal the financial distress features of listed Chinese companies, incorporating financial, management, and textual factors, and evaluated the prediction performance of multiple models in different time spans.
Abstract: Financial distress prediction (FDP) has been widely considered as a promising approach to reducing financial losses. While financial information comprises the traditional factors involved in FDP, nonfinancial factors have also been examined in recent studies. In light of this, the purpose of this study is to explore the integrated factors and multiple models that can improve the predictive performance of FDP models. This study proposes an FDP framework to reveal the financial distress features of listed Chinese companies, incorporating financial, management, and textual factors, and evaluating the prediction performance of multiple models in different time spans. To develop this framework, this study employs the wrapper‐based feature selection method to extract valuable features, and then constructs multiple single classifiers, ensemble classifiers, and deep learning models in order to predict financial distress. The experiment results indicate that management and textual factors can supplement traditional financial factors in FDP, especially textual ones. This study also discovers that integrated factors collected 4 years prior to the predicted benchmark year enable a more accurate prediction, and the ensemble classifiers and deep learning models developed can achieve satisfactory FDP performance. This study makes a novel contribution as it expands the predictive factors of financial distress and provides new findings that can have important implications for providing early warning signals of financial risk.
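
For the wrapper-based feature selection step, a minimal sketch with scikit-learn's forward sequential selector is shown below; the logistic classifier, feature count, and synthetic data are stand-ins for the paper's financial, management, and textual factors.

```python
# Hedged sketch: wrapper feature selection by cross-validated forward search.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.standard_normal((400, 12))               # 12 candidate distress factors
y = (X[:, 0] - X[:, 3] + 0.5 * rng.standard_normal(400) > 0).astype(int)

clf = LogisticRegression(max_iter=1000)
sfs = SequentialFeatureSelector(clf, n_features_to_select=4,
                                direction="forward", cv=5)
sfs.fit(X, y)                                    # keeps the subset with best CV score
print("selected feature indices:", np.flatnonzero(sfs.get_support()))
```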

Journal ArticleDOI
TL;DR: A deep residual compensation extreme learning machine model (DRC-ELM) with a multilayer structure applied to regression is presented and applied to two practical problems: gold price forecasting and airfoil self-noise prediction.
Abstract: The extreme learning machine (ELM) is a type of machine learning algorithm for training a single hidden layer feedforward neural network. Randomly initializing the weight between the input layer and the hidden layer and the threshold of each hidden layer neuron, the weight matrix of the hidden layer can be calculated by the least squares method. The efficient learning ability of ELM makes it widely applicable in classification, regression, and more. However, owing to some unutilized information in the residual, there are relatively large prediction errors with ELM. In this paper, a deep residual compensation extreme learning machine model (DRC-ELM) with a multilayer structure applied to regression is presented. The first layer is the basic ELM layer, which helps in obtaining an approximation of the objective function by learning the characteristics of the sample. The other layers are the residual compensation layers, in which the learned residual is corrected layer by layer to the predicted value obtained in the previous layer by constructing a feature mapping between the input layer and the output of the upper layer. This model is applied to two practical problems: gold price forecasting and airfoil self-noise prediction. We used the DRC-ELM with 50, 100, and 200 residual compensation layers respectively for experiments, which show that DRC-ELM does better in generalization and robustness than classical ELM, improved ELM models such as GA-RELM and OS-ELM, and other traditional machine learning algorithms such as support vector machine (SVM) and back-propagation neural network (BPNN).
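
A minimal sketch of the residual-compensation idea: a base ELM fits the target, each further ELM layer fits the remaining residual, and predictions are summed. Fitting each compensation layer on the original inputs is a simplification of the paper's layer-to-layer feature mapping.

```python
# Hedged sketch: a from-scratch ELM plus residual-compensation layers.
import numpy as np

rng = np.random.default_rng(9)

def fit_elm(X, y, hidden=50):
    """One ELM: random input weights, least-squares output weights."""
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)                         # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return lambda Z: np.tanh(Z @ W + b) @ beta

def fit_drc_elm(X, y, layers=3, hidden=50):
    models, resid = [], y.copy()
    for _ in range(layers):                        # base layer + compensation layers
        f = fit_elm(X, resid, hidden)
        models.append(f)
        resid = resid - f(X)                       # pass the residual down
    return lambda Z: sum(f(Z) for f in models)

X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(300)
model = fit_drc_elm(X, y)
print("train RMSE:", np.sqrt(np.mean((model(X) - y) ** 2)))
```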

Journal ArticleDOI
TL;DR: Six of the eight top forecasts are generated by the same algorithm, namely a linear support vector regressor (SVR), and the other two highest ranked forecasts are produced as simple mean forecast combinations.
Abstract: We employ 47 different algorithms to forecast Australian log real house prices and growth rates, and compare their ability to produce accurate out-of-sample predictions. The algorithms, which are specified in both single- and multi-equation frameworks, consist of traditional time series models, machine learning (ML) procedures, and deep learning neural networks. A method is adopted to compute iterated multistep forecasts from nonlinear ML specifications. While the rankings of forecast accuracy depend on the length of the forecast horizon, as well as on the choice of the dependent variable (log price or growth rate), a few generalizations can be made. For one- and two-quarter-ahead forecasts we find a large number of algorithms that outperform the random walk with drift benchmark. We also report several such outperformances at longer horizons of four and eight quarters, although these are not statistically significant at any conventional level. Six of the eight top forecasts (4 horizons × 2 dependent variables) are generated by the same algorithm, namely a linear support vector regressor (SVR). The other two highest ranked forecasts are produced as simple mean forecast combinations. Linear autoregressive moving average and vector autoregression models produce accurate one-quarter-ahead predictions, while forecasts generated by deep learning nets rank well across medium and long forecast horizons.
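
A minimal sketch of iterated multistep forecasting with a linear SVR, the best-performing algorithm family highlighted above: a one-step model on lagged values is applied recursively, feeding each prediction back as a lag. Lag order and the synthetic series are assumptions.

```python
# Hedged sketch: recursive (iterated) multistep forecasts from a one-step LinearSVR.
import numpy as np
from sklearn.svm import LinearSVR

rng = np.random.default_rng(4)
y = np.cumsum(rng.standard_normal(300)) + 100    # stand-in for log house prices

p = 4                                            # number of autoregressive lags
X = np.array([y[i:i + p] for i in range(len(y) - p)])
z = y[p:]
svr = LinearSVR(C=1.0, max_iter=10000).fit(X, z)

history = list(y[-p:])
for h in range(8):                               # iterate eight quarters ahead
    yhat = svr.predict(np.array(history[-p:])[None, :])[0]
    history.append(yhat)                         # feed the prediction back as a lag
print("multistep path:", np.round(history[p:], 2))
```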

Journal ArticleDOI
TL;DR: This paper assesses the predictive content of latent economic policy uncertainty and data surprise factors for forecasting and nowcasting gross domestic product (GDP) using factor-type econometric models and finds that the inclusion of new uncertainty and surprise factors leads to superior predictions of GDP growth.
Abstract: In this paper, we assess the predictive content of latent economic policy uncertainty and data surprise factors for forecasting and nowcasting gross domestic product (GDP) using factor‐type econometric models. Our analysis focuses on five emerging market economies: Brazil, Indonesia, Mexico, South Africa, and Turkey; and we carry out a forecasting horse race in which predictions from various different models are compared. These models may (or may not) contain latent uncertainty and surprise factors constructed using both local and global economic datasets. The set of models that we examine in our experiments includes both simple benchmark linear econometric models as well as dynamic factor models that are estimated using a variety of frequentist and Bayesian data shrinkage methods based on the least absolute shrinkage operator (LASSO). We find that the inclusion of our new uncertainty and surprise factors leads to superior predictions of GDP growth, particularly when these latent factors are constructed using Bayesian variants of the LASSO. Overall, our findings point to the importance of spillover effects from global uncertainty and data surprises, when predicting GDP growth in emerging market economies.
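
A rough frequentist stand-in for the factor-plus-shrinkage idea: principal-component factors from an indicator panel feed a LASSO predictive regression for GDP growth. The paper's Bayesian LASSO variants and uncertainty/surprise factor construction are not reproduced here, and the data are synthetic.

```python
# Hedged sketch: PCA factors feeding a LASSO regression for GDP growth.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

rng = np.random.default_rng(14)
panel = rng.standard_normal((100, 40))                # 100 quarters x 40 indicators
gdp_growth = panel[:, :3].mean(axis=1) + 0.3 * rng.standard_normal(100)

factors = PCA(n_components=5).fit_transform(panel)    # latent factor proxies
model = Lasso(alpha=0.05).fit(factors[:-1], gdp_growth[1:])  # one-step-ahead regression
print("nowcast:", model.predict(factors[-1:])[0],
      "active coefficients:", int(np.sum(model.coef_ != 0)))
```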

Journal ArticleDOI
TL;DR: In this article, the performance of a large set of approaches dealing with multivariate information was compared with a wide variety of competing strategies, including the heterogeneous autoregressive (HAR) benchmark, kitchen sink model, popular forecast combinations, principal component analysis (PCA), partial least squares (PLS), and the ridge, lasso, and elastic net shrinkage methods.
Abstract: This paper aims to accurately forecast US stock market volatility by using international market volatility information flows. The results show the significant ability of the combined international volatility information to predict US stock volatility. The predictability is found to be both statistically and economically significant. Furthermore, in this framework, we compare the performance of a large set of approaches dealing with multivariate information. Dynamic model averaging (DMA) and dynamic model selection (DMS) perform better than a wide variety of competing strategies, including the heterogeneous autoregressive (HAR) benchmark, kitchen sink model, popular forecast combinations, principal component analysis (PCA), partial least squares (PLS), and the ridge, lasso, and elastic net shrinkage methods. A wide range of extensions and robustness checks reduce the concern regarding data mining. DMA and DMS are also able to significantly forecast international stock market volatilities.

Journal ArticleDOI
TL;DR: In this paper, the authors evaluated the application potential of multiple linear regression (MLR) and machine learning tools such as support vector regression (SVR) and Gaussian process regression (GPR) to forecast the agricultural energy consumption of Turkey.
Abstract: Agricultural productivity highly depends on the cost of energy required for cultivation. Thus prior knowledge of energy consumption is an important step for energy planning and policy development in agriculture. The aim of the present study is to evaluate the application potential of multiple linear regression (MLR) and machine learning tools such as support vector regression (SVR) and Gaussian process regression (GPR) to forecast the agricultural energy consumption of Turkey. In the development of the models, widespread indicators such as agricultural value‐added, total arable land, gross domestic product share of agriculture, and population data were used as input parameters. Twenty‐eight‐year historical data from 1990 to 2017 were utilized for the training and testing stages of the models. A Bayesian optimization method was applied to improve the prediction capability of SVR and GPR models. The performance of the models was measured by various statistical tools. The results indicated that the Bayesian optimized GPR (BGPR) model with exponential kernel function showed a superior prediction capability over MLR and Bayesian optimized SVR model. The root mean square error, mean absolute deviation, mean absolute percentage error, and coefficient of determination (R2) values for the BGPR model were determined as 0.0022, 0.0005, 0.2041, and 0.9999 in the training phase and 0.0452, 0.0310, 7.7152, and 0.9677 in the testing phase, respectively. As a result, it can be concluded that the proposed BGPR model is an efficient technique and has the potential to predict agricultural energy consumption with high accuracy.
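
A minimal sketch of the GPR side of the study, with scikit-learn's Matern(nu=0.5) kernel playing the role of the exponential kernel. Here the kernel hyperparameters are tuned by marginal-likelihood maximization rather than the paper's Bayesian optimization routine, and the inputs are synthetic stand-ins.

```python
# Hedged sketch: Gaussian process regression with an exponential (Matern nu=0.5) kernel.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(6)
X = rng.uniform(0, 1, size=(28, 4))   # 28 years x (value-added, arable land, GDP share, population)
y = 2 * X[:, 0] + X[:, 3] + 0.05 * rng.standard_normal(28)

gpr = GaussianProcessRegressor(kernel=Matern(nu=0.5), normalize_y=True)
gpr.fit(X[:22], y[:22])                         # train on the first 22 years
pred, std = gpr.predict(X[22:], return_std=True)
print("test predictions:", np.round(pred, 3), "+/-", np.round(std, 3))
```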

Journal ArticleDOI
TL;DR: It is shown how the proposed methodology overcomes both the usual challenges (e.g. simulating regime switching, volatility clustering, skewed tails, etc.) as well as the new ones added by the current market environment characterized by low to negative interest rates.
Abstract: The aim of this paper is to propose a new methodology that allows forecasting, through Vasicek and CIR models, of future expected interest rates based on rolling windows from observed financial market data. The novelty, apart from the use of those models not for pricing but for forecasting the expected rates at a given maturity, consists in an appropriate partitioning of the data sample. This allows capturing all the statistically significant time changes in volatility of interest rates, thus giving an account of jumps in market dynamics. The new approach is applied to different term structures and is tested for both models. It is shown how the proposed methodology overcomes both the usual challenges (e.g., simulating regime switching, volatility clustering, skewed tails) as well as the new ones added by the current market environment characterized by low to negative interest rates.
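
A minimal sketch of Vasicek-based forecasting on a rolling window: calibrate dr = kappa(mu - r)dt + sigma dW via the discretized AR(1) regression, then report the model's conditional mean as the expected future rate. The window length and the simulated path are assumptions; the paper adds a data-driven partitioning of the sample.

```python
# Hedged sketch: rolling-window Vasicek calibration and conditional-mean forecast.
import numpy as np

rng = np.random.default_rng(8)
dt = 1 / 252
r = np.empty(1000); r[0] = 0.01
for t in range(999):                            # simulate a stand-in short-rate path
    r[t + 1] = r[t] + 2.0 * (0.015 - r[t]) * dt + 0.01 * np.sqrt(dt) * rng.standard_normal()

window = 250                                    # rolling calibration window
x, yv = r[-window - 1:-1], r[-window:]
b, a = np.polyfit(x, yv, 1)                     # AR(1) regression: y = a + b x
kappa = (1 - b) / dt                            # mean-reversion speed
mu = a / (1 - b)                                # long-run mean

h = 21 * dt                                     # one-month-ahead horizon
forecast = mu + (r[-1] - mu) * np.exp(-kappa * h)   # Vasicek conditional mean
print("kappa=%.2f  mu=%.4f  one-month forecast=%.4f" % (kappa, mu, forecast))
```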

Journal ArticleDOI
TL;DR: This article investigated the role of trading volume and data frequency in volatility forecasting by evaluating the performance of Generalized Autoregressive Conditional Heteroskedasticity Mixed-Data Sampling (GARCH-MIDAS), traditional GARCH, and intraday GARCH models.
Abstract: This research investigates the role of trading volume and data frequency in volatility forecasting by evaluating the performance of Generalized Autoregressive Conditional Heteroskedasticity Mixed‐Data Sampling (GARCH‐MIDAS), traditional GARCH, and intraday GARCH models. We take trading volume as the proxy for information flow and examine whether the Sequential Information Arrival Hypothesis (SIAH) is supported in the China stock market. The contributions of this study are as follows. (1) We provide a more consistent comparison to evaluate the forecasting ability of the MIDAS approach. (2) We extend the literature on the forecasting performance of trading volume to the GARCH‐MIDAS approach. (3) We present clear evidence to support that forecasting ability strongly relies upon data frequency. The empirical results show that: (1) GARCH‐MIDAS is not able to beat the traditional GARCH method when both are estimated by the same predictor sampled at different frequencies; (2) there is a positive relation between trading volume and volatility, but no clear evidence appears that SIAH holds in the China stock market; and (3) high‐frequency data are highly recommended for daily realized volatility (RV) forecasting, whereas intraday GARCH could significantly outperform traditional GARCH and GARCH‐MIDAS in volatility forecasting.
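
For reference, a minimal sketch of the traditional daily GARCH(1,1) benchmark in the comparison, using the arch package; adding trading volume as a predictor or a MIDAS long-run component is beyond this snippet, and the returns are synthetic stand-ins.

```python
# Hedged sketch: GARCH(1,1) estimation and multistep variance forecasting.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(10)
returns = rng.standard_normal(1000)                # stand-in daily returns (percent)

am = arch_model(returns, vol="GARCH", p=1, q=1, mean="Constant")
res = am.fit(disp="off")
fcast = res.forecast(horizon=5)                    # 1- to 5-day-ahead variance path
print("variance forecasts:", fcast.variance.values[-1])
```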

Journal ArticleDOI
Nima Nonejad
TL;DR: The authors investigated whether crude oil price volatility is predictable by conditioning on macroeconomic variables and found that the predictive power associated with the more successful macroeconomic variables concentrates around the Great Recession until 2015.
Abstract: We investigate whether crude oil price volatility is predictable by conditioning on macroeconomic variables. We consider a large number of predictors, take into account the possibility that relative predictive performance varies over the out‐of‐sample period, and shed light on the economic drivers of crude oil price volatility. Results using monthly data from 1983:M1 to 2018:M12 document that variables related to crude oil production, economic uncertainty and variables that either describe the current stance or provide information about the future state of the economy forecast crude oil price volatility at the population level 1 month ahead. On the other hand, evidence of finite‐sample predictability is very weak. A detailed examination of our out‐of‐sample results using the fluctuation test suggests that this is because relative predictive performance changes drastically over the out‐of‐sample period. The predictive power associated with the more successful macroeconomic variables concentrates around the Great Recession until 2015. They also generate the strongest signal of a decrease in the price of crude oil towards the end of 2008.

Journal ArticleDOI
TL;DR: In this paper, the authors constructed a financial distress prediction model that includes not only traditional financial variables, but also several important corporate governance variables using data from Taiwan, and the empirical results show that the best in-sample and out-of-sample prediction models should combine the financial variables with the corporate governance features.
Abstract: This paper constructs a financial distress prediction model that includes not only traditional financial variables, but also several important corporate governance variables. Using data from Taiwan, the empirical results show that the best in-sample and out-of-sample prediction models should combine the financial variables with the corporate governance variables. Moreover, the prediction accuracy is higher for the models using dynamic distress threshold values than for those with traditional threshold values. Most financial ratios, except for the debt ratio, are higher in financially sound companies than in financially distressed ones. With regard to the corporate governance variables, we find that CEO/Chairman duality may not result in the outbreak of financial distress, but higher equity pledge ratios of managers (shareholding ratios by board members and insiders) positively (negatively) correlate with financial distress.

Journal ArticleDOI
TL;DR: In this paper, a 10-year global index portfolio of developed, emerging, and commodity markets was analyzed by fitting vine copulas (e.g., r-vines, c-vines, d-vines), IGARCH(1,1) RiskMetrics value-at-risk (VaR), and portfolio optimization methods based on risk measures such as the variance, conditional value-at-risk, conditional drawdown-at-risk, minimizing regret (Minimax), and mean absolute deviation.
Abstract: This paper undertakes an in‐sample and rolling‐window comparative analysis of dependence, market, and portfolio investment risks on a 10‐year global index portfolio of developed, emerging, and commodity markets. We draw our empirical results by fitting vine copulas (e.g., r‐vines, c‐vines, d‐vines), IGARCH(1,1) RiskMetrics value‐at‐risk (VaR), and portfolio optimization methods based on risk measures such as the variance, conditional value‐at‐risk, conditional drawdown‐at‐risk, minimizing regret (Minimax), and mean absolute deviation. The empirical results indicate that all international indices tend to correlate strongly in the negative tail of the return distribution; however, emerging markets, relative to developed and commodity markets, exhibit greater dependence, market, and portfolio investment risks. The portfolio optimization shows a clear preference towards the gold commodity for investment, while Japan and Canada are found to have the highest and lowest market risk, respectively. The vine copula analysis identifies symmetry in the dependence dynamics of the global index portfolio modeled. Large VaR diversification benefits are produced at the 95% and 99% confidence levels by the modeled international index portfolio. The empirical results may appeal to international portfolio investors and risk managers for advanced portfolio management, hedging, and risk forecasting.

Journal ArticleDOI
TL;DR: In this article, the authors proposed the WT•FCD•MLGRU model, which is the combination of wavelet transform, filter cycle decomposition and multilag neural networks.
Abstract: With the development of artificial intelligence, deep learning is widely used in the field of nonlinear time series forecasting. It has been shown in practice that deep learning models achieve higher forecasting accuracy than traditional linear econometric models and machine learning models. With the purpose of further improving the forecasting accuracy of financial time series, we propose the WT-FCD-MLGRU model, which is the combination of wavelet transform, filter cycle decomposition, and multilag neural networks. Four major stock indices are chosen to test the forecasting performance among the traditional econometric model, the machine learning model, and deep learning models. According to the results of the empirical analysis, deep learning models perform better than the traditional econometric model, such as the autoregressive integrated moving average, and the improved machine learning model SVR. Besides, our proposed model has the minimum forecasting error in stock index prediction.
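
A minimal sketch of the wavelet-transform stage assumed in such pipelines, using PyWavelets: a discrete wavelet decomposition separates the index into a smooth approximation and detail components that downstream models can forecast separately. The wavelet choice and level are illustrative assumptions.

```python
# Hedged sketch: discrete wavelet decomposition of a stock index series.
import numpy as np
import pywt

rng = np.random.default_rng(15)
index = np.cumsum(rng.standard_normal(512)) + 3000      # stand-in stock index

coeffs = pywt.wavedec(index, "db4", level=3)            # [cA3, cD3, cD2, cD1]
print("component lengths:", [len(c) for c in coeffs])

# Reconstruct the smooth trend by zeroing the detail coefficients.
smooth = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")
print("trend component reconstructed, length:", len(smooth))
```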

Journal ArticleDOI
TL;DR: This paper explored the role of business cycle proxies, measured by the output gap at the global, regional, and local levels, as potential predictors of stock market volatility in the emerging BRICS nations.
Abstract: This paper explores the role of business cycle proxies, measured by the output gap at the global, regional, and local levels, as potential predictors of stock market volatility in the emerging BRICS nations. We observe that the emerging BRICS nations display a rather heterogeneous pattern when it comes to the relative role of idiosyncratic factors as a predictor of stock market volatility. While domestic output gap is found to capture significant predictive information for India and China particularly, the business cycles associated with emerging economies and the world in general are strongly important for the BRIC countries and weakly for South Africa, especially in the postglobal financial crisis era. The findings suggest that despite the increase in the financial integration of world capital markets, emerging economies can still bear significant exposures to idiosyncratic risk factors, an issue of high importance for the profitability of global diversification strategies.
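
A minimal sketch of the predictor construction: an output gap measured as the HP-filter cycle of log output, which would then enter a predictive regression for stock market volatility. The quarterly synthetic series is an assumption; lambda = 1600 is the standard quarterly smoothing choice.

```python
# Hedged sketch: output gap as the HP-filter cycle of log GDP.
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(12)
log_gdp = np.cumsum(0.005 + 0.01 * rng.standard_normal(120))  # ~30 years, quarterly

gap, trend = hpfilter(log_gdp, lamb=1600)       # cycle component = output gap
print("latest output gap (%):", round(100 * gap[-1], 2))
```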

Journal ArticleDOI
TL;DR: This approach shows that the dependence on internal hotel occupancy data can be removed by making use of a proxy measure for hotel occupancy rate at a city level, and it is shown how the proposed framework improves managerial decision making in tourism planning.
Abstract: This study proposes Gaussian processes to forecast daily hotel occupancy at a city level. Unlike other studies in the tourism demand prediction literature, the hotel occupancy rate is predicted on a daily basis and 45 days ahead of time using online hotel room price data. A predictive framework is introduced that highlights feature extraction and selection of the independent variables. This approach shows that the dependence on internal hotel occupancy data can be removed by making use of a proxy measure for hotel occupancy rate at a city level. Six forecasting methods are investigated, including linear regression, autoregressive integrated moving average and recent machine learning methods. The results indicate that Gaussian processes offer the best tradeoff between accuracy and interpretation by providing prediction intervals in addition to point forecasts. It is shown how the proposed framework improves managerial decision making in tourism planning.

Journal ArticleDOI
TL;DR: In this paper, a generalized autoregressive conditional heteroskedasticity–mixed data sampling–extreme shocks (GARCH-MIDAS-ES) model was introduced to examine whether the importance of extreme shocks changes in different time ranges.
Abstract: This paper introduces a novel generalized autoregressive conditional heteroskedasticity–mixed data sampling–extreme shocks (GARCH‐MIDAS‐ES) model for stock volatility to examine whether the importance of extreme shocks changes in different time ranges. Based on different combinations of the short‐ and long‐term effects caused by extreme events, we extend the standard GARCH‐MIDAS model to characterize the different responses of the stock market for short‐ and long‐term horizons, separately or in combination. The unique timespan of nearly 100 years of the Dow Jones Industrial Average (DJIA) daily returns allows us to understand the stock market volatility under extreme shocks from a historical perspective. The in‐sample empirical results clearly show that the DJIA stock volatility is best fitted to the GARCH‐MIDAS‐SLES model by including the short‐ and long‐term impacts of extreme shocks for all forecasting horizons. The out‐of‐sample results and robustness tests emphasize the significance of decomposing the effect of extreme shocks into short‐ and long‐term effects to improve the accuracy of the DJIA volatility forecasts.

Journal ArticleDOI
TL;DR: In this article, the authors investigated aggregated survey forecasts with forecast horizons of 3, 12, and 24 months for the exchange rates of the Chinese yuan, the Hong Kong dollar, the Japanese yen, and the Singapore dollar vis-a-vis the US dollar and, hence, for four different currency regimes.
Abstract: By linking measures of forecast accuracy as well as testing procedures with regard to forecast rationality, this paper investigates aggregated survey forecasts with forecast horizons of 3, 12, and 24 months for the exchange rates of the Chinese yuan, the Hong Kong dollar, the Japanese yen, and the Singapore dollar vis-à-vis the US dollar and, hence, for four different currency regimes. The rationality of the exchange rate predictions is initially assessed utilizing tests for unbiasedness and efficiency, which indicate that the investigated forecasts are irrational in the sense that the predictions are biased. As one major contribution of this paper, it is subsequently shown that these results are not consistent with an alternative, less restrictive, measure of rationality. Investigating the order of integration of the time series as well as cointegrating relationships, this empirical evidence supports the conclusion that the majority of forecasts are in fact rational. Regarding the forerunning properties of the predictions, the results are rather mediocre, with shorter term forecasts for the tightly managed USD/CNY FX regime being one exception. As one additional important and novel evaluation result, it can be concluded that the currency regime matters for the quality of exchange rate forecasts.
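
A minimal sketch of the unbiasedness test used in this literature, the Mincer-Zarnowitz regression of realized values on forecasts with the joint restriction intercept = 0 and slope = 1; the synthetic forecast and realization series are stand-ins for the survey data.

```python
# Hedged sketch: Mincer-Zarnowitz unbiasedness (rationality) test.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(13)
forecast = 7.0 + 0.1 * rng.standard_normal(120)          # stand-in survey mean forecast
realized = 0.2 + 0.97 * forecast + 0.05 * rng.standard_normal(120)

ols = sm.OLS(realized, sm.add_constant(forecast)).fit()
wald = ols.f_test("const = 0, x1 = 1")                   # joint rationality restriction
print("intercept, slope:", ols.params, "joint p-value:", float(wald.pvalue))
```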

Journal ArticleDOI
TL;DR: In this article, the authors proposed a generic statement on how to classify the timescales and further presented different applications of these forecasts across the entire wind power value chain, and further proposed different methods to forecast the wind in terms of wind speeds and wind power generation across different timecales.
Abstract: The intermittency of the wind has been reported to present significant challenges to power and grid systems, which intensifies with increasing penetration levels. Accurate wind forecasting can mitigate these challenges and help in integrating more wind power into the grid. A range of studies have presented algorithms to forecast the wind in terms of wind speeds and wind power generation across different timescales. However, the classification of timescales varies significantly across the different studies (2010–2014). The timescale is important in specifying which methodology to use when, as well in uniting future research, data requirements, etc. This study proposes a generic statement on how to classify the timescales, and further presents different applications of these forecasts across the entire wind power value chain.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed finite state-space non-homogeneous hidden Markov models for forecasting univariate time series and used the recently proposed Polya-Gamma latent variable scheme to address convergence issues in the inference for the logistic regression coefficients.
Abstract: We consider finite state-space non-homogeneous hidden Markov models for forecasting univariate time series. Given a set of predictors, the time series are modeled via predictive regressions with state-dependent coefficients and time-varying transition probabilities that depend on the predictors via a logistic/multinomial function. In a hidden Markov setting, inference for logistic regression coefficients becomes complicated and in some cases impossible due to convergence issues. In this paper, we aim to address this problem utilizing the recently proposed Polya-Gamma latent variable scheme. Also, we allow for model uncertainty regarding the predictors that affect the series both linearly — in the mean — and non-linearly — in the transition matrix. Predictor selection and inference on the model parameters are based on an automatic Markov chain Monte Carlo scheme with reversible jump steps. Hence the proposed methodology can be used as a black box for predicting time series. Using simulation experiments, we illustrate the performance of our algorithm in various setups, in terms of mixing properties, model selection and predictive ability. An empirical study on realized volatility data shows that our methodology gives improved forecasts compared to benchmark models.