
Showing papers in "Computational Economics in 2021"


Journal ArticleDOI
TL;DR: In this article, an explainable Artificial Intelligence model is proposed for credit risk management and, in particular, for measuring the risks that arise when credit is borrowed through peer-to-peer lending platforms.
Abstract: The paper proposes an explainable Artificial Intelligence model that can be used in credit risk management and, in particular, in measuring the risks that arise when credit is borrowed through peer-to-peer lending platforms. The model applies correlation networks to Shapley values so that Artificial Intelligence predictions are grouped according to the similarity in the underlying explanations. The empirical analysis of 15,000 small and medium companies asking for credit reveals that both risky and non-risky borrowers can be grouped according to a set of similar financial characteristics, which can be employed to explain their credit score and, therefore, to predict their future behaviour.

93 citations
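
As a rough illustration of the approach, the sketch below (synthetic data; scikit-learn plus the open-source shap package assumed) clusters borrowers by the correlation of their Shapley-value explanation vectors. It illustrates the idea, not the paper's implementation.

```python
# Sketch: group borrowers by similarity of their Shapley-value explanations.
# Hypothetical synthetic data stands in for the paper's SME credit applications.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Shapley values: one explanation vector per borrower.
sv = shap.TreeExplainer(model).shap_values(X)

# Correlation network over explanations: similar vectors land in the same cluster.
corr = np.corrcoef(sv)                       # borrower-by-borrower similarity
dist = squareform(1.0 - corr, checks=False)  # condensed distance matrix
groups = fcluster(linkage(dist, method="average"), t=4, criterion="maxclust")
print(np.bincount(groups))                   # sizes of the explanation clusters
```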


Journal ArticleDOI
Jaehyun Yoon
TL;DR: The results of this paper show that for the 2001–2018 period, the forecasts by the gradient boosting model and random forest model are more accurate than the benchmark forecasts.
Abstract: This paper presents a method for creating machine learning models, specifically a gradient boosting model and a random forest model, to forecast real GDP growth. This study focuses on the real GDP growth of Japan and produces forecasts for the years from 2001 to 2018. The forecasts by the International Monetary Fund and the Bank of Japan are used as benchmarks. To improve out-of-sample prediction, cross-validation is used to choose the optimal hyperparameters. The accuracy of the forecasts is measured by mean absolute percentage error and root mean squared error. The results of this paper show that for the 2001–2018 period, the forecasts by the gradient boosting model and random forest model are more accurate than the benchmark forecasts. Between the gradient boosting and random forest models, the gradient boosting model turns out to be more accurate. This study encourages increasing the use of machine learning models in macroeconomic forecasting.

70 citations
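
A minimal sketch of the model-plus-cross-validation setup, using scikit-learn with placeholder data in place of the Japanese GDP series and benchmark indicators:

```python
# Hedged sketch: gradient boosting vs. random forest with hyperparameters
# chosen by time-series cross-validation; data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.metrics import mean_absolute_percentage_error, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 6))                        # e.g. lagged indicators
y = 1.5 + 0.5 * X[:, 0] + rng.normal(0, 0.3, 80)    # stand-in GDP growth (%)
X_tr, X_te, y_tr, y_te = X[:60], X[60:], y[:60], y[60:]

for name, est, grid in [
    ("gbm", GradientBoostingRegressor(),
     {"n_estimators": [100, 300], "learning_rate": [0.05, 0.1]}),
    ("rf", RandomForestRegressor(),
     {"n_estimators": [200, 500], "max_depth": [3, None]}),
]:
    cv = GridSearchCV(est, grid, cv=TimeSeriesSplit(n_splits=4))
    pred = cv.fit(X_tr, y_tr).predict(X_te)
    mape = mean_absolute_percentage_error(y_te, pred)
    rmse = mean_squared_error(y_te, pred, squared=False)
    print(name, cv.best_params_, round(mape, 3), round(rmse, 3))
```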



Journal ArticleDOI
TL;DR: This work designs a next-generation MAS stock market simulator in which each agent learns to trade autonomously via reinforcement learning; the simulator faithfully reproduces key market microstructure metrics, such as various price autocorrelation scalars over multiple time intervals.
Abstract: Quantitative finance has had a long tradition of a bottom-up approach to complex systems inference via multi-agent systems (MAS). These statistical tools are based on modelling agents trading via a centralised order book, in order to emulate complex and diverse market phenomena. These past financial models have all relied on so-called zero-intelligence agents, so that the crucial issues of agent information and learning, central to price formation and hence to all market activity, could not be properly assessed. In order to address this, we designed a next-generation MAS stock market simulator, in which each agent learns to trade autonomously via reinforcement learning. We calibrate the model to real market data from the London Stock Exchange over the years 2007 to 2018, and show that it can faithfully reproduce key market microstructure metrics, such as various price autocorrelation scalars over multiple time intervals. Agent learning thus enables accurate emulation of the market microstructure as an emergent property of the MAS.

33 citations
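
A toy sketch of the core mechanism, one tabular Q-learning trader in a price-impact market whose return autocorrelations are then measured; the real model involves many agents and calibration to LSE data:

```python
# Toy sketch of a reinforcement-learning trader in a price-impact market.
import numpy as np

rng = np.random.default_rng(1)
Q = np.zeros((2, 3))            # state: last return sign; actions: sell/hold/buy
price, eps, alpha, gamma = 100.0, 0.1, 0.1, 0.95
returns, state = [], 0
for t in range(20000):
    a = rng.integers(3) if rng.random() < eps else int(np.argmax(Q[state]))
    order = a - 1                               # -1 sell, 0 hold, +1 buy
    r = 0.002 * order + 0.01 * rng.normal()     # price impact + noise
    reward = order * r                          # PnL of the action
    nstate = int(r > 0)
    Q[state, a] += alpha * (reward + gamma * Q[nstate].max() - Q[state, a])
    state, price = nstate, price * (1 + r)
    returns.append(r)

ret = np.array(returns)
for lag in (1, 5, 20):                          # price autocorrelation scalars
    print(lag, np.corrcoef(ret[:-lag], ret[lag:])[0, 1])
```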


Journal ArticleDOI
TL;DR: In this article, the authors propose an integrated approach based on linear and nonlinear models that predicts unemployment rates more accurately; asymptotic stationarity results guarantee that the proposed model cannot show "explosive" behavior or growing variance over time.
Abstract: Unemployment is a persistent issue that causes a nation as a whole to lose economic and financial output. Predicting a country's unemployment rate is a crucial factor in planning its economic and financial growth and a challenging job for policymakers. Traditional stochastic time series models, as well as modern nonlinear time series techniques, have previously been employed for unemployment rate forecasting. These macroeconomic data sets are mostly nonstationary and nonlinear in nature. Thus, it is atypical to assume that an individual time series forecasting model can generate a white noise error. This paper proposes an integrated approach based on linear and nonlinear models that can predict unemployment rates more accurately. The proposed hybrid model improves its forecasts by reflecting the unemployment rate's asymmetry. The model's applications are shown using seven unemployment rate data sets from various countries, namely, Canada, Germany, Japan, Netherlands, New Zealand, Sweden, and Switzerland. The results of computational tests are very promising in comparison with other conventional methods. Results on the asymptotic stationarity of the proposed hybrid approach, obtained using Markov chains and nonlinear time series analysis techniques, are given in this paper and guarantee that the proposed model cannot show 'explosive' behavior or growing variance over time.

32 citations
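
A minimal sketch of one common linear-plus-nonlinear hybrid design (an ARIMA stage plus an MLP fitted to the residuals), with a simulated series standing in for the unemployment data; the paper's actual hybrid specification may differ:

```python
# Hybrid sketch: ARIMA captures the linear part, an MLP learns residual structure.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
u = 6 + np.cumsum(rng.normal(0, 0.1, 300))     # stand-in unemployment rate

arima = ARIMA(u, order=(1, 1, 1)).fit()
resid = arima.resid

# Nonlinear stage: predict residual_t from its own lags.
p = 3
Xr = np.column_stack([resid[i:len(resid) - p + i] for i in range(p)])
yr = resid[p:]
mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(Xr, yr)

linear_fc = arima.forecast(1)[0]
nonlinear_fc = mlp.predict(resid[-p:].reshape(1, -1))[0]
print("hybrid one-step forecast:", linear_fc + nonlinear_fc)
```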


Journal ArticleDOI
TL;DR: This study proposes a novel two-stage ensemble machine learning model named SVR-ENANFIS for stock price prediction by combining features of support vector regression (SVR) and an ensemble adaptive neuro-fuzzy inference system (ENANFIS).
Abstract: Stock market forecasting is considered a challenging topic in time series forecasting. This study proposes a novel two-stage ensemble machine learning model named SVR-ENANFIS for stock price prediction by combining features of support vector regression (SVR) and an ensemble adaptive neuro-fuzzy inference system (ENANFIS). In the first stage, the future values of technical indicators are forecasted by SVR. In the second stage, ENANFIS is utilized to forecast the closing price based on the prediction results of the first stage. Finally, the proposed SVR-ENANFIS model is tested on 4 securities randomly selected from the Shanghai and Shenzhen Stock Exchanges with data collected from 2012 to 2017, and the predictions are completed 1–10, 15 and 30 days in advance. The experimental results show that the proposed SVR-ENANFIS model outperforms the single-stage model ENANFIS and several two-stage models such as SVR-Linear, SVR-SVR, and SVR-ANN.

30 citations
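
A two-stage sketch under stated assumptions: stage one uses SVR to forecast each technical indicator h days ahead; since ENANFIS has no standard library implementation, a gradient boosting regressor stands in for it in stage two:

```python
# Two-stage sketch with toy indicators; the second stage is a stand-in for ENANFIS.
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
T, h = 400, 5
close = 50 + np.cumsum(rng.normal(0, 0.5, T))
ind = np.column_stack([                             # toy "technical indicators"
    np.convolve(close, np.ones(5) / 5, "same"),     # 5-day moving average
    np.convolve(close, np.ones(10) / 10, "same"),   # 10-day moving average
    np.gradient(close),                             # momentum proxy
])

# Stage 1: one SVR per indicator, forecasting its value h days ahead.
stage1 = [SVR(C=10.0).fit(ind[:-h], ind[h:, j]) for j in range(ind.shape[1])]
ind_fc = np.column_stack([m.predict(ind[-1:]) for m in stage1])

# Stage 2: map indicator values to the closing price, then apply to forecasts.
stage2 = GradientBoostingRegressor(random_state=0).fit(ind, close)
print("h-step close forecast:", stage2.predict(ind_fc)[0])
```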


Journal ArticleDOI
TL;DR: This paper considers the daily closing prices of BSE Energy Index, Crude Oil, DJIA Index, Natural Gas, and NIFTY Index representing natural resources, developing and developed economies from January 2012 to March 2017 to analyze the inherent evolutionary dynamics of financial and energy markets.
Abstract: In this paper, we analyze the inherent evolutionary dynamics of financial and energy markets. We study their inter-relationships and perform predictive analysis using an integrated nonparametric framework. We consider the daily closing prices of the BSE Energy Index, Crude Oil, the DJIA Index, Natural Gas, and the NIFTY Index, representing natural resources and developing and developed economies, from January 2012 to March 2017 for this purpose. The DJIA and NIFTY account for the global financial market, while the other three time series represent the energy market. First, we investigate the empirical characteristics of the underlying temporal dynamics of the financial time series through the techniques of nonlinear dynamics to extract key insights. Results suggest the existence of a strong trend component and long-range dependence as the underlying pattern. Then we apply continuous-wavelet-transformation-based multiscale exploration to investigate the co-movements of the considered assets. We discover long- and medium-range co-movements among the heterogeneous assets. The findings on dynamic time-varying association reveal interesting insights that may assist portfolio managers in mitigating risk. Finally, we deploy a wavelet-based time-varying dynamic approach for estimating the conditional correlation among the said assets to determine hedge ratios for practical implications.

27 citations
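
A minimal sketch of scale-by-scale co-movement using the continuous wavelet transform (PyWavelets assumed; simulated returns stand in for the five series):

```python
# Wavelet co-movement sketch: correlate two assets' CWT coefficients per scale.
import numpy as np
import pywt

rng = np.random.default_rng(0)
common = rng.normal(size=1000)
oil = 0.7 * common + 0.3 * rng.normal(size=1000)    # stand-in for Crude Oil
nifty = 0.6 * common + 0.4 * rng.normal(size=1000)  # stand-in for the NIFTY Index

scales = np.arange(2, 65)
c_oil, _ = pywt.cwt(oil, scales, "morl")
c_nifty, _ = pywt.cwt(nifty, scales, "morl")

# Co-movement profile: correlation of coefficients at each scale
# (short scales ~ days, long scales ~ weeks/months).
for s, a, b in zip(scales[::16], c_oil[::16], c_nifty[::16]):
    print(f"scale {s:2d}: corr = {np.corrcoef(a, b)[0, 1]:.2f}")
```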


Journal ArticleDOI
TL;DR: In this paper, the role of the US–China trade war in forecasting out-of-sample daily realized volatility of Bitcoin returns is analyzed using a machine learning technique known as random forests.
Abstract: We analyze the role of the US–China trade war in forecasting out-of-sample daily realized volatility of Bitcoin returns. We study intraday data spanning from 1st July 2017 to 30th June 2019. We use the heterogeneous autoregressive realized volatility model (HAR-RV) as the benchmark model to capture stylized facts such as heterogeneity and long-memory. We then extend the HAR-RV model to include a metric of US–China trade tensions. This is our primary forecasting variable of interest, and it is based on Google Trends. We also control for jumps, realized skewness, and realized kurtosis. For our empirical analysis, we use a machine-learning technique that is known as random forests. Our findings reveal that US–China trade uncertainty does improve forecast accuracy for various configurations of random forests and forecast horizons.

25 citations
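
A minimal sketch of the HAR-RV-style feature set (daily, weekly, monthly realized volatility) augmented with a trade-tension proxy and fed to a random forest; all series are simulated placeholders:

```python
# HAR-RV features plus a trade-tension regressor, modelled with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
rv = np.abs(rng.normal(0.03, 0.01, 600))     # stand-in daily realized volatility
tension = rng.random(600)                    # stand-in Google-Trends tension metric

def har_features(rv, tension, t):
    # daily, weekly (5-day), monthly (22-day) RV averages + lagged tension
    return [rv[t - 1], rv[t - 5:t].mean(), rv[t - 22:t].mean(), tension[t - 1]]

X = np.array([har_features(rv, tension, t) for t in range(22, 600)])
y = rv[22:600]

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X[:-100], y[:-100])
pred = rf.predict(X[-100:])
print("out-of-sample RMSE:", np.sqrt(np.mean((pred - y[-100:]) ** 2)))
```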


Journal ArticleDOI
TL;DR: This is the first empirical application of an SVR model to ship price forecasts and can contribute valuable feedback to investment, financing, and risk management decisions in the global shipping business.
Abstract: A novel and innovative forecasting framework is proposed to generate newbuilding ship price predictions for different vessel types and shipping markets, incorporating recent developments in the dynamic field of artificial intelligence and machine learning algorithms. Based on the advantages of the support vector machine framework, an appropriate support vector regression (SVR) model is specified, tested, and validated for ship price forecasts. The SVR predictive performance is subsequently comparatively evaluated against standard time-series forecast models, such as the ARIMA models, based on convenient statistical criteria. The predictive power of the SVR model is found to be superior to that of the ARIMA model, delivering satisfactory, robust, and promising results. This is the first empirical application of an SVR model to ship price forecasts and can contribute valuable feedback to investment, financing, and risk management decisions in the global shipping business.

25 citations


Journal ArticleDOI
TL;DR: A state-of-the-art survey of reinforcement learning techniques is presented, along with applications in economics, game theory, operations research and finance.
Abstract: Reinforcement learning algorithms describe how an agent can learn an optimal action policy in a sequential decision process, through repeated experience. In a given environment, the agent's policy provides it with running and terminal rewards. As in online learning, the agent learns sequentially. As in multi-armed bandit problems, when an agent picks an action, it cannot infer ex post the rewards induced by other action choices. In reinforcement learning, its actions have consequences: they influence not only rewards, but also future states of the world. The goal of reinforcement learning is to find an optimal policy – a mapping from the states of the world to the set of actions – in order to maximize cumulative reward over the long term. Exploring might be sub-optimal on a short-term horizon but could lead to optimal long-term strategies. Many problems of optimal control, popular in economics for more than forty years, can be expressed in the reinforcement learning framework, and recent advances in computational science, provided in particular by deep learning algorithms, can be used by economists to solve complex behavioral problems. In this article, we propose a state-of-the-art survey of reinforcement learning techniques and present applications in economics, game theory, operations research and finance.

23 citations
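
For readers new to the framework, a minimal tabular Q-learning example on a hypothetical five-state chain shows the update rule and the exploration/long-horizon trade-off the survey describes:

```python
# Minimal tabular Q-learning on a 5-state chain: reward only at the far end.
import numpy as np

n_states = 5                           # state 4 is terminal
Q = np.zeros((n_states, 2))            # actions: 0 = left, 1 = right
alpha, gamma, eps = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != n_states - 1:
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
        s2 = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s2 == n_states - 1 else 0.0
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(np.argmax(Q, axis=1))  # states 0-3 learn to move right toward the reward
```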


Journal ArticleDOI
TL;DR: The genetic algorithm (GA) is utilized to adjust and determine the initial weights and thresholds of the backpropagation neural network (BPNN) that assesses the credit risks; the resulting GA-BPNN algorithm performs well in credit risk prediction for agricultural SCF, with improved prediction accuracy and speed.
Abstract: The risk assessment methods of agricultural supply chain finance (SCF) are explored to reduce agricultural SCF's credit risks. First, the genetic algorithm (GA) is utilized to adjust and determine the initial weights and thresholds of the backpropagation neural network (BPNN), which assesses the credit risks. Second, given that many factors affect the credit risks and that selecting the relevant characteristics is difficult, a principle of assessment indicator selection is proposed; the characteristics of these indicators are selected by principal component analysis (PCA). Finally, the case analysis method is utilized to verify the proposed risk assessment method, and an optimal credit risk assessment method is established. The results show that the GA can accelerate the convergence of the BPNN and mitigate the BPNN's tendency to fall into local minima. The PCA method simplifies assessment indicator selection, and representative indicators for agricultural SCF credit risk assessment are successfully selected. Verification shows that the GA-BPNN algorithm performs well in credit risk prediction for agricultural SCF, with improved prediction accuracy and speed, and is therefore applicable to financial credit risk assessment for reducing credit risks in agricultural SCF.
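
A compact numpy sketch of the GA-BPNN idea, a genetic algorithm choosing initial weights for a small backpropagation network; the network size, GA settings, and credit data are illustrative assumptions:

```python
# GA-BPNN sketch: a GA searches over initial weight vectors, backprop fine-tunes.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                     # hypothetical credit features
w_true = rng.normal(size=6)
y = (X @ w_true + 0.5 * rng.normal(size=200) > 0).astype(float)

H = 8                       # hidden units
dim = 6 * H + H             # hidden + output weights (biases omitted for brevity)

def forward(w, X):
    W1, W2 = w[:6 * H].reshape(6, H), w[6 * H:]
    h = np.tanh(X @ W1)
    return 1.0 / (1.0 + np.exp(-(h @ W2))), h, W1, W2

def loss(w):
    p = forward(w, X)[0]
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# GA over initial weights: truncation selection, uniform crossover, mutation.
pop = rng.normal(0, 0.5, size=(30, dim))
for gen in range(40):
    fit = np.array([loss(w) for w in pop])
    parents = pop[np.argsort(fit)[:10]]
    i, j = rng.integers(10, size=(2, 30))
    mask = rng.random((30, dim)) < 0.5
    pop = np.where(mask, parents[i], parents[j]) + rng.normal(0, 0.05, (30, dim))
    pop[0] = parents[0]                            # elitism keeps the best

# Backprop fine-tuning from the GA-chosen initialization.
w = pop[0]
for step in range(500):
    p, h, W1, W2 = forward(w, X)
    g = (p - y) / len(y)                           # gradient of loss w.r.t. logits
    gW2 = h.T @ g
    gW1 = X.T @ (np.outer(g, W2) * (1 - h ** 2))
    w = w - 0.5 * np.concatenate([gW1.ravel(), gW2])
print("final cross-entropy:", round(loss(w), 4))
```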

Journal ArticleDOI
TL;DR: It is argued that the modern machine learning algorithms, although impressive in terms of their performance, do not necessarily shed enough light on human learning and take us further away from Simon’s lifelong quest to understand the mechanics of actual human behaviour.
Abstract: In this paper, we consider learning by human beings and machines in the light of Herbert Simon’s pioneering contributions to the theory of Human Problem Solving. Using board games of perfect information as a paradigm, we explore differences in human and machine learning in complex strategic environments. In doing so, we contrast theories of learning in classical game theory with computational game theory proposed by Simon. Among theories that invoke computation, we make a further distinction between computable and computational or machine learning theories. We argue that the modern machine learning algorithms, although impressive in terms of their performance, do not necessarily shed enough light on human learning. Instead, they seem to take us further away from Simon’s lifelong quest to understand the mechanics of actual human behaviour.

Journal ArticleDOI
TL;DR: This model has a great potential impact on the adequacy of macroeconomic policy, providing tools that help to achieve macroeconomic and monetary stability at the global level, and creating new methodological opportunities for GDP growth forecasting.
Abstract: Precise macroeconomic forecasting is one of the major aims of economic analysis because it facilitates a timely assessment of future economic conditions and can be used for monetary, fiscal, and economic policy purposes. Numerous works have studied the behavior of the macroeconomic situation and have developed models to forecast it. However, the existing models have limitations, and the literature demands more research on the subject given that the accuracy of the models is still poor and that they have mostly been developed for developed countries. This paper presents a comparison of methodologies for GDP growth forecasting and, consequently, new forecasting models of GDP growth have been constructed with the ability to estimate future scenarios accurately at a global level. A sample of 70 countries was used, which allowed the use of sample combinations that consider the regional heterogeneity of the warning indicators. Different methods were applied to the sample under study to achieve a high-accuracy model, comparing Quantum Computing with Deep Learning procedures; Deep Neural Decision Trees provided excellent prediction results thanks to large-scale, mini-batch-based learning and can be connected to any larger neural network model. Our model has a great potential impact on the adequacy of macroeconomic policy, providing tools that help to achieve macroeconomic and monetary stability at the global level, and creating new methodological opportunities for GDP growth forecasting.

Journal ArticleDOI
TL;DR: In this paper, a comprehensive model is established and the dynamic factor analysis method is used for urban panel data to evaluate urban competitiveness in the Huaihe River eco-economic belt, and the results show that economic development of a city has the greatest impact on its competitiveness while the impact of quality of life is small.
Abstract: Construction of the Huaihe River ecological-economic belt—an important component of the “One Belt, One Road” initiative—is essential for the development of central China. Urban competitiveness can reflect the level of urban development and comprehensive strength that, in turn, determine the trend of urban development. To evaluate urban competitiveness in the Huaihe River eco-economic belt, a comprehensive model is established and the dynamic factor analysis method is used for urban panel data. The results show that the economic development of a city has the greatest impact on its competitiveness while the impact of quality of life is small. In general, the spatial distribution of static scores of urban competitiveness in the Huaihe River eco-economic belt is unbalanced and the variation trend of dynamic scores mainly manifests as M or W shapes with regularity in time and space. The spatial distribution of the comprehensive scores of urban competitiveness varies dramatically, ranging from strong in eastern coastal areas to weak in central and western regions. In the construction of the Huaihe River eco-economic belt, urban development should rely on the comparative advantages of central cities to drive the common development of surrounding cities, helping in the overall development of the eco-economic belt and promoting the coordinated development of eastern and western regions.

Journal ArticleDOI
TL;DR: In this article, the authors classified and prioritized the knowledge-based indicators affecting economic growth using logarithmic fuzzy preference programming, and found that the institutional and economic regime has priority over the other measures in economic growth.
Abstract: The knowledge-based economy is a form of economics in which all businesses and industries benefit from the distribution and application of knowledge in pursuit of their goals. But the prosperity and growth of a knowledge-based economy can only be achieved if the economic, socio-political and legal frameworks of a country provide the necessary background to realize the required indicators of a knowledge-based economy. In this paper, the knowledge-based indicators affecting economic growth are classified and prioritized using logarithmic fuzzy preference programming. Based on the results, the institutional and economic regime has priority over the other measures in economic growth. The results of prioritizing the alternative criteria show that the technology foundation, the structure of trained manpower, trade and capital, employment, and economic trademark, respectively, affect economic growth. Furthermore, the trade-related indicators have a low effect on economic growth, whereas the technology-related indicators are the most effective. Therefore, today's oil and export economies are given lower priority than the application of knowledge, and in today's world the industrial economy cannot advance further and must move towards a knowledge-based economy.

Journal ArticleDOI
TL;DR: The training of Pi-Sigma artificial neural networks is performed by a differential evolution algorithm using the DE/rand/1 mutation strategy, and the proposed method shows very effective performance compared with many artificial neural network models.
Abstract: In the artificial neural networks literature, most early studies concerned feedforward artificial neural networks, and the training of many feedforward artificial neural network models was performed with derivative-based algorithms such as the Levenberg–Marquardt and back-propagation learning algorithms. In recent years, many new heuristic algorithms have been proposed for different aims, and these heuristic algorithms are also frequently used in the training process of many different artificial neural network models. Pi-Sigma artificial neural networks stand out among artificial neural network models for their higher-order network structure and superior forecasting performance. In this study, the training of Pi-Sigma artificial neural networks is performed by a differential evolution algorithm using the DE/rand/1 mutation strategy. The performance of the proposed method is evaluated on two data sets, showing that the proposed method performs very effectively compared with many artificial neural network models.
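
A minimal sketch of a Pi-Sigma network (a product of linear "sigma" units) trained by differential evolution; SciPy's 'rand1bin' strategy corresponds to DE/rand/1, and the series is a simulated stand-in:

```python
# Pi-Sigma network trained by differential evolution (DE/rand/1 = 'rand1bin').
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
t = np.arange(120)
y = 10 + 0.05 * t + np.sin(t / 6) + 0.1 * rng.normal(size=120)

p, K = 4, 2                              # lags and number of sigma units
X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
target = y[p:]

def pi_sigma(w, X):
    W = w.reshape(K, p + 1)              # each sigma unit: weights + bias
    sigma = X @ W[:, :p].T + W[:, p]     # (n, K) linear units
    return sigma.prod(axis=1)            # Pi layer: product of sigma units

def mse(w):
    return np.mean((pi_sigma(w, X) - target) ** 2)

res = differential_evolution(mse, bounds=[(-2, 2)] * (K * (p + 1)),
                             strategy="rand1bin", maxiter=300, seed=0, tol=1e-8)
print("training MSE:", res.fun)
```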

Journal ArticleDOI
TL;DR: The result of model training shows that the machine learning models improve the accuracy significantly compared to linear multiple regression and spatial econometric models, and the performance of the stacking model is better than that of standalone machineLearning models.
Abstract: The accurate appraisal of second-hand housing prices plays an important role in second-hand housing transactions, mortgages and risk assessment. Machine learning technology, gradually applied to finance and economics, can also be used to upgrade the traditional appraisal methods of second-hand housing. A large number of appraisal indicators and price data on second-hand housing in Beijing, Shanghai, Guangzhou and Shenzhen, four first-tier cities in China, can be obtained by using crawler technology. Then, the geographical location information of second-hand housing can be visualized by GIS technology, and the descriptive text of second-hand housing can be processed by natural language processing. Finally, combined with other numerical and classification indicators, the second-hand housing appraisal model based on a two-tier stacking framework is constructed by using random forest, adaptive boosting, gradient boosting decision tree, light gradient boosting machine and extreme gradient boosting as base models and back propagation neural network as the meta-model. The result of model training shows that the machine learning models improve the accuracy significantly compared to linear multiple regression and spatial econometric models, and the performance of the stacking model is better than that of standalone machine learning models.
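
A simplified sketch of the two-tier stacking design with scikit-learn; LightGBM and XGBoost are swapped for scikit-learn ensembles to keep the example self-contained, and housing data are synthetic:

```python
# Two-tier stacking: tree ensembles as base learners, an MLP as the meta-model.
import numpy as np
from sklearn.ensemble import (RandomForestRegressor, AdaBoostRegressor,
                              GradientBoostingRegressor, StackingRegressor)
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 10))          # e.g. area, age, location scores
price = 3.0 + 0.4 * X[:, 0] - 0.15 * X[:, 1] + 0.05 * rng.normal(size=800)
X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                ("ada", AdaBoostRegressor(random_state=0)),
                ("gbdt", GradientBoostingRegressor(random_state=0))],
    final_estimator=MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                                 random_state=0),
)
print("stacking R^2:", stack.fit(X_tr, y_tr).score(X_te, y_te))
```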

Journal ArticleDOI
TL;DR: In this article, the relevance of temperature volatility shocks for the dynamics of productivity, macroeconomic aggregates and asset prices was examined using two centuries of UK temperature data, and it was shown that the relationship between temperature volatility and the macroeconomy varies over time.
Abstract: We produce novel empirical evidence on the relevance of temperature volatility shocks for the dynamics of productivity, macroeconomic aggregates and asset prices. Using two centuries of UK temperature data, we document that the relationship between temperature volatility and the macroeconomy varies over time. First, the sign of the causality from temperature volatility to TFP growth is negative in the post-war period (i.e., 1950–2015) and positive before (i.e., 1800–1950). Second, over the pre-1950 (post-1950) period temperature volatility shocks positively (negatively) affect TFP growth. In the post-1950 period, temperature volatility shocks are also found to undermine equity valuations and other main macroeconomic aggregates. More importantly, temperature volatility shocks are priced in the cross section of returns and command a positive premium. We rationalize these findings within a production economy featuring long-run productivity and temperature volatility risk. In the model temperature volatility shocks generate non-negligible welfare costs. Such costs decrease (increase) when coupled with immediate technology adaptation (capital depreciation).

Journal ArticleDOI
TL;DR: A performance measurement heuristic that combines DEA and structural equation modelling (SEM) develops relationships between the criteria and sub-criteria of sustainability performance measurement, making it possible to identify improvement measures for every SME within a region through a statistical modelling approach.
Abstract: Although the contribution of small and medium-sized enterprises (SMEs) to economic growth is beyond doubt, they collectively affect the environment and society negatively. As SMEs have to perform in a very competitive environment, they often find it difficult to achieve their environmental and social targets. Therefore, making SMEs sustainable is one of the most daunting tasks for both policy makers and SME owners/managers alike. Prior research argues that by measuring SMEs' supply chain sustainability performance and deriving means of improvement, one can make SMEs' business more viable, not only from an economic perspective, but also from the environmental and social points of view. Prior studies apply data envelopment analysis (DEA) for measuring the performance of groups of SMEs using multiple criteria (inputs and outputs), segregating efficient and inefficient SMEs and suggesting improvement measures for each inefficient SME by benchmarking it against the most successful one. However, DEA is limited to recommending means of improvement solely for inefficient SMEs. To bridge this gap, structural equation modelling (SEM) enables developing relationships between the criteria and sub-criteria for sustainability performance measurement, which facilitates identifying improvement measures for every SME within a region through a statistical modelling approach. As SEM suggests improvements not from the perspective of individual SMEs but for the totality of SMEs involved, this tool is more suitable for policy makers than for individual company owners/managers. However, a performance measurement heuristic that combines DEA and SEM could make use of the best of each technique, and thereby could be the most appropriate tool for both policy makers and individual SME owners/managers. Additionally, SEM results can be utilized by DEA as inputs and outputs for more effective and robust results, since the latter are then based on more objective measurements. Although DEA and SEM have been applied separately to study the sustainability of organisations, to the authors' knowledge there is no published research that has combined both methods for sustainable supply chain performance measurement. The framework proposed in the present study has been applied in two different geographical locations—Normandy in France and the Midlands in the UK—to demonstrate the effectiveness of sustainable supply chain performance measurement using the combined DEA and SEM approach. Additionally, the state of the companies' sustainability in both regions is revealed through a number of comparative analyses.
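
A minimal sketch of the DEA half of the heuristic, an input-oriented CCR model solved as one linear program per SME; inputs and outputs are synthetic stand-ins (SEM-derived scores could be plugged in here):

```python
# Input-oriented CCR DEA: each SME's efficiency theta is one small LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, s = 12, 2, 2                   # SMEs, inputs, outputs
X = rng.uniform(1, 10, (n, m))       # e.g. energy use, labour
Y = rng.uniform(1, 10, (n, s))       # e.g. output value, social score

def dea_score(k):
    # decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.c_[-X[k], X.T]         # sum_j lam_j * x_ij <= theta * x_ik
    A_out = np.c_[np.zeros(s), -Y.T] # sum_j lam_j * y_rj >= y_rk
    res = linprog(c, A_ub=np.r_[A_in, A_out], b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

for k in range(n):
    print(f"SME {k}: efficiency = {dea_score(k):.3f}")
```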

Journal ArticleDOI
TL;DR: This work examines whether corporate bankruptcy predictions can be improved by utilizing the recurrent neural network (RNN) and long short-term memory (LSTM) algorithms, which can process sequential data.
Abstract: We examine whether corporate bankruptcy predictions can be improved by utilizing the recurrent neural network (RNN) and long short-term memory (LSTM) algorithms, which can process sequential data. Employing the RNN and LSTM methodologies improves bankruptcy prediction performance relative to using other classification techniques, such as logistic regression, support vector machine, and random forest methods. Because performance indicators, such as sensitivity and specificity, differ depending on the methodology, selecting a model that suits the purpose of the bankruptcy predictions is necessary. Our ensemble model, a synthesis of all methodologies, exhibits the best forecasting performance. In the test sample for the ensemble model, none of the observations with a default probability of less than 10% defaults within one year.
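
A minimal sketch of an LSTM bankruptcy classifier over sequences of annual financial ratios (TensorFlow/Keras assumed; the firm-year panel is simulated):

```python
# LSTM over firm-level sequences of financial ratios, binary default label.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
n, years, ratios = 1000, 5, 8                 # firms, periods, indicators
X = rng.normal(size=(n, years, ratios)).astype("float32")
# default risk loosely tied to the trend of the first ratio (synthetic rule)
y = (X[:, -1, 0] - X[:, 0, 0] + 0.5 * rng.normal(size=n) < -1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(16, input_shape=(years, ratios)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(X[:800], y[:800], epochs=10, batch_size=32, verbose=0)
print(model.evaluate(X[800:], y[800:], verbose=0))  # [loss, AUC]
```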

Journal ArticleDOI
TL;DR: To capture the features of long memory and jump behaviour in financial assets, a fuzzy mixed fractional Brownian motion model with jumps is proposed; it can be treated as a reference pricing tool for financial analysts or investors.
Abstract: As is well known, the financial environment on which option prices depend is very complex and fuzzy, being affected mainly by the risk preferences of investors, economic policies, markets and other non-random uncertainty. Thus, the input data in the option pricing formula cannot be expected to be precise. Fuzzy set theory has accordingly been introduced as a main method for modeling the uncertainties of the input parameters in the option pricing model. In this paper, we discuss the pricing problem of European options under the fuzzy environment. Specifically, to capture the features of long memory and jump behaviour in financial assets, we propose a fuzzy mixed fractional Brownian motion model with jumps. Subsequently, we present the fuzzy prices of European options under the assumption that the underlying stock price, the risk-free interest rate, the volatility, the jump intensity and the mean value and variance of jump magnitudes are all fuzzy numbers. This assumption allows financial investors to pick any option price with an acceptable belief degree to make investment decisions based on their risk preferences. In order to obtain the belief degree, an interpolation search algorithm is proposed. Numerical analysis and examples are also presented to illustrate the performance of our proposed model and the designed algorithm. Finally, empirical studies are performed utilizing the underlying SSE 50 ETF returns and European options written on the SSE 50 ETF. The empirical results indicate that the proposed pricing model is reasonable and can be treated as a reference pricing tool for financial analysts or investors.
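
A Monte-Carlo sketch under stated simplifications: paths of a mixed fractional Brownian motion (Brownian plus Cholesky-simulated fBm) with compound-Poisson-style jumps, and a triangular fuzzy volatility evaluated at the endpoints of an alpha-cut to produce a price interval. Parameters are illustrative and the drift treatment is simplified:

```python
# Fuzzy price interval for a European call under mixed fBm + jumps (sketch).
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, T, Hh, lam, n, paths = 100.0, 100.0, 0.02, 1.0, 0.7, 0.5, 50, 4000
dt = T / n
t = dt * np.arange(1, n + 1)

# fractional Brownian motion via Cholesky of its covariance matrix
cov = 0.5 * (t[:, None] ** (2 * Hh) + t[None, :] ** (2 * Hh)
             - np.abs(t[:, None] - t[None, :]) ** (2 * Hh))
L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))

def call_price(sigma):
    WH = rng.normal(size=(paths, n)) @ L.T              # fBm paths
    B = np.cumsum(np.sqrt(dt) * rng.normal(size=(paths, n)), axis=1)
    jumps = np.cumsum(rng.poisson(lam * dt, (paths, n))  # simplified jump part
                      * rng.normal(-0.02, 0.05, (paths, n)), axis=1)
    logS = np.log(S0) + (r - 0.5 * sigma ** 2) * t + sigma * (B + WH) + jumps
    payoff = np.maximum(np.exp(logS[:, -1]) - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

# triangular fuzzy volatility (0.15, 0.20, 0.25); alpha-cut at alpha = 0.5
alpha, lo, mid, hi = 0.5, 0.15, 0.20, 0.25
cut = (lo + alpha * (mid - lo), hi - alpha * (hi - mid))
print("fuzzy price interval:", [round(call_price(s), 2) for s in cut])
```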

Journal ArticleDOI
TL;DR: In this article, the authors used a small number of coherent trend-following technical indicators with similar characteristics, but constructed with a different philosophy, in order to predict the movement of a stock market (the Athens Stock Exchange).
Abstract: This paper utilizes a small number of coherent trend-following technical indicators with similar characteristics, but constructed with a different philosophy, in order to predict the movement of a stock market (the Athens Stock Exchange—ASE). Each of them produces independent buy/sell signals, which are used by a previously strict classic trading strategy that has been transformed appropriately to accommodate subjectivity and fuzziness. These signals act as inputs to an appropriately designed fuzzy system, which makes a medium-term prediction of the optimum level (percent) of the investor's portfolio that should be invested. The performance of the model over the 1997–2012 period greatly exceeds that of the buy-and-hold (B&H) strategy and the interest gained from savings bank accounts, even after subtracting trading costs. The results are very convincing, even when the testing period is divided into a number of bull and bear market sub-periods.
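
A tiny fuzzy-inference sketch of the final step: independent buy/sell signals are aggregated, fuzzified with triangular memberships, and mapped to a portfolio fraction. Membership shapes and rule outputs are illustrative choices, not the paper's calibration:

```python
# Fuzzy mapping from aggregated buy/sell signals to an invested fraction.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def invest_fraction(signals):
    s = np.mean(signals)                 # signals in {-1 (sell), +1 (buy)}
    mu_bear = tri(s, -1.5, -1.0, 0.0)
    mu_flat = tri(s, -1.0, 0.0, 1.0)
    mu_bull = tri(s, 0.0, 1.0, 1.5)
    # rule outputs: bear -> 0% invested, flat -> 50%, bull -> 100%
    w = np.array([mu_bear, mu_flat, mu_bull])
    return float(w @ np.array([0.0, 0.5, 1.0]) / w.sum())

# e.g. three trend indicators say buy, one says sell
print(invest_fraction([1, 1, 1, -1]))    # -> portfolio fraction 0.75
```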

Journal ArticleDOI
TL;DR: Experimental results indicated that the proposed method, which integrates Principal Component Analysis and Random Forest, can be efficiently applied to the Chinese security market and can provide useful suggestions to market regulators for insider trading investigations.
Abstract: Insider trading is a kind of criminal behavior in security markets; it has existed since the birth of the security market. By 2018, the history of the Chinese security market was less than 30 years; nonetheless, insider trading behavior has frequently occurred. In this study, we mainly explore the features of insider trading behavior by studying relevant indicators during the sensitive period (the time window before the release of insider information). For this purpose, an intelligent system integrating Principal Component Analysis (PCA) and Random Forest (RF) is proposed to identify insider trading in the Chinese security market. In the proposed method, we first collect twenty-six relevant indicators for insider trading samples that occurred from 2007 to 2017 and corresponding non-insider trading samples in the Chinese security market. Next, using PCA, the indicator dimension is reduced and principal components are extracted. Then, the relations between insider trading samples and principal components are learnt by the RF algorithm. In the identification phase, the trained PCA-RF model is applied to classify the insider trading and non-insider trading samples, as well as to analyze the relative importance of indicators for insider trading identification. Experimental results showed that under 30-, 60-, and 90-day time window lengths, recall for out-of-sample identification was 73.53%, 83.87%, and 79.41%, respectively. We further investigated the voting threshold of the RF, and found that when the voting threshold was increased to more than 70%, the proposed method produced identification accuracy of more than 90%. In addition, the relative importance results of the RF indicated that three indicators were crucial for insider trading identification. Moreover, the identification accuracy and efficiency of the proposed method were substantially superior to benchmark methods. In summary, the experimental results indicated that the proposed method can be efficiently applied to the Chinese security market and can provide useful suggestions to market regulators for insider trading investigations.
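
A minimal PCA-plus-random-forest sketch with an explicit voting threshold, showing how raising the probability cutoff trades recall for precision; the twenty-six indicators are simulated:

```python
# PCA-RF with a variable voting threshold on the forest's class probability.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score, precision_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 26))                 # stand-in trading indicators
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1.5, 600) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pca = PCA(n_components=0.9).fit(X_tr)          # keep 90% of variance
rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(pca.transform(X_tr), y_tr)
proba = rf.predict_proba(pca.transform(X_te))[:, 1]

for threshold in (0.5, 0.7):                   # share of trees voting "insider"
    pred = (proba >= threshold).astype(int)
    print(threshold, "recall:", round(recall_score(y_te, pred), 3),
          "precision:", round(precision_score(y_te, pred, zero_division=0), 3))
```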

Journal ArticleDOI
TL;DR: In this paper, the authors replicate the core model of the well-tested Keynes + Schumpeter agent-based model family, which features an endogenous innovation process in the evolutionary tradition based on invention and imitation.
Abstract: We replicate the core model of the well-tested Keynes + Schumpeter agent-based model family, which features an endogenous innovation process in the evolutionary tradition based on invention and imitation. We introduce heterogeneous labor in the form of three different types of workers, representing different skill levels. In addition to a number of other stylized facts, which are reproduced by any Keynes + Schumpeter model, our version also generates wage inequality and labor market polarization due to skill-biased technological change. We introduce various labor market institutions and policies to our artificial economy in order to test whether and how they affect inequality and polarization. Those policies that alter relative wages induce an evolution of technological development towards lower demand for the relatively expensive type of worker. Policies and institutions that only aim at increasing the relative wages of low- and medium-skilled workers therefore prove unable to combat inequality in the long run on their own. In order to be effective, those policies must be combined with educational measures that allow workers to adapt to the changes in labor demand. Our findings have important implications for the design of real-world policies against inequality and polarization, since they shed light on potential unintended consequences of some of these policies.

Journal ArticleDOI
TL;DR: Commercial banks can effectively improve their risk management ability and efficiency through technological development, reducing the level of business risk they undertake.
Abstract: To further explore the influence path of internet finance on the risk prevention and management of commercial banks, a backpropagation neural network optimization algorithm was used to predict the risk value, and the change in the risk level of commercial banks in the internet environment was empirically studied and analyzed. The results showed that the maximum number of generations and the population size significantly impacted the algorithm's optimization performance when the genetic algorithm was used for parameter optimization. Through continuous attempts, the prediction effect was best with 62 generations and a population size of 45. The trained network showed that the test set's goodness of fit was 96.07% and the prediction error was 0.84%, much better than before optimization. When the predicted risk value is greater than 0.39, the bank should be vigilant and strengthen risk prevention. The development of internet finance can reduce commercial banks' dependence on traditional business and decrease their business risk levels. It can be seen that commercial banks can effectively improve risk management ability and efficiency through technological development, so the level of business risk they undertake can be reduced.

Journal ArticleDOI
TL;DR: In this paper, the authors applied newly developed DL networks, the deep canonically correlation analysis and deep canonical correlated autoencoders to perform FinTech data mining, and the proposed model employed financial statement data regarding many listed high technology companies in Taiwan stock markets.
Abstract: With the progress of financial technology (FinTech), real-time information from FinTech is huge and complicated. For various fields of research, identifying the intrinsic features of complex data is important, and not only for financial big data. Reviewing previous studies, there are no suitable methods to deal with complex financial data. General methods are traditionally developed from statistics and machine learning. They usually take shallow model forms, which cannot fully represent complex, compositional, and hierarchical financial data features. Due to these drawbacks, this study tries to address the problem with advanced deep learning (DL) methods. In DL, more layers increase the power of abstract data representation. Recently, DL has achieved state-of-the-art performance in a wide range of tasks including speech, image, and vision. DL is effective in learning increasingly abstract representations in a layer-wise manner, which matches the characteristics of financial data. This study applies newly developed DL networks, deep canonical correlation analysis and deep canonically correlated autoencoders, to perform FinTech data mining. To test the proposed model, this study employed financial statement data on many listed high-technology companies in Taiwan stock markets. The computation of deep learning is leveraged by multiple graphics processing units. Our systems and traditional methods are compared on the same data. Empirical results showed that our systems outperform traditional techniques from statistics and machine learning.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the relationship between electricity and growth of the economy by applying the newly developed bootstrap autoregressive-distributed lag test with a Fourier function to examine both the causality and cointegration for China, India, and the United States (US).
Abstract: In this study, the relationship between electricity and growth of the economy is investigated by applying the newly-developed bootstrap autoregressive-distributed lag test with a Fourier function to examine both the causality and cointegration for China, India, and the United States (US). While it is not possible to detect a long-term cointegration relation between electricity and growth of the economy, the study findings demonstrate the contingency of the causality. The ensemble method in machine learning performs better than conventional methods, as electricity is an independent indicator for forecasting the economy. Concerning the US, previous electricity consumption has a positive impact on current economic growth; in contrast, the consumption of electricity is negatively affected by the development of the economy. For China and India, positive and negative feedback can be observed, respectively. Due to the increased awareness of the environment's adverse effects, China should promote technologies that conserve energy and boost energy efficiency to achieve sustainable development in both environmental and economic terms. In India's context, broadening access to electricity is significant for residents in rural areas and enhances economic growth. It is recommended that policy-makers promote innovative technologies in the US, as the abundant natural and human resources can make valuable contributions to society and the development of the economy.
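
A simplified, non-bootstrap sketch of a Fourier-augmented causality check: GDP growth is regressed on its own lag, lagged electricity growth, and one Fourier frequency (to capture smooth breaks), and an F-test on the electricity term gives a Granger-style statistic. Data are simulated and the paper's bootstrap critical values are omitted:

```python
# Fourier-augmented Granger-style causality test via OLS (simplified sketch).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 120
de = rng.normal(0.03, 0.05, T)                   # electricity growth
dg = 0.02 + 0.5 * np.r_[0, de[:-1]] + rng.normal(0, 0.02, T)  # GDP growth

t = np.arange(1, T)                              # single-frequency Fourier terms
X = np.column_stack([dg[:-1], de[:-1],
                     np.sin(2 * np.pi * t / T), np.cos(2 * np.pi * t / T)])
res = sm.OLS(dg[1:], sm.add_constant(X)).fit()
# H0: lagged electricity growth (regressor x2) does not help explain GDP growth
print(res.f_test("x2 = 0"))
```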

Journal ArticleDOI
TL;DR: In this paper, the authors applied a generalized autoregressive conditional heteroskedasticity model to investigate Bitcoin data and found that the Bitcoin price has a positive relationship with the exchange rates (USD/Euro, USD/GBP, USD/CHF and Euro/GBP), the DAX and the Nikkei 225, and a negative relationship with the Fed funds rate, the FTSE 100, and the USD index.
Abstract: This study explores the determinants of Bitcoin's price from 2010 to 2018. This study applies a Generalized Autoregressive Conditional Heteroskedasticity model to investigate the Bitcoin datasets. The experimental results show that the Bitcoin price has a positive relationship with the exchange rates (USD/Euro, USD/GBP, USD/CHF and Euro/GBP), the DAX and the Nikkei 225, and a negative relationship with the Fed funds rate, the FTSE 100, and the USD index. In particular, the Bitcoin price is significantly affected by the Fed funds rate, followed by the Euro/GBP rate, the USD/GBP rate and the West Texas Intermediate price. This study also applies decision tree and support vector machine techniques to predict the trend of the Bitcoin price. The machine learning approach could be a more suitable methodology than traditional statistics for predicting the Bitcoin price.
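
A minimal GARCH(1,1) sketch with the open-source arch package on simulated Bitcoin-like returns; the paper's exogenous regressors (exchange rates, index returns) are omitted here:

```python
# GARCH(1,1) with Student-t errors on simulated heavy-tailed daily returns (%).
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = 100 * rng.standard_t(df=4, size=1500) * 0.02

res = arch_model(returns, vol="GARCH", p=1, q=1, dist="t").fit(disp="off")
print(res.params)                   # mu, omega, alpha[1], beta[1], nu
forecast = res.forecast(horizon=5)
print(forecast.variance.iloc[-1])   # 5-day-ahead conditional variance path
```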

Journal ArticleDOI
TL;DR: The results show that a financial risk evaluation index system covering solvency, operating ability, profitability, growth ability and cash flow ability can capture the financial risk of enterprises.
Abstract: In order to improve the ability of enterprises to deal with financial risks, reduce labor costs and financial losses, increase investors' trust in enterprise finance, and establish a comprehensive enterprise financial risk evaluation index system, deep learning technology and data mining methods in an artificial intelligence environment are applied to the financial risk analysis of listed companies. Against this background, an analysis method for financial risk prevention based on interactive mining is put forward. Around the various financial risks faced by listed companies, a special risk analysis model is established to analyze the key factors. Through the empirical analysis of 21 listed companies, rules with high trust are found, and the financial crises of listed companies are forewarned in time. The results show that a financial risk evaluation index system covering the dimensions of solvency, operating ability, profitability, growth ability and cash flow ability can capture the financial risk of enterprises. Compared with traditional data mining algorithms, the algorithm of the financial risk index evaluation model constructed in this study performs best, with an average detection accuracy of 90.27%. The accuracy of the model can be improved by 30%. The results also show that the variable weights are sound and all pass the consistency test; the evaluation effect is high, and the relative error is 1.55%, which demonstrates the rationality and accuracy of the model. The financial risk prevention model based on deep learning and data mining technology can provide a theoretical basis for research on enterprise financial risk prevention.

Journal ArticleDOI
TL;DR: In this article, the authors compared the cost-effectiveness of two typical pricing policies, i.e., cap and trade and carbon tax, in terms of the estimation of carbon shadow prices.
Abstract: Cost-effectiveness comparisons between two typical pricing policies, i.e., cap and trade and carbon tax, are rare in the literature and are tackled in this study. We define various carbon shadow prices at different administrative levels. By using a computable general equilibrium model, the cost-effectiveness of various policies is compared in terms of the estimation of carbon shadow prices. The results show that an energy cap-and-trade policy yields a similar GDP-based carbon shadow price but a lower GSPV-based (gross-social-production-value-based) carbon shadow price than a proportional energy reduction policy does. Compared to a cap-and-trade policy, a carbon tax policy yields a much lower GDP-based carbon shadow price but a higher GSPV-based price. Improving the stringency of either a cap-and-trade policy or a carbon tax policy has limited impact on the industrial structure of the whole economy, although the impacts on the GDP and the GSPV differ between these two policies. The comparison of the two carbon pricing policies mainly implies that a carbon tax is more cost-effective than cap-and-trade for a carbon- and trade-intensive economy, but cap-and-trade has lower sector-level impacts than a carbon tax, especially when the cap restriction is loose.