Author

Jinrong Chen

Bio: Jinrong Chen is an academic researcher from China University of Mining and Technology. The author has contributed to research in topics: Artificial neural network & Probabilistic neural network. The author has an h-index of 3 and has co-authored 3 publications receiving 84 citations.

Papers
Journal ArticleDOI
TL;DR: Instance analysis demonstrates that the new algorithm outperforms the traditional model in convergence rate, prediction error, and the number of successful training runs, confirming its effectiveness and its suitability for wider adoption.
Abstract: Elman neural networks provide dynamic mapping capability when processing complex nonlinear data. However, because the Elman network inherits features of the back-propagation neural network, it also inherits several of its defects: it easily falls into local minima, uses a fixed learning rate, and offers no principled way to choose the number of hidden-layer neurons, all of which limit processing accuracy. We therefore use a genetic algorithm to optimize the weights, thresholds, and number of hidden-layer neurons of the Elman network, which improves its training speed and generalization ability and yields an optimized model. Instance analysis proves that the new algorithm is superior to the traditional model in convergence rate, prediction error, and the number of successful training runs, demonstrating its effectiveness and its suitability for wider adoption.
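The GA-based weight optimization described in this abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it evolves the weights of a small feedforward network (standing in for the Elman network, whose recurrent context layer is omitted) using truncation selection, uniform crossover, and Gaussian mutation. The hidden-layer size, population size, and genetic-operator rates are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression target the network should learn.
X = np.linspace(-1.0, 1.0, 40).reshape(-1, 1)
y = np.sin(np.pi * X).ravel()

N_HIDDEN = 6                 # hypothetical hidden-layer size
GENES = 3 * N_HIDDEN + 1     # w1 (1x6) + b1 (6) + w2 (6x1) + b2 (1)

def decode(chrom):
    """Split a flat chromosome into the weights of a 1-6-1 network."""
    w1 = chrom[:N_HIDDEN].reshape(1, N_HIDDEN)
    b1 = chrom[N_HIDDEN:2 * N_HIDDEN]
    w2 = chrom[2 * N_HIDDEN:3 * N_HIDDEN].reshape(N_HIDDEN, 1)
    b2 = chrom[3 * N_HIDDEN]
    return w1, b1, w2, b2

def predict(chrom, X):
    w1, b1, w2, b2 = decode(chrom)
    h = np.tanh(X @ w1 + b1)          # hidden layer
    return (h @ w2).ravel() + b2      # linear output layer

def fitness(chrom):
    # Lower mean-squared error -> higher fitness.
    return -np.mean((predict(chrom, X) - y) ** 2)

pop = rng.normal(0.0, 1.0, size=(40, GENES))
for _ in range(150):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]   # truncation selection
    children = []
    for _ in range(30):
        a = parents[rng.integers(10)]
        b = parents[rng.integers(10)]
        mask = rng.random(GENES) < 0.5             # uniform crossover
        child = np.where(mask, a, b)
        # Gaussian mutation on roughly 20% of the genes.
        child = child + rng.normal(0.0, 0.1, GENES) * (rng.random(GENES) < 0.2)
        children.append(child)
    pop = np.vstack([parents] + children)          # elitism: parents survive

best = max(pop, key=fitness)
mse = -fitness(best)
```

The paper additionally evolves the number of hidden neurons; here only the weights and thresholds are encoded, which already shows how the GA sidesteps gradient descent's local minima and fixed learning rate.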

52 citations

Journal ArticleDOI
TL;DR: A FA-BP neural network algorithm is proposed that simplifies the network structure, speeds up convergence, and reduces running time; the results show that the new algorithm reduces prediction error without sacrificing prediction precision, demonstrating its effectiveness.
Abstract: The back-propagation (BP) neural network, one of the most mature and widespread algorithms, supports large-scale computation and has unique advantages when dealing with nonlinear, high-dimensional data. When high-dimensional data are fed to a BP network directly, however, the many feature variables carry redundant information: too many network inputs complicate the design of the hidden layer, consume considerable storage space and computing time, interfere with the convergence of training, and can ultimately reduce recognition accuracy. Factor analysis (FA) is a multivariate analysis method that transforms many feature variables into a few synthetic variables. Given samples with many feature variables, and considering the structure of the BP network, we propose a FA-BP neural network algorithm. First we reduce the dimensionality of the features using FA; we then use the reduced features as the input of the BP neural network and carry out training and simulation on the resulting low-dimensional data. This algorithm simplifies the network structure, speeds up convergence, and reduces running time. We apply the new algorithm to pest prediction. The results show that the new algorithm reduces prediction error without sacrificing prediction precision, and is therefore effective.
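The two-step FA-BP pipeline can be sketched with scikit-learn. This is a hedged illustration on synthetic data, not the paper's pest-prediction experiment: the data sizes, number of factors, and network architecture are all hypothetical.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the paper's samples: 25 correlated feature
# variables generated from 4 hidden factors (all sizes hypothetical).
n, latent_dim, obs_dim = 400, 4, 25
Z = rng.normal(size=(n, latent_dim))                 # true hidden factors
W = rng.normal(size=(latent_dim, obs_dim))
X = Z @ W + 0.1 * rng.normal(size=(n, obs_dim))      # observed features
y = Z[:, 0] + 0.5 * Z[:, 1] ** 2                     # nonlinear target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: factor analysis compresses the many feature variables
# into a few synthetic variables.
fa = FactorAnalysis(n_components=latent_dim, random_state=0).fit(X_tr)
Z_tr, Z_te = fa.transform(X_tr), fa.transform(X_te)

# Step 2: a small BP network (multilayer perceptron) is trained
# on the low-dimensional factor scores instead of the raw features.
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=4000,
                   random_state=0).fit(Z_tr, y_tr)
r2 = net.score(Z_te, y_te)   # held-out R^2
```

Because the network now sees 4 inputs instead of 25, its first weight matrix shrinks accordingly, which is the structural simplification the abstract refers to.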

37 citations

Proceedings ArticleDOI
16 Apr 2010
TL;DR: This paper proposes a radial basis function (RBF) neural network algorithm based on factor analysis (FA-RBF), designed around the architecture of the RBF network for high-dimensional, complex data, and compares it with an RBF neural network algorithm based on principal component analysis (PCA-RBF).
Abstract: This paper proposes a radial basis function (RBF) neural network algorithm based on factor analysis (FA-RBF), designed around the architecture of the RBF network for high-dimensional, complex data. The FA-RBF algorithm reduces the feature dimension of the original data, uses the reduced data as the inputs of the RBF network, and then trains and simulates the network, which markedly simplifies the network architecture. An example analysis shows that the algorithm improves convergence speed, reduces running time, and lowers prediction error without reducing prediction precision. To verify the validity of the new algorithm, we compare it with an RBF neural network algorithm based on principal component analysis (PCA-RBF); the predictions of FA-RBF are better than those of both plain RBF and PCA-RBF.
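For readers unfamiliar with the RBF network that FA-RBF builds on, here is a minimal numpy sketch of the classic fast RBF training scheme: Gaussian hidden units at fixed centres and output weights fitted by linear least squares. The centre placement and width are hypothetical choices, and the factor-analysis preprocessing step from the paper is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: a one-dimensional nonlinear function with a little noise.
X = np.linspace(-3.0, 3.0, 120).reshape(-1, 1)
y = np.exp(-X.ravel() ** 2) + 0.05 * rng.normal(size=120)

# Hidden layer: Gaussian units on a fixed grid of centres
# (hypothetical placement; centres are often chosen by clustering).
centres = np.linspace(-3.0, 3.0, 10).reshape(-1, 1)
width = 0.8

def rbf_features(X):
    # Squared distance from each input to each centre -> Gaussian activation.
    d2 = (X - centres.T) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

# Output-layer weights by linear least squares: no iterative
# back-propagation is needed, which is why RBF training is fast.
Phi = rbf_features(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

pred = Phi @ w
mse = np.mean((pred - y) ** 2)
```

In the FA-RBF setting, `X` would be the factor scores produced by factor analysis rather than the raw high-dimensional features, shrinking the distance computations in the hidden layer.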

10 citations


Cited by
Journal ArticleDOI
TL;DR: This article evaluated the utility of artificial neural networks (ANNs) in terms of their ability to forecast rainfall as a continuous variable, and showed that ANNs highlight the value of the Interdecadal Pacific Oscillation, an index never used in the official seasonal forecasts for Queensland, which until recently were based on statistical models.

165 citations

Journal ArticleDOI
02 May 2016
TL;DR: The overall algorithmic development of RBF networks is discussed, with special focus on learning methods, novel kernels, and fine-tuning of kernel parameters, together with recent research on multi-criterion optimization in RBF networks.
Abstract: Radial basis function networks (RBFNs) have gained widespread appeal amongst researchers and have shown good performance in a variety of application domains. They have potential for hybridization and demonstrate some interesting emergent behaviors. This paper aims to offer a compendious and sensible survey of RBF networks. The advantages they offer, such as fast training and global approximation capability with local responses, are attracting many researchers to use them in diversified fields. We discuss the overall algorithmic development of RBF networks, giving special focus to their learning methods, novel kernels, and fine-tuning of kernel parameters. In addition, we consider recent research on multi-criterion optimization in RBF networks and survey a range of indicative application areas along with some open-source RBFN tools.

81 citations

Journal ArticleDOI
TL;DR: Volatility forecasts for the prices of gold, silver, and copper are analyzed, finding that the best model for forecasting the price-return volatility of these metals is the ANN-GARCH model with regressors.
Abstract: Highlights: A hybrid model is analyzed to predict the price volatility of gold, silver, and copper. The hybrid model used is an ANN-GARCH model with regressors. APGARCH with exogenous variables is used as the benchmark. The benchmark is better than the classical GARCH used in previous studies. Incorporating the ANN into the best GARCH with regressors increases accuracy.
In this article, we analyze volatility forecasts associated with the price of gold, silver, and copper, three of the most important metals in the world market. First, a group of GARCH models is used to forecast volatility, including explanatory variables such as the US Dollar-Euro and US Dollar-Yen exchange rates, the oil price, and the Chinese, Indian, British, and American stock market indexes. These model predictions are then used as inputs to a neural network in order to analyze the increase in hybrid predictive power. The results show that, for these three metals, the hybrid neural network model increases out-of-sample volatility forecasting power. To optimize the results, we conducted a series of sensitivity analyses of the artificial neural network architecture for different cases, finding that the best model for forecasting the price-return volatility of these metals is the ANN-GARCH model with regressors. Because of the heteroscedasticity in the financial series, the loss function used is the heteroskedasticity-adjusted mean squared error (HMSE), and the Model Confidence Set is used to test the superiority of the models.
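The GARCH stage of the hybrid model produces a conditional-variance series that is then fed to the neural network. The recursion below is a plain GARCH(1,1) sketch with hypothetical parameters, not the paper's APGARCH-with-regressors specification; it verifies itself by filtering returns simulated from the same process.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate returns from a GARCH(1,1) process (hypothetical parameters).
omega, alpha, beta = 0.05, 0.10, 0.85
n = 2000
true_var = np.empty(n)
r = np.empty(n)
true_var[0] = omega / (1.0 - alpha - beta)     # unconditional variance
for t in range(n):
    if t > 0:
        true_var[t] = omega + alpha * r[t - 1] ** 2 + beta * true_var[t - 1]
    r[t] = np.sqrt(true_var[t]) * rng.normal()

def garch_filter(r, omega, alpha, beta):
    """One-step-ahead conditional variance:
    h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}."""
    h = np.empty_like(r)
    h[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return h

h = garch_filter(r, omega, alpha, beta)
```

In the hybrid setup, `h` (together with the exchange-rate, oil-price, and stock-index regressors) would form the input vector of the neural network; in practice the GARCH parameters are estimated by maximum likelihood rather than assumed known.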

60 citations

Journal ArticleDOI
TL;DR: This study may serve as a guide for both novice and expert researchers in the design of evolutionary neural networks, helping them choose suitable values of genetic algorithm operators for a specific problem domain.
Abstract: Neural networks and genetic algorithms are two sophisticated machine learning techniques currently attracting attention from scientists, engineers, and statisticians, among others, and have gained popularity in recent years. This paper presents a state-of-the-art review of research on the optimization of neural networks through genetic algorithm searches. Such optimization aims to overcome the inherent limitations of neural networks so that they can solve complex and challenging problems. We analyze and synthesize the research published in this area according to application domain, the neural network design issues addressed with genetic algorithms, the types of neural networks used, and the optimal values of genetic algorithm operators (population size, crossover rate, and mutation rate). This study may serve as a guide for both novice and expert researchers in the design of evolutionary neural networks, helping them choose suitable values of genetic algorithm operators for a specific problem domain. We also identify further research directions that have not yet received much attention from scholars.

48 citations

Journal ArticleDOI
TL;DR: A novel optimized GA–Elman neural network algorithm is proposed in which the connection weights are real-coded, while the hidden-layer neurons also adopt real coding but with the addition of binary control genes.
Abstract: The Elman neural network has good dynamic properties and strong global stability, and is widely used to deal with nonlinear, dynamic, and complex data. However, as an extension of the backpropagation (BP) neural network, the Elman model inevitably inherits some of its deficiencies, which affect recognition precision and operating efficiency. Many improvements have been proposed to resolve these problems, but it has proved difficult to balance the relevant requirements, such as storage space, algorithm efficiency, and recognition precision, and difficult to turn such partial fixes into a general, lasting solution. To address this, a genetic algorithm (GA) can be introduced into the Elman algorithm to optimize the connection weights and thresholds, which prevents the network from becoming trapped in local minima and improves the training speed and success rate. The structure of the hidden layer can also be optimized using the GA, which solves the difficult problem of determining the number of neurons. Most previous studies of such evolutionary Elman algorithms optimized either the connection weights or the network structure individually, which is a shortcoming. We propose herein a novel optimized GA–Elman neural network algorithm in which the connection weights are real-coded, while the hidden-layer neurons also adopt real coding but with the addition of binary control genes. In this new algorithm, the connection weights and the number of hidden neurons are optimized simultaneously through hybrid encoding and evolution, greatly improving the performance of the resulting GA–Elman algorithm. The results of three experiments show that the new GA–Elman model is superior to the traditional model on all calculated indexes.
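The hybrid encoding described here, real-valued genes for weights plus binary control genes that switch hidden neurons on or off, can be illustrated by a small chromosome decoder. This is a hypothetical sketch of the encoding idea only: the layer sizes are invented, and the paper's recurrent context weights, fitness function, and genetic operators are not shown.

```python
import numpy as np

MAX_HIDDEN = 8        # upper bound on hidden-layer size (hypothetical)
N_IN, N_OUT = 3, 1    # input/output sizes (hypothetical)

def split_chromosome(chrom):
    """Hybrid encoding: real genes hold the weights, binary control
    genes decide which hidden neurons actually exist."""
    n_w1 = N_IN * MAX_HIDDEN
    n_w2 = MAX_HIDDEN * N_OUT
    w1 = np.asarray(chrom[:n_w1], dtype=float).reshape(N_IN, MAX_HIDDEN)
    w2 = np.asarray(chrom[n_w1:n_w1 + n_w2], dtype=float).reshape(MAX_HIDDEN, N_OUT)
    control = np.asarray(chrom[n_w1 + n_w2:], dtype=bool)   # one bit per neuron
    # Masking a neuron's column in w1 and row in w2 removes it from the
    # network entirely, so weights and structure evolve together.
    return w1[:, control], w2[control, :]

# Example chromosome: 24 + 8 real weight genes, then 8 binary control genes.
chrom = list(np.arange(N_IN * MAX_HIDDEN + MAX_HIDDEN * N_OUT, dtype=float)) \
        + [1, 0, 1, 1, 0, 0, 1, 0]
w1, w2 = split_chromosome(chrom)
# Four control bits are set, so the effective hidden layer has 4 neurons.
```

Because a single crossover or mutation can flip a control bit, the GA explores different hidden-layer sizes in the same pass that it tunes the weights, which is the simultaneous optimization the abstract emphasizes.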

48 citations