An optimizing BP neural network algorithm based on genetic algorithm
Citations
...Coupled with its solid theoretical basis and simple network structure model, the neural network has achieved significant results in the fields of pattern recognition, image processing, sensors, signal processing and automatic control (Ding et al. 2011a, b, 2012; Quteishat and Lim 2008; Zhang and Wang 2009; Ding and Jin 2013). It is widely used in the fields of expert systems (Markowska-Kaczmar and Trelak 2005), pattern recognition (Mohamed 2011), intelligent control (Ding et al. 2011a, b), combinatorial optimization (Kahramanli and Allahverdi 2009) and prediction (Hagan et al. 2002). At present there are many common kinds of neural network models, such as the BP network (Ding et al. 2011a, b; Feng et al. 2009), the RBF network (Ding et al. 2011a, b), the Hopfield network, the CMAC cerebellar model and ART adaptive resonance theory (LeCun and Bengio 1995; Bengio 2009; Carpenter and Grossberg 2003; Li et al. 2013). The powerful computing capabilities of the neural network are achieved through the propagation of information between neurons. According to the direction of internal information transfer, neural networks can be divided into two categories: feedforward neural networks and feedback neural networks. The extreme learning machine (ELM) described in this paper applies to the single-hidden-layer feedforward neural network, one class of feedforward neural networks. The feedforward neural network model has been used extensively in many fields due to its ability to approximate complex nonlinear mappings directly from the input samples. Regarding the learning ability of single-hidden-layer feedforward neural networks, most studies have focused on the input samples, divided into two cases: compact sets and finite sets. Hornik (1991) proved that if the activation function is continuous, bounded and nonconstant, then continuous mappings can be approximated in measure by neural networks over compact input sets....
...On this basis, Leshno et al. (1991) improved the result, proving that feedforward networks with a non-polynomial activation function can approximate continuous functions....
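The ELM idea for a single-hidden-layer feedforward network can be sketched in a few lines: the input-to-hidden weights are fixed at random and only the output weights are fitted, here by ordinary least squares. This is a minimal sketch assuming a tanh activation and an arbitrary hidden-layer size, not the configuration used in the paper.

```python
import numpy as np

def elm_train(X, y, n_hidden=40, seed=0):
    """ELM-style training: random hidden weights, output weights by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input-to-hidden weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # solve H @ beta ~= y
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a simple 1-D nonlinear mapping to show the approximation ability.
X = np.linspace(0.0, np.pi, 200).reshape(-1, 1)
y = np.sin(3 * X).ravel()
W, b, beta = elm_train(X, y)
pred = elm_predict(X, W, b, beta)
```

Because only a linear least-squares problem is solved, training is a single pass with no gradient iterations, which is the usual motivation given for ELM over iterative backpropagation.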
[...]
References
...GA is an iterative computation process, and its main steps include: encoding, initialization of the population, selection, genetic operation (crossover, mutation), evaluation and stop decision (Yao and Xu 2006)....
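The listed steps can be illustrated with a minimal binary-encoded GA. The concrete operator choices here (roulette-wheel selection, one-point crossover, bit-flip mutation, elitism, and a fixed generation budget as the stop decision) are stock textbook defaults rather than the cited work's exact configuration, and the demo fitness is OneMax (the number of 1 bits).

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, n_gen=80,
                      cx_rate=0.8, mut_rate=0.02, seed=1):
    """Binary GA: encoding -> initialization -> selection -> genetic operation
    (crossover, mutation) -> evaluation, stopping after a fixed generation budget."""
    rng = random.Random(seed)
    # Encoding and initialization: each individual is a random bit string.
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(n_gen):                                # stop decision: fixed budget
        fits = [fitness(ind) for ind in pop]              # evaluation
        new_pop = [max(pop, key=fitness)]                 # elitism: keep current best
        while len(new_pop) < pop_size:
            p1, p2 = rng.choices(pop, weights=fits, k=2)  # roulette-wheel selection
            if rng.random() < cx_rate:                    # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1 = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < mut_rate) for bit in p1]  # bit-flip mutation
            new_pop.append(child)
        pop = new_pop
        best = max(pop + [best], key=fitness)
    return best

best = genetic_algorithm(fitness=sum)  # OneMax: optimum is the all-ones string
```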
[...]
...To overcome these disadvantages, many optimization algorithms have been introduced into the study and design of neural networks, such as constructing a neural network based on the particle swarm optimization algorithm (Chen and Yu 2005) and using evolutionary algorithms to optimize neural networks (Eysa and Saeed 2005; Harpham 2004; Venkatesan 2009; Yao and Islam 2008), approaches which have been proven feasible and effective....
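As a concrete illustration of the evolutionary approach, the sketch below encodes all weights and biases of a tiny 1-5-1 network as one real-valued chromosome and evolves a population against mean squared error. The network size, tournament selection, blend crossover and mutation settings are all illustrative assumptions, not the configuration of any of the cited works.

```python
import numpy as np

def nn_forward(w, X):
    """Tiny 1-5-1 network; w is a flat 16-element weight vector (the chromosome)."""
    W1 = w[:5].reshape(1, 5); b1 = w[5:10]
    W2 = w[10:15].reshape(5, 1); b2 = w[15]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w, X, y):
    return np.mean((nn_forward(w, X).ravel() - y) ** 2)

def evolve_weights(X, y, dim=16, pop_size=40, n_gen=150, seed=0):
    """Real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(0.0, 1.0, (pop_size, dim))
    for _ in range(n_gen):
        losses = np.array([mse(w, X, y) for w in pop])
        children = [pop[np.argmin(losses)]]              # elitism: carry the best over
        while len(children) < pop_size:
            i, j = rng.integers(0, pop_size, 2)          # tournament of two
            p1 = pop[i] if losses[i] < losses[j] else pop[j]
            i, j = rng.integers(0, pop_size, 2)
            p2 = pop[i] if losses[i] < losses[j] else pop[j]
            a = rng.random(dim)                          # per-gene blend crossover
            child = a * p1 + (1 - a) * p2
            child += rng.normal(0, 0.1, dim) * (rng.random(dim) < 0.2)  # mutation
            children.append(child)
        pop = np.array(children)
    losses = np.array([mse(w, X, y) for w in pop])
    return pop[np.argmin(losses)]

X = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)
y = np.sin(2 * X).ravel()
w_best = evolve_weights(X, y)
```

In hybrid GA-BP schemes like the one named in the paper's title, weights evolved this way typically serve as the starting point for gradient-based BP fine-tuning rather than as the final network.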
[...]