
Showing papers on "Evolutionary programming published in 1990"


Journal ArticleDOI
TL;DR: Evolutionary programming is analyzed as a technique for training a general neural network that can yield faster, more efficient yet robust training procedures that accommodate arbitrary interconnections and neurons possessing additional processing capabilities.
Abstract: Neural networks are parallel processing structures that provide the capability to perform various pattern recognition tasks. A network is typically trained over a set of exemplars by adjusting the weights of the interconnections using a back propagation algorithm. This gradient search converges to locally optimal solutions which may be far removed from the global optimum. In this paper, evolutionary programming is analyzed as a technique for training a general neural network. This approach can yield faster, more efficient yet robust training procedures that accommodate arbitrary interconnections and neurons possessing additional processing capabilities.

300 citations
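The contrast the abstract draws between gradient search, which "converges to locally optimal solutions," and population-based evolutionary search can be sketched on a standard multimodal test function. Everything below (Rastrigin's function, the population size, mutation scale, and generation count) is an illustrative assumption, not taken from the paper:

```python
import math
import random

random.seed(0)

def rastrigin(x):
    # Multimodal test function: global minimum 0 at the origin, many local minima
    # that would trap a pure gradient descent started at a random point.
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def evolve(fitness, dim=2, mu=20, generations=200, sigma=0.3):
    # Classic EP loop: each parent spawns one Gaussian-mutated offspring,
    # then the best mu of (parents + offspring) survive.
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        offspring = [[xi + random.gauss(0, sigma) for xi in p] for p in pop]
        pop = sorted(pop + offspring, key=fitness)[:mu]
    return pop[0]

best = evolve(rastrigin)
print(rastrigin(best))
```

Because selection acts only on fitness values, the same loop applies unchanged to a network's training error over arbitrary interconnections, which is the flexibility the paper emphasizes.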


Book ChapterDOI
01 Oct 1990
TL;DR: The distributed genetic algorithm presented has a population structure that allows the introduction of “ecological opportunity” in the evolutionary process in a manner motivated by the macro-evolutionary theory of Eldredge and Gould.
Abstract: The distributed genetic algorithm presented has a population structure that allows the introduction of “ecological opportunity” [Wrig 82] in the evolutionary process in a manner motivated by the macro-evolutionary theory of Eldredge and Gould [Eldr 72]. The K-partition problem is selected from the domain of VLSI design and empirical results are presented to show the advantage derived from the population structure.

49 citations
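The population structure the abstract describes can be sketched as an island (deme) model with periodic migration. The one-max objective below is a toy stand-in for the paper's K-partition cost, and the deme sizes, migration interval, and operators are illustrative assumptions, not the paper's algorithm:

```python
import random

random.seed(1)

N_BITS, N_DEMES, DEME_SIZE, GENS, MIGRATE_EVERY = 32, 4, 10, 60, 10

def fitness(ind):
    # Toy objective standing in for the K-partition cost: count of 1-bits.
    return sum(ind)

def breed(deme):
    # One generation inside a deme: tournament selection, one-point
    # crossover, and a single bit-flip mutation per child.
    new = []
    for _ in range(len(deme)):
        a, b = (max(random.sample(deme, 2), key=fitness) for _ in range(2))
        cut = random.randrange(1, N_BITS)
        child = a[:cut] + b[cut:]
        child[random.randrange(N_BITS)] ^= 1
        new.append(child)
    return new

demes = [[[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(DEME_SIZE)]
         for _ in range(N_DEMES)]
for gen in range(1, GENS + 1):
    demes = [breed(d) for d in demes]
    if gen % MIGRATE_EVERY == 0:
        # Ring migration: each deme's best replaces a random member of the
        # next deme, so demes evolve semi-independently between exchanges.
        bests = [max(d, key=fitness) for d in demes]
        for i, d in enumerate(demes):
            d[random.randrange(DEME_SIZE)] = bests[i - 1][:]

print(max(fitness(ind) for d in demes for ind in d))
```

The semi-isolated demes are what create room for "ecological opportunity": a deme can drift toward a different region of the search space before migrants recouple it to the rest of the population.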


Proceedings ArticleDOI
05 Jun 1990
TL;DR: It is evident from the experimental results that sophisticated genetic operators are not required to ensure successful adaptation, and an entire solution to the routing problem at hand was generated assuming a fixed environment.
Abstract: Evolutionary programming is an inductive reasoning process wherein randomness is selectively incorporated to build a logic that meets the challenges posed by the environment. A demonstration of evolutionary optimization in the domain of routing multiple autonomous underwater vehicles (AUVs) is presented. A series of experiments of increasing complexity was designed to evaluate the potential of evolutionary programming for AUV routing. In the experiments, the evolutionary algorithm was given no knowledge about the problem itself, only information concerning the relative worth of its proposed solutions. It is evident from the experimental results that sophisticated genetic operators are not required to ensure successful adaptation. The experiments involved only two-dimensional routings with relatively simple constraints, and were performed off-line; that is, an entire solution to the routing problem at hand was generated assuming a fixed environment.

42 citations


Proceedings ArticleDOI
05 Nov 1990
TL;DR: Evolutionary programming was conceived in 1960 as an alternative approach to artificial intelligence and now takes the form of hierarchic evolutionary programming wherein higher levels evolve optimal parameters for lower levels, and artificial consciousness wherein a self-referential capability is used to enhance such hierarchic control.
Abstract: Evolutionary programming was conceived in 1960 as an alternative approach to artificial intelligence. The initial applications concerned prediction, modeling, and the control of unknown processes with respect to an arbitrary payoff function. Future applications take the form of hierarchic evolutionary programming, wherein higher levels evolve optimal parameters for lower levels, and artificial consciousness, wherein a self-referential capability is used to enhance such hierarchic control. Some consideration is given to the generation of true autonomy; that is, machines that develop their own purpose. Some unanswered questions are offered for consideration.

28 citations


Proceedings ArticleDOI
17 Jun 1990
TL;DR: The use of evolutionary programming to train a neural network is addressed by examining a simple classification problem involving two populations of interest, indicating that evolutionary programming is useful for discovering the optimal set of weights in a neural network.
Abstract: The use of evolutionary programming to train a neural network is addressed by examining a simple classification problem involving two populations of interest. The first follows a Gaussian distribution having zero mean and unit variance; the second is also Gaussian, but with a mean of one and unit variance. Ten independent samples are drawn from either one of the populations, with the objective being to properly identify the appropriate underlying distribution. A network consisting of 10 input nodes, a single hidden layer of 10 nodes, and a final classifying node was constructed. To train the network, evolutionary programming was used to minimize the sum of the squared differences between each target output and the actual network output. A population of 10 parent vectors representing the 121 weights and biases of the network was maintained, and the limit of evolution was chosen to be 1000 generations. A training set of 100 patterns, 50 from each population, was determined randomly and then fixed for the extent of the evolution. The results indicate that evolutionary programming is useful for discovering the optimal set of weights in a neural network.

23 citations
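The abstract gives enough detail to sketch the experiment. The sketch below follows the stated setup (a 10-10-1 network with 121 parameters, 10 parents, 100 fixed patterns split between N(0,1) and N(1,1), sum-of-squared-error fitness), but the activation functions, mutation scale, initialization, and a shortened 300-generation run are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 training patterns of 10 samples each, as in the paper:
# 50 drawn from N(0, 1) (target 0) and 50 from N(1, 1) (target 1).
X = np.vstack([rng.normal(0.0, 1.0, (50, 10)), rng.normal(1.0, 1.0, (50, 10))])
y = np.concatenate([np.zeros(50), np.ones(50)])

def sse(w):
    # 10-10-1 network: 121 parameters = 100 + 10 (hidden) plus 10 + 1 (output).
    W1, b1 = w[:100].reshape(10, 10), w[100:110]
    W2, b2 = w[110:120], w[120]
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid classifying node
    return np.sum((out - y) ** 2)

# EP: 10 parents each produce one Gaussian-mutated offspring;
# the 10 best of the combined pool survive.
pop = rng.normal(0.0, 0.5, (10, 121))
for _ in range(300):                  # the paper evolved for 1000 generations
    children = pop + rng.normal(0.0, 0.1, pop.shape)
    both = np.vstack([pop, children])
    scores = np.array([sse(w) for w in both])
    pop = both[np.argsort(scores)[:10]]

print(round(float(sse(pop[0])), 2))
```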


Book ChapterDOI
01 Oct 1990
TL;DR: This paper presents a general, problem-independent learning strategy for neural networks, based on Rechenberg's evolutionary strategy, which proposes that the optimization always works with a subset of object variables.
Abstract: This paper presents a general, problem-independent learning strategy for neural networks, based on Rechenberg's evolutionary strategy ([Rechenberg 73]). The main innovation of the evolutionary strategy proposed is that the optimization always works with a subset of object variables. The size of this subspace is controlled adaptively. My experiments showed that a learning strategy based exclusively on evolutionary algorithms only works properly if some modifications are made to the original algorithms. These are described in the section “Adaptive Selection of the Object Variables”.

19 citations


Proceedings ArticleDOI
05 Jun 1990
TL;DR: The experiments discussed indicate the practicality of implementing neural networks using backpropagation or evolutionary programming for online optimal navigation in order to train networks with single and multiple hidden layers.
Abstract: A neural net approach is considered as a nonlinear controller for precise navigation and positioning of an autonomous underwater vehicle (AUV) around and about fixed and/or moving objects. The network can be trained to operate within various noise conditions consisting of current fields or other constraints. A neural net uses sensor position and velocity information as the inputs and relative position and motion vectors for the propulsion/steering unit as the output. The effectiveness of backpropagation and evolutionary programming methods for training networks with single and multiple hidden layers is investigated. Results based on simulated data sources and capabilities are presented. The experiments discussed indicate the practicality of implementing neural networks using backpropagation or evolutionary programming for online optimal navigation.

12 citations


Proceedings ArticleDOI
05 Nov 1990
TL;DR: The technique permits analysis of neural strategies against a set of plants and gives both the best choice of control parameters and identification of the plant configuration which is most difficult for the best controller to handle.
Abstract: This paper describes the use of evolutionary programming for computer-aided design and testing of neural single layer regulators. The design and testing problem is viewed as a game in that the controller parameters are to be chosen with a minimax criterion, i.e. to minimize the loss associated with their use on the worst possible plant parameters. The technique permits analysis of neural strategies against a set of plants. This gives both the best choice of control parameters and identification of the plant configuration which is most difficult for the best controller to handle. Computations of an approximate minimax solution are fast, requiring less than 24 hours in background mode on a SUN 4/260 running with other normal traffic.

12 citations
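The minimax criterion described, choosing controller parameters to minimize the loss on the worst plant in a set, can be sketched with a scalar plant in place of the paper's neural regulators. The plant model, cost function, candidate plant set, and all parameters below are illustrative assumptions:

```python
import random

random.seed(3)

PLANTS = [0.5, 1.0, 1.5, 2.0]   # candidate (unstable) pole locations a

def loss(k, a, dt=0.05, steps=100):
    # Quadratic cost of the scalar closed loop x' = a*x + u with
    # proportional control u = -k*x, simulated by Euler steps from x0 = 1.
    x, cost = 1.0, 0.0
    for _ in range(steps):
        u = -k * x
        cost += dt * (x * x + 0.1 * u * u)
        x += dt * (a * x + u)
    return cost

def worst_case(k):
    # The minimax criterion: score a controller by its worst plant.
    return max(loss(k, a) for a in PLANTS)

# EP over the single controller gain: Gaussian mutation, truncation selection.
pop = [random.uniform(0.0, 10.0) for _ in range(10)]
for _ in range(50):
    children = [k + random.gauss(0, 0.5) for k in pop]
    pop = sorted(pop + children, key=worst_case)[:10]

best_k = pop[0]
print(round(best_k, 2), round(worst_case(best_k), 3))
```

As in the paper's game formulation, the surviving gain both fixes the control parameters and identifies (via `worst_case`) which plant configuration is hardest for the best controller to handle.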



Proceedings ArticleDOI
01 Nov 1990
TL;DR: This paper applies evolutionary programming to the design of neural networks for artifact detection in pulsatile arterial pressure waveforms and reduced the complexity of a neural artifact detector from 21 hidden nodes to two with very little penalty in classification error.
Abstract: This paper applies evolutionary programming to the design of neural networks for artifact detection in pulsatile arterial pressure waveforms. Evolutionary programming is an efficient search scheme which avoids the tendency of back propagation to become trapped in local minima. Use of this technique reduced the complexity of a neural artifact detector [3] from 21 hidden nodes to two with very little penalty in classification error.

9 citations


Proceedings ArticleDOI
27 Nov 1990
TL;DR: Evolutionary optimization is proposed as a method for machine learning that can address systems in which there is little or no prior knowledge and is more versatile than classic prediction and correlation error methods.
Abstract: Evolutionary optimization is proposed as a method for machine learning. Simulating evolution can be used for the prediction, identification, and control of time-varying plants. Models which describe the input-output characteristics of the system are evolved in fast time. This evolutionary programming can address systems in which there is little or no prior knowledge. There is no requirement for using a squared error or other smooth criterion. The technique is more versatile than classic prediction and correlation error methods.