
Showing papers in "IEEE Transactions on Evolutionary Computation in 2000"


Journal ArticleDOI
TL;DR: A novel approach to balance objective and penalty functions stochastically, i.e., stochastic ranking, is introduced, and a new view on penalty function methods in terms of the dominance of penalty and objective functions is presented.
Abstract: Penalty functions are often used in constrained optimization. However, it is very difficult to strike the right balance between objective and penalty functions. This paper introduces a novel approach to balance objective and penalty functions stochastically, i.e., stochastic ranking, and presents a new view on penalty function methods in terms of the dominance of penalty and objective functions. Some of the pitfalls of naive penalty methods are discussed in these terms. The new ranking method is tested using a (μ, λ) evolution strategy on 13 benchmark problems. Our results show that suitable ranking alone (i.e., selection), without the introduction of complicated and specialized variation operators, is capable of improving the search performance significantly.
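The core of stochastic ranking is a bubble-sort-style pass that compares adjacent individuals by objective value when both are feasible, and otherwise compares by objective only with some probability Pf. A minimal illustrative sketch (the value 0.45 and all names are assumptions, not taken from the paper):

```python
import random

def stochastic_rank(objectives, penalties, pf=0.45):
    """Rank individuals by stochastic ranking (illustrative sketch).

    objectives: objective values (lower is better)
    penalties:  constraint-violation penalties (0 = feasible)
    pf:         probability of comparing by objective when infeasible
    Returns indices sorted from best to worst.
    """
    idx = list(range(len(objectives)))
    n = len(idx)
    for _ in range(n):
        swapped = False
        for j in range(n - 1):
            a, b = idx[j], idx[j + 1]
            both_feasible = penalties[a] == 0 and penalties[b] == 0
            if both_feasible or random.random() < pf:
                # compare by objective function
                if objectives[a] > objectives[b]:
                    idx[j], idx[j + 1] = b, a
                    swapped = True
            else:
                # compare by constraint-violation penalty
                if penalties[a] > penalties[b]:
                    idx[j], idx[j + 1] = b, a
                    swapped = True
        if not swapped:
            break
    return idx
```

With pf near 0 the ranking is dominated by the penalty; with pf near 1 it is dominated by the objective, which is the balance the paper studies.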

1,571 citations


Journal ArticleDOI
TL;DR: This work presents a new approach to feature extraction in which feature selection, feature extraction, and classifier training are performed simultaneously using a genetic algorithm, and employs this technique in combination with the k nearest neighbor classification rule.
Abstract: Pattern recognition generally requires that objects be described in terms of a set of measurable features. The selection and quality of the features representing each pattern affect the success of subsequent classification. Feature extraction is the process of deriving new features from original features to reduce the cost of feature measurement, increase classifier efficiency, and allow higher accuracy. Many feature extraction techniques involve linear transformations of the original pattern vectors to new vectors of lower dimensionality. While this is useful for data visualization and classification efficiency, it does not necessarily reduce the number of features to be measured since each new feature may be a linear combination of all of the features in the original pattern vector. Here, we present a new approach to feature extraction in which feature selection, feature extraction, and classifier training are performed simultaneously using a genetic algorithm. The genetic algorithm optimizes a feature weight vector used to scale the individual features in the original pattern vectors. A masking vector is also employed for simultaneous selection of a feature subset. We employ this technique in combination with the k nearest neighbor classification rule, and compare the results with classical feature selection and extraction techniques, including sequential floating forward feature selection, and linear discriminant analysis. We also present results for the identification of favorable water-binding sites on protein surfaces.
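The fitness the GA optimizes can be sketched as a weighted, masked leave-one-out kNN accuracy. The following is a hedged illustration of that idea (the function and all its names are assumptions, not the authors' code):

```python
def knn_fitness(weights, mask, X, y, k=1):
    """Illustrative fitness: scale each feature by its evolved weight,
    zero out masked-off features, then score leave-one-out kNN accuracy."""
    scaled = [[w * m * x for w, m, x in zip(weights, mask, row)] for row in X]
    correct = 0
    for i, query in enumerate(scaled):
        # leave-one-out: rank all other points by squared Euclidean distance
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(query, other)), y[j])
            for j, other in enumerate(scaled) if j != i
        )
        votes = [label for _, label in dists[:k]]
        prediction = max(set(votes), key=votes.count)
        correct += prediction == y[i]
    return correct / len(X)
```

A GA individual would then encode the weight and mask vectors, and selection would favor individuals whose decoded classifier scores higher.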

849 citations


Journal ArticleDOI
TL;DR: Experiments on two real-world problems demonstrate that EENCL can produce NN ensembles with good generalization ability.
Abstract: Based on negative correlation learning and evolutionary learning, this paper presents evolutionary ensembles with negative correlation learning (EENCL) to address the issues of automatic determination of the number of individual neural networks (NNs) in an ensemble and the exploitation of the interaction between individual NN design and combination. The idea of EENCL is to encourage different individual NNs in the ensemble to learn different parts or aspects of the training data so that the ensemble can learn better the entire training data. The cooperation and specialization among different individual NNs are considered during the individual NN design. This provides an opportunity for different NNs to interact with each other and to specialize. Experiments on two real-world problems demonstrate that EENCL can produce NN ensembles with good generalization ability.

453 citations


Journal ArticleDOI
TL;DR: It is shown that epistasis, as expressed by the dominance of the flow and distance matrices of a QAP instance, the landscape ruggedness in terms of the correlation length of a landscape, and the correlation between fitness and distance of local optima in the landscape together are useful for predicting the performance of memetic algorithms, i.e., evolutionary algorithms incorporating local search.
Abstract: In this paper, a fitness landscape analysis for several instances of the quadratic assignment problem (QAP) is performed, and the results are used to classify problem instances according to their hardness for local search heuristics and meta-heuristics based on local search. The local properties of the fitness landscape are studied by performing an autocorrelation analysis, while the global structure is investigated by employing a fitness distance correlation analysis. It is shown that epistasis, as expressed by the dominance of the flow and distance matrices of a QAP instance, the landscape ruggedness in terms of the correlation length of a landscape, and the correlation between fitness and distance of local optima in the landscape together are useful, to a certain extent, for predicting the performance of memetic algorithms, i.e., evolutionary algorithms incorporating local search. Thus, based on these properties, a favorable choice of recombination and/or mutation operators can be found. Experiments comparing three different evolutionary operators for a memetic algorithm are presented.
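The autocorrelation analysis mentioned above is commonly estimated from a random walk over the landscape. A hedged sketch of that estimator (names, defaults, and the walk length are illustrative assumptions):

```python
import random

def autocorrelation(fitness, neighbor, start, steps=1000, lag=1):
    """Estimate landscape autocorrelation at a given lag from a random
    walk: high values indicate a smooth landscape, low values a rugged one."""
    walk = [start]
    for _ in range(steps):
        walk.append(neighbor(walk[-1]))
    f = [fitness(x) for x in walk]
    mean = sum(f) / len(f)
    var = sum((v - mean) ** 2 for v in f) / len(f)
    cov = sum((f[i] - mean) * (f[i + lag] - mean)
              for i in range(len(f) - lag)) / (len(f) - lag)
    return cov / var if var else 0.0
```

For smooth landscapes the correlation length is often summarized as -1/ln(rho(1)), where rho(1) is the lag-1 value returned above.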

445 citations


Journal ArticleDOI
TL;DR: These algorithms represent a promising way for introducing a correct exploration/exploitation balance in order to avoid premature convergence and reach approximate final solutions in the use of genetic algorithms.
Abstract: A major problem in the use of genetic algorithms is premature convergence. One approach for dealing with this problem is the distributed genetic algorithm model. Its basic idea is to keep, in parallel, several subpopulations that are processed by genetic algorithms, with each one being independent of the others. Making distinctions between the subpopulations by applying genetic algorithms with different configurations, we obtain the so-called heterogeneous distributed genetic algorithms. These algorithms represent a promising way for introducing a correct exploration/exploitation balance in order to avoid premature convergence and reach approximate final solutions. This paper presents the gradual distributed real-coded genetic algorithms, a type of heterogeneous distributed real-coded genetic algorithms that apply a different crossover operator to each sub-population. Experimental results show that the proposals consistently outperform sequential real-coded genetic algorithms.
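One way to realize "a different crossover operator per subpopulation" for real-coded GAs is to give each island the same operator family with different settings. A hedged sketch using BLX-α crossover (the operator family and the α values are illustrative assumptions, not the paper's exact configuration):

```python
import random

def blx_alpha(p1, p2, alpha):
    """BLX-α crossover for real-coded GAs: sample each child gene uniformly
    from the parents' interval extended by a fraction α on each side."""
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        span = hi - lo
        child.append(random.uniform(lo - alpha * span, hi + alpha * span))
    return child

# Heterogeneous idea: each subpopulation gets its own crossover
# configuration, here BLX-α with a different α (illustrative values),
# grading the islands from exploitation (small α) to exploration (large α).
island_operators = [lambda a, b, al=al: blx_alpha(a, b, al)
                    for al in (0.0, 0.3, 0.5, 0.7)]
```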

291 citations


Journal ArticleDOI
TL;DR: The various issues that arise in the approach to GP-based classification, such as the creation of training sets, the role of incremental learning, and the choice of function set in the evolution of GPCE, are discussed, as well as conflict resolution for uniquely assigning a class.
Abstract: Explores the feasibility of applying genetic programming (GP) to the multicategory pattern classification problem. GP can discover relationships and express them mathematically. GP-based techniques have an advantage over statistical methods because they are distribution-free, i.e., no prior knowledge is needed about the statistical distribution of the data. GP also automatically discovers the discriminant features for a class. GP has been applied for two-category classification. A methodology for GP-based n-class classification is developed. The problem is modeled as n two-class problems, and a genetic programming classifier expression (GPCE) is evolved as a discriminant function for each class. The GPCE is trained to recognize samples belonging to its own class and reject others. A strength of association (SA) measure is computed for each GPCE to indicate the degree to which it can recognize samples of its own class. SA is used for uniquely assigning a class to an input feature vector. Heuristic rules are used to prevent a GPCE with a higher SA from swamping one with a lower SA. Experimental results are presented to demonstrate the applicability of GP for multicategory classification, and they are found to be satisfactory. We also discuss the various issues that arise in our approach to GP-based classification, such as the creation of training sets, the role of incremental learning, and the choice of function set in the evolution of GPCE, as well as conflict resolution for uniquely assigning a class.
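The basic conflict-resolution rule can be sketched in a few lines: every binary GPCE votes on its own class, and conflicts among accepting classifiers are broken by strength of association. This is a hedged simplification (it omits the paper's additional heuristic rules; all names are assumptions):

```python
def classify(gpces, sa, x):
    """Assign a class to x: among the evolved binary classifiers (GPCEs)
    that accept x, pick the class with the highest strength of association."""
    accepting = [c for c, g in enumerate(gpces) if g(x)]
    if not accepting:
        return None  # rejected by every classifier
    return max(accepting, key=lambda c: sa[c])
```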

277 citations


Journal ArticleDOI
TL;DR: Two simple ways to use a genetic algorithm (GA) to design a multiple-classifier system are suggested that can be made less prone to overtraining by including penalty terms in the fitness function accounting for the number of features used.
Abstract: We suggest two simple ways to use a genetic algorithm (GA) to design a multiple-classifier system. The first GA version selects disjoint feature subsets to be used by the individual classifiers, whereas the second version selects (possibly) overlapping feature subsets, and also the types of the individual classifiers. The two GAs have been tested with four real data sets: heart, Satimage, letters, and forensic glasses. We used three-classifier systems and basic types of individual classifiers (the linear and quadratic discriminant classifiers and the logistic classifier). The multiple-classifier systems designed with the two GAs were compared against classifiers using: all features; the best feature subset found by the sequential backward selection method; and the best feature subset found by a GA. The GA design can be made less prone to overtraining by including penalty terms in the fitness function accounting for the number of features used.

267 citations


Journal ArticleDOI
TL;DR: This paper examines recent developments in the field of evolutionary computation for manufacturing optimization with a wide range of problems, from job shop and flow shop scheduling, to process planning and assembly line balancing.
Abstract: The use of intelligent techniques in the manufacturing field has been growing over the last few decades due to the fact that most manufacturing optimization problems are combinatorial and NP-hard. This paper examines recent developments in the field of evolutionary computation for manufacturing optimization. Significant papers in various areas are highlighted, and comparisons of results are given wherever data are available. A wide range of problems is covered, from job shop and flow shop scheduling, to process planning and assembly line balancing.

264 citations


Journal ArticleDOI
TL;DR: A new extension of EP/N computes a safe-optimum path of a ship in given static and dynamic environments, and a safe trajectory of the ship in a collision situation is determined on the basis of this algorithm.
Abstract: For a given circumstance (i.e., a collision situation at sea), a decision support system for navigation should help the operator to choose a proper manoeuvre, teach him good habits, and enhance his general intuition on how to behave in similar situations in the future. By taking into account certain boundaries of the maneuvering region along with information on navigation obstacles and other moving ships, the problem of avoiding collisions is reduced to a dynamic optimization task with static and dynamic constraints. This paper presents experiments with a modified version of the Evolutionary Planner/Navigator (EP/N). Its new version, ϑEP/N++, is a major component of such a decision support system. This new extension of EP/N computes a safe-optimum path of a ship in given static and dynamic environments. A safe trajectory of the ship in a collision situation is determined on the basis of this algorithm. The introduction of a time parameter, the variable speed of the ship, and time-varying constraints representing movable ships are the main features of the new system. Sample results of ship trajectories obtained for typical navigation situations are presented.

206 citations


Journal ArticleDOI
TL;DR: A hybrid approach to fuzzy supervised learning based on a genetic-neuro learning algorithm, in which rule conclusions are derived through a least-squares solution of an over-determined system using the singular value decomposition (SVD) algorithm.
Abstract: A hybrid approach to fuzzy supervised learning is presented. It is based on a genetic-neuro learning algorithm. The mixed-genetic coding adopted involves only the premises of the fuzzy rules. The conclusions are derived through a least-squares solution of an over-determined system using the singular value decomposition (SVD) algorithm. The paper presents the results obtained with C++ software called GEFREX that implements the proposed algorithm. The main characteristic of the algorithm is the compactness of the fuzzy systems extracted. Several comparisons ranging from approximation problems, classification problems, and time series predictions show that GEFREX reaches a smaller error than found in previous works with the same or a smaller number of rules. Further, it succeeds in identifying significant features. Although the SVD is used extensively, the learning time is decidedly reduced in comparison with previous work.

178 citations


Journal ArticleDOI
TL;DR: This paper presents two new tree-generation algorithms for genetic programming and for "strongly typed" genetic programming, a common variant, that are fast, allow the user to request specific tree sizes, and guarantee probabilities of certain nodes appearing in trees.
Abstract: Genetic programming is an evolutionary optimization method that produces functional programs to solve a given task. These programs commonly take the form of trees representing LISP s-expressions, and a typical evolutionary run produces a great many of these trees. For this reason, a good tree-generation algorithm is very important to genetic programming. This paper presents two new tree-generation algorithms for genetic programming and for "strongly typed" genetic programming, a common variant. These algorithms are fast, allow the user to request specific tree sizes, and guarantee probabilities of certain nodes appearing in trees. The paper analyzes these two algorithms, and compares them with traditional and recently proposed approaches.
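A tree generator that can honor a requested size works by splitting the remaining node budget among a chosen function's children. The sketch below illustrates that principle only; it is not the paper's PTC algorithms, and the symbols are assumptions:

```python
import random

def grow_tree(size, functions, terminals):
    """Build a random expression tree with exactly `size` nodes.

    functions: dict mapping function name -> arity
    terminals: list of terminal symbols
    """
    if size == 1:
        return random.choice(terminals)
    # choose a function whose arity fits in the remaining budget
    viable = [(f, a) for f, a in functions.items() if a <= size - 1]
    f, arity = random.choice(viable)
    # split the remaining size-1 nodes among the children (each gets >= 1)
    budgets = [1] * arity
    for _ in range(size - 1 - arity):
        budgets[random.randrange(arity)] += 1
    return (f, *[grow_tree(b, functions, terminals) for b in budgets])

def tree_size(t):
    """Count nodes: a terminal is a leaf, a tuple is (function, *children)."""
    return 1 if not isinstance(t, tuple) else 1 + sum(tree_size(c) for c in t[1:])
```

For strongly typed GP, the `viable` filter would additionally check return and argument types, which is where the paper's algorithms add their guarantees.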

Journal ArticleDOI
TL;DR: This paper generalizes the reference classes of fitness distance correlation and epistasis variance, and constructs a new predictive measure that is insensitive to nonlinear fitness scaling, and investigates the relations between the reference Classes of the measures and a number of intuitively easy classes.
Abstract: This paper studies a number of predictive measures of problem difficulty, among which epistasis variance and fitness distance correlation are the most widely known. Our approach is based on comparing the reference class of a measure to a number of known easy function classes. First, we generalize the reference classes of fitness distance correlation and epistasis variance, and construct a new predictive measure that is insensitive to nonlinear fitness scaling. We then investigate the relations between the reference classes of the measures and a number of intuitively easy classes. We also point out the need to further identify which functions are easy for a given class of evolutionary algorithms in order to design more efficient hardness indicators for them. We finally restrict attention to the genetic algorithm (GA), and consider both GA-easy and GA-hard fitness functions, and give experimental evidence that the values of the measures, based on random samples, can be completely unreliable and entirely uncorrelated to the convergence quality and convergence speed of GA instances using either proportional or ranking selection.
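Fitness distance correlation, one of the two measures named above, is just the sample correlation between fitness values and distances to the nearest optimum. A minimal sketch (the function name is an assumption):

```python
def fitness_distance_correlation(fitnesses, distances):
    """Sample correlation between fitness and distance-to-optimum.
    Values near -1 (for maximization) suggest an easy, well-guided landscape."""
    n = len(fitnesses)
    mf = sum(fitnesses) / n
    md = sum(distances) / n
    cov = sum((f - mf) * (d - md) for f, d in zip(fitnesses, distances)) / n
    sf = (sum((f - mf) ** 2 for f in fitnesses) / n) ** 0.5
    sd = (sum((d - md) ** 2 for d in distances) / n) ** 0.5
    return cov / (sf * sd)
```

The paper's warning applies directly to such estimates: computed from random samples, the value can be uncorrelated with actual GA behavior.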

Journal ArticleDOI
TL;DR: The detailed analysis of the resulting Pareto front suggests a renewed interest in the arrow wing planform for the supersonic wing.
Abstract: This paper discusses the design optimization of a wing for supersonic transport (SST) using a multiple-objective genetic algorithm (MOGA). Three objective functions are used to minimize the drag for supersonic cruise, the drag for transonic cruise, and the bending moment at the wing root for supersonic cruise. The wing shape is defined by 66 design variables. A Euler flow code is used to evaluate supersonic performance, and a potential flow code is used to evaluate transonic performance. To reduce the total computational time, flow calculations are parallelized on an NEC SX-4 computer using 32 processing elements. The detailed analysis of the resulting Pareto front suggests a renewed interest in the arrow wing planform for the supersonic wing.

Journal ArticleDOI
Min-Jea Tahk1, Byung-Chan Sun
TL;DR: A coevolutionary method developed for solving constrained optimization problems based on the evolution of two populations with opposite objectives to solve saddle-point problems that provides consistent solutions with better numerical accuracy than other evolutionary methods.
Abstract: This paper introduces a coevolutionary method developed for solving constrained optimization problems. This algorithm is based on the evolution of two populations with opposite objectives to solve saddle-point problems. The augmented Lagrangian approach is taken to transform a constrained optimization problem to a zero-sum game with the saddle point solution. The populations of the parameter vector and the multiplier vector approximate the zero-sum game by a static matrix game, in which the fitness of individuals is determined according to the security strategy of each population group. Selection, recombination, and mutation are done by using the evolutionary mechanism of conventional evolutionary algorithms such as evolution strategies, evolutionary programming, and genetic algorithms. Four benchmark problems are solved to demonstrate that the proposed coevolutionary method provides consistent solutions with better numerical accuracy than other evolutionary methods.
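The security-strategy fitness is the heart of the method: each parameter vector is scored by its worst case over the multiplier population, and vice versa. Below is a hedged toy sketch; it uses a plain Lagrangian rather than the paper's augmented Lagrangian, and the selection loop is an illustrative (μ+λ)-style simplification that may cycle on hard problems:

```python
import random

def security_fitness(xs, lams, L):
    """Score each x by its worst case over the multipliers (x minimizes L),
    and each multiplier by its worst case over the parameters (λ maximizes L)."""
    fx = [max(L(x, l) for l in lams) for x in xs]
    fl = [min(L(x, l) for x in xs) for l in lams]
    return fx, fl

def coevolve(L, steps=200, sigma=0.3):
    """Toy 1-D coevolution toward a saddle point of L(x, λ)."""
    xs = [random.uniform(0, 3) for _ in range(10)]
    lams = [random.uniform(0, 4) for _ in range(10)]
    for _ in range(steps):
        xs += [x + random.gauss(0, sigma) for x in xs]
        lams += [max(0.0, l + random.gauss(0, sigma)) for l in lams]
        fx, fl = security_fitness(xs, lams, L)
        xs = [x for _, x in sorted(zip(fx, xs))[:10]]                    # keep lowest worst case
        lams = [l for _, l in sorted(zip(fl, lams), reverse=True)[:10]]  # keep highest worst case
    return xs[0], lams[0]
```

For example, minimizing x² subject to x ≥ 1 becomes the zero-sum game L(x, λ) = x² + λ(1 - x), whose saddle point is at x = 1, λ = 2.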

Journal ArticleDOI
TL;DR: This paper introduces a novel tree construction algorithm called the randomized primal method (RPM) which builds degree-constrained trees of low cost from solution vectors taken as input, and provides strong evidence that the genetic algorithm employing RPM finds significantly lower-cost solutions to random graph d-MST problems than rival methods.
Abstract: Finding the degree-constrained minimum spanning tree (d-MST) of a graph is a well-studied NP-hard problem of importance in communications network design and other network-related problems. In this paper we describe some previously proposed algorithms for solving the problem, and then introduce a novel tree construction algorithm called the randomized primal method (RPM) which builds degree-constrained trees of low cost from solution vectors taken as input. RPM is applied in three stochastic iterative search methods: simulated annealing, multistart hillclimbing, and a genetic algorithm. While other researchers have mainly concentrated on finding spanning trees in Euclidean graphs, we consider the more general case of random graph problems. We describe two random graph generators which produce particularly challenging d-MST problems. On these and other problems we find that the genetic algorithm employing RPM outperforms simulated annealing and multistart hillclimbing. Our experimental results provide strong evidence that the genetic algorithm employing RPM finds significantly lower-cost solutions to random graph d-MST problems than rival methods.
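The flavor of building a degree-constrained tree from a solution vector can be sketched with a simplified Prim-style construction, where an insertion order plays the role of the vector the metaheuristics evolve. This is a hedged illustration, not the authors' RPM:

```python
def degree_constrained_tree(n, cost, d, order=None):
    """Grow a spanning tree by connecting each vertex (in `order`) to the
    cheapest already-inserted vertex that still has spare degree (< d).

    n:     number of vertices (vertex 0 is the root)
    cost:  2-D table of edge costs
    d:     maximum degree per vertex
    order: permutation of vertices 1..n-1 (the "solution vector" here)
    """
    order = order or list(range(1, n))
    in_tree = [0]
    degree = [0] * n
    edges = []
    for v in order:
        candidates = [u for u in in_tree if degree[u] < d]
        u = min(candidates, key=lambda u: cost[u][v])
        edges.append((u, v))
        degree[u] += 1
        degree[v] += 1
        in_tree.append(v)
    return edges
```

In this simplified view, the randomization of RPM would live in how the search method (SA, hillclimbing, or a GA) generates and perturbs the order.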

Journal ArticleDOI
TL;DR: This paper proposes a test-case generator for constrained parameter optimization techniques, capable of creating various test problems with different characteristics, and is very useful for analyzing and comparing different constraint-handling techniques.
Abstract: The experimental results reported in many papers suggest that making an appropriate a priori choice of an evolutionary method for a nonlinear parameter optimization problem remains an open question. It seems that the most promising approach at this stage of research is experimental, involving the design of a scalable test suite of constrained optimization problems, in which many features could be tuned easily. It would then be possible to evaluate the merits and drawbacks of the available methods, as well as to test new methods efficiently. In this paper, we propose such a test-case generator for constrained parameter optimization techniques. This generator is capable of creating various test problems with different characteristics including: 1) problems with different relative sizes of the feasible region in the search space; 2) problems with different numbers and types of constraints; 3) problems with convex or nonconvex evaluation functions, possibly with multiple optima; and 4) problems with highly nonconvex constraints consisting of (possibly) disjoint regions. Such a test-case generator is very useful for analyzing and comparing different constraint-handling techniques.

Journal ArticleDOI
TL;DR: This paper presents models that predict the effects of the parallel GA parameters on its search quality by finding the probability that each population converges to the correct solution after each restart, and also calculate the long-run chance of success.
Abstract: Implementations of parallel genetic algorithms (GA) with multiple populations are common, but they introduce several parameters whose effect on the quality of the search is not well understood. Parameters such as the number of populations, their size, the topology of communications, and the migration rate have to be set carefully to reach adequate solutions. This paper presents models that predict the effects of the parallel GA parameters on its search quality. The paper reviews some recent results on the case where each population is connected to all the others and the migration rate is set to the maximum value possible. This bounding case is the simplest to analyze, and it introduces the methodology that is used in the remainder of the paper to analyze parallel GA with arbitrary migration rates and communication topologies. This investigation considers that migration occurs only after each population converges; then, incoming individuals are incorporated into the populations and the algorithm restarts. The models find the probability that each population converges to the correct solution after each restart, and also calculate the long-run chance of success. The accuracy of the models is verified with experiments using one additively decomposable function.
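The paper analyzes the parameters of such multi-population GAs rather than prescribing code, but a toy island model makes clear where each parameter enters. Everything below (operators, defaults, ring topology) is an illustrative assumption:

```python
import random

def island_ga(fitness, genome_len, islands=4, pop=20, migration_rate=2,
              epochs=30, topology=None):
    """Toy multi-population (island) GA: independent populations evolve,
    then exchange their best individuals along a communication topology."""
    if topology is None:  # ring topology: island i sends to island i+1
        topology = {i: [(i + 1) % islands] for i in range(islands)}
    pops = [[[random.randint(0, 1) for _ in range(genome_len)]
             for _ in range(pop)] for _ in range(islands)]
    for _ in range(epochs):
        for p in pops:  # one generation: truncation selection + bit-flip mutation
            p.sort(key=fitness, reverse=True)
            p[pop // 2:] = [[g ^ (random.random() < 1 / genome_len) for g in ind]
                            for ind in p[:pop // 2]]
        for src, dests in topology.items():  # migrate the best individuals
            for dst in dests:
                migrants = sorted(pops[src], key=fitness, reverse=True)[:migration_rate]
                pops[dst].sort(key=fitness)
                pops[dst][:migration_rate] = [m[:] for m in migrants]
    return max((max(p, key=fitness) for p in pops), key=fitness)
```

The quantities the models predict, i.e., the per-restart success probability of each population and the long-run chance of success, are functions of exactly these knobs: `islands`, `pop`, `topology`, and `migration_rate`.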

Journal ArticleDOI
TL;DR: It is shown that the decision problem corresponding to optimizing random-model N-K fitness functions is NP-complete for K>1 and polynomial for K=1; if the restriction that the ith component function depends on the ith bit is removed, the problem is NP-complete even for K=1.
Abstract: N-K fitness landscapes have been used widely as examples and test functions in the field of evolutionary computation. We investigate the computational complexity of the problem of optimizing the N-K fitness functions and related fitness functions. We give an algorithm to optimize adjacent-model N-K fitness functions, which is polynomial in N. We show that the decision problem corresponding to optimizing random-model N-K fitness functions is NP-complete for K>1, and is polynomial for K=1. If the restriction that the ith component function depends on the ith bit is removed, then the problem is NP-complete, even for K=1. We also give a polynomial-time approximation algorithm for the arbitrary-model N-K optimization problem.
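Evaluating an N-K fitness function is straightforward once the component tables and interaction sets are fixed; the complexity results above concern optimizing over all bit strings. A hedged sketch of the evaluation (representation details are assumptions):

```python
def nk_fitness(bits, component_tables, neighbors):
    """N-K fitness: the average of N component functions, where the i-th
    component depends on bit i and the K bits listed in neighbors[i].

    component_tables[i]: dict mapping a tuple of relevant bit values
                         to a fitness contribution
    """
    total = 0.0
    for i, table in enumerate(component_tables):
        key = (bits[i],) + tuple(bits[j] for j in neighbors[i])
        total += table[key]
    return total / len(component_tables)
```

In the adjacent model, `neighbors[i]` holds the K bits following bit i (which enables the paper's polynomial algorithm); in the random model they are chosen arbitrarily, which is where NP-completeness sets in for K>1.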

Journal ArticleDOI
TL;DR: The experimental results, using facial image data, show the feasibility of the adaptive eye location approach, and suggest a novel approach for the adaptive development of task-driven active perception and navigational mechanisms.
Abstract: Eye location is used as a test bed for developing navigation routines implemented as visual routines within the framework of adaptive behavior-based AI. The adaptive eye location approach seeks first where salient objects are, and then what their identity is. Specifically, eye location involves: 1) the derivation of the saliency attention map, and 2) the possible classification of salient locations as eye regions. The saliency ("where") map is derived using a consensus between navigation routines encoded as finite-state automata exploring the facial landscape and evolved using genetic algorithms (GAs). The classification ("what") stage is concerned with the optimal selection of features, and the derivation of decision trees, using GAs, to possibly classify salient locations as eyes. The experimental results, using facial image data, show the feasibility of our method, and suggest a novel approach for the adaptive development of task-driven active perception and navigational mechanisms.

Journal ArticleDOI
TL;DR: A genetic algorithm (GA) approach is proposed for the solution of the EIT inverse problem, in particular for the reconstruction of "static" images, and results of numerical experiments of EIT solved by the GA approach are presented.
Abstract: Electrical impedance tomography (EIT) determines the resistivity distribution inside an inhomogeneous object by means of voltage and/or current measurements conducted at the object boundary. A genetic algorithm (GA) approach is proposed for the solution of the EIT inverse problem, in particular for the reconstruction of "static" images. Results of numerical experiments of EIT solved by the GA approach (GA-EIT in the following) are presented and compared to those obtained by other more-established inversion methods, such as the modified Newton-Raphson and the double-constraint method. The GA approach is relatively expensive in terms of computing time and resources, and at present this limits the applicability of GA-EIT to the field of static imaging. However, the continuous and rapid growth of computing resources makes the development of real-time dynamic imaging applications based on GAs conceivable in the near future.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the reasons for the observed improvements of cellular automata and found that much of the improvement seen was due to their resource sharing technique rather than to coevolution.
Abstract: Coevolution, between a population of candidate solutions and a population of test cases, has received increasing attention as a promising biologically inspired method for improving the performance of evolutionary computation techniques. However, the results of studies of coevolution have been mixed. One of the seemingly more impressive results to date was the improvement via coevolution demonstrated by Juille and Pollack (1998) on evolving cellular automata to perform a classification task. Their study, however, like most other studies on coevolution, did not investigate the mechanisms giving rise to the observed improvements. In this paper, we probe more deeply into the reasons for these observed improvements and present empirical evidence that, in contrast to what was claimed by Juille and Pollack, much of the improvement seen was due to their "resource sharing" technique rather than to coevolution. We also present empirical evidence that resource sharing works, at least in part, by preserving diversity in the population.
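The "resource sharing" mechanism the paper isolates is easy to state: each test case pays out a fixed reward that is split among all individuals that solve it, so solving rarely-solved cases is worth more. A hedged sketch of that scoring rule (names are assumptions):

```python
def shared_fitness(solves):
    """Resource-sharing fitness: each test case's unit reward is divided
    among all individuals that solve it, rewarding rare competence and
    thereby preserving population diversity.

    solves[i][t] is True if individual i solves test case t.
    """
    n_cases = len(solves[0])
    fitness = []
    for row in solves:
        f = 0.0
        for t in range(n_cases):
            solvers = sum(r[t] for r in solves)
            if row[t] and solvers:
                f += 1.0 / solvers
        fitness.append(f)
    return fitness
```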

Journal ArticleDOI
TL;DR: The paper proposes a related local task for the local search to learn, and finds that this approach is able to reduce the training time considerably, and aims at investigating the interaction between local search and evolutionary search when they are combined.
Abstract: Training neural networks by evolutionary search can require a long computation time. In certain situations, using Lamarckian evolution, local search and evolutionary search can complement each other to yield a better training algorithm. This paper demonstrates the potential of this evolutionary-learning synergy by applying it to train recurrent neural networks in an attempt to resolve a long-term dependency problem and the inverted pendulum problem. This work also aims at investigating the interaction between local search and evolutionary search when they are combined; it is found that the combinations are particularly efficient when the local search is simple. In the case where no teacher signal is available for the local search to learn the desired task directly, the paper proposes a related local task for the local search to learn, and finds that this approach is able to reduce the training time considerably.

Journal ArticleDOI
TL;DR: This paper presents a hybrid genetic algorithm (GA) with an adaptive application of genetic operators for solving the 3-matching problem (3MP), an NP-complete graph problem.
Abstract: This paper presents a hybrid genetic algorithm (GA) with an adaptive application of genetic operators for solving the 3-matching problem (3MP), an NP-complete graph problem. In the 3MP, we search for a partition of a point set into triplets of minimal total cost, where the cost of a triplet is the Euclidean length of the minimal spanning tree of the three points. The problem is a special case of grouping and facility location problems. One common problem with GA applied to hard combinatorial optimization, like the 3MP, is to incorporate problem-dependent local search operators into the GA efficiently in order to find high-quality solutions. Small instances of the problem can be solved exactly, but for large problems, we use local optimization. We introduce several general heuristic crossover and local hill-climbing operators, and apply adaptation to choose among them. Our GA combines these operators to form an effective problem solver. It is hybridized as it incorporates local search heuristics, and it is adaptive as the individual recombination/improvement operators are fired according to their online performance. Test results show that this approach gives approximately the same or even slightly better results than our previous, fine tuned GA without adaptation. It is better than a grouping GA for the partitioning considered. The adaptive combination of operators eliminates a large set of parameters, making the method more robust, and it presents a convenient way to build a hybrid problem solver.

Journal ArticleDOI
TL;DR: This paper describes a novel application of evolutionary computation techniques to equation solving and shows that the proposed hybrid algorithms outperform the classical numerical methods significantly in terms of effectiveness and efficiency.
Abstract: Evolutionary computation techniques have mostly been used to solve various optimization and learning problems. This paper describes a novel application of evolutionary computation techniques to equation solving. Several combinations of evolutionary computation techniques and classical numerical methods are proposed to solve linear and partial differential equations. The hybrid algorithms have been compared with the well-known classical numerical methods. The experimental results show that the proposed hybrid algorithms outperform the classical numerical methods significantly in terms of effectiveness and efficiency.

Journal ArticleDOI
TL;DR: An interval arithmetic-based model is designed that extends a hybrid technique, the GA-P method, that combines genetic algorithms and genetic programming and is useful for generating a confidence interval for the output of a model and for obtaining a robust point estimate from data which the authors know to contain outliers.
Abstract: When genetic programming (GP) methods are applied to solve symbolic regression problems, we obtain a point estimate of a variable, but it is not easy to calculate an associated confidence interval. We designed an interval arithmetic-based model that solves this problem. Our model extends a hybrid technique, the GA-P method, that combines genetic algorithms and genetic programming. Models based on interval GA-P can devise an interval model from examples and provide the algebraic expression that best approximates the data. The method is useful for generating a confidence interval for the output of a model, and also for obtaining a robust point estimate from data which we know to contain outliers. The algorithm was applied to a real problem related to electrical energy distribution. Classical methods were applied first, and then the interval GA-P. The results of both studies are used to compare interval GA-P with GP, GA-P, classical regression methods, neural networks, and fuzzy models.

Journal ArticleDOI
TL;DR: A direct consequence of this exceptional set of events is that performing independent (1+1)-ES processes proves to be more advantageous than any population-based (/spl mu/+/spl lambda/)-ES.
Abstract: The behavior of a (1+1)-ES process on Rudolph's binary long k paths is investigated extensively in the asymptotic framework with respect to string length l. First, the case of k=l/sup /spl alpha// is addressed. For /spl alpha//spl ges/1/2, we prove that the long k path is a long path for the (1+1)-ES in the sense that the process follows the entire path with no shortcuts, resulting in an exponential expected convergence time. For /spl alpha/<1/2, the expected convergence time is also exponential, but shortcuts occur along the way that speed up the process. Next, in the case of constant k, the statistical distribution of convergence time is calculated, and the influence of population size is investigated for different (/spl mu/+/spl lambda/)-ES. The histogram of the first hitting time of the solution shows an anomalous peak close to zero, which corresponds to an exceptional set of events that speed up the expected convergence time by a factor of l/sup 2/. A direct consequence of this exceptional set is that performing independent (1+1)-ES processes proves to be more advantageous than any population-based (/spl mu/+/spl lambda/)-ES.
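Rudolph's long-path fitness is intricate, but the (1+1)-ES process the analysis concerns is simple: flip each bit with probability 1/l and accept the offspring iff it is no worse. A minimal sketch, with OneMax standing in for the long-path fitness:

```python
import random

def one_plus_one_es(fitness, l, target, max_iters=100_000, seed=1):
    """Minimal (1+1)-ES on bitstrings: per-bit flip probability 1/l,
    elitist acceptance; returns the parent and the first hitting time."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(l)]
    for t in range(max_iters):
        if fitness(parent) == target:       # first hitting time reached
            return parent, t
        child = [b ^ (rng.random() < 1.0 / l) for b in parent]
        if fitness(child) >= fitness(parent):
            parent = child
    return parent, max_iters

# OneMax stands in here for the (far more intricate) long-path fitness.
best, hit_time = one_plus_one_es(sum, 30, target=30)
```

The paper's point is about the *distribution* of `hit_time` on long paths; running many independent copies of this loop and histogramming the hitting times is exactly the experiment behind the anomalous peak described above.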

Journal ArticleDOI
TL;DR: This letter presents a new mutation rule that has the same form as the well-known backpropagation learning rule for neural networks that assigns the best individual's fitness as the temporary target at each generation.
Abstract: Evolutionary programming is mainly characterized by two factors: the selection strategy and the mutation rule. This letter presents a new mutation rule that has the same form as the well-known backpropagation learning rule for neural networks. The proposed mutation rule assigns the best individual's fitness as the temporary target at each generation. The temporal error, the distance between the target and an individual at hand, is used to improve the exploration of the search space by guiding the direction of evolution. The momentum, i.e., the accumulated evolution information for the individual, speeds up convergence. The efficiency and robustness of the proposed algorithm are assessed on several benchmark test functions.
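The letter's exact update rule is not reproduced here; one hedged reading of a backpropagation-like mutation, in which the temporal error scales a random search direction and a momentum term accumulates past moves, might look like this (all names and constants are assumptions for the sketch):

```python
import random

def bp_style_mutate(x, fx, f_best, velocity, lr=0.1, momentum=0.9, rng=None):
    """Hypothetical sketch: the temporal error (gap between this
    individual's fitness and the generation's best) scales a random
    direction; momentum accumulates the individual's past moves."""
    rng = rng or random.Random(0)
    error = f_best - fx                 # temporal error (maximization)
    new_v = [momentum * v + lr * error * rng.gauss(0.0, 1.0)
             for v in velocity]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

Note the analogy to the backpropagation delta rule: a learning-rate-scaled error term plus a momentum-weighted previous step. The current best individual has zero temporal error, so it coasts on momentum alone.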

Journal ArticleDOI
TL;DR: This paper covers the use of three different genetic algorithms applied sequentially to radar cross-section data to generate point-scatterer models, providing automatic conversion of measured 2D/3D data of low, medium, or high resolution into scatterer models.
Abstract: This paper covers the use of three different genetic algorithms applied sequentially to radar cross-section data to generate point-scatterer models. The aim is to provide automatic conversion of measured 2D/3D data of low, medium, or high resolution into scatterer models. The resulting models are intended for use in a missile-target engagement simulator. The first genetic algorithm uses multiple species to locate the scattering centers. The second and third algorithms are for model fine tuning and optimization, respectively. Both of these algorithms use nondominated ranking to generate Pareto-optimal sets of results. The ability to choose results from the Pareto sets allows the designer some flexibility in the creation of the model. A method for constructing compound models to produce full 4 /spl pi/ sr coverage is detailed. Example results from the model generation process are presented.
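Nondominated ranking, used by the second and third algorithms, can be sketched as a naive O(n^2) pass that keeps every point not dominated by another (assuming minimization of all objectives; the function name is an assumption for the sketch):

```python
def pareto_front(points):
    """Return indices of nondominated points, minimizing all objectives.
    q dominates p iff q is no worse in every objective and strictly
    better in at least one."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and
            any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front
```

Ranking the whole population this way (rank 0 = this front, rank 1 = the front of the remainder, and so on) is what lets the designer pick among several equally valid model trade-offs.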

Journal ArticleDOI
TL;DR: This work employs evolutionary programming (EP) to solve this adaptive regularization problem by generating a population of potential regularization strategies, and allowing them to compete under a new error measure which characterizes a large class of images in terms of their local correlational properties.
Abstract: Image restoration is a difficult problem due to the ill-conditioned nature of the associated inverse filtering operation, which requires regularization techniques. The choice of the corresponding regularization parameter is thus an important issue since an incorrect choice would either lead to noisy appearances in the smooth regions or excessive blurring of the textured regions. In addition, this choice has to be made adaptively across different image regions to ensure the best subjective quality for the restored image. We employ evolutionary programming (EP) to solve this adaptive regularization problem by generating a population of potential regularization strategies, and allowing them to compete under a new error measure which characterizes a large class of images in terms of their local correlational properties. The nonavailability of explicit gradient information for this measure motivates the adoption of EP techniques for its optimization, which allows efficient search at multiple error surface points. The adoption of EP also allows the broadening of the range of possible cost functions for image processing so that we can choose the most relevant function rather than the most tractable one for a particular image processing application.

Journal ArticleDOI
TL;DR: A novel approach for the integration of evolution programs and constraint-solving techniques over finite domains is presented, and a global parallelization approach is adopted that preserves the properties, behavior, and fundamentals of the sequential algorithm.
Abstract: A novel approach for the integration of evolution programs and constraint-solving techniques over finite domains is presented. This integration provides a problem-independent optimization strategy for large-scale constrained optimization problems over finite domains. In this approach, genetic operators are based on an arc-consistency algorithm, and chromosomes are arc-consistent portions of the search space of the problem. The paper describes the main issues arising in this integration: chromosome representation and evaluation, selection and replacement strategies, and the design of genetic operators. We also present a parallel execution model of this integration for a distributed-memory architecture. We have adopted a global parallelization approach that preserves the properties, behavior, and fundamentals of the sequential algorithm. Linear speedup is achieved because the genetic operators are coarse grained: each performs a search in a discrete space while enforcing arc consistency. The implementation has been tested on a CRAY T3E multiprocessor using a complex constrained optimization problem.
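The abstract does not name the arc-consistency algorithm the genetic operators are based on; a standard choice for binary constraints is AC-3, sketched here (the data layout, with a predicate per ordered variable pair, is an assumption for the sketch):

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3: prune values with no support on some binary constraint.
    `domains` maps a variable to a set of values (pruned in place);
    `constraints` maps an ordered pair (x, y) to a predicate on (vx, vy)."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # A value of x is kept only if some value of y supports it.
        revised = {vx for vx in domains[x]
                   if not any(pred(vx, vy) for vy in domains[y])}
        if revised:
            domains[x] -= revised
            if not domains[x]:
                return False            # domain wipe-out: inconsistent
            # Re-examine arcs pointing at the revised variable.
            queue.extend((z, w) for (z, w) in constraints if w == x)
    return True
```

A chromosome in the paper's sense would then be an arc-consistent set of such pruned domains, so every genetic operator leaves offspring inside the consistent portion of the search space.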