
Showing papers presented at the "Congress on Evolutionary Computation" in 2002


Proceedings ArticleDOI
12 May 2002
TL;DR: This paper proposes an extension of the "particle swarm optimization" (PSO) heuristic to multiobjective optimization problems; the approach maintains previously found nondominated vectors in a global repository that other particles later use to guide their own flight.
Abstract: This paper introduces a proposal to extend the heuristic called "particle swarm optimization" (PSO) to deal with multiobjective optimization problems. Our approach uses the concept of Pareto dominance to determine the flight direction of a particle and it maintains previously found nondominated vectors in a global repository that is later used by other particles to guide their own flight. The approach is validated using several standard test functions from the specialized literature. Our results indicate that our approach is highly competitive with current evolutionary multiobjective optimization techniques.

1,842 citations
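The abstract describes the mechanism only in prose; the minimal sketch below (not the authors' implementation, with illustrative names and parameter values) shows how a PSO velocity update can take its social guide from an external repository of nondominated vectors.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_repository(repo, position, objectives):
    """Keep the repository as a set of mutually nondominated (position, objectives) pairs."""
    if any(dominates(o, objectives) for _, o in repo):
        return repo                                   # candidate is dominated: discard it
    repo = [(p, o) for p, o in repo if not dominates(objectives, o)]
    repo.append((position, objectives))
    return repo

def mopso_velocity_update(swarm, repo, w=0.4, c1=1.0, c2=1.0):
    """One PSO step in which the social guide is drawn from the global repository."""
    for p in swarm:
        leader, _ = random.choice(repo)               # nondominated vector guides the flight
        for d in range(len(p["x"])):
            r1, r2 = random.random(), random.random()
            p["v"][d] = (w * p["v"][d]
                         + c1 * r1 * (p["pbest"][d] - p["x"][d])
                         + c2 * r2 * (leader[d] - p["x"][d]))
            p["x"][d] += p["v"][d]
```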


Proceedings ArticleDOI
12 May 2002
TL;DR: The effects of various population topologies on the particle swarm algorithm were systematically investigated and it was discovered that previous assumptions may not have been correct.
Abstract: The effects of various population topologies on the particle swarm algorithm were systematically investigated. Random graphs were generated to specifications, and their performance on several criteria was compared. What makes a good population structure? We discovered that previous assumptions may not have been correct.

1,589 citations


Proceedings ArticleDOI
12 May 2002
TL;DR: Three different approaches are suggested for systematically designing test problems with which multi-objective evolutionary algorithms (MOEAs) can demonstrate their efficacy in handling problems having more than two objectives.
Abstract: After adequately demonstrating the ability to solve different two-objective optimization problems, multi-objective evolutionary algorithms (MOEAs) must show their efficacy in handling problems having more than two objectives. In this paper, we suggest three different approaches for systematically designing test problems for this purpose. The simplicity of construction, scalability to any number of decision variables and objectives, knowledge of exact shape and location of the resulting Pareto-optimal front, and ability to control difficulties in both converging to the true Pareto-optimal front and maintaining a widely distributed set of solutions are the main features of the suggested test problems. Because of these features, they should be useful in various research activities on MOEAs, such as testing the performance of a new MOEA, comparing different MOEAs, and having a better understanding of the working principles of MOEAs.

1,392 citations


Proceedings ArticleDOI
12 May 2002
TL;DR: This paper presents a particle swarm optimization algorithm for multiobjective optimization problems, in which PSO is modified with a dynamic neighborhood strategy, new particle memory updating, and one-dimension optimization to deal with multiple objectives.
Abstract: This paper presents a particle swarm optimization (PSO) algorithm for multiobjective optimization problems. PSO is modified by using a dynamic neighborhood strategy, new particle memory updating, and one-dimension optimization to deal with multiple objectives. Several benchmark cases were tested and showed that PSO could efficiently find multiple Pareto optimal solutions.

671 citations


Proceedings ArticleDOI
12 May 2002
TL;DR: The main features of the adapted immune network model include automatic determination of the population size, combination of local and global search, a defined convergence criterion, and the capability of locating and maintaining stable local optima.
Abstract: This paper presents the adaptation of an immune network model, originally proposed to perform information compression and data clustering, to solve multimodal function optimization problems. The algorithm is described theoretically and empirically compared with similar approaches from the literature. The main features of the algorithm include: automatic determination of the population size, combination of local with global search (exploitation plus exploration of the fitness landscape), defined convergence criterion, and capability of locating and maintaining stable local optima solutions.

650 citations


Proceedings ArticleDOI
12 May 2002
TL;DR: Several metrics for nondominated sets are compared using the framework of 'outperformance relations' (Hansen and Jaszkiewicz, 1998), which makes it possible to criticize and contrast a variety of published metrics, leading to some recommendations on which seem most useful in practice.
Abstract: Evolutionary multiobjective optimization (EMO) boasts a proliferation of algorithms and benchmark problems. We need principled ways to compare the performance of different EMO algorithms, but this is complicated by the fact that the result of an EMO run is not a single scalar value, but a collection of vectors forming a nondominated set. Various metrics for nondominated sets have been suggested. We compare several, using the framework of 'outperformance relations' (Hansen and Jaszkiewicz, 1998). This enables us to criticize and contrast a variety of published metrics, leading to some recommendations on which seem most useful in practice.

547 citations


Proceedings ArticleDOI
12 May 2002
TL;DR: The emphasis of this paper is to analyze the dynamics and behavior of SPDE, a new version of PDE with self-adaptive crossover and mutation that is very competitive with other EMO algorithms.
Abstract: The Pareto differential evolution (PDE) algorithm was introduced and showed competitive results. The behavior of PDE, as in many other evolutionary multiobjective optimization (EMO) methods, varies according to the crossover and mutation rates. In this paper, we present a new version of PDE with self-adaptive crossover and mutation. We call the new version self-adaptive Pareto differential evolution (SPDE). The emphasis of this paper is to analyze the dynamics and behavior of SPDE. The experiments also show that the algorithm is very competitive with other EMO algorithms.

419 citations


Proceedings ArticleDOI
12 May 2002
TL;DR: An adaptive PSO is introduced that automatically tracks various changes in a dynamic system; re-randomization is used to respond to the dynamic changes.
Abstract: This paper introduces an adaptive PSO, which automatically tracks various changes in a dynamic system. Different environment detection and response techniques are tested on the parabolic and Rosenbrock benchmark functions, and re-randomization is introduced to respond to the dynamic changes. Performance on the benchmark functions with various severities is analyzed.

353 citations
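The abstract does not detail the detection and response mechanisms; one common scheme consistent with its description (a sketch under that assumption, not necessarily the variant tested in the paper) is to re-evaluate the remembered best solution each iteration and, if its fitness has changed, re-randomize part of the swarm.

```python
import random

def environment_changed(f, best_x, best_f, tol=1e-12):
    """Re-evaluate the remembered best; a different value signals that the landscape moved."""
    return abs(f(best_x) - best_f) > tol

def rerandomize(swarm, bounds, fraction=0.5):
    """Respond to a detected change: re-randomize part of the swarm and reset its memory."""
    for p in random.sample(swarm, int(len(swarm) * fraction)):
        p["x"] = [random.uniform(lo, hi) for lo, hi in bounds]
        p["v"] = [0.0] * len(bounds)
        p["pbest"] = list(p["x"])
        p["pbest_f"] = float("inf")     # forces the memory to be refreshed on next evaluation
```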


Proceedings ArticleDOI
12 May 2002
TL;DR: Three variants of PSO are compared with the widely used branch and bound technique on several integer programming test problems; results indicate that PSO handles such problems efficiently and in most cases outperforms the branch and bound technique.
Abstract: The investigation of the performance of the particle swarm optimization (PSO) method on integer programming problems is the main theme of the present paper. Three variants of PSO are compared with the widely used branch and bound technique on several integer programming test problems. Results indicate that PSO handles such problems efficiently and in most cases outperforms the branch and bound technique.

307 citations
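The abstract does not state how real-valued particles are mapped to integer candidate solutions; a straightforward choice (assumed here purely for illustration) is to round positions to the nearest integers before evaluating the objective.

```python
def integer_fitness(f):
    """Wrap a continuous objective so particles are evaluated on rounded (integer) positions."""
    def wrapped(x):
        return f([int(round(xi)) for xi in x])
    return wrapped

# Usage: hand integer_fitness(objective) to an ordinary real-valued PSO; the swarm
# moves in continuous space while fitness is always computed on integer candidates.
```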


Proceedings ArticleDOI
12 May 2002
TL;DR: This paper introduces spatial extension to particles in the PSO model in order to overcome premature convergence in iterative optimisation and shows that the SEPSO indeed managed to keep diversity in the search space and yielded superior results.
Abstract: In this paper, we introduce spatial extension to particles in the PSO model in order to overcome premature convergence in iterative optimisation. The standard PSO and the new model (SEPSO) are compared w.r.t. performance on well-studied benchmark problems. We show that the SEPSO indeed managed to keep diversity in the search space and yielded superior results.

280 citations
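A minimal sketch of the spatial-extension idea, assuming each particle carries a fixed radius and colliding particles simply bounce by reversing their velocities; the collision detection and response used in the paper may differ.

```python
import math

def collides(a, b, radius):
    """Spatially extended particles collide when their centres are closer than two radii."""
    dist = math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a["x"], b["x"])))
    return dist < 2.0 * radius

def bounce_collisions(swarm, radius):
    """After the position update, bounce colliding particles by reversing their velocities."""
    for i, a in enumerate(swarm):
        for b in swarm[i + 1:]:
            if collides(a, b, radius):
                a["v"] = [-v for v in a["v"]]
                b["v"] = [-v for v in b["v"]]
```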


Proceedings ArticleDOI
12 May 2002
TL;DR: An extension for the differential evolution algorithm for handling nonlinear constraint functions is proposed and demonstrated by solving a suite of ten well-known test problems.
Abstract: An extension for the differential evolution algorithm is proposed for handling nonlinear constraint functions. In comparison with the original algorithm, only the replacement criterion was modified for handling the constraints. In this article the proposed method is described and demonstrated by solving a suite of ten well-known test problems.
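The abstract says only the replacement criterion was modified; a common feasibility-based rule of this kind (shown here as an illustration, not necessarily the paper's exact rule) compares trial and target vectors on constraint violation before objective value.

```python
def trial_replaces_target(trial, target, f, constraints):
    """Feasibility-based replacement rule for DE selection.

    constraints: list of functions g with g(x) <= 0 meaning 'satisfied'.
    Feasible beats infeasible; two feasible vectors are compared by objective
    value; two infeasible vectors are compared by total constraint violation.
    """
    def violation(x):
        return sum(max(0.0, g(x)) for g in constraints)

    v_trial, v_target = violation(trial), violation(target)
    if v_trial == 0.0 and v_target == 0.0:
        return f(trial) <= f(target)        # both feasible: better objective wins
    if v_trial == 0.0 or v_target == 0.0:
        return v_trial == 0.0               # feasible beats infeasible
    return v_trial <= v_target              # both infeasible: smaller violation wins
```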

Proceedings ArticleDOI
12 May 2002
TL;DR: The differential evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach and performs well when applied to several test optimization problems from the literature.
Abstract: Differential evolution is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. In this paper, the differential evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach. The algorithm performs well when applied to several test optimization problems from the literature.

Proceedings ArticleDOI
12 May 2002
TL;DR: This work provides an explication of a resource limited artificial immune classification algorithm, named AIRS (Artificial Immune Recognition System), and provides results on simulated data sets to demonstrate the fundamental behavior of the algorithm.
Abstract: This paper presents a new supervised learning paradigm inspired by mechanisms exhibited in immune systems. This work provides an explication of a resource limited artificial immune classification algorithm, named AIRS (Artificial Immune Recognition System), and provides results on simulated data sets to demonstrate the fundamental behavior of the algorithm.

Proceedings ArticleDOI
12 May 2002
TL;DR: A dissipative particle swarm optimization is developed according to the self-organization of dissipative structures, in which negative entropy is introduced to construct an open dissipative system that is far from equilibrium, so as to drive the irreversible evolution process towards better fitness.
Abstract: A dissipative particle swarm optimization is developed according to the self-organization of dissipative structures. Negative entropy is introduced to construct an open dissipative system that is far from equilibrium, so as to drive the irreversible evolution process towards better fitness. Testing on two multimodal functions indicates that it improves the performance effectively.
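One concrete way to realize the "negative entropy" described above (a sketch only; the paper's exact formulation may differ, and the probability names c_v and c_l are assumptions) is to re-randomize velocities and locations with small probabilities every iteration, which keeps the swarm away from equilibrium.

```python
import random

def inject_negative_entropy(swarm, bounds, v_max, c_v=0.1, c_l=0.001):
    """Re-randomize velocities (probability c_v) and locations (probability c_l) per dimension.

    The constant injection of randomness keeps the swarm away from an equilibrium
    state, playing the role of the 'negative entropy' described in the abstract.
    """
    for p in swarm:
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < c_v:
                p["v"][d] = random.uniform(-v_max, v_max)
            if random.random() < c_l:
                p["x"][d] = random.uniform(lo, hi)
```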

Proceedings ArticleDOI
12 May 2002
TL;DR: A new meta-heuristic (EPSO) built by putting together the best features of evolution strategies (ES) and particle swarm optimization (PSO) is presented, along with applications to real-world problems in opto-electronics and power systems.
Abstract: This paper presents a new meta-heuristic (EPSO) built by putting together the best features of evolution strategies (ES) and particle swarm optimization (PSO). Examples of the superiority of EPSO over classical PSO are reported. The paper also describes the application of EPSO to real-world problems, including an application in opto-electronics and another in power systems.

Proceedings ArticleDOI
12 May 2002
TL;DR: This paper introduces and investigates the behaviour of dynamiCS, a dynamic clonal selection algorithm designed to have properties of self-adaptation, demonstrating its ability to perform incremental learning on converged data and to adapt to novel data.
Abstract: One significant feature of artificial immune systems is their ability to adapt to continuously changing environments, dynamically learning the fluid patterns of 'self' and predicting new patterns of 'non-self'. This paper introduces and investigates the behaviour of dynamiCS, a dynamic clonal selection algorithm, designed to have such properties of self-adaptation. The effects of three important system parameters: tolerisation period, activation threshold, and life span are explored. The abilities of dynamiCS to perform incremental learning on converged data, and to adapt to novel data are also demonstrated.

Proceedings ArticleDOI
12 May 2002
TL;DR: A system that combines ontogenetic development and artificial evolution to automatically design robots in a physics-based, virtual environment is introduced and it is demonstrated that the evolved genetic regulatory networks from successful evolutionary runs are more modular than those obtained from unsuccessful runs.
Abstract: We introduce a system that combines ontogenetic development and artificial evolution to automatically design robots in a physics-based, virtual environment. Through lesion experiments on the evolved agents, we demonstrate that the evolved genetic regulatory networks from successful evolutionary runs are more modular than those obtained from unsuccessful runs.

Proceedings ArticleDOI
M. Lovbjerg, T. Krink
12 May 2002
TL;DR: Self-organized criticality (SOC) can help control the PSO and add diversity; extending the PSO with SOC seems promising, reaching faster convergence and better solutions.
Abstract: Particle swarm optimisers (PSOs) show potential in function optimisation, but still have room for improvement. Self-organized criticality (SOC) can help control the PSO and add diversity. Extending the PSO with SOC seems promising, reaching faster convergence and better solutions.

Proceedings ArticleDOI
12 May 2002
TL;DR: A novel approach inspired by the immune system that allows the application of conventional classification algorithms to perform anomaly detection and produces fuzzy characterization of the normal (or abnormal) space.
Abstract: This paper presents a novel approach inspired by the immune system that allows the application of conventional classification algorithms to perform anomaly detection. This approach appears to be very useful where only positive samples are available to train an anomaly detection system. The proposed approach uses the positive samples to generate negative samples that are used as training data for a classification algorithm. In particular, the algorithm produces fuzzy characterization of the normal (or abnormal) space. This allows it to assign a degree of normalcy, represented by membership value, to elements of the space.

Proceedings ArticleDOI
12 May 2002
TL;DR: New results obtained with ASCHEA after extending the penalty function and introducing a niching technique with adaptive radius to handle multimodal functions are presented, and a new equality constraint handling strategy is proposed.
Abstract: ASCHEA is an adaptive algorithm for constrained optimization problems, based on a population-level adaptive penalty function to handle constraints, a constraint-driven mate selection for recombination, and a segregational selection that favors a given number of feasible individuals. In this paper, we present some new results obtained using ASCHEA after extending the penalty function and introducing a niching technique with adaptive radius to handle multimodal functions. Furthermore, we propose a new equality constraint handling strategy. The idea is to start, for each equality, with a large feasible domain and to reduce it progressively over generations, in order to bring it as close as possible to a null-measure domain. Two approaches are proposed and evaluated, the first based on dynamic adjustment and the second on adaptive adjustment.
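The progressive reduction of each equality constraint's feasible domain can be pictured as relaxing h(x) = 0 to |h(x)| <= epsilon_t and shrinking epsilon_t over generations; the sketch below shows only the dynamic (scheduled) variant, with illustrative names and rates rather than the paper's actual settings.

```python
def equality_violation(h, x, epsilon_t):
    """Treat |h(x)| <= epsilon_t as feasible; return the amount by which x exceeds that."""
    return max(0.0, abs(h(x)) - epsilon_t)

def shrink_epsilon(epsilon_t, rate=0.95, floor=1e-6):
    """Dynamic adjustment: shrink the relaxed feasible domain a little every generation."""
    return max(floor, rate * epsilon_t)
```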

Proceedings ArticleDOI
Dirk Thierens
12 May 2002
TL;DR: Two simple adaptive mutation rate control schemes are proposed, and their feasibility is shown in comparison with a fixed mutation rate, a self-adaptive mutation rate, and a deterministically scheduled dynamic mutation rate.
Abstract: The adaptation of mutation rate parameter values is important to allow the search process to optimize its performance during run time. In addition, it frees the user of the need to make non-trivial decisions beforehand. Contrary to real vector coded genotypes, for discrete genotypes most users still prefer to use a fixed mutation rate. Here we propose two simple adaptive mutation rate control schemes, and show their feasibility in comparison with a fixed mutation rate, a self-adaptive mutation rate and a deterministically scheduled dynamic mutation rate.
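As a rough illustration of adaptive mutation rate control (not the paper's exact schemes), one simple rule is to mutate a parent with a lower, the current, and a higher rate and keep the rate that produced the best offspring; all names and constants below are assumptions.

```python
def adapt_mutation_rate(parent, p_m, mutate, fitness, gamma=2.0, p_min=1e-4, p_max=0.5):
    """Try a lower, the current, and a higher mutation rate; keep the rate whose
    offspring is fittest (maximization assumed). mutate(individual, rate) and
    fitness(individual) are user-supplied callables.
    """
    candidates = []
    for rate in (p_m / gamma, p_m, p_m * gamma):
        rate = min(p_max, max(p_min, rate))
        child = mutate(parent, rate)
        candidates.append((fitness(child), rate, child))
    best_fitness, best_rate, best_child = max(candidates, key=lambda c: c[0])
    return best_child, best_rate
```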

Proceedings ArticleDOI
12 May 2002
TL;DR: This paper examines a particle swarm algorithm which has been applied to the generation of interactive, improvised music and suggests that the algorithm may have applications to dynamic optimisation problems.
Abstract: This paper examines a particle swarm algorithm which has been applied to the generation of interactive, improvised music. An important feature of this algorithm is a balance between particle attraction to the centre of mass and repulsive, collision avoiding forces. These forces are not present in the classic particle swarm optimisation algorithms. A number of experiments illuminate the nature of these new forces and it is suggested that the algorithm may have applications to dynamic optimisation problems.

Proceedings ArticleDOI
12 May 2002
TL;DR: A method, NeuroEvolution of Augmenting Topologies (NEAT), is presented that outperforms the best fixed-topology methods on a challenging benchmark reinforcement learning task and shows how evolution can both optimize and complexify solutions simultaneously, making it possible to evolve increasingly complex solutions over time.
Abstract: Neuroevolution, i.e. evolving artificial neural networks with genetic algorithms, has been highly effective in reinforcement learning tasks, particularly those with hidden state information. An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. We present a method, NeuroEvolution of Augmenting Topologies (NEAT) that outperforms the best fixed-topology methods on a challenging benchmark reinforcement learning task. We claim that the increased efficiency is due to (1) employing a principled method of crossover of different topologies, (2) protecting structural innovation using speciation, and (3) incrementally growing from minimal structure. We test this claim through a series of ablation studies that demonstrate that each component is necessary to the system as a whole and to each other. What results is significantly faster learning. NEAT is also an important contribution to GAs because it shows how it is possible for evolution to both optimize and complexify solutions simultaneously, making it possible to evolve increasingly complex solutions over time, thereby strengthening the analogy with biological evolution.

Proceedings ArticleDOI
12 May 2002
TL;DR: A genetic algorithm based hyperheuristic (hyper-GA) can successfully solve the problem of scheduling geographically distributed training staff and courses; results are presented for four versions of the hyper-GA, as well as for a range of simpler heuristics, applied to five test data sets.
Abstract: This paper investigates a genetic algorithm based hyperheuristic (hyper-GA) for scheduling geographically distributed training staff and courses. The aim of the hyper-GA is to evolve a good-quality heuristic for each given instance of the problem and use this to find a solution by applying a suitable ordering from a set of low-level heuristics. Since the user only supplies a number of low-level problem-specific heuristics and an evaluation function, the hyperheuristic can easily be reimplemented for a different type of problem, and we would expect it to be robust across a wide range of problem instances. We show that the problem can be solved successfully by a hyper-GA, presenting results for four versions of the hyper-GA as well as a range of simpler heuristics applied to five test data sets.

Proceedings ArticleDOI
12 May 2002
TL;DR: Some crucial problems and limitations of contemporary practice in performing and documenting experimental research in evolutionary computing (EC) are identified, and research directions that should be pursued are elaborated.
Abstract: In this paper, we point to some essential shortcomings in contemporary practice in performing and documenting experimental research in evolutionary computing (EC). We identify some crucial problems and the limitations of this practice, and elaborate on research directions that should be pursued to improve the quality and relevance of experimental research.

Proceedings ArticleDOI
12 May 2002
TL;DR: A simple steady-state, Pareto-based evolutionary algorithm is presented that uses an elitist strategy for replacement and a simple uniform scheme for selection to solve multiple knapsack problems.
Abstract: A simple steady-state, Pareto-based evolutionary algorithm is presented that uses an elitist strategy for replacement and a simple uniform scheme for selection. Throughout the genetic search, progress depends entirely on the replacement policy, and no fitness calculations, rankings, subpopulations, niches or auxiliary populations are required. Preliminary results presented in this paper show improvements on previously published results for some multiple knapsack problems.

Proceedings ArticleDOI
12 May 2002
TL;DR: Experimental results indicate that PSO tackles minimax problems effectively; moreover, PSO alleviates difficulties that might be encountered by gradient-based methods due to the nature of the minimax objective function and that can potentially lead to failure.
Abstract: This paper investigates the ability of the Particle Swarm Optimization (PSO) method to cope with minimax problems through experiments on well-known test functions. Experimental results indicate that PSO tackles minimax problems effectively. Moreover, PSO alleviates difficulties that might be encountered by gradient-based methods due to the nature of the minimax objective function and that can potentially lead to failure. The performance of PSO is compared with that of other established approaches, such as the sequential quadratic programming (SQP) method and a recently proposed smoothing technique.
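In the minimax setting the quantity to be minimized is itself an inner maximum over a set of component functions; the small wrapper below (purely illustrative) shows how such an objective can be handed to an ordinary PSO and why it is awkward for gradient-based methods.

```python
def minimax_objective(component_functions):
    """Build F(x) = max_i f_i(x) from a list of component functions f_i.

    Minimizing F with PSO solves min_x max_i f_i(x). F is generally
    non-differentiable where the maximizing index switches, which is the
    difficulty gradient-based methods face on minimax problems.
    """
    def F(x):
        return max(f(x) for f in component_functions)
    return F
```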

Proceedings ArticleDOI
12 May 2002
TL;DR: The augmented Lagrangian is introduced to transform a constrained optimization problem into a min-max problem with a saddle-point solution, and the efficiency and effectiveness of the new co-evolutionary particle swarm algorithm are illustrated.
Abstract: A co-evolutionary particle swarm optimization (PSO) algorithm to solve constrained optimization problems is proposed. First, we introduce the augmented Lagrangian to transform a constrained optimization problem into a min-max problem whose solution is a saddle point. Next, a co-evolutionary PSO algorithm is developed in which one PSO focuses on the minimum part of the min-max problem and the other PSO focuses on the maximum part. The two PSOs are connected through the fitness function: in the fitness calculation of one PSO, the other PSO serves as its environment. The new algorithm is tested on three benchmark functions. The simulation results illustrate the efficiency and effectiveness of the new co-evolutionary particle swarm algorithm.
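A hedged sketch of how the two swarms can be coupled through an augmented Lagrangian; the particular Lagrangian form, the penalty coefficient r, and the coupling comment below are assumptions for illustration, not the paper's exact formulation.

```python
def augmented_lagrangian(f, gs, r=10.0):
    """One common augmented Lagrangian for inequality constraints g_i(x) <= 0."""
    def L(x, lam):
        return (f(x)
                + sum(l * max(0.0, g(x)) for l, g in zip(lam, gs))
                + r * sum(max(0.0, g(x)) ** 2 for g in gs))
    return L

# Coupling sketch: swarm A minimizes L(x, lam_best) over x while swarm B maximizes
# L(x_best, lam) over lam; each swarm's fitness uses the other swarm's current best,
# which is how the two PSOs serve as each other's environment.
```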

Proceedings ArticleDOI
12 May 2002
TL;DR: This paper presents a new tool for supervised learning, modeled on resource limited Artificial Immune Systems, that is self-regulatory, efficient, and stable under a wide range of user-set parameters.
Abstract: This paper presents a new tool for supervised learning, modeled on resource limited Artificial Immune Systems. A supervised learning system, it is self-regulatory, efficient, and stable under a wide range of user-set parameters. Its performance is comparable to well-established classifiers on a variety of testbeds, including the iris data, the diabetes classification problem, the ionosphere problem, and the rock/metal classification problem for mine detection.

Proceedings ArticleDOI
12 May 2002
TL;DR: This paper studies a simplified form of LISYS, an artificial immune system for network intrusion detection, based on a new, more controlled data set than that used for earlier studies.
Abstract: This paper studies a simplified form of LISYS, an artificial immune system for network intrusion detection. The paper describes results based on a new, more controlled data set than that used for earlier studies. The paper also looks at which parameters appear most important for minimizing false positives, as well as the trade-offs and relationships among parameter settings.