Conference
World Congress on Computational Intelligence
About: The World Congress on Computational Intelligence is an academic conference. It publishes mainly in the areas of artificial neural networks and fuzzy logic. Over its lifetime, the conference has published 2,078 papers, which have received 34,864 citations.
Topics: Artificial neural network, Fuzzy logic, Evolutionary computation, Fuzzy control system, Time delay neural network
Papers
27 Jun 1994
TL;DR: The Niched Pareto GA is introduced as an algorithm for finding the Pareto optimal set, and its ability to find and maintain a diverse "Pareto optimal population" is demonstrated on two artificial problems and an open problem in hydrosystems.
Abstract: Many, if not most, optimization problems have multiple objectives. Historically, multiple objectives have been combined ad hoc to form a scalar objective function, usually through a linear combination (weighted sum) of the multiple attributes, or by turning objectives into constraints. The genetic algorithm (GA), however, is readily modified to deal with multiple objectives by incorporating the concept of Pareto domination in its selection operator, and applying a niching pressure to spread its population out along the Pareto optimal tradeoff surface. We introduce the Niched Pareto GA as an algorithm for finding the Pareto optimal set. We demonstrate its ability to find and maintain a diverse "Pareto optimal population" on two artificial problems and an open problem in hydrosystems.
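The selection scheme the abstract describes, Pareto-domination tournaments with niching as a tiebreaker, can be sketched as below. This is a minimal illustration, not the authors' implementation; the function names and the `t_dom`/`sigma_share` parameter values are assumptions.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def niche_count(candidate, population, sigma_share):
    """Crowding measure: number of individuals within sigma_share of the candidate."""
    return sum(1 for p in population
               if sum((x - y) ** 2 for x, y in zip(candidate, p)) ** 0.5 < sigma_share)

def pareto_tournament(pop, objs, t_dom=10, sigma_share=0.5):
    """Niched Pareto tournament (illustrative): two candidates are checked
    for domination against a random comparison set; a tie is broken in
    favor of the less crowded candidate, which spreads the population
    along the tradeoff surface."""
    i, j = random.sample(range(len(pop)), 2)
    comparison = random.sample(range(len(pop)), min(t_dom, len(pop)))
    i_dominated = any(dominates(objs[k], objs[i]) for k in comparison)
    j_dominated = any(dominates(objs[k], objs[j]) for k in comparison)
    if i_dominated and not j_dominated:
        return pop[j]
    if j_dominated and not i_dominated:
        return pop[i]
    # Tie: prefer the individual with the smaller niche count (more diversity).
    if niche_count(pop[i], pop, sigma_share) <= niche_count(pop[j], pop, sigma_share):
        return pop[i]
    return pop[j]
```

The comparison-set size `t_dom` controls domination pressure: larger sets make it harder for a dominated candidate to slip through.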
2,566 citations
01 Jun 2008
TL;DR: This paper demonstrates, through computational experiments, the difficulties EMO algorithms face in scaling to many-objective problems, and reviews approaches proposed in the literature for improving their scalability.
Abstract: Whereas evolutionary multiobjective optimization (EMO) algorithms have successfully been used in a wide range of real-world application tasks, difficulties in their scalability to many-objective problems have also been reported. In this paper, first we demonstrate those difficulties through computational experiments. Then we review some approaches proposed in the literature for the scalability improvement of EMO algorithms. Finally we suggest future research directions in evolutionary many-objective optimization.
845 citations
27 Jun 1994
TL;DR: This paper discusses the use of non-stationary penalty functions to solve general nonlinear programming (NP) problems with real-valued GAs, and reports on the effectiveness of these methods on four test cases.
Abstract: We discuss the use of non-stationary penalty functions to solve general nonlinear programming problems (NP) using real-valued GAs. The non-stationary penalty is a function of the generation number; as the number of generations increases, so does the penalty. Therefore, as the penalty increases, it puts more and more selective pressure on the GA to find a feasible solution. The ideas presented in this paper come from two basic areas: calculus-based nonlinear programming and simulated annealing. The non-stationary penalty methods are tested on four NP test cases, and the effectiveness of these methods is reported.
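The core idea, a penalty coefficient that grows with the generation number, can be sketched as follows. The functional form and the constants `C` and `alpha` are assumptions for illustration, not the paper's exact schedule.

```python
def penalized_fitness(x, objective, constraints, generation, C=0.5, alpha=2.0):
    """Non-stationary penalty sketch (minimization): each constraint g is
    feasible when g(x) <= 0. The penalty coefficient (C * generation)**alpha
    grows over the run, so infeasible solutions are tolerated early and
    pushed toward feasibility late."""
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return objective(x) + (C * generation) ** alpha * violation
```

For example, with `objective = lambda x: x * x` and the constraint `g = lambda x: 1.0 - x` (feasible when `x >= 1`), an infeasible point such as `x = 0.5` pays no penalty at generation 0 but an ever larger one as generations pass, mirroring the increasing selective pressure the abstract describes.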
781 citations
01 Jan 1994
TL;DR: The authors estimate the mean and variance of the target's probability distribution as a function of the input, given an assumed target error-distribution model. Through the activation of an auxiliary output unit, the method provides a measure of the uncertainty of the usual network output for each input pattern.
Abstract: Introduces a method that estimates the mean and the variance of the probability distribution of the target as a function of the input, given an assumed target error-distribution model. Through the activation of an auxiliary output unit, this method provides a measure of the uncertainty of the usual network output for each input pattern. The authors derive the cost function and weight-update equations for the example of a Gaussian target error distribution, and demonstrate the feasibility of the network on a synthetic problem where the true input-dependent noise level is known.
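For the Gaussian target error distribution the abstract mentions, the per-pattern cost is the negative log-likelihood of the target under a normal distribution whose mean and variance are both network outputs. A minimal sketch of that cost (the function name is ours; the paper derives the corresponding weight updates):

```python
import math

def gaussian_nll(y, mean, variance):
    """Per-pattern cost for a network with a mean output and an auxiliary
    variance output: -log N(y; mean, variance). Minimizing this over the
    training set trains the variance unit to track the input-dependent
    noise level, giving an uncertainty estimate per input pattern."""
    return 0.5 * math.log(2.0 * math.pi * variance) + (y - mean) ** 2 / (2.0 * variance)
```

Note the tradeoff built into the cost: inflating the variance shrinks the squared-error term but pays a log-variance price, so the minimizer is the true conditional noise level.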
579 citations
01 Jun 2008
TL;DR: NES is presented, a novel algorithm for performing real-valued "black box" function optimization: optimizing an unknown objective function where algorithm-selected function measurements constitute the only information accessible to the method.
Abstract: This paper presents natural evolution strategies (NES), a novel algorithm for performing real-valued "black box" function optimization: optimizing an unknown objective function where algorithm-selected function measurements constitute the only information accessible to the method. Natural evolution strategies search the fitness landscape using a multivariate normal distribution with a self-adapting mutation matrix to generate correlated mutations in promising regions. NES shares this property with covariance matrix adaptation (CMA), an evolution strategy (ES) which has been shown to perform well on a variety of high-precision optimization tasks. The natural evolution strategies algorithm, however, is simpler, less ad hoc and more principled. Self-adaptation of the mutation matrix is derived using a Monte Carlo estimate of the natural gradient towards better expected fitness. By following the natural gradient instead of the "vanilla" gradient, we can ensure efficient update steps while preventing early convergence due to overly greedy updates, resulting in reduced sensitivity to local suboptima. We show NES has competitive performance with CMA on unimodal tasks, while outperforming it on several multimodal tasks that are rich in deceptive local optima.
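The Monte Carlo gradient estimation the abstract describes can be sketched in reduced form. The paper self-adapts a full mutation matrix via the natural gradient; for brevity this sketch uses an isotropic Gaussian search distribution N(mu, sigma^2 I) with plain (not natural) gradient steps, and the function name and all learning-rate/population parameters are assumptions.

```python
import math
import random

def nes_minimize(f, mu, sigma=1.0, pop=20, lr_mu=0.1, lr_sigma=0.05, iters=300):
    """Reduced evolution-strategy sketch in the spirit of NES: sample
    perturbations from the search distribution, estimate the gradient of
    expected fitness by Monte Carlo (log-derivative trick), and update the
    distribution parameters."""
    dim = len(mu)
    for _ in range(iters):
        # Sample a population of standard-normal perturbations.
        eps = [[random.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(pop)]
        fits = [f([m + sigma * e_d for m, e_d in zip(mu, e)]) for e in eps]
        # Rank-based utilities (+1 for the best sample, -1 for the worst)
        # reduce sensitivity to the scale of the raw fitness values.
        order = sorted(range(pop), key=lambda k: fits[k])
        util = [0.0] * pop
        for rank, k in enumerate(order):
            util[k] = (pop - 1 - 2 * rank) / (pop - 1)
        # Monte Carlo gradient estimates for the mean and the step size.
        grad_mu = [sum(util[k] * eps[k][d] for k in range(pop)) / pop
                   for d in range(dim)]
        grad_sigma = sum(util[k] * (sum(e * e for e in eps[k]) - dim)
                         for k in range(pop)) / pop
        mu = [m + lr_mu * sigma * g for m, g in zip(mu, grad_mu)]
        sigma *= math.exp(lr_sigma * grad_sigma / dim)
    return mu
```

On a simple sphere function the mean drifts toward the optimum while sigma first widens, then shrinks as the distribution closes in, the self-adaptation behavior the abstract attributes to the full mutation matrix.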
535 citations