Topic

Premature convergence

About: Premature convergence is a research topic. Over its lifetime, 5,311 publications have been published within this topic, receiving 96,517 citations.


Papers
Journal ArticleDOI
01 Feb 1996
TL;DR: It is shown how the ant system (AS) can be applied to other optimization problems such as the asymmetric traveling salesman, quadratic assignment, and job-shop scheduling problems, and the salient characteristics of the AS (global data structure revision, distributed communication, and probabilistic transitions) are discussed.
Abstract: An analogy with the way ant colonies function has suggested the definition of a new computational paradigm, which we call ant system (AS). We propose it as a viable new approach to stochastic combinatorial optimization. The main characteristics of this model are positive feedback, distributed computation, and the use of a constructive greedy heuristic. Positive feedback accounts for rapid discovery of good solutions, distributed computation avoids premature convergence, and the greedy heuristic helps find acceptable solutions in the early stages of the search process. We apply the proposed methodology to the classical traveling salesman problem (TSP), and report simulation results. We also discuss parameter selection and the early setups of the model, and compare it with tabu search and simulated annealing using TSP. To demonstrate the robustness of the approach, we show how the ant system (AS) can be applied to other optimization problems like the asymmetric traveling salesman, the quadratic assignment and the job-shop scheduling. Finally we discuss the salient characteristics (global data structure revision, distributed communication and probabilistic transitions) of the AS.

11,224 citations
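The mechanics the abstract describes (probabilistic transitions biased by pheromone trails and a greedy heuristic, followed by global revision of the shared pheromone structure) can be illustrated with a minimal sketch for the symmetric TSP. This is not the paper's implementation; the parameter values (alpha, beta, rho, Q, ant count) are assumptions chosen for readability, not the paper's tuned settings.

```python
import random

def ant_system_tsp(dist, n_ants=10, n_iters=100, alpha=1.0, beta=2.0, rho=0.5, Q=1.0):
    """Minimal AS sketch for the symmetric TSP (illustrative parameters)."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # shared pheromone trails (global data structure)
    eta = [[0.0 if i == j else 1.0 / dist[i][j] for j in range(n)] for i in range(n)]
    best_tour, best_len = None, float("inf")

    for _ in range(n_iters):
        tours = []
        for _ant in range(n_ants):
            tour = [random.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cands = [j for j in range(n) if j not in tour]
                # probabilistic transition: favor strong trails and short edges
                weights = [(tau[i][j] ** alpha) * (eta[i][j] ** beta) for j in cands]
                tour.append(random.choices(cands, weights)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # global revision: evaporate all trails, then let each ant deposit Q/length
        for i in range(n):
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += Q / length
                tau[j][i] += Q / length
    return best_tour, best_len
```

The evaporation rate rho is what keeps positive feedback from freezing the search onto the first good tour; with too little evaporation, early trails dominate and premature convergence can reappear.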

Journal ArticleDOI
TL;DR: The comprehensive learning particle swarm optimizer (CLPSO) is presented, which uses a novel learning strategy whereby all other particles' historical best information is used to update a particle's velocity.
Abstract: This paper presents a variant of particle swarm optimizers (PSOs) that we call the comprehensive learning particle swarm optimizer (CLPSO), which uses a novel learning strategy whereby all other particles' historical best information is used to update a particle's velocity. This strategy enables the diversity of the swarm to be preserved to discourage premature convergence. Experiments were conducted (using codes available from http://www.ntu.edu.sg/home/epnsugan) on multimodal test functions such as Rosenbrock, Griewank, Rastrigin, Ackley, and Schwefel and composition functions both with and without coordinate rotation. The results demonstrate good performance of the CLPSO in solving multimodal problems when compared with eight other recent variants of the PSO.

3,217 citations
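The learning strategy described above amounts to a per-dimension velocity update: each dimension of a particle learns from the personal best of some particle, chosen anew per dimension. The sketch below assumes a minimization problem; the function name and the w, c, and pc values are illustrative assumptions, not the paper's settings (the paper additionally varies the learning probability per particle and refreshes exemplars after a stagnation gap).

```python
import random

def clpso_velocity(v, x, pbests, i, w=0.7, c=1.5, pc=0.3):
    """Sketch of the CLPSO comprehensive-learning velocity update for particle i.

    pbests: list of (position, fitness) personal bests; lower fitness is better.
    With probability pc, dimension d learns from the fitter of two randomly
    chosen particles' pbests (tournament); otherwise from particle i's own pbest.
    """
    new_v = []
    for d in range(len(x)):
        if random.random() < pc:
            a, b = random.sample(range(len(pbests)), 2)
            exemplar = pbests[a] if pbests[a][1] < pbests[b][1] else pbests[b]
        else:
            exemplar = pbests[i]  # learn from own personal best
        new_v.append(w * v[d] + c * random.random() * (exemplar[0][d] - x[d]))
    return new_v
```

Because different dimensions can follow different exemplars, no single particle's history dominates the update, which is what preserves the swarm's diversity.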

Journal ArticleDOI
Kuk-Hyun Han, Jong-Hwan Kim
TL;DR: A Q-gate is introduced as a variation operator to drive the individuals toward better solutions, and the results show that QEA performs well, even with a small population, without premature convergence, compared to the conventional genetic algorithm.
Abstract: This paper proposes a novel evolutionary algorithm inspired by quantum computing, called a quantum-inspired evolutionary algorithm (QEA), which is based on the concept and principles of quantum computing, such as a quantum bit and superposition of states. Like other evolutionary algorithms, QEA is also characterized by the representation of the individual, evaluation function, and population dynamics. However, instead of binary, numeric, or symbolic representation, QEA uses a Q-bit, defined as the smallest unit of information, for the probabilistic representation and a Q-bit individual as a string of Q-bits. A Q-gate is introduced as a variation operator to drive the individuals toward better solutions. To demonstrate its effectiveness and applicability, experiments were carried out on the knapsack problem, which is a well-known combinatorial optimization problem. The results show that QEA performs well, even with a small population, without premature convergence as compared to the conventional genetic algorithm.

1,335 citations
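The Q-bit representation and Q-gate update reduce to a short sketch if each Q-bit is stored as a single angle theta, so that alpha = cos(theta), beta = sin(theta), and alpha^2 + beta^2 = 1 hold by construction. That angle-only encoding and the fixed rotation magnitude below are illustrative assumptions; the paper derives signed rotation angles from a lookup table over several conditions.

```python
import math
import random

def observe(thetas):
    """Collapse a Q-bit individual to a classical bit string: P(bit=1) = sin(theta)^2."""
    return [1 if random.random() < math.sin(t) ** 2 else 0 for t in thetas]

def rotate(thetas, bits, best_bits, delta=0.01 * math.pi):
    """Q-gate sketch: rotate each angle toward the current best bit string.

    Where the observed bit already matches the best solution, the angle is
    left alone; otherwise it moves toward 0 (bit 0) or pi/2 (bit 1).
    """
    return [t + delta * (b_best - b) for t, b, b_best in zip(thetas, bits, best_bits)]
```

A single Q-bit string thus encodes a probability distribution over all binary strings of its length, which is why a very small population can still cover the search space.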

Journal ArticleDOI
TL;DR: A family of improved variants of the DE/target-to-best/1/bin scheme is presented, which utilizes the concept of the neighborhood of each population member and is shown to be statistically significantly better than, or at least comparable to, several existing DE variants as well as a few other significant evolutionary computing techniques over a test suite of 24 benchmark functions.
Abstract: Differential evolution (DE) is well known as a simple and efficient scheme for global optimization over continuous spaces. It has reportedly outperformed a few evolutionary algorithms (EAs) and other search heuristics like the particle swarm optimization (PSO) when tested over both benchmark and real-world problems. DE, however, is not completely free from the problems of slow and/or premature convergence. This paper describes a family of improved variants of the DE/target-to-best/1/bin scheme, which utilizes the concept of the neighborhood of each population member. The idea of small neighborhoods, defined over the index-graph of parameter vectors, draws inspiration from the community of the PSO algorithms. The proposed schemes balance the exploration and exploitation abilities of DE without imposing serious additional burdens in terms of function evaluations. They are shown to be statistically significantly better than or at least comparable to several existing DE variants as well as a few other significant evolutionary computing techniques over a test suite of 24 benchmark functions. The paper also investigates the applications of the new DE variants to two real-life problems concerning parameter estimation for frequency modulated sound waves and spread spectrum radar poly-phase code design.

1,086 citations
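The neighborhood idea can be sketched as a donor-vector construction: a local donor built from the best vector in a small ring neighborhood on the index graph, a global donor built from the population best, and a weighted blend of the two. The sketch below follows that structure under stated assumptions; the fixed alpha, beta, and weight w are illustrative (the paper also studies adapting the weight), and the population is assumed larger than the ring (n > 2k).

```python
import random

def neighborhood_donor(pop, fit, i, k=2, alpha=0.8, beta=0.8, w=0.5):
    """Sketch of a neighborhood-based DE donor for target vector i.

    pop: list of parameter vectors; fit: parallel fitness list (lower is better).
    The neighborhood is a ring of radius k over population indices.
    """
    n, dims = len(pop), len(pop[i])
    nbhd = [(i + off) % n for off in range(-k, k + 1)]  # ring neighborhood incl. i
    n_best = min(nbhd, key=lambda j: fit[j])            # local (neighborhood) best
    g_best = min(range(n), key=lambda j: fit[j])        # global best
    p, q = random.sample([j for j in nbhd if j != i], 2)
    r, s = random.sample([j for j in range(n) if j != i], 2)
    donor = []
    for d in range(dims):
        local = pop[i][d] + alpha * (pop[n_best][d] - pop[i][d]) \
                          + beta * (pop[p][d] - pop[q][d])
        glob = pop[i][d] + alpha * (pop[g_best][d] - pop[i][d]) \
                         + beta * (pop[r][d] - pop[s][d])
        donor.append(w * glob + (1.0 - w) * local)
    return donor
```

The weight w is the exploration/exploitation dial: w near 1 behaves like plain DE/target-to-best/1 (fast but greedy), while w near 0 confines attraction to the local neighborhood and slows the loss of diversity.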

Journal ArticleDOI
TL;DR: The Wilcoxon signed-rank test is used to statistically compare the effectiveness of the backtracking search optimization algorithm (BSA) in solving numerical optimization problems with the performances of six widely used evolutionary algorithms (PSO, CMAES, ABC, JDE, CLPSO, and SADE); the comparison shows that, in general, BSA can solve the benchmark problems more successfully than the comparison algorithms.

837 citations
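BSA's core step, as described in the BSA paper, perturbs the current population toward a shuffled historical population: M = P + F * (oldP - P), with the amplitude F = 3 * N(0, 1) drawn once per generation. The sketch below implements that mutation; the simple per-gene crossover rate dim_rate is a simplification of BSA's mixrate-controlled crossover map and is an assumption here.

```python
import random

def bsa_trials(pop, old_pop, dim_rate=0.5):
    """Sketch of BSA's historical-population mutation plus simplified crossover.

    pop, old_pop: lists of parameter vectors of equal shape. old_pop is the
    historical population that BSA occasionally re-seeds from past generations.
    """
    F = 3.0 * random.gauss(0.0, 1.0)                 # one amplitude per generation
    shuffled = random.sample(old_pop, len(old_pop))  # permute the history
    trials = []
    for x, h in zip(pop, shuffled):
        trial = list(x)
        for d in range(len(x)):
            if random.random() < dim_rate:           # crossover: mutate this gene
                trial[d] = x[d] + F * (h[d] - x[d])
        trials.append(trial)
    return trials
```

Because the search direction points at a population from an earlier generation rather than at the current best, BSA retains a memory of past diversity, which is one plausible reading of why it resists premature convergence in the reported comparisons.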


Network Information
Related Topics (5)

Topic                       Papers    Citations    Relatedness
Optimization problem        96.4K     2.1M         87%
Artificial neural network   207K      4.5M         87%
Fuzzy logic                 151.2K    2.3M         87%
Feature extraction          111.8K    2.1M         84%
Control theory              299.6K    3.1M         83%
Performance
Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    57
2022    180
2021    305
2020    315
2019    291
2018    305