
Multi-swarm optimization

About: Multi-swarm optimization is a research topic. Over the lifetime, 19,162 publications have been published within this topic, receiving 549,725 citations.


Papers
Journal ArticleDOI
TL;DR: A parameter sensitivity analysis is performed for the two leading PSOA variants, showing that the inclusion of dynamic inertia renders the PSOA relatively insensitive to the values of the cognitive and social scaling factors.
Abstract: A number of recently proposed variants of the particle swarm optimization algorithm (PSOA) are applied to an extended Dixon-Szegö unconstrained test set in global optimization. Of the variants considered, it is shown that constriction as proposed by Clerc, and dynamic inertia and maximum velocity reduction as proposed by Fourie and Groenwold, represent the main contenders from a cost efficiency point of view. A parameter sensitivity analysis is then performed for these two variants in the interest of finding a reliable, general-purpose, off-the-shelf PSOA for global optimization. In doing so, it is shown that the inclusion of dynamic inertia renders the PSOA relatively insensitive to the values of the cognitive and social scaling factors.

351 citations
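To make the comparison above concrete, here is a minimal sketch of the PSO velocity update in both its inertia form and Clerc's constriction form. The function name, parameter values, and use of NumPy are illustrative assumptions, not the paper's exact settings; dynamic inertia corresponds to the caller shrinking w over the course of the run.

```python
import numpy as np

def pso_velocity_update(v, x, pbest, gbest, w=0.9, c1=2.05, c2=2.05,
                        use_constriction=True, rng=None):
    """One particle's velocity update (illustrative sketch).

    use_constriction=True applies Clerc's constriction coefficient
    chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)| with phi = c1 + c2 > 4;
    otherwise the plain inertia form w*v + ... is used, where the caller
    may reduce w over the run (dynamic inertia).
    """
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    cognitive = c1 * r1 * (pbest - x)   # pull toward the particle's own best
    social = c2 * r2 * (gbest - x)      # pull toward the swarm's best
    if use_constriction:
        phi = c1 + c2                   # must exceed 4 for chi to be real
        chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))
        return chi * (v + cognitive + social)
    return w * v + cognitive + social
```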

Journal ArticleDOI
TL;DR: This paper develops, analyzes, and tests a new algorithm for the global minimization of a function subject to simple bounds without the use of derivatives, and shows that the resulting algorithm is highly competitive with other global optimization methods also based on function values.
Abstract: In this paper we develop, analyze, and test a new algorithm for the global minimization of a function subject to simple bounds without the use of derivatives. The underlying algorithm is a pattern search method, more specifically a coordinate search method, which guarantees convergence to stationary points from arbitrary starting points. In the optional search phase of pattern search we apply a particle swarm scheme to globally explore the possible nonconvexity of the objective function. Our extensive numerical experiments showed that the resulting algorithm is highly competitive with other global optimization methods also based on function values.

349 citations
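The hybrid described above combines a convergent poll step with a heuristic search step. The sketch below shows one iteration of a coordinate search over a box; as a simplifying assumption, the optional search phase samples random points where the paper applies a particle swarm scheme, and all names are illustrative.

```python
import numpy as np

def coordinate_search_step(f, x, alpha, lb, ub, n_search=0, rng=None):
    """One iteration of coordinate (pattern) search with an optional search phase.

    The poll phase tries +/- alpha along each coordinate axis and accepts any
    improving point; an unsuccessful iteration halves the step size, which is
    what drives convergence to stationarity. The search phase here samples
    n_search random points in the box as a stand-in for a swarm iteration.
    """
    rng = rng or np.random.default_rng()
    fx = f(x)
    if n_search > 0:                               # optional search phase
        cand = rng.uniform(lb, ub, size=(n_search, x.size))
        best = min(cand, key=f)
        if f(best) < fx:
            return best, alpha
    for i in range(x.size):                        # poll phase
        for s in (+1.0, -1.0):
            y = x.copy()
            y[i] = np.clip(y[i] + s * alpha, lb[i], ub[i])
            if f(y) < fx:
                return y, alpha
    return x, alpha / 2.0                          # unsuccessful: shrink step

# Usage: minimize a 2-D quadratic over the box [-5, 5]^2.
f = lambda z: float((z[0] - 1.0)**2 + (z[1] + 2.0)**2)
x, alpha = np.zeros(2), 1.0
lb, ub = np.full(2, -5.0), np.full(2, 5.0)
for _ in range(50):
    x, alpha = coordinate_search_step(f, x, alpha, lb, ub, n_search=5)
```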

Journal ArticleDOI
01 Jun 2012
TL;DR: A novel algorithm, the self-learning particle swarm optimizer (SLPSO), is presented for global optimization problems; an adaptive learning framework enables each particle to choose the optimal strategy according to its own local fitness landscape.
Abstract: Particle swarm optimization (PSO) has been shown to be an effective tool for solving global optimization problems. So far, most PSO algorithms use a single learning pattern for all particles, meaning that every particle in a swarm follows the same strategy. This uniform learning pattern can leave an individual particle unable to deal with different complex situations. This paper presents a novel algorithm, the self-learning particle swarm optimizer (SLPSO), for global optimization problems. In SLPSO, each particle has a set of four strategies for coping with different situations in the search space. The cooperation of the four strategies is implemented through an adaptive learning framework at the individual level, which enables a particle to choose the optimal strategy according to its own local fitness landscape. An experimental study on a set of 45 test functions and two real-world problems shows that SLPSO has superior performance in comparison with several other peer algorithms.

348 citations
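A minimal sketch of the per-particle adaptive strategy selection idea follows. The probability-matching update and the class interface are assumptions for illustration; the paper's four concrete learning strategies and its exact adaptation rule are not reproduced here.

```python
import numpy as np

class StrategySelector:
    """Per-particle selection over a fixed set of learning strategies (sketch)."""

    def __init__(self, n_strategies=4, learning_rate=0.1, rng=None):
        self.p = np.full(n_strategies, 1.0 / n_strategies)  # selection probabilities
        self.lr = learning_rate
        self.rng = rng or np.random.default_rng()

    def choose(self):
        # Sample a strategy index according to the current probabilities.
        return int(self.rng.choice(len(self.p), p=self.p))

    def update(self, k, improvement):
        # Reward strategy k in proportion to the fitness improvement it
        # produced, then renormalize so p stays a probability distribution.
        self.p[k] += self.lr * max(improvement, 0.0)
        self.p /= self.p.sum()

# Usage: each particle owns a selector; after applying strategy k for one
# iteration, feed back the (non-negative) decrease in objective value.
selector = StrategySelector()
k = selector.choose()
selector.update(k, improvement=0.5)
```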

Journal Article
TL;DR: This paper presents a method for employing particle swarm optimizers in a cooperative configuration by splitting the input vector into several sub-vectors, each of which is optimized cooperatively in its own swarm.
Abstract: This paper presents a method for employing particle swarm optimizers in a cooperative configuration. This is achieved by splitting the input vector into several sub-vectors, each of which is optimized cooperatively in its own swarm. The application of this technique to neural network training is investigated, with promising results.

344 citations
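The decomposition described above is commonly expressed with a shared context vector: each sub-swarm evaluates a candidate sub-vector by substituting it into the best-known values of the other parts. The sketch below assumes a simple sphere objective; all names are illustrative, not the paper's notation.

```python
import numpy as np

def part_objective(f, context, part_slice):
    """Objective seen by one sub-swarm: substitute its candidate sub-vector
    into the shared context vector and evaluate the full objective."""
    def f_part(sub_vector):
        trial = context.copy()
        trial[part_slice] = sub_vector
        return f(trial)
    return f_part

# Usage: a 12-dimensional sphere function split into three 4-dimensional parts.
f = lambda x: float(np.sum(x**2))
context = np.zeros(12)                      # concatenation of each swarm's best part
parts = [slice(0, 4), slice(4, 8), slice(8, 12)]
f_swarm_1 = part_objective(f, context, parts[1])
print(f_swarm_1(np.ones(4)))                # 4.0: dims 4..7 set to 1, rest from context
```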

Journal ArticleDOI
01 Aug 2017
TL;DR: The experimental results show that the proposed MGACACO algorithm avoids falling into local extrema and achieves better search precision and faster convergence.
Abstract: To overcome the weak local search ability of genetic algorithms (GA) and the slow global convergence of the ant colony optimization (ACO) algorithm on complex optimization problems, the chaotic optimization method, a multi-population collaborative strategy, and adaptive control parameters are introduced into GA and ACO to propose a genetic and ant colony adaptive collaborative optimization (MGACACO) algorithm. The proposed MGACACO algorithm makes use of the exploration capability of GA and the stochastic capability of ACO. The multi-population strategy realizes information exchange and cooperation among the various populations. The chaotic optimization method shortens the search time, avoids falling into local extrema, and improves search accuracy. The adaptive control parameters produce a relatively uniform pheromone distribution, effectively resolving the contradiction between expanding the search and converging on the optimal solution. The collaborative strategy dynamically balances global and local search ability and improves the convergence speed. Finally, TSP instances of various scales are selected to verify the effectiveness of the proposed MGACACO algorithm. The experimental results show that MGACACO avoids falling into local extrema and achieves better search precision and faster convergence.

343 citations
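Two of the ingredients named above can be sketched compactly. The logistic map is a common choice for chaotic optimization, and ring migration is one simple way to realize information exchange among multiple populations; both are stated here as assumptions, since the abstract does not specify the paper's exact map or exchange topology.

```python
import numpy as np

def logistic_map(x0=0.7, n=10, mu=4.0):
    """Chaotic sequence in (0, 1); mu = 4 gives fully chaotic behavior."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(mu * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

def ring_migration(populations, fitness, k=1):
    """Copy each population's k best individuals over the next population's
    k worst (minimization), realizing cooperation among sub-populations."""
    m = len(populations)
    # Record the best individuals before any population is overwritten.
    best_ids = [np.argsort(fitness(p))[:k] for p in populations]
    for i in range(m):
        src, dst = populations[i], populations[(i + 1) % m]
        worst = np.argsort(fitness(dst))[-k:]
        dst[worst] = src[best_ids[i]]

# Usage: three populations of 5 individuals in 2 dimensions.
rng = np.random.default_rng(0)
pops = [rng.normal(size=(5, 2)) for _ in range(3)]
sphere = lambda pop: np.sum(pop**2, axis=1)
ring_migration(pops, sphere, k=1)
```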


Network Information
Related Topics (5)
Fuzzy logic: 151.2K papers, 2.3M citations, 88% related
Optimization problem: 96.4K papers, 2.1M citations, 87% related
Support vector machine: 73.6K papers, 1.7M citations, 86% related
Artificial neural network: 207K papers, 4.5M citations, 85% related
Robustness (computer science): 94.7K papers, 1.6M citations, 83% related
Performance Metrics
No. of papers in the topic in previous years:
2023: 183
2022: 471
2021: 10
2020: 7
2019: 26
2018: 171