
Multi-swarm optimization

About: Multi-swarm optimization is a research topic. Over its lifetime, 19,162 publications have been published on this topic, receiving 549,725 citations.


Papers
Proceedings ArticleDOI
27 Jun 2007
TL;DR: A novel technique is presented for organizing swarms of robots into formation using artificial potential fields generated from normal and sigmoid functions; the approach is simple, computationally efficient, and scales well to different swarm sizes, to heterogeneous systems, and to both centralized and decentralized swarm models.
Abstract: A novel technique is presented for organizing swarms of robots into formation using artificial potential fields generated from normal and sigmoid functions. These functions construct the surface that swarm members travel on, controlling the overall swarm geometry and the individual member spacing. Limiting functions are defined to provide tighter swarm control by modifying and adjusting a set of control variables, forcing the swarm to behave according to set constraints on formation and member spacing. The swarm function and limiting functions are combined to control swarm formation, orientation, and movement of the swarm as a whole. Parameters are chosen based on the desired formation as well as user-defined constraints. Compared to other approaches, this one is simple, computationally efficient, and scales well to different swarm sizes, to heterogeneous systems, and to both centralized and decentralized swarm models. Simulation results are presented for swarms of four and ten particles following circle, ellipse, and wedge formations. Experimental results with four unmanned ground vehicles (UGVs) are also included.

129 citations
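The following is a minimal sketch, not taken from the paper, of the general idea in the abstract above: each swarm member descends a potential surface with a normal-function well around the desired formation, Gaussian repulsion between members for spacing, and a sigmoid-style limiting function that caps step length. The circle formation, the particular function shapes, and all constants are illustrative assumptions rather than the paper's scheme.

```python
import numpy as np

# Illustrative constants (assumptions, not values from the paper).
RADIUS = 5.0         # desired radius of the circle formation
WELL_SIGMA = 2.0     # width of the normal-function well around the circle
SPACING_SIGMA = 1.0  # width of the Gaussian repulsion between members
STEP = 0.1           # gradient-descent step size
MAX_STEP = 0.2       # cap enforced by the sigmoid-style limiting function

def formation_potential(p, others):
    """Potential felt by one member at 2-D position p: a normal-function
    well that is lowest on the circle of radius RADIUS, plus Gaussian
    bumps centred on the other members to maintain spacing."""
    r = np.linalg.norm(p)
    attract = -np.exp(-((r - RADIUS) ** 2) / (2 * WELL_SIGMA ** 2))
    repel = sum(np.exp(-np.sum((p - q) ** 2) / (2 * SPACING_SIGMA ** 2))
                for q in others)
    return attract + repel

def numeric_gradient(p, others, h=1e-4):
    """Central-difference gradient of the potential at p."""
    g = np.zeros(2)
    for i in range(2):
        dp = np.zeros(2)
        dp[i] = h
        g[i] = (formation_potential(p + dp, others)
                - formation_potential(p - dp, others)) / (2 * h)
    return g

def limit_step(step, max_len=MAX_STEP):
    """Sigmoid-style limiting function: smoothly caps the step length."""
    n = np.linalg.norm(step)
    return step if n == 0 else step * (max_len * np.tanh(n / max_len) / n)

# Four members descend the combined surface into a circle formation.
rng = np.random.default_rng(0)
positions = rng.uniform(-2.0, 2.0, size=(4, 2))
for _ in range(1000):
    updated = []
    for i, p in enumerate(positions):
        others = [positions[j] for j in range(len(positions)) if j != i]
        updated.append(p + limit_step(-STEP * numeric_gradient(p, others)))
    positions = np.array(updated)

print(np.linalg.norm(positions, axis=1))  # radii should settle near RADIUS
```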

Journal ArticleDOI
TL;DR: This paper presents a genetic algorithm with a very small population and a reinitialization process (a micro-genetic algorithm) for solving multiobjective optimization problems; the results indicate that this approach is very efficient and performs very well on problems with different degrees of complexity.
Abstract: In this paper, we present a genetic algorithm with a very small population and a reinitialization process (a microgenetic algorithm) for solving multiobjective optimization problems. Our approach uses three forms of elitism, including an external memory (or secondary population) to keep the nondominated solutions found along the evolutionary process. We validate our proposal using several engineering optimization problems taken from the specialized literature and compare our results with respect to two other algorithms (NSGA-II and PAES) using three different metrics. Our results indicate that our approach is very efficient (computationally speaking) and performs very well in problems with different degrees of complexity.

129 citations
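A minimal sketch, not from the paper, of the micro-GA ingredients named in the abstract above: a very small population, periodic reinitialization, and an external memory (archive) keeping the nondominated solutions found so far. The test problem (Schaffer's two-objective function), the blend crossover, and all parameters are illustrative assumptions; the paper's exact elitism scheme and its engineering test problems are not reproduced.

```python
import random

POP_SIZE = 5        # "micro" population
GENERATIONS = 200
REINIT_EVERY = 25   # restart interval (illustrative)

def objectives(x):
    """Toy two-objective problem (Schaffer's F1), not from the paper."""
    return (x ** 2, (x - 2) ** 2)

def dominates(a, b):
    fa, fb = objectives(a), objectives(b)
    return all(u <= v for u, v in zip(fa, fb)) and any(u < v for u, v in zip(fa, fb))

def update_archive(archive, x):
    """External memory: keep only the nondominated solutions found so far."""
    if any(dominates(a, x) for a in archive):
        return archive
    archive = [a for a in archive if not dominates(x, a)]
    archive.append(x)
    return archive

random.seed(1)
archive = []
population = [random.uniform(-5.0, 5.0) for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    # Blend crossover plus Gaussian mutation on a tiny population.
    offspring = []
    for _ in range(POP_SIZE):
        p1, p2 = random.sample(population, 2)
        offspring.append(0.5 * (p1 + p2) + random.gauss(0, 0.1))
    population = offspring
    for x in population:
        archive = update_archive(archive, x)
    # Reinitialization: periodically restart around archived elite solutions.
    if (gen + 1) % REINIT_EVERY == 0 and archive:
        population = [random.choice(archive) + random.gauss(0, 0.5)
                      for _ in range(POP_SIZE)]

print(f"{len(archive)} nondominated solutions, e.g. {sorted(archive)[:3]}")
```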

Journal ArticleDOI
01 Mar 2015
TL;DR: A new adaptive inertia weight adjustment approach based on Bayesian techniques is proposed for PSO, used to set up a sound tradeoff between exploration and exploitation; compared with other types of improved PSO algorithms, it also performs well.
Abstract: Graphical abstract: a new particle swarm optimization algorithm based on Bayesian techniques (BPSO) is proposed. The figures compare different inertia weight strategies and different PSO methods for f5 on 10 dimensions, show how the parameter s (the interval between two adjacent inertia weight changes over the iterations) affects the convergence rate on the test function, and plot the change of ω over the iterations.
Highlights: the paper explains why BPSO can achieve an excellent balance between exploration and exploitation during optimization; proposes a new algorithm with an adaptive inertia weight based on Bayesian techniques to overcome a defect of ordinary PSO; and analyzes the parameters s and ω of BPSO.
Particle swarm optimization is a stochastic population-based algorithm inspired by the social interaction of bird flocking and fish schooling. In this paper, a new adaptive inertia weight adjustment approach based on Bayesian techniques is proposed for PSO, which is used to set up a sound tradeoff between exploration and exploitation. It applies Bayesian techniques to enhance PSO's ability to exploit past particle positions and uses Cauchy mutation to explore for better solutions. A suite of benchmark functions is employed to test the performance of the proposed method. The results demonstrate that the new method exhibits higher accuracy and a faster convergence rate than other inertia weight adjusting methods on multimodal and unimodal functions. Furthermore, to show the generalization ability of the BPSO method, it is compared with other types of improved PSO algorithms, against which it also performs well.

129 citations
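The abstract above does not spell out the paper's Bayesian inertia-weight rule, so the sketch below only illustrates the surrounding machinery: a standard global-best PSO loop, an adaptive inertia weight driven by a simple improvement-rate heuristic standing in for the Bayesian update, and a Cauchy mutation of the global best for exploration. The benchmark, constants, and the stand-in weight rule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
DIM, SWARM, ITERS = 10, 30, 200
C1 = C2 = 2.0
W_MIN, W_MAX = 0.4, 0.9

def sphere(x):
    """Toy unimodal benchmark; the paper uses a full benchmark suite."""
    return float(np.sum(x ** 2))

pos = rng.uniform(-5.0, 5.0, (SWARM, DIM))
vel = np.zeros((SWARM, DIM))
pbest, pbest_val = pos.copy(), np.array([sphere(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()
gbest_val = float(pbest_val.min())

w = W_MAX
for _ in range(ITERS):
    r1, r2 = rng.random((SWARM, DIM)), rng.random((SWARM, DIM))
    vel = w * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([sphere(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    if pbest_val.min() < gbest_val:
        gbest = pbest[np.argmin(pbest_val)].copy()
        gbest_val = float(pbest_val.min())

    # Stand-in for the paper's Bayesian inertia update: lower the weight
    # when many particles are improving (exploit the current region) and
    # raise it when progress stalls (explore more widely).
    w = W_MAX - (W_MAX - W_MIN) * improved.mean()

    # Cauchy mutation of the global best, as mentioned in the abstract.
    trial = gbest + 0.1 * rng.standard_cauchy(DIM)
    if sphere(trial) < gbest_val:
        gbest, gbest_val = trial, sphere(trial)

print(f"best value after {ITERS} iterations: {gbest_val:.3e}")
```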

Journal ArticleDOI
Gang Xu
TL;DR: Experimental results show that the proposed adaptive parameter tuning of particle swarm optimization based on velocity information (APSO-VI) remarkably improves the ability of PSO to escape local optima and significantly enhances convergence speed and precision.

129 citations
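Only the TL;DR is shown for this paper, so the following is just one plausible reading of "adaptive parameter tuning based on velocity information", sketched under assumptions: the inertia weight is nudged by comparing the swarm's average speed against a decaying target speed. The schedule, bounds, and step size below are illustrative, not the paper's.

```python
import numpy as np

W_MIN, W_MAX = 0.3, 0.9
DELTA = 0.1  # step by which the inertia weight is nudged (assumed value)

def ideal_speed(t, t_max, v_start):
    """Assumed target: the average particle speed decays linearly toward 0."""
    return v_start * (1.0 - t / t_max)

def adapt_inertia(w, velocities, t, t_max, v_start):
    """Nudge the inertia weight using velocity information: if the swarm is
    moving faster than the target speed, reduce w to slow it down (more
    exploitation); if slower, raise w (more exploration)."""
    avg_speed = np.mean(np.abs(velocities))
    if avg_speed >= ideal_speed(t, t_max, v_start):
        return max(W_MIN, w - DELTA)
    return min(W_MAX, w + DELTA)

# Example: the swarm is moving quickly early on, so w is decreased.
vel = np.random.default_rng(0).normal(0, 1.0, size=(30, 10))
print(adapt_inertia(0.9, vel, t=10, t_max=200, v_start=0.5))
```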

Journal ArticleDOI
TL;DR: The proposed QUATRE algorithm is a swarm-based algorithm that uses a quasi-affine transformation approach for evolution; it shows excellent performance not only on unimodal functions but also on multimodal functions, even on higher-dimensional optimization problems.
Abstract: This paper presents a novel evolutionary approach named the QUasi-Affine TRansformation Evolutionary (QUATRE) algorithm, which is a swarm-based algorithm that uses a quasi-affine transformation approach for evolution. The paper also discusses the relation between the QUATRE algorithm and other kinds of swarm-based algorithms, including Particle Swarm Optimization (PSO) variants and Differential Evolution (DE) variants. Comparisons and contrasts are made among the proposed QUATRE algorithm, state-of-the-art PSO variants, and DE variants under the CEC2013 test suite on real-parameter optimization and the CEC2008 test suite on large-scale optimization. Experimental results show that our algorithm outperforms the other algorithms not only on real-parameter optimization but also on large-scale optimization. Moreover, our algorithm has a strongly cooperative property that can, to some extent, reduce time complexity: better performance can be achieved by reducing the number of generations required to reach a target optimum, increasing the particle population size while keeping the total number of function evaluations unchanged. In general, the proposed algorithm has excellent performance not only on unimodal functions but also on multimodal functions, even on higher-dimensional optimization problems.

129 citations
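A minimal sketch, under assumptions, of a quasi-affine-transformation-style update as described at a high level in the abstract above: each generation, a binary cooperation matrix M decides, coordinate by coordinate, whether a particle keeps its own value or takes it from a DE-like donor built around the global best. The matrix construction, donor formula, benchmark, and constants here are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
POP, DIM, ITERS, F = 20, 10, 300, 0.7

def rastrigin(x):
    """Multimodal test function (illustrative, not the CEC suites)."""
    return float(10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def cooperation_matrix(pop, dim):
    """Binary matrix M built from a lower-triangular matrix of ones with
    randomly permuted rows and columns (an assumed construction)."""
    base = np.tril(np.ones((pop, dim)))
    base = base[rng.permutation(pop)]       # shuffle rows
    return base[:, rng.permutation(dim)]    # shuffle columns

X = rng.uniform(-5.12, 5.12, (POP, DIM))
fit = np.array([rastrigin(x) for x in X])
for _ in range(ITERS):
    gbest = X[np.argmin(fit)]
    r1, r2 = rng.permutation(POP), rng.permutation(POP)
    # Donor matrix B: global best perturbed by a scaled difference of peers,
    # in the spirit of DE mutation.
    B = gbest + F * (X[r1] - X[r2])
    M = cooperation_matrix(POP, DIM)
    # Quasi-affine transformation: keep some coordinates, replace the rest.
    trial = M * X + (1 - M) * B
    trial_fit = np.array([rastrigin(x) for x in trial])
    better = trial_fit < fit
    X[better], fit[better] = trial[better], trial_fit[better]

print(f"best value found: {fit.min():.3f}")
```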


Network Information
Related Topics (5)
Fuzzy logic
151.2K papers, 2.3M citations
88% related
Optimization problem
96.4K papers, 2.1M citations
87% related
Support vector machine
73.6K papers, 1.7M citations
86% related
Artificial neural network
207K papers, 4.5M citations
85% related
Robustness (computer science)
94.7K papers, 1.6M citations
83% related
Performance Metrics
No. of papers in the topic in previous years
Year    Papers
2023    183
2022    471
2021    10
2020    7
2019    26
2018    171