Topic

Multi-swarm optimization

About: Multi-swarm optimization is a research topic. Over its lifetime, 19,162 publications have been published within this topic, receiving 549,725 citations.


Papers
Proceedings ArticleDOI
12 May 2002
TL;DR: This paper introduces a spatial extension to particles in the PSO model in order to overcome premature convergence in iterative optimisation, and shows that the resulting SEPSO indeed maintains diversity in the search space and yields superior results.
Abstract: In this paper, we introduce a spatial extension to particles in the PSO model in order to overcome premature convergence in iterative optimisation. The standard PSO and the new model (SEPSO) are compared w.r.t. performance on well-studied benchmark problems. We show that the SEPSO indeed managed to keep diversity in the search space and yielded superior results.

280 citations
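The spatial-extension idea above can be sketched in code: each particle is given a radius, and particles that come too close "bounce" (here, by velocity reversal) so the swarm cannot collapse onto one point. This is an illustrative sketch, not the paper's implementation; the bounds, coefficients, radius, and bounce strategy are assumptions.

```python
import numpy as np

def sepso(f, dim=2, n=20, iters=200, radius=0.05, seed=0):
    """Minimal PSO with a 'spatial extension': particles that collide
    bounce (velocity reversal) to preserve diversity. Illustrative only;
    parameter choices are assumptions, not the paper's settings."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))          # positions
    v = rng.uniform(-1, 1, (n, dim))          # velocities
    pbest = x.copy()                          # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()      # global best

    w, c1, c2 = 0.72, 1.49, 1.49              # common PSO coefficients
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        # spatial extension: if two particles come within 2*radius, bounce
        for i in range(n):
            for j in range(i + 1, n):
                if np.linalg.norm(x[i] - x[j]) < 2 * radius:
                    v[i], v[j] = -v[i], -v[j]  # reverse both velocities
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best, val = sepso(lambda p: (p ** 2).sum())   # sphere benchmark
print(best, val)                              # best point found and its value
```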

Proceedings ArticleDOI
12 Dec 2005
TL;DR: This paper reports the performance of a modified dynamic multi-swarm particle swarm optimizer (DMS-PSO) on the set of benchmark functions provided by CEC2005; the quasi-Newton method is combined with it to improve its local search ability.
Abstract: In this paper, the performance of a modified dynamic multi-swarm particle swarm optimizer (DMS-PSO) on the set of benchmark functions provided by CEC2005 is reported. Different from the existing multi-swarm PSOs and local versions of PSO, the swarms are dynamic and their size is small. The whole population is divided into many small swarms; these swarms are regrouped frequently using various regrouping schedules, and information is exchanged among the swarms. The quasi-Newton method is combined to improve the local search ability.

280 citations
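The core DMS-PSO mechanism described above — many small swarms, each following its own local best, with the membership reshuffled every few iterations so information flows between swarms — can be sketched as follows. The swarm sizes, coefficients, and regrouping period are illustrative assumptions; the paper's quasi-Newton refinement step is noted but omitted.

```python
import random

def dms_pso(f, dim=2, n_swarms=5, swarm_size=3, iters=300, regroup=10, seed=1):
    """Sketch of dynamic multi-swarm PSO: several small swarms, each guided
    by its own swarm-local best, randomly regrouped every `regroup`
    iterations. Parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    n = n_swarms * swarm_size
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in x]                  # personal bests
    pval = [f(p) for p in x]
    order = list(range(n))                     # grouping: consecutive chunks

    w, c1, c2 = 0.72, 1.49, 1.49
    for t in range(iters):
        if t % regroup == 0:
            rng.shuffle(order)                 # dynamic regrouping step
        for s in range(n_swarms):
            members = order[s*swarm_size:(s+1)*swarm_size]
            lbest = min(members, key=lambda i: pval[i])  # swarm-local best
            for i in members:
                for d in range(dim):
                    v[i][d] = (w*v[i][d]
                               + c1*rng.random()*(pbest[i][d] - x[i][d])
                               + c2*rng.random()*(pbest[lbest][d] - x[i][d]))
                    x[i][d] += v[i][d]
                val = f(x[i])
                if val < pval[i]:
                    pval[i], pbest[i] = val, x[i][:]
    # the paper further refines promising points with a quasi-Newton
    # local search; that step is omitted in this sketch
    best = min(range(n), key=lambda i: pval[i])
    return pbest[best], pval[best]

pos, val = dms_pso(lambda p: sum(c*c for c in p))  # sphere benchmark
print(pos, val)
```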

Journal ArticleDOI
TL;DR: A method for parameter meta-optimization based on PSO, and its application to neural network training to build a quantitative model for predicting blood-brain barrier permeation of small organic molecules, is presented; the results suggest that PSO performance can be improved if meta-optimized parameter sets are applied.
Abstract: Particle Swarm Optimization (PSO) is an established method for parameter optimization. It represents a population-based adaptive optimization technique that is influenced by several "strategy parameters". Choosing reasonable parameter values for the PSO is crucial for its convergence behavior, and depends on the optimization task. We present a method for parameter meta-optimization based on PSO and its application to neural network training. The concept of the Optimized Particle Swarm Optimization (OPSO) is to optimize the free parameters of the PSO by having swarms within a swarm. We assessed the performance of the OPSO method on a set of five artificial fitness functions and compared it to the performance of two popular PSO implementations. Our results indicate that PSO performance can be improved if meta-optimized parameter sets are applied. In addition, we could improve optimization speed and quality compared with the other PSO methods in the majority of our experiments. We applied the OPSO method to neural network training with the aim to build a quantitative model for predicting blood-brain barrier permeation of small organic molecules. On average, training time decreased by a factor of four and two in comparison to the other PSO methods, respectively. By applying the OPSO method, a prediction model showing good correlation with training, test, and validation data was obtained. Optimizing the free parameters of the PSO method can result in performance gain. The OPSO approach yields parameter combinations improving overall optimization performance. Its conceptual simplicity makes implementing the method a straightforward task.

279 citations
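The "swarms within a swarm" concept can be illustrated with a two-level sketch: an inner PSO whose strategy parameters (w, c1, c2) are chosen by an outer meta-search, scored by the best value the inner run achieves. For brevity this sketch uses an outer random search rather than a full outer swarm, which is a simplification of OPSO; all parameter ranges and budgets are assumptions.

```python
import random

def inner_pso(f, params, dim=2, n=10, iters=50, rng=None):
    """Plain gbest PSO run with strategy parameters (w, c1, c2);
    returns the best objective value found."""
    w, c1, c2 = params
    rng = rng or random.Random(0)
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest, pval = [p[:] for p in x], [f(p) for p in x]
    g = min(range(n), key=lambda i: pval[i])
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                v[i][d] = (w*v[i][d]
                           + c1*rng.random()*(pbest[i][d] - x[i][d])
                           + c2*rng.random()*(pbest[g][d] - x[i][d]))
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pval[i]:
                pval[i], pbest[i] = val, x[i][:]
        g = min(range(n), key=lambda i: pval[i])
    return pval[g]

def meta_optimize(f, meta_iters=15, seed=2):
    """Meta-level sketch: search over the inner PSO's free parameters.
    OPSO uses an outer swarm here; random search keeps the sketch short."""
    rng = random.Random(seed)
    best_params, best_val = None, float("inf")
    for _ in range(meta_iters):
        params = (rng.uniform(0.1, 1.0),     # inertia weight w
                  rng.uniform(0.5, 2.5),     # cognitive coefficient c1
                  rng.uniform(0.5, 2.5))     # social coefficient c2
        val = inner_pso(f, params, rng=random.Random(rng.randrange(10**6)))
        if val < best_val:
            best_params, best_val = params, val
    return best_params, best_val

params, val = meta_optimize(lambda p: sum(c*c for c in p))  # sphere benchmark
print(params, val)
```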

Proceedings ArticleDOI
11 Sep 2006
TL;DR: An extension of the Self-adaptive Differential Evolution algorithm (SaDE) to solve optimization problems with constraints is proposed; in comparison with the original SaDE algorithm, the replacement criterion was modified for handling constraints.
Abstract: In this paper, we propose an extension of Self-adaptive Differential Evolution algorithm (SaDE) to solve optimization problems with constraints. In comparison with the original SaDE algorithm, the replacement criterion was modified for handling constraints. The performance of the proposed method is reported on the set of 24 benchmark problems provided by CEC2006 special session on constrained real parameter optimization.

279 citations

01 Jan 2009
TL;DR: A continuous miner includes a pair of resonant beams for driving a cutting blade rigidly secured to their lower anti-nodes so that they can pivot about both upper and lower nodes, the miner may be operated to make either vertical or horizontal cuts.
Abstract: A continuous miner includes a pair of resonant beams for driving a cutting blade rigidly secured to their lower anti-nodes. By mounting the beams so that they can pivot about both upper and lower nodes, the miner may be operated to make either vertical or horizontal cuts. When making vertical cuts, the beams are pivoted upward about their upper nodes to drive the blade in an upward arc in front of the miner, the blade being rearwardly off-set from the center line of the beams to reduce blade drag. To make horizontal cuts, the beams are maintained substantially vertically as the miner is driven forward. The precise angle of the blade relative to the ground may be adjusted to reduce blade drag by tilting the beams an appropriate amount about their lower nodes.

278 citations


Network Information
Related Topics (5)
Fuzzy logic: 151.2K papers, 2.3M citations, 88% related
Optimization problem: 96.4K papers, 2.1M citations, 87% related
Support vector machine: 73.6K papers, 1.7M citations, 86% related
Artificial neural network: 207K papers, 4.5M citations, 85% related
Robustness (computer science): 94.7K papers, 1.6M citations, 83% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    183
2022    471
2021    10
2020    7
2019    26
2018    171