Proceedings ArticleDOI
Particle swarms for feedforward neural network training
Rui Mendes, Paulo Cortez, Miguel Rocha, José Neves +3 more
- Vol. 2, pp 1895-1899
TL;DR: Particle swarm is an optimization paradigm for real-valued functions, based on the social dynamics of group interaction, and its application to the training of neural networks is proposed.
Abstract: Particle swarm is an optimization paradigm for real-valued functions, based on the social dynamics of group interaction. We propose its application to the training of neural networks. Comparative tests were carried out for classification and regression tasks.
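The training scheme the abstract describes, using a particle swarm to optimize a feedforward network's weights, can be sketched as follows. This is a minimal illustration assuming the standard global-best PSO velocity update (inertia plus cognitive and social terms) applied to a toy XOR task; the network size, swarm size, and coefficients are illustrative choices, not the values used in the paper.

```python
# Sketch: global-best PSO training the weights of a 2-2-1 sigmoid network
# on XOR. Hyperparameters are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(w, X):
    # Unpack a flat 9-dimensional weight vector into the 2-2-1 network.
    W1 = w[:4].reshape(2, 2)   # input -> hidden weights
    b1 = w[4:6]                # hidden biases
    W2 = w[6:8]                # hidden -> output weights
    b2 = w[8]                  # output bias
    h = sigmoid(X @ W1 + b1)
    return sigmoid(h @ W2 + b2)

def mse(w):
    return np.mean((forward(w, X) - y) ** 2)

n_particles, dim, iters = 30, 9, 300
w_inertia, c1, c2 = 0.729, 1.494, 1.494   # common constriction-style values

pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_err = np.array([mse(p) for p in pos])
gbest = pbest[pbest_err.argmin()].copy()

for _ in range(iters):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    err = np.array([mse(p) for p in pos])
    improved = err < pbest_err
    pbest[improved] = pos[improved]
    pbest_err[improved] = err[improved]
    gbest = pbest[pbest_err.argmin()].copy()

print(mse(gbest))  # should fall well below the 0.25 error of a constant guess
```

Because the swarm only needs fitness evaluations, this approach requires no gradient of the error function, which is the main practical contrast with backpropagation.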
Citations
Journal ArticleDOI
CALYPSO: A method for crystal structure prediction
TL;DR: This paper describes the implementation of the CALYPSO code and why it works; testing on many known and unknown systems shows high efficiency.
Journal ArticleDOI
A study of particle swarm optimization particle trajectories
TL;DR: Current theoretical studies on particle swarm optimization are extended to investigate particle trajectories for general swarms, including the influence of the inertia term, and a formal proof that each particle converges to a stable point is provided.
Journal ArticleDOI
A hybrid of genetic algorithm and particle swarm optimization for recurrent network design
TL;DR: An evolutionary recurrent network is proposed that automates the design of recurrent neural/fuzzy networks using a new evolutionary learning algorithm based on a hybrid of genetic algorithm (GA) and particle swarm optimization (PSO), and is thus called HGAPSO.
Journal ArticleDOI
Particle swarm optimization versus genetic algorithms for phased array synthesis
D.W. Boeringer, Douglas H. Werner +1 more
TL;DR: The particle swarm optimizer shares the genetic algorithm's ability to handle arbitrary nonlinear cost functions, but with a much simpler implementation; it clearly demonstrates good potential for widespread use in electromagnetic optimization.
Journal ArticleDOI
Optimizing connection weights in neural networks using the whale optimization algorithm
TL;DR: The qualitative and quantitative results prove that the proposed WOA-based trainer is able to outperform the current algorithms on the majority of datasets in terms of both local optima avoidance and convergence speed.
References
Proceedings ArticleDOI
Particle swarm optimization
TL;DR: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced, the evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed.
Book ChapterDOI
Learning internal representations by error propagation
TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Journal ArticleDOI
The particle swarm - explosion, stability, and convergence in a multidimensional complex space
M. Clerc, James Kennedy +1 more
TL;DR: This paper analyzes a particle's trajectory as it moves in discrete time, then progresses to the view of it in continuous time, leading to a generalized model of the algorithm, containing a set of coefficients to control the system's convergence tendencies.
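The convergence-controlling coefficients this reference introduces are often summarized by the constriction coefficient, computed from the sum of the acceleration coefficients. A brief sketch of that computation, using the commonly cited phi = 4.1 purely for illustration:

```python
# Sketch of the Clerc–Kennedy constriction coefficient: chi scales the
# whole velocity update and guarantees convergent trajectories for phi > 4.
# phi = c1 + c2 = 4.1 is a conventional illustrative choice.
import math

def constriction(phi):
    assert phi > 4, "the constriction analysis requires phi > 4"
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

chi = constriction(4.1)
print(round(chi, 4))  # ≈ 0.7298
```

Multiplying the velocity update by chi (about 0.7298 here) is what yields the inertia-like values often quoted in PSO implementations.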
Proceedings ArticleDOI
Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance
TL;DR: The study manipulated the neighborhood topologies of particle swarms optimizing four test functions; sociometric structure and the small-world manipulation interacted with the test function to produce a significant effect on performance.
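The neighborhood topologies this reference studies replace the single global best with a per-particle neighborhood best. A minimal sketch of the ring (lbest) topology, assuming each particle's neighborhood is itself plus its two adjacent particles on a ring; the function name and inputs are illustrative:

```python
# Sketch: selecting each particle's neighborhood best under a ring (lbest)
# topology, where particle i sees only particles i-1, i, i+1 on a ring.
import numpy as np

def ring_best(pbest_err):
    # For each particle i, return the index of the lowest personal-best
    # error among {i-1, i, i+1} (indices wrap around the ring).
    n = len(pbest_err)
    best = np.empty(n, dtype=int)
    for i in range(n):
        neigh = [(i - 1) % n, i, (i + 1) % n]
        best[i] = min(neigh, key=lambda j: pbest_err[j])
    return best

print(ring_best(np.array([3.0, 1.0, 2.0, 0.5])))  # -> [3 1 3 3]
```

Using the neighborhood best in place of the global best slows the spread of information through the swarm, which is the mechanism behind the topology effects the study reports.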