Proceedings ArticleDOI

DOSI: Training artificial neural networks using overlapping swarm intelligence with local credit assignment

TLDR
An overlapping swarm intelligence algorithm is proposed for training neural networks in which a particle swarm is assigned to each neuron to search for that neuron's weights, with particle fitness evaluated using a localized network.
Abstract
A novel swarm-based algorithm is proposed for the training of artificial neural networks. Training of such networks is a difficult problem that requires an effective search algorithm to find optimal weight values. While gradient-based methods, such as backpropagation, are frequently used to train multilayer feedforward neural networks, such methods may not yield a globally optimal solution. To overcome the limitations of gradient-based methods, evolutionary algorithms have been used to train these networks with some success. This paper proposes an overlapping swarm intelligence algorithm for training neural networks in which a particle swarm is assigned to each neuron to search for that neuron's weights. Unlike similar architectures, our approach does not require a shared global network for fitness evaluation. Thus the approach discussed in this paper localizes the credit assignment process by first focusing on updating weights within local swarms and then evaluating the fitness of the particles using a localized network. This has the advantage of enabling our algorithm's learning process to be fully distributed.
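To make the scheme concrete, here is a minimal, hypothetical sketch of per-neuron swarm training (not the authors' code): each hidden and output neuron gets its own particle swarm over that neuron's incoming weights, and each swarm's best particle is published into a shared consensus network. Evaluating candidates against that consensus network is a simplification of the paper's localized-network fitness evaluation; all names and constants below are assumptions.

```python
# Hypothetical sketch of per-neuron swarm training (not the authors' code).
# Assumptions: a 2-layer feedforward network on a toy regression task;
# fitness of a candidate is measured by substituting it into the current
# consensus network, a simplification of the paper's localized evaluation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (64, 2))                  # toy inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]              # toy targets

H = 4                                            # hidden neurons

def forward(W1, W2, X):
    return np.tanh(X @ W1) @ W2                  # tanh hidden layer, linear output

def loss(W1, W2):
    return np.mean((forward(W1, W2, X) - y) ** 2)

# One swarm per hidden neuron (columns of W1) plus one for the output neuron.
P, iters = 10, 200
W1 = rng.normal(size=(2, H))
W2 = rng.normal(size=H)

swarms = []  # each entry: [positions, velocities, personal bests, best fitnesses]
for dim in [2] * H + [H]:
    pos = rng.normal(size=(P, dim))
    swarms.append([pos, np.zeros((P, dim)), pos.copy(), np.full(P, np.inf)])

def fitness(j, w):
    # Substitute candidate w for neuron j's weights in the consensus network.
    W1c, W2c = W1.copy(), W2.copy()
    if j < H:
        W1c[:, j] = w
    else:
        W2c = w
    return loss(W1c, W2c)

w_inertia, c1, c2 = 0.729, 1.49, 1.49            # standard PSO constants (assumed)
for _ in range(iters):
    for j, (pos, vel, pb, pbf) in enumerate(swarms):
        fit = np.array([fitness(j, p) for p in pos])
        better = fit < pbf
        pb[better], pbf[better] = pos[better], fit[better]
        g = pb[np.argmin(pbf)]                   # this swarm's best particle
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel[:] = w_inertia * vel + c1 * r1 * (pb - pos) + c2 * r2 * (g - pos)
        pos += vel
        if j < H:
            W1[:, j] = g                         # publish best into consensus net
        else:
            W2[:] = g

print("final MSE:", loss(W1, W2))
```

Because each swarm only searches one neuron's weight vector, the swarms can in principle run in parallel, which is the distributed-learning property the abstract highlights.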


Citations
Journal ArticleDOI

Factored Evolutionary Algorithms

TL;DR: A formal definition of FEA algorithms is given and empirical results on their performance are presented; FEA versions of hill climbing, particle swarm optimization, the genetic algorithm, and differential evolution are compared to their single-population and cooperative coevolutionary counterparts, showing that FEA's performance is not restricted by the underlying optimization algorithm.
Journal ArticleDOI

Abductive inference in Bayesian networks using distributed overlapping swarm intelligence

TL;DR: Several multi-swarm algorithms based on the overlapping swarm intelligence framework to find approximate solutions to the problems of full and partial abductive inference in Bayesian belief networks are proposed.
Journal ArticleDOI

Wavelet neural networks using particle swarm optimization training in modeling regional ionospheric total electron content

TL;DR: Comparison of diurnal predicted TEC values from the WNN-PSO, SNN-BP, SNF, NN, SN by PSO training algorithm, and WNN by BP training algorithm models with GPS TEC revealed that the WNN-PSO provides more accurate predictions than the other methods in the test area.
Journal ArticleDOI

Ionosphere tomography using wavelet neural network and particle swarm optimization training algorithm in Iranian case study

TL;DR: A wavelet neural network with a particle swarm optimization training algorithm is proposed to solve pixel-based ionospheric tomography with a neural network (ITNN); results show that the proposed approach is superior to the traditional methods.
Proceedings ArticleDOI

Learning Bayesian classifiers using overlapping swarm intelligence

TL;DR: This paper proposes a novel approximation algorithm for learning Bayesian network classifiers based on Overlapping Swarm Intelligence and indicates that, in many cases, this algorithm significantly outperforms competing approaches, including traditional particle swarm optimization.
References
Journal ArticleDOI

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition; such gradient-trained networks can synthesize a complex decision surface capable of classifying high-dimensional patterns like handwritten characters.
Proceedings ArticleDOI

Particle swarm optimization

TL;DR: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced, the evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed.
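For context, here is a minimal sketch of the velocity and position update this paradigm popularized, minimizing a simple sphere function. The inertia weight shown is a later refinement (Shi and Eberhart, 1998), not part of the original 1995 formulation; all names and constants are illustrative.

```python
# Minimal PSO sketch minimizing a sphere function (names are illustrative).
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sum(x ** 2, axis=-1)            # objective to minimize

P, D, iters = 20, 5, 300                         # particles, dimensions, steps
pos = rng.uniform(-5, 5, (P, D))
vel = np.zeros((P, D))
pbest, pbest_fit = pos.copy(), f(pos)            # personal bests

w, c1, c2 = 0.729, 1.49, 1.49                    # inertia and attraction weights
for _ in range(iters):
    gbest = pbest[np.argmin(pbest_fit)]          # global best position
    r1, r2 = rng.random((P, D)), rng.random((P, D))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    fit = f(pos)
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]

print("best value:", pbest_fit.min())            # approaches 0
```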
Journal ArticleDOI

A fast learning algorithm for deep belief nets

TL;DR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
Proceedings ArticleDOI

Comparing inertia weights and constriction factors in particle swarm optimization

TL;DR: It is concluded that the best approach is to use the constriction factor while limiting the maximum velocity Vmax to the dynamic range of the variable Xmax on each dimension.
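For reference, the constriction factor referred to here is the Clerc-Kennedy coefficient; a small sketch with the commonly cited default values (assumed, not taken from this page):

```python
# Clerc-Kennedy constriction factor (standard formulation; the values are
# the commonly cited defaults, not taken from this page).
from math import sqrt

phi = 4.1                                        # c1 + c2, must exceed 4
chi = 2 / abs(2 - phi - sqrt(phi * phi - 4 * phi))
print(round(chi, 4))                             # ~0.7298

# Velocity update with constriction, clamping Vmax to the variable's
# dynamic range Xmax as the paper recommends:
#   v = chi * (v + c1*r1*(pbest - x) + c2*r2*(gbest - x))
#   v = max(-Xmax, min(Xmax, v))
```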