Author

Nathan Fortier

Bio: Nathan Fortier is an academic researcher from Montana State University. The author has contributed to research in the areas of swarm intelligence and Bayesian networks. The author has an h-index of 6, having co-authored 12 publications that have received 128 citations. Previous affiliations of Nathan Fortier include Montana Tech of the University of Montana.

Papers
Journal ArticleDOI
TL;DR: A formal definition of FEA algorithms is given, and empirical results on their performance are presented. FEA versions of hill climbing, particle swarm optimization, genetic algorithms, and differential evolution are compared against their single-population and cooperative coevolutionary counterparts, showing that FEA's performance is not restricted by the underlying optimization algorithm.
Abstract: Factored evolutionary algorithms (FEAs) are a new class of evolutionary search-based optimization algorithms that have successfully been applied to various problems, such as training neural networks and performing abductive inference in graphical models. An FEA is unique in that it factors the objective function by creating overlapping subpopulations that optimize over a subset of variables of the function. In this paper, we give a formal definition of FEA algorithms and present empirical results related to their performance. One consideration in using an FEA is determining the appropriate factor architecture, which determines the set of variables each factor will optimize. For this reason, we present the results of experiments comparing the performance of different factor architectures on several standard applications for evolutionary algorithms. Additionally, we show that FEA’s performance is not restricted by the underlying optimization algorithm by creating FEA versions of hill climbing, particle swarm optimization, genetic algorithm, and differential evolution and comparing their performance to their single-population and cooperative coevolutionary counterparts.

57 citations
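
The factoring idea described in the abstract above can be illustrated with a minimal sketch: overlapping "factors" each optimize a subset of the variables of a shared objective, and improving moves are merged into a global solution. The function names, the ring-shaped factor architecture, and the simple hill-climbing update below are illustrative assumptions, not the authors' implementation.

```python
import random

def sphere(x):
    """Toy objective to minimize: sum of squares."""
    return sum(v * v for v in x)

def factored_hill_climb(dim=6, iters=200, step=0.1, seed=0):
    rng = random.Random(seed)
    # Overlapping factors: each covers a variable and its neighbor,
    # so every variable is optimized by two different factors.
    factors = [[i, (i + 1) % dim] for i in range(dim)]
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    for _ in range(iters):
        for subset in factors:
            trial = best[:]
            for i in subset:  # perturb only this factor's variables
                trial[i] += rng.uniform(-step, step)
            if sphere(trial) < sphere(best):  # keep improving moves only
                best = trial
    return best

solution = factored_hill_climb()
print(sphere(solution))  # far below the random starting point's value
```

In the papers above the per-factor optimizer can be swapped for PSO, a GA, or differential evolution without changing the overall factored structure.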

Journal ArticleDOI
01 Apr 2015
TL;DR: Several multi-swarm algorithms based on the overlapping swarm intelligence framework to find approximate solutions to the problems of full and partial abductive inference in Bayesian belief networks are proposed.
Abstract: In this paper we propose several approximation algorithms for the problems of full and partial abductive inference in Bayesian belief networks. Full abductive inference is the problem of finding the k most probable state assignments to all non-evidence variables in the network, while partial abductive inference is the problem of finding the k most probable state assignments for a subset of the non-evidence variables in the network, called the explanation set. We developed several multi-swarm algorithms based on the overlapping swarm intelligence framework to find approximate solutions to these problems. For full abductive inference, a swarm is associated with each node in the network. For partial abductive inference, a swarm is associated with each node in the explanation set and each node in the Markov blankets of the explanation set variables. Each swarm learns the value assignments for the variables in the Markov blanket associated with that swarm's node. Swarms learning state assignments for the same variable compete for inclusion in the final solution.

24 citations

Proceedings ArticleDOI
01 Nov 2012
TL;DR: An overlapping swarm intelligence algorithm is proposed for training neural networks, in which a particle swarm is assigned to each neuron to search for that neuron's weights, with the fitness of the particles evaluated using a localized network.
Abstract: A novel swarm-based algorithm is proposed for the training of artificial neural networks. Training of such networks is a difficult problem that requires an effective search algorithm to find optimal weight values. While gradient-based methods, such as backpropagation, are frequently used to train multilayer feedforward neural networks, such methods may not yield a globally optimal solution. To overcome the limitations of gradient-based methods, evolutionary algorithms have been used to train these networks with some success. This paper proposes an overlapping swarm intelligence algorithm for training neural networks in which a particle swarm is assigned to each neuron to search for that neuron's weights. Unlike similar architectures, our approach does not require a shared global network for fitness evaluation. Thus the approach discussed in this paper localizes the credit assignment process by first focusing on updating weights within local swarms and then evaluating the fitness of the particles using a localized network. This has the advantage of enabling our algorithm's learning process to be fully distributed.

20 citations

Proceedings ArticleDOI
01 Dec 2014
TL;DR: This paper proposes a novel approximation algorithm for learning Bayesian network classifiers based on Overlapping Swarm Intelligence and indicates that, in many cases, this algorithm significantly outperforms competing approaches, including traditional particle swarm optimization.
Abstract: Bayesian networks are powerful probabilistic models that have been applied to a variety of tasks. When applied to classification problems, Bayesian networks have shown competitive performance when compared to other state-of-the-art classifiers. However, structure learning of Bayesian networks has been shown to be NP-Hard. In this paper, we propose a novel approximation algorithm for learning Bayesian network classifiers based on Overlapping Swarm Intelligence. In our approach a swarm is associated with each attribute in the data. Each swarm learns the edges for its associated attribute node and swarms that learn conflicting structures compete for inclusion in the final network structure. Our results indicate that, in many cases, Overlapping Swarm Intelligence significantly outperforms competing approaches, including traditional particle swarm optimization.

13 citations

Proceedings ArticleDOI
16 Apr 2013
TL;DR: This paper compares the proposed novel swarm-based algorithm to several other local search algorithms and shows that the approach outperforms the competing methods in its ability to find the k-MPE.
Abstract: Abductive inference in Bayesian networks is the problem of finding the most likely joint assignment to all non-evidence variables in the network. Such an assignment is called the most probable explanation (MPE). A novel swarm-based algorithm is proposed that finds the k-MPE of a Bayesian network. Our approach is an overlapping swarm intelligence algorithm in which a particle swarm is assigned to each node in the network. Each swarm searches for value assignments for its node's Markov blanket. Swarms that have overlapping value assignments compete to determine which assignment will be used in the final solution. In this paper we compare our algorithm to several other local search algorithms and show that our approach outperforms the competing methods in its ability to find the k-MPE.

12 citations
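
The MPE defined in the abstract above is, for intuition, just an argmax over joint assignments. The sketch below computes it by exhaustive enumeration on a tiny hand-made two-variable network A -> B (all probability numbers are illustrative); the swarm-based approximation exists precisely because this brute force is exponential in the number of variables.

```python
import itertools

P_A = {0: 0.6, 1: 0.4}                       # prior P(A)
P_B_given_A = {0: {0: 0.9, 1: 0.1},          # conditional P(B | A)
               1: {0: 0.2, 1: 0.8}}

def joint(a, b):
    """P(A=a, B=b) via the chain rule of the network A -> B."""
    return P_A[a] * P_B_given_A[a][b]

def mpe():
    # Enumerate all joint assignments and keep the most probable one.
    return max(itertools.product([0, 1], repeat=2),
               key=lambda ab: joint(*ab))

print(mpe())  # -> (0, 0): P = 0.6 * 0.9 = 0.54
```

The paper's algorithm instead lets one swarm per node search assignments over that node's Markov blanket, with overlapping swarms competing for the shared variables.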


Cited by
Journal ArticleDOI
01 Sep 1997-Notes

234 citations

Journal ArticleDOI
TL;DR: A comprehensive survey of CCEAs, covering problem decomposition, collaborator selection, individual fitness evaluation, subproblem resource allocation, implementations, benchmark test problems, control parameters, theoretical analyses, and applications is presented.
Abstract: The first cooperative co-evolutionary algorithm (CCEA) was proposed by Potter and De Jong in 1994 and since then many CCEAs have been proposed and successfully applied to solving various complex optimization problems. In applying CCEAs, the complex optimization problem is decomposed into multiple subproblems, and each subproblem is solved with a separate subpopulation, evolved by an individual evolutionary algorithm (EA). Through cooperative co-evolution of multiple EA subpopulations, a complete problem solution is acquired by assembling the representative members from each subpopulation. The underlying divide-and-conquer and collaboration mechanisms enable CCEAs to tackle complex optimization problems efficiently, and hence CCEAs have been attracting wide attention in the EA community. This paper presents a comprehensive survey of these CCEAs, covering problem decomposition, collaborator selection, individual fitness evaluation, subproblem resource allocation, implementations, benchmark test problems, control parameters, theoretical analyses, and applications. The unsolved challenges and potential directions for their solutions are discussed.

183 citations
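
The decompose-and-coevolve loop surveyed above can be sketched minimally: split the decision vector into subproblems, evolve each with its own tiny EA, and evaluate an individual by assembling it with the current representatives of the other subpopulations. The (1+1)-style mutation, the toy objective, and all parameters here are illustrative assumptions, not any specific CCEA from the survey.

```python
import random

def objective(x):
    """Toy objective to minimize: sum of squares."""
    return sum(v * v for v in x)

def ccea(dim=4, groups=2, generations=100, seed=1):
    rng = random.Random(seed)
    size = dim // groups
    # One representative (best-so-far subvector) per subproblem.
    reps = [[rng.uniform(-5, 5) for _ in range(size)] for _ in range(groups)]

    def assemble(g, part):
        # Build a full solution: this group's candidate plus the
        # other groups' current representatives.
        full = []
        for j, rep in enumerate(reps):
            full.extend(part if j == g else rep)
        return full

    for _ in range(generations):
        for g in range(groups):
            # Mutate this subproblem's representative; keep it only
            # if the assembled full solution improves.
            child = [v + rng.gauss(0, 0.2) for v in reps[g]]
            if objective(assemble(g, child)) < objective(assemble(g, reps[g])):
                reps[g] = child
    return assemble(0, reps[0])

print(objective(ccea()))
```

Collaborator selection here is the simplest policy (always the current best representative); the survey covers many alternatives for this step.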

Journal ArticleDOI
TL;DR: The proposed algorithm, based on bat algorithm, combines chaotic map and random black hole model together, which is helpful not only in avoiding premature convergence, but also in increasing the global search ability, enlarging exploitation area and accelerating convergence speed.
Abstract: We present a hybrid metaheuristic optimization algorithm for solving economic dispatch problems in power systems. The proposed algorithm, based on bat algorithm, combines chaotic map and random black hole model together. Chaotic map is used to prevent premature convergence, and the random black hole model is helpful not only in avoiding premature convergence, but also in increasing the global search ability, enlarging exploitation area and accelerating convergence speed. The pseudocode and related parameters of the proposed algorithm are also given in this paper. Different from other related works, the costs of conventional thermal generators and random wind power are both included in the cost function because of the increasing penetration of wind power. The proposed algorithm has no requirement on the convexity or continuous differentiability of the cost function, although the effect on fuel cost, caused by the underestimation and overestimation of wind power, is included. This makes it feasible to take more practical nonlinear constraints into account, such as prohibited operating zones and ramp rate limits. Three test cases are given to illustrate the effectiveness of the proposed method.

109 citations
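
The abstract above couples a bat-algorithm variant with a chaotic map. A common choice in such hybrids (assumed here; the paper may use a different map) is the logistic map, whose iterates replace uniform pseudo-random draws so that parameter sampling is more irregular and the search is less prone to premature convergence.

```python
def logistic_map(x0=0.7, r=4.0, n=5):
    """Return n iterates of x_{t+1} = r * x_t * (1 - x_t) on [0, 1].

    At r = 4.0 the map is fully chaotic, so the sequence wanders
    aperiodically over the unit interval.
    """
    x = x0
    seq = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

print(logistic_map())  # chaotic values in [0, 1], usable in place of rand()
```

In a metaheuristic these iterates would drive quantities such as loudness or pulse-rate updates instead of a standard uniform generator.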

Proceedings ArticleDOI
05 Jun 2017
TL;DR: This study proposes a general framework, called the evolution of biocoenosis through symbiosis (EBS), for evolutionary algorithms to deal with many-tasking problems, and shows that EBS outperforms both single-task optimization and MFEA on four MaTPs.
Abstract: Evolutionary multitasking is an emergent topic in the evolutionary computation area. Recently, a well-known evolutionary multitasking method, the multi-factorial evolutionary algorithm (MFEA), has been proposed and applied to concurrently solve two or three problems. In MFEA, individuals of different tasks are recombined with a predefined random mating probability. As the number of tasks increases, such recombination across tasks becomes very frequent, thereby distracting the search from any specific problem and limiting MFEA's capability to solve many-tasking problems. This study proposes a general framework, called the evolution of biocoenosis through symbiosis (EBS), for evolutionary algorithms to deal with many-tasking problems. The EBS has two main features: the selection of candidates from concatenate offspring and the adaptive control of information exchange among tasks. The concatenate offspring represent a set of offspring used for all tasks. Moreover, this study presents a test suite of many-tasking problems (MaTPs), modified from the CEC 2014 benchmark problems. The Spearman correlation is adopted to analyze the effect of the shifts of optima on the MaTPs. Experimental results show that the effectiveness of EBS is superior to that of single-task optimization and MFEA on the four MaTPs. The results also validate that EBS is capable of exploiting the synergy of fitness landscapes.

66 citations

Proceedings ArticleDOI
01 Jun 2019
TL;DR: Experimental results show that the proposed modification of the RDG method can greatly improve the search ability of an optimization algorithm via divide-and-conquer, outperforming RDG, random decomposition, and other state-of-the-art methods.
Abstract: In this paper we use a divide-and-conquer approach to tackle large-scale optimization problems with overlapping components. Decomposition for an overlapping problem is challenging as its components depend on one another. The existing decomposition methods typically assign all the linked decision variables into one group, thus cannot reduce the original problem size. To address this issue we modify the Recursive Differential Grouping (RDG) method to decompose overlapping problems, by breaking the linkage at variables shared by multiple components. To evaluate the efficacy of our method, we extend two existing overlapping benchmark problems considering various level of overlap. Experimental results show that our method can greatly improve the search ability of an optimization algorithm via divide-and-conquer, and outperforms RDG, random decomposition as well as other state-of-the-art methods. We further evaluate our method using the CEC’2013 benchmark problems and show that our method is very competitive when equipped with a component optimizer.

61 citations
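
The key move in the abstract above is breaking the linkage at variables shared by multiple components so that overlapping components become disjoint groups. One simple policy for doing this (assumed here purely for illustration; the paper's modified RDG procedure is more involved) is to assign each shared variable to the first component that contains it.

```python
def decompose(components):
    """Turn overlapping components into disjoint variable groups by
    breaking linkage at shared variables: each variable stays with the
    first component in which it appears."""
    seen = set()
    groups = []
    for comp in components:
        group = [v for v in comp if v not in seen]
        seen.update(comp)
        if group:
            groups.append(group)
    return groups

# Components [0,1,2] and [2,3,4] overlap at variable 2; [4,5] overlaps at 4.
print(decompose([[0, 1, 2], [2, 3, 4], [4, 5]]))  # -> [[0, 1, 2], [3, 4], [5]]
```

Each resulting group can then be handed to a separate optimizer in the usual divide-and-conquer fashion, which is what makes the decomposition reduce the effective problem size.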