
Rollie Goodman

Researcher at Montana State University

Publications: 4
Citations: 111

Rollie Goodman is an academic researcher from Montana State University. The author has contributed to research in the topics of multi-swarm optimization and particle swarm optimization. The author has an h-index of 2 and has co-authored 4 publications receiving 79 citations.

Papers
Journal Article

Factored Evolutionary Algorithms

TL;DR: A formal definition of FEA algorithms is given, along with empirical results on their performance. By creating FEA versions of hill climbing, particle swarm optimization, the genetic algorithm, and differential evolution, and comparing them to their single-population and cooperative coevolutionary counterparts, the paper shows that FEA's performance is not restricted by the underlying optimization algorithm.
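The factored idea behind FEA, in which subpopulations each optimize only a subset of the variables while evaluating candidates inside a shared full solution, can be illustrated with a short sketch. The version below uses a hill-climbing step as the underlying optimizer and a toy sphere objective; the factor layout, function names, and parameter values are assumptions made for illustration, not code from the paper.

```python
# Minimal sketch of factored optimization: each factor perturbs only its own
# subset of variables, evaluates the candidate inside the shared full solution,
# and merges improvements back into that shared context.
import random

def sphere(x):
    """Toy objective (minimize): sum of squares."""
    return sum(v * v for v in x)

def fea_sketch(dim=6, factor_size=2, iters=200, seed=0):
    rng = random.Random(seed)
    # Overlapping factors, each covering `factor_size` consecutive variables.
    step = max(1, factor_size - 1)
    factors = [list(range(i, min(i + factor_size, dim))) for i in range(0, dim, step)]
    context = [rng.uniform(-5, 5) for _ in range(dim)]  # shared full solution

    for _ in range(iters):
        for idx in factors:
            # Hill-climbing step restricted to this factor's variables.
            candidate = context[:]
            for j in idx:
                candidate[j] += rng.gauss(0, 0.3)
            if sphere(candidate) < sphere(context):
                # "Share" the improvement back into the global context.
                for j in idx:
                    context[j] = candidate[j]
    return context, sphere(context)

if __name__ == "__main__":
    best, value = fea_sketch()
    print(value)
```

The same loop structure would accept any local optimizer in place of the hill-climbing step, which is the property the paper's comparison across hill climbing, PSO, GA, and DE relies on.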
Proceedings Article

A New Discrete Particle Swarm Optimization Algorithm

TL;DR: This paper presents Integer and Categorical PSO (ICPSO), a version of PSO that can optimize over discrete variables. ICPSO incorporates ideas from Estimation of Distribution Algorithms (EDAs): particles represent probability distributions rather than solution values, and the PSO update modifies those probability distributions.
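The distribution-as-particle idea summarized above can be sketched concisely: each particle stores a probability distribution over every discrete variable's values, the standard PSO velocity update is applied to those probabilities, and concrete solutions are sampled from the distributions for evaluation. The sketch below is a simplified illustration under those assumptions; the toy objective, parameter values, and update details are not taken from the paper.

```python
# Simplified sketch: particles are per-variable probability distributions over a
# categorical domain; the usual PSO velocity update is applied to the probabilities,
# and sampled assignments are used to score personal and global bests.
import random

VALUES = [0, 1, 2]          # domain of each categorical variable
DIM = 5                     # number of variables
TARGET = [2, 0, 1, 2, 0]    # toy objective: maximize matches with this assignment

def fitness(solution):
    return sum(1 for s, t in zip(solution, TARGET) if s == t)

def normalize(probs):
    probs = [max(p, 1e-6) for p in probs]
    total = sum(probs)
    return [p / total for p in probs]

def sample(dist):
    # Draw one concrete assignment from a particle's per-variable distributions.
    return [random.choices(VALUES, weights=dist[d])[0] for d in range(DIM)]

def icpso_sketch(n_particles=10, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    random.seed(seed)
    positions = [[normalize([random.random() for _ in VALUES]) for _ in range(DIM)]
                 for _ in range(n_particles)]
    velocities = [[[0.0] * len(VALUES) for _ in range(DIM)] for _ in range(n_particles)]
    pbest = [[row[:] for row in p] for p in positions]
    pbest_fit = [fitness(sample(p)) for p in positions]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = [row[:] for row in pbest[g]], pbest_fit[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(DIM):
                for k in range(len(VALUES)):
                    r1, r2 = random.random(), random.random()
                    # Standard PSO velocity update, applied to probabilities.
                    velocities[i][d][k] = (w * velocities[i][d][k]
                                           + c1 * r1 * (pbest[i][d][k] - positions[i][d][k])
                                           + c2 * r2 * (gbest[d][k] - positions[i][d][k]))
                positions[i][d] = normalize([p + v for p, v in
                                             zip(positions[i][d], velocities[i][d])])
            candidate = sample(positions[i])
            f = fitness(candidate)
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = [row[:] for row in positions[i]], f
                if f > gbest_fit:
                    gbest, gbest_fit = [row[:] for row in positions[i]], f
    return gbest, gbest_fit

if __name__ == "__main__":
    print(icpso_sketch()[1])
```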
Proceedings Article

MICPSO: A method for incorporating dependencies into discrete particle swarm optimization

TL;DR: This work compares MICPSO to ICPSO, Integer PSO (IPSO), an Estimation of Distribution Algorithm called the Markovianity-Based Optimization Algorithm (MOA), and a hill climber on a set of benchmark vertex coloring problems, finding that MICPSO significantly outperforms all alternatives on all problems tested.
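One way to picture "incorporating dependencies" on a vertex coloring benchmark is to sample each vertex's color while accounting for the colors already assigned to its neighbors. The sketch below shows a toy coloring objective and such a dependency-aware sampling step; it is an illustration of the general idea only, not the dependency model used by MICPSO itself, and the graph, weights, and penalty factor are assumptions for the example.

```python
# Toy vertex coloring objective plus a sampling step that down-weights colors
# already used by previously sampled neighbors, illustrating how dependencies
# between variables can shape the samples drawn from a particle's distributions.
import random

EDGES = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # small example graph
N_VERTICES, COLORS = 4, [0, 1, 2]

def coloring_fitness(colors):
    # Number of edges whose endpoints receive different colors (maximize).
    return sum(1 for u, v in EDGES if colors[u] != colors[v])

def sample_with_dependencies(dist, rng, penalty=0.1):
    """Sample vertex colors one at a time, discouraging colors already
    chosen for neighboring vertices."""
    colors = [None] * N_VERTICES
    for v in range(N_VERTICES):
        weights = list(dist[v])
        for u, w in EDGES:
            other = w if u == v else (u if w == v else None)
            if other is not None and colors[other] is not None:
                weights[colors[other]] *= penalty  # discourage neighbor conflicts
        colors[v] = rng.choices(COLORS, weights=weights)[0]
    return colors

if __name__ == "__main__":
    rng = random.Random(0)
    uniform = [[1.0, 1.0, 1.0] for _ in range(N_VERTICES)]
    assignment = sample_with_dependencies(uniform, rng)
    print(assignment, coloring_fitness(assignment))
```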
Proceedings Article

A Swarm-Based Approach to Learning Phase-Type Distributions for Continuous Time Bayesian Networks

TL;DR: The use of phase-type distributions is an established method for extending the representational power of continuous time Bayesian networks beyond exponentially distributed state transitions. An extension that uses informed starting locations during optimization is proposed, improving convergence rates compared to random initialization.
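The contrast between random and informed initialization mentioned above can be demonstrated on a toy continuous problem: seed the swarm near a rough prior estimate of the parameters and compare early progress against a swarm started over a wide uniform range. Everything in the sketch below (the objective, the bounds, the "informed" center, the PSO parameters) is an assumption for illustration and is unrelated to phase-type distributions or CTBNs.

```python
# Small PSO run comparing a wide random initialization with an informed
# initialization centered near a rough estimate of the optimum; the informed
# swarm typically reaches low objective values in fewer iterations.
import random

def objective(x):
    # Toy loss with optimum at (3, -2).
    return (x[0] - 3.0) ** 2 + (x[1] + 2.0) ** 2

def pso(init_center, spread, iters=100, n=15, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(c - spread, c + spread) for c in init_center] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    history = []
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d] + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
        history.append(gbest_f)  # best objective value so far
    return history

if __name__ == "__main__":
    random_init = pso(init_center=[0.0, 0.0], spread=10.0)   # wide, uninformed start
    informed = pso(init_center=[2.5, -1.5], spread=1.0)      # start near a rough estimate
    print("after 10 iterations:", random_init[10], informed[10])
```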