Open Access Proceedings ArticleDOI

Towards a Theory-Guided Benchmarking Suite for Discrete Black-Box Optimization Heuristics: Profiling $(1+\lambda)$ EA Variants on OneMax and LeadingOnes

TLDR
In this paper, the authors adjust the COCO software to pseudo-Boolean optimization problems, and obtain from this a benchmarking environment that allows a fine-grained empirical analysis of discrete black-box heuristics.
Abstract
Theoretical and empirical research on evolutionary computation methods complement each other by providing two fundamentally different approaches towards a better understanding of black-box optimization heuristics. In discrete optimization, both streams developed rather independently of each other, but we observe today an increasing interest in reconciling these two sub-branches. In continuous optimization, the COCO (COmparing Continuous Optimisers) benchmarking suite has established itself as an important platform that theoreticians and practitioners use to exchange research ideas and questions. No widely accepted equivalent exists in the research domain of discrete black-box optimization. Marking an important step towards filling this gap, we adjust the COCO software to pseudo-Boolean optimization problems, and obtain from this a benchmarking environment that allows a fine-grained empirical analysis of discrete black-box heuristics. In this documentation we demonstrate how this test bed can be used to profile the performance of evolutionary algorithms. More concretely, we study the optimization behavior of several $(1+\lambda)$ EA variants on the two benchmark problems OneMax and LeadingOnes. This comparison motivates a refined analysis for the optimization time of the $(1+\lambda)$ EA on LeadingOnes.
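For readers who want a concrete picture of the algorithms being profiled, the following is a minimal, self-contained sketch of a (1+λ) EA with standard bit mutation on the OneMax and LeadingOnes benchmarks. It is an illustration only, not the benchmarking code of the paper; the function names, the default mutation rate p = 1/n, and the tie-accepting selection are assumptions chosen for clarity.

```python
import random

def one_max(x):
    """OneMax fitness: number of ones in the bit string."""
    return sum(x)

def leading_ones(x):
    """LeadingOnes fitness: length of the longest prefix of ones."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def one_plus_lambda_ea(fitness, n, lam, p=None, max_evals=100_000):
    """Minimal (1+lambda) EA with standard bit mutation: flip each bit
    independently with probability p (default 1/n); returns the number
    of fitness evaluations used until the optimum (or the budget) is hit."""
    if p is None:
        p = 1.0 / n
    parent = [random.randint(0, 1) for _ in range(n)]
    parent_fit = fitness(parent)
    evals = 1
    while parent_fit < n and evals < max_evals:
        best_child, best_fit = None, -1
        for _ in range(lam):
            child = [1 - b if random.random() < p else b for b in parent]
            f = fitness(child)
            evals += 1
            if f > best_fit:
                best_child, best_fit = child, f
        if best_fit >= parent_fit:  # accept ties, as in the standard (1+lambda) EA
            parent, parent_fit = best_child, best_fit
    return evals

# Example: one run of the (1+4) EA on OneMax and on LeadingOnes for n = 100
random.seed(1)
print(one_plus_lambda_ea(one_max, n=100, lam=4))
print(one_plus_lambda_ea(leading_ones, n=100, lam=4))
```

Repeating such runs for different λ and recording the number of evaluations needed to reach each fitness target is, roughly, the kind of fixed-target data that a benchmarking environment of this type collects and aggregates.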


Citations
Proceedings ArticleDOI

Self-adjusting evolutionary algorithms for multimodal optimization

TL;DR: This work suggests a mechanism called stagnation detection that can be added as a module to existing evolutionary algorithms (both with and without prior self-adjusting schemes). For the resulting algorithm it proves an expected runtime on the well-known Jump benchmark that corresponds to an asymptotically optimal parameter setting and that outperforms other mechanisms for multimodal optimization, such as heavy-tailed mutation.
Journal ArticleDOI

Benchmarking discrete optimization heuristics with IOHprofiler

TL;DR: This work compiles and assesses a selection of 23 discrete optimization problems that subscribe to different types of fitness landscapes, and provides a new module for IOHprofiler which extends the fixed-target and fixed-budget results for the individual problems by ECDF results, allowing one to derive aggregated performance statistics for groups of problems.
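To make the ECDF aggregation mentioned above concrete, here is a small sketch (not the actual IOHprofiler module) that turns a table of fixed-target runtimes into an empirical cumulative distribution over (run, target) pairs; the function name, the example data, and the budget grid are illustrative assumptions.

```python
import numpy as np

def runtime_ecdf(runtimes, budgets):
    """Fraction of (run, target) pairs solved within each budget.

    `runtimes` has shape (runs, targets) and holds the number of
    evaluations needed to reach each target (np.inf if it was missed).
    """
    runtimes = np.asarray(runtimes, dtype=float)
    total = runtimes.size
    return np.array([(runtimes <= b).sum() / total for b in budgets])

# Illustrative data: 3 runs, 4 fitness targets each
runtimes = [[120, 340, 900, np.inf],
            [100, 280, 750, 1400],
            [150, 400, np.inf, np.inf]]
budgets = [100, 200, 500, 1000, 2000]
print(runtime_ecdf(runtimes, budgets))
```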
Journal ArticleDOI

Biochemical parameter estimation vs. benchmark functions: A comparative study of optimization performance and representation design

TL;DR: This work states that classic benchmark functions cannot be fully representative of all the features that make real-world optimization problems hard to solve, and that this is the case, in particular, for the parameter estimation (PE) of biochemical systems.
Proceedings ArticleDOI

Benchmarking discrete optimization heuristics with IOHprofiler

TL;DR: This work compiles and assesses a selection of discrete optimization problems that subscribe to different types of fitness landscapes, and compares performances of eleven different heuristics for each selected problem.
Journal ArticleDOI

Salp Swarm Optimization: A critical review

TL;DR: In this paper, a mathematically correct version of SSO, named Amended Salp Swarm Optimizer (ASSO), is proposed to resolve the problems identified in the original SSO.
References
Journal ArticleDOI

Parameter Control in Evolutionary Algorithms: Trends and Challenges

TL;DR: More than a decade after the first extensive overview of parameter control, this work revisits the field and presents a survey of the state of the art.
Journal ArticleDOI

COCO: A Platform for Comparing Continuous Optimizers in a Black-Box Setting

TL;DR: COCO is an open-source platform for comparing continuous optimizers in a black-box setting; it aims to automate, to the greatest possible extent, the tedious and repetitive task of benchmarking numerical optimization algorithms.
Journal ArticleDOI

On the Choice of the Offspring Population Size in Evolutionary Algorithms

TL;DR: Using a simplified but still realistic evolutionary algorithm, a thorough analysis of the effects of the offspring population size is presented and a simple way to dynamically adapt this parameter when necessary is suggested.
Journal ArticleDOI

Tight Bounds on the Optimization Time of a Randomized Search Heuristic on Linear Functions

TL;DR: The standard mutation probability p = 1/n is optimal for all linear functions, and the (1+1) EA is found to be an optimal mutation-based algorithm that turns out to be surprisingly robust since the large neighbourhood explored by the mutation operator does not disrupt the search.
Book ChapterDOI

Optimal fixed and adaptive mutation rates for the LeadingOnes problem

TL;DR: This work reconsiders a classical problem, namely how the (1+1) evolutionary algorithm optimizes the LeadingOnes function, and proves that if a mutation probability of $p$ is used and the problem size is $n$, then the expected optimization time is $\frac{1}{2p^2}\left((1-p)^{-n+1} - (1-p)\right)$.
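Since the closed-form expression is easy to evaluate, the following short sketch (illustrative only, not code from the cited work) computes the expected runtime for a few fixed mutation rates of the form p = c/n; for the standard choice p = 1/n the leading constant tends to (e-1)/2 ≈ 0.86.

```python
import math

def expected_runtime_leadingones(p, n):
    """Closed-form expected optimization time of the (1+1) EA on LeadingOnes
    with fixed mutation probability p, as given in the cited reference."""
    return ((1 - p) ** (-n + 1) - (1 - p)) / (2 * p ** 2)

n = 1000
for c in (0.5, 1.0, 1.59, 2.0):          # mutation rates p = c / n
    t = expected_runtime_leadingones(c / n, n)
    print(f"p = {c:.2f}/n  ->  E[T] ~ {t / n**2:.3f} * n^2")

# For p = 1/n the leading constant approaches (e - 1) / 2 ~ 0.859
print((math.e - 1) / 2)
```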