Author

Simon Wietheger

Other affiliations: Hasso Plattner Institute
Bio: Simon Wietheger is an academic researcher at the University of Potsdam. He has contributed to research in computer science and mathematics and has co-authored 4 publications. Previous affiliations of Simon Wietheger include the Hasso Plattner Institute.

Papers
Journal ArticleDOI
TL;DR: In this article, the NSGA-III with sufficiently many reference points was shown to compute the complete Pareto front of the 3-objective OneMinMax benchmark in an expected number of O(n log n) iterations.
Abstract: The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is the most prominent multi-objective evolutionary algorithm for real-world applications. While it performs evidently well on bi-objective optimization problems, empirical studies suggest that it is less effective when applied to problems with more than two objectives. A recent mathematical runtime analysis confirmed this observation by proving that the NSGA-II, even when run for an exponential number of iterations, misses a constant fraction of the Pareto front of the simple 3-objective OneMinMax problem. In this work, we provide the first mathematical runtime analysis of the NSGA-III, a refinement of the NSGA-II aimed at better handling more than two objectives. We prove that the NSGA-III with sufficiently many reference points -- a small constant factor more than the size of the Pareto front, as suggested for this algorithm -- computes the complete Pareto front of the 3-objective OneMinMax benchmark in an expected number of O(n log n) iterations. This result holds for all population sizes that are at least the size of the Pareto front. It shows a drastic advantage of the NSGA-III over the NSGA-II on this benchmark. The mathematical arguments used here and in previous work on the NSGA-II suggest that similar findings are likely for other benchmarks with three or more objectives.

6 citations
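Both the NSGA-II and the NSGA-III discussed above build on Pareto dominance and non-dominated sorting. As a minimal illustrative sketch (the function names are mine, and maximization is assumed, as in OneMinMax), dominance and the first non-dominated front can be written as:

```python
def dominates(u, v):
    """True iff objective vector u Pareto-dominates v (maximization):
    u is at least as good in every objective and strictly better in one."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def first_front(points):
    """The non-dominated subset of a list of objective vectors --
    the first front produced by non-dominated sorting."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, `first_front([(3, 1), (2, 2), (1, 1)])` keeps `(3, 1)` and `(2, 2)` and discards the dominated `(1, 1)`.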

Journal ArticleDOI
TL;DR: In this paper, the first proven performance guarantees for the Non-dominated Sorting Genetic Algorithm II (NSGA-II) were obtained for a classic optimization problem, the NP-complete bi-objective minimum spanning tree problem.
Abstract: The Non-dominated Sorting Genetic Algorithm-II (NSGA-II) is one of the most prominent algorithms to solve multi-objective optimization problems. Recently, the first mathematical runtime guarantees have been obtained for this algorithm, though only for synthetic benchmark problems. In this work, we give the first proven performance guarantees for a classic optimization problem, the NP-complete bi-objective minimum spanning tree problem. More specifically, we show that the NSGA-II with population size $N \ge 4((n-1) w_{\max} + 1)$ computes all extremal points of the Pareto front in an expected number of $O(m^2 n w_{\max} \log(n w_{\max}))$ iterations, where $n$ is the number of vertices, $m$ the number of edges, and $w_{\max}$ is the maximum edge weight in the problem instance. This result confirms, via mathematical means, the good performance of the NSGA-II observed empirically. It also shows that mathematical analyses of this algorithm are not only possible for synthetic benchmark problems, but also for more complex combinatorial optimization problems. As a side result, we also obtain a new analysis of the performance of the global SEMO algorithm on the bi-objective minimum spanning tree problem, which improves the previous best result by a factor of $|F|$, the number of extremal points of the Pareto front, a set that can be as large as $n w_{\max}$. The main reason for this improvement is our observation that both multi-objective evolutionary algorithms find the different extremal points in parallel rather than sequentially, as assumed in the previous proofs.

5 citations
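The extremal points of the Pareto front mentioned in the abstract are the supported solutions, which are exactly the optima of suitable weighted-sum scalarizations of the two edge-weight functions, so each of them can be found with any single-objective MST algorithm. A hedged sketch (the helper names and the edge format `(u, v, w1, w2)` are my own; this illustrates the notion of extremal points, not the NSGA-II itself):

```python
def kruskal(n, edges, key):
    """Minimum spanning tree of a graph on vertices 0..n-1 via Kruskal's
    algorithm. edges: list of (u, v, w1, w2); key: edge -> scalar weight."""
    parent = list(range(n))
    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for e in sorted(edges, key=key):   # cheapest edges first
        ru, rv = find(e[0]), find(e[1])
        if ru != rv:                   # edge joins two components
            parent[ru] = rv
            tree.append(e)
    return tree

def scalarized_tree(n, edges, lam):
    """MST under the combined weight lam*w1 + (1-lam)*w2; sweeping lam
    over [0, 1] reaches all extremal points of the Pareto front."""
    return kruskal(n, edges, key=lambda e: lam * e[2] + (1 - lam) * e[3])
```

With `lam = 1` this minimizes the first objective alone, with `lam = 0` the second; intermediate values trace out the remaining extremal points.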

Proceedings ArticleDOI
08 Jul 2022
TL;DR: This work shows that a balanced mutation operator optimizes the problem in O(n log n) if n - B = O(1), and shows a bound of Ω(n²), just as for classic bit flip mutation.
Abstract: In order to understand better how and why crossover can benefit optimization, we consider pseudo-Boolean functions with an upper bound B on the number of 1s allowed in the bit string (cardinality constraint). We consider the natural translation of the OneMax test function, a linear function where B bits have a weight of 1 + ε and the remaining bits have a weight of 1. The literature gives a bound of Θ(n²) for the (1+1) EA on this function. Part of the difficulty when optimizing this problem lies in having to improve individuals meeting the cardinality constraint by flipping both a 1 and a 0. The experimental literature proposes balanced operators, preserving the number of 1s, as a remedy. We show that a balanced mutation operator optimizes the problem in O(n log n) if n - B = O(1). However, if n - B = Θ(n), we show a bound of Ω(n²), just as for classic bit flip mutation. Crossover and a simple island model give running times of O(n²/log n) (uniform crossover) and O(n√n) (3-ary majority vote crossover). For balanced uniform crossover with Hamming distance maximization for diversity, we show a bound of O(n log n). As an additional contribution, we analyze and discuss different balanced crossover operators from the literature.

3 citations

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the OneMax test function with an upper bound B on the number of 1-bits allowed in the length-n bit string and showed that a balanced mutation operator optimizes the problem in O(n log n) if n - B = O(1).
Abstract: To understand better how and why crossover can benefit constrained optimization, we consider pseudo-Boolean functions with an upper bound B on the number of 1-bits allowed in the length-n bit string (i.e., a cardinality constraint). We investigate the natural translation of the OneMax test function to this setting, a linear function where B bits have a weight of 1 + 1/n and the remaining bits have a weight of 1. Friedrich et al. [TCS 2020] gave a bound of Θ(n²) for the expected running time of the (1+1) EA on this function. Part of the difficulty when optimizing this problem lies in having to improve individuals meeting the cardinality constraint by flipping a 1 and a 0 simultaneously. The experimental literature proposes balanced operators, preserving the number of 1-bits, as a remedy. We show that a balanced mutation operator optimizes the problem in O(n log n) if n - B = O(1). However, if n - B = Θ(n), we show a bound of Ω(n²), just as for classic bit mutation. Crossover together with a simple island model gives running times of O(n²/log n) (uniform crossover) and O(n√n) (3-ary majority vote crossover). For balanced uniform crossover with Hamming-distance maximization for diversity, we show a bound of O(n log n). As an additional contribution, we present an extensive analysis of different balanced crossover operators from the literature.

1 citation
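To make the two abstracts above concrete, here is a hedged sketch of the constrained benchmark and of a balanced (swap) mutation operator. Which B bits carry the higher weight is not fixed positionally by the abstract, so taking the first B positions is an arbitrary choice here, and returning -inf for constraint-violating strings is just one way to handle infeasibility:

```python
import random

def fitness(x, B):
    """Cardinality-constrained OneMax variant: the first B bits weigh
    1 + 1/n, the remaining bits weigh 1; strings with more than B ones
    violate the constraint (modeled here as fitness -inf)."""
    n = len(x)
    if sum(x) > B:
        return float("-inf")
    return sum((1 + 1 / n) if i < B else 1 for i, b in enumerate(x) if b)

def balanced_mutation(x):
    """Swap a uniformly random 1-bit with a uniformly random 0-bit.
    This preserves the number of ones, the defining property of a
    'balanced' operator."""
    ones = [i for i, b in enumerate(x) if b]
    zeros = [i for i, b in enumerate(x) if not b]
    if not ones or not zeros:          # all-zeros or all-ones: nothing to swap
        return x[:]
    y = x[:]
    i, j = random.choice(ones), random.choice(zeros)
    y[i], y[j] = 0, 1
    return y
```

The swap makes the hard step from the abstracts (flipping a 1 and a 0 simultaneously) a single elementary mutation, which is the intuition behind the O(n log n) result for n - B = O(1).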

Posted Content
TL;DR: This paper establishes a hierarchy of learning power depending on whether $C$-indices are required (a) on all outputs; (b) only on outputs relevant for the class to be learned; and (c) only in the limit as final, correct hypotheses.
Abstract: In language learning in the limit, the most common type of hypothesis is to give an enumerator for a language. This so-called $W$-index allows for naming arbitrary computably enumerable languages, with the drawback that even the membership problem is undecidable. In this paper we use a different system which allows for naming arbitrary decidable languages, namely programs for characteristic functions (called $C$-indices). These indices have the drawback that it is now not decidable whether a given hypothesis is even a legal $C$-index. In this first analysis of learning with $C$-indices, we give a structured account of the learning power of various restrictions employing $C$-indices, also when compared with $W$-indices. We establish a hierarchy of learning power depending on whether $C$-indices are required (a) on all outputs; (b) only on outputs relevant for the class to be learned and (c) only in the limit as final, correct hypotheses. Furthermore, all these settings are weaker than learning with $W$-indices (even when restricted to classes of computable languages). We analyze all these questions also in relation to the mode of data presentation. Finally, we also ask about the relation of semantic versus syntactic convergence and derive the map of pairwise relations for these two kinds of convergence coupled with various forms of data presentation.

Cited by
Journal ArticleDOI
TL;DR: In this article, the NSGA-II was shown to optimize the OneJumpZeroJump benchmark asymptotically faster when crossover is employed; the arguments transfer to single-objective optimization.
Abstract: Very recently, the first mathematical runtime analyses for the NSGA-II, the most common multi-objective evolutionary algorithm, have been conducted. Continuing this research direction, we prove that the NSGA-II optimizes the OneJumpZeroJump benchmark asymptotically faster when crossover is employed. Together with a parallel independent work by Dang, Opris, Salehi, and Sudholt, this is the first time such an advantage of crossover has been proven for the NSGA-II. Our arguments can be transferred to single-objective optimization, where they prove that crossover can speed up the (mu+1) genetic algorithm in a different and more pronounced way than known before. Our experiments confirm the added value of crossover and show that the observed advantages are even larger than what our proofs can guarantee.

7 citations
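For readers unfamiliar with the benchmark, the following sketch follows the common definition of the bi-objective OneJumpZeroJump function with jump size k (both objectives maximized): each objective is a Jump function, one counting the ones and one counting the zeros of the bit string. The exact formulation may differ slightly from the paper's:

```python
def one_jump_zero_jump(x, k):
    """Sketch of the bi-objective OneJumpZeroJump benchmark with jump
    size k. Each objective rewards its bit count on a plateau-free
    slope, except in a 'gap' of the last k-1 values before the optimum,
    where the fitness drops -- the feature that makes crossover useful."""
    n = len(x)
    ones = sum(x)
    zeros = n - ones
    f1 = k + ones if ones <= n - k or ones == n else n - ones
    f2 = k + zeros if zeros <= n - k or zeros == n else n - zeros
    return f1, f2
```

For n = 4 and k = 2, the all-ones string scores (6, 2), the all-zeros string (2, 6), and a string with three ones falls into the gap of the first objective, scoring (1, 3).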

Journal ArticleDOI
TL;DR: Zhang et al. analytically showed that introducing randomness into the population update procedure of MOEAs can benefit the search, proving that the expected running time of a well-established MOEA (SMS-EMOA) on a commonly studied bi-objective problem, OneJumpZeroJump, can be exponentially decreased if its deterministic population update mechanism is replaced by a stochastic one.
Abstract: Evolutionary algorithms (EAs) have been widely and successfully applied to solve multi-objective optimization problems, due to their nature of population-based search. Population update is a key component in multi-objective EAs (MOEAs), and it is performed in a greedy, deterministic manner. That is, the next-generation population is formed by selecting the top-ranked solutions, up to the population size (based on some selection criteria, e.g., non-dominated sorting, crowdedness, and indicators), from the collection of the current population and newly generated solutions. In this paper, we question this practice. We analytically show that introducing randomness into the population update procedure in MOEAs can be beneficial for the search. More specifically, we prove that the expected running time of a well-established MOEA (SMS-EMOA) for solving a commonly studied bi-objective problem, OneJumpZeroJump, can be exponentially decreased if its deterministic population update mechanism is replaced by a stochastic one. Empirical studies also verify the effectiveness of the proposed stochastic population update method. This work is an attempt to challenge a common practice for the population update in MOEAs. Its positive results, which might hold more generally, should encourage the exploration of developing new MOEAs in the area.

4 citations
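The contrast the abstract draws can be sketched as follows. The deterministic variant is the standard greedy truncation; the stochastic variant below is only an illustration of injecting randomness into survivor selection, not the paper's exact SMS-EMOA mechanism, and `rank` stands in for the real criteria (non-dominated sorting, hypervolume contribution):

```python
import random

def deterministic_update(pop, offspring, mu, rank):
    """Standard greedy update: keep the mu best of parents and
    offspring under the ranking criterion (lower rank = better)."""
    return sorted(pop + offspring, key=rank)[:mu]

def stochastic_update(pop, offspring, mu, rank):
    """Illustrative stochastic variant: greedily truncate only a
    uniformly random subset of the combined population, so worse
    solutions occasionally survive and help cross fitness gaps."""
    combined = pop + offspring
    random.shuffle(combined)
    subset = combined[: mu + (len(combined) - mu + 1) // 2]
    return sorted(subset, key=rank)[:mu]
```

The point of the paper is that this kind of occasional survival of dominated solutions can reduce the expected running time on OneJumpZeroJump exponentially, compared with always keeping the greedy top mu.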
