Topic

Discrete optimization

About: Discrete optimization is a research topic. Over the lifetime, 4,598 publications have been published within this topic, receiving 158,297 citations. The topic is also known as: discrete optimisation.


Papers
Proceedings ArticleDOI
23 Jun 2008
TL;DR: Demonstrates how the non-convex energies used in stereo matching can be formulated and optimized discretely for optical flow estimation, and shows that the proposed discrete-continuous optimization scheme not only finds lower-energy solutions than traditional discrete or continuous techniques but also yields flow estimates that outperform the current state of the art.
Abstract: Accurate estimation of optical flow is a challenging task, which often requires addressing difficult energy optimization problems. To solve them, most top-performing methods rely on continuous optimization algorithms. The modeling accuracy of the energy in this case is often traded for its tractability. This is in contrast to the related problem of narrow-baseline stereo matching, where the top-performing methods employ powerful discrete optimization algorithms such as graph cuts and message-passing to optimize highly non-convex energies. In this paper, we demonstrate how similar non-convex energies can be formulated and optimized discretely in the context of optical flow estimation. Starting with a set of candidate solutions that are produced by fast continuous flow estimation algorithms, the proposed method iteratively fuses these candidate solutions by the computation of minimum cuts on graphs. The obtained continuous-valued fusion result is then further improved using local gradient descent. Experimentally, we demonstrate that the proposed energy is an accurate model and that the proposed discrete-continuous optimization scheme not only finds lower energy solutions than traditional discrete or continuous optimization techniques, but also leads to flow estimates that outperform the current state-of-the-art.

179 citations
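To make the fusion step concrete: the paper's core move takes two candidate flow fields and decides per pixel which one to keep by computing a minimum cut on a graph. A minimal sketch in Python, assuming the third-party PyMaxflow library and substituting a simple Potts smoothness term for the paper's richer non-convex energy:

```python
import numpy as np
import maxflow  # pip install PyMaxflow (third-party, assumed available)

def fusion_move(flow0, flow1, unary0, unary1, smooth_weight=1.0):
    """One fusion move: per pixel, keep flow0 or switch to flow1 via a
    binary min cut. unary0/unary1 are (H, W) per-pixel data costs of each
    candidate; a Potts penalty between 4-neighbors stands in for the
    paper's flow-gradient smoothness term."""
    g = maxflow.Graph[float]()
    ids = g.add_grid_nodes(unary0.shape)
    # A node ending on the source side keeps flow0 and pays unary0 (its cut
    # t-edge to the sink); a node on the sink side takes flow1 and pays unary1.
    g.add_grid_tedges(ids, unary1, unary0)
    # Potts smoothness: pay smooth_weight wherever neighbors disagree.
    # Potts is submodular, so the cut minimizes this simplified energy exactly.
    g.add_grid_edges(ids, smooth_weight)
    g.maxflow()
    take1 = g.get_grid_segments(ids)              # True where flow1 wins
    return np.where(take1[..., None], flow1, flow0)
```

Iterating this move over a pool of continuously estimated candidates, then polishing the winner with local gradient descent, is the discrete-continuous scheme the abstract describes; under this simplified energy the fused field is never worse than either input.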

Book ChapterDOI
TL;DR: Discusses a simple Lagrangean method for trying to solve zero–one integer programming (IP) problems, the questions the method raises, and its relevance for optimizing the original IP problem.
Abstract: Publisher Summary: This chapter proposes Lagrangean techniques for discrete optimization problems. A simple method for trying to solve zero–one integer programming (IP) problems is discussed and used as a starting point for discussing many of the developments since then. The behavior of Lagrangean techniques in analyzing and solving zero–one IP problems is typical of their use on other discrete optimization problems. The chapter discusses a number of questions about this method and its relevance for optimizing the original IP problem. The goal of Lagrangean techniques is to try to establish sufficient optimality conditions: Lagrangean techniques are useful in computing zero–one solutions to IP problems with soft constraints or in parametric analysis of an IP problem over a family of right-hand sides. Parametric analysis of discrete optimization problems is also discussed. The use of Lagrangean techniques as a distinct approach to discrete optimization has proven theoretically and computationally important for three reasons. First, dual problems derived from more complex discrete optimization problems can be represented as linear programming (LP) problems, but ones of immense size, which cannot be explicitly constructed and then solved by the simplex algorithm. A second reason for considering the application of Lagrangean techniques to dual problems, in addition to the simplex algorithm, is that the simplex algorithm is exact while the dual problems are relaxation approximations. Third, Lagrangean techniques as a distinct approach satisfy the need to exploit the special structures that arise in various models.

179 citations
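As an illustration of the mechanics, here is a toy zero–one knapsack treated with Lagrangean relaxation (the data and the plain projected-subgradient update are illustrative, not from the chapter): dualizing the capacity constraint makes the relaxed problem separable per variable, and subgradient steps on the multiplier tighten the dual bound.

```python
# Toy instance of max c.x subject to a.x <= b, x in {0,1}^n.
# Dualizing the constraint gives L(lam) = max_x (c - lam*a).x + lam*b,
# which is solved item by item; min over lam >= 0 is the dual bound.
c = [10.0, 7.0, 4.0, 3.0]   # profits (made-up data)
a = [5.0, 4.0, 3.0, 1.0]    # weights
b = 8.0                     # capacity

lam, best = 0.0, float("inf")
for t in range(1, 101):
    # Relaxed problem: pick every item with positive reduced profit.
    x = [1 if ci - lam * ai > 0 else 0 for ci, ai in zip(c, a)]
    dual = sum((ci - lam * ai) * xi for ci, ai, xi in zip(c, a, x)) + lam * b
    best = min(best, dual)                         # best upper bound so far
    g = sum(ai * xi for ai, xi in zip(a, x)) - b   # subgradient at lam
    lam = max(0.0, lam + g / t)                    # projected subgradient step
print(f"best dual bound: {best:.2f}  final multiplier: {lam:.3f}")
```

The gap between this dual bound and the best feasible zero–one solution is the duality gap; the sufficient optimality conditions mentioned in the abstract hold exactly when that gap closes.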

Journal ArticleDOI
TL;DR: Adaptive techniques are introduced that decrease the number of optimization variables, lead to smooth results, and can be coupled directly to conventional shape optimization.
Abstract: Topology optimization of continuum structures is often reduced to a material distribution problem. Up to now this optimization problem has been solved following a rigid scheme: a design space is parametrized by design patches, which are fixed during the optimization process and are identical to the finite element discretization. The structural layout is determined by whether or not there is material in each design patch. Since many design patches are necessary to describe the structural layout even approximately, this procedure leads to a large number of optimization variables. Furthermore, due to a lack of clearness and smoothness, the results obtained can often only be used as a conceptual design idea. To overcome these shortcomings, adaptive techniques are introduced that decrease the number of optimization variables and generate smooth results. First, the use of pure mesh refinement in topology optimization is discussed. Since this technique still leads to unsatisfactory results, a new method is proposed that adapts the effective design space of each design cycle to the present material distribution. Since the effective design space is approximated by cubic or Bézier splines, this procedure not only decreases the number of design variables and leads to smooth results, but can also be coupled directly to conventional shape optimization. The quality of the proposed techniques is demonstrated with examples of maximum-stiffness problems for elastic structures.

177 citations
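The re-parametrization at the heart of the abstract can be illustrated without any finite element machinery: a boundary stored patch by patch costs one variable per patch, while a smoothing spline describes the same boundary with far fewer parameters. A toy sketch, assuming SciPy and synthetic boundary data:

```python
import numpy as np
from scipy.interpolate import splprep, splev  # SciPy assumed available

# A jagged, element-wise boundary: one radial value per design patch.
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
r = 1.0 + 0.05 * rng.standard_normal(64)          # synthetic roughness
x, y = r * np.cos(theta), r * np.sin(theta)

# Approximate the effective boundary by a periodic cubic smoothing spline:
# a handful of coefficients replaces 64 patch values, and the smooth curve
# can feed directly into conventional shape optimization.
tck, _ = splprep([x, y], s=0.5, per=True)
xs, ys = splev(np.linspace(0, 1, 400), tck)
print(f"patch values: {len(x)}, spline knots: {len(tck[0])}")
```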

Journal ArticleDOI
TL;DR: This paper describes a simple, easily-programmed method for solving discrete optimization problems with monotone objective functions and completely arbitrary (possibly nonconvex) constraints that is computationally feasible for problems in which the number of variables is fairly small.
Abstract: This paper describes a simple, easily-programmed method for solving discrete optimization problems with monotone objective functions and completely arbitrary (possibly nonconvex) constraints. The method is essentially one of partial enumeration, and is closely related to the “lexicographic” algorithm of Gilmore and Gomory for the “knapsack” problem and to the “additive” algorithm of Balas for the general integer linear programming problem. The results of a number of sample computations are reported. These indicate that the method is computationally feasible for problems in which the number of variables is fairly small.

176 citations
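A minimal sketch of the partial-enumeration idea (using a simple bound-based pruning rule rather than the paper's exact lexicographic skipping scheme): since the objective is nondecreasing in each variable, completing any partial assignment with zeros lower-bounds all of its completions, so whole subtrees can be discarded against the incumbent even though the constraints stay completely arbitrary.

```python
def solve(n, f, feasible):
    """min f(x) over x in {0,1}^n; f monotone nondecreasing per coordinate,
    feasible() completely arbitrary (possibly nonconvex). Sketch only."""
    best_x, best_v = None, float("inf")

    def rec(prefix):
        nonlocal best_x, best_v
        # Monotonicity: the all-zero completion lower-bounds every completion.
        lb = f(prefix + [0] * (n - len(prefix)))
        if lb >= best_v:
            return                                  # prune the whole subtree
        if len(prefix) == n:
            if feasible(prefix):
                best_x, best_v = prefix[:], lb
            return
        rec(prefix + [0])
        rec(prefix + [1])

    rec([])
    return best_x, best_v

# Toy use: weighted-sum objective, arbitrary nonconvex constraint.
w = [4, 3, 2, 5]
print(solve(4, lambda x: sum(wi * xi for wi, xi in zip(w, x)),
            lambda x: (x[0] ^ x[3]) and sum(x) >= 2))
```

As in the paper's experiments, this stays practical only while the number of variables is fairly small: the search tree has 2^n leaves, and pruning only trims it.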

Journal Article
TL;DR: Presents a canonical way to turn any smooth parametric family of probability distributions on an arbitrary search space X into a continuous-time black-box optimization method on X, the information-geometric optimization (IGO) method, which achieves maximal invariance properties.
Abstract: We present a canonical way to turn any smooth parametric family of probability distributions on an arbitrary search space X into a continuous-time black-box optimization method on X, the information-geometric optimization (IGO) method. Invariance as a major design principle keeps the number of arbitrary choices to a minimum. The resulting IGO flow is the flow of an ordinary differential equation conducting the natural gradient ascent of an adaptive, time-dependent transformation of the objective function. It makes no particular assumptions on the objective function to be optimized. The IGO method produces explicit IGO algorithms through time discretization. It naturally recovers versions of known algorithms and offers a systematic way to derive new ones. In continuous search spaces, IGO algorithms take a form related to natural evolution strategies (NES). The cross-entropy method is recovered in a particular case with a large time step, and can be extended into a smoothed, parametrization-independent maximum likelihood update (IGO-ML). When applied to the family of Gaussian distributions on R^d, the IGO framework recovers a version of the well-known CMA-ES algorithm and of xNES. For the family of Bernoulli distributions on {0, 1}^d, we recover the seminal PBIL algorithm and cGA. For the distributions of restricted Boltzmann machines, we naturally obtain a novel algorithm for discrete optimization on {0, 1}^d. All these algorithms are natural instances of, and unified under, the single information-geometric optimization framework. The IGO method achieves, thanks to its intrinsic formulation, maximal invariance properties: invariance under reparametrization of the search space X, under a change of parameters of the probability distribution, and under increasing transformation of the function to be optimized. The latter is achieved through an adaptive, quantile-based formulation of the objective. Theoretical considerations strongly suggest that IGO algorithms are essentially characterized by a minimal change of the distribution over time. Therefore they have minimal loss in diversity through the course of optimization, provided the initial diversity is high. First experiments using restricted Boltzmann machines confirm this insight. As a simple consequence, IGO seems to provide, from information theory, an elegant way to simultaneously explore several valleys of a fitness landscape in a single run.

175 citations
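For the Bernoulli case the abstract mentions, the discretized IGO update reduces to a PBIL-like weighted-frequency rule on the mean parameters. A toy sketch (the quantile-based weights, step size, and clipping are illustrative choices, not the paper's exact derivation):

```python
import numpy as np

def igo_bernoulli(f, d, iters=200, pop=20, lr=0.1, seed=0):
    """PBIL-flavoured IGO sketch for Bernoulli distributions on {0,1}^d:
    rank the sampled population, weight samples by rank quantiles, and move
    the Bernoulli parameters toward the weighted empirical frequencies."""
    rng = np.random.default_rng(seed)
    p = np.full(d, 0.5)                            # distribution parameters
    # Rank-based utilities: only the ordering of f-values matters, which is
    # what gives invariance under increasing transformations of f.
    w = np.log(pop / 2 + 0.5) - np.log(np.arange(1, pop + 1))
    w = np.clip(w, 0, None)
    w /= w.sum()
    for _ in range(iters):
        X = (rng.random((pop, d)) < p).astype(float)   # sample population
        order = np.argsort([f(x) for x in X])          # best (lowest f) first
        p = (1 - lr) * p + lr * (w @ X[order])         # step toward weighted mean
        p = np.clip(p, 1 / d, 1 - 1 / d)               # retain some diversity
    return p

d = 20
print(np.round(igo_bernoulli(lambda x: d - x.sum(), d), 2))  # onemax-style toy
```

Because the update sees only ranks, any increasing transformation of the objective leaves the trajectory unchanged, which is the quantile-based invariance the abstract highlights.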


Network Information
Related Topics (5)
- Optimization problem: 96.4K papers, 2.1M citations (90% related)
- Optimal control: 68K papers, 1.2M citations (84% related)
- Robustness (computer science): 94.7K papers, 1.6M citations (84% related)
- Scheduling (computing): 78.6K papers, 1.3M citations (83% related)
- Linear system: 59.5K papers, 1.4M citations (82% related)
Performance
Metrics: number of papers in the topic in previous years
- 2023: 13
- 2022: 36
- 2021: 104
- 2020: 128
- 2019: 113
- 2018: 140