Topic

Discrete optimization

About: Discrete optimization is a research topic. Over its lifetime, 4,598 publications have been published within this topic, receiving 158,297 citations. The topic is also known as: discrete optimisation.


Papers
Journal Article
TL;DR: This work studies the multiple objective discrete optimization (MODO) problem, proposes two-stage optimization problems as subproblems whose solutions yield efficient solutions of the MODO problem, and presents a modified algorithm that generates a sample of efficient solutions satisfying a prespecified quality guarantee.
Abstract: We study the multiple objective discrete optimization (MODO) problem and propose two-stage optimization problems as subproblems to be solved to obtain efficient solutions. The mathematical structure of the first level subproblem has similarities to both Tchebycheff type of approaches and a generalization of the lexicographic max-ordering problem that are applicable to multiple objective optimization. We present some results that enable us to develop an algorithm to solve the bicriteria discrete optimization problem for the entire efficient set. We also propose a modification of the algorithm that generates a sample of efficient solutions that satisfies a prespecified quality guarantee. We apply the algorithm to solve the bicriteria knapsack problem. Our computational results on this particular problem demonstrate that our algorithm performs significantly better than an equivalent Tchebycheff counterpart. Moreover, the computational behavior of the sampling version is quite promising.

71 citations
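The abstract above centers on Tchebycheff-type scalarization for bicriteria discrete problems such as the knapsack. The following is a minimal sketch, not the paper's two-stage algorithm: it enumerates a tiny bicriteria knapsack instance by brute force, filters the efficient (nondominated) points, and shows how a weighted Tchebycheff scalarization with respect to the ideal point selects one of them. The item data, weights, and scalarization weights are made up for illustration.

```python
# Minimal sketch (not the paper's algorithm): enumerate the efficient set of a
# tiny bicriteria knapsack and pick one point via weighted Tchebycheff
# scalarization. All instance data below are hypothetical.
from itertools import combinations

values1 = [6, 5, 8, 3]   # first objective coefficients (hypothetical)
values2 = [2, 7, 4, 9]   # second objective coefficients (hypothetical)
weights = [3, 4, 5, 2]
capacity = 8

def feasible_points():
    """Yield (obj1, obj2, items) for every feasible item subset."""
    n = len(weights)
    for r in range(n + 1):
        for items in combinations(range(n), r):
            if sum(weights[i] for i in items) <= capacity:
                yield (sum(values1[i] for i in items),
                       sum(values2[i] for i in items), items)

def efficient_set(points):
    """Keep only points not weakly dominated in both (maximized) objectives."""
    return [p for p in points
            if not any(q[0] >= p[0] and q[1] >= p[1] and q[:2] != p[:2]
                       for q in points)]

points = list(feasible_points())
eff = efficient_set(points)

# Weighted Tchebycheff scalarization w.r.t. the ideal point:
# minimize max_k lambda_k * (ideal_k - f_k(x)) over feasible x.
ideal = (max(p[0] for p in points), max(p[1] for p in points))
lam = (0.5, 0.5)
best = min(points, key=lambda p: max(lam[0] * (ideal[0] - p[0]),
                                     lam[1] * (ideal[1] - p[1])))
print("efficient points:", sorted({p[:2] for p in eff}))
print("Tchebycheff choice:", best[:2])
```

Real instances are handled through specialized subproblems rather than enumeration; the sketch only illustrates the scalarization idea behind the approach.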

Proceedings Article
12 Feb 2016
TL;DR: This paper proposes the randomized coordinate shrinking classification algorithm to learn the model, forming the RACOS algorithm for optimization in continuous and discrete domains, and proves that optimization problems with local Lipschitz continuity can be solved in polynomial time by proper configurations of this framework.
Abstract: Many randomized heuristic derivative-free optimization methods share a framework that iteratively learns a model for promising search areas and samples solutions from the model. This paper studies a particular setting of such framework, where the model is implemented by a classification model discriminating good solutions from bad ones. This setting allows a general theoretical characterization, where critical factors to the optimization are discovered. We also prove that optimization problems with Local Lipschitz continuity can be solved in polynomial time by proper configurations of this framework. Following the critical factors, we propose the randomized coordinate shrinking classification algorithm to learn the model, forming the RACOS algorithm, for optimization in continuous and discrete domains. Experiments on the testing functions as well as on the machine learning tasks including spectral clustering and classification with Ramp loss demonstrate the effectiveness of RACOS.

70 citations
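The framework described above iteratively learns a classifier that separates good solutions from bad ones and samples new candidates from the region the classifier deems good. Below is a simplified sketch of that generic loop; the axis-aligned bounding box is a stand-in for the paper's randomized coordinate-shrinking classifier, and the sphere objective, bounds, and parameters are illustrative assumptions, not the actual RACOS procedure.

```python
# Simplified sketch of a classification-based optimization loop: label the
# best samples "good", fit a crude model of the good region (an axis-aligned
# box here), and sample new candidates from it. Not the RACOS algorithm itself.
import random

def objective(x):                     # hypothetical test function (sphere)
    return sum(xi * xi for xi in x)

def optimize(dim=5, lower=-5.0, upper=5.0, budget=500,
             pop=20, positives=4, explore=0.1):
    samples = [[random.uniform(lower, upper) for _ in range(dim)]
               for _ in range(pop)]
    evals = [(objective(x), x) for x in samples]
    for _ in range(budget - pop):
        evals.sort(key=lambda t: t[0])
        good = [x for _, x in evals[:positives]]          # the "good" class
        # crude model of the good region: per-coordinate bounding box
        box = [(min(g[d] for g in good), max(g[d] for g in good))
               for d in range(dim)]
        if random.random() < explore:                     # keep exploring
            x = [random.uniform(lower, upper) for _ in range(dim)]
        else:                                             # sample the model
            x = [random.uniform(lo, hi) for lo, hi in box]
        evals.append((objective(x), x))
        evals = sorted(evals, key=lambda t: t[0])[:pop]
    return evals[0]

best_value, best_x = optimize()
print(best_value)
```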

Journal ArticleDOI
TL;DR: A limited marginal moment model is developed for computing persistency; it is tractable for zero-one optimization problems with a polynomial-sized representation of the convex hull of the feasible region, and the persistency it computes is often close to the simulated persistency value under various independently generated distributions that satisfy the prescribed marginal moments.
Abstract: An important question in discrete optimization under uncertainty is to understand the persistency of a decision variable, i.e., the probability that it is part of an optimal solution. For instance, in project management, when the task activity times are random, the challenge is to determine a set of critical activities that will potentially lie on the longest path. In the spanning tree and shortest path network problems, when the arc lengths are random, the challenge is to pre-process the network and determine a smaller set of arcs that will most probably be a part of the optimal solution under different realizations of the arc lengths. Building on a characterization of moment cones for single variate problems, and its associated semidefinite constraint representation, we develop a limited marginal moment model to compute the persistency of a decision variable. Under this model, we show that finding the persistency is tractable for zero-one optimization problems with a polynomial sized representation of the convex hull of the feasible region. Through extensive experiments, we show that the persistency computed under the limited marginal moment model is often close to the simulated persistency value under various distributions that satisfy the prescribed marginal moments and are generated independently.

70 citations
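The abstract defines persistency as the probability that a decision variable belongs to an optimal solution and compares the moment-based value against a simulated one. The sketch below estimates that simulated baseline by Monte Carlo for a toy shortest-path instance with independent exponential arc lengths; the graph, distributions, and sample size are hypothetical, and the paper's limited marginal moment (semidefinite) model is not implemented here.

```python
# Minimal sketch of the *simulated* persistency baseline: for a tiny
# shortest-path instance with random arc lengths, estimate the probability
# that each arc lies on the shortest path. All instance data are hypothetical.
import random
from collections import Counter

paths = [                             # all s-t paths, listed by their arcs
    ("s-a", "a-t"),
    ("s-b", "b-t"),
    ("s-a", "a-b", "b-t"),
]
arc_mean = {"s-a": 2.0, "a-t": 3.0, "s-b": 3.0, "b-t": 2.0, "a-b": 0.5}

def sample_lengths():
    # independent exponential arc lengths with the prescribed means
    return {a: random.expovariate(1.0 / m) for a, m in arc_mean.items()}

trials = 20000
on_shortest = Counter()
for _ in range(trials):
    lengths = sample_lengths()
    best = min(paths, key=lambda p: sum(lengths[a] for a in p))
    on_shortest.update(best)

persistency = {a: on_shortest[a] / trials for a in arc_mean}
print(persistency)   # estimated P(arc is on the shortest path)
```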

Journal Article
15 Mar 2019 - Energy
TL;DR: The results not only demonstrate that the proposed algorithm achieves a better Pareto front for the economic/emission bi-objective problem than its competitors, but also confirm that the obtained scheduling schemes lie entirely within the feasible domain.

70 citations

Journal Article
TL;DR: A comparison between the proposed algorithm and other existing methods shows the effectiveness of the proposed method in reaching the global optimum and its rapid convergence to the optimal solution.

70 citations


Network Information
Related Topics (5)

Topic                            Papers    Citations    Relatedness
Optimization problem             96.4K     2.1M         90%
Optimal control                  68K       1.2M         84%
Robustness (computer science)    94.7K     1.6M         84%
Scheduling (computing)           78.6K     1.3M         83%
Linear system                    59.5K     1.4M         82%
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    13
2022    36
2021    104
2020    128
2019    113
2018    140