Topic

Discrete optimization

About: Discrete optimization is a research topic. Over its lifetime, 4,598 publications have been published within this topic, receiving 158,297 citations. The topic is also known as: discrete optimisation.


Papers
Proceedings ArticleDOI
27 Jun 2007
TL;DR: The proposed algorithm is shown to be a more faithful translation of continuous PSO into the discrete setting than earlier binary versions, and it obtains quite satisfactory results on a number of benchmark optimization problems.
Abstract: Particle swarm optimization (PSO), a novel computational intelligence technique, has succeeded on many continuous problems, but its discrete (binary) version still presents difficulties. This paper proposes a novel binary PSO built on a new definition of the velocity vector. The algorithm is shown to be a more faithful translation of continuous PSO into the discrete setting than earlier versions, and a number of benchmark optimization problems are solved with this concept, yielding quite satisfactory results.
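
The paper's redefined velocity vector is not reproduced in this listing; as a point of reference, the sketch below is a minimal implementation of the classic binary PSO of Kennedy and Eberhart (1997), the baseline scheme such proposals build on, in which a sigmoid maps each real-valued velocity component to the probability that the corresponding bit is 1. The toy objective, swarm size, and coefficients are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Toy objective ("OneMax"): count of 1-bits; stands in for a benchmark.
    return x.sum(axis=-1)

n_particles, n_bits, n_iters = 30, 40, 200
w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients

x = rng.integers(0, 2, size=(n_particles, n_bits))   # bit positions
v = rng.uniform(-1, 1, size=(n_particles, n_bits))   # real-valued velocities
pbest = x.copy()
pbest_f = fitness(pbest)
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v = np.clip(v, -4, 4)                 # keep the sigmoid out of saturation
    prob_one = 1.0 / (1.0 + np.exp(-v))   # sigmoid: velocity -> P(bit = 1)
    x = (rng.random(x.shape) < prob_one).astype(int)

    f = fitness(x)
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print("best fitness:", pbest_f.max(), "of", n_bits)
```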

355 citations

Journal ArticleDOI
TL;DR: This forum article discusses the practical and scientific relevance of publishing papers that use immense computational resources for solving simple problems for which there already exist efficient solution techniques.
Abstract: Topology optimization is a highly developed tool for structural design and is now used extensively in the mechanical, automotive and aerospace industries worldwide. Gradient-based topology optimization algorithms can efficiently solve fine-resolution problems with thousands and up to millions of design variables using a few hundred (finite element) function evaluations (and even fewer than 50 in some commercial codes). Nevertheless, non-gradient topology optimization approaches that require orders of magnitude more function evaluations for extremely low-resolution examples keep appearing in the literature. This forum article discusses the practical and scientific relevance of publishing papers that use immense computational resources to solve simple problems for which efficient solution techniques already exist.

353 citations

Journal ArticleDOI
TL;DR: It is shown that by appropriately choosing which subproblems to use, one can design novel and very powerful MRF optimization algorithms, including algorithms that generalize and extend state-of-the-art message-passing methods and that take full advantage of any special structure present in particular MRFs.
Abstract: This paper introduces a new rigorous theoretical framework for discrete MRF-based optimization in computer vision. The framework exploits the powerful technique of dual decomposition: a projected subgradient scheme attempts to solve an MRF optimization problem by first decomposing it into a set of appropriately chosen subproblems and then combining their solutions in a principled way. To determine the limits of this method, we analyze the conditions these subproblems must satisfy and demonstrate the extreme generality and flexibility of the approach. We thus show that by appropriately choosing which subproblems to use, one can design novel and very powerful MRF optimization algorithms. For instance, in this manner we derive algorithms that: 1) generalize and extend state-of-the-art message-passing methods, 2) optimize very tight LP-relaxations to MRF optimization, and 3) take full advantage of any special structure in particular MRFs, allowing the use of efficient inference techniques such as graph-cut-based methods. Theoretical analysis of the bounds associated with the different algorithms derived from our framework, together with experimental results and comparisons on synthetic and real data for a variety of computer vision tasks, demonstrates the strong potential of our approach.
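
As a hedged illustration of the projected-subgradient scheme described above (not the paper's own code), the sketch below decomposes a tiny binary MRF on a triangle into its three edge subproblems, solves each exactly, and updates the dual variables so that each node's label copies are pushed toward agreement; the potentials, step size, and problem size are illustrative assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
L = 2                                  # labels per node
nodes = [0, 1, 2]
edges = [(0, 1), (1, 2), (0, 2)]
deg = {i: sum(i in e for e in edges) for i in nodes}

theta_u = {i: rng.normal(size=L) for i in nodes}          # unary potentials
theta_p = {e: rng.normal(size=(L, L)) for e in edges}     # pairwise potentials

# One dual vector per (edge, endpoint) pair; kept sum-zero per node.
lam = {(e, i): np.zeros(L) for e in edges for i in e}

def solve_edge(e):
    """Exactly minimize one edge subproblem by enumeration."""
    i, j = e
    best, best_val = None, np.inf
    for xi, xj in itertools.product(range(L), repeat=2):
        val = (theta_p[e][xi, xj]
               + theta_u[i][xi] / deg[i] + lam[(e, i)][xi]
               + theta_u[j][xj] / deg[j] + lam[(e, j)][xj])
        if val < best_val:
            best, best_val = (xi, xj), val
    return best, best_val

for t in range(100):
    step = 1.0 / (1 + t)
    sol = {e: solve_edge(e) for e in edges}
    lower_bound = sum(v for _, v in sol.values())  # valid dual bound

    # Subgradient step: push each node's label copies toward their average;
    # the update sums to zero over incident edges, so the dual constraint
    # (duals sum to zero per node) is preserved -- this is the projection.
    for i in nodes:
        inc = [e for e in edges if i in e]
        ind = {e: np.eye(L)[sol[e][0][e.index(i)]] for e in inc}
        avg = sum(ind.values()) / len(inc)
        for e in inc:
            lam[(e, i)] += step * (ind[e] - avg)

print("dual lower bound on the MRF energy:", round(lower_bound, 4))
```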

353 citations

Book
01 Sep 1994
TL;DR: This book is an introductory text on optimization theory in normed spaces, covering all areas of nonlinear optimization with particular emphasis on applications to problems in the calculus of variations, approximation, and optimal control theory.
Abstract: This book serves as an introductory text to optimization theory in normed spaces and covers all areas of nonlinear optimization. It presents fundamentals with particular emphasis on the application to problems in the calculus of variations, approximation and optimal control theory. The reader is expected to have a basic knowledge of linear functional analysis.
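
As a flavor of the material such a text develops (the notation below is generic, not taken from the book), a central first-order result in normed-space optimization reads:

```latex
% If x* minimizes a Gateaux-differentiable functional f over a convex set C
% in a normed space X, the derivative is nonnegative in every feasible
% direction:
\[
  f'(x^{*})(x - x^{*}) \;\ge\; 0 \qquad \text{for all } x \in C .
\]
% For C = X this forces f'(x^*) = 0; in the calculus of variations, applied
% to J(y) = \int_a^b L(t, y, y')\,dt, it yields the Euler-Lagrange equation
\[
  \frac{\partial L}{\partial y}
  \;-\;
  \frac{d}{dt}\,\frac{\partial L}{\partial y'}
  \;=\; 0 .
\]
```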

346 citations

Journal ArticleDOI
TL;DR: This work proposes a simple yet effective unsupervised hashing framework, named Similarity-Adaptive Deep Hashing (SADH), which alternates over three training modules: deep hash model training, similarity graph updating and binary code optimization.
Abstract: Recent vision and learning studies show that learning compact hash codes can facilitate massive data processing with significantly reduced storage and computation. In particular, learning deep hash functions has greatly improved retrieval performance, typically under semantic supervision. In contrast, current unsupervised deep hashing algorithms can hardly achieve satisfactory performance, due either to relaxed optimization or to the absence of a similarity-sensitive objective. In this work, we propose a simple yet effective unsupervised hashing framework, named Similarity-Adaptive Deep Hashing (SADH), which alternates over three training modules: deep hash model training, similarity graph updating and binary code optimization. The key difference from the widely used two-step hashing method is that the output representations of the learned deep model help update the similarity graph matrix, which is then used to improve the subsequent code optimization. In addition, to produce high-quality binary codes, we devise an effective discrete optimization algorithm that directly handles the binary constraints with a general hashing loss. Extensive experiments validate the efficacy of SADH, which consistently outperforms state-of-the-art methods by large margins.
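
SADH's actual discrete solver and loss are not reproduced here; the sketch below is a hedged illustration of the general kind of binary-constrained code optimization involved, minimizing an assumed objective (fidelity to a real-valued embedding plus smoothness over a similarity graph) by row-wise sign coordinate descent. The objective, data, and all names are illustrative assumptions.

```python
import numpy as np

# Illustrative objective:  ||B - Z||_F^2 + lam * tr(B^T L B),
# with B in {-1,+1}^{n x k}, Z a stand-in for deep-model outputs, and
# L = D - S the Laplacian of a similarity graph S. Because ||b_i||^2 = k
# is constant under the binary constraint, each row's minimizer is a sign.

rng = np.random.default_rng(2)
n, k, lam = 100, 16, 0.5

Z = rng.normal(size=(n, k))                  # stand-in real-valued embedding
S = (rng.random((n, n)) < 0.05).astype(float)
S = np.maximum(S, S.T)                       # symmetric 0/1 similarity graph
np.fill_diagonal(S, 0.0)
Lap = np.diag(S.sum(1)) - S                  # graph Laplacian

B = np.where(Z >= 0, 1.0, -1.0)              # relaxed initialization: sign(Z)

def objective(B):
    return np.sum((B - Z) ** 2) + lam * np.trace(B.T @ Lap @ B)

for sweep in range(10):
    for i in range(n):
        # Coupling of row i with all other rows through the Laplacian;
        # the diagonal term is excluded since it is constant in b_i.
        g = Z[i] - lam * (Lap[i] @ B - Lap[i, i] * B[i])
        B[i] = np.where(g >= 0, 1.0, -1.0)   # closed-form row update

print("objective after discrete sweeps:", round(objective(B), 2))
```

Each row update is a closed-form minimizer with the other rows fixed, so the objective decreases monotonically while the binary constraints hold exactly, which is the general advantage of discrete solvers over relax-and-threshold schemes noted in the abstract.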

343 citations


Network Information
Related Topics (5)
Topic                            Papers    Citations    Related
Optimization problem             96.4K     2.1M         90%
Optimal control                  68K       1.2M         84%
Robustness (computer science)    94.7K     1.6M         84%
Scheduling (computing)           78.6K     1.3M         83%
Linear system                    59.5K     1.4M         82%
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    13
2022    36
2021    104
2020    128
2019    113
2018    140