Book

Combinatorial Optimization

About: This book was published on 1997-11-12 and is currently open access. It has received 2,825 citations to date. It focuses on the topics: Optimization problem & Combinatorial optimization.
Citations
Journal ArticleDOI
TL;DR: This work presents a new coarsening heuristic (called the heavy-edge heuristic) for which the size of the partition of the coarse graph is within a small factor of the size of the final partition obtained after multilevel refinement, and also presents a much faster variation of the Kernighan-Lin (KL) algorithm for refinement during uncoarsening.
Abstract: Recently, a number of researchers have investigated a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, partition the smaller graph, and then uncoarsen it to construct a partition for the original graph [Bui and Jones, Proc. of the 6th SIAM Conference on Parallel Processing for Scientific Computing, 1993, 445-452; Hendrickson and Leland, A Multilevel Algorithm for Partitioning Graphs, Tech. report SAND 93-1301, Sandia National Laboratories, Albuquerque, NM, 1993]. From the early work it was clear that multilevel techniques held great promise; however, it was not known whether they could be made to consistently produce high quality partitions for graphs arising in a wide range of application domains. We investigate the effectiveness of many different choices for all three phases: coarsening, partitioning of the coarsest graph, and refinement. In particular, we present a new coarsening heuristic (called the heavy-edge heuristic) for which the size of the partition of the coarse graph is within a small factor of the size of the final partition obtained after multilevel refinement. We also present a much faster variation of the Kernighan-Lin (KL) algorithm for refining during uncoarsening. We test our scheme on a large number of graphs arising in various domains including finite element methods, linear programming, VLSI, and transportation. Our experiments show that our scheme produces partitions that are consistently better than those produced by spectral partitioning schemes, in substantially smaller time. Also, when our scheme is used to compute fill-reducing orderings for sparse matrices, it produces orderings that have substantially smaller fill than the widely used multiple minimum degree algorithm.
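The coarsening phase lends itself to a compact illustration. The sketch below is a rough, self-contained rendering of the heavy-edge idea in Python, not the Karypis-Kumar/METIS implementation: the dict-of-dicts weighted-adjacency format and the function names are assumptions made for this example only. Each unmatched vertex is matched to the unmatched neighbour joined by the heaviest edge, and matched pairs are then collapsed, with the weights of parallel edges summed in the coarser graph.

```python
import random
from collections import defaultdict

def heavy_edge_matching(adj):
    """adj: {u: {v: weight}} for an undirected graph (stored symmetrically).
    Returns {vertex: coarse_node_id} after one pass of heavy-edge matching."""
    rep = {}
    order = list(adj)
    random.shuffle(order)                    # visit vertices in random order
    for u in order:
        if u in rep:
            continue
        # among still-unmatched neighbours, pick the one joined by the heaviest edge
        candidates = [(w, v) for v, w in adj[u].items() if v not in rep]
        if candidates:
            _, v = max(candidates)
            rep[u] = rep[v] = u              # collapse u and v into one multinode
        else:
            rep[u] = u                       # no free neighbour: u stays a singleton
    return rep

def coarsen(adj, rep):
    """Build the coarser graph: edges between multinodes get their weights summed."""
    coarse = defaultdict(lambda: defaultdict(int))
    for u, nbrs in adj.items():
        for v, w in nbrs.items():
            if rep[u] != rep[v]:             # edges inside a multinode disappear
                coarse[rep[u]][rep[v]] += w
    return {u: dict(nbrs) for u, nbrs in coarse.items()}

# tiny example: the heavy edges (weights 5 and 3) are the ones that get collapsed
g = {1: {2: 5, 3: 1}, 2: {1: 5, 4: 2}, 3: {1: 1, 4: 3}, 4: {2: 2, 3: 3}}
print(coarsen(g, heavy_edge_matching(g)))
```

In the full multilevel scheme this step is repeated until the graph is small, the coarsest graph is partitioned, and the partition is projected back and refined at each level.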

5,629 citations

Journal ArticleDOI
TL;DR: This paper compares the running times of several standard algorithms, as well as a recently developed new algorithm that in many cases works several times faster than any of the other methods, making near real-time performance possible.
Abstract: Minimum cut/maximum flow algorithms on graphs have emerged as an increasingly useful tool for exact or approximate energy minimization in low-level vision. The combinatorial optimization literature provides many min-cut/max-flow algorithms with different polynomial time complexity. Their practical efficiency, however, has to date been studied mainly outside the scope of computer vision. The goal of this paper is to provide an experimental comparison of the efficiency of min-cut/max-flow algorithms for applications in vision. We compare the running times of several standard algorithms, as well as a new algorithm that we have recently developed. The algorithms we study include both Goldberg-Tarjan style "push-relabel" methods and algorithms based on Ford-Fulkerson style "augmenting paths." We benchmark these algorithms on a number of typical graphs in the contexts of image restoration, stereo, and segmentation. In many cases, our new algorithm works several times faster than any of the other methods, making near real-time performance possible. An implementation of our max-flow/min-cut algorithm is available upon request for research purposes.
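As a point of reference for the "augmenting paths" family benchmarked here, the sketch below is a plain Edmonds-Karp routine (BFS for the shortest augmenting path in the residual graph). It is not the authors' algorithm, and the dict-of-dicts capacity format is an assumption made for this illustration only.

```python
from collections import deque, defaultdict

def max_flow(capacity, s, t):
    """capacity: {u: {v: cap}} for a directed graph; returns the max s-t flow value."""
    # residual capacities; every edge also gets a reverse edge of capacity 0
    cap = defaultdict(lambda: defaultdict(int))
    adj = defaultdict(set)
    for u, nbrs in capacity.items():
        for v, c in nbrs.items():
            cap[u][v] += c
            adj[u].add(v)
            adj[v].add(u)                    # reverse edge is visible in the residual graph
    total = 0
    while True:
        # BFS from s along edges that still have residual capacity
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return total                     # no augmenting path remains: flow is maximum
        # trace the path back from t and find its bottleneck capacity
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][w] for u, w in path)
        for u, w in path:
            cap[u][w] -= bottleneck          # push flow forward
            cap[w][u] += bottleneck          # gain residual capacity backward
        total += bottleneck

# small example: the three s-t paths together carry 5 units
g = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}}
print(max_flow(g, "s", "t"))                 # 5
```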

4,463 citations


Cites methods from "Combinatorial Optimization"

  • ...We recommend our favorite text-book on basic graph theory and algorithms [11] for more details on push-relabel and augmenting path methods....


Book ChapterDOI
03 Sep 2001
TL;DR: The goal of this paper is to provide an experimental comparison of the efficiency of min-cut/max-flow algorithms for applications in vision, comparing the running times of several standard algorithms as well as a recently developed new algorithm.
Abstract: After [10, 15, 12, 2, 4], minimum cut/maximum flow algorithms on graphs emerged as an increasingly useful tool for exact or approximate energy minimization in low-level vision. The combinatorial optimization literature provides many min-cut/max-flow algorithms with different polynomial time complexity. Their practical efficiency, however, has to date been studied mainly outside the scope of computer vision. The goal of this paper is to provide an experimental comparison of the efficiency of min-cut/max-flow algorithms for energy minimization in vision. We compare the running times of several standard algorithms, as well as a new algorithm that we have recently developed. The algorithms we study include both Goldberg-style "push-relabel" methods and algorithms based on Ford-Fulkerson style augmenting paths. We benchmark these algorithms on a number of typical graphs in the contexts of image restoration, stereo, and interactive segmentation. In many cases our new algorithm works several times faster than any of the other methods, making near real-time performance possible.
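To complement the augmenting-path sketch above, the following is a bare-bones generic push-relabel (Goldberg-Tarjan) routine, the other algorithm family this paper benchmarks. It omits the FIFO/highest-label vertex orderings and gap heuristics that make practical implementations fast, so it only illustrates the two basic operations; the dict-of-dicts capacity format is again an assumption made for this sketch.

```python
from collections import defaultdict

def push_relabel_max_flow(capacity, s, t):
    """capacity: {u: {v: cap}} for a directed graph; returns the max s-t flow value."""
    cap = defaultdict(lambda: defaultdict(int))
    nodes = {s, t}
    for u, nbrs in capacity.items():
        nodes.add(u)
        for v, c in nbrs.items():
            nodes.add(v)
            cap[u][v] += c
    height = {v: 0 for v in nodes}
    excess = {v: 0 for v in nodes}
    height[s] = len(nodes)                   # the source starts at height |V|
    for v in list(cap[s]):                   # preflow: saturate every edge leaving s
        f = cap[s][v]
        cap[s][v] -= f
        cap[v][s] += f
        excess[v] += f
        excess[s] -= f
    active = [v for v in nodes if v not in (s, t) and excess[v] > 0]
    while active:
        u = active[-1]
        pushed = False
        for v in nodes:
            # admissible edge: residual capacity left and height drops by exactly one
            if cap[u][v] > 0 and height[u] == height[v] + 1:
                f = min(excess[u], cap[u][v])
                cap[u][v] -= f
                cap[v][u] += f
                excess[u] -= f
                excess[v] += f
                pushed = True
                if v not in (s, t) and v not in active:
                    active.append(v)
                if excess[u] == 0:
                    break
        if not pushed:
            # relabel: lift u just above its lowest neighbour in the residual graph
            height[u] = 1 + min(height[v] for v in nodes if cap[u][v] > 0)
        if excess[u] == 0:
            active.remove(u)
    return excess[t]                         # all excess not returned to s has reached t

# same small example as in the sketch above; both algorithms agree on the value 5
g = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}}
print(push_relabel_max_flow(g, "s", "t"))    # 5
```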

3,099 citations


Cites methods from "Combinatorial Optimization"

  • ...We recommend our favorite text-book on basic graph theory and algorithms [11] for more details on push-relabel and augmenting path methods....


Book
01 Jan 2000
TL;DR: Economists and workers in the financial world will find useful the book's presentation of empirical analysis methods and well-formulated theoretical tools that may help describe systems composed of a huge number of interacting subsystems.
Abstract: This book concerns the use of concepts from statistical physics in the description of financial systems. The authors illustrate the scaling concepts used in probability theory, critical phenomena, and fully developed turbulent fluids. These concepts are then applied to financial time series. The authors also present a stochastic model that displays several of the statistical properties observed in empirical data. Statistical physics concepts such as stochastic dynamics, short- and long-range correlations, self-similarity and scaling permit an understanding of the global behaviour of economic systems without first having to work out a detailed microscopic description of the system. Physicists will find the application of statistical physics concepts to economic systems interesting. Economists and workers in the financial world will find useful the presentation of empirical analysis methods and well-formulated theoretical tools that might help describe systems composed of a huge number of interacting subsystems.

2,826 citations

Book
03 Sep 2011
TL;DR: The question the authors are trying to answer is: how many units of water can be sent from the source to the sink per unit of time?
Abstract: 1 Defining Network Flow. A flow network is a directed graph G = (V, E) in which each edge (u, v) ∈ E has non-negative capacity c(u, v) ≥ 0. We require that if (u, v) ∈ E, then (v, u) ∉ E; that is, if an edge exists, then the edge between the same vertices going in the reverse direction does not exist. Every flow network has a source s and a sink t, and we assume that for every v ∈ V there is some path s → ⋯ → v → ⋯ → t. Note that this implies that flow networks are connected. Informally, the intuition behind network flow is to think of the edges as pipes and the weight on each edge as the capacity of its corresponding pipe per unit of time. The question we are trying to answer is: how many units of water can we send from the source to the sink per unit of time? Formally, a flow in G is a function f : V × V → R that satisfies the following:
  • Capacity constraint. For all u, v ∈ V, we require 0 ≤ f(u, v) ≤ c(u, v); a pipe cannot carry more than its capacity allows.
  • Flow conservation. For all u ∈ V − {s, t}, we require ∑_{v ∈ V} f(v, u) = ∑_{v ∈ V} f(u, v); the flow into a vertex equals the flow out of it.
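As a small sketch of these definitions, the check below takes capacities and a candidate flow as dicts keyed by (u, v) pairs (a representation assumed only for this illustration) and verifies the capacity constraint and flow conservation directly.

```python
def is_valid_flow(capacity, flow, s, t):
    """capacity, flow: {(u, v): value}. True iff flow satisfies both constraints."""
    vertices = {u for u, _ in capacity} | {v for _, v in capacity}
    # capacity constraint: 0 <= f(u, v) <= c(u, v) on every edge
    for (u, v), c in capacity.items():
        if not 0 <= flow.get((u, v), 0) <= c:
            return False
    # flow conservation: inflow equals outflow at every vertex other than s and t
    for w in vertices - {s, t}:
        inflow = sum(f for (_, v), f in flow.items() if v == w)
        outflow = sum(f for (u, _), f in flow.items() if u == w)
        if inflow != outflow:
            return False
    return True

# example: a two-path network carrying 3 units from s to t
cap = {("s", "a"): 2, ("s", "b"): 1, ("a", "t"): 2, ("b", "t"): 2}
flw = {("s", "a"): 2, ("s", "b"): 1, ("a", "t"): 2, ("b", "t"): 1}
print(is_valid_flow(cap, flw, "s", "t"))     # True
```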

2,426 citations