Journal ArticleDOI

Efficient implementation of graph algorithms using contraction

TL;DR: The efficient solutions to the component merging problem and the new observation about F-heaps lead to an O(n(t(m, n) + n log n)) algorithm for finding a maximum weighted matching in general graphs, giving the fastest algorithm currently known for this problem.
Abstract: The (component) merging problem is a new graph problem. Versions of this problem appear as bottlenecks in various graph algorithms. A new data structure solves this problem efficiently, and two special cases of the problem have even more efficient solutions based on other data structures. The performance of the data structures is sped up by introducing a new algorithmic tool called packets. The algorithms that use these solutions to the component merging problem also exploit new properties of two existing data structures. Specifically, B-trees can be used simultaneously as a priority queue and a concatenable queue. Similarly, F-heaps support some kinds of split operations with no loss of efficiency. An immediate application of the solution to the simplest version of the merging problem is an O(t(m, n)) algorithm for finding minimum spanning trees in undirected graphs without using F-heaps, where t(m, n) = m log_2 log_2 log_d n, the graph has n vertices and m edges, and d = max(m/n, 2). Packets also improve the F-heap minimum spanning tree algorithm, giving the fastest algorithm currently known for this problem. The efficient solutions to the merging problem and the new observation about F-heaps lead to an O(n(t(m, n) + n log n)) algorithm for finding a maximum weighted matching in general graphs. This settles an open problem posed by Tarjan [15, p. 123], where the weaker bound of O(nm log(n²/m)) was conjectured.
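
The minimum spanning tree results above are built around repeatedly merging (contracting) connected components. As a rough, hedged illustration of that contraction pattern only, and not of the paper's packet, B-tree, or F-heap machinery, the following Python sketch implements Borůvka-style MST construction: in each round every component selects its cheapest outgoing edge, and the components at the two endpoints are merged. The function name `boruvka_mst` and the edge-list representation are assumptions made for this sketch.

```python
# Illustrative sketch only: Boruvka-style contraction for minimum spanning trees.
# This is NOT the paper's algorithm (no packets, B-trees, or F-heaps); it only
# shows the "merge components along their cheapest outgoing edges" pattern.

def boruvka_mst(n, edges):
    """n vertices labeled 0..n-1; edges is a list of (weight, u, v) triples."""
    parent = list(range(n))

    def find(x):                      # union-find root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, components = [], n
    while components > 1:
        # cheapest outgoing edge of each component in this round
        cheapest = {}
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            for r in (ru, rv):
                if r not in cheapest or w < cheapest[r][0]:
                    cheapest[r] = (w, u, v)
        if not cheapest:              # graph is disconnected; stop
            break
        for w, u, v in cheapest.values():
            ru, rv = find(u), find(v)
            if ru != rv:              # contract: merge the two components
                parent[ru] = rv
                mst.append((u, v, w))
                components -= 1
    return mst

# Example: a small 4-vertex graph; expected MST weight is 1 + 2 + 3 = 6.
print(boruvka_mst(4, [(1, 0, 1), (4, 1, 2), (3, 0, 2), (2, 2, 3)]))
```

Each round at least halves the number of components, so there are O(log n) rounds; the naive per-round edge scan above is the kind of bottleneck that the component merging problem described in the abstract is meant to capture.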
Citations
Book
24 Aug 2012
TL;DR: This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
Abstract: Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package--PMTK (probabilistic modeling toolkit)--that is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.

8,059 citations


Cites methods from "Efficient implementation of graph algorithms using contraction"

  • ...This can be found using the algorithm in (Gabow et al. 1984)....


Journal ArticleDOI
TL;DR: In this article, a Bayesian approach for learning Bayesian networks from a combination of prior knowledge and statistical data is presented. The approach builds on a set of previously made assumptions together with the assumption of likelihood equivalence, which says that data should not help to discriminate network structures that represent the same assertions of conditional independence.
Abstract: We describe a Bayesian approach for learning Bayesian networks from a combination of prior knowledge and statistical data. First and foremost, we develop a methodology for assessing informative priors needed for learning. Our approach is derived from a set of assumptions made previously as well as the assumption of likelihood equivalence, which says that data should not help to discriminate network structures that represent the same assertions of conditional independence. We show that likelihood equivalence when combined with previously made assumptions implies that the user's priors for network parameters can be encoded in a single Bayesian network for the next case to be seen—a prior network—and a single measure of confidence for that network. Second, using these priors, we show how to compute the relative posterior probabilities of network structures given data. Third, we describe search methods for identifying network structures with high posterior probabilities. We describe polynomial algorithms for finding the highest-scoring network structures in the special case where every node has at most k = 1 parent. For the general case (k > 1), which is NP-hard, we review heuristic search algorithms including local search, iterative local search, and simulated annealing. Finally, we describe a methodology for evaluating Bayesian-network learning algorithms, and apply this approach to a comparison of various approaches.
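
The special case mentioned above, where every node has at most k = 1 parent, admits an exact polynomial-time search. The sketch below is a hedged illustration only: it assumes a decomposable score whose pairwise gains are symmetric (for example, empirical mutual information, as in the Chow-Liu construction) rather than the BDe score developed in the paper, in which case the best one-parent-per-node structure is a maximum-weight spanning tree over the gain matrix. The function name and the example gains are illustrative.

```python
# Sketch: exact structure search when every node has at most one parent.
# Assumption: the score decomposes and the pairwise gain score[i][j] is
# symmetric (e.g., empirical mutual information), so the optimal one-parent
# structure is a maximum-weight spanning tree over the gains (Chow-Liu style).
# This is an illustration, not the BDe-scoring procedure of the paper.

def best_tree_structure(score):
    """score: symmetric n x n matrix of edge gains; returns a parent list."""
    n = len(score)
    in_tree = [False] * n
    parent = [None] * n            # parent[v] = chosen parent of v (None = root)
    best = [float("-inf")] * n
    best[0] = 0.0                  # arbitrarily root the tree at node 0
    for _ in range(n):             # Prim's algorithm on the gain matrix
        u = max((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        for v in range(n):
            if not in_tree[v] and score[u][v] > best[v]:
                best[v] = score[u][v]
                parent[v] = u
    return parent

# Example: 3 variables with pairwise gains (e.g., mutual information estimates).
gains = [[0.0, 0.9, 0.1],
         [0.9, 0.0, 0.4],
         [0.1, 0.4, 0.0]]
print(best_tree_structure(gains))   # [None, 0, 1]: node 0 -> node 1 -> node 2
```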

4,124 citations

Journal ArticleDOI
TL;DR: Fibonacci heaps (F-heaps), a new data structure for implementing heaps that extends the binomial queues proposed by Vuillemin and studied further by Brown, yield improved running times for several network optimization problems, of which the improved bound for minimum spanning trees is the most striking.
Abstract: In this paper we develop a new data structure for implementing heaps (priority queues). Our structure, Fibonacci heaps (abbreviated F-heaps), extends the binomial queues proposed by Vuillemin and studied further by Brown. F-heaps support arbitrary deletion from an n-item heap in O(log n) amortized time and all other standard heap operations in O(1) amortized time. Using F-heaps we are able to obtain improved running times for several network optimization algorithms. In particular, we obtain the following worst-case bounds, where n is the number of vertices and m the number of edges in the problem graph: O(n log n + m) for the single-source shortest path problem with nonnegative edge lengths, improved from O(m log_(m/n+2) n); O(n² log n + nm) for the all-pairs shortest path problem, improved from O(nm log_(m/n+2) n); O(n² log n + nm) for the assignment problem (weighted bipartite matching), improved from O(nm log_(m/n+2) n); O(m β(m, n)) for the minimum spanning tree problem, improved from O(m log log_(m/n+2) n), where β(m, n) = min{ i | log^(i) n ≤ m/n }. Note that β(m, n) ≤ log* n if m ≥ n. Of these results, the improved bound for minimum spanning trees is the most striking, although all the results give asymptotic improvements for graphs of appropriate densities.
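
The O(n log n + m) single-source shortest path bound comes from pairing Dijkstra's algorithm with F-heaps, whose O(1) amortized decrease-key is the key ingredient. The sketch below is a hedged illustration using Python's binary-heap `heapq` module, so it runs in O(m log n) rather than the F-heap bound, with decrease-key simulated by lazy re-insertion; the graph encoding and function name are assumptions made for this sketch.

```python
# Sketch: Dijkstra's single-source shortest paths with a priority queue.
# heapq is a binary heap, so this runs in O(m log n); the O(n log n + m)
# bound in the abstract requires Fibonacci heaps, whose O(1) amortized
# decrease-key is simulated here by lazily re-inserting improved entries.
import heapq

def dijkstra(adj, source):
    """adj: {u: [(v, weight), ...]} with nonnegative edge weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                     # stale entry; skip it
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd             # "decrease-key" via re-insertion
                heapq.heappush(heap, (nd, v))
    return dist

# Example: shortest distances from vertex 'a'.
graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(graph, "a"))             # {'a': 0, 'b': 2, 'c': 3}
```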

2,484 citations

David Heckerman
01 Jan 2007
TL;DR: In this paper, the authors examine a graphical representation of uncertain knowledge called a Bayesian network, which is easy to construct and interpret, yet has formal probabilistic semantics making it suitable for statistical manipulation.
Abstract: We examine a graphical representation of uncertain knowledge called a Bayesian network. The representation is easy to construct and interpret, yet has formal probabilistic semantics making it suitable for statistical manipulation. We show how we can use the representation to learn new knowledge by combining domain knowledge with statistical data.

1,600 citations

Journal ArticleDOI
David S. Johnson
TL;DR: This is the fourteenth edition of a quarterly column that provides continuing coverage of new developments in the theory of NP-completeness, and readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson.

857 citations

References
Book
01 Jan 1968

17,939 citations

Book
01 Jan 1974
TL;DR: This text introduces the basic data structures and programming techniques often used in efficient algorithms, and covers use of lists, push-down stacks, queues, trees, and graphs.
Abstract: From the Publisher: With this text, you gain an understanding of the fundamental concepts of algorithms, the very heart of computer science. It introduces the basic data structures and programming techniques often used in efficient algorithms. Covers use of lists, push-down stacks, queues, trees, and graphs. Later chapters go into sorting, searching, and graph algorithms, string-matching algorithms, and the Schonhage-Strassen integer-multiplication algorithm. Provides numerous graded exercises at the end of each chapter.

9,262 citations

Book
16 Aug 2021

2,526 citations

Journal ArticleDOI
TL;DR: Fibonacci heaps (F-heaps), a new data structure for implementing heaps that extends the binomial queues proposed by Vuillemin and studied further by Brown, yield improved running times for several network optimization problems, of which the improved bound for minimum spanning trees is the most striking.
Abstract: In this paper we develop a new data structure for implementing heaps (priority queues). Our structure, Fibonacci heaps (abbreviated F-heaps), extends the binomial queues proposed by Vuillemin and studied further by Brown. F-heaps support arbitrary deletion from an n-item heap in O(log n) amortized time and all other standard heap operations in O(1) amortized time. Using F-heaps we are able to obtain improved running times for several network optimization algorithms. In particular, we obtain the following worst-case bounds, where n is the number of vertices and m the number of edges in the problem graph: O(n log n + m) for the single-source shortest path problem with nonnegative edge lengths, improved from O(m log_(m/n+2) n); O(n² log n + nm) for the all-pairs shortest path problem, improved from O(nm log_(m/n+2) n); O(n² log n + nm) for the assignment problem (weighted bipartite matching), improved from O(nm log_(m/n+2) n); O(m β(m, n)) for the minimum spanning tree problem, improved from O(m log log_(m/n+2) n), where β(m, n) = min{ i | log^(i) n ≤ m/n }. Note that β(m, n) ≤ log* n if m ≥ n. Of these results, the improved bound for minimum spanning trees is the most striking, although all the results give asymptotic improvements for graphs of appropriate densities.
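
The minimum spanning tree bound uses the quantity β(m, n) = min{ i | log^(i) n ≤ m/n }, the number of times the logarithm must be applied to n before the result drops to m/n. The small sketch below shows how this quantity could be evaluated; it assumes base-2 logarithms and m ≥ n ≥ 2, and the function name is illustrative.

```python
# Sketch: evaluating beta(m, n) = min{ i : log^(i) n <= m/n }, where log^(i)
# denotes the i-fold iterated (base-2) logarithm. Assumes m >= n >= 2.
import math

def beta(m, n):
    value, i = float(n), 0
    while value > m / n:
        value = math.log2(value)     # apply one more level of logarithm
        i += 1
    return i

# For m >= n, beta(m, n) <= log*(n):
print(beta(10**6, 1000))   # dense graph (m/n >= n): beta is 0
print(beta(1000, 1000))    # sparse graph (m = n): beta equals log* n, here 4
```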

2,484 citations

Book
Robert E. Tarjan
01 Jan 1983
TL;DR: This monograph surveys efficient data structures and network algorithms, including disjoint sets, heaps, search trees, linking and cutting trees, minimum spanning trees, shortest paths, network flows, and matchings.
Abstract: Contents: Foundations; Disjoint Sets; Heaps; Search Trees; Linking and Cutting Trees; Minimum Spanning Trees; Shortest Paths; Network Flows; Matchings.

2,120 citations