Proceedings Article

A New 3/2-Approximation Algorithm for the b-Edge Cover Problem.

01 Jan 2016 - pp. 52-61
TL;DR: A 3/2-approximation algorithm, LSE, for computing a b-Edge Cover of minimum weight in a graph with weights on the edges is described, and it is proved that the LSE algorithm computes the same b-Edge Cover as the one obtained by the Greedy algorithm for the problem.
Abstract: We describe a 3/2-approximation algorithm, LSE, for computing a b-Edge Cover of minimum weight in a graph with weights on the edges. The b-Edge Cover problem is a generalization of the better-known Edge Cover problem in graphs, where the objective is to choose a subset C of edges in the graph such that at least a specified number b(v) of edges in C are incident on each vertex v. In the weighted b-Edge Cover problem, we minimize the sum of the weights of the edges in C. We prove that the LSE algorithm computes the same b-Edge Cover as the one obtained by the Greedy algorithm for the problem. However, the Greedy algorithm requires edges to be sorted by their effective weights, and these weights need to be updated after each iteration. These requirements make the Greedy algorithm sequential and impractical for massive graphs. The LSE algorithm avoids the sorting step, and is amenable for parallelization. We implement the algorithm on a serial machine and compare its performance against a collection of approximation algorithms for the b-Edge Cover problem. Our results show that the LSE algorithm is 3× to 5× faster than the Greedy algorithm on a serial processor. The approximate edge covers obtained by the LSE algorithm have weights greater by at most 17% of the optimal weight for problems where we could compute the latter. We also investigate the relationship between the b-Edge Cover and the b-Matching problems, show that the latter has a faster implementation since edge weights are static in this algorithm, and obtain a heuristic solution for the former from the latter.
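The Greedy algorithm that LSE is compared against can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the graph representation, the function name, and the feasibility check are ours. Each edge's effective weight is its weight divided by the number of endpoints that are still deficient, and the edge of minimum effective weight is selected until every vertex v has b(v) incident cover edges.

```python
def greedy_b_edge_cover(edges, b):
    """Greedy sketch for minimum-weight b-edge cover (illustrative).

    edges: list of (u, v, weight); b: dict vertex -> required coverage.
    The effective weight of an edge is its weight divided by the number
    of its endpoints that are still deficient; as the abstract notes,
    these values must be recomputed after every selection, which is
    what makes the Greedy algorithm inherently sequential.
    """
    need = dict(b)                      # remaining coverage requirement
    cover, remaining = [], list(edges)
    while any(n > 0 for n in need.values()):
        best, best_eff = None, float("inf")
        for (u, v, w) in remaining:
            gain = (need[u] > 0) + (need[v] > 0)
            if gain == 0:
                continue                # edge no longer helps any vertex
            eff = w / gain              # dynamic effective weight
            if eff < best_eff:
                best, best_eff = (u, v, w), eff
        if best is None:
            raise ValueError("infeasible: degrees cannot meet b")
        u, v, w = best
        cover.append(best)
        remaining.remove(best)
        need[u] = max(0, need[u] - 1)
        need[v] = max(0, need[v] - 1)
    return cover
```

On a triangle with weights 1, 2, 3 and b(v) = 1 everywhere, the sketch first takes the weight-1 edge (effective weight 0.5) and then the weight-2 edge, for a cover of total weight 3.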
Citations
Journal ArticleDOI
TL;DR: This work surveys recent work on approximation algorithms for computing degree-constrained subgraphs in graphs and their applications in combinatorial scientific computing, focusing on practical algorithms that yield good performance on modern computer architectures with multiple threads and interconnected processors.
Abstract: We survey recent work on approximation algorithms for computing degree-constrained subgraphs in graphs and their applications in combinatorial scientific computing. The problems we consider include maximization versions of cardinality matching, edge-weighted matching, vertex-weighted matching, and edge-weighted b-matching, and minimization versions of weighted edge cover and b-edge cover. Exact algorithms for these problems are impractical for massive graphs with several millions of edges. For each problem we discuss theoretical foundations, the design of several linear or near-linear time approximation algorithms, their implementations on serial and parallel computers, and applications. Our focus is on practical algorithms that yield good performance on modern computer architectures with multiple threads and interconnected processors. We also include information about the software available for these problems.

16 citations


Cites methods or result from "A New 3/2-Approximation Algorithm f..."

  • ...The ∆-approximation algorithm was implemented by Khan and Pothen (2016) and compared with the LSE algorithm, which has approximation ratio of 3/2....


  • ...These results are obtained in (Khan and Pothen 2016, Ferdous et al. 2018)....


Proceedings ArticleDOI
21 May 2018
TL;DR: It is proved that both the MCE and S-LSE algorithms compute the same b-Edge Cover with at most twice the weight of the minimum weight edge cover, and the parallel depth and work can be bounded for the Suitor and b-Suitor algorithms when edge weights are random.
Abstract: We describe a paradigm for designing parallel algorithms via approximation, and illustrate it on the b-Edge Cover problem. A b-Edge Cover of minimum weight in a graph is a subset C of its edges such that at least a specified number b(v) of edges in C are incident on each vertex v, and the sum of the edge weights in C is minimum. The Greedy algorithm and a variant, the LSE algorithm, provide 3/2-approximation guarantees in the worst case for this problem, but these algorithms have limited parallelism. Hence we design two new 2-approximation algorithms with greater concurrency. The MCE algorithm reduces the computation of a b-Edge Cover to that of finding a b'-Matching, by exploiting the relationship between these subgraphs in an approximation context. The S-LSE algorithm is derived from the LSE algorithm by using static edge weights rather than dynamically computing effective edge weights. This relaxation gives S-LSE a worse approximation guarantee but makes it more amenable to parallelization. We prove that both the MCE and S-LSE algorithms compute the same b-Edge Cover with at most twice the weight of the minimum weight edge cover. In practice, the 2-approximation and 3/2-approximation algorithms compute edge covers of weight within 10% of the optimal. We implement three of the approximation algorithms, MCE, LSE, and S-LSE, on shared-memory multi-core machines, including an Intel Xeon and an IBM Power8 machine with 8 TB memory. The MCE algorithm is the fastest of these by an order of magnitude or more. It computes an edge cover in a graph with billions of edges in 20 seconds using two hundred threads on the IBM Power8. We also show that the parallel depth and work can be bounded for the Suitor and b-Suitor algorithms when edge weights are random.
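The complement relationship that MCE exploits can be illustrated with a small sketch (all names here are ours, and a plain greedy b'-matching stands in for the b-Suitor subroutine the paper uses): with b'(v) = deg(v) - b(v), the edges left out of a b'-matching form a b-edge cover, so maximizing the matched weight minimizes the weight of the cover.

```python
def b_edge_cover_via_matching(edges, b):
    """Illustrative sketch of the complement reduction behind MCE.

    A b-edge cover is the complement of a b'-matching with
    b'(v) = deg(v) - b(v): each vertex v loses at most b'(v) incident
    edges to the matching, so at least b(v) remain in the cover.
    A simple weight-sorted greedy matching is used here in place of
    the approximate b-Suitor algorithm from the paper.
    """
    deg = {}
    for u, v, w in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    cap = {v: deg[v] - b[v] for v in deg}   # b'(v); assumed nonnegative
    matching = set()
    # Greedy: heaviest edges first, respecting per-vertex capacities.
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if cap[u] > 0 and cap[v] > 0:
            matching.add((u, v, w))
            cap[u] -= 1
            cap[v] -= 1
    return [e for e in edges if e not in matching]   # complement = cover
```

On a triangle with b(v) = 1 everywhere, b'(v) = 1 for every vertex; the greedy matching takes the heaviest edge, and the two lighter edges that remain form the cover.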

9 citations


Cites methods from "A New 3/2-Approximation Algorithm f..."

  • ...Earlier, we have proposed a 3/2-approximation algorithm called the LSE algorithm [11], which relaxes the order in which edges are added to the cover, making it more concurrent....


  • ...Our earlier paper [11] discusses the GREEDY and LSE...


Proceedings Article
01 Jan 2018
TL;DR: One of the 3/2-approximation algorithms, the Dual Cover algorithm, computes the lowest weight edge cover relative to previously known algorithms as well as the new algorithms reported here.
Abstract: We describe two new 3/2-approximation algorithms and a new 2-approximation algorithm for the minimum weight edge cover problem in graphs. We show that one of the 3/2-approximation algorithms, the Dual Cover algorithm, computes the lowest weight edge cover relative to previously known algorithms as well as the new algorithms reported here. The Dual Cover algorithm can also be implemented to be faster than the other 3/2-approximation algorithms on serial computers. Many of these algorithms can be extended to solve the b-Edge Cover problem as well. We show the relation of these algorithms to the K-Nearest Neighbor graph construction in semi-supervised learning and other applications.
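The connection to K-Nearest Neighbor graph construction is easiest to see for b = 1, where the classic 2-approximation simply has every vertex claim its cheapest ("nearest") incident edge. A minimal sketch, with hypothetical helper names not taken from the paper:

```python
def nearest_neighbor_edge_cover(edges, vertices):
    """Classic 2-approximation sketch for minimum-weight edge cover:
    every vertex selects its cheapest incident edge, and the union of
    these selections covers all vertices.  This mirrors the 1-nearest-
    neighbor construction mentioned in the abstract (the b = 1 case).
    """
    cheapest = {}                        # vertex -> lightest incident edge
    for u, v, w in edges:
        for x in (u, v):
            if x not in cheapest or w < cheapest[x][2]:
                cheapest[x] = (u, v, w)
    return {cheapest[x] for x in vertices}   # set removes duplicates
```

Each edge can be selected by both of its endpoints, which is where the factor-2 bound comes from; the Dual Cover and other 3/2-approximation algorithms in the abstract improve on this guarantee.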

8 citations


Cites background or methods from "A New 3/2-Approximation Algorithm f..."

  • ...Khan and Pothen [14] have described a Locally Subdominant Edge algorithm (LSE)....


  • ...Among them LSE is the better performing algorithm [14]....


  • ...This algorithm [14] finds a set of locally subdominant edges and adds them to the cover at each iteration....


Proceedings ArticleDOI
11 Nov 2018
TL;DR: This work describes how a 2-approximation algorithm for computing the b-Edge Cover can be used to solve the adaptive anonymity problem in parallel, and is able to solve adaptive anonymity problems with hundreds of thousands of instances and hundreds of features on a supercomputer in under five minutes.
Abstract: We explore the problem of sharing data that pertains to individuals with anonymity guarantees, where each user requires a desired level of privacy. We propose the first shared-memory as well as distributed-memory parallel algorithms for the adaptive anonymity problem that achieve this goal and produce high-quality anonymized datasets. The new algorithm is based on an optimization procedure that iteratively computes weights on the edges of a dissimilarity matrix, and at each iteration computes a minimum weighted b-Edge Cover in the graph. We describe how a 2-approximation algorithm for computing the b-Edge Cover can be used to solve the adaptive anonymity problem in parallel. We are able to solve adaptive anonymity problems with hundreds of thousands of instances and hundreds of features on a supercomputer in under five minutes. Our algorithm scales up to 8K cores on a distributed-memory supercomputer, while also providing good speedups on shared-memory multiprocessors. On smaller problems where a Belief Propagation algorithm is feasible, our algorithm is two orders of magnitude faster.

7 citations


Cites methods from "A New 3/2-Approximation Algorithm f..."

  • ...The authors in [15] developed a 3/2-approximate Locally Subdominant Edge algorithm (LSE) which computes the same edge cover as the GREEDY algorithm but is amenable for parallelization....


Posted Content
TL;DR: An efficient method for maintaining relaxed complementary slackness in generalized matching problems and approximation-preserving reductions between the f-factor and f-edge cover problems are included.
Abstract: In this paper we present linear time approximation schemes for several generalized matching problems on nonbipartite graphs. Our results include O_ε(m)-time algorithms for (1-ε)-approximate maximum weight f-factor and (1+ε)-approximate minimum weight f-edge cover. As a byproduct, we also obtain direct algorithms for the exact cardinality versions of these problems running in O(m√f(V)) time. The technical contributions of this work include an efficient method for maintaining relaxed complementary slackness in generalized matching problems and approximation-preserving reductions between the f-factor and f-edge cover problems.

7 citations


Cites background from "A New 3/2-Approximation Algorithm f..."

  • ...Our interest in the approximate f -edge cover problem is inspired by a new application to anonymizing data in environments where users have different privacy demands; see [2, 1, 16]....


References
Journal ArticleDOI
TL;DR: The University of Florida Sparse Matrix Collection, a large and actively growing set of sparse matrices that arise in real applications, is described and a new multilevel coarsening scheme is proposed to facilitate this task.
Abstract: We describe the University of Florida Sparse Matrix Collection, a large and actively growing set of sparse matrices that arise in real applications. The Collection is widely used by the numerical linear algebra community for the development and performance evaluation of sparse matrix algorithms. It allows for robust and repeatable experiments: robust because performance results with artificially generated matrices can be misleading, and repeatable because matrices are curated and made publicly available in many formats. Its matrices cover a wide spectrum of domains, including those arising from problems with underlying 2D or 3D geometry (such as structural engineering, computational fluid dynamics, model reduction, electromagnetics, semiconductor devices, thermodynamics, materials, acoustics, computer graphics/vision, robotics/kinematics, and other discretizations) and those that typically do not have such geometry (optimization, circuit simulation, economic and financial modeling, theoretical and quantum chemistry, chemical process simulation, mathematics and statistics, power networks, and other networks and graphs). We provide software for accessing and managing the Collection, from MATLAB™, Mathematica™, Fortran, and C, as well as an online search capability. Graph visualization of the matrices is provided, and a new multilevel coarsening scheme is proposed to facilitate this task.

3,456 citations


"A New 3/2-Approximation Algorithm f..." refers background in this paper

  • ...Additionally we consider eight real-world datasets taken from the University of Florida Matrix collection [4] covering mutually exclusive application areas such as medical science, structural engineering, and sensor data....


Journal ArticleDOI
TL;DR: It turns out that the ratio between the two grows at most logarithmically in the largest column sum of A when all the components of cT are the same, which reduces to a theorem established previously by Johnson and Lovasz.
Abstract: Let A be a binary matrix of size m × n, let cT be a positive row vector of length n and let e be the column vector, all of whose m components are ones. The set-covering problem is to minimize cTx subject to Ax ≥ e and x binary. We compare the value of the objective function at a feasible solution found by a simple greedy heuristic to the true optimum. It turns out that the ratio between the two grows at most logarithmically in the largest column sum of A. When all the components of cT are the same, our result reduces to a theorem established previously by Johnson and Lovasz.
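The greedy heuristic analyzed in this reference can be sketched as follows (function and variable names are ours, not Chvatal's): repeatedly pick the set with the best cost per newly covered element. The ratio to the optimum is at most H(d) = 1 + 1/2 + ... + 1/d, where d is the largest set size (the largest column sum of A).

```python
def greedy_set_cover(universe, sets, cost):
    """Sketch of the greedy set-cover heuristic from Chvatal's analysis.

    universe: set of elements; sets: list of sets; cost: parallel list
    of positive costs.  At each step the set minimizing
    cost[i] / |sets[i] & uncovered| (cost per newly covered element)
    is chosen, until every element is covered.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        i = min((i for i in range(len(sets)) if sets[i] & uncovered),
                key=lambda i: cost[i] / len(sets[i] & uncovered))
        chosen.append(i)
        uncovered -= sets[i]
    return chosen
```

With universe {1, 2, 3, 4}, sets {1, 2}, {3, 4}, {1, 2, 3, 4} and costs 1, 1, 3, the heuristic picks the two unit-cost sets (cost-effectiveness 0.5 each) rather than the single expensive set, for total cost 2.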

2,645 citations


"A New 3/2-Approximation Algorithm f..." refers result in this paper

  • ...Chvatal [3] extended the results of Johnson [10] and Lovasz [13] to the minimum cost Set Cover problem....


Journal ArticleDOI
TL;DR: For the problem of finding the maximum clique in a graph, no algorithm has been found for which the ratio does not grow at least as fast as n^ε, where n is the problem size and ε > 0 depends on the algorithm.

2,472 citations


"A New 3/2-Approximation Algorithm f..." refers methods or result in this paper

  • ...Chvatal [3] extended the results of Johnson [10] and Lovasz [13] to the minimum cost Set Cover problem....


  • ...The greedy algorithm which iteratively adds the largest number of uncovered elements to a current solution was shown to be Hn-approximate by Johnson [10] and Lovasz [13]....


Book
03 Jul 2003
TL;DR: This book discusses probabilistic ingredients, the largest component for a binomial process, and connectivity and the number of components in the random geometric graph model.
Abstract: 1. Introduction 2. Probabilistic ingredients 3. Subgraph and component counts 4. Typical vertex degrees 5. Geometrical ingredients 6. Maximum degree, cliques and colourings 7. Minimum degree: laws of large numbers 8. Minimum degree: convergence in distribution 9. Percolative ingredients 10. Percolation and the largest component 11. The largest component for a binomial process 12. Ordering and partitioning problems 13. Connectivity and the number of components References Index
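The random geometric graph model treated in this book (the model behind the geo 14 test instance mentioned in the citing paper) can be sampled with a short sketch; the parameters and names here are illustrative only.

```python
import random

def random_geometric_graph(n, radius, seed=0):
    """Sample a random geometric graph: n points uniform in the unit
    square, with an edge between every pair of points at Euclidean
    distance at most radius.  Illustrative sketch of the model, not
    the generator used in the paper's experiments.
    """
    rng = random.Random(seed)            # fixed seed for reproducibility
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if (pts[i][0] - pts[j][0]) ** 2
              + (pts[i][1] - pts[j][1]) ** 2 <= radius ** 2]
    return pts, edges
```

Since the diameter of the unit square is √2, any radius above that yields the complete graph, while small radii give the sparse, locally clustered graphs studied in the book.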

2,271 citations


"A New 3/2-Approximation Algorithm f..." refers background in this paper

  • ...We also consider a random geometric graph (geo 14) [16] that has recently attracted attention in the study of neural networks, astrophysics, etc....


Journal ArticleDOI
TL;DR: It is shown that the ratio of optimal integral and fractional covers of a hypergraph does not exceed 1 + log d, where d is the maximum degree and this theorem may replace probabilistic methods in certain circumstances.

1,227 citations


"A New 3/2-Approximation Algorithm f..." refers methods or result in this paper

  • ...Chvatal [3] extended the results of Johnson [10] and Lovasz [13] to the minimum cost Set Cover problem....


  • ...The greedy algorithm which iteratively adds the largest number of uncovered elements to a current solution was shown to be Hn-approximate by Johnson [10] and Lovasz [13]....
