
Showing papers on "Approximation algorithm published in 2002"


Journal ArticleDOI
08 Jul 2002
TL;DR: This work presents a 1-pass algorithm for estimating the most frequent items in a data stream using limited storage space, which achieves better space bounds than the previously known best algorithms for this problem for several natural distributions on the item frequencies.
Abstract: We present a 1-pass algorithm for estimating the most frequent items in a data stream using limited storage space. Our method relies on a data structure called a COUNT SKETCH, which allows us to reliably estimate the frequencies of frequent items in the stream. Our algorithm achieves better space bounds than the previously known best algorithms for this problem for several natural distributions on the item frequencies. In addition, our algorithm leads directly to a 2-pass algorithm for the problem of estimating the items with the largest (absolute) change in frequency between two data streams. To our knowledge, this latter problem has not been previously studied in the literature.
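The data structure described above can be illustrated with a minimal single-pass CountSketch in Python. This is a sketch of the idea, not the paper's implementation: the md5-based hashing, the table sizes, and the median-of-rows estimator are illustrative choices.

```python
import hashlib
import random

class CountSketch:
    """Minimal CountSketch: `rows` independent rows of `buckets` counters.

    Each item hashes to one bucket per row and is multiplied by a random
    sign; the frequency estimate is the median of the signed counters.
    """

    def __init__(self, rows=5, buckets=256, seed=0):
        rng = random.Random(seed)
        self.rows = rows
        self.buckets = buckets
        self.salts = [str(rng.random()) for _ in range(rows)]
        self.table = [[0] * buckets for _ in range(rows)]

    def _hash(self, item, salt):
        # Derive a bucket (low bits) and a +/-1 sign (a high bit) from md5.
        digest = hashlib.md5((salt + ":" + str(item)).encode()).digest()
        v = int.from_bytes(digest[:8], "big")
        return v % self.buckets, 1 if (v >> 63) & 1 else -1

    def add(self, item, count=1):
        for r in range(self.rows):
            b, s = self._hash(item, self.salts[r])
            self.table[r][b] += s * count

    def estimate(self, item):
        ests = []
        for r in range(self.rows):
            b, s = self._hash(item, self.salts[r])
            ests.append(s * self.table[r][b])
        ests.sort()
        return ests[len(ests) // 2]
```

Low-frequency items hashed into the same bucket as a heavy item cancel in expectation because of the random signs, which is why the median estimate stays close to the true count.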

1,589 citations


Journal ArticleDOI
TL;DR: The algorithm allows combinatorial auctions to scale up to significantly larger numbers of items and bids than prior approaches to optimal winner determination by capitalizing on the fact that the space of bids is sparsely populated in practice.

1,045 citations


Journal ArticleDOI
TL;DR: This work considers problems that require allocating a set of rectangular items to larger rectangular standardized units so as to minimize waste, discussing mathematical models, lower bounds, classical approximation algorithms, recent heuristic and metaheuristic methods, and exact enumerative approaches.

806 citations


Proceedings ArticleDOI
05 Jun 2002
TL;DR: This work considers the question of whether there exists a simple and practical approximation algorithm for k-means clustering, and presents a local improvement heuristic based on swapping centers in and out that yields a (9+ε)-approximation algorithm.
Abstract: In k-means clustering we are given a set of n data points in d-dimensional space ℝ^d and an integer k, and the problem is to determine a set of k points in ℝ^d, called centers, to minimize the mean squared distance from each data point to its nearest center. No exact polynomial-time algorithms are known for this problem. Although asymptotically efficient approximation algorithms exist, these algorithms are not practical due to the extremely high constant factors involved. There are many heuristics that are used in practice, but we know of no bounds on their performance. We consider the question of whether there exists a simple and practical approximation algorithm for k-means clustering. We present a local improvement heuristic based on swapping centers in and out. We prove that this yields a (9+ε)-approximation algorithm. We show that the approximation factor is almost tight, by giving an example for which the algorithm achieves an approximation factor of (9-ε). To establish the practical value of the heuristic, we present an empirical study that shows that, when combined with Lloyd's algorithm, this heuristic performs quite well in practice.
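The swap-based local search can be sketched as follows. This is a simplified single-swap version with centers restricted to input points and a naive full cost recomputation per trial swap, not the paper's tuned procedure; points are assumed to be 2D tuples.

```python
import random

def cost(points, centers):
    """Sum of squared distances from each point to its nearest center."""
    return sum(min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
               for px, py in points)

def swap_heuristic(points, k, seed=0):
    """Single-swap local search for k-means (sketch of the paper's idea).

    Starting from k random data points as centers, repeatedly swap one
    center for one non-center data point whenever the swap strictly
    lowers the cost, until no improving swap exists.
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    improved = True
    while improved:
        improved = False
        for i in range(k):
            for p in points:
                if p in centers:
                    continue
                trial = centers[:i] + [p] + centers[i + 1:]
                if cost(points, trial) < cost(points, centers):
                    centers, improved = trial, True
    return centers
```

Each accepted swap strictly decreases the cost over a finite set of candidate center sets, so the loop terminates at a local optimum; in practice one would interleave this with Lloyd's algorithm, as the empirical study above suggests.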

639 citations


Journal ArticleDOI
01 Aug 2002
TL;DR: This work presents the first constant-factor approximation algorithm for the metric k-median problem, improving upon the best previously known result of O(log k log log k), which was obtained by refining and derandomizing a randomized O(log n log log n)-approximation algorithm of Bartal.
Abstract: We present the first constant-factor approximation algorithm for the metric k-median problem. The k-median problem is one of the most well-studied clustering problems, i.e., those problems in which the aim is to partition a given set of points into clusters so that the points within a cluster are relatively close with respect to some measure. For the metric k-median problem, we are given n points in a metric space. We select k of these to be cluster centers and then assign each point to its closest selected center. If point j is assigned to a center i, the cost incurred is proportional to the distance between i and j. The goal is to select the k centers that minimize the sum of the assignment costs. We give a 6 2/3-approximation algorithm for this problem. This improves upon the best previously known result of O(log k log log k), which was obtained by refining and derandomizing a randomized O(log n log log n)-approximation algorithm of Bartal.

623 citations


Book
01 Jan 2002
TL;DR: This book covers approximation algorithms for lattice problems, including the closest and shortest vector problems, along with sphere packings, low-degree hypergraphs, basis reduction, cryptographic functions, and interactive proof systems.
Abstract: Preface. 1. Basics. 2. Approximation Algorithms. 3. Closest Vector Problem. 4. Shortest Vector Problem. 5. Sphere Packings. 6. Low-Degree Hypergraphs. 7. Basis Reduction Problems. 8. Cryptographic Functions. 9. Interactive Proof Systems. Index

544 citations


Journal ArticleDOI
TL;DR: The first nontrivial polynomial-time approximation algorithms for a general family of classification problems of this type are provided, the metric labeling problem, which contains as special cases a number of standard classification frameworks, including several arising from the theory of Markov random fields.
Abstract: In a traditional classification problem, we wish to assign one of k labels (or classes) to each of n objects, in a way that is consistent with some observed data that we have about the problem. An active line of research in this area is concerned with classification when one has information about pairwise relationships among the objects to be classified; this issue is one of the principal motivations for the framework of Markov random fields, and it arises in areas such as image processing, biometry, and document analysis. In its most basic form, this style of analysis seeks to find a classification that optimizes a combinatorial function consisting of assignment costs---based on the individual choice of label we make for each object---and separation costs---based on the pair of choices we make for two "related" objects. We formulate a general classification problem of this type, the metric labeling problem; we show that it contains as special cases a number of standard classification frameworks, including several arising from the theory of Markov random fields. From the perspective of combinatorial optimization, our problem can be viewed as a substantial generalization of the multiway cut problem, and equivalent to a type of uncapacitated quadratic assignment problem. We provide the first nontrivial polynomial-time approximation algorithms for a general family of classification problems of this type. Our main result is an O(log k log log k)-approximation algorithm for the metric labeling problem, with respect to an arbitrary metric on a set of k labels, and an arbitrary weighted graph of relationships on a set of objects. For the special case in which the labels are endowed with the uniform metric---all distances are the same---our methods provide a 2-approximation algorithm.

502 citations


Proceedings ArticleDOI
19 May 2002
TL;DR: A simple and natural greedy algorithm for the metric uncapacitated facility location problem achieving an approximation guarantee of 1.61 and proving a lower bound of 1+2/e on the approximability of the k-median problem.
Abstract: We present a simple and natural greedy algorithm for the metric uncapacitated facility location problem achieving an approximation guarantee of 1.61. We use this algorithm to find better approximation algorithms for the capacitated facility location problem with soft capacities and for a common generalization of the k-median and facility location problems. We also prove a lower bound of 1+2/e on the approximability of the k-median problem. At the end, we present a discussion about the techniques we have used in the analysis of our algorithm, including a computer-aided method for proving bounds on the approximation factor.
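A greedy of this general flavor can be illustrated with the classic "cheapest star" heuristic below. Note this is the textbook greedy for facility location, not the paper's 1.61-approximation, and the instance encoding (a list of opening costs and a facility-by-client distance matrix) is an assumption for the sketch.

```python
def greedy_facility_location(open_costs, dist):
    """Greedy star-picking heuristic for facility location (illustrative).

    Repeatedly choose the (facility, client-set) 'star' with the smallest
    cost per newly connected client, where the star cost is the facility's
    opening cost (if not yet open) plus the connection distances, until
    every client is connected.
    """
    n_f = len(open_costs)
    unserved = set(range(len(dist[0])))
    opened = set()
    total = 0.0
    while unserved:
        best = None  # (ratio, facility, clients, star_cost)
        for f in range(n_f):
            # Sort unserved clients by distance to f; every prefix of this
            # order is a candidate star, so try them all.
            cl = sorted(unserved, key=lambda c: dist[f][c])
            run = 0.0 if f in opened else float(open_costs[f])
            chosen = []
            for j, c in enumerate(cl, 1):
                run += dist[f][c]
                chosen.append(c)
                ratio = run / j
                if best is None or ratio < best[0]:
                    best = (ratio, f, list(chosen), run)
        _, f, clients, star_cost = best
        opened.add(f)
        total += star_cost
        unserved -= set(clients)
    return opened, total
```

The paper's algorithm refines this picture (clients keep contributing toward other facilities after being connected), which is what brings the guarantee down to 1.61.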

487 citations


Book
01 Jan 2002
TL;DR: This book covers the Robbins-Monro algorithm, stochastic approximation algorithms with expanding truncations and their asymptotic properties, optimization by stochastic approximation, and applications to signal processing, systems, and control.
Abstract: Preface. Acknowledgments. 1. Robbins-Monro Algorithm. 2. Stochastic Approximation Algorithms with Expanding Truncations. 3. Asymptotic Properties of Stochastic Approximation Algorithms. 4. Optimization by Stochastic Approximation. 5. Applications To Signal Processing. 6. Application to Systems and Control. 7. Appendices. References. Index.

444 citations


Proceedings ArticleDOI
09 Jun 2002
TL;DR: This paper proposes the first distributed approximation algorithm to construct a MCDS for the unit-disk graph with a constant approximation ratio, and linear time and linear message complexity.
Abstract: A connected dominating set (CDS) for a graph G(V,E) is a subset V1 of V, such that each node in V--V1 is adjacent to some node in V1, and V1 induces a connected subgraph. A CDS has been proposed as a virtual backbone for routing in wireless ad hoc networks. However, it is NP-hard to find a minimum connected dominating set (MCDS). Approximation algorithms for MCDS have been proposed in the literature. Most of these algorithms suffer from a very poor approximation ratio, and from high time complexity and message complexity. Recently, new distributed heuristics for constructing a CDS were developed, with constant approximation ratio of 8. These new heuristics are based on a construction of a spanning tree, which makes it very costly in terms of communication overhead to maintain the CDS in the case of mobility and topology changes. In this paper, we propose the first distributed approximation algorithm to construct a MCDS for the unit-disk graph with a constant approximation ratio, and linear time and linear message complexity. This algorithm is fully localized, and does not depend on the spanning tree. Thus, the maintenance of the CDS after changes of topology guarantees the maintenance of the same approximation ratio. In this algorithm each node requires knowledge of its single-hop neighbors, and only a constant number of two-hop and three-hop neighbors. The message length is O(log n) bits.

420 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compare the performance of two multiple-objective genetic local search (MOGLS) algorithms to the best performers in the previous experiments using the same test instances and conclude that the MOGLS algorithm generates better approximations to the nondominated set than the other algorithms in the same number of function evaluations.
Abstract: Multiple-objective metaheuristics, e.g., multiple-objective evolutionary algorithms, constitute one of the most active fields of multiple-objective optimization. Since 1985, a significant number of different methods have been proposed. However, only few comparative studies of the methods were performed on large-scale problems. We continue two comparative experiments on the multiple-objective 0/1 knapsack problem reported in the literature. We compare the performance of two multiple-objective genetic local search (MOGLS) algorithms to the best performers in the previous experiments using the same test instances. The results of our experiment indicate that our MOGLS algorithm generates better approximations to the nondominated set than the other algorithms in the same number of function evaluations.

Journal ArticleDOI
TL;DR: This paper shows how the stability number of a graph can be computed as the solution of a conic linear program (LP) over the cone of copositive matrices, and how the copositive cone can be approximated ever more closely via a hierarchy of linear or semidefinite programs (SDPs) of increasing size (liftings).
Abstract: Lovasz and Schrijver [SIAM J. Optim., 1 (1991), pp. 166--190] showed how to formulate increasingly tight approximations of the stable set polytope of a graph by solving semidefinite programs (SDPs) of increasing size (lift-and-project method). In this paper we present a similar idea. We show how the stability number can be computed as the solution of a conic linear program (LP) over the cone of copositive matrices. Subsequently, we show how to approximate the copositive cone ever more closely via a hierarchy of linear or semidefinite programs of increasing size (liftings). The latter idea is based on recent work by Parrilo [Structured Semidefinite Programs and Semi-algebraic Geometry Methods in Robustness and Optimization, Ph. D. thesis, California Institute of Technology, Pasadena, CA, 2000]. In this way we can compute the stability number $\alpha(G)$ of any graph $G(V,E)$ after at most $\alpha(G)^2$ successive liftings for the LP-based approximations. One can compare this to the $n - \alpha(G)-1$ bound for the LP-based lift-and-project scheme of Lovasz and Schrijver. Our approach therefore requires fewer liftings for families of graphs where $\alpha(G) < O(\sqrt{n})$. We show that the first SDP-based approximation for $\alpha(G)$ in our series of increasingly tight approximations coincides with the $\vartheta'$-function of Schrijver [IEEE Trans. Inform. Theory, 25 (1979), pp. 425--429]. We further show that the second approximation is tight for complements of triangle-free graphs and for odd cycles.

Journal ArticleDOI
TL;DR: This paper analyzes some deficiencies of the dominant pruning algorithm and proposes two better approximation algorithms, total dominant pruning and partial dominant pruning, which utilize 2-hop neighborhood information more effectively to reduce redundant transmissions.
Abstract: Unlike in a wired network, a packet transmitted by a node in an ad hoc wireless network can reach all neighbors. Therefore, the total number of transmissions (forward nodes) is generally used as the cost criterion for broadcasting. The problem of finding the minimum number of forward nodes is NP-complete. Among various approximation approaches, dominant pruning (Lim and Kim 2001) utilizes 2-hop neighborhood information to reduce redundant transmissions. In this paper, we analyze some deficiencies of the dominant pruning algorithm and propose two better approximation algorithms: total dominant pruning and partial dominant pruning. Both algorithms utilize 2-hop neighborhood information more effectively to reduce redundant transmissions. Simulation results of applying these two algorithms show performance improvements compared with the original dominant pruning. In addition, two termination criteria are discussed and compared through simulation under both the static and dynamic environments.

Journal ArticleDOI
TL;DR: This work surveys recent advances obtained for the two-dimensional bin packing problem, with special emphasis on exact algorithms and effective heuristic and metaheuristic approaches.

Proceedings ArticleDOI
09 Jun 2002
TL;DR: The main contribution of this work is a completely distributed algorithm for finding small WCDS's and the performance of this algorithm is shown to be very close to that of the centralized approach.
Abstract: We present a series of approximation algorithms for finding a small weakly-connected dominating set (WCDS) in a given graph to be used in clustering mobile ad hoc networks. The structure of a graph can be simplified using WCDS's and made more succinct for routing in ad hoc networks. The theoretical performance ratio of these algorithms is O(ln Δ) compared to the minimum size WCDS, where Δ is the maximum degree of the input graph. The first two algorithms are based on the centralized approximation algorithms of Guha and Khuller for finding small connected dominating sets (CDS's). The main contribution of this work is a completely distributed algorithm for finding small WCDS's and the performance of this algorithm is shown to be very close to that of the centralized approach. Comparisons between our work and some previous work (CDS-based) are also given in terms of the size of resultant dominating sets and graph connectivity degradation.

Journal ArticleDOI
TL;DR: Two distributed heuristics with constant performance ratios are proposed; both require only single-hop neighborhood knowledge and a message length of O(1), with time and message complexity of O(n) and O(n log n), respectively.
Abstract: A connected dominating set (CDS) for a graph G(V, E) is a subset V' of V, such that each node in V — V' is adjacent to some node in V', and V' induces a connected subgraph. CDSs have been proposed as a virtual backbone for routing in wireless ad hoc networks. However, it is NP-hard to find a minimum connected dominating set (MCDS). An approximation algorithm for MCDS in general graphs has been proposed in the literature with performance guarantee of 3 + ln Δ where Δ is the maximal nodal degree [1]. This algorithm has been implemented in distributed manner in wireless networks [2]–[4]. This distributed implementation suffers from high time and message complexity, and the performance ratio remains 3 + ln Δ. Another distributed algorithm has been developed in [5], with performance ratio of Θ(n). Both algorithms require two-hop neighborhood knowledge and a message length of Ω(Δ). On the other hand, wireless ad hoc networks have a unique geometric nature, which can be modeled as a unit-disk graph (UDG), and thus admit heuristics with better performance guarantees. In this paper we propose two distributed heuristics with constant performance ratios. The time and message complexity for any of these algorithms is O(n) and O(n log n), respectively. Both of these algorithms require only single-hop neighborhood knowledge, and a message length of O(1).

Book ChapterDOI
TL;DR: This algorithm uses an idea of cost scaling, a greedy algorithm of Jain, Mahdian and Saberi, and a greedy augmentation procedure of Charikar, Guha and Khuller to solve the uncapacitated metric facility location problem.
Abstract: In this paper we present a 1.52-approximation algorithm for the uncapacitated metric facility location problem. This algorithm uses an idea of cost scaling, a greedy algorithm of Jain, Mahdian and Saberi, and a greedy augmentation procedure of Charikar, Guha and Khuller. We also present a 2.89-approximation for the capacitated metric facility location problem with soft capacities.

Journal ArticleDOI
TL;DR: This paper shows how to approximate the optimal solution by approximating the cone of copositive matrices via systems of linear inequalities, and, more refined, linear matrix inequalities (LMI's).
Abstract: The problem of minimizing a (non-convex) quadratic function over the simplex (the standard quadratic optimization problem) has an exact convex reformulation as a copositive programming problem. In this paper we show how to approximate the optimal solution by approximating the cone of copositive matrices via systems of linear inequalities, and, more refined, linear matrix inequalities (LMI's). In particular, we show that our approach leads to a polynomial-time approximation scheme for the standard quadratic optimization problem. This is an improvement on the previous complexity result by Nesterov who showed that a 2/3-approximation is always possible. Numerical examples from various applications are provided to illustrate our approach.

Journal ArticleDOI
TL;DR: In this article, the authors present an $n^{O(k^{1-1/d})}$-time algorithm for the k-center problem, together with a simple $(1+\varepsilon)$-approximation algorithm with running time $O(n \log k) + (k/\varepsilon)^{O(k^{1-1/d})}$.
Abstract: In this paper we present an $n^{O(k^{1-1/d})}$-time algorithm for solving the k-center problem in $\mathbb{R}^d$, under the $L_\infty$- and $L_2$-metrics. The algorithm extends to other metrics, and to the discrete k-center problem. We also describe a simple $(1+\varepsilon)$-approximation algorithm for the k-center problem, with running time $O(n \log k) + (k/\varepsilon)^{O(k^{1-1/d})}$. Finally, we present an $n^{O(k^{1-1/d})}$-time algorithm for solving the $L$-capacitated k-center problem, provided that $L = \Omega(n/k^{1-1/d})$ or $L = O(1)$.

Journal ArticleDOI
TL;DR: New randomized distributed algorithms for the dominating set problem are described and analyzed that run in polylogarithmic time, independent of the diameter of the network, and that return a dominating set of size within a logarithmic factor from optimal, with high probability.
Abstract: The dominating set problem asks for a small subset D of nodes in a graph such that every node is either in D or adjacent to a node in D. This problem arises in a number of distributed network applications, where it is important to locate a small number of centers in the network such that every node is near at least one center. Finding a dominating set of minimum size is NP-complete, and the best known approximation is logarithmic in the maximum degree of the graph and is provided by the same simple greedy approach that gives the well-known logarithmic approximation result for the closely related set cover problem. We describe and analyze new randomized distributed algorithms for the dominating set problem that run in polylogarithmic time, independent of the diameter of the network, and that return a dominating set of size within a logarithmic factor from optimal, with high probability. In particular, our best algorithm runs in O(log n log Δ) rounds with high probability, where n is the number of nodes, Δ is one plus the maximum degree of any node, and each round involves a constant number of message exchanges among any two neighbors; the size of the dominating set obtained is within O(log Δ) of the optimal in expectation and within O(log n) of the optimal with high probability. We also describe generalizations to the weighted case and the case of multiple covering requirements.
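The "simple greedy approach" mentioned above, which yields the logarithmic approximation the distributed algorithms are measured against, can be sketched in a few lines:

```python
def greedy_dominating_set(adj):
    """Classic centralized greedy for dominating set.

    adj: dict mapping each node to the set of its neighbors.
    Repeatedly pick the node that covers the most still-uncovered
    nodes (itself plus its neighbors), until every node is covered.
    """
    uncovered = set(adj)
    dom = set()
    while uncovered:
        best = max(adj, key=lambda v: len(({v} | adj[v]) & uncovered))
        dom.add(best)
        uncovered -= {best} | adj[best]
    return dom
```

This is inherently sequential (each choice depends on all previous ones), which is exactly the obstacle the paper's randomized distributed algorithms are designed to get around.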

Proceedings ArticleDOI
09 Jun 2002
TL;DR: A general approach leading to a polynomial algorithm is presented for minimizing maximum power for a class of graph properties called monotone properties, and a new approximation algorithm for the problem of minimizing the total power for obtaining a 2-node-connected graph is obtained.
Abstract: Topology control problems are concerned with the assignment of power values to the nodes of an ad hoc network so that the power assignment leads to a graph topology satisfying some specified properties. This paper considers such problems under several optimization objectives, including minimizing the maximum power and minimizing the total power. A general approach leading to a polynomial algorithm is presented for minimizing maximum power for a class of graph properties called monotone properties. The difficulty of generalizing the approach to properties that are not monotone is discussed. Problems involving the minimization of total power are known to be NP-complete even for simple graph properties. A general approach that leads to an approximation algorithm for minimizing the total power for some monotone properties is presented. Using this approach, a new approximation algorithm for the problem of minimizing the total power for obtaining a 2-node-connected graph is obtained. It is shown that this algorithm provides a constant performance guarantee. Experimental results from an implementation of the approximation algorithm are also presented.

Journal ArticleDOI
TL;DR: In this article, rank-two relaxation was proposed to solve the MAX-CUT problem and a specialized version of the Goemans-Williamson technique was developed to achieve better practical performance.
Abstract: The Goemans--Williamson randomized algorithm guarantees a high-quality approximation to the MAX-CUT problem, but the cost associated with such an approximation can be excessively high for large-scale problems due to the need for solving an expensive semidefinite relaxation. In order to achieve better practical performance, we propose an alternative, rank-two relaxation and develop a specialized version of the Goemans--Williamson technique. The proposed approach leads to continuous optimization heuristics applicable to MAX-CUT as well as other binary quadratic programs, for example the MAX-BISECTION problem. A computer code based on the rank-two relaxation heuristics is compared with two state-of-the-art semidefinite programming codes that implement the Goemans--Williamson randomized algorithm, as well as with a purely heuristic code for effectively solving a particular MAX-CUT problem arising in physics. Computational results show that the proposed approach is fast and scalable and, more importantly, attains a higher approximation quality in practice than that of the Goemans--Williamson randomized algorithm. An extension to MAX-BISECTION is also discussed, as is an important difference between the proposed approach and the Goemans--Williamson algorithm; namely, that the new approach does not guarantee an upper bound on the MAX-CUT optimal value.
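A rank-two relaxation in this spirit replaces the SDP's unit vectors by angles on a circle. The sketch below does crude coordinate-wise ascent on the relaxed objective $\sum_{ij} w_{ij}(1-\cos(\theta_i-\theta_j))/2$ and then rounds with a random cut line; the update schedule, iteration count, and single-line rounding are illustrative choices, not the authors' specialized procedure.

```python
import math
import random

def rank_two_maxcut(n, edges, iters=2000, seed=0):
    """Rank-two relaxation heuristic for MAX-CUT (illustrative sketch).

    edges: list of (i, j, w) triples. Each vertex gets an angle on the
    unit circle; coordinate-wise updates set a vertex's angle opposite
    the weighted sum of its neighbors' directions (the per-vertex
    optimum), then a random diameter of the circle defines the cut.
    """
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    nbrs = [[] for _ in range(n)]
    for i, j, w in edges:
        nbrs[i].append((j, w))
        nbrs[j].append((i, w))
    for _ in range(iters):
        v = rng.randrange(n)
        sx = sum(w * math.cos(theta[u]) for u, w in nbrs[v])
        sy = sum(w * math.sin(theta[u]) for u, w in nbrs[v])
        if sx or sy:
            # Point away from the weighted neighbor direction.
            theta[v] = math.atan2(-sy, -sx)
    # Rounding: a random diameter splits the circle into two arcs.
    a = rng.uniform(0.0, math.pi)
    side = [0.0 <= (t - a) % (2.0 * math.pi) < math.pi for t in theta]
    cut = sum(w for i, j, w in edges if side[i] != side[j])
    return side, cut
```

As the abstract notes, this kind of heuristic trades the SDP's guaranteed bound for speed: it avoids the expensive semidefinite solve but no longer certifies an upper bound on the optimal cut value.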

Journal ArticleDOI
Amotz Bar-Noy1, Sudipto Guha1
TL;DR: This work considers the following fundamental scheduling problem, and gives constant factor approximation algorithms for four variants of the problem, depending on the type of the machines (identical vs. unrelated) and the weight of the jobs (identical vs. arbitrary).
Abstract: We consider the following fundamental scheduling problem. The input to the problem consists of n jobs and k machines. Each of the jobs is associated with a release time, a deadline, a weight, and a processing time on each of the machines. The goal is to find a nonpreemptive schedule that maximizes the weight of jobs that meet their respective deadlines. We give constant factor approximation algorithms for four variants of the problem, depending on the type of the machines (identical vs. unrelated) and the weight of the jobs (identical vs. arbitrary). All these variants are known to be NP-hard, and the two variants involving unrelated machines are also MAX-SNP hard. The specific results obtained are as follows: For identical job weights and unrelated machines: a greedy $2$-approximation algorithm. For identical job weights and k identical machines: the same greedy algorithm achieves a tight $\frac{(1+1/k)^k}{(1+1/k)^k-1}$ approximation factor. For arbitrary job weights and a single machine: an LP formulation achieves a 2-approximation for polynomially bounded integral input and a 3-approximation for arbitrary input. For unrelated machines, the factors are 3 and 4, respectively. For arbitrary job weights and k identical machines: the LP-based algorithm applied repeatedly achieves a $\frac{(1+1/k)^k}{(1+1/k)^k-1}$ approximation factor for polynomially bounded integral input and a $\frac{(1+1/2k)^k}{(1+1/2k)^k-1}$ approximation factor for arbitrary input. For arbitrary job weights and unrelated machines: a combinatorial $(3+2\sqrt{2} \approx 5.828)$-approximation algorithm.

Book ChapterDOI
Maxim Sviridenko1
27 May 2002
TL;DR: A new approximation algorithm for the metric uncapacitated facility location problem is designed; it is of LP rounding type and is based on a rounding technique developed in [5,6,7].
Abstract: We design a new approximation algorithm for the metric uncapacitated facility location problem. This algorithm is of LP rounding type and is based on a rounding technique developed in [5,6,7].

Journal ArticleDOI
TL;DR: Two simple randomized approximation algorithms are described, which are guaranteed to deliver feasible schedules with expected objective function value within factors of 1.7451 and 1.6853, respectively, of the optimum of two linear programming relaxations of the problem.
Abstract: We consider the scheduling problem of minimizing the average weighted completion time of n jobs with release dates on a single machine. We first study two linear programming relaxations of the problem, one based on a time-indexed formulation, the other on a completion-time formulation. We show their equivalence by proving that an O(n log n) greedy algorithm leads to optimal solutions to both relaxations. The proof relies on the notion of mean busy times of jobs, a concept which enhances our understanding of these LP relaxations. Based on the greedy solution, we describe two simple randomized approximation algorithms, which are guaranteed to deliver feasible schedules with expected objective function value within factors of 1.7451 and 1.6853, respectively, of the optimum. They are based on the concept of common and independent $\alpha$-points, respectively. The analysis implies in particular that the worst-case relative error of the LP relaxations is at most 1.6853, and we provide instances showing that it is at least $e/(e-1) \approx 1.5819$. Both algorithms may be derandomized; their deterministic versions run in O(n^2) time. The randomized algorithms also apply to the on-line setting, in which jobs arrive dynamically over time and one must decide which job to process without knowledge of jobs that will be released afterwards.

Book ChapterDOI
27 May 2002
TL;DR: Improved approximation algorithms for the MAX 2-SAT and MAX DI-CUT problems are obtained, which are essentially the best performance ratios that can be achieved using any combination of prerounding rotations and skewed distributions of hyperplanes, and even using more general families of rounding procedures.
Abstract: Improving and extending recent results of Matuura and Matsui, and less recent results of Feige and Goemans, we obtain improved approximation algorithms for the MAX 2-SAT and MAX DI-CUT problems. These approximation algorithms start by solving semidefinite programming relaxations of these problems. They then rotate the solution obtained, as suggested by Feige and Goemans. Finally, they round the rotated vectors using random hyperplanes chosen according to skewed distributions. The performance ratio obtained by the MAX 2-SAT algorithm is at least 0.940, while that obtained by the MAX DI-CUT algorithm is at least 0.874. We show that these are essentially the best performance ratios that can be achieved using any combination of prerounding rotations and skewed distributions of hyperplanes, and even using more general families of rounding procedures. The performance ratio obtained for the MAX 2-SAT problem is fairly close to the inapproximability bound of about 0.954 obtained by Hastad. The performance ratio obtained for the MAX DI-CUT problem is very close to the performance ratio of about 0.878 obtained by Goemans and Williamson for the MAX CUT problem.

Posted Content
Neal E. Young1
TL;DR: In this article, the authors explore how to avoid the time bottleneck for randomized rounding algorithms for packing and covering linear programs (either mixed integer linear programs or linear programs with no negative coefficients).
Abstract: Randomized rounding is a standard method, based on the probabilistic method, for designing combinatorial approximation algorithms. In Raghavan's seminal paper introducing the method (1988), he writes: "The time taken to solve the linear program relaxations of the integer programs dominates the net running time theoretically (and, most likely, in practice as well)." This paper explores how this bottleneck can be avoided for randomized rounding algorithms for packing and covering problems (linear programs, or mixed integer linear programs, having no negative coefficients). The resulting algorithms are greedy algorithms, and are faster and simpler to implement than standard randomized-rounding algorithms. This approach can also be used to understand Lagrangian-relaxation algorithms for packing/covering linear programs: such algorithms can be viewed as (derandomized) randomized-rounding schemes.
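Standard randomized rounding, the baseline this paper speeds up, can be illustrated on a covering problem. The sketch below assumes a fractional LP solution is already available (sidestepping the LP-solve bottleneck the abstract discusses); the ln|U| scaling and the retry loop are textbook choices, not this paper's algorithm.

```python
import math
import random

def round_cover(sets, x, universe, seed=0):
    """Randomized rounding for set cover (illustrative sketch).

    sets: list of sets over the universe; x: a fractional LP solution,
    one value per set. Include each set independently with probability
    min(1, x_i * ln|U|); repeat the experiment until the chosen sets
    cover the whole universe.
    """
    rng = random.Random(seed)
    boost = max(1.0, math.log(len(universe)))
    while True:
        chosen = [i for i, xi in enumerate(x)
                  if rng.random() < min(1.0, xi * boost)]
        covered = set()
        for i in chosen:
            covered |= sets[i]
        if covered >= universe:
            return chosen
```

The expected cost of the rounded solution is within an O(log|U|) factor of the LP value, which is the guarantee the greedy algorithms in this paper match without ever solving the LP.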

Journal ArticleDOI
01 May 2002
TL;DR: A constructive version of this theorem is presented here, with applications to approximation algorithms, and can be viewed as a generalization of randomized rounding.
Abstract: Let P be a linear relaxation of an integer polytope Z such that the integrality gap of P with respect to Z is at most r, as verified by a polytime heuristic A, which on any positive cost function c returns an integer solution (an extreme point of Z) whose cost is at most r times the optimal cost over P. Then for any point x* in P (a fractional solution), rx* dominates some convex combination of extreme points of Z. A constructive version of this theorem is presented here, with applications to approximation algorithms, and can be viewed as a generalization of randomized rounding.

Book
25 Feb 2002
TL;DR: This book introduces graphs, algorithms, and complexity theory, then develops exact and approximation algorithms for Steiner tree problems, randomized methods, limits of approximability, and geometric Steiner problems.
Abstract: 1 Basics I: Graphs.- 1.1 Introduction to graph theory.- 1.2 Excursion: Random graphs.- 2 Basics II: Algorithms.- 2.1 Introduction to algorithms.- 2.2 Excursion: Fibonacci heaps and amortized time.- 3 Basics III: Complexity.- 3.1 Introduction to complexity theory.- 3.2 Excursion: More NP-complete problems.- 4 Special Terminal Sets.- 4.1 The shortest path problem.- 4.2 The minimum spanning tree problem.- 4.3 Excursion: Matroids and the greedy algorithm.- 5 Exact Algorithms.- 5.1 The enumeration algorithm.- 5.2 The Dreyfus-Wagner algorithm.- 5.3 Excursion: Dynamic programming.- 6 Approximation Algorithms.- 6.1 A simple algorithm with performance ratio 2.- 6.2 Improving the time complexity.- 6.3 Excursion: Machine scheduling.- 7 More on Approximation Algorithms.- 7.1 Minimum spanning trees in hypergraphs.- 7.2 Improving the performance ratio I.- 7.3 Excursion: The complexity of optimization problems.- 8 Randomness Helps.- 8.1 Probabilistic complexity classes.- 8.2 Improving the performance ratio II.- 8.3 An almost always optimal algorithm.- 8.4 Excursion: Primality and cryptography.- 9 Limits of Approximability.- 9.1 Reducing optimization problems.- 9.2 APX-completeness.- 9.3 Excursion: Probabilistically checkable proofs.- 10 Geometric Steiner Problems.- 10.1 A characterization of rectilinear Steiner minimum trees.- 10.2 The Steiner ratios.- 10.3 An almost linear time approximation scheme.- 10.4 Excursion: The Euclidean Steiner problem.- Symbol Index.

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of nonpreemptive scheduling to minimize average (weighted) completion time, allowing for release dates, parallel machines, and precedence constraints.
Abstract: We consider the problem of nonpreemptive scheduling to minimize average (weighted) completion time, allowing for release dates, parallel machines, and precedence constraints. Recent work has led to constant-factor approximations for this problem based on solving a preemptive or linear programming relaxation and then using the solution to get an ordering on the jobs. We introduce several new techniques which generalize this basic paradigm. We use these ideas to obtain improved approximation algorithms for one-machine scheduling to minimize average completion time with release dates. In the process, we obtain an optimal randomized on-line algorithm for the same problem that beats a lower bound for deterministic on-line algorithms. We consider extensions to the case of parallel machine scheduling, and for this we introduce two new ideas: first, we show that a preemptive one-machine relaxation is a powerful tool for designing parallel machine scheduling algorithms that simultaneously produce good approximations and have small running times; second, we show that a nongreedy "rounding" of the relaxation yields better approximations than a greedy one. We also prove a general theorem relating the value of one-machine relaxations to that of the schedules obtained for the original m-machine problems. This theorem applies even when there are precedence constraints on the jobs. We apply this result to obtain improved approximation ratios for precedence graphs such as in-trees, out-trees, and series-parallel graphs.