
Showing papers on "Approximation algorithm published in 2005"


Journal ArticleDOI
TL;DR: This work explains how to obtain region-based free energy approximations that improve the Bethe approximation, and corresponding generalized belief propagation (GBP) algorithms, and describes empirical results showing that GBP can significantly outperform BP.
Abstract: Important inference problems in statistical physics, computer vision, error-correcting coding theory, and artificial intelligence can all be reformulated as the computation of marginal probabilities on factor graphs. The belief propagation (BP) algorithm is an efficient way to solve these problems that is exact when the factor graph is a tree, but only approximate when the factor graph has cycles. We show that BP fixed points correspond to the stationary points of the Bethe approximation of the free energy for a factor graph. We explain how to obtain region-based free energy approximations that improve the Bethe approximation, and corresponding generalized belief propagation (GBP) algorithms. We emphasize the conditions a free energy approximation must satisfy in order to be a "valid" or "maxent-normal" approximation. We describe the relationship between four different methods that can be used to generate valid approximations: the "Bethe method", the "junction graph method", the "cluster variation method", and the "region graph method". Finally, we explain how to tell whether a region-based approximation, and its corresponding GBP algorithm, is likely to be accurate, and describe empirical results showing that GBP can significantly outperform BP.

1,827 citations
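To make the message-passing updates concrete, here is a minimal sum-product BP sketch on a chain-structured pairwise MRF, checked against brute-force marginals. The toy model, sizes, and names are illustrative, not from the paper; BP is exact here because a chain is a tree.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 3                                     # 4 variables, 3 states each
psi = [rng.random((k, k)) + 0.1 for _ in range(n - 1)]  # pairwise potentials

# Forward/backward message passing (sum-product); exact on a chain/tree.
fwd = [np.ones(k)]
for t in range(n - 1):
    m = psi[t].T @ fwd[-1]
    fwd.append(m / m.sum())
bwd = [np.ones(k)]
for t in reversed(range(n - 1)):
    m = psi[t] @ bwd[0]
    bwd.insert(0, m / m.sum())
marginals = [f * b / (f * b).sum() for f, b in zip(fwd, bwd)]

# Brute-force check over all k**n joint configurations.
joint = np.ones([k] * n)
for t in range(n - 1):
    joint = joint * psi[t].reshape([k if i in (t, t + 1) else 1
                                    for i in range(n)])
joint /= joint.sum()
for t in range(n):
    axes = tuple(i for i in range(n) if i != t)
    assert np.allclose(joint.sum(axis=axes), marginals[t])
print("BP marginals match brute force")
```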


Journal ArticleDOI
TL;DR: The adaptive cross approximation algorithm is extended to electromagnetic compatibility-related problems of moderate electrical size, and it is concluded that for moderate electrical size problems the memory and CPU time requirements for the ACA algorithm scale as N^{4/3} log N.
Abstract: This paper presents the adaptive cross approximation (ACA) algorithm to reduce memory and CPU time overhead in the method of moments (MoM) solution of surface integral equations. The present algorithm is purely algebraic; hence, its formulation and implementation are integral equation kernel (Green's function) independent. The algorithm starts with a multilevel partitioning of the computational domain. The interactions of well-separated partitioning clusters are accounted for through a rank-revealing LU decomposition. The acceleration and memory savings of ACA come from the partial assembly of the rank-deficient interaction submatrices. It has been demonstrated that the ACA algorithm results in O(N log N) complexity (where N is the number of unknowns) when applied to static and electrically small electromagnetic problems. In this paper the ACA algorithm is extended to electromagnetic compatibility-related problems of moderate electrical size. Specifically, the ACA algorithm is used to study compact-range ground planes and electromagnetic interference and shielding in vehicles. Through numerical experiments, it is concluded that for moderate electrical size problems the memory and CPU time requirements for the ACA algorithm scale as N^{4/3} log N.

608 citations
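The algebraic core of cross approximation is easy to sketch. Below is a hedged, fully pivoted toy version (real ACA pivots partially and never forms the full residual), applied to a smooth kernel matrix; the kernel and sizes are our own choices, not the paper's MoM setting.

```python
import numpy as np

def aca(A, tol=1e-6, max_rank=50):
    """Greedy cross approximation of matrix A as U @ V (rank r)."""
    R = A.astype(float).copy()      # explicit residual: fine for a demo,
    U, V = [], []                   # real ACA only touches sampled entries
    for _ in range(max_rank):
        i, j = np.unravel_index(np.abs(R).argmax(), R.shape)   # pivot entry
        if abs(R[i, j]) < tol:
            break
        u = R[:, j] / R[i, j]
        v = R[i, :].copy()
        U.append(u); V.append(v)
        R -= np.outer(u, v)         # rank-one update of the residual
    return np.array(U).T, np.array(V)

# A smooth kernel on well-separated clusters is numerically low-rank.
x, y = np.linspace(0, 1, 200), np.linspace(5, 6, 200)
A = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))
U, V = aca(A)
print(U.shape[1], np.linalg.norm(A - U @ V) / np.linalg.norm(A, "fro"))
```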


Journal ArticleDOI
TL;DR: This paper shows that every efficient algorithm for the smallest grammar problem has approximation ratio at least 8569/8568 unless P=NP, and bounds approximation ratios for several of the best known grammar-based compression algorithms, including LZ78, BISECTION, SEQUENTIAL, LONGEST MATCH, GREEDY, and RE-PAIR.
Abstract: This paper addresses the smallest grammar problem: What is the smallest context-free grammar that generates exactly one given string σ? This is a natural question about a fundamental object connected to many fields such as data compression, Kolmogorov complexity, pattern identification, and addition chains. Due to the problem's inherent complexity, our objective is to find an approximation algorithm which finds a small grammar for the input string. We focus attention on the approximation ratio of the algorithm (and, implicitly, the worst case behavior) to establish provable performance guarantees and to address shortcomings in the classical measure of redundancy in the literature. Our first results concern the hardness of approximating the smallest grammar problem. Most notably, we show that every efficient algorithm for the smallest grammar problem has approximation ratio at least 8569/8568 unless P=NP. We then bound approximation ratios for several of the best known grammar-based compression algorithms, including LZ78, BISECTION, SEQUENTIAL, LONGEST MATCH, GREEDY, and RE-PAIR. Among these, the best upper bound we show is O(n^{1/2}). We finish by presenting two novel algorithms with exponentially better ratios of O(log^3 n) and O(log(n/m*)), where m* is the size of the smallest grammar for that input. The latter algorithm highlights a connection between grammar-based compression and LZ77.

457 citations
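One of the analyzed algorithms, RE-PAIR, is simple enough to sketch: repeatedly replace the most frequent adjacent pair of symbols with a fresh nonterminal. This toy version (names and details are ours) conveys the mechanism, without the paper's ratio analysis.

```python
from collections import Counter

def repair(s):
    """Toy RE-PAIR: returns the start sequence and the pair rules."""
    seq, rules, next_sym = list(s), {}, 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:                       # no pair worth a new rule
            break
        nt = f"R{next_sym}"; next_sym += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):                # left-to-right replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt); i += 2
            else:
                out.append(seq[i]); i += 1
        seq = out
    return seq, rules

print(repair("abababab"))   # ['R1', 'R1'] with R0 -> (a,b), R1 -> (R0,R0)
```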


Journal ArticleDOI
TL;DR: A factor-4 approximation for minimization on complete graphs and a factor-O(log n) approximation for general graphs are demonstrated, and the APX-hardness of minimization on complete graphs is proved.

399 citations


Journal ArticleDOI
TL;DR: A polynomial time approximation scheme (PTAS) for MKP, which appears to be the strongest special case of GAP that is not APX-hard, and a PTAS-preserving reduction from an arbitrary instance of MKP to an instance with O(log n) distinct sizes and profits.
Abstract: The multiple knapsack problem (MKP) is a natural and well-known generalization of the single knapsack problem and is defined as follows. We are given a set of $n$ items and $m$ bins (knapsacks) such that each item $i$ has a profit $p(i)$ and a size $s(i)$, and each bin $j$ has a capacity $c(j)$. The goal is to find a subset of items of maximum profit such that they have a feasible packing in the bins. MKP is a special case of the generalized assignment problem (GAP) where the profit and the size of an item can vary based on the specific bin that it is assigned to. GAP is APX-hard and a 2-approximation for it is implicit in the work of Shmoys and Tardos [Math. Program. A, 62 (1993), pp. 461-474]; thus far, this was also the best known approximation for MKP. The main result of this paper is a polynomial time approximation scheme (PTAS) for MKP. Apart from its inherent theoretical interest as a common generalization of the well-studied knapsack and bin packing problems, it appears to be the strongest special case of GAP that is not APX-hard. We substantiate this by showing that slight generalizations of MKP are APX-hard. Thus our results help demarcate the boundary at which instances of GAP become APX-hard. An interesting aspect of our approach is a PTAS-preserving reduction from an arbitrary instance of MKP to an instance with $O(\log n)$ distinct sizes and profits.

333 citations
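The paper's PTAS is intricate; as a hedged baseline only, a profit-density greedy with first-fit packing makes the problem statement concrete. This is not the paper's algorithm and carries no guarantee claimed there.

```python
def greedy_mkp(items, capacities):
    """items: list of (profit, size); returns item indices per bin."""
    remaining = list(capacities)
    bins = [[] for _ in capacities]
    # Consider items in decreasing profit/size order.
    order = sorted(range(len(items)), key=lambda i: -items[i][0] / items[i][1])
    for i in order:
        _, s = items[i]
        for j, cap in enumerate(remaining):    # first bin with room
            if s <= cap:
                bins[j].append(i)
                remaining[j] -= s
                break
    return bins

print(greedy_mkp([(10, 5), (7, 4), (6, 3)], [5, 7]))   # [[0], [2, 1]]
```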


Book ChapterDOI
22 Aug 2005
TL;DR: This paper provides a number of approximation algorithms with approximation ratios that depend on either the number of categories, the maximum number of points per category or both, and gives an experimental evaluation of the proposed algorithms using both synthetic and real datasets.
Abstract: In this paper we discuss a new type of query in Spatial Databases, called the Trip Planning Query (TPQ). Given a set of points of interest P in space, where each point belongs to a specific category, a starting point S and a destination E, TPQ retrieves the best trip that starts at S, passes through at least one point from each category, and ends at E. For example, a driver traveling from Boston to Providence might want to stop at a gas station, a bank and a post office on his way, and the goal is to provide him with the best possible route (in terms of distance, traffic, road conditions, etc.). The difficulty of this query lies in the existence of multiple choices per category. In this paper, we study fast approximation algorithms for TPQ in a metric space. We provide a number of approximation algorithms with approximation ratios that depend on either the number of categories, the maximum number of points per category, or both. Therefore, for different instances of the problem, we can choose the algorithm with the best approximation ratio, since they all run in polynomial time. Furthermore, we use some of the proposed algorithms to derive efficient heuristics for large datasets stored in external memory. Finally, we give an experimental evaluation of the proposed algorithms using both synthetic and real datasets.

312 citations
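In the spirit of the paper's nearest-neighbor style heuristics (not a verbatim implementation), a greedy walk that repeatedly jumps to the closest point of a not-yet-covered category looks like this; the coordinates and categories are invented.

```python
import math

def nn_trip(start, end, points):
    """points: list of ((x, y), category); greedy category-covering walk."""
    cur, trip = start, [start]
    todo = {c for _, c in points}          # categories still to cover
    while todo:
        pt, cat = min(((p, c) for p, c in points if c in todo),
                      key=lambda pc: math.dist(cur, pc[0]))
        trip.append(pt); todo.discard(cat); cur = pt
    trip.append(end)
    return trip

pts = [((1, 0), "gas"), ((2, 2), "bank"), ((0, 3), "post"), ((5, 5), "gas")]
print(nn_trip((0, 0), (6, 6), pts))
```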


Proceedings ArticleDOI
18 Mar 2005
TL;DR: A greedy pursuit algorithm called simultaneous orthogonal matching pursuit is presented, and it is proved that the algorithm calculates simultaneous approximations whose error is within a constant factor of the optimal simultaneous approximation error.
Abstract: A simple sparse approximation problem requests an approximation of a given input signal as a linear combination of T elementary signals drawn from a large, linearly dependent collection. An important generalization is simultaneous sparse approximation. Now one must approximate several input signals at once using different linear combinations of the same T elementary signals. This formulation appears, for example, when analyzing multiple observations of a sparse signal that have been contaminated with noise. A new approach to this problem is presented here: a greedy pursuit algorithm called simultaneous orthogonal matching pursuit. The paper proves that the algorithm calculates simultaneous approximations whose error is within a constant factor of the optimal simultaneous approximation error. This result requires that the collection of elementary signals be weakly correlated, a property that is also known as incoherence. Numerical experiments demonstrate that the algorithm often succeeds, even when the inputs do not meet the hypotheses of the proof.

301 citations
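The selection rule described above translates almost directly into code: pick the atom with the largest summed correlation magnitude across all residual channels, then refit jointly by least squares. A minimal sketch; the dictionary and dimensions are made up.

```python
import numpy as np

def somp(D, Y, T):
    """D: dictionary (d x N, unit columns); Y: signals (d x K); T atoms."""
    support, R = [], Y.copy()
    for _ in range(T):
        scores = np.abs(D.T @ R).sum(axis=1)   # joint correlation score
        scores[support] = -np.inf              # never re-pick an atom
        support.append(int(scores.argmax()))
        X, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        R = Y - D[:, support] @ X              # joint residual
    return support, X

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 256)); D /= np.linalg.norm(D, axis=0)
true = [3, 100, 200]
Y = D[:, true] @ rng.standard_normal((3, 5))   # 5 signals, shared support
print(sorted(somp(D, Y, 3)[0]))                # likely recovers `true`
```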


Posted Content
TL;DR: In this paper, it is shown how to approximately reconstruct a large matrix A from a random submatrix of the smallest possible size O(r log r), where r is the numerical rank of A; the numerical rank is always bounded by, and is a stable relaxation of, the rank.
Abstract: We study random submatrices of a large matrix A. We show how to approximately compute A from its random submatrix of the smallest possible size O(r log r) with a small error in the spectral norm, where r = ||A||_F^2 / ||A||_2^2 is the numerical rank of A. The numerical rank is always bounded by, and is a stable relaxation of, the rank of A. This yields an asymptotically optimal guarantee in an algorithm for computing low-rank approximations of A. We also prove asymptotically optimal estimates on the spectral norm and the cut-norm of random submatrices of A. The result for the cut-norm yields a slight improvement on the best known sample complexity for an approximation algorithm for MAX-2CSP problems. We use methods of Probability in Banach spaces, in particular the law of large numbers for operator-valued random variables.

276 citations
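The numerical rank r = ||A||_F^2 / ||A||_2^2 can be computed directly from its definition. The tiny experiment below (our own) shows why it is a stable relaxation of the rank: a negligible perturbation makes the exact rank jump to full, while r barely moves.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((500, 10)) @ rng.standard_normal((10, 500))
A += 1e-6 * rng.standard_normal((500, 500))    # tiny perturbation

# r = sum of squared singular values over the largest one squared.
r = np.linalg.norm(A, "fro") ** 2 / np.linalg.norm(A, 2) ** 2
print(r)                               # stays small (at most 10 here)
print(np.linalg.matrix_rank(A))        # jumps to 500 under the noise
```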


Proceedings ArticleDOI
20 Jun 2005
TL;DR: A two-step algorithm is proposed for solving the problem of clustering in domains where the affinity relations are not dyadic (pairwise), but rather triadic, tetradic or higher, which is an instance of the hypergraph partitioning problem.
Abstract: We consider the problem of clustering in domains where the affinity relations are not dyadic (pairwise), but rather triadic, tetradic or higher. The problem is an instance of the hypergraph partitioning problem. We propose a two-step algorithm for solving this problem. In the first step we use a novel scheme to approximate the hypergraph using a weighted graph. In the second step a spectral partitioning algorithm is used to partition the vertices of this graph. The algorithm is capable of handling hyperedges of all orders including order two, thus incorporating information of all orders simultaneously. We present a theoretical analysis that relates our algorithm to an existing hypergraph partitioning algorithm and explain the reasons for its superior performance. We report the performance of our algorithm on a variety of computer vision problems and compare it to several existing hypergraph partitioning algorithms.

276 citations
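A hedged sketch of the two-step pipeline, substituting plain clique expansion for the paper's approximation scheme, followed by spectral bisection via the Fiedler vector; the toy hypergraph is ours.

```python
import numpy as np

def spectral_bisect(hyperedges, n):
    W = np.zeros((n, n))
    for e in hyperedges:                 # clique expansion of each hyperedge
        for i in e:
            for j in e:
                if i != j:
                    W[i, j] += 1.0 / (len(e) - 1)
    L = np.diag(W.sum(1)) - W            # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]                 # eigenvector of 2nd-smallest value
    return fiedler > np.median(fiedler)

edges = [(0, 1, 2), (1, 2, 3), (4, 5, 6), (5, 6, 7), (3, 4)]
print(spectral_bisect(edges, 8))   # expected split: {0..3} vs {4..7}
```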


Journal ArticleDOI
TL;DR: A new fast approximation algorithm for the weighted clustering coefficient and the transitivity is presented, and a simple graph generator algorithm is given that works according to the preferential attachment rule but also generates graphs with adjustable clustering coefficient.

Abstract: Since its introduction in the year 1998 by Watts and Strogatz, the clustering coefficient has become a frequently used tool for analyzing graphs. In 2002 the transitivity was proposed by Newman, Watts and Strogatz as an alternative to the clustering coefficient. As many networks considered in complex systems are huge, the efficient computation of such network parameters is crucial. Several algorithms with polynomial running time can be derived from results known in graph theory. The main contribution of this work is a new fast approximation algorithm for the weighted clustering coefficient which also gives very efficient approximation algorithms for the clustering coefficient and the transitivity. Namely, we present an algorithm with running time in O(1) for the clustering coefficient, respectively with running time in O(n) for the transitivity. By an experimental study we demonstrate the performance of the proposed algorithms on real-world data as well as on generated graphs. Moreover we give a simple graph generator algorithm that works according to the preferential attachment rule but also generates graphs with adjustable clustering coefficient.

272 citations
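Sampling is what makes such estimates cheap. The sketch below (in the spirit of the paper, not its exact algorithm) estimates a clustering coefficient by testing random wedges for closure; the error depends on the sample count, not on the graph size.

```python
import random

def approx_clustering(adj, samples=10000):
    """Estimate a clustering coefficient by wedge sampling."""
    nodes = [v for v in adj if len(adj[v]) >= 2]
    closed = 0
    for _ in range(samples):
        v = random.choice(nodes)                 # (degree-weighting omitted)
        a, b = random.sample(sorted(adj[v]), 2)  # a random wedge a-v-b
        closed += b in adj[a]                    # is the wedge closed?
    return closed / samples

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(approx_clustering(adj))    # triangle 0-1-2 plus a pendant vertex
```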


Proceedings ArticleDOI
23 Oct 2005
TL;DR: An O(log OPT) approximation is obtained for a generalization of the orienteering problem in which the profit for visiting each node may vary arbitrarily with time, and the implications for the approximability of several basic optimization problems are interesting.

Abstract: Given an arc-weighted directed graph G = (V, A, ℓ) and a pair of nodes s, t, we seek to find an s-t walk of length at most B that maximizes some given function f of the set of nodes visited by the walk. The simplest case is when we seek to maximize the number of nodes visited: this is called the orienteering problem. Our main result is a quasi-polynomial time algorithm that yields an O(log OPT) approximation for this problem when f is a given submodular set function. We then extend it to the case when a node v is counted as visited only if the walk reaches v in its time window [R(v), D(v)]. We apply the algorithm to obtain several new results. First, we obtain an O(log OPT) approximation for a generalization of the orienteering problem in which the profit for visiting each node may vary arbitrarily with time. This captures the time window problem considered earlier for which, even in undirected graphs, the best approximation ratio known [Bansal, N et al. (2004)] is O(log^2 OPT). The second application is an O(log^2 k) approximation for the k-TSP problem in directed graphs (satisfying asymmetric triangle inequality). This is the first non-trivial approximation algorithm for this problem. The third application is an O(log^2 k) approximation (in quasi-poly time) for the group Steiner problem in undirected graphs where k is the number of groups. This improves earlier ratios (Garg, N et al.) by a logarithmic factor and almost matches the inapproximability threshold on trees (Halperin and Krauthgamer, 2003). This connection to group Steiner trees also enables us to prove that the problem we consider is hard to approximate to a ratio better than Ω(log^{1-ε} OPT), even in undirected graphs. Even though our algorithm runs in quasi-poly time, we believe that the implications for the approximability of several basic optimization problems are interesting.

20 Nov 2005
TL;DR: It is shown that the k-Anonymity problem is NP-hard even when the attribute values are ternary, and an O(k)-approximation algorithm for the problem is provided.
Abstract: We consider the problem of releasing a table containing personal records, while ensuring individual privacy and maintaining data integrity to the extent possible. One of the techniques proposed in the literature is k-anonymization. A release is considered k-anonymous if the information corresponding to any individual in the release cannot be distinguished from that of at least k-1 other individuals whose information also appears in the release. In order to achieve k-anonymization, some of the entries of the table are either suppressed or generalized (e.g. an Age value of 23 could be changed to the Age range 20-25). The goal is to lose as little information as possible while ensuring that the release is k-anonymous. This optimization problem is referred to as the k-Anonymity problem. We show that the k-Anonymity problem is NP-hard even when the attribute values are ternary and we are allowed only to suppress entries. On the positive side, we provide an O(k)-approximation algorithm for the problem. We also give improved positive results for the interesting cases with specific values of k. In particular, we give a 1.5-approximation algorithm for the special case of 2-Anonymity, and a 2-approximation algorithm for 3-Anonymity.
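The definition itself is easy to operationalize. A small helper (ours, not the paper's algorithm) that checks whether a release is k-anonymous, with "*" marking suppressed cells:

```python
from collections import Counter

def is_k_anonymous(rows, k):
    """True iff every quasi-identifier tuple occurs at least k times."""
    return min(Counter(map(tuple, rows)).values()) >= k

table = [["2*", "M", "021*"],
         ["2*", "M", "021*"],
         ["3*", "F", "021*"]]
print(is_k_anonymous(table, 2))   # False: the third row is unique
```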

Book ChapterDOI
13 Dec 2005
TL;DR: This paper designs a (Δ-1)-approximation algorithm for the MDAT problem, where Δ equals the maximum number of sensors within the transmission range of any sensor, and proves that this problem is NP-hard even when all sensors are deployed on a grid.

Abstract: Wireless sensor networks promise a new paradigm for gathering data via collaboration among sensors spreading over a large geometrical region. Many real-time applications impose stringent delay requirements and ask for time-efficient schedules of data aggregations in which sensed data at sensors are combined at intermediate sensors along the way towards the data sink. The Minimum Data Aggregation Time (MDAT) problem is to find the schedule that routes data appropriately and has the shortest time for all requested data to be aggregated to the data sink. In this paper we study the MDAT problem with uniform transmission range of all sensors. We assume that, in each time round, data sent by a sensor reaches exactly all sensors within its transmission range, and a sensor receives data if it is the only data that reaches the sensor in this time round. We first prove that this problem is NP-hard even when all sensors are deployed on a grid and data on all sensors are required to be aggregated to the data sink. We then design a (Δ-1)-approximation algorithm for the MDAT problem, where Δ equals the maximum number of sensors within the transmission range of any sensor. We also simulate the proposed algorithm and compare it with the existing algorithm. The obtained results show that our algorithm has much better performance in practice than the theoretically proved guarantee and outperforms the existing algorithm.

Proceedings ArticleDOI
22 May 2005
TL;DR: The algorithmic theory of vertex separators, and its relation to the embeddings of certain metric spaces, is developed, and an O(√log n) pseudo-approximation for finding balanced vertex separators in general graphs is exhibited.
Abstract: We develop the algorithmic theory of vertex separators, and its relation to the embeddings of certain metric spaces. Unlike in the edge case, we show that embeddings into L1 (and even Euclidean embeddings) are insufficient, but that the additional structure provided by many embedding theorems does suffice for our purposes. We obtain an O(√log n) approximation for min-ratio vertex cuts in general graphs, based on a new semidefinite relaxation of the problem, and a tight analysis of the integrality gap, which is shown to be Θ(√log n). We also prove various approximate max-flow/min-vertex-cut theorems, which in particular give a constant-factor approximation for min-ratio vertex cuts in any excluded-minor family of graphs. Previously, this was known only for planar graphs, and for general excluded-minor families the best-known ratio was O(log n). These results have a number of applications. We exhibit an O(√log n) pseudo-approximation for finding balanced vertex separators in general graphs. In fact, we achieve an approximation ratio of O(√log opt) where opt is the size of an optimal separator, improving over the previous best bound of O(log opt). Likewise, we obtain improved approximation ratios for treewidth: in any graph of treewidth k, we show how to find a tree decomposition of width at most O(k √log k), whereas previous algorithms yielded O(k log k). For graphs excluding a fixed graph as a minor (which includes, e.g., bounded genus graphs), we give a constant-factor approximation for the treewidth; this can be used to obtain the first polynomial-time approximation schemes for problems like minimum feedback vertex set and minimum connected dominating set in such graphs.

Proceedings ArticleDOI
13 Mar 2005
TL;DR: A general search scheme is introduced, of which flooding and random walks are special instances, and a small number of supernodes in an otherwise regular topology can offer sharp savings in the performance of search, both in the case of search by flooding and search by random walk, when it is combined with 1-step replication.
Abstract: We study hybrid search schemes for unstructured peer-to-peer networks. We quantify performance in terms of number of hits, network overhead, and response time. Our schemes combine flooding and random walks, look-ahead and replication. We consider both regular topologies and topologies with supernodes. We introduce a general search scheme, of which flooding and random walks are special instances, and show how to use locally maintained network information to improve the performance of searching. Our main findings are: (a) a small number of supernodes in an otherwise regular topology can offer sharp savings in the performance of search, both in the case of search by flooding and search by random walk, particularly when it is combined with 1-step replication. We quantify, analytically and experimentally, that the reason for these savings is that the search is biased towards nodes that yield more information. (b) There is a generalization of search, of which flooding and random walk are special instances, which may take further advantage of locally maintained network information, and yield better performance than both flooding and random walk in clustered topologies. The method determines edge criticality and is reminiscent of fundamental heuristics from the area of approximation algorithms.
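One ingredient above, a random walk with 1-step look-ahead (probing each visited node's neighbors), can be sketched in a few lines; the toy ring topology and the message accounting are our own simplifications.

```python
import random

def random_walk_search(adj, start, target, ttl=50):
    """Walk from `start`; at each node also check its neighbors."""
    v, msgs = start, 0
    for _ in range(ttl):
        msgs += len(adj[v])                  # look-ahead: probe neighbors
        if target == v or target in adj[v]:
            return True, msgs
        v = random.choice(list(adj[v]))      # take one random step
    return False, msgs

ring = {i: [(i - 1) % 20, (i + 1) % 20] for i in range(20)}
print(random_walk_search(ring, 0, 10))
```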

Proceedings ArticleDOI
24 Apr 2005
TL;DR: Simulation results show that the proposed DPF with GMM approximation algorithms provide robust localization and tracking performance at much reduced communication overhead.
Abstract: Two novel distributed particle filters with Gaussian mixture approximation are proposed to localize and track multiple moving targets in a wireless sensor network. The distributed particle filters run on a set of uncorrelated sensor cliques that are dynamically organized based on moving target trajectories. These two algorithms differ in how the distributed computing is performed. In the first algorithm, partial results are updated at each sensor clique sequentially based on partial results forwarded from a neighboring clique and local observations. In the second algorithm, all individual cliques compute partial estimates based only on local observations in parallel, and forward their estimates to a fusion center to obtain the final output. In order to conserve bandwidth and power, the local sufficient statistics (belief) are approximated by a low-dimensional Gaussian mixture model (GMM) before propagating among sensor cliques. We further prove that the posterior distribution estimated by the distributed particle filter converges almost surely to the posterior distribution estimated by a centralized Bayesian formula. Moreover, a data-adaptive application-layer communication protocol is proposed to facilitate sensor self-organization and collaboration. Simulation results show that the proposed DPF with GMM approximation algorithms provide robust localization and tracking performance at much reduced communication overhead.

Journal ArticleDOI
TL;DR: A general approach leading to a polynomial algorithm is presented for minimizing maximum power for a class of graph properties called monotone properties, and a new approximation algorithm is developed for the problem of minimizing the total power needed to obtain a 2-node-connected graph.
Abstract: Topology control problems are concerned with the assignment of power values to the nodes of an ad hoc network so that the power assignment leads to a graph topology satisfying some specified properties. This paper considers such problems under several optimization objectives, including minimizing the maximum power and minimizing the total power. A general approach leading to a polynomial algorithm is presented for minimizing maximum power for a class of graph properties called monotone properties. The difficulty of generalizing the approach to properties that are not monotone is discussed. Problems involving the minimization of total power are known to be NP-complete even for simple graph properties. A general approach that leads to an approximation algorithm for minimizing the total power for some monotone properties is presented. Using this approach, a new approximation algorithm for the problem of minimizing the total power for obtaining a 2-node-connected graph is developed. It is shown that this algorithm provides a constant performance guarantee. Experimental results from an implementation of the approximation algorithm are also presented.
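For a monotone property, the min-max power objective can be searched over the O(n^2) candidate edge power requirements, which is the shape of the general approach described above. A hedged sketch using connectivity as the monotone property; all names and the toy instance are ours.

```python
import math

def min_max_power(points):
    """Smallest r such that edges of length <= r connect all points."""
    n = len(points)
    cand = sorted(math.dist(points[i], points[j])
                  for i in range(n) for j in range(i + 1, n))

    def connected(r):                      # the property; monotone in r
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]; x = parent[x]
            return x
        for i in range(n):
            for j in range(i + 1, n):
                if math.dist(points[i], points[j]) <= r:
                    parent[find(i)] = find(j)
        return len({find(i) for i in range(n)}) == 1

    lo, hi = 0, len(cand) - 1              # binary search over candidates
    while lo < hi:
        mid = (lo + hi) // 2
        if connected(cand[mid]):
            hi = mid
        else:
            lo = mid + 1
    return cand[lo]

print(min_max_power([(0, 0), (1, 0), (3, 0)]))   # 2.0: the 1-3 gap decides
```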

Journal ArticleDOI
TL;DR: A new (randomized) reduction from the closest vector problem (CVP) to SVP that achieves some constant factor hardness is given, based on BCH codes, which enables boosting the hardness factor to 2^{(log n)^{1/2-ε}}.

Abstract: Let p > 1 be any fixed real. We show that assuming NP ⊄ RP, there is no polynomial time algorithm that approximates the Shortest Vector Problem (SVP) in the ℓ_p norm within a constant factor. Under the stronger assumption NP ⊄ RTIME(2^{poly(log n)}), we show that there is no polynomial-time algorithm with approximation ratio 2^{(log n)^{1/2-ε}}, where n is the dimension of the lattice and ε > 0 is an arbitrarily small constant. We first give a new (randomized) reduction from the Closest Vector Problem (CVP) to SVP that achieves some constant factor hardness. The reduction is based on BCH codes. Its advantage is that the SVP instances produced by the reduction behave well under the augmented tensor product, a new variant of the tensor product that we introduce. This enables us to boost the hardness factor to 2^{(log n)^{1/2-ε}}.

Journal ArticleDOI
TL;DR: Numerical results comparing several SCA algorithms show that SSR has the best trade-off between solution optimality and computation speed.
Abstract: The design of survivable mesh-based communication networks has received considerable attention in recent years. One task is to route backup paths and allocate spare capacity in the network to guarantee seamless communications services survivable to a set of failure scenarios. This is a complex multi-constraint optimization problem, called the spare capacity allocation (SCA) problem. This paper unravels the SCA problem structure using a matrix-based model, and develops a fast and efficient approximation algorithm, termed successive survivable routing (SSR). First, per-flow spare capacity sharing is captured by a spare provision matrix (SPM) method. The SPM has dimensions equal to the number of failure scenarios by the number of links. It is used by each demand to route the backup path and share spare capacity with other backup paths. Next, based on a special link metric calculated from the SPM, SSR iteratively routes/updates backup paths in order to minimize the cost of total spare capacity. A backup path can be further updated as long as it is not carrying any traffic. Furthermore, the SPM method and the SSR algorithm are generalized from protecting all single-link failures to arbitrary link failures, such as those generated by Shared Risk Link Groups, or all single-node failures. Numerical results comparing several SCA algorithms show that SSR has the best trade-off between solution optimality and computation speed.

Proceedings ArticleDOI
22 May 2005
TL;DR: Three approximation algorithms for the allocation problem in combinatorial auctions with complement-free bidders are exhibited, and lower bounds on the possible approximations achievable for these classes of bidders are proved.
Abstract: We exhibit three approximation algorithms for the allocation problem in combinatorial auctions with complement-free bidders. The running time of these algorithms is polynomial in the number of items m and in the number of bidders n, even though the "input size" is exponential in m. The first algorithm provides an O(log m) approximation. The second algorithm provides an O(√m) approximation in the weaker model of value oracles. This algorithm is also incentive compatible. The third algorithm provides an improved 2-approximation for the more restricted case of "XOS bidders", a class which strictly contains submodular bidders. We also prove lower bounds on the possible approximations achievable for these classes of bidders. These bounds are not tight and we leave the gaps as open problems.

Proceedings ArticleDOI
22 May 2005
TL;DR: This paper presents a monotone PTAS for the generalized assignment problem with any bounded number of parameters per agent, and shows that primal-dual greedy algorithms achieve almost the same approximation ratios for PIPs as randomized rounding.
Abstract: This paper deals with the design of efficiently computable incentive compatible, or truthful, mechanisms for combinatorial optimization problems with multi-parameter agents. We focus on approximation algorithms for NP-hard mechanism design problems. These algorithms need to satisfy certain monotonicity properties to ensure truthfulness. Since most of the known approximation techniques do not fulfill these properties, we study alternative techniques. Our first contribution is a quite general method to transform a pseudopolynomial algorithm into a monotone FPTAS. This can be applied to various problems such as knapsack, constrained shortest path, or job scheduling with deadlines. For example, the monotone FPTAS for the knapsack problem gives a very efficient, truthful mechanism for single-minded multi-unit auctions. The best previous result for such auctions was a 2-approximation. In addition, we present a monotone PTAS for the generalized assignment problem with any bounded number of parameters per agent. The most efficient way to solve packing integer programs (PIPs) is LP-based randomized rounding, which also is in general not monotone. We show that primal-dual greedy algorithms achieve almost the same approximation ratios for PIPs as randomized rounding. The advantage is that these algorithms are inherently monotone. This way, we can significantly improve the approximation ratios of truthful mechanisms for various fundamental mechanism design problems like single-minded combinatorial auctions (CAs), unsplittable flow routing and multicast routing. Our approximation algorithms can also be used for the winner determination in CAs with general bidders specifying their bids through an oracle.
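The standard profit-scaling FPTAS for knapsack shows the kind of pseudopolynomial-to-FPTAS transformation involved; the paper's extra work to make the rounding monotone is omitted in this hedged sketch.

```python
def knapsack_fptas(items, cap, eps):
    """items: (profit, size) pairs; returns a (1 - eps)-optimal subset."""
    n = len(items)
    mu = eps * max(p for p, _ in items) / n        # profit granularity
    scaled = [int(p // mu) for p, _ in items]      # rounded-down profits
    top = sum(scaled)
    INF = float("inf")
    best = [0] + [INF] * top        # best[q] = min size achieving profit q
    choice = [[] for _ in range(top + 1)]
    for i, (_, s) in enumerate(items):             # exact DP over profits
        for q in range(top, scaled[i] - 1, -1):
            if best[q - scaled[i]] + s < best[q]:
                best[q] = best[q - scaled[i]] + s
                choice[q] = choice[q - scaled[i]] + [i]
    q = max(q for q in range(top + 1) if best[q] <= cap)
    return choice[q]

print(knapsack_fptas([(60, 10), (100, 20), (120, 30)], 50, 0.1))  # [1, 2]
```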

Journal ArticleDOI
TL;DR: It is proved that if the d-regular multigraph does not contain more than ⌊d/2⌋ copies of any 2-cycle, then a similar decomposition into n² pairs of cycle covers can be found, where each 2-cycle occurs in at most one component of each pair.

Abstract: A directed multigraph is said to be d-regular if the indegree and outdegree of every vertex is exactly d. By Hall's theorem, one can represent such a multigraph as a combination of at most n² cycle covers, each taken with an appropriate multiplicity. We prove that if the d-regular multigraph does not contain more than ⌊d/2⌋ copies of any 2-cycle then we can find a similar decomposition into n² pairs of cycle covers where each 2-cycle occurs in at most one component of each pair. Our proof is constructive and gives a polynomial algorithm to find such a decomposition. Since our applications only need one such pair of cycle covers whose weight is at least the average weight of all pairs, we also give an alternative, simpler algorithm to extract a single such pair. This combinatorial theorem then comes in handy in rounding a fractional solution of an LP relaxation of the maximum Traveling Salesman Problem (TSP). The first stage of the rounding procedure obtains two cycle covers that do not share a 2-cycle with weight at least twice the weight of the optimal solution. Then we show how to extract a tour from the two cycle covers, whose weight is at least 2/3 of the weight of the longest tour. This improves upon the previous 5/8 approximation with a simpler algorithm. Utilizing a reduction from maximum TSP to the shortest superstring problem, we obtain a 2.5-approximation algorithm for the latter problem, which is again much simpler than the previous one. For minimum asymmetric TSP, the same technique gives two cycle covers, not sharing a 2-cycle, with weight at most twice the weight of the optimum. Assuming the triangle inequality, we then show how to obtain from this pair of cycle covers a tour whose weight is at most 0.842 log₂ n larger than optimal. This improves upon a previous approximation algorithm with approximation guarantee of 0.999 log₂ n. Other applications of the rounding procedure are approximation algorithms for maximum 3-cycle cover (factor 2/3, previously 3/5) and maximum asymmetric TSP with triangle inequality (factor 10/13, previously 3/4).

Journal ArticleDOI
TL;DR: Improved combinatorial approximation algorithms are presented for the uncapacitated facility location problem, and a variant of the capacitated facility location problem is also considered, with improved approximation algorithms presented for it.
Abstract: We present improved combinatorial approximation algorithms for the uncapacitated facility location problem. Two central ideas in most of our results are cost scaling and greedy improvement. We present a simple greedy local search algorithm which achieves an approximation ratio of $2.414+\epsilon$ in $\tilde{O}(n^2/\epsilon)$ time. This also yields a bicriteria approximation tradeoff of $(1+\gamma,1+2/\gamma)$ for facility cost versus service cost which is better than previously known tradeoffs and close to the best possible. Combining greedy improvement and cost scaling with a recent primal-dual algorithm for facility location due to Jain and Vazirani, we get an approximation ratio of $1.853$ in $\tilde{O}(n^3)$ time. This is very close to the approximation guarantee of the best known algorithm which is linear programming (LP)-based. Further, combined with the best known LP-based algorithm for facility location, we get a very slight improvement in the approximation factor for facility location, achieving $1.728$. We also consider a variant of the capacitated facility location problem and present improved approximation algorithms for this.
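Greedy local search with single-facility open/close moves, the backbone of the algorithms above, can be sketched as follows (without cost scaling, and with naive cost recomputation; the instance is invented).

```python
def local_search_ufl(fcost, dist):
    """fcost[i]: opening cost; dist[i][j]: facility i to client j."""
    m, n = len(fcost), len(dist[0])
    open_set = {0}                                  # start with facility 0

    def cost(S):                                    # facility + service cost
        return (sum(fcost[i] for i in S)
                + sum(min(dist[i][j] for i in S) for j in range(n)))

    improved = True
    while improved:                                 # repeat until local opt
        improved = False
        for i in range(m):
            S = open_set ^ {i}                      # toggle facility i
            if S and cost(S) < cost(open_set):
                open_set, improved = S, True
    return open_set, cost(open_set)

fcost = [3, 3, 10]
dist = [[1, 9, 9], [9, 1, 9], [2, 2, 2]]
print(local_search_ufl(fcost, dist))   # opens {0, 1}: cost 3+3+1+1+9 = 17
```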

Book ChapterDOI
08 Jun 2005
TL;DR: It is shown for the first time that the integrality gap of the SDP relaxation is precisely π/4, and that the unified analysis can be used to obtain an O(1/log n)-approximation algorithm for the continuous problem in which the objective matrix is not positive semidefinite.

Abstract: In this paper we study semidefinite programming (SDP) models for a class of discrete and continuous quadratic optimization problems in the complex Hermitian form. These problems capture a class of well-known combinatorial optimization problems, as well as problems in control theory. For instance, they include Max-3-Cut where the Laplacian matrix is positive semidefinite (in particular, some of the edge weights can be negative). We present a generic algorithm and a unified analysis of the SDP relaxations which allow us to obtain good approximation guarantees for our models. Specifically, we give a $(k\sin(\pi/k))^2/(4\pi)$-approximation algorithm for the discrete problem where the decision variables are k-ary and the objective matrix is positive semidefinite. To the best of our knowledge, this is the first known approximation result for this family of problems. For the continuous problem where the objective matrix is positive semidefinite, we obtain the well-known π/4 result due to [2], and independently, [12]. However, our techniques simplify their analyses and provide a unified framework for treating these problems. In addition, we show for the first time that the integrality gap of the SDP relaxation is precisely π/4. We also show that the unified analysis can be used to obtain an O(1/log n)-approximation algorithm for the continuous problem in which the objective matrix is not positive semidefinite.

Journal ArticleDOI
TL;DR: In this work, an extension of the k-center facility location problem in which centers are required to serve a minimum number of clients is studied, and three variants of this problem are shown to be NP-hard.

Proceedings ArticleDOI
23 Jan 2005
TL;DR: The notion of separators is replaced with a more powerful tool from the bidimensionality theory, enabling the first approach to apply to a much broader class of minimization problems than previously possible; and through the use of a structural backbone and thickening of layers it is demonstrated how the second approach can be applied to problems with a "nonlocal" structure.
Abstract: We demonstrate a new connection between fixed-parameter tractability and approximation algorithms for combinatorial optimization problems on planar graphs and their generalizations. Specifically, we extend the theory of so-called "bidimensional" problems to show that essentially all such problems have both subexponential fixed-parameter algorithms and PTASs. Bidimensional problems include e.g. feedback vertex set, vertex cover, minimum maximal matching, face cover, a series of vertex-removal problems, dominating set, edge dominating set, r-dominating set, diameter, connected dominating set, connected edge dominating set, and connected r-dominating set. We obtain PTASs for all of these problems in planar graphs and certain generalizations; of particular interest are our results for the two well-known problems of connected dominating set and general feedback vertex set for planar graphs and their generalizations, for which PTASs were not known to exist. Our techniques generalize and in some sense unify the two main previous approaches for designing PTASs in planar graphs, namely, the Lipton-Tarjan separator approach [FOCS'77] and the Baker layerwise decomposition approach [FOCS'83]. In particular, we replace the notion of separators with a more powerful tool from the bidimensionality theory, enabling the first approach to apply to a much broader class of minimization problems than previously possible; and through the use of a structural backbone and thickening of layers we demonstrate how the second approach can be applied to problems with a "nonlocal" structure.

Journal ArticleDOI
TL;DR: This work studies a fairness criterion, called the Max-Min Fairness problem, for k players who want to allocate among themselves m indivisible goods, and presents a simple 1/(m - k + 1) approximation algorithm which allocates to every player at least 1/k fraction of the value of all but the k - 1 heaviest items.
Abstract: The problem of allocating divisible goods has enjoyed a lot of attention in both mathematics (e.g. the cake-cutting problem) and economics (e.g. market equilibria). On the other hand, the natural requirement of indivisible goods has been somewhat neglected, perhaps because of its more complicated nature. In this work we study a fairness criterion, called the Max-Min Fairness problem, for k players who want to allocate among themselves m indivisible goods. Each player has a specified valuation function on the subsets of the goods and the goal is to split the goods between the players so as to maximize the minimum valuation. Viewing the problem from a game-theoretic perspective, we show that for two players and additive valuations the expected minimum of the (randomized) cut-and-choose mechanism is a 1/2-approximation of the optimum. To complement this result we show that no truthful mechanism can compute the exact optimum. We also consider the algorithmic perspective when the (true) additive valuation functions are part of the input. We present a simple 1/(m - k + 1) approximation algorithm which allocates to every player at least a 1/k fraction of the value of all but the k - 1 heaviest items. We also give an algorithm with additive error against the fractional optimum bounded by the value of the largest item. The two approximation algorithms are incomparable in the sense that there exist instances when one outperforms the other.
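Not the paper's mechanism, but a hedged round-robin illustration of the allocation setting with additive valuations: the currently poorest player repeatedly takes their most valuable remaining good.

```python
def round_robin_maxmin(values):
    """values[p][g]: player p's value for good g (additive valuations)."""
    k, m = len(values), len(values[0])
    totals = [0] * k
    bundles = [[] for _ in range(k)]
    left = set(range(m))
    while left:
        p = min(range(k), key=lambda q: totals[q])   # poorest player picks
        g = max(left, key=lambda h: values[p][h])    # their best remaining
        bundles[p].append(g)
        totals[p] += values[p][g]
        left.discard(g)
    return bundles, totals

print(round_robin_maxmin([[8, 1, 3], [4, 4, 4]]))
```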

Proceedings ArticleDOI
23 Oct 2005
TL;DR: This work uses a Lagrangian-relaxation based technique to derive faster algorithms for approximately solving several families of SDP relaxations and makes improvements in approximate eigenvalue computations by using random sampling.
Abstract: Semidefinite programming (SDP) relaxations appear in many recent approximation algorithms but the only general technique for solving such SDP relaxations is via interior point methods. We use a Lagrangian-relaxation based technique (modified from the papers of Plotkin, Shmoys, and Tardos (PST), and Klein and Lu) to derive faster algorithms for approximately solving several families of SDP relaxations. The algorithms are based upon some improvements to the PST ideas - which lead to new results even for their framework - as well as improvements in approximate eigenvalue computations by using random sampling.

Book
01 Jan 2005
TL;DR: This proceedings volume collects papers on algorithm design, analysis, and engineering, spanning approximation and online algorithms, computational geometry, graph algorithms, and scheduling, opening with work on designing reliable algorithms in unreliable memories.
Abstract: Designing Reliable Algorithms in Unreliable Memories.- From Balanced Graph Partitioning to Balanced Metric Labeling.- Fearful Symmetries: Quantum Computing, Factoring, and Graph Isomorphism.- Exploring an Unknown Graph Efficiently.- Online Routing in Faulty Meshes with Sub-linear Comparative Time and Traffic Ratio.- Heuristic Improvements for Computing Maximum Multicommodity Flow and Minimum Multicut.- Relax-and-Cut for Capacitated Network Design.- On the Price of Anarchy and Stability of Correlated Equilibria of Linear Congestion Games.- The Complexity of Games on Highly Regular Graphs.- Computing Equilibrium Prices: Does Theory Meet Practice?.- Efficient Exact Algorithms on Planar Graphs: Exploiting Sphere Cut Branch Decompositions.- An Algorithm for the SAT Problem for Formulae of Linear Length.- Linear-Time Enumeration of Isolated Cliques.- Finding Shortest Non-separating and Non-contractible Cycles for Topologically Embedded Graphs.- Delineating Boundaries for Imprecise Regions.- Exacus: Efficient and Exact Algorithms for Curves and Surfaces.- Min Sum Clustering with Penalties.- Improved Approximation Algorithms for Metric Max TSP.- Unbalanced Graph Cuts.- Low Degree Connectivity in Ad-Hoc Networks.- 5-Regular Graphs are 3-Colorable with Positive Probability.- Optimal Integer Alphabetic Trees in Linear Time.- Predecessor Queries in Constant Time?.- An Algorithm for Node-Capacitated Ring Routing.- On Degree Constrained Shortest Paths.- A New Template for Solving p-Median Problems for Trees in Sub-quadratic Time.- Roll Cutting in the Curtain Industry.- Space Efficient Algorithms for the Burrows-Wheeler Backtransformation.- Cache-Oblivious Comparison-Based Algorithms on Multisets.- Oblivious vs. Distribution-Based Sorting: An Experimental Evaluation.- Allocating Memory in a Lock-Free Manner.- Generating Realistic Terrains with Higher-Order Delaunay Triangulations.- I/O-Efficient Construction of Constrained Delaunay Triangulations.- Convex Hull and Voronoi Diagram of Additively Weighted Points.- New Tools and Simpler Algorithms for Branchwidth.- Treewidth Lower Bounds with Brambles.- Minimal Interval Completions.- A 2-Approximation Algorithm for Sorting by Prefix Reversals.- Approximating the 2-Interval Pattern Problem.- A Loopless Gray Code for Minimal Signed-Binary Representations.- Efficient Approximation Schemes for Geometric Problems?.- Geometric Clustering to Minimize the Sum of Cluster Sizes.- Approximation Schemes for Minimum 2-Connected Spanning Subgraphs in Weighted Planar Graphs.- Packet Routing and Information Gathering in Lines, Rings and Trees.- Jitter Regulation for Multiple Streams.- Efficient c-Oriented Range Searching with DOP-Trees.- Matching Point Sets with Respect to the Earth Mover's Distance.- Small Stretch Spanners on Dynamic Graphs.- An Experimental Study of Algorithms for Fully Dynamic Transitive Closure.- Experimental Study of Geometric t-Spanners.- Highway Hierarchies Hasten Exact Shortest Path Queries.- Preemptive Scheduling of Independent Jobs on Identical Parallel Machines Subject to Migration Delays.- Fairness-Free Periodic Scheduling with Vacations.- Online Bin Packing with Cardinality Constraints.- Fast Monotone 3-Approximation Algorithm for Scheduling Related Machines.- Engineering Planar Separator Algorithms.- Stxxl: Standard Template Library for XXL Data Sets.- Negative Cycle Detection Problem.- An Optimal Algorithm for Querying Priced Information: Monotone Boolean Functions and Game Trees.- Online View Maintenance Under a Response-Time Constraint.- Online Primal-Dual Algorithms for Covering and Packing Problems.- Efficient Algorithms for Shared Backup Allocation in Networks with Partial Information.- Using Fractional Primal-Dual to Schedule Split Intervals with Demands.- An Approximation Algorithm for the Minimum Latency Set Cover Problem.- Workload-Optimal Histograms on Streams.- Finding Frequent Patterns in a String in Sublinear Time.- Online Occlusion Culling.- Shortest Paths in Matrix Multiplication Time.- Computing Common Intervals of K Permutations, with Applications to Modular Decomposition of Graphs.- Greedy Routing in Tree-Decomposed Graphs.- Making Chord Robust to Byzantine Attacks.- Bucket Game with Applications to Set Multicover and Dynamic Page Migration.- Bootstrapping a Hop-Optimal Network in the Weak Sensor Model.- Approximating Integer Quadratic Programs and MAXCUT in Subdense Graphs.- A Cutting Planes Algorithm Based Upon a Semidefinite Relaxation for the Quadratic Assignment Problem.- Approximation Complexity of min-max (Regret) Versions of Shortest Path, Spanning Tree, and Knapsack.- Robust Approximate Zeros.- Optimizing a 2D Function Satisfying Unimodality Properties.

Proceedings ArticleDOI
22 May 2005
TL;DR: This work develops streaming (1 + ε)-approximation algorithms for k-median, k-means, MaxCut, maximum weighted matching (MaxWM), maximum travelling salesperson, maximum spanning tree, and average distance over dynamic geometric data streams.
Abstract: A dynamic geometric data stream consists of a sequence of m insert/delete operations of points from the discrete space {1,…,Δ}^d [26]. We develop streaming (1 + ε)-approximation algorithms for k-median, k-means, MaxCut, maximum weighted matching (MaxWM), maximum travelling salesperson (MaxTSP), maximum spanning tree (MaxST), and average distance over dynamic geometric data streams. Our algorithms maintain a small weighted set of points (a coreset) that, with probability 2/3, approximates the current point set with respect to the considered problem during the m insert/delete operations of the data stream. They use poly(ε^{-1}, log m, log Δ) space and update time per insert/delete operation for constant k and dimension d. Having a coreset, one only needs a fast approximation algorithm for the weighted problem to compute a solution quickly. In fact, even an exponential algorithm is sometimes feasible, as its running time may still be polynomial in n. For example, one can compute in poly(log n, exp(O(((1 + log(1/ε))/ε)^{d-1}))) time a solution to k-median and k-means [21], where n is the size of the current point set and k and d are constants. Finding an implicit solution to MaxCut can be done in poly(log n, exp((1/ε)^{O(1)})) time. For MaxST and average distance we require poly(log n, ε^{-1}) time, and for MaxWM we require O(n^3) time to do this.
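A toy, non-streaming rendition of the coreset idea: snap points to an ε-scaled grid and keep one weighted representative per occupied cell. The paper's dynamic-stream construction is far more involved; the cell width below is an illustrative choice.

```python
from collections import Counter

def grid_coreset(points, eps):
    """Weighted representatives of 2D points, one per grid cell."""
    cell = eps                                     # illustrative cell width
    snapped = Counter((round(x / cell), round(y / cell)) for x, y in points)
    return [((i * cell, j * cell), w) for (i, j), w in snapped.items()]

pts = [(0.01, 0.02), (0.02, 0.01), (5.0, 5.0)]
print(grid_coreset(pts, 0.1))   # two weighted points instead of three
```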