
Showing papers in "Journal of the ACM in 2011"


Journal ArticleDOI
TL;DR: In this paper, the authors prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm.
Abstract: This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
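
The convex program referred to above can be written as: minimize ||L||_* + λ||S||_1 subject to L + S = M. As an illustration only, here is a minimal NumPy sketch of a standard augmented-Lagrangian (ADMM-style) solver for this objective; the weight λ = 1/√max(m, n), the step size μ, and the stopping rule follow common practice in the robust-PCA literature and are assumptions of this sketch, not the exact algorithm discussed in the article.

import numpy as np

def shrink(X, tau):
    """Entrywise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular-value soft-thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def principal_component_pursuit(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Split M into low-rank L and sparse S by minimizing ||L||_* + lam*||S||_1
    subject to L + S = M, via a simple augmented-Lagrangian iteration."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))                   # weight suggested in the RPCA literature
    if mu is None:
        mu = (m * n) / (4.0 * np.abs(M).sum() + 1e-12)   # common heuristic step size
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(max_iter):
        L = svd_threshold(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        R = M - L - S                                    # constraint residual
        Y = Y + mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S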

6,783 citations


Journal ArticleDOI
TL;DR: The number of samples M required to guarantee that, with probability at least 1−δ, the relative error in the estimate is at most ε is analyzed; it is argued that such bounds are much more useful in applications than the variance.
Abstract: We analyze the convergence of randomized trace estimators. Starting in 1989, several algorithms have been proposed for estimating the trace of a matrix by (1/M) Σ_{i=1}^M z_iᵀ A z_i, where the z_i are random vectors; different estimators use different distributions for the z_i, all of which lead to E[(1/M) Σ_{i=1}^M z_iᵀ A z_i] = trace(A). These algorithms are useful in applications in which there is no explicit representation of A but rather an efficient method to compute zᵀAz given z. Existing results only analyze the variance of the different estimators. In contrast, we analyze the number of samples M required to guarantee that, with probability at least 1−δ, the relative error in the estimate is at most ε. We argue that such bounds are much more useful in applications than the variance. We found that these bounds rank the estimators differently than the variance; this suggests that minimum-variance estimators may not be the best. We also make two additional contributions to this area. The first is a specialized bound for projection matrices, whose trace (rank) needs to be computed in electronic structure calculations. The second is a new estimator that uses less randomness than all the existing estimators.
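
A minimal sketch of the generic estimator analyzed above, trace(A) ≈ (1/M) Σ z_iᵀ A z_i, assuming only that the quadratic form z ↦ zᵀAz can be evaluated; the Rademacher ("Hutchinson") and Gaussian choices of z are two of the distributions compared in the article. The explicit toy matrix at the end is purely illustrative.

import numpy as np

def randomized_trace(quadform, n, num_samples, dist="rademacher", rng=None):
    """Estimate trace(A) as (1/M) * sum_i z_i^T A z_i, where only the quadratic
    form z -> z^T A z is assumed to be available."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(num_samples):
        if dist == "rademacher":              # Hutchinson's estimator
            z = rng.choice([-1.0, 1.0], size=n)
        else:                                  # Gaussian estimator
            z = rng.standard_normal(n)
        total += quadform(z)
    return total / num_samples

# toy usage: A is given explicitly here only so the estimate can be checked
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)); A = A @ A.T
est = randomized_trace(lambda z: z @ A @ z, n=200, num_samples=1000)
print(est, np.trace(A))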

319 citations


Journal ArticleDOI
TL;DR: A general construction yields the first truthful mechanisms with approximation guarantees for a variety of multi-parameter domains; it can be seen as a way of exploiting VCG in a computationally tractable way even when the underlying social-welfare maximization problem is NP-hard.
Abstract: We give a general technique to obtain approximation mechanisms that are truthful in expectation. We show that for packing domains, any α-approximation algorithm that also bounds the integrality gap of the LP relaxation of the problem by α can be used to construct an α-approximation mechanism that is truthful in expectation. This immediately yields a variety of new and significantly improved results for various problem domains and, furthermore, yields truthful (in expectation) mechanisms with guarantees that match the best-known approximation guarantees when truthfulness is not required. In particular, we obtain the first truthful mechanisms with approximation guarantees for a variety of multiparameter domains. We obtain truthful (in expectation) mechanisms achieving approximation guarantees of O(√m) for combinatorial auctions (CAs), (1 + ε) for multiunit CAs with B = Ω(log m) copies of each item, and 2 for multiparameter knapsack problems (multi-unit auctions). Our construction is based on considering an LP relaxation of the problem and using the classic VCG mechanism to obtain a truthful mechanism in this fractional domain. We argue that the (fractional) optimal solution scaled down by α, where α is the integrality gap of the problem, can be represented as a convex combination of integer solutions, and by viewing this convex combination as specifying a probability distribution over integer solutions, we get a randomized, truthful-in-expectation mechanism. Our construction can be seen as a way of exploiting VCG in a computationally tractable way even when the underlying social-welfare maximization problem is NP-hard.
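
In symbols, the construction sketched above proceeds roughly as follows (a restatement of the abstract's steps; the notation is an assumption of this summary): solve the LP relaxation to obtain a fractional optimum x^{LP}, scale it down by the integrality gap α, and write

\[ \frac{x^{\mathrm{LP}}}{\alpha} \;=\; \sum_j \lambda_j\, x^{(j)}, \qquad \lambda_j \ge 0, \quad \sum_j \lambda_j = 1, \]

where each x^{(j)} is an integer solution. Outputting x^{(j)} with probability λ_j gives an allocation whose expectation equals x^{LP}/α; combined with the VCG mechanism run on the fractional domain, this yields a randomized mechanism that is an α-approximation and truthful in expectation.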

267 citations


Journal ArticleDOI
TL;DR: A compositional shape analysis is presented in which each procedure is analyzed independently of its callers; the analysis rests on a generalized form of abduction (inference of explanatory hypotheses) called bi-abduction.
Abstract: The accurate and efficient treatment of mutable data structures is one of the outstanding problem areas in automatic program verification and analysis. Shape analysis is a form of program analysis that attempts to infer descriptions of the data structures in a program, and to prove that these structures are not misused or corrupted. It is one of the more challenging and expensive forms of program analysis, due to the complexity of aliasing and the need to look arbitrarily deeply into the program heap. This article describes a method of boosting shape analyses by defining a compositional method, where each procedure is analyzed independently of its callers. The analysis algorithm uses a restricted fragment of separation logic, and assigns a collection of Hoare triples to each procedure; the triples provide an over-approximation of data structure usage. Our method brings the usual benefits of compositionality (increased potential to scale, the ability to deal with incomplete programs, and a graceful way to deal with imprecision) to shape analysis, for the first time. The analysis rests on a generalized form of abduction (inference of explanatory hypotheses), which we call bi-abduction. Bi-abduction displays abduction as a kind of inverse to the frame problem: it jointly infers anti-frames (missing portions of state) and frames (portions of state not touched by an operation), and is the basis of a new analysis algorithm. We have implemented our analysis and we report case studies on smaller programs to evaluate the quality of discovered specifications, and larger code bases (e.g., sendmail, an IMAP server, a Linux distribution) to illustrate the level of automation and scalability that we obtain from our compositional method. This article makes a number of specific technical contributions on proof procedures and analysis algorithms, but in a sense its more important contribution is holistic: the explanation and demonstration of how a massive increase in automation is possible using abductive inference.
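
The bi-abduction question described above is commonly written as an entailment with two unknowns; as a hedged illustration (the notation below is an assumption, not quoted from the article):

\[ \Delta \ast {?A} \;\vdash\; P \ast {?F} \]

Given the symbolic heap Δ at a call site and the precondition P of the callee, the analysis must infer an anti-frame ?A (the missing state that has to be added to Δ) and a frame ?F (the part of Δ the call does not touch). For example, with Δ = x ↦ 0 and P = x ↦ 0 ∗ y ↦ −, one solution is ?A = y ↦ − and ?F = emp.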

163 citations


Journal ArticleDOI
TL;DR: In the streaming model, this article shows how to perform several graph computations including estimating the probability distribution after a random walk of length l, the mixing time, and other related quantities such as the conductance of the graph.
Abstract: This article focuses on computations on large graphs (e.g., the web-graph) where the edges of the graph are presented as a stream. The objective in the streaming model is to use a small amount of memory (preferably sub-linear in the number of nodes n) and a small number of passes. In the streaming model, we show how to perform several graph computations including estimating the probability distribution after a random walk of length l, the mixing time M, and other related quantities such as the conductance of the graph. By applying our algorithm for computing the probability distribution on the web-graph, we can estimate the PageRank p of any node up to an additive error of √(εp) + ε in O(√(M/α)) passes and O(min(nα + (1/ε)√(M/α) + (1/ε)Mα, αn√(Mα) + (1/ε)√(M/α))) space, for any α ∈ (0,1]. Specifically, for ε = M/n and α = M^(−1/2), we can compute the approximate PageRank values in O(nM^(−1/4)) space and O(M^(3/4)) passes. In comparison, a standard implementation of the PageRank algorithm will take O(n) space and O(M) passes. We also give an approach to approximate the PageRank values in just O(1) passes, although this requires O(nM) space.
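
For intuition only, the quantity being approximated above, the distribution of the endpoint of a length-l random walk with PageRank-style resets, can be estimated in memory by simulating walks. The sketch below is not the multi-pass streaming algorithm of the article; the reset probability, the handling of dead ends, and the toy graph are assumptions of this illustration.

import random
from collections import Counter

def endpoint_distribution(adj, length, num_walks, alpha=0.15, rng=None):
    """Estimate the distribution of the endpoint of a length-`length` random walk
    with reset (teleport) probability alpha, by simulating num_walks walks.
    `adj` maps each node to its list of out-neighbors."""
    rng = rng or random.Random(0)
    nodes = list(adj)
    counts = Counter()
    for _ in range(num_walks):
        v = rng.choice(nodes)                        # uniform starting node
        for _ in range(length):
            if rng.random() < alpha or not adj[v]:   # teleport (also handles dead ends)
                v = rng.choice(nodes)
            else:
                v = rng.choice(adj[v])
        counts[v] += 1
    return {u: counts[u] / num_walks for u in nodes}

# toy usage on a 4-node graph
adj = {0: [1], 1: [2], 2: [0, 3], 3: [0]}
print(endpoint_distribution(adj, length=20, num_walks=5000))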

155 citations


Journal ArticleDOI
TL;DR: The Lovász Local Lemma (LLL) is a powerful tool that gives sufficient conditions under which all of a given set of "bad" events can be avoided with positive probability.
Abstract: The Lovász Local Lemma (LLL) is a powerful tool that gives sufficient conditions for avoiding all of a given set of “bad” events, with positive probability. A series of results have provided algorithms to efficiently construct structures whose existence is non-constructively guaranteed by the LLL, culminating in the recent breakthrough of Moser and Tardos [2010] for the full asymmetric LLL. We show that the output distribution of the Moser-Tardos algorithm well-approximates the conditional LLL-distribution, the distribution obtained by conditioning on all bad events being avoided. We show how a known bound on the probabilities of events in this distribution can be used for further probabilistic analysis and give new constructive and nonconstructive results. We also show that when an LLL application provides a small amount of slack, the number of resamplings of the Moser-Tardos algorithm is nearly linear in the number of underlying independent variables (not events!), and can thus be used to give efficient constructions in cases where the underlying proof applies the LLL to super-polynomially many events. Even in cases where finding a bad event that holds is computationally hard, we show that applying the algorithm to avoid a polynomial-sized “core” subset of bad events leads to a desired outcome with high probability. This is shown via a simple union bound over the probabilities of non-core events in the conditional LLL-distribution, and automatically leads to simple and efficient Monte Carlo (and in most cases RNC) algorithms. We demonstrate this idea on several applications. We give the first constant-factor approximation algorithm for the Santa Claus problem by making an LLL-based proof of Feige constructive. We provide Monte Carlo algorithms for acyclic edge coloring, nonrepetitive graph colorings, and Ramsey-type graphs. In all these applications, the algorithm falls directly out of the non-constructive LLL-based proof. Our algorithms are very simple, often provide better bounds than previous algorithms, and are in several cases the first efficient algorithms known. As a second type of application we show that the properties of the conditional LLL-distribution can be used in cases beyond the critical dependency threshold of the LLL: avoiding all bad events is impossible in these cases. As the first (even nonconstructive) result of this kind, we show that by sampling a selected smaller core from the LLL-distribution, we can avoid a fraction of bad events that is higher than the expectation. MAX k-SAT is an illustrative example of this.
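
The Moser-Tardos algorithm discussed above is strikingly simple: sample all variables, and while some bad event holds, resample the variables of one violated event. A minimal sketch for k-SAT, where each clause is a bad event over its variables (the clause encoding and the toy instance are assumptions of this illustration):

import random

def moser_tardos_sat(clauses, num_vars, rng=None, max_rounds=10**6):
    """Moser-Tardos resampling: start from a random assignment and, while some
    clause is violated, resample the variables of one violated clause uniformly.
    `clauses` is a list of tuples of nonzero ints (positive = variable, negative = its negation)."""
    rng = rng or random.Random(0)
    assign = [rng.random() < 0.5 for _ in range(num_vars + 1)]  # indices 1..num_vars are used

    def violated(clause):
        # a clause is violated iff every one of its literals is false under `assign`
        return all(assign[abs(l)] != (l > 0) for l in clause)

    for _ in range(max_rounds):
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assign[1:]
        for l in rng.choice(bad):             # resample the variables of one bad event
            assign[abs(l)] = rng.random() < 0.5
    return None  # did not converge within max_rounds

# toy usage: (x1 or x2 or not x3) and (not x1 or x3 or x4)
print(moser_tardos_sat([(1, 2, -3), (-1, 3, 4)], num_vars=4))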

122 citations


Journal ArticleDOI
TL;DR: It is shown that Steiner forest can be solved in polynomial time for series-parallel graphs (graphs of treewidth at most two) by a novel combination of dynamic programming and minimum cut computations, completing the thorough complexity study of Steiner forest in the range of bounded-treewidth graphs, planar graphs, and bounded-genus graphs.
Abstract: We give the first polynomial-time approximation scheme (PTAS) for the Steiner forest problem on planar graphs and, more generally, on graphs of bounded genus. As a first step, we show how to build a Steiner forest spanner for such graphs. The crux of the process is a clustering procedure called prize-collecting clustering that breaks down the input instance into separate subinstances which are easier to handle; moreover, the terminals in different subinstances are far from each other. Each subinstance has a relatively inexpensive Steiner tree connecting all its terminals, and the subinstances can be solved (almost) separately. Another building block is a PTAS for Steiner forest on graphs of bounded treewidth. Surprisingly, Steiner forest is NP-hard even on graphs of treewidth 3. Therefore, our PTAS for bounded-treewidth graphs needs a nontrivial combination of approximation arguments and dynamic programming on the tree decomposition. We further show that Steiner forest can be solved in polynomial time for series-parallel graphs (graphs of treewidth at most two) by a novel combination of dynamic programming and minimum cut computations, completing our thorough complexity study of Steiner forest in the range of bounded-treewidth graphs, planar graphs, and bounded-genus graphs.

114 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider Fisher and Arrow-Debreu markets under additively separable, piecewise-linear, concave utility functions and obtain the following results: if an equilibrium exists, there is one that is rational and can be written using polynomially many bits.
Abstract: We consider Fisher and Arrow-Debreu markets under additively separable, piecewise-linear, concave utility functions and obtain the following results. For both market models, if an equilibrium exists, there is one that is rational and can be written using polynomially many bits. There is no simple necessary and sufficient condition for the existence of an equilibrium: The problem of checking for existence of an equilibrium is NP-complete for both market models; the same holds for existence of an ε-approximate equilibrium, for ε = O(n^−5). Under standard (mild) sufficient conditions, the problem of finding an exact equilibrium is in PPAD for both market models. Finally, building on the techniques of Chen et al. [2009a], we prove that under these sufficient conditions, finding an equilibrium for Fisher markets is PPAD-hard.

100 citations


Journal ArticleDOI
TL;DR: The smoothed running time of the k-means method is settled, showing that the smoothed number of iterations is bounded by a polynomial in n and 1/σ, where σ is the standard deviation of the Gaussian perturbations.
Abstract: The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, the k-means method has been studied in the model of smoothed analysis. But even the smoothed analyses so far are unsatisfactory as the bounds are still super-polynomial in the number n of data points. In this article, we settle the smoothed running time of the k-means method. We show that the smoothed number of iterations is bounded by a polynomial in n and 1/σ, where σ is the standard deviation of the Gaussian perturbations. This means that if an arbitrary input data set is randomly perturbed, then the k-means method will run in expected polynomial time on that input set.
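
For reference, the method whose smoothed running time is analyzed is the standard Lloyd-style k-means iteration, and the smoothed-analysis model perturbs an arbitrary input with Gaussian noise of standard deviation σ. A minimal sketch of both (the data set, k, and σ below are illustrative assumptions):

import numpy as np

def kmeans(points, k, rng=None, max_iter=10**6):
    """Plain k-means (Lloyd's) method: assign each point to its nearest center,
    then move each center to the mean of its cluster, until nothing changes."""
    rng = np.random.default_rng(rng)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    labels = None
    for it in range(max_iter):
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            return centers, labels, it                 # converged
        labels = new_labels
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels, max_iter

# smoothed-analysis setup: an arbitrary input plus Gaussian perturbation of standard deviation sigma
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))                         # stand-in for an "arbitrary" input
sigma = 0.01
X_perturbed = X + sigma * rng.standard_normal(X.shape)
centers, labels, iters = kmeans(X_perturbed, k=5, rng=1)
print("iterations until convergence:", iters)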

97 citations


Journal ArticleDOI
TL;DR: This article discovers that, perhaps surprisingly, dynamic R/W storage is solvable in a completely asynchronous system: it presents DynaStore, an algorithm that solves this problem.
Abstract: This article deals with the emulation of atomic read/write (R/W) storage in dynamic asynchronous message passing systems. In static settings, it is well known that atomic R/W storage can be implemented in a fault-tolerant manner even if the system is completely asynchronous, whereas consensus is not solvable. In contrast, all existing emulations of atomic storage in dynamic systems rely on consensus or stronger primitives, leading to a popular belief that dynamic R/W storage is unattainable without consensus. In this article, we specify the problem of dynamic atomic read/write storage in terms of the interface available to the users of such storage. We discover that, perhaps surprisingly, dynamic R/W storage is solvable in a completely asynchronous system: we present DynaStore, an algorithm that solves this problem. Our result implies that atomic R/W storage is in fact easier than consensus, even in dynamic systems.

79 citations


Journal ArticleDOI
TL;DR: A deterministic algorithm is devised that employs Δ^(1+o(1)) colors and runs in polylogarithmic time; for graphs of arboricity a, it produces an O(a^(1+η))-coloring, for an arbitrarily small constant η > 0, in time O(log a log n).
Abstract: Consider an n-vertex graph G = (V, E) of maximum degree Δ, and suppose that each vertex v ∈ V hosts a processor. The processors are allowed to communicate only with their neighbors in G. The communication is synchronous, that is, it proceeds in discrete rounds. In the distributed vertex coloring problem, the objective is to color G with Δ + 1, or slightly more than Δ + 1, colors using as few rounds of communication as possible. (The number of rounds of communication will be henceforth referred to as running time.) Efficient randomized algorithms for this problem have been known for more than twenty years [Alon et al. 1986; Luby 1986]. Specifically, these algorithms produce a (Δ + 1)-coloring within O(log n) time, with high probability. On the other hand, the best known deterministic algorithm that requires polylogarithmic time employs O(Δ²) colors. This algorithm was devised in a seminal FOCS’87 paper by Linial [1987]. Its running time is O(log* n). In the same article, Linial asked whether one can color with significantly fewer than Δ² colors in deterministic polylogarithmic time. By now, this question of Linial has become one of the most central long-standing open questions in this area. In this article, we answer this question in the affirmative, and devise a deterministic algorithm that employs Δ^(1+o(1)) colors and runs in polylogarithmic time. Specifically, the running time of our algorithm is O(f(Δ) log Δ log n), for an arbitrarily slow-growing function f(Δ) = ω(1). We can also produce an O(Δ^(1+η))-coloring in O(log Δ log n) time, for an arbitrarily small constant η > 0, and an O(Δ)-coloring in O(Δ^ε log n) time, for an arbitrarily small constant ε > 0. Our results are, in fact, far more general than this. In particular, for a graph of arboricity a, our algorithm produces an O(a^(1+η))-coloring, for an arbitrarily small constant η > 0, in time O(log a log n).
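
The article's contribution is the deterministic algorithm; for contrast, the randomized baseline it cites [Luby 1986] is easy to simulate centrally: in each round every uncolored vertex proposes a color not used by its already-colored neighbors and keeps it if no uncolored neighbor proposed the same color, finishing in O(log n) rounds with high probability. The sketch below shows only that baseline, not the new deterministic algorithm; the toy graph is an assumption.

import random

def randomized_delta_plus_one_coloring(adj, rng=None):
    """Simulate the classic randomized distributed (Delta+1)-coloring: each round,
    every uncolored vertex proposes a random color not used by its already-colored
    neighbors and keeps it if no uncolored neighbor proposed the same color.
    Returns (coloring, number_of_rounds)."""
    rng = rng or random.Random(0)
    delta = max(len(adj[v]) for v in adj)
    palette = range(delta + 1)
    color = {v: None for v in adj}
    rounds = 0
    while any(color[v] is None for v in adj):
        rounds += 1
        proposal = {}
        for v in adj:
            if color[v] is None:
                free = [c for c in palette if all(color[u] != c for u in adj[v])]
                proposal[v] = rng.choice(free)
        for v, c in proposal.items():
            # keep the proposal unless an uncolored neighbor proposed the same color
            if all(proposal.get(u) != c for u in adj[v]):
                color[v] = c
    return color, rounds

# toy usage on a 5-cycle
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(randomized_delta_plus_one_coloring(adj))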

Journal ArticleDOI
TL;DR: To the best of the authors' knowledge, this is the first algorithm to solve Byzantine agreement against an adaptive adversary while requiring only o(n²) total bits of communication; its latency is polylogarithmic in n.
Abstract: We describe an algorithm for Byzantine agreement that is scalable in the sense that each processor sends only O(√n) bits, where n is the total number of processors. Our algorithm succeeds with high probability against an adaptive adversary, which can take over processors at any time during the protocol, up to the point of taking over arbitrarily close to a 1/3 fraction. We assume synchronous communication but a rushing adversary. Moreover, our algorithm works in the presence of flooding: processors controlled by the adversary can send out any number of messages. We assume the existence of private channels between all pairs of processors but make no other cryptographic assumptions. Finally, our algorithm has latency that is polylogarithmic in n. To the best of our knowledge, ours is the first algorithm to solve Byzantine agreement against an adaptive adversary, while requiring o(n²) total bits of communication.

Journal ArticleDOI
TL;DR: In this paper, the authors show feasibility of obtaining complete fairness without an honest majority in the two-party setting with respect to boolean AND/OR and Yao's "millionaires' problem".
Abstract: In the setting of secure two-party computation, two mutually distrusting parties wish to compute some function of their inputs while preserving, to the extent possible, various security properties such as privacy, correctness, and more. One desirable property is fairness, which guarantees, informally, that if one party receives its output, then the other party does too. Cleve [1986] showed that complete fairness cannot be achieved in general without an honest majority. Since then, the accepted folklore has been that nothing non-trivial can be computed with complete fairness in the two-party setting. We demonstrate that this folklore belief is false by showing completely fair protocols for various nontrivial functions in the two-party setting based on standard cryptographic assumptions. We first show feasibility of obtaining complete fairness when computing any function over polynomial-size domains that does not contain an “embedded XOR”; this class of functions includes boolean AND/OR as well as Yao’s “millionaires’ problem”. We also demonstrate feasibility for certain functions that do contain an embedded XOR, though we prove a lower bound showing that any completely fair protocol for such functions must have round complexity super-logarithmic in the security parameter. Our results demonstrate that the question of completely fair secure computation without an honest majority is far from closed.

Journal ArticleDOI
TL;DR: An overview is given of the geometric complexity theory (GCT) approach towards the P vs. NP and related problems, focusing on its main complexity-theoretic results, among them a Decomposition Theorem that splits these problems into positivity hypotheses in algebraic geometry and representation theory and easier hardness hypotheses.
Abstract: This article gives an overview of the geometric complexity theory (GCT) approach towards the P vs. NP and related problems focusing on its main complexity theoretic results. These are: (1) two concrete lower bounds, which are currently the best known lower bounds in the context of the P vs. NC and permanent vs. determinant problems, (2) the Flip Theorem, which formalizes the self-referential paradox in the P vs. NP problem, and (3) the Decomposition Theorem, which decomposes the arithmetic P vs. NP and permanent vs. determinant problems into subproblems without self-referential difficulty, consisting of positivity hypotheses in algebraic geometry and representation theory and easier hardness hypotheses.

Journal ArticleDOI
TL;DR: This work considers the quantum interactive proof system model of computation, which is the (classical) interactive proof system model’s natural quantum computational analogue, and concludes that quantum computing provides no increase in computational power whatsoever over classical computing in the context of interactive proof systems.
Abstract: This work considers the quantum interactive proof system model of computation, which is the (classical) interactive proof system model’s natural quantum computational analogue. An exact characterization of the expressive power of quantum interactive proof systems is obtained: the collection of computational problems having quantum interactive proof systems consists precisely of those problems solvable by deterministic Turing machines that use at most a polynomial amount of space (or, more succinctly, QIP = PSPACE). This characterization is proved through the use of a parallelized form of the matrix multiplicative weights update method, applied to a class of semidefinite programs that captures the computational power of quantum interactive proof systems. One striking implication of this characterization is that quantum computing provides no increase in computational power whatsoever over classical computing in the context of interactive proof systems, for it is well known that the collection of computational problems having classical interactive proof systems coincides with those problems solvable by polynomial-space computations.
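
The characterization above is credited to a parallelized form of the matrix multiplicative weights update method. The sketch below shows only the basic, sequential MMW update, playing the density matrix ρ_t ∝ exp(−η Σ_{s<t} M_s), to give the flavor of the technique; the specific semidefinite programs and the parallelization used to prove QIP = PSPACE are not reproduced, and the step size and toy losses are assumptions.

import numpy as np

def matrix_multiplicative_weights(loss_matrices, eta=0.1):
    """Basic matrix multiplicative weights update. At step t the algorithm plays the
    density matrix rho_t = exp(-eta * sum_{s<t} M_s) / trace(...), then observes the
    symmetric loss matrix M_t (assumed to have eigenvalues in [0, 1])."""
    n = loss_matrices[0].shape[0]
    cumulative = np.zeros((n, n))
    densities = []
    for M in loss_matrices:
        # matrix exponential via eigendecomposition of the (symmetric) cumulative loss
        w, V = np.linalg.eigh(-eta * cumulative)
        W = (V * np.exp(w)) @ V.T
        densities.append(W / np.trace(W))
        cumulative = cumulative + M
    return densities

# toy usage with random PSD loss matrices normalized to have spectral norm <= 1
rng = np.random.default_rng(0)
losses = []
for _ in range(5):
    A = rng.standard_normal((4, 4)); A = A @ A.T
    losses.append(A / np.linalg.norm(A, 2))
print(matrix_multiplicative_weights(losses)[-1])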

Journal ArticleDOI
TL;DR: This algorithm is a simple, randomized, data-oblivious version of the Shellsort algorithm that always runs in O(n log n) time and succeeds in sorting any given input permutation with very high probability.
Abstract: In this article, we describe a randomized Shellsort algorithm. This algorithm is a simple, randomized, data-oblivious version of the Shellsort algorithm that always runs in O(n log n) time and succeeds in sorting any given input permutation with very high probability. Taken together, these properties imply applications in the design of new efficient privacy-preserving computations based on the secure multiparty computation (SMC) paradigm. In addition, by a trivial conversion of this Monte Carlo algorithm to its Las Vegas equivalent, one gets the first version of Shellsort with a running time that is provably O(n log n) with very high probability.
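
The data-oblivious building block behind the randomized Shellsort described above is a region compare-exchange: repeatedly draw a random perfect matching between two equal-size regions and compare-exchange each matched pair, so that the sequence of comparisons never depends on the data. A minimal sketch of that primitive alone (the repetition count and toy array are assumptions; the full schedule of passes over geometrically shrinking regions is in the article and is not reproduced here):

import random

def compare_exchange(a, i, j):
    """Data-oblivious step: put positions i and j of a into sorted order."""
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def region_compare_exchange(a, region1, region2, repetitions=4, rng=None):
    """Randomized region compare-exchange: `repetitions` times, draw a random perfect
    matching between the index lists region1 and region2 (of equal length) and
    compare-exchange every matched pair.  Which positions get compared depends only
    on the random matchings, never on the values stored in a."""
    rng = rng or random.Random(0)
    assert len(region1) == len(region2)
    for _ in range(repetitions):
        matching = list(region2)
        rng.shuffle(matching)
        for i, j in zip(region1, matching):
            compare_exchange(a, i, j)

# toy usage: push small elements of the right half toward the left half
a = [9, 7, 8, 6, 1, 3, 0, 2]
region_compare_exchange(a, region1=[0, 1, 2, 3], region2=[4, 5, 6, 7])
print(a)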

Journal ArticleDOI
TL;DR: Algorithms are presented for binary XPath queries that do a precomputation on the document and then output the selected pairs with constant delay.
Abstract: We consider a fragment of XPath 1.0, where attribute and text values may be compared. We show that for any unary query v in this fragment, the set of nodes that satisfy the query in a document t can be calculated in time O(|v|³ · |t|). We show that for a query in a bigger fragment with Kleene star allowed, the same can be done in time O(2^O(|v|) · |t|) or in time O(|v|³ · |t| · log |t|). Finally, we present algorithms for binary queries of XPath, which do a precomputation on the document and then output the selected pairs with constant delay.

Journal ArticleDOI
TL;DR: A reduction from DTs to nearest-neighbor graphs is described that relies on a new variant of randomized incremental constructions using dependent sampling; the results generalize to higher dimensions.
Abstract: We present several results about Delaunay triangulations (DTs) and convex hulls in transdichotomous and hereditary settings: (i) the DT of a planar point set can be computed in expected time O(sort(n)) on a word RAM, where sort(n) is the time to sort n numbers. We assume that the word RAM supports the shuffle operation in constant time; (ii) if we know the ordering of a planar point set in x- and in y-direction, its DT can be found by a randomized algebraic computation tree of expected linear depth; (iii) given a universe U of points in the plane, we construct a data structure D for Delaunay queries: for any P ⊆ U, D can find the DT of P in expected time O(|P| log log |U|); (iv) given a universe U of points in 3-space in general convex position, there is a data structure D for convex hull queries: for any P ⊆ U, D can find the convex hull of P in expected time O(|P| (log log |U|)²); (v) given a convex polytope in 3-space with n vertices which are colored with χ ≥ 2 colors, we can split it into the convex hulls of the individual color classes in expected time O(n (log log n)²). The results (i)-(iii) generalize to higher dimensions, where the expected running time now also depends on the complexity of the resulting DT. We need a wide range of techniques. Most prominently, we describe a reduction from DTs to nearest-neighbor graphs that relies on a new variant of randomized incremental constructions using dependent sampling.

Journal ArticleDOI
TL;DR: This work develops a framework for data exchange over probabilistic databases and makes a case for its coherence and robustness; it also shows that the framework and results easily and completely generalize to allow not only the data but also the schema mapping itself to be probabilistic.
Abstract: The work reported here lays the foundations of data exchange in the presence of probabilistic data. This requires rethinking the very basic concepts of traditional data exchange, such as solution, universal solution, and the certain answers of target queries. We develop a framework for data exchange over probabilistic databases, and make a case for its coherence and robustness. This framework applies to arbitrary schema mappings, and finite or countably infinite probability spaces on the source and target instances. After establishing this framework and formulating the key concepts, we study the application of the framework to a concrete and practical setting where probabilistic databases are compactly encoded by means of annotations formulated over random Boolean variables. In this setting, we study the problems of testing for the existence of solutions and universal solutions, materializing such solutions, and evaluating target queries (for unions of conjunctive queries) in both the exact sense and the approximate sense. For each of the problems, we carry out a complexity analysis based on properties of the annotation, for various classes of dependencies. Finally, we show that the framework and results easily and completely generalize to allow not only the data, but also the schema mapping itself to be probabilistic.

Journal ArticleDOI
TL;DR: The first nonconstant inapproximability result is shown, almost matching the best-known approximation algorithm for acyclic job shops; it is also shown that the problem with two machines and the preemptive variant with three machines admit no PTAS.
Abstract: We consider several variants of the job shop problem, which is a fundamental and classical problem in scheduling. The currently best approximation algorithms have worse than logarithmic performance guarantees, but the only previously known inapproximability result says that it is NP-hard to approximate job shops within a factor less than 5/4. Closing this big approximability gap is a well-known and long-standing open problem. This article closes many gaps in our understanding of the hardness of this problem and answers several open questions in the literature. We show the first nonconstant inapproximability result that almost matches the best-known approximation algorithm for acyclic job shops. The same bounds hold for the general version of flow shops, where jobs are not required to be processed on each machine. Similar inapproximability results are obtained when the objective is to minimize the sum of completion times. It is also shown that the problem with two machines and the preemptive variant with three machines have no PTAS.

Journal ArticleDOI
TL;DR: This work introduces the Linear Programming Approach, a framework that allows the design of efficient algorithms for evaluating functions and shows how to employ the extended LPA to design a polynomial-time optimal algorithm for the class of monotone Boolean functions representable by threshold trees.
Abstract: Let f be a function on a set of variables V. For each x ∈ V, let c(x) be the cost of reading the value of x. An algorithm for evaluating f is a strategy for adaptively identifying and reading a set of variables U ⊆ V whose values uniquely determine the value of f. We are interested in finding algorithms which minimize the cost incurred to evaluate f in the above sense. Competitive analysis is employed to measure the performance of the algorithms. We address two variants of the above problem. We consider the basic model in which the evaluation algorithm knows the cost c(x), for each x ∈ V. We also study a novel model where the costs of the variables are not known in advance and some preemption is allowed in the reading operations. This model has applications, for example, when reading a variable coincides with obtaining the output of a job on a CPU and the cost is the CPU time. For the model where the costs of the variables are known, we present a polynomial-time algorithm with the best possible competitive ratio γ_c(f) for each function f that is representable by a threshold tree and for each fixed cost function c(·). Remarkably, the best previously known result for the same class of functions is a pseudo-polynomial algorithm with competitiveness 2γ_c(f). Still in the same model, we introduce the Linear Programming Approach (LPA), a framework that allows the design of efficient algorithms for evaluating functions. We show that different implementations of this approach lead in general to the best algorithms known so far, and in many cases to optimal algorithms, for different classes of functions considered before in the literature. Via the LPA, we are able to determine exactly the optimal extremal competitiveness of monotone Boolean functions. Remarkably, the upper bound which leads to this result holds for a much broader class of functions, which also includes the whole set of Boolean functions. We also show how to extend the LPA (together with these results) to the model where the costs of the variables are not known beforehand. In particular, we show how to employ the extended LPA to design a polynomial-time optimal (with respect to competitiveness) algorithm for the class of monotone Boolean functions representable by threshold trees.

Journal ArticleDOI
TL;DR: This work investigates a new class of geometric problems based on the idea of online error correction and provides upper and lower bounds on the complexity of online reconstruction for convexity in 2D and 3D.
Abstract: We investigate a new class of geometric problems based on the idea of online error correction. Suppose one is given access to a large geometric dataset through a query mechanism; for example, the dataset could be a terrain and a query might ask for the coordinates of a particular vertex or for the edges incident to it. Suppose, in addition, that the dataset satisfies some known structural property P (for example, monotonicity or convexity) but that, because of errors and noise, the queries occasionally provide answers that violate P. Can one design a filter that modifies the query's answers so that (i) the output satisfies P; (ii) the amount of data modification is minimized? We provide upper and lower bounds on the complexity of online reconstruction for convexity in 2D and 3D.