
Showing papers in "Journal of the ACM in 2000"


Journal ArticleDOI
TL;DR: In this paper, the authors show that alternating tree automata are the key to a comprehensive automata-theoretic framework for branching temporal logics, yielding not only optimal decision procedures, as shown by Muller et al., but also optimal model-checking algorithms.
Abstract: Translating linear temporal logic formulas to automata has proven to be an effective approach for implementing linear-time model-checking, and for obtaining many extensions and improvements to this verification method. On the other hand, for branching temporal logic, automata-theoretic techniques have long been thought to introduce an exponential penalty, making them essentially useless for model-checking. Recently, Bernholtz and Grumberg [1993] have shown that this exponential penalty can be avoided, though they did not match the linear complexity of non-automata-theoretic algorithms. In this paper, we show that alternating tree automata are the key to a comprehensive automata-theoretic framework for branching temporal logics. Not only can they be used to obtain optimal decision procedures, as was shown by Muller et al., but, as we show here, they also make it possible to derive optimal model-checking algorithms. Moreover, the simple combinatorial structure that emerges from the automata-theoretic approach opens up new possibilities for the implementation of branching-time model checking and has enabled us to derive improved space complexity bounds for this long-standing problem.
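
To make the contrast concrete, here is a minimal sketch (toy four-state Kripke structure, made up for illustration) of the classical fixpoint labeling algorithm for the branching-time operators EX, EU, and EG, the linear-time non-automata baseline that the automata-theoretic framework is measured against; it is not the alternating-automata construction itself:

    def EX(succ, S):                   # states with at least one successor in S
        return {u for u in succ if succ[u] & S}

    def EU(succ, A, B):                # least fixpoint computing E[A U B]
        Z = set(B)
        while True:
            Z2 = Z | (A & EX(succ, Z))
            if Z2 == Z:
                return Z
            Z = Z2

    def EG(succ, A):                   # greatest fixpoint computing EG A
        Z = set(A)
        while True:
            Z2 = A & EX(succ, Z)
            if Z2 == Z:
                return Z
            Z = Z2

    succ = {0: {1}, 1: {2}, 2: {0, 2}, 3: {3}}   # toy transition relation
    p, q = {0, 1, 2}, {2}                        # toy atomic propositions
    print(sorted(EU(succ, p, q)))                # E[p U q] -> [0, 1, 2]
    print(sorted(EG(succ, p)))                   # EG p     -> [0, 1, 2]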

738 citations


Journal ArticleDOI
TL;DR: The performance of an on-line scheduler in best-effort real-time scheduling can be significantly improved if the system is designed in such a way that the laxity of every job is proportional to its length.
Abstract: We introduce resource augmentation as a method for analyzing online scheduling problems. In resource augmentation analysis the on-line scheduler is given more resources, say faster processors or more processors, than the adversary. We apply this analysis to two well-known on-line scheduling problems, the classic uniprocessor CPU scheduling problem 1|r_i, pmtn|Σ F_i, and the best-effort firm real-time scheduling problem 1|r_i, pmtn|Σ w_i(1 − U_i). It is known that there are no constant competitive nonclairvoyant on-line algorithms for these problems. We show that there are simple on-line scheduling algorithms for these problems that are constant competitive if the online scheduler is equipped with a slightly faster processor than the adversary. Thus, a moderate increase in processor speed effectively gives the on-line scheduler the power of clairvoyance. Furthermore, the on-line scheduler can be constant competitive on all inputs that are not closely correlated with processor speed. We also show that the performance of an on-line scheduler in best-effort real-time scheduling can be significantly improved if the system is designed in such a way that the laxity of every job is proportional to its length.
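
As a rough illustration of the resource-augmentation idea (a toy simulation with made-up jobs, not the paper's analysis), the sketch below compares a nonclairvoyant scheduler, Round-Robin, which splits the processor evenly because it cannot see job lengths, running at speed 2 against clairvoyant SRPT at speed 1, on total flow time:

    def total_flow_time(jobs, policy, speed, dt=0.01):
        # jobs: list of (release_time, processing_requirement)
        release = [r for r, _ in jobs]
        remaining = [p for _, p in jobs]
        finish = [None] * len(jobs)
        t = 0.0
        while any(f is None for f in finish):
            active = [i for i in range(len(jobs))
                      if release[i] <= t and finish[i] is None]
            for i, frac in policy(active, remaining).items():
                remaining[i] -= speed * frac * dt    # run job i for this step
                if remaining[i] <= 1e-9:
                    finish[i] = t + dt
            t += dt
        return sum(f - r for f, r in zip(finish, release))

    def round_robin(active, remaining):              # nonclairvoyant
        return {i: 1.0 / len(active) for i in active} if active else {}

    def srpt(active, remaining):                     # clairvoyant benchmark
        return {min(active, key=lambda i: remaining[i]): 1.0} if active else {}

    jobs = [(0.0, 3.0), (0.1, 1.0), (0.2, 1.0), (0.3, 1.0)]
    opt = total_flow_time(jobs, srpt, speed=1.0)
    rr = total_flow_time(jobs, round_robin, speed=2.0)   # speed-2 augmentation
    print(f"SRPT at speed 1: {opt:.2f}   RR at speed 2: {rr:.2f}")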

477 citations


Journal ArticleDOI
TL;DR: A deterministic algorithm is presented for computing a minimum spanning tree of a connected graph; it is comparison-based, uses pointers rather than arrays, and makes no numeric assumptions on the edge costs.
Abstract: A deterministic algorithm for computing a minimum spanning tree of a connected graph is presented. Its running time is O(m α(m, n)), where α is the classical functional inverse of Ackermann's function and n (respectively, m) is the number of vertices (respectively, edges). The algorithm is comparison-based: it uses pointers, not arrays, and it makes no numeric assumptions on the edge costs.
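
For contrast with the O(m α(m, n)) result, here is the textbook comparison-based MST algorithm (Kruskal with union-find, not the paper's algorithm); note that edge costs are only ever compared, never used arithmetically:

    def mst_kruskal(n, edges):
        # edges: list of (cost, u, v); vertices are 0 .. n-1
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]        # path halving
                x = parent[x]
            return x
        tree = []
        for cost, u, v in sorted(edges):             # costs are only compared
            ru, rv = find(u), find(v)
            if ru != rv:                             # u, v in different components
                parent[ru] = rv
                tree.append((cost, u, v))
        return tree

    print(mst_kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2)]))
    # [(1, 0, 1), (2, 1, 2), (3, 2, 3)]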

351 citations


Journal ArticleDOI
TL;DR: The primary goal is to make abstract interpretations complete by minimally extending or restricting the underlying abstract domains; constructive characterizations are provided for the least complete extensions and the greatest complete restrictions of abstract domains.
Abstract: Completeness is an ideal, although uncommon, feature of abstract interpretations, formalizing the intuition that, relatively to the properties encoded by the underlying abstract domains, there is no loss of information accumulated in abstract computations. Thus, complete abstract interpretations can be rightly understood as optimal. We deal with both pointwise completeness, involving generic semantic operations, and (least) fixpoint completeness. Completeness and fixpoint completeness are shown to be properties that depend on the underlying abstract domains only. Our primary goal is then to solve the problem of making abstract interpretations complete by minimally extending or restricting the underlying abstract domains. Under the weak and reasonable hypothesis of dealing with continuous semantic operations, we provide constructive characterizations for the least complete extensions and the greatest complete restrictions of abstract domains. As far as fixpoint completeness is concerned, for merely monotone semantic operators, the greatest restrictions of abstract domains are constructively characterized, while it is shown that the existence of least extensions of abstract domains cannot be, in general, guaranteed, even under strong hypotheses. These methodologies, which in finite settings give rise to effective algorithms, provide advanced formal tools for manipulating and comparing abstract interpretations, useful both in static program analysis and in semantics design. A number of examples illustrating these techniques are given.
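
A standard toy example (not from the paper) of what completeness means: over the sign abstract domain {-, 0, +}, the best abstract multiplication is complete, in that abstracting and then operating abstractly always agrees with operating concretely and then abstracting, while the best abstract addition is not:

    def alpha(n):                          # abstraction: integer -> sign
        return "+" if n > 0 else "-" if n < 0 else "0"

    def mul_sign(a, b):                    # best abstract multiplication
        if "0" in (a, b):
            return "0"
        return "+" if a == b else "-"

    def add_sign(a, b):                    # best abstract addition
        if a == b or b == "0":
            return a
        if a == "0":
            return b
        return "?"                         # + plus -: sign unknown (top)

    for x, y in [(3, 4), (-3, 4), (3, -4), (-3, -4), (0, 7)]:
        mul_ok = alpha(x * y) == mul_sign(alpha(x), alpha(y))
        add_ok = alpha(x + y) == add_sign(alpha(x), alpha(y))
        print(f"x={x:2d} y={y:2d}  mul complete: {mul_ok}  add complete: {add_ok}")
    # mul is complete on every pair; add loses all information on mixed signs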

250 citations


Journal ArticleDOI
TL;DR: A recursive technique for building suffix trees is presented that yields optimal algorithms in different computational models, matching the sorting lower bound; for an alphabet consisting of integers in a polynomial range, the authors obtain the first known linear-time algorithm.
Abstract: The suffix tree of a string is the fundamental data structure of combinatorial pattern matching. We present a recursive technique for building suffix trees that yields optimal algorithms in different computational models. Sorting is an inherent bottleneck in building suffix trees and our algorithms match the sorting lower bound. Specifically, we present the following results. (1) Weiner [1973], who introduced the data structure, gave an optimal O(n)-time algorithm for building the suffix tree of an n-character string drawn from a constant-size alphabet. In the comparison model, there is a trivial O(n log n)-time lower bound based on sorting, and Weiner's algorithm matches this bound. For integer alphabets, the fastest known algorithm is the O(n log n)-time comparison-based algorithm, but no super-linear lower bound is known. Closing this gap is the main open question in stringology. We settle this open problem by giving a linear-time reduction to sorting for building suffix trees. Since sorting is a lower bound for building suffix trees, this algorithm is time-optimal in every alphabet model. In particular, for an alphabet consisting of integers in a polynomial range we get the first known linear-time algorithm. (2) All previously known algorithms for building suffix trees exhibit a marked absence of locality of reference, and thus they tend to elicit many page faults (I/Os) when indexing very long strings. They are therefore unsuitable for building suffix trees in secondary storage devices, where I/Os dominate the overall computational cost. We give a linear-I/O reduction to sorting for suffix tree construction. Since sorting is a trivial I/O lower bound for building suffix trees, our algorithm is I/O-optimal.
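
A toy illustration of the sorting connection (deliberately naive; the paper's contribution is the linear-time reduction, not this): the suffix array, the sorted order underlying the suffix tree, is literally a sorting problem over the suffixes of the string:

    def suffix_array_naive(s):
        # O(n^2 log n) in the worst case; shown only to make the point that
        # ordering the suffixes of s is a sorting problem.
        return sorted(range(len(s)), key=lambda i: s[i:])

    s = "mississippi"
    for i in suffix_array_naive(s):
        print(f"{i:2d}  {s[i:]}")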

246 citations


Journal ArticleDOI
TL;DR: A notion of a needed narrowing step is proposed that is sound and complete for a large class of rewrite systems, is optimal with respect to the cost measure that counts the number of distinct steps of a derivation, computes only incomparable and disjoint unifiers, and is efficiently implemented by unification.
Abstract: The narrowing relation over terms constitutes the basis of the most important operational semantics of languages that integrate functional and logic programming paradigms. It also plays an important role in the definition of some algorithms of unification modulo equational theories that are defined by confluent term rewriting systems. Due to the inefficiency of simple narrowing, many refined narrowing strategies have been proposed in the last decade. This paper presents a new narrowing strategy that is optimal in several respects. For this purpose, we propose a notion of a needed narrowing step that, for inductively sequential rewrite systems, extends the Huet and Levy notion of a needed reduction step. We define a strategy, based on this notion, that computes only needed narrowing steps. Our strategy is sound and complete for a large class of rewrite systems, is optimal with respect to the cost measure that counts the number of distinct steps of a derivation, computes only incomparable and disjoint unifiers, and is efficiently implemented by unification.

241 citations


Journal ArticleDOI
TL;DR: A "semiduality" between minimum cuts and maximum spanning tree packings combined with the previously developed random sampling techniques is used and known time bounds for solving the minimum cut problem on undirected graphs are significantly improved.
Abstract: We significantly improve known time bounds for solving the minimum cut problem on undirected graphs. We use a "semiduality" between minimum cuts and maximum spanning tree packings combined with our previously developed random sampling techniques. We give a randomized (Monte Carlo) algorithm that finds a minimum cut in an m-edge, n-vertex graph with high probability in O(m log^3 n) time. We also give a simpler randomized algorithm that finds all minimum cuts with high probability in O(m log^3 n) time. This variant has an optimal RNC parallelization. Both variants improve on the previous best time bound of O(n^2 log^3 n). Other applications of the tree-packing approach are new, nearly tight bounds on the number of near-minimum cuts a graph may have and a new data structure for representing them in a space-efficient manner.
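
For intuition, here is the much simpler random-contraction algorithm from Karger's earlier work (not the tree-packing algorithm of this paper): contract random edges until two super-vertices remain, and repeat enough trials that the minimum cut survives at least once with high probability:

    import random

    def contract_once(n, edges):
        # edges: list of (u, v); vertices are 0 .. n-1, parallel edges allowed
        label = list(range(n))
        def find(x):
            while label[x] != x:
                label[x] = label[label[x]]
                x = label[x]
            return x
        vertices = n
        while vertices > 2:
            u, v = random.choice(edges)
            ru, rv = find(u), find(v)
            if ru == rv:
                continue                       # edge inside a super-vertex; skip
            label[ru] = rv                     # contract the edge
            vertices -= 1
        return sum(1 for u, v in edges if find(u) != find(v))   # crossing edges

    def min_cut(n, edges, trials=200):
        return min(contract_once(n, edges) for _ in range(trials))

    edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]
    print(min_cut(5, edges))                   # 2: e.g. the cut around {3, 4}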

237 citations


Journal ArticleDOI
TL;DR: This work analyzes the computational complexity of evaluating policies and of determining whether a sufficiently good policy exists for a Markov decision process, based on a number of confounding factors, including the observability of the system state; the succinctness of the representation; the type of policy; and even the number of actions relative to the number of states.
Abstract: Controlled stochastic systems occur in science, engineering, manufacturing, the social sciences, and many other contexts. If the system is modeled as a Markov decision process (MDP) and will run ad infinitum, the optimal control policy can be computed in polynomial time using linear programming. The problems considered here assume that the time that the process will run is finite and based on the size of the input. There are many factors that compound the complexity of computing the optimal policy. For instance, if the controller does not have complete information about the state of the system, or if the system is represented in some very succinct manner, the optimal policy is provably not computable in time polynomial in the size of the input. We analyze the computational complexity of evaluating policies and of determining whether a sufficiently good policy exists for an MDP, based on a number of confounding factors, including the observability of the system state; the succinctness of the representation; the type of policy; and even the number of actions relative to the number of states. In almost every case, we show that the decision problem is complete for some known complexity class. Some of these results are familiar from work by Papadimitriou and Tsitsiklis and others, but some, such as our PL-completeness proofs, are surprising. We include proofs of completeness for natural problems in the as yet little-studied class NP^PP.
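
In the simplest setting the paper starts from, a flat, fully observed MDP over a finite horizon, evaluating a fixed stationary policy is straightforward by backward induction; the two-state transition and reward numbers below are made up:

    def evaluate_policy(P, R, policy, T):
        # P[a][s][t]: Pr(next = t | state = s, action = a)
        # R[s][a]: immediate reward; policy[s]: action chosen in state s
        n = len(R)
        V = [0.0] * n                            # value with 0 steps to go
        for _ in range(T):                       # backward induction
            V = [R[s][policy[s]]
                 + sum(P[policy[s]][s][t] * V[t] for t in range(n))
                 for s in range(n)]
        return V

    P = [                                        # action 0: stay; action 1: switch
        [[1.0, 0.0], [0.0, 1.0]],
        [[0.1, 0.9], [0.9, 0.1]],
    ]
    R = [[0.0, -0.1], [1.0, 1.0]]                # state 1 pays off
    print(evaluate_policy(P, R, policy=[1, 0], T=10))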

183 citations


Journal ArticleDOI
TL;DR: A polynomial-time approximation algorithm is presented for NP-hard graph optimization problems modeled by the divide-and-conquer paradigm, using spreading metrics under which all subgraphs for which the optimization problem is nontrivial have large diameters.
Abstract: We present a novel divide-and-conquer paradigm for approximating NP-hard graph optimization problems. The paradigm models graph optimization problems that satisfy two properties: First, a divide-and-conquer approach is applicable. Second, a fractional spreading metric is computable in polynomial time. The spreading metric assigns lengths to either edges or vertices of the input graph, such that all subgraphs for which the optimization problem is nontrivial have large diameters. In addition, the spreading metric provides a lower bound, τ, on the cost of solving the optimization problem. We present a polynomial time approximation algorithm for problems modeled by our paradigm whose approximation factor is O(min{log τ log log τ, log k log log k}), where k denotes the number of “interesting” vertices in the problem instance, and is at most the number of vertices. We present seven problems that can be formulated to fit the paradigm. For all these problems our algorithm improves previous results. The problems are: (1) linear arrangement; (2) embedding a graph in a d-dimensional mesh; (3) interval graph completion; (4) minimizing storage-time product; (5) subset feedback sets in directed graphs and multicuts in circular networks; (6) symmetric multicuts in directed networks; (7) balanced partitions and p-separators (for small values of p) in directed graphs.

171 citations


Journal ArticleDOI
TL;DR: It is shown that, if the Delaunay triangulation has the ratio property introduced in Miller et al. [1995], then there is an assignment of weights such that the weighted Delaunay triangulation contains no slivers, and an algorithm is given to compute such a weight assignment.
Abstract: A sliver is a tetrahedron whose four vertices lie close to a plane and whose orthogonal projection to that plane is a convex quadrilateral with no short edge. Slivers are notoriously common in 3-dimensional Delaunay triangulations even for well-spaced point sets. We show that, if the Delaunay triangulation has the ratio property introduced in Miller et al. [1995], then there is an assignment of weights so the weighted Delaunay triangulation contains no slivers. We also give an algorithm to compute such a weight assignment.

161 citations


Journal ArticleDOI
Edith Cohen
TL;DR: This work presents near-linear-work polylog-time randomized algorithms for approximate shortest paths in weighted undirected networks, where previous near-linear-work algorithms required near-O(n) time, and also presents faster sequential algorithms that provide good approximate distances only between “distant” vertices.
Abstract: Shortest paths computations constitute one of the most fundamental network problems. Nonetheless, known parallel shortest-paths algorithms are generally inefficient: they perform significantly more work (product of time and processors) than their sequential counterparts. This gap, known in the literature as the “transitive closure bottleneck,” poses a long-standing open problem. Our main result is an O(mn^{ϵ0} + s(m + n^{1+ϵ0}))-work polylog-time randomized algorithm that computes paths within (1 + O(1/polylog n)) of shortest from s source nodes to all other nodes in weighted undirected networks with n nodes and m edges (for any fixed ϵ0 > 0). This work bound nearly matches the O(sm) sequential time. In contrast, previous polylog-time algorithms required min{O(n^3), O(m^2)} work (even when s = 1), and previous near-linear work algorithms required near-O(n) time. We also present faster sequential algorithms that provide good approximate distances only between “distant” vertices: we obtain an O((m + sn)n^{ϵ0})-time algorithm that computes paths of weight (1 + O(1/polylog n)) dist + O(w_max polylog n), where dist is the corresponding distance and w_max is the maximum edge weight. Our chief instrument, which is of independent interest, is an efficient construction of sparse hop sets. A (d,ϵ)-hop set of a network G = (V,E) is a set E* of new weighted edges such that minimum-weight d-edge paths in (V, E ∪ E*) have weight within (1+ϵ) of the respective distances in G. We construct hop sets of size O(n^{1+ϵ0}), where ϵ = O(1/polylog n) and d = O(polylog n).

Journal ArticleDOI
TL;DR: A new simple algorithm for learning multiplicity automata with improved time and query complexity is presented, and the learnability of various concept classes is proved, including the class of disjoint DNF and more generally satisfy-O(1) DNF.
Abstract: We study the learnability of multiplicity automata in Angluin's exact learning model, and we investigate its applications. Our starting point is a known theorem from automata theory relating the number of states in a minimal multiplicity automaton for a function to the rank of its Hankel matrix. With this theorem in hand, we present a new simple algorithm for learning multiplicity automata with improved time and query complexity, and we prove the learnability of various concept classes. These include (among others): -The class of disjoint DNF, and more generally satisfy-O(1) DNF. -The class of polynomials over finite fields. -The class of bounded-degree polynomials over infinite fields. -The class of XOR of terms. -Certain classes of boxes in high dimensions. In addition, we obtain the best query complexity for several classes known to be learnable by other methods such as decision trees and polynomials over GF(2). While multiplicity automata are shown to be useful to prove the learnability of some subclasses of DNF formulae and various other classes, we study the limitations of this method. We prove that this method cannot be used to resolve the learnability of some other open problems such as the learnability of general DNF formulas or even k-term DNF for k = ω(log n) or satisfy-s DNF formulas for s = ω(1). These results are proven by exhibiting functions in the above classes that require multiplicity automata with a super-polynomial number of states.
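
The Hankel-matrix theorem the algorithm rests on is easy to check experimentally: index rows by prefixes x and columns by suffixes y, set H[x][y] = f(xy), and the rank over the field equals the number of states of a minimal multiplicity automaton for f. A small sketch with the toy function f(w) = number of a's in w, which has rank 2 since f(xy) = f(x)·1 + 1·f(y):

    from fractions import Fraction
    from itertools import product

    def words(alphabet, max_len):
        yield ""
        for n in range(1, max_len + 1):
            for w in product(alphabet, repeat=n):
                yield "".join(w)

    def rank(M):                                 # Gaussian elimination over Q
        M = [row[:] for row in M]
        r = 0
        for col in range(len(M[0])):
            piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
            if piv is None:
                continue
            M[r], M[piv] = M[piv], M[r]
            for i in range(len(M)):
                if i != r and M[i][col] != 0:
                    f = M[i][col] / M[r][col]
                    M[i] = [a - f * b for a, b in zip(M[i], M[r])]
            r += 1
        return r

    f = lambda w: Fraction(w.count("a"))
    W = list(words("ab", 3))
    H = [[f(x + y) for y in W] for x in W]       # finite block of the Hankel matrix
    print(rank(H))                               # 2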

Journal ArticleDOI
TL;DR: This is the first algorithm to handle the general case with complexity polynomial in the resultant degree and simply exponential in n; the authors conjecture an extension that produces an exact rational expression for the sparse resultant.
Abstract: Multivariate resultants generalize the Sylvester resultant of two polynomials and characterize the solvability of a polynomial system. They also reduce the computation of all common roots to a problem in linear algebra. We propose a determinantal formula for the sparse resultant of an arbitrary system of n + 1 polynomials in n variables. This resultant generalizes the classical one and has significantly lower degree for polynomials that are sparse in the sense that their mixed volume is lower than their Bezout number. Our algorithm uses a mixed polyhedral subdivision of the Minkowski sum of the Newton polytopes in order to construct a Newton matrix. Its determinant is a nonzero multiple of the sparse resultant and the latter equals the GCD of at most n + 1 such determinants. This construction implies a restricted version of an effective sparse Nullstellensatz. For an arbitrary specialization of the coefficients, there are two methods that use one extra variable and yield the sparse resultant. This is the first algorithm to handle the general case with complexity polynomial in the resultant degree and simply exponential in n. We conjecture its extension to producing an exact rational expression for the sparse resultant.
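
The classical object being generalized is easy to compute directly: for two univariate polynomials, the Sylvester resultant is the determinant of the Sylvester matrix and vanishes exactly when the polynomials share a root. A self-contained sketch with exact rational arithmetic:

    from fractions import Fraction

    def sylvester_resultant(p, q):
        # p, q: coefficient lists, highest degree first; x^2 - 1 -> [1, 0, -1]
        m, n = len(p) - 1, len(q) - 1
        size = m + n
        M = [[Fraction(0)] * size for _ in range(size)]
        for i in range(n):                       # n shifted copies of p
            for j, c in enumerate(p):
                M[i][i + j] = Fraction(c)
        for i in range(m):                       # m shifted copies of q
            for j, c in enumerate(q):
                M[n + i][i + j] = Fraction(c)
        det = Fraction(1)                        # determinant by elimination
        for col in range(size):
            piv = next((r for r in range(col, size) if M[r][col] != 0), None)
            if piv is None:
                return Fraction(0)               # singular: common root exists
            if piv != col:
                M[col], M[piv] = M[piv], M[col]
                det = -det
            det *= M[col][col]
            for r in range(col + 1, size):
                f = M[r][col] / M[col][col]
                for c in range(col, size):
                    M[r][c] -= f * M[col][c]
        return det

    print(sylvester_resultant([1, -3, 2], [1, 2, -3]))   # 0: share the root x = 1
    print(sylvester_resultant([1, 0, -1], [1, -2]))      # 3: no common root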

Journal ArticleDOI
TL;DR: This work investigates parametric polymorphism in message-based concurrent programming, focusing on behavioral equivalences in a typed process calculus analogous to the polymorphic lambda-calculus of Girard and Reynolds, and observes some surprising interactions between polymorphism and aliasing.
Abstract: We investigate parametric polymorphism in message-based concurrent programming, focusing on behavioral equivalences in a typed process calculus analogous to the polymorphic lambda-calculus of Girard and Reynolds. Polymorphism constrains the power of observers by preventing them from directly manipulating data values whose types are abstract, leading to notions of equivalence much coarser than the standard untyped ones. We study the nature of these constraints through simple examples of concurrent abstract data types and develop basic theoretical machinery for establishing bisimilarity of polymorphic processes. We also observe some surprising interactions between polymorphism and aliasing, drawing examples from both the polymorphic pi-calculus and ML.

Journal ArticleDOI
TL;DR: In this article, it was shown that the optimal deterministic routing in stochastic event graphs is such a sequence, where each letter is distributed as "evenly" as possible and appears with a given rate.
Abstract: The objective pursued in this paper is two-fold. The first part addresses the following combinatorial problem: is it possible to construct an infinite sequence over n letters where each letter is distributed as “evenly” as possible and appears with a given rate? The second objective of the paper is to use this construction in the framework of optimal routing in queuing networks. We show under rather general assumptions that the optimal deterministic routing in stochastic event graphs is such a sequence.
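
One concrete way to realize such a sequence, when it exists, is to fire letter a at the regularly spaced times (k + φ_a)/rate_a and read the letters off in time order; the rates and phases below are hand-picked so the interleaving is collision-free (the paper characterizes when this is possible):

    def evenly_spread(rates, phases, horizon):
        # letter a occurs at times (k + phases[a]) / rates[a], k = 0, 1, 2, ...
        events = []
        for a in rates:
            k = 0
            while (k + phases[a]) / rates[a] <= horizon:
                events.append(((k + phases[a]) / rates[a], a))
                k += 1
        return "".join(a for _, a in sorted(events))

    print(evenly_spread({"a": 0.5, "b": 0.25, "c": 0.25},
                        {"a": 0.5, "b": 0.5, "c": 1.0}, 8))
    # abacabac: each letter recurs at exactly regular intervals of 1/rate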

Journal ArticleDOI
TL;DR: Tight bounds are proved on the time needed to solve k-set agreement; the proof is interesting because it is the first to apply topological techniques to the synchronous model.
Abstract: We prove tight bounds on the time needed to solve k-set agreement. In this problem, each processor starts with an arbitrary input value taken from a fixed set, and halts after choosing an output value. In every execution, at most k distinct output values may be chosen, and every processor's output value must be some processor's input value. We analyze this problem in a synchronous, message-passing model where processors fail by crashing. We prove a lower bound of ⌊f/k⌋+1 rounds of communication, where f is the number of crash failures tolerated, exhibiting an inherent tradeoff between the running time, the degree of coordination required, and the number of faults tolerated, even in idealized models like the synchronous model. The proof of this result is interesting because it is the first to apply topological techniques to the synchronous model.
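
A sketch of the matching protocol side (standard flooding, with a made-up crash schedule, not the paper's lower-bound construction): run ⌊f/k⌋ + 1 synchronous rounds, forward every input value seen so far, then decide the minimum; survivors decide at most k distinct values:

    import random

    def k_set_agreement(inputs, f, k, crash_round, rng):
        # crash_round: processor -> round in which it crashes mid-broadcast
        n = len(inputs)
        known = [{v} for v in inputs]
        alive = set(range(n))
        for r in range(f // k + 1):
            new = [set(s) for s in known]
            for i in list(alive):
                receivers = list(range(n))
                if crash_round.get(i) == r:               # crash mid-round:
                    receivers = rng.sample(range(n), rng.randrange(n))
                    alive.discard(i)                      # some messages are lost
                for j in receivers:
                    new[j] |= known[i]
            known = new
        return {min(known[i]) for i in alive}             # survivors' decisions

    rng = random.Random(1)
    decided = k_set_agreement([5, 3, 9, 7, 1, 4], f=2, k=2,
                              crash_round={2: 0, 4: 0}, rng=rng)
    print(decided, len(decided) <= 2)                     # at most 2 values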

Journal ArticleDOI
TL;DR: A protocol is given in which the expected delay of any message is O(1) in an analogous model where users are synchronized and messages are generated according to a Poisson distribution with generation rate up to 1/e.
Abstract: We study contention resolution in a multiple-access channel such as the Ethernet channel. In the model that we consider, n users generate messages for the channel according to a probability distribution. Raghavan and Upfal have given a protocol in which the expected delay (time to get serviced) of every message is O(log n) when messages are generated according to a Bernoulli distribution with generation rate up to about 1/10. Our main results are the following protocols: (a) one in which the expected average message delay is O(1) when messages are generated according to a Bernoulli distribution with a generation rate smaller than 1/e, and (b) one in which the expected delay of any message is O(1) for an analogous model in which users are synchronized (i.e., they agree about the time), there are potentially an infinite number of users, and messages are generated according to a Poisson distribution with generation rate up to 1/e. (Each message constitutes a new user.) To achieve (a), we first show how to simulate (b) using n synchronized users, and then show how to build the synchronization into the protocol.

Journal ArticleDOI
TL;DR: This paper presents the first general results on combining tractable constraint classes to obtain larger, more general, tractable classes.
Abstract: Many combinatorial search problems can be expressed as 'constraint satisfaction problems'. This class of problems is known to be NP-hard in general, but a number of restricted constraint classes have been identified which ensure tractability. This paper presents the first general results on combining tractable constraint classes to obtain larger, more general, tractable classes. We give examples to show that many known examples of tractable constraint classes, from a wide variety of different contexts, can be constructed from simpler tractable classes using a general method. We also construct several new tractable classes that have not previously been identified.

Journal ArticleDOI
TL;DR: This paper proves theorems that allow one to show that certain properties of words are not expressible as components of solutions of word equations.
Abstract: Classically, several properties and relations of words, such as “being a power of the same word” can be expressed by using word equations. This paper is devoted to a general study of the expressive power of word equations. As main results we prove theorems which allow us to show that certain properties of words are not expressible as components of solutions of word equations. In particular, “the primitiveness” and “the equal length” are such properties, as well as being “any word over a proper subalphabet”.
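
The classic example of an equation-definable property: xy = yx holds exactly when x and y are powers of a common word, and by the Fine and Wilf argument the common root can be taken of length gcd(|x|, |y|). A small check:

    from math import gcd

    def common_root(x, y):
        # if xy == yx, both are powers of the prefix of length gcd(|x|, |y|)
        g = gcd(len(x), len(y))
        r = x[:g]
        if r * (len(x) // g) == x and r * (len(y) // g) == y:
            return r
        return None

    for x, y in [("abab", "ab"), ("ab", "ba")]:
        print(repr(x), repr(y), x + y == y + x, common_root(x, y))
    # 'abab' 'ab' True 'ab'
    # 'ab' 'ba' False None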

Journal ArticleDOI
TL;DR: A simple proof is given that the Harmonic k-server algorithm is competitive, and the competitive ratio proved is the best currently known for the algorithm.
Abstract: The k-server problem is a generalization of the paging problem, and is the most studied problem in the area of competitive online problems. The Harmonic algorithm is a very natural and simple randomized algorithm for the k-server problem. We give a simple proof that the Harmonic k-server algorithm is competitive. The competitive ratio we prove is the best currently known for the algorithm. The Harmonic algorithm is memoryless and time-efficient. It is the only such algorithm known to be competitive for the k-server problem.
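
The algorithm itself fits in a few lines, which is the point of its being memoryless and time-efficient. A sketch on a toy line metric (the metric and request sequence are made up): serve request r by moving server i with probability proportional to 1/d(s_i, r):

    import random

    def harmonic_step(servers, r, d):
        # move one server to request point r, chosen with probability
        # proportional to the inverse of its distance to r
        if r in servers:
            return servers                     # already covered: move nothing
        weights = [1.0 / d[s][r] for s in servers]
        x = random.random() * sum(weights)
        for i, w in enumerate(weights):
            x -= w
            if x <= 0:
                return servers[:i] + [r] + servers[i + 1:]
        return servers[:-1] + [r]              # guard against float rounding

    d = [[abs(i - j) for j in range(5)] for i in range(5)]   # points on a line
    servers = [0, 4]
    for r in [2, 1, 3, 2]:
        servers = harmonic_step(servers, r, d)
        print(r, servers)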

Journal ArticleDOI
TL;DR: A simple variant of a priority queue, called a soft heap, is introduced, which is optimal for any value of ε in a comparison-based model and can be used to compute exact or approximate medians and percentiles optimally.
Abstract: A simple variant of a priority queue, called a soft heap, is introduced. The data structure supports the usual operations: insert, delete, meld, and findmin. Its novelty is to beat the logarithmic bound on the complexity of a heap in a comparison-based model. To break this information-theoretic barrier, the entropy of the data structure is reduced by artificially raising the values of certain keys. Given any mixed sequence of n operations, a soft heap with error rate ε (for any 0 < ε ≤ 1/2) ensures that, at any time, at most εn of its items have their keys raised; the amortized complexity of each operation is constant, except for insert, which takes O(log(1/ε)) time.

Journal ArticleDOI
TL;DR: This work translates two variations on Algol 60 into a purely functional language with polymorphic linear types, and demonstrates that a linearly-typed functional language can be at least as expressive as Algol.
Abstract: In a linearly-typed functional language, one can define functions that consume their arguments in the process of computing their results. This is reminiscent of state transformations in imperative languages, where execution of an assignment statement alters the contents of the store. We explore this connection by translating two variations on Algol 60 into a purely functional language with polymorphic linear types. On the one hand, the translations lead to a semantic analysis of Algol-like programs, in terms of a model of the linear language. On the other hand, they demonstrate that a linearly-typed functional language can be at least as expressive as Algol.

Journal ArticleDOI
TL;DR: The main claims are that the basic learning and deduction tasks are provably tractable, and that tractable learning offers viable approaches to a range of issues that have been previously identified as problematic for artificial intelligence systems that are programmed.
Abstract: An architecture is described for designing systems that acquire and manipulate large amounts of unsystematized, or so-called commonsense, knowledge. Its aim is to exploit to the full those aspects of computational learning that are known to offer powerful solutions in the acquisition and maintenance of robust knowledge bases. The architecture makes explicit the requirements on the basic computational tasks that are to be performed and is designed to make this computationally tractable even for very large databases. The main claims are that (i) the basic learning and deduction tasks are provably tractable and (ii) tractable learning offers viable approaches to a range of issues that have been previously identified as problematic for artificial intelligence systems that are programmed. Among the issues that learning offers to resolve are robustness to inconsistencies, robustness to incomplete information, and resolving among alternatives. Attribute-efficient learning algorithms, which allow learning from few examples in large dimensional systems, are fundamental to the approach. Underpinning the overall architecture is a new principled approach to manipulating relations in learning systems. This approach, of independently quantified arguments, allows propositional learning algorithms to be applied systematically to learning relational concepts in polynomial time and in modular fashion.

Journal ArticleDOI
TL;DR: A graph-theoretic study of privacy in distributed environments with mobile eavesdroppers ("bugs") is initiated; the feasibility of two privacy tasks is characterized combinatorially, protocols are constructed for the feasible cases, and their computational complexity is analyzed.
Abstract: We initiate a graph-theoretic study of privacy in distributed environments with mobile eavesdroppers ("bugs"). For two privacy tasks—distributed database maintenance and message transmission—a computationally unbounded adversary plays an "eavesdropping game," coordinating the movement of the bugs among the sites to learn the current memory contents. Many different adversaries are considered, motivated by differences in eavesdropping technologies. We characterize the feasibility of the two privacy tasks combinatorially, construct protocols for the feasible cases, and analyze their computational complexity.

Journal ArticleDOI
TL;DR: This work models the simplest classical gate, namely the N-gate, proposes a quantization scheme (which translates between classical and quantum models, and from which emerges a logical interpretation of the notion of quantum parallelism), and applies it to the classical N-gate model.
Abstract: Motivated by a growing need to understand the computational potential of quantum devices we suggest an approach to the relevant issues via quantum logic and its model theory. By isolating such notions as quantum parallelism and interference within a model-theoretic setting, quite divorced from their customary physical trappings, we seek to lay bare their logical underpinnings and possible computational ramifications. In the first part of the paper, a brief account of the relevant model theory is given, and some new results are derived. In the second part, we model the simplest classical gate, namely the N-gate, propose a quantization scheme (which translates between classical and quantum models, and from which emerges a logical interpretation of the notion of quantum parallelism), and apply it to the classical N-gate model. A class of physical instantiations of the resulting quantum N-gate model is also briefly discussed.

Journal ArticleDOI
TL;DR: This paper investigates the foundations of maximin, minmax regret, and competitive ratio, three central qualitative decision criteria, by characterizing those behaviors that could result from their use by using a constructive representation theorem that uses two choice axioms.
Abstract: The need for computationally efficient decision-making techniques together with the desire to simplify the processes of knowledge acquisition and agent specification have led various researchers in artificial intelligence to examine qualitative decision tools. However, the adequacy of such tools is not clear. This paper investigates the foundations of maximin, minmax regret, and competitive ratio, three central qualitative decision criteria, by characterizing those behaviors that could result from their use. This characterization provides two important insights: (1) under what conditions can we employ an agent model based on these basic qualitative decision criteria, and (2) how "rational" are these decision procedures. For the competitive ratio criterion in particular, this latter issue is of central importance to our understanding of current work on on-line algorithms. Our main result is a constructive representation theorem that uses two choice axioms to characterize maximin, minmax regret, and competitive ratio.
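
A worked toy example of the three criteria (the payoff matrix is made up): rows are actions, columns are states of nature; note that the criteria can genuinely disagree:

    payoff = {
        "safe":  [3, 3, 3],
        "risky": [9, 1, 2],
    }
    states = range(3)
    best = [max(p[s] for p in payoff.values()) for s in states]   # per-state optimum

    for action, p in payoff.items():
        maximin = min(p)                                          # worst-case value
        regret = max(b - v for v, b in zip(p, best))              # worst-case regret
        ratio = max(b / v for v, b in zip(p, best))               # competitive ratio
        print(f"{action:6s} maximin={maximin} max-regret={regret} ratio={ratio:.1f}")
    # maximin picks 'safe' (3 > 1), minmax regret picks 'risky' (2 < 6),
    # and competitive ratio is indifferent (3.0 for both)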

Journal ArticleDOI
TL;DR: A deterministic algorithm is presented that computes st-connectivity in undirected graphs using O(log^{4/3} n) space, improving the previous O(log^{3/2} n) bound of Nisan et al.
Abstract: We present a deterministic algorithm that computes st-connectivity in undirected graphs using O(log^{4/3} n) space. This improves the previous O(log^{3/2} n) bound of Nisan et al. [1992].
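
For contrast, the classical way to decide st-connectivity in small space is Savitch-style recursive doubling, which uses O(log^2 n) space (recursion depth O(log n), with O(log n) bits per frame) and works even for directed graphs; the paper improves the exponent for the undirected case. A minimal sketch on a made-up graph:

    def reach(adj, u, v, k):
        # is there a path from u to v of length at most 2**k ?
        if u == v:
            return True
        if k == 0:
            return v in adj[u]
        return any(reach(adj, u, w, k - 1) and reach(adj, w, v, k - 1)
                   for w in adj)

    adj = {0: {1}, 1: {2}, 2: {3}, 3: set()}     # a toy path graph
    k = (len(adj) - 1).bit_length()              # 2**k >= any simple path length
    print(reach(adj, 0, 3, k))                   # True
    print(reach(adj, 3, 0, k))                   # False (edges are directed here)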

Journal ArticleDOI
TL;DR: Natural-active collapse results are proved for a variety of structures, showing that using unrestricted quantification does not give any extra power, and a set of algorithms is given for eliminating unbounded quantifications in favor of bounded ones.
Abstract: We rework parts of the classical relational theory when the underlying domain is a structure with some interpreted operations that can be used in queries. We identify parts of the classical theory that go through 'as before' when interpreted structure is present, parts that go through only for classes of nicely behaved structures, and parts that only arise in the interpreted case. The first category includes a number of results on language equivalence and expressive power characterizations for the active-domain semantics for a variety of logics. Under this semantics, quantifiers range over elements of a relational database. The main kind of results we prove here are generic collapse results: for generic queries, adding operations beyond order does not give us extra power. The second category includes results on the natural semantics, under which quantifiers range over the entire interpreted structure. We prove, for a variety of structures, natural-active collapse results, showing that using unrestricted quantification does not give us any extra power. Moreover, for a variety of structures, including the real field, we give a set of algorithms for eliminating unbounded quantifications in favor of bounded ones. Furthermore, we extend these collapse results to a new class of higher-order logics that mix unbounded and bounded quantification. We give a set of normal forms for these logics, under special conditions on the interpreted structures. As a by-product, we obtain an elementary proof of the fact that the parity test is not definable in the relational calculus with polynomial inequality constraints. We also give examples of structures with nice model-theoretic properties over which the natural-active collapse fails.

Journal ArticleDOI
TL;DR: It is shown that an optimum prefetching/caching schedule for a single disk problem can be computed in polynomial time, thereby settling an open question by Kimbrel and Karlin.
Abstract: We study integrated prefetching and caching problems following the work of Cao et al. [1995] and Kimbrel and Karlin [1996]. Cao et al. and Kimbrel and Karlin gave approximation algorithms for minimizing the total elapsed time in single and parallel disk settings. The total elapsed time is the sum of the processor stall times and the length of the request sequence to be served. We show that an optimum prefetching/caching schedule for a single disk problem can be computed in polynomial time, thereby settling an open question by Kimbrel and Karlin. For the parallel disk problem, we give an approximation algorithm for minimizing stall time. The solution uses a few extra memory blocks in cache. Stall time is an important and harder-to-approximate measure for this problem. All of our algorithms are based on a new approach which involves formulating the prefetching/caching problems as linear programs.

Journal ArticleDOI
TL;DR: It is shown that a crashing network protocol that works over unreliable links can be driven to arbitrary global states, where each node is in a state reached in some execution, and each link has an arbitrary mixture of packets sent in some executions.
Abstract: A crashing network protocol is an asynchronous protocol whose memory does not survive crashes. We show that a crashing network protocol that works over unreliable links can be driven to arbitrary global states, where each node is in a state reached in some (possibly different) execution, and each link has an arbitrary mixture of packets sent in (possibly different) executions. Our theorem considerably generalizes an earlier result, due to Fekete et al., which states that there is no correct crashing Data Link Protocol. For example, we prove that there is no correct crashing protocol for token passing and for many other resource allocation protocols such as k-exclusion, and the drinking and dining philosophers problems. We further characterize the reachable states caused by crash failures using reliable non-FIFO and reliable FIFO links. We show that with reliable non-FIFO links any acyclic subset of nodes and links can be driven to arbitrary states. We show that with reliable FIFO links, only nodes can be driven to arbitrary states. Overall, we show a strict hierarchy in terms of the set of states reachable by crash failures in the three link models.