Showing papers in "Journal of the ACM in 1997"


Journal ArticleDOI
TL;DR: An algorithm for finding the minimum cut of an undirected edge-weighted graph that has a short and compact description, is easy to implement, and has a surprisingly simple proof of correctness.
Abstract: We present an algorithm for finding the minimum cut of an undirected edge-weighted graph. It is simple in every respect. It has a short and compact description, is easy to implement, and has a surprisingly simple proof of correctness. Its runtime matches that of the fastest algorithm known. The runtime analysis is straightforward. In contrast to nearly all approaches so far, the algorithm uses no flow techniques. Roughly speaking, the algorithm consists of about |V| nearly identical phases each of which is a maximum adjacency search.
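The abstract describes a sequence of maximum adjacency search phases (the construction usually credited to Stoer and Wagner). Below is a minimal Python sketch of that idea, assuming a connected graph given as a weight map; the data layout and helper names are my own, and the priority-queue bookkeeping needed for the optimal running time is omitted.

```python
def minimum_cut(graph_weights):
    """graph_weights: dict mapping frozenset({u, v}) -> positive edge weight."""
    # Build an adjacency map: node -> {neighbor: accumulated weight}.
    graph = {}
    for edge, w in graph_weights.items():
        u, v = tuple(edge)
        graph.setdefault(u, {})
        graph.setdefault(v, {})
        graph[u][v] = graph[u].get(v, 0) + w
        graph[v][u] = graph[v].get(u, 0) + w

    best = float("inf")
    while len(graph) > 1:
        # One maximum adjacency search phase: repeatedly add the vertex
        # most tightly connected to the set grown so far.
        start = next(iter(graph))
        order = [start]
        connectivity = dict(graph[start])
        while len(order) < len(graph):
            z = max(connectivity, key=connectivity.get)
            order.append(z)
            del connectivity[z]
            for nbr, w in graph[z].items():
                if nbr not in order:
                    connectivity[nbr] = connectivity.get(nbr, 0) + w
        s, t = order[-2], order[-1]
        best = min(best, sum(graph[t].values()))  # the cut-of-the-phase ({t}, rest)
        # Merge t into s and continue with the contracted graph.
        for nbr, w in graph[t].items():
            del graph[nbr][t]
            if nbr != s:
                graph[s][nbr] = graph[s].get(nbr, 0) + w
                graph[nbr][s] = graph[nbr].get(s, 0) + w
        del graph[t]
    return best

# Triangle with edge weights 1, 2, 3: the minimum cut isolates "b" and has weight 3.
w = {frozenset({"a", "b"}): 1, frozenset({"b", "c"}): 2, frozenset({"a", "c"}): 3}
print(minimum_cut(w))   # 3
```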

764 citations


Journal ArticleDOI
TL;DR: It is shown how this framework can be used to model both old and new constraint solving and optimization schemes, thus allowing one to both formally justify many informally taken choices in existing schemes, and to prove that local consistency techniques can be used also in newly defined schemes.
Abstract: We introduce a general framework for constraint satisfaction and optimization where classical CSPs, fuzzy CSPs, weighted CSPs, partial constraint satisfaction, and others can be easily cast. The framework is based on a semiring structure, where the set of the semiring specifies the values to be associated with each tuple of values of the variable domain, and the two semiring operations (+ and ×) model constraint projection and combination, respectively. Local consistency algorithms, as usually used for classical CSPs, can be exploited in this general framework as well, provided that certain conditions on the semiring operations are satisfied. We then show how this framework can be used to model both old and new constraint solving and optimization schemes, thus allowing one to both formally justify many informally taken choices in existing schemes, and to prove that local consistency techniques can be used also in newly defined schemes.
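As a concrete illustration of the framework described above, here is a small Python sketch, assuming a semiring is given by its two operations and their units; the classical and weighted instances follow the abstract, while the function and field names are my own.

```python
from itertools import product

class Semiring:
    def __init__(self, plus, times, zero, one):
        self.plus, self.times, self.zero, self.one = plus, times, zero, one

# Two of the instances mentioned in the abstract.
classical = Semiring(plus=lambda a, b: a or b, times=lambda a, b: a and b,
                     zero=False, one=True)                 # classical CSPs
weighted = Semiring(plus=min, times=lambda a, b: a + b,
                    zero=float("inf"), one=0)              # weighted CSPs (costs)

def combine(sr, c1, c2):
    """A constraint is (variables, function from value tuples to semiring values)."""
    v1, f1 = c1
    v2, f2 = c2
    vs = list(dict.fromkeys(v1 + v2))                      # ordered union of scopes
    def f(t):
        asg = dict(zip(vs, t))
        return sr.times(f1(tuple(asg[x] for x in v1)),
                        f2(tuple(asg[x] for x in v2)))
    return vs, f

def project(sr, c, keep, domain):
    """Eliminate the variables not in 'keep' by summing them out with +."""
    vs, f = c
    out = [x for x in vs if x in keep]
    gone = [x for x in vs if x not in keep]
    def g(t):
        asg = dict(zip(out, t))
        acc = sr.zero
        for rest in product(domain, repeat=len(gone)):
            full = dict(asg, **dict(zip(gone, rest)))
            acc = sr.plus(acc, f(tuple(full[x] for x in vs)))
        return acc
    return out, g

# Weighted example over domain {0, 1}: combine two cost constraints, then
# project onto x to read off the best achievable cost for each value of x.
c_xy = (("x", "y"), lambda t: 0 if t[0] != t[1] else 2)
c_yz = (("y", "z"), lambda t: t[0] + t[1])
vs, g = project(weighted, combine(weighted, c_xy, c_yz), keep={"x"}, domain=[0, 1])
print([(v, g((v,))) for v in (0, 1)])                      # [(0, 1), (1, 0)]
```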

709 citations


Journal ArticleDOI
TL;DR: This work analyzes algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts, and shows how this leads to certain kinds of pattern recognition/learning algorithms with performance bounds that improve on the best results currently known in this context.
Abstract: We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. Our analysis is for worst-case situations, i.e., we make no assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the algorithm by the difference between the expected number of mistakes it makes on the bit sequence and the expected number of mistakes made by the best expert on this sequence, where the expectation is taken with respect to the randomization in the predictions. We show that the minimum achievable difference is on the order of the square root of the number of mistakes of the best expert, and we give efficient algorithms that achieve this. Our upper and lower bounds have matching leading constants in most cases. We then show how this leads to certain kinds of pattern recognition/learning algorithms with performance bounds that improve on the best results currently known in this context. We also compare our analysis to the case in which log loss is used instead of the expected number of mistakes.
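For intuition, here is a hedged Python sketch in the multiplicative-weights style that underlies this line of work; the specific update rule, learning rate, and names below are illustrative assumptions, not the exact algorithm analyzed in the paper.

```python
import math
import random

def predict_with_experts(expert_preds, outcomes, eta=0.5):
    """expert_preds[t][i] and outcomes[t] are bits; returns the algorithm's mistake count."""
    n = len(expert_preds[0])
    weights = [1.0] * n
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        total = sum(weights)
        p_one = sum(w for w, p in zip(weights, preds) if p == 1) / total
        guess = 1 if random.random() < p_one else 0   # randomized prediction
        mistakes += int(guess != y)
        # Shrink the weight of every expert that erred on this bit.
        weights = [w * math.exp(-eta) if p != y else w
                   for w, p in zip(weights, preds)]
    return mistakes
```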

629 citations


Journal ArticleDOI
TL;DR: This paper investigates the subclasses that arise from restricting the possible constraint types, and shows that any set of constraints that does not give rise to an NP-complete class of problems must satisfy a certain type of algebraic closure condition.
Abstract: Many combinatorial search problems can be expressed as “constraint satisfaction problems” and this class of problems is known to be NP-complete in general. In this paper, we investigate the subclasses that arise from restricting the possible constraint types. We first show that any set of constraints that does not give rise to an NP-complete class of problems must satisfy a certain type of algebraic closure condition. We then investigate all the different possible forms of this algebraic closure property, and establish which of these are sufficient to ensure tractability. As examples, we show that all known classes of tractable constraints over finite domains can be characterized by such an algebraic closure property. Finally, we describe a simple computational procedure that can be used to determine the closure properties of a given set of constraints. This procedure involves solving a particular constraint satisfaction problem, which we call an “indicator problem.”
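The algebraic closure condition in question can be made concrete with a short check that an operation preserves a constraint relation, i.e., applying it coordinate-wise to allowed tuples stays inside the relation. The Python sketch below uses my own names and an illustrative order constraint; it is not the paper's indicator-problem procedure.

```python
from itertools import product

def preserves(op, arity, relation):
    """relation: a set of equal-length tuples; op: a function of 'arity' arguments."""
    return all(
        tuple(op(*column) for column in zip(*rows)) in relation
        for rows in product(relation, repeat=arity)
    )

# Example: the relation x <= y over a 3-element domain is closed under binary min.
leq = {(a, b) for a in range(3) for b in range(3) if a <= b}
print(preserves(min, 2, leq))   # True
```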

560 citations


Journal ArticleDOI
TL;DR: A characterization of learnability in the probabilistic concept model is obtained, solving an open problem posed by Kearns and Schapire, and it is shown that the accuracy parameter plays a crucial role in determining the effective complexity of the learner's hypothesis class.
Abstract: Learnability in Valiant's PAC learning model has been shown to be strongly related to the existence of uniform laws of large numbers. These laws define a distribution-free convergence property of means to expectations uniformly over classes of random variables. Classes of real-valued functions enjoying such a property are also known as uniform Glivenko-Cantelli classes. In this paper, we prove, through a generalization of Sauer's lemma that may be interesting in its own right, a new characterization of uniform Glivenko-Cantelli classes. Our characterization yields Dudley, Giné, and Zinn's previous characterization as a corollary and, furthermore, is the first based on a simple combinatorial quantity generalizing the Vapnik-Chervonenkis dimension. We apply this result to obtain the weakest combinatorial condition known to imply PAC learnability in the statistical regression (or “agnostic”) framework. Furthermore, we find a characterization of learnability in the probabilistic concept model, solving an open problem posed by Kearns and Schapire. These results show that the accuracy parameter plays a crucial role in determining the effective complexity of the learner's hypothesis class.

398 citations


Journal ArticleDOI
TL;DR: An algorithm is described that achieves on-line allocation of routes to virtual circuits (both point-to-point and multicast) with an O(log n) competitive ratio with respect to maximum congestion, where n is the number of nodes in the network.
Abstract: In this paper we study the problem of on-line allocation of routes to virtual circuits (both point-to-point and multicast) where the goal is to route all requests while minimizing the required bandwidth. We concentrate on the case of permanent virtual circuits (i.e., once a circuit is established it exists forever), and describe an algorithm that achieves an O(log n) competitive ratio with respect to maximum congestion, where n is the number of nodes in the network. Informally, our results show that instead of knowing all of the future requests, it is sufficient to increase the bandwidth of the communication links by an O(log n) factor. We also show that this result is tight, that is, for any on-line algorithm there exists a scenario in which an Ω(log n) increase in bandwidth is necessary in directed networks. We view virtual circuit routing as a generalization of an on-line load balancing problem, defined as follows: jobs arrive on-line and each job must be assigned to one of the machines immediately upon arrival. Assigning a job to a machine increases the machine's load by an amount that depends both on the job and on the machine. The goal is to minimize the maximum load. For the related machines case, we describe the first algorithm that achieves a constant competitive ratio. For the unrelated case (with n machines), we describe a new method that yields an O(log n)-competitive algorithm. This stands in contrast to the natural greedy approach, whose competitive ratio is exactly n.
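The load-balancing half of the abstract can be illustrated with a hedged Python sketch of the exponential-potential rule commonly used for unrelated machines: assign each arriving job where it least increases a sum of exponentials of machine loads, rather than where the resulting load is smallest. The base 'a' and all names are assumptions, not the paper's exact algorithm.

```python
def assign_online(jobs, num_machines, a=2.0):
    """jobs: iterable of load vectors; jobs[t][i] is the load job t adds to machine i."""
    loads = [0.0] * num_machines
    schedule = []
    for p in jobs:
        # Greedy on an exponential potential instead of on the loads themselves.
        def increase(i):
            return a ** (loads[i] + p[i]) - a ** loads[i]
        choice = min(range(num_machines), key=increase)
        loads[choice] += p[choice]
        schedule.append(choice)
    return schedule, max(loads)
```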

363 citations


Journal ArticleDOI
TL;DR: In conjunction with Koebe's result that every triangulated planar graph is isomorphic to the intersection graph of a disk-packing, the result not only gives a new geometric proof of the planar separator theorem of Lipton and Tarjan, but also generalizes it to higher dimensions.
Abstract: A collection of n balls in d dimensions forms a k-ply system if no point in the space is covered by more than k balls. We show that for every k-ply system G, there is a sphere S that intersects at most O(k^{1/d} n^{1−1/d}) balls of G and divides the remainder of G into two parts: those in the interior and those in the exterior of the sphere S, respectively, so that the larger part contains at most (1−1/(d+2))n balls. This bound of O(k^{1/d} n^{1−1/d}) is the best possible in both n and k. We also present a simple randomized algorithm to find such a sphere in O(n) time. Our result implies that every k-nearest-neighbor graph of n points in d dimensions has a separator of size O(k^{1/d} n^{1−1/d}). In conjunction with a result of Koebe that every triangulated planar graph is isomorphic to the intersection graph of a disk-packing, our result not only gives a new geometric proof of the planar separator theorem of Lipton and Tarjan, but also generalizes it to higher dimensions. The separator algorithm can be used for point location and geometric divide and conquer in a fixed dimensional space.

274 citations



Journal ArticleDOI
TL;DR: In this article, the authors provide data structures that maintain a graph as edges are inserted and deleted, and keep track of the following properties within the following times: minimum spanning forests, graph connectivity, graph 2-edge connectivity, and bipartiteness in time O(n^{1/2}) per change; 3-edge connectivity in time O(n^{2/3}) per change; and 4-edge connectivity in time O(nα(n)) per change.
Abstract: We provide data structures that maintain a graph as edges are inserted and deleted, and keep track of the following properties within the following times: minimum spanning forests, graph connectivity, graph 2-edge connectivity, and bipartiteness in time O(n^{1/2}) per change; 3-edge connectivity in time O(n^{2/3}) per change; 4-edge connectivity in time O(nα(n)) per change; k-edge connectivity for constant k in time O(n log n) per change; 2-vertex connectivity and 3-vertex connectivity in time O(n) per change; and 4-vertex connectivity in time O(nα(n)) per change. Further results speed up the insertion times to match the bounds of known partially dynamic algorithms. All our algorithms are based on a new technique that transforms an algorithm for sparse graphs into one that will work on any graph, which we call sparsification.
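A minimal Python sketch of the sparsification idea, specialized to connectivity: edge groups sit at the leaves of a balanced recursion, every level keeps only a spanning forest (a sparse certificate) of its children's certificates, and an update invalidates certificates along a single root path. Group sizes, names, and the recompute-from-scratch certificates are simplifying assumptions; the paper plugs genuinely dynamic algorithms into each node.

```python
class DSU:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[ra] = rb
        return True

def spanning_forest(edges):
    dsu, forest = DSU(), []
    for u, v in edges:
        if dsu.union(u, v):
            forest.append((u, v))
    return forest

class SparsifiedConnectivity:
    def __init__(self, groups):          # groups: list of edge lists
        self.groups = groups
        self.cache = {}                   # (lo, hi) -> certificate of groups[lo:hi]

    def certificate(self, lo, hi):
        if (lo, hi) not in self.cache:
            if hi - lo == 1:
                edges = self.groups[lo]
            else:
                mid = (lo + hi) // 2
                edges = self.certificate(lo, mid) + self.certificate(mid, hi)
            self.cache[(lo, hi)] = spanning_forest(edges)
        return self.cache[(lo, hi)]

    def update_group(self, i, new_edges):
        self.groups[i] = new_edges
        # Only certificates whose range contains group i need to be rebuilt.
        self.cache = {k: v for k, v in self.cache.items() if not (k[0] <= i < k[1])}

    def connected(self, u, v):
        dsu = DSU()
        for a, b in self.certificate(0, len(self.groups)):
            dsu.union(a, b)
        return dsu.find(u) == dsu.find(v)
```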

239 citations


Journal ArticleDOI
TL;DR: A stronger lower bound and an upper bound that matches it are provided; it follows that determining the winner in Carroll's elections is not NP-complete unless the polynomial hierarchy collapses.
Abstract: In 1876, Lewis Carroll proposed a voting system in which the winner is the candidate who with the fewest changes in voters' preferences becomes a Condorcet winner—a candidate who beats all other candidates in pairwise majority-rule elections. Bartholdi, Tovey, and Trick provided a lower bound—NP-hardness—on the computational complexity of determining the election winner in Carroll's system. We provide a stronger lower bound and an upper bound that matches our lower bound. In particular, determining the winner in Carroll's system is complete for parallel access to NP, that is, it is complete for Θ_2^p, for which it becomes the most natural complete problem known. It follows that determining the winner in Carroll's elections is not NP-complete unless the polynomial hierarchy collapses.
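A hedged illustration of Carroll's (Dodgson's) rule described above: a candidate's score is the minimum number of adjacent swaps in voters' rankings needed to make it a Condorcet winner, and the winner minimizes this score. The brute-force search below is exponential, which is consistent with the hardness results; all names are illustrative and this is not the paper's construction.

```python
from collections import deque

def is_condorcet_winner(c, profile, candidates):
    for d in candidates:
        if d == c:
            continue
        wins = sum(1 for r in profile if r.index(c) < r.index(d))
        if 2 * wins <= len(profile):          # must beat d by a strict majority
            return False
    return True

def dodgson_score(c, profile, candidates):
    start = tuple(tuple(r) for r in profile)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        prof, k = queue.popleft()
        if is_condorcet_winner(c, prof, candidates):
            return k
        for i, r in enumerate(prof):          # swap two adjacent candidates in one vote
            for j in range(len(r) - 1):
                nr = list(r)
                nr[j], nr[j + 1] = nr[j + 1], nr[j]
                nxt = prof[:i] + (tuple(nr),) + prof[i + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, k + 1))

votes = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]   # a Condorcet cycle
print({c: dodgson_score(c, votes, "abc") for c in "abc"})     # each needs one swap
```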

228 citations


Journal ArticleDOI
TL;DR: This work suggests that checkers should be allowed to use stored randomness, and argues that such checkers could profitably be incorporated in software as an aid to efficient debugging and enhanced reliability.
Abstract: We review the field of result-checking, discussing simple checkers and self-correctors. We argue that such checkers could profitably be incorporated in software as an aid to efficient debugging and enhanced reliability. We consider how to modify traditional checking methodologies to make them more appropriate for use in real-time, real-number computer systems. In particular, we suggest that checkers should be allowed to use stored randomness: that is, that they should be allowed to generate, preprocess, and store random bits prior to run-time, and then to use this information repeatedly in a series of run-time checks. In a case study of checking a general real-number linear transformation (e.g., a Fourier Transform), we present a simple checker which uses stored randomness, and a self-corrector which is particularly efficient if stored randomness is employed.
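A hedged Python sketch of the stored-randomness idea for checking a program that is supposed to compute a linear map f: draw a random vector r and a trusted value f(r) once, offline, and at run time test linearity against the stored pair. Tolerances, the source of the trusted preprocessing value, and all names are illustrative assumptions rather than the paper's exact checker.

```python
import random

def make_linear_checker(program, trusted_f, dim, tol=1e-6):
    """program(x) claims to compute a linear map f(x); trusted_f is used only offline."""
    r = [random.gauss(0.0, 1.0) for _ in range(dim)]   # stored randomness
    f_r = trusted_f(r)                                  # computed once, before run time
    def check(x, claimed_y):
        # By linearity, f(x + r) should equal f(x) + f(r).
        shifted = program([xi + ri for xi, ri in zip(x, r)])
        return all(abs(s - (y + fr)) <= tol
                   for s, y, fr in zip(shifted, claimed_y, f_r))
    return check
```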

Journal ArticleDOI
TL;DR: It is proved that refinements of networks can be accomplished in a modular way by refining their components, by defining notions of interface and interaction refinement and generalizing the notions of refinement to refining contexts.
Abstract: We introduce a method to describe systems and their components by functional specification techniques. We define notions of interface and interaction refinement for interactive systems and their components. These notions of refinement allow us to change both the syntactic (the number of channels and sorts of messages at the channels) and the semantic interface (causality flow between messages and interaction granularity) of an interactive system component. We prove that these notions of refinement are compositional with respect to sequential and parallel composition of system components, communication feedback and recursive declarations of system components. According to these proofs, refinements of networks can be accomplished in a modular way by refining their components. We generalize the notions of refinement to refining contexts. Finally, full abstraction for specifications is defined, and compositionality with respect to this abstraction is shown, too.

Journal ArticleDOI
TL;DR: The first formal complexity model for contention in shared-memory multiprocessors is introduced and certain counting networks outperform conventional single-variable counters at high levels of contention, providing the first formal model explaining this phenomenon.
Abstract: Most complexity measures for concurrent algorithms for asynchronous shared-memory architectures focus on process steps and memory consumption. In practice, however, performance of multiprocessor algorithms is heavily influenced by contention, the extent to which processes access the same location at the same time. Nevertheless, even though contention is one of the principal considerations affecting the performance of real algorithms on real multiprocessors, there are no formal tools for analyzing the contention of asynchronous shared-memory algorithms. This paper introduces the first formal complexity model for contention in shared-memory multiprocessors. We focus on the standard multiprocessor architecture in which n asynchronous processes communicate by applying read, write, and read-modify-write operations to a shared memory. To illustrate the utility of our model, we use it to derive two kinds of results: (1) lower bounds on contention for well-known basic problems such as agreement and mutual exclusion, and (2) trade-offs between the length of the critical path (maximal number of accesses to shared variables performed by a single process in executing the algorithm) and contention for these algorithms. Furthermore, we give the first formal contention analysis of a variety of counting networks, a class of concurrent data structures implementing shared counters. Experiments indicate that certain counting networks outperform conventional single-variable counters at high levels of contention. Our analysis provides the first formal model explaining this phenomenon.

Journal ArticleDOI
TL;DR: This work introduces a new framework for the study of reasoning, and gives Learning to Reason algorithms for classes of propositional languages for which there are no efficient reasoning algorithms, when represented as a traditional (formula-based) knowledge base.
Abstract: We introduce a new framework for the study of reasoning. The Learning (in order) to Reason approach developed here views learning as an integral part of the inference process, and suggests that learning and reasoning should be studied together. The Learning to Reason framework combines the interfaces to the world used by known learning models with the reasoning task and a performance criterion suitable for it. In this framework, the intelligent agent is given access to its favorite learning interface, and is also given a grace period in which it can interact with this interface and construct a representation KB of the world W. The reasoning performance is measured only after this period, when the agent is presented with queries a from some query language, relevant to the world, and has to answer whether W implies a. The approach is meant to overcome the main computational difficulties in the traditional treatment of reasoning which stem from its separation from the “world”. Since the agent interacts with the world when constructing its knowledge representation, it can choose a representation that is useful for the task at hand. Moreover, we can now make explicit the dependence of the reasoning performance on the environment the agent interacts with. We show how previous results from learning theory and reasoning fit into this framework and illustrate the usefulness of the Learning to Reason approach by exhibiting new results that are not possible in the traditional setting. First, we give Learning to Reason algorithms for classes of propositional languages for which there are no efficient reasoning algorithms when represented as a traditional (formula-based) knowledge base. Second, we exhibit a Learning to Reason algorithm for a class of propositional languages that is not known to be learnable in the traditional sense.

Journal ArticleDOI
TL;DR: An algorithm is described that constructs a path on ∂P from s to t whose length is at most (1+ε) times the length of the shortest path between s and t on ∂P.
Abstract: Given a convex polytope P with n faces in R^3, points s, t ∈ ∂P, and a parameter 0 < ε ≤ 1, we present an algorithm that constructs a path on ∂P from s to t whose length is at most (1+ε)d_P(s,t), where d_P(s,t) is the length of the shortest path between s and t on ∂P. The algorithm runs in O(n log(1/ε) + 1/ε^3) time, and is relatively simple. The running time is O(n + 1/ε^3) if we only want the approximate shortest path distance and not the path itself. We also present an extension of the algorithm that computes approximate shortest path distances from a given source point on ∂P to all vertices of P.

Journal ArticleDOI
TL;DR: Two strategies for reducing the clock period of a two-phase, level-clocked circuit are investigated: clock tuning, which adjusts the waveforms that clock the circuit, and retiming, which relocates circuit latches. These methods can be used to convert a circuit with edge-triggered latches into a faster level-clocked one.
Abstract: We investigate two strategies for reducing the clock period of a two-phase, level-clocked circuit: clock tuning, which adjusts the waveforms that clock the circuit, and retiming, which relocates circuit latches. These methods can be used to convert a circuit with edge-triggered latches into a faster level-clocked one. We model a two-phase circuit as a graph G = (V, E) whose vertex set V is a collection of combinational logic blocks, and whose edge set E is a set of interconnections. Each interconnection passes through zero or more latches, where each latch is clocked by one of two periodic, nonoverlapping waveforms, or phases. We give efficient polynomial-time algorithms for problems involving the timing verification and optimization of two-phase circuitry. Included are algorithms for verifying proper timing in O(VE) time; minimizing the clock period by clock tuning in O(VE) time; retiming to achieve a given clock period when the phases are symmetric in O(VE + V lg V) time; and retiming to achieve a given clock period when either the duty cycle (high time) of one phase or the ratio of the phases' duty cycles is fixed in O(V) time. We give fully polynomial-time approximation schemes for clock period minimization, within any given relative error ε > 0, by retiming and tuning when the duty cycles of the two phases are required to be equal, in O((VE + V lg V) lg(V/ε)) time; by retiming and tuning when either the duty cycle of one phase is fixed or the ratio of the phases' duty cycles is fixed, in O(V lg(V/ε)) time; and by simultaneous retiming and clock tuning with no conditions on the duty cycles of the two phases, in O(V(1/ε) lg(1/ε) + (VE + V lg V) lg(V/ε)) time. The first two of these approximation algorithms can be used to obtain the optimum clock period in the special case where all propagation delays are integers. We generalize most of the results for two-phase clocking schemes to simple multiphase clocking disciplines, including ones with overlapping phases. Typically, the algorithms to verify and optimize the timing of k-phase circuitry are at most a factor of k slower than the corresponding algorithms for two-phase circuitry. Our algorithms have been implemented in TIM, a timing package for two-phase, level-clocked circuitry developed at MIT.

Journal ArticleDOI
TL;DR: The logical formulation shows that some of the most tantalizing questions in complexity theory boil down to a single question: the relative power of inflationary vs. noninflationary 1st-order operators.
Abstract: We establish a general connection between fixpoint logic and complexity. On one side, we have fixpoint logic, parameterized by the choices of 1st-order operators (inflationary or noninflationary) and iteration constructs (deterministic, nondeterministic, or alternating). On the other side, we have the complexity classes between P and EXPTIME. Our parameterized fixpoint logics capture the complexity classes P, NP, PSPACE, and EXPTIME, but equality is achieved only over ordered structures. There is, however, an inherent mismatch between complexity and logic—while computational devices work on encodings of problems, logic is applied directly to the underlying mathematical structures. To overcome this mismatch, we use a theory of relational complexity, which bridges the gap between standard complexity and fixpoint logic. On one hand, we show that questions about containments among standard complexity classes can be translated to questions about containments among relational complexity classes. On the other hand, the expressive power of fixpoint logic can be precisely characterized in terms of relational complexity classes. This tight, three-way relationship among fixpoint logics, relational complexity and standard complexity yields in a uniform way logical analogs to all containments among the complexity classes P, NP, PSPACE, and EXPTIME. The logical formulation shows that some of the most tantalizing questions in complexity theory boil down to a single question: the relative power of inflationary vs. noninflationary 1st-order operators.

Journal ArticleDOI
Prasad Jayanti
TL;DR: This paper formally defines robustness and other desirable properties of hierarchies and establishes the unique importance of h_r^m: every nontrivial robust hierarchy is necessarily a “coarsening” of h_r^m.
Abstract: The problem of implementing a shared object of one type from shared objects of other types has been extensively researched. Recent focus has mostly been on wait-free implementations, which permit every process to complete its operations on implemented objects, regardless of the speeds of other processes. It is known that shared objects of different types have differing abilities to support wait-free implementations. It is therefore natural to want to arrange types in a hierarchy that reflects their relative abilities to support wait-free implementations. In this paper, we formally define robustness and other desirable properties of hierarchies. Roughly speaking, a hierarchy is robust if each type is “stronger” than any combination of lower level types. We study two specific hierarchies: one, which we call h_r^m, in which the level of a type is based on the ability of an unbounded number of objects of that type, and another, which we call h_r^1, in which a type's level is based on the ability of a fixed number of objects of that type. We prove that resource bounded hierarchies, such as h_r^1 and its variants, are not robust. We also establish the unique importance of h_r^m: every nontrivial robust hierarchy, if one exists, is necessarily a “coarsening” of h_r^m.

Journal ArticleDOI
TL;DR: A model is proposed within which general emulation results and meaningful lower bounds can be proved for specific networks, along with corresponding general techniques and results.
Abstract: In this paper, we study the problem of emulating T_G steps of an N_G-node guest network on an N_H-node host network. We call an emulation "work-preserving" if the time required by the host, T_H, is O(T_G N_G / N_H), because then both the guest and host networks perform the same total work, Θ(T_G N_G), to within a constant factor. We say that an emulation is "real-time" if T_H = O(T_G), because then the host emulates the guest with constant delay. Although many isolated emulation results have been proved for specific networks in the past, and measures such as dilation and congestion were known to be important, the field has lacked a model within which general results and meaningful lower bounds can be proved. We attempt to provide such a model, along with corresponding general techniques and specific results, in this paper. Some of the more interesting and diverse consequences of this work include: (1) a proof that a linear array can emulate a (much larger) butterfly in a work-preserving fashion, but that a butterfly cannot emulate an expander (of any size) in a work-preserving fashion; (2) a proof that a mesh can be emulated in real time in a work-preserving fashion on a butterfly, even though any O(1)-to-1 embedding of a mesh ...

Journal ArticleDOI
TL;DR: This paper identifies two new complementary properties on the restrictiveness of the constraints in a network—constraint tightness and constraint looseness—and shows their usefulness for estimating the level of local consistency needed to ensure global consistency, and for guaranteeing that a solution can be found in a backtrack-free manner.
Abstract: Constraint networks are a simple representation and reasoning framework with diverse applications. In this paper, we identify two new complementary properties on the restrictiveness of the constraints in a network—constraint tightness and constraint looseness—and we show their usefulness for estimating the level of local consistency needed to ensure global consistency, and for estimating the level of local consistency present in a network. In particular, we present a sufficient condition, based on constraint tightness and the level of local consistency, that guarantees that a solution can be found in a backtrack-free manner. The condition can be useful in applications where a knowledge base will be queried over and over and the preprocessing costs can be amortized over many queries. We also present a sufficient condition for local consistency, based on constraint looseness, that is straightforward and inexpensive to determine. The condition can be used to estimate the level of local consistency of a network. This in turn can be used in deciding whether it would be useful to preprocess the network before a backtracking search, and in deciding which local consistency conditions, if any, still need to be enforced if we want to ensure that a solution can be found in a backtrack-free manner. Two definitions of local consistency are employed in characterizing the conditions: the traditional variable-based notion and a recently introduced definition of local consistency called relational consistency.
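Constraint tightness, roughly, bounds how many values of one variable can support any given value of the other; the Python sketch below computes that bound for a binary constraint given as a set of allowed pairs. The representation and names are assumptions, and the paper's precise definition carries additional qualifications.

```python
def tightness(allowed_pairs, dom_x, dom_y):
    """Smallest m such that every value has at most m supporting values on the other side."""
    m = 0
    for a in dom_x:
        m = max(m, sum(1 for b in dom_y if (a, b) in allowed_pairs))
    for b in dom_y:
        m = max(m, sum(1 for a in dom_x if (a, b) in allowed_pairs))
    return m

# Example: "not equal" over a 3-value domain is 2-tight.
neq = {(a, b) for a in range(3) for b in range(3) if a != b}
print(tightness(neq, range(3), range(3)))   # 2
```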

Journal ArticleDOI
TL;DR: The complexity of the dynamic word and prefix problems in the cell probe or decision assignment tree model is analyzed for two natural cell sizes, 1 bit and log n bits, and a classification of the complexity based on algebraic properties of M is obtained.
Abstract: Let M be a fixed finite monoid. We consider the problem of implementing a data type containing a vector x = (x_1, x_2, ..., x_n) ∈ M^n, initially (1, 1, ..., 1), with two kinds of operations: for each i ∈ {1, ..., n} and a ∈ M, an operation change_{i,a} which changes x_i to a, and a single operation product returning ∏_{i=1}^{n} x_i. This is the dynamic word problem. If we in addition, for each j ∈ {1, ..., n}, have an operation prefix_j returning ∏_{i=1}^{j} x_i, we talk about the dynamic prefix problem. We analyze the complexity of these problems in the cell probe or decision assignment tree model for two natural cell sizes, 1 bit and log n bits. We obtain a classification of the complexity based on algebraic properties of M.
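For contrast with the lower bounds studied in the paper, here is a hedged Python sketch of the natural upper bound: a segment tree over the vector supports change and prefix in O(log n) monoid multiplications. Class and method names are assumptions, not anything from the paper.

```python
class DynamicPrefix:
    def __init__(self, n, op, identity):
        self.n, self.op, self.e = n, op, identity
        self.size = 1
        while self.size < n:
            self.size *= 2
        self.tree = [identity] * (2 * self.size)

    def change(self, i, a):                 # set x_i := a (0-based index i)
        i += self.size
        self.tree[i] = a
        i //= 2
        while i:
            self.tree[i] = self.op(self.tree[2 * i], self.tree[2 * i + 1])
            i //= 2

    def prefix(self, j):                    # product of the first j entries, in order
        lo, hi = self.size, self.size + j
        acc_l, acc_r = self.e, self.e
        while lo < hi:
            if lo & 1:
                acc_l = self.op(acc_l, self.tree[lo]); lo += 1
            if hi & 1:
                hi -= 1; acc_r = self.op(self.tree[hi], acc_r)
            lo //= 2; hi //= 2
        return self.op(acc_l, acc_r)

# Example with a noncommutative monoid: string concatenation.
dp = DynamicPrefix(4, op=lambda a, b: a + b, identity="")
for i, s in enumerate(["a", "b", "c", "d"]):
    dp.change(i, s)
dp.change(1, "X")
print(dp.prefix(3))   # "aXc"
```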

Journal ArticleDOI
TL;DR: This paper develops a framework for computing upper and lower bounds of an exponential form for a large class of single resource systems with Markov additive inputs and concludes with two applications to admission control in multimedia systems.
Abstract: In this paper, we develop a framework for computing upper and lower bounds of an exponential form for a large class of single resource systems with Markov additive inputs. Specifically, the bounds are on quantities such as backlog, queue length, and response time. Explicit or computable expressions for our bounds are given in the context of queuing theory and numerical comparisons with other bounds and exact results are presented. The paper concludes with two applications to admission control in multimedia systems.

Journal ArticleDOI
TL;DR: A high-level language, a logic of time and knowledge, is defined and used to reason about termination conditions and to state general conditions for the existence of sound and complete termination conditions in a broad domain.
Abstract: Inspired by the success of the distributed computing community in applying logics of knowledge and time to reasoning about distributed protocols, we aim for a similarly powerful and high-level abstraction when reasoning about control problems involving uncertainty. This paper concentrates on robot motion planning with uncertainty in both control and sensing, a problem that has already been well studied within the robotics community. First, a new and natural problem in this domain is defined: does there exist a sound and complete termination condition for a motion, given initial and goal locations? If so, how can it be constructed? Then we define a high-level language, a logic of time and knowledge, which we use to reason about termination conditions and to state general conditions for the existence of sound and complete termination conditions in a broad domain. Finally, we show that sound termination conditions that are optimal in a precise sense provide a natural example of knowledge-based programs with multiple implementations.

Journal ArticleDOI
TL;DR: It is shown that the constructive transformations are precisely the transformations that can be expressed in said extensions of complete standard languages, which are not complete for the determinate transformations.
Abstract: Object-oriented applications of database systems require database transformations involving nonstandard functionalities such as set manipulation and object creation, that is, the introduction of new domain elements. To deal with these functionalities, Abiteboul and Kanellakis [1989] introduced the “determinate” transformations as a generalization of the standard domain-preserving transformations. The obvious extensions of complete standard database programming languages, however, are not complete for the determinate transformations. To remedy this mismatch, the “constructive” transformations are proposed. It is shown that the constructive transformations are precisely the transformations that can be expressed in said extensions of complete standard languages. Thereto, a close correspondence between object creation and the construction of hereditarily finite sets is established. A restricted version of the main completeness result for the case where only list manipulations are involved is also presented.

Journal ArticleDOI
TL;DR: This paper describes several algorithms that solve important problems on directed graphs, including breadth-first search, topological sort, strong connectivity, and the single-source shortest path problem.
Abstract: Some parallel algorithms have the property that, as they are allowed to take more time, the total work that they do is reduced. This paper describes several algorithms with this property. These algorithms solve important problems on directed graphs, including breadth-first search, topological sort, strong connectivity, and the single-source shortest path problem. All of the algorithms run on the EREW PRAM model of parallel computer, except the algorithm for strong connectivity, which runs on the probabilistic EREW PRAM.

Journal ArticleDOI
TL;DR: For the first time, network flow techniques are applied to a mesh refinement problem, which reduces the problem to a sequence of bidirected flow problems (or, equivalently, to b-matching problems).
Abstract: We investigate a problem arising in the computer-aided design of cars, planes, ships, trains, and other motor vehicles and machines: refine a mesh of curved polygons, which approximates the surface of a workpiece, into quadrilaterals so that the resulting mesh is suitable for a numerical analysis. This mesh refinement problem turns out to be strongly NP-hard. In commercial CAD systems, this problem is usually solved using a greedy approach. However, these algorithms leave the user a lot of patchwork to do afterwards. We introduce a new global approach, which is based on network flow techniques. Abstracting from all geometric and numerical aspects, we obtain an undirected graph with upper and lower capacities on the edges and some additional node constraints. We reduce this problem to a sequence of bidirected flow problems (or, equivalently, to b-matching problems). For the first time, network flow techniques are applied to a mesh refinement problem. This approach avoids the local traps of greedy approaches and yields solutions that require significantly less additional patchwork.

Journal ArticleDOI
TL;DR: A deterministic protocol is provided for maintenance at each processor of the network, of a current and accurate copy of a common database, with only polylogarithmic overhead in both time and communication complexities.
Abstract: A basic task in distributed computation is the maintenance, at each processor of the network, of a current and accurate copy of a common database. A primary example is the maintenance, for routing and other purposes, of a record of the current topology of the system. Such a database must be updated in the wake of locally generated changes to its contents. Due to previous disconnections of parts of the network, a maintenance protocol may need to update processors holding widely varying versions of the database. We provide a deterministic protocol for this problem, with only polylogarithmic overhead in both time and communication complexities. Previous deterministic solutions required polynomial overhead in at least one of these measures.

Journal ArticleDOI
TL;DR: By varying these hardware parameters, the extent to which complex hardware can speed up routing is studied, and a hierarchy of time bounds for worst-case permutation routing is obtained.
Abstract: We study the extent to which complex hardware can speed up routing. Specifically, we consider the following questions. How much does adaptive routing improve over oblivious routing? How much does randomness help? How does it help if each node can have a large number of neighbors? What benefit is available if a node can send packets to several neighbors within a single time step? Some of these features require complex networking hardware, and it is thus important to investigate whether the performance justifies the investment. By varying these hardware parameters, we obtain a hierarchy of time bounds for worst-case permutation routing.

Journal ArticleDOI
TL;DR: A precise and formal characterization is established of the loss of information in Mycroft's strictness analysis method and in a generalization of this method, called ee-analysis, that reasons about exhaustive evaluation in nonflat domains.
Abstract: Strictness analysis is an important technique for optimization of lazy functional languages. It is well known that all strictness analysis methods are incomplete, i.e., fail to report some strictness properties. In this paper, we provide a precise and formal characterization of the loss of information that leads to this incompleteness. Specifically, we establish the following characterization theorem for Mycroft's strictness analysis method and a generalization of this method, called ee-analysis, that reasons about exhaustive evaluation in nonflat domains: Mycroft's method will deduce a strictness property for program P iff the property is independent of any constant appearing in any evaluation of P. To prove this, we specify a small set of equations, called E-axioms, that capture the information loss in Mycroft's method and develop a new proof technique called E-rewriting. E-rewriting extends the standard notion of rewriting to permit the use of reductions using E-axioms interspersed with standard reduction steps. E-axioms are a syntactic characterization of information loss, and E-rewriting provides an algorithm-independent proof technique for characterizing the power of analysis methods. It can be used to answer questions on completeness and incompleteness of Mycroft's method on certain natural classes of programs. Finally, the techniques developed in this paper provide a general principle for establishing similar results for other analysis methods such as those based on abstract interpretation. As a demonstration of the generality of our technique, we give a characterization theorem for another variation of Mycroft's method called dd-analysis.
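To give a flavor of the kind of analysis being characterized, here is a hedged Python sketch of Mycroft-style strictness analysis on a single toy recursive function: values are abstracted to 0 ("certainly undefined") or 1 ("possibly defined"), the abstract definition is iterated to a fixpoint, and the function is reported strict in an argument when feeding 0 there yields 0. The toy program and all names are my own assumptions, not the paper's formal machinery.

```python
# Abstract meanings of primitives over {0, 1}.
AND = lambda a, b: a & b                 # a strict primitive (e.g., +) needs both arguments
OR = lambda a, b: a | b
COND = lambda c, t, e: AND(c, OR(t, e))  # 'if' needs its condition and one of its branches

def analyse():
    # Concrete program:  f x y = if x == 0 then y else f (x - 1) y
    # Abstractly, x == 0 and x - 1 both reduce to x, since they are strict in x.
    f = lambda x, y: 0                   # start from bottom
    for _ in range(5):                   # Kleene iteration; the fixpoint is reached quickly here
        f = (lambda prev: lambda x, y: COND(x, y, prev(x, y)))(f)
    return f

f_abs = analyse()
print("strict in x:", f_abs(0, 1) == 0)  # True
print("strict in y:", f_abs(1, 0) == 0)  # True: both branches eventually need y
```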

Journal ArticleDOI
TL;DR: It is shown that a Turing machine with two single-head one-dimensional tapes cannot recognize the set.
Abstract: We show that a Turing machine with two single-head one-dimensional tapes cannot recognize the set.