
Showing papers on "Weak consistency" published in 1999


01 Jan 1999
TL;DR: This thesis presents the first implementation-independent specifications of existing ANSI isolation levels and a number of levels that are widely used in commercial systems, e.g., Cursor Stability, Snapshot Isolation, and specifies a variety of guarantees for predicate-based operations in an implementation-independent manner.
Abstract: Current commercial databases allow application programmers to trade off consistency for performance. However, existing definitions of weak consistency levels are either imprecise or they disallow efficient implementation techniques such as optimism. Ruling out these techniques is especially unfortunate because commercial databases support optimistic mechanisms. Furthermore, optimism is likely to be the implementation technique of choice in the geographically distributed and mobile systems of the future. This thesis presents the first implementation-independent specifications of existing ANSI isolation levels and a number of levels that are widely used in commercial systems, e.g., Cursor Stability, Snapshot Isolation. It also specifies a variety of guarantees for predicate-based operations in an implementation-independent manner. Two new levels are defined that provide useful consistency guarantees to application writers; one is the weakest level that ensures consistent reads, while the other captures some useful consistency properties provided by pessimistic implementations. We use a graph-based approach to define different isolation levels in a simple and intuitive manner. The thesis describes new implementation techniques for supporting different weak consistency levels in distributed client-server environments. The mechanisms are based on optimism and make use of multipart timestamps. A new technique is presented that allows multipart timestamps to scale well with the number of clients and servers in our system; the technique takes advantage of loosely synchronized clocks for removing old information in multipart timestamps. This thesis also presents the results of a simulation study to evaluate the performance of our optimistic schemes in data-shipping client-server systems. The results show that the cost of providing serializability relative to mechanisms that provide lower consistency guarantees is negligible for low-contention workloads; furthermore, even for workloads with moderate to high contention, the cost of serializability is low. The simulation study also shows that our mechanisms based on multipart timestamps impose very low CPU, memory, and network costs while providing strong consistency guarantees to read-only and executing transactions. Thesis Supervisor: Barbara H. Liskov, Ford Professor of Engineering.
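The truncation idea lends itself to a small illustration. Below is a minimal sketch (our own toy, not the thesis's actual mechanism; all names are hypothetical) of a multipart timestamp that merges per-server entries and drops entries old enough, under assumed clock-skew and propagation-delay bounds, to have reached every replica:

```python
class MultipartTimestamp:
    """One entry per server: the real-time stamp of the latest update from
    that server which must be visible before a dependent read proceeds."""

    def __init__(self):
        self.parts = {}  # server_id -> latest required timestamp (seconds)

    def record(self, server_id, ts):
        self.parts[server_id] = max(self.parts.get(server_id, 0.0), ts)

    def merge(self, other):
        # Component-wise maximum, as with ordinary vector timestamps.
        for server_id, ts in other.parts.items():
            self.record(server_id, ts)

    def truncate(self, now, max_skew, max_delay):
        # With loosely synchronized clocks, an entry older than
        # now - max_skew - max_delay has (by assumption) propagated to
        # every server, so keeping it conveys no extra information.
        cutoff = now - max_skew - max_delay
        self.parts = {s: t for s, t in self.parts.items() if t >= cutoff}
```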

236 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed a new stabilization operator that globally reconstructs the derivatives not present in the local element function space; this is seen to engender stronger consistency, leading to better convergence and improved accuracy.

137 citations


Proceedings ArticleDOI
01 May 1999
TL;DR: Timed consistency generalizes several existing consistency criteria and it is well suited for interactive and collaborative applications, where the action of one user must be seen by others in a timely fashion.
Abstract: Ordering and time are two different aspects of consistency of shared objects in a distributed system. One avoids conflicts between operations; the other addresses how quickly the effects of an operation are perceived by the rest of the system. Consistency models such as sequential consistency and causal consistency do not consider the particular time at which an operation is executed to establish a valid order among all the operations of a computation. Timed consistency models require that if a write operation is executed at time t, it must be visible to all nodes by time t + Δ. Timed consistency generalizes several existing consistency criteria and is well suited for interactive and collaborative applications, where the action of one user must be seen by others in a timely fashion.
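The t + Δ requirement can be stated directly as a check over a timestamped trace. A minimal sketch (our hypothetical trace representation, not from the paper): a read is invalid if some write to the same variable became mandatory (its time plus Δ has passed) and the read still returns an older value.

```python
def timed_consistent(writes, reads, delta):
    """writes: list of (time, var, value); reads: list of (time, var, value).
    True iff every read sees each write to its variable no later than
    delta after the write occurred (newer values are always allowed)."""
    for t_read, var, val_read in reads:
        # Writes that *must* already be visible at t_read.
        mandatory = [t for t, v, _ in writes if v == var and t + delta <= t_read]
        if not mandatory:
            continue  # nothing is forced to be visible yet
        threshold = max(mandatory)
        # The read may return any value written at or after the threshold.
        allowed = {val for t, v, val in writes if v == var and t >= threshold}
        if val_read not in allowed:
            return False
    return True

# Example: a write at t=0 must be visible by t=5 (delta=5).
assert timed_consistent([(0, "x", 1)], [(4, "x", 0)], delta=5)      # still optional
assert not timed_consistent([(0, "x", 1)], [(6, "x", 0)], delta=5)  # stale past delta
```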

107 citations


Book ChapterDOI
06 Jul 1999
TL;DR: It is shown that under certain reasonable assumptions about the memory system, it is possible to conclude sequential consistency for any number of processors, memory locations, and data values by model checking two finite-state lemmas about process and merge invariants.
Abstract: In shared-memory multiprocessors, sequential consistency offers a natural tradeoff between the flexibility afforded to the implementor and the complexity of the programmer's view of the memory. Sequential consistency requires that some interleaving of the local temporal orders of read/write events at different processors be a trace of serial memory. We develop a systematic methodology for proving sequential consistency for memory systems with three parameters: the number of processors, the number of memory locations, and the number of data values. From the definition of sequential consistency it suffices to construct a non-interfering observer that watches and reorders read/write events so that a trace of serial memory is obtained. While in general such an observer must be unbounded even for fixed values of the parameters (checking sequential consistency is undecidable!), we show that for two paradigmatic protocol classes, lazy caching and snoopy cache coherence, there exist finite-state observers. In these cases, sequential consistency for fixed parameter values can thus be checked by language inclusion between finite automata. In order to reduce the arbitrary-parameter problem to the fixed-parameter problem, we develop a novel framework for induction over the number of processors. Classical induction schemas, which are based on process invariants that are inductive with respect to an implementation preorder that preserves the temporal sequence of events, are inadequate for our purposes, because proving sequential consistency requires the reordering of events. Hence we introduce merge invariants, which permit certain reorderings of read/write events. We show that under certain reasonable assumptions about the memory system, it is possible to conclude sequential consistency for any number of processors, memory locations, and data values by model checking two finite-state lemmas about process and merge invariants: they involve two processors each accessing a maximum of three locations, where each location stores at most two data values. For both lazy caching and snoopy cache coherence we are able to discharge the two lemmas using the model checker MOCHA.
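For intuition, sequential consistency of a fixed finite trace can be checked by brute force: search for an interleaving of the per-processor event sequences that serial memory would accept. The sketch below is a naive reference checker of the definition, nothing like the paper's finite-state observers or MOCHA models:

```python
def sequentially_consistent(histories, init=0):
    """histories: one list per processor of events ('R'|'W', location, value).
    True iff some interleaving preserving each processor's order is a trace
    of serial memory (every read returns the most recently written value)."""
    n = len(histories)
    seen = set()

    def dfs(pos, mem):
        if all(pos[i] == len(histories[i]) for i in range(n)):
            return True
        key = (pos, tuple(sorted(mem.items())))
        if key in seen:
            return False
        seen.add(key)
        for i in range(n):
            if pos[i] == len(histories[i]):
                continue
            op, loc, val = histories[i][pos[i]]
            nxt = pos[:i] + (pos[i] + 1,) + pos[i + 1:]
            if op == 'W' and dfs(nxt, {**mem, loc: val}):
                return True
            if op == 'R' and mem.get(loc, init) == val and dfs(nxt, mem):
                return True
        return False

    return dfs(tuple([0] * n), {})

# Two processors, one location: this interleaves to a serial trace.
assert sequentially_consistent([[('W', 'x', 1)], [('R', 'x', 1), ('R', 'x', 1)]])
```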

58 citations


Journal ArticleDOI
TL;DR: It is demonstrated how analytic and non-asymptotic evaluations of both the underfitting and overfitting sets of selected models can provide conditions ensuring the strong consistency of the model selection criterion used.

53 citations


Proceedings Article
31 Jul 1999
TL;DR: It is proved that generalized arc-consistency on all-different constraints lies between neighborhood inverse consistency and, under a simple restriction, path inverse consistency on the binary representation of the problem.
Abstract: We perform a comprehensive theoretical and experimental analysis of the use of all-different constraints. We prove that generalized arc-consistency on such constraints lies between neighborhood inverse consistency and, under a simple restriction, path inverse consistency on the binary representation of the problem. By generalizing the arguments of Kondrak and van Beek, we prove that a search algorithm that maintains generalized arc-consistency on all-different constraints dominates a search algorithm that maintains arc-consistency on the binary representation. Our experiments show the practical value of achieving these high levels of consistency. For example, we can solve almost all benchmark quasigroup completion problems up to order 25 with just a few branches of search. These results demonstrate the benefits of using non-binary constraints like all-different to identify structure in problems.
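As a concrete reference point, generalized arc consistency on an all-different constraint removes every value that cannot appear in any assignment of pairwise-distinct values. A naive sketch follows (a value-by-value matching test of our own; Régin's matching-plus-SCC algorithm achieves the same pruning far more efficiently):

```python
def alldiff_gac(domains):
    """domains: dict var -> set of candidate values. Returns a copy with
    every value removed that no all-different solution can use."""

    def matchable(doms):
        # Kuhn's augmenting-path bipartite matching: one distinct value per var.
        match = {}  # value -> variable

        def augment(var, visited):
            for val in doms[var]:
                if val in visited:
                    continue
                visited.add(val)
                if val not in match or augment(match[val], visited):
                    match[val] = var
                    return True
            return False

        return all(augment(var, set()) for var in doms)

    pruned = {var: set(vals) for var, vals in domains.items()}
    for var in domains:
        for val in list(pruned[var]):
            trial = dict(pruned)
            trial[var] = {val}  # force var = val; is a full matching still possible?
            if not matchable(trial):
                pruned[var].discard(val)
    return pruned

# x and y share {1, 2}, so z can use neither: GAC leaves z with {3} only.
print(alldiff_gac({'x': {1, 2}, 'y': {1, 2}, 'z': {1, 2, 3}}))
```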

50 citations


Journal ArticleDOI
TL;DR: In this paper, the authors prove weak consistency of U-statistics for stationary ergodic and mixing sequences when the kernel function is unbounded, thereby extending earlier results of Aaronson, Burton, Dehling, Gilat, Hill and Weiss.
Abstract: Motivated by the problem of estimating the fractal dimension of a strange attractor, we prove weak consistency of U-statistics for stationary ergodic and mixing sequences when the kernel function is unbounded, thereby extending earlier results of Aaronson, Burton, Dehling, Gilat, Hill and Weiss. We apply the obtained results to show consistency of the Takens estimator for the correlation dimension.
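The Takens estimator mentioned here has a compact form: up to sign, it is the reciprocal of the mean log of pairwise distances below a cutoff, a U-statistic with an unbounded kernel. A minimal sketch of the standard formula (variable names are ours):

```python
import numpy as np

def takens_estimator(points, r0):
    """Maximum-likelihood estimate of the correlation dimension from the
    pairwise distances d_ij < r0: D = -1 / mean(log(d_ij / r0))."""
    pts = np.asarray(points, dtype=float)
    logs = []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d = np.linalg.norm(pts[i] - pts[j])
            if 0.0 < d < r0:  # the kernel log(d / r0) is unbounded near d = 0
                logs.append(np.log(d / r0))
    return -1.0 / np.mean(logs)

# Points uniform on a line segment should give a dimension estimate near 1.
rng = np.random.default_rng(0)
line = np.column_stack([rng.uniform(0, 1, 500), np.zeros(500)])
print(takens_estimator(line, r0=0.1))  # approximately 1.0
```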

47 citations


Journal ArticleDOI
Qihua Wang
TL;DR: In this article, a semiparametric method with the primary data is employed to obtain the estimators of s and g(·) based on the least-squares criterion with the help of validation data.

43 citations


Proceedings ArticleDOI
09 Jan 1999
TL;DR: Analysis shows how to apply Lamport clocks to verify TSO and Alpha specifications at the architectural level.
Abstract: Cache coherence protocols of current shared-memory multiprocessors are difficult to verify. Our previous work proposed an extension of Lamport's logical clocks for showing that multiprocessors can implement sequential consistency (SC) with an SGI Origin 2000-like directory protocol and a Sun Gigaplane-like split-transaction bus protocol. Many commercial multiprocessors, however, implement more relaxed models, such as SPARC Total Store Order (TSO), a variant of processor consistency, and Compaq (DEC) Alpha, a variant of weak consistency. This paper applies Lamport clocks to both a TSO and an Alpha implementation. Both implementations are based on the same Sun Gigaplane-like split-transaction bus protocol we previously used, but the TSO implementation places a first-in-first-out write buffer between a processor and its cache, while the Alpha implementation uses a coalescing write buffer. Both write buffers satisfy read requests for pending writes (i.e., do bypassing) without requiring the write to be immediately written to cache. Analysis shows how to apply Lamport clocks to verify TSO and Alpha specifications at the architectural level.
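The two buffer disciplines are easy to contrast in code. Below is a minimal sketch (our own toy model, not the paper's protocol model) of a TSO-style FIFO store buffer with read bypassing, plus the coalescing variant attributed to the Alpha implementation:

```python
from collections import OrderedDict, deque

class FIFOWriteBuffer:
    """TSO-style: writes drain to memory in program order; a read returns
    the youngest pending write to the same address (bypassing), if any."""
    def __init__(self, memory):
        self.memory = memory
        self.pending = deque()  # (addr, value) in program order

    def write(self, addr, value):
        self.pending.append((addr, value))

    def read(self, addr):
        for a, v in reversed(self.pending):  # youngest first
            if a == addr:
                return v
        return self.memory.get(addr, 0)

    def drain_one(self):
        if self.pending:
            a, v = self.pending.popleft()
            self.memory[a] = v

class CoalescingWriteBuffer(FIFOWriteBuffer):
    """Alpha-style: a new write to a buffered address overwrites the
    pending entry in place, so per-address order is kept but writes to
    different addresses may drain out of program order."""
    def __init__(self, memory):
        super().__init__(memory)
        self.pending = OrderedDict()  # addr -> value

    def write(self, addr, value):
        self.pending[addr] = value

    def read(self, addr):
        return self.pending.get(addr, self.memory.get(addr, 0))

    def drain_one(self):
        if self.pending:
            addr, value = self.pending.popitem(last=False)
            self.memory[addr] = value
```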

37 citations


Proceedings ArticleDOI
14 Jun 1999
TL;DR: It is shown that checking the consistency of a regulation comes down to generating some particular consequences of some first order formulas, and that Inoue's inference rule, SOL-resolution, can be applied to generate, from a set of clauses, those consequences that satisfy a given condition.
Abstract: This paper addresses the problem of regulation consistency checking. Regulations are sets of rules which express what is obligatory, permitted, or forbidden, and under which conditions. We first define a first order language to model regulations. Then we introduce a definition of regulation consistency. We show that checking the consistency of a regulation comes down to generating some particular consequences of some first order formulas. Finally, we show that we can apply Inoue's inference rule, SOL-resolution, which is complete for generating, from a set of clauses, those consequences that satisfy a given condition.

27 citations


Proceedings ArticleDOI
08 Nov 1999
TL;DR: A new propagation algorithm is given that intensifies the use of the most contracting pruning functions based on box-φ consistency, and the resulting algorithm is shown to outperform the original scheme for enforcing box consistency on a set of standard benchmarks.
Abstract: Interval constraint solvers use local consistencies (one worth mentioning is box consistency) for computing verified solutions of real constraint systems. Though among the most efficient ones, the algorithm for enforcing box consistency suffers from the use of time-consuming operators. This paper first introduces box-φ consistency, a weakening of box consistency; this new notion then allows us to devise an adaptive algorithm that computes box consistency by enforcing box-φ consistency, decreasing the φ parameter as variables' domains get tightened, eventually achieving box-0 consistency, which is equivalent to box consistency. A new propagation algorithm is also given that intensifies the use of the most contracting pruning functions based on box-φ consistency. The resulting algorithm is finally shown to outperform the original scheme for enforcing box consistency on a set of standard benchmarks.
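The role of the φ parameter can be pictured with a toy left-narrowing routine. A minimal sketch (our simplification, assuming an interval evaluation F of the constraint): it searches for the leftmost sub-interval of width at most φ on which a zero cannot be excluded, so φ near 0 corresponds to full box consistency and larger φ gives the cheaper, weaker box-φ pruning.

```python
def narrow_left(F, lo, hi, phi):
    """F(a, b) returns bounds (fmin, fmax) of the constraint f over [a, b].
    Returns the new left bound of the domain, or None if f has no zero."""
    fmin, fmax = F(lo, hi)
    if not (fmin <= 0.0 <= fmax):
        return None                      # zero excluded on all of [lo, hi]
    if hi - lo <= phi:
        return lo                        # narrow enough: stop (box-phi)
    mid = 0.5 * (lo + hi)
    left = narrow_left(F, lo, mid, phi)  # prefer the leftmost feasible part
    return left if left is not None else narrow_left(F, mid, hi, phi)

def F(a, b):
    # Interval evaluation of f(x) = x*x - 2 on [a, b] (a <= b).
    hi2 = max(a * a, b * b)
    lo2 = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
    return lo2 - 2.0, hi2 - 2.0

# Coarse phi gives a rough bound; shrinking phi tightens toward sqrt(2).
print(narrow_left(F, 0.0, 10.0, 0.5))    # within 0.5 of 1.414...
print(narrow_left(F, 0.0, 10.0, 1e-9))   # ~1.41421356...
```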

Proceedings ArticleDOI
R. Debruyne
08 Nov 1999
TL;DR: It is shown that maintaining a local consistency which is stronger than arc consistency during search can be advantageous, and a new local consistency is proposed, called Max-RPCEn (Max-RPC Enhanced), that is stronger than Max-RPC and has almost the same CPU time requirements.
Abstract: Filtering techniques are essential for efficient solution searching in a constraint network (CN). For a long time, it was considered that to efficiently reduce the search space, the best choice is the limited local consistency achieved by forward checking. More recently, however, it has been shown that maintaining arc consistency (a local consistency with more pruning power) during search outperforms forward checking on hard and large constraint networks. In this paper, we show that maintaining a local consistency which is stronger than arc consistency during search can be advantageous. According to a comparison of local consistencies that prune more than arc consistency and can be used on large CNs, max-restricted path consistency (Max-RPC) is one of the most promising. We propose a new local consistency, called Max-RPCEn (Max-RPC Enhanced), that is stronger than Max-RPC and has almost the same CPU time requirements.

Proceedings Article
18 Jul 1999
TL;DR: A generic framework for inverse local consistency is proposed, which includes most of the previously defined levels and allows a rich set of new levels to be defined, and which generalizes the AC7 algorithm used for arc consistency, and produces from any instance its locally consistent closure at the chosen level.
Abstract: Local consistency enforcing is at the core of CSP (Constraint Satisfaction Problem) solving. Although arc consistency is still the most widely used level of local consistency, researchers continue to investigate more powerful levels, such as path consistency, k-consistency, and (i,j)-consistency. Recently, more attention has been turned to inverse local consistency levels, such as path inverse consistency, k-inverse consistency, and neighborhood inverse consistency, which do not suffer from the drawbacks of the other local consistency levels (changes in the constraint definitions and in the constraint graph, prohibitive memory requirements). In this paper, we propose a generic framework for inverse local consistency, which includes most of the previously defined levels and allows a rich set of new levels to be defined. The first benefit of such a generic framework is to allow a user to define and test many different inverse local consistency levels, in accordance with the problem or even the instance he/she has to solve. The second benefit is to allow a generic algorithm to be defined. This algorithm, which is parameterized by the chosen inverse local consistency level, generalizes the AC7 algorithm used for arc consistency, and produces from any instance its locally consistent closure at the chosen level.
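The "parameterized by the chosen level" idea can be pictured as a fixpoint loop over a pluggable support predicate. A crude sketch (an AC3-style loop with none of AC7's bookkeeping; the predicate and names are ours):

```python
def filter_to_fixpoint(domains, supported):
    """domains: dict var -> set of values (mutated in place).
    supported(var, val, domains) encodes the chosen inverse local
    consistency level, e.g. 'val extends to a consistent assignment of
    var's neighborhood'. Deletes unsupported values until stable."""
    changed = True
    while changed:
        changed = False
        for var, vals in domains.items():
            for val in list(vals):
                if not supported(var, val, domains):
                    vals.discard(val)
                    changed = True
    return domains

# Toy level: a value is supported iff some other variable can differ from it.
def some_other_differs(var, val, domains):
    return any(v != var and any(w != val for w in vals)
               for v, vals in domains.items())

print(filter_to_fixpoint({'x': {1}, 'y': {1}, 'z': {1, 2}}, some_other_differs))
```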

Proceedings ArticleDOI
05 Jan 1999
TL;DR: A general, unified and formal framework where uniform and hybrid memory consistency models can be defined is proposed and used to define the following memory models: atomic consistency, sequential consistency, causal consistency, PRAM consistency, slow memory, weak ordering, release consistency, entry consistency and scope consistency.
Abstract: The behavior of distributed shared memory systems is dictated by the memory consistency model. Several memory consistency models have been proposed in the literature and they fit basically in two categories: uniform and hybrid models. To provide a better understanding of the semantics of the memory models, researchers have proposed formalisms to define them. Unfortunately, most of the work has been done in the definition of uniform memory models. We propose a general, unified and formal framework where uniform and hybrid memory consistency models can be defined. To prove the generality of the framework, we use it to define the following memory models: atomic consistency, sequential consistency, causal consistency, PRAM consistency, slow memory, weak ordering, release consistency, entry consistency and scope consistency.

Proceedings ArticleDOI
01 May 1999
TL;DR: It is shown that sequential consistency and linearizability cannot be distinguished by the timing conditions previously considered in the context of counting networks; thus, in contexts where these constraints apply, it is possible to rely on the stronger semantics oflinearizability, which simplifies proofs and enhances compositionality.
Abstract: We compare the impact of timing conditions on implementing sequentially consistent and linearizable counters using (uniform) counting networks in distributed systems. For counting problems in application domains which do not require linearizability but will run correctly if only sequential consistency is provided, the results of our investigation, and their potential payoffs, are threefold: First, we show that sequential consistency and linearizability cannot be distinguished by the timing conditions previously considered in the context of counting networks; thus, in contexts where these constraints apply, it is possible to rely on the stronger semantics of linearizability, which simplifies proofs and enhances compositionality. Second, we identify local timing conditions that support sequential consistency but not linearizability; thus, we suggest weaker, easily implementable timing conditions that are likely to be sufficient in many applications. Third, we show that any kind of synchronization that is too weak to support even sequential consistency may violate it significantly for some counting networks; hence, we identify timing conditions that are to be totally ruled out for specific applications that rely critically on either sequential consistency or linearizability.
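For readers unfamiliar with counting networks, here is the smallest possible example in code: a width-2 network is a single balancer feeding two wire-local counters (real constructions such as the bitonic network cascade many balancers). This is our illustrative toy, not the paper's construction:

```python
import threading

class Balancer:
    """Routes tokens alternately to output wire 0 and wire 1."""
    def __init__(self):
        self._toggle = 0
        self._lock = threading.Lock()

    def traverse(self):
        with self._lock:
            wire = self._toggle
            self._toggle = 1 - self._toggle
            return wire

class CountingNetwork2:
    """Width-2 counting network: wire i hands out i, i + 2, i + 4, ..."""
    def __init__(self):
        self._balancer = Balancer()
        self._counts = [0, 0]
        self._locks = [threading.Lock(), threading.Lock()]

    def next_value(self):
        wire = self._balancer.traverse()
        with self._locks[wire]:
            value = wire + 2 * self._counts[wire]
            self._counts[wire] += 1
            return value

net = CountingNetwork2()
print([net.next_value() for _ in range(6)])  # [0, 1, 2, 3, 4, 5]
```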

Proceedings Article
01 May 1999
TL;DR: The paper shows that for networks of point relations in partially ordered time, the usual constraint propagation approach does not determine global consistency, although for the branching time model, the question remains open.
Abstract: Most work on temporal interval relations and associated automated reasoning methods assumes linear (totally ordered) time. Although checking the consistency of temporal interval constraint networks is known to be NP-hard in general, many tractable subclasses of linear-time temporal relations are known for which a standard O(n³) constraint propagation algorithm actually determines global consistency. In the very special case in which the relations are all "pointizable," meaning that the network can be replaced by an equivalent point constraint network (on the start and finish points of the intervals), an O(n²) algorithm to check consistency exists. This paper explores the situation in nonlinear temporal models, showing that the familiar results no longer pertain. In particular, the paper shows that for networks of point relations in partially ordered time, the usual constraint propagation approach does not determine global consistency, although for the branching time model, the question remains open. Nonetheless, the paper presents an O(n³) algorithm for consistency of a significant subset of the pointizable temporal interval relations in a general partially ordered time model. Beyond checking consistency, for the consistent case the algorithm produces an example scenario satisfying all the given constraints. The latter result benefits planning applications, where actual quantitative values can be assigned to the intervals.
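The "standard constraint propagation" referred to here is the familiar O(n³) path-consistency-style tightening driven by a composition table. A sketch for the linear-time point algebra (the setting where, for pointizable networks, this propagation does determine consistency; the relations and composition table below are the classic ones):

```python
from itertools import permutations

BASE = frozenset({'<', '=', '>'})
INV = {'<': '>', '>': '<', '=': '='}
COMP = {
    ('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): set(BASE),
    ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
    ('>', '<'): set(BASE), ('>', '='): {'>'}, ('>', '>'): {'>'},
}

def compose(r1, r2):
    out = set()
    for a in r1:
        for b in r2:
            out |= COMP[(a, b)]
    return out

def propagate(n, rel):
    """rel[(i, j)]: allowed point relations between time points i and j
    (symmetric closure assumed). Tightens to a fixpoint; returns False on
    an empty relation. Complete for linear time, not for partial orders."""
    changed = True
    while changed:
        changed = False
        for i, k, j in permutations(range(n), 3):
            tighter = rel[(i, j)] & compose(rel[(i, k)], rel[(k, j)])
            if tighter != rel[(i, j)]:
                if not tighter:
                    return False
                rel[(i, j)] = tighter
                rel[(j, i)] = {INV[r] for r in tighter}
                changed = True
    return True

# p0 < p1, p1 < p2, and p2 < p0 is inconsistent in linear time.
rel = {(i, j): set(BASE) for i in range(3) for j in range(3) if i != j}
for a, b in [(0, 1), (1, 2), (2, 0)]:
    rel[(a, b)], rel[(b, a)] = {'<'}, {'>'}
print(propagate(3, rel))  # False
```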

Book ChapterDOI
04 Jan 1999
TL;DR: It turns out that different notions of correctness give rise to different consistency relations, and each notion of consistency is formally characterised and placed in a spectrum of consistency relations.
Abstract: The structuring of the specification and development of distributed systems according to viewpoints, as advocated by the Reference Model for Open Distributed Processing, raises the question of when such viewpoint specifications may be considered consistent with one another. In this paper, we analyse the notion of consistency in the context of formal process specification. It turns out that different notions of correctness give rise to different consistency relations. Each notion of consistency is formally characterised and placed in a spectrum of consistency relations. An example illustrates the use of these relations for consistency checking.

Book
01 Sep 1999
TL;DR: The timestamped anti-entropy protocol is extremely robust in the face of site and network failure, and it scales well to large numbers of replicas.
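A minimal sketch of the idea behind timestamped anti-entropy follows (our own toy, assuming per-replica counters and a summary vector): each replica logs its updates with an (origin, counter) stamp, and a session ships exactly the entries the peer's summary vector shows it is missing.

```python
class Replica:
    def __init__(self, rid):
        self.rid = rid
        self.counter = 0
        self.log = []       # entries: (origin, counter, update)
        self.summary = {}   # origin -> highest counter received from it

    def local_update(self, update):
        self.counter += 1
        self.log.append((self.rid, self.counter, update))
        self.summary[self.rid] = self.counter

    def anti_entropy_with(self, peer):
        # Ship entries in timestamp order so the summary stays a prefix.
        for origin, cnt, update in sorted(self.log):
            if cnt > peer.summary.get(origin, 0):
                peer.log.append((origin, cnt, update))
                peer.summary[origin] = cnt
        # Sessions are symmetric: pull the peer's missing entries too.
        for origin, cnt, update in sorted(peer.log):
            if cnt > self.summary.get(origin, 0):
                self.log.append((origin, cnt, update))
                self.summary[origin] = cnt

a, b = Replica('a'), Replica('b')
a.local_update('x=1'); b.local_update('y=2')
a.anti_entropy_with(b)
print(sorted(a.log) == sorted(b.log))  # True: replicas have converged
```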

Journal ArticleDOI
TL;DR: The effort to incorporate the transaction function into the Directory in order to support the strong consistency requirement is described; the results indicate that the usefulness of the transaction capability outweighs the overhead consideration.