Showing papers on "Weak consistency published in 1998"


Journal ArticleDOI
TL;DR: This study compares three consistency approaches: adaptive TTL, polling-every-time, and invalidation, through analysis, implementation, and trace replay in a simulated environment, and shows that strong cache consistency can be maintained for the Web at little or no extra cost compared with current weak consistency approaches.
Abstract: As the Web continues to explode in size, caching becomes increasingly important. With caching comes the problem of cache consistency. Conventional wisdom holds that strong cache consistency is too expensive for the Web, and that weak consistency methods, such as Time-To-Live (TTL), are most appropriate. This study compares three consistency approaches: adaptive TTL, polling-every-time, and invalidation, through analysis, implementation, and trace replay in a simulated environment. Our analysis shows that weak consistency methods save network bandwidth mostly at the expense of returning stale documents to users. Our experiments show that invalidation generates a comparable amount of network traffic and server workload to adaptive TTL and has similar average client response times, while polling-every-time results in more control messages, higher server workload, and longer client response times. We show that, contrary to popular belief, strong cache consistency can be maintained for the Web at little or no extra cost compared with the current weak consistency approaches, and that it should be maintained using an invalidation-based protocol.
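
As a concrete illustration of the weak-consistency side compared in this study, here is a minimal sketch of an adaptive-TTL heuristic that sets a document's freshness lifetime proportional to its age. The function name, scaling factor, and bounds are illustrative assumptions, not values taken from the paper.

```python
# Sketch of an adaptive-TTL heuristic: the cache treats a document that has
# been stable for a long time as likely to stay stable, so its TTL grows
# with its age. The 0.2 factor and the clamping bounds are assumptions.
import time

def adaptive_ttl(last_modified, now=None, factor=0.2,
                 min_ttl=60.0, max_ttl=86400.0):
    """Return a freshness lifetime (seconds) proportional to document age."""
    now = time.time() if now is None else now
    age = max(now - last_modified, 0.0)
    return min(max(factor * age, min_ttl), max_ttl)

# A document unchanged for a day gets a long lifetime; a freshly modified one
# is revalidated quickly. Stale responses can slip in during that window,
# which is the cost the paper's analysis attributes to weak consistency.
print(adaptive_ttl(last_modified=time.time() - 86400))  # ~17280 s
print(adaptive_ttl(last_modified=time.time() - 300))    # 60 s (floor)
```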

369 citations


Journal ArticleDOI
TL;DR: A simple algebraic property is described which characterises all possible constraint types for which strong k-consistency is sufficient to ensure global consistency, for each k > 2.

241 citations


Journal ArticleDOI
TL;DR: A new consistency model for systems that maintain coherence at large granularity, called Scope Consistency (ScC), which offers most of the performance advantages of the EC model without requiring explicit bindings between data and synchronization variables.
Abstract: Systems that maintain coherence at large granularity, such as shared virtual memory systems, suffer from false sharing and extra communication. Relaxed memory consistency models have been used to alleviate these problems, but at a cost in programming complexity. Release Consistency (RC) and Lazy Release Consistency (LRC) are accepted to offer a reasonable tradeoff between performance and programming complexity. Entry Consistency (EC) offers a more relaxed consistency model, but it requires explicit association of shared data objects with synchronization variables. The programming burden of providing such associations can be substantial. This paper proposes a new consistency model for such systems, called Scope Consistency (ScC), which offers most of the performance advantages of the EC model without requiring explicit bindings between data and synchronization variables. Instead, ScC dynamically detects the associations implied by the programmer, using a programming interface similar to that of RC or LRC. We propose two ScC protocols: one that uses hardware support for fine-grained remote writes (automatic updates or AU) and the other, an all-software protocol. We compare the AU-based ScC protocol with Automatic Update Release Consistency (AURC), a modified LRC protocol that also takes advantage of automatic update support. AURC already improves performance substantially over an all-software LRC protocol. For three of the five applications we used, ScC further improves the speedups achieved by AURC by about 10%.
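
A toy sketch (names and structure assumed, not taken from the paper) of the bookkeeping a scope-based protocol might keep: writes performed while a lock is held are associated with that lock, and a later acquire of the same lock invalidates only those pages, rather than everything modified since the last synchronization.

```python
# Illustrative only: associate dirty pages with the lock (scope) under which
# they were written, so an acquiring process invalidates just that scope's
# pages. This conveys the idea of implicit data/synchronization association;
# it is not the paper's AU-based or all-software protocol.
from collections import defaultdict

class ScopeDirectory:
    def __init__(self):
        self._dirty_by_lock = defaultdict(set)  # lock id -> pages written in its scope

    def record_write(self, lock_id, page):
        self._dirty_by_lock[lock_id].add(page)

    def pages_to_invalidate(self, lock_id):
        # Pages the acquiring process must invalidate/fetch for this scope.
        return set(self._dirty_by_lock[lock_id])

d = ScopeDirectory()
d.record_write(lock_id=1, page=0x2000)  # write inside lock 1's critical section
d.record_write(lock_id=2, page=0x3000)
print(d.pages_to_invalidate(1))         # only the page written under lock 1
```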

195 citations


Journal ArticleDOI
TL;DR: A learning algorithm based on soft consistency and completeness conditions is proposed that combines rule and feature selection in a single process; it is tested on different databases.

148 citations



Journal ArticleDOI
TL;DR: Hybrid consistency as mentioned in this paper is a consistency condition for shared memory multiprocessors that combines the expressiveness of strong consistency conditions (e.g., sequential consistency, linearizability) and the efficiency of weak consistency conditions.
Abstract: Hybrid consistency, a consistency condition for shared memory multiprocessors, attempts to capture the guarantees provided by contemporary high-performance architectures. It combines the expressiveness of strong consistency conditions (e.g., sequential consistency, linearizability) and the efficiency of weak consistency conditions (e.g., pipelined RAM, causal memory). Memory access operations are classified as either strong or weak. A global ordering of strong operations at different processes is guaranteed, but there is very little guarantee on the ordering of weak operations at different processes, except for what is implied by their interleaving with the strong operations. A formal and precise definition of this condition is given and an algorithm for providing hybrid consistency on distributed memory machines is presented. The response time of the algorithm is proved to be within a constant multiplicative factor of the (theoretical) optimal time bounds.
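
A purely illustrative sketch of one way the strong/weak split could be realized: strong writes go through a total-order broadcast (every process applies them in the same order), while weak writes apply locally and propagate asynchronously. The service and method names are hypothetical; the paper's actual algorithm and its near-optimal time bounds are not reproduced here.

```python
# Hypothetical sketch: strong operations gain a global ordering point by
# waiting for their own totally ordered delivery; weak operations update the
# local copy immediately and are propagated best-effort.
class HybridMemory:
    def __init__(self, broadcast, local_store):
        self.broadcast = broadcast   # assumed total-order broadcast service
        self.store = local_store     # this process's copy of shared memory

    def write_weak(self, addr, value):
        self.store[addr] = value                      # no global ordering
        self.broadcast.send_async((addr, value))      # asynchronous propagation

    def write_strong(self, addr, value):
        # Delivered to all processes in one total order; blocking until our
        # own delivery is what makes the operation globally ordered.
        self.broadcast.send_ordered_and_wait((addr, value))

    def on_deliver(self, addr, value):
        self.store[addr] = value
```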

58 citations


Journal ArticleDOI
TL;DR: If the arguments of the relations are interpreted as non-empty open sets within an arbitrary topological space, a complete consistency checking procedure can be provided by means of a composition table; this contrasts with the case where regions are required to be planar and bounded by Jordan curves, for which the consistency problem is known to be NP-hard.
Abstract: This paper examines the problem of testing consistency of sets of topological relations which are instances of the RCC-8 relation set [Leeds92a]. Representations of these relations as constraints within a number of logical frameworks are considered. It is shown that, if the arguments of the relations are interpreted as non-empty open sets within an arbitrary topological space, a complete consistency checking procedure can be provided by means of a composition table. This result is contrasted with the case where regions are required to be planar and bounded by Jordan curves, for which the consistency problem is known to be NP-hard. In order to investigate the completeness of compositional reasoning, the notion of k-compactness of a set of relations w.r.t. a theory is introduced. This enables certain consistency properties of relational networks to be examined independently of any specific interpretation of the domain of entities constrained by the relations.
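
A generic sketch of consistency checking with a composition table, in the spirit of the procedure described above: path consistency repeatedly refines each relation by the composition of its neighbours and reports inconsistency when some relation becomes empty. The function is generic and the table is supplied by the caller; nothing here encodes the RCC-8 table itself.

```python
# Generic path-consistency check over a constraint network R, where R[i][j]
# is a set of possible base relations between entities i and j, and
# compose[(a, b)] gives the set of base relations consistent with a composed
# with b. R[i][i] is expected to hold the identity ("equals") relation.
def path_consistency(R, compose):
    n = len(R)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    comp = set()
                    for a in R[i][j]:
                        for b in R[j][k]:
                            comp |= compose[(a, b)]
                    refined = R[i][k] & comp
                    if not refined:
                        return False       # empty relation: network inconsistent
                    if refined != R[i][k]:
                        R[i][k] = refined  # tighten and keep iterating
                        changed = True
    return True
```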

33 citations


01 Jan 1998
TL;DR: In this thesis, a collection of essays on probability models for complex systems is presented. The main issues covered are: (1) existence of general compositional probability measures, (2) subsystems of compositional systems, and (3) Gibbs representation of the compositional probabilities.
Abstract: This thesis is a collection of essays on probability models for complex systems. Chapter 1 is an introduction to the thesis. The main point made here is the importance of probabilistic modeling to complex problems of machine perception. Chapter 2 studies minimum complexity regression. The results include: (1) weak consistency of the regression, (2) divergence of estimates in $L^2$-norm with an arbitrary complexity assignment, and (3) a condition on the complexity measure to ensure strong consistency. Chapter 3 proposes compositionality as a general principle for probabilistic modeling. The main issues covered here are: (1) existence of general compositional probability measures, (2) subsystems of compositional systems, and (3) Gibbs representation of compositional probabilities. Chapters 4 and 5 establish some useful properties of probabilistic context-free grammars (PCFGs). The following problems are discussed: (1) consistency of estimated PCFGs, (2) finiteness of entropy, moments, etc., of estimated PCFGs, (3) branching rates and re-normalization of inconsistent PCFGs, and (4) identifiability of parameters of PCFGs. Chapter 6 proposes a probabilistic feature-based model for languages. Issues dealt with in the chapter include: (1) formulation of such grammars using the maximum entropy principle, (2) a modified maximum-likelihood type scheme for parameter estimation, and (3) a novel pseudo-likelihood type estimation which is more efficient for sentence analysis. Chapter 7 develops a novel model of the origin of scale invariance of natural images. After presenting the evidence of scale invariance, the chapter goes on to: (1) argue for a $1/r^3$ law for object sizes, (2) establish a 2D Poisson model of the origin of scale invariance, and (3) show numerical simulation results for this model. Chapter 8 is a theoretical extension of Chapter 7. A general approach to constructing scale and translation invariant distributions using wavelet expansion is formulated and applied to construct scale and translation invariant distributions on the spaces of generalized functions and of functions defined on the whole integer lattice.

21 citations


Journal ArticleDOI
TL;DR: In this paper, necessary and sufficient conditions for the weak consistency of the sample median of independent, but not identically distributed, random variables are given and discussed.
Abstract: Necessary and sufficient conditions for the weak consistency of the sample median of independent, but not identically distributed random variables are given and discussed.
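
For reference, a generic statement (not the paper's exact conditions) of what weak consistency of the sample median means in this setting:

```latex
% Weak consistency of the sample median \hat{m}_n of X_1,\dots,X_n
% (independent, not necessarily identically distributed) toward a target m:
\hat{m}_n \xrightarrow{\;P\;} m
\quad\text{i.e.}\quad
\forall\,\varepsilon > 0:\ \Pr\bigl(|\hat{m}_n - m| > \varepsilon\bigr) \to 0
\ \text{as } n \to \infty .
```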

18 citations


Proceedings ArticleDOI
26 May 1998
TL;DR: It is shown that memory consistency conditions such as sequential consistency and linearizability can be extended to this general model, and algorithms to implement these consistency conditions in a distributed system are provided.
Abstract: The traditional distributed shared memory (DSM) model provides atomicity at levels of read and write on single objects. Therefore, multi-object operations such as double compare and swap, and atomic m-register assignment cannot be efficiently expressed in this model. We extend the traditional DSM model to allow operations to span multiple objects. We show that memory consistency conditions such as sequential consistency and linearizability can be extended to this general model. We also provide algorithms to implement these consistency conditions in a distributed system.
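
As a toy illustration of the kind of multi-object operation the extended model is meant to capture, here is a double compare-and-swap whose atomicity spans two objects. The single global lock is only a way to convey the semantics in a few lines; it is not the paper's distributed algorithm, and the names are assumptions.

```python
# Double compare-and-swap across two objects: both updates happen, or neither.
# In the traditional single-object DSM model this cannot be expressed as one
# atomic operation, which is the motivation for the multi-object extension.
import threading

_lock = threading.Lock()

def dcas(obj1, key1, expect1, new1, obj2, key2, expect2, new2):
    """Atomically swap both entries iff both expectations hold."""
    with _lock:
        if obj1.get(key1) == expect1 and obj2.get(key2) == expect2:
            obj1[key1], obj2[key2] = new1, new2
            return True
        return False

a, b = {"x": 1}, {"y": 2}
print(dcas(a, "x", 1, 10, b, "y", 2, 20))  # True: both objects updated together
print(a, b)                                # {'x': 10} {'y': 20}
```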

15 citations


Patent
22 Apr 1998
TL;DR: In this article, the authors present a method for consistency management suited to each type of transaction: when a transaction is generated in the system, a consistency-management level is determined based on the type of the generated transaction.
Abstract: PROBLEM TO BE SOLVED: To perform consistency management suited to each type of transaction. SOLUTION: One of three levels (first, second, or third), indicating the level of consistency management, is set for each transaction type (S300). When a transaction is generated in the system (S301), the level of consistency management is determined based on the type of the generated transaction (S302 and S303). At the first level, consistency management uses a strict consistency management method, and update transmission from the main site to the respective sites is executed immediately (S304). At the second level, consistency management uses the strict consistency management method, and update transmission from the main site to the respective sites is delayed (S305). At the third level, consistency management uses a weak consistency management method, and update transmission from the main site to the respective sites is delayed (S306). COPYRIGHT: (C)1999,JPO
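
A hypothetical sketch of the dispatch the abstract describes: each transaction type maps to one of the three levels, and the level decides whether updates from the main site are propagated immediately or with a delay, under strict or weak management. The transaction-type names and method names are illustrative only.

```python
# Illustrative mapping from transaction type to consistency-management level,
# following the three levels (S304/S305/S306) named in the patent abstract.
from enum import Enum

class Level(Enum):
    STRICT_IMMEDIATE = 1  # strict management, immediate propagation (S304)
    STRICT_DELAYED = 2    # strict management, delayed propagation (S305)
    WEAK_DELAYED = 3      # weak management, delayed propagation (S306)

LEVEL_BY_TXN_TYPE = {           # example types, not from the patent
    "account_transfer": Level.STRICT_IMMEDIATE,
    "inventory_update": Level.STRICT_DELAYED,
    "page_view_log": Level.WEAK_DELAYED,
}

def handle_transaction(txn_type, update, main_site):
    level = LEVEL_BY_TXN_TYPE[txn_type]        # steps S302/S303
    if level is Level.STRICT_IMMEDIATE:
        main_site.propagate_now(update)        # hypothetical propagation API
    else:
        main_site.enqueue_delayed(update, weak=(level is Level.WEAK_DELAYED))
```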

01 Jan 1998
TL;DR: A novel multiple-view model of a distributed data warehouse is proposed that allows views to set their own constraints by enforcing individual conditions for all pairs of paths, along with an algorithm to ensure that views are updated consistently.
Abstract: We propose and analyze a novel multiple-view model of a distributed data warehouse. Views are represented in a hierarchical fashion, incorporating data from base sources as well as possibly other views. Current approaches to maintaining consistency in such a model require that data stored in a view derived from base data via different paths be from the same state of the base relation. This type of consistency criterion is too restrictive for some applications. Hence, we propose relaxing the synchronization constraints at the view level and develop a model that allows views to set their own constraints by enforcing individual conditions for all pairs of paths. We define a correctness criterion for updates in this particular model, and analyze the new requirements necessary for maintaining the consistency of data. Finally, we propose an algorithm to ensure that views are updated consistently.
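
An illustrative sketch, under assumed names, of the relaxed per-view constraint: rather than requiring every derivation path to reflect the same base state, each view bounds how far apart the base-state versions seen along any pair of paths may drift.

```python
# Sketch of a per-view, per-path-pair constraint check. Versions stand in for
# base-relation states; the skew bounds are the view's own conditions. All
# identifiers here are assumptions for illustration, not the paper's model.
def paths_consistent(path_versions, max_skew_by_pair):
    """path_versions: {path_id: base-state version observed along that path}
    max_skew_by_pair: {(p1, p2): allowed version difference for this view}"""
    for (p1, p2), max_skew in max_skew_by_pair.items():
        if abs(path_versions[p1] - path_versions[p2]) > max_skew:
            return False
    return True

versions = {"pathA": 41, "pathB": 43}
print(paths_consistent(versions, {("pathA", "pathB"): 5}))  # True: within skew
print(paths_consistent(versions, {("pathA", "pathB"): 1}))  # False: too far apart
```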

Posted Content
TL;DR: In this paper, an asymptotic theory for nonlinear regression with integrated processes is developed for the case of parametric nonlinear cointegration and sufficient conditions for weak consistency are given and a limit distribution theory is provided.
Abstract: An asymptotic theory is developed for nonlinear regression with integrated processes. The models allow for nonlinear effects from unit root time series and therefore deal with the case of parametric nonlinear cointegration. The theory covers integrable, asymptotically homogeneous and explosive functions. Sufficient conditions for weak consistency are given and a limit distribution theory is provided. In general, the limit theory is mixed normal with mixing variates that depend on the sojourn time of the limiting Brownian motion of the integrated process. The rates of convergence depend on the properties of the nonlinear regression function, and are shown to be as slow as n^{1/4} for integrable functions, to be generally polynomial in n^{1/2} for homogeneous functions, and to be path dependent in the case of explosive functions.
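
A generic formulation of the model class described above, in notation of my own choosing rather than the paper's:

```latex
% Nonlinear regression on an integrated (unit-root) regressor:
y_t = f(x_t, \theta_0) + u_t, \qquad
x_t = x_{t-1} + v_t, \qquad t = 1, \dots, n,
% with \hat{\theta}_n the nonlinear least-squares estimator. Weak consistency
% means \hat{\theta}_n \xrightarrow{P} \theta_0, at rates that depend on f:
% as slow as n^{1/4} for integrable f, and generally polynomial in n^{1/2}
% for asymptotically homogeneous f, as stated in the abstract.
```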

Proceedings ArticleDOI
30 Mar 1998
TL;DR: This work introduces a novel DSM model called Mume, a low-level layer close to the level of the message passing interface that allows efficient implementations of different memory access semantics, accommodating particular data access patterns.
Abstract: Distributed shared memory (DSM) is a paradigm for programming distributed systems which provides an alternative to the message passing model. DSM offers the agents of the system a shared address space through which they can communicate with each other. The main problem of a DSM implementation on top of a message passing system is performance. Performance of an implementation is closely related to the consistency the DSM system offers: strong consistency (all agents agree about how memory events happen) is more expensive to implement than weak consistency (disagreements are allowed). There have been many DSM system proposals, each one supporting different consistency levels. Experience has shown that none is well suited to the whole range of problems. In some cases, strongly consistent primitives are not needed, while in other cases the weak semantics provided are useless. This is also true for different implementations of the same memory model, since performance is also affected by the data access patterns of the applications. We introduce a novel DSM model called Mume. Mume is a low-level layer close to the level of the message passing interface. The Mume interface provides only the minimum requirements to be considered a shared memory system. The interface includes three types of synchronization primitives, namely total ordering, causal ordering, and mutual exclusion. This allows efficient implementations of different memory access semantics, accommodating particular data access patterns.
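
A speculative sketch of what a minimal Mume-style interface could look like, exposing just the three synchronization primitive families the abstract mentions; the class and method names are assumptions, not Mume's actual API.

```python
# Assumed shape of a minimal DSM interface with three synchronization
# primitive families (total ordering, causal ordering, mutual exclusion),
# on top of plain read/write access. Purely illustrative.
from abc import ABC, abstractmethod

class MumeLikeDSM(ABC):
    @abstractmethod
    def write(self, addr, value): ...   # plain shared-memory access

    @abstractmethod
    def read(self, addr): ...

    @abstractmethod
    def totally_ordered(self, op): ...  # op delivered in the same order everywhere

    @abstractmethod
    def causally_ordered(self, op): ... # op delivered respecting causal precedence

    @abstractmethod
    def with_mutex(self, lock_id): ...  # mutual exclusion over a critical section
```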