
Showing papers on "Weak consistency published in 2004"


Journal ArticleDOI
TL;DR: This paper examines the problem of measuring intellectual influence based on data on citations between scholarly publications and finds that the properties of invariance to reference intensity, weak homogeneity, weak consistency, and invariance to splitting of journals characterize a unique ranking method.
Abstract: This paper examines the problem of measuring intellectual influence based on data on citations between scholarly publications. We follow an axiomatic approach and find that the properties of invariance to reference intensity, weak homogeneity, weak consistency, and invariance to splitting of journals characterize a unique ranking method. This method is different from those regularly used in economics and other social sciences.

260 citations


Journal ArticleDOI
TL;DR: It is shown that even if a matrix passes a consistency test successfully, it can still be contradictory.

168 citations


Journal ArticleDOI
TL;DR: This paper shows that, using a weak additional axiom satisfied by most existing soft constraint proposals, it is possible to define a notion of soft arc consistency that extends the classical notion of arc consistency, even in the case of non-idempotent cost combination operators.

140 citations


Journal ArticleDOI
TL;DR: The goal of memory consistency is to ensure certain declarative properties which can be intuitively understood by a programmer, and hence allow him or her to write a correct program.
Abstract: The traditional assumption about memory is that a read returns the value written by the most recent write. However, in a shared memory multiprocessor several processes independently and simultaneously submit reads and writes, resulting in a partial order of memory operations. In this partial order, the definition of the most recent write may be ambiguous. Memory consistency models have been developed to specify what values may be returned by a read given that memory operations may only be partially ordered. Before this work, consistency models were defined independently. Each model followed a set of rules which was separate from the rules of every other model. In our work, we have defined a set of four consistency properties. Any subset of the four properties yields a set of rules which constitute a consistency model. Every consistency model previously described in the literature can be defined based on our four properties. Therefore, we present these properties as a unified theory of shared memory consistency. Our unified theory provides several benefits. First, we claim that these four properties capture the underlying structure of memory consistency. That is, the goal of memory consistency is to ensure certain declarative properties which can be intuitively understood by a programmer, and hence allow him or her to write a correct program. Our unified theory provides a uniform, formal definition of all previously described consistency models, and in addition some combinations of properties produce new models that have not yet been described. We believe these new models will prove to be useful because they are based on declarative properties which programmers desire to be enforced. Finally, we introduce the idea of selecting a consistency model as an on-line activity. Before our work, a shared memory program would run start to finish under a single consistency model. Our unified theory allows the consistency model to change as the program runs while maintaining a consistent definition of what values may be returned by each read.
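The abstract above hinges on reads being constrained only by a partial order of memory operations. As a rough, hypothetical illustration of that idea (not the authors' formalism; all helper names are invented), the sketch below lists the values a read may legally return when the writes it sees are only partially ordered:

```python
# A minimal, hypothetical illustration: given a partial order over memory
# operations, list the values a read may legally return, i.e. values of
# writes that are not ordered after the read and not superseded by another
# write ordered between them and the read.

def transitive_closure(edges, nodes):
    """Map each node to the set of nodes reachable from it."""
    reach = {n: set() for n in nodes}
    for a, b in edges:
        reach[a].add(b)
    changed = True
    while changed:
        changed = False
        for a in nodes:
            for b in list(reach[a]):
                new = reach[b] - reach[a]
                if new:
                    reach[a] |= new
                    changed = True
    return reach

def legal_values(read, writes, reach):
    """Values of writes to the read's variable that may supply its result."""
    candidates = []
    for w in writes:
        if w["var"] != read["var"]:
            continue
        if w["id"] in reach[read["id"]]:        # write ordered after the read
            continue
        superseded = any(
            w2["var"] == w["var"]
            and w2["id"] in reach[w["id"]]       # w -> w2
            and read["id"] in reach[w2["id"]]    # w2 -> read
            for w2 in writes if w2 is not w
        )
        if not superseded:
            candidates.append(w["value"])
    return candidates

# Two processors write x independently; the read is ordered after both, but
# the writes are unordered with each other, so "most recent" is ambiguous.
w1 = {"id": "w1", "var": "x", "value": 1}
w2 = {"id": "w2", "var": "x", "value": 2}
r1 = {"id": "r1", "var": "x"}
reach = transitive_closure({("w1", "r1"), ("w2", "r1")}, ["w1", "w2", "r1"])
print(legal_values(r1, [w1, w2], reach))   # -> [1, 2]
```

With two unordered writes to x both ordered before the read, both written values are legal, which is exactly the ambiguity the paper's consistency properties are meant to constrain.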

91 citations


Journal ArticleDOI
TL;DR: In this paper, a kernel method is introduced to estimate a spatial conditional regression under mixing spatial processes; it requires no assumption on the regressor and allows the mixing coefficients to decrease to zero slowly.

60 citations


Proceedings ArticleDOI
08 Mar 2004
TL;DR: This work proves a relationship between the causal consistency model and client-centric consistency models, and shows that in fact causal consistency requires all common session guarantees, i.e. read-your-writes, monotonic-writes, monotonic-reads and writes-follow-reads, to be preserved.
Abstract: We discuss relationships between client-centric consistency models (known as session guarantees) and data-centric consistency models. The first group includes: the read-your-writes guarantee, monotonic-writes guarantee, monotonic-reads guarantee and writes-follow-reads guarantee. The other group includes: atomic consistency, sequential consistency, causal consistency, processor consistency, PRAM consistency, weak consistency, release consistency, scope consistency and entry consistency. We use a consistent notation to present formal definitions of both kinds of consistency models in the context of replicated shared objects. Next, we prove a relationship between the causal consistency model and client-centric consistency models. Apparently, causal consistency is similar to the writes-follow-reads guarantee. We show that in fact causal consistency requires all common session guarantees, i.e. read-your-writes, monotonic-writes, monotonic-reads and writes-follow-reads, to be preserved.
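To make one of the session guarantees concrete, here is a minimal, hypothetical sketch of checking read-your-writes over a single client session; the function and the data layout are assumptions for illustration, not the paper's formal definitions:

```python
# A minimal, hypothetical sketch of the read-your-writes session guarantee:
# every read performed by a client must be served by a replica that has
# already applied all writes previously issued by that client.

def read_your_writes_ok(session_ops, applied_at_replica):
    """session_ops: ordered list of ('write', write_id) or ('read', replica).
    applied_at_replica: replica -> set of write ids applied when it serves a read."""
    issued = set()
    for op in session_ops:
        if op[0] == "write":
            issued.add(op[1])
        else:                                   # a read served by some replica
            replica = op[1]
            if not issued <= applied_at_replica[replica]:
                return False                    # the client's own write is missing
    return True

session = [("write", "w1"), ("write", "w2"), ("read", "B")]
print(read_your_writes_ok(session, {"B": {"w1"}}))          # False: w2 missing
print(read_your_writes_ok(session, {"B": {"w1", "w2"}}))    # True
```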

53 citations


Book ChapterDOI
15 Dec 2004
TL;DR: A formalism for modeling replication in a distributed system with concurrent users sharing information based on actions, which represent operations requested by independent users, and constraints, representing scheduling relations between actions, enables the design of a new, generalised peer-to-peer consistency protocol.
Abstract: We present a formalism for modeling replication in a distributed system with concurrent users sharing information. It is based on actions, which represent operations requested by independent users, and constraints, representing scheduling relations between actions. The formalism encompasses semantics of shared data, such as commutativity or conflict between actions, and user intents such as causal dependence or atomicity. It enables us to reason about the consistency properties of a replication protocol or of classes of protocols. It supports weak consistency (optimistic protocols) as well as the stronger pessimistic protocols. Our approach clarifies the requirements and assumptions common to all replication systems. We are able to prove a number of common properties. For instance consistency properties that appear different operationally are proved equivalent under suitable liveness assumptions. The formalism enables us to design a new, generalised peer-to-peer consistency protocol.
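As a loose illustration of the action/constraint vocabulary described above, the sketch below models actions and a single "must come before" scheduling constraint and checks whether a schedule respects it; the class names and the restriction to one constraint kind are assumptions, since the paper's formalism also covers conflicts, causal dependence and atomicity:

```python
from dataclasses import dataclass

# A loose, hypothetical sketch: actions are user-requested operations,
# constraints are scheduling relations between them. Only a "before"
# constraint is modeled here.

@dataclass(frozen=True)
class Action:
    id: str
    user: str
    operation: str

@dataclass(frozen=True)
class Before:          # action a must be scheduled before action b
    a: str
    b: str

def schedule_respects(schedule, constraints):
    """True if the total order given by `schedule` satisfies all constraints."""
    position = {a.id: i for i, a in enumerate(schedule)}
    return all(position[c.a] < position[c.b] for c in constraints)

a1 = Action("a1", "alice", "append('x')")
a2 = Action("a2", "bob", "append('y')")
print(schedule_respects([a1, a2], [Before("a1", "a2")]))   # True
print(schedule_respects([a2, a1], [Before("a1", "a2")]))   # False
```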

34 citations


Journal ArticleDOI
TL;DR: This paper focuses on a hierarchical caching system based on the time-to-live expiration mechanism, presents a basic model for such a system, and introduces threshold-based and randomization-based techniques to enhance and generalize the basic model.
Abstract: Caching is an important means to scale up the growth of the Internet. Weak consistency is a major approach used in Web caching and has been deployed in various forms. The paper investigates some fundamental properties and performance issues associated with an expiration-based caching system. We focus on a hierarchical caching system based on the time-to-live expiration mechanism and present a basic model for such a system. By analyzing the intrinsic timing behavior of the basic model, we derive important performance metrics from the perspectives of the caching system and end users, respectively. Based on the results for the basic model, we introduce threshold-based and randomization-based techniques to enhance and generalize the basic model further. Our results offer some important insights into a hierarchical caching system based on the weak consistency paradigm.
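For readers unfamiliar with expiration-based weak consistency, the following minimal sketch (hypothetical class and TTL values, not the paper's model) shows the basic behavior being analyzed: a child cache serves entries while their time-to-live has not expired, and otherwise refreshes from its parent, which in turn falls back to the origin:

```python
import time

# A minimal, hypothetical two-level TTL (time-to-live) cache hierarchy.
# Stale data may be served until an entry expires: this is the weak
# consistency trade-off the paper quantifies.

class TTLCache:
    def __init__(self, ttl, parent=None, origin=None):
        self.ttl, self.parent, self.origin = ttl, parent, origin
        self.store = {}                       # key -> (value, expiry_time)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(key)
        if entry and entry[1] > now:          # fresh copy: serve it directly
            return entry[0]
        if self.parent is not None:           # expired or missing: ask parent
            value = self.parent.get(key, now)
        else:
            value = self.origin(key)          # root cache fetches from origin
        self.store[key] = (value, now + self.ttl)
        return value

root = TTLCache(ttl=60, origin=lambda k: f"content-of-{k}")
leaf = TTLCache(ttl=10, parent=root)
print(leaf.get("/index.html"))
```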

27 citations


Journal ArticleDOI
TL;DR: In this paper, a non-parametric estimation of cumulative hazard functions and reliability functions of progressively type-II right censored data is considered, which can also be extended to arbitrary continuous distribution functions.

26 citations


Book ChapterDOI
22 Dec 2004
TL;DR: A flexible consistency model is presented, providing a parameterized representation common to all the models along the spectrum delimited by strong consistency and eventual consistency.
Abstract: This paper presents a flexible consistency model, providing a parameterized representation common to all the models along the spectrum delimited by strong consistency and eventual consistency. A specific model, required by a particular Data Object, is derived from this representation by selecting and combining the proper consistency parameter values.

8 citations


01 Nov 2004
TL;DR: This report shows that a pure backtrack-free filtering algorithm enforcing arc consistency will never exist, and shows that it is easy to obtain a property stronger than arc consistency with a few steps of bisection.
Abstract: Hyvonen and Faltings observed that propagation algorithms with continuous variables are computationally extremely inefficient when unions of intervals are used to precisely store refinements of domains. These algorithms were designed in the hope of obtaining the interesting property of arc consistency, which guarantees that every value in the domains is consistent w.r.t. every constraint. In the first part of this report, we show that a pure backtrack-free filtering algorithm enforcing arc consistency will never exist. But surprisingly, we show that it is easy to obtain a property stronger than arc consistency with a few steps of bisection. We define this so-called box-set consistency and detail a generic method to enforce it. In the second part, a concrete algorithm, derived from the lazy version of the generic method, is proposed. Correctness is proved and the properties are studied precisely.

Journal ArticleDOI
TL;DR: It is shown that, by setting thresholds appropriately, users can impose a consistency QoS requirement on the objects they wish to obtain without too much trade-off in system performance, and that the performance bias against leaf users due to their unfavorable locations in the hierarchical structure can be mitigated.

01 Jan 2004
TL;DR: In this paper, a constructive procedure for approximating a distribution function by mixtures is presented, which provides a general framework for studying mixture-based priors for Bayesian nonparametric inference.
Abstract: Convex linear combinations of distributions ("mixture models") are widely used, in various applied contexts, to model heterogeneity in data or as a tool for obtaining flexible models. These notes discuss the role of mixture models in Bayesian nonparametric inference. Building on results by Feller, we present a constructive procedure for approximating a distribution function by mixtures. This provides a general framework for studying mixture-based priors for Bayesian nonparametric inference. We review some results for the one-dimensional case, in particular concerning the consistency property of the posterior distribution, and suggest extensions to multidimensional data.

Proceedings ArticleDOI
17 May 2004
TL;DR: This work considers the problem of A/D conversion for non-bandlimited signals that have a finite rate of innovation, in particular the class of continuous periodic streams of Diracs, characterized by a set of time positions and weights.
Abstract: We consider the problem of A/D conversion for non-bandlimited signals that have a finite rate of innovation, in particular the class of continuous periodic streams of Diracs, characterized by a set of time positions and weights. Previous research has only considered the sampling of these signals, ignoring quantization, which is necessary for any practical application (e.g. UWB, CDMA). In order to achieve accuracy under quantization, we introduce two types of oversampling, namely, oversampling in frequency and oversampling in time. High accuracy is achieved by enforcing the reconstruction to satisfy either three convex sets of constraints related to (1) the sampling kernel, (2) quantization and (3) periodic streams of Diracs, which is then said to provide strong consistency, or only the first two, providing weak consistency. We propose three reconstruction algorithms, the first two achieving weak consistency and the third one achieving strong consistency. For these three algorithms, respectively, the experimental MSE performance for time positions decreases as O(1/(R_t^2 R_f^3)) and O(1/(R_t^2 R_f^4)), where R_t and R_f are the oversampling ratios in time and in frequency, respectively. It is also proved theoretically that our reconstruction algorithms satisfying weak consistency achieve an MSE performance of at least O(1/(R_t^2 R_f^3)).

Proceedings Article
01 Jan 2004
TL;DR: This paper presents a formal definition of the term data consistency using an event lattice model and concludes that with this definition the consistency of an arbitrary execution of a concurrent system can be determined.
Abstract: Replication of data is a common technique to enhance performance in distributed systems in which multiple activities use shared passive objects. If replicated data is updateable rather than read-only, modifications must be propagated to other copies of the replicated data in order to assure a consistent view. The delay of update propagations affects the observations of activities running on the different nodes of a distributed system. Read operations on the same passive object may return different values depending on the nodes they are executed on and the replica being accessed. A memory consistency model defines the legal ordering of data modifications issued by an activity on a node of the distributed system, as observed by remote activities. This paper presents a formal definition of the term data consistency using an event lattice model. With this definition the consistency of an arbitrary execution of a concurrent system can be determined.

Proceedings ArticleDOI
19 Apr 2004
TL;DR: This paper evaluates the Plurix DSM system for the first time with a real parallel application, a parallel ray-tracer; the measurements show that the DSM scales quite well for this application even though it uses a strong consistency model.
Abstract: Distributed shared memory (DSM) is a well-known alternative to explicit message passing and remote procedure call. Numerous DSM systems and consistency models have been proposed in the past. The Plurix project implements a DSM operating system (OS) storing data and code within the DSM. Our DSM storage features a new consistency model (transactional consistency) combining restartable transactions with an optimistic synchronization scheme instead of relying on a hard-to-use weak consistency model. In this paper we evaluate our system for the first time with a real parallel application, a parallel ray-tracer. The measurements show that our DSM scales quite well for this application even though we are using a strong consistency model.
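The abstract's combination of restartable transactions with optimistic synchronization follows a general pattern; the sketch below is a minimal, hypothetical illustration of that pattern (versioned snapshot, validation at commit time, restart on conflict) and is not the Plurix implementation:

```python
# A minimal, hypothetical sketch of restartable transactions with optimistic
# synchronization: run against a snapshot, validate the version at commit,
# and rerun the transaction body if another commit slipped in between.

class VersionedStore:
    def __init__(self):
        self.data, self.version = {}, 0

    def snapshot(self):
        return dict(self.data), self.version

    def try_commit(self, writes, read_version):
        if read_version != self.version:       # conflict: someone committed first
            return False
        self.data.update(writes)
        self.version += 1
        return True

def run_transaction(store, body, max_retries=10):
    for _ in range(max_retries):
        snap, version = store.snapshot()
        writes = body(snap)                    # compute writes from the snapshot
        if store.try_commit(writes, version):
            return True
    return False                               # gave up after repeated conflicts

store = VersionedStore()
run_transaction(store, lambda snap: {"counter": snap.get("counter", 0) + 1})
print(store.data)   # {'counter': 1}
```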

Book ChapterDOI
15 Dec 2004
TL;DR: It is claimed that, by itself, sequential consistency is not a convergent protocol, and its relationships with several consistency models are analyzed.
Abstract: At instant t, two or more sites could perceive different values for the same distributed object X. However, depending on the consistency protocol used, it might be expected that, after a while, every site in the system should see the same value for this object. In this paper, we present a formalization of the concept of convergence and analyze its relationships with several consistency models. Among other things, we claim that, by itself, sequential consistency is not a convergent protocol.

Journal ArticleDOI
TL;DR: In this article, various screening devices are introduced which help to differentiate between suitable and unsuitable price index formulas, and it is argued that testing for weak consistency in aggregation is a particularly important screening device.
Abstract: In empirical economic research, individual prices are often aggregated into average prices of subaggregates. Then, these average prices are aggregated to produce the average price of the total aggregate. Often, such two-stage procedures help to illuminate the underlying forces driving the overall result. Since price data are usually published as price changes, this two-stage aggregation is typically based on some price index formula. In this paper, various screening devices are introduced which help to differentiate between suitable and unsuitable formulas. It is argued that testing for weak consistency in aggregation is a particularly important screening device. If a price index formula fails the weak consistency test and, nevertheless, this formula is used for a multi-stage price index computation, then the measured overall price change depends on the number of computational stages and also on the precise manner in which the elementary items are partitioned into subaggregates. In other words, the findings are not robust and cannot be considered particularly reliable. Based on these screening devices, it is examined which price index formulas can be expected to produce consistent results.
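As a concrete illustration of consistency in aggregation, the sketch below computes an overall price index once directly from the elementary items and once in two stages via subaggregates; the Laspeyres formula and the numbers are chosen purely for illustration (the paper screens many formulas), and for this particular formula the two computations coincide:

```python
# Consistency in aggregation, illustrated with the Laspeyres index:
# the one-stage index over all elementary items equals the two-stage index
# built from subaggregate indices weighted by base-period expenditure shares.

items = {
    "food":   [(2.0, 2.2, 10), (1.0, 1.1, 5)],   # (base price, current price, base quantity)
    "energy": [(5.0, 6.0, 3)],
}

def laspeyres(prices):
    base = sum(p0 * q0 for p0, p1, q0 in prices)
    curr = sum(p1 * q0 for p0, p1, q0 in prices)
    return curr / base

# One-stage computation over all elementary items.
all_items = [it for group in items.values() for it in group]
one_stage = laspeyres(all_items)

# Two-stage computation: subaggregate indices, weighted by base expenditure share.
total_base = sum(p0 * q0 for p0, p1, q0 in all_items)
two_stage = sum(
    (sum(p0 * q0 for p0, p1, q0 in group) / total_base) * laspeyres(group)
    for group in items.values()
)

print(round(one_stage, 6), round(two_stage, 6))   # identical values (1.1375)
```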

Proceedings Article
22 Aug 2004
TL;DR: It is argued that algebraic closure, which can be enforced by applying a path-consistency algorithm, is the only feasible algebraic method for deciding consistency, and a heuristic about when algebraic closure decides consistency is given.
Abstract: Qualitative spatial and temporal reasoning problems are usually expressed in terms of constraint satisfaction problems, with determining consistency as the main reasoning problem. Because of the high complexity of determining consistency, several notions of local consistency, such as path-consistency, k-consistency and corresponding algorithms have been introduced in the constraint community and adopted for qualitative spatial and temporal reasoning. Since most of these notions of local consistency are equivalent for Allen's Interval Algebra, the first and best known calculus of this kind, it is believed by many that these notions are equivalent in general, which they are not! In this paper we discuss these various notions of consistency and give examples showing their different behaviours in qualitative reasoning. We argue that algebraic closure, which can be enforced by applying a path-consistency algorithm, is the only feasible algebraic method for deciding consistency, and give a heuristic about when algebraic closure decides consistency.
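The algebraic closure operation referred to above can be shown on a tiny calculus; the following sketch applies the standard closure loop to the point algebra over {<, =, >}. The paper's calculi (e.g. Allen's Interval Algebra) use the same loop with larger composition tables, so this example is only illustrative:

```python
from itertools import product

# Algebraic closure (enforced by a path-consistency algorithm) on the point
# algebra: repeatedly refine R(i, j) with R(i, k) composed with R(k, j).

COMP = {   # composition of basic point relations
    ("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): {"<", "=", ">"},
    ("=", "<"): {"<"}, ("=", "="): {"="}, ("=", ">"): {">"},
    (">", "<"): {"<", "=", ">"}, (">", "="): {">"}, (">", ">"): {">"},
}

def compose(r1, r2):
    return set().union(*(COMP[(a, b)] for a, b in product(r1, r2))) if r1 and r2 else set()

def algebraic_closure(n, rel):
    """rel[(i, j)]: set of basic relations allowed between variables i and j."""
    changed = True
    while changed:
        changed = False
        for i, k, j in product(range(n), repeat=3):
            refined = rel[(i, j)] & compose(rel[(i, k)], rel[(k, j)])
            if refined != rel[(i, j)]:
                rel[(i, j)] = refined
                changed = True
                if not refined:          # empty relation: network inconsistent
                    return False
    return True

# x < y, y < z, but z < x: closure detects the inconsistency.
full = {"<", "=", ">"}
rel = {(i, j): set(full) for i, j in product(range(3), repeat=2)}
for i in range(3):
    rel[(i, i)] = {"="}
rel[(0, 1)], rel[(1, 0)] = {"<"}, {">"}
rel[(1, 2)], rel[(2, 1)] = {"<"}, {">"}
rel[(2, 0)], rel[(0, 2)] = {"<"}, {">"}
print(algebraic_closure(3, rel))   # False
```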

Book ChapterDOI
06 Jun 2004
TL;DR: This paper evaluates the authors' weak consistency algorithm, called the "Fast Consistency Algorithm", and concludes that using application parameters such as demand in the event (change) propagation mechanism, so as to prioritize probabilistic interactions with neighbors that have higher demand, gives a surprising improvement in the speed of change propagation perceived by most users.
Abstract: Weak consistency algorithms allow us to propagate changes in a large, arbitrarily changing storage network in a self-organizing way. These algorithms generate very little traffic overhead. In this paper we evaluate our own weak consistency algorithm, called the "Fast Consistency Algorithm", whose main aim is to optimize the propagation of changes by introducing a preference for the nodes and zones of the network that have the greatest demand. We conclude that taking application parameters such as demand into account in the event (change) propagation mechanism to 1) prioritize probabilistic interactions with neighbors with higher demand, and 2) introduce small changes to the logical topology, gives a surprising improvement in the speed of change propagation perceived by most users.
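The demand-biased peer selection described above can be sketched in a few lines; the example below is a hypothetical anti-entropy round in which the neighbor for an exchange is drawn with probability proportional to its demand. The names, weights and the version-exchange rule are assumptions for illustration, not the Fast Consistency Algorithm itself:

```python
import random

# A small, hypothetical sketch of demand-biased peer selection in an
# anti-entropy (gossip) round: neighbors with higher demand are chosen with
# proportionally higher probability, so hot zones of the network see new
# changes sooner.

def pick_neighbor(neighbors, demand):
    """Choose a neighbor with probability proportional to its demand."""
    weights = [demand[n] for n in neighbors]
    return random.choices(neighbors, weights=weights, k=1)[0]

def anti_entropy_round(node, neighbors, demand, versions):
    peer = pick_neighbor(neighbors, demand)
    # Exchange state: both ends keep the newer version of the replicated item.
    newest = max(versions[node], versions[peer])
    versions[node] = versions[peer] = newest

versions = {"a": 3, "b": 1, "c": 1}
demand = {"b": 10.0, "c": 0.1}          # 'b' serves far more read requests
anti_entropy_round("a", ["b", "c"], demand, versions)
print(versions)                          # 'b' is the likely exchange partner
```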