
Showing papers on "Weak consistency published in 1995"


Proceedings ArticleDOI
01 May 1995
TL;DR: The results show that DSI reduces execution time of a sequentially consistent full-map coherence protocol by as much as 41%.
Abstract: This paper introduces dynamic self-invalidation (DSI), a new technique for reducing cache coherence overhead in shared-memory multiprocessors. DSI eliminates invalidation messages by having a processor automatically invalidate its local copy of a cache block before a conflicting access by another processor. Eliminating invalidation overhead is particularly important under sequential consistency, where the latency of invalidating outstanding copies can increase a program's critical path. DSI is applicable to software, hardware, and hybrid coherence schemes. In this paper we evaluate DSI in the context of hardware directory-based write-invalidate coherence protocols. Our results show that DSI reduces execution time of a sequentially consistent full-map coherence protocol by as much as 41%. This is comparable to an implementation of weak consistency that uses a coalescing write-buffer to allow up to 16 outstanding requests for exclusive blocks. When used in conjunction with weak consistency, DSI can exploit tear-off blocks---which eliminate both invalidation and acknowledgment messages---for a total reduction in messages of up to 26%.
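The core mechanism is easy to picture in code. Below is a minimal sketch of the self-invalidation idea only, under assumed names (Cache, load, sync_point, and the flag bit are all hypothetical); the paper's actual directory protocol decides what to flag and when to drop blocks far more carefully.

```python
# Minimal sketch of the dynamic self-invalidation idea: instead of waiting
# for the directory to send an invalidation on a conflicting access, a cache
# voluntarily drops blocks that were flagged as likely to conflict when it
# reaches a synchronization point. Illustrative only, not the DSI protocol.

class Cache:
    def __init__(self):
        self.blocks = {}              # address -> data
        self.flagged = set()          # blocks marked for self-invalidation

    def load(self, addr, data, flag):
        """Fill a block; the directory may flag it as a conflict candidate."""
        self.blocks[addr] = data
        if flag:
            self.flagged.add(addr)

    def sync_point(self):
        """Drop flagged blocks voluntarily, so the directory never needs to
        send (and await acknowledgment of) invalidations for them."""
        for addr in self.flagged:
            self.blocks.pop(addr, None)
        self.flagged.clear()

cache = Cache()
cache.load(0x40, data=7, flag=True)   # directory predicts a future conflict
cache.sync_point()                    # block is gone before anyone asks
assert 0x40 not in cache.blocks
```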

216 citations


Proceedings Article
20 Aug 1995
TL;DR: A specific algorithm, AC-7, is presented, which takes advantage of a simple property common to all binary constraints to eliminate constraint checks that other arc consistency algorithms perform.
Abstract: Constraint satisfaction problems are widely used in artificial intelligence. They involve finding values for problem variables subject to constraints that specify which combinations of values are consistent. Knowledge about properties of the constraints can permit inferences that reduce the cost of consistency checking. In particular, such inferences can be used to reduce the number of constraint checks required in establishing arc consistency, a fundamental constraint-based reasoning technique. A general AC-Inference schema is presented and various forms of inference discussed. A specific algorithm, AC-7, is presented, which takes advantage of a simple property common to all binary constraints to eliminate constraint checks that other arc consistency algorithms perform. The effectiveness of this approach is demonstrated analytically, and experimentally on real-world problems.
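The "simple property" is bidirectionality of support: a single constraint check C(a, b) answers both whether b supports a on C(x, y) and whether a supports b on C(y, x). The sketch below illustrates only that saving, through a shared cache of check results (revise and its data layout are assumptions); the real AC-7 uses more refined support bookkeeping.

```python
# Arc revision that never repeats a constraint check in either direction:
# results are cached under (var, val, var, val) keys and looked up for both
# orientations of the constraint. Simplified illustration, not full AC-7.

def revise(xi, xj, domains, constraint, checked):
    """Remove values of xi lacking support in xj; share checks both ways."""
    removed = False
    for a in list(domains[xi]):
        for b in domains[xj]:
            key, rkey = (xi, a, xj, b), (xj, b, xi, a)
            if key in checked or rkey in checked:
                ok = checked.get(key, checked.get(rkey))  # reuse old check
            else:
                ok = constraint(a, b)     # the single real constraint check
                checked[key] = ok         # also serves the reverse direction
            if ok:
                break
        else:
            domains[xi].remove(a)         # no support found for value a
            removed = True
    return removed

# Usage: x < y over {1,2,3} x {1,2,3}; checks done while revising (x,y)
# are reused when revising (y,x) with the reversed predicate.
domains = {"x": {1, 2, 3}, "y": {1, 2, 3}}
checked = {}
revise("x", "y", domains, lambda a, b: a < b, checked)
revise("y", "x", domains, lambda b, a: a < b, checked)
print(domains)   # {'x': {1, 2}, 'y': {2, 3}}
```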

140 citations


Book ChapterDOI
18 Dec 1995
TL;DR: Sequential consistency and causal consistency constitute two of the main consistency criteria used to define the semantics of accesses in the shared memory model.
Abstract: Sequential consistency and causal consistency constitute two of the main consistency criteria used to define the semantics of accesses in the shared memory model. An execution is sequentially consistent if all processes can agree on the same legal sequential history of all the accesses; if processes perceive distinct legal sequential histories of all the accesses, the execution is only causally consistent (legality means that a read does not get an overwritten value).
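A standard example makes the distinction concrete. The execution below (a textbook illustration, not taken from the paper) is causally consistent but not sequentially consistent: the two writes are causally unrelated, so causal consistency lets P3 and P4 observe them in opposite orders, while no single legal sequential history can satisfy all four processes.

```python
# w(x)v denotes writing v to x; r(x)v denotes reading v from x.
history = {
    "P1": ["w(x)1"],
    "P2": ["w(x)2"],
    "P3": ["r(x)1", "r(x)2"],   # observes w(x)1 before w(x)2
    "P4": ["r(x)2", "r(x)1"],   # observes w(x)2 before w(x)1
}
```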

61 citations


01 Jan 1995
TL;DR: This thesis proposes to approximate feasible solution regions by 2^k-trees, thus providing a means of combining constraints logically rather than numerically, and proposes simple and stable algorithms for computing labels of arbitrary degrees of consistency using this representation.
Abstract: Constraint Satisfaction Problems (CSPs) are ubiquitous in computer science. Many problems, ranging from resource allocation and scheduling to fault diagnosis and design, involve constraint satisfaction as an essential component. A CSP is given by a set of variables and constraints on small subsets of these variables. It is solved by finding assignments of values to the variables such that all constraints are satisfied. In its most general form, a CSP is combinatorial and complex. In this thesis, we consider constraint satisfaction problems with variables in continuous, numerical domains. Contrary to most existing techniques, which focus on computing a single optimal solution, we address the problem of computing a compact representation of the space of all solutions that satisfy the constraints. This has the advantage that no optimization criterion has to be formulated beforehand, and that the space of possibilities can be explored systematically. In certain applications, such as diagnosis and design, these advantages are crucial. In consistency techniques, the solution space is represented by labels assigned to individual variables or combinations of variables. When the labeling is globally consistent, each label contains only those values or combinations of values which appear in at least one solution. This kind of labeling is a compact, sound and complete representation of the solution space, and can be combined with other reasoning methods. In practice, computing a globally consistent labeling is too complex. This is usually tackled in two ways. One way is to enforce consistencies locally, using propagation algorithms. This prunes the search space and hence reduces the subsequent search effort. The other way is to identify simplifying properties which guarantee that global consistency can be enforced tractably using local propagation algorithms. When constraints are represented by mathematical expressions, implementing local consistency algorithms is difficult because it requires tools for solving arbitrary systems of equations. In this thesis, we propose to approximate feasible solution regions by 2^k-trees, thus providing a means of combining constraints logically rather than numerically. This representation, commonly used in computer vision and image processing, avoids using complex mathematical tools. We propose simple and stable algorithms for computing labels of arbitrary degrees of consistency using this representation. For binary constraints, it is known that simplifying convexity properties reduce the complexity of solving a CSP. These properties guarantee that local degrees of consistency are …
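The representation is easiest to picture for k = 2, where a 2^k-tree is a quadtree over a 2-D box. The sketch below is a toy version with a corner-sampling heuristic and a hypothetical inside() predicate; the thesis's algorithms compute sound labels and handle arbitrary k.

```python
# Approximate the feasible region of a constraint by recursive subdivision:
# each cell is labeled "in", "out", or split further until a depth limit,
# yielding a tree whose leaves cover the solution space.

def build(box, inside, depth, max_depth):
    """box = (xmin, xmax, ymin, ymax); inside(x, y) tests the constraint."""
    xmin, xmax, ymin, ymax = box
    corners = [inside(x, y) for x in (xmin, xmax) for y in (ymin, ymax)]
    if depth > 0 and all(corners):
        return "in"              # heuristic: treat cell as fully feasible
    if depth > 0 and not any(corners):
        return "out"             # heuristic: treat cell as infeasible
    if depth == max_depth:
        return "boundary"
    xm, ym = (xmin + xmax) / 2, (ymin + ymax) / 2
    return [build(b, inside, depth + 1, max_depth) for b in (
        (xmin, xm, ymin, ym), (xm, xmax, ymin, ym),
        (xmin, xm, ym, ymax), (xm, xmax, ym, ymax))]

# Example: the unit disc x^2 + y^2 <= 1 inside the box [-1, 1] x [-1, 1].
tree = build((-1, 1, -1, 1), lambda x, y: x * x + y * y <= 1, 0, 4)
```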

39 citations


Journal ArticleDOI
TL;DR: In this article, it is shown that for any implementation of weak consistency, the time required to execute a read-modify-write, a dequeue or a pop operation is Ω(d), where d is the network delay.
Abstract: In recent years, there has been a growing tendency to support high-level synchronization operations, such as read-modify-write, FIFO queues, and stacks, as part of the programmer's shared memory model. This paper examines the problem of implementing hybrid consistency with high-level synchronization operations. It is shown that for any implementation of weak consistency, the time required to execute a read-modify-write, a dequeue or a pop operation is Ω(d), where d is the network delay. Following this, an efficient and simple algorithm for providing hybrid consistency that supports most types of high-level synchronization operations as well as weak read and weak write operations is presented. Weak read and weak write operations are executed instantaneously, while the time required to execute strong operations is O(d). This is within a constant factor of the lower bounds for most of the commonly used types of operations.

20 citations


Journal ArticleDOI
TL;DR: It is discovered that cache interference causes very little performance degradation in all relaxed memory consistency models, as long as the network is contention-free, even when multithreading has saturated the system.
Abstract: Stochastic timed Petri nets are developed to evaluate the relative performance of distributed shared memory models for scalable multiprocessors, using multithreaded processors as building blocks. Four shared memory models are evaluated: the sequential consistency (SC) model by Lamport (1979), the weak consistency (WC) model by Dubois et al. (1986), the processor consistency (PC) model by Goodman (1989), and the release consistency (RC) model by Gharachorloo et al. (1990). We assumed a scalable network with sufficient bandwidth to absorb the increased traffic from multithreading, coherent caches, and memory event reordering. The embedded Markov chains are solved to reveal the performance attributes. Under saturated conditions, we find that multithreading contributes more than 50% of the performance improvement, while the improvement from memory consistency models varies between 20% and 40% of the total performance gain. Petri net models are effective for predicting the performance of processors with a larger number of contexts than can be simulated in previous benchmark studies. The accuracy of these memory performance models was validated against simulation results from Stanford University. Our analytical results reveal that the SC model has the lowest performance among the four memory consistency models. The PC model requires larger write buffers, while the WC and RC models require smaller write buffers. The PC model may perform even worse than the SC model if a small buffer is used. The performance of the WC model depends heavily on the synchronization rate in user code. For a low synchronization rate, the WC model performs as well as the RC model. With sufficient multithreading and network bandwidth, the RC model shows the best performance among the four models. Furthermore, we discovered that cache interference causes very little performance degradation in all relaxed memory consistency models, as long as the network is contention-free, even when multithreading has saturated the system.

13 citations


Proceedings Article
20 Aug 1995
TL;DR: A more efficient method is proposed for checking an important subclass of functional constraints, increasing functional constraints: rather than being checked many times, as in a typical consistency check process, in the new method they need to be checked only once.
Abstract: Central to solving Constraint Satisfaction Problems (CSPs) is the problem of consistency check. Past research has produced many general and specific consistency algorithms. Specific algorithms are efficient specializations of the general ones for specific constraints. Functional, anti-functional and monotonic constraints are three important classes of specific constraints. They form the basis of the current constraint programming languages. This paper proposes a more efficient method for checking an important subclass of functional constraints, increasing functional constraints. Rather than being checked many times, as in a typical consistency check process, in the new method they (almost all of them) need to be checked only once. This results in a substantial saving in computation.
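The reason a single check can suffice is easiest to see with bounds reasoning: an increasing function maps domain bounds to domain bounds, so one pass over the bounds tightens both domains. The code below is a simplified illustration under assumed names (prune_increasing, f_inv), not the paper's algorithm.

```python
# For a constraint y = f(x) with f increasing: values of x outside
# [f_inv(min ys), f_inv(max ys)] and values of y outside
# [f(min xs), f(max xs)] can never appear in a solution, and a single
# pass removes them.

def prune_increasing(xs, ys, f, f_inv):
    """xs, ys: sorted value lists; f increasing with inverse f_inv."""
    lo_x, hi_x = f_inv(ys[0]), f_inv(ys[-1])
    xs = [x for x in xs if lo_x <= x <= hi_x]
    if not xs:
        return [], []
    lo_y, hi_y = f(xs[0]), f(xs[-1])
    ys = [y for y in ys if lo_y <= y <= hi_y]
    return xs, ys

xs, ys = prune_increasing([1, 2, 3, 4], [2, 4, 9],
                          lambda x: x * x, lambda y: y ** 0.5)
print(xs, ys)   # [2, 3] [4, 9]
```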

13 citations


01 Jan 1995
TL;DR: In this article, necessary and sufficient conditions were given for weak consistency of the LS estimate of β in a multiple regression model with independent random errors, distributed identically or not.
Abstract: Consider the multiple regression model Y_i = x_i'β + e_i, i = 1, 2, …. Suppose that the random errors e_1, e_2, … are independent, distributed identically or not, and possess finite moments of order r, 1 ≤ r ≤ 2. This paper gives necessary and sufficient conditions on the design sequence {x_i} for weak consistency of the LS estimate of β.
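Since the abstract's formulas were lost in extraction, the following LaTeX restates the usual setup for this problem; it is a reconstruction of the standard formulation, not necessarily the paper's exact statement.

```latex
% Standard least-squares setup assumed here (the paper's exact condition
% is not recoverable from the garbled abstract).
\[
  Y_i = x_i^{\top}\beta + e_i, \qquad i = 1, 2, \dots,
\]
where $e_1, e_2, \dots$ are independent (not necessarily identically
distributed) with finite moments of order $r$, $1 \le r \le 2$. Writing
$S_n = \sum_{i=1}^{n} x_i x_i^{\top}$, the least squares estimate is
$\hat\beta_n = S_n^{-1}\sum_{i=1}^{n} x_i Y_i$, and the question is exactly
when $\hat\beta_n \to \beta$ in probability.
```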

10 citations


Journal ArticleDOI
TL;DR: In this article, the notion of a functional solution of the Euler equations for incompressible fluid flows was introduced, and it was shown that the functional solution can be constructed under very weak a priori estimates on approximate solution sequences of the equation.
Abstract: We consider the notion of a functional solution of the Euler equations for incompressible fluid flows. We show that a functional solution can be constructed under "very weak" a priori estimates on approximate solution sequences of the equation; an estimate uniform in L^1_loc together with weak consistency with the equation is sufficient to construct a solution. We prove that if we have an estimate uniform in L^2_loc available for the approximate solution sequence, then the structured functional solution just described becomes a measure-valued solution in the sense of DiPerna & Majda. We also show that a functional solution can be obtained from a measure-valued solution. We give an example showing that a much higher concentration of energy than in the case of measure-valued solutions is allowed by the approximation procedure of a functional solution.
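For orientation, the equations and the consistency notion involved can be stated as follows; these are standard formulations, and the paper's notation may differ.

```latex
% Incompressible Euler equations:
\[
  \partial_t u + \operatorname{div}(u \otimes u) + \nabla p = 0,
  \qquad \operatorname{div} u = 0 .
\]
% Weak consistency of an approximate solution sequence $u_n$: for every
% smooth, compactly supported, divergence-free test field $\varphi$,
\[
  \iint \big( u_n \cdot \partial_t \varphi
  + (u_n \otimes u_n) : \nabla \varphi \big) \, dx \, dt \longrightarrow 0 .
\]
```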

8 citations


01 Jan 1995
TL;DR: Although the reachability of objects in a GC-consistent cut is inherited from the underlying database, many other interesting properties of the cut are unrelated to those of the database; this weak consistency is related to the low cost of building GC-consistent cuts.
Abstract: We introduce a new method for concurrent garbage collection in object-oriented databases. For this purpose, we define a cut of a database to be a collection containing one or more copies of every page in the database; the copies may have been made at different times during the operation of the database. We define a class of cuts called GC-consistent cuts, and prove formally that a garbage collector can correctly determine which objects to delete by examining a GC-consistent cut of a database instead of the database itself. We show that GC-consistent cuts can synchronize the concurrent collector with the mutator, i.e. perform the task usually assigned to a write barrier: while a database is in operation, a GC-consistent cut of it can be built in an efficient and unobtrusive way, and, while still under construction, can be used by a garbage collector. We investigate other fundamental properties of GC-consistent cuts. We compare their consistency properties with those of causal cuts of distributed systems. We show that although the reachability of objects in a GC-consistent cut is inherited from the underlying database, many other interesting properties of the cut are unrelated to those of the database; this weak consistency is related to the low cost of building GC-consistent cuts.
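On the collector's side, examining a cut reduces to ordinary tracing over the copied pages. The sketch below is illustrative (the cut is modeled as a plain object graph); the substance of the paper lies in the GC-consistency conditions that make tracing such a cut safe, which this sketch does not capture.

```python
# Mark phase over a cut: reachability is computed from the page copies,
# which may have been taken at different times, rather than from the live
# database. Objects unreachable in a GC-consistent cut are garbage.

def reachable(roots, cut):
    """cut maps an object id to the ids it referenced when its page was copied."""
    marked, stack = set(), list(roots)
    while stack:
        obj = stack.pop()
        if obj not in marked:
            marked.add(obj)
            stack.extend(cut.get(obj, ()))
    return marked

cut = {"root": ["a"], "a": ["b"], "b": [], "orphan": ["b"]}
garbage = set(cut) - reachable({"root"}, cut)
print(garbage)   # {'orphan'}
```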

5 citations


Proceedings ArticleDOI
05 Jun 1995
TL;DR: This paper presents results from a detailed simulation analysis evaluating the benefits and losses in performance resulting from using synchronous versus asynchronous operations within HARP as well as comparing it with a traditional replication protocol.
Abstract: This paper evaluates the performance of HARP, a hierarchical replication protocol based on nodes organized into a logical hierarchy. The scheme is based on communication with nearby replicas and scales well to thousands of replicas. It proposes a new service interface that provides different levels of asynchrony, allowing strong consistency and weak consistency to be integrated into the same framework. Further, it provides the ability to offer different levels of staleness, by querying from different levels of the hierarchy. We present results from a detailed simulation analysis evaluating the benefits and losses in performance resulting from using synchronous versus asynchronous operations within HARP, as well as comparing it with a traditional replication protocol.
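The two ideas in the interface can be caricatured in a few lines. Everything in the sketch below is hypothetical (Node, write, read, the pending queue); the abstract does not describe HARP's actual API, only that it offers both levels of asynchrony and levels of staleness.

```python
# Toy hierarchy: writes propagate toward the root either synchronously
# (strong consistency) or via a queue drained later (weak consistency);
# reads pick a hierarchy level, trading locality against staleness.

pending = []            # (node, key, value) applied by a background flusher

def enqueue(node, key, value):
    pending.append((node, key, value))

class Node:
    def __init__(self, parent=None):
        self.parent, self.data = parent, {}

    def write(self, key, value, synchronous):
        self.data[key] = value
        if self.parent is not None:
            if synchronous:
                self.parent.write(key, value, True)   # strong: update now
            else:
                enqueue(self.parent, key, value)      # weak: update later

    def read(self, key, levels_up=0):
        node = self
        for _ in range(levels_up):
            node = node.parent or node
        return node.data.get(key)

root = Node()
leaf = Node(parent=root)
leaf.write("k", 1, synchronous=True)    # visible at every level at once
leaf.write("k", 2, synchronous=False)   # weak: the root lags behind
assert leaf.read("k") == 2 and root.read("k") == 1
```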


Posted Content
TL;DR: In this paper, the weak consistency of the quasi maximum likelihood estimator of the parameters of a vector autoregressive model with GARCH(1,q) errors is proved.
Abstract: We provide conditions that make it possible to prove the weak consistency of the quasi maximum likelihood estimator of the parameters of a vector autoregressive model with GARCH(1,q) errors. The BEKK representation of Engle and Kroner (1995) is used to parametrize the multivariate GARCH process.
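The BEKK representation referred to is from the cited Engle and Kroner (1995) paper; in its first-order form (shown here with a single term and p = q = 1, for concreteness), the conditional covariance H_t of the errors ε_t evolves as:

```latex
\[
  H_t = C^{\top} C
      + A^{\top} \varepsilon_{t-1} \varepsilon_{t-1}^{\top} A
      + B^{\top} H_{t-1} B ,
\]
```

a parametrization whose appeal is that H_t is positive semidefinite by construction (and positive definite whenever C has full rank).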

Journal ArticleDOI
TL;DR: An attempt is made to unify many algorithms for solving constraint satisfaction problems into a common framework that provides a direct way of practical implementation of the CSP model for real problem‐solving.
Abstract: The Constraint Satisfaction Problem (CSP) involves finding values for variables to satisfy a set of constraints. Consistency check is the key technique in solving this class of problems. Past research has developed many algorithms for such a purpose, e.g., node consistency, arc consistency, generalized node and arc consistency, and specific methods for checking specific constraints. In this article, an attempt is made to unify these algorithms into a common framework. This framework consists of two parts. The first part is a generic consistency check algorithm, which allows and encourages each individual constraint to be checked by its specific consistency methods. Such an approach provides a direct way of practically implementing the CSP model for real problem-solving. The second part is a general schema for describing the handling of each type of constraint. The schema characterizes various issues of constraint handling in constraint satisfaction, and provides a common language for expressing, discussing, and exchanging constraint handling techniques. © 1995 John Wiley & Sons, Inc.
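The first part of the framework can be sketched as a worklist loop that delegates filtering to each constraint's own method. The code below is a simplified illustration under assumed interfaces (propagate, revise, LessThan are hypothetical names), not the article's actual schema.

```python
# Generic propagation: each constraint filters domains with its own revise()
# method; constraints sharing a narrowed variable are re-queued until no
# domain changes. Specific constraints plug in specialized checks.

def propagate(domains, constraints):
    """Each constraint has .vars and .revise(domains) -> set of narrowed vars."""
    queue = list(constraints)
    while queue:
        c = queue.pop()
        changed = c.revise(domains)            # constraint-specific filtering
        if any(not domains[v] for v in changed):
            return False                       # a domain wiped out
        queue.extend(c2 for c2 in constraints
                     if c2 is not c and changed & set(c2.vars)
                     and c2 not in queue)
    return True

class LessThan:                                # one specific constraint
    def __init__(self, x, y):
        self.vars = (x, y)
    def revise(self, domains):
        x, y = self.vars
        dx = {a for a in domains[x] if any(a < b for b in domains[y])}
        dy = {b for b in domains[y] if any(a < b for a in domains[x])}
        changed = {v for v, old, new in ((x, domains[x], dx), (y, domains[y], dy))
                   if old != new}
        domains[x], domains[y] = dx, dy
        return changed

domains = {"x": {1, 2, 3}, "y": {1, 2, 3}}
print(propagate(domains, [LessThan("x", "y")]), domains)
# True {'x': {1, 2}, 'y': {2, 3}}
```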

Book ChapterDOI
03 Oct 1995
TL;DR: This paper presents a domain filtering procedure that tightly combines the use of arc and path-consistency, each one helping the other to achieve further or faster work.
Abstract: The resolution of constraint satisfaction problems heavily relies on the use of local consistency enforcement procedures which are used to filter the problems before or during their resolution. While procedures based on arc-consistency are almost a standard, path-consistency checking is often neglected because it is costly and it filters out pairs of assignments instead of single assignments. This paper presents a domain filtering procedure that tightly combines the use of arc and path-consistency, each one helping the other to achieve further or faster work. We show, on an experimental evaluation, that the proposed procedure offers a considerable filtering power at relatively low cost.