
Showing papers on "Weak consistency published in 2016"


Journal ArticleDOI
TL;DR: In this article, the authors provide a general minimax theory for community detection for the stochastic block model (SBM) and show that the minimax rates are exponential, different from polynomial rates we often see in statistical literature.
Abstract: Recently, network analysis has gained more and more attention in statistics, as well as in computer science, probability and applied mathematics. Community detection for the stochastic block model (SBM) is probably the most studied topic in network analysis. Many methodologies have been proposed. Some beautiful and significant phase transition results are obtained in various settings. In this paper, we provide a general minimax theory for community detection. It gives minimax rates of the mis-match ratio for a wide range of settings, including homogeneous and inhomogeneous SBMs, dense and sparse networks, and finite and growing numbers of communities. The minimax rates are exponential, different from the polynomial rates we often see in the statistical literature. An immediate consequence of the result is the establishment of threshold phenomena for strong consistency (exact recovery) as well as weak consistency (partial recovery). We obtain the upper bound by a range of penalized likelihood-type approaches. The lower bound is achieved by a novel reduction from a global mis-match ratio to a local clustering problem for one node through an exchangeability property.
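For orientation, the contrast between exponential and polynomial rates can be sketched as follows for the balanced two-community case; the constants below are recalled from this line of work rather than quoted from the abstract, so treat the display as a schematic.

```latex
% Schematic only: exponential minimax rate for the mismatch ratio, as opposed
% to a familiar polynomial rate n^{-alpha}. Constants follow the balanced
% two-community case in this line of work and are not quoted from the abstract.
\[
  \inf_{\hat{\sigma}}\ \sup_{\mathrm{SBM}(p,q)}
  \mathbb{E}\,\ell(\hat{\sigma},\sigma)
  \;\asymp\; \exp\!\Bigl(-(1+o(1))\,\frac{nI}{2}\Bigr)
  \qquad\text{with}\qquad
  I \;=\; -2\log\!\bigl(\sqrt{pq}+\sqrt{(1-p)(1-q)}\bigr),
\]
% where \ell is the mismatch ratio, p and q are the within- and between-community
% edge probabilities, and I is the order-1/2 Renyi divergence between
% Bernoulli(p) and Bernoulli(q).
```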

189 citations


Proceedings ArticleDOI
11 Jan 2016
TL;DR: This work proposes the first proof rule for establishing that a particular choice of consistency guarantees for various operations on a replicated database is enough to ensure the preservation of a given data integrity invariant.
Abstract: Large-scale distributed systems often rely on replicated databases that allow a programmer to request different data consistency guarantees for different operations, and thereby control their performance. Using such databases is far from trivial: requesting stronger consistency in too many places may hurt performance, and requesting it in too few places may violate correctness. To help programmers in this task, we propose the first proof rule for establishing that a particular choice of consistency guarantees for various operations on a replicated database is enough to ensure the preservation of a given data integrity invariant. Our rule is modular: it allows reasoning about the behaviour of every operation separately under some assumption on the behaviour of other operations. This leads to simple reasoning, which we have automated in an SMT-based tool. We present a nontrivial proof of soundness of our rule and illustrate its use on several examples.
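A minimal sketch of the kind of per-operation choice the rule is meant to justify, using an illustrative bank-account invariant rather than the paper's formalism; the "level" labels are hypothetical.

```python
# Illustrative sketch (not the paper's formalism): a replicated bank account
# with the integrity invariant balance >= 0. Deposits commute and preserve the
# invariant even under weak consistency; withdrawals are marked as needing a
# stronger guarantee, because two concurrent withdrawals could jointly drive
# the balance negative.

def invariant(balance):
    return balance >= 0

OPERATIONS = {
    # name: (effect on the replicated state, requested consistency level)
    "deposit":  (lambda bal, amt: bal + amt, "weak"),
    "withdraw": (lambda bal, amt: bal - amt, "strong"),  # needs coordination
}

def apply_op(balance, name, amount):
    effect, level = OPERATIONS[name]
    new_balance = effect(balance, amount)
    # Per-operation proof obligation: assuming the invariant held before, and
    # under assumptions on what concurrent operations may do, it holds after.
    assert invariant(new_balance), f"{name} at level '{level}' broke the invariant"
    return new_balance

balance = 100
balance = apply_op(balance, "deposit", 50)    # 150
balance = apply_op(balance, "withdraw", 120)  # 30, invariant preserved
print(balance)
```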

136 citations


Journal ArticleDOI
TL;DR: This article provides a structured and comprehensive overview of different consistency notions that appeared in distributed systems, and in particular storage systems research, in the last four decades, and defines precisely many of these, in particular where the previous definitions were ambiguous.
Abstract: Over the years, different meanings have been associated with the word consistency in the distributed systems community. While in the ’80s “consistency” typically meant strong consistency, later defined also as linearizability, in recent years, with the advent of highly available and scalable systems, the notion of “consistency” has been at the same time both weakened and blurred. In this article, we aim to fill the void in the literature by providing a structured and comprehensive overview of different consistency notions that appeared in distributed systems, and in particular storage systems research, in the last four decades. We overview more than 50 different consistency notions, ranging from linearizability to eventual and weak consistency, defining precisely many of these, in particular where the previous definitions were ambiguous. We further provide a partial order among different consistency predicates, ordering them by their semantic “strength,” which we believe will be useful in future research. Finally, we map the consistency semantics to different practical systems and research prototypes. The scope of this article is restricted to non-transactional semantics, that is, those that apply to single storage object operations. As such, our article complements the existing surveys done in the context of transactional, database consistency semantics.

118 citations


Journal ArticleDOI
TL;DR: The new average-case consistency measure of an interval-valued reciprocal preference relation is determined as the average consistency degree of all reciprocal preference relations associated to the interval-valued reciprocal preference relation, and an average-case consistency improving method is developed.
Abstract: Measuring consistency of preferences is very important in decision-making. This paper addresses this key issue for interval-valued reciprocal preference relations. Existing studies implement one of two different measures: the "classical" consistency measure, and the "boundary" consistency measure. The classical consistency degree of an interval-valued reciprocal preference relation is determined by its associated reciprocal preference relation with highest consistency degree, while the boundary consistency degree is determined by its two associated boundary reciprocal preference relations. However, the consistency index of an interval-valued reciprocal preference relation should be determined by taking into account all its associated reciprocal preference relations. Motivated by this, a new consistency measure for interval-valued reciprocal preference relations, the average-case consistency measure, is suggested and introduced. The new average-case consistency measure of an interval-valued reciprocal preference relation is determined as the average consistency degree of all reciprocal preference relations associated to the interval-valued reciprocal preference relation. Furthermore, the analysis and comparison of the different consistency measure internal mechanisms is used to justify the validity of the average-case consistency measure. Finally, an average-case consistency improving method which aims to obtain a modified interval-valued reciprocal preference relation with a required average consistency degree is developed.
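Schematically, and in generic notation rather than the paper's, the three measures contrasted in the abstract can be written as follows.

```latex
% Generic notation (not the paper's): \tilde{R} is an interval-valued reciprocal
% preference relation, \Phi(\tilde{R}) the set of reciprocal preference relations
% associated with it, and \underline{R}, \overline{R} its boundary relations.
\[
  CI_{\mathrm{classical}}(\tilde{R}) \;=\; \max_{R \in \Phi(\tilde{R})} CI(R),
  \qquad
  CI_{\mathrm{boundary}}(\tilde{R}) \;=\; f\bigl(CI(\underline{R}),\,CI(\overline{R})\bigr),
\]
\[
  CI_{\mathrm{average}}(\tilde{R}) \;=\;
  \frac{1}{\lvert\Phi(\tilde{R})\rvert}\sum_{R \in \Phi(\tilde{R})} CI(R),
\]
% with the sum replaced by an integral mean when \Phi(\tilde{R}) is a continuum.
```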

83 citations


Journal ArticleDOI
01 May 2016
TL;DR: The consistency of InLPRs and algorithms for completing them are discussed by interacting with the experts, and these algorithms can serve as assistant tools for the experts to present their preferences.
Abstract: The incomplete LPRs are improved by interacting with experts. An interactive algorithm is presented to reach weak consistency. Missing entries are estimated by a consistency-based interactive algorithm. A self-adaptive evolution algorithm is developed to obtain consistent LPRs. Incomplete linguistic preference relations (InLPRs) are generally inevitable in group decision making problems due to several reasons. Two vital issues of InLPRs are the consistency and the estimation of missing entries. The initial InLPR may not be consistent, which means that some of its entries do not reflect the real opinions of the experts accurately. Thus, there are deviations between some initially provided values and real opinions. Therefore, it is valuable to help the providers recognize and repair the deviations. In this paper, we discuss the consistency and the completing algorithms of InLPRs by interacting with the experts. Serving as the minimum condition of consistency, the weak consistency of InLPRs is defined and a weak consistency reaching algorithm is designed to guarantee the logical correctness of InLPRs. Then two distinct completing algorithms are presented to estimate the missing entries. The former not only estimates all possible linguistic terms and represents them by extended hesitant fuzzy linguistic term sets but also keeps weak consistency during the computing procedures. The latter can automatically revise the existing entries using the new opinions supplemented by the experts during interactions. All the proposed algorithms interact with the experts to elicit and mine their actual opinions more accurately. A real case study is also presented to clarify the advantages of our proposal. Moreover, these algorithms can serve as assistant tools for the experts to present their preferences.
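A small sketch of an ordinal reading of weak consistency for an incomplete relation, assuming a linguistic scale indexed by integers; this illustrates the flavour of such a check and is not the paper's exact definition or algorithm.

```python
# Illustrative sketch, not the paper's definition: a "weak consistency" check in
# the ordinal sense -- if alternative i is preferred to j and j to k, then i
# should be preferred to k. Entries are indices into a linguistic scale
# s_0 < s_1 < ... < s_g, with the middle term s_{g//2} meaning indifference;
# None marks a missing entry of the incomplete relation.

def weakly_consistent(P, g):
    mid = g // 2          # index of the indifference term
    n = len(P)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if None in (P[i][j], P[j][k], P[i][k]):
                    continue  # incomplete entries are skipped, not judged
                if P[i][j] > mid and P[j][k] > mid and P[i][k] <= mid:
                    return False, (i, j, k)   # ordinal violation found
    return True, None

# Scale with g = 8 (terms s_0 .. s_8, s_4 = indifference); one missing entry.
P = [
    [4, 6, None],
    [2, 4, 7],
    [None, 1, 4],
]
print(weakly_consistent(P, 8))   # (True, None)
```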

61 citations


Journal ArticleDOI
TL;DR: An lp distance-based method is proposed to formulate the underlying optimization problems as goal programming (GP) models for ordinal and additive consistency problems, respectively, and the proposed model can preserve the initial preference information as much as possible.
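Schematically, and in generic notation that is not taken from the paper, such a goal-programming repair has the following shape.

```latex
% Schematic shape only (generic notation, not the paper's model): repair an
% inconsistent preference relation R = (r_{ij}) into a consistent \bar{R} while
% staying as close as possible to the initial information in the l_p sense.
\[
  \min_{\bar{R}\ \mathrm{consistent}}\
  \Bigl(\sum_{i<j}\bigl\lvert \bar{r}_{ij} - r_{ij}\bigr\rvert^{\,p}\Bigr)^{1/p},
\]
% which goal programming rewrites with non-negative deviation variables
% d_{ij}^{+}, d_{ij}^{-} and constraints \bar{r}_{ij} - r_{ij} = d_{ij}^{+} - d_{ij}^{-}.
```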

55 citations


Journal ArticleDOI
TL;DR: An approach to group decision making with interval linguistic preference relations is developed, which is based on the consistency and consensus analysis, and the associated numerical examples are offered to illustrate the application of the procedure.
Abstract: Preference relations are a powerful tool to address decision-making problems. In some situations, because of the complexity of decision-making problems and the inherent uncertainty, the decision makers cannot express their preferences by using numerical values. Interval linguistic preference relations, which are more reliable and informative for the decision-makers’ preferences, are a good choice to cope with this issue. Just as with the other types of preference relations, the consistency and consensus analysis is very important to ensure a reasonable ranking order when using interval linguistic preference relations. Considering this situation, this paper introduces a consistency concept for interval linguistic preference relations. To measure the consistency of interval linguistic preference relations, a consistency measure is defined. Then, a consistency-based programming model is built, by which the consistent linguistic preference relations with respect to each object can be obtained. To cope with the inconsistency case, two models for deriving the adjusted consistent linguistic preference relations are constructed. Then, a consistency-based programming model to estimate the missing values is built. After that, we present a group consensus index and discuss some of its desirable properties. Furthermore, a group consensus-based model to determine the weights of the decision makers with respect to each object is established. Finally, an approach to group decision making with interval linguistic preference relations is developed, which is based on the consistency and consensus analysis. Meanwhile, the associated numerical examples are offered to illustrate the application of the procedure.

54 citations


Proceedings ArticleDOI
01 Jan 2016
TL;DR: Criteria are presented to check whether applications that rely on a database providing only weak consistency are robust, i.e., behave as if they used a database providing serializability.
Abstract: To achieve scalability, modern Internet services often rely on distributed databases with consistency models for transactions weaker than serializability. At present, application programmers often lack techniques to ensure that the weakness of these consistency models does not violate application correctness. We present criteria to check whether applications that rely on a database providing only weak consistency are robust, i.e., behave as if they used a database providing serializability. When this is the case, the application programmer can reap the scalability benefits of weak consistency while being able to easily check the desired correctness properties. Our results handle systematically and uniformly several recently proposed weak consistency models, as well as a mechanism for strengthening consistency in parts of an application.
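A toy illustration of the kind of non-robust behaviour such criteria are meant to detect, using the classic write-skew pattern; the code is a hand-rolled sketch, not the paper's analysis.

```python
# Illustrative sketch (not the paper's criteria): the classic "write skew"
# pattern that is correct under serializability but can violate the invariant
# x + y >= 0 under weaker models such as snapshot isolation, where both
# transactions read the same initial snapshot before writing disjoint items.

def run_serial(x, y):
    # One possible serial order: T1 then T2; the second re-reads current state.
    if x + y >= 100: x -= 100          # T1
    if x + y >= 100: y -= 100          # T2
    return x, y

def run_snapshot(x, y):
    # Both transactions read the initial snapshot, then write disjoint items.
    t1_ok = x + y >= 100
    t2_ok = x + y >= 100
    if t1_ok: x -= 100
    if t2_ok: y -= 100
    return x, y

print(run_serial(60, 60))    # (-40, 60)  -> invariant x + y >= 0 holds
print(run_snapshot(60, 60))  # (-40, -40) -> invariant violated: not robust
```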

47 citations


Journal ArticleDOI
01 Jul 2016
TL;DR: A new distributed graph database, called Weaver, is introduced, which enables efficient, transactional graph analyses as well as strictly serializable ACID transactions on dynamic graphs, and a novel request ordering mechanism called refinable timestamps.
Abstract: Graph databases have become a common infrastructure component. Yet existing systems either operate on offline snapshots, provide weak consistency guarantees, or use expensive concurrency control techniques that limit performance. In this paper, we introduce a new distributed graph database, called Weaver, which enables efficient, transactional graph analyses as well as strictly serializable ACID transactions on dynamic graphs. The key insight that allows Weaver to combine strict serializability with horizontal scalability and high performance is a novel request ordering mechanism called refinable timestamps. This technique couples coarse-grained vector timestamps with a fine-grained timeline oracle to pay the overhead of strong consistency only when needed. Experiments show that Weaver enables a Bitcoin blockchain explorer that is 8x faster than Blockchain.info, and achieves 10.9x higher throughput than the Titan graph database on social network workloads and 4x lower latency than GraphLab on offline graph traversal workloads.
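A rough sketch of the refinable-timestamps idea as described in the abstract; the names and data structures below are illustrative and do not reflect Weaver's implementation.

```python
# Illustrative sketch of the idea only (not Weaver's implementation): order
# requests by coarse vector timestamps when possible, and consult a fine-grained
# "timeline oracle" only for the pairs the vector timestamps leave concurrent,
# so the cost of strong ordering is paid only when actually needed.

def vector_compare(a, b):
    """Return -1, 0, 1, or None (concurrent) for two vector timestamps."""
    le = all(x <= y for x, y in zip(a, b))
    ge = all(x >= y for x, y in zip(a, b))
    if le and not ge: return -1
    if ge and not le: return 1
    return 0 if (le and ge) else None

class TimelineOracle:
    """Hypothetical tie-breaker: assigns a total order lazily, on demand."""
    def __init__(self):
        self.seq = {}
    def order(self, req_id):
        return self.seq.setdefault(req_id, len(self.seq))

oracle = TimelineOracle()

def happens_before(req_a, req_b):
    cmp = vector_compare(req_a["vt"], req_b["vt"])
    if cmp is not None:
        return cmp < 0                      # cheap path: vector timestamps decide
    return oracle.order(req_a["id"]) < oracle.order(req_b["id"])  # refine

r1 = {"id": "tx1", "vt": (1, 0)}
r2 = {"id": "tx2", "vt": (0, 1)}   # concurrent with r1 -> oracle decides
print(happens_before(r1, r2))      # True: tx1 asked the oracle first
```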

45 citations


Journal ArticleDOI
TL;DR: This work proposes an extension of p-boxes to cover imprecise evaluations of pairs of random numbers, terms them bivariate p-boxes, and analyzes their rather weak consistency properties, since they are at best (but generally not) equivalent to 2-coherence.
Abstract: A p-box is a simple generalization of a distribution function, useful to study a random number in the presence of imprecision. We propose an extension of p-boxes to cover imprecise evaluations of pairs of random numbers and term them bivariate p-boxes. We analyze their rather weak consistency properties, since they are at best (but generally not) equivalent to 2-coherence. We therefore focus on the relevant subclass of coherent p-boxes, corresponding to coherent lower probabilities on special domains. Several properties of coherent p-boxes are investigated and compared with those of (one-dimensional) p-boxes or of bivariate distribution functions.
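For reference, the standard univariate p-box and the bivariate extension described in the abstract can be stated schematically (generic notation, not necessarily the paper's).

```latex
% Standard univariate notion and the bivariate extension, stated schematically:
% F is the unknown (joint) distribution function being bracketed.
\[
  \text{p-box:}\quad \underline{F}(x) \;\le\; F(x) \;\le\; \overline{F}(x)
  \quad \forall x \in \mathbb{R},
\]
\[
  \text{bivariate p-box:}\quad \underline{F}(x,y) \;\le\; F(x,y) \;\le\; \overline{F}(x,y)
  \quad \forall (x,y) \in \mathbb{R}^2,
  \qquad F(x,y) = P(X \le x,\, Y \le y).
\]
```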

40 citations


Journal ArticleDOI
TL;DR: In this paper, a general class of integrated depths for functions is considered, and a comprehensive study of its most important theoretical properties, including measurability and consistency, is given.
Abstract: Several depths suitable for infinite-dimensional functional data that are available in the literature are of the form of an integral of a finite-dimensional depth function. These functionals are characterized by projecting functions into low-dimensional spaces, taking finite-dimensional depths of the projected quantities, and finally integrating these projected marginal depths over a preset collection of projections. In this paper, a general class of integrated depths for functions is considered. Several depths for functional data proposed in the literature during the last decades are members of this general class. A comprehensive study of its most important theoretical properties, including measurability and consistency, is given. It is shown that many, but not all, properties of the integrated depth are shared with the finite-dimensional depth that constitutes its building block. Some pending measurability issues connected with all integrated depth functionals are resolved, a broad new notion of symmetry for functional data is proposed, and difficulties with respect to consistency results are identified. A general universal consistency result for the sample depth version, and for the generalized median, for integrated depth for functions is derived.
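The general shape of an integrated depth, following the abstract's description and written in generic notation.

```latex
% General shape of an integrated depth: project, take a finite-dimensional
% depth, integrate over a preset collection of projections.
\[
  D(x;P) \;=\; \int_{\mathcal{U}} D_k\bigl(\pi_u(x);\,\pi_u(P)\bigr)\,\mathrm{d}\mu(u),
\]
% where x is a function, P the distribution of the functional data, \pi_u the
% projection indexed by u \in \mathcal{U}, D_k a finite-dimensional depth, and
% \mu a measure on \mathcal{U}.
```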

Proceedings ArticleDOI
18 Apr 2016
TL;DR: This work proposes the first static analysis tool for proving integrity invariants of applications using databases with hybrid consistency models, which allows a programmer to find minimal consistency guarantees sufficient for application correctness.
Abstract: Designers of a replicated database face a vexing choice between strong consistency, which ensures certain application invariants but is slow and fragile, and asynchronous replication, which is highly available and responsive, but exposes the programmer to unfamiliar behaviours. To bypass this conundrum, recent research has studied hybrid consistency models, in which updates are asynchronous by default, but synchronisation is available upon request. To help programmers exploit hybrid consistency, we propose the first static analysis tool for proving integrity invariants of applications using databases with hybrid consistency models. This allows a programmer to find minimal consistency guarantees sufficient for application correctness.

Proceedings ArticleDOI
05 Oct 2016
TL;DR: A new programming model for distributed data is proposed that makes consistency properties explicit and uses a type system to enforce consistency safety and is implemented in Scala on top of an existing datastore, Cassandra.
Abstract: Distributed applications and web services, such as online stores or social networks, are expected to be scalable, available, responsive, and fault-tolerant. To meet these steep requirements in the face of high round-trip latencies, network partitions, server failures, and load spikes, applications use eventually consistent datastores that allow them to weaken the consistency of some data. However, making this transition is highly error-prone because relaxed consistency models are notoriously difficult to understand and test. In this work, we propose a new programming model for distributed data that makes consistency properties explicit and uses a type system to enforce consistency safety. With the Inconsistent, Performance-bound, Approximate (IPA) storage system, programmers specify performance targets and correctness requirements as constraints on persistent data structures and handle uncertainty about the result of datastore reads using new consistency types. We implement a prototype of this model in Scala on top of an existing datastore, Cassandra, and use it to make performance/correctness tradeoffs in two applications: a ticket sales service and a Twitter clone. Our evaluation shows that IPA prevents consistency-based programming errors and adapts consistency automatically in response to changing network conditions, performing comparably to weak consistency and 2-10× faster than strong consistency.
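A Python sketch of the consistency-type idea; the real IPA system is a Scala library over Cassandra, so the names below are illustrative rather than its API.

```python
# Python sketch of the idea only (illustrative names, not the IPA API): reads
# return a value wrapped in a "consistency type", so the caller must acknowledge
# the uncertainty of a weakly consistent read before using the result.

from dataclasses import dataclass

@dataclass
class Consistent:          # value obtained with a strong read
    value: int

@dataclass
class Interval:            # weakly consistent read: value known only to a range
    low: int
    high: int

def remaining_tickets(latency_budget_ms: int):
    # Pretend datastore: a tight latency budget forces a weak read.
    return Consistent(42) if latency_budget_ms >= 50 else Interval(40, 45)

def can_sell(result, requested: int) -> bool:
    if isinstance(result, Consistent):
        return requested <= result.value
    # With a weak read the code must decide how to treat the uncertainty:
    return requested <= result.low        # conservative choice

print(can_sell(remaining_tickets(100), 41))  # True  (strong read: 41 <= 42)
print(can_sell(remaining_tickets(10), 41))   # False (weak read: 41 > 40)
```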

Journal ArticleDOI
TL;DR: The Big Five personality factors often inspire the development and use of different inventories as discussed by the authors, and this practice rests on the vital assumption that different factors are different and different people have different personalities.
Abstract: Prominent theoretical constructs such as the Big Five personality factors often inspire the development and use of different inventories. This practice rests on the vital assumption that different ...

Journal ArticleDOI
TL;DR: This paper investigates the effects of -inconsistency of a -transitive PCM on wm and provides the notion of weak -consistency; it is weaker than -consistency and stronger than -transitivity, and ensures that vectors associated with a PCM are reliable for assigning a preference order on the set of related decision elements.

Proceedings ArticleDOI
05 Jul 2016
TL;DR: In this article, a new consistency notion, Singleton Linear Arc Consistency (SLAC), is introduced, which is weaker than SAC and solves all the problems of bounded width.
Abstract: The characterization of all the Constraint Satisfaction Problems of bounded width, proposed by Feder and Vardi [SICOMP'98], was confirmed in [Bulatov'09] and independently in [FOCS'09, JACM'14]. Both proofs are based on the (2,3)-consistency (using Prague consistency in [FOCS'09], directly in [Bulatov'09]) which is costly to verify. We introduce a new consistency notion, Singleton Linear Arc Consistency (SLAC), and show that it solves the same family of problems. SLAC is weaker than Singleton Arc Consistency (SAC) and thus the result answers the question from [JLC'13] by showing that SAC solves all the problems of bounded width. At the same time the problem of verifying weaker consistency (even SAC) offers significant computational advantages over the problem of verifying (2,3)-consistency which improves the algorithms solving the CSPs of bounded width.
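For context, a compact sketch of Singleton Arc Consistency (SAC), the stronger notion that SLAC weakens; SLAC's cheaper linear variant of the inner consistency check is not reproduced here.

```python
# Illustrative sketch of Singleton Arc Consistency: a value v for variable x
# survives only if the problem restricted to x = v can still be made arc
# consistent. Constraints map a directed pair of variables to the set of
# allowed value pairs.

from copy import deepcopy

def revise(domains, constraints, x, y):
    """Remove values of x with no support on y. Returns True if changed."""
    allowed = constraints.get((x, y))
    if allowed is None:
        return False
    removed = False
    for a in list(domains[x]):
        if not any((a, b) in allowed for b in domains[y]):
            domains[x].discard(a)
            removed = True
    return removed

def ac3(domains, constraints):
    """Enforce arc consistency; return False if some domain becomes empty."""
    queue = list(constraints.keys())
    while queue:
        x, y = queue.pop()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False
            queue.extend((z, x) for (z, w) in constraints if w == x and z != y)
    return True

def sac(domains, constraints):
    """Enforce singleton arc consistency by probing each value in turn."""
    changed = True
    while changed:
        changed = False
        for x in domains:
            for v in list(domains[x]):
                probe = deepcopy(domains)
                probe[x] = {v}
                if not ac3(probe, constraints):
                    domains[x].discard(v)     # v cannot appear in any solution
                    changed = True
    return domains

# Tiny example: x < y < z over {1, 2, 3}.
doms = {v: {1, 2, 3} for v in "xyz"}
less = {(a, b) for a in range(1, 4) for b in range(1, 4) if a < b}
greater = {(a, b) for a in range(1, 4) for b in range(1, 4) if a > b}
cons = {("x", "y"): less, ("y", "x"): greater,
        ("y", "z"): less, ("z", "y"): greater}
print(sac(doms, cons))   # {'x': {1}, 'y': {2}, 'z': {3}}
```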

Journal ArticleDOI
TL;DR: This work attempts to solve the problem of determining optimal partitioning schemes by automating the partitioning process, choosing the correct transactional primitive, and routing transactions appropriately.
Abstract: Modern transactional processing systems need to be fast and scalable, but this means many such systems settled for weak consistency models. It is however possible to achieve all of strong consistency, high scalability and high performance, by using fine-grained partitions and light-weight concurrency control that avoids superfluous synchronization and other overheads such as lock management. Independent transactions are one such mechanism, that rely on good partitions and appropriately defined transactions. On the downside, it is not usually straightforward to determine optimal partitioning schemes, especially when dealing with non-trivial amounts of data. Our work attempts to solve this problem by automating the partitioning process, choosing the correct transactional primitive, and routing transactions appropriately.

Proceedings ArticleDOI
27 Feb 2016
TL;DR: This paper presents a new approach to define causal consistency for any abstract data type based on sequential specifications and explores, formalizes and studies the differences between three variations of causal consistency and highlights them in the light of PRAM, eventual consistency and sequential consistency.
Abstract: In distributed systems where strong consistency is costly when not impossible, causal consistency provides a valuable abstraction to represent program executions as partial orders. In addition to the sequential program order of each computing entity, causal order also contains the semantic links between the events that affect the shared objects -- messages emission and reception in a communication channel, reads and writes on a shared register. Usual approaches based on semantic links are very difficult to adapt to other data types such as queues or counters because they require a specific analysis of causal dependencies for each data type. This paper presents a new approach to define causal consistency for any abstract data type based on sequential specifications. It explores, formalizes and studies the differences between three variations of causal consistency and highlights them in the light of PRAM, eventual consistency and sequential consistency: weak causal consistency, that captures the notion of causality preservation when focusing on convergence; causal convergence that mixes weak causal consistency and convergence; and causal consistency, that coincides with causal memory when applied to shared memory.

Journal ArticleDOI
TL;DR: Weak consistency as discussed by the authors is a condition weaker than consistency and stronger than transitivity and ensures that vectors associated with a matrix, by means of a strictly increasing synthesis functional, provide a preference order, on the related set of decision elements, equal to the actual ranking.
Abstract: Consistency and transitivity are important and leading research topics in the study of decision-making in terms of pairwise comparison matrices. In this paper, we search for conditions that, in case of inconsistency, guarantee ordinal compatibility between ordinal ranking (actual ranking) derived from a transitive matrix and cardinal rankings provided by the most usual priority vectors proposed in the scientific literature. We provide the notion of weak consistency; it is a condition weaker than consistency and stronger than transitivity and ensures that vectors associated with a matrix, by means of a strictly increasing synthesis functional, provide a preference order, on the related set of decision elements, equal to the actual ranking. This notion extends, to the case in which the decision-maker can be indifferent between two or more alternatives/criteria, weak consistency introduced in previous papers under constraint of no indifference. Finally, we introduce an order relation on the rows of the matrix, that is, a simple order if and only if weak consistency is satisfied; this simple order allows us to easily determine the actual ranking on the set of decision elements. Copyright © 2015 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: SPEL++ is proposed, a dual-consistency cache coherence protocol that supports two execution modes: a traditional sequential-consistent protocol and a protocol that provides weak consistency (or sequential consistency for data-race-free).
Abstract: Traditional cache coherence protocols manage all memory accesses equally and ensure the strongest memory model, namely, sequential consistency. Recent cache coherence protocols based on self-invalidation advocate for the model sequential consistency for data-race-free, which enables powerful optimizations for race-free code. However, for racy code these cache coherence protocols provide sub-optimal performance compared to traditional protocols. This paper proposes SPEL++, a dual-consistency cache coherence protocol that supports two execution modes: a traditional sequential-consistent protocol and a protocol that provides weak consistency (or sequential consistency for data-race-free). SPEL++ exploits a static-dynamic hybrid classification of memory accesses based on (i) a compile-time identification of extended data-race-free code regions for OpenMP applications and (ii) a runtime classification of accesses based on the operating system's memory page management. By executing racy code under the sequential-consistent protocol and race-free code under the cache coherence protocol that provides sequential consistency for data-race-free, the end result is an efficient execution of the applications while still providing sequential consistency. Compared to a traditional protocol, we show improvements in performance from 19 to 38 percent and reductions in energy consumption from 47 to 53 percent, on average for different benchmark suites, on a 64-core chip multiprocessor.

Posted Content
TL;DR: The syntax and semantics of the cat language, a domain specific language for describing consistency properties of parallel/distributed programs, are provided.
Abstract: We provide the syntax and semantics of the cat language, a domain specific language to describe consistency properties of parallel/distributed programs. The language is implemented in the herd7 tool (this http URL).

Journal Article
TL;DR: RedBlue Consistency enables blue operations to be fast (and weakly consistent) while the remaining red operations are strongly consistent (and slow), and Explicit Consistency further increases the space of operations that can be fast by restricting the concurrent execution of only the operations that can break application-defined invariants.
Abstract: Geo-replicated storage systems are at the core of current Internet services. Unfortunately, there exists a fundamental tension between consistency and performance for offering scalable geo-replication. Weakening consistency semantics leads to less coordination and consequently a good user experience, but it may introduce anomalies such as state divergence and invariant violation. In contrast, maintaining stronger consistency precludes anomalies but requires more coordination. This paper discusses two main contributions to address this tension. First, RedBlue Consistency enables blue operations to be fast (and weakly consistent) while the remaining red operations are strongly consistent (and slow). We identify sufficient conditions for determining when operations can be blue or must be red. Second, Explicit Consistency further increases the space of operations that can be fast by restricting the concurrent execution of only the operations that can break application-defined invariants. We further show how to allow operations to complete locally in the common case, by relying on a reservation system that moves coordination off the critical path of operation execution.
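A toy sketch of the blue/red labelling rule as it is commonly summarised for RedBlue consistency (an operation may be blue only if it commutes with every other operation and preserves the invariant); this is an illustration, not the paper's formal development.

```python
# Toy sketch, illustrative only: label an operation blue (fast, weakly
# consistent) if it commutes with every other operation and preserves the
# application invariant from every invariant-satisfying state we test;
# otherwise label it red (strongly consistent, totally ordered).

def commutes(op_a, op_b, states):
    return all(op_b(op_a(s)) == op_a(op_b(s)) for s in states)

def invariant_safe(op, states, invariant):
    return all(invariant(op(s)) for s in states if invariant(s))

def label(ops, states, invariant):
    labels = {}
    for name, op in ops.items():
        blue = invariant_safe(op, states, invariant) and all(
            commutes(op, other, states) for other in ops.values())
        labels[name] = "blue" if blue else "red"
    return labels

# Bank-account example: invariant balance >= 0, states explored up to 200.
ops = {
    "deposit_10":  lambda s: s + 10,   # commutes, invariant-safe -> blue
    "withdraw_10": lambda s: s - 10,   # commutes, but can go negative -> red
}
print(label(ops, range(0, 201), lambda s: s >= 0))
# {'deposit_10': 'blue', 'withdraw_10': 'red'}
```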

Journal ArticleDOI
TL;DR: In this article, the authors considered the independent and identically distributed (i.i.d.) situation and provided independent proofs of weak consistency and asymptotic normality of the maximum likelihood estimators (MLE) of the hyper-parameters of their random effects parameters.

Proceedings ArticleDOI
01 Jan 2016
TL;DR: In this article, the authors propose a new classification along three dimensions related to: a total order of writes, a causal order of reads, and transactional composition of multiple operations.
Abstract: Comparisons of different consistency models often try to place them in a linear strong-to-weak order. However this view is clearly inadequate, since it is well known, for instance, that Snapshot Isolation and Serialisability are incomparable. In the interest of a better understanding, we propose a new classification, along three dimensions, related to: a total order of writes, a causal order of reads, and transactional composition of multiple operations. A model may be stronger than another on one dimension and weaker on another. We believe that this new classification scheme is both scientifically sound and has good explicative value. The current paper presents the three-dimensional design space intuitively.

Journal ArticleDOI
TL;DR: A criterion for acceptable consistency of PCM is introduced, which is independent of the scale and can be intuitively interpreted, and a multiplicative alo-group based hierarchical decision model is proposed, which has the property of preserving rank.
Abstract: Pairwise comparison matrix (PCM) is a popular technique used in multi-criteria decision making. The abelian linearly ordered group (alo-group) is a powerful tool for the discussion of PCMs. In this article, a criterion for acceptable consistency of PCM is introduced, which is independent of the scale and can be intuitively interpreted. The relation of the introduced criterion with the weak consistency is investigated. Then, a multiplicative alo-group based hierarchical decision model is proposed. The following approaches are included: (1) the introduced criterion for acceptable consistency is used to check whether or not a PCM is acceptable; (2) the row’s geometric mean method is used for deriving the local priorities of a multiplicative PCM; (3) a Hierarchy Composition Rule derived from the weighted mean is used for computing the criterion/subcriterion’s weights with regard to the total goal; and (4) the weighted geometric mean is used as the aggregation rule, where the alternative’s local priorities are...
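The row geometric mean method mentioned in approach (2) has the following standard form for a multiplicative PCM (standard notation, not quoted from the paper).

```latex
% Row geometric mean priorities for a multiplicative pairwise comparison matrix
% A = (a_{ij}) of order n:
\[
  w_i \;=\; \frac{\Bigl(\prod_{j=1}^{n} a_{ij}\Bigr)^{1/n}}
                 {\sum_{k=1}^{n}\Bigl(\prod_{j=1}^{n} a_{kj}\Bigr)^{1/n}},
  \qquad i = 1,\dots,n.
\]
```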

Posted Content
TL;DR: In this paper, the authors consider a jump-type Cox-Ingersoll-Ross (CIR) process driven by a standard Wiener process and a subordinator, and study asymptotic properties of the maximum likelihood estimator (MLE) for its growth rate.
Abstract: We consider a jump-type Cox--Ingersoll--Ross (CIR) process driven by a standard Wiener process and a subordinator, and we study asymptotic properties of the maximum likelihood estimator (MLE) for its growth rate. We distinguish three cases: subcritical, critical and supercritical. In the subcritical case we prove weak consistency and asymptotic normality, and, under an additional moment assumption, strong consistency as well. In the supercritical case, we prove strong consistency and mixed normal (but non-normal) asymptotic behavior, while in the critical case, weak consistency and non-standard asymptotic behavior are described. We specialize our results to so-called basic affine jump-diffusions as well. Concerning the asymptotic behavior of the MLE in the supercritical case, we derive a stochastic representation of the limiting mixed normal distribution, where the almost sure limit of an appropriately scaled jump-type supercritical CIR process comes into play. This is a new phenomenon, compared to the critical case, where a diffusion-type critical CIR process plays a role.
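A standard way of writing a jump-type CIR process of this kind, with notation that may differ from the paper's.

```latex
% One standard form for a jump-type CIR process driven by a Wiener process W and
% a subordinator J:
\[
  \mathrm{d}Y_t \;=\; (a - bY_t)\,\mathrm{d}t
  \;+\; \sigma\sqrt{Y_t}\,\mathrm{d}W_t \;+\; \mathrm{d}J_t,
  \qquad Y_0 \ge 0,
\]
% with the growth rate b as the parameter estimated by maximum likelihood; the
% subcritical, critical and supercritical regimes correspond to b > 0, b = 0 and
% b < 0, respectively (sign conventions vary across papers).
```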

Posted Content
TL;DR: In this paper, the authors prove that any implementation of pivotal sampling is more efficient than multinomial sampling, due to the weak consistency of the Horvitz-Thompson estimator and the existence of a conservative variance estimator.
Abstract: We prove that any implementation of pivotal sampling is more efficient than multinomial sampling. This property entails the weak consistency of the Horvitz-Thompson estimator and the existence of a conservative variance estimator. A small simulation study supports our findings.
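For reference, the Horvitz-Thompson estimator referred to here has the standard form below; weak consistency concerns its convergence in probability under suitable asymptotics.

```latex
% The Horvitz--Thompson estimator of a population total, in its standard form:
% S is the selected sample and \pi_k the first-order inclusion probability of
% unit k under the sampling design.
\[
  \hat{Y}_{HT} \;=\; \sum_{k \in S} \frac{y_k}{\pi_k},
\]
% weak consistency here being, roughly, convergence in probability of
% \hat{Y}_{HT}/N to the population mean under a suitable asymptotic framework.
```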

01 Jan 2016
TL;DR: The reasons why consistency checks with TGGs are worthwhile are discussed, backtracking issues making correct and efficient consistency checks challenging are identified, and two strategies to overcome these challenges are presented.
Abstract: Development of a complex system relies on different yet related models each representing the system from a particular perspective. In this respect, an important task is to check consistency between related models to guide subsequent decisions concerning consistency restoration. Triple Graph Grammars (TGGs), a particular dialect of graph grammars, are well-suited for describing consistency of two models together with correspondences. The grammar-based description leads to a precise consistency notion which is prerequisite for reliable consistency checks, and correspondences serve as explicit traceability information. Consistency checks with TGGs, however, turn out to be more difficult than consistency restoration in most cases and have not been addressed sufficiently so far. We first discuss why consistency checks with TGGs are worthwhile and identify backtracking issues making correct and efficient consistency checks challenging. Finally, we present two strategies to overcome these challenges, reflecting our work in progress towards a formally-founded consistency check approach with viable tool support.

Posted Content
TL;DR: The syntax and semantics of the LISA (for "Litmus Instruction Set Architecture") language is provided and the parallel assembly language LISA is implemented in the herd7 tool for simulating weak consistency models.
Abstract: We provide the syntax and semantics of the LISA (for "Litmus Instruction Set Architecture") language. The parallel assembly language LISA is implemented in the herd7 tool (this http URL) for simulating weak consistency models.

Proceedings ArticleDOI
17 Jul 2016
TL;DR: This paper outlines an abstract model of programming language constructs and a static checker for data-centric consistency control and demonstrates this model through a simple prototype programming language implementation.
Abstract: The consistency level of operations over replicated data is an important parameter in distributed applications. It impacts correctness, performance, and availability. It is now common to find single applications using many different consistency levels at the same time; however, current commercial frameworks do not provide high-level abstractions for specifying or reasoning about different consistency properties of an application. Research frameworks that do tend to require a substantial effort from developers to specify operation dependencies, orderings and invariants to be preserved. We propose an approach for specifying consistency properties based on the observation that correctness criteria and invariants are a property of data, not operations. Hence, it is reasonable to define the consistency properties required to enforce various data invariants on the data itself rather than on the operations. The result is a system that is simpler to describe and reason about. In this paper, we outline an abstract model of programming language constructs and a static checker for data-centric consistency control, and demonstrate this model through a simple prototype programming language implementation.