
Showing papers on "Weak consistency" published in 2012


Proceedings ArticleDOI
14 Feb 2012
TL;DR: This paper introduces the No-Order File System (NoFS), a simple, lightweight file system that employs a novel technique called backpointer-based consistency to provide crash consistency without ordering writes as they go to disk.
Abstract: Modern file systems use ordering points to maintain consistency in the face of system crashes. However, such ordering leads to lower performance, higher complexity, and a strong and perhaps naive dependence on lower layers to correctly enforce the ordering of writes. In this paper, we introduce the No-Order File System (NoFS), a simple, lightweight file system that employs a novel technique called backpointer-based consistency to provide crash consistency without ordering writes as they go to disk. We utilize a formal model to prove that NoFS provides data consistency in the event of system crashes; we show through experiments that NoFS is robust to such crashes, and delivers excellent performance across a range of workloads. Backpointer-based consistency thus allows NoFS to provide crash consistency without resorting to the heavyweight machinery of traditional approaches.
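
To make the technique concrete, here is a toy sketch of a backpointer check over a hypothetical in-memory block store; NoFS is a real file system whose on-disk implementation differs, so all names here are illustrative only.

```python
# Backpointer-based consistency, in miniature: every data block carries
# a backpointer naming the inode and offset it belongs to, written
# together with the data. A reader trusts a block only if the forward
# pointer (inode -> block) and the backpointer (block -> inode) agree,
# so crash-orphaned blocks are detected without ordering writes.

class Block:
    def __init__(self, data, owner_inode, offset):
        self.data = data
        self.back = (owner_inode, offset)  # backpointer, co-located with data

class Inode:
    def __init__(self, number):
        self.number = number
        self.blocks = {}  # offset -> Block (forward pointers)

    def read(self, offset):
        block = self.blocks.get(offset)
        if block is None:
            return None
        if block.back != (self.number, offset):
            # Mismatch: the forward pointer reached "disk" before the
            # data block did (crash window). Treat block as unallocated.
            return None
        return block.data
```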

126 citations


Proceedings ArticleDOI
14 Oct 2012
TL;DR: This work exposes causal consistency's serious and inherent scalability limitations due to write propagation requirements and traditional dependency tracking mechanisms, and advocates the use of explicit causality, or application-defined happens-before relations.
Abstract: Causal consistency is the strongest consistency model that is available in the presence of partitions and provides useful semantics for human-facing distributed services. Here, we expose its serious and inherent scalability limitations due to write propagation requirements and traditional dependency tracking mechanisms. As an alternative to classic potential causality, we advocate the use of explicit causality, or application-defined happens-before relations. Explicit causality, a subset of potential causality, tracks only relevant dependencies and reduces several of the potential dangers of causal consistency.
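
As an illustration of the contrast, a hedged sketch with a hypothetical key-value API; the paper advocates the idea of explicit causality rather than prescribing any particular interface.

```python
import itertools

class Store:
    """Toy in-memory store; deps would gate replica visibility in a real system."""
    def __init__(self):
        self.data = {}
        self._ids = itertools.count()

    def write(self, key, value, deps):
        wid = next(self._ids)
        self.data[key] = (wid, value, deps)
        return wid

    def read(self, key):
        wid, value, _deps = self.data[key]
        return wid, value

class Session:
    def __init__(self, store):
        self.store = store
        self.seen = set()  # ids of every write this session has read

    def read(self, key):
        wid, value = self.store.read(key)
        self.seen.add(wid)
        return value

    def write_potential(self, key, value):
        # Potential causality: depend on everything ever seen (grows fast).
        return self.store.write(key, value, deps=frozenset(self.seen))

    def write_explicit(self, key, value, deps=()):
        # Explicit causality: the application names only the writes that
        # semantically matter, e.g. the comment a reply answers.
        return self.store.write(key, value, deps=frozenset(deps))
```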

95 citations


Book ChapterDOI
24 Mar 2012
TL;DR: This work establishes a handful of simple operational rules for managing replicas, versions and updates, based on graphs called revision diagrams, and proves that these rules are sufficient to guarantee correct implementation of eventually consistent transactions.
Abstract: When distributed clients query or update shared data, eventual consistency can provide better availability than strong consistency models. However, programming and implementing such systems can be difficult unless we establish a reasonable consistency model, i.e. some minimal guarantees that programmers can understand and systems can provide effectively. To this end, we propose a novel consistency model based on eventually consistent transactions. Unlike serializable transactions, eventually consistent transactions are ordered by two order relations (visibility and arbitration) rather than a single order relation. To demonstrate that eventually consistent transactions can be effectively implemented, we establish a handful of simple operational rules for managing replicas, versions and updates, based on graphs called revision diagrams. We prove that these rules are sufficient to guarantee correct implementation of eventually consistent transactions. Finally, we present two operational models (single server and server pool) of systems that provide eventually consistent transactions.
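
A minimal sketch, under simplifying assumptions of our own, of the fork/join style of replica management that revision diagrams formalize; arbitration is reduced here to "the joined revision's writes win", which is far cruder than the paper's rules.

```python
class Revision:
    """A revision sees a snapshot at fork time and records its own writes."""
    def __init__(self, state):
        self.state = dict(state)  # snapshot visible to this revision
        self.writes = {}          # updates made by this revision

    def set(self, key, value):
        self.writes[key] = value
        self.state[key] = value

    def get(self, key):
        return self.state.get(key)

    def fork(self):
        # Visibility: the child sees exactly what the parent sees now.
        return Revision(self.state)

    def join(self, other):
        # Arbitration (toy version): the joined revision's writes
        # overwrite the joiner's on conflict.
        self.state.update(other.writes)
        self.writes.update(other.writes)

main = Revision({"x": 0})
branch = main.fork()
branch.set("x", 1)   # concurrent update in the forked revision
main.set("x", 2)
main.join(branch)
print(main.get("x"))  # -> 1: the joined revision wins arbitration
```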

95 citations


Journal ArticleDOI
TL;DR: A new method for group decision making with incomplete fuzzy preference relations based on the additive consistency and the order consistency is presented, which can overcome the drawback of the existing methods.
Abstract: In this paper, we present a new method for group decision making with incomplete fuzzy preference relations based on the additive consistency and the order consistency. We estimate unknown preference values based on the additive consistency and then construct the consistency matrix which satisfies the additive consistency and the order consistency simultaneously for aggregation. The existing group decision making methods may not satisfy the order consistency for aggregation in some situations. The proposed method can overcome the drawback of the existing methods. It provides us with a useful way for group decision making with incomplete fuzzy preference relations based on the additive consistency and the order consistency.
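
For reference, the additive-consistency relation on which such estimation rests; the identity is standard in the fuzzy-preference literature, though the paper's exact construction of the consistency matrix may differ.

```latex
% Additive consistency for a fuzzy preference relation R = (r_{ij}),
% with r_{ij} \in [0,1] and r_{ij} + r_{ji} = 1:
%     r_{ij} = r_{ik} - r_{jk} + 0.5   for all i, j, k.
% A missing entry can then be estimated from the known entries through
% intermediate alternatives k:
\[
  \hat{r}_{ij} \;=\; \frac{1}{\lvert K_{ij}\rvert}
      \sum_{k \in K_{ij}} \bigl( r_{ik} - r_{jk} + 0.5 \bigr),
\]
% where K_{ij} is the set of indices k for which both r_{ik} and
% r_{jk} are known.
```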

88 citations


Journal ArticleDOI
TL;DR: This work disputes the particular fairness interpretations that have been offered for consistency, but develops a different and important fairness foundation for the principle, arguing that it can be seen as the result of adding ‘some’ efficiency to a ‘post-application’ and efficiency-free expression of solidarity in response to population changes.
Abstract: An allocation rule is ‘consistent’ if the recommendation it makes for each problem ‘agrees’ with the recommendation it makes for each associated reduced problem, obtained by imagining some agents leaving with their assignments. Some authors have described the consistency principle as a ‘fairness principle’. Others have written that it is not about fairness, that it should be seen as an ‘operational principle’. We dispute the particular fairness interpretations that have been offered for consistency, but develop a different and important fairness foundation for the principle, arguing that it can be seen as the result of adding ‘some’ efficiency to a ‘post-application’ and efficiency-free expression of solidarity in response to population changes. We also challenge the interpretations of consistency as an operational principle that have been given, and here identify a sense in which such an interpretation can be supported. We review and assess the other interpretations of the principle, as ‘robustness’, ‘coherence’ and ‘reinforcement’.

71 citations


Journal ArticleDOI
TL;DR: It is concluded that trace equivalence is not suited to be applied as a consistency notion, whereas the notions based on behavioural profiles approximate the perceived consistency of the subjects significantly.

40 citations


Book ChapterDOI
16 Oct 2012
TL;DR: This paper studies the semantics of sets under eventual consistency, which supports concurrent updates, reduces latency and improves fault tolerance, but forgoes strong consistency (e.g., linearisability).
Abstract: This paper studies the semantics of sets under eventual consistency. The set is a pervasive data type, used either directly or as a component of more complex data types, such as maps or graphs. Eventual consistency of replicated data supports concurrent updates, reduces latency and improves fault tolerance, but forgoes strong consistency (e.g., linearisability). Accordingly, several cloud computing platforms implement eventually-consistent replicated sets [2,4].
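
As an illustration, a minimal observed-remove set (OR-Set), one well-known design for an eventually consistent replicated set; this sketch assumes whole-state merges and is not taken verbatim from the paper.

```python
import uuid

class ORSet:
    """Observed-remove set: adds win over concurrent removes.

    Each add is tagged with a unique id; a remove deletes only the tags
    it has observed, so a concurrent add survives the merge.
    """
    def __init__(self):
        self.adds = set()        # (element, unique_tag) pairs
        self.tombstones = set()  # tags that have been removed

    def add(self, element):
        self.adds.add((element, uuid.uuid4()))

    def remove(self, element):
        self.tombstones |= {tag for (e, tag) in self.adds if e == element}

    def contains(self, element):
        return any(e == element and tag not in self.tombstones
                   for (e, tag) in self.adds)

    def merge(self, other):
        # The join is commutative, associative and idempotent, so
        # replicas that exchange state eventually converge.
        self.adds |= other.adds
        self.tombstones |= other.tombstones
```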

29 citations


Book ChapterDOI
19 Dec 2012
TL;DR: In this paper, a nonparametric method is proposed for testing conditional independence using local polynomial quantile regression with weakly dependent data; the tests can detect local alternatives to conditional independence that decay to zero at the parametric rate.
Abstract: We provide straightforward new nonparametric methods for testing conditional independence using local polynomial quantile regression, allowing weakly dependent data. Inspired by Hausman's (1978) specification testing ideas, our methods essentially compare two collections of estimators that converge to the same limits under correct specification (conditional independence) and that diverge under the alternative. To establish the properties of our estimators, we generalize the existing nonparametric quantile literature not only by allowing for dependent heterogeneous data but also by establishing a weak consistency rate for the local Bahadur representation that is uniform in both the conditioning variables and the quantile index. We also show that, despite our nonparametric approach, our tests can detect local alternatives to conditional independence that decay to zero at the parametric rate. Our approach gives the first nonparametric tests for time-series conditional independence that can detect local alternatives at the parametric rate. Monte Carlo simulations suggest that our tests perform well in finite samples. We apply our test to test for a key identifying assumption in the literature on nonparametric, nonseparable models by studying the returns to schooling.
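
In outline, the Hausman-style idea: under the null of conditional independence, the conditional quantiles of Y given (X, Z) do not depend on Z, so two quantile estimators share a common limit (our paraphrase of the setup).

```latex
% Null hypothesis: Y is independent of Z given X, i.e. for all
% quantile levels tau,
\[
  H_0:\quad Q_{Y \mid X, Z}(\tau \mid x, z) \;=\; Q_{Y \mid X}(\tau \mid x).
\]
% A test can therefore compare \hat{Q}_{Y|X,Z} with \hat{Q}_{Y|X}:
% the two estimators converge to the same limit under H_0 and
% diverge under the alternative.
```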

24 citations


Book ChapterDOI
27 Aug 2012
TL;DR: VFC3, a novel consistency model for replicated data across data centers with framework and library support to enforce increasing degrees of consistency for different types of data (based on their semantics), is proposed, offering rationalization of resources and improvement of QoS.
Abstract: Today we are increasingly dependent on critical data stored in cloud data centers across the world. To deliver high availability and augmented performance, different replication schemes are used to maintain consistency among replicas. With classical consistency models, performance is necessarily degraded, and thus most highly-scalable cloud data centers sacrifice consistency to some extent in exchange for lower latencies to end-users. Moreover, those cloud systems blindly allow stale data to exist for some constant period of time and disregard the semantics and importance that data might have, which undoubtedly could be used to gear consistency more wisely, combining stronger and weaker levels of consistency. To tackle this inherent and well-studied trade-off between availability and consistency, we propose the use of VFC3, a novel consistency model for replicated data across data centers, with framework and library support to enforce increasing degrees of consistency for different types of data (based on their semantics). It targets cloud tabular data stores, offering rationalization of resources (especially bandwidth) and improvement of QoS (performance, latency and availability) by providing strong consistency where it matters most and relaxing it on less critical classes or items of data.
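
As a hedged illustration, per-class divergence bounds in the spirit of this model might look as follows; all names and numbers are hypothetical, and the original VFC line of work bounds divergence in time, number of updates, and value.

```python
# Hypothetical per-class consistency bounds: each class of data tolerates
# a maximum staleness (seconds), a maximum number of unseen updates, and
# a maximum numeric drift before a replica must be synchronized.
CONSISTENCY_CLASSES = {
    "account_balance": {"max_staleness_s": 0,  "max_updates": 0,   "max_drift": 0.0},
    "product_stock":   {"max_staleness_s": 5,  "max_updates": 10,  "max_drift": 5.0},
    "page_views":      {"max_staleness_s": 60, "max_updates": 500, "max_drift": 1000.0},
}

def must_sync(klass, staleness_s, missed_updates, drift):
    """Return True when any divergence bound for the data class is exceeded."""
    b = CONSISTENCY_CLASSES[klass]
    return (staleness_s > b["max_staleness_s"]
            or missed_updates > b["max_updates"]
            or abs(drift) > b["max_drift"])

print(must_sync("product_stock", staleness_s=2, missed_updates=3, drift=1.0))   # False
print(must_sync("account_balance", staleness_s=1, missed_updates=0, drift=0.0)) # True
```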

23 citations


Proceedings ArticleDOI
24 Jun 2012
TL;DR: A novel approach called cost-based concurrency control (C3) allows the system to dynamically and adaptively switch at runtime between different consistency levels of transactions; the evaluation demonstrates the feasibility of the cost model and shows that C3 reduces the overall costs of transactions compared to a fixed consistency level.
Abstract: Clouds are becoming the preferred platforms for large-scale applications. Currently, Cloud environments focus on high scalability and availability by relaxing consistency. Weak consistency is considered to be sufficient for most of the applications currently deployed in the Cloud. However, the Cloud is increasingly being promoted as an environment for running a wide range of different types of applications on top of replicated data - not all of which will be satisfied with weak consistency. Strong consistency, even though demanded by applications, decreases availability and is costly to enforce from both a performance and a monetary point of view. On the other hand, weak consistency may generate high costs due to the access to inconsistent data. In this paper, we present a novel approach, called cost-based concurrency control (C3), that allows the system to dynamically and adaptively switch at runtime between different consistency levels of transactions. C3 has been implemented in a Data-as-a-Service Cloud environment and considers all costs that incur during execution. These costs are determined by infrastructure costs for running a transaction at a certain consistency level (called consistency costs) and, optionally, by additional application-specific costs for compensating the effects of accessing inconsistent data (called inconsistency costs). C3 considers transaction mixes running different consistency levels at the same time while enforcing the inherent consistency guarantees of each of these protocols. The main contribution of this paper is threefold. First, it thoroughly analyzes the consistency costs of the most common concurrency control protocols; second, it specifies a set of rules that allow the system to dynamically select the most appropriate consistency level with the goal of minimizing the overall costs (consistency and inconsistency costs); third, it provides a protocol that guarantees that anomalies in the transaction mixes supported by C3 are avoided and that enforces the correct execution of all transactions in a transaction mix. We have evaluated C3 on the basis of real infrastructure costs, derived from Amazon's EC2. The results demonstrate the feasibility of the cost model and show that C3 leads to a reduction of the overall costs of transactions compared to a fixed consistency level.
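
A minimal sketch of the cost-based selection rule at the heart of this approach, with hypothetical cost figures; the paper derives its rules from measured infrastructure costs and a per-protocol analysis.

```python
# Choose the consistency level that minimizes expected total cost:
# the consistency cost of running at that level plus the expected
# inconsistency cost (probability of serving stale data times the
# application's compensation cost).

def choose_level(levels, compensation_cost):
    """levels: {name: (consistency_cost, p_inconsistent)}"""
    def total(name):
        c_cost, p_inc = levels[name]
        return c_cost + p_inc * compensation_cost
    return min(levels, key=total)

# Hypothetical per-transaction figures (cents):
levels = {
    "serializable":   (0.50, 0.00),
    "snapshot":       (0.20, 0.01),
    "read_committed": (0.05, 0.10),
}
print(choose_level(levels, compensation_cost=10.0))  # -> "snapshot" (0.30)
```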

17 citations


Journal ArticleDOI
TL;DR: In this article, a functional linear model for time series prediction is presented which combines a functional endogenous predictor with real and functional exogenous variables. A penalized B-spline type estimator is presented and some weak consistency results are derived under suitable conditions.

30 Dec 2012
TL;DR: A new scheme for the discretization of heterogeneous anisotropic diffusion problems on general meshes is presented that can be written as a cell-centered scheme with a small stencil and that it is convergent for discontinuous tensors.
Abstract: We present a new scheme for the discretization of heterogeneous anisotropic diffusion problems on general meshes. With light assumptions, we show that the algorithm can be written as a cell-centered scheme with a small stencil and that it is convergent for discontinuous tensors. The key point of the proof consists in showing both the strong and the weak consistency of the method. The efficiency of the scheme is demonstrated through numerical tests of the 5th International Symposium on Finite Volumes for Complex Applications (FVCA 5). Moreover, the comparison with classical finite volume schemes emphasizes the precision of the method. We also show the good behaviour of the algorithm for nonconforming meshes.

Book ChapterDOI
08 Oct 2012
TL;DR: This work discusses how the structure of the dual graph of binary CSPs affects the consistency level enforced by RNIC, and compares RNIC to Neighborhood Inverse Consistency (NIC) and strong Conservative Dual Consistency (sCDC), which are higher-level consistency properties useful for solving difficult problem instances.
Abstract: Our goal is to investigate the definition and application of strong consistency properties on the dual graphs of binary Constraint Satisfaction Problems (CSPs). As a first step in that direction, we study the structure of the dual graph of binary CSPs, and show how it can be arranged in a triangle-shaped grid. We then study, in this context, Relational Neighborhood Inverse Consistency (RNIC), which is a consistency property that we had introduced for non-binary CSPs [17]. We discuss how the structure of the dual graph of binary CSPs affects the consistency level enforced by RNIC. Then, we compare, both theoretically and empirically, RNIC to Neighborhood Inverse Consistency (NIC) and strong Conservative Dual Consistency (sCDC), which are higher-level consistency properties useful for solving difficult problem instances. We show that all three properties are pairwise incomparable.

01 Jan 2012
TL;DR: The vireo group participated in four tasks: instance search, multimedia event recounting, multimedia event detection, and semantic indexing; the approaches and evaluation results are presented and discussed.
Abstract: The vireo group participated in four tasks: instance search, multimedia event recounting, multimedia event detection, and semantic indexing. In this paper, we will present our approaches and discuss the evaluation results. Instance Search (INS): We submitted four Bag-of-Words (BoW) based runs this year, mainly to test the proper way of exploiting spatial information by comparing weak geometric consistency (WGC) checking and our spatial topology consistency checking using Delaunay Triangulation (DT) based matching. Considering the special features of the INS task of TRECVID (e.g., multiple image examples for a query; an ROI indicating the spatial location of the instance), we also study the effects of multi-query fusion and background context modeling on top of the BoW retrieval system.

Book ChapterDOI
08 Oct 2012
TL;DR: This paper studies local-to-global consistency for ORD-Horn languages, that is, structures definable over the ordered rationals (ℚ; <) within the formalism of ORD-Horn clauses, and provides a syntactic characterization in terms of first-order definability.
Abstract: Establishing local consistency is one of the most frequently used algorithmic techniques in constraint satisfaction in general and in spatial and temporal reasoning in particular. A collection of constraints is globally consistent if it is completely explicit, that is, every partial solution may be extended to a full solution by greedily assigning values to variables one at a time. We will say that a structure B has local-to-global consistency if establishing local consistency yields a globally consistent instance of CSP(B). This paper studies local-to-global consistency for ORD-Horn languages, that is, structures definable over the ordered rationals (ℚ; <) within the formalism of ORD-Horn clauses. This formalism has attracted a lot of attention and is of crucial importance to spatial and temporal reasoning. We provide a syntactic characterization in terms of first-order definability of all ORD-Horn languages enjoying local-to-global consistency.
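
To fix intuitions: an ORD-Horn clause is a disjunction of order literals in which at most one literal has the form x ≤ y or x = y and all remaining literals are disequalities, for example:

```latex
% A typical ORD-Horn clause over the ordered rationals:
\[
  (x \ne y) \;\vee\; (x \ne z) \;\vee\; (y \le z)
\]
```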

Journal ArticleDOI
TL;DR: In this paper, the authors prove the weak consistency of the posterior distribution and the Bayes estimator for a two-phase piecewise linear regression model where the break-point is unknown.
Abstract: We prove the weak consistency of the posterior distribution and that of the Bayes estimator for a two-phase piecewise linear regression model where the break-point is unknown. The non-differentiability of the likelihood of the model with regard to the break-point parameter induces technical difficulties that we overcome by creating a regularised version of the problem at hand. We first recover the strong consistency of the quantities of interest for the regularised version, using results about the MLE, and we then prove that the regularised version and the original version of the problem share the same asymptotic properties.
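
For concreteness, the model in question has the standard two-phase form (notation ours):

```latex
% Two-phase piecewise linear regression with unknown break-point \tau:
\[
  y_i \;=\;
  \begin{cases}
    \alpha_1 + \beta_1 x_i + \varepsilon_i, & x_i \le \tau,\\[2pt]
    \alpha_2 + \beta_2 x_i + \varepsilon_i, & x_i > \tau.
  \end{cases}
\]
% The likelihood is not differentiable in \tau, which is the source of
% the technical difficulty the authors regularise away.
```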

Journal ArticleDOI
TL;DR: In this article, the problem of bandwidth selection by cross-validation from a sequential point of view in a nonparametric regression model is considered, where the errors may be α-mixing or L2-near epoch dependent.
Abstract: We consider the problem of bandwidth selection by cross-validation from a sequential point of view in a nonparametric regression model. Having in mind that in applications one often aims at estimation, prediction and change detection simultaneously, we investigate that approach for sequential kernel smoothers in order to base these tasks on a single statistic. We provide uniform weak laws of large numbers and weak consistency results for the cross-validated bandwidth. Extensions to weakly dependent error terms are discussed as well. The errors may be α-mixing or L2-near epoch dependent, which guarantees that the uniform convergence of the cross-validation sum and the consistency of the cross-validated bandwidth hold true for a large class of time series. The method is illustrated by analyzing photovoltaic data.
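
As an illustration, a minimal leave-one-out cross-validated bandwidth choice for a Nadaraya-Watson smoother on an i.i.d. toy sample; the paper's sequential, weakly dependent setting is considerably more general.

```python
import numpy as np

def nw_estimate(x0, x, y, h, leave_out=None):
    """Nadaraya-Watson estimate at x0 with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    if leave_out is not None:
        w[leave_out] = 0.0  # drop the held-out observation
    return np.sum(w * y) / np.sum(w)

def cv_bandwidth(x, y, grid):
    """Pick the bandwidth minimizing the leave-one-out CV sum."""
    def cv_score(h):
        preds = [nw_estimate(x[i], x, y, h, leave_out=i) for i in range(len(x))]
        return np.mean((y - np.array(preds)) ** 2)
    return min(grid, key=cv_score)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(200)
print(cv_bandwidth(x, y, grid=[0.02, 0.05, 0.1, 0.2]))
```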

Proceedings ArticleDOI
05 Oct 2012
TL;DR: This work adopts the Optimal Stopping Theory and, specifically, the Odds-algorithm to enable the caching server to accurately handle the object refreshing and the stale delivery problem.
Abstract: Serving the most updated version of a resource with minimal networking overhead is always a challenge for WWW Caching, especially, for weak consistency algorithms such as the widely adopted Adaptive Time-to-Live (ATTL). We adopt the Optimal Stopping Theory (OST) and, specifically, the Odds-algorithm, to enable the caching server to accurately handle the object refreshing and the stale delivery problem. Simulation results show that the proposed OST-based algorithm outperforms the conventional ATTL.
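
For reference, a sketch of the classical odds algorithm (due to Bruss) that the paper builds on; how the per-interval change probabilities are estimated for cached objects is the paper's contribution and is not shown here.

```python
# Odds algorithm: given n independent slots where slot k "succeeds"
# (here: the origin object changes) with probability p[k], stop at the
# first success from index s onward, where s is found by summing the
# odds r_k = p_k / (1 - p_k) backwards until the sum reaches 1. This
# maximizes the probability of stopping on the last success.

def odds_threshold(p):
    odds_sum = 0.0
    s = 0  # if the odds never sum to 1, stop at the first success overall
    for k in range(len(p) - 1, -1, -1):
        odds_sum += p[k] / (1.0 - p[k])
        if odds_sum >= 1.0:
            s = k
            break
    return s

p = [0.1, 0.2, 0.15, 0.3, 0.25]  # hypothetical change probabilities
print(odds_threshold(p))  # -> 1: act on the first change at index >= 1
```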

01 Jan 2012
TL;DR: In this article, an asymptotic theory is developed for a non-linear parametric co-integrating regression model; it is easy to apply to various non-stationary time series, including partial sums of linear processes and Harris recurrent Markov chains.
Abstract: This paper develops an asymptotic theory for a non-linear parametric co-integrating regression model. We establish a general framework for weak consistency that is easy to apply to various non-stationary time series, including partial sums of linear processes and Harris recurrent Markov chains. We provide a limit distribution for the nonlinear least squares estimator which significantly extends the previous work. We also introduce endogeneity in the model by allowing the error to be serially dependent and cross-correlated with the regressor.

Posted Content
TL;DR: In this paper, the authors study the local linear estimator for the drift coefficient of stochastic differential equations driven by α-stable Lévy motions observed at discrete instants.
Abstract: We study the local linear estimator for the drift coefficient of stochastic differential equations driven by α-stable Lévy motions observed at discrete instants, letting T → ∞. Under regular conditions, we derive the weak consistency and a central limit theorem for the estimator. Compared with the Nadaraya-Watson estimator, the local linear estimator achieves a bias reduction under different schemes, whether the kernel function is symmetric or not.
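
A sketch of a generic local linear estimator applied to discretized SDE increments, simulated here with Gaussian noise for simplicity; the paper's α-stable setting and its asymptotics require additional care.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear fit at x0: weighted least squares on (x - x0)."""
    w = np.maximum(1 - np.abs((x - x0) / h), 0)  # triangular kernel
    X = np.column_stack([np.ones_like(x), x - x0])
    A = X.T @ (w[:, None] * X)
    b = X.T @ (w * y)
    beta = np.linalg.solve(A, b)
    return beta[0]  # intercept estimates the drift at x0

# Drift estimation from discrete observations of an SDE: regress the
# increments (X_{t+1} - X_t) / dt on the current state X_t.
dt = 0.01
rng = np.random.default_rng(1)
X = np.zeros(2000)
for t in range(1999):  # Euler scheme with true drift b(x) = -x
    X[t + 1] = X[t] - X[t] * dt + np.sqrt(dt) * rng.standard_normal()
increments = np.diff(X) / dt
print(local_linear(0.5, X[:-1], increments, h=0.5))  # roughly b(0.5) = -0.5
```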

01 Jan 2012
TL;DR: It is shown that rigorousness is strictly stronger than opacity, but strictness is incomparable to opacity, and all non-eager STMs are strict.
Abstract: We describe several database consistency conditions that restrict ongoing transactions (which might later be aborted), and relate them to known consistency conditions for transactional memory. In particular, we show that rigorousness is strictly stronger than opacity, but strictness is incomparable to opacity. The same relationships also hold for virtual world consistency. We also show that all non-eager STMs are strict.

01 Feb 2012
TL;DR: This technical report introduces a novel approach called cost-based concurrency control (C3), which allows the system to dynamically and adaptively switch at runtime between different consistency levels of transactions in a Cloud environment, based on the costs incurred during execution.
Abstract: The recent advent of Cloud computing has strongly influenced the deployment of large-scale applications. Currently, Cloud environments mainly focus on high scalability and availability of these applications. Consistency, in contrast, is relaxed, and weak consistency is commonly considered to be sufficient. However, an increasing number of applications are no longer satisfied with weak consistency. Strong consistency, in turn, decreases availability and is costly to enforce from both a performance and an infrastructure point of view. On the other hand, weak consistency may lead to high costs due to the access to inconsistent data. In this technical report, we introduce a novel approach called cost-based concurrency control (C3). Essentially, C3 allows the system to dynamically and adaptively switch at runtime between different consistency levels of transactions in a Cloud environment based on the costs incurred during execution. These costs are determined by infrastructure costs for running a transaction at a certain consistency level (called consistency costs) and, optionally, by additional application-specific costs for compensating the effects of accessing inconsistent data (called inconsistency costs). C3 jointly supports transactions of different consistency levels and enforces the inherent consistency guarantees of each protocol. We first analyze the consistency costs of concurrency control protocols; second, we specify a set of rules that allow the system to dynamically select the best consistency level with the goal of minimizing the overall costs; third, we provide a protocol that enforces the correct execution of all transactions in a transaction mix. We have evaluated C3 on top of Amazon's EC2. The results show that C3 leads to a reduction of the overall transaction costs compared to a fixed consistency level.

Proceedings ArticleDOI
01 Oct 2012
TL;DR: This vision paper informally introduces the formalism of Graph Diagrams which can serve as a notation for multiple models@run.time and their relationships as a basis for structural consistency analysis.
Abstract: Structural consistency is crucial in models@run.time. Since the designer is usually not around at run time, the running system needs to be able to detect and resolve inconsistencies automatically. This requires means to formally express consistency requirements in and between models. In this vision paper we informally introduce the formalism of Graph Diagrams, which can serve as a notation for multiple models@run.time and their relationships, as a basis for structural consistency analysis.

Journal Article
TL;DR: An empirical version of the regularity index is studied and conditions for its weak and strong convergence are given without assuming absolute continuity or other global properties of the underlying measure.
Abstract: The index of regularity of a measure was introduced by Beirlant, Berlinet and Biau [1] to solve practical problems in nearest neighbour density estimation such as removing bias or selecting the number of neighbours. These authors proved the weak consistency of an estimator based on the nearest neighbour density estimator. In this paper, we study an empirical version of the regularity index and give sufficient conditions for its weak and strong convergence without assuming absolute continuity or other global properties of the underlying measure.

01 Jan 2012
TL;DR: This dissertation proposes a new solution based on the adaptation of the Vector-Field Consistency algorithm, which relies on two distinct concepts: locality-awareness and a continuous consistency model; it also describes in detail how this architecture was applied to the Eclipse IDE in the form of a plug-in.
Abstract: Software development is, mostly, a collaborative process where teams of developers work together in order to produce quality code. Collaboration is, generally, not an issue, as teams work together in the same office or building. However, larger projects may require more people, who might be spread throughout different floors, buildings and different companies. Several systems have been developed in order to provide better means of communication and awareness of the actions of others. Still, most of them rely on an all-or-nothing approach: the user is either immediately notified of all modifications occurring in a shared project, or is completely oblivious to all external changes. We propose a new solution based on the adaptation of the Vector-Field Consistency algorithm, which relies on two distinct concepts: locality-awareness and a continuous consistency model. The former represents the ability of the system to make choices based on the proximity of remote changes in relation to a particular user's position, while the latter corresponds to a consistency model between strong and weak consistency, able to control and impose a limit on how much two replicas can diverge. With the correct parametrization, this model can establish a great balance between consistency and availability. Vector-Field Consistency (VFC) was originally applied in the context of distributed ad-hoc gaming, and will now take its first steps into collaborative software development. This adaptation allows a software developer to have a higher degree of awareness of remote changes that might directly affect his work, with the level of awareness gradually decreasing as the changes fall further from the developer's task. In this dissertation we first study the current state of the art concerning consistency models and various existing software management tools, along with multiple techniques used for distributed collaborative software development. We then explain how the VFC algorithm and multiple other mechanisms were adapted to our new context. We describe in detail how our architecture was applied to the Eclipse IDE, in the form of a plug-in, to provide a new level of distributed collaboration to software developers, and how our solution was evaluated. Eclipse.org: http://www.eclipse.org/platform

Proceedings Article
01 Jan 2012
TL;DR: This paper overviews a study of various shared memory consistency models: Sequential Consistency (SC), Weak Consistency (WC), Release Consistency (RC), and Protected Release Consistency (PRC).
Abstract: This paper overviews our study on various shared memory consistency models, Sequential Consistency (SC), Weak Consistency (WC), Release Consistency (RC), and Protected Release Consistency (PRC) models ...

Posted Content
TL;DR: In this article, the authors prove the weak consistency of the posterior distribution and the Bayes estimator for a two-phase piecewise linear regression model with the break-point parameter unknown.
Abstract: We prove the weak consistency of the posterior distribution and that of the Bayes estimator for a two-phase piecewise linear regression model where the break-point is unknown. The non-differentiability of the likelihood of the model with regard to the break-point parameter induces technical difficulties that we overcome by creating a regularised version of the problem at hand. We first recover the strong consistency of the quantities of interest for the regularised version, using results about the MLE, and we then prove that the regularised version and the original version of the problem share the same asymptotic properties.

Proceedings ArticleDOI
12 Oct 2012
TL;DR: The article improves the method that resolves update conflicts with a whole-sort timestamp vector in data replication, which greatly improves the availability of systems that use timestamp vectors to resolve update conflicts.
Abstract: Data consistency consists of strong consistency and weak consistency. Strong consistency ensures that data replicas stay consistent at all times, but it limits system availability; weak consistency only ensures that data replicas eventually become consistent, which greatly increases system availability, but it cannot propagate concurrent modifications in a timely manner. Replication is the key technology of massive data consistency. Based on the existing methods for maintaining the consistency of massive data, and according to its characteristics, the article improves the method that resolves update conflicts with a whole-sort timestamp vector in data replication. This greatly improves the availability of systems that use timestamp vectors to resolve update conflicts.
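
A minimal sketch of the vector-timestamp comparison that underlies such conflict detection; the structure is the textbook one, and the paper's whole-sort refinement is not reproduced here.

```python
def compare(v1, v2):
    """Compare two equal-length vector timestamps.

    Returns -1 if v1 happened before v2, 1 if v2 happened before v1,
    0 if they are equal, and None if they are concurrent, i.e. a true
    update conflict that needs resolution.
    """
    le = all(a <= b for a, b in zip(v1, v2))
    ge = all(a >= b for a, b in zip(v1, v2))
    if le and ge:
        return 0
    if le:
        return -1
    if ge:
        return 1
    return None  # concurrent updates: conflict

print(compare([2, 1, 0], [2, 2, 0]))  # -1: first update precedes second
print(compare([2, 1, 0], [1, 2, 0]))  # None: concurrent updates
```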

Journal ArticleDOI
TL;DR: A method for evaluating the quality of consistency at the state level and the structural level is presented; it helps identify the factors that influence consistency and find accurate measures to improve consistency at every level.
Abstract: This paper analyzes the function consistency, dominating consistency, state consistency and structural consistency of multi-resolution models based on three principles of similarity theory. A method for evaluating the quality of consistency at the state level and the structural level is then presented. Practice has shown that this research helps identify the factors that influence consistency and find accurate measures to improve consistency at every level.

DissertationDOI
01 Jan 2012
TL;DR: This monograph provides a chronology of the events leading to and following the publication of the Bouchut-Boyaval manuscript in the peer-reviewed literature.