
Showing papers on "Weak consistency published in 2010"


Proceedings ArticleDOI
25 Jul 2010
TL;DR: A variety of interesting places in the "CAP Space" are reviewed as a way to illuminate issues and their consequences, and a few of the options to try to "work around" the impossible are explored.
Abstract: At PODC 2000, the CAP theorem received its first broad audience. Surprisingly for an impossibility result, one important effect has been to free designers to explore a wider range of distributed systems. Designers of wide-area systems, in which network partitions are considered inevitable, know they cannot have both availability and consistency, and thus can now justify weaker consistency. The rise of the "NoSQL" movement ("Not Only SQL") is an expression of this freedom. The choices of how and when to weaken consistency are often the defining characteristics of these systems, with new variations appearing every year. We review a variety of interesting places in the "CAP Space" as a way to illuminate these issues and their consequences. For example, automatic teller machines (ATMs), which predate the CAP theorem, surprisingly choose availability with weak consistency but with bounded risk. Finally, I explore a few of the options to try to "work around" the impossible. The most basic is the use of commutative operations, which make it easy to restore consistency after a partition heals. However, even many commutative operations have non-commutative exceptions in practice, which means that the exceptions may be incorrect or late. Adding the concept of "delayed exceptions" allows more operations to be considered commutative and simplifies eventual consistency during a partition. Furthermore, we can think of delayed exception handling as "compensation" - we execute a compensating transaction that restores consistency. Delayed exception handling with compensation appears to be what most real wide-area systems do - inconsistency due to limited communication is treated as an exception and some exceptional action, such as monetary compensation or even legal action, is used to fix it. This approach to wide-area systems puts the emphasis on audit trails and recovery rather than prevention, and implies that we should expand and formalize the role of compensation in the design of complex systems
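
As a rough illustration of the commutative-operations and delayed-exception ideas sketched in this abstract (not code from the paper), the following Python sketch merges the operation logs of two account replicas after a partition heals and compensates an overdraft instead of preventing it; the class and method names are hypothetical.

```python
# Illustrative sketch (not from the paper): commutative deposits/withdrawals
# with a "delayed exception" that triggers compensation after a partition heals.

class AccountReplica:
    def __init__(self, balance):
        self.balance = balance    # shared pre-partition balance
        self.log = []             # locally accepted operations (all commutative)

    def deposit(self, amount):
        self.log.append(+amount)

    def withdraw(self, amount):
        # Accept optimistically: availability over consistency, with bounded risk.
        self.log.append(-amount)

    def merge(self, other):
        # Union of operation logs; addition commutes, so order does not matter.
        merged = self.log + other.log
        final = self.balance + sum(merged)
        if final < 0:
            # The non-commutative exception surfaces only now: compensate
            # (e.g. an overdraft fee or a reversing transaction) rather than prevent.
            compensation = -final
            final += compensation
            print(f"delayed exception: overdraft, compensating by {compensation}")
        return final


a, b = AccountReplica(100), AccountReplica(100)
a.withdraw(80)       # accepted on one side of the partition
b.withdraw(80)       # accepted on the other side
print(a.merge(b))    # partition heals; merge detects the overdraft and compensates
```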

79 citations


Journal ArticleDOI
01 Mar 2010-Extremes
TL;DR: In this article, the authors generalize several studies in the area of extreme value theory on the estimation of the extreme value index and the second order parameter, and prove weak consistency and asymptotic normality under classical assumptions.
Abstract: In this paper, we generalize several studies in the area of extreme value theory for the estimation of the extreme value index and the second order parameter. Weak consistency and asymptotic normality are proven under classical assumptions. Some numerical simulations and computations are also performed to illustrate the finite-sample and the limiting behavior of the estimators.
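
For readers new to the terminology, weak consistency of an estimator means convergence in probability to the target quantity. The block below states this and gives, as a classical concrete example of an extreme value index estimator covered by results of this kind (chosen here for illustration, not taken from the paper), the Hill estimator.

```latex
% Weak consistency: convergence in probability of the estimator to the target.
\hat{\gamma}_n \xrightarrow{\;P\;} \gamma
\quad\text{i.e.}\quad
\forall \varepsilon > 0:\ \Pr\bigl(|\hat{\gamma}_n - \gamma| > \varepsilon\bigr) \to 0 .

% Classical Hill estimator of the extreme value index \gamma > 0,
% based on the k largest order statistics X_{n-k,n} \le \dots \le X_{n,n}:
\hat{\gamma}^{H}_{n,k} \;=\; \frac{1}{k} \sum_{i=1}^{k} \log X_{n-i+1,n} \;-\; \log X_{n-k,n}.
```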

69 citations


Proceedings ArticleDOI
03 Oct 2010
TL;DR: A framework for specifying overlaps between partial models and defining their global consistency is presented; an advantage of the framework is that heterogeneous consistency checking is reduced to the homogeneous case, yet merging partial metamodels into one global metamodel is not needed.
Abstract: Software development often involves a set of models defined in different metamodels, each model capturing a specific view of the system. We call this set a multimodel, and its elements partial or local models. Since partial models overlap, they may be consistent or inconsistent w.r.t. a set of global constraints. We present a framework for specifying overlaps between partial models and defining their global consistency. An advantage of the framework is that heterogeneous consistency checking is reduced to the homogeneous case yet merging partial metamodels into one global metamodel is not needed. We illustrate the framework with examples and sketch a formal semantics for it based on category theory.

57 citations


Proceedings Article
11 Jul 2010
TL;DR: It is empirically shown that wR(*,m)C solves in a backtrack-free manner all the instances of some CSP benchmark classes, thus hinting at the tractability of those classes.
Abstract: Consistency properties and algorithms for achieving them are at the heart of the success of Constraint Programming. In this paper, we study the relational consistency property R(*,m)C, which is equivalent to m-wise consistency proposed in relational databases. We also define wR(*,m)C, a weaker variant of this property. We propose an algorithm for enforcing these properties on a Constraint Satisfaction Problem by tightening the existing relations and without introducing new ones. We empirically show that wR(*,m)C solves in a backtrack-free manner all the instances of some CSP benchmark classes, thus hinting at the tractability of those classes.
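
A minimal sketch of the m = 2 case, assuming pairwise relational consistency can be approximated by iterated semi-join reduction of the relation tables; the paper's actual R(*,m)C/wR(*,m)C algorithm is more general and more refined than this illustration.

```python
# Rough sketch of pairwise relational consistency (the m = 2 case of R(*,m)C),
# enforced by tightening relations with iterated semi-joins until a fixpoint.
# Relations are a dict: scope (tuple of variable names) -> set of value tuples.

def semijoin_reduce(relations):
    """Remove tuples that cannot be extended consistently to any other relation."""
    changed = True
    while changed:
        changed = False
        for s1, t1 in relations.items():
            for s2, t2 in relations.items():
                if s1 == s2:
                    continue
                shared = [v for v in s1 if v in s2]
                if not shared:
                    continue
                i1 = [s1.index(v) for v in shared]
                i2 = [s2.index(v) for v in shared]
                support = {tuple(t[j] for j in i2) for t in t2}
                kept = {t for t in t1 if tuple(t[j] for j in i1) in support}
                if kept != t1:
                    relations[s1] = kept
                    t1 = kept
                    changed = True
    return relations


rels = {
    ("x", "y"): {(0, 0), (0, 1), (1, 1)},
    ("y", "z"): {(1, 0)},
}
print(semijoin_reduce(rels))   # tuples with y = 0 are pruned from ("x", "y")
```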

42 citations


Journal ArticleDOI
TL;DR: The main result is that GLACE is weakly consistent on general meshes in any dimension, which theoretically validates the use of isoparametric elements for 3D Lagrangian compressible gas dynamics calculations.

33 citations


Journal ArticleDOI
TL;DR: A heuristic approach for efficiently analyzing constraint specifications built from constraint patterns is presented; it exploits the semantic properties of constraint patterns, thereby enabling syntax-based consistency checking in polynomial time, and a consistency checker implementing these ideas is introduced.
Abstract: Precision and consistency are important prerequisites for class models to conform to their intended domain semantics. Precision can be achieved by augmenting models with design constraints and consistency can be achieved by avoiding contradictory constraints. However, there are different views of what constitutes a contradiction for design constraints. Moreover, state-of-the-art analysis approaches for proving constrained models consistent either scale poorly or require the use of interactive theorem proving. In this paper, we present a heuristic approach for efficiently analyzing constraint specifications built from constraint patterns. This analysis is based on precise notions of consistency for constrained class models and exploits the semantic properties of constraint patterns, thereby enabling syntax-based consistency checking in polynomial-time. We introduce a consistency checker implementing these ideas and we report on case studies in applying our approach to analyze industrial-scale models. These studies show that pattern-based constraint development supports the creation of concise specifications and provides immediate feedback on model consistency.

26 citations


Journal ArticleDOI
TL;DR: An extension of the overtaking criterion, called the fixed-step overtaking social welfare relation (SWR), and its leximin counterpart, called the fixed-step W-leximin SWR, are presented for the evaluation of infinite utility streams; both satisfy Fixed-step Anonymity.
Abstract: This paper studies the extensions of the infinite-horizon variants of the leximin principle and utilitarianism on the set of infinite utility streams. We especially examine those extensions which satisfy the axiom of Preference-continuity (or Consistency) and the extended anonymity axiom called Q-Anonymity. We formulate new extended leximin and utilitarian social welfare relations (SWRs), called Q-W-leximin SWR and Q-overtaking criterion respectively, and show that Weak Preference-continuity (or Weak Consistency) and Q-Anonymity together with Strong Pareto and Hammond Equity (resp. Partial Unit Comparability) characterize all SWRs that include the Q-W-leximin SWR (resp. the Q-overtaking criterion) as a subrelation. We also show that there exists no SWR satisfying Strong Pareto, Strong Preference-continuity (or Strong Consistency) and Q-Anonymity.

25 citations


Proceedings ArticleDOI
03 Oct 2010
TL;DR: A framework for specifying overlaps between partial models and defining their global consistency is presented; an advantage of the framework is that heterogeneous consistency checking is reduced to the homogeneous case, yet merging partial metamodels into one global metamodel is not needed.
Abstract: Software development often involves a set of models defined in different metamodels, each model capturing a specific view of the system. We call this set a multimodel, and its elements partial or local models. Since partial models overlap, they may be consistent or inconsistent w.r.t. a set of global constraints. We present a framework for specifying overlaps between partial models and defining their global consistency. An advantage of the framework is that heterogeneous consistency checking is reduced to the homogeneous case yet merging partial metamodels into one global metamodel is not needed. We illustrate the framework with examples and sketch its formal semantics based on category theory.

24 citations


Book ChapterDOI
01 Jan 2010
TL;DR: In this article, the authors describe a few of the different consistency models that have been proposed, sketch a framework for thinking about consistency models, and propose some axes of variation among them.
Abstract: There are many different replica control techniques, used in different research communities. To understand when one replica management algorithm can be replaced by another, we need to describe more abstractly the consistency model, which captures the set of properties that an algorithm provides, and on which the clients rely (whether the clients are people or other programs). In this chapter we describe a few of the different consistency models that have been proposed, and we sketch a framework for thinking about consistency models. In particular, we show that there are several styles in which consistency models can be expressed, and we also propose some axes of variation among the consistency models.
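
As a toy illustration of why the choice of consistency model matters to clients (not an example from the chapter), the sketch below shows a register replicated on two nodes with asynchronous propagation, so a read may return a stale value that a stronger model would rule out.

```python
# Toy illustration (not from the chapter): a register replicated on two nodes
# with asynchronous propagation, so reads may return stale values.

class EventualRegister:
    def __init__(self):
        self.replicas = {"A": None, "B": None}
        self.pending = []                 # updates not yet propagated

    def write(self, node, value):
        self.replicas[node] = value       # apply locally, propagate later
        self.pending.append((node, value))

    def read(self, node):
        return self.replicas[node]        # may be stale under a weak model

    def sync(self):
        for _, value in self.pending:     # anti-entropy: push updates everywhere
            for n in self.replicas:
                self.replicas[n] = value
        self.pending.clear()


r = EventualRegister()
r.write("A", 42)
print(r.read("B"))   # None: a stale read, allowed by eventual consistency
r.sync()
print(r.read("B"))   # 42: replicas converge once updates propagate
```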

23 citations


Journal ArticleDOI
TL;DR: Weak consistency and asymptotic normality are shown for a stochastic EM algorithm for censored data from a mixture of distributions under lognormal assumptions; the algorithm is used for the estimation of wood fibre length distributions based on optically measured data from cylindric wood samples.

19 citations


Journal ArticleDOI
TL;DR: An iterated procedure for obtaining the nonparametric M-estimator and a cross-validation bandwidth selection method are discussed, and some numerical examples are provided to show that the proposed methods perform well in the finite sample case.

Journal ArticleDOI
TL;DR: This paper formally defines the four client-centric consistencies and their basis, i.e. eventual consistency, borrowing the framework from the theory of database concurrency control, and proves relations among these consistencies.

Journal ArticleDOI
TL;DR: In this article, the authors discuss the asymptotic limit behavior of quantile-quantile plots and mean excess plots and construct confidence bounds around the plots which enable them to statistically test whether the underlying distribution is heavy-tailed or not.
Abstract: Exploratory data analysis is often used to test the goodness-of-fit of sample observations to specific target distributions. A few such graphical tools have been extensively used to detect subexponential or heavy-tailed behavior in observed data. In this paper we discuss asymptotic limit behavior of two such plotting tools: the quantile-quantile plot and the mean excess plot. The weak consistency of these plots to fixed limit sets in an appropriate topology of $\mathbb{R}^2$ has been shown in Das and Resnick (Stoch. Models 24 (2008) 103-132) and Ghosh and Resnick (Stochastic Process. Appl. 120 (2010) 1492-1517). In this paper we find asymptotic distributional limits for these plots when the underlying distributions have regularly varying right-tails. As an application we construct confidence bounds around the plots which enable us to statistically test whether the underlying distribution is heavy-tailed or not.
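
A small empirical sketch of the mean excess plot discussed in the abstract (illustration only; the paper's contribution is the asymptotic distributional theory, not the plot itself): for a grid of thresholds u, compute the mean exceedance of the sample over u. A roughly linear, increasing plot is the usual heavy-tail signal.

```python
# Sketch of the mean excess plot: for thresholds u, the mean of X - u given X > u.
import numpy as np

def mean_excess(sample, num_thresholds=50):
    x = np.sort(np.asarray(sample))
    thresholds = x[:-1][np.linspace(0, len(x) - 2, num_thresholds, dtype=int)]
    points = []
    for u in thresholds:
        exceed = x[x > u] - u
        if exceed.size:
            points.append((u, exceed.mean()))
    return points


rng = np.random.default_rng(0)
pareto = rng.pareto(2.0, size=5000) + 1.0       # heavy-tailed sample
for u, me in mean_excess(pareto)[::10]:         # roughly linear growth in u
    print(f"u = {u:8.3f}   mean excess = {me:8.3f}")
```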

Journal ArticleDOI
TL;DR: This paper shows that the dual-quorum protocol can approach the optimal performance and availability of Read-One/Write-All-Asynchronously (ROWA-A) epidemic algorithms without suffering the weak consistency guarantees and resulting design complexity inherent in ROWA-A systems.
Abstract: This paper introduces dual-quorum replication, a novel data replication algorithm designed to support Internet edge services. Edge services allow clients to access Internet services via distributed edge servers that operate on a shared collection of underlying data. Although it is generally difficult to share data while providing high availability, good performance, and strong consistency, replication algorithms designed for specific access patterns can offer nearly ideal trade-offs among these metrics. In this paper, we focus on the key problem of sharing read/write data objects across a collection of edge servers when the references to each object (1) tend not to exhibit high concurrency across multiple nodes and (2) tend to exhibit bursts of read-dominated or write-dominated behavior. Dual-quorum replication combines volume leases and quorum-based techniques to achieve excellent availability, response time, and consistency for such workloads. In particular, through both analytical and experimental evaluations, we show that the dual-quorum protocol can (for the workloads of interest) approach the optimal performance and availability of Read-One/Write-All-Asynchronously (ROWA-A) epidemic algorithms without suffering the weak consistency guarantees and resulting design complexity inherent in ROWA-A systems.
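
For background (this is not the paper's dual-quorum protocol), the sketch below shows plain majority-quorum replication, the building block that quorum-based schemes refine: any read quorum intersects any write quorum, so a read observes the latest completed write.

```python
# Background sketch: plain quorum replication with majority read/write quorums.

class QuorumStore:
    def __init__(self, n):
        self.replicas = [(0, None)] * n          # (version, value) per replica
        self.write_quorum = n // 2 + 1           # any majority
        self.read_quorum = n // 2 + 1

    def write(self, value):
        version = max(v for v, _ in self.replicas) + 1
        for i in range(self.write_quorum):       # assume these acks arrive
            self.replicas[i] = (version, value)

    def read(self):
        acks = self.replicas[-self.read_quorum:] # any read quorum works
        _, value = max(acks, key=lambda vv: vv[0])
        return value                             # highest-version value wins


store = QuorumStore(5)
store.write("x = 1")
print(store.read())   # "x = 1": the read and write quorums overlap in one replica
```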

Proceedings Article
09 May 2010
TL;DR: A new class of local consistencies, called ◇f-consistencies, is introduced for qualitative constraint networks; each is based on weak composition and a mapping f that provides a covering for each relation of the considered qualitative calculus, and a generic algorithm is proposed that computes the closure of qualitative constraint networks under any "well-behaved" consistency of the class.
Abstract: In this paper, we introduce a new class of local consistencies, called ◇f-consistencies, for qualitative constraint networks. Each consistency of this class is based on weak composition (◇) and a mapping f that provides a covering for each relation of the considered qualitative calculus. We study the connections existing between some properties of the introduced mappings and the relative inference strength of ◇f-consistencies. The consistency obtained by the usual closure under weak composition corresponds to the weakest element of the class, whereas ◇f-consistencies stronger than weak composition open new promising perspectives. Interestingly, the class of ◇f-consistencies is shown to form a complete lattice where the partial order denotes the relative strength of every two consistencies. We also propose a generic algorithm that allows us to compute the closure of qualitative constraint networks under any "well-behaved" consistency of the class. The experimentation that we have conducted on qualitative constraint networks from the Interval Algebra shows the interest of these new local consistencies, in particular for the consistency problem.
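
A sketch of the weakest consistency in this class, closure under weak composition, shown on the three-element point algebra rather than the Interval Algebra used in the paper's experiments; the composition table is standard, but the code is illustrative only.

```python
# Closure under weak composition: R_ij <- R_ij  intersect  (R_ik o R_kj),
# iterated to a fixpoint, shown on the point algebra {<, =, >}.

COMP = {                      # weak composition table for the point algebra
    ("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): {"<", "=", ">"},
    ("=", "<"): {"<"}, ("=", "="): {"="}, ("=", ">"): {">"},
    (">", "<"): {"<", "=", ">"}, (">", "="): {">"}, (">", ">"): {">"},
}

def compose(r1, r2):
    out = set()
    for b1 in r1:
        for b2 in r2:
            out |= COMP[(b1, b2)]
    return out

def closure(network, variables):
    changed = True
    while changed:
        changed = False
        for i in variables:
            for j in variables:
                for k in variables:
                    if len({i, j, k}) < 3:
                        continue
                    refined = network[(i, j)] & compose(network[(i, k)], network[(k, j)])
                    if refined != network[(i, j)]:
                        network[(i, j)] = refined
                        changed = True
    return network


V = ["x", "y", "z"]
ALL = {"<", "=", ">"}
net = {(a, b): set(ALL) for a in V for b in V if a != b}
net[("x", "y")], net[("y", "x")] = {"<"}, {">"}
net[("y", "z")], net[("z", "y")] = {"<"}, {">"}
print(closure(net, V)[("x", "z")])   # {'<'}: x < y and y < z entail x < z
```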

Journal ArticleDOI
TL;DR: This paper investigates the scalability of the relaxed consistency models (weak, release consistency) implemented by using transaction counters and compares the average and maximum code, synchronization and data latencies of the two consistency models for various network sizes with regular mesh topologies.
Abstract: This paper studies the realization of relaxed memory consistency models in network-on-chip based distributed shared memory (DSM) multi-core systems. Within DSM systems, memory consistency is a critical issue since it affects not only the performance but also the correctness of programs. We investigate the scalability of the relaxed consistency models (weak, release consistency) implemented by using transaction counters. Our experimental results compare the average and maximum code, synchronization and data latencies of the two consistency models for various network sizes with regular mesh topologies. The observed latencies rise for both consistency models as the network size grows. However, the scaling behaviors are different. With the release consistency model these latencies grow significantly slower than with weak consistency due to better optimization potential by means of overlapping, reordering and program order relaxations. The release consistency improves the performance by 15.6% and 26.5% on average in the code and consistency latencies over the weak consistency model for the specific application, as the system grows from a single core to 64 cores. The latency of data transactions grows 2.2 times faster on average with a weak consistency model than with a release consistency model when the system scales from a single core to 64 cores.
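
A minimal software sketch of the transaction-counter mechanism mentioned here, assuming the intended behavior is that a synchronization point may only be passed once all previously issued transactions have completed; the paper realizes this in NoC hardware, so the threading code below is only an analogy.

```python
# Minimal sketch of a transaction counter used to realize weak consistency:
# count outstanding memory transactions and block at a sync point until zero.
import threading, time

class TransactionCounter:
    def __init__(self):
        self.outstanding = 0
        self.cond = threading.Condition()

    def issue(self):
        with self.cond:
            self.outstanding += 1

    def complete(self):
        with self.cond:
            self.outstanding -= 1
            self.cond.notify_all()

    def fence(self):
        """Block (e.g. at a synchronization point) until all prior transactions finish."""
        with self.cond:
            self.cond.wait_for(lambda: self.outstanding == 0)


tc = TransactionCounter()

def memory_transaction(delay):
    time.sleep(delay)        # models interconnect + memory latency
    tc.complete()

for d in (0.01, 0.02, 0.03):
    tc.issue()               # count the transaction when it is issued
    threading.Thread(target=memory_transaction, args=(d,)).start()

tc.fence()                   # weak consistency: ordering enforced only here
print("all outstanding transactions completed before the sync point")
```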

Proceedings ArticleDOI
03 Aug 2010
TL;DR: This paper investigates the scalability of the weak consistency model, which may be implemented using a transaction counter, and compares synchronization latencies for various network sizes, topologies and lock positions in the network.
Abstract: In multicore Network-on-Chip systems, it is preferable to realize distributed but shared memory (DSM) in order to reuse the huge amount of legacy code and to allow easy programming. Within DSM systems, memory consistency is a critical issue since it affects not only performance but also the correctness of programs. In this paper, we investigate the scalability of the weak consistency model, which may be implemented using a transaction counter. The experimental results compare synchronization latencies for various network sizes, topologies and lock positions in the network. Average synchronization latency rises exponentially for mesh and torus topologies as the network size grows. However, torus improves the synchronization latency in comparison to mesh. For the mesh topology, the average synchronization latency is also slightly affected by the lock position with respect to the network center.

Book ChapterDOI
31 Aug 2010
TL;DR: This paper introduces a shared object, namely a set object that allows processes to add and remove values as well as take a snapshot of its content, and a new consistency condition suited to such an object, named value-based sequential consistency.
Abstract: This paper introduces a shared object, namely a set object that allows processes to add and remove values as well as take a snapshot of its content. A new consistency condition suited to such an object is introduced. This condition, named value-based sequential consistency, is weaker than linearizability. The paper also addresses the construction of a set object in a synchronous anonymous distributed system where participants can continuously join and leave the system. Interestingly, the protocol is proved correct under the assumption that some constraint on the churn is satisfied. This shows that the notion of "provably correct software" can be applied to dynamic systems.
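
An interface-level sketch of the set object described in the abstract (add, remove, and a snapshot of the content), as a single-process stand-in; the paper's actual contribution, the distributed construction under churn and the value-based sequential consistency condition, is not captured here.

```python
# Single-process stand-in for the shared set object's interface: add, remove,
# and a snapshot (get) of the whole content.

class SetObject:
    def __init__(self):
        self._values = set()

    def add(self, v):
        self._values.add(v)

    def remove(self, v):
        self._values.discard(v)

    def get(self):
        """Snapshot: an immutable copy of the current content."""
        return frozenset(self._values)


s = SetObject()
s.add(1); s.add(2); s.remove(1)
print(s.get())   # frozenset({2})
```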

Proceedings ArticleDOI
08 Dec 2010
TL;DR: This paper presents the middleware-based McRep replication protocol that supports multiple consistency models in a distributed system with replicated data and demonstrates that in McRep workloads only pay for the consistency guarantees they actually need.
Abstract: Replication is a technique widely used in parallel and distributed systems to provide qualities such as performance, scalability, reliability and availability to their clients. These qualities comprise the non-functional requirements of the system. But the functional requirement of consistency may also be affected as a side effect of replication. Different replica control protocols provide different levels of consistency. In this paper we present the middleware-based McRep replication protocol, which supports multiple consistency models in a distributed system with replicated data. Both the correctness criteria and the divergence aspects of a consistency model can be specified in the McRep configuration. Supported correctness criteria include linearizability, sequential consistency, serializability, snapshot isolation and causal consistency. Bounds on divergence can be specified in either a version metric or a delay metric. Our approach allows the same middleware to be used for applications requiring different consistency guarantees, eliminating the need for mastering a new replication middleware or framework for every application. We carried out experiments to compare the performance of various consistency requirements in terms of response time, concurrency conflicts and bandwidth overhead. We demonstrate that in McRep workloads only pay for the consistency guarantees they actually need.
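
A hypothetical illustration (not McRep's actual API or configuration format) of what a divergence bound in a version metric could look like: a replica serves reads only while it lags the primary by at most a configured number of versions.

```python
# Hypothetical version-metric divergence bound; names and semantics are
# assumptions for illustration, not taken from the McRep paper.

def read_allowed(replica_version, primary_version, max_version_lag):
    """Allow a read only if the replica is at most max_version_lag versions behind."""
    return primary_version - replica_version <= max_version_lag


print(read_allowed(replica_version=97, primary_version=100, max_version_lag=5))   # True
print(read_allowed(replica_version=90, primary_version=100, max_version_lag=5))   # False
```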

Proceedings ArticleDOI
Marco Serafini1, Flavio Junqueira1
28 Jul 2010
TL;DR: This paper shows how Eventual Linearizability can be used to support master-worker schemes such as workload sharding in Web applications and shows that it can significantly reduce the time to completion of the workload in some settings.
Abstract: It is well-known that using a replicated service requires a tradeoff between availability and consistency. Eventual Linearizability represents a midpoint in the design spectrum, since it can be implemented ensuring availability in worst-case periods and providing strong consistency in the regular case. In this paper we show how Eventual Linearizability can be used to support master-worker schemes such as workload sharding in Web applications. We focus on a scenario where sharding maps the workload to a pool of servers spread over an unreliable wide-area network. In this context, Linearizability is desirable, but it may prevent achieving sufficient availability and performance. We show that Eventual Linearizability can significantly reduce the time to completion of the workload in some settings.

Journal ArticleDOI
TL;DR: This work investigates the problem of bandwidth selection by cross-validation from a sequential point of view in a nonparametric regression model, studying sequential kernel smoothers in order to base estimation, prediction and change detection on a single statistic.
Abstract: We consider the problem of bandwidth selection by cross-validation from a sequential point of view in a nonparametric regression model. Having in mind that in applications one often aims at estimation, prediction and change detection simultaneously, we investigate that approach for sequential kernel smoothers in order to base these tasks on a single statistic. We provide uniform weak laws of large numbers and weak consistency results for the cross-validated bandwidth. Extensions to weakly dependent error terms are discussed as well. The errors may be {\alpha}-mixing or L2-near epoch dependent, which guarantees that the uniform convergence of the cross validation sum and the consistency of the cross-validated bandwidth hold true for a large class of time series. The method is illustrated by analyzing photovoltaic data.
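
A non-sequential sketch of the cross-validated bandwidth the abstract studies, assuming a Nadaraya-Watson smoother with a Gaussian kernel and ordinary leave-one-out cross-validation; the sequential and dependent-error aspects analyzed in the paper are omitted.

```python
# Leave-one-out cross-validation for the bandwidth of a Nadaraya-Watson smoother.
import numpy as np

def nw_estimate(x0, x, y, h):
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # Gaussian kernel weights
    return np.sum(w * y) / np.sum(w)

def cv_score(h, x, y):
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i               # leave observation i out
        errs.append((y[i] - nw_estimate(x[i], x[mask], y[mask], h)) ** 2)
    return np.mean(errs)


rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)
grid = np.linspace(0.01, 0.3, 30)
h_cv = grid[np.argmin([cv_score(h, x, y) for h in grid])]
print(f"cross-validated bandwidth: {h_cv:.3f}")
```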

Proceedings ArticleDOI
27 Oct 2010
TL;DR: It is shown that w-SC is a significantly more powerful level of filtering and more effective w.r.t. the runtime than SAC, and is a complementary approach to AC or SAC.
Abstract: In this paper, we introduce a new partial consistency for constraint networks which is called Structural Consistency of level w and is denoted w-SC consistency. This consistency is based on a new approach. While conventional consistencies generally rely on local properties extended to the entire network, this new partial consistency considers global consistency on subproblems. These subproblems are defined by partial constraint graphs whose tree-width is bounded by a constant w. We introduce a filtering algorithm which achieves w-SC consistency. We also analyze w-SC filtering w.r.t. other classical local consistencies to show that this consistency is generally incomparable although this consistency can be regarded as a special case of inverse consistency. Finally, we present experimental results to assess the usefulness of this approach. We show that w-SC is a significantly more powerful level of filtering and more effective w.r.t. the runtime than SAC. We also show that w-SC is a complementary approach to AC or SAC. So we can offer a combination of filterings, whose power is greater than w-SC or SAC.

Journal ArticleDOI
TL;DR: The Consistency of Content-Externalism and Justification-Internalism is discussed in this paper, where it is shown that justification internalism is consistent with content externalism.
Abstract: (2002). The Consistency of Content-Externalism and Justification-Internalism. Australasian Journal of Philosophy: Vol. 80, No. 4, pp. 512-515.

Journal ArticleDOI
TL;DR: The results prove that the invalidate protocols of both consistency models are able to adapt themselves to the workload, and show that the newly developed delayed weak consistency is faster than the special weak consistency.

Proceedings Article
31 Mar 2010
TL;DR: This article proposes Dirichlet process mixtures of generalized linear models (DP-GLM) for nonparametric regression, which allow both continuous and categorical inputs and can model the same class of responses that can be modeled with a generalized linear model.
Abstract: We propose Dirichlet Process mixtures of Generalized Linear Models (DP-GLM), a new class of methods for nonparametric regression. Given a data set of input-response pairs, the DP-GLM produces a global model of the joint distribution through a mixture of local generalized linear models. DP-GLMs allow both continuous and categorical inputs, and can model the same class of responses that can be modeled with a generalized linear model. We study the properties of the DP-GLM, and show why it provides better predictions and density estimates than existing Dirichlet process mixture regression models. We give conditions for weak consistency of the joint distribution and pointwise consistency of the regression estimate.
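
A schematic statement of the DP-GLM generative model as described in the abstract, with simplified notation ($f_x$, $G_0$ and $\eta$ are generic placeholders rather than the paper's exact symbols):

```latex
% Schematic DP-GLM: a Dirichlet process prior over component parameters, each
% component pairing a local covariate model with a GLM for the response.
P \sim \mathrm{DP}(\alpha, G_0), \qquad
\theta_i = (\theta_i^{x}, \beta_i) \sim P, \qquad
x_i \mid \theta_i^{x} \sim f_x(\cdot \mid \theta_i^{x}), \qquad
y_i \mid x_i, \beta_i \sim \mathrm{GLM}\bigl(y \mid \eta = \beta_i^{\top} x_i\bigr).
```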

Proceedings ArticleDOI
16 May 2010
TL;DR: This paper proposes a freshness-aware algorithm for managing the metadata needed to route queries, allowing stale data to be read within some limits, and proposes two SWN models for structuring metadata: one with strong consistency and another with weak consistency.
Abstract: Recent systems, which are in general composed of several resources and distributed over a large-scale network, need high-level models to be studied. In complex systems such as peer-to-peer systems, efficient and fast query routing is necessary for managing the applications' workload. Many works have been proposed to deal with query routing; however, none of these studies rely on Stochastic Well-formed Petri Nets (SWN) for modelling the proposed approach. In this paper, we propose an algorithm for managing the metadata needed to route queries. Moreover, we use SWN models to evaluate and validate our approach. Our solution is freshness-aware and thus benefits from the fact that stale data can be read within some limits. These limits are widely taken into account for managing metadata coherently. Our study takes into account concurrency, synchronization, parallelism, the identity of the resources and their cooperation. We propose two SWN models for structuring metadata: one with strong consistency and another with weak consistency. Simulations are used to validate our approach, and the results obtained demonstrate the feasibility of our solution.

01 Jan 2010
TL;DR: In this paper, a new condition to replace the Kullback-Leibler condition is presented, which is useful in cases such as the estimation of decreasing densities.
Abstract: In this paper we discuss consistency of the posterior distribution in cases where the Kullback-Leibler condition is not verified. This condition is stated as: for all $\epsilon > 0$, the prior probability of sets of the form $\{f : \mathrm{KL}(f_0, f) \leq \epsilon\}$, where $\mathrm{KL}(f_0, f)$ denotes the Kullback-Leibler divergence between the true density $f_0$ of the observations and the density $f$, is positive. This condition is in almost all cases required to lead to weak consistency of the posterior distribution, and thus also to strong consistency. However, it is not a necessary condition. We therefore present a new condition to replace the Kullback-Leibler condition, which is useful in cases such as the estimation of decreasing densities. We then study some specific families of priors adapted to the estimation of decreasing densities and provide posterior concentration rates for these priors, which match the convergence rate of the maximum likelihood estimator. Some simulation results are provided. Keywords: Nonparametric Bayesian inference, Consistency, entropy, Kullback-Leibler, k-monotone density, kernel mixture.
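
For reference, the Kullback-Leibler prior-positivity condition that the paper proposes to replace can be written as follows (standard formulation, restated from the abstract's description):

```latex
% The Kullback-Leibler condition: for every \epsilon > 0 the prior \Pi puts
% positive mass on a KL ball around the true density f_0.
\Pi\Bigl(\bigl\{ f :\; \mathrm{KL}(f_0, f) \le \epsilon \bigr\}\Bigr) > 0
\quad \text{for all } \epsilon > 0,
\qquad
\mathrm{KL}(f_0, f) \;=\; \int f_0(x)\,\log\frac{f_0(x)}{f(x)}\,dx .
```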

Dissertation
20 Sep 2010
TL;DR: This thesis proposes a novel algorithm, called Scrooge, which reduces the replication costs of fast BFT replication in presence of unresponsive replicas, and shows the existence of an inherent tradeoff between optimal redundancy and minimal latency in presence of faulty replicas.
Abstract: Online Web-scale services are being increasingly used to handle critical personal information. The trend towards storing and managing such information on the “cloud” is extending the need for dependable services to a growing range of Web applications, from emailing, to calendars, storage of photos, or finance. This motivates the increased adoption of fault-tolerant replication algorithms in Web-scale systems, ranging from classic, strongly-consistent replication in systems such as Chubby [Bur06] and ZooKeeper [HKJR10], to highly-available weakly-consistent replication as in Amazon’s Dynamo [DHJ+07] or Yahoo!’s PNUTS [CRS+08]. This thesis proposes novel algorithms to make fault-tolerant replication more efficient, available and cost effective. Although the proposed algorithms are generic, their goals are motivated by fulfilling two major needs of Web-scale systems. The first need is tolerating worst-case failures, which are also called Byzantine in the literature after the definition of [LSP82a], in order to reliably handle critical personal information. The second need is investigating proper weak consistency semantics for systems that must maximize availability and minimize performance costs and replication costs without relaxing consistency unnecessarily. Byzantine-Fault Tolerance: There has been a recent burst of research on Byzantine-Fault Tolerance (BFT) to make it have performance and replication costs that are feasible and comparable to the fault-tolerance techniques already in use today. BFT is typically achieved through state-machine replication, which implements the abstraction of a single reliable server on top of multiple unreliable replicas [Sch90]. This line of research ultimately aimed at showing the feasibility of this approach for Web-scale systems [CKL+09] to protect these critical systems from catastrophic events such as [Das]. This thesis proposes novel algorithms to reduce the performance and replication costs of BFT. First, the thesis shows how to reduce the cost of BFT without assuming trusted components. After the seminal PBFT algorithm [CL99], a number of fast BFT algorithms, as for example [MA06; DGV04; KAD+07], have been proposed. These papers show the existence of an inherent tradeoff between optimal redundancy and minimal latency in presence of faulty replicas. This is problematic in Web-scale systems, where Byzantine faults are very rare but where unresponsive (benign) replicas are commonplace. This thesis proposes a novel algorithm, called Scrooge, which reduces the replication costs of fast BFT replication in presence of unresponsive replicas. Scrooge shows that the additional replication costs needed for being fast in presence of faulty replicas are only dependent on the number of tolerated Byzantine faults, and not on the number of tolerated crashes. As an implication of this result, Scrooge is optimally resilient when it is configured to tolerate one Byzantine fault and any number of crashes. Such a configuration is quite common since Byzantine faults are relatively unlikely to happen. This thesis then explores the advantages of using trusted components. It shows that these can lead to significant latency and redundancy costs in practical asynchronous systems [SS07]. This dispelled the belief that trusted components need to be combined with synchronous links to achieve cost reductions, as hinted by previous work [CNV04; Ver06] . 
This additional assumption makes previously proposed algorithms impractical in many settings, including Web-scale systems. In three-tiered Web-scale systems, for example, one could just leverage the fact that servers in the first tier (the Web-servers) are typically more stable, standardized and less prone to vulnerabilities than application servers. The HeterTrust protocol, which is presented in this thesis, uses trusted components without assuming synchronous links. It protects data confidentiality using a number of replicas that is linear in the number of tolerated faults and has a constant time complexity. This is a significant improvement over existing approaches which do not rely on trusted components but entail quadratic redundancy costs and linear latency [YMV+03]. Furthermore, different from existing work on confidential BFT, HeterTrust uses only symmetric-key cryptography instead of public-key signatures. HeterTrust features some interesting ideas related to speculation [KAD+07] and tolerance to denial-of-service attacks [ACKL08; CWA+09] that have been further developed by work published immediately after [SS07]. In parallel to this thesis' work, the use of trusted components in asynchronous systems was also independently explored in [CMSK07]. Weak consistency: Some replicated Web-scale applications cannot afford strong consistency guarantees such as Linearizability [HW90]. The reason is the impossibility of implementing shared objects, as for example databases, that are available in presence of partitions or asynchrony [GL02]. With few exceptions, however, all these systems relax Linearizability even in periods when there are no partitions nor asynchrony and no relaxation is needed to keep the system available. Since this relaxation is problematic for many applications, recent research is focusing on stronger consistency guarantees which can be combined with high availability. This thesis introduces a novel consistency property, called Eventual Linearizability, which allows Linearizability to be violated only for finite windows of time. This thesis also describes Aurora, an algorithm ensuring Linearizability in periods when a single leader is present in the system. Aurora is gracefully degrading because it uses a single failure detector and obtains different properties based on the actual strength of this failure detector, which is not known a priori. For Eventual Linearizability, a ⋄S failure detector is needed. In periods of asynchrony when links are untimely and no single leader is present, Aurora gracefully degrades to Eventual Consistency [FGL+96; Vog09] and Causal Consistency [Lam78]. For these properties, Aurora only relies on a strongly complete failure detector C. In order to complete strong operations, which must always be linearized, a ⋄P failure detector is used. This is stronger than ⋄S, the weakest failure detector needed to implement consensus [CHT96], and thus linearizable shared objects. This thesis shows that there exists an inherent cost in combining Eventual Linearizability with Linearizability.

Proceedings ArticleDOI
27 Sep 2010
TL;DR: Key ideas include a cost model defined in terms of analyzing work and span, the use of divide-and-conquer and contraction, the need for arrays (immutable) to achieve asymptotic efficiency, and the power of (deterministic) randomized algorithms.
Abstract: Functional programming presents several important advantages in the design, analysis and implementation of parallel algorithms: It discourages iteration and encourages decomposition. It supports persistence and hence easy speculation. It encourages higher-order aggregate operations. It is well suited for defining cost models tied to the programming language rather than the machine. Implementations can avoid false sharing. Implementations can use cheaper weak consistency models. And most importantly, it supports safe deterministic parallelism. In fact functional programming supports a level of abstraction in which parallel algorithms are often as easy to design and analyze as sequential algorithms. The recent widespread advent of parallel machines therefore presents a great opportunity for functional programming languages. However, any changes will require significant education at all levels and involvement of the functional programming community. In this talk I will discuss an approach to designing and analyzing parallel algorithms in a strict functional and fully deterministic setting. Key ideas include a cost model defined in terms of analyzing work and span, the use of divide-and-conquer and contraction, the need for arrays (immutable) to achieve asymptotic efficiency, and the power of (deterministic) randomized algorithms. These are all ideas I believe can be taught at any level.
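
A toy divide-and-conquer reduction annotated with the work/span cost model mentioned in the talk; Python is used here as a stand-in for a strict functional language, and the parallelism is indicated in comments only.

```python
# Toy divide-and-conquer reduction with work/span annotations.
# Work W(n) = O(n): total operations performed.
# Span S(n) = O(log n): the two recursive calls are independent, so the
# critical path of a parallel execution is the recursion depth.
# Brent-style bound: time on p processors is roughly W/p + S.

def reduce_sum(xs):
    if len(xs) <= 1:
        return xs[0] if xs else 0
    mid = len(xs) // 2
    left = reduce_sum(xs[:mid])      # these two calls share no state and
    right = reduce_sum(xs[mid:])     # could safely run in parallel
    return left + right


print(reduce_sum(list(range(1000))))   # 499500
```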

Journal ArticleDOI
TL;DR: In this paper, the relation between truth and consistency is investigated by employing the concepts of ω-consistency and ω-inconsistency, and the results are illustrated by an interpretation of the well-known logical square and its generalization.
Abstract: This paper investigates relations between truth and consistency. The basic intuition is that truth implies consistency, but the reverse dependence fails. However, this simple account leads to some troubles, due to some metalogical results, in particular the Gödel-Malcev completeness theorem. Thus, a more advanced analysis is required. This is done by employing the concepts of ω-consistency and ω-inconsistency. Both concepts motivate that the concept of the standard truth should be introduced as well. The results are illustrated by an interpretation of the well-known logical square and its generalization.