
Showing papers on "Two-phase commit protocol published in 2015"


Proceedings ArticleDOI
27 May 2015
TL;DR: A lower bound on commit latency is derived and used to develop a commit protocol, called Helios, that achieves low commit latencies; in a real-world deployment on five datacenters, Helios achieves a commit latency close to the optimal.
Abstract: Cross datacenter replication is increasingly being deployed to bring data closer to the user and to overcome datacenter outages. The extent of the influence of wide-area communication on serializable transactions is not yet clear. In this work, we derive a lower-bound on commit latency. The sum of the commit latency of any two datacenters is at least the Round-Trip Time (RTT) between them. We use the insights and lessons learned while deriving the lower-bound to develop a commit protocol, called Helios, that achieves low commit latencies. Helios actively exchanges transaction logs (history) between datacenters. The received logs are used to decide whether a transaction can commit or not. The earliest point in the received logs that is needed to commit a transaction is decided by Helios to ensure a low commit latency. As we show in the paper, Helios is theoretically able to achieve the lower-bound commit latency. Also, in a real-world deployment on five datacenters, Helios has a commit latency that is close to the optimal.
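The stated bound — for any two datacenters, the sum of their commit latencies is at least the RTT between them — can be illustrated with a short sketch (the function names and RTT values here are hypothetical, not from the paper). One always-feasible assignment is to set each datacenter's commit latency to half of its largest RTT to any peer:

```python
def feasible_latencies(rtt):
    """Given a symmetric RTT matrix, return per-datacenter commit
    latencies satisfying L_i + L_j >= RTT(i, j) for all pairs,
    by choosing L_i = max_j RTT(i, j) / 2."""
    return [max(row) / 2 for row in rtt]

def satisfies_lower_bound(latencies, rtt):
    """Check the paper's pairwise lower bound on commit latencies."""
    n = len(rtt)
    return all(latencies[i] + latencies[j] >= rtt[i][j]
               for i in range(n) for j in range(n))

# Three datacenters with example RTTs in milliseconds.
rtt = [[0, 80, 120],
       [80, 0, 60],
       [120, 60, 0]]
lat = feasible_latencies(rtt)   # [60.0, 40.0, 60.0]
assert satisfies_lower_bound(lat, rtt)
```

Any assignment below the bound (e.g. 10 ms everywhere here) fails the pairwise check, which is what makes the bound a real constraint on protocol design.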

44 citations


Journal ArticleDOI
TL;DR: This paper develops a three-phase framework to enable agents to create a commitment protocol dynamically, and proposes two algorithms that ensure that each generated protocol allows the agent to reach its goals if the protocol is enacted.
Abstract: Agent interaction is a fundamental part of any multiagent system. Such interactions are usually regulated by protocols, which are typically defined at design-time. However, in many situations a protocol may not exist or the available protocols may not fit the needs of the agents. In order to deal with such situations agents should be able to generate protocols at runtime. In this paper we develop a three-phase framework to enable agents to create a commitment protocol dynamically. In the first phase one of the agents generates candidate commitment protocols, by considering its goals, its abilities and its knowledge about the other agents' services. We propose two algorithms that ensure that each generated protocol allows the agent to reach its goals if the protocol is enacted. The second phase is ranking of the generated protocols in terms of their expected utility in order to select the one that best suits the agent. The third phase is the negotiation of the protocol between agents that will enact the protocol so that the agents can agree on a protocol that will be used for enactment. We demonstrate the applicability of our approach using a case study.

29 citations


Proceedings ArticleDOI
04 May 2015
TL;DR: This work describes a framework for top-down centralized self-adaptive MASs where adaptive agents are "protocol-driven" and adaptation consists of a runtime protocol switch.
Abstract: We describe a framework for top-down centralized self-adaptive MASs where adaptive agents are "protocol-driven" and adaptation consists of a runtime protocol switch. Protocol specifications take a global, rather than a local, perspective, and each agent, before starting to follow a new (global) protocol, projects it to obtain a local version. If all the agents in the MAS are driven by the same global protocol, the compliance of the MAS execution to the protocol is obtained by construction.

28 citations


Patent
12 Mar 2015
TL;DR: In this paper, a distributed transaction commit protocol with low latency read and write transactions is proposed, which operates by first receiving a transaction, distributed across partial transactions to be processed at respective cohort nodes, from a client at a coordinator node.
Abstract: Disclosed herein are system, method, and computer program product embodiments for implementing a distributed transaction commit protocol with low-latency read and write transactions. An embodiment operates by first receiving a transaction, distributed across partial transactions to be processed at respective cohort nodes, from a client at a coordinator node. The coordinator node requests the cohort nodes to prepare to commit their respective partial transactions. Upon receiving the prepare-commit results, the coordinator node generates a global commit timestamp for the transaction. The coordinator node then simultaneously sends the global commit timestamp to the cohort nodes and commits the transaction to coordinator disk storage. Upon receiving both the send results from the cohort nodes and the commit result from the coordinator disk storage, the coordinator node provides a transaction commit result of the transaction to the client.
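The coordinator-side flow described in the abstract can be sketched roughly as follows (class and method names are illustrative assumptions, not from the patent; the "simultaneous" send and disk write are shown sequentially for clarity):

```python
class Cohort:
    """A node holding one partial transaction."""
    def __init__(self):
        self.committed = {}
    def prepare(self, txn):
        return True            # always able to prepare in this sketch
    def commit(self, txn, ts):
        self.committed[txn] = ts
    def abort(self, txn):
        pass

class Coordinator:
    def __init__(self, cohorts):
        self.cohorts = cohorts
        self.clock = 0
    def commit(self, txn):
        # Phase 1: ask every cohort to prepare its partial transaction.
        if not all(c.prepare(txn) for c in self.cohorts):
            for c in self.cohorts:
                c.abort(txn)
            return None
        # Generate a global commit timestamp for the whole transaction.
        self.clock += 1
        ts = self.clock
        # Phase 2: send the timestamp to cohorts and write the local
        # commit record; the patent overlaps these two steps.
        for c in self.cohorts:
            c.commit(txn, ts)
        self.log = ('commit', txn, ts)   # stands in for the disk write
        return ts                        # reported back to the client

cohorts = [Cohort(), Cohort()]
assert Coordinator(cohorts).commit('t1') == 1
```

The latency win claimed in the abstract comes from overlapping the cohort notification with the coordinator's own log write instead of serializing them.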

27 citations


Journal ArticleDOI
TL;DR: This study uses an ID-based authenticated key agreement protocol to improve the recent protocol proposed by Xue et al., which is not only insecure against masquerade and insider attacks but also vulnerable to an off-line password guessing attack.
Abstract: There is an increasing demand for anonymous authentication to secure communications between numerous different network members while preserving privacy for the members. In this study, we address this issue by using an ID-based authenticated key agreement protocol to improve the recent protocol proposed by Xue et al. They claimed that their protocol could resist masquerade and insider attacks. Unfortunately, we find that Xue et al.'s protocol is not only insecure against masquerade and insider attacks but also vulnerable to an off-line password guessing attack. Therefore, a slight modification to their protocol is proposed to remedy these shortcomings. Moreover, our protocol does not use timestamps, so time synchronization is not required. As a result, according to our performance and security analyses, we show that our proposed protocol enhances efficiency and improves security in comparison to previous protocols.

18 citations


Journal ArticleDOI
Youyou Lu1, Jiwu Shu1, Jia Guo1, Shuai Li1, Onur Mutlu2 
TL;DR: Experiments show that LightTx achieves nearly the lowest overhead in garbage collection, memory consumption and mapping persistence compared to existing embedded transaction designs and provides up to 20.6 percent performance improvement due to improved transaction concurrency.
Abstract: Flash memory has accelerated the architectural evolution of storage systems with its unique characteristics compared to magnetic disks. The no-overwrite property of flash memory naturally supports transactions, a commonly used mechanism in systems to provide consistency. However, existing embedded transaction designs in flash-based Solid State Drives (SSDs) either limit the transaction concurrency or introduce high overhead in tracking transaction states. This leads to low or unstable SSD performance. In this paper, we propose a transactional SSD (TxSSD) architecture, LightTx, to enable better concurrency and low overhead. First, LightTx improves transaction concurrency arbitrarily by using a page-independent commit protocol. Second, LightTx tracks the recent updates by leveraging the near-log-structured update property of SSDs and periodically retires dead transactions to reduce the transaction state tracking cost. Experiments show that LightTx achieves nearly the lowest overhead in garbage collection, memory consumption and mapping persistence compared to existing embedded transaction designs. LightTx also provides up to 20.6 percent performance improvement due to improved transaction concurrency.

18 citations


Patent
14 May 2015
TL;DR: Distributed commit, as discussed by the authors, is a distributed transaction management technique that ensures synchronization between participating nodes in a global or distributed transaction; it leverages a commit protocol that uses local clocks at the respective participating nodes.
Abstract: The subject disclosure relates to a distributed transaction management technique that ensures synchronization between participating nodes in a global or distributed transaction. The technique leverages a commit protocol that uses local clocks at the respective participating nodes. Participants in a global transaction are configured to utilize the same commit timestamp and logical read time and can advance their respective local clocks to establish this synchronization. In one embodiment, distributed commit utilizes a modified version of two-phase commit that includes an extra phase to collect commit timestamp votes from participants. Additionally, a heartbeat mechanism can be used to establish loose synchronization between nodes. In another embodiment, a node can respond to a remote transaction request by returning a list of nodes involved in generating the result of the transaction and the types of access used by such nodes in addition to the transaction result itself.
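The extra timestamp-vote phase described above can be sketched as follows (class and method names are assumptions, not from the patent): each participant votes with its local clock, the coordinator takes the maximum vote as the global commit timestamp, and every participant advances its local clock to that value.

```python
class Participant:
    def __init__(self, clock):
        self.clock = clock
    def prepare(self):
        return True
    def vote_timestamp(self):
        self.clock += 1                     # local event: propose a commit time
        return self.clock
    def commit(self, ts):
        self.clock = max(self.clock, ts)    # advance to the agreed timestamp
        self.commit_ts = ts

def distributed_commit(participants):
    """2PC with an extra phase collecting commit-timestamp votes."""
    if not all(p.prepare() for p in participants):
        return None
    ts = max(p.vote_timestamp() for p in participants)   # extra vote phase
    for p in participants:
        p.commit(ts)                                     # same commit timestamp
    return ts

ps = [Participant(5), Participant(9), Participant(2)]
assert distributed_commit(ps) == 10
```

Taking the maximum vote ensures the agreed commit timestamp is in every participant's future, which is what lets all of them use the same commit timestamp and logical read time.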

13 citations


Proceedings ArticleDOI
04 Nov 2015
TL;DR: A safety concept for a role is proposed, together with an FRWA with role safety (FRWA-RS) protocol in which the safety of a role increases when a transaction holding the role and illegally reading objects commits, and decreases when such a transaction aborts.
Abstract: In information systems, illegal information flow among objects has to be prevented. A transaction illegally reads an object if the object includes data from other objects which it is not allowed to read. In our previous studies, the flexible read-write-abortion protocols with role sensitivity (FRWA-R) and object sensitivity (FRWA-O) were discussed to prevent illegal information flow. There, a transaction is aborted with some probability once it illegally reads an object. The abortion probability depends on the sensitivity of the roles the transaction holds and of the objects it illegally reads. The role sensitivity and object sensitivity indicate how many transactions holding the role or illegally reading the object, respectively, are aborted after illegally reading the object. However, the sensitivity only monotonically increases each time a transaction is aborted. In this paper, we newly propose a safety concept for a role and an FRWA with role safety (FRWA-RS) protocol, where the safety of a role increases if a transaction holding the role and illegally reading objects commits, and decreases if it aborts. A transaction with safer roles is aborted with smaller probability. In the evaluation, we show that fewer transactions are aborted in the FRWA-RS protocol than in the RWA protocol, though more than in the WA protocol, and that transactions are performed more efficiently than in the WA protocol.
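The role-safety idea above can be modeled as a per-role counter that rises on commit after an illegal read and falls on abort, with the abortion probability shrinking as the role gets safer. This is an illustrative sketch only; the counter update and probability formula are assumptions, not the paper's actual parameters.

```python
def abort_probability(safety, k=1.0):
    """Safer roles are aborted with smaller probability.
    The 1/(1 + k*safety) shape is an assumed example, not the paper's."""
    return 1.0 / (1.0 + k * max(safety, 0))

class Role:
    def __init__(self):
        self.safety = 0
    def on_commit_after_illegal_read(self):
        self.safety += 1     # role proved harmless: safety increases
    def on_abort_after_illegal_read(self):
        self.safety -= 1     # role caused an abortion: safety decreases

r = Role()
p_before = abort_probability(r.safety)
r.on_commit_after_illegal_read()
p_after = abort_probability(r.safety)
assert p_after < p_before    # a safer role is aborted less often
```

Unlike the earlier sensitivity measures, which only grow, this counter moves in both directions, which is the paper's key change.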

6 citations


DissertationDOI
31 Mar 2015
TL;DR: This thesis is devoted to investigating advanced features in the analysis of cryptographic protocols tailored to the Maude-NPA tool, and defines several techniques which drastically reduce the state space and can often yield a finite state space, so that whether the desired security property holds or not can in fact be decided automatically, in spite of the general undecidability of such problems.
Abstract: The area of formal analysis of cryptographic protocols has been an active one since the mid-80s. The idea is to verify communication protocols that use encryption to guarantee secrecy and that use authentication of data to ensure security. Formal methods are used in protocol analysis to provide formal proofs of security, and to uncover bugs and security flaws that in some cases had remained unknown long after the original protocol publication, such as the case of the well-known Needham-Schroeder Public Key (NSPK) protocol. In this thesis we tackle problems regarding the three main pillars of protocol verification: modelling capabilities, verifiable properties, and efficiency. This thesis is devoted to investigating advanced features in the analysis of cryptographic protocols tailored to the Maude-NPA tool. This tool is a model-checker for cryptographic protocol analysis that allows the incorporation of different equational theories and operates in the unbounded session model without the use of data or control abstraction. An important contribution of this thesis concerns theoretical aspects of protocol verification in Maude-NPA. First, we define a forwards operational semantics, using rewriting logic as the theoretical framework and the Maude programming language as tool support. This is the first time that a forwards rewriting-based semantics is given for Maude-NPA. Second, we also study the problem that arises in cryptographic protocol analysis when it is necessary to guarantee that certain terms generated during a state exploration are in normal form with respect to the protocol equational theory. We also study techniques to extend Maude-NPA capabilities to support the verification of a wider class of protocols and security properties. First, we present a framework to specify and verify sequential protocol compositions in which one or more child protocols make use of information obtained from running a parent protocol.
Second, we present a theoretical framework to specify and verify protocol indistinguishability in Maude-NPA. Such properties aim to verify that an attacker cannot distinguish between two versions of a protocol: for example, one using one secret and one using another, as happens in electronic voting protocols. Finally, this thesis contributes to improving the efficiency of protocol verification in Maude-NPA. We define several techniques which drastically reduce the state space, and can often yield a finite state space, so that whether the desired security property holds or not can in fact be decided automatically, in spite of the general undecidability of such problems.

4 citations


Proceedings ArticleDOI
30 May 2015
TL;DR: A multi-node distributed metadata management scheme with a protocol based on two-phase commit to ensure distributed metadata consistency is proposed; experimental results show that the scheme can improve HDFS file system metadata management efficiency.
Abstract: Because it provides high-throughput data access and is easy to deploy on inexpensive machines, the Hadoop Distributed File System (HDFS) is widely used. All HDFS file-access requests must go through the metadata server node, so a single metadata server often becomes the bottleneck of the whole system. This paper designs a multi-node distributed metadata management solution with a directory-hash-based metadata storage strategy, and proposes a protocol based on two-phase commit to ensure distributed metadata consistency. Experimental results show that the scheme can improve HDFS file system metadata management efficiency.
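The directory-hash placement idea above can be sketched briefly (the helper name and hash choice are illustrative assumptions, not the paper's design): each directory's metadata is assigned to one of several metadata servers by hashing its path, so only operations that span directories on different servers need the proposed 2PC-based consistency protocol.

```python
import hashlib

def metadata_server_for(path, n_servers):
    """Map a file path to a metadata server by hashing its directory."""
    directory = path.rsplit('/', 1)[0] or '/'
    digest = hashlib.md5(directory.encode()).hexdigest()
    return int(digest, 16) % n_servers

# Files in the same directory land on the same metadata server...
a = metadata_server_for('/logs/app/a.log', 4)
b = metadata_server_for('/logs/app/b.log', 4)
assert a == b
# ...so only cross-directory operations (e.g. a rename across servers)
# would require the two-phase-commit protocol the paper proposes.
```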

3 citations


Journal ArticleDOI
TL;DR: There are many parameters to consider when designing a key exchange protocol for a mobile environment, and the significance of each parameter differs based on the security requirements of the application for which the protocol is being developed.
Abstract: Background/Objectives: Cryptographic protocols are used to secure information transmitted over insecure networks such as the Internet. This paper's objective is to study recently proposed key exchange protocols for the mobile environment. Methods/Statistical Analysis: In this paper we survey recently proposed key exchange protocols for the mobile environment. We analyze protocol execution in three phases: initialization, communication, and renewal/termination. In initialization, the protocol prepares for the key exchange process. Next, the protocol communicates with the other parties to exchange the secret key. Third, the protocol may terminate or renew the connection for further communication. We also study the activities performed by protocols that define their characteristics. Findings: We find that there are many parameters to consider when designing a key exchange protocol for a mobile environment. However, the significance of each parameter differs based on the security requirements of the application for which the protocol is being developed. The strength of a protocol lies in the encryption technique it uses; hence, stronger encryption techniques result in better protocol security. The speed of a protocol is another important parameter: the number of steps in a protocol's algorithm directly affects its speed. A protocol must be able to withstand various attacks, and it should have high reliability if it is to handle critical data. We found that modern key exchange protocols are often not properly analyzed and tested before being proposed. Instead of building on already proposed protocols, resolving their vulnerabilities, and strengthening them, researchers propose new protocols without properly testing them for vulnerabilities, which are later exploited by malicious users. Applications/Improvements: This research will help researchers and protocol designers. It will give them an idea of the design parameters to consider when designing a key exchange protocol and enable them to make better decisions.

Proceedings ArticleDOI
01 Jan 2015
TL;DR: A colored-Petri-net-based conformance checking method is proposed, within a satellite network control protocol signaling dynamic conformance verification framework that captures runtime protocol signaling from the protocol execution environment.
Abstract: For the existence of malice or abnormal nodes and the influence of unreliable satellite network environment, some violations may probably happen in network control protocol execution process, which lead to the inconformance between protocol specification and actual protocol execution states. It reflects the robustness problems in relevant protocol design issue. Meanwhile, it is difficult to assure that the interaction behaviors of protocol nodes conform to the expectation of protocol specification. To tackle the inconformance problem, this paper proposes a colored Petri nets based conformance checking method. A satellite network control protocol signaling dynamic conformance verification framework, which is used to capture runtime protocol signaling from protocol execution environment, is presented. Then, it puts forward a signaling dynamic conformance checking algorithm that centers on the protocol interaction behaviors. At last, performance and overhead evaluations are performed to demonstrate the usability and availability of this conformance checking method.

Journal ArticleDOI
TL;DR: The simulation result shows that the cryptographic moving-knife protocol is better than the Sgall-Woeginger protocol and can be executed approximately but asynchronously by a discrete protocol using a secure auction protocol.
Abstract: This paper proposes a cake-cutting protocol using cryptography when the cake is a heterogeneous good that is represented by an interval on a real line. Although the Dubins-Spanier moving-knife protocol with one knife achieves simple fairness and truthfulness, all players must execute the protocol synchronously. Thus, the protocol cannot be executed on asynchronous networks such as the Internet. We show that the moving-knife protocol can be executed approximately but asynchronously by a discrete protocol using a secure auction protocol. The number of cuts is n − 1, where n is the number of players, which is the minimum. Sgall and Woeginger proposed another asynchronous protocol that satisfies simple fairness, truthfulness, and the minimum number of cuts. These two protocols are compared from the viewpoint of social surplus. The simulation result shows that the cryptographic moving-knife protocol is better than the Sgall-Woeginger protocol.

01 Jan 2015
TL;DR: This paper compares three existing negotiation protocols and two protocols developed by the authors to determine which protocol is best suited to the application in terms of scalability, robustness against agent failure, communication overhead, and response time.
Abstract: A robust negotiation protocol is required for a multi-agent simulation involving two adversarial teams in a highly dynamic and hostile environment. In this environment agent failure is possible due to a number of circumstances such as running out of fuel or being destroyed by other agents. This paper compares three existing negotiation protocols: the Contract Net Protocol (Smith, 1980), the Distributed Contract Net Protocol (Cano and Carbo, 2006), the Extended Contract Net Protocol (Aknine et al., 2004), and two protocols developed by the authors (termed herein the 'Simple' and 'Hybrid' protocols). The objective of this paper is to determine which protocol is best suited to our application in terms of scalability, robustness against agent failure, communication overhead, and response time. To evaluate these negotiation protocols an experiment was conducted, involving three different test cases, which varied the availability of agents at different stages of the negotiation process. In these test cases a team of software agents (the 'blue team') were tasked with destroying a number of stationary targets (the 'red team'). The experimental results showed that the Contract Net Protocol (CNP) was suitable for low-risk environments due to its low communication overhead, while the Distributed Contract Net Protocol (DCNP) was more suitable for high-risk environments due to its greater robustness against agent failure. However, this robustness was achieved at the expense of greatly increased communication. An alternate approach that showed promising results was to use a Hybrid protocol that switched between CNP and DCNP depending on the environment. Additional work is required to develop the Hybrid protocol further.

11 Aug 2015
TL;DR: This work introduces a Two-Phase Validation Commit protocol that ensures safe transactions by checking policy as well as data consistency throughout transaction execution.
Abstract: In recent times, much work has been done on providing some level of assurance between data and policies. Trusted transactions do not introduce credential or policy inconsistencies over the transaction's duration; in this work we formalize the notion of trusted transactions. A safe transaction is both trustworthy and database-correct, and we present safe transactions that identify transactions that are both trustworthy and obey the atomicity, consistency, isolation, and durability properties of distributed database systems. We put forward a novel algorithm, two-phase validation, that operates in two phases: collection and validation. We introduce a Two-Phase Validation Commit protocol that ensures safe transactions by checking policy as well as data consistency throughout transaction execution. The Two-Phase Validation Commit protocol is a modified version of the basic two-phase commit protocol.
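The two phases named above can be sketched as follows (the structure and names are assumptions for illustration, not the authors' algorithm): phase one collects the policy version each participant used, and phase two validates that all participants saw a consistent policy before the transaction may commit.

```python
class Participant:
    def __init__(self, policy_version):
        self.policy_version = policy_version
    def collect(self):
        # Phase 1 (collection): report the policy version used locally.
        return self.policy_version

def two_phase_validation(participants):
    """Commit only if every participant evaluated the same policy version."""
    versions = {p.collect() for p in participants}   # collection phase
    consistent = len(versions) == 1                  # validation phase
    return 'commit' if consistent else 'abort'

assert two_phase_validation([Participant(3), Participant(3)]) == 'commit'
assert two_phase_validation([Participant(3), Participant(4)]) == 'abort'
```

In the paper's terms, the validation step is what upgrades an ordinary atomic commit into a *safe* one: data consistency and policy consistency are checked together before the commit decision.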

Patent
31 Jul 2015
TL;DR: In this paper, a PREFRESH protocol for transforming a non-proactively secure secret sharing (SHARE) protocol into a proactively secure secret sharing (PSS) protocol is described; the PREFRESH protocol refreshes shares of secret data among multiple parties.
Abstract: Described is a system for transforming a SHARE protocol into a proactively secure secret sharing (PSS) protocol. A PREFRESH protocol is performed that includes execution of the SHARE protocol. The PREFRESH protocol refreshes shares of secret data among multiple parties. The SHARE protocol is a non-proactively secure secret sharing protocol.

Proceedings ArticleDOI
01 Aug 2015
TL;DR: A hierarchical detection model is developed that can effectively solve the problem of excessive traffic load on the coordinator, shorten the time of parallel processing for each sub-transaction, and reduce unnecessary transaction submissions caused by failures through the detection and differentiation of faults.
Abstract: Distributed transactions are gradually becoming the mainstream mode of data processing. How to avoid transaction congestion due to network delays and site failures, and how to effectively distinguish between these two faults, have become hot issues in the study of distributed transactions. For the fault detection and distinction of distributed transactions, we develop a hierarchical detection model, which has the advantages of a clear detection path, a small number of probe packets, and close approximation of the actual network topology. The model can effectively solve the problem of excessive traffic load on the coordinator and shorten the time of parallel processing for each sub-transaction. On the other hand, it can reduce unnecessary transaction submissions caused by failures, through the detection and differentiation of faults, to enhance the reliability and availability of the protocol. We select the DπF calculus as a modeling language and extend the DπF calculus by adding a clock operator, which intuitively describes the scenarios of node failure and link failure in distributed transaction failures; we can then distinguish between these two types of failures and validate them using bisimulation theory.

01 Jan 2015
TL;DR: A protocol for secure mining of association rules in horizontally distributed databases is proposed; it relies on the Fast Distributed Mining (FDM) algorithm of Cheung et al., which is an unsecured distributed version of the Apriori algorithm.
Abstract: We propose a protocol for secure mining of association rules in horizontally distributed databases. The current leading protocol is that of Kantarcioglu and Clifton. Our protocol, like theirs, relies on the Fast Distributed Mining (FDM) algorithm of Cheung et al., which is an unsecured distributed version of the Apriori algorithm. The main ingredients in our protocol are two novel secure multi-party algorithms: one that computes the union of private subsets that each of the interacting players holds, and another that tests the inclusion of an element held by one player in a subset held by another. Our protocol offers enhanced privacy with respect to that protocol. In addition, it is simpler and significantly more efficient in terms of communication rounds, communication cost and computational cost.


Proceedings ArticleDOI
29 Oct 2015
TL;DR: Pronto removes the prepare phase of 2PC, reducing communication and logging costs; compared to state-of-the-art commitment protocols, Pronto outperforms them in workloads with varying numbers of participants.
Abstract: Distributed transactions can enable not only large-scale transacting business in highly scalable datastores, but also fast information extraction from big data through materialized view maintenance and incremental processing. Atomic commitment is key to the correctness of transaction processing. While two-phase commit (2PC) is widely used for distributed transaction commitment even in modern large-scale datastores, it is costly in performance. Even though variants of 2PC can improve performance, they block on server failures, impairing data availability. Existing non-blocking atomic commitment protocols are too costly for practical use. This work presents Pronto, a non-blocking one-phase commit protocol for distributed transactions over replicated data. Pronto removes the prepare phase of 2PC, reducing communication and logging costs. Without transaction client failure, Pronto can commit a transaction in one communication roundtrip, and client failures cannot block transaction processing. Pronto can also tolerate server failures. Pronto is compared to state-of-the-art commitment protocols and outperforms them in workloads with varying numbers of participants. When a transaction has many participants, Pronto can commit in one fifth of the time 2PC takes.