
Showing papers on "Two-phase commit protocol published in 1999"


Journal ArticleDOI
TL;DR: This work presents a protocol for unlinkable serial transactions suitable for a variety of network-based subscription services, and is the first protocol to use cryptographic blinding to enable subscription services.
Abstract: We present a protocol for unlinkable serial transactions suitable for a variety of network-based subscription services. It is the first protocol to use cryptographic blinding to enable subscription services. The protocol prevents the service from tracking the behavior of its customers, while protecting the service vendor from abuse due to simultaneous or cloned use by a single subscriber. Our basic protocol structure and recovery protocol are robust against failure in protocol termination. We evaluate the security of the basic protocol and extend the basic protocol to include auditing, which further deters subscription sharing. We describe other applications of unlinkable serial transactions for pay-per-use within a subscription, third-party subscription management, multivendor coupons, proof of group membership, and voting.

77 citations


Journal ArticleDOI
TL;DR: This paper addresses the cycle time properties of the PROFIBUS MAC protocol, since the knowledge of these properties is of paramount importance for guaranteeing the real-time behaviour of a distributed computer-controlled system which is supported by this type of network.

75 citations


Proceedings ArticleDOI
24 Mar 1999
TL;DR: This work uses model checking to establish five essential correctness properties of the secure electronic transaction (SET) protocol, and is the first attempt to formalize the SET protocol for the purpose of model checking.
Abstract: We use model checking to establish five essential correctness properties of the secure electronic transaction (SET) protocol. SET has been developed jointly by Visa and MasterCard as a method to secure payment card transactions over open networks, and industrial interest in the protocol is high. Our main contributions are, first, to create a formal model of the protocol capturing the purchase request, payment authorization, and payment capture transactions; together these transactions constitute the kernel of the protocol. We then encoded our model and the aforementioned correctness properties in the input language of the FDR model checker. Running FDR on this input established that our model of the SET protocol satisfies all five properties even though the cardholder and merchant, two of the participants in the protocol, may try to behave dishonestly in certain ways. To our knowledge, this is the first attempt to formalize the SET protocol for the purpose of model checking.

65 citations


Book
30 Nov 1999
TL;DR: The paper provides an overview of transaction processing needs and solutions in conventional DBMSs as background, explains the constraints introduced by multilevel security, and describes the results of research in multilevel secure transaction processing, including research results and limitations in concurrency control, multilevel transaction management, and secure commit protocols.
Abstract: Since 1990, transaction processing in multilevel secure database management systems (DBMSs) has been receiving a great deal of attention from the security community. Transaction processing in these systems requires modification of conventional scheduling algorithms and commit protocols. These modifications are necessary because preserving the usual transaction properties when transactions are executing at different security levels often conflicts with the enforcement of the security policy. Considerable effort has been devoted to the development of efficient, secure algorithms for the major types of secure DBMS architectures: kernelized, replicated, and distributed. An additional problem that arises uniquely in multilevel secure DBMSs is that of secure, correct execution when data at multiple security levels must be written within one transaction. Significant progress has been made in a number of these areas, and a few of the techniques have been incorporated into commercial trusted DBMS products. However, many open problems remain to be explored. This paper reviews the achievements to date in transaction processing for multilevel secure DBMSs. The paper provides an overview of transaction processing needs and solutions in conventional DBMSs as background, explains the constraints introduced by multilevel security, and then describes the results of research in multilevel secure transaction processing. Research results and limitations in concurrency control, multilevel transaction management, and secure commit protocols are summarized. Finally, important new areas are identified for secure transaction processing research.

42 citations


Patent
01 Oct 1999
TL;DR: In this paper, the authors present a two-phase commit protocol for a transaction in a system having a plurality of data sources, where availability status is verified for all of the data sources and the transaction is completed for those data sources that are available.
Abstract: A method, apparatus, and article of manufacture for performing a two-phase commit protocol for a transaction in a system having a plurality of data sources. An availability status is verified for all of the data sources, and the two-phase commit protocol for the transaction is completed for those data sources that are available, while the transaction is logged for data sources that are unavailable.
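
The mechanism the abstract describes can be sketched directly. Below is a minimal Python sketch (all names are hypothetical, not taken from the patent): the prepare and commit phases run only against data sources that report themselves available, and the transaction is logged for the unavailable ones so it can be applied to them later.

    class DataSource:
        def __init__(self, name, available=True):
            self.name = name
            self.available = available

        def is_available(self):
            return self.available

        def prepare(self, txn):
            # Phase 1 vote; a real resource would also force a prepare record to disk.
            return True

        def commit(self, txn):
            print(f"{self.name}: commit {txn}")

        def rollback(self, txn):
            print(f"{self.name}: rollback {txn}")

    def two_phase_commit(txn, sources, pending_log):
        available = [s for s in sources if s.is_available()]
        unavailable = [s for s in sources if not s.is_available()]
        # Phase 1: collect votes from the available data sources only.
        if all(s.prepare(txn) for s in available):
            # Phase 2: commit the available sources; log the rest for later replay.
            for s in available:
                s.commit(txn)
            for s in unavailable:
                pending_log.append((txn, s.name))
            return "committed"
        for s in available:
            s.rollback(txn)
        return "aborted"

    log = []
    print(two_phase_commit("T1", [DataSource("A"), DataSource("B", available=False)], log))
    print("pending:", log)   # [("T1", "B")], to be applied when B becomes available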

27 citations


Proceedings ArticleDOI
01 May 1999
TL;DR: A new atomic commit protocol is presented, called Presumed Any, that integrates the three commonly known two-phase commit protocols and proves the correctness of their integration.
Abstract: We identify one of the incompatibility problems associated with atomic commit protocols that prevents them from being used together, and we derive a correctness criterion that captures the correctness of their integration. We also present a new atomic commit protocol, called Presumed Any, that integrates the three commonly known two-phase commit protocols, and we prove its correctness.
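
For context, the three commonly known two-phase commit variants being integrated are usually presumed nothing, presumed abort, and presumed commit, which differ in how the coordinator answers a participant's post-crash inquiry when no log record for the transaction survives. The Python sketch below shows only that standard presumption logic, not the Presumed Any protocol itself:

    def answer_inquiry(txn_id, outcome_log, presumption):
        # outcome_log maps transaction ids to a logged "commit"/"abort" outcome.
        outcome = outcome_log.get(txn_id)
        if outcome is not None:
            return outcome
        if presumption == "presumed-abort":
            return "abort"        # no record is interpreted as an aborted transaction
        if presumption == "presumed-commit":
            return "commit"       # no record is interpreted as a committed transaction
        return "unknown"          # presumed nothing: keep asking until the record is found

    print(answer_inquiry("T7", {}, "presumed-abort"))    # -> abort
    print(answer_inquiry("T7", {}, "presumed-commit"))   # -> commit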

19 citations


Book ChapterDOI
TL;DR: In this paper, the authors present a lightweight reliable object migration protocol that preserves the centralized object semantics, allows for precise prediction of network behavior, and permits construction of fault tolerance abstractions in the language.
Abstract: This paper presents a lightweight reliable object migration protocol that preserves the centralized object semantics, allows for precise prediction of network behavior, and permits construction of fault tolerance abstractions in the language. Each object has a "home site" to which all migration requests are directed. Compared to the standard technique of creating and collapsing forwarding chains, this gives a better worst-case network behavior and limits dependencies on third-party sites. The protocol defines "freely mobile" objects that have the interesting property of always executing locally, i.e., each method executes in the thread that invokes it. This makes them dual, in a precise sense, to stationary objects. The protocol is designed to be as efficient as a nonreliable protocol in the common case of no failure, and to provide sufficient hooks so that common fault tolerance algorithms can be programmed completely in the Oz language. The protocol is fully implemented in the network layer of the Mozart platform for distributed application development, which implements Oz (see http://www.mozart-oz.org). This paper defines the protocol in an intuitive yet precise way using the concept of a distribution graph to model distributed execution of language entities. Formalization and proof of protocol properties are done elsewhere.
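
As a rough illustration of the home-site idea (a toy Python sketch, not the Mozart implementation), directing every migration request through the object's fixed home site bounds the worst-case path and avoids forwarding chains through previous holders:

    class HomeSite:
        def __init__(self, obj_id, holder):
            self.obj_id = obj_id
            self.holder = holder          # site currently holding the object's state

        def migrate_to(self, new_site):
            # All requests come here, so one hop suffices regardless of migration history.
            old, self.holder = self.holder, new_site
            return f"{self.obj_id}: {old} -> {new_site}"

    home = HomeSite("counter", holder="siteA")
    print(home.migrate_to("siteB"))
    print(home.migrate_to("siteC"))       # no forwarding chain through siteB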

13 citations


Proceedings ArticleDOI
27 Jan 1999
TL;DR: A group failure detection protocol whose design naturally ensures its liveness is analysed, and it is shown that by tuning appropriately some of its duration-related parameters, the safety property can be guaranteed with a probability as close to 1 as desired.
Abstract: A group membership failure (in short, a group failure) occurs when one of the group members crashes. A group failure detection protocol has to inform all the non-crashed members of the group that this group entity has crashed. Ideally, such a protocol should be live (if a process crashes, then the group failure has to be detected) and safe (if a group failure is claimed, then at least one process has crashed). Unreliable asynchronous distributed systems are characterized by the impossibility for a process to get an accurate view of the system state. Consequently, the design of a group failure detection protocol that is both safe and live is a problem that cannot be solved in all runs of an asynchronous distributed system. We analyse a group failure detection protocol whose design naturally ensures its liveness. We show that by tuning appropriately some of its duration-related parameters, the safety property can be guaranteed with a probability as close to 1 as desired. This analysis shows that, in real distributed systems, it is possible to achieve failure detection with a negligible probability of wrong suspicions.
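
The duration-related tuning the paper analyses can be illustrated with a generic timeout-based detector (a Python sketch, not the paper's protocol): raising the timeout makes a wrong suspicion less likely, at the price of slower detection.

    import time

    class HeartbeatDetector:
        def __init__(self, timeout_s):
            self.timeout_s = timeout_s    # the tunable duration-related parameter
            self.last_seen = {}

        def heartbeat(self, member):
            self.last_seen[member] = time.monotonic()

        def suspected(self, member):
            last = self.last_seen.get(member)
            return last is None or (time.monotonic() - last) > self.timeout_s

    d = HeartbeatDetector(timeout_s=2.0)
    d.heartbeat("p1")
    print(d.suspected("p1"))   # False right after a heartbeat
    print(d.suspected("p2"))   # True: never heard from p2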

12 citations


Patent
22 Sep 1999
TL;DR: In this patent, distributed transactions are processed using the two-phase commit protocol, coordinated by server process 22; should a failure occur while resources are in the prepared state, such that server process 22 has "gone down", some of the resource objects may make an heuristic decision to commit or roll back in order to avoid prolonged locking of the resources.
Abstract: Distributed transactions are processed in a client/server computer system using the two phase commit protocol, coordinated by server process 22. Upon completion of the first phase, the resource objects 231, 241, 251 etc. involved in the transaction are placed in a prepared state; server process 23 performs its own two-phase commit protocol with respect to resource objects 261-263 of server process 26. Should a failure occur during this prepared state, such that server process 22 has "gone down", some of the resource objects, e.g. 261-263, may make an heuristic decision to commit or roll back to avoid prolonged locking of the resources. An heuristic direction is agreed in advance amongst the transactionally involved servers having an ability to take heuristic decisions, thus ensuring that all the resource objects follow the same direction; the recovered server process 22 also follows this direction when issuing the phase two commands of the two phase commit protocol (fig. 4). Thus heuristic damage - which would occur if some resources commit while others roll back - is avoided.
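
A minimal Python sketch of the agreed-direction idea (class and variable names invented for illustration): a resource left in the prepared state after the coordinator fails resolves its locks in the direction agreed beforehand, so every resource in the transaction moves the same way and heuristic mixed outcomes are avoided.

    AGREED_HEURISTIC_DIRECTION = "commit"   # agreed among the involved servers in advance

    class Resource:
        def __init__(self, name):
            self.name = name
            self.state = "active"

        def prepare(self):
            self.state = "prepared"

        def heuristic_resolve(self):
            # Coordinator unreachable: free the locks using the agreed direction.
            if self.state == "prepared":
                self.state = "heuristic-" + AGREED_HEURISTIC_DIRECTION
            return self.state

    r = Resource("261")
    r.prepare()
    print(r.heuristic_resolve())   # -> heuristic-commit, the same on every resource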

9 citations


Proceedings ArticleDOI
18 Oct 1999
TL;DR: A new efficient logging protocol for adaptive software DSM (ADSM), called adaptive logging (AL), which is suitable for both coordinated and independent checkpointing since it speeds up the recovery process and eliminates the unbounded rollback problem associated withindependent checkpointing.
Abstract: Software distributed shared memory (DSM) improves the programmability of message-passing machines and workstation clusters by providing a shared memory abstract (i.e., a coherent global address space) to programmers. As in any distributed system, however; the probability of software DSM failures increases as the system size grows. This paper presents a new efficient logging protocol for adaptive software DSM (ADSM), called adaptive logging (AL). It is suitable for both coordinated and independent checkpointing since it speeds up the recovery process and eliminates the unbounded rollback problem associated with independent checkpointing. By leveraging the existing coherence data maintained by ADSM, our AL protocol adapts to log only unrecoverable data (which cannot be recreated or retrieved after a failure) necessary for correct recovery, reducing both the number of messages logged and the amount of logged data. We have performed experiments on a cluster of eight Sun Ultra-5 workstations, comparing our AL protocol against the previous message logging (ML) protocol by implementing both protocols in TreadMarks-based ADSM. The experimental results show that our AL protocol consistently outperforms the ML protocol: Our protocol increases the execution time slightly by 2% to 10% during failure-free execution, while the ML protocol lengthens the execution time by many folds due to its larger log size and higher number of messages logged. Our AL-based recovery also outperforms ML-based recovery by 9% to 17% under parallel application examined.
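
The core adaptation is a per-message decision about whether logging is needed at all. A minimal Python sketch of that decision (names invented; the real protocol derives recoverability from ADSM's coherence data rather than from a flag):

    def maybe_log(log, message, recoverable_after_failure):
        # Log only data that could not be recreated or re-fetched after a crash.
        if not recoverable_after_failure:
            log.append(message)

    log = []
    maybe_log(log, ("diff", "page 12"), recoverable_after_failure=True)      # skipped
    maybe_log(log, ("lock grant", "page 7"), recoverable_after_failure=False)
    print(log)   # only the unrecoverable message is logged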

8 citations


Proceedings ArticleDOI
21 Sep 1999
TL;DR: A new, efficient message logging technique, called the coherence-centric logging (CCL) and recovery protocol, for home-based SDSM, which improves the crash recovery speed by 55% to 84% when compared to re-execution, and it outperforms ML-recovery by a noticeable margin.
Abstract: The probability of failures in software distributed shared memory (SDSM) increases as the system size grows. This paper introduces a new, efficient message logging technique, called the coherence-centric logging (CCL) and recovery protocol, for home-based SDSM. Our CCL minimizes failure-free overhead by logging only data necessary for correct recovery and tolerates high disk access latency by overlapping disk accesses with coherence-induced communication existing in home-based SDSM, while our recovery reduces the recovery time by prefetching data according to the future shared memory access patterns, thus eliminating the memory miss idle penalty during the recovery process. To the best of our knowledge, this is the very first work that considers crash recovery in home-based SDSM. We have performed experiments on a cluster of eight SUN Ultra-5 workstations, comparing our CCL against traditional message logging (ML) by modifying TreadMarks, a state-of-the-art SDSM system, to support the home-based protocol and then implementing both our CCL and the ML protocols in it. The experimental results show that our CCL protocol consistently outperforms the ML protocol: Our protocol increases the execution time negligibly, by merely 1% to 6%, during failure-free execution, while the ML protocol results in the execution time overhead of 9% to 24% due to its large log size and high disk access latency. Our recovery protocol improves the crash recovery speed by 55% to 84% when compared to re-execution, and it outperforms ML-recovery by a noticeable margin, ranging from 5% to 18% under parallel applications examined.

Proceedings ArticleDOI
12 Apr 1999
TL;DR: A new, efficient logging protocol, called lazy logging, and a fast crash recovery Protocol, called the prefetch-based crash recovery (PCR), for software distributed shared memory (SDSM), which reduces the recovery time by prefetching data according to the future memory access patterns, thus eliminating memory miss penalty during the recovery process.
Abstract: In this paper we propose a new, efficient logging protocol, called lazy logging, and a fast crash recovery protocol, called the prefetch-based crash recovery (PCR), for software distributed shared memory (SDSM). Our lazy logging protocol minimizes failure-free overhead by logging only data indispensable for correct recovery, while our PCR protocol reduces the recovery time by prefetching data according to the future memory access patterns, thus eliminating memory miss penalty during the recovery process. We have performed experiments on workstation clusters, comparing our protocols against the earlier reduced-stable logging (RSL) protocol by actually implementing both protocols in TreadMarks, a state-of-the-art SDSM system. The experimental results show that our lazy logging protocol consistently outperforms the RSL protocol. Our protocol increases the execution time slightly by 1% to 4% during failure-free execution, while the RSL protocol results in the execution time overhead of 6% to 21% due to its larger log size and higher disk access frequency. Our PCR protocol also outperforms the widely used simple crash recovery protocol by 18% to 57% under all applications examined.


Book ChapterDOI
12 Apr 1999
TL;DR: This work presents results from a new protocol that provides error recovery, and whose performance is close to that of existing low-latency protocols.
Abstract: Existing low-latency protocols make unrealistically strong assumptions about reliability. This allows them to achieve impressive performance, but also prevents this performance being exploited by applications, which must then deal with reliability issues in the application code. We present results from a new protocol that provides error recovery, and whose performance is close to that of existing low-latency protocols. We achieve a CPU overhead of 1.5 μs for packet download and 3.6 μs for upload. Our results show that (a) executing a protocol in the kernel is not incompatible with high performance, and (b) complete control over the protocol stack enables (1) simple forms of flow control to be adopted, (2) proper bracketing of the unreliable portions of the interconnect thus minimising buffers held up for possible recovery, and (3) the sharing of buffer pools. The result is a protocol which performs well in the context of parallel computation and the loose coupling of processes in the workstations of a cluster.

Book ChapterDOI
TL;DR: This paper describes an extension of the OMG's Object Transaction Service with the "open nested transaction model", which greatly improves transaction parallelism by releasing a nested transaction's locks at nested-transaction commit time.
Abstract: The two-phase commit protocol is combined with the strict two-phase locking protocol as a means of ensuring atomicity and serializability of transactions. The implication of this combination for the length of time a transaction may hold locks on various data items can be severe. There are certain classes of applications in which it is known that resources acquired within a transaction can be "released early", rather than having to wait until the transaction terminates. Furthermore, there are applications involving heterogeneous, competing business organizations, which cannot allow their resources to be blocked; for these, preserving the local autonomy of the individual systems is crucial. This paper describes an extension of the OMG's Object Transaction Service that adds the "open nested transaction model", which greatly improves transaction parallelism by releasing a nested transaction's locks at nested-transaction commit time. Open nested transactions relax the isolation property by allowing the effects of a committed nested transaction to be visible to concurrent transactions. We also describe how we take advantage of this model, using the proposed Asynchronous Nested Transaction model, to overcome the limits of current messaging products and standard specifications when they are confronted with the problem of guaranteeing the atomicity of distributed multi-tier transactional applications.
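
A minimal Python sketch of the difference this makes (illustrative classes, not the OTS extension itself): under strict two-phase locking the second subtransaction below would block or abort, because the first would keep its locks until the top-level transaction ends; under the open nested model the locks are released as soon as the nested transaction commits.

    class LockTable:
        def __init__(self):
            self.held = {}

        def acquire(self, item, owner):
            assert self.held.get(item) in (None, owner), f"{item} is locked"
            self.held[item] = owner

        def release_all(self, owner):
            self.held = {k: v for k, v in self.held.items() if v != owner}

    locks = LockTable()

    def open_nested_subtransaction(name, items):
        for it in items:
            locks.acquire(it, name)
        # ... the subtransaction's work would happen here ...
        locks.release_all(name)   # released at nested commit, not at top-level end

    open_nested_subtransaction("sub1", ["stock:42"])
    open_nested_subtransaction("sub2", ["stock:42"])   # succeeds; would wait under strict 2PL
    print("locks held at end:", locks.held)            # {}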

Book ChapterDOI
05 Oct 1999
TL;DR: This paper attempts a larger security protocol: a recently published protocol for secure group communication, and finds two flaws in the protocol, one of which has not been reported previously.
Abstract: With the explosive growth of the Internet and the distributed applications it supports, there is a pressing need for secure group communications — the ability of a group of agents to communicate securely with each other while allowing members to join or leave the group. Prompted by the success of other researchers in applying finite-state model-checking tools to the verification of small security protocols, we decided to attempt a larger security protocol: a recently published protocol for secure group communication. Not surprisingly, creating an ad hoc abstract model suitable for model-checking required cleverness, and state explosion was always a threat. Nevertheless, with minimal effort, the model checking tool discovered two flaws in the protocol, one of which has not been reported previously. We conclude our paper with a discussion of possible fixes to the protocol, as well as suggested verification tool improvements that would have simplified our task.

Journal ArticleDOI
TL;DR: The study shows that the improved algorithm can be applied to obtain global states when the processes in the protocol lose cooperation; such a global state can serve as a recovery point for the subsequent recovery procedure.
Abstract: In this paper, algorithms for self-stabilizing communication protocols are studied. First, some concepts and a formal method for describing the proposed algorithms are presented; then an improved algorithm for achieving global states is given. The study shows that the improved algorithm can be applied to obtain global states when the different processes in the protocol lose cooperation; such a global state can serve as a recovery point for the subsequent recovery procedure. Thus, the improved algorithm can be used to self-stabilize a communication protocol. A recovery algorithm for self-stabilizing communication protocols is also presented: after a failure is detected, all processes can eventually learn of the error. The recovery algorithm uses the contextual information exchanged during the progress of the protocol and recorded in stable memory. Proofs of correctness and a complexity analysis for these algorithms are given, and the availability and efficiency of the algorithms are verified using example protocols. Finally, some conclusions and remarks are given.

Book ChapterDOI
30 Aug 1999
TL;DR: This work proposes the idea of transaction shipping to reduce the overheads in processing a transaction over mobile network and in resolving priority inversion in a distributed lock-based real-time protocol.
Abstract: Due to the unpredictability of the mobile network, it is difficult to meet transaction deadlines in a mobile distributed real-time database system (MDRTDBS). We propose the idea of transaction shipping to reduce the overheads of processing a transaction over the mobile network and of resolving priority inversion. We consider a distributed lock-based real-time protocol, the Distributed High Priority Two Phase Locking (DHP-2PL) protocol, to study the impact of the mobile network on real-time data access. A detailed model of an MDRTDBS has been developed, and a series of simulation experiments has been performed to evaluate the performance of our approach.

Proceedings ArticleDOI
01 May 1999
TL;DR: A new lock-based concurrency control protocol, the Secure Dynamic Copy Protocol, that satisfies both conflicting requirements while reducing the storage overhead of maintaining secondary copies and minimizing the processing overhead of the update history.
Abstract: Concurrency control for real-time secure database systems must satisfy not only logical data consistency but also the timing constraints and security requirements associated with transactions. The conflicts between timing constraints and security requirements are often resolved by maintaining several versions (or secondary copies) of the same data items. In this paper, we propose a new lock-based concurrency control protocol, the Secure Dynamic Copy Protocol, that satisfies both conflicting requirements. Our protocol aims to reduce the storage overhead of maintaining secondary copies and to minimize the processing overhead of the update history. The main idea of our protocol is to keep a secondary copy only when it is needed to resolve conflicting read/write operations in real-time secure database systems. To do this, a secondary copy is dynamically created and removed during a transaction's read/write operations. We have also examined the performance characteristics of our protocol, compared against an existing real-time secure protocol, through simulation under different workloads. The results show that our protocol consumes less storage and reduces the number of transactions that miss their deadlines.
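
A simplified Python sketch of the keep-a-copy-only-when-needed idea (the structure is invented for illustration): a secondary copy of a data item is created only at the moment a conflicting read/write arises and is removed once the writer finishes.

    class DataItem:
        def __init__(self, value):
            self.value = value
            self.secondary = None          # created on demand, never kept permanently

        def write(self, new_value, concurrent_reader_active):
            if concurrent_reader_active and self.secondary is None:
                self.secondary = self.value   # readers keep seeing the old value
            self.value = new_value

        def finish_write(self):
            self.secondary = None             # copy dropped when no longer needed

        def read(self):
            return self.secondary if self.secondary is not None else self.value

    x = DataItem(10)
    x.write(20, concurrent_reader_active=True)
    print(x.read())        # 10: the reader still sees the pre-write copy
    x.finish_write()
    print(x.read())        # 20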

Proceedings ArticleDOI
S.H. Brackin1
12 Oct 1999
TL;DR: The Automatic Authentication Protocol Analyzer, 2nd Version (AAPA2), in contrast, automatically correctly identifies 88% of the protocols in an independently selected collection of protocols as failed or not failed, on a modest computer, in an average of only 2.6 minutes per protocol.
Abstract: A cryptographic protocol is a short series of message exchanges, usually involving encryption, intended to establish secure communication over an insecure network. A protocol fails if an active wiretapper can obtain confidential information or impersonate a legitimate user, without performing cryptanalysis, by blocking, replaying, relabeling or otherwise modifying messages. Since the number of possible wiretapper-induced distortions of a protocol grows exponentially with the size of the protocol, most tools for detecting protocol failure require extended, expert user guidance. The Automatic Authentication Protocol Analyzer, 2nd Version (AAPA2), in contrast, automatically correctly identifies 88% of the protocols in an independently selected collection of protocols as failed or not failed, on a modest computer, in an average of only 2.6 minutes per protocol. This paper summarizes the AAPA2's results, sketches how it produces them and gives references providing more information.

01 Jan 1999
TL;DR: A protocol is designed that allows the user of a system to tune the degree of optimism and provides a trade-off between failure-free overhead and recovery efficiency and a new fault-tolerant optimistic simulation protocol is developed.
Abstract: This dissertation focuses on the use of message logging for recovering from process failures in distributed systems. Optimistic message logging protocols assume that failures are rare. Based on this assumption, they try to reduce the failure-free overhead. We have proved several fundamental results about optimistic logging protocols. We have designed a protocol that allows the user of a system to tune the degree of optimism. This protocol provides a trade-off between failure-free overhead and recovery efficiency. The special cases of this protocol include an existing optimistic protocol and an existing pessimistic protocol. We have also studied extensions of optimistic protocols to multi-threaded environments. The natural extensions offer a trade-off between the false causality and the failure-free overhead. We avoid this trade-off by treating threads as the unit of recovery and processes as the unit of failure. The protocols mentioned so far are independent of any particular application characteristics. The fault-tolerance overhead can sometimes be reduced by exploiting the specific characteristics of an application. We have demonstrated this reduction in the context of optimistic computations. Specifically, we have developed a new fault-tolerant optimistic simulation protocol.


Journal ArticleDOI
TL;DR: Describes the effort to incorporate a transaction function into the Directory in order to support the strong consistency requirement, and finds that the usefulness of the transaction capability outweighs the overhead.

Proceedings ArticleDOI
17 Nov 1999
TL;DR: This paper uses the multi-invariant data structure (MIDS) scheme to develop a highly available, reliable, real-time transaction processing algorithm that achieves non-blocking atomic transaction processing with very little overhead.
Abstract: Many multiple-server systems are now being used for heavily accessed web services. Performance, availability, and real-time transaction processing are important requirements for many of these applications. In this paper, we apply the multi-invariant data structure (MIDS) concept to real-time transaction processing. We use the MIDS scheme to develop a highly available, reliable, real-time transaction processing algorithm. We show that, with very little overhead compared to the two-phase commit protocol, we achieve non-blocking atomic transaction processing. The algorithm is also suitable for real-time processing, since a task can be preempted at any point of execution without an expensive recovery procedure.

Book ChapterDOI
TL;DR: The proposed protocol generates a public modulus without the parties knowing the factorization of that number; it is similar to Boneh-Franklin's protocol but, when there are only two communicating parties, does not require the help of a third party.
Abstract: This paper describes how n parties can jointly generate the parameters for the RSA encryption system while being robust against attacks from cheaters and malicious parties. The proposed protocol generates a public modulus without the parties knowing the factorization of that number. Our proposed protocol is similar to Boneh-Franklin's protocol; however, when there are two communicating parties our protocol does not need the help of a third party. Using our protocol, we can detect the presence of malicious parties and cheaters among the authorized users. An analysis shows that our proposed protocol has lower computational complexity than the protocol of Frankel-MacKenzie-Yung.

Journal Article
TL;DR: In this paper, a process-based transaction model is introduced, in which each subtransaction called a toolkit guarantees the consistency of transactions, and a concurrency control protocol and a recovery protocol are proposed to support collaborative work.
Abstract: Collaborative work presupposes information exchange among participants. In contrast, traditional transaction technologies allow concurrent users to operate on shared data while providing them with the illusion of complete isolation from each other. To bridge this gap, this paper introduces a process-based transaction model in which each subtransaction, called a toolkit, guarantees the consistency of transactions. Based on this transaction model, we formally discuss the correctness criterion of collaborative work and propose a concurrency control protocol and a recovery protocol to support collaborative work. The protocols permit users to exchange information with each other and mitigate the loss-of-results problem.

Patent
22 Sep 1999
TL;DR: In this paper, the Coordinator object coordinates the transaction with respect to a plurality of resources according to the two phase commit protocol, and is created at a later stage in the transaction, only in response to a predetermined trigger event, such as the server receiving a request to update a local resource, or another server process being called by the transaction.
Abstract: Transaction processing in a distributed (client/server) computing system. A transaction is created by setting up transaction state objects in a server process (22, figs. 2 and 3). Whereas normally, Control, Terminator and Coordinator objects (221-223, fig. 2) would be instantiated, according to the invention only the Control and Terminator objects (221, 222, fig. 3) are instantiated, 42. The Coordinator object coordinates the transaction with respect to a plurality of resources according to the two phase commit protocol, and is created at a later stage in the transaction, 44, only in response to a predetermined trigger event, such as the server receiving a request to update a local resource, 43, or another server process being called by the transaction, 45. Since the Coordinator object is thus only created if, and when, needed by the transaction, processor cycles are saved; this also prevents the transaction from having to be logged to storage (225, fig. 2) when such logging is unnecessary for the presently executing transaction.
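
A small Python sketch of the lazy-creation idea (hypothetical classes, not the patented implementation): Control and Terminator exist from the start of the transaction, but the Coordinator is only instantiated when a trigger event shows it is actually needed, so transactions that never touch a resource never pay its cost.

    class Coordinator:
        def __init__(self):
            self.resources = []

        def register(self, resource):
            self.resources.append(resource)

    class Transaction:
        def __init__(self):
            self.control = object()       # stand-ins for the Control and
            self.terminator = object()    # Terminator objects created up front
            self._coordinator = None      # deliberately not created yet

        def coordinator(self):
            # Trigger event: something actually needs two-phase commit coordination.
            if self._coordinator is None:
                self._coordinator = Coordinator()
            return self._coordinator

    txn = Transaction()
    # A transaction that never updates a resource never creates a Coordinator.
    txn.coordinator().register("resource-1")   # created here, on first real need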