
Showing papers on "Two-phase commit protocol published in 2006"


Journal ArticleDOI
Jim Gray1, Leslie Lamport1
TL;DR: The Paxos Commit algorithm as mentioned in this paper runs a Paxos consensus algorithm on the commit/abort decision of each participant to obtain a transaction commit protocol that uses 2F + 1 coordinators and makes progress if at least F + 1 of them are working properly.
Abstract: The distributed transaction commit problem requires reaching agreement on whether a transaction is committed or aborted. The classic Two-Phase Commit protocol blocks if the coordinator fails. Fault-tolerant consensus algorithms also reach agreement, but do not block whenever any majority of the processes are working. The Paxos Commit algorithm runs a Paxos consensus algorithm on the commit/abort decision of each participant to obtain a transaction commit protocol that uses 2F + 1 coordinators and makes progress if at least F + 1 of them are working properly. Paxos Commit has the same stable-storage write delay, and can be implemented to have the same message delay in the fault-free case as Two-Phase Commit, but it uses more messages. The classic Two-Phase Commit algorithm is obtained as the special F = 0 case of the Paxos Commit algorithm.
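As a reading aid, the decision rule both protocols enforce can be sketched in a few lines of Python. This is our illustration of the commit/abort rule, not the paper's specification, and all names are invented.

```python
# Sketch of the decision rule shared by Two-Phase Commit and Paxos Commit:
# commit if and only if every resource manager voted Prepared. In 2PC a
# single coordinator evaluates this rule (and blocks if it crashes); in
# Paxos Commit each vote is chosen by a Paxos instance run over 2F + 1
# acceptors, so the decision survives up to F faulty coordinators.

def commit_decision(votes):
    """votes: resource manager name -> 'prepared' or 'aborted'."""
    if all(v == "prepared" for v in votes.values()):
        return "commit"
    return "abort"

print(commit_decision({"rm1": "prepared", "rm2": "prepared"}))  # commit
print(commit_decision({"rm1": "prepared", "rm2": "aborted"}))   # abort
```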

380 citations


Journal ArticleDOI
TL;DR: A static two-phase locking and high-priority-based, write-update type commit protocol that is ideal for fast and timely commitment, i.e. SWIFT, is proposed; it minimizes intersite message traffic, execute-commit conflicts and log writes, consequently resulting in a better response time.
Abstract: Although there are several factors contributing to the difficulty in meeting distributed real-time transaction deadlines, data conflicts among transactions, especially in the commitment phase, are the prime factor resulting in system performance degradation. Therefore, the design of an efficient commit protocol is of great significance for distributed real time database systems (DRTDBS). Most of the existing commit protocols try to improve system performance by allowing a committing cohort to lend its data to an executing cohort, thus reducing data inaccessibility. These protocols block the borrower when it tries to send the WORKDONE/PREPARED message [1, 6, 8, 9], thus increasing the transaction commit time. This paper first analyzes all kinds of dependencies that may arise due to data access conflicts among executing and committing transactions when a committing cohort is allowed to lend its data to an executing cohort. It then proposes a static two-phase locking and high-priority-based, write-update type commit protocol that is ideal for fast and timely commitment, i.e. SWIFT. In SWIFT, the execution phase of a cohort is divided into two parts, a locking phase and a processing phase, and then, in place of the WORKDONE message, a WORKSTARTED message is sent just before the start of the processing phase of the cohort. Further, the borrower is allowed to send the WORKSTARTED message if it is only commit-dependent on other cohorts, instead of being blocked as in [1, 6, 8, 9]. This reduces the time needed for commit processing and is free from cascaded aborts. To ensure non-violation of the ACID properties, checking for completion of processing and removal of the cohort's dependencies are required before sending the YES-VOTE message. Simulation results show that SWIFT improves system performance in comparison to earlier protocols. The performance of SWIFT is also analyzed for a partial read-only optimization, which minimizes intersite message traffic, execute-commit conflicts and log writes, consequently resulting in a better response time. The impact of permitting the cohorts of the same transaction to communicate with each other [5] on SWIFT has also been analyzed.
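The abstract's blocking rules can be caricatured as two predicates. This is a hedged sketch under our reading, with hypothetical names, not the authors' pseudocode.

```python
# Hedged sketch of SWIFT's messaging rules as described in the abstract:
# a borrowing cohort may send WORKSTARTED while it is only commit-dependent
# on its lenders, but YES-VOTE requires finished processing and no
# remaining dependencies. Names are illustrative.

def may_send_workstarted(commit_deps, abort_deps):
    # Only abort dependencies block the cohort; commit dependencies do not.
    return not abort_deps

def may_send_yes_vote(processing_done, commit_deps, abort_deps):
    # Completion of processing and removal of all dependencies are checked
    # before voting, preserving ACID and avoiding cascaded aborts.
    return processing_done and not commit_deps and not abort_deps

print(may_send_workstarted(commit_deps={"T2"}, abort_deps=set()))  # True
print(may_send_yes_vote(True, set(), set()))                       # True
```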

38 citations


Book ChapterDOI
08 May 2006
TL;DR: In this article, it is shown that a group key agreement protocol that resists attacks by malicious insiders in the authenticated broadcast model loses this security when it is transferred into an unauthenticated point-to-point network with the protocol compiler introduced by Katz and Yung.
Abstract: Considering a protocol of Tseng, we show that a group key agreement protocol that resists attacks by malicious insiders in the authenticated broadcast model loses this security when it is transferred into an unauthenticated point-to-point network with the protocol compiler introduced by Katz and Yung. We develop a protocol framework that makes it possible to transform passively secure protocols into protocols that provide security against malicious insiders and active adversaries in an unauthenticated point-to-point network and, in contrast to existing protocol compilers, does not increase the number of rounds. Our protocol particularly uses the session identifier to achieve this security. By applying the framework to the Burmester-Desmedt protocol we obtain a new 2-round protocol that is provably secure against active adversaries and malicious participants.

28 citations


Journal ArticleDOI
TL;DR: MuSeqoR as mentioned in this paper is a multi-path routing protocol that tackles the twin issues of reliability (protection against failures of multiple paths) and security, while ensuring minimum data redundancy.

28 citations


Book ChapterDOI
TL;DR: The design of a scalable and fault-tolerant protocol for supporting parallel runtime environment communications is presented; the protocol supports transmission of messages across multiple nodes within a self-healing topology to protect against recursive node and process failures.
Abstract: The number of processors embedded on high performance computing platforms is growing daily to satisfy users' desire for solving larger and more complex problems. Parallel runtime environments have to support and adapt to the underlying libraries and hardware, which requires a high degree of scalability in dynamic environments. This paper presents the design of a scalable and fault-tolerant protocol for supporting parallel runtime environment communications. The protocol is designed to support transmission of messages across multiple nodes within a self-healing topology to protect against recursive node and process failures. A formal protocol verification has validated the protocol for both the normal and failure cases. We have implemented multiple routing algorithms for the protocol and concluded that the variant rule-based routing algorithm yields the best overall results for damaged and incomplete topologies.

24 citations


Patent
Gaku Yamamoto1, Hideki Tai1, Hiroshi Horii1
22 Nov 2006
TL;DR: In this paper, the authors propose a system for resending a process from a client to a backup server farm, without waiting for failure detection, if no reply is received within a certain time.
Abstract: The present invention proposes a system for resending a process from a client to a backup server farm, without waiting for failure detection, if no reply is received within a certain time. The transaction processing mechanism of the present invention has a transaction start processing mechanism, in which exclusive control using a valid processing authority token is combined with data consistency, and a commit processing mechanism, in which the decision on whether a commit is possible is based on distributed agreement and replication of the updated data. With these mechanisms, the system shortens the service halt time after a failure to the point where, to a client, the service does not appear to stop.
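The resend idea might be sketched as follows; the callables and timeout value are hypothetical stand-ins, not details from the patent.

```python
# Minimal sketch of the client-side resend: if the primary farm does not
# reply within the timeout, reissue the request to the backup farm without
# waiting for explicit failure detection. call_primary/call_backup are
# hypothetical stand-ins for the real transport.

import queue
import threading
import time

def submit(request, call_primary, call_backup, timeout_s=2.0):
    replies = queue.Queue()
    threading.Thread(
        target=lambda: replies.put(call_primary(request)),
        daemon=True,  # a hung primary call must not keep the process alive
    ).start()
    try:
        return replies.get(timeout=timeout_s)
    except queue.Empty:
        return call_backup(request)  # resend instead of waiting for detection

slow_primary = lambda r: (time.sleep(5), f"{r}: ok (primary)")[1]
print(submit("tx-1", slow_primary, lambda r: f"{r}: ok (backup)"))  # backup replies
```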

24 citations


Book ChapterDOI
14 Jun 2006
TL;DR: This paper proposes CATE, a component-based architecture of standard 2PC-based protocols and a Context-Aware Transaction sErvice, and shows that using CATE performs better than using only one commit protocol in a variable system and that the reconfiguration cost is negligible.
Abstract: For years, transactional protocols have been defined for particular application needs. Traditionally, when implementing a transaction service, a protocol is chosen and remains the same during the system execution. Nevertheless, the dynamic nature of today's application contexts (e.g., mobile, ad-hoc, peer-to-peer) and context variations (semantics-related aspects) motivates the need for transaction service adaptation. The next generation of transaction services should be adaptive, or even better, self-adaptive. This paper proposes CATE: (1) a component-based architecture of standard 2PC-based protocols and (2) a Context-Aware Transaction sErvice. Self-adaptation of CATE is obtained through context awareness and component-based reconfiguration. This allows CATE to select the most appropriate protocol with respect to the execution context. We show that using CATE performs better than using only one commit protocol in a variable system and that the reconfiguration cost is negligible.
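The selection step can be pictured as a policy over an observed execution context. The thresholds, context fields and candidate protocol set below are our assumptions, since the abstract does not fix them.

```python
# Hypothetical context-to-protocol policy in the spirit of CATE: observe
# the execution context and reconfigure to the 2PC variant that fits best.
# Thresholds, fields and the protocol set are illustrative assumptions.

def select_commit_protocol(context):
    if context["link_failure_rate"] > 0.1:
        return "3PC"   # pay an extra phase for non-blocking termination
    if context["read_only_ratio"] > 0.8:
        return "2PC presumed-abort"  # read-only participants skip phase two
    return "2PC"

ctx = {"link_failure_rate": 0.02, "read_only_ratio": 0.9}
print(select_commit_protocol(ctx))  # -> 2PC presumed-abort
```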

21 citations


Proceedings ArticleDOI
18 Apr 2006
TL;DR: A new commit protocol which aims to improve the performance of composite Web services transactions is presented and it is revealed that the proposed protocol significantly improves the performance in committing a composite Web service transaction.
Abstract: Transaction commit protocols have widely been used to ensure the correctness and reliability of distributed applications. This paper investigates the performance of such protocols within the context of composite web services. It presents a new commit protocol which aims to improve the performance of composite web services transactions. The proposed protocol is tested through various analytical experiments. These experiments reveal that the proposed protocol significantly improves the performance in committing a composite web service transaction. The experiments also exhibit the processing overhead of the proposed protocol in the case of unsuccessful execution of a web service transaction.

19 citations


Journal ArticleDOI
TL;DR: A new commit protocol for managing transactions in composite web services is proposed that aims to improve the performance by reducing network delays and the processing time of transactions.

19 citations


Proceedings ArticleDOI
06 Jul 2006
TL;DR: A new token-based protocol for group mutual exclusion in distributed systems that uses one single token to allow multiple processes to enter the critical section for a common session and ensures no starvation in the system.
Abstract: In this paper we present a new token-based protocol for group mutual exclusion in distributed systems. The protocol uses one single token to allow multiple processes to enter the critical section for a common session. One of the significant characteristics of the protocol is that concurrency, throughput and waiting time can be regulated by adjusting the time period for which a session is declared. The minimum and maximum number of messages to enter the CS is 0 and (n + 2) respectively, where n is the total number of processes in the system. Moreover, simulation results show that the protocol, in the average case, considerably reduces the number of messages per entry to the CS and also requires much lower waiting times. The maximum concurrency the protocol supports is n. The protocol also ensures no starvation in the system. Furthermore, the algorithm also solves the Extended Group Mutual Exclusion problem.
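A toy model of the session-based admission rule, assuming an invented SessionToken class; this illustrates the idea, not the paper's algorithm.

```python
# Illustrative model of the single session token: while a session is open,
# every process requesting that same session may enter the critical section
# concurrently; requests for other sessions wait for a session change. The
# declared open period is the knob trading concurrency against waiting time.

class SessionToken:
    def __init__(self, session, open_period_s):
        self.session = session              # session currently being served
        self.open_period_s = open_period_s  # tunable declaration period

    def admits(self, requested_session):
        return requested_session == self.session

token = SessionToken("read-forecast", open_period_s=0.5)
print(token.admits("read-forecast"))  # True: joins the current session
print(token.admits("write-update"))   # False: waits for the next session
```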

16 citations


Journal ArticleDOI
TL;DR: To the best of the authors' knowledge, this protocol is the first fault-tolerant exchange protocol in the context of an offline TTP and asynchronous channels, and it provides a recovery method for network and local system failures.

Journal ArticleDOI
01 Nov 2006
TL;DR: A group checkpoint strategy is proposed that uses features of locality and well-structured-ness for the purpose of both reducing runtime overhead and minimizing recovery spread, i.e., without any explicit kernel message or runtime message logging.
Abstract: Distributed multi-agent systems are usually large-scale, involving a large number of agents and messages. Existing checkpoint and recovery strategies are not quite favorable to such systems due to either the global recovery spread or the runtime logging overhead associated with these strategies. This paper presents our work on the design of correct and efficient checkpoint and recovery strategies for distributed agent systems. The initial part of the paper introduces a formal model to capture the correctness of recovery that is applicable in general, including to those used by existing techniques: deterministic as well as non-deterministic, and single as well as simultaneous recoveries. In particular, notions of atomic and quasi-atomic recovery blocks are introduced to capture the subset of events nullified in a single recovery. It is proved that the correctness of multiple recoveries is guaranteed if a recovery technique ensures well-ordering of the corresponding recovery blocks. The rest of the paper utilizes the features of agent communication protocols towards the design of a simple and efficient checkpoint protocol. In particular, agents interact with each other via well-defined agent communication protocols. Agent protocol sessions are group-based and all message interactions are localized inside such groups. A group checkpoint strategy is proposed that uses these features of locality and well-structured-ness for the purpose of both reducing runtime overhead and minimizing recovery spread. The resulting protocol creates strong and asynchronous group checkpoints, i.e., without any explicit kernel message or runtime message logging. An accompanying recovery protocol uses the notion of a protocol dependency graph to identify the minimal quasi-atomic recovery block corresponding to single or simultaneous agent crashes. Correctness of the recovery protocol is proved under the formal model. The paper concludes with a discussion on the significance of our research and its contrast with other related works, followed by future research directions.

Proceedings ArticleDOI
24 Jul 2006
TL;DR: The paper presents a primary-backup protocol to manage replicated in-memory database systems (IMDBs) that exploits two features of IMDBs: coarse-grain concurrency control and deferred disk writes.
Abstract: The paper presents a primary-backup protocol to manage replicated in-memory database systems (IMDBs). The protocol exploits two features of IMDBs: coarse-grain concurrency control and deferred disk writes. Primary crashes are quickly detected by backups and a new primary is elected whenever the current one is suspected to have failed. False failure suspicions are tolerated and never lead to incorrect behavior. The protocol uses a consensus-like algorithm tailor-made for our replication environment. Under normal circumstances (i.e., no failures or false suspicions), transactions can be committed after two communication steps, as seen by the applications. Performance experiments have shown that the protocol has very low overhead and scales linearly with the number of replicas.

Proceedings ArticleDOI
02 Oct 2006
TL;DR: The Fault-Tolerant Pre-Phase Transaction Commit (FT-PPTC) protocol is presented, which decouples the commit of mobile participants from that of fixed participants and can be supported by any traditional atomic commit protocol, such as the established 2PC protocol.
Abstract: Transactions are required not only for wired networks but also for the emerging wireless environments where mobile and fixed hosts participate side by side in the execution of the transaction. This heterogeneous environment is characterized by constraints in mobile host capabilities, network connectivity and also an increasing number of possible failure modes. Classical atomic commit protocols used in wired networks are therefore not directly suitable for this heterogeneous environment. Furthermore, the few commit protocols designed for mobile transactions either consider mobile hosts only as initiators though not as active participants, or show a high resource blocking time. We present the Fault-Tolerant Pre-Phase Transaction Commit (FT-PPTC) protocol for mobile environments. FT-PPTC decouples the commit of mobile participants from that of fixed participants. Consequently, the commit set can be reduced to a set of entities in the fixed network. Thus, the commit can easily be supported by any traditional atomic commit protocol, such as the established 2PC protocol. We integrate fault tolerance as a key feature of FT-PPTC. Performance evaluations confirm the efficiency, scalability and low resource blocking time of our approach.
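Structurally, the decoupling reads as a pre-phase that folds each mobile participant into a fixed-network proxy, after which any classical protocol can finish. The sketch below uses our own naming and a stand-in 2PC, not the authors' specification.

```python
# Hedged sketch of FT-PPTC's two-stage structure: mobile participants
# complete a pre-phase against proxies in the fixed network, so the commit
# set contains only fixed entities and a traditional protocol such as 2PC
# can finish the commit. All names are illustrative.

def two_phase_commit(participants):
    # Stand-in for any classical atomic commit protocol over fixed hosts.
    return "commit" if all(p["vote"] == "yes" for p in participants) else "abort"

def ft_pptc(mobile_participants, fixed_participants):
    # Pre-phase: capture each mobile outcome on a fixed-network proxy, so
    # wireless disconnections can no longer block the main commit.
    proxies = [{"name": f"proxy({m['name']})", "vote": m["vote"]}
               for m in mobile_participants]
    return two_phase_commit(fixed_participants + proxies)

print(ft_pptc([{"name": "mu1", "vote": "yes"}],
              [{"name": "db1", "vote": "yes"}]))  # -> commit
```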

01 Jan 2006
TL;DR: This research concentrates on providing transactional support appropriate for mobile computing by proposing TCOT (Transaction Commit on Timeout protocol) protocol for MDS, and proposes mechanisms for processing transactions locally at the mobile unit in presence of data broadcast.
Abstract: Mobile computing has become a reality thanks to the convergence of two technologies: the appearance of powerful portable computers and the development of fast, reliable wireless networks. Among the applications that are finding their way to the mobile computing market, those that involve data management hold a prominent position. In the past few years there has been a tremendous surge of research in the area of data management in mobile computing. However, mobile computing as it exists today is not fully capable of processing database transactions. This research concentrates on providing transactional support appropriate for mobile computing. A Mobile Database System (MDS) is a distributed system that supports mobility during information processing. A new transaction model for MDS is first presented. The scope of traditional transaction properties is expanded by introducing a new location property of transactions. Transaction execution can be distributed over multiple components: some subtransactions are executed on the mobile unit (MU) and the rest on servers in the wired network. For consistency-preserving execution, either all subtransactions should commit or all should abort. We propose the TCOT (Transaction Commit On Timeout) protocol for MDS. The TCOT protocol is based on timeouts and uses fewer messages than 2-Phase Commit (2PC), which is commonly used in database systems for fixed networks. We show, using a detailed simulation study, that TCOT performs better than 2PC. Next we propose mechanisms for processing transactions locally at the mobile unit in the presence of data broadcast. Broadcast-based data dissemination is likely to be a major mode of information transfer in mobile computing and wireless environments. As these systems evolve, they will be used to run sophisticated applications, many of which will involve data whose consistency must be maintained and whose updates may originate at a mobile client. In a pull-based broadcast environment we propose an integrated approach to broadcast data scheduling and transaction processing. Our approach aims at reducing the tuning time, commit time and number of aborts for the transaction. For the push-based environment we propose a new concurrency control mechanism, which requires low bandwidth, ensures availability, accommodates the disconnection problem, and is scalable. We performed a detailed performance study of transaction processing in the broadcast environment.
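The timeout idea behind TCOT can be caricatured as commit-by-silence: the expiry of an agreed execution timeout without an abort or extension request is treated as success, saving the vote round 2PC would need. The sketch below is our reading of the abstract, with invented field names, not the dissertation's protocol.

```python
# Hedged sketch of a timeout-based commit in the spirit of TCOT: members
# either finish before their deadline, request an extension, or abort.
# No per-member vote round is needed, which is how such protocols save
# messages relative to 2PC. Field names are illustrative.

def tcot_outcome(members):
    """members: name -> {'done': bool, 'extend': bool, 'abort': bool}"""
    if any(m["abort"] for m in members.values()):
        return "abort"
    if any(m["extend"] for m in members.values()):
        return "wait"    # extend the timeout and re-evaluate later
    if all(m["done"] for m in members.values()):
        return "commit"  # silence until the deadline counts as success
    return "abort"       # deadline passed without completion

print(tcot_outcome({"mu": {"done": True, "extend": False, "abort": False},
                    "srv": {"done": True, "extend": False, "abort": False}}))
```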

Journal ArticleDOI
01 Jul 2006
TL;DR: An integer linear programming model is proposed that derives distributed applications with minimum communication costs; the model can treat several reasonable cost criteria that could be used in various related application areas.
Abstract: Protocol synthesis is used to derive a protocol specification, that is, the specification of a set of application components running in a distributed system of networked computers, from a specification of the services (called the service specification) to be provided by the distributed application to its users. Protocol synthesis reduces design costs and errors by specifying the message exchanges between the application components, as defined by the protocol specifications. In this paper, we propose a new synthesis method that generates an optimized protocol specification. Both service and protocol specifications are described using extended Petri nets. In particular, we propose an integer linear programming model that derives distributed applications with minimum communication costs. The model determines an optimal allocation of resources that minimizes communication costs. Our model can treat several reasonable cost criteria that could be used in various related application areas. Specifically, we have considered the following cost criteria: (a) the number of messages exchanged between different distributed applications, (b) the size of messages, (c) the number of messages based on frequency of execution, (d) communication channel costs, and (e) resource placement costs. An application example is given along with some experimental results.

Journal Article
TL;DR: A new execution framework is proposed that provides an extension aware of the mobility of the hosts; the resulting protocol preserves the 2PC principle and the freedom of the mobile clients and servers while minimizing the impact of unreliable wireless communication links.
Abstract: The exploding activity in the telecommunication domain and the increasing emergence of portable devices are making mobile ubiquitous computing a reality. However, many challenging issues have to be faced before users can take part in distributed computing while moving, in an efficient and quasi-transparent manner. Much research focuses on revisiting the conventional distributed computing paradigms for use in the new environment. In this paper we propose to revisit the conventional implementation of the Two-Phase Commit (2PC) protocol, which is a fundamental asset of transactional technology for ensuring consistent effects of distributed transactions. We propose a new execution framework that provides an extension aware of the mobility of the hosts. The proposed Mobility-aware 2PC (M-2PC) protocol preserves the 2PC principle and the freedom of the mobile clients and servers while minimizing the impact of unreliable wireless communication links.

Book ChapterDOI
01 Nov 2006
TL;DR: A recovery protocol is described which boosts availability, fault tolerance and performance by enabling failed network nodes to resume an active role immediately after they start recovering; the protocol is specified in terms of the procedures executed with every message and event of interest, and a correctness proof is outlined.
Abstract: We describe a recovery protocol which boosts availability, fault tolerance and performance by enabling failed network nodes to resume an active role immediately after they start recovering. The protocol is designed to work in tandem with middleware-based eager update-everywhere strategies and related group communication systems. The latter provide view synchrony, i.e., knowledge about currently reachable nodes and about the status of messages delivered by faulty and alive nodes. That enables a fast replay of missed updates, which defines a dynamic database recovery partition, thus speeding up the recovery of failed nodes, which, together with the rest of the network, may seamlessly continue to process transactions even before their recovery has completed. We specify the protocol in terms of the procedures executed with every message and event of interest, and outline a correctness proof.

Journal ArticleDOI
01 May 2006
TL;DR: A scheme to automatically identify a suitable checkpoint and recovery protocol for a given distributed application running on a given system is presented; the scheme involves a novel technique, also of independent interest, for finding the similarity between the communication patterns of two distributed applications.
Abstract: Checkpoint and recovery protocols are commonly used in distributed applications for providing fault tolerance. The performance of a checkpoint and recovery protocol is judged by the amount of computation it can save against the amount of overhead it incurs. This performance depends on different system and application characteristics, as well as protocol-specific parameters. Hence, no single checkpoint and recovery protocol works equally well for all applications, and given a distributed application and a system it will run on, it is important to choose a protocol that will give the best performance for that system and application. In this paper, we present a scheme to automatically identify a suitable checkpoint and recovery protocol for a given distributed application running on a given system. The scheme involves a novel technique for finding the similarity between the communication patterns of two distributed applications that is also of independent interest. The similarity measure is based on a graph similarity problem, for which we present a heuristic. Extensive experimental results are shown both for the graph similarity heuristic and the automatic identification scheme to show that an appropriate checkpoint and recovery protocol can be chosen automatically for a given application.
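The identification scheme reduces to a nearest-neighbour lookup over communication patterns. In the sketch below the graph-similarity heuristic, which is the paper's actual contribution, is abstracted into a parameter, and the toy similarity function is our own.

```python
# Sketch of the automatic identification scheme: score the target
# application's communication pattern against applications whose best
# checkpoint/recovery protocol is already known, and adopt the protocol of
# the most similar one. The graph-similarity heuristic is abstracted away.

def choose_protocol(app_pattern, known_apps, similarity):
    """known_apps: list of (communication_pattern, best_protocol) pairs."""
    _pattern, best_protocol = max(
        known_apps, key=lambda entry: similarity(app_pattern, entry[0]))
    return best_protocol

# Toy usage with a trivially simple similarity over communication edge sets.
jaccard = lambda a, b: len(a & b) / len(a | b)
known = [({"p0-p1", "p1-p2"}, "coordinated checkpointing"),
         ({"p0-p1"}, "message logging")]
print(choose_protocol({"p0-p1", "p1-p2"}, known, jaccard))
```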

Proceedings ArticleDOI
04 Dec 2006
TL;DR: Aiming at both minimizing message logging and localizing recovery effect, this paper proposes a strategy that forms group checkpoints around such regions and meanwhile selectively logs inter-region messages.
Abstract: This paper explores the use of locality of dependencies in large-scale distributed systems towards developing efficient checkpoint strategies. Dependencies among processes evolve into message interactions, which often spread and affect recovery dependencies and logging requirements. On the other hand, message interactions are usually localized within small sub-regions formed in space and time. Aiming at both minimizing message logging and localizing recovery effect, we propose a strategy that forms group checkpoints around such regions and meanwhile selectively logs inter-region messages. A simple and efficient Atomic Group Checkpoint (AGC) protocol is developed based on the locality information of a distributed computation, e.g., in agent communication protocol sessions in multi-agent systems. Atomicity guarantees consistency of group checkpoint and uniformity of group logging, and hence minimizes logging overhead. The correctness of the AGC protocol is analyzed and proved through a generic Checkpoint Dependency Graph (CDG) model, which captures the recovery dependency relations among checkpoints.

Proceedings Article
01 Jan 2006
TL;DR: This paper proves that the improved protocol performs in the desired manner while under modelled attacks from dishonest players, and shows how formal methods can be used by a protocol designer to achieve a better design of a complex system.
Abstract: Formal specification and verification of protocols have been credited with uncovering protocol flaws, revealing inadequacies in the protocol design of the Initial Stage and Negotiation Stage, and proving that an improved protocol performs in the desired manner while under modelled attacks from dishonest players. This also shows how formal methods can be used by a protocol designer to achieve a better design of a complex system. Formal methods can also populate an abstract concept with a more complete and consistent protocol specification. A complex system protocol can easily be specified with simplifying assumptions for a high level of protocol verification. This set of assumptions can then be used to further explore the protocol. Using formal methods for complex secure system protocol design will provide not only a better-quality protocol but could also be the first step in advancing an abstract concept to a more practical stage for development.

Proceedings ArticleDOI
24 Jul 2006
TL;DR: This paper addresses reliability issues in Web-based transactional systems characterized by stateless application servers, introducing an innovative scheme for distributed transaction management based on ad-hoc demarcation and concurrency control mechanisms.
Abstract: In this paper we address reliability issues in Web-based transactional systems. We are interested in the category of systems characterized by stateless application servers. For these systems, a framework called e-Transaction has recently been proposed, which specifies a set of desirable end-to-end reliability guarantees. Within this framework we propose an innovative distributed protocol providing those reliability guarantees in the general case of multiple, autonomous back-end databases (typical of scenarios with multiple parties involved within the same business process). Compared to existing proposals coping with the e-Transaction framework, our protocol adopts a weaker approach to failure detection, i.e. it does not rely on any assumption about the accuracy of failure detection. Hence it is suited to a wider class of distributed systems, including those systems where the level of asynchrony makes stronger approaches to failure detection infeasible in practice. To achieve this, our protocol exploits an innovative scheme for distributed transaction management (based on ad-hoc demarcation and concurrency control mechanisms), which we introduce in this paper. We also provide hints on the protocol's integration with conventional systems (e.g. database systems).

Book ChapterDOI
23 Oct 2006
TL;DR: This work proposes a Transaction-Aware Tentative Hold Protocol (taTHP) to perceive transaction context information and play a more active role in transaction coordination, and concludes that taTHP provides better efficiency and satisfactory quality of service.
Abstract: With the rapid development of the WWW, Web Services are becoming a new application model for decentralized computing based on the Internet. However, the tradeoff between consistency and resource utilization is the primary obstacle to building a transactional environment for Web Services compositions. Since it may not be acceptable for a resource to be locked exclusively by an unknown Internet user, we propose a Transaction-Aware Tentative Hold Protocol (taTHP) to perceive transaction context information and play a more active role in transaction coordination. With the capability of forecasting the will-succeed transactions within a fairly small set of candidates, taTHP is able to achieve higher resource utilization with fewer complaints about the transaction coordination. Finally, a comprehensive comparison is carried out to demonstrate the improvement of the proposed protocol, and it can be concluded from the results that taTHP provides better efficiency and satisfactory quality of service.

Journal ArticleDOI
01 Dec 2006
TL;DR: A formal coordination framework is presented for applying THP in conjunction with the two-phase commit protocol to the problem in which service providers independently manage resources and clients seek to acquire the resources from multiple providers as a single atomic transaction.
Abstract: Web services are emerging as an effective means for carrying out automated transactions between multiple business parties. While several specific protocols have been discussed to address the problem of coordinating web services-enabled business transactions, we consider the tentative hold protocol (THP), which allows the placement of tentative holds on business resources prior to actual transactions in order to provide increased flexibility in coordination. In this paper, we present a formal coordination framework for applying THP in conjunction with the two-phase commit protocol to the problem in which service providers independently manage resources and clients seek to acquire the resources from multiple providers as a single atomic transaction. The proposed framework facilitates the performance optimization of THP through effective parameterization with the notions of overhold size and hold duration. Subsequently, a detailed analysis is carried out to obtain an efficient method that can optimize the performance by adaptively determining the hold duration. The simulation results show that the proposed adaptive approach yields a significant improvement over other non-adaptive policies.
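A tentative hold with the two parameters the abstract names, overhold size and hold duration, might look like this; a minimal sketch under our reading, not the paper's specification.

```python
# Illustrative model of a tentative hold: holds are non-exclusive and
# expire, so a provider may grant more holds than real capacity (the
# overhold size), tuning the balance between resource utilization and
# failed commits. Parameter names are our assumptions.

import time

class Resource:
    def __init__(self, capacity, overhold, hold_duration_s):
        self.limit = capacity + overhold  # tentative holds may overbook
        self.hold_duration_s = hold_duration_s
        self.holds = {}                   # client id -> expiry timestamp

    def request_hold(self, client):
        now = time.time()
        # Drop expired holds before deciding on the new request.
        self.holds = {c: t for c, t in self.holds.items() if t > now}
        if len(self.holds) < self.limit:
            self.holds[client] = now + self.hold_duration_s
            return True   # hold granted; 2PC runs later for the actual buy
        return False

r = Resource(capacity=2, overhold=1, hold_duration_s=30.0)
print([r.request_hold(c) for c in ("a", "b", "c", "d")])  # [True, True, True, False]
```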

Journal ArticleDOI
01 Jul 2006
TL;DR: Fractal is the first application level protocol adaptation framework that considers the real deployment problem using mobile code and CDN, and evaluation results show the proposed adaptive approach performs very well on both the client side and server side.
Abstract: The rapid growth of heterogeneous devices and diverse networks in our daily life makes it very difficult, if not impossible, to build a one-size-fits-all application or protocol which can run well in such a dynamic environment. Adaptation has been considered as a general approach to address the mismatch problem between clients and servers; however, we envision that the missing part, which is also a big challenge, is how to inject and deploy adaptation functionality into the environment. In this paper we propose a novel application-level protocol adaptation framework, Fractal, which uses mobile code technology for protocol adaptation and leverages existing content distribution networks (CDNs) for deploying protocol adaptors (mobile code). To the best of our knowledge, Fractal is the first application-level protocol adaptation framework that considers the real deployment problem using mobile code and CDNs. To evaluate the proposed framework, we have implemented two case studies: an adaptive message encryption protocol and an adaptive communication optimization protocol. In the adaptive message encryption protocol, Fractal always chooses a proper encryption algorithm according to different application requirements and device characteristics. The adaptive communication optimization protocol is capable of dynamically selecting the best of four communication protocols, including direct sending, Gzip, Bitmap, and vary-sized blocking, for different hardware and network configurations. In comparison with other adaptation approaches, evaluation results show the proposed adaptive approach performs very well on both the client side and the server side. For some clients, the total communication overhead is reduced by 41% compared with no protocol adaptation mechanism, and by 14% compared with the static protocol adaptation approach.

Journal Article
TL;DR: This paper surveys the solutions proposed for mobile transaction commitment and outlines how the conventional commit protocols are revisited in order to fit the needs of a mobile environment.
Abstract: Mobile computing has attracted intensive research during recent years. Many papers revisit the conventional implementation of distributed computing paradigms for use in this new environment. A key paradigm of transaction processing is transaction commitment. A commitment mechanism such as the Two-Phase Commit (2PC) protocol, a fundamental asset of transactional technology (and its variants), ensures consistent effects of a distributed transaction. This paper surveys the solutions proposed for mobile transaction commitment and outlines how the conventional commit protocols are revisited in order to fit the needs of a mobile environment. The different approaches try to deal with the slow and unreliable wireless links, the lightweight devices and their limited resources, the frequent disconnections, and the movement of mobile devices.

Journal ArticleDOI
TL;DR: The results show that the protocol can significantly reduce the hit delay while maintaining the high hit rate and also the congestion problems such as query loss and the peer overloading problem can be effectively alleviated.
Abstract: Peer-to-Peer (P2P) file sharing is the hottest, fastest growing application on the Internet. When designing Gnutella-like applications, the most important consideration is the scalability problem, because P2P systems typically support millions of users online concurrently. Gnutella suffers from poor scaling due to its flooding-based search, resulting in excessive amounts of repeated query messages. Therefore, a good search protocol plays an important role in a system's scalability. However, congestion, due to large query loads from users, definitely impacts the performance of search protocols, and this consideration has received little attention from the research community. In this paper, we propose a congestion-aware search protocol for unstructured P2P networks. Our protocol consists of three parts: Congestion-Aware Forwarding, Random Early Stop, and Emergency Signaling. The aim of our protocol is to integrate congestion control and object discovery functionality so that the search protocol can achieve good performance under congested networks and flash crowds. We perform extensive simulations to study our proposed protocol. The results show that our protocol can significantly reduce the hit delay while maintaining a high hit rate, and that congestion problems such as query loss and peer overloading can be effectively alleviated.
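The Random Early Stop component suggests a RED-style forwarding rule; the sketch below is an assumption-laden illustration, with invented thresholds and occupancy signal, not the paper's algorithm.

```python
# Hedged sketch of Random Early Stop: forward a query unconditionally while
# the local queue is lightly loaded, stop it unconditionally when the queue
# is saturated, and in between stop it with a probability that grows with
# occupancy, in the spirit of RED queues. All thresholds are illustrative.

import random

def should_forward(queue_len, max_queue, low=0.5, high=0.9):
    occupancy = queue_len / max_queue
    if occupancy < low:
        return True        # uncongested: always forward
    if occupancy >= high:
        return False       # congested: stop the query early
    p_stop = (occupancy - low) / (high - low)
    return random.random() >= p_stop

print(should_forward(queue_len=30, max_queue=100))  # True
print(should_forward(queue_len=95, max_queue=100))  # False
```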

Proceedings ArticleDOI
18 Dec 2006
TL;DR: A new commit protocol, BA-1.5PC, is presented, which is well tailored to such distributed storage environments as autonomous disks that use a primary-backup storage schema and significantly outperforms several well-known commit protocols in terms of transaction throughput.
Abstract: Advanced data engineering applications require a large-scale storage system that is both scalable and dependable. In such a system, an atomic commit protocol becomes imperative to ensure the consistency and atomicity of transactions. In this paper we present a new commit protocol, BA-1.5PC, which is well tailored to such distributed storage environments as autonomous disks [20] that use a primary-backup storage schema. The protocol achieves an efficient commit process while also guaranteeing high dependability by combining several approaches: (1) a low-overhead log mechanism that eliminates blocking disk I/Os, (2) removing the voting phase from commit processing to gain a faster commit process, and (3) a primary-backup assisted recovery strategy to enhance dependability in the presence of possible failures, so that a master failure in the decision phase will not block prepared cohorts of a transaction. Experiments were carried out on a trial version of an autonomous disks system to verify its efficiency. The results indicate that this protocol significantly outperforms several well-known commit protocols in terms of transaction throughput.

01 Jan 2006
TL;DR: This paper presents a non-blocking atomic commitment protocol, denoted ANB-CLL (Asynchronous Non-Blocking Coordinator Logical Log), that drastically reduces the cost of distributed transaction commitment in terms of time delay and message complexity, and shows that the resulting protocol is more efficient than all other non-blocking protocols proposed in the literature.
Abstract: In distributed transactional systems, an Atomic Commitment Protocol (ACP) is used to ensure the atomicity of distributed transactions even in the presence of failures. An ACP is said to be non-blocking if it allows correct participants to decide on the transaction despite the failure of others. Several non-blocking protocols have been proposed in the literature. However, none of these protocols is able to combine high efficiency during normal processing with fault tolerance (i.e. non-blocking). In this paper, we present a non-blocking atomic commitment protocol, denoted ANB-CLL (Asynchronous Non-Blocking Coordinator Logical Log), that drastically reduces the cost of distributed transaction commitment in terms of time delay and message complexity. Performance analysis shows that the resulting protocol is more efficient than all other non-blocking protocols proposed in the literature. An important characteristic of ANB-CLL is that it can be applied to commercial transactional systems that are not 2PC compliant. To achieve non-blocking, ANB-CLL uses a uniform consensus protocol as a termination protocol in an asynchronous system augmented with an unreliable failure detector, and in which processes may crash and recover. By supporting recovery, we study, for the first time, the problem of non-blocking atomic commitment in asynchronous systems based on a crash-recovery model of computation.

Book ChapterDOI
01 Jan 2006
TL;DR: High availability and data consistency are critically important for enterprise applications, such as those for e-commerce and e-government, which need to provide continuous service, 24 hours a day, 7 days a week.
Abstract: Enterprise applications, such as those for e-commerce and e-government, are becoming more and more critical to our economy and society. Such applications need to provide continuous service, 24 hours a day, 7 days a week. Any disruption in service, including both planned and unplanned downtime, can result in negative financial and social effects. Consequently, high availability and data consistency are critically important for enterprise applications. Enterprise applications are typically implemented as three-tier applications. A three-tier application consists of clients in the front tier, servers that perform the business logic processing in the middle tier, and database systems that store the application data in the backend tier, as shown in Figure 1. Within the middle tier, a server application typically uses a transaction processing programming model. When a server application receives a client’s request, it initiates one or more transactions, which often are distributed transactions. When it finishes processing the request, the server application commits the transaction, stores the resulting state in the backend database, and returns the result to the client. A fault in the middle tier might cause the abort of a transaction and/or prevent the client from knowing the