
Showing papers on "Concurrency control published in 2003"


Journal ArticleDOI
26 Oct 2003
TL;DR: Argues that the difficulties of concurrent programming can be addressed by moving to a declarative style of concurrency control in which programmers directly indicate the safety properties they require; the resulting system is easier for mainstream programmers to use, prevents lock-based priority-inversion and deadlock problems, and can offer performance advantages.
Abstract: Concurrent programming is notoriously difficult. Current abstractions are intricate and make it hard to design computer systems that are reliable and scalable. We argue that these problems can be addressed by moving to a declarative style of concurrency control in which programmers directly indicate the safety properties that they require. In our scheme the programmer demarks sections of code which execute within lightweight software-based transactions that commit atomically and exactly once. These transactions can update shared data, instantiate objects, invoke library features and so on. They can also block, waiting for arbitrary boolean conditions to become true. Transactions which do not access the same shared memory locations can commit concurrently. Furthermore, in general, no performance penalty is incurred for memory accesses outside transactions. We present a detailed design of this proposal along with an implementation and evaluation. We argue that the resulting system (i) is easier for mainstream programmers to use, (ii) prevents lock-based priority-inversion and deadlock problems and (iii) can offer performance advantages.
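The optimistic, version-validated commit the abstract describes can be sketched in a few lines. This is an illustrative simplification, not the paper's implementation: `TVar`, `atomic`, and the global commit lock are invented names, the real system commits without any global lock, and blocking on boolean conditions is omitted here.

```python
import threading

class TVar:
    """A transactional variable: a value plus a commit version."""
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()   # serializes only the commit step (simplification)

def atomic(tx):
    """Run tx(read, write) as an optimistic transaction: buffer writes,
    validate the read set at commit time, and retry if a concurrent
    transaction committed first."""
    while True:
        read_set, write_set = {}, {}

        def read(var):
            if var in write_set:
                return write_set[var]
            read_set.setdefault(var, var.version)
            return var.value

        def write(var, value):
            write_set[var] = value

        result = tx(read, write)
        with _commit_lock:
            if all(var.version == v for var, v in read_set.items()):
                for var, value in write_set.items():
                    var.value = value
                    var.version += 1
                return result
        # validation failed: another transaction touched our read set; retry
```

For example, a transfer between two `TVar` balances either commits atomically and exactly once or retries; it never exposes a partially updated state to other transactions.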

735 citations


Proceedings ArticleDOI
16 Jun 2003
TL;DR: It is demonstrated that distributed versioning scales better than previous methods that provide consistency, and that the benefits of relaxing consistency are limited, except for the conflict-heavy TPC-W ordering mix.
Abstract: Dynamic content Web sites consist of a front-end Web server, an application server and a back-end database. In this paper we introduce distributed versioning, a new method for scaling the back-end database through replication. Distributed versioning provides both the consistency guarantees of eager replication and the scaling properties of lazy replication. It does so by combining a novel concurrency control method based on explicit versions with conflict-aware query scheduling that reduces the number of lock conflicts. We evaluate distributed versioning using three dynamic content applications: the TPC-W e-commerce benchmark with its three workload mixes, an auction site benchmark, and a bulletin board benchmark. We demonstrate that distributed versioning scales better than previous methods that provide consistency. Furthermore, we demonstrate that the benefits of relaxing consistency are limited, except for the conflict-heavy TPC-W ordering mix.

105 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a replicated database architecture that employs the new atomic broadcast primitive in such a way that communication and transaction processing are fully overlapped, providing high performance without relaxing transaction correctness.
Abstract: Atomic broadcast primitives are often proposed as a mechanism to allow fault-tolerant cooperation between sites in a distributed system. Unfortunately, the delay incurred before a message can be delivered makes it difficult to implement high performance, scalable applications on top of atomic broadcast primitives. Recently, a new approach has been proposed for atomic broadcast which, based on optimistic assumptions about the communication system, reduces the average delay for message delivery to the application. We develop this idea further and show how applications can take even more advantage of the optimistic assumption by overlapping the coordination phase of the atomic broadcast algorithm with the processing of delivered messages. In particular, we present a replicated database architecture that employs the new atomic broadcast primitive in such a way that communication and transaction processing are fully overlapped, providing high performance without relaxing transaction correctness.

90 citations


Journal ArticleDOI
TL;DR: Presents a method and tool support for testing concurrent Java components, and discusses their application to more than 20 concurrent components, a number of which are sourced from industry and were found to contain faults.
Abstract: Concurrent programs are hard to test due to the inherent nondeterminism. This paper presents a method and tool support for testing concurrent Java components. Tool support is offered through ConAn (Concurrency Analyser), a tool for generating drivers for unit testing Java classes that are used in a multithreaded context. To obtain adequate controllability over the interactions between Java threads, the generated driver contains threads that are synchronized by a clock. The driver automatically executes the calls in the test sequence in the prescribed order and compares the outputs against the expected outputs specified in the test sequence. The method and tool are illustrated in detail on an asymmetric producer-consumer monitor. Their application to testing over 20 concurrent components, a number of which are sourced from industry and were found to contain faults, is presented and discussed.
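The clock-based synchronization that ConAn's generated drivers rely on can be sketched as follows; `LogicalClock` and `run_schedule` are hypothetical names for illustration, not ConAn's API:

```python
import threading
import time

class LogicalClock:
    """Shared logical clock that releases driver threads at prescribed
    ticks, giving the test controllability over thread interleavings."""
    def __init__(self):
        self.now = 0
        self.cond = threading.Condition()

    def await_tick(self, t):
        with self.cond:
            while self.now < t:
                self.cond.wait()

    def tick(self):
        with self.cond:
            self.now += 1
            self.cond.notify_all()

def run_schedule(steps):
    """steps: list of (tick, callable). Each callable runs in its own
    thread once the clock reaches its tick; results are collected in
    tick order for comparison against expected outputs."""
    clock, results = LogicalClock(), []

    def worker(t, call):
        clock.await_tick(t)
        results.append(call())

    threads = [threading.Thread(target=worker, args=s) for s in steps]
    for th in threads:
        th.start()
    for _ in range(max(t for t, _ in steps)):
        clock.tick()
        time.sleep(0.05)   # let threads released at this tick finish their call
    for th in threads:
        th.join()
    return results
```

Each driver thread blocks until the clock reaches its prescribed tick, so a test sequence can force calls on the component under test into a specific interleaving and compare the outputs against the expected ones.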

85 citations


Proceedings ArticleDOI
John Regehr1, Alastair Reid1, Kirk Webb1, Michael Parker1, Jay Lepreau1 
03 Dec 2003
TL;DR: Presents a new way to look at real-time and embedded software: as a collection of execution environments created by a hierarchy of schedulers. The goal is to create systems that are evolvable: easier to modify in response to changing requirements than systems created using traditional techniques.
Abstract: We have developed a new way to look at real-time and embedded software: as a collection of execution environments created by a hierarchy of schedulers. Common schedulers include those that run interrupts, bottom-half handlers, threads, and events. We have created algorithms for deriving response times, scheduling overheads, and blocking terms for tasks in systems containing multiple execution environments. We have also created task scheduler logic, a formalism that permits checking systems for race conditions and other errors. Concurrency analysis of low-level software is challenging because there are typically several kinds of locks, such as thread mutexes and disabling interrupts, and groups of cooperating tasks may need to acquire some, all or none of the available types of locks to create correct software. Our high-level goal is to create systems that are evolvable: they are easier to modify in response to changing requirements than are systems created using traditional techniques. We have applied our approach to two case studies in evolving software for networked sensor nodes.

67 citations


Book ChapterDOI
03 Sep 2003
TL;DR: This paper investigates the problem of distributed monitoring of concurrent and asynchronous systems, with application to distributed fault management in telecommunications networks, and combines two techniques: compositional unfoldings to handle concurrency properly, and a variant of graphical algorithms and belief propagation.
Abstract: Developing applications over a distributed and asynchronous architecture without the need for synchronization services is going to become a central track for distributed computing. This research track will be central for the domain of autonomic computing and self-management. Distributed constraint solving, distributed observation, and distributed optimization, are instances of such applications. This paper is about distributed observation: we investigate the problem of distributed monitoring of concurrent and asynchronous systems, with application to distributed fault management in telecommunications networks. Our approach combines two techniques: compositional unfoldings to handle concurrency properly, and a variant of graphical algorithms and belief propagation, originating from statistics and information theory.

65 citations


Journal ArticleDOI
TL;DR: Transactional lock removal can dynamically eliminate synchronization operations and achieve transparent transactional execution by treating lock-based critical sections as lock-free optimistic transactions.
Abstract: Although lock-based critical sections are the synchronization method of choice, they have significant performance limitations and lack certain properties, such as failure atomicity and stability. Addressing both these limitations requires considerable software overhead. Transactional lock removal can dynamically eliminate synchronization operations and achieve transparent transactional execution by treating lock-based critical sections as lock-free optimistic transactions.

49 citations


Journal ArticleDOI
TL;DR: An optimistic real-time commit protocol, double space commit (2SC), is proposed; it is specifically designed for high-performance distributed real-time transactions and is easy to incorporate into current concurrency control protocols.

38 citations


Journal ArticleDOI
TL;DR: Defines absolute and relative temporal consistency from the perspective of transactions for discrete data objects, and addresses the scheduling of transactions among the three types of activities so that both timing requirements can be met.
Abstract: A real-time database system contains base data items which record and model a physical, real-world environment. For better decision support, base data items are summarized and correlated to derive views. These base data and views are accessed by application transactions to generate the ultimate actions taken by the system. As the environment changes, updates are applied to base data, which subsequently trigger view recomputations. There are thus three types of activities: base data update, view recomputation, and transaction execution. In a real-time database system, two timing constraints need to be enforced. We require that transactions meet their deadlines (transaction timeliness) and read fresh data (data timeliness). In this paper, we define the concept of absolute and relative temporal consistency from the perspective of transactions for discrete data objects. We address the important issue of transaction scheduling among the three types of activities such that the two timing requirements can be met. We also discuss how a real-time database system should be designed to enforce different levels of temporal consistency.
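The two timing requirements on data can be stated as simple predicates. The sketch below follows the usual real-time database formulation; the function names and the `avi`/`rvi` validity-interval parameters are illustrative, not the paper's notation:

```python
def absolutely_consistent(ts, now, avi):
    """A data item read at time `now` is absolutely consistent if its
    last-update timestamp `ts` lies within its absolute validity
    interval `avi` (the data is fresh with respect to the environment)."""
    return now - ts <= avi

def relatively_consistent(timestamps, rvi):
    """A set of items used together is relatively consistent if their
    update timestamps differ by no more than the relative validity
    interval `rvi` (they describe roughly the same moment)."""
    return max(timestamps) - min(timestamps) <= rvi
```

A scheduler can then refuse to start, or can restart, a transaction whose read set fails either check, which is one way the transaction-timeliness and data-timeliness requirements interact.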

33 citations


Journal ArticleDOI
TL;DR: The dOPT algorithm for maintaining consistency is proposed, which transforms updates as they are transmitted among sites and creates the illusion that each participant executed the same sequence of operations, with the same results.
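The transformation step at the heart of dOPT can be illustrated for concurrent character inserts. This is a minimal sketch of the idea only, not Ellis and Gibbs's full algorithm, which also maintains state vectors and a request log:

```python
def transform_insert(op_a, op_b):
    """Transform insert op_a = (pos, ch, site) against a concurrent
    insert op_b so that op_a can be applied after op_b. Site ids break
    position ties, which is what lets every site converge."""
    pos_a, ch_a, site_a = op_a
    pos_b, _, site_b = op_b
    if pos_a < pos_b or (pos_a == pos_b and site_a < site_b):
        return (pos_a, ch_a, site_a)        # op_b landed to our right
    return (pos_a + 1, ch_a, site_a)        # op_b shifted us one position right

def apply_insert(text, op):
    pos, ch, _ = op
    return text[:pos] + ch + text[pos:]
```

Two sites that concurrently insert at the same position apply each other's transformed operation and end up with identical documents, which is the convergence illusion the TL;DR describes.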

30 citations


Book ChapterDOI
18 Feb 2003
TL;DR: This paper introduces a novel way of handling concurrency in a real-time database system, where concurrency is modeled as an aspect crosscutting the system.
Abstract: Increasing complexity of real-time systems, and demands for enabling their configurability and tailorability are strong motivations for applying new software engineering principles, such as aspect-oriented and component-based development. In this paper we introduce a novel concept of aspectual component-based real-time system development. The concept is based on a design method that assumes decomposition of real-time systems into components and aspects, and provides a real-time component model that supports the notion of time and temporal constraints, space and resource management constraints, and composability. We anticipate that the successful applications of the proposed concept should have a positive impact on real-time system development in enabling efficient configuration of real-time systems, improved reusability and flexibility of real-time software, and modularization of crosscutting concerns. We provide arguments for this assumption by presenting an application of the proposed concept on the design and development of a configurable embedded real-time database, called COMET. Furthermore, using the COMET system as an example, we introduce a novel way of handling concurrency in a real-time database system, where concurrency is modeled as an aspect crosscutting the system.

Patent
16 May 2003
TL;DR: In this article, the authors propose a concurrency control method for high performance database systems, which includes receiving a database access request message from a transaction, generating an element that corresponds to the access request message, and posting it to a read-commit queue.
Abstract: A system and method for concurrency control in high performance database systems. The method generally includes receiving a database access request message from a transaction, then generating an element that corresponds to the access request message. The element type is that of a read element, commit element, validated element, or restart element. The element is then posted to a read-commit (RC) queue. If the element is a commit element, an intervening validation of the transaction is performed. Upon the transaction passing validation, the requested database access is performed.

Proceedings ArticleDOI
06 Oct 2003
TL;DR: A dataflow analysis technique for identifying schedules of transaction execution aimed at revealing concurrency faults of this nature is presented, along with techniques for controlling the DBMS or the application so that execution of transaction sequences follows generated schedules.
Abstract: Database application programs are often designed to be executed concurrently by many users. By grouping related database queries into transactions, DBMS (database management system) can guarantee that each transaction satisfies the well-known ACID properties: atomicity, consistency, isolation, and durability. However, if a database application is decomposed into transactions in an incorrect manner, the application may fail when executed concurrently due to potential offline concurrency problems. This paper presents a dataflow analysis technique for identifying schedules of transaction execution aimed at revealing concurrency faults of this nature, along with techniques for controlling the DBMS or the application so that execution of transaction sequences follows generated schedules. The techniques have been integrated into AGENDA, a tool set for testing relational database application programs. Preliminary empirical evaluation is presented.

Journal ArticleDOI
TL;DR: This paper develops an approach that can handle data inconsistencies and is thus inherently much more scalable; it first constructs a dynamic shared ontology by analyzing the correspondence graph that relates the heterogeneous classification schemes.
Abstract: Aggregate views are commonly used for summarizing information held in very large databases such as those encountered in data warehousing, large scale transaction management, and statistical databases. Such applications often involve distributed databases that have developed independently and therefore may exhibit incompatibility, heterogeneity, and data inconsistency. We are here concerned with the integration of aggregates that have heterogeneous classification schemes where local ontologies, in the form of such classification schemes, may be mapped onto a common ontology. In previous work, we have developed a method for the integration of such aggregates; the method previously developed is efficient, but cannot handle innate data inconsistencies that are likely to arise when a large number of databases are being integrated. In this paper, we develop an approach that can handle data inconsistencies and is thus inherently much more scalable. In our new approach, we first construct a dynamic shared ontology by analyzing the correspondence graph that relates the heterogeneous classification schemes; the aggregates are then derived by minimization of the Kullback-Leibler information divergence using the EM (Expectation-Maximization) algorithm. Thus, we may assess whether global queries on such aggregates are answerable, partially answerable, or unanswerable in advance of computing the aggregates themselves.

Book ChapterDOI
09 Sep 2003
TL;DR: This work shows that a combination of a "value-based" latch pool and previous high concurrency locking techniques can solve the concurrency control problem in the maintenance of materialized aggregate join views.
Abstract: The maintenance of materialized aggregate join views is a well-studied problem. However, to date the published literature has largely ignored the issue of concurrency control. Clearly immediate materialized view maintenance with transactional consistency, if enforced by generic concurrency control mechanisms, can result in low levels of concurrency and high rates of deadlock. While this problem is superficially amenable to well-known techniques such as fine-granularity locking and special lock modes for updates that are associative and commutative, we show that these previous techniques do not fully solve the problem. We extend previous high concurrency locking techniques to apply to materialized view maintenance, and show how this extension can be implemented even in the presence of indices on the materialized view.

Proceedings ArticleDOI
22 Apr 2003
TL;DR: The Java implementation of process networks is suitable for execution on a single computer, a cluster of servers on a high-speed LAN, or geographically dispersed servers on the Internet.
Abstract: Kahn (1974, 1977) defined a formal model for networks of processes that communicate through channels carrying streams of data tokens. His mathematical results show the necessary conditions for an implementation to be determinate, that is, for the results of the computation to be identical whether the processes are executed sequentially, concurrently, or in parallel. In our Java implementation channels enforce blocking reads and each process has its own thread in order to ensure determinacy and avoid deadlock. The network connections required to maintain the communication channels between processes executing on separate servers are automatically established when parts of the program graph are distributed to other servers by Java object serialization. Our Java implementation of process networks is suitable for execution on a single computer, a cluster of servers on a high-speed LAN, or geographically dispersed servers on the Internet.
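The core of such an implementation, translated from the paper's Java design into a Python sketch (one thread per process, channels with blocking reads), might look like this; the function names are illustrative:

```python
import threading
import queue

def start_process(fn, inputs, outputs):
    """Run fn as a Kahn process in its own thread. Channels are queues
    whose blocking get() enforces the blocking-read discipline that
    guarantees determinacy."""
    t = threading.Thread(target=fn, args=(inputs, outputs), daemon=True)
    t.start()
    return t

def doubler(ins, outs):
    """Example process: doubles every token on its input stream.
    None is used here as an end-of-stream sentinel."""
    while True:
        x = ins[0].get()          # blocking read on the input channel
        outs[0].put(None if x is None else 2 * x)
        if x is None:
            return
```

Because each process only ever blocks on a read from a specific channel, the output stream is the same no matter how the threads are scheduled, which is Kahn's determinacy condition.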

Proceedings ArticleDOI
01 Dec 2003
TL;DR: This work explores issues related to the control and verification of similar agents that interact through events broadcast over a network and shows how the state explosion problem inherent to many concurrent systems is not as problematic in this setting.
Abstract: We explore issues related to the control and verification of similar agents that interact through events broadcast over a network. The similar agents are modeled as discrete-event systems that have identical structure. System events are partitioned into global and private events that respectively affect all agents or exactly one agent. We show how the state explosion problem inherent to many concurrent systems is not as problematic in this setting. We give a procedure to test if these systems are globally deadlock-free or nonblocking. We explore control and verification problems related to both local and global specifications on these systems. For each module there is exactly one controller and all controllers enforce the same control policy. Necessary and sufficient conditions for achieving local and global specifications in this setting are identified.

Journal ArticleDOI
TL;DR: This paper presents an efficient distributed algorithm to detect if a node is part of a knot in a distributed graph and finds exactly which nodes are involved in the knot.
Abstract: Knot detection in a distributed graph is an important problem and finds applications in deadlock detection in several areas such as store-and-forward networks, distributed simulation, and distributed database systems. This paper presents an efficient distributed algorithm to detect if a node is part of a knot in a distributed graph. The algorithm requires 2e messages and a delay of 2(d+1) message hops to detect if a node in a distributed graph is in a knot (here, e is the number of edges in the reachable part of the distributed graph and d is its diameter). A significant advantage of this algorithm is that it not only detects if a node is involved in a knot, but also finds exactly which nodes are involved in the knot. Moreover, if the node is not involved in a knot, but is only involved in a cycle, then it finds the nodes that are in a cycle with that node. We illustrate the working of the algorithm with examples. The paper ends with a discussion on how the information about the nodes involved in the knot can be used for deadlock resolution and also on the performance of the algorithm.
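For illustration, the property the distributed algorithm detects can be stated centrally: a node is in a knot iff it can reach at least one node and every node it can reach can reach it back. The sketch below is only this definition, not the paper's 2e-message distributed algorithm:

```python
def reachable(graph, start):
    """All nodes reachable from start by following one or more edges."""
    seen, stack = set(), list(graph.get(start, []))
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph.get(n, []))
    return seen

def in_knot(graph, node):
    """node is in a knot iff its reachable set is nonempty and every
    reachable node can reach node back (a terminal strongly connected
    component, which in a wait-for graph means deadlock)."""
    r = reachable(graph, node)
    return bool(r) and all(node in reachable(graph, v) for v in r)
```

Note the distinction the paper draws: a node on a cycle that has an escape edge is not in a knot, and the sketch reflects that.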

Journal Article
TL;DR: The system utilizes the incremental transmission to decrease the transmitted-data quantum of solid model, improves the graphics interaction capability under Web interface by extending Java3D data structure, and performs flexible concurrency control based on conflict prediction tokens and an operation filter.

Journal ArticleDOI
TL;DR: The present study examines detection of conflicts based on enhanced local processing for distributed concurrency control; in the proposed "edge detection" approach, a graph-based resolution of access conflicts is adopted.
Abstract: Distributed locking is commonly adopted for performing concurrency control in distributed systems. It incorporates additional steps for handling deadlocks. This activity is carried out by methods based on wait-for-graphs or probes. The present study examines detection of conflicts based on enhanced local processing for distributed concurrency control. In the proposed "edge detection" approach, a graph-based resolution of access conflicts has been adopted. The technique generates a uniform wait-for precedence order at distributed sites for transactions to execute. The earlier methods based on serialization graph testing are difficult to implement in a distributed environment. The edge detection approach is a fully distributed approach. It presents a unified technique for locking and deadlock detection exercises. The technique eliminates many deadlocks without incurring message overheads.

Proceedings ArticleDOI
09 Jun 2003
TL;DR: Defines an installation graph of operations in an execution, an ordering significantly weaker than the conflict ordering of concurrency control, which explains recoverable system state in terms of which operations are considered installed; this explanation and the operations replayed during recovery form an invariant that is the contract between normal operation and recovery.
Abstract: Our goal is to understand redo recovery. We define an installation graph of operations in an execution, an ordering significantly weaker than conflict ordering from concurrency control. The installation graph explains recoverable system state in terms of which operations are considered installed. This explanation and the set of operations replayed during recovery form an invariant that is the contract between normal operation and recovery. It prescribes how to coordinate changes to system components such as the state, the log, and the cache. We also describe how widely used recovery techniques are modeled in our theory, and why they succeed in providing redo recovery.

01 Dec 2003
TL;DR: This paper introduces new B+-tree algorithms in which tree-structure modifications such as page splits or merges are executed as atomic actions and database records are identified by their primary keys.
Abstract: The B+-tree is the most widely used index structure in current commercial database systems. This paper introduces new B+-tree algorithms in which tree-structure modifications such as page splits or merges are executed as atomic actions. A B+-tree structure modification, once executed to completion, will never be undone, regardless of whether the transaction that triggered it commits or aborts later on. In restart recovery from a system crash, the redo pass of the recovery algorithm will always produce a structurally consistent B+-tree, on which undo operations by backward-rolling transactions can be performed. A database transaction can contain any number of operations of the form “fetch the first (or next) matching record”, “insert a record”, or “delete a record”, where database records are identified by their primary keys. Repeatable-read-level isolation for transactions is achieved by key-range locking.
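Next-key (key-range) locking, which the abstract credits for repeatable-read isolation, can be sketched as follows; the class, method names, and single-lock-mode API are illustrative simplifications, not the paper's algorithms:

```python
import bisect

class KeyRangeLocks:
    """Next-key locking sketch: an operation on key k locks the smallest
    existing key >= k, so that lock also covers the open gap below it
    and protects a range reader against phantom inserts."""
    def __init__(self, keys):
        self.keys = sorted(keys)
        self.owner = {}               # lock target -> transaction id

    def lock_range(self, txn, key):
        i = bisect.bisect_left(self.keys, key)
        target = self.keys[i] if i < len(self.keys) else "+inf"
        holder = self.owner.get(target)
        if holder is not None and holder != txn:
            return False              # conflicting range lock held
        self.owner[target] = txn
        return True
```

Because an insert into the gap below key 20 must first lock 20, a transaction that has read that range holds the same lock target and cannot observe phantoms.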

Proceedings ArticleDOI
24 Nov 2003
TL;DR: This paper shows how the special syntactical constraints of some classes of resource allocation systems can help in developing specific implementations to compute siphons in a very efficient way.
Abstract: Siphons are related to liveness properties of Petri net models. This relation is strong in the case of resource allocation systems (RAS). Siphons can be used in these systems both to characterize and to prevent/avoid deadlock situations. However, the computation of these structural components can be very time consuming or even impossible. Moreover, although in general the complete enumeration of the set of minimal siphons must be avoided (there can exist an exponential number of such components), some deadlock prevention methods rely on its (complete or partial) computation and enumeration. In the paper we show how the special syntactical constraints of some classes of resource allocation systems (we concentrate on S4PR) can help in developing specific implementations to compute siphons in a very efficient way.
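The defining property of a siphon is easy to state and check for a candidate place set; the sketch below is only the definition, not the efficient S4PR-specific computation the paper develops:

```python
def is_siphon(S, pre, post):
    """S: candidate set of places. pre[t]/post[t]: input/output place
    sets of transition t. S is a siphon iff every transition that puts
    a token into S also consumes one from S, so once S is empty it can
    never regain a token (the structural root of deadlock in a RAS)."""
    return all(pre[t] & S for t in pre if post[t] & S)
```

For deadlock prevention, one then looks for minimal siphons that can become empty and adds control places to keep them marked.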

Book
Jörg Kienzle1
01 Jan 2003
TL;DR: This chapter introduces a new transaction model called Open Multithreaded Transactions [KRS01] that supports competitive and cooperative concurrency, and integrates well with the concurrency features found in modern programming languages.
Abstract: This chapter introduces a new transaction model called Open Multithreaded Transactions [KRS01] that supports competitive and cooperative concurrency, and integrates well with the concurrency features found in modern programming languages.

Journal ArticleDOI
TL;DR: An optimistic commit protocol 2LC (two-Level Commit) is proposed, which allows transactions to optimistically access the locked data in a controlled manner, which reduces the data inaccessibility and priority inversion inherent and undesirable in distributed real-time database systems.
Abstract: Ramamritham gives three common types of constraints for the execution history of concurrent transactions. This paper extends the constraints and gives a fourth type of constraint. Then the weak commit dependency and abort dependency between transactions, caused by data access conflicts, are analyzed. Based on the analysis, an optimistic commit protocol 2LC (two-Level Commit) is proposed, which is specially designed for the distributed real-time domain. It allows transactions to optimistically access locked data in a controlled manner, which reduces the data inaccessibility and priority inversion inherent and undesirable in distributed real-time database systems. Furthermore, if a prepared transaction is aborted, the transactions in its weak commit dependency set will execute as normal according to 2LC. Extensive simulation experiments have been performed to compare the performance of 2LC with that of the base protocol, the Permits Reading Of Modified Prepared-data for Timeliness (PROMPT) protocol, and the Deadline-Driven Conflict Resolution (DDCR) protocol. The simulation results show that 2LC is effective in reducing the number of missed transaction deadlines. Furthermore, it is easy to incorporate into existing concurrency control protocols.

Journal ArticleDOI
TL;DR: A comparison study reveals that the MVCC-SFBVC scheme outperforms all other investigated concurrency control schemes suitable for mobile database systems.
Abstract: Different isolation levels are required to ensure various degrees of data consistency and currency to read-only transactions. Current definitions of isolation levels such as Conflict Serializability, Update Serializability or External Consistency/Update Consistency are not appropriate for processing read-only transactions since they lack any currency guarantees. To resolve this problem, we propose four new isolation levels which incorporate data consistency and currency guarantees. Further, we present efficient implementations of the proposed isolation levels. Our concurrency control protocols are envisaged to be used in a hybrid mobile data delivery environment in which broadcast push technology is utilized to disseminate database objects to a large number of mobile clients and conventional point-to-point technology is applied to satisfy on-demand requests. The paper also presents the results of a simulation study conducted to evaluate the performance of our protocols. According to the simulation results the costs imposed by the MVCC-SFBS protocol, which ensures serializability to read-only transactions are moderate relative to those imposed by the MVCC-SFBUS and MVCC-SFBVC protocols, which provide weaker consistency guarantees. A comparison study reveals that the MVCC-SFBVC scheme outperforms all other investigated concurrency control schemes suitable for mobile database systems.
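The multiversion principle underlying these protocols can be sketched as follows. This is the generic snapshot-read idea only, not the MVCC-SFBS/SFBUS/SFBVC protocols themselves, and the class and method names are illustrative:

```python
class MultiVersionStore:
    """Each committed write appends (commit_ts, value). A read-only
    transaction reads as of its start timestamp, so it sees a
    consistent snapshot with a known currency guarantee and never
    blocks, or is blocked by, concurrent updaters."""
    def __init__(self):
        self.versions = {}    # key -> list of (commit_ts, value), ts ascending
        self.clock = 0

    def write(self, key, value):
        self.clock += 1
        self.versions.setdefault(key, []).append((self.clock, value))
        return self.clock     # commit timestamp of this version

    def snapshot_read(self, key, as_of):
        for ts, value in reversed(self.versions.get(key, [])):
            if ts <= as_of:
                return value
        return None           # key did not exist at the snapshot time
```

The protocol design question the paper studies is which snapshot timestamp to hand a broadcast-push mobile client and what consistency and currency that choice guarantees.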

Book ChapterDOI
21 Jan 2003
TL;DR: This paper shows how agent-based information processing can be enriched by dedicated transactional semantics, despite the lack of global control that is inherent in peer-to-peer environments, by presenting the AMOR (Agents, MObility, and tRansactions) approach.
Abstract: Mobile agent applications are a promising approach to cope with the ever increasing amount of data and services available in large networks. Users no longer have to manually browse for certain data or services but rather submit a mobile personal agent that accesses and processes information on their behalf. These agents operate on top of a peer-to-peer network spanned by the individual providers of data and services. However, support for the correct concurrent and fault-tolerant execution of multiple agents accessing shared resources is vital to agent-based information processing. This paper addresses this problem and shows how agent-based information processing can be enriched by dedicated transactional semantics, despite the lack of global control that is an inherent characteristic of peer-to-peer environments, by presenting the AMOR (Agents, MObility, and tRansactions) approach.

Journal ArticleDOI
TL;DR: A network-server-based architecture and algorithms which can not only reduce the blocking time of higher-priority transactions and improve the response time of client-side read-only transactions, but also provide a diskless runtime logging mechanism and an efficient and predictable recovery procedure are proposed.
Abstract: While there has been a significant amount of research in real-time concurrency control, little work has been done in logging and recovery for real-time databases. This paper proposes a two-version approach which considers both real-time concurrency control and recovery. We propose a network-server-based architecture and algorithms which can not only reduce the blocking time of higher-priority transactions and improve the response time of client-side read-only transactions, but also provide a diskless runtime logging mechanism and an efficient and predictable recovery procedure. The performance of the algorithms was verified by a series of simulation experiments by comparing the algorithms with the well-known Priority Ceiling Protocol (PCP), the Read/Write PCP, the New PCP, and the 2-version two-phase locking protocol, for which we have very encouraging results. The schedulability of higher-priority transactions and the response time of client-side read-only transactions were all greatly improved.

Proceedings ArticleDOI
22 Apr 2003
TL;DR: A formal specification of the Java concurrency model using the Z specification language is presented and a number of important correctness properties of concurrent programs are constructed from the model, and their application to the implementation of verification and testing tools for concurrent Java programs is discussed.
Abstract: The Java programming language is a modern object-oriented language that supports concurrency. Ensuring concurrent programs are correct is difficult. Additional problems encountered in concurrent programs, compared with sequential programs, include deadlock, livelock, starvation, and dormancy. Often these problems are related and are sometimes side effects of one another. Furthermore, different programming languages attach different meanings to these terms. Sun Microsystems provides a textual description of the Java concurrency model which is inadequate for reasoning about such programs. Formal specifications are required for verifying concurrent programs through the use of tools and methods such as static analysis, dynamic analysis, model-checking, and theorem proving. It is clear that the behaviour of the Java concurrency model must be unambiguous and well-understood for these tools to operate effectively. This paper presents a formal specification of the Java concurrency model using the Z specification language. A number of important correctness properties of concurrent programs are constructed from the model, and their application to the implementation of verification and testing tools for concurrent Java programs is discussed.

Book ChapterDOI
Klaus Haller1, Heiko Schuldt1
16 Jun 2003
TL;DR: This protocol is, to the best of the authors' knowledge, the first distributed protocol that addresses the global problem of concurrency control and recovery in a truly distributed way and that, at the same time, jointly solves both problems in a single framework.
Abstract: The proliferation of Internet technology has resulted in high connectivity between individuals and companies all over the world. This technology facilitates interactions within and between enterprises, organizations, etc., and allows for data and information exchange. Automating business interactions on this platform requires the execution of processes. This process execution has to be reliable, i.e., guarantees for correct concurrent and fault tolerant execution are vital. A strategy enforcing these properties must take into consideration that large-scale networks like the Internet are not always reliable. We deal with this by encapsulating applications within mobile agents. Essentially, this allows users to be temporarily disconnected from the network while their application is executing. To stress the aspect of guarantees, we use the term transactional agents. They invoke services provided by resources, which are responsible for logging and conflict detection. In contrast, it is the transactional agents' task to ensure globally correct concurrent interactions by means of communication. The communication pattern used is a sample implementation of our newly developed protocol. It is, to the best of our knowledge, the first distributed protocol that addresses the global problem of concurrency control and recovery in a truly distributed way and that, at the same time, jointly solves both problems in a single framework. Because (i) processes are long running transactions requiring optimistic techniques and (ii) large networks require decentralized approaches, this protocol meets the demands of process-based applications in large scale networks.