
Showing papers on "Concurrency control published in 2004"


Journal ArticleDOI
01 Jan 2004
TL;DR: This paper presents a tutorial survey of state-of-the-art modeling and deadlock control methods for discrete manufacturing systems, covering updated results in the areas of deadlock prevention, detection and recovery, and avoidance.
Abstract: As more and more producers adopt flexible and agile manufacturing to maintain a competitive edge, deadlock resolution in automated manufacturing has received significant attention over the past decade. Deadlock and related blocking phenomena often lead to catastrophic results in automated manufacturing systems, so their efficient handling is a necessary condition for a system to achieve high productivity. This paper presents a tutorial survey of state-of-the-art modeling and deadlock control methods for discrete manufacturing systems. It covers updated results in the areas of deadlock prevention, detection and recovery, and avoidance, focusing on three modeling methods: digraphs, automata, and Petri nets. For each approach, the main contributions are selected and their pros and cons highlighted. The paper concludes with future research needs in this important area, aiming to bridge the gap between academic research and industrial needs.

334 citations


Proceedings ArticleDOI
18 Oct 2004
TL;DR: Ganymed is introduced, a database replication middleware intended to provide scalability without sacrificing consistency, avoiding the limitations of existing approaches through a novel transaction scheduling algorithm that separates update and read-only transactions.
Abstract: Data grids, large-scale web applications generating dynamic content, and database service provision pose significant scalability challenges to database engines. Replication is the most common solution, but it involves difficult trade-offs. The most difficult one is the choice between scalability and consistency. Commercial systems give up consistency. Research solutions typically either offer a compromise (limited scalability in exchange for consistency) or impose limitations on the data schema and the workload. In this paper we introduce Ganymed, a database replication middleware intended to provide scalability without sacrificing consistency and without the limitations of existing approaches. The main idea is to use a novel transaction scheduling algorithm that separates update and read-only transactions. Transactions can be submitted to Ganymed through a special JDBC driver. Ganymed then routes updates to a main server and queries to a potentially unlimited number of read-only copies. The system guarantees that all transactions see a consistent data state (snapshot isolation). In the paper we describe the scheduling algorithm and the architecture of Ganymed, and present an extensive performance evaluation that demonstrates the potential of the system.
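The core routing idea - updates to one master, consistent reads spread over replicas that are fresh enough - can be sketched as a toy scheduler. All class and method names below are illustrative, not Ganymed's actual interfaces; real Ganymed additionally propagates writesets and enforces snapshot isolation per transaction:

```python
import itertools

# Toy Ganymed-style router: updates go to the single master; read-only
# transactions are spread round-robin over replicas that have applied
# every committed update (so they can serve a consistent snapshot).
class Router:
    def __init__(self, master, replicas):
        self.master = master
        self.replicas = replicas
        self._rr = itertools.cycle(range(len(replicas)))
        self.master_version = 0                  # last committed update txn
        self.replica_version = [0] * len(replicas)

    def route(self, txn_is_update):
        if txn_is_update:
            self.master_version += 1             # commit bumps the version
            return self.master
        # Read-only: pick the next replica that is fully caught up.
        for _ in range(len(self.replicas)):
            i = next(self._rr)
            if self.replica_version[i] >= self.master_version:
                return self.replicas[i]
        return self.master                       # fall back if all replicas lag

    def apply_writeset(self, i):
        self.replica_version[i] = self.master_version

r = Router("master", ["r0", "r1"])
assert r.route(txn_is_update=False) == "r0"      # fresh replica serves the read
assert r.route(txn_is_update=True) == "master"   # updates always hit the master
assert r.route(txn_is_update=False) == "master"  # replicas now stale
r.apply_writeset(0); r.apply_writeset(1)
assert r.route(txn_is_update=False) in ("r0", "r1")
```

The fallback to the master when all replicas lag is one possible policy; delaying the read until a replica catches up would also preserve consistency.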

260 citations


Journal ArticleDOI
01 Jan 2004
TL;DR: This paper formulates a fault-tolerant deadlock avoidance controller synthesis problem for assembly processes based on the controlled assembly Petri net (CAPN), a class of Petri nets that can model such characteristics as multiple resources and subassembly part requirements in assembly production processes.
Abstract: Unreliable resources pose challenges in the design of deadlock avoidance algorithms, as resource failures have negative impacts on scheduled production activities and may bring the system to dead states or deadlocks. This paper focuses on the development of a suboptimal, polynomial-complexity deadlock avoidance algorithm that can operate in the presence of unreliable resources for assembly processes. We formulate a fault-tolerant deadlock avoidance controller synthesis problem for assembly processes based on the controlled assembly Petri net (CAPN), a class of Petri nets (PNs) that can model such characteristics as multiple resources and subassembly part requirements in assembly production processes. The proposed fault-tolerant deadlock avoidance algorithm consists of a nominal algorithm that avoids deadlocks in the nominal system state and an exception handling algorithm that deals with resource failures. We analyze the fault-tolerant property of the nominal deadlock avoidance algorithm based on resource unavailability models. Resource unavailability is modeled as the loss of tokens in the nominal Petri net model, reflecting resources that are unavailable during time-consuming recovery procedures. We define three types of token loss to model 1) resource failures in a single operation, 2) resource failures in multiple operations of a production process, and 3) resource failures in multiple operations of multiple production processes. For each type of token loss, we establish sufficient conditions that guarantee the liveness of a CAPN after some tokens are removed. An algorithm is proposed to conduct feasibility analysis by searching for recovery control sequences and to keep as many types of production processes as possible in production, reducing the impact on existing production activities.

83 citations


Proceedings ArticleDOI
30 Mar 2004
TL;DR: It is shown how generalized strong serializability can be implemented in a lazy replication system, and the results of a simulation study that quantifies the strengths and limitations of the approach are presented.
Abstract: Lazy replication is a popular technique for improving the performance and availability of database systems. Although there are concurrency control techniques that guarantee serializability in lazy replication systems, these techniques result in undesirable transaction orderings. Since transactions may see stale data, they may be serialized in an order different from the one in which they were submitted. Strong serializability avoids such problems, but it is very costly to implement. We propose a generalized form of strong serializability that is suitable for use with lazy replication. In addition to having many of the advantages of strong serializability, it can be implemented more efficiently. We show how generalized strong serializability can be implemented in a lazy replication system, and we present the results of a simulation study that quantifies the strengths and limitations of the approach.

77 citations


Journal ArticleDOI
01 Sep 2004
TL;DR: It is possible for an SI history to be non-serializable while the sub-history containing all update transactions is serializable, contradicting assumptions under which read-only transactions always execute serializably.
Abstract: Snapshot Isolation (SI) is a multi-version concurrency control algorithm introduced in [BBGMOO95] and later implemented by Oracle. SI avoids many concurrency errors and never delays read-only transactions. However, it does not guarantee serializability. It has been widely assumed that, under SI, read-only transactions always execute serializably provided the concurrent update transactions are serializable. The reasoning is that all SI reads return values from a single instant of time at which all committed transactions have completed their writes and no writes of non-committed transactions are visible. This seems to imply that read-only transactions will not read anomalous results as long as the update transactions with which they execute do not write such results. In this note, however, we exhibit an example contradicting these assumptions: it is possible for an SI history to be non-serializable while the sub-history containing all update transactions is serializable.
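The anomaly can be reproduced with a toy multiversion store, following the paper's checking/savings example (T1 deposits to savings, T2 withdraws from checking and pays an overdraft fee, T3 only reads). The sketch below assumes a simplified SI without write-write conflict detection, which is harmless here because the two updaters write disjoint items; all names are illustrative:

```python
# Toy multiversion store: each key maps to a list of (commit_ts, value);
# a transaction reads the latest version committed at or before its snapshot.
class SnapshotStore:
    def __init__(self, init):
        self.ts = 0
        self.versions = {k: [(0, v)] for k, v in init.items()}

    def begin(self):
        return self.ts  # snapshot timestamp

    def read(self, snap, key):
        return max(tv for tv in self.versions[key] if tv[0] <= snap)[1]

    def commit(self, writes):
        self.ts += 1
        for k, v in writes.items():
            self.versions[k].append((self.ts, v))

# Checking (x) and savings (y) both start at 0.
db = SnapshotStore({"x": 0, "y": 0})

snap2 = db.begin()                            # T2 starts: sees x=0, y=0
t2_x = db.read(snap2, "x")

snap1 = db.begin()                            # T1 starts
db.commit({"y": db.read(snap1, "y") + 20})    # T1 commits: deposit 20 to savings

snap3 = db.begin()                            # T3 (read-only) starts after T1
t3 = (db.read(snap3, "x"), db.read(snap3, "y"))

# T2: withdraw 10 from checking; its snapshot balance x+y was 0, so a $1 fee.
db.commit({"x": t2_x - 10 - 1})

# T3 observed (x, y) = (0, 20): the deposit but not the withdrawal.
# T2's reads force T2 before T1, yet T3's reads force T1 before T3 before T2,
# so no serial order explains T3 - although T2-then-T1 serializes the updates.
```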

71 citations


Journal ArticleDOI
01 Jul 2004
TL;DR: A hierarchy of three concurrency control mechanisms is presented in descending order of collaborative surprises, which allows the concurrency scheme to be tailored to the tolerance for such surprises.
Abstract: As collaboration in virtual environments becomes more object-focused and closely coupled, the frequency of conflicts in accessing shared objects can increase. In addition, two kinds of concurrency control "surprises" become more disruptive to the collaboration. Undo surprises can occur when a previously visible change is undone because of an access conflict. Intention surprises can happen when a concurrent action by a remote session changes the structure of a shared object at the same perceived time as a local access of that object, such that the local user might not get what they expect because they have not had time to visually process the change. A hierarchy of three concurrency control mechanisms is presented in descending order of collaborative surprises, which allows the concurrency scheme to be tailored to the tolerance for such surprises. One mechanism is semi-optimistic; the other two are pessimistic. Designed for peer-to-peer virtual environments in which several threads have access to the shared scene graph, these algorithms are straightforward and relatively simple. They can be implemented using C/C++ and Java, under Windows and Unix, on both desktop and immersive systems. In a series of usability experiments, the average performance of the most conservative concurrency control mechanism on a local LAN was found to be quite acceptable.

65 citations


Proceedings ArticleDOI
D. Li1, R. Li1
24 Mar 2004
TL;DR: A novel state difference based transformation (SDT) algorithm is presented to solve the problem of consistency maintenance in real-time group editors and reveals that the standard priority schemes to break ties in distributed systems should be used with more caution.
Abstract: Real-time group editors allow distributed users to work on local replicas of a shared document simultaneously to achieve high responsiveness and free interaction. Operational transformation (OT) is the standard method for consistency maintenance in state-of-the-art group editors. It is potentially able to achieve content consistency (convergence) as well as intention consistency (so that the converged content is what the users intend), while traditional concurrency control methods such as locking and serialization often cannot. However, existing OT algorithms often fail to guarantee consistency due to significant algorithmic flaws that have persisted for fourteen years. We present a novel state difference based transformation (SDT) algorithm to solve the problem. Our result also reveals that the standard priority schemes for breaking ties in distributed systems should be used with more caution.
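As background, the elementary transformation step that OT algorithms (including SDT) build on can be illustrated for two concurrent inserts. The position shift with a site-id tie-break below is the classic textbook rule, not the SDT algorithm itself, and the names are illustrative:

```python
# An operation is (position, character, site_id). Transforming op against a
# concurrent insert shifts op right if the other insert lands at or before
# its position; equal positions are broken by site id.
def transform_insert(op, against):
    pos, ch, site = op
    apos, _ach, asite = against
    if apos < pos or (apos == pos and asite < site):
        return (pos + 1, ch, site)
    return op

def apply(doc, op):
    pos, ch, _site = op
    return doc[:pos] + ch + doc[pos:]

# Both sites start with "abc"; site 1 inserts "X" at 1, site 2 inserts "Y" at 1.
o1, o2 = (1, "X", 1), (1, "Y", 2)
site1 = apply(apply("abc", o1), transform_insert(o2, o1))  # local o1, remote o2
site2 = apply(apply("abc", o2), transform_insert(o1, o2))  # local o2, remote o1
assert site1 == site2 == "aXYbc"   # both replicas converge
```

The flaws the paper refers to arise in longer concurrent histories, where naively composing such pairwise transforms can diverge; SDT's contribution is making the multi-operation case sound.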

54 citations


Proceedings ArticleDOI
07 Jul 2004
TL;DR: This paper presents a time interval based operational transformation algorithm (TIBOT) that overcomes various limitations of previous related work, guarantees content convergence, and is significantly simpler and more efficient than existing approaches.
Abstract: Traditional concurrency control methods such as locking and serialization are not suitable for distributed interactive applications that demand fast local response. Operational transformation (OT) is the standard solution to concurrency control and consistency maintenance in group editors, an important class of interactive groupware applications. It generally trades consistency for local responsiveness, because human users can often tolerate temporary inconsistencies but do not like their interactions to be lost or nondeterministically blocked. This paper presents a time interval based operational transformation algorithm (TIBOT) that overcomes the various limitations of previous related work. Our approach guarantees content convergence and is significantly simpler and more efficient than existing approaches. This is achieved in a pure replicated architecture by using a linear clock and by placing some constraints on communication that are reasonable for the application domain.

52 citations


Journal ArticleDOI
TL;DR: The proposed SL variants process transactions efficiently by significantly reducing the number of speculative executions, and they improve performance over two-phase locking in DDBS environments where transactions spend a long time in processing and transaction aborts occur frequently.
Abstract: We have proposed speculative locking (SL) protocols to improve the performance of distributed database systems (DDBSs) by trading extra processing resources. In SL, a transaction releases the lock on a data object whenever it produces the corresponding after-image during its execution. By accessing both the before- and after-images, a waiting transaction carries out speculative executions and retains one execution based on the termination (commit or abort) mode of the preceding transactions. By carrying out multiple executions for a transaction, SL increases parallelism without violating the serializability criterion. Under the naive version of SL, the number of speculative executions of a transaction explodes with data contention. By exploiting the fact that a submitted transaction is more likely to commit than abort, we propose SL variants that process transactions efficiently by significantly reducing the number of speculative executions. The simulation results indicate that, even with manageable extra resources, these variants significantly improve performance over two-phase locking in DDBS environments where transactions spend a long time in processing and transaction aborts occur frequently.
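A minimal sketch of the speculative step, assuming a single data object and one waiting transaction (function names are illustrative): when the lock holder produces its after-image, the waiter runs against both images and the correct branch is retained once the holder commits or aborts.

```python
# The waiting transaction executes once on the before-image (valid if the
# lock holder aborts) and once on the after-image (valid if it commits).
def speculative_execute(before_image, after_image, transaction):
    return {
        "if_abort": transaction(before_image),
        "if_commit": transaction(after_image),
    }

# Keep the branch matching the predecessor's actual outcome; discard the other.
def finalize(branches, predecessor_committed):
    return branches["if_commit" if predecessor_committed else "if_abort"]

# T1 doubles x from 10 to 20; T2 adds 5 to whatever value of x it sees.
x_before, x_after = 10, 20
branches = speculative_execute(x_before, x_after, lambda x: x + 5)
assert finalize(branches, predecessor_committed=True) == 25
assert finalize(branches, predecessor_committed=False) == 15
```

With chains of waiting transactions the branches multiply, which is exactly the explosion the paper's SL variants curb by betting on commit being the likely outcome.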

51 citations


Journal ArticleDOI
TL;DR: This work proposes a modeling methodology to represent and analyze context-aware agent-based systems, which tend to be highly complex, and introduces CPNs as a method of capturing the dynamics of contextual change.
Abstract: Context-awareness is becoming more crucial in mobile distributed computing systems. However, sophisticated modeling methods for analyzing context-aware systems are still very few. Among those, the Colored Petri Net (CPN) is promising because it has proven useful for modeling system dynamics and concurrency control in efficient ways. However, to support managing multiple configurations of the components of context-aware applications, some features need to be added to specialize the CPNs. To address these challenges, our research builds on two ideas: (a) decomposing a system into several meaningful subsystems, each of which we call a pattern, and (b) separating context from the patterns to realize context-pattern independence. Hence, we propose a modeling methodology to represent and analyze context-aware agent-based systems, which tend to be highly complex. We introduce CPNs as a method of capturing the dynamics of this contextual change. We define CPNs and a way to apply them in context-aware agent-based systems. We also describe a prototype system that we have developed, called CPN Generator, which translates CPN specifications into Java programs.

49 citations


Book
15 Dec 2004
TL;DR: This textbook describes the client-server model for developing distributed network systems, the communication paradigms used in a distributed network system, and the principles of reliability and security in the design of distributed network systems.
Abstract: Overview of Distributed Network Systems.- Modelling for Distributed Network Systems: The Client-Server Model.- Communication Paradigms for Distributed Network Systems.- Internetworking.- Interprocess Communication Using Message Passing.- TCP/UDP Communication in Java.- Interprocess Communication Using RPC.- Group Communications.- Reliability and Replication Techniques.- Security.- A Reactive System Architecture for Fault-Tolerant Computing.- Web-Based Databases.- Mobile Computing.- Distributed Network Systems: Case Studies.- Distributed Network Systems: Current Development.

Journal ArticleDOI
TL;DR: A new variant of the optimistic concurrency control protocol that is suitable for broadcast environments and provides autonomy between the mobile clients and the server with minimum upstream communication, a desirable feature for the scalability of applications running in broadcast environments.

Journal ArticleDOI
TL;DR: A centralized approach is taken that removes the need for backward propagation of replies by sending the dependency information directly to the initiator of the algorithm, which reduces the time cost of deadlock detection to half that of the existing distributed algorithms.
Abstract: In the literature, only a few studies have addressed the distributed deadlock detection and resolution problem in the generalized request model. Most of the studies are based on the diffusing computation technique, where propagation of probes and backward propagation of replies are required to detect deadlock. The replies carry the dependency information between processes, which the initiator of the algorithm uses to determine deadlock. Since fast detection of deadlock is critical, we take a centralized approach that removes the need for backward propagation of replies and instead sends the dependency information directly to the initiator of the algorithm. This reduces the time cost of deadlock detection to half that of the existing distributed algorithms. The algorithm is extended to handle concurrent executions in order to further improve deadlock detection time, whereas the current algorithms focus only on a single execution. Simulation experiments are performed to evaluate the effectiveness of this centralized approach compared to previous distributed algorithms. We find that our algorithm shows better results in several performance metrics, especially deadlock latency and algorithm execution time.
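The centralized step can be sketched as follows, assuming the simple (AND) request model rather than the paper's generalized AND/OR model: probed processes report their wait-for edges directly to the initiator, which assembles the wait-for graph and checks for a cycle through itself. All names are illustrative.

```python
# The initiator collects reported wait-for edges (src waits for dst) and
# declares deadlock if it can reach itself in the resulting graph.
def initiator_detects_deadlock(initiator, reported_edges):
    graph = {}
    for src, dst in reported_edges:
        graph.setdefault(src, set()).add(dst)
    seen, frontier = set(), {initiator}
    while frontier:
        node = frontier.pop()
        for nxt in graph.get(node, ()):
            if nxt == initiator:
                return True          # initiator reachable from itself: cycle
            if nxt not in seen:
                seen.add(nxt)
                frontier.add(nxt)
    return False

# P1 -> P2 -> P3 -> P1 is a deadlock; dropping the last edge breaks it.
assert initiator_detects_deadlock("P1", [("P1", "P2"), ("P2", "P3"), ("P3", "P1")])
assert not initiator_detects_deadlock("P1", [("P1", "P2"), ("P2", "P3")])
```

Because each probed process replies straight to the initiator rather than back along the probe path, the detection latency is one round trip instead of two, which is the halving the abstract refers to.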

Journal ArticleDOI
TL;DR: This paper shows that concurrency control mechanisms in CVS, relational, and object-oriented database systems are inadequate for collaborative systems based on semistructured data, and proposes two new locking schemes based on path locks which are tightly coupled to the document instance.
Abstract: The hierarchical and semistructured nature of XML data may cause complicated update behavior. Updates should not be limited to entire document trees, but should ideally involve subtrees and even individual elements. Providing a suitable scheduling algorithm for semistructured data can significantly improve collaboration systems that store their data—e.g., word processing documents or vector graphics—as XML documents. In this paper we show that the concurrency control mechanisms in CVS, relational, and object-oriented database systems are inadequate for collaborative systems based on semistructured data. We therefore propose two new locking schemes based on path locks, which are tightly coupled to the document instance. We also introduce two scheduling algorithms, each of which can be used with either of the two proposed path lock schemes. We prove that both schedulers guarantee serializability, and show that the conflict rules are necessary.
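A toy version of the underlying conflict test, assuming locks are identified by root-to-node paths and that ancestor/descendant locks with at least one writer conflict. This is an illustrative simplification, not the paper's actual path-lock schemes:

```python
# A lock is (path, mode) where path is a tuple of steps from the XML root
# and mode is "r" or "w". Two locks conflict when one path is a prefix of
# the other (ancestor/descendant relation) and at least one is a write.
def is_prefix(p, q):
    return q[:len(p)] == p

def conflicts(lock_a, lock_b):
    (path_a, mode_a), (path_b, mode_b) = lock_a, lock_b
    related = is_prefix(path_a, path_b) or is_prefix(path_b, path_a)
    return related and ("w" in (mode_a, mode_b))

doc_section = (("doc", "section[1]"), "w")
whole_doc   = (("doc",), "r")
other_sec   = (("doc", "section[2]"), "r")
assert conflicts(doc_section, whole_doc)     # write under a read-locked root
assert not conflicts(whole_doc, other_sec)   # two readers never conflict
assert not conflicts(doc_section, other_sec) # disjoint subtrees are independent
```

The point of such path-based rules is visible in the last assertion: unlike whole-document locking, concurrent edits to disjoint subtrees of one XML document are allowed.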

Proceedings ArticleDOI
20 Sep 2004
TL;DR: A modular verification approach that exploits the modularity of the proposed pattern, i.e., decoupling of the controller behavior from the threads that use the controller, and shows that the correctness of the user threads can be verified using the concurrency controller interfaces as stubs, which improves the efficiency of the interface verification significantly.
Abstract: We present a framework for verifiable concurrent programming in Java based on a design pattern for concurrency controllers. Using this pattern, a programmer can write concurrency controller classes defining a synchronization policy by specifying a set of guarded commands and without using any of the error-prone synchronization primitives of Java. We present a modular verification approach that exploits the modularity of the proposed pattern, i.e., decoupling of the controller behavior from the threads that use the controller. To verify the controller behavior (behavior verification) we use symbolic and infinite state model checking techniques, which enable verification of controllers with parameterized constants, unbounded variables and arbitrary number of user threads. To verify that the threads use a controller in the specified manner (interface verification) we use explicit state model checking techniques, which allow verification of arbitrary thread implementations without any restrictions. We show that the correctness of the user threads can be verified using the concurrency controller interfaces as stubs, which improves the efficiency of the interface verification significantly. We also show that the concurrency controllers can be automatically optimized using the specific notification pattern. We demonstrate the effectiveness of our approach on a Concurrent Editor implementation which consists of 2800 lines of Java code with remote procedure calls and complex synchronization constraints.
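A rough Python analogue of the guarded-command idea (the paper's pattern is in Java; the class and method names here are illustrative): each controller action pairs a guard with an atomic update, and blocked callers wait until some guard may have become true, never touching raw synchronization primitives themselves.

```python
import threading

# Each action is a (guard, update) pair executed atomically under a monitor;
# after any state change, waiting actions re-evaluate their guards.
class Controller:
    def __init__(self):
        self._cv = threading.Condition()

    def action(self, guard, update):
        with self._cv:
            self._cv.wait_for(guard)   # block until the guard holds
            update()                   # state change under the monitor lock
            self._cv.notify_all()      # other guards may now hold

# A reader-writer policy expressed as four guarded commands.
class ReadWriteController(Controller):
    def __init__(self):
        super().__init__()
        self.readers, self.writing = 0, False

    def enter_read(self):
        self.action(lambda: not self.writing,
                    lambda: setattr(self, "readers", self.readers + 1))

    def exit_read(self):
        self.action(lambda: True,
                    lambda: setattr(self, "readers", self.readers - 1))

    def enter_write(self):
        self.action(lambda: self.readers == 0 and not self.writing,
                    lambda: setattr(self, "writing", True))

    def exit_write(self):
        self.action(lambda: True,
                    lambda: setattr(self, "writing", False))

ctrl = ReadWriteController()
ctrl.enter_read(); ctrl.enter_read()   # two concurrent readers are admitted
assert ctrl.readers == 2
ctrl.exit_read(); ctrl.exit_read()
ctrl.enter_write()                     # no readers left, writer admitted
assert ctrl.writing
```

Because the policy lives entirely in the guard/update pairs, it is this controller behavior (not the user threads) that the paper's behavior verification would model check.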

Journal ArticleDOI
Robert D. Miller1, Anand Tripathi
TL;DR: This model addresses two fundamental problems with distributed exception handling in a group of asynchronous processes by encapsulating rules for handling concurrent exceptions and directing each process to the semantically correct context for executing its recovery actions.
Abstract: This work presents an abstraction called guardian for exception handling in distributed and concurrent systems that use coordinated exception handling. This model addresses two fundamental problems with distributed exception handling in a group of asynchronous processes. The first is to perform recovery when multiple exceptions are concurrently signaled. The second is to determine the correct context in which a process should execute its exception handling actions. Several schemes have been proposed in the past to address these problems. These are based on structuring a distributed program as atomic actions based on conversations or transactions and resolving multiple concurrent exceptions into a single one. The guardian in a distributed program represents the abstraction of a global exception handler, which encapsulates rules for handling concurrent exceptions and directing each process to the semantically correct context for executing its recovery actions. Its programming primitives and the underlying distributed execution model are presented here. In contrast to the existing approaches, this model is more basic and can be used to implement or enhance the existing schemes. Using several examples we illustrate the capabilities of this model. Finally, its advantages and limitations are discussed in contrast to existing approaches.

Journal ArticleDOI
02 Mar 2004
TL;DR: It is argued that next-generation multi-threaded processors with integrated memory controllers should adopt this mechanism as a way of building less complex high-performance DSM multiprocessors.
Abstract: We introduce the SMTp architecture: an SMT processor augmented with a coherence protocol thread context that, together with a standard integrated memory controller, can enable the design of (among other possibilities) scalable cache-coherent hardware distributed shared memory (DSM) machines from commodity nodes. We describe the minor changes needed to a conventional out-of-order multithreaded core to realize SMTp, discussing issues related to both deadlock avoidance and performance. We then compare SMTp performance to that of various conventional DSM machines with normal SMT processors both with and without integrated memory controllers. On configurations from 1 to 32 nodes, with 1 to 4 application threads per node, we find that SMTp delivers performance comparable to, and sometimes better than, machines with more complex integrated DSM-specific memory controllers. Our results also show that the protocol thread has extremely low pipeline overhead. Given the simplicity and the flexibility of the SMTp mechanism, we argue that next-generation multithreaded processors with integrated memory controllers should adopt this mechanism as a way of building less complex high-performance DSM multiprocessors.

Proceedings ArticleDOI
30 Jun 2004
TL;DR: This paper provides formal proof that the algorithm is deadlock free and formally verify that transactions have atomic semantics and presents an evaluation that demonstrates significant benefits for both soft and hard transactions when this algorithm is used.
Abstract: In this paper we present a concurrency control algorithm that allows the co-existence of soft real-time, relational database transactions and hard real-time database pointer transactions in real-time database management systems. The algorithm uses traditional pessimistic concurrency control (i.e., locking) for soft transactions and versioning for hard transactions, allowing the latter to execute regardless of any database lock. We provide a formal proof that the algorithm is deadlock-free and formally verify that transactions have atomic semantics. We also present an evaluation that demonstrates significant benefits for both soft and hard transactions when our algorithm is used. The proposed algorithm is suited for resource-constrained, safety-critical real-time systems that have a mix of hard real-time control applications and soft real-time management, maintenance, or user-interface applications.
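The lock/versioning mix can be sketched in a few lines (Python for illustration; the names are hypothetical, not the paper's API): soft transactions serialize their writes with a lock and commit by installing a fresh value, while hard transactions read the last committed version without ever touching the lock.

```python
import threading

# Soft transactions take the lock and commit by a single reference swap;
# hard real-time transactions read the committed version wait-free, so a
# held lock can never delay them.
class VersionedRecord:
    def __init__(self, value):
        self._lock = threading.Lock()   # taken only by soft transactions
        self._committed = value         # last committed snapshot

    def soft_update(self, fn):
        with self._lock:                # pessimistic, 2PL-style write
            new_value = fn(self._committed)
            self._committed = new_value # one reference assignment = commit

    def hard_read(self):
        return self._committed          # wait-free: ignores the lock

rec = VersionedRecord({"speed": 0})
rec.soft_update(lambda v: {**v, "speed": 10})
assert rec.hard_read()["speed"] == 10
```

The key property mirrored here is that `hard_read` has a bounded, lock-free execution path, which is what makes hard transactions schedulable regardless of soft-transaction locking.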

Proceedings ArticleDOI
David B. Lomet1
30 Mar 2004
TL;DR: This work exploits the B-link tree property of being "well-formed" even when index term posting for a node split has not been completed to greatly simplify the algorithms of B-tree concurrency control.
Abstract: Why might B-tree concurrency control still be interesting? For two reasons: (i) currently exploited "real world" approaches are complicated; (ii) simpler proposals are not used because they are not sufficiently robust. In the "real world", systems need to deal robustly with node deletion, and this is an important reason why the currently exploited techniques are complicated. In our effort to simplify the world of robust and highly concurrent B-tree methods, we focus on exactly where B-tree concurrency control needs information about node deletes, and describe mechanisms that provide that information. We exploit the B-link tree property of being "well-formed" even when index term posting for a node split has not been completed to greatly simplify our algorithms. Our goal is to describe a very simple but nonetheless robust method.

Proceedings ArticleDOI
20 Sep 2004
TL;DR: The paper proposes simple extensions to the Web Services Description Language (WSDL) enabling the order in which the exposed operations should be invoked to be specified, and proposes a composition language for defining the structure of a composite service.
Abstract: The availability of a wide variety of Web services over the Internet offers opportunities for providing new value-added services built by composing existing ones. Service composition poses a number of challenges. A composite service can be very complex in structure, containing many temporal and data-flow dependencies between its constituent services. Furthermore, each individual service is likely to have its own sequencing constraints over its operations. It is highly desirable, therefore, to be able to validate that a given composite service is well formed: proving that it will not deadlock or livelock and that it respects the sequencing constraints of the constituent services. With this aim in mind, the paper proposes simple extensions to the Web Services Description Language (WSDL) enabling the order in which the exposed operations should be invoked to be specified. In addition, the paper proposes a composition language for defining the structure of a composite service. Both languages have an XML notation and a formal basis in the π-calculus (a calculus for concurrent systems). The paper presents the main features of these languages and shows how a composite service can be validated by applying the π-calculus reaction rules.

Journal ArticleDOI
TL;DR: RTSJ attempts to remove some of Java's limitations for real-time applications - primarily by circumventing garbage collection - but it does not make the language safer: it retains standard Java's threading pitfalls and is a risky candidate for critical concurrent applications.
Abstract: A thread is a basic unit of program execution that can share a single address space with other threads - that is, they can read and write the same variables and data structures. Originally, only assembly programmers used threads. A few older programming languages such as PL/I supported thread concurrency, but newer languages such as C and C++ use libraries instead. Only recently have programming languages again begun to build in direct support for threads. Java and Ada are examples of industry-strength languages for multithreading. The Java thread model has its roots in traditional concurrent programming. As the "real-time specification for Java" sidebar describes, RTSJ attempts to remove some of the limitations relative to real-time applications - primarily by circumventing garbage collection. But RTSJ does not make the language safer. It retains standard Java's threading pitfalls and is a risky candidate for critical concurrent applications.

01 Jan 2004
TL;DR: This paper presents two dynamic replication control algorithms designed for medium- and large-scale distributed real-time database systems that can greatly improve system performance compared to systems without replication or systems with simple replication strategies such as full replication.
Abstract: Many real-time applications need data services in distributed environments. However, providing such data services is a challenging task due to long remote data-access delays and the stringent timing requirements of real-time transactions. Replication can help distributed real-time database systems meet the stringent timing requirements of application transactions. In this paper, we present two dynamic replication control algorithms designed for medium- and large-scale distributed real-time database systems. Using the data-needs information from incoming transactions, our algorithms dynamically determine where replicas are created and how often they are updated. In our algorithms, data replicas are dynamically created upon request by incoming transactions, and their update frequencies are determined by the data freshness requirements of those transactions. A detailed simulation study shows that our algorithms can greatly improve system performance compared to systems without replication or systems with simple replication strategies such as full replication.

Proceedings ArticleDOI
26 Apr 2004
TL;DR: J-SAMOA is developed: a framework for a synchronisation augmented microprotocol approach in Java that has been designed to allow concurrent protocols to be expressed without explicit low-level synchronisation, thus making programming easier and less error-prone.
Abstract: Summary form only given. We address programming abstractions for building protocols from smaller, reusable microprotocols. The existing protocol frameworks, such as Appia and Cactus, either restrict the amount of concurrency between microprotocols, or depend on the programmer, who should implement all the necessary synchronisation using standard language facilities. We develop J-SAMOA: a framework for a synchronisation augmented microprotocol approach in Java. It has been designed to allow concurrent protocols to be expressed without explicit low-level synchronisation, thus making programming easier and less error-prone. We describe versioning concurrency control algorithms. They are used by the runtime system of our framework to guarantee that the concurrent execution of a protocol is equivalent to a serial execution of its microprotocols. This guarantee, called the isolation property, ensures consistency of session or message-specific data maintained by microprotocols.

Proceedings ArticleDOI
23 Jun 2004
TL;DR: An algorithm to detect deadlocks in concurrent message-passing programs by iteratively constructing increasingly more precise abstractions on the basis of spurious counterexamples to either detect a deadlock or prove that no deadlock exists.
Abstract: We present an algorithm to detect deadlocks in concurrent message-passing programs. Even though deadlock is inherently noncompositional and its absence is not preserved by standard abstractions, our framework employs both abstraction and compositional reasoning to alleviate the state-space explosion problem. We iteratively construct increasingly precise abstractions on the basis of spurious counterexamples to either detect a deadlock or prove that no deadlock exists. Our approach is inspired by the counterexample-guided abstraction refinement paradigm; however, our notion of abstraction and our schemes for verification and abstraction refinement differ in key respects from existing abstraction refinement frameworks. Our algorithm is also compositional in that abstraction, counterexample validation, and refinement are all carried out component-wise and do not require the construction of the complete state space of the concrete system under consideration. Finally, our approach is completely automated and provides diagnostic feedback in case a deadlock is detected. We have implemented our technique in the MAGIC verification tool and present encouraging results (up to a 20-fold speed-up and a 4-fold reduction in memory consumption) with concurrent message-passing C programs. We also report a bug in the real-time operating system MicroC/OS version 2.70.
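The paper's CEGAR-based analysis is far more elaborate, but the underlying deadlock condition in the classic single-resource model can be illustrated by a plain wait-for-graph cycle check (a baseline illustration only, not the paper's compositional algorithm):

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph, given as a dict mapping each
    process to the set of processes it waits on. In the single-resource
    model, a cycle is exactly a deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GRAY          # on the current DFS path
        for q in wait_for.get(p, ()):
            c = color.get(q, WHITE)
            if c == GRAY:        # back edge: cycle found
                return True
            if c == WHITE and visit(q):
                return True
        color[p] = BLACK         # fully explored, no cycle through p
        return False

    return any(color[p] == WHITE and visit(p) for p in list(color))
```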

Journal ArticleDOI
01 Jan 2004
TL;DR: This paper proposes a strategy to avoid deadlock conditions in more complex systems, where multiple resource acquisitions may be required to complete a working operation, i.e., conjunctive resource service systems (CRSSs).
Abstract: Automated manufacturing systems (AMSs) can process different parts according to operation sequences sharing a finite number of resources. In these systems, deadlock situations can occur in which the flow of parts is permanently inhibited and the processing of jobs is partially or completely blocked. Hence, one of the tasks of the control system is to rule resource allocation so as to prevent such situations from occurring. A large part of the existing literature has focused on systems in which every operation is performed by only one resource. This paper proposes a strategy to avoid deadlock conditions in more complex systems, where multiple resource acquisitions may be required to complete a working operation, i.e., conjunctive resource service systems (CRSSs). The AMS structure and dynamics are described by a colored timed Petri net model, suitable for following resource changes and working procedure updates. Moreover, digraphs characterize the complex interactions between resources and jobs, so that the conditions for deadlock occurrence are derived. Finally, an event-based controller is defined to avoid deadlock in CRSSs on the basis of knowledge of the system state and of a given priority law ruling concurrent job selection.
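As an illustrative baseline only (not the digraph-based controller proposed in the paper), granting a conjunctive request atomically removes hold-and-wait, one of the classic necessary conditions for deadlock:

```python
def acquire_all(available, request):
    """All-or-nothing allocation for a conjunctive multi-resource request:
    either every resource in the set is granted, or none is, so a job
    never holds part of its request while waiting for the rest."""
    if all(available.get(r, 0) >= n for r, n in request.items()):
        for r, n in request.items():
            available[r] -= n
        return True
    return False  # job must wait and retry without holding anything
```

This conservative policy sacrifices some resource utilization; the paper's controller instead uses the digraph model of resource-job interactions to grant partial acquisitions whenever they cannot lead to a deadlock state.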

Journal ArticleDOI
TL;DR: This work proposes an enhanced concurrency control algorithm that maximizes the concurrency of multidimensional index structures and shows that the proposed algorithm outperforms the existing algorithm in terms of throughput and response time.
Abstract: We propose an enhanced concurrency control algorithm that maximizes the concurrency of multidimensional index structures. The factors that deteriorate the concurrency of index structures are node splits and minimum bounding region (MBR) updates. The properties of our concurrency control algorithm are as follows. First, to increase concurrency by avoiding lock coupling during MBR updates, we propose the PLC (partial lock coupling) technique. Second, a new MBR update method is proposed that allows searchers to access nodes in which MBR updates are being performed. Finally, our algorithm holds exclusive latches not for the whole split process but only during the physical node split, which occupies a small part of that process. For performance evaluation, we implement the proposed concurrency control algorithm and one of the existing link-technique-based algorithms on MIDAS-III, the storage system of the BADA-IV DBMS. We show through various experiments that our proposed algorithm outperforms the existing algorithm in terms of throughput and response time. We also propose a recovery protocol for our concurrency control algorithm, designed to assure high concurrency and fast recovery.
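For context, classic latch (lock) coupling, the discipline PLC is designed to relax during MBR updates, can be sketched as follows; the tree layout and names are simplified assumptions, not the paper's index structure:

```python
import threading

class Node:
    def __init__(self, keys=None, children=None):
        self.latch = threading.Lock()
        self.keys = keys or []
        self.children = children or []  # empty list => leaf node

def search(root, key):
    """Latch coupling on a root-to-leaf path: hold the parent's latch
    only until the child's latch is acquired, then release the parent,
    so at most two latches are held at any moment."""
    node = root
    node.latch.acquire()
    while node.children:
        # simplified routing: child index = number of separator keys <= key
        idx = sum(1 for k in node.keys if k <= key)
        child = node.children[idx]
        child.latch.acquire()   # couple: take the child latch...
        node.latch.release()    # ...before releasing the parent latch
        node = child
    found = key in node.keys
    node.latch.release()
    return found
```

Because every traversal serializes briefly on each node along the path, coupling becomes a bottleneck when updaters must also couple upward to adjust MBRs; avoiding exactly that is the motivation for PLC.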

Journal ArticleDOI
01 Jun 2004
TL;DR: A collaborative environment to help with the tasks of discrete event simulation software development using the World Wide Web platform is proposed, named GroupSim, which uses the concepts of distributed modeling with automatic program generation and distributed control of experimentation.
Abstract: The simulation process involves the collaboration of different participants, such as simulation analysts, programmers, statisticians, and users of the simulation software. Many simulation tasks such as modeling, verification, validation, and design for experimentation require the participants to meet. It is understood that these meetings are time-consuming and expensive. This paper proposes a collaborative environment to help with the tasks of discrete event simulation software development using the World Wide Web platform. The environment, named GroupSim, is based on a collaborative computer system and uses the concepts of distributed modeling with automatic program generation and distributed control of experimentation. The authors show some examples to illustrate the use of the environment and discuss some issues related to collaborative environments such as concurrency control, access control, awareness, and performance.

Proceedings ArticleDOI
20 Jul 2004
TL;DR: A journal-based failure-recovery mechanism for distributed metadata servers in the Dawning cluster file system, DCFS2, which exploits a modified two-phase commit protocol that ensures consistent metadata updates on multiple metadata servers even in case of one server's failure.
Abstract: Distributed metadata servers are required for a cluster file system's scalability. However, how to distribute the file system metadata among multiple metadata servers and how to make the file system reliable in case of server failures are two difficult problems. We present a journal-based failure-recovery mechanism for distributed metadata servers in the Dawning cluster file system, DCFS2. The DCFS2 metadata protocol exploits a modified two-phase commit protocol which ensures consistent metadata updates on multiple metadata servers even in case of one server's failure. We focus on the logging policy and concurrency control policy for metadata updates, and on the failure-recovery policy. The DCFS2 metadata protocol is compared with the standard two-phase commit protocol and its advantages are shown. Some results of performance experiments on our system are also presented.
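A minimal sketch of the two-phase commit pattern the DCFS2 protocol builds on; the names are illustrative, and DCFS2's modified variant additionally journals each phase so a failed server can recover to a consistent state:

```python
class MetadataServer:
    """Toy participant: votes yes in the prepare phase only if healthy."""
    def __init__(self, healthy=True):
        self.healthy = healthy
        self.state = "idle"

    def prepare(self, update):
        self.state = "prepared" if self.healthy else "failed"
        return self.healthy  # vote

    def finish(self, decision):
        self.state = "committed" if decision == "commit" else "aborted"

def two_phase_commit(update, participants):
    """Phase 1: ask every metadata server to prepare the update.
    Phase 2: commit only if all voted yes, otherwise abort everywhere."""
    votes = [p.prepare(update) for p in participants]
    decision = "commit" if all(votes) else "abort"
    for p in participants:
        p.finish(decision)
    return decision
```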

Proceedings ArticleDOI
Yi Yang1, Du Li1
06 Nov 2004
TL;DR: This paper proposes a novel approach for supporting adaptable consistency protocols in collaborative systems that cleanly separates data and control, allowing consistency protocols to be dynamically attached to shared data at the object level.
Abstract: Consistency control is critical for the correct functioning of distributed collaboration support systems. A large number of consistency control methods have appeared in the literature with different design tradeoffs and usability implications. However, there has been relatively little work on how to accommodate different protocols and variations in one framework to address the dynamic needs of collaboration. In this paper, we propose a novel approach for supporting adaptable consistency protocols in collaborative systems. Our approach cleanly separates data and control, allowing consistency protocols to be dynamically attached to shared data at the object level. Protocols can be switched at run time without modifying source code.
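The separation of data and control described here maps naturally onto a strategy-style design; a hedged sketch with hypothetical protocol names (the actual framework attaches protocols at the object level in much richer ways):

```python
class ConsistencyProtocol:
    """Control: a pluggable policy deciding how remote operations apply."""
    def apply(self, obj, op):
        raise NotImplementedError

class LastWriterWins(ConsistencyProtocol):
    def apply(self, obj, op):
        if op["ts"] >= obj.ts:          # newer timestamp wins, stale ignored
            obj.value, obj.ts = op["value"], op["ts"]

class RejectStale(ConsistencyProtocol):
    def apply(self, obj, op):
        if op["ts"] <= obj.ts:          # stricter policy: stale is an error
            raise ValueError("stale update rejected")
        obj.value, obj.ts = op["value"], op["ts"]

class SharedObject:
    """Data: knows nothing about the policy; the protocol is attached per
    object and can be swapped at run time without touching this code."""
    def __init__(self, value, protocol):
        self.value, self.ts, self.protocol = value, 0, protocol

    def update(self, op):
        self.protocol.apply(self, op)

    def switch_protocol(self, protocol):
        self.protocol = protocol
```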

Proceedings ArticleDOI
13 Jun 2004
TL;DR: These methods extend escrow locking with multi-granularity (hierarchical) locking, snapshot transactions, multi-version concurrency control, key range locking, and system transactions, i.e., multiple proven database implementation techniques.
Abstract: Materialized views have become a standard technique for performance improvement in decision support databases and for a variety of monitoring purposes. In order to avoid inconsistencies and thus unpredictable query results, materialized views and their indexes should be maintained immediately within user transactions, just like indexes on ordinary tables. Unfortunately, the smaller a materialized view is, the higher the concurrency contention between queries and updates as well as among concurrent updates. Therefore, we have investigated methods that reduce contention without forcing users to sacrifice serializability and thus predictable application semantics. These methods extend escrow locking with multi-granularity (hierarchical) locking, snapshot transactions, multi-version concurrency control, key-range locking, and system transactions, i.e., multiple proven database implementation techniques. The complete design eliminates all contention between pure read transactions and pure update transactions as well as contention among pure update transactions; it enables maximal concurrency of mixed read-write transactions with other transactions; it supports bulk operations such as data import and online index creation; and it provides recovery for transaction, media, and system failures.
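The escrow idea at the core of these methods can be sketched for a single numeric aggregate, such as a COUNT or SUM column in a materialized view: concurrent transactions reserve deltas, and a reservation is granted as long as every possible outcome of the pending operations keeps the value within its bounds. This is a simplified, single-threaded illustration, not the paper's full design:

```python
class EscrowQuantity:
    """Escrow-locking sketch: commutative increments/decrements proceed
    concurrently; only reservations that could violate [low, high] block."""

    def __init__(self, value, low=0, high=float("inf")):
        self.value, self.low, self.high = value, low, high
        self.pos = 0  # sum of pending (uncommitted) positive deltas
        self.neg = 0  # sum of pending negative deltas, stored as positive

    def reserve(self, delta):
        # worst case: all pending decrements commit, then this delta
        if self.value - self.neg + min(delta, 0) < self.low:
            return False
        # worst case: all pending increments commit, then this delta
        if self.value + self.pos + max(delta, 0) > self.high:
            return False
        if delta >= 0:
            self.pos += delta
        else:
            self.neg += -delta
        return True

    def commit(self, delta):
        if delta >= 0:
            self.pos -= delta
        else:
            self.neg -= -delta
        self.value += delta
```

Because two granted reservations commute, pure update transactions never conflict with each other, which is exactly the contention the abstract says the complete design eliminates.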