
Showing papers on "Concurrency control published in 2000"


Proceedings ArticleDOI
29 Feb 2000
TL;DR: New specifications for the ANSI levels are presented, which are portable: they apply not only to locking implementations, but also to optimistic and multi-version concurrency control schemes.
Abstract: Commercial databases support different isolation levels to allow programmers to trade off consistency for a potential gain in performance. The isolation levels are defined in the current ANSI standard, but the definitions are ambiguous and revised definitions proposed to correct the problem are too constrained since they allow only pessimistic (locking) implementations. This paper presents new specifications for the ANSI levels. Our specifications are portable: they apply not only to locking implementations, but also to optimistic and multi-version concurrency control schemes. Furthermore, unlike earlier definitions, our new specifications handle predicates in a correct and flexible manner at all levels.
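The paper's point that isolation definitions should cover non-locking schemes can be illustrated with a toy backward-validation optimistic scheme. This is a hypothetical sketch (class and method names are invented, not the paper's formalism): a transaction commits only if no transaction that committed during its lifetime wrote an item it read.

```python
# Toy backward validation for optimistic concurrency control.
# Hypothetical names; illustrates a non-locking scheme that isolation
# specifications should also be able to classify.

class OptimisticTxn:
    def __init__(self, start_tick):
        self.start_tick = start_tick
        self.read_set = set()
        self.write_set = set()

class Validator:
    def __init__(self):
        self.tick = 0
        self.committed = []          # (commit_tick, write_set) history

    def begin(self):
        return OptimisticTxn(self.tick)

    def try_commit(self, txn):
        # Backward validation: conflict if an overlapping committed
        # transaction wrote something this transaction read.
        for commit_tick, writes in self.committed:
            if commit_tick > txn.start_tick and writes & txn.read_set:
                return False         # abort: read is stale
        self.tick += 1
        self.committed.append((self.tick, set(txn.write_set)))
        return True

v = Validator()
t1, t2 = v.begin(), v.begin()
t1.read_set, t1.write_set = {"x"}, {"x"}
t2.read_set, t2.write_set = {"x"}, {"y"}
ok1 = v.try_commit(t1)   # commits first
ok2 = v.try_commit(t2)   # aborts: t1 wrote "x", which t2 read
```

No locks are taken at any point, yet the scheme still enforces an isolation guarantee — exactly the kind of implementation a locking-only definition cannot describe.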

180 citations


Book ChapterDOI
30 Aug 2000
TL;DR: A tool, called AX, that can be used in combination with the model checker Spin to efficiently verify logical properties of distributed software systems implemented in ANSI-standard C.
Abstract: We describe a tool, called AX, that can be used in combination with the model checker Spin to efficiently verify logical properties of distributed software systems implemented in ANSI-standard C [18]. AX, short for Automaton eXtractor, can extract verification models from C code at a user defined level of abstraction. Target applications include telephone switching software, distributed operating systems code, protocol implementations, concurrency control methods, and client-server applications.

153 citations


Book ChapterDOI
04 Oct 2000
TL;DR: The protocols presented in the paper provide correct executions while minimizing overhead and providing higher scalability, and use an optimistic multicast technique that overlaps transaction execution with total order message delivery.
Abstract: In this paper, we explore data replication protocols that provide both fault tolerance and good performance without compromising consistency. We do this by combining transactional concurrency control with group communication primitives. In our approach, transactions are executed at only one site so that not all nodes incur the overhead of producing results. To further reduce latency, we use an optimistic multicast technique that overlaps transaction execution with total order message delivery. The protocols we present in the paper provide correct executions while minimizing overhead and providing higher scalability.

112 citations


Journal ArticleDOI
01 Feb 2000
TL;DR: The log-structured history data access method (LHAM) partitions the data into successive components based on the timestamps of the record versions; the components are assigned to different levels of a storage hierarchy.
Abstract: Numerous applications such as stock market or medical information systems require that both historical and current data be logically integrated into a temporal database. The underlying access method must support different forms of “time-travel” queries, the migration of old record versions onto inexpensive archive media, and high insertion and update rates. This paper presents an access method for transaction-time temporal data, called the log-structured history data access method (LHAM) that meets these demands. The basic principle of LHAM is to partition the data into successive components based on the timestamps of the record versions. Components are assigned to different levels of a storage hierarchy, and incoming data is continuously migrated through the hierarchy. The paper discusses the LHAM concepts, including concurrency control and recovery, our full-fledged LHAM implementation, and experimental performance results based on this implementation. A detailed comparison with the TSB-tree, both analytically and based on experiments with real implementations, shows that LHAM is highly superior in terms of insert performance, while query performance is in almost all cases at least as good as for the TSB-tree; in many cases it is much better.

96 citations


Proceedings ArticleDOI
10 Apr 2000
TL;DR: This paper identifies the tasks that storage controllers must perform, and proposes an approach which allows these tasks to be composed from basic operations-called base storage transactions (BSTs)-such that correctness requires only the serializability of the BSTs and not of the parent tasks.
Abstract: Switched system-area networks enable thousands of storage devices to be shared and directly accessed by end hosts, promising databases and file systems highly scalable, reliable storage. In such systems, hosts perform access tasks (read and write) and management tasks (storage migration and reconstruction of data on failed devices). Each task translates into multiple phases of low-level device I/Os, so that concurrent host tasks accessing shared devices can corrupt redundancy codes and cause hosts to read inconsistent data. Concurrency control protocols that scale to large system sizes are required in order to coordinate on-line storage management and access tasks. In this paper we identify the tasks that storage controllers must perform, and propose an approach which allows these tasks to be composed from basic operations, called base storage transactions (BSTs), such that correctness requires only the serializability of the BSTs and not of the parent tasks. We present highly scalable distributed protocols which exploit storage technology trends and BST properties to achieve serializability while coming within a few percent of ideal performance.

86 citations


Book
08 Mar 2000
TL;DR: This book discusses the role of performance modeling and evaluation in industry, workload characterization issues and methodologies, and the discovery of self-similar traffic.
Abstract: Position Paper.- Performance Evaluation in Industry: A Personal Perspective.- Topical Area Papers.- Mainframe Systems.- Performance Analysis of Storage Systems.- Ad Hoc, Wireless, Mobile Networks: The Role of Performance Modeling and Evaluation.- Trace-Driven Memory Simulation: A Survey.- Performance Issues in Parallel Processing Systems.- Measurement-Based Analysis of Networked System Availability.- Performance of Client/Server Systems.- Performance Characteristics of the World Wide Web.- Parallel Job Scheduling: A Performance Perspective.- Scheduling of Real-Time Tasks with Complex Constraints.- Software Performance Evaluation by Models.- Performance Analysis of Database Systems.- Performance Analysis of Concurrency Control Methods.- Numerical Analysis Methods.- Product Form Queueing Networks.- Stochastic Modeling Formalisms for Dependability, Performance and Performability.- Analysis and Application of Polling Models.- Discrete-Event Simulation in Performance Evaluation.- Workload Characterization Issues and Methodologies.- Personal Accounts of Key Contributors.- From the Central Server Model to BEST/1(c).- Mean Value Analysis: A Personal Account.- The Early Days of GSPNs.- The Discovery of Self-Similar Traffic.

74 citations


Journal ArticleDOI
TL;DR: A new distributed algorithm for resolving concurrent exceptions is proposed and it is shown that the algorithm works correctly even in complex nested situations, and is an improvement over previous proposals in that it requires only O(n_max N^2) messages, thereby permitting quicker response to exceptions.
Abstract: We address the problem of how to handle exceptions in distributed object systems. In a distributed computing environment, exceptions may be raised simultaneously in different processing nodes and thus need to be treated in a coordinated manner. Mishandling concurrent exceptions can lead to catastrophic consequences. We take two kinds of concurrency into account: 1) Several objects are designed collectively and invoked concurrently to achieve a global goal and 2) multiple objects (or object groups) that are designed independently compete for the same system resources. We propose a new distributed algorithm for resolving concurrent exceptions and show that the algorithm works correctly even in complex nested situations, and is an improvement over previous proposals in that it requires only O(n_max N^2) messages, thereby permitting quicker response to exceptions.

70 citations


Journal ArticleDOI
TL;DR: A secure two-phase locking protocol is described and a scheme is proposed to allow partial violations of security for improved timeliness, a measure of the degree to which security is being satisfied by a system.
Abstract: Database systems for real-time applications must satisfy timing constraints associated with transactions in addition to maintaining data consistency. In addition to real-time requirements, security is usually required in many applications. Multi-level security requirements introduce a new dimension to transaction processing in real-time database systems. In this paper, we argue that, due to the conflicting goals of each requirement, tradeoffs need to be made between security and timeliness. We first define mutual information, a measure of the degree to which security is being satisfied by a system. A secure two-phase locking protocol is then described and a scheme is proposed to allow partial violations of security for improved timeliness. Analytical expressions for the mutual information of the resultant covert channel are derived, and a feedback control scheme is proposed that does not allow the mutual information to exceed a specified upper bound. Results showing the efficacy of the scheme obtained through simulation experiments are also discussed.
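For background, the baseline protocol the paper builds on is plain two-phase locking (2PL). The sketch below is illustrative (class names invented, no security levels): a transaction may not acquire any lock after its first release, which is what guarantees conflict-serializable schedules.

```python
# Background sketch of plain (non-secure) two-phase locking.
# Illustrative names; the paper's secure variant adds security levels
# and a covert-channel feedback control on top of this discipline.

class TwoPhaseLockTxn:
    def __init__(self, lock_table, name):
        self.locks = set()
        self.shrinking = False       # becomes True at first unlock
        self.table = lock_table
        self.name = name

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violated: lock after first unlock")
        holder = self.table.get(item)
        if holder not in (None, self.name):
            return False             # conflict: a real system would wait
        self.table[item] = self.name
        self.locks.add(item)
        return True

    def unlock(self, item):
        self.shrinking = True        # entering the shrinking phase
        self.locks.discard(item)
        del self.table[item]

table = {}
t = TwoPhaseLockTxn(table, "T1")
assert t.lock("a") and t.lock("b")   # growing phase
t.unlock("a")                        # shrinking phase begins
try:
    t.lock("c")
    violated = False
except RuntimeError:
    violated = True                  # late lock request is rejected
```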

66 citations


Journal ArticleDOI
TL;DR: This paper presents a fair decentralized mutual exclusion algorithm for distributed systems in which processes communicate by asynchronous message passing that requires between N-1 and 2(N-1) messages per critical section access, where N is the number of processes in the system.
Abstract: This paper presents a fair decentralized mutual exclusion algorithm for distributed systems in which processes communicate by asynchronous message passing. The algorithm requires between N-1 and 2(N-1) messages per critical section access, where N is the number of processes in the system. The exact message complexity can be expressed as a deterministic function of concurrency in the computation. The algorithm does not introduce any other overheads over Lamport's and Ricart-Agrawala's algorithms, which require 3(N-1) and 2(N-1) messages, respectively, per critical section access and are the only other decentralized algorithms that allow mutual exclusion access in the order of the timestamps of requests.
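The message counts quoted in the abstract are easy to compare directly. The bounds below are taken from the abstract; the code is plain arithmetic, not the algorithms themselves.

```python
# Per-critical-section message counts for N processes, as quoted
# in the abstract (arithmetic only; not the algorithms).

def lamport_msgs(n):
    # request + reply + release, each to the other N-1 processes
    return 3 * (n - 1)

def ricart_agrawala_msgs(n):
    # request + reply, each to the other N-1 processes
    return 2 * (n - 1)

def proposed_bounds(n):
    # the paper's algorithm: the exact count depends on concurrency
    return (n - 1, 2 * (n - 1))

# At N = 10: Lamport needs 27 messages, Ricart-Agrawala 18,
# and the proposed algorithm between 9 and 18.
```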

65 citations


Patent
02 Nov 2000
TL;DR: In this paper, a system that facilitates concurrency control for a policy-based management system that controls resources in a distributed computing system is presented, where the system operates by receiving a request to perform an operation on a lockable resource from a controller.
Abstract: A system that facilitates concurrency control for a policy-based management system that controls resources in a distributed computing system. The system operates by receiving a request to perform an operation on a lockable resource from a controller in the distributed computing system. This controller sends the request in order to enforce a first policy for controlling resources in the distributed computing system. In response to the request, the system determines whether the controller holds a lock on the lockable resource. If so, the system allows the controller to execute the operation on the lockable resource. If not, the system allows the controller an opportunity to acquire the lock. If the controller is able to acquire the lock, the system allows the controller to execute the operation on the lockable resource.
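The request flow the patent describes reduces to a short decision procedure. The sketch below is a minimal interpretation (function and variable names are invented): execute if the controller already holds the lock, otherwise give it one chance to acquire it.

```python
# Minimal sketch of the patent's request flow: hold -> execute,
# free -> acquire then execute, held elsewhere -> deny.
# All names are illustrative, not from the patent.

def handle_request(locks, resource, controller, operation):
    holder = locks.get(resource)
    if holder == controller:
        return operation()               # already holds the lock
    if holder is None:
        locks[resource] = controller     # opportunity to acquire it
        return operation()
    return None                          # lock held by another controller

locks = {}
r1 = handle_request(locks, "res", "A", lambda: "ran-A")  # A acquires and runs
r2 = handle_request(locks, "res", "B", lambda: "ran-B")  # B is denied
```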

60 citations


Journal ArticleDOI
01 Oct 2000
TL;DR: The aim of this paper is to show the tight connections between the two approaches to the deadlock problem, by proposing a unitary framework that links graph-theoretic and PN models and results, and establishing a direct correspondence between the structural elements of the PN and those of the digraphs characterizing a deadlock occurrence.
Abstract: Flexible manufacturing systems (FMSs) are modern production facilities with easy adaptability to variable production plans and goals. These systems may exhibit deadlock situations occurring when a circular wait arises because each piece in a set requires a resource currently held by another job in the same set. Several authors have proposed different policies to control resource allocation in order to avoid deadlock problems. These approaches are mainly based on some formal models of manufacturing systems, such as Petri nets (PNs), directed graphs, etc. Since they describe various peculiarities of the FMS operation in a modular and systematic way, PNs are the most extensively used tool to model such systems. On the other hand, digraphs are more synthetic than PNs because their vertices are just the system resources. So, digraphs describe the interactions between jobs and resources only, while neglecting other details on the system operation. The aim of this paper is to show the tight connections between the two approaches to the deadlock problem, by proposing a unitary framework that links graph-theoretic and PN models and results. In this context, we establish a direct correspondence between the structural elements of the PN (empty siphons) and those of the digraphs (maximal-weight zero-outdegree strong components) characterizing a deadlock occurrence. The paper also shows that the avoidance policies derived from digraphs can be implemented by controlled PNs.

Journal ArticleDOI
TL;DR: A transaction shipping approach is proposed to process transactions in a mobile environment by exploiting the well-defined behavior of real-time transactions; the notion of similarity in concurrency control is adopted to further reduce the number of transaction restarts due to priority inversion, which could be very costly in a mobile network.

Patent
Nagavamsi Ponnekanti1
18 May 2000
TL;DR: In this paper, a database system providing an efficient methodology for performing an online rebuild of a B+-tree index is described, which operates by copying the index rows to newly-allocated pages in the key order so that good space utilization and clustering are achieved.
Abstract: A database system providing an efficient methodology for performing an online rebuild of a B+-tree index is described. From a high-level perspective, the method operates by copying the index rows to newly-allocated pages in key order so that good space utilization and clustering are achieved. The old pages are deallocated during the process. This approach differs from previously-published online index rebuild algorithms in two ways. First, it rebuilds multiple leaf pages and then propagates the changes to higher levels. Also, while propagating the leaf-level changes to higher levels, level-1 pages (i.e., the level immediately above the leaf level) are reorganized, eliminating the need for a separate pass. The methodology provides high concurrency, does a minimal amount of logging, has good performance, and does not deadlock with other index operations. A performance study shows that the approach results in a significant reduction in logging and CPU time. The approach also uses the same concurrency control mechanism as split and shrink operations, which makes it attractive for implementation.

Book ChapterDOI
04 Sep 2000
TL;DR: This paper presents the use of the event calculus for specifying and simulating workflows and maintains a representation of the dynamic world being modeled on the basis of user supplied axioms about preconditions and effects of events and the initial state of the world.
Abstract: The event calculus is a logic programming formalism for representing events and their effects especially in database applications. This paper presents the use of the event calculus for specifying and simulating workflows. The proposed framework maintains a representation of the dynamic world being modeled on the basis of user supplied axioms about preconditions and effects of events and the initial state of the world. The net effect is that a workflow specification can be made at a higher level of abstraction. Within this framework it is possible to model sequential and concurrent activities with synchronization when necessary. It is also possible to model agent assignment and concurrent workflow instances.
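The axiomatic style the abstract describes — user-supplied preconditions and effects of events applied to an initial state — can be sketched in a few lines. This is a toy interpretation, not the event calculus itself: event names, state keys, and the axiom encoding are all invented.

```python
# Toy event-precondition/effect evaluation in the spirit of the
# event-calculus workflow specification (names are invented).

def apply_events(initial_state, events, axioms):
    state = dict(initial_state)
    for ev in events:
        pre, effect = axioms[ev]
        if pre(state):               # event takes effect only if its
            effect(state)            # precondition holds in current state
    return state

axioms = {
    "submit":  (lambda s: not s["submitted"],
                lambda s: s.update(submitted=True)),
    "approve": (lambda s: s["submitted"],
                lambda s: s.update(approved=True)),
}

start = {"submitted": False, "approved": False}
# "approve" before "submit" has no effect: its precondition fails.
out_of_order = apply_events(start, ["approve", "submit"], axioms)
in_order = apply_events(start, ["submit", "approve"], axioms)
```

The sequencing constraint (approval requires a prior submission) is expressed entirely in the axioms, which is the sense in which the workflow is specified at a higher level of abstraction.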

Patent
18 Jul 2000
TL;DR: In this article, a dynamic switch from one type of concurrency control technique (e.g., a locking-based technique) to a different type of non-locking-based one is enabled, based on access patterns and/or application requirements for each file.
Abstract: Concurrent access to data is managed through concurrency control techniques. Various types of techniques are employed to manage the access, including locking-based techniques and non-locking-based techniques. A dynamic switch from one type of concurrency control technique (e.g., a locking-based technique) to a different type of concurrency control technique (e.g., a non-locking-based technique) is enabled. This switching is based on access patterns and/or application requirements for each file. The switching allows enhanced performance for both coarse-grain sharing and fine-grain sharing of data.
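One plausible reading of the per-file switching policy is a feedback loop on observed contention. The sketch below is hypothetical (the threshold, the counters, and the two mode names are invented, not from the patent): it picks a locking mode under heavy sharing and a non-locking mode otherwise.

```python
# Hypothetical per-file mode switch driven by the observed conflict
# rate; threshold and names are illustrative, not from the patent.

class ModePicker:
    LOCKING, OPTIMISTIC = "locking", "optimistic"

    def __init__(self, conflict_threshold=0.2):
        self.threshold = conflict_threshold
        self.accesses = 0
        self.conflicts = 0
        self.mode = self.OPTIMISTIC

    def record(self, conflicted):
        self.accesses += 1
        self.conflicts += conflicted
        rate = self.conflicts / self.accesses
        # Fine-grain sharing (high conflict rate) favors locking;
        # coarse-grain sharing favors the non-locking mode.
        self.mode = self.LOCKING if rate > self.threshold else self.OPTIMISTIC
        return self.mode

p = ModePicker()
for _ in range(8):
    p.record(conflicted=False)   # little sharing observed
mode_low = p.mode                # stays optimistic
for _ in range(8):
    p.record(conflicted=True)    # heavy sharing observed
mode_high = p.mode               # flips to locking
```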

Journal ArticleDOI
TL;DR: This model allows various types of message blocking to be described precisely, including deadlock, and identifies the necessary and sufficient conditions for the occurrence and resolution of deadlock in interconnection networks, thus providing efficiency and correctness criteria for deadlock resolution mechanisms.
Abstract: This paper presents a theoretical model of resource allocations and dependencies in wormhole and virtual cut-through interconnection networks. This model allows various types of message blocking to be described precisely, including deadlock. The model distinguishes between messages involved in deadlock and those simply dependent upon deadlock, thus establishing a framework for evaluating the accuracy and correctness of deadlock detection mechanisms. The paper also identifies the necessary and sufficient conditions for the occurrence and resolution of deadlock in interconnection networks, thus providing efficiency and correctness criteria for deadlock resolution mechanisms. Theorems derived from the model are related to various routing algorithms which are based on deadlock recovery.

Proceedings ArticleDOI
08 Jan 2000
TL;DR: A new deadlock-free distributed reconfiguration algorithm that is able to asynchronously update routing tables without stopping user traffic is proposed, valid for any topology, including regular as well as irregular topologies.
Abstract: High-speed local area networks (LANs) consist of a set of switches connected by point-to-point links, and hosts linked to switches through a network interface card. High-speed LANs may change their topology due to switches and hosts being turned on/off, link remapping, and component failures. In these cases, a distributed reconfiguration algorithm analyzes the topology, computes the new routing tables, and downloads them to the corresponding switches. Unfortunately, in most cases, user traffic is stopped during the reconfiguration process to avoid deadlock. Although network reconfigurations are not frequent, static reconfiguration such as this may take hundreds of milliseconds to execute, thus degrading system availability significantly. In this paper, we propose a new deadlock-free distributed reconfiguration algorithm that is able to asynchronously update routing tables without stopping user traffic. This algorithm is valid for any topology, including regular as well as irregular topologies. Simulation results show that the behavior of our algorithm is significantly better than that of other algorithms based on spanning-tree formation.

Journal ArticleDOI
TL;DR: The concept of similarity is formalized which has been used on an ad hoc basis by application engineers to provide more flexibility in concurrency control and is extended for real-time applications which may run continually, have concurrent transaction executions, or skip unimportant computations.
Abstract: This paper formalizes the concept of similarity which has been used on an ad hoc basis by application engineers to provide more flexibility in concurrency control. We show how the usual correctness criteria of concurrency control, namely, final-state, view, and conflict serializability, can be weakened to incorporate similarity. We extend the weakened correctness criteria described previously for real-time applications which may run continually, have concurrent transaction executions, or skip unimportant computations. A semantic approach based on the similarity concept is then taken to propose a sufficient condition for scheduling real-time transactions without locking of data.
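The core similarity idea can be made concrete on a single timestamped data item: two writes whose timestamps differ by less than an application-supplied bound are treated as interchangeable, so a schedule that swaps them is still accepted. The bound and the operation encoding below are illustrative, not the paper's formalism.

```python
# Toy conflict test weakened by similarity: writes to the same item
# that are close in time are treated as non-conflicting (swappable).
# The epsilon value and tuple encoding are invented for illustration.

EPSILON_MS = 50   # application-supplied similarity bound

def similar(ts_a, ts_b, eps=EPSILON_MS):
    return abs(ts_a - ts_b) < eps

def conflict(op_a, op_b):
    # ops are (kind, item, timestamp); two reads never conflict,
    # and operations similar in time are considered non-conflicting.
    kind_a, item_a, ts_a = op_a
    kind_b, item_b, ts_b = op_b
    if item_a != item_b or (kind_a == kind_b == "r"):
        return False
    return not similar(ts_a, ts_b)

w1 = ("w", "sensor", 100)
w2 = ("w", "sensor", 120)   # 20 ms apart: similar, reorderable
w3 = ("w", "sensor", 400)   # far apart: a real conflict
```

Weakening the conflict relation this way enlarges the set of acceptable schedules, which is exactly how the paper relaxes final-state, view, and conflict serializability.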

Journal ArticleDOI
TL;DR: The paper deals with an automatic concurrent control design method derived from the specification of a discrete event control system represented in the form of a decision table, based on rough set theory.
Abstract: The paper deals with an automatic concurrent control design method derived from the specification of a discrete event control system represented in the form of a decision table. The main stages of our approach are: specification of the control by decision tables, generation of rules from the specification of the system behavior, and conversion of the rule set into a concurrent program represented in the form of a Petri net. Our approach is based on rough set theory [17].

Proceedings ArticleDOI
08 Oct 2000
TL;DR: This work presents an actor-based workflow architecture that would fit naturally into distributed heterogeneous environments and combines object-oriented and functional programming in order to make the management of concurrency easier for the user.
Abstract: Technological advances in processor power, networking, telecommunications and multimedia are stimulating the development of applications requiring parallel and distributed computing. This new perspective is enticing research into new design methodologies that view the software as an "intelligent" collection of agents that interact by coordinating knowledge-based processes. We present an actor-based workflow architecture that would fit naturally into distributed heterogeneous environments. The actors combine object-oriented and functional programming in order to make the management of concurrency easier for the user.

Proceedings Article
05 Sep 2000
TL;DR: This work investigates transaction commit in Mobile Database Systems (MDS) and develops a commitment protocol based on a "timeout" approach that minimizes the wireless message cost of committing transactions.
Abstract: We investigate transaction commit in Mobile Database Systems (MDS) and develop a commitment protocol based on a "timeout" approach. A timeout approach is universally used to reach a decision as a last option in all message-oriented systems. With this approach we have minimized the wireless message cost of committing transactions.
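A timeout-based commit decision can be sketched in a few lines. This is a hypothetical illustration, not the paper's protocol: the coordinator commits only if every participant's yes-vote arrives before its deadline, and a silent participant (e.g. a disconnected mobile host) counts as a timeout and forces an abort.

```python
# Hypothetical timeout-based commit decision (not the paper's
# protocol): any late or non-yes vote forces an abort.

def decide(votes, deadline_ms=200):
    # votes: participant -> (vote, arrival_ms)
    for vote, arrival in votes.values():
        if arrival > deadline_ms or vote != "yes":
            return "abort"           # timeout or explicit no-vote
    return "commit"

all_in_time = {"fixed_host": ("yes", 40), "mobile_host": ("yes", 150)}
one_late = {"fixed_host": ("yes", 40), "mobile_host": ("yes", 350)}
d1 = decide(all_in_time)   # every vote in time: commit
d2 = decide(one_late)      # mobile host timed out: abort
```

The point of deciding by deadline rather than by extra message rounds is that a disconnected mobile host never has to be polled again, which is what keeps the wireless message cost down.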

Book ChapterDOI
03 Jul 2000
TL;DR: This paper analyses the characteristics of Web-multidatabase transactions and associated transaction management issues and proposes a relaxation of the ACID test, based on semantic atomicity, local consistency, and durability, for resilient transactions, i.e., the SACReD properties.
Abstract: This paper analyses the characteristics of Web-multidatabase transactions and associated transaction management issues. Current Web-database transaction management solutions are reviewed. Conclusions drawn are that these are currently too restrictive. Flexibility is required through nested flexible transaction strategies, with compensation, and contingency or alternative subtransactions. Furthermore, the classical ACID test of transaction correctness is over-restrictive and unrealistic in the Web context. A relaxation of the ACID test is proposed, based on semantic atomicity, local consistency, and durability, for resilient transactions, i.e., the SACReD properties. These conclusions motivate the authors' ongoing research and development of a prototype CORBA-compliant middleware Web-multidatabase transaction manager based upon a hybrid configuration of open and closed nested flexible transactions.

Proceedings ArticleDOI
21 Aug 2000
TL;DR: By scheduling the transformed graph, this work has improved the performance of an important bottleneck in the system layer of the IM1 player while simultaneously lowering the system energy cost.
Abstract: This paper addresses the concurrent task management of complex multimedia systems, such as the MPEG4 IM1 (IMplementation 1) player. Starting with a critical part of the code in the IM1 player, we extracted the concurrency hidden by implementation decisions and represented it with our "grey-box" modeling approach. Based on this "grey-box" model, high-level transformations have been made to improve the concurrency. By scheduling the transformed graph, we have improved the performance of an important bottleneck in the system layer of the IM1 player while simultaneously lowering the system energy cost. A two-processor target platform is used in the experiment, combining processors running at a high Vdd (drain supply voltage) and a low Vdd, respectively.

Proceedings ArticleDOI
04 Jul 2000
TL;DR: The paper revisits some of the assumptions regarding time and event ordering in distributed systems and argues that they are no longer appropriate if the goal is to faithfully preserve user intentions in CSCW systems and proposes that the total ordering of events should give the users the right of participation instead of being solely determined mechanically by the system.
Abstract: The concept of time and the ordering of events are correlated key issues in distributed computing as well as in computer-supported collaborative work (CSCW) systems. The paper revisits some of the assumptions that have generally been made regarding time and event ordering in distributed systems and argues that they are no longer appropriate if the goal is to faithfully preserve user intentions in CSCW systems. In particular, the following contributions are made in the context of collaborative editing systems. First, we discuss how the user intentions might be impacted when the finite duration of drawing operations is considered. Secondly, we propose that the total ordering of events should give the users the right of participation instead of being solely determined mechanically by the system. Thirdly, a new concept of an active whiteboard is proposed which supports various integrity constraints on objects and object groups to maintain user intentions in a more sophisticated way. Additionally, for the sake of completeness, the problem of maintaining consistency in the face of unreliable and high-latency communication channels is also covered.

Book ChapterDOI
06 Sep 2000
TL;DR: The paper generalizes the well-known ticket method and develops novel federation-level graph testing methods to incorporate sub-serializability component systems like Oracle, and presents performance measurements that demonstrate the viability of the developed concurrency control methods.
Abstract: This paper reconsiders the problem of transactional federations, more specifically the concurrency control issue, with particular consideration of component systems that provide only snapshot isolation, which is the default setting in Oracle and widely used in practice. The paper derives criteria and practical protocols for guaranteeing global serializability at the federation level. The paper generalizes the well-known ticket method and develops novel federation-level graph testing methods to incorporate sub-serializability component systems like Oracle. These contributions are embedded in a practical project that built a CORBA-based federated database architecture suitable for modern Internet- or Intranet-based applications such as electronic commerce. This prototype system, which includes a federated transaction manager coined Trafic (Transactional Federation of Information Systems Based on CORBA), has been fully implemented with support for Oracle and O2 as component systems and using Orbix as federation middleware. The paper presents performance measurements that demonstrate the viability of the developed concurrency control methods.
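The classical ticket method that the paper generalizes has a simple core: every global transaction reads and increments a ticket item at each component database it touches, forcing a direct conflict there so that local commit orders become observable and can be checked for global agreement. The sketch below is illustrative only (it is not the Trafic implementation, and the agreement check is deliberately simplified to transactions that visit the same components).

```python
# Illustrative core of the classical ticket method (not the paper's
# Trafic system): the ticket forces a local conflict per component,
# and the resulting local orders must agree across components.

class ComponentDB:
    def __init__(self):
        self.ticket = 0
        self.order = []               # local ticket order of global txns

    def take_ticket(self, txn_id):
        self.ticket += 1              # read-increment-write on the ticket
        self.order.append(txn_id)
        return self.ticket

def globally_serializable(dbs):
    # Simplified check: assumes every transaction visited every
    # component; then the ticket orders must all be identical.
    orders = [tuple(db.order) for db in dbs]
    return all(o == orders[0] for o in orders)

db_a, db_b = ComponentDB(), ComponentDB()
db_a.take_ticket("G1"); db_b.take_ticket("G1")
db_a.take_ticket("G2"); db_b.take_ticket("G2")
consistent = globally_serializable([db_a, db_b])   # same order everywhere

db_b.order.reverse()                               # simulate disagreement
inconsistent = not globally_serializable([db_a, db_b])
```

The difficulty the paper addresses is that under snapshot isolation the ticket conflict alone is not enough, which is why it adds federation-level graph testing on top.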

Proceedings ArticleDOI
16 Oct 2000
TL;DR: This work shows how a replica management protocol that uses atomic broadcast for replica update reduces the occurrence of deadlocks and the dependency on the number of replicas.
Abstract: Obtaining good performance from a distributed replicated database that allows update transactions to originate at any site while ensuring one-copy serializability is a challenge. A popular analysis of deadlock probabilities in replicated databases shows that the deadlock rate for the system is high and increases as the third power of the number of replicas. We show how a replica management protocol that uses atomic broadcast for replica update reduces the occurrence of deadlocks and the dependency on the number of replicas. The analysis is confirmed by simulation experiments.

Journal ArticleDOI
TL;DR: The new GUARD-link protocol, although based on the B-link approach, delivers the best performance (with respect to all performance metrics) for a variety of real time transaction workloads, by virtue of its admission control mechanism.
Abstract: Real time database systems are expected to rely heavily on indexes to speed up data access and thereby help more transactions meet their deadlines. Accordingly, high performance index concurrency control (ICC) protocols are required to prevent contention for the index from becoming a bottleneck. We develop real time variants of a representative set of classical B-tree ICC protocols and, using a detailed simulation model, compare their performance for real time transactions with firm deadlines. We also present and evaluate a real time ICC protocol called GUARD-link that augments the classical B-link protocol with a feedback based admission control mechanism. Both point and range queries, as well as the undos of the index action transactions are included in the study. The performance metrics used in evaluating the ICC protocols are the percentage of transactions that miss their deadlines and the fairness with respect to transaction type and size. Experimental results show that the performance characteristics of the real time version of an ICC protocol could be significantly different from the performance of the same protocol in a conventional (nonreal time) database system. In particular, B-link protocols, which are reputed to provide the best overall performance in conventional database systems, perform poorly under heavy real time loads. The new GUARD-link protocol, however, although based on the B-link approach, delivers the best performance (with respect to all performance metrics) for a variety of real time transaction workloads, by virtue of its admission control mechanism. GUARD-link provides close to ideal fairness in most environments.
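The feedback admission control idea behind GUARD-link can be sketched with a moving window over recent deadline outcomes. This is an invented illustration in the spirit of the abstract (the threshold, window size, and class name are not from the paper): new operations are admitted only while the recent deadline-miss ratio stays under a target.

```python
# Illustrative feedback admission control in the spirit of GUARD-link;
# threshold and window values are made up, not from the paper.

class AdmissionController:
    def __init__(self, miss_target=0.3, window=20):
        self.miss_target = miss_target
        self.window = window
        self.recent = []              # 1 = missed deadline, 0 = met it

    def report(self, missed):
        self.recent.append(1 if missed else 0)
        self.recent = self.recent[-self.window:]   # keep a moving window

    def admit(self):
        if not self.recent:
            return True
        return sum(self.recent) / len(self.recent) <= self.miss_target

ac = AdmissionController()
for _ in range(10):
    ac.report(missed=False)
open_gate = ac.admit()        # miss ratio 0.0: admit new operations
for _ in range(10):
    ac.report(missed=True)
closed_gate = ac.admit()      # miss ratio 0.5 > 0.3: stop admitting
```

Throttling admissions when misses climb is what lets the protocol keep index contention below the point where transactions start missing deadlines wholesale.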

Proceedings ArticleDOI
18 Mar 2000
TL;DR: A scalable prediction-based concurrency control scheme with entity-centric multicasting: only the users surrounding a target entity multicast the ownership requests, by using the multicast address assigned to the entity.
Abstract: Replication is often used to provide users of distributed virtual environments with high-performance interactions. Concurrency control is required to avoid inconsistent views among replicas due to multiple concurrent updates. G. Lann has developed a prediction-based concurrency control scheme to allow real-time interactions for users and to eliminate the need for repairs. The existing scheme does not scale in terms of delivering ownership on time as the number of users increases. In this paper, we propose a scalable prediction-based concurrency control scheme with entity-centric multicasting: only the users surrounding a target entity multicast the ownership requests, by using the multicast address assigned to the entity. The experimental results and analysis reported in this paper show that the proposed scheme achieves the benefits of prediction-based concurrency control with efficiency and scalability for large distributed virtual environments.

Journal ArticleDOI
TL;DR: The authors present a new clustering algorithm to facilitate the parallelization of software systems in a multiprocessor environment that reduces the performance degradation caused by synchronizations, and avoids deadlocks during clustering.
Abstract: A variety of techniques and tools exist to parallelize software systems on different parallel architectures (SIMD, MIMD). With the advances in high-speed networks, there has been a dramatic increase in the number of client/server applications. A variety of client/server applications are deployed today, ranging from simple telnet sessions to complex electronic commerce transactions. Industry-standard protocols, such as Secure Socket Layer (SSL) and Secure Electronic Transaction (SET), are in use for ensuring privacy and integrity of data, as well as for authenticating the sender and the receiver during message passing. Consequently, a majority of applications using parallel processing techniques are becoming synchronization-centric, i.e., for every message transfer, the sender and receiver must synchronize. However, more effective techniques and tools are needed to automate the clustering of such synchronization-centric applications to extract parallelism. The authors present a new clustering algorithm to facilitate the parallelization of software systems in a multiprocessor environment. The new clustering algorithm achieves traditional clustering objectives (reduction in parallel execution time, communication cost, etc.). Additionally, the approach: 1) reduces the performance degradation caused by synchronizations, and 2) avoids deadlocks during clustering. The effectiveness of the approach is demonstrated through simulation results.
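One way to picture a clustering step that serves the two stated objectives is a greedy pass that merges the clusters joined by the costliest synchronization edges, but refuses any merge that makes the cluster graph cyclic, since a cycle corresponds to clusters mutually waiting on each other's messages. This sketch, including the `min_cost` cutoff, is an illustration of the general idea, not the authors' algorithm:

```python
def has_cycle(edges):
    # Standard DFS cycle check over a directed graph given as (u, v) pairs.
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
    state = {}                         # 1 = on current path, 2 = done

    def dfs(n):
        state[n] = 1
        for m in graph.get(n, []):
            if state.get(m) == 1 or (state.get(m) is None and dfs(m)):
                return True
        state[n] = 2
        return False

    return any(state.get(n) is None and dfs(n) for n in graph)


def cluster(tasks, sync_edges, min_cost=2):
    """Greedy pass: examine synchronization edges in descending cost order
    and merge their endpoint clusters only if the contracted cluster graph
    stays acyclic (a cycle would mean a deadlock-prone mutual wait)."""
    group = {t: t for t in tasks}
    for u, v, cost in sorted(sync_edges, key=lambda e: -e[2]):
        if cost < min_cost or group[u] == group[v]:
            continue
        gu, gv = group[u], group[v]
        trial = {t: (gu if g == gv else g) for t, g in group.items()}
        edges = {(trial[a], trial[b]) for a, b, _ in sync_edges
                 if trial[a] != trial[b]}
        if not has_cycle(edges):
            group = trial              # merge accepted
    return group


# A -> B carries heavy synchronization, but C lies on a path A -> C -> B,
# so merging A and B would make the merged cluster wait on itself.
tasks = ["A", "B", "C"]
edges = [("A", "B", 10), ("A", "C", 1), ("C", "B", 1)]
print(cluster(tasks, edges))   # the A-B merge is rejected; 3 clusters remain
```

With `min_cost=0` the cheap edges are also considered and all three tasks legally collapse into one cluster, which shows why a real algorithm must also balance load rather than merge greedily.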

Patent
16 Nov 2000
TL;DR: In this article, a lock delegation mechanism for transactional databases is proposed, which allows specification of ignore-conflict relationships between locking capabilities and allows the specification of uses of parameters with lock modes and facilitates transformation of such uses into a form suitable for utilization in execution environments that support ignore-Conflict relationships.
Abstract: Techniques have been developed whereby concurrency control mechanisms such as nested databases can be expressed in terms of operations implemented by various flexible transaction processing systems. For example, one such implementation of nested databases is particularly suitable for transaction processing systems that provide a lock delegation facility and allow specification of ignore-conflict relationships between locking capabilities. By providing techniques that support movement of objects from a database to a subdatabase thereof, as well as termination (e.g., commit or abort) of transactions and databases (including subdatabases), transaction processing systems can provide advanced transaction models with unconventional concurrency control mechanisms. Some realizations allow specification of uses of parameters with lock modes and facilitate transformation of such uses into a form suitable for utilization in execution environments that support ignore-conflict relationships.
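A toy lock manager can illustrate the two facilities the abstract names: lock delegation and ignore-conflict relationships between lock holders. The S/X lock modes, the names, and the delegation rule below are assumptions for illustration, not the patented mechanism:

```python
class LockManager:
    """Sketch of two facilities: lock delegation (a finishing
    subtransaction hands its locks to its parent rather than releasing
    them) and ignore-conflict relationships (pairs of holders whose
    conflicting lock modes are deliberately tolerated, e.g. a parent and
    its child in a nested transaction)."""

    def __init__(self):
        self.locks = {}            # object -> {holder: mode ("S" or "X")}
        self.ignore = set()        # unordered pairs of holders

    def allow_conflict(self, a, b):
        self.ignore.add(frozenset((a, b)))

    def acquire(self, holder, obj, mode):
        for other, other_mode in self.locks.get(obj, {}).items():
            conflicts = "X" in (mode, other_mode) and other != holder
            if conflicts and frozenset((holder, other)) not in self.ignore:
                return False       # real conflict: caller must wait
        self.locks.setdefault(obj, {})[holder] = mode
        return True

    def delegate(self, child, parent):
        # On commit of `child`, its locks move to `parent` instead of
        # being released, preserving isolation of the enclosing sphere.
        for holders in self.locks.values():
            if child in holders:
                mode = holders.pop(child)
                if mode == "X" or parent not in holders:
                    holders[parent] = mode


lm = LockManager()
lm.allow_conflict("parent", "child")
print(lm.acquire("parent", "acct", "X"))  # True
print(lm.acquire("child", "acct", "X"))   # True: conflict is ignored
print(lm.acquire("other", "acct", "S"))   # False: real conflict with X
lm.delegate("child", "parent")            # child's X lock passes upward
```

After delegation the parent holds the exclusive lock alone, so outsiders still see one consistent sphere of control, which is the point of delegating rather than releasing.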