
Showing papers on "Concurrency control published in 1990"


Proceedings ArticleDOI
01 May 1990
TL;DR: A new model of memory consistency, called release consistency, that allows for more buffering and pipelining than previously proposed models is introduced and is shown to be equivalent to the sequential consistency model for parallel programs with sufficient synchronization.
Abstract: Scalable shared-memory multiprocessors distribute memory among the processors and use scalable interconnection networks to provide high bandwidth and low latency communication. In addition, memory accesses are cached, buffered, and pipelined to bridge the gap between the slow shared memory and the fast processors. Unless carefully controlled, such architectural optimizations can cause memory accesses to be executed in an order different from what the programmer expects. The set of allowable memory access orderings forms the memory consistency model or event ordering model for an architecture. This paper introduces a new model of memory consistency, called release consistency, that allows for more buffering and pipelining than previously proposed models. A framework for classifying shared accesses and reasoning about event ordering is developed. The release consistency model is shown to be equivalent to the sequential consistency model for parallel programs with sufficient synchronization. Possible performance gains from the less strict constraints of the release consistency model are explored. Finally, practical implementation issues are discussed, concentrating on issues relevant to scalable architectures.
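
The abstract's key rule can be illustrated with a toy model (not the paper's formalism): ordinary writes may be buffered and pipelined freely, but must become globally visible before a release completes. All names below are illustrative.

```python
import threading

class ReleaseConsistentProcessor:
    """Toy model of release consistency: ordinary writes sit in a
    per-processor buffer; a release flushes them to shared memory."""

    def __init__(self, shared_memory):
        self.shared = shared_memory   # dict shared by all processors
        self.write_buffer = []        # pending ordinary writes

    def write(self, addr, value):
        # Ordinary write: buffered, free to be reordered/pipelined.
        self.write_buffer.append((addr, value))

    def acquire(self, lock):
        # Acquire: must complete before later accesses proceed.
        lock.acquire()

    def release(self, lock):
        # Release: all earlier writes must be visible before it completes.
        for addr, value in self.write_buffer:
            self.shared[addr] = value
        self.write_buffer.clear()
        lock.release()

mem, lock = {}, threading.Lock()
p = ReleaseConsistentProcessor(mem)
p.acquire(lock); p.write("x", 1); p.release(lock)
assert mem["x"] == 1   # visible once the release has completed
```

A program in which every shared access is bracketed by acquire/release ("sufficient synchronization") cannot observe the buffering, which is the sense in which release consistency matches sequential consistency.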

1,169 citations


Journal ArticleDOI
TL;DR: An innovative approach is presented to the design of fault-tolerant distributed systems that avoids the several rounds of message exchange required by current protocols for consensus agreement.
Abstract: An innovative approach is presented to the design of fault-tolerant distributed systems that avoids the several rounds of message exchange required by current protocols for consensus agreement. The approach is based on broadcast communication over a local area network, such as an Ethernet or a token ring, and on two novel protocols, the Trans protocol, which provides efficient reliable broadcast communication, and the Total protocol, which with high probability promptly places a total order on messages and achieves distributed agreement even in the presence of fail-stop, omission, timing, and communication faults. Reliable distributed operations, such as locking, update, and commitment, typically require only a single broadcast message rather than the several tens of messages required by current algorithms.

272 citations


Proceedings Article
Chandrasekaran Mohan
13 Aug 1990
TL;DR: ARIES/KVL, by also using for key-value locking the IX and SIX lock modes that were intended originally for table-level locking, is able to better exploit the semantics of the operations to improve concurrency, compared to the System R index protocols.

Abstract: This paper presents a method, called ARIES/KVL (Algorithm for Recovery and Isolation Exploiting Semantics using Key-Value Locking), for concurrency control in B-tree indexes. A transaction may perform any number of nonindex and index operations, including range scans. ARIES/KVL guarantees serializability and it supports very high concurrency during tree traversals, structure modifications, and other operations. Unlike in System R, when one transaction is waiting for a lock on a key value in a page, reads and modifications of that page by other transactions are allowed. Further, transactions that are rolling back will never get into deadlocks. ARIES/KVL, by also using for key-value locking the IX and SIX lock modes that were intended originally for table-level locking, is able to better exploit the semantics of the operations to improve concurrency, compared to the System R index protocols. These techniques are also applicable to the concurrency control of the classical links-based storage and access structures which are beginning to appear in modern systems.
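
ARIES/KVL's reuse of the IX and SIX modes rests on the standard multigranularity lock-compatibility matrix (due to Gray et al.); a minimal sketch of that matrix and the grant test follows. The ARIES/KVL-specific rules for which mode to request on which key value during fetch, insert, and delete are not reproduced here.

```python
# COMPAT[held][requested]: may `requested` be granted alongside `held`?
COMPAT = {
    "IS":  {"IS": True,  "IX": True,  "S": True,  "SIX": True,  "X": False},
    "IX":  {"IS": True,  "IX": True,  "S": False, "SIX": False, "X": False},
    "S":   {"IS": True,  "IX": False, "S": True,  "SIX": False, "X": False},
    "SIX": {"IS": True,  "IX": False, "S": False, "SIX": False, "X": False},
    "X":   {"IS": False, "IX": False, "S": False, "SIX": False, "X": False},
}

def grantable(requested, modes_held_by_others):
    """A request on a key value is grantable iff it is compatible with
    every mode currently held on that key value by other transactions."""
    return all(COMPAT[held][requested] for held in modes_held_by_others)

assert grantable("IX", ["IX", "IS"])   # two inserters can share a key value
assert not grantable("S", ["IX"])      # a range scan waits for the inserter
```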

245 citations


Journal ArticleDOI
TL;DR: This work presents a concurrency control protocol for systems using the earliest deadline first scheduling algorithm and shows that the protocol prevents both deadlock and chained blocking.
Abstract: Real-time systems have stringent deadline requirements for their tasks. To meet these requirements, a real-time system must use scheduling algorithms that ensure a predictable response even in the face of mutually exclusive accesses to critical sections. We present a concurrency control protocol for systems using the earliest deadline first scheduling algorithm. The protocol specifies a dynamic priority ceiling for each critical section, defined as the earliest deadline among the jobs that are currently in or will enter that critical section. A job trying to enter a critical section is blocked unless its priority is higher than the priority ceiling of every critical section currently in use. We show that the protocol prevents both deadlock and chained blocking. The schedulability condition and implementation issues of the protocol are also discussed.
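
A hedged sketch of the entry test the abstract describes, assuming deadlines double as EDF priorities (smaller deadline = more urgent): a job may enter a critical section only if its deadline is earlier than the ceiling of every critical section currently in use by another job.

```python
def may_enter(job_deadline, ceilings_in_use):
    """ceilings_in_use: dynamic priority ceilings (earliest deadlines) of
    critical sections currently held by other jobs. Entry is allowed only
    if the requesting job is strictly more urgent than all of them."""
    return all(job_deadline < c for c in ceilings_in_use)

assert may_enter(10, [15, 30])      # deadline 10 beats both ceilings
assert not may_enter(20, [15, 30])  # blocked by the ceiling at 15
```

Blocking a job on ceilings rather than on individual locks is what rules out both deadlock and chained blocking; maintenance of the ceilings as jobs arrive and leave is omitted here.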

226 citations


Proceedings ArticleDOI
05 Dec 1990
TL;DR: A new real-time optimistic concurrency control algorithm, WAIT-50, is presented that monitors transaction conflict states and gives precedence to urgent transactions in a controlled manner and is shown to provide significant performance gains over OPT-BC under a variety of operating conditions and workloads.
Abstract: The authors (1990) have shown that in real-time database systems that discard late transactions, optimistic concurrency control outperforms locking. Although the optimistic algorithm used in that study, OPT-BC, did not factor in transaction deadlines in making data conflict resolution decisions, it still outperformed a deadline-cognizant locking algorithm. A discussion is presented of why adding deadline information to optimistic algorithms is a nontrivial problem, and some alternative methods of doing so are described. A new real-time optimistic concurrency control algorithm, WAIT-50, is presented that monitors transaction conflict states and gives precedence to urgent transactions in a controlled manner. WAIT-50 is shown to provide significant performance gains over OPT-BC under a variety of operating conditions and workloads.
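
A hedged sketch of the wait-control idea suggested by the name WAIT-50: a transaction reaching validation defers while more than 50% of the transactions in conflict with it are more urgent (have earlier deadlines); otherwise it commits and the conflicting transactions restart. The exact conflict-state bookkeeping in the paper is more involved than this.

```python
def validation_decision(my_deadline, conflicting_deadlines):
    """Decide at validation time under a 50% wait-control rule."""
    if conflicting_deadlines:
        more_urgent = sum(d < my_deadline for d in conflicting_deadlines)
        if more_urgent / len(conflicting_deadlines) > 0.5:
            return "wait"                      # give urgent work precedence
    return "commit_and_restart_conflicts"      # optimistic default (OPT-BC)

print(validation_decision(100, [50, 60, 200]))  # -> 'wait' (2 of 3 urgent)
print(validation_decision(40, [50, 60, 200]))   # -> commit, restart others
```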

194 citations


Proceedings ArticleDOI
01 Jan 1990
TL;DR: Slow memory is presented as a memory that allows the effects of writes to propagate slowly through the system, eliminating the need for costly consistency maintenance protocols that limit concurrency.
Abstract: The use of weakly consistent memories in distributed shared memory systems to combat unacceptable network delay and to allow such systems to scale is proposed. Proposed memory correctness conditions are surveyed, and how they are related by a weakness hierarchy is demonstrated. Multiversion and messaging interpretations of memory are introduced as means of systematically exploring the space of possible memories. Slow memory is presented as a memory that allows the effects of writes to propagate slowly through the system, eliminating the need for costly consistency maintenance protocols that limit concurrency. Slow memory possesses a valuable locality property and supports a reduction from traditional atomic memory. Thus slow memory is as expressive as atomic memory. This expressiveness is demonstrated by two exclusion algorithms and a solution to M.J. Fischer and A. Michael's (1982) dictionary problem on slow memory.
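
The following toy model is one interpretation of the "slow" propagation, not the paper's definition: each node applies its own writes immediately, while other nodes drain per-writer FIFO channels arbitrarily late, so views may disagree for a while but each writer's updates are seen in issue order.

```python
from collections import deque

class SlowMemoryNode:
    def __init__(self, n_nodes, node_id):
        self.id = node_id
        self.view = {}                                   # local copy
        self.inbox = [deque() for _ in range(n_nodes)]   # FIFO per writer

    def write(self, addr, value, all_nodes):
        self.view[addr] = value          # own writes visible immediately
        for node in all_nodes:
            if node is not self:
                node.inbox[self.id].append((addr, value))

    def apply_one(self, writer_id):
        # Called whenever the node gets around to it -- the 'slow' part.
        if self.inbox[writer_id]:
            addr, value = self.inbox[writer_id].popleft()
            self.view[addr] = value

    def read(self, addr):
        return self.view.get(addr)

a, b = SlowMemoryNode(2, 0), SlowMemoryNode(2, 1)
a.write("x", 1, [a, b])
assert a.read("x") == 1 and b.read("x") is None   # not yet propagated
b.apply_one(0)
assert b.read("x") == 1                           # FIFO order preserved
```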

183 citations


Journal ArticleDOI
01 May 1990
TL;DR: This work has examined the sharing and synchronization behavior of a variety of shared memory parallel programs and found that the access patterns of a large percentage of shared data objects fall in a small number of categories for which efficient software coherence mechanisms exist.
Abstract: An adaptive cache coherence mechanism exploits semantic information about the expected or observed access behavior of particular data objects. We contend that, in distributed shared memory systems, adaptive cache coherence mechanisms will outperform static cache coherence mechanisms. We have examined the sharing and synchronization behavior of a variety of shared memory parallel programs. We have found that the access patterns of a large percentage of shared data objects fall in a small number of categories for which efficient software coherence mechanisms exist. In addition, we have performed a simulation study that provides two examples of how an adaptive caching mechanism can take advantage of semantic information.

166 citations


Journal ArticleDOI
TL;DR: Several new optimistic concurrency control techniques for objects in decentralized distributed systems are described here, their correctness and optimality properties are proved, and the circumstances under which each is likely to be useful are characterized.
Abstract: An optimistic concurrency control technique is one that allows transactions to execute without synchronization, relying on commit-time validation to ensure serializability. Several new optimistic concurrency control techniques for objects in decentralized distributed systems are described here, their correctness and optimality properties are proved, and the circumstances under which each is likely to be useful are characterized. Unlike many methods that classify operations only as Reads or Writes, these techniques systematically exploit type-specific properties of objects to validate more interleavings. Necessary and sufficient validation conditions can be derived directly from an object's data type specification. These techniques are also modular: they can be applied selectively on a per-object (or even per-operation) basis in conjunction with standard pessimistic techniques such as two-phase locking, permitting optimistic methods to be introduced exactly where they will be most effective. These techniques can be used to reduce the algorithmic complexity of achieving high levels of concurrency, since certain scheduling decisions that are NP-complete for pessimistic schedulers can be validated after the fact in time independent of the level of concurrency. These techniques can also enhance the availability of replicated data, circumventing certain tradeoffs between concurrency and availability imposed by comparable pessimistic techniques.
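
A hedged sketch of the type-specific idea: validation asks whether the operations of concurrently committed transactions commute with ours at the data-type level, rather than whether read/write sets intersect. The commutativity table below is for a hypothetical bank-account type and is purely illustrative.

```python
# COMMUTES[(a, b)]: do operations a and b commute on an account?
COMMUTES = {
    ("deposit", "deposit"): True,    # order never matters
    ("balance", "balance"): True,    # reads always commute
    ("deposit", "balance"): False,   # a balance observes deposits
    ("balance", "deposit"): False,
    ("withdraw", "withdraw"): False, # conservative: overdraft checks
    ("withdraw", "deposit"): False,
    ("deposit", "withdraw"): False,
    ("withdraw", "balance"): False,
    ("balance", "withdraw"): False,
}

def validates(my_ops, concurrently_committed_ops):
    """Commit-time test: every committed operation must commute with
    every one of ours, else this transaction aborts and retries."""
    return all(COMMUTES[(theirs, mine)]
               for theirs in concurrently_committed_ops
               for mine in my_ops)

assert validates(["deposit"], ["deposit", "deposit"])  # interleaving OK
assert not validates(["balance"], ["deposit"])         # must abort
```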

154 citations


Proceedings ArticleDOI
05 Dec 1990
TL;DR: A new concurrency control algorithm for real-time database systems is proposed, by which real-time scheduling and concurrency control can be integrated.
Abstract: A new concurrency control algorithm for real-time database systems is proposed, by which real-time scheduling and concurrency control can be integrated. The algorithm is founded on a priority-based locking mechanism to support time-critical scheduling by adjusting the serialization order dynamically in favor of high priority transactions. Furthermore, it does not assume any knowledge about the data requirements or execution time of each transaction, making the algorithm very practical.

144 citations


Proceedings Article
13 Aug 1990
TL;DR: The paper explains how requests to read, update, create, and delete objects are realized, and how the particular construction and semantics of version stamps are used to associate object versions with database versions.

Abstract: This paper presents an approach to maintaining consistency of object versions in multiversion database systems. In this approach a multiversion database is considered to be a set of logically independent and identifiable database versions. Each database version is composed of a version of each object stored in the system. However, identical object versions may be shared by many database versions. Database versions are identified by version stamps. Version stamps are also used to associate object versions with database versions. Because of the particular construction and semantics of version stamps, object version management is very efficient. Moreover, it is orthogonal to other problems of version management, such as object addressing, concurrency control, access authorization, etc. The paper explains how requests to read, update, create, and delete objects are realized.
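
A minimal sketch of the sharing mechanism described, with version stamps simplified to plain set membership (the paper's stamp construction is more compact): an object version carries the stamps of all database versions that share it, and an update unshares by copy-on-write.

```python
class MultiversionStore:
    """Object versions shared across database versions via stamp sets."""

    def __init__(self):
        self.versions = {}   # obj id -> list of [stamp_set, value] pairs

    def read(self, obj, db_stamp):
        for stamps, value in self.versions.get(obj, []):
            if db_stamp in stamps:
                return value
        raise KeyError(f"{obj} has no version in database version {db_stamp}")

    def update(self, obj, db_stamp, value):
        # Copy-on-write: only the updated database version is unshared.
        for entry in self.versions.get(obj, []):
            entry[0].discard(db_stamp)
        self.versions.setdefault(obj, []).append([{db_stamp}, value])

store = MultiversionStore()
store.update("x", "v1", 10)
store.versions["x"][0][0].add("v2")    # derive v2 from v1: x is shared
store.update("x", "v2", 99)            # v2 now has its own version of x
assert store.read("x", "v1") == 10 and store.read("x", "v2") == 99
```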

115 citations


Journal ArticleDOI
TL;DR: An approximate analytical model is developed to study the tradeoffs of replicating data in a distributed database environment, and it is found that the benefit of replicating data and the optimal number of replicates are sensitive to the concurrency control protocol.
Abstract: The authors develop an approximate analytical model to study the tradeoffs of replicating data in a distributed database environment. Several concurrency control protocols are considered, including pessimistic, optimistic, and semi-optimistic protocols. The approximate analysis captures the effect of the protocol on hardware resource contention and data contention. The accuracy of the approximation is validated through detailed simulations. It is found that the benefit of replicating data and the optimal number of replicates are sensitive to the concurrency control protocol. Under the optimistic and semi-optimistic protocols, replication can significantly improve response time, with an additional MIPS (million instructions per second) requirement to maintain consistency among the replicates. The optimal degree of replication is further affected by the transaction mix (e.g. the fraction of read-only transactions), the communications delay and overhead, the number of distributed sites, and the available MIPS. Sensitivity analyses have been carried out to examine how the optimal degree of replication changes with respect to these factors.

Book ChapterDOI
01 Mar 1990
TL;DR: Techniques are presented for the efficient handling of record ID lists, elimination of some locking, and determination of how many and which indexes to use, and opportunities for exploiting parallelism are identified.
Abstract: Many data base management systems' query optimizers choose at most one index for accessing the records of a table in a given query, even though many indexes may exist on the table. In spite of the fact that there are some systems which use multiple indexes, very little has been published about the concurrency control or query optimization implications (e.g., deciding how many indexes to use) of using multiple indexes. This paper addresses these issues and presents solutions to the associated problems. Techniques are presented for the efficient handling of record ID lists, elimination of some locking, and determination of how many and which indexes to use. The techniques are adaptive in the sense that the execution strategies may be modified at run-time (e.g., not use some indexes which were to have been used), if the assumptions made at optimization-time (e.g., about selectivities) turn out to be wrong. Opportunities for exploiting parallelism are also identified. A subset of our ideas has been implemented in IBM's DB2 V2R2 relational data base management system.

Proceedings ArticleDOI
05 Feb 1990
TL;DR: The top-down approach emerges as a viable paradigm for ensuring the proper concurrent execution of global transactions in an HDDBS, and general schemes for local concurrency control with prespecified global serialization orders are presented.
Abstract: A heterogeneous distributed database system (HDDBS) is a system which integrates preexisting databases to support global applications accessing more than one database. An outline of approaches to concurrency control in HDDBSs is presented. The top-down approach emerges as a viable paradigm for ensuring the proper concurrent execution of global transactions in an HDDBS. The primary contributions of this work are the general schemes for local concurrency control with prespecified global serialization orders. Two approaches are outlined. One is intended for performance enhancement but violates design autonomy, while the other preserves local autonomy at the cost of generality (it does not apply to all local concurrency control protocols). This study is intended as a guide to concurrency control in this new environment.

Proceedings ArticleDOI
02 Apr 1990
TL;DR: It is shown how a high-performance multi-level recovery algorithm can be systematically developed based on a few fundamental principles and implemented in the DASDBS database kernel system.
Abstract: Multi-level transactions have received considerable attention as a framework for high-performance concurrency control methods. An inherent property of multi-level transactions is the need for compensating actions, since state-based recovery methods no longer work correctly for transaction undo. The resulting requirement of operation logging adds to the complexity of crash recovery. In addition, multi-level recovery algorithms have to take into account that high-level actions are not necessarily atomic, e.g., if multiple pages are updated in a single action. In this paper, we present a recovery algorithm for multi-level transactions. Unlike typical commercial database systems, we have striven for simplicity rather than employing special tricks. It is important to note, though, that simplicity is not achieved at the expense of performance. We show how a high-performance multi-level recovery algorithm can be systematically developed based on a few fundamental principles. The presented algorithm has been implemented in the DASDBS database kernel system.

Proceedings ArticleDOI
07 May 1990
TL;DR: The authors show that the scheduling protocol gives correct schedules and is free of covert channels due to contention for access to data, i.e. the scheduler is data-conflict-secure.
Abstract: Consideration is given to the application of multiversion schedulers in multilevel secure database management systems (MLS/DBMSs). Transactions are vital for MLS/DBMSs because they provide transparency to concurrency and failure. Concurrent execution of transactions may lead to contention among subjects for access to data, which in MLS/DBMSs may lead to security problems. Multiversion schedulers reduce the contention for access to data by maintaining multiple versions. A description is given of the relation between schedules produced in MLS/DBMSs and those which are multiversion serializable. The authors also propose a secure multiversion scheduler. They show that the scheduling protocol gives correct schedules and is free of covert channels due to contention for access to data, i.e. the scheduler is data-conflict-secure.

Proceedings ArticleDOI
01 Jan 1990
TL;DR: It was shown that it is easier for a class of multiversion lock-based concurrency control algorithms to maintain temporal consistency of shared data when the conflicting transactions are close in the lengths of their periods.
Abstract: The authors present a model of typical hard real-time applications and the concepts of age and dispersion of data accessed by the real-time transactions. These are used to evaluate the performance of a class of multiversion lock-based concurrency control algorithms in maintaining temporal consistency of data in a real-time shared-data environment. It is shown that it is easier for such a concurrency control algorithm to maintain temporal consistency of shared data when the conflicting transactions are close in the lengths of their periods. The conflict pattern of the transactions has a more significant effect on the temporal inconsistency of data than the load level of the system. It is also desirable to have the transactions' periods within a small range. The best case was obtained when the faster transactions have higher utilizations. It was also shown that the use of the priority inheritance principle with the lock-based concurrency control algorithms can reduce transactions' blocking times and the number of transactions that access temporally inconsistent data as well as the worst-case age and dispersion of data.

Proceedings ArticleDOI
02 Apr 1990
TL;DR: Results from a performance study indicate that the Half-and-Half algorithm can be very effective at preventing thrashing under a wide range of operating conditions and workloads.
Abstract: A number of concurrency control performance studies have shown that, under high levels of data contention, concurrency control algorithms can exhibit thrashing behavior which is detrimental to overall system performance. In this paper, we present an approach to eliminating thrashing in the case of two-phase locking, a widely used concurrency control algorithm. Our solution, which we call the 'Half-and-Half' Algorithm, involves monitoring the state of the DBMS in order to dynamically control the multiprogramming level of the system. Results from a performance study indicate that the Half-and-Half algorithm can be very effective at preventing thrashing under a wide range of operating conditions and workloads.
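
The paper's monitoring rule is more refined, but the name suggests the flavor: thrashing under two-phase locking sets in roughly when half the transactions in the system are blocked, so an admission controller can shed load when the blocked fraction crosses one half. A hedged sketch:

```python
def adjust_mpl(current_mpl, n_running, n_blocked, mpl_min=1, mpl_max=100):
    """Feedback control of the multiprogramming level (MPL)."""
    total = n_running + n_blocked
    if total == 0:
        return current_mpl
    if n_blocked / total > 0.5:                # thrashing region
        return max(mpl_min, current_mpl - 1)   # admit fewer transactions
    return min(mpl_max, current_mpl + 1)       # contention low: admit more

print(adjust_mpl(20, n_running=8, n_blocked=12))  # -> 19, back off
```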

Journal ArticleDOI
In Kyung Ryu, Alexander Thomasian
TL;DR: The decomposition solution method and the associated iterative scheme are shown to be more accurate than previously defined methods for dynamic locking through validation against simulation results.
Abstract: A detailed model of a transaction processing system with dynamic locking is developed and analyzed. Transaction classes are distinguished on the basis of the number of data items accessed and the access mode (read-only/update). The performance of the system is affected by transaction blocking and restarts, due to lock conflicts that do not or do cause deadlocks, respectively. The probability of these events is determined by the characteristics of transactions and the database access pattern. Hardware resource contention due to concurrent transaction processing is taken into account by specifying the throughput characteristic of the computer system for processing transactions when there is no data contention. A solution method based on decomposition is developed to analyze the system, and also used as the basis of an iterative scheme with reduced computational cost. The analysis to estimate the probability of lock conflicts and deadlocks is based on the mean number of locks held by transactions. These probabilities are used to derive the state transition probabilities for the Markov chain specifying the transitions among the system states. The decomposition solution method and the associated iterative scheme are shown to be more accurate than previously defined methods for dynamic locking through validation against simulation results. Several important conclusions regarding the behavior of dynamic locking systems are derived from parametric studies.
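
As a hedged, back-of-the-envelope cousin of the estimates such models rest on: if each of the other N-1 transactions holds about L locks out of D lockable items, each of a transaction's k lock requests conflicts with probability roughly (N-1)L/D. This toy formula ignores lock modes, access skew, deadlocks, and restarts, all of which the paper's model treats carefully.

```python
def p_conflict_per_request(n_txns, mean_locks_held, db_size):
    """Probability one lock request hits a lock held by someone else."""
    return (n_txns - 1) * mean_locks_held / db_size

def p_txn_ever_blocks(k_requests, n_txns, mean_locks_held, db_size):
    p = p_conflict_per_request(n_txns, mean_locks_held, db_size)
    return 1 - (1 - p) ** k_requests

# 50 concurrent transactions, ~10 locks each, 100k items, 20 requests:
print(round(p_txn_ever_blocks(20, 50, 10, 100_000), 3))  # -> 0.094
```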

Proceedings ArticleDOI
03 Dec 1990
TL;DR: Two new concurrency control algorithms that are compatible with common security policies are described, based on the multiversion timestamp ordering technique, and implemented with single-level subjects.
Abstract: The concurrency control algorithms used for standard database systems can conflict with the security policies of multilevel secure database systems. The authors describe two new concurrency control algorithms that are compatible with common security policies. They are based on the multiversion timestamp ordering technique, and are implemented with single-level subjects. The use of only single-level subjects cannot introduce any additional threat of compromise of mandatory security; the analysis focuses on correctness. One of these algorithms has been implemented for Trusted ORACLE.
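
A hedged sketch of the basic multiversion timestamp ordering rule the algorithms build on (security-specific details omitted): a read returns the latest version not newer than the reader, and a write aborts if a later reader has already observed the version it would supersede.

```python
class MVTOItem:
    def __init__(self, initial_value):
        # Each version: [write_ts, value, max_read_ts].
        self.versions = [[0, initial_value, 0]]

    def _latest_before(self, ts):
        return max((v for v in self.versions if v[0] <= ts),
                   key=lambda v: v[0])

    def read(self, ts):
        v = self._latest_before(ts)
        v[2] = max(v[2], ts)           # remember the latest reader
        return v[1]

    def write(self, ts, value):
        v = self._latest_before(ts)
        if v[2] > ts:                  # a later reader saw the old version
            raise RuntimeError("abort: write would invalidate a read")
        self.versions.append([ts, value, ts])

x = MVTOItem(0)
x.read(5)          # transaction with ts=5 reads the initial version
x.write(10, 42)    # OK: no reader newer than ts=10 saw version 0
try:
    x.write(3, 7)  # rejected: the ts=5 read would become incorrect
except RuntimeError as e:
    print(e)
```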

OtherDOI
01 Apr 1990
TL;DR: Techniques are examined for replicating data and execution in directly distributed systems: systems in which multiple processes interact directly with one another while continuously respecting constraints on their joint behavior.
Abstract: Techniques are examined for replicating data and execution in directly distributed systems: systems in which multiple processes interact directly with one another while continuously respecting constraints on their joint behavior. Directly distributed systems are often required to solve difficult problems, ranging from management of replicated data to dynamic reconfiguration in response to failures. It is shown that these problems reduce to more primitive, order-based consistency problems, which can be solved using primitives such as the reliable broadcast protocols. Moreover, given a system that implements reliable broadcast primitives, a flexible set of high-level tools can be provided for building a wide variety of directly distributed application programs.

Journal ArticleDOI
F. Belik
TL;DR: A deadlock avoidance technique, based on a method of representing directed acyclic graphs, is presented; it is suitable for systems with a single resource of each resource type.
Abstract: A deadlock avoidance technique, based on a method of representing directed acyclic graphs, is presented. This technique is suitable for systems with a single resource of each resource type. The deadlock avoidance problem considered is the problem of changing a directed acyclic graph while keeping it acyclic. The resource allocation algorithm involves three operations on edges corresponding to release of a resource from a process, unsuccessful allocation of a resource to a process, and successful allocation of a resource to a process, where the allocations include a previous detection of cycles. A path matrix representation is used, making it possible to detect cycles efficiently. The low cost of cycle detection can amortize the cost of the other operations and linear (or even constant) amortized time for one operation can be attained in dense systems.
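
The heart of the technique can be sketched in a few lines (the paper's path-matrix representation and amortized analysis are more refined): keep the transitive closure of the allocation graph so that "would edge u -> v close a cycle?" is a constant-time lookup, and update the closure incrementally on each successful allocation.

```python
class AcyclicGraph:
    def __init__(self, n):
        self.n = n
        self.reach = [[False] * n for _ in range(n)]  # reach[a][b]: path a->b

    def add_edge(self, u, v):
        """Allocate: add u -> v only if it keeps the graph acyclic."""
        if u == v or self.reach[v][u]:
            return False       # v already reaches u: a cycle, so deny
        for a in range(self.n):
            if a == u or self.reach[a][u]:
                for b in range(self.n):
                    if b == v or self.reach[v][b]:
                        self.reach[a][b] = True
        return True

g = AcyclicGraph(3)
assert g.add_edge(0, 1) and g.add_edge(1, 2)
assert not g.add_edge(2, 0)    # would close the cycle 0 -> 1 -> 2 -> 0
```

Edge removal (release of a resource) needs closure maintenance too, which this sketch omits.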

Journal ArticleDOI
TL;DR: A new model for describing and reasoning about transaction-processing algorithms is presented, which provides a comprehensive, uniform framework for rigorous correctness proofs and general conditions for a concurrency control algorithm to be correct, i.e., to ensure that transactions appear to be atomic.

Journal ArticleDOI
01 May 1990
TL;DR: A new lock-based cache scheme that incorporates synchronization into the cache coherency mechanism is presented, along with a new simulation model embodying a widely accepted paradigm of parallel programming; simulations show that the lock-based protocol outperforms existing cache protocols.
Abstract: Introducing private caches in bus-based shared memory multiprocessors leads to the cache consistency problem since there may be multiple copies of shared data. However, the ability to snoop on the bus coupled with the fast broadcast capability allows the design of special hardware support for synchronization. We present a new lock-based cache scheme which incorporates synchronization into the cache coherency mechanism. With this scheme high-level synchronization primitives as well as low-level ones can be implemented without excessive overhead. Cost functions for well-known synchronization methods are derived for invalidation schemes, write update schemes, and our lock-based scheme. To accurately predict the performance implications of the new scheme, a new simulation model is developed embodying a widely accepted paradigm of parallel programming. It is shown that our lock-based protocol outperforms existing cache protocols.

Journal ArticleDOI
TL;DR: The problem of cooperative work in the software development domain is explored, and a solution combining object-oriented programming with rule-based modeling is proposed, exploiting recent advances in object-oriented databases, extended transaction models, and computer-supported cooperative work.
Abstract: The problem of cooperative work in the software development domain is explored, and a solution that combines object-oriented programming with rule-based modeling is proposed. The solution divides the problem into three components: how to detect potential conflicts between developers' concurrent activities, how to specify the consistency requirements of a project, and how to use the consistency specification to resolve potential conflicts. The focus is on the first component; the other two are merely sketched. The solution exploits recent advances in object-oriented databases, extended transaction models, and computer-supported cooperative work, all of which provide clues as to how to support cooperation while guaranteeing data consistency.

Proceedings ArticleDOI
05 Feb 1990
TL;DR: Performance results based on detailed simulation models suggest that concurrency control policies based on running transactions twice offer potential benefits for some system configurations.
Abstract: Various factors suggest that data contention may be of increasing significance in transaction processing systems. One approach to this problem is to run transactions twice, the first time without making any changes to the database. Benefits may result either from data prefetching during the first execution or from determining the locks required for purposes of scheduling. Consideration is given to various concurrency control methods based on this notion, and properties required for these methods to be useful are formalized. Performance results based on detailed simulation models suggest that such policies offer potential benefits for some system configurations.
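
A hedged sketch of this policy family under its key assumption (the rerun touches the same data, so the first pass's lock set is valid for the second). `execute` and `lock_table` are hypothetical application hooks, not anything from the paper.

```python
def run_twice(execute, lock_table):
    """First pass: dry run, record the items touched, write nothing.
    Second pass: predeclare all locks in canonical order, then run."""
    touched = set()
    execute(observe=touched.add, apply_writes=False)   # also prefetches

    for item in sorted(touched):     # fixed order avoids deadlock
        lock_table.lock(item)
    try:
        execute(observe=lambda _: None, apply_writes=True)
    finally:
        for item in sorted(touched):
            lock_table.unlock(item)
```

The two potential benefits named in the abstract map onto the two passes: the dry run warms the buffer pool, and the discovered lock set enables scheduling with all locks known up front.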

Proceedings ArticleDOI
05 Feb 1990
TL;DR: A framework for multilevel secure schedulers which allows analysis of a scheduler's security properties at the protocol level is presented, and necessary and sufficient conditions for DC-Security are developed and proved using noninterference.
Abstract: The implications of multilevel security on database concurrency control are explored. Transactions are vital for multilevel secure database management systems (MLS/DBMSs) because they provide transparency to concurrency and to failure. Concurrent execution of transactions may lead to contention among subjects for access to data, which in MLS/DBMSs may lead to security problems. An abstraction of security models in terms of the transactions which they produce is presented. The notion of DC-Security, which identifies a class of covert channels that are caused by contention for access to shared data, is introduced. This notion is useful for evaluating the security of transaction schedulers. A framework for multilevel secure schedulers which allows analysis of a scheduler's security properties at the protocol level is presented. Necessary and sufficient conditions are developed for DC-Security in this framework and proved using noninterference. A wide range of schedulers is evaluated against these conditions.

Proceedings ArticleDOI
01 May 1990
TL;DR: A new multilevel concurrency protocol is presented that uses recoverability, a semantics-based notion of conflict weaker than commutativity, and schedules operations according to relative conflict, a conflict notion based on the structure of operations.
Abstract: For next-generation information systems, concurrency control mechanisms are required to handle high-level abstract operations and to meet high throughput demands. The currently available single-level concurrency control mechanisms for reads and writes are inadequate for future complex information systems. In this paper, we will present a new multilevel concurrency protocol that uses a semantics-based notion of conflict, which is weaker than commutativity, called recoverability. Further, operations are scheduled according to relative conflict, a conflict notion based on the structure of operations. Performance evaluation via extensive simulation studies shows that with our multilevel concurrency control protocol, the performance improvement is significant when compared to that of a single-level two-phase locking based concurrency control scheme or to that of a multilevel concurrency control scheme based on commutativity alone. Further, simulation studies show that our new multilevel concurrency control protocol performs better even with resource contention.

Proceedings ArticleDOI
05 Feb 1990
TL;DR: A DMH system is presented, the tradeoffs between conservative and aggressive update propagation strategies are defined, and promising new strategies are identified.
Abstract: A distributed memory hierarchy (DMH) is a memory system consisting of storage modules distributed over a high-bandwidth local area network. It provides transaction applications with an abstraction of a single virtual memory space to which shared data are mapped. As in a conventional memory hierarchy (MH) in a single-machine system, a DMH is responsible for locating, migrating, and caching data pages; however, unlike a conventional MH, a DMH must do so across the storage modules in a network. In addition, a DMH must handle the problem of propagation of transaction updates while preserving serializability of transactions. The performance of a DMH system is strongly influenced by concurrency control and update propagation. It is also crucial that performance analysis accounts for memory resources and network requirements. A DMH system is presented, the tradeoffs between conservative and aggressive update propagation strategies are defined, and promising new strategies are identified.

Book ChapterDOI
01 Mar 1990
TL;DR: The problems involved in integrating concurrency control into object-oriented database systems are described, and an adaptation of the locking technique is proposed to address them without unnecessarily restricting concurrency.
Abstract: In this paper, we describe the problems involved in integrating concurrency control into object-oriented database systems. The object-oriented approach places specific constraints on concurrency between transactions, which we discuss. We then propose an adaptation of the locking technique to satisfy them, without unnecessarily restricting concurrency. Finally, we analyse in detail the impacts of both the transaction model and the locking granularity imposed by the underlying system. The solutions proposed are illustrated with the O2 system developed by the Altair GIP.

Proceedings ArticleDOI
05 Feb 1990
TL;DR: An example of a software development application is considered, and the formal model of H. Korth and G. Speegle (1988) is applied to show how this example could be represented as a set of database transactions.
Abstract: An example of a software development application is considered, and the formal model of H. Korth and G. Speegle (1988) is applied to show how this example could be represented as a set of database transactions. It is shown that, although the standard notion of correctness (serializability) is too strict, the notion of correctness in the Korth and Speegle model allows sufficient concurrency with acceptable overhead. An extrapolation is made from this example to draw some conclusions regarding the potential usefulness of a formal approach to the management of long-duration design transactions. >