
Showing papers on "Rollback published in 1981"


Proceedings Article
09 Sep 1981
TL;DR: A method is developed which frees read transactions from any consideration of concurrency control; all responsibility for correct synchronization is assigned to the update transactions. This has the great advantage that, in case of conflicts between read transactions and update transactions, no backup is performed.
Abstract: Recently, methods for concurrency control have been proposed which are called "optimistic". These methods do not consider access conflicts at the time they occur; instead, a transaction always proceeds, and at its end a check is performed to determine whether a conflict has happened. If so, the transaction is backed up. This basic approach is investigated in two directions. First, a method is developed which frees read transactions from any consideration of concurrency control; all responsibility for correct synchronization is assigned to the update transactions. This method has the great advantage that, in case of conflicts between read transactions and update transactions, no backup is performed. Then the application of optimistic solutions in distributed database systems is discussed, and a solution is presented.
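The central idea, that read transactions never validate and never back up, can be realized with multiversion snapshot reads, with all validation falling on the updaters. Below is a minimal single-threaded Python sketch under that assumption; MVStore, Txn, and the validation rule are illustrative, not the paper's algorithm.

```python
class MVStore:
    """Multiversioned key-value store (illustrative sketch, single-threaded)."""
    def __init__(self):
        self.versions = {}     # key -> list of (commit_seq, value), ascending
        self.commit_log = []   # list of (commit_seq, frozenset of written keys)
        self.seq = 0

    def begin(self, read_only=False):
        return Txn(self, self.seq, read_only)


class Txn:
    def __init__(self, store, start_seq, read_only):
        self.store = store
        self.start_seq = start_seq
        self.read_only = read_only
        self.read_set = set()
        self.writes = {}       # buffered; applied only at commit

    def read(self, key):
        if key in self.writes:               # read your own uncommitted write
            return self.writes[key]
        self.read_set.add(key)
        for seq, val in reversed(self.store.versions.get(key, [])):
            if seq <= self.start_seq:        # snapshot as of txn start:
                return val                   # readers never see later commits
        return None

    def write(self, key, value):
        assert not self.read_only
        self.writes[key] = value

    def commit(self):
        if self.read_only:
            return True  # readers always succeed: no validation, no backup
        # Validation falls entirely on the updater: abort if any transaction
        # that committed after this one began wrote a key this one read.
        for seq, written in self.store.commit_log:
            if seq > self.start_seq and written & self.read_set:
                return False                 # conflict: caller restarts updater
        self.store.seq += 1
        for key, value in self.writes.items():
            self.store.versions.setdefault(key, []).append((self.store.seq, value))
        self.store.commit_log.append((self.store.seq, frozenset(self.writes)))
        return True
```

For example, a read transaction begun before an update commits keeps reading its snapshot and commits unconditionally; only a conflicting update transaction is ever restarted.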

49 citations


Proceedings ArticleDOI
29 Apr 1981
TL;DR: A new approach to deadlock removal in such systems, based on partial rollbacks, is introduced; it does not, in general, require the total removal of a transaction to eliminate a deadlock.
Abstract: The problem of removing deadlocks from concurrent database systems using the two-phase locking protocol is considered. In particular, for systems which use no a priori information about transaction behavior in order to avoid deadlocks, it has generally been assumed necessary to totally remove and restart some transaction involved in a deadlock in order to relieve the situation. In this paper, a new approach to deadlock removal in such systems based on partial rollbacks is introduced. This approach does not in general require the total removal of a transaction to eliminate a deadlock. The task of optimizing deadlock removal using this method is discussed for systems allowing both exclusive and shared locking. A method is given for implementing this approach with no more storage overhead than that required for total removal and restart.
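A minimal sketch of the mechanics of a partial rollback under two-phase locking, assuming each lock acquisition defines a savepoint and an undo log records old values; the names and structure are illustrative, and the paper's treatment of correctness and of optimizing victim selection is not reproduced here.

```python
class LockTable:
    """Toy lock table: lock name -> set of owning transaction ids."""
    def __init__(self):
        self.owners = {}

    def acquire(self, lock, tid):
        self.owners.setdefault(lock, set()).add(tid)

    def release(self, lock, tid):
        self.owners.get(lock, set()).discard(tid)


class Transaction:
    """Each lock acquisition implicitly defines a savepoint: undo-log
    entries record the index of the newest lock held when they were made."""
    def __init__(self, tid):
        self.tid = tid
        self.held_locks = []   # locks in acquisition order
        self.undo_log = []     # (lock_index, key, old_value)


def partial_rollback(txn, contested_lock, db, lock_table):
    """Roll `txn` back only to just before it acquired `contested_lock`,
    undoing the writes and releasing the locks acquired since; the rest of
    the transaction's work and locks survive, unlike total removal."""
    cut = txn.held_locks.index(contested_lock)
    while txn.undo_log and txn.undo_log[-1][0] >= cut:
        _, key, old_value = txn.undo_log.pop()
        db[key] = old_value                 # undo the write
    for lock in txn.held_locks[cut:]:
        lock_table.release(lock, txn.tid)   # break the deadlock cycle
    del txn.held_locks[cut:]
```

After the partial rollback the victim re-enters its growing phase for the undone portion and re-requests locks as it re-executes; the point is that its earlier work and locks are preserved, which total removal and restart would discard.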

10 citations


Journal ArticleDOI
TL;DR: The results show that if transactions are executed independently of each other, the cost of compile-time validation is never higher than the cost of run-time validation; in turn, the cost of the latter is never higher than the cost of post-execution validation.
Abstract: Semantic integrity of a database is guarded by a set of integrity assertions expressed as predicates on database values. The problem of efficient evaluation of integrity assertions in transaction processing systems is considered. Three methods of validation (compile-time, run-time, and post-execution validations) are analyzed in terms of database access costs. The results show that if transactions are executed independently of each other, the cost of compile-time validation is never higher than the cost of run-time validation; in turn the cost of the latter is never higher than the cost of post-execution validation.
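The cost ordering is easiest to see on a single assertion. The sketch below is a hedged illustration with an assumed non-negative-balance constraint, not the paper's model: compile-time analysis may discharge the assertion with no database accesses at all, run-time checking stops a violating transaction at the offending write, and post-execution checking re-reads the touched data after the fact and may roll back completed work.

```python
def compile_time_safe(updates):
    """Compile-time validation: if analysis of the transaction's statements
    shows every update can only increase a balance, the non-negativity
    assertion cannot be violated and no run-time checks are needed."""
    return all(delta >= 0 for _, delta in updates)


def run_time_validate(db, updates):
    """Run-time validation: check each write before applying it, undoing
    partial work on the first violation."""
    undo = []
    for key, delta in updates:
        new = db.get(key, 0) + delta
        if new < 0:                        # would violate the assertion
            for k, old in reversed(undo):  # roll back partial work
                db[k] = old
            return False
        undo.append((key, db.get(key, 0)))
        db[key] = new
    return True


def post_execution_validate(db, updates):
    """Post-execution validation: apply everything, then re-read the touched
    values to test the assertion, rolling the whole transaction back on
    failure; the extra re-reads are why this costs at least as much."""
    before = {key: db.get(key, 0) for key, _ in updates}
    for key, delta in updates:
        db[key] = db.get(key, 0) + delta
    if any(db[key] < 0 for key in before):  # re-read and test
        db.update(before)                   # rollback
        return False
    return True
```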

3 citations


01 Jan 1981
TL;DR: The types of errors that occur in a database environment are classified, and the recovery information and subsequent actions needed to recover the correct state of the database are analyzed.
Abstract: The classification of the types of errors that occur in a database environment, as well as the necessary recovery information and the subsequent actions to recover the correct state of the database, is analyzed. A model of the parallel associative database processor, along with its functional architectural components, is studied. Three different recovery architectures considered for parallel associative database processors are presented. For each architecture, both the workload imposed by the recovery mechanisms on the execution of database operations (retrieve, modify, delete, and insert) and the workload involved in the recovery actions (rollback, restart, restore, and reconstruct) are analyzed. The performance of the alternative architectures is analyzed and compared on a quantitative basis. This comparison is made in terms of the number of extra revolutions of the database area required to process a transaction versus the number of records affected by a transaction. A variety of design parameters of the database processor and of the database, together with a mix of transaction types (modify, insert, and delete), are considered. A large number of combinations are exercised and the effects of the parameters on the extra processing time are identified. The use of fault-tolerance techniques and models in parallel associative database processors is studied. Fault tolerance for the storage unit is provided by error detection/correction codes and duplication of the database. Fault tolerance for the processing unit is provided by periodic checking, duplication, and triplication of the processing units. To study the reliability of the fault-tolerant system, a variety of parameters are considered and applied to the General Markov Reliability Model.
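As a hedged illustration only (the dissertation's error taxonomy and architectures are far richer, and this mapping of error classes to actions is an assumption), the four recovery actions named above can be thought of as a dispatch on error class:

```python
# Illustrative mapping from a coarse error class to the recovery action
# named in the abstract; the classes here are assumptions, not the paper's.
RECOVERY_ACTION = {
    "transaction_failure": "rollback",     # undo one transaction's effects
    "system_failure":      "restart",      # recover all active work from the log
    "partial_media_loss":  "restore",      # reload affected data from a backup copy
    "total_media_loss":    "reconstruct",  # rebuild the database from archive + log
}

def recovery_action(error_class):
    try:
        return RECOVERY_ACTION[error_class]
    except KeyError:
        raise ValueError(f"unclassified error: {error_class!r}")
```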

1 citation


01 Jan 1981
TL;DR: An analytic model which reflects occurrences of long latent errors is then developed to support determination of an optimal checkpoint interval as well as an optimal multi-step rollback strategy.
Abstract: Rollback-and-retry is a technique of saving computer system states at various checkpoints during program execution and, on detection of an error, reestablishing the computer system to a previously saved state and resuming program execution. A novel rollback-and-retry scheme called a two-level rollback is developed in which two types of checkpoints are established for reduction of both time overhead and rollback distance. The two types of checkpoints are called major checkpoints and minor checkpoints. Major checkpoints correspond to the checkpoints in existing single-level schemes. In order to establish minor checkpoints without incurring a significant amount of time overhead, program execution and saving of minor checkpoint records (i.e., information necessary for rollback) proceed in parallel. This parallelism exploitation, as well as compression of minor checkpoint records, is realized by using a content-addressable memory as a buffer for the records transferred between a processor and a backup memory. Sometimes a multi-step rollback, i.e., backing up past the most recent checkpoint, is executed to recover from a long latent error. An analytic model which reflects occurrences of long latent errors is then developed to support determination of an optimal checkpoint interval as well as an optimal multi-step rollback strategy.
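A minimal sketch of the two decisions the model supports, under assumed names and simplifications: choosing the rollback target given an error's detection time and estimated latency, and a first-order checkpoint-interval optimum (Young's classic rule, which, unlike the paper's model, ignores long latent errors).

```python
import math

def rollback_target(detect_time, latency, majors, minors):
    """Pick the most recent checkpoint (major or minor) that precedes the
    error's origin. Short-latency errors land on a nearby minor checkpoint
    (short rollback distance); a long latent error forces a multi-step
    rollback past recent checkpoints, possibly to an older major one.
    `majors` and `minors` are lists of checkpoint times."""
    origin = detect_time - latency
    candidates = [t for t in majors + minors if t <= origin]
    return max(candidates) if candidates else 0.0  # else restart from scratch


def approx_optimal_interval(checkpoint_cost, error_rate):
    """First-order optimum for the checkpoint interval, sqrt(2C/lambda);
    the paper's analytic model additionally accounts for long latent
    errors, which this simple formula does not."""
    return math.sqrt(2.0 * checkpoint_cost / error_rate)
```

For instance, with major checkpoints every 100 time units and minor checkpoints every 10, an error detected at t = 57 with latency 3 originated at t = 54, so a single-step rollback to the minor checkpoint at t = 50 suffices; a latency of 40 would force a multi-step rollback to t = 10.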

Proceedings ArticleDOI
05 Apr 1981