
Showing papers on "Serialization published in 1994"


Patent
16 Jun 1994
TL;DR: In this article, the modification time field of a file can be accessed by multiple readers or multiple writers, and each client in the cluster system can update its own copy of the modification time.
Abstract: A system and method for avoiding serialization on updating the modification time of files in a cluster system. In accordance with the method, the modification time field of a file can be accessed by multiple readers or multiple writers, and each client in the cluster system can update its own copy of the modification time. Whenever a client requests to read the modification time, the copies of the modification time are reconciled. The copies of the modification time are also reconciled when a cache flush or synchronization operation forces such reconciliation. The present system and method supports the requirement (of certain operating systems such as UNIX) that an explicit user-issued command to set the modification time be accomplished by granting an exclusive-write mode for the modification time field.
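
As an illustration of the mechanism the patent describes, here is a minimal Python sketch (the class, names, and the take-the-latest reconciliation rule are assumptions, not taken from the patent): writers update private per-client copies without serializing against one another, and a read forces reconciliation of the copies.

```python
import threading

class ClusteredMtime:
    """Sketch: each client updates a private copy of a file's
    modification time; reads reconcile all copies to the latest."""

    def __init__(self, num_clients):
        self._copies = [0.0] * num_clients   # one mtime copy per client
        self._lock = threading.Lock()        # protects reconciliation only

    def touch(self, client_id, when):
        # Writers update only their own copy, so they never serialize
        # against one another on the shared field.
        self._copies[client_id] = when

    def read_mtime(self):
        # A read forces reconciliation: the latest copy wins (assumed rule).
        with self._lock:
            latest = max(self._copies)
            self._copies = [latest] * len(self._copies)
            return latest

    def set_mtime_explicit(self, when):
        # An explicit user-issued set (e.g. utime() on UNIX) needs
        # exclusive-write semantics, modeled here by the global lock.
        with self._lock:
            self._copies = [when] * len(self._copies)
```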

88 citations


Book ChapterDOI
04 Jul 1994
TL;DR: It has been shown that atomic objects can be incompatible when they assume different Global Serialization Protocols (GSPs) and, if combined, no longer ensure transaction atomicity.
Abstract: A worthwhile approach to achieve transaction atomicity within object-based distributed systems is to confine concurrency control and recovery mechanisms within the shared objects themselves. Such objects, called atomic objects, enhance their modularity and can increase transaction concurrency. Nevertheless, when designed independently, atomic objects can be incompatible, and if combined, do not ensure transaction atomicity anymore. It has been shown that atomic objects can be incompatible when they assume different Global Serialization Protocols (GSPs).

16 citations


Journal ArticleDOI
TL;DR: This paper shows how to insert a priority-based serialization discipline into token-based mutual exclusion algorithms for multiple entries to a critical section, avoiding starvation, and investigates the discipline's implementation overhead in the algorithm and the number of messages per critical section (C.S.) entry.
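
A hedged sketch of what such a discipline could look like (the paper's actual algorithm is not reproduced here; the class and names are invented): pending critical-section requests are ordered by priority, with an arrival sequence number breaking ties so equal-priority requests are served FIFO. A production version would also need an aging rule so that low-priority requests cannot starve under a continuous stream of high-priority ones.

```python
import heapq
import itertools

class PriorityTokenQueue:
    """Sketch of a priority-based serialization discipline for a
    token-based mutual exclusion algorithm: pending critical-section
    requests are granted by priority, with a sequence number breaking
    ties in FIFO order among equal priorities."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker preserves arrival order

    def request_cs(self, node_id, priority):
        # Lower numbers = higher priority; the sequence number ensures
        # two requests never compare on node_id.
        heapq.heappush(self._heap, (priority, next(self._seq), node_id))

    def grant_token(self):
        # Whoever holds the token passes it to the best pending request.
        if self._heap:
            _, _, node_id = heapq.heappop(self._heap)
            return node_id
        return None
```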

12 citations


Journal ArticleDOI
TL;DR: A novel notion of correctness for parallel programs is demonstrated by showing that a specific asynchronous program enforces synchronous execution, which always halts, showing that true concurrency can be useful in the context of parallel program verification.
Abstract: Parallel execution of a program R (intuitively regarded as a partial order) is usually modeled by sequentially executing one of the total orders (interleavings) into which it can be embedded. Our work deviates from this serialization principle by using true concurrency to model parallel execution. True concurrency is represented via completions of R to semi-total orders, called time diagrams. These orders are characterized via a set of conditions (denoted by Ct), yielding orders or time diagrams which preserve some degree of the intended parallelism in R. Another way to express semi-total orders is to use re-writing or derivation rules (denoted by Cx) which for any program R generate a set of semi-total orders. This paper includes a classification of parallel execution into three classes according to three different types of Ct conditions. For each class a suitable Cx is found and a proof of equivalence between the set of all time diagrams satisfying Ct and the set of all terminal Cx derivations of R is devised. This equivalence between time diagram conditions and derivation rules is used to define a novel notion of correctness for parallel programs. This notion is demonstrated by showing that a specific asynchronous program enforces synchronous execution, which always halts, showing that true concurrency can be useful in the context of parallel program verification.
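
To make the serialization principle concrete, the following Python sketch (an illustration, not from the paper) enumerates the total orders, i.e. interleavings, into which a partial order embeds:

```python
def interleavings(events, precedes):
    """Enumerate the total orders (interleavings) embedding a partial
    order, i.e. the serializations of a parallel program R.
    `precedes` is a set of (a, b) pairs meaning a must run before b."""
    if not events:
        yield []
        return
    for e in events:
        # e is minimal if nothing still pending must precede it
        if not any((f, e) in precedes for f in events if f != e):
            for rest in interleavings(events - {e}, precedes):
                yield [e] + rest

# R: a runs before b; c is concurrent with both
print(list(interleavings({'a', 'b', 'c'}, {('a', 'b')})))
# yields ['a','b','c'], ['a','c','b'], ['c','a','b'] (set order may vary)
```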

6 citations


Journal ArticleDOI
TL;DR: This priority-based protocol addresses the problem of satisfying data consistency, with the goal being to increase the number of transactions that commit by their deadlines, and intends to meet more deadlines of higher-priority transactions than of lower-priority transactions.
Abstract: This paper presents an optimistic priority-based concurrency control protocol that schedules active transactions accessing firm-deadline real-time database systems. This protocol combines the forward and backward validation processes in order to control concurrent transactions with different priorities more effectively. A transaction in the validation phase can be committed successfully if the serialization order is adjusted in favour of the transactions with higher priority, and is aborted otherwise. Thus, this protocol establishes a priority ordering technique whereby a serialization order is selected and transaction execution is forced to obey this order. This priority-based protocol addresses the problem of satisfying data consistency, with the goal being to increase the number of transactions that commit by their deadlines. In addition, for desirable real-time conflict resolution, this protocol intends to meet more deadlines of higher-priority transactions than of lower-priority transactions.
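
A toy Python sketch of the validation decision (the conflict rules and the priority convention are assumptions; the paper's actual serialization-order adjustment is more involved):

```python
from dataclasses import dataclass, field

@dataclass
class Txn:
    tid: int
    priority: int            # larger = more urgent (assumed convention)
    read_set: set = field(default_factory=set)
    write_set: set = field(default_factory=set)

def validate(txn, active):
    """Toy sketch: a validating transaction surveys conflicts with
    still-active transactions and commits only if every conflict can
    be resolved in favour of the higher-priority side; otherwise it
    aborts."""
    for other in active:
        ww = txn.write_set & other.write_set
        rw = (txn.read_set & other.write_set) | (txn.write_set & other.read_set)
        if not (ww or rw):
            continue                  # no conflict, no ordering needed
        if other.priority > txn.priority:
            return False              # yield: abort/restart the validator
        # Otherwise the serialization order is adjusted so that txn
        # precedes `other`; `other` must re-validate against this order.
    return True

t1 = Txn(1, priority=5, read_set={'x'}, write_set={'y'})
t2 = Txn(2, priority=9, read_set={'y'}, write_set={'z'})
print(validate(t1, [t2]))   # False: t2 outranks t1 on the y-conflict
```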

3 citations


Journal ArticleDOI
01 Dec 1994
TL;DR: The intention is to provide an efficient medium for expressing concurrency and synchronization that is amenable to modular programming, and which can be used to succinctly and efficiently describe a variety of diverse concurrency paradigms useful for parallel symbolic computing.
Abstract: We describe a parallel object-oriented dialect of Scheme called TS/SCHEME that provides a simple and expressive interface for building asynchronous parallel programs. The main component in TS/SCHEME's coordination framework is an abstraction that serves the role of a distributed data structure. Distributed data structures are an extension of conventional data structures insofar as many tasks may simultaneously access and update their contents according to a well-defined serialization protocol. The semantics of these structures also specifies that consumers which attempt to access an as-of-yet undefined element are to block until a producer provides a value. TS/SCHEME permits the construction of two basic kinds of distributed data structures, those accessed by content, and those accessed by name. These structures can be further specialized and composed to yield a number of other synchronization abstractions. Our intention is to provide an efficient medium for expressing concurrency and synchronization that is amenable to modular programming, and which can be used to succinctly and efficiently describe a variety of diverse concurrency paradigms useful for parallel symbolic computing.
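
The blocking-read semantics can be sketched as follows for a name-accessed structure; this is a Python rendering of the idea with invented names (TS/SCHEME itself is a Scheme dialect):

```python
import threading

class BlockingMap:
    """Sketch of the abstract's synchronization semantics: consumers
    that access an as-yet-undefined element block until a producer
    provides a value."""

    def __init__(self):
        self._data = {}
        self._cond = threading.Condition()

    def put(self, name, value):
        with self._cond:
            self._data[name] = value
            self._cond.notify_all()     # wake any blocked consumers

    def get(self, name):
        with self._cond:
            # Well-defined serialization: a read observes a value only
            # after some producer has installed it.
            self._cond.wait_for(lambda: name in self._data)
            return self._data[name]
```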

3 citations


Proceedings ArticleDOI
22 Aug 1994
TL;DR: A distributed transaction management scheme is introduced that maintains the autonomy of the local database systems and guarantees fairness in the execution of the transactions in the system.
Abstract: Transaction management in a multidatabase system must ensure global serializability. Local serializable execution is, by itself, not sufficient to ensure global serializability, since local serialization orders of subtransactions of global transactions must be the same in all systems. In this paper, a distributed transaction management scheme is introduced. The scheme maintains the autonomy of the local database systems. It is free from global deadlock and guarantees fairness in the execution of the transactions in the system.
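
The abstract does not detail the scheme, but the problem it addresses can be illustrated with the classical ticket technique, a different, well-known method: each global subtransaction increments a ticket at its local site, making local serialization orders observable and comparable across sites. A Python sketch:

```python
class LocalSite:
    """Illustration only (not necessarily the paper's scheme): each
    global subtransaction reads and increments a ticket at its site;
    comparing ticket orders across sites reveals whether all sites
    serialized the global transactions identically."""

    def __init__(self):
        self._ticket = 0

    def take_ticket(self):
        # Executed inside the subtransaction, so the local concurrency
        # control serializes ticket accesses along with the real work.
        self._ticket += 1
        return self._ticket

def globally_consistent(orders):
    """orders[site] = list of global txn ids in local ticket order.
    Global serializability needs the same relative order everywhere."""
    return len({tuple(o) for o in orders.values()}) == 1

print(globally_consistent({'A': [1, 2], 'B': [1, 2]}))  # True
print(globally_consistent({'A': [1, 2], 'B': [2, 1]}))  # False
```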

2 citations


Journal ArticleDOI
Hyeokmin Kwon, Songchun Moon
TL;DR: A new validation scheme for OCC called reordering serial equivalence (RSE) is proposed by introducing reverse serializability, which ensures the correctness of RSE when serialization is allowed in the reverse order of transactions' commits.

1 citation


Book ChapterDOI
01 Jun 1994
TL;DR: A simple, semi-empirical, tool for gauging the possible throughput scaleup on shared-memory multiprocessors executing online transaction processing (OLTP) workloads and the concept of super-seriality is introduced to account for performance penalties due to compound serialization in OLTP workloads.
Abstract: We present a simple, semi-empirical, tool for gauging the possible throughput scaleup on shared-memory multiprocessors executing online transaction processing (OLTP) workloads. Unlike most scientific workloads, concurrent, multi-user, OLTP workloads exhibit a high degree of data sharing. The attendant sublinear scaling is found not to follow an Amdahl-like law due to simple seriality or single point of serialization. The concept of super-seriality is introduced to account for performance penalties due to compound serialization in OLTP workloads. Super-seriality has the effect of inducing a premature maximum in the scaling curve beyond which it is counter-productive to add further processors. This concept is incorporated into a performance tool called qcomp which has been used for both competitive benchmarking and system sizing.
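
The abstract does not print the model, but one published form of the super-serial idea adds a quadratic serialization term to Amdahl's linear one; the Python sketch below (parameter names assumed) shows how such a term induces a premature maximum in the scaling curve.

```python
import math

def amdahl(p, sigma):
    # Simple seriality: a single serialization point of fraction sigma.
    return p / (1 + sigma * (p - 1))

def super_serial(p, sigma, lam):
    """Super-serial scaling model (one published form of the idea):
    a second, compound-serialization term grows quadratically in p."""
    return p / (1 + sigma * (p - 1) + lam * sigma * p * (p - 1))

def p_star(sigma, lam):
    # Processor count beyond which adding CPUs is counter-productive.
    return math.floor(math.sqrt((1 - sigma) / (lam * sigma)))

sigma, lam = 0.03, 0.01
print(p_star(sigma, lam))                                           # 56
print(super_serial(64, sigma, lam) < super_serial(56, sigma, lam))  # True
```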

1 citation


Patent
14 Jan 1994
TL;DR: In this article, access to a shared data object is selectively serialized by a serializing mechanism for operations in which a minimum-unit data type object definition is generated for the type of the data object and is designated as the minimum-unit operation; serialization of access to the shared storage area is realized by establishing a lock on the resource.
Abstract: PURPOSE: To selectively serialize access to a shared data object by executing a serializing mechanism for operations in which a minimum-unit data type object definition is generated for the type of the data object and is designated as the minimum-unit operation. CONSTITUTION: All processing takes place in a multiprocessing computer system 100. Each of the processors 110, 112, 114, and 116 can access a shared storage area 120. Among its other functions, each processor 110-116 executes a stream of instructions that accesses a storage location 122 in the shared storage area 120. Serialization of access to the shared storage area 120 is realized by establishing a lock on the resource. Access to the shared storage area 120 must be serialized only when a conflicting access could change the data.
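
The rule that access need be serialized only when an opposing access can change data is essentially a readers-writer discipline; a minimal Python sketch (an interpretation, not the patent's mechanism):

```python
import threading

class SharedArea:
    """Sketch: accesses to shared storage are serialized only when an
    opposing access can change the data. Readers proceed concurrently;
    a writer takes the lock exclusively."""

    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()
        self._readers = 0
        self._readers_lock = threading.Lock()

    def read(self, key):
        with self._readers_lock:
            self._readers += 1
            if self._readers == 1:
                self._lock.acquire()   # first reader excludes writers
        try:
            return self._data.get(key)
        finally:
            with self._readers_lock:
                self._readers -= 1
                if self._readers == 0:
                    self._lock.release()

    def write(self, key, value):
        with self._lock:               # writers serialize against everyone
            self._data[key] = value
```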

1 citation


Dissertation
12 Nov 1994
TL;DR: In this article, a probabilistic Value Induced Shadow Allocation (VISA) policy is developed which aims at preserving the most valuable shadows for each system transaction, and a generic SCC algorithm (SCC-kS) that operates under a limited redundancy assumption is presented; it allows no more than a constant number of shadows to coexist on behalf of any uncommitted transaction.
Abstract: Concurrency control methods developed for traditional database systems are not appropriate for real-time database systems (RTDBS), where, in addition to database consistency requirements, satisfying timing constraints is an integral part of the correctness criterion. Most real-time concurrency control protocols considered in the literature combine time-critical scheduling with traditional concurrency control methods to conform to transaction timing constraints. These methods rely on either transaction blocking or restarts, both of which are inappropriate for real-time concurrency control because of the unpredictability they introduce. Moreover, RTDBS performance objectives differ from those of conventional database systems in that maximizing the number of transactions that complete before their deadlines becomes the decisive performance objective, rather than merely maximizing concurrency (or throughput). Recently, Speculative Concurrency Control (SCC) was proposed as a categorically different approach to concurrency control for RTDBS. SCC relies on the use of redundant processes (shadows), which speculate on alternative schedules once conflicts that threaten the consistency of the database are detected. SCC algorithms utilize added system resources to ensure that correct (serializable) executions are discovered and adopted as early as possible, thus increasing the likelihood of the timely commitment of transactions. This dissertation starts by reviewing the Order-Based SCC (SCC-OB) algorithm, which associates almost as many shadows as there are serialization orders of transactions. After demonstrating SCC-OB's excessive use of redundancy, a host of novel SCC-based protocols is introduced. Conflict-Based SCC (SCC-CB) reduces the number of shadows that a running transaction needs to keep by maintaining one shadow per uncommitted conflicting transaction. It is shown that the quadratic number of shadows maintained by SCC-CB is optimal, covering all serialization orders produced by SCC-OB. SCC-CB's correctness is established by showing that it admits only serializable histories. Next, the trade-off between the number of shadows and timeliness is considered. A generic SCC algorithm (SCC-kS) that operates under a limited redundancy assumption is presented; it allows no more than a constant number k of shadows to coexist on behalf of any uncommitted transaction. Next, a novel technique is proposed that incorporates additional information such as deadline, priority and criticalness within the SCC methodology. SCC with Deferred Commit (SCC-DC) utilizes this additional information to improve timeliness through the controlled deferment of transaction commitments. A probabilistic Value Induced Shadow Allocation (VISA) policy is developed which aims at preserving the most valuable shadows for each system transaction. The thesis of this dissertation is that SCC-based algorithms offer a new dimension, redundancy, to improve the timeliness of RTDBSs.
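
A highly simplified Python sketch of the limited-redundancy idea behind SCC-kS (the API and policy hooks are assumptions; the dissertation's protocols manage full shadow executions, not mere bookkeeping):

```python
from dataclasses import dataclass, field

@dataclass
class SpeculativeTxn:
    """Sketch: each uncommitted transaction may keep at most k shadows,
    each speculating that one conflicting transaction commits first."""
    tid: int
    k: int
    shadows: set = field(default_factory=set)  # tids speculated to win

    def on_conflict(self, other_tid):
        # A conflict threatening serializability: fork a shadow for the
        # alternative order, unless the redundancy bound k is reached.
        if len(self.shadows) < self.k:
            self.shadows.add(other_tid)
        # else a policy such as VISA would keep only the most valuable
        # shadows and decline this speculation.

    def on_commit(self, other_tid):
        # If the speculated order materialized, the prepared shadow is
        # adopted immediately instead of restarting from scratch.
        if other_tid in self.shadows:
            self.shadows.discard(other_tid)
            return "adopt-shadow"
        return "continue-primary"
```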

Patent
13 Dec 1994
TL;DR: In this paper, a data restoration mechanism, triggered by an error indication derived from discrepant suffix records and a check record, synchronizes primary and secondary data stores without the need to hold access to the primary during the synchronization process.
Abstract: PURPOSE: To share data among plural systems while maintaining the consistency of the data. CONSTITUTION: User data 402A and B are maintained in a primary data store and an optional alternate data store. Each data store is provided with a set of lock blocks 401A-N, and each lock block corresponds to one of the systems sharing the data. The contents of a lock block are normally a time-of-day (TOD) value and indicate the state of the system's ownership of the related data. To guarantee the consistency of the data, suffix records 404A and B and a check record 403 are used. An error indication derived from discrepant suffix records and the check record triggers a data restoration mechanism. The restoration mechanism synchronizes the primary and secondary data stores without the need to hold access to the primary during the synchronization process.
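
A minimal Python sketch of the consistency test and restoration trigger as the abstract describes them (the record structure and names are assumptions):

```python
def needs_recovery(check_record, suffix_records):
    """Sketch of the patent's consistency test: each copy of the user
    data carries a suffix record that must agree with the shared check
    record; any discrepancy signals an error and triggers the
    data-restoration mechanism."""
    return any(s != check_record for s in suffix_records)

def reconcile(primary, secondary, check_record):
    # The restoration mechanism copies the primary over the secondary
    # and restamps its suffix; the patent avoids holding up access to
    # the primary meanwhile (sketched here as a simple overwrite).
    secondary['data'] = dict(primary['data'])
    secondary['suffix'] = check_record

primary   = {'data': {'x': 1}, 'suffix': 'TOD-42'}
secondary = {'data': {'x': 0}, 'suffix': 'TOD-41'}
if needs_recovery('TOD-42', [primary['suffix'], secondary['suffix']]):
    reconcile(primary, secondary, 'TOD-42')
print(secondary)   # {'data': {'x': 1}, 'suffix': 'TOD-42'}
```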