
Showing papers on "Serialization published in 1987"


Proceedings ArticleDOI
01 Dec 1987
TL;DR: This paper re-examines issues related to concurrency control and the behavior of block contexts in ConcurrentSmalltalk, and describes the evolution of the solutions for overcoming these shortcomings along with the language's model of computation.
Abstract: ConcurrentSmalltalk is an object-oriented concurrent programming language/system which has been running since late 1985. ConcurrentSmalltalk has the following features: upward compatibility with Smalltalk-80; asynchronous method calls and CBox objects, which yield concurrency; and atomic objects, which run one request at a time and thereby serialize the many requests sent to them. Through experience in writing programs, some disadvantages have become apparent related to concurrency control and the behavior of block contexts. In this paper, these issues are re-examined in detail, and the evolution of the solutions for overcoming these shortcomings is described along with the model of computation in ConcurrentSmalltalk. New features are explained with an example program. The implementation of the ConcurrentSmalltalk virtual machine is also presented, along with its evaluation.
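
The "atomic object" behaviour described here, one request served at a time with callers receiving asynchronous placeholders much like CBox objects, can be illustrated outside Smalltalk. The following Python sketch is not ConcurrentSmalltalk code; the AtomicObject/Counter names, the mailbox thread, and the use of concurrent.futures placeholders are illustrative assumptions.

```python
# Minimal sketch of an "atomic object" that serializes requests and answers
# asynchronously through future-like placeholders (roughly analogous to the
# CBox objects described in the abstract). Names are illustrative, not taken
# from ConcurrentSmalltalk itself.
import queue
import threading
from concurrent.futures import Future

class AtomicObject:
    """Runs one request at a time; callers get a Future back immediately."""
    def __init__(self):
        self._mailbox = queue.Queue()
        threading.Thread(target=self._serve, daemon=True).start()

    def send(self, method, *args):
        """Asynchronous method call: enqueue the request, return a placeholder."""
        reply = Future()
        self._mailbox.put((method, args, reply))
        return reply

    def _serve(self):
        # Single server loop: requests are handled strictly one after another.
        while True:
            method, args, reply = self._mailbox.get()
            try:
                reply.set_result(method(*args))
            except Exception as exc:
                reply.set_exception(exc)

# Example: a shared counter whose increments never interleave.
class Counter:
    def __init__(self):
        self.value = 0
    def increment(self, by):
        self.value += by
        return self.value

if __name__ == "__main__":
    counter = Counter()
    atomic = AtomicObject()
    replies = [atomic.send(counter.increment, 1) for _ in range(100)]
    print(replies[-1].result())  # 100: all requests were serialized
```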

98 citations


Journal ArticleDOI
TL;DR: The main advantage of this certification method is that it allows a chronological commit order which differs from the serialization one (thus avoiding rejections or delays of transactions which occur in usual certification methods or in classical locking or timestamping ones).
Abstract: This paper introduces, as an optimistic concurrency control method, a new certification method by means of intervals of timestamps, usable in a distributed database system. The main advantage of this method is that it allows a chronological commit order which differs from the serialization one (thus avoiding rejections or delays of transactions which occur in usual certification methods or in classical locking or timestamping ones). The use of the dependency graph permits both classifying this method among existing ones and proving it. The certification protocol is first presented under the hypothesis that transactions' certifications are processed in the same order on all the concerned sites; it is then extended to allow concurrent certifications of transactions.
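
The general idea of certifying with an interval of timestamps rather than a single timestamp can be sketched as follows. This Python fragment is a simplified, single-site illustration under assumed bookkeeping (per-item read/write timestamps, a midpoint choice inside the interval); it is not the paper's distributed certification protocol, but it shows how a transaction that commits later can still receive an earlier serialization timestamp instead of being rejected.

```python
# Hedged sketch of certification with intervals of timestamps (one site, much
# simplified): every committed write installs a version carrying the writer's
# serialization timestamp. At certification, a transaction's admissible
# interval is narrowed by the versions it read and the items it overwrites;
# it is rejected only if the interval becomes empty.

INF = float("inf")

class Item:
    def __init__(self):
        self.write_ts = [0.0]   # serialization ts of committed versions
        self.max_read_ts = 0.0  # highest serialization ts among certified readers

def certify(reads, writes, db):
    """reads: {item: ts_of_version_read}; writes: set of item names."""
    low, high = 0.0, INF
    for name, version_ts in reads.items():
        low = max(low, version_ts)                       # after the version's writer
        later = [w for w in db[name].write_ts if w > version_ts]
        if later:
            high = min(high, min(later))                 # before any overwriter
    for name in writes:
        low = max(low, db[name].max_read_ts, max(db[name].write_ts))
    if low >= high:
        return None                                      # empty interval: reject
    ts = (low + high) / 2 if high < INF else low + 1.0   # any point in the interval
    for name in reads:
        db[name].max_read_ts = max(db[name].max_read_ts, ts)
    for name in writes:
        db[name].write_ts.append(ts)
    return ts

db = {"x": Item(), "y": Item()}
t1 = certify(reads={"x": 0.0}, writes={"y"}, db=db)
# t2 read the old version of y, so it is serialized *before* t1 even though
# it is certified (and commits) chronologically *after* t1.
t2 = certify(reads={"y": 0.0}, writes=set(), db=db)
print(t1, t2)   # e.g. 1.0 0.5
```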

58 citations


Patent
01 Sep 1987
TL;DR: A method and apparatus for providing arbitration between, and serialization of, plural processors in a multiprocessor system, comprising, in each processor, a delay network, a priority circuit, a REQUEST generator, an ORDER generator, a serialization program, an ACK generator, and an ACK receiver.
Abstract: A method and apparatus for providing arbitration between and serialization of plural processors in a multiprocessor system comprising, in each processor, a delay network, a priority circuit, a REQUEST generator, an ORDER generator, a serialization program, an ACK generator and an ACK receiver. In operation, the delay network ensures that simultaneously generated REQUESTS received from plural processors are received by the priority circuit at the same time. A processor awarded priority issues an ORDER to the other processors and thereafter drops its REQUEST to allow an award of priority to another processor. An ACK is received by the ORDER issuing processor from each processor when it executes the ORDER. The ORDER issuing processor then completes the task which gave rise to the ORDER. To conserve processing time, priority awards may be made before previously issued ORDERS are completed. Alternatively, REQUEST issuing processors can simply hold their REQUEST and thereby prevent interruption of instructions or groups of instructions.
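
A rough software analogue of the REQUEST / ORDER / ACK handshake may help make the sequence concrete. The patent describes hardware (delay networks, priority circuits), so the Python sketch below only models the control flow; the fixed lowest-id priority rule and the class names are assumptions, not taken from the patent.

```python
# Software analogue of the REQUEST / ORDER / ACK handshake described above.
import threading

class Arbiter:
    """Awards priority to the requesting processor with the lowest id."""
    def __init__(self):
        self._lock = threading.Lock()
        self._requests = set()

    def request(self, proc_id):
        with self._lock:
            self._requests.add(proc_id)

    def award(self):
        with self._lock:
            if not self._requests:
                return None
            winner = min(self._requests)     # fixed priority: lowest id wins
            self._requests.discard(winner)   # winner drops its REQUEST
            return winner

class Processor:
    def __init__(self, proc_id, peers):
        self.id = proc_id
        self.peers = peers                   # all processors, including self

    def execute_order(self, order):
        # Each processor performs the requested serialization action, then ACKs.
        return f"ACK({self.id})"

    def broadcast_order(self, order):
        # The winner issues the ORDER and waits for every ACK before
        # completing the task that gave rise to it.
        return [p.execute_order(order) for p in self.peers if p is not self]

arbiter = Arbiter()
procs = []
procs.extend(Processor(i, procs) for i in range(3))
arbiter.request(2)
arbiter.request(0)
winner = arbiter.award()                     # processor 0 wins on priority
print(winner, procs[winner].broadcast_order("flush-buffers"))
```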

22 citations


Patent
Tadaaki Isobe1, Isobe Toshiko1
16 Nov 1987
TL;DR: The access instruction pipeline of a vector processor includes a plurality of buffers for buffering memory requests and sending them to a storage control unit.
Abstract: An access instruction pipeline for receiving access instructions for data to be input to the pipeline of a vector processor includes a plurality of buffers for buffering a memory request and sending it to a storage control unit, and a detector for judging, at the last stage of the plurality of buffers, whether an instruction is an access instruction or a serialization instruction for serializing the memory access instructions among access instruction pipelines. If a serialization instruction is detected at the last stage of a pipeline, the pipelining operation is stopped, but instructions continue to fill the stopped pipeline. After a serialization instruction has been detected at the last stage of all the pipelines, the pipelining operation starts again.
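
The stall-and-resume behaviour around a serialization instruction can be simulated in a few lines. The Python sketch below is an assumption-laden illustration (three buffer stages, a SERIALIZE marker, instructions held in a pending queue rather than filled into the stalled buffers); it is not the patented circuit, but it reproduces the barrier effect: requests behind the serialization point issue only after every pipeline has reached its own serialization instruction.

```python
# Software simulation of the barrier behaviour described in the abstract:
# each access-instruction pipeline stalls when a SERIALIZE reaches its last
# buffer stage, and all pipelines resume together once every one of them has
# a SERIALIZE at its last stage. Stage counts and names are assumptions.
from collections import deque

SERIALIZE = "SERIALIZE"
STAGES = 3

class Pipeline:
    def __init__(self):
        self.stages = deque([None] * STAGES, maxlen=STAGES)
        self.pending = deque()               # instructions waiting to enter

    def issue(self, instr):
        self.pending.append(instr)

    def stalled(self):
        return self.stages[-1] == SERIALIZE

    def advance(self):
        """Shift one stage; return the memory request leaving the pipeline, if any."""
        if self.stalled():
            return None                      # hold everything until the barrier clears
        out = self.stages[-1]
        self.stages.appendleft(self.pending.popleft() if self.pending else None)
        return out

def step(pipelines):
    # Barrier: when every pipeline has SERIALIZE at its last stage, retire them all.
    if all(p.stalled() for p in pipelines):
        for p in pipelines:
            p.stages[-1] = None
    return [p.advance() for p in pipelines]

pipes = [Pipeline(), Pipeline()]
for instr in ["LOAD A", SERIALIZE, "LOAD B"]:
    pipes[0].issue(instr)
for instr in ["STORE C", "STORE D", SERIALIZE]:
    pipes[1].issue(instr)
for cycle in range(8):
    print(cycle, step(pipes))   # "LOAD B" issues only after both pipelines serialize
```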

21 citations


Journal ArticleDOI
TL;DR: It is shown that in any serializable execution, if all transactions see the failures and recoveries of data item copies in a consistent order, then the execution is correct.
Abstract: A replicated database is a distributed database in which copies of some data items are stored redundantly at multiple sites. In such a system, an execution of transactions is correct if it is equivalent to a serial execution of those transactions on a one-copy database. We show that in any serializable execution, if all transactions see the failures and recoveries of data item copies in a consistent order, then the execution is correct. We model this condition using a modified type of serialization graph, and show that if this graph is acyclic then the corresponding execution is correct. We demonstrate the value of this model by using it to prove the correctness of an algorithm for synchronizing access to a replicated database.
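
The correctness test reduces to checking that the modified serialization graph is acyclic. A minimal sketch of that check is given below; the example edge sets, including the edge meant to represent the order in which transactions observed a copy failure and recovery, are invented for illustration, and constructing the real graph is of course the substance of the paper.

```python
# Acyclicity test for a serialization graph: nodes are transactions, edges
# record conflict order (here also edges induced by the order in which
# transactions saw copy failures and recoveries). The execution is accepted
# only if the graph has no cycle.

def is_acyclic(nodes, edges):
    """Kahn's algorithm: the graph is acyclic iff every node can be removed."""
    indegree = {n: 0 for n in nodes}
    for _, dst in edges:
        indegree[dst] += 1
    ready = [n for n, d in indegree.items() if d == 0]
    removed = 0
    while ready:
        n = ready.pop()
        removed += 1
        for src, dst in edges:
            if src == n:
                indegree[dst] -= 1
                if indegree[dst] == 0:
                    ready.append(dst)
    return removed == len(nodes)

transactions = ["T1", "T2", "T3"]
conflict_edges = [("T1", "T2"), ("T2", "T3")]      # read/write conflict order
failure_view_edges = [("T1", "T3")]                # T1 saw copy A fail before T3 recovered it
print(is_acyclic(transactions, conflict_edges + failure_view_edges))   # True: accepted
print(is_acyclic(transactions, conflict_edges + [("T3", "T1")]))       # False: cycle, not 1SR
```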

17 citations


Proceedings Article
01 Sep 1987
TL;DR: This work proposes an integrated controller for handling both global concurrency control and coherency control of local buffers in each system, and shows that this leads to a significant performance gain.
Abstract: In a multi-system data sharing complex, the systems have direct access to all data, with sharing typically at the disk level. This necessitates global concurrency control and coherency control of the local buffers in each system. We propose an integrated controller for handling both global concurrency and coherency control, and show that this leads to a significant performance gain. The multi-system performance can be enhanced by use of an intermediate shared semiconductor memory. This gives rise to additional read-write synchronization and disk write serialization problems. We show these can be handled efficiently by the integrated controller, while allowing for early transaction commit. Significant transaction speedup and reduction in lock contention among transactions are obtained. The decrease in lock contention allows the multiple systems to sustain a higher transaction throughput. A queueing model is used to quantify the performance improvement. Although intermediate memory can be employed as a buffering device, our analysis shows that substantial performance gains can be realized when it is combined with the integrated concurrency-coherency control.
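
One way to picture the "integrated" controller is a single global lock manager that answers the coherency question in the same round trip as the lock grant. The Python sketch below is a much-simplified assumption (exclusive locks only, per-page version counters, invented class and method names); it is not the controller analyzed in the paper, but it shows how lock grants and buffer-validity information can be combined.

```python
# Hedged sketch of combining global concurrency control with buffer coherency:
# the controller grants a lock and, in the same reply, tells the requesting
# system whether its locally buffered copy of the page is still current.
class IntegratedController:
    def __init__(self):
        self.lock_owner = {}     # page -> system currently holding an X lock
        self.version = {}        # page -> latest committed version number

    def acquire(self, system_id, page, cached_version):
        """Grant an exclusive lock; piggyback the buffer-validity answer."""
        if self.lock_owner.get(page) not in (None, system_id):
            return {"granted": False}
        self.lock_owner[page] = system_id
        current = self.version.get(page, 0)
        return {"granted": True,
                "buffer_valid": cached_version == current,   # coherency answer
                "current_version": current}

    def release(self, system_id, page, wrote=False):
        if wrote:
            self.version[page] = self.version.get(page, 0) + 1
        if self.lock_owner.get(page) == system_id:
            del self.lock_owner[page]

ctl = IntegratedController()
print(ctl.acquire("sysA", "page7", cached_version=0))   # granted, buffer valid
ctl.release("sysA", "page7", wrote=True)
print(ctl.acquire("sysB", "page7", cached_version=0))   # granted, but buffer stale
```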

14 citations


Proceedings Article
01 Sep 1987
TL;DR: This paper reports on the first actual implementation of Bayer's Time Interval concurrency control method and compares its performance to that of a conventional timestamp method.
Abstract: This paper reports on an implementation of Bayer's Time Interval concurrency control method and compares it to the performance of a conventional timestamp method. The implementation was done on the Eden experimental local area network. Insofar as the authors are aware, this is the first actual implementation of the time interval technique. The time interval approach is clearly better than timestamping: it provides higher throughput, causes one-third as many distributed transaction aborts, and requires very little additional overhead compared to timestamps. Within the time interval method we further explored and compared the early and late serialization schemes described by Bayer and his colleagues. Early and late serialization with time intervals show comparable performance over a range of read/write ratios and multiprogramming levels. In systems that write to disk at the end of all alterations, rather than writing incrementally, late serialization performs better than early serialization because checkpointing to disk can run in parallel with the concurrency control phase.

1 Motivation For This Study

There has been a great deal of interest in the performance of concurrency control algorithms in the literature in recent years [1, 7, 9, 10, 11, 13, 17, 20]. Most of these studies were either simulation-based or analytical in nature, although some used a combination of the two approaches. To the best of our knowledge, there have been no comparative implementations of distributed concurrency control methods. The comparison of actual implementations of concurrency control methods is desirable because modeling and simulation studies generally do not include enough detail to exhibit the effects of finite processing limitations (CPU, disk, and network) on performance. (Three notable exceptions to this are the papers by Agrawal and Carey [1], Carey and Muhanna [7], and Sevcik [20].) Unless the system components are heavily underutilized, this factor should have a noticeable effect on performance whenever a large percentage of the work done by transactions is wasted work, i.e. work which is spent on transaction attempts that ultimately abort. In fact, Agrawal and Carey [1] postulated that contradictory results obtained by different researchers comparing the same algorithms are caused by the inclusion of finite processing resources in the model by some and not by others. Their simulation study included these factors and bore out their hypothesis. The current study reports measurements on the first known implementation of the Time Interval method proposed by Bayer et al. [3]. The implementation was carried out on the Eden local-area network, an object-oriented, experimental distributed system [2].

2 Description of the Research

2.1 Choice of Protocols

The Time Interval method was an outgrowth of the RAC protocol [4], which took advantage of the "before" and "after" images used by transaction systems for recovery purposes.
RAC was a lock-based protocol that allowed multiple readers, using the old image, even during preparation of the new image by another (single) writer. This meant, of course, that the updating transaction had to be serialized to follow the commitment of the read-only transactions. However, this allowed more concurrency due to the one-writer, multiple-reader compatibility. It turns out that, in order to guarantee the correctness of the RAC protocol, both the "before" and "after" images of an object may be needed by the system for some period of time, even after the new image has been successfully committed. (The criteria for image deletion are explained briefly in Section 3.2.2; for more details refer to [4].) Bayer points out that
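
The one-writer / multiple-reader behaviour attributed to RAC above can be pictured with a small before/after-image object. The Python sketch below is a loose illustration under simplifying assumptions (a single item, a non-blocking writer lock, no recovery or image-deletion rules); it is not the RAC protocol itself.

```python
# Hedged sketch of the before/after-image idea: readers keep using the
# committed "before" image while a single writer prepares an "after" image,
# and the update is serialized after those readers when it commits.
import threading

class VersionedItem:
    def __init__(self, value):
        self.before = value          # committed image, visible to readers
        self.after = None            # image being prepared by the single writer
        self._writer = threading.Lock()

    def read(self):
        return self.before           # readers never block on the writer

    def begin_write(self, new_value):
        if not self._writer.acquire(blocking=False):
            raise RuntimeError("only one writer at a time")
        self.after = new_value

    def commit_write(self):
        # The update follows every read of the old image that completed before
        # this point; the old image may still be kept for a while for recovery
        # (compare the image-deletion discussion referenced above).
        self.before, self.after = self.after, None
        self._writer.release()

item = VersionedItem(10)
item.begin_write(11)
print(item.read())    # 10: readers still see the old image during the update
item.commit_write()
print(item.read())    # 11
```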

12 citations