
Showing papers on "Serialization published in 1989"


Patent
18 Aug 1989
TL;DR: A fault-tolerant memory system (FTMS) as discussed by the authors uses a dedicated microprocessor-controlled computer system which serializes blocks of user data as they are received from the host system, deserializes those blocks when they are returned to the host system, implements an error correction code system for the user data blocks, scrubs the data stored in the user memory, remaps data block storage locations within the user memory, and performs host computer interface operations.
Abstract: A fault-tolerant memory system or "FTMS" is intended for use as mass data storage for a host computer system. The FTMS incorporates a dedicated microprocessor-controlled computer system which serializes blocks of user data as they are received from the host system, deserializes those blocks when they are returned to the host system, implements an error correction code system for the user data blocks, scrubs the data stored in the user memory, remaps data block storage locations within the user memory as initial storage locations therein acquire too many hard errors for error correction to be effected with the stored error correction data, and performs host computer interface operations. Data in the FTMS is not bit-addressable. Instead, serialization of the user data permits bytes to be stored sequentially within the user memory much as they would be stored on a hard disk, with bytes being aligned in the predominant direction of serial bit failure within the off-spec DRAM devices. Such a data storage method facilitates error correction capability.
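The storage scheme can be sketched in miniature (function names and the parity code here are invented for illustration; the patent does not specify its error correction code in these terms): user bytes are serialized into sequential cells, each paired with a check bit, and a scrub pass flags cells whose check no longer matches so they can be remapped.

```python
# Toy sketch of byte-serial storage with per-byte parity scrubbing.
# All names are illustrative; the patent's actual ECC is richer.

def parity(byte: int) -> int:
    """Even-parity bit for one byte."""
    return bin(byte).count("1") & 1

def serialize_block(block: bytes) -> list:
    """Store bytes sequentially, each with its parity bit."""
    return [(b, parity(b)) for b in block]

def deserialize_block(cells: list) -> bytes:
    return bytes(b for b, _ in cells)

def scrub(cells: list) -> list:
    """Return indices of cells whose stored parity no longer matches."""
    return [i for i, (b, p) in enumerate(cells) if parity(b) != p]

cells = serialize_block(b"user data")
assert deserialize_block(cells) == b"user data"
# Simulate a single-bit hard error in cell 2:
b, p = cells[2]
cells[2] = (b ^ 0x01, p)
bad = scrub(cells)   # the scrub pass flags cell 2 for remapping
```

A real scrubber would correct single-bit errors rather than merely detect them; the sketch only shows where serialization makes sequential, per-byte checking natural.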

176 citations


Patent
David Bernstein1, Kimming So1
29 Jun 1989
TL;DR: The serialization debugging facility as mentioned in this paper allows the programmer to select parallel sections of the program for single-processor execution in order to locate errors in the program.
Abstract: A serialization debugging facility operates by assisting the computer programmer in the selection of parallel sections of the parallel program for single processor execution in order to locate errors in the program. Information is collected regarding parallel constructs in the source program. This information is used to establish program structure and to locate sections of the program in which parallel constructs are contained. Program structure and the locations of parallel constructs within a program are then displayed as a tree graph. Viewing this display, a programmer selects parallel sections for serialization. Object code for the program is then generated in accordance with the serialization instructions entered by the programmer. Once executed, the programmer can compare the results of execution of parallel sections of the program in a single processor and a multiprocessor environment. A difference between the results obtained in the two environments indicates a parallel programming error, which can then be corrected by the programmer. The programmer can repeat these steps, each time selecting different sections of the program for serialization. In this way, erroneous sections of the program can be localized and identified.
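A minimal harness in the spirit of this facility (function names are invented) runs a selected section both serially and in several threads, then compares the results; a mismatch would localize a parallel bug to that section.

```python
import threading

def run_serial(section, chunks):
    """Execute a section's chunks one after another, one 'processor'."""
    return [section(c) for c in chunks]

def run_parallel(section, chunks):
    """Execute the same chunks concurrently in threads."""
    results = [None] * len(chunks)
    def work(i, c):
        results[i] = section(c)
    threads = [threading.Thread(target=work, args=(i, c))
               for i, c in enumerate(chunks)]
    for t in threads: t.start()
    for t in threads: t.join()
    return results

def section_agrees(section, chunks):
    """The debugging step: does the parallel result match the
    serialized execution of the same section?"""
    return run_serial(section, chunks) == run_parallel(section, chunks)

# A pure (race-free) section produces the same result either way:
ok = section_agrees(lambda c: sum(c), [[1, 2], [3, 4], [5, 6]])
```

A section with a data race would, on some runs, return `False` here, which is exactly the signal the facility uses to steer the programmer toward the erroneous section.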

39 citations


Patent
29 Sep 1989
TL;DR: In this article, a data processor with a serialization attribute on a page basis is described, where a set of page descriptors and transparent translation registers encode the serialization attributes as a cache mode.
Abstract: A data processor having a serialization attribute on a page basis is provided. A set of page descriptors and transparent translation registers encode the serialization attribute as a cache mode. The data processor is a pipelined machine, having at least two function units, which operate independently of each other. The function units issue requests, for access to information stored in an external memory, to an access controller. The access controller serves as an arbitration mechanism, and grants the requests of the function units in accordance with the issuance order of the requests by the function units. When a memory access is marked serialized in the page descriptor, the access controller postpones the serialized access until the completion of all pending memory accesses in the instruction sequence. All pending requests are then completed in a predetermined order, independent of the issuance order of the requests made by the function units, and all appropriate exception processing is completed. The postponed serialized access is then completed.
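The arbitration rule can be modeled as a toy (all names invented): accesses normally remain in flight in issuance order, but an access to a page marked serialized must wait until every pending access has completed.

```python
from collections import deque

def grant_order(requests):
    """Toy model of the access controller.
    requests: (unit, mode) pairs in issuance order.
    Returns the order in which the accesses complete."""
    completed, pending = [], deque()
    for unit, mode in requests:
        if mode == "serialized":
            while pending:                  # drain all pending accesses
                completed.append(pending.popleft())
            completed.append((unit, mode))  # then perform this access
        else:
            pending.append((unit, mode))
    while pending:
        completed.append(pending.popleft())
    return completed

order = grant_order([("U1", "normal"), ("U2", "normal"),
                     ("U1", "serialized"), ("U2", "normal")])
```

The model ignores exception processing and the predetermined completion order of pending requests; it only shows the postponement of the serialized access behind everything already outstanding.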

34 citations


Journal ArticleDOI
TL;DR: The approach has three characteristics: it utilizes the semantics of an application to improve concurrency, it reduces the complexity of application-dependent synchronization code by analyzing the process of writing it, and it hides the protocol used to arrive at a serialization order from the applications.
Abstract: In this paper we describe an approach to implementing atomicity. Atomicity requires that computations appear to be all-or-nothing and executed in a serialization order. The approach we describe has three characteristics. First, it utilizes the semantics of an application to improve concurrency. Second, it reduces the complexity of application-dependent synchronization code by analyzing the process of writing it. Third, our approach hides the protocol used to arrive at a serialization order from the applications. As a result, different protocols can be used without affecting the applications. Our approach uses a history abstraction. The history captures the ordering relationship among concurrent computations. By determining what types of computations exist in the history and their parameters, a computation can determine whether it can proceed.
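The history idea can be illustrated with the classic bank-account example (the class and method names are invented here, not the paper's interface): an operation inspects the types and parameters of pending operations in the history to decide, from application semantics, whether it may proceed.

```python
# Illustrative sketch of semantics-based concurrency via a history:
# deposits commute with everything, so they always proceed; a
# withdrawal proceeds only if it would succeed even after every
# pending withdrawal in the history also completes.

class Account:
    def __init__(self, balance):
        self.balance = balance
        self.history = []            # pending (op, amount) pairs

    def deposit(self, amount):
        self.history.append(("deposit", amount))
        return True                  # commutes: always allowed

    def withdraw(self, amount):
        pending = sum(a for op, a in self.history if op == "withdraw")
        # Pending deposits are conservatively ignored: they might abort.
        if self.balance - pending >= amount:
            self.history.append(("withdraw", amount))
            return True
        return False

acct = Account(100)
acct.deposit(50)          # always proceeds
ok1 = acct.withdraw(80)   # 100 - 0  >= 80 -> proceeds
ok2 = acct.withdraw(30)   # 100 - 80 <  30 -> refused for now
```

Two deposits never need to be ordered against each other, so no serialization protocol is consulted for them; that separation of "what commutes" from "how an order is chosen" is the point of hiding the protocol behind the history.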

28 citations


Journal ArticleDOI
01 Sep 1989
TL;DR: A proposed concurrent execution strategy for applicable productions, which outperforms the traditional sequential OPS5 production execution algorithm and is attractive for implementation in parallel computing environments.
Abstract: In this paper we highlight the basic approach taken in the design of the DIPS system, and briefly present the main contributions. These include the use of special data structures to store rule definitions; they are implemented using relations. A matching algorithm uses these structures to efficiently identify when the antecedents of productions are satisfied, making them applicable for execution. Partial match information stored in the data structures is used by the matching algorithm. We also describe a proposed concurrent execution strategy for applicable productions, which outperforms the traditional sequential OPS5 production execution algorithm. The requirements for a correct, serializable execution, based on locking, are described. An advantage of the matching technique in DIPS is that it is fully parallelizable, which makes it attractive for implementation in parallel computing environments.
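A toy match step in the spirit of the above (the data structures here are invented, far simpler than DIPS's relation-based ones): antecedents are stored as sets, a production becomes applicable when all of its antecedents appear in working memory, and a crude lock-based check admits concurrent firing only for productions whose antecedent sets are disjoint.

```python
# Toy production matching and a disjointness check for concurrent firing.

rules = {
    "r1": {"a", "b"},
    "r2": {"b", "c"},
    "r3": {"d"},
}

def applicable(working_memory, rules):
    """Rules whose antecedents are fully satisfied by working memory."""
    return sorted(name for name, ante in rules.items()
                  if ante <= working_memory)

def conflict_free(selected, rules):
    """Lock-style check: productions may run concurrently only if
    their antecedent sets are pairwise disjoint."""
    seen = set()
    for name in selected:
        if rules[name] & seen:
            return False
        seen |= rules[name]
    return True

wm = {"a", "b", "c"}
ready = applicable(wm, rules)    # r1 and r2 match this working memory
```

Real serializable execution needs locking on the elements a production reads *and* writes; the disjointness test above only gestures at why some applicable productions cannot fire together.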

27 citations


Journal ArticleDOI
TL;DR: It is shown that the OSN method provides more concurrency than basic timestamp ordering and two-phase locking methods and handles successfully some logs which cannot be handled by any of the past methods.
Abstract: A method for concurrency control in distributed database management systems that increases the level of concurrent execution of transactions, called ordering by serialization numbers (OSN), is proposed. The OSN method works in the certifier model and uses time-interval techniques in conjunction with short-term locks to provide serializability and prevent deadlocks. The scheduler is distributed, and the standard transaction execution policy is assumed, that is, the read and write operations are issued continuously during transaction execution. However, the write operations are copied into the database only when the transaction commits. The amount of concurrency provided by the OSN method is demonstrated by log classification. It is shown that the OSN method provides more concurrency than basic timestamp ordering and two-phase locking methods and handles successfully some logs which cannot be handled by any of the past methods. The complexity analysis of the algorithm indicates that the method works in a reasonable amount of time.
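A drastically simplified certifier (not the OSN algorithm itself, which uses time intervals and short-term locks) shows where serialization numbers enter: each committing transaction is validated against those already committed, and on success receives the next number in the serialization order.

```python
# Simplified backward-validation certifier. Only illustrative: OSN
# additionally uses time-interval techniques and deadlock-free locks.

committed = []    # (serialization_number, write_set) of committed txns
next_sn = [1]

def certify(read_set, write_set, start_sn):
    """Commit unless something this transaction read was overwritten
    by a transaction certified at or after its starting number.
    Returns the assigned serialization number, or None on abort."""
    for sn, ws in committed:
        if sn >= start_sn and ws & read_set:
            return None                      # conflict: abort
    sn = next_sn[0]
    next_sn[0] += 1
    committed.append((sn, frozenset(write_set)))
    return sn

t1 = certify({"x"}, {"y"}, start_sn=1)   # nothing committed yet
t2 = certify({"y"}, {"z"}, start_sn=1)   # read y, overwritten by t1
t3 = certify({"y"}, {"z"}, start_sn=2)   # started after t1 committed
```

Writes are applied only at commit, matching the abstract's execution policy; the certifier never blocks readers, which is where the extra concurrency over two-phase locking comes from.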

19 citations


Patent
Aiichiro Inoue1
12 Oct 1989
TL;DR: In this article, a serialization of accesses to main storage in a tightly coupled multi-processing apparatus is described, where the serialization occurs subsequent to a "STORE" instruction in a particular CPU.
Abstract: A system and method for controlling a "serialization" of accesses to main storage in a tightly coupled multi-processing apparatus is disclosed. The system includes a plurality of central processing units (CPUs), a main storage unit commonly shared by the plurality of CPUs and a memory control unit operatively connected to each of the CPUs. A process for ensuring that a correct sequence of accesses to the main storage is followed is called a "serialization." When a serialization occurs subsequent to a "STORE" instruction in a particular CPU, the system for controlling a serialization notifies all other CPUs of the occurrence of the serialization before the particular CPU requests the memory control unit for the serialization. If the particular CPU is not notified of any occurrence of a serialization in the other CPUs, the particular CPU immediately executes the following "FETCH" operation without waiting for completion of the particular CPU's serialization. Even if notified of an occurrence of a serialization in the other CPUs, the particular CPU need only wait for completion of the other CPU's serialization to execute the following "FETCH". When a serialization does not occur in the particular CPU, serialization notifications by the other CPUs are disregarded.
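The decision rule of the patent can be expressed as a pure function (the naming is mine): after its own STORE-then-serialization, a CPU's following FETCH either proceeds immediately or waits only for the other CPUs' serializations, never for its own.

```python
def fetch_action(requested_serialization, notified_by_others):
    """What the following FETCH does on one CPU.

    requested_serialization: this CPU issued a serialization request
    notified_by_others: another CPU announced a serialization
    """
    if not requested_serialization:
        return "proceed"            # foreign notifications disregarded
    if notified_by_others:
        return "wait-for-others"    # wait only on the other CPUs'
                                    # serializations, not its own
    return "proceed"                # no wait for its own serialization

# The three cases spelled out in the abstract:
case_a = fetch_action(True, False)   # no foreign serialization
case_b = fetch_action(True, True)    # foreign serialization announced
case_c = fetch_action(False, True)   # this CPU requested nothing
```

The win over a naive design is `case_a`: the common case proceeds without waiting for the CPU's own serialization to complete.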

12 citations


01 Jan 1989
TL;DR: The investigation and evaluation of different hardware architectures and their suitability to efficiently cope with workloads generated by database operations on complex objects, and different kinds of architectures combining multiple processors: loosely-, tightly-, and closely-coupled are discussed.
Abstract: New database applications, primarily in the areas of engineering and knowledge-based systems, refer to complex objects (e.g. representation of a CAD workpiece or a VLSI chip) while performing their tasks. Retrieval, maintenance, and integrity checking of such complex objects consume substantial computing resources which were traditionally used by conventional database management systems in a sequential manner. Rigid performance goals dictated by interactive use and design environments imply new approaches to master the functionality of complex objects under satisfactory time restrictions. Because of the object granularity, the set orientation of the database interface, and the complicated algorithms for object handling, the exploitation of parallelism within such operations seems to be promising. Our main goal is the investigation and evaluation of different hardware architectures and their suitability to efficiently cope with workloads generated by database operations on complex objects. Apparently, employing just a number of processors is not a panacea for our database problem. The sheer horsepower of machines does not help very much when data synchronization and event serialization requirements play a major role during object handling. What are the critical hardware architecture properties? How can the existing MIPS be best utilized for the data management functions when processing complex objects? To answer these questions and related issues, we discuss different kinds of architectures combining multiple processors: loosely-, tightly-, and closely-coupled. Furthermore, we consider parallelism at different levels of abstraction: the distribution of (sub-)queries or the decomposition of such queries and their concurrent evaluation at an inter- or intra-object level. Finally, we give some thoughts as to the problems of load control and transaction management.

12 citations


Book ChapterDOI
01 Jun 1989
TL;DR: This paper presents a transitive closure algorithm that combines the possibility of working with a reasonable amount of memory space without creating extra Inputs/Outputs, the use of on-disk clustering accomplished by double hashing, and the parallelization of the transitive closure operation.
Abstract: The magnitude of the performance problem posed by the evaluation of recursive queries leads one to consider parallel execution strategies for the transitive closure operation. Such strategies constitute one of the keys to efficiency in a very large data base environment. In this paper we present a transitive closure algorithm. The innovative aspects of this algorithm concern: 1) the possibility of working with a reasonable amount of memory space without creating extra Inputs/Outputs; 2) the use of on-disk clustering accomplished by double hashing; and 3) the parallelization of the transitive closure operation. The processing time is reduced by a factor of p, where p is the number of processors allocated for the operation. Communication times remain limited; a cyclic organization eliminates the need for serialization of transfers. The evaluation shows the importance of the benefits of a parallel transitive closure execution.
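The iteration being parallelized is the standard semi-naive transitive closure, sketched sequentially below; the paper's contributions (on-disk double hashing and the cyclic, serialization-free transfer organization) are omitted here.

```python
def transitive_closure(edges):
    """Semi-naive evaluation: at each round, join only the newly
    derived pairs (the delta) against the base edge relation."""
    closure = set(edges)
    delta = set(edges)
    while delta:
        new = {(a, d) for (a, b) in delta
                      for (c, d) in edges if b == c}
        delta = new - closure        # keep only genuinely new pairs
        closure |= delta
    return closure

tc = transitive_closure({(1, 2), (2, 3), (3, 4)})
```

Hash-partitioning the tuples by join attribute lets p processors each run this loop on a fragment, which is roughly where the factor-of-p speedup in the abstract comes from.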

8 citations


03 Jan 1989
TL;DR: This thesis shows that a run-time supervisor for an implementation of Ada on highly parallel machines can be written which is free of costly serialization points, and reduces the overhead of Ada tasking by means of micro-tasking.
Abstract: The programming language Ada is primarily intended for the construction of large-scale and real-time systems. Although the tasking model of Ada was aimed mainly at embedded systems, its rich set of synchronization operators together with its support for programming in the large, make Ada increasingly attractive for writing inherently parallel, computationally intensive, numeric and symbolic applications. Highly parallel shared-memory MIMD machines such as the NYU Ultracomputer have traditionally been regarded as suitable for large-scale scientific code, and not for more symbolic or heterogeneous concurrent applications such as are found in Artificial Intelligence or real-time programming. However, these applications would benefit greatly from (and even require) the computational power provided by highly parallel machines. It is therefore desirable to develop Ada implementations for highly parallel machines. The concern has been that the cost of managing large numbers of Ada tasks will negate the speedup obtained from their parallel execution. Indeed, a run-time supervisor for Ada must contend with many potentially expensive serialization points, that is to say, constructs that may take time proportional to the number of tasks involved. In this thesis we show that a run-time supervisor for an implementation of Ada on highly parallel machines can be written which is free of costly serialization points. The run-time supervisor SMARTS (Shared-memory Multiprocessor Ada Run Time Supervisor) depends on the hardware synchronization primitive $fetch\&\Phi$, and supports the tasking features of Ada in a highly parallel manner. We further reduce the overhead of Ada tasking by means of micro-tasking, i.e., the explicit scheduling of a family of Ada tasks on a specified number of processors. Thus, Ada tasks are implemented as lightweight processes managed by SMARTS, rather than full-blown operating-system processes.
Finally, SMARTS implements Ada shared variables efficiently by means of relay sets. Relay sets not only provide a means for identifying and resolving references to shared variables, but also facilitate the implementation of the Ada rendezvous mechanism as a remote procedure call.
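The primitive the thesis relies on, $fetch\&\Phi$, is most familiar as fetch-and-add. The sketch below emulates it with a lock (on the Ultracomputer it is a single combinable hardware operation) to show the pattern that avoids serialization points whose cost grows with the number of tasks: every worker claims a distinct index in one atomic step instead of queueing at a central dispatcher.

```python
import threading

class FetchAndAdd:
    """Lock-based emulation of the fetch-and-add primitive."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def fetch_add(self, n):
        """Atomically return the old value and add n."""
        with self._lock:
            old = self._value
            self._value += n
            return old

# Task grabbing: each of 8 workers claims a distinct slot. With
# hardware fetch&add the claims combine in the network and never
# serialize; the emulation here still funnels through one lock.
counter = FetchAndAdd()
claimed = []

def worker():
    claimed.append(counter.fetch_add(1))

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
```

The same pattern underlies wait-free queue insertion and barrier counting, which is why a supervisor built on it can manage task families without constructs that take time proportional to the task count.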

3 citations


Patent
Aiichiro Inoue1
11 Oct 1989
TL;DR: In this article, when serialization is requested by a particular CPU after a "STORE", the particular CPU notifies the other CPUs of the serialization, and the following fetch is executed without waiting for completion of the serialization.
Abstract: When a serialization is requested by a particular CPU, the situation falls into one of two cases. In the first case, the particular CPU is locked to prevent the main storage access. In the second case, the particular CPU can perform the fetch of the data without waiting for completion of the serialization. When a serialization is requested by a particular CPU after a "STORE", the particular CPU notifies the other CPUs of the serialization. When a serialization is notified from other CPUs to the particular CPU, the particular CPU makes the following fetch wait until the serialization of the other CPUs is completed. If serialization is not notified from other CPUs, the particular CPU immediately executes the following fetch. When the particular CPU does not request serialization, it disregards serialization notifications from other CPUs.

Patent
11 May 1989
TL;DR: In this paper, for a lock instruction accompanied by serialization from instruction processor 1 or 2, when storage controller 3 applies the lock, a block cancel means of storage controller 3 sends a block cancel request to the other instruction processor.
Abstract: PURPOSE: To execute serialization processing at high speed for a lock instruction accompanied by a serialization operation, by providing a means for cancelling a data block in the buffer storage device of the other instruction processor. CONSTITUTION: For a lock instruction accompanied by serialization from instruction processor 1 or 2, when storage controller 3 applies the lock, the block cancel means of storage controller 3 sends a block cancel request to the other instruction processor 1 or 2. As a result, the data block in the buffer storage device of the other instruction processor 1 or 2 is cancelled, and access to main storage device 4 is inhibited by the lock control. In this way, the lock instruction accompanied by serialization can complete without waiting for the end of the store.