
Showing papers on "Distributed database published in 1981"


01 Jan 1981
TL;DR: In this article, an algorithm is presented for reliable storage of data in a distributed system, even when different portions of the database, stored on separate machines, are updated as part of a single transaction.
Abstract: An algorithm is described which guarantees reliable storage of data in a distributed system, even when different portions of the data base, stored on separate machines, are updated as part of a single transaction. The algorithm is implemented by a hierarchy of rather simple abstractions, and it works properly regardless of crashes of the client or servers. Some care is taken to state precisely the assumptions about the physical components of the system (storage, processors and communication).

313 citations
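The paper's careful-storage layer can be illustrated with a toy sketch (the class name `StablePage` and the checksum scheme are hypothetical illustrations, not the paper's design): each page is kept as two mirrored copies with checksums, written one at a time, so a crash during a write can corrupt at most the copy being written.

```python
import zlib

class StablePage:
    """Two mirrored copies with checksums; a crash mid-write can corrupt
    at most the copy currently being written, so the other stays readable."""
    def __init__(self, data=b""):
        self.copies = [self._pack(data), self._pack(data)]

    @staticmethod
    def _pack(data):
        return (zlib.crc32(data), data)

    def write(self, data):
        # Careful write: update the copies one after the other, never both at once.
        self.copies[0] = self._pack(data)
        self.copies[1] = self._pack(data)

    def read(self):
        # Return the first copy whose checksum verifies.
        for crc, data in self.copies:
            if zlib.crc32(data) == crc:
                return data
        raise IOError("both copies corrupted")
```

Reading after a simulated corruption of one copy still returns the last good value, which is the property the higher-level transaction abstractions build on.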


Proceedings ArticleDOI
04 May 1981
TL;DR: The basic architecture of Multibase is described and some of the avenues to be taken in subsequent research are identified, including developing appropriate language constructs for accessing and integrating heterogeneous databases.
Abstract: Multibase is a software system for integrating access to preexisting, heterogeneous, distributed databases. The system suppresses differences of DBMS, language, and data models among the databases and provides users with a unified global schema and a single high-level query language. Autonomy for updating is retained with the local databases. The architecture of Multibase does not require any changes to local databases or DBMSs. There are three principal research goals of the project. The first goal is to develop appropriate language constructs for accessing and integrating heterogeneous databases. The second goal is to discover effective global and local optimization techniques. The final goal is to design methods for handling incompatible data representations and inconsistent data. Currently the project is in the first year of a planned three year effort. This paper describes the basic architecture of Multibase and identifies some of the avenues to be taken in subsequent research.

215 citations


Proceedings ArticleDOI
29 Apr 1981
TL;DR: A careful distinction is made between design decisions concerning communications and design decisions concerning the responses to read/write requests, and two schemes for producing such controls are given.
Abstract: Associated with the write of a database entity is both the "before" or old value, and the "after" or new value. Concurrency can be increased by allowing other transactions to read the before values of a given transaction. The ramifications of allowing this, particularly on a distributed system in which limited communication is desirable, are investigated. A careful distinction is made between design decisions concerning communications and design decisions concerning the responses to read/write requests. Two schemes for producing such controls are given, one scheme for systems where processes are committed on termination, and the other for systems where commitment is made later.

124 citations
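The before-value idea can be sketched in a few lines (all names hypothetical; the paper's actual protocols are more elaborate): a writer's new values stay pending until commit, and concurrent readers always see the committed before-image.

```python
class BeforeValueStore:
    """Readers see the 'before' (committed) value of entities that an
    uncommitted transaction has written; commit installs the 'after' values."""
    def __init__(self):
        self.committed = {}   # entity -> committed value
        self.pending = {}     # txn_id -> {entity: after value}

    def write(self, txn_id, entity, value):
        # Buffer the after-image privately for this transaction.
        self.pending.setdefault(txn_id, {})[entity] = value

    def read(self, entity):
        # Other transactions read the before value, so they never block.
        return self.committed.get(entity)

    def commit(self, txn_id):
        # Atomically install all of this transaction's after-images.
        self.committed.update(self.pending.pop(txn_id, {}))
```

This is exactly the concurrency gain the abstract describes: reads proceed against before values while a write is in flight.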


Proceedings Article
09 Sep 1981
TL;DR: DEREL is a mechanism that can be used to define base relations and to derive different classes of views, snapshots, partitioned and replicated data in a relational database management system.
Abstract: In a relational system, a database is composed of base relations, views, and snapshots. We show that this traditional approach can be extended to different classes of derived relations, and we propose a unified data definition mechanism for centralized and distributed databases. Our mechanism, called DEREL, can be used to define base relations and to derive different classes of views, snapshots, partitioned and replicated data. DEREL is intended to be part of a general purpose distributed relational database management system.

53 citations
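The view/snapshot distinction that DEREL unifies can be illustrated in Python (a loose analogy with hypothetical names, not DEREL's actual syntax): a view is re-evaluated against the base relation on every access, while a snapshot is materialized once.

```python
def make_view(base, predicate):
    """A view is re-evaluated on every read, so it tracks base updates."""
    return lambda: [row for row in base if predicate(row)]

def make_snapshot(base, predicate):
    """A snapshot is materialized at definition time; later base
    updates are not visible through it (until it is refreshed)."""
    frozen = [row for row in base if predicate(row)]
    return lambda: list(frozen)
```

Partitioned and replicated fragments in a distributed database behave like further classes of such derived relations, which is the generalization the paper proposes.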


Proceedings Article
09 Sep 1981
TL;DR: A method is developed which frees read transactions from any consideration of concurrency control; all responsibility for correct synchronization is assigned to the update transactions, which has the great advantage that, in case of conflicts between read transactions and update transactions, no backup is performed.
Abstract: Recently, methods for concurrency control have been proposed which were called "optimistic". These methods do not consider access conflicts when they occur; instead, a transaction always proceeds, and at its end a check is performed whether a conflict has happened. If so, the transaction is backed up. This basic approach is investigated in two directions. First, a method is developed which frees read transactions from any consideration of concurrency control; all responsibility for correct synchronization is assigned to the update transactions. This method has the great advantage that, in case of conflicts between read transactions and update transactions, no backup is performed. Then, the application of optimistic solutions in distributed database systems is discussed, and a solution is presented.

49 citations
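One way to realize "read transactions need no concurrency control" is to let readers work against a version snapshot while updaters validate optimistically at commit. The sketch below is an assumption-laden analogy (class and method names hypothetical), not the paper's algorithm:

```python
class VersionedStore:
    """Updaters validate at commit time (optimistic); read transactions
    just read the version current when they started and never back up."""
    def __init__(self):
        self.data = {}     # key -> list of (version, value)
        self.version = 0   # latest committed version

    def begin_read(self):
        # A read transaction fixes its snapshot point here.
        return self.version

    def read(self, key, snapshot):
        # Return the newest value committed at or before the snapshot.
        for v, val in reversed(self.data.get(key, [])):
            if v <= snapshot:
                return val
        return None

    def commit_update(self, start_version, writes):
        # Coarse validation: fail if anything committed since we started.
        if self.version != start_version:
            return False   # the updater backs up; readers never do
        self.version += 1
        for k, val in writes.items():
            self.data.setdefault(k, []).append((self.version, val))
        return True
```

The validation here is deliberately crude (any intervening commit forces a backup); the point is only that backups are confined to update transactions.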


Book
01 Nov 1981

42 citations


Proceedings ArticleDOI
29 Apr 1981
TL;DR: In this paper, the problem of finding the minimum response-time schedule for executing a given strategy in an m-bus system taking into account local processing and system capacity is investigated.
Abstract: Semijoin strategies are a technique for query processing in distributed database systems. In the past, methodologies for constructing minimum communication-cost strategies for solving tree queries have been developed. These assume point-to-point communication and ignore local processing costs and the limited communication capacity of the system. In this paper, query processing in bus or loop systems is considered. The definition of strategy is extended to allow for broadcast mode of communication. We then address the problem of finding the minimum response-time schedule for executing a given strategy in an m-bus system taking into account local processing and system capacity. It is shown that the problem is computationally intractable for general tree queries, even in a 1-bus system, and for special classes of tree queries in an m-bus system. However, there is a polynomial-time algorithm for simple queries in a 1-bus system.

29 citations
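The semijoin operation itself is simple; the sketch below shows the basic reduction step (the paper's actual subject, response-time scheduling on an m-bus system, is omitted). Names are illustrative:

```python
def semijoin(r, s, attr):
    """R semijoin S: keep the tuples of R whose join-attribute value
    appears in S. Only the projection of S on attr needs to be shipped
    to R's site, which is why semijoins cut communication cost."""
    keys = {row[attr] for row in s}
    return [row for row in r if row[attr] in keys]
```

In a broadcast (bus) system, one transmission of that projection can reduce every other site's relation at once, which is the extension of "strategy" the paper introduces.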




Proceedings Article
01 Jan 1981
TL;DR: In this article, a formal model for atomic commit protocols for distributed database systems is introduced, which is used to prove existence results about resilient protocols for site failures that do not partition the network and then for partitioned networks.
Abstract: A formal model for atomic commit protocols for a distributed database system is introduced. The model is used to prove existence results about resilient protocols for site failures that do not partition the network and then for partitioned networks. For site failures, a pessimistic recovery technique, called independent recovery, is introduced and the class of failures for which resilient protocols exist is identified. For partitioned networks, two cases are studied: the pessimistic case in which messages are lost, and the optimistic case in which no messages are lost. In all cases, fundamental limitations on the resiliency of protocols are derived.

14 citations
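For context, the failure-free skeleton of an atomic commit protocol such as two-phase commit looks as follows; the paper's results concern precisely what happens when sites fail or the network partitions mid-protocol, which this sketch ignores entirely (all names hypothetical):

```python
class Participant:
    """A site holding part of the transaction; votes in phase 1."""
    def __init__(self, vote):
        self.vote, self.state = vote, "active"
    def prepare(self):
        return self.vote
    def commit(self):
        self.state = "committed"
    def abort(self):
        self.state = "aborted"

def two_phase_commit(participants):
    """Commit only if every participant votes yes; otherwise abort all.
    Resilience to failures between the two phases is the hard part."""
    votes = [p.prepare() for p in participants]        # phase 1: voting
    decision = all(votes)
    for p in participants:                             # phase 2: decision
        p.commit() if decision else p.abort()
    return decision
```

The paper's model makes the failure windows in this skeleton precise and proves which failure classes admit resilient (e.g. independently recoverable) variants.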


Proceedings Article
01 Jan 1981
TL;DR: Some of the issues raised in the implementation of a DDBMS by the requirements of site autonomy are discussed from the perspective of the R* research project at IBM's San Jose Research Lab.

Journal ArticleDOI
TL;DR: A quantitative method is presented for evaluating availability in Distributed Database Systems in terms of a flow graph and solution techniques are discussed both for the case in which transition rates are independent of the system state and for the cases in which they depend on it.


Book
01 Jan 1981
TL;DR: This paper limits the discussion to a class of DCSs which have an interconnection of dedicated/shared, programmable, functional PEs working on a set of jobs which may be related or unrelated.
Abstract: The recent advances in large-scale integrated logic and memory technology, coupled with the explosion in size and complexity of the application areas, have led to the design of distributed architectures. Basically, a Distributed Computer System ( DCS ) is considered as an interconnection of digital systems called Processing Elements ( PEs ), each having certain processing capabilities and communicating with each other. This definition encompasses a wide range of configurations, from a uniprocessor system with different functional units to a multiplicity of general-purpose computers (e.g. ARPANET). In general, the notion of "distributed systems" varies in character and scope with different people. So far, there is no accepted definition and basis for classifying these systems. In this paper, we limit our discussion to a class of DCSs which have an interconnection of dedicated/shared, programmable, functional PEs working on a set of jobs which may be related or unrelated.

Proceedings ArticleDOI
29 Apr 1981
TL;DR: A simulation program SPADE (Simulation Program for the Analysis of Database Machines and Environments) is implemented, written to evaluate one database machine proposal, MUFFIN [STON79], but is of sufficient generality to model other homogeneous database machine environments.
Abstract: In this paper we present a first step toward evaluating the performance of database machines. More precisely, we have implemented a simulation program SPADE (Simulation Program for the Analysis of Database Machines and Environments). This program was written to evaluate one database machine proposal, MUFFIN [STON79], but is of sufficient generality to model other homogeneous database machine environments.

01 Jan 1981
TL;DR: A new mechanism for a distributed database management system that allows nonstop operation under a network partition and considers external actions that may be performed in response to transactions on the database system and restricts them in a way that prevents inconsistencies at partition merge.
Abstract: This dissertation proposes a new mechanism for a distributed database management system that allows nonstop operation under a network partition. If the network that supports a distributed database with redundant data becomes partitioned, independent updates may cause inconsistencies to arise. Existing solutions to this problem totally block updates in all but one partition, in which case mutual consistency can be easily obtained upon partition merge by propagating the updates. In systems for which updates are essential, these solutions are often unacceptable because system availability is reduced. The approach proposed allows mutual consistency to be violated in a controlled way such that database reconciliation can be made automatically by the DBMS. In addition to obtaining mutual consistency after partition merge, this approach also considers external actions that may be performed in response to transactions on the database system and restricts them in a way that prevents inconsistencies at partition merge. The method is based on the division of database operations into classes of semantics. Five classes are defined, and a merge algorithm is provided for each class. Strong data types and integrity assertions are used to enforce semantic integrity. The assertions to be enforced may vary dynamically according to the network topology. As the semantic freedom of the operations increases, so does the complexity of the merge algorithms. To demonstrate the feasibility of this approach, we present a case study involving an electronic funds transfer system. For this system, only the two simplest classes of semantics, which involve very little overhead, were necessary to support all the required operations. The mechanism proposed allows normal operation to proceed while the database reconciliation algorithms are executing, and supports partial partition merges.
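For the simplest semantic class in such a scheme, commutative operations (for example, balance increments in a funds-transfer system), partition merge reduces to replaying both partitions' logs in either order. A hypothetical sketch, not the dissertation's actual merge algorithms:

```python
def merge_partitions(initial, log_a, log_b):
    """Merge two partitions' logs of commutative operations (here,
    balance deltas). Because the operations commute, the replay order
    does not matter, so both logs can simply be applied."""
    balance = initial
    for delta in log_a + log_b:
        balance += delta
    return balance
```

Richer semantic classes (operations that do not commute, or that trigger external actions) need correspondingly more complex merge logic, which is the trade-off the dissertation analyzes.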

01 Feb 1981
TL;DR: This report attempts to survey all the published proposals on concurrency control, and develops a taxonomy for the classification of concurrency control techniques for distributed database systems.
Abstract: : One of the most important problems in the design of centralized and distributed database management systems is the problem of concurrency control. Even though many different solutions have been proposed, a unifying theory is still not in sight. This report attempts to survey all the published proposals on concurrency control. In particular, a taxonomy is developed for the classification of concurrency control techniques for distributed database systems. The survey of these twenty-some concurrency control mechanisms is presented in the framework of this taxonomy.

Journal ArticleDOI
G. P. Benincasa1, F. Giudici1, P. Skarek1
TL;DR: One of the design objectives of the new CPS controls system is to permit changing the setting conditions from one machine cycle to the next in order to serve different users with different beam properties.
Abstract: One of the design objectives of the new CPS controls system is to permit changing the setting conditions from one machine cycle to the next in order to serve different users with different beam properties. Cycles may be as short as 650 ms and for the PS Booster (PSB) about 1000 parameters may have to be refreshed each cycle. This task is handled by 20 microprocessor-based Auxiliary CAMAC Crate Controllers (ACC). This layout offers three major possibilities: (i) fast and reliable parameter refreshment in each cycle; (ii) the microprocessors constitute a distributed database, which allows autonomous execution of complicated tasks triggered by simple commands from the process computer; (iii) the microprocessors allow complete decoupling between the severe process real-time constraints and human interaction: asynchronous operator commands are executed in a precise synchronous way with the process.

01 May 1981
TL;DR: In this article, the authors present a model for a distributed data base consisting of two completely distinct levels, a physical level consisting of node processors connected by a message system and communicating with users by ports, and a logical level, consisting of a centralized concurrent application data base.
Abstract: : This report presents a model for a distributed data base consisting of two completely distinct levels--a physical level consisting of node processors connected by a message system and communicating with users by ports, and a logical level consisting of a centralized concurrent application data base. (The logical level does not involve nodes, messages, or any other distribution information.) It is the job of the physical system to implement, in some appropriate sense, the application data base. (Author)

Proceedings Article
09 Sep 1981
TL;DR: The relationship between dependencies of local databases and dependencies appearing in a global view constructed from local relational databases is shown, along with some conditions for a JD of a local database to appear in a global view as an EJD.
Abstract: This paper discusses a constraint integration problem, which occurs in constructing a global view of a distributed database from local relational databases. Each local database has its own semantic constraints, such as functional, join and embedded join dependencies (FDs, JDs and EJDs, respectively). A global view of a distributed database is assumed to be defined by taking a join of these local databases. Some dependency constraints on a local database may be violated on a global view. In this paper, we show (a) the relationship between dependencies of local databases and dependencies appearing in a global view, (b) a testing method for whether an EJD appears in a global view when only FDs and JDs are given for each local database, and (c) some conditions for a JD of a local database to appear in a global view as an EJD.
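A small FD checker run against a joined global view illustrates the kind of test the paper formalizes (toy code with hypothetical names; the paper's methods handle JDs and EJDs, which are harder):

```python
def natural_join(r, s):
    """Natural join of two relations given as lists of dict rows."""
    common = set(r[0]) & set(s[0]) if r and s else set()
    return [{**x, **y} for x in r for y in s
            if all(x[a] == y[a] for a in common)]

def holds_fd(relation, lhs, rhs):
    """Check whether the functional dependency lhs -> rhs holds:
    no two rows may agree on lhs but differ on rhs."""
    seen = {}
    for row in relation:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False
    return True
```

Running such checks on the join of the local databases shows which local constraints survive in the global view and which do not.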



01 Dec 1981
TL;DR: This paper designs models for solving both problems of distributing a database: how the database should be split into components allocated to distinct sites, and how much of the data should be replicated and how the replicated fragments should be allocated.
Abstract: : The distributed information systems area has seen a rapid growth in terms of research interest as well as in terms of practical applications in the past three years. Distributed systems are becoming a reality; however, truly distributed databases are still rare. For a large organization with a distributed computer network, the problem of distributing a database includes determination of: (1) How can the database be split into components to be allocated to distinct sites? and (2) How much of the data should be replicated, and how should the replicated fragments be allocated? In this paper we design models for solving both of the above problems.




Journal ArticleDOI
TL;DR: A quadratic programming model is developed to take into consideration a number of factors that can influence the process of optimal allocation of data among the nodes in a distributed database.
Abstract: In this paper, a quadratic programming model is developed to take into consideration a number of factors that can influence the process of optimal allocation of data among the nodes in a distributed database. The factors include communication costs, translation costs, congestion costs and storage costs. Beale's method is used to solve the resulting quadratic program. Some numerical examples are presented and the potentials of such an approach in the design and analysis of distributed databases are discussed.
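The shape of such a quadratic objective, linear storage costs plus pairwise communication costs that depend on where interacting fragments are placed, can be evaluated as follows. This is a toy cost evaluator with hypothetical names, not Beale's method, which solves the quadratic program rather than merely scoring one allocation:

```python
def allocation_cost(assign, storage, comm, traffic):
    """Cost of assigning fragment i to node assign[i]:
    sum of storage[i][node] (linear term) plus
    traffic[i][j] * comm[assign[i]][assign[j]] over all fragment
    pairs (the quadratic term coupling placement decisions)."""
    n = len(assign)
    cost = sum(storage[i][assign[i]] for i in range(n))
    for i in range(n):
        for j in range(n):
            cost += traffic[i][j] * comm[assign[i]][assign[j]]
    return cost
```

Co-locating fragments that exchange heavy traffic zeroes out their communication term, which is the effect the quadratic formulation captures and a purely linear model cannot.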

Journal ArticleDOI
TL;DR: The paper reviews the main factors which impact on network architecture and describes the special requirements of the distributed database environment which impose stringent constraints on both the DDBMS software and the teleprocessing network.


Journal ArticleDOI
TL;DR: A high speed general purpose distributed computer network controls both the MEA linac and its experiments and a distributed database concept is implemented.
Abstract: A high speed general purpose distributed computer network controls both the MEA linac and its experiments. Fast real-time operation as well as timesharing are supported. A general addressing scheme allows network-wide communication. A distributed database concept is implemented. Control actions operate on a centralised copy of the total accelerator status. Network layout and software approach are outlined, and control operation is described.