
Showing papers in "ACM Computing Surveys in 1990"


Journal ArticleDOI
TL;DR: The state machine approach is a general method for implementing fault-tolerant services in distributed systems, and protocols for two different failure models, Byzantine and fail stop, are described.
Abstract: The state machine approach is a general method for implementing fault-tolerant services in distributed systems. This paper reviews the approach and describes protocols for two different failure models—Byzantine and fail stop. Reconfiguration techniques for removing faulty components and integrating repaired components are also discussed.
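The core idea of the abstract can be sketched in a few lines: replicas stay consistent as long as each is a deterministic state machine and all process the same commands in the same order. The counter state and command names below are invented for illustration; establishing the agreed-upon order is exactly what the paper's agreement and order protocols provide.

```python
class CounterReplica:
    """A deterministic state machine: state changes only via apply()."""
    def __init__(self):
        self.value = 0

    def apply(self, command):
        op, arg = command
        if op == "add":
            self.value += arg
        elif op == "set":
            self.value = arg

# An agreed-upon total order of client commands (what the Byzantine and
# fail-stop protocols in the paper are responsible for establishing).
log = [("set", 10), ("add", 5), ("add", -3)]

replicas = [CounterReplica() for _ in range(3)]
for r in replicas:
    for cmd in log:
        r.apply(cmd)

# Every non-faulty replica reaches the same state.
assert len({r.value for r in replicas}) == 1
print(replicas[0].value)  # 12
```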

2,559 citations


Journal ArticleDOI
Amit P. Sheth, James A. Larson
TL;DR: In this paper, the authors define a reference architecture for distributed database management systems from system and schema viewpoints and show how various FDBS architectures can be developed, and define a methodology for developing one of the popular architectures of an FDBS.
Abstract: A federated database system (FDBS) is a collection of cooperating database systems that are autonomous and possibly heterogeneous. In this paper, we define a reference architecture for distributed database management systems from system and schema viewpoints and show how various FDBS architectures can be developed. We then define a methodology for developing one of the popular architectures of an FDBS. Finally, we discuss critical issues related to developing and operating an FDBS.
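A toy sketch of the schema-level mappings the reference architecture is built from (all names and data here are invented): two autonomous, heterogeneous component databases each expose an export schema, and a federated schema presents a single unified view over them.

```python
# Component database A: relational-style rows.
db_a = [{"emp_id": 1, "name": "Ada", "salary": 50000}]
# Component database B: differently structured, different pay period.
db_b = {"2": {"full_name": "Bob", "monthly_pay": 3000}}

def export_a():
    # Export schema for A: the subset A shares with the federation.
    return [{"id": str(r["emp_id"]), "name": r["name"], "annual_pay": r["salary"]}
            for r in db_a]

def export_b():
    # Export schema for B: resolve heterogeneity (key shape, pay period).
    return [{"id": k, "name": v["full_name"], "annual_pay": v["monthly_pay"] * 12}
            for k, v in db_b.items()]

def federated_employees():
    # Federated schema: a unified view over the export schemas, while
    # each component database remains autonomous.
    return sorted(export_a() + export_b(), key=lambda r: r["id"])

for row in federated_employees():
    print(row["name"], row["annual_pay"])
```

The key design point the abstract makes is visible even at this scale: integration happens through mappings between schema levels, not by migrating the autonomous databases into one global store.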

2,376 citations


Journal ArticleDOI
TL;DR: This work provides a common terminology and collection of mechanisms that underlie any approach for representing engineering design information in a database, and proposes a single framework, based on these mechanisms, which can be tailored for the needs of a given version environment.
Abstract: Support for unusual applications such as computer-aided design data has been of increasing interest to database system architects. In this survey, we concentrate on one aspect of such support, namely, version modeling. By this, we mean the concepts suitable for structuring a database of complex engineering artifacts that evolve across multiple representations and over time and the operations through which such artifact descriptions are created and modified. There have been many proposals for new models and mechanisms to support such concepts within database data models in general and engineering data models in particular; here we not only describe such proposals; we also unify them. We do not propose yet another model but provide a common terminology and collection of mechanisms that underlie any approach for representing engineering design information in a database. The key remaining challenge is to construct a single framework, based on these mechanisms, which can be tailored for the needs of a given version environment.
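A minimal version-graph sketch of the structuring concepts the survey unifies (the data model here is invented): each version of a design object records its parents, which supports linear revision, branching into alternatives, and merging.

```python
class VersionedObject:
    def __init__(self):
        self.versions = {}   # version id -> (data, parent ids)
        self.next_id = 0

    def checkin(self, data, parents=()):
        vid = self.next_id
        self.next_id += 1
        self.versions[vid] = (data, tuple(parents))
        return vid

    def ancestors(self, vid):
        # Walk parent links to recover a version's derivation history.
        seen = set()
        stack = [vid]
        while stack:
            v = stack.pop()
            for p in self.versions[v][1]:
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen

cell = VersionedObject()
v0 = cell.checkin("layout rev 0")
v1 = cell.checkin("layout rev 1", parents=[v0])       # linear revision
v2 = cell.checkin("alternative layout", parents=[v0]) # branch
v3 = cell.checkin("merged layout", parents=[v1, v2])  # merge
print(sorted(cell.ancestors(v3)))  # [0, 1, 2]
```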

535 citations


Journal ArticleDOI
TL;DR: A new approach to solving data management problems, called multidatabase or federated systems, is argued for: such systems make databases interoperable, that is, usable without a globally integrated schema.
Abstract: Database systems were a solution to the problem of shared access to heterogeneous files created by multiple autonomous applications in a centralized environment. To make data usage easier, the files were replaced by a globally integrated database. To a large extent, the idea was successful, and many databases are now accessible through local and long-haul networks. Unavoidably, users now need shared access to multiple autonomous databases. The question is what the corresponding methodology should be. Should one reapply the database approach to create globally integrated distributed database systems, or should a new approach be introduced? We argue for a new approach to solving such data management system problems, called multidatabase or federated systems. These systems make databases interoperable, that is, usable without a globally integrated schema. They preserve the autonomy of each database yet support shared access. Systems of this type will be of major importance in the future. This paper first discusses why this is the case. Then, it presents methodologies for their design. It further shows that major commercial relational database systems are evolving toward multidatabase systems. The paper discusses their capabilities and limitations, presents and discusses a set of prototypes, and, finally, presents some current research issues.

463 citations


Journal ArticleDOI
TL;DR: A suitable solution for deciding upon the starting point of a steady-state analysis and two techniques for obtaining the final simulation results to a required level of accuracy are presented, together with pseudocode implementations.
Abstract: For years computer-based stochastic simulation has been a commonly used tool in the performance evaluation of various systems. Unfortunately, the results of simulation studies quite often have little credibility, since they are presented without regard to their random nature and the need for proper statistical analysis of simulation output data. This paper discusses the main factors that can affect the accuracy of stochastic simulations designed to give insight into the steady-state behavior of queuing processes. The problems of correctly starting and stopping such simulation experiments to obtain the required statistical accuracy of the results are addressed. In this survey of possible solutions, the emphasis is put on possible applications in the sequential analysis of output data, which adaptively decides about continuing a simulation experiment until the required accuracy of results is reached. A suitable solution for deciding upon the starting point of a steady-state analysis and two techniques for obtaining the final simulation results to a required level of accuracy are presented, together with pseudocode implementations.
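The two output-analysis steps the paper addresses can be sketched as follows: delete an initial transient (warm-up), then form a confidence interval by the classical batch-means method. The 1.96 normal quantile and the synthetic observation sequence are illustrative, not from the paper.

```python
def batch_means_ci(observations, warmup, n_batches):
    """Discard the warm-up, split the rest into batches, and return the
    grand mean with an approximate 95% confidence-interval half-width."""
    data = observations[warmup:]
    size = len(data) // n_batches
    means = [sum(data[i*size:(i+1)*size]) / size for i in range(n_batches)]
    grand = sum(means) / n_batches
    var = sum((m - grand) ** 2 for m in means) / (n_batches - 1)
    half = 1.96 * (var / n_batches) ** 0.5
    return grand, half

# Synthetic output: a transient ramp followed by fluctuation around 10.
obs = [i for i in range(10)] + [10 + (-1) ** i for i in range(100)]
mean, half = batch_means_ci(obs, warmup=10, n_batches=10)
print(round(mean, 2), half < 1.0)  # 10.0 True
```

A sequential procedure in the paper's sense would keep extending the run, recomputing `half / mean`, and stop only once that relative precision falls below the required level.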

285 citations


Journal ArticleDOI
TL;DR: The paper claims that the principle of distributed operation is fundamental for a fault tolerant and scalable DFS design and presents alternatives for the semantics of sharing and methods for providing access to remote files.
Abstract: The purpose of a distributed file system (DFS) is to allow users of physically distributed computers to share data and storage resources by using a common file system. A typical configuration for a DFS is a collection of workstations and mainframes connected by a local area network (LAN). A DFS is implemented as part of the operating system of each of the connected computers. This paper establishes a viewpoint that emphasizes the dispersed structure and decentralization of both data and control in the design of such systems. It defines the concepts of transparency, fault tolerance, and scalability and discusses them in the context of DFSs. The paper claims that the principle of distributed operation is fundamental for a fault-tolerant and scalable DFS design. It also presents alternatives for the semantics of sharing and methods for providing access to remote files. A survey of contemporary UNIX-based systems, namely, UNIX United, Locus, Sprite, Sun's Network File System, and ITC's Andrew, illustrates the concepts and demonstrates various implementations and design alternatives. Based on the assessment of these systems, the paper makes the point that a departure from extending centralized file systems over a communication network is necessary to accomplish sound distributed file system design.

277 citations


Journal ArticleDOI
TL;DR: This paper gives a systematic presentation of the literature related to closed queueing networks with finite queues and the results are significant for both researchers and practitioners.
Abstract: Closed queueing networks are frequently used to model complex service systems such as production systems, communication systems, computer systems, and flexible manufacturing systems. When limitations are imposed on the queue sizes (i.e., finite queues), a phenomenon called blocking occurs. Queueing networks with blocking are, in general, difficult to treat. Exact closed form solutions have been reported only in a few special cases. Hence, most of the techniques that are used to analyze such queueing networks are in the form of approximations, numerical analysis, and simulation. In this paper, we give a systematic presentation of the literature related to closed queueing networks with finite queues. The results are significant for both researchers and practitioners.
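Since exact solutions exist only in special cases, simulation is one of the analysis techniques the survey covers. The toy discrete-time simulation below (all parameters invented) shows the blocking phenomenon itself: in a two-station cyclic network with a fixed customer population, a finite buffer at station B forces station A to hold a finished customer whenever B is full.

```python
import random

def simulate(steps=10_000, n_customers=5, buffer_b=2, p_a=0.6, p_b=0.4, seed=1):
    rng = random.Random(seed)
    queue_a, queue_b = n_customers, 0
    blocked_steps = 0
    for _ in range(steps):
        # Station B completes a service; the customer cycles back to A.
        if queue_b > 0 and rng.random() < p_b:
            queue_b -= 1
            queue_a += 1
        # Station A completes a service; the customer enters B only if
        # B's finite buffer has room, otherwise A is blocked this step.
        if queue_a > 0 and rng.random() < p_a:
            if queue_b < buffer_b:
                queue_a -= 1
                queue_b += 1
            else:
                blocked_steps += 1
        assert queue_a + queue_b == n_customers  # population is conserved
    return blocked_steps

print(simulate() > 0)  # blocking occurs when A's rate outpaces B's
```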

203 citations


Journal ArticleDOI
TL;DR: This paper outlines approaches to various aspects of heterogeneous distributed data management and describes the characteristics and architectures of seven existingheterogeneous distributed database systems developed for production use.
Abstract: It is increasingly important for organizations to achieve additional coordination of diverse computerized operations. To do so, it is necessary to have database systems that can operate over a distributed network and can encompass a heterogeneous mix of computers, operating systems, communications links, and local database management systems. This paper outlines approaches to various aspects of heterogeneous distributed data management and describes the characteristics and architectures of seven existing heterogeneous distributed database systems developed for production use. The objective is a survey of the state of the art in systems targeted for production environments as opposed to research prototypes.

170 citations


Journal ArticleDOI
TL;DR: Instead of the traditional ad-hoc approach toward developing memory test algorithms, a hierarchy of functional faults and tests is presented, which is shown to cover all likely functional memory faults.
Abstract: This paper presents an overview of deterministic functional RAM chip testing. Instead of the traditional ad-hoc approach toward developing memory test algorithms, a hierarchy of functional faults and tests is presented, which is shown to cover all likely functional memory faults. This is done by presenting a novel way of categorizing the faults. All (possible) fault combinations are discussed. The conditions under which a fault combination can be detected are put forward. Finally, memory test algorithms that satisfy the given requirements are presented.
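A small sketch of what a deterministic functional test looks like: the MATS+ march test (write 0 everywhere; ascending read 0, write 1; descending read 1, write 0) run against a simple RAM model with an injectable stuck-at fault. The RAM model and fault-injection mechanism are invented for illustration.

```python
class RAM:
    def __init__(self, size, stuck_at=None):
        self.cells = [0] * size
        self.stuck_at = stuck_at  # optional (address, forced value)

    def write(self, addr, val):
        self.cells[addr] = val
        if self.stuck_at and self.stuck_at[0] == addr:
            self.cells[addr] = self.stuck_at[1]  # faulty cell ignores writes

    def read(self, addr):
        return self.cells[addr]

def mats_plus(ram, size):
    for a in range(size):                 # march element: write 0 everywhere
        ram.write(a, 0)
    for a in range(size):                 # ascending: read 0, write 1
        if ram.read(a) != 0:
            return False
        ram.write(a, 1)
    for a in reversed(range(size)):       # descending: read 1, write 0
        if ram.read(a) != 1:
            return False
        ram.write(a, 0)
    return True

print(mats_plus(RAM(8), 8))                   # True: fault-free memory passes
print(mats_plus(RAM(8, stuck_at=(3, 0)), 8))  # False: stuck-at-0 is detected
```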

111 citations


Journal ArticleDOI
TL;DR: The current state of the art of system reliability, safety, and fault tolerance is reviewed, and an approach to designing resourceful systems based upon a functionally rich architecture and an explicit goal orientation is developed.
Abstract: Above all, it is vital to recognize that completely guaranteed behavior is impossible and that there are inherent risks in relying on computer systems in critical environments. The unforeseen consequences are often the most disastrous [Neumann 1986]. Section 1 of this survey reviews the current state of the art of system reliability, safety, and fault tolerance. The emphasis is on the contribution of software to these areas. Section 2 reviews current approaches to software fault tolerance. It discusses why some of the assumptions underlying hardware fault tolerance do not hold for software. It argues that the current software fault tolerance techniques are more accurately thought of as delayed debugging than as fault tolerance. It goes on to show that in providing both backtracking and executable specifications, logic programming offers most of the tools currently used in software fault tolerance. Section 3 presents a generalization of the recovery block approach to software fault tolerance, called resourceful systems. Systems are resourceful if they are able to determine whether they have achieved their goals or, if not, to develop and carry out alternate plans. Section 3 develops an approach to designing resourceful systems based upon a functionally rich architecture and an explicit goal orientation.
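The recovery block scheme that Section 3 generalizes can be sketched as: run a primary routine, check its result with an acceptance test, and fall back to an alternate (from a restored state) if the test fails. The sort routines and the deliberate bug below are invented for illustration.

```python
def recovery_block(state, alternates, acceptance_test):
    for routine in alternates:
        checkpoint = list(state)        # restore state before each attempt
        result = routine(checkpoint)
        if acceptance_test(result):
            return result               # first acceptable result wins
    raise RuntimeError("all alternates failed the acceptance test")

def buggy_sort(xs):                     # primary: drops an element (a bug)
    return sorted(xs)[:-1]

def simple_sort(xs):                    # alternate: slower but independent
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

data = [3, 1, 2]
is_sorted_perm = lambda r: len(r) == len(data) and r == sorted(data)
print(recovery_block(data, [buggy_sort, simple_sort], is_sorted_perm))  # [1, 2, 3]
```

A resourceful system, in the paper's sense, goes further than this fixed list of alternates: it checks goal achievement and constructs alternate plans rather than merely selecting pre-written ones.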

93 citations