
Showing papers in "IEEE Computer in 1991"


Journal Article•DOI•
TL;DR: A queuing-theoretical formulation of the imprecise scheduling problem is presented and workload models that quantify the tradeoff between result quality and computation time are reviewed.
Abstract: The imprecise computation technique, which prevents timing faults and achieves graceful degradation by giving the user an approximate result of acceptable quality whenever the system cannot produce the exact result in time, is considered. Different approaches for scheduling imprecise computations in hard real-time environments are discussed. Workload models that quantify the tradeoff between result quality and computation time are reviewed. Scheduling algorithms that exploit this tradeoff are described. These include algorithms for scheduling to minimize total error, scheduling periodic jobs, and scheduling parallelizable tasks. A queuing-theoretical formulation of the imprecise scheduling problem is presented.
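
To make the workload model concrete, here is a minimal sketch (mine, not the paper's algorithms): each task has a mandatory part that must finish by its deadline and an optional part whose unexecuted portion counts as error; the sketch runs mandatory work earliest-deadline-first and spends any remaining slack on optional work.

```python
# Illustrative sketch of the imprecise-computation workload model (not the
# paper's code): error is the optional computation time left unexecuted.

from dataclasses import dataclass

@dataclass
class Task:
    release: float
    deadline: float
    mandatory: float   # time units that must complete
    optional: float    # time units that only refine the result

def total_error_single_cpu(tasks):
    """Greedy sketch: run mandatory parts earliest-deadline-first, then spend
    any slack before each deadline on optional work; return the total error."""
    tasks = sorted(tasks, key=lambda t: t.deadline)
    now = 0.0
    error = 0.0
    for t in tasks:
        now = max(now, t.release) + t.mandatory   # mandatory part always runs
        slack = max(0.0, t.deadline - now)        # time left before the deadline
        done_optional = min(t.optional, slack)
        now += done_optional
        error += t.optional - done_optional       # discarded optional work
    return error

print(total_error_single_cpu([Task(0, 4, 2, 3), Task(0, 6, 1, 2)]))  # 2.0
```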

582 citations


Journal Article•DOI•
B. Nitzberg1, V. Lo1•
TL;DR: An overview of distributed shared memory issues covers memory coherence, design choices, and implementation methods, and algorithms that support process synchronization and memory management are discussed.
Abstract: An overview of distributed shared memory (DSM) issues is presented. Memory coherence, design choices, and implementation methods are included. The discussion of design choices covers structure and granularity, coherence semantics, scalability, and heterogeneity. Implementation issues concern data location and access, the coherence protocol, replacement strategy, and thrashing. Algorithms that support process synchronization and memory management are discussed.
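
As an illustration of one common design choice covered by such surveys, the toy sketch below (names and structure are mine, not the paper's) shows a centralized manager for a page-based DSM using write-invalidate coherence: a write fault revokes every other copy of the page before the writer proceeds.

```python
# Toy write-invalidate coherence sketch for a page-based DSM (illustrative only).

class DsmManager:
    def __init__(self):
        self.copies = {}   # page -> set of nodes holding a readable copy
        self.owner = {}    # page -> node currently allowed to write

    def read_fault(self, node, page):
        self.copies.setdefault(page, set()).add(node)   # replicate for reading
        return self.owner.get(page)                      # where to fetch the data

    def write_fault(self, node, page):
        for other in self.copies.get(page, set()) - {node}:
            self.invalidate(other, page)                  # revoke stale copies
        self.copies[page] = {node}
        self.owner[page] = node

    def invalidate(self, node, page):
        print(f"invalidate page {page} at node {node}")

mgr = DsmManager()
mgr.read_fault("A", 7)
mgr.read_fault("B", 7)
mgr.write_fault("A", 7)   # invalidates B's copy before A writes
```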

524 citations


Journal Article•DOI•
TL;DR: The partial ordering of events as defined by their causal relationships, that is, the ability of one event to directly, or transitively, affect another, is defined and its generalized and practical implementations in terms of partially ordered logical clocks are described.
Abstract: The partial ordering of events as defined by their causal relationships, that is, the ability of one event to directly, or transitively, affect another, is defined. Its generalized and practical implementations in terms of partially ordered logical clocks are described. Such clocks can provide a decentralized definition of time for distributed computing systems, which lack a common time base. In their full generality, partially ordered logical clocks may be impractically expensive for long-lived computations. Several possible optimizations, depending on the application environment in which the clocks will be used, are described. Some applications are summarized.
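
Partially ordered logical clocks are commonly realized as vector clocks; the sketch below (an illustration, not code from the paper) shows the standard update rules and the component-wise comparison that yields the causal partial order.

```python
# Minimal vector-clock sketch: one counter per process, merged on receive.

def local_event(clock, i):
    clock[i] += 1                      # tick own component on a local/send event

def on_receive(clock, msg_clock, i):
    for j in range(len(clock)):
        clock[j] = max(clock[j], msg_clock[j])   # merge the sender's knowledge
    clock[i] += 1

def happened_before(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b   # a causally precedes b

p0, p1 = [0, 0], [0, 0]
local_event(p0, 0)                 # event e1 on P0
msg = list(p0)                     # P0 sends, piggybacking its clock
on_receive(p1, msg, 1)             # event e2 on P1 receives it
print(happened_before([1, 0], p1)) # True: e1 -> e2
print(happened_before(p1, [1, 0])) # False: not ordered the other way
```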

506 citations


Journal Article•DOI•
TL;DR: A complete framework for enumerating and classifying the types of multidatabase system (MDBS) structural and representational discrepancies is developed and is substantially applicable to heterogeneous database systems that use a nonrelational data model.
Abstract: A complete framework for enumerating and classifying the types of multidatabase system (MDBS) structural and representational discrepancies is developed. The framework is structured according to a relational database schema and is both practical and complete. It was used to build the UniSQL/M commercial multidatabase system. This MDBS was built over Structured-Query-Language-based relational database systems and a unified relational and object-oriented database system named UniSQL/X. However, the results are substantially applicable to heterogeneous database systems that use a nonrelational data model (for example, an object-oriented data model) as the common data model and allow the formulation of queries directly against the component database schemas.
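
A hypothetical example of the kind of representational discrepancy such a framework classifies: two component databases describe the same employee with different attribute names and units, and a mapping layer presents one unified schema over both. The schemas, names, and conversion rate below are illustrative, not from the paper.

```python
# Illustrative only: resolving naming and unit discrepancies behind a unified schema.

db_a_row = {"emp_name": "Lee", "salary_usd": 52000}          # component DB A
db_b_row = {"name": "Lee", "monthly_pay_eur": 4000}           # component DB B

def to_unified(row, source):
    if source == "A":
        return {"name": row["emp_name"], "annual_salary_usd": row["salary_usd"]}
    else:  # B stores monthly pay in euros; convert (exchange rate is made up)
        return {"name": row["name"],
                "annual_salary_usd": round(row["monthly_pay_eur"] * 12 * 1.1)}

print(to_unified(db_a_row, "A"))
print(to_unified(db_b_row, "B"))
```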

485 citations


Journal Article•DOI•
TL;DR: The techniques used to build highly available computer systems are sketched, and the use of pairs of computer systems at separate locations to guard against unscheduled outages due to outside sources (communication or power failures, earthquakes, etc.) is addressed.
Abstract: The techniques used to build highly available computer systems are sketched. Historical background is provided, and terminology is defined. Empirical experience with computer failure is briefly discussed. Device improvements that have greatly increased the reliability of digital electronics are identified. Fault-tolerant design concepts and approaches to fault-tolerant hardware are outlined. The role of repair and maintenance and of design-fault tolerance is discussed. Software repair is considered. The use of pairs of computer systems at separate locations to guard against unscheduled outages due to outside sources (communication or power failures, earthquakes, etc.) is addressed.
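
A back-of-the-envelope calculation (my numbers, assuming independent failures) of why system pairs at separate sites help: the pair is unavailable only when both members are down at the same time.

```python
# Rough availability arithmetic under an independence assumption (illustrative figures).

mttf_hours = 5000.0    # assumed mean time to failure
mttr_hours = 10.0      # assumed mean time to repair

single = mttf_hours / (mttf_hours + mttr_hours)   # steady-state availability
pair = 1 - (1 - single) ** 2                      # both must be down simultaneously

print(f"single system availability: {single:.5f}")   # about 0.998
print(f"paired systems availability: {pair:.7f}")    # about 0.999996
```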

365 citations


Journal Article•DOI•
Jonathan Grudin1•
TL;DR: Three development contexts are examined to provide a framework for understanding interactive software development projects and strategies to cope with the gaps between developers and prospective users are explored at the general level of the three development paradigms.
Abstract: Three development contexts are examined to provide a framework for understanding interactive software development projects. These contexts are the competitively bid, commercial product, and in-house/custom development contexts. Factors influencing interactive systems development are examined. Specific strategies to cope with the gaps between developers and prospective users are explored at the general level of the three development paradigms.

356 citations


Journal Article•DOI•
TL;DR: The results show that automated techniques can reduce the amount of code that a domain expert needs to evaluate to identify reusable parts.
Abstract: Identification and qualification of reusable software based on software models and metrics is explored. Software metrics provide a way to automate the extraction of reusable software components from existing systems, reducing the amount of code that experts must analyze. Also, models and metrics permit feedback and improvement to make the extraction process fit a variety of environments. Some case studies are described to validate the experimental approach. They deal with only the identification phase and use a very simple model of a reusable code component, but the results show that automated techniques can reduce the amount of code that a domain expert needs to evaluate to identify reusable parts.
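
A hedged sketch of the general idea, with metrics and thresholds that are illustrative rather than the paper's model: components whose measured properties fall inside a "reusable" range are flagged automatically, so the domain expert inspects only those.

```python
# Illustrative metric-based filter for reuse candidates (thresholds are made up).

def is_reuse_candidate(metrics):
    return (50 <= metrics["loc"] <= 300          # small enough to understand
            and metrics["cyclomatic"] <= 10      # low control-flow complexity
            and metrics["fan_out"] <= 5)         # few external dependencies

components = {
    "parse_date":  {"loc": 120, "cyclomatic": 6,  "fan_out": 2},
    "main_driver": {"loc": 900, "cyclomatic": 35, "fan_out": 20},
}
candidates = [name for name, m in components.items() if is_reuse_candidate(m)]
print(candidates)   # only parse_date survives the automated filter
```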

330 citations


Journal Article•DOI•
Rafi Ahmed1, P. DeSmedt1, Weimin Du1, W. Kent1, M. Ketabchi1, Witold Litwin1, A. Rafii1, Ming-Chien Shan1 •
TL;DR: Data abstraction and encapsulation facilities in the Pegasus object model provide an extensible framework for dealing with various kinds of heterogeneities in the traditional database systems and nontraditional data sources.
Abstract: Pegasus, a heterogeneous multidatabase management system that responds to the need for effective access and management of shared data across a wide range of applications, is described. Pegasus provides facilities for multidatabase applications to access and manipulate multiple autonomous, heterogeneous, distributed object-oriented, relational, and other information systems through a uniform interface. It is a complete data management system that integrates various native and local databases. Pegasus takes advantage of object-oriented data modeling and programming capabilities. It uses both type and function abstractions to deal with mapping and integration problems. Function implementation can be defined in an underlying database language or a programming language. Data abstraction and encapsulation facilities in the Pegasus object model provide an extensible framework for dealing with various kinds of heterogeneities in traditional database systems and nontraditional data sources.

300 citations


Journal Article•DOI•
TL;DR: The method provides logical connectivity among the information resources via a semantic service layer that automates the maintenance of data integrity and provides an approximation of global data integration across systems.
Abstract: A method for integrating separately developed information resources that overcomes incompatibilities in syntax and semantics and permits the resources to be accessed and modified coherently is described. The method provides logical connectivity among the information resources via a semantic service layer that automates the maintenance of data integrity and provides an approximation of global data integration across systems. This layer is a fundamental part of the Carnot architecture, which provides tools for interoperability across global enterprises.

291 citations


Journal Article•DOI•
C.Y. Park1, A.C. Shaw1•
TL;DR: The timing tool computes the deterministic execution times for programs that are written in a subset of C and run on a bare machine and it was found that all the predicted times are consistent, and most are safe.
Abstract: Analytic methods are employed at the source-language level, using formal timing schema that include control costs, handle interferences such as interrupts, and produce guaranteed best- and worst-case bounds. The timing tool computes the deterministic execution times for programs that are written in a subset of C and run on a bare machine. Two versions of the tool were written, using two granularity extremes for the atomic elements of the timing schema. An overview of the tool is given, timing schema and code prediction are discussed, and machine analysis and timing tool design are examined. Experimental and validation results are reported. It was found that all the predicted times are consistent, and most are safe. Some predictions are fairly tight, while others are a little loose. There are clear technical reasons that explain the differences between measured and predicted times, and technical solutions that should minimize these differences within the timing schema framework are seen.
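
The flavor of a timing schema can be shown with a small sketch (mine, with made-up atomic costs): best- and worst-case bounds are composed structurally from the bounds of a construct's parts, e.g. sums for sequences, a maximum over branches for conditionals, and an iteration bound times the body cost for bounded loops.

```python
# Illustrative timing schema: (best, worst) bounds composed over program structure.

def t_atom(lo, hi):
    return (lo, hi)

def t_seq(*parts):                         # S1; S2; ... -> sum of the bounds
    return (sum(p[0] for p in parts), sum(p[1] for p in parts))

def t_if(cond, then_part, else_part):      # condition plus cheaper/dearer branch
    return (cond[0] + min(then_part[0], else_part[0]),
            cond[1] + max(then_part[1], else_part[1]))

def t_loop(cond, body, max_iter):          # zero iterations best case, bound worst case
    return (cond[0], max_iter * (cond[1] + body[1]) + cond[1])

body = t_seq(t_atom(2, 3), t_if(t_atom(1, 1), t_atom(4, 6), t_atom(2, 2)))
print(t_loop(t_atom(1, 1), body, max_iter=10))   # (1, 111)
```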

285 citations


Journal Article•DOI•
TL;DR: Acme, a network server for digital audio and video I/O, is presented and the nature of logical time systems is discussed, some examples are given, and their implementation is addressed.
Abstract: Acme, a network server for digital audio and video I/O, is presented. Acme lets users specify their synchronization requirements through an abstraction called a logical time system. The nature of logical time systems is discussed, some examples are given, and their implementation is addressed.

Journal Article•DOI•
TL;DR: Requirements imposed on both the object data model and object management by the support of complex objects are outlined and object-oriented models are compared with semantic, relational, and Codasyl models.
Abstract: Requirements imposed on both the object data model and object management by the support of complex objects are outlined. The basic concepts of an object-oriented data model are discussed. They are objects and object identifiers, aggregation, classes and instantiation mechanisms, metaclasses, and inheritance. Object-oriented models are compared with semantic, relational, and Codasyl models. Object-oriented query languages and query processing are considered. Some operational aspects of data management in object-oriented systems are examined. Schema evolution is discussed.
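
Two of the listed concepts, object identity as distinct from value equality and inheritance through class instantiation, can be illustrated in a few lines (my example, not the paper's notation).

```python
# Object identity vs. value equality, plus inheritance and aggregation (illustrative).

class Part:                        # class + instantiation mechanism
    def __init__(self, name):
        self.name = name

class CompositePart(Part):         # inheritance: a composite is-a part
    def __init__(self, name, components):
        super().__init__(name)
        self.components = components   # aggregation of other objects

a = Part("bolt")
b = Part("bolt")
print(a is b)                      # False: distinct object identifiers
print(a.name == b.name)            # True: equal values, different objects
print(isinstance(CompositePart("engine", [a, b]), Part))   # True
```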

Journal Article•DOI•
M. Gokhale, W. Holmes, A. Kopser, S. Lucas, R. Minnich, D. Sweely, Daniel P. Lopresti1 •
TL;DR: A two-slot addition called Splash, which enables a Sun workstation to outperform a Cray-2 on certain applications, is discussed and an example application, that of sequence comparison, is given.
Abstract: A two-slot addition called Splash, which enables a Sun workstation to outperform a Cray-2 on certain applications, is discussed. Following an overview of the Splash design and programming, hardware development is described. The development of the logic description generator is examined in detail. Splash's runtime environment is described, and an example application, that of sequence comparison, is given.
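
For reference, the example application is sequence comparison by dynamic programming; the plain software version below (my sketch, not the Splash implementation) computes the edit distance that the hardware maps onto a linear systolic array of FPGA stages.

```python
# Software reference for the computation Splash accelerates: edit distance by DP.

def edit_distance(a, b):
    prev = list(range(len(b) + 1))              # row for the empty prefix of a
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                  # delete from a
                           cur[j - 1] + 1,               # insert into a
                           prev[j - 1] + (ca != cb)))    # substitute or match
        prev = cur
    return prev[-1]

print(edit_distance("GATTACA", "GCATGCT"))   # small DNA-style example
```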

Journal Article•DOI•
TL;DR: The authors survey the state of distributed database technology, focusing on how well products meet the goals of transparent management of distributed and replicated data, reliability through distributed transactions, better performance, and easier, more economical system expansion.
Abstract: The authors explain what is meant by a distributed database system and discuss its characteristics. They survey the state of distributed database technology, focusing on how well products meet the goals of transparent management of distributed and replicated data, reliability through distributed transactions, better performance, and easier, more economical system expansion. They then consider unsolved problems with regard to network scaling, distribution design, distributed query processing, distributed transaction processing, integration with distributed operating systems, and distributed multidatabase systems.

Journal Article•DOI•
H. Kitano1•
TL;DR: The integration of speech and natural language processing in Phi DM-Dialog and its cost-based scheme of ambiguity resolution are discussed and its simultaneous interpretation capability, made possible by an incremental parsing and generation algorithm, is examined.
Abstract: Phi DM-Dialog, one of the first experimental speech-to-speech systems and the first to demonstrate simultaneous interpretation possibilities, is described. An overview is given of the model behind Phi DM-Dialog. It consists of a memory network for representing various knowledge levels and markers for inferencing. The markers have rich information content. The integration of speech and natural language processing in Phi DM-Dialog and its cost-based scheme of ambiguity resolution are discussed. Its simultaneous interpretation capability, which is made possible by an incremental parsing and generation algorithm, is examined. Prototype system results are reported.

Journal Article•DOI•
TL;DR: A Sparcstation facility called Phoenix is described that extends the Etherphone software architecture to permit more flexible conferencing and to control Sparcstation-based Ethernet audio transmission, and the integration of the Phoenix capabilities with Macaw, the earlier video extensions.
Abstract: The latest extension of the Etherphone project is described. It creates a powerful conferencing system that lets users control their participation in multiple conferences across multimedia networks. The emphasis is on the software mechanisms that support its new features: first, a Sparcstation facility called Phoenix that extends the Etherphone software architecture to permit more flexible conferencing and to control Sparcstation-based Ethernet audio transmission, and, second, the integration of the Phoenix capabilities with Macaw, the earlier video extensions. Also described is a multicast packet protocol for audio transmission, which reimplements and extends the earlier special-purpose protocols, adding per-channel volume control and full support for the extended conferencing modes.

Journal Article•DOI•
TL;DR: Clouds, a general-purpose operating system for distributed environments based on an object-thread model adapted from object-oriented programming, is described, along with the paradigm it embodies for structuring distributed operating systems, the potential and implications this paradigm has for users, and research directions for the future.
Abstract: The authors discuss a paradigm for structuring distributed operating systems, the potential and implications this paradigm has for users, and research directions for the future. They describe Clouds, a general-purpose operating system for distributed environments. It is based on an object-thread model adapted from object-oriented programming.

Journal Article•DOI•
TL;DR: The overall process necessary to perform spatial and temporal data composition for a distributed multimedia information system is addressed and it is found that temporal composition can be most suitably achieved at the workstation.
Abstract: The overall process necessary to perform spatial and temporal data composition for a distributed multimedia information system is addressed. With respect to delays introduced through the network, it is found that temporal composition can be most suitably achieved at the workstation. Spatial composition is most effectively performed in a hierarchical fashion as dictated by the availability of system resources. The subsequent composition methodology combines spatial and temporal composition as a network service. Database organizations and data distributions are also investigated, and spatial and temporal composition functions and their integration into the network architecture are discussed. The issue of mapping the composition process onto the network resources as a value-added service is also addressed.

Journal Article•DOI•
TL;DR: The interaction between computer architecture and IC technology is examined, and architectural trends in the areas of pipelining, memory systems, and multiprocessing are considered.
Abstract: The interaction between computer architecture and IC technology is examined. To evaluate the attractiveness of particular technologies, computer designs are assessed primarily on the basis of performance and cost. The focus is mainly on CPU performance, both because it is easier to measure and because the impact of technology is most easily seen in the CPU. The technology trends discussed concern memory size, design complexity and time, and design scaling. Architectural trends in the areas of pipelining, memory systems, and multiprocessing are considered. Opportunities and problems to be solved in the years ahead are identified.

Journal Article•DOI•
TL;DR: Three methods for attacking keystream generators are reviewed, and three techniques for designing them are considered, focusing on how they fail or how their weakness is exposed under the attacks previously described.
Abstract: Progress in the design and analysis of pseudorandom bit generators over the last decade is surveyed. Background information is provided, and the linear feedback shift registers that serve as building blocks for constructing the generators are examined. Three methods for attacking keystream generators are reviewed, and three techniques for designing them are considered, focusing on how they fail or how their weakness is exposed under the attacks previously described. These techniques are nonlinear feedforward transformation, step control, and multiclocking.
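
The basic building block can be sketched in a few lines (tap positions and seed are illustrative): a Fibonacci-style LFSR emits one keystream bit per step. Because its output is linear in the seed, a bare LFSR is weak, which is exactly why the surveyed designs layer nonlinear feedforward, step control, or multiclocking on top of it.

```python
# Illustrative Fibonacci LFSR keystream generator (taps and seed are made up).

def lfsr_bits(seed_bits, taps, n):
    state = list(seed_bits)
    out = []
    for _ in range(n):
        out.append(state[-1])            # emit the last stage as the output bit
        fb = 0
        for t in taps:
            fb ^= state[t]               # XOR of the tapped stages
        state = [fb] + state[:-1]        # shift and insert the feedback bit
    return out

keystream = lfsr_bits(seed_bits=[1, 0, 0, 1], taps=[0, 3], n=10)
ciphertext = [p ^ k for p, k in zip([1, 1, 0, 1, 0], keystream)]   # stream-cipher use
print(keystream, ciphertext)
```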

Journal Article•DOI•
Edward A. Fox1•
TL;DR: In this article, the authors introduce basic concepts in digital multimedia systems and survey recent literature on digital storage media, including optical, magnetic, and network options, and discuss the characteristics of audio and video and their digital representations.
Abstract: This paper introduces basic concepts in digital multimedia systems and surveys recent literature. Background is provided regarding developments in interactive videodiscs, which first made images and video accessible through computer systems. Digital storage media, including optical, magnetic, and network options, are addressed. The characteristics of audio and video and their digital representations are discussed. Because these media are so demanding of space and channel bandwidth, compression methods are reviewed. Standards for digital multimedia are considered. Current multimedia systems are described, and future prospects are indicated.
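
Rough arithmetic (my figures, for illustration) behind the demand for compression: even modest-resolution uncompressed video overwhelms typical storage and channel capacities.

```python
# Back-of-the-envelope data-rate arithmetic for uncompressed digital video.

width, height = 640, 480
bits_per_pixel = 24
frames_per_second = 30

uncompressed_bps = width * height * bits_per_pixel * frames_per_second
print(f"uncompressed: {uncompressed_bps / 1e6:.0f} Mbit/s")              # ~221 Mbit/s
print(f"one hour uncompressed: {uncompressed_bps * 3600 / 8 / 1e9:.0f} GB")
print(f"at 100:1 compression: {uncompressed_bps / 100 / 1e6:.1f} Mbit/s")
```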

Journal Article•DOI•
TL;DR: Improving software quality, making software engineering technology more transferable, and transferring software technology into an organization are addressed.
Abstract: The engineering process that underlies software development is examined. A brief summary of how information technology has affected both institutions and individuals in the past few decades is given. Engineering with models and metrics is then discussed. Improving software quality, making software engineering technology more transferable, and transferring software technology into an organization are addressed.

Journal Article•DOI•
TL;DR: It is shown how NuMon, a seismological analysis system for monitoring compliance with nuclear test-ban treaties, is managed within Meta, a set of tools that solves some longstanding problems of distributed applications.
Abstract: The issues of managing distributed applications are discussed, and the Meta system, a set of tools that solves some longstanding problems, is presented. The Meta model of a distributed application is described. To make the discussion concrete, it is shown how NuMon, a seismological analysis system for monitoring compliance with nuclear test-ban treaties, is managed within the Meta framework. The three steps entailed in using Meta are described. First, the programmer instruments the application and its environment with sensors and actuators. The programmer then describes the application structure using the object-oriented data modeling facilities of the authors' high-level control language, Lomita. Finally, the programmer writes a control program referencing the data model. Meta's performance and real-time behavior are examined.

Journal Article•DOI•
TL;DR: A model that allows specifications of constraints among multiple databases in a declarative fashion is proposed and the separation of the constraints from the application programs facilitates the maintenance of data constraints and allows flexibility in their implementation.
Abstract: The problem of interdatabase dependencies and the effect they have on applications updating interdependent data are addressed. A model that allows specifications of constraints among multiple databases in a declarative fashion is proposed. The separation of the constraints from the application programs facilitates the maintenance of data constraints and allows flexibility in their implementation. It allows investigation of various mechanisms for enforcing the constraints, independently of the application programs. By grouping the constraints together, it is possible to check their completeness and discover possible contradictions among them. The concept of polytransactions, which uses interdatabase dependencies to generate a series of related transactions that maintain mutual consistency among interrelated databases, is discussed.
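
A hypothetical illustration of the idea (rule syntax and names are mine, not the paper's): a dependency between two databases is stated declaratively, outside any application program, and a polytransaction-style update propagates a change to keep the interdependent data mutually consistent.

```python
# Illustrative declarative interdatabase dependency plus propagation step.

dependencies = [
    # "every customer added to the billing DB must also exist in the shipping DB"
    {"source": "billing.customers", "target": "shipping.customers",
     "propagate": lambda row: {"id": row["id"], "name": row["name"]}},
]

def update(db, table, row, log):
    db.setdefault(table, []).append(row)
    log.append((table, row))

def polytransaction(root_table, row, databases):
    log = []
    update(databases, root_table, row, log)
    for dep in dependencies:                      # generate the related updates
        if dep["source"] == root_table:
            update(databases, dep["target"], dep["propagate"](row), log)
    return log

dbs = {}
print(polytransaction("billing.customers", {"id": 7, "name": "Ada", "balance": 0}, dbs))
```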

Journal Article•DOI•
TL;DR: The architecture of the Hector multiprocessor, which exploits current microprocessor technology to produce a machine with a good cost/performance tradeoff, is described, and its interconnection backplane is a key design feature that can accommodate future technology.
Abstract: The architecture of the Hector multiprocessor, which exploits current microprocessor technology to produce a machine with a good cost/performance tradeoff, is described. A key design feature of Hector is its interconnection backplane, which can accommodate future technology because it uses simple hardware with short critical paths in logic circuits and short lines in the interconnection network. The system is reliable and flexible and can be realized at a relatively low cost. The hierarchical structure results in a fast backplane and a bandwidth that increases linearly with the number of processors. Hector scales efficiently to larger sizes and faster processors.

Journal Article•DOI•
TL;DR: The design and implementation of a real-time programming language called Flex, which is a derivative of C++, are presented and it is shown how different types of timing requirements might be expressed and enforced in Flex.
Abstract: The design and implementation of a real-time programming language called Flex, which is a derivative of C++, are presented. It is shown how different types of timing requirements might be expressed and enforced in Flex, how they might be fulfilled in a flexible way using different program models, and how the programming environment can help in making binding and scheduling decisions. The timing constraint primitives in Flex are easy to use yet powerful enough to define both independent and relative timing constraints. Program models such as imprecise computation and performance polymorphism support flexible real-time programs. In addition, programmers can use a performance measurement tool that produces statistically correct timing models to predict the expected execution time of a program and to help make binding decisions. A real-time programming environment is also presented.
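
The sketch below is not Flex syntax; it only illustrates, in Python, the kind of relative timing constraint such primitives express: run a block for at most a given interval and return the best result available when time runs out, in the spirit of the imprecise-computation program model.

```python
# Illustrative "finish within the deadline, return the best result so far" pattern.

import time

def within_deadline(ms, refine, initial):
    """Run successive refinement steps until the relative deadline is reached."""
    deadline = time.monotonic() + ms / 1000.0
    result = initial
    while time.monotonic() < deadline:
        result = refine(result)              # each step improves the answer
    return result                            # approximate if time ran out

# Example: refine an estimate of pi by adding Leibniz-series terms until time is up.
def add_term(state):
    total, k = state
    return (total + (-1) ** k / (2 * k + 1), k + 1)

total, terms = within_deadline(50, add_term, (0.0, 0))
print(4 * total, terms)
```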

Journal Article•DOI•
Kang G. Shin1•
TL;DR: The design, implementation, and evaluation of a distributed real-time architecture called HARTS (hexagonal architecture for real-time systems) are discussed, emphasizing its support of time-constrained, fault-tolerant communications and I/O (input/output) requirements.
Abstract: The design, implementation, and evaluation of a distributed real-time architecture called HARTS (hexagonal architecture for real-time systems) are discussed, emphasizing its support of time-constrained, fault-tolerant communications and I/O (input/output) requirements. HARTS consists of shared-memory multiprocessor nodes, interconnected by a wrapped hexagonal mesh. This architecture is intended to meet three main requirements of real-time computing: high performance, high reliability, and extensive I/O. The high-level and low-level architecture is described. The evaluation of HARTS, using modeling and simulation with actual parameters derived from its implementation, is reported. Fault-tolerant routing, clock synchronization, and the I/O architecture are examined.

Journal Article•DOI•
TL;DR: It is concluded that innovations can confer advantage if they leverage resources available to the innovator or to the innovator's cooperating group but not to competitors.
Abstract: Case studies and recent economic theory are used to show why gaining and defending competitive advantage through innovative use of information technology has proved to be difficult. The principal lines of research in strategic and competitive information systems are surveyed. It is concluded that innovations can confer advantage if they leverage resources available to the innovator or to the innovator's cooperating group but not to competitors.

Journal Article•DOI•
TL;DR: A unified methodology for modeling both soft and hard real-time systems is presented, using techniques that combine the effects of performance, reliability/availability, and deadline violation into a single model.
Abstract: A unified methodology for modeling both soft and hard real-time systems is presented. Techniques that combine the effects of performance, reliability/availability, and deadline violation into a single model are used. An online transaction processing system is used as an example to illustrate the modeling techniques. Dynamic failures due to a transaction violating a hard deadline are taken into account by incorporating additional transitions in the Markov chain model of the failure-repair behavior. System performance in the various configurations is considered by using throughput and response-time distribution as reward rates. Since the Markov chains used in computing the distribution of response time are often very large and complex, a higher level interface based on a variation of stochastic Petri nets called stochastic reward nets is used.
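
A tiny example of the modeling style (rates and rewards are assumptions, not the paper's system): a two-state failure-repair Markov chain whose steady-state probabilities weight per-state reward rates such as throughput.

```python
# Illustrative Markov reward calculation for a two-state failure-repair model.

failure_rate = 1 / 1000.0      # per hour, UP -> DOWN (assumed)
repair_rate = 1 / 4.0          # per hour, DOWN -> UP (assumed)

# Steady state of the two-state CTMC: pi_up * failure_rate = pi_down * repair_rate.
pi_up = repair_rate / (failure_rate + repair_rate)
pi_down = 1 - pi_up

reward = {"up": 120.0, "down": 0.0}          # transactions/second as reward rates
expected_throughput = pi_up * reward["up"] + pi_down * reward["down"]
print(f"availability {pi_up:.5f}, expected throughput {expected_throughput:.2f} tps")
```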

Journal Article•DOI•
TL;DR: Key developments for the 1990s are considered, which include base technologies, 3-D user interfaces, virtual realities, multimedia and hypermedia, groupware, and intelligent agents, i.e., computer-based assistants or guides.
Abstract: The purpose and history of user interfaces are briefly recounted. The language model and the implementation model of the user-computer dialogue are examined. User-centered design is discussed, and approaches to design tools are described. Key developments for the 1990s are considered. These include base technologies, 3-D user interfaces, virtual realities, multimedia and hypermedia, groupware, and intelligent agents, i.e., computer-based assistants or guides.