
Showing papers on "Application software published in 1978"


Journal ArticleDOI
01 Oct 1978
TL;DR: SIFT as discussed by the authors is an ultra-reliable computer for critical aircraft control applications that achieves fault tolerance by the replication of tasks among processing units, and it uses a specially designed redundant bus system to interconnect the processing units.
Abstract: SIFT (Software Implemented Fault Tolerance) is an ultrareliable computer for critical aircraft control applications that achieves fault tolerance by the replication of tasks among processing units. The main processing units are off-the-shelf minicomputers, with standard microcomputers serving as the interface to the I/O system. Fault isolation is achieved by using a specially designed redundant bus system to interconnect the processing units. Error detection and analysis and system reconfiguration are performed by software. Iterative tasks are redundantly executed, and the results of each iteration are voted upon before being used. Thus, any single failure in a processing unit or bus can be tolerated with triplication of tasks, and subsequent failures can be tolerated after reconfiguration. Independent execution by separate processors means that the processors need only be loosely synchronized, and a novel fault-tolerant synchronization method is described. The SIFT software is highly structured and is formally specified using the SRI-developed SPECIAL language. The correctness of SIFT is to be proved using a hierarchy of formal models. A Markov model is used both to analyze the reliability of the system and to serve as the formal requirement for the SIFT design. Axioms are given to characterize the high-level behavior of the system, from which a correctness statement has been proved. An engineering test version of SIFT is currently being built.
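
A minimal Python sketch of the replication-and-voting idea described above: each iterative task is executed on several processors and the results are majority-voted before being used. The function names and data structures are illustrative assumptions, not SIFT's actual implementation.

```python
from collections import Counter

def majority_vote(results):
    """Return the value produced by a majority of the replicated executions,
    or None if no value has a majority (an uncorrectable discrepancy)."""
    value, votes = Counter(results).most_common(1)[0]
    return value if votes > len(results) // 2 else None

def run_replicated(task, state, processors):
    """Run one iteration of `task` once per (simulated) processor and vote
    on the results before they are used by the next iteration."""
    results = [task(state) for _ in processors]   # independent executions
    voted = majority_vote(results)
    if voted is None:
        raise RuntimeError("no majority: reconfiguration required")
    return voted

# A control-law iteration triplicated across three processors.
next_state = run_replicated(lambda s: s + 1, 0, ["P1", "P2", "P3"])  # -> 1
```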

549 citations


Proceedings ArticleDOI
R.C. Cheung1
13 Nov 1978
TL;DR: A user-oriented software reliability figure of merit is defined to measure the reliability of a software system with respect to a user environment and the effects of the user profile, which summarizes the characteristics of the users of a system, on system reliability are discussed.
Abstract: A user-oriented reliability model has been developed to measure the reliability of service that a system provides to a user community. It has been observed that in many systems, especially software systems, reliable service can be provided to a user when it is known that errors exist, provided that the service requested does not utilize the defective parts. The reliability of service, therefore, depends both on the reliability of the components and the probabilistic distribution of utilization of the components to provide the service. In this paper a user-oriented reliability figure of merit is defined to measure the reliability of a software system with respect to a user environment. The effects of the user profile, which summarizes the characteristics of the users of a system, on system reliability are discussed. A simple Markov model is formulated to determine the reliability of a software system based on the reliability of each individual module and the measured inter-modular transition probabilities as the user profile. Sensitivity analysis techniques are developed to determine modules most critical to system reliability. The applications of this model to develop cost-effective testing strategies and to determine the expected penalty cost of failure are also discussed. Some future refinements and extensions of the model are presented.
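
The following Python sketch illustrates the kind of module-level Markov calculation the abstract describes, with invented module reliabilities and transition probabilities (the numbers and variable names are assumptions, not data from the paper): modules form the transient states of an absorbing Markov chain, and system reliability is the probability of reaching correct termination rather than the failure state.

```python
import numpy as np

# Module reliabilities R_i and inter-module transition probabilities p_ij
# as measured from the user profile (illustrative numbers only).
R = np.array([0.999, 0.995, 0.990])          # reliability of modules 1..3
P = np.array([[0.0, 0.7, 0.2],               # p_ij: probability that module i
              [0.0, 0.0, 0.9],               # hands control to module j
              [0.1, 0.0, 0.0]])
exit_p = 1.0 - P.sum(axis=1)                 # probability of correct termination
                                             # directly from each module

# Absorbing-chain view: Q[i, j] = R_i * p_ij is the chance module i executes
# correctly AND passes control to module j; b[i] = R_i * exit_p[i] is the
# chance it executes correctly and terminates.
Q = R[:, None] * P
b = R * exit_p

# System reliability = probability of eventually reaching correct termination
# when execution starts in module 1: solve (I - Q) x = b and take x[0].
x = np.linalg.solve(np.eye(len(R)) - Q, b)
print(f"system reliability starting from module 1: {x[0]:.6f}")
```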

355 citations


Journal ArticleDOI
01 Feb 1978
TL;DR: Some recent research results are presented on algorithms designed for use in stand-alone, single-loop calculator- or microprocessor-based controllers; these algorithms are specifically tailored for simple implementation in a relatively low-computing-power, discrete-time environment.
Abstract: Process control applications and control algorithms suited for microprocessors are surveyed. Applications are noted both in large, general purpose process control systems and in specialized applications that have been made possible by the availability of computing power in small packages. Distributed control and use of extended data buses (data highways), both made possible by extensive use of microprocessors, are becoming standard in general purpose systems. General purpose process control systems still utilize proportional-integral-derivative (PID) algorithms and variants of them for the most part. Some recent research results on algorithms designed for use in stand-alone, single-loop calculator or microprocessor-based controllers are presented. These algorithms, which could also be used in direct digital control (DDC) systems, are specifically tailored for simple implementation in a relatively low computing power, discrete-time environment.
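
As a concrete illustration of a discrete-time control algorithm of the kind surveyed, here is a minimal Python sketch of an incremental (velocity-form) PID controller; this is a generic textbook formulation under assumed gains and sample period, not one of the specific algorithms from the paper.

```python
class VelocityPID:
    """Incremental (velocity-form) discrete PID controller:
    u_k = u_{k-1} + Kp*(e_k - e_{k-1}) + Ki*T*e_k + (Kd/T)*(e_k - 2*e_{k-1} + e_{k-2}).
    The incremental form needs little state, which suits a low-computing-power,
    single-loop microprocessor controller."""

    def __init__(self, kp, ki, kd, period, u0=0.0):
        self.kp, self.ki, self.kd, self.t = kp, ki, kd, period
        self.u = u0                # last controller output
        self.e1 = self.e2 = 0.0    # previous two error samples

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        du = (self.kp * (e - self.e1)
              + self.ki * self.t * e
              + self.kd / self.t * (e - 2 * self.e1 + self.e2))
        self.u += du
        self.e2, self.e1 = self.e1, e
        return self.u

# One loop iteration at a 0.1 s sample period (illustrative gains).
pid = VelocityPID(kp=2.0, ki=0.5, kd=0.1, period=0.1)
output = pid.update(setpoint=50.0, measurement=47.5)
```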

66 citations


Patent
18 Oct 1978
TL;DR: The microprocessor support system as mentioned in this paper provides a total "laboratory" environment for developing and testing application software as well as for debugging the microprocessor-based application machine itself.
Abstract: The disclosed microprocessor support system provides a total "laboratory" environment for developing and testing application software as well as for debugging the microprocessor-based application machine itself. The microprocessor support system contains a time shared minicomputer equipped with a full set of peripherals which functions as the main or operating system. A data link connects this operating system with test equipment located at the site of the application machine. This test equipment consists of a field test unit which provides an interface between the application machine, a local keyboard terminal and the operating system such that an engineer at the site of the application machine has access through the field test unit to both the microprocessor-based application machine and the operating system with its sophisticated hardware and software resources to assist in developing and testing application software, as well as debugging the application machine itself.

49 citations


Journal ArticleDOI
TL;DR: Vertical migration is a technique which improves system performance by moving software primitives through layers of application program and operating system software and microcode.
Abstract: Vertical migration is a technique which improves system performance by moving software primitives through layers of application program and operating system software and microcode.

46 citations


Journal ArticleDOI
TL;DR: A simple classification of graphics input requirements relates types of information to types of devices, giving the system designer a framework for overall design decisions.
Abstract: A simple classification of graphics input requirements relates types of information to types of devices, giving the system designer a framework for overall design decisions.

23 citations


Proceedings ArticleDOI
01 Sep 1978
TL;DR: The goal is to develop guidelines for writing distributed applications software, illustrated by showing how to develop distributed software for discrete-event simulation by taking a radically different view of the application.
Abstract: This paper is concerned with the pros and cons of writing distributed applications software. Most applications software is highly sequential due to the sharing of variables. Here we focus attention on one such application: discrete-event simulation. We show how to develop distributed software for this application by taking a radically different view of the application. We outline proofs that our distributed software is correct. Our goal is to develop guidelines for writing distributed applications software.

21 citations


Proceedings ArticleDOI
G. D. Bergland1
19 Jun 1978
TL;DR: This tutorial considers the structure and construction of reliable software and three of the major structured design methodologies which have been reported in the literature are developed and compared.
Abstract: This tutorial considers the structure and construction of reliable software (see Figure 1). By way of introduction, several of the structured programming and software engineering techniques are classified into three groups; those which impact primarily on the program structure, the development process, and the development support tools. Structural Analysis Concepts are described which have their major impact at the code level, the module level, and the system level. Finally, three of the major structured design methodologies which have been reported in the literature are developed and compared. Functional Decomposition, the Data Flow Design Method, and the Data Structure Design Method are described, characterized, and applied to a specific example. While no one design methodology can be shown to be "correct" for all types of problems, it is felt that these three can cover a variety of applications. An "interim" approach for large software design problems is suggested which may be useful until an accepted "correct" methodology comes along.

16 citations


Journal ArticleDOI
TL;DR: Deterministic and probabilistic models capable of representing more and more system parameters are being developed, and one of their primary attractions is low cost.

Abstract: Deterministic and probabilistic models capable of representing more and more system parameters are being developed. One of their primary attractions is low cost.

13 citations


Proceedings ArticleDOI
13 Nov 1978
TL;DR: The overall design of some modular capabilities for error detection testing, verification, and documentation of concurrent process HAL/S programs are described.
Abstract: This paper describes the overall design of some modular capabilities for error detection testing, verification, and documentation of concurrent process HAL/S programs. The work described draws upon many ideas first advanced in building tools for single process software. In this paper, these ideas are significantly extended and adapted to realize the power of these tools for concurrent software. Particular attention is paid to the design of static data flow analysis capabilities for concurrent software.

12 citations


Proceedings ArticleDOI
13 Nov 1978
TL;DR: The evolution and future development of the CPS architectures are discussed and the intercommunication subsystem requirement, interconnection networks structure, network hardware and software design aspects are carefully examined.
Abstract: In this paper, various Communication Processor Systems (CPS) architectures are reviewed and classified according to their interconnection organizations. The evolution and future development of the CPS architectures are discussed and the intercommunication subsystem requirement, interconnection networks structure, network hardware and software design aspects are carefully examined. The functional baseline for the processing unit of the CPS is identified and a comparative study of various existing and proposed processing units is included. The critical review is based on the important features such as the reliability improvement, the interrupt handling, the priority encoding, and the optimized repertoire, etc. Finally, general software considerations are also outlined.

Journal ArticleDOI
01 Feb 1978
TL;DR: The assembly and use of more new and versatile instrumentation systems is now enhanced dramatically with the widespread application of IEEE Standard 488-1975, "Digital Interface for Programmable Instrumentation."
Abstract: The assembly and use of more new and versatile instrumentation systems is now enhanced dramatically with the widespread application of IEEE Standard 488-1975, "Digital Interface for Programmable Instrumentation." This Standard interface provides an easy-to-use, high-performance concept that links instruments, calculators or computers, and peripheral devices to function as automated instrumentation systems. Microprocessor technology applied to both smart instrumentation and the implementation of IEEE Standard 488's interface functions provides still further benefits to system designers and users. This article describes both the interface function concepts important to instrumentation systems and the roles played by IEEE Standard 488 and microprocessor techniques in support of these functional concepts. Microprocessor and IEEE Standard 488 interface techniques combine to provide designers and users with significant new tools for improved product performance, something neither technology alone could provide. In the near future, special LSI chips optimized to integrate the two technologies should further enhance the benefits for all concerned.

Journal ArticleDOI
J. Doerr1
01 Feb 1978
TL;DR: The origins of both applications are traced and current products and services described in a tutorial fashion and trends are forecasted and conclusions drawn about these two low-cost computing revolutions.
Abstract: Personal computing is one of the most revolutionary applications of microprocessors; single-board computing is another. The origins of both applications are traced and current products and services described in a tutorial fashion. Trends are forecasted and conclusions drawn about these two low-cost computing revolutions.

Proceedings ArticleDOI
13 Nov 1978
TL;DR: This extended-hash-code methodology permits the integration into the system of such user-oriented features as overlapping the query input and database search processes to give rapid apparent response, continual feedback to the user of the progress of the search, and abbreviation of keys by truncation.
Abstract: Experimental information-retrieval systems for telephone directory assistance and for filing office correspondence utilizing a superimposed-coded partial-match-retrieval scheme have been implemented. This extended-hash-code methodology permits the integration into the system of such user-oriented features as overlapping the query input and database search processes to give rapid apparent response, continual feedback to the user of the progress of the search, and abbreviation of keys by truncation. Simple coding and retrieval logic allow updating and rapid response to be achieved easily in either software or hardware realizations. The flexibility of the technique offers special advantages when casual users with fragmentary and ill-formulated queries need to approach a computerized information-retrieval system.
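
A rough Python sketch of how a superimposed-coded partial-match scheme of this general kind can work, under simplifying assumptions (the signature width, hashing, and exact-check step are invented for illustration): each record's keys are hashed into a fixed-width bit signature, and a query matches any record whose signature contains all of the query's bits, with false drops removed by an exact comparison.

```python
import hashlib

SIG_BITS = 64          # signature width (illustrative)
BITS_PER_KEY = 3       # bits set per key, superimposed-code style

def key_bits(key):
    """Map one key to a small set of bit positions via hashing."""
    digest = hashlib.sha1(key.lower().encode()).digest()
    return {digest[i] % SIG_BITS for i in range(BITS_PER_KEY)}

def signature(keys):
    """Superimpose (OR together) the bit patterns of all keys in a record."""
    sig = 0
    for key in keys:
        for bit in key_bits(key):
            sig |= 1 << bit
    return sig

records = [("Smith", "John", "Maple St"),
           ("Smyth", "Joan", "Main St"),
           ("Jones", "John", "Main St")]
signatures = [signature(r) for r in records]

def partial_match(query_keys):
    """Return records whose signatures contain every query bit; false drops
    caused by superimposition are removed by an exact check."""
    qsig = signature(query_keys)
    hits = [r for r, s in zip(records, signatures) if s & qsig == qsig]
    return [r for r in hits
            if all(q.lower() in (f.lower() for f in r) for q in query_keys)]

print(partial_match(["John", "Main St"]))   # -> [('Jones', 'John', 'Main St')]
```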

Proceedings ArticleDOI
13 Nov 1978
TL;DR: The issues involved in building a real-time control system using a message-directed distributed architecture are discussed, including the viability of using hierarchical models to organize the software.
Abstract: This paper discusses the issues involved in building a real-time control system using a message-directed distributed architecture. We begin with a discussion of the nature of real-time software, including the viability of using hierarchical models to organize the software. Next we discuss some realistic design objectives for a distributed real-time system including fault isolation, independent module verification, context independence, decentralized control and partitioned system state. We conclude with some observations concerning the general nature of distributed system software.

Journal ArticleDOI
TL;DR: This experimental microprocessor-based office system incorporates architectural features–such as memory paging and distributed processing–normally associated with large computer systems.
Abstract: This experimental microprocessor-based office system incorporates architectural features–such as memory paging and distributed processing–normally associated with large computer systems.

Proceedings ArticleDOI
13 Nov 1978
TL;DR: The design representation schemes used in the IDAP framework, and the capabilities of the system being developed, are discussed.
Abstract: This paper discusses the Integrated Design Analysis Programs (IDAP) system, which is currently being developed at BCS. The objectives of this system are to automate the specification, analysis, and verification of software design. This paper discusses the design representation schemes used in the IDAP framework, and the capabilities of the system being developed.

Proceedings ArticleDOI
19 Jun 1978
TL;DR: An interactive computer graphics method is presented for generating three-dimensional geometric information by drawing in two-dimensional plans and simultaneously using a powerful three-dimensional editor, which has been successfully used in structural analysis, energy analysis, and visual applications.
Abstract: An interactive computer graphics method is presented for generating three-dimensional geometric information by drawing in two-dimensional plans and simultaneously using a powerful three-dimensional editor. The system was initially developed for its suitability in architectural design procedures, but the strength of this method is not unique to architectural applications. The advantage of this approach is that the user may trace a two-dimensional drawing directly onto a plane in the system's three-dimensional data space. To generate a three-dimensional form, the designer may extrude any or all of the lines into planes, creating a three-dimensional representation from the plan. The user receives continuous visual feedback from the graphic screen at all times. This input cycle may be used or the database may be edited any number of times as a complex representation is generated. The program has been interfaced to a number of application routines and color display routines. It has been successfully used in structural analysis, energy analysis, and visual applications. The following describes the specific details of the program, a typical scenario, and some of the architectural and engineering applications.
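
A minimal Python sketch (not the paper's code) of the extrusion step described above: a 2-D segment traced on the plan is swept vertically into a wall plane, represented here as four 3-D vertices. The function name and the quad representation are assumptions for illustration.

```python
def extrude_segment(p0, p1, base_z=0.0, height=3.0):
    """Extrude a 2-D plan segment into a vertical quad: four 3-D vertices
    ordered around the resulting wall plane."""
    (x0, y0), (x1, y1) = p0, p1
    return [(x0, y0, base_z),
            (x1, y1, base_z),
            (x1, y1, base_z + height),
            (x0, y0, base_z + height)]

# A small rectangular plan traced in 2-D, extruded into four wall planes.
plan = [((0, 0), (4, 0)), ((4, 0), (4, 3)), ((4, 3), (0, 3)), ((0, 3), (0, 0))]
walls = [extrude_segment(a, b, height=2.7) for a, b in plan]
```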

Proceedings ArticleDOI
13 Nov 1978
TL;DR: The kernel of an Executive is described, being implemented for the Honeywell Experimental Distributed Processor (HXDP) -- a vehicle for research in distributed computers for real-time control.
Abstract: This paper describes the kernel of an Executive being implemented for the Honeywell Experimental Distributed Processor (HXDP) -- a vehicle for research in distributed computers for real-time control. The kernel provides message transmission primitives for use by application programs or higher level executive functions. In the paper we describe the message transmission primitives provided by the kernel and the rationale for their selection based upon the objectives and constraints described in a companion paper.
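
The sketch below shows, in Python, what message transmission primitives of this general kind might look like; the send/receive names and the mailbox-per-process structure are assumptions for illustration, not the HXDP kernel's actual interface.

```python
import queue

class MessageKernel:
    """Toy kernel offering send/receive primitives between named processes.
    Each destination owns a FIFO mailbox; receive blocks until a message
    arrives (or an optional timeout expires)."""

    def __init__(self, process_names):
        self.mailboxes = {name: queue.Queue() for name in process_names}

    def send(self, dest, payload, source):
        self.mailboxes[dest].put((source, payload))

    def receive(self, process, timeout=None):
        return self.mailboxes[process].get(timeout=timeout)

# Two application tasks exchanging a message through the kernel.
kernel = MessageKernel(["control_task", "sensor_task"])
kernel.send("control_task", {"pressure": 101.3}, source="sensor_task")
src, msg = kernel.receive("control_task")
```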

Proceedings ArticleDOI
19 Jun 1978
TL;DR: From discussion of user-encountered production bottlenecks it is demonstrated that personnel sub-system requirements will drive developments in hardware, software, systems configuration, and data base structure over the near-term future.

Abstract: From discussion of user-encountered production bottlenecks it is demonstrated that personnel sub-system requirements will drive developments in hardware, software, systems configuration, and data base structure over the near-term future. Examples are taken from the discussion and used to illustrate exploitation of known capabilities.

Proceedings ArticleDOI
T.S. Chow1
01 Jan 1978
TL;DR: An algorithmic way of evaluating the correctness of software designs by considering interprocess communications is offered, and algorithms to detect anomalies in the design are developed.
Abstract: This paper offers an algorithmic way of evaluating the correctness of software designs by considering interprocess communications. A process is an activation of a program module, whose design is in the form of a finite state machine. Processes communicate with each other by sending messages. Anomalies in the design are defined, such as oscillation, surprise, deadlock, race condition, and failure to handle concurrent messages. Algorithms to detect these anomalies are developed. Application experience is summarized.
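
A small Python sketch of the style of analysis the abstract describes, under a deliberately simplified model (the process descriptions, FIFO channel model, and "stuck state" criterion are assumptions, not the paper's algorithms): two finite-state processes communicate over FIFO channels, the reachable global states are enumerated, and any global state with no enabled transition is reported. Distinguishing proper termination from deadlock, and detecting the other anomalies the paper defines, would require the fuller algorithms of the paper.

```python
from collections import deque

# Each process is a finite state machine: {state: [(action, channel, msg, next_state)]}
# where action "!" sends msg on the FIFO channel and "?" receives it.
P1 = {"s0": [("!", "c12", "req", "s1")],
      "s1": [("?", "c21", "ack", "s0")]}
P2 = {"t0": [("?", "c12", "req", "t1")],
      "t1": [("!", "c21", "ack", "t0")]}
PROCS = [P1, P2]

def enabled_moves(locals_, chans):
    """Transitions that can fire in a global state (local states + channel contents)."""
    moves = []
    for i, fsm in enumerate(PROCS):
        for act, ch, msg, nxt in fsm.get(locals_[i], []):
            if act == "!" or (chans[ch] and chans[ch][0] == msg):
                moves.append((i, act, ch, msg, nxt))
    return moves

def find_stuck_states(start):
    """Enumerate reachable global states and report those where nothing can fire.
    (Separating proper termination from deadlock would need marked final states.)"""
    seen, frontier, stuck = set(), deque([start]), []
    while frontier:
        locals_, chans = frontier.popleft()
        key = (locals_, tuple(tuple(chans[c]) for c in sorted(chans)))
        if key in seen:
            continue
        seen.add(key)
        moves = enabled_moves(locals_, chans)
        if not moves:
            stuck.append((locals_, chans))
        for i, act, ch, msg, nxt in moves:
            new_locals = list(locals_)
            new_locals[i] = nxt
            new_chans = {c: list(v) for c, v in chans.items()}
            if act == "!":
                new_chans[ch].append(msg)
            else:
                new_chans[ch].pop(0)
            frontier.append((tuple(new_locals), new_chans))
    return stuck

start = (("s0", "t0"), {"c12": [], "c21": []})
print(find_stuck_states(start))   # -> []: this request/acknowledge exchange never blocks
```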

Journal ArticleDOI
TL;DR: A computer system to control and acquire data from a set of ten neutron and x-ray scattering and diffraction experiments located at the High Flux Beam Reactor at Brookhaven National Laboratory has operated in a routine manner for over two years.
Abstract: A computer system to control and acquire data from a set of ten neutron and x-ray scattering and diffraction experiments located at the High Flux Beam Reactor at Brookhaven National Laboratory has operated in a routine manner for over two years. The system has been constructed according to a functionally distributed architecture and thus consists of a set of functional nodes. Ten of these nodes, the private or application nodes, perform the function "execute programs to control and acquire data from experiment number x". An additional functional node, the common or shared service node, performs the function "provide a set of shared services to the application nodes". The shared service node has been successfully implemented in software with in-house code oriented towards transaction processing and in hardware with a Digital Equipment Corporation PDP-11/40 computer. However, recent demands that this node provide an expanded set of services have required that its implementation elements be modified and extended. In particular, the node hardware has been changed to a PDP-11/45 processor and the software present at the node has been extended from operation in two modes of logical address space to three modes. A discussion of the systems analysis principles which influenced the manner in which these modifications and extensions were carried out is given. The structure of the old two-mode software is briefly reviewed in order to provide a basis for an examination of its three-mode replacement.

Journal ArticleDOI
TL;DR: An ordered sequence of stages, well-supported development tools, good programming practices: management backing brings it all together in microprocessor-based development.

Abstract: An ordered sequence of stages, well-supported development tools, good programming practices: management backing brings it all together in microprocessor-based development.

Proceedings ArticleDOI
J. Baer1
13 Nov 1978
TL;DR: A classification of alterable architectures is given and problem areas in the software control and program design for these architectures are delineated by considering an application area, namely searching.
Abstract: A classification of alterable architectures is given. Problem areas in the software control and program design for these architectures are delineated. This is illustrated by considering an application area, namely searching.

Proceedings ArticleDOI
J.G. Yee1, S.Y.H. Su
13 Nov 1978
TL;DR: A scheme is presented which generates estimated data to replace wrong or unexpected input data in a real-time system that accepts highly correlated numerical data as input.
Abstract: The goal of this research is to provide software fault tolerance in a real-time system. As the first step, this paper presents a scheme which generates estimated data to replace the wrong or unexpected input data of a real-time system which accepts highly correlated numerical data as input. Such estimated data allow a program to continue its operation without interruption. Reliability evaluation of this scheme is given to show that the scheme improves software reliability.
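
The abstract does not detail the estimation scheme; the Python sketch below shows one simple possibility under assumed conditions (the extrapolation rule and tolerance are invented for illustration): predict the next sample by linear extrapolation of recently accepted, highly correlated samples, and substitute the estimate whenever the raw input deviates from the prediction by more than a tolerance, so processing continues without interruption.

```python
def filter_input(samples, tolerance=5.0):
    """Replace implausible samples with an estimate extrapolated from
    recent accepted values, so the real-time loop is never interrupted."""
    accepted = []
    for x in samples:
        if len(accepted) >= 2:
            # Linear extrapolation from the last two accepted samples.
            estimate = 2 * accepted[-1] - accepted[-2]
            if abs(x - estimate) > tolerance:
                x = estimate            # reject the raw input, use the estimate
        accepted.append(x)
    return accepted

# A spike at the fourth sample is replaced by its extrapolated estimate.
print(filter_input([10.0, 10.5, 11.0, 99.0, 11.9]))
# -> [10.0, 10.5, 11.0, 11.5, 11.9]
```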

Proceedings ArticleDOI
19 Jun 1978
TL;DR: Several concepts are presented which permit constructing the PROTOTYPE of an application using techniques of computer-assisted construction and adapting it to the final run structure defined by software, hardware, and firmware facilities.

Abstract: SIGMA-CAD is a general purpose CAD system based essentially on two new ideas: (1) construction of the PROTOTYPE of an application, which facilitates the implementation of an application by using techniques of computer-assisted construction, allowing an incompletely defined system to function in simulation and thus permitting close collaboration between the builder and the future user; (2) adaptation of this PROTOTYPE to the final run structure defined by software, hardware, and firmware facilities (choice, for example, of method of data management, structure of the working station, connections with the OS, use of special processors, etc.). This paper presents several concepts which permit the implementation of these ideas.

Proceedings ArticleDOI
13 Nov 1978
TL;DR: Methods for achieving an acceptable level of fault-tolerance in a multiprocessor designed for the execution of critical flight tasks on commercial aircraft and how the identification of faulty units can be effected are described.
Abstract: We describe methods for achieving an acceptable level of fault-tolerance in a multiprocessor designed for the execution of critical flight tasks on commercial aircraft. The SIFT (Software Implemented Fault Tolerance) system differs from other reliable computer architectures in the manner of utilization of redundant resources. The SIFT system is designed to cope with hardware faults by detecting discrepancies in the results computed by separately executed versions of critical programs. When such a discrepancy occurs we identify the processor/memory unit, bus unit, or processor/bus interface that has failed. A reconfiguration of task execution among the available resources is carried out to avoid the use of hardware units that have been identified as faulty or unreliable. We explain how the identification of faulty units can be effected, how the system may be reconfigured to avoid use of faulty units, and show that these reconfiguration policies are "safe" with respect to faults that do not occur in separated units during the short interval required to carry out the reconfiguration of tasks.

Proceedings ArticleDOI
13 Nov 1978
TL;DR: In this paper, SDEM (Software Development Engineer's Methodology) and SDSS (Software Development Support System), software development tools developed to automate application software development, are introduced.
Abstract: During the past decade, we have continuously tried to apply several methodologies to automate application software development. We have also been encouraged to develop powerful tools for supporting effective software development. In this paper, we introduce backgrounds, abstracts, and preliminary evaluations of SDEM (Software Development Engineer's Methodology) and SDSS (Software Development Support System) which have resulted from our efforts. We started SDEM by defining a clear life cycle of software development, then we clarified purposes, contents, and procedures of activities at each phase. We provide the WHY, WHAT, and HOW at each development phase following the basic concept of requirements engineering, and we especially emphasize clarifying activity criteria at the early stages of development. SDSS, on the other hand, is being developed as a systematic set of practical tools to aid system development using today's high-level languages.

Journal ArticleDOI
TL;DR: A theoretical nonhomogeneous Markov model is introduced for reliability analysis of computing systems in aerospace applications that accounts for the actual computations the system performs in the use environment.
Abstract: Reliability analysis for computing systems in aerospace applications must account for actual computations the system performs in the use environment. This paper introduces a theoretical nonhomogeneous Markov model for such applications.

Proceedings ArticleDOI
01 Nov 1978
TL;DR: In this article, an approach to evolutionary database system design and evolutionary database design is outlined which is intended to lower the total costs of maintaining, developing, and executing application programs against a database.
Abstract: We outline herein an approach to evolutionary database systems design and evolutionary database design which is intended to lower the total costs of maintaining, developing and executing application programs against a database. The premise is that it should be possible to have an evolutionary database system which executes efficiently while providing explicit support for changes in application usage. The key concepts are: hierarchical layering of a database specification, inclusion of static/dynamic characteristics in specification of the database, and a hybrid compile/interpret system for the execution phase of the data management system itself.