
Showing papers on "Software published in 1972"


Book ChapterDOI
01 Jan 1972
TL;DR: The chapter describes the means of predicting mission success on the basis of errors which occur during testing and describes the problems in categorizing software anomalies.
Abstract: This chapter presents software reliability research. The software reliability study was initiated by the Advanced Information Systems subdivision of McDonnell Douglas Astronautics Company, Huntington Beach, California, to conduct research into the nature of the software reliability problem, including definitions, contributing factors, and means for control. Discrepancy reports, which originated during the development of two large-scale real-time systems, form two separate primary data sources for the reliability study. A mathematical model, descriptively entitled the De-Eutrophication Process, was developed to describe the time pattern of the occurrence of discrepancies. This model has been employed to estimate the initial or residual error content in a software package as well as to estimate the time between discrepancies at any phase of its development. The chapter describes the means of predicting mission success on the basis of errors which occur during testing. It also describes the problems in categorizing software anomalies and discusses the special area of the genesis of discrepancies during the integration of modules.

1,019 citations
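
The de-eutrophication process described above is generally identified with what is now called the Jelinski-Moranda model, in which the hazard rate between successive discrepancies is proportional to the number of errors still in the program. The sketch below is a minimal illustration under that assumption; the inter-failure times and the grid-search estimation are hypothetical, not taken from the chapter.

```python
# Minimal sketch of the Jelinski-Moranda ("de-eutrophication") model: the hazard
# rate before the i-th discrepancy is z_i = phi * (N - i + 1), where N is the
# initial error content.  Inter-failure times below are hypothetical.
import math

times = [9.0, 12.0, 11.0, 4.0, 7.0, 2.0, 5.0, 8.0, 5.0, 7.0]  # hours between discrepancies
n = len(times)

def log_likelihood(N):
    """Profile log-likelihood of N, with phi set to its conditional MLE."""
    weighted = sum((N - i) * t for i, t in enumerate(times))   # sum of (N-i+1)*t_i
    phi = n / weighted
    return sum(math.log(phi * (N - i)) for i in range(n)) - phi * weighted, phi

# Grid search over integer N >= n for the maximum-likelihood estimate.
best_N, (best_ll, best_phi) = max(
    ((N, log_likelihood(N)) for N in range(n, 200)), key=lambda item: item[1][0]
)

residual = best_N - n                       # estimated errors still in the package
mttf_next = 1.0 / (best_phi * residual) if residual else float("inf")
print(f"N_hat={best_N}, phi_hat={best_phi:.4f}, residual errors={residual}, "
      f"expected time to next discrepancy={mttf_next:.1f} h")
```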


Journal ArticleDOI
TL;DR: This paper presents an approach to writing specifications for parts of software systems sufficiently precise and complete that other pieces of software can be written to interact with the piece specified without additional information.
Abstract: This paper presents an approach to writing specifications for parts of software systems. The main goal is to provide specifications sufficiently precise and complete that other pieces of software can be written to interact with the piece specified without additional information. The secondary goal is to include in the specification no more information than necessary to meet the first goal. The technique is illustrated by means of a variety of examples from a tutorial system.

747 citations
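
The abstract does not reproduce the paper's specification notation. As a loose sketch of the idea that a module's externally visible behaviour can be pinned down precisely enough for client code to be written without further information, here is a bounded stack whose specification is expressed as checkable pre- and post-conditions; the example and all names are hypothetical.

```python
# Loose sketch (not the paper's notation): a bounded-stack module whose contract
# is stated as pre-/post-conditions, exposing only what a client needs to know.
class BoundedStack:
    """Spec: push is defined only when depth() < limit; pop/top only when
    depth() > 0; after push(v), top() == v and depth() grows by exactly one."""

    def __init__(self, limit: int):
        self._items, self.limit = [], limit

    def depth(self) -> int:
        return len(self._items)

    def push(self, value) -> None:
        assert self.depth() < self.limit, "pre-condition: stack not full"
        before = self.depth()
        self._items.append(value)
        assert self.top() == value and self.depth() == before + 1  # post-condition

    def pop(self):
        assert self.depth() > 0, "pre-condition: stack not empty"
        return self._items.pop()

    def top(self):
        assert self.depth() > 0, "pre-condition: stack not empty"
        return self._items[-1]
```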


Proceedings ArticleDOI
05 Dec 1972
TL;DR: An overview of the goals, design, and status of this hardware/software complex, and some of the research problems raised and analytic problems solved in the course of its construction are described.
Abstract: In the Summer of 1971 a project was initiated at CMU to design the hardware and software for a multi-processor computer system using minicomputer processors (i.e., PDP-11's). This paper briefly describes an overview (only) of the goals, design, and status of this hardware/software complex, and indicates some of the research problems raised and analytic problems solved in the course of its construction.

297 citations



Book ChapterDOI
01 Jan 1972
TL;DR: A newly developed probabilistic model for predicting software reliability that permits estimation of software reliability before any code is written and allows later updating to improve the accuracy of the parameters when integration or operational tests begin.
Abstract: With the advent of large sophisticated hardware-software systems developed in the 1960s, the problem of computer system reliability has emerged. The reliability of computer hardware can be modeled in much the same way as other devices using conventional reliability theory; however, computer software errors require a different approach. This paper discusses a newly developed probabilistic model for predicting software reliability. The model constants are calculated from error data collected from similar previous programs. The calculations result in a decreasing probability of no software errors versus operating time (reliability function). The rate at which reliability decreases is a function of the man-months of debugging time. Similarly, the mean time between operational software errors (MTBF) is obtained. The MTBF increases slowly and then more rapidly as the debugging effort (man-months) increases. The model permits estimation of software reliability before any code is written and allows later updating to improve the accuracy of the parameters when integration or operational tests begin.

140 citations
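
The abstract gives only the qualitative shape of the model. The sketch below uses a commonly cited form of this class of models, in which the hazard rate is proportional to the errors remaining per instruction after a given debugging effort; the parameter values and debugging history are assumed, not taken from the paper.

```python
# Sketch of this class of software reliability model (parameter values assumed).
# With E_T total errors, I_T instructions, eps_c(tau) errors corrected per
# instruction after tau man-months of debugging, and C a constant:
#   hazard  z(tau) = C * (E_T/I_T - eps_c(tau)),  R(t) = exp(-z(tau) * t),
#   MTBF(tau) = 1 / z(tau)
import math

E_T, I_T, C = 300.0, 1.0e5, 150.0            # assumed error content, size, constant

def eps_corrected(tau_man_months: float) -> float:
    """Assumed debugging history: errors corrected per instruction so far."""
    return (E_T / I_T) * (1.0 - math.exp(-tau_man_months / 12.0))

def mtbf(tau: float) -> float:
    return 1.0 / (C * (E_T / I_T - eps_corrected(tau)))

def reliability(t_hours: float, tau: float) -> float:
    return math.exp(-t_hours / mtbf(tau))

for tau in (3, 12, 36):
    print(f"tau={tau:>2} man-months  MTBF={mtbf(tau):8.1f} h  R(10 h)={reliability(10, tau):.3f}")
```

With these assumed numbers the MTBF grows slowly at first and then more rapidly as debugging effort accumulates, matching the behaviour the abstract describes.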


Book
C. L. Smith, K. Carter
01 Jan 1972

102 citations


Journal ArticleDOI
Barbara Liskov
TL;DR: The Venus Operating System is an experimental multiprogramming system which supports five or six concurrent users on a small computer and is defined by a combination of microprograms and software.
Abstract: The Venus Operating System is an experimental multiprogramming system which supports five or six concurrent users on a small computer. The system was produced to test the effect of machine architecture on complexity of software. The system is defined by a combination of microprograms and software. The microprogram defines a machine with some unusual architectural features; the software exploits these features to define the operating system as simply as possible. In this paper the development of the system is described, with particular emphasis on the principles which guided the design.

85 citations


Proceedings ArticleDOI
05 Dec 1972
TL;DR: Many computer applications have stringent requirements for continued correct operation of the computer in the presence of internal faults; such computers are termed "fault tolerant", with example applications found in the aerospace industry, communication systems, and computer networks.
Abstract: Many computer applications have stringent requirements for continued correct operation of the computer in the presence of internal faults. The subject of design of such highly reliable computers has been extensively studied, and numerous techniques have been developed to achieve this high reliability. Such computers are termed "fault tolerant"; examples of applications are found in the aerospace industry, communication systems, and computer networks. Several designs of such systems have been proposed and some have been implemented. In general, these designs contain extensive hard-wired logic for such functions as fault masking, comparison, switching, and encoding-decoding.

50 citations
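
The abstract mentions hard-wired logic for fault masking and comparison. One classical masking scheme is triple modular redundancy with majority voting; the sketch below is a generic software illustration of that idea, not a design from the paper.

```python
# Generic illustration of fault masking by triple modular redundancy (TMR):
# three replicas compute the same function and a majority voter masks a single
# faulty result.  Purely illustrative; all names are hypothetical.
from collections import Counter

def tmr(replicas, x):
    """Run the replicas on the same input and return the majority output."""
    outputs = [f(x) for f in replicas]
    value, votes = Counter(outputs).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: more than one replica faulty")
    return value

square = lambda v: v * v
faulty_square = lambda v: v * v + 1              # models an internal fault
print(tmr([square, square, faulty_square], 7))   # fault masked -> 49
```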


Patent
05 Oct 1972
TL;DR: A multi-level storage system providing multiple levels of storage, comprising a high-speed, low-capacity storage device (buffer store) coupled serially to successive levels of lower-speed, higher-capacity storage devices, including means for varying key physical buffer store parameters such as mapping, replacement algorithm, and buffer store size.
Abstract: A multi-level storage system providing multiple levels of storage comprising a high-speed low capacity storage device (buffer store) coupled serially to successive levels of lower speed, higher capacity storage devices including means for varying key physical buffer store parameters such as mapping, replacement algorithm, and buffer store size. The buffer store is capable of being accessed in a plurality of modes, each mode being under the dynamic control of a program being executed. An additional mode under control of the software or a switch in the maintenance panel is also provided for bypassing the buffer memory.

50 citations
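
The parameters the patent varies (buffer size, mapping, replacement algorithm) are the same knobs a cache simulator exposes. The sketch below is a tiny illustrative simulator, not the patented hardware; the address trace and configurations are hypothetical.

```python
# Tiny buffer-store (cache) sketch: number of sets, associativity and the
# replacement policy are the configurable parameters.  Purely illustrative.
from collections import OrderedDict

class BufferStore:
    def __init__(self, n_sets: int, ways: int, policy: str = "LRU"):
        self.n_sets, self.ways, self.policy = n_sets, ways, policy
        self.sets = [OrderedDict() for _ in range(n_sets)]    # tag -> None

    def access(self, address: int) -> bool:
        """Return True on a hit, False on a miss (loading the block on a miss)."""
        s = self.sets[address % self.n_sets]
        tag = address // self.n_sets
        if tag in s:
            if self.policy == "LRU":
                s.move_to_end(tag)                # refresh recency on a hit
            return True
        if len(s) >= self.ways:
            s.popitem(last=False)                 # evict least-recent (LRU) / oldest (FIFO)
        s[tag] = None
        return False

trace = [0, 8, 0, 16, 8, 0, 24, 0]
for cfg in ((8, 1, "LRU"), (2, 4, "LRU"), (2, 4, "FIFO")):
    c = BufferStore(*cfg)
    hits = sum(c.access(a) for a in trace)
    print(cfg, "hit ratio:", hits / len(trace))
```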


Patent
25 Sep 1972
TL;DR: A small digital computer system is designed so that a hardware memory violation protect subsystem may be added to the computer system as a hardware option; the subsystem monitors each attempt to alter data within the memory subsystem.
Abstract: A small size digital computer system is designed so that a hardware memory violation protect subsystem may be added to the computer system as a hardware option. The memory protect subsystem includes hardware which may operate in parallel with the digital computer system memory subsystem and which monitors each attempt to alter data within the memory subsystem. Any attempt to alter data within a protected region may be defeated. Following such an attempt, program execution is interrupted and program control is transferred to the computer system executive software. The computer system is also designed so that it may either modify or prevent the execution of certain instructions at times when the memory protect subsystem is in operation so as to defeat all attempts on the part of any software entity to destroy the integrity of the operating system.

46 citations
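
The patent implements the monitoring in hardware; as a way to see the behaviour it describes, here is a hypothetical software model in which every store is checked against protected regions, a violating write is defeated, and control passes to an executive handler.

```python
# Software model (not the patented hardware) of memory-violation protection:
# every attempt to alter memory is checked against protected regions; a
# violating store is defeated and an "interrupt" transfers control upward.
class MemoryProtectError(Exception):
    pass

class ProtectedMemory:
    def __init__(self, size: int, protected_regions):
        self.cells = [0] * size
        self.regions = protected_regions           # list of (start, end) pairs

    def store(self, addr: int, value: int) -> None:
        if any(start <= addr < end for start, end in self.regions):
            raise MemoryProtectError(f"write to protected address {addr:#06x}")
        self.cells[addr] = value

mem = ProtectedMemory(0x1000, protected_regions=[(0x0000, 0x0100)])
mem.store(0x0200, 42)                              # allowed
try:
    mem.store(0x0010, 7)                           # defeated, executive notified
except MemoryProtectError as trap:
    print("executive handler invoked:", trap)
```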


Journal ArticleDOI
TL;DR: The development of computers has been influenced by three factors: the technology (i.e., the components from which the authors build); the hardware and software techniques they have learned to use; and the user (market).
Abstract: The development of computers has been influenced by three factors: the technology (i.e., the components from which we build); the hardware and software techniques we have learned to use; and the user (market). The improvements in technology seem to dominate in determining the possible resulting structures. Specifically, we can observe the evolution of four classes of computers:

Proceedings ArticleDOI
05 Dec 1972
TL;DR: Although the experiment was completely uncontrolled, the programmers generally inexperienced and poor, and the programming system used not designed for the task, the results are a drastic improvement over the state of the art.
Abstract: In two earlier reports we have suggested some techniques to be used in producing software with many programmers. The techniques were especially suitable for software which would exist in many versions due to modifications in methods or applications. These techniques have been taught in an undergraduate course and used in an experimental project in that course. The purpose of this report is to describe the results that have been obtained and to discuss some conclusions which we have reached. The experiment was completely uncontrolled, the programmers generally inexperienced and poor, and the programming system used was not designed for the task. The numerical data presented below have no real value. We include them primarily as an illustration of the type of result that can be obtained by use of the techniques described in the earlier reports. We consider these results a drastic improvement over the state of the art. Major changes in a system can be confined to well-defined, small subsystems. No intellectual effort is required in the final assembly or "integration" phase.

Journal ArticleDOI
TL;DR: The use of abstract machine modelling as a technique for producing portable software, i.e. software which can be moved readily from one computer to another, is discussed.
Abstract: This paper discusses the use of abstract machine modelling as a technique for producing portable software, i.e. software which can be moved readily from one computer to another. An overview of the principles involved is presented and a critical examination made of three existing abstract machines which were used respectively to implement a macro processor, a text editor and a BASIC compiler.
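
The three abstract machines examined in the paper are not reproduced here. As a toy illustration of the portability idea, the portable program below is a sequence of abstract-machine instructions, and only the interpreter (or a macro mapping of each opcode to real assembly) would need rewriting per target machine; the instruction set is invented for the example.

```python
# Toy abstract machine: the "portable" program is written against this small
# instruction set; only the interpreter (or a per-target macro mapping) changes
# when the software is moved to another computer.  Illustrative only.
def run(program, inputs):
    stack, pc, it = [], 0, iter(inputs)
    while pc < len(program):
        op, *arg = program[pc]
        if op == "PUSH":    stack.append(arg[0])
        elif op == "READ":  stack.append(next(it))
        elif op == "ADD":   stack.append(stack.pop() + stack.pop())
        elif op == "MUL":   stack.append(stack.pop() * stack.pop())
        elif op == "PRINT": print(stack.pop())
        pc += 1

# Portable "application": reads two numbers and prints 3 * (a + b).
program = [("READ",), ("READ",), ("ADD",), ("PUSH", 3), ("MUL",), ("PRINT",)]
run(program, inputs=[4, 5])        # -> 27
```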

01 Jan 1972
TL;DR: Results of fault-tolerance experiments performed using an experimental computer with dynamic (standby) redundancy, including replaceable subsystems and a 'program rollback' provision to eliminate transient-caused errors are summarized.
Abstract: This paper summarizes results of fault-tolerance experiments performed using an experimental computer with dynamic (standby) redundancy, including replaceable subsystems and a 'program rollback' provision to eliminate transient-caused errors. After a brief review of the specification of fault tolerance with respect to transient faults, including a description of the method of injecting transient faults in software and system tests, fault-tolerance experiments carried out with this computer with regard to the determination of fault classes, software verification, system verification, and recovery stability are summarized. A test and repair processor is described which constitutes a special monitor unit of the computer and is used to obtain information for fault detection in the other subsystems of the computer and to ensure that proper recovery occurs when a fault is detected.
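
The experimental computer itself is not modelled here; the following is a generic sketch of the two ingredients the report combines, injecting a transient fault and recovering by program rollback to the last checkpoint. The state, fault probability, and retry policy are all hypothetical.

```python
# Generic sketch of transient-fault injection with program rollback: state is
# checkpointed each step; when an injected transient corrupts it, execution
# rolls back to the checkpoint and the step is retried.  Illustrative only.
import copy, random

random.seed(2)
state = {"step": 0, "total": 0}

def inject_transient(s, p=0.2):
    if random.random() < p:                # transient fault flips the accumulator
        s["total"] ^= 0xFF
        return True
    return False

for step in range(1, 11):
    checkpoint = copy.deepcopy(state)      # rollback point
    state["step"] = step
    state["total"] += step
    if inject_transient(state):
        state = copy.deepcopy(checkpoint)  # rollback: discard corrupted state
        state["step"] = step
        state["total"] += step             # retry the computation
print(state)                               # -> {'step': 10, 'total': 55}
```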

Journal ArticleDOI
TL;DR: The relative merits of pitching this language at a high level or a low level are discussed, and some comparative results are presented.
Abstract: An increasing amount of software is being implemented in a portable form. A popular way of accomplishing this is to encode the software in a specially designed machine-independent language and then to map this language, often using a macro processor, into the assembly language of each desired object machine. The design of the machine-independent language is the key factor in this operation. This paper discusses the relative merits of pitching this language at a high level or a low level, and presents some comparative results.

Proceedings ArticleDOI
05 Dec 1972
TL;DR: In computer performance analysis, the analyst must formulate hypotheses about possible inefficiencies and/or bottlenecks in the system by gathering and analyzing computer performance data, and must suggest alternative system modifications that will improve performance.
Abstract: Computer performance analysis often evokes an image of a hardware monitor dictating a particular hardware modification that doubles the system's capacity. In fact, it usually involves measuring system performance, but is not necessarily limited to the use of hardware monitors, nor does it necessarily involve a hardware modification. It also includes the use of such measurement data sources as software monitors, computer accounting systems, sign-in logs, maintenance logs, and observations from computer operators, system programmers, and users. No specific improvement modification (hardware, etc.) is dictated by the measurements; the analyst must (1) formulate hypotheses about possible inefficiencies and/or bottlenecks in the system by gathering and analyzing computer performance data and (2) suggest alternative system modifications that will improve performance. Such modifications may deal with computer hardware, but they may also deal with computer software, operational procedures, job scheduling, job costing, and any system elements that directly or indirectly affect total system performance.
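
As a small illustration of turning measurement data into a hypothesis, the sketch below ranks hypothetical busy-time fractions per resource, the kind of figures a hardware or software monitor might report, and flags the most heavily used resource as a candidate bottleneck. The numbers and threshold are invented.

```python
# Illustrative only: forming a bottleneck hypothesis from measurement data.
# Busy-time fractions per resource (hypothetical monitor output) are ranked
# to suggest where the analyst should look first.
samples = {"CPU": 0.46, "channel A": 0.91, "channel B": 0.38, "drum": 0.72}

ranked = sorted(samples.items(), key=lambda kv: kv[1], reverse=True)
for resource, utilization in ranked:
    flag = "  <- candidate bottleneck" if utilization > 0.85 else ""
    print(f"{resource:<10} {utilization:5.0%}{flag}")
```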

Proceedings ArticleDOI
16 May 1972
TL;DR: The subject of performance monitoring and measurement has grown from infancy to childhood, and with this growth came substantial performance improvements even with superficial monitoring analysis; the recent increased interest in applying measurement techniques stems mainly from the high cost of development, purchase, and use of large systems.
Abstract: The subject of performance monitoring and measurement has grown from infancy to childhood, and with this growth came substantial performance improvements even with superficial monitoring analysis. The recent increased interest in applying measurement techniques by manufacturers and users of large systems stems mainly from the high cost of development, purchase, and use of such systems. This cost obligates each to obtain quantitative information on the dynamic behavior of proposed or purchased equipment and software. This quantitative information is necessary when a determination is to be made of the difference between potential and actual performance of hardware and software.

Journal ArticleDOI
TL;DR: This presentation describes the approaches employed in characterizing and instrumenting the usage and servicing capacity of two operating systems designed for use on Xerox Sigma computer systems, the Batch Time-sharing Monitor and the Universal Time-sharing System.
Abstract: Frequently designers of computer systems have few (if any) convenient means of investigating system performance during actual operation. This kind of analysis must, of course, take into account user demands and characteristics of the host system's hardware/software complement. Moreover such studies are complicated because they typically involve a large number of variables which, because of their random nature, do not exhibit unique values. The problem is not only to identify criteria which characterize the manner in which the system is used but also to select parameters which are readily measurable. This presentation describes the approaches employed in characterizing and instrumenting the usage and servicing capacity of two operating systems designed for use on Xerox Sigma computer systems. These operating systems are the Batch Time-sharing Monitor (BTM) and the Universal Time-sharing System (UTS). The performance monitors are designed with emphasis on sampling, sorting and ordering of statistical data. Examples of performance monitoring data are presented which were obtained from actual measurements.


Proceedings ArticleDOI
05 Dec 1972
TL;DR: This paper attempts to develop a simple framework within which different methods for fault-tolerant computer design can be compared, using a set of very elementary indices to construct the framework.
Abstract: The theory of fault-tolerant computer design has developed rapidly. Several techniques using hardware or software have been suggested. A student is often faced with the problem of developing a common perspective for a variety of methods. In this paper we attempt to develop a simple framework within which different methods can be compared. We use a set of very elementary indices to construct the framework. The indices are quite crude and our framework is somewhat ad hoc. Though a unified theory would be extremely useful we have not attempted to develop one here. Our discussion is a first pass at identifying some goals of reliable design and an attempt at quantifying some parameters. We discuss only a very small set of the techniques that have been proposed for fault-tolerant computers. Methods for constructing relevant indices for these techniques are presented. We feel that these indices are relevant for most reliability techniques.
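
The paper's own indices are not reproduced in the abstract. As a generic illustration of comparing fault-tolerance techniques with elementary indices, the sketch below evaluates three textbook configurations by mission reliability and hardware redundancy ratio, using standard formulas and assumed failure-rate and mission-time values.

```python
# Generic illustration (not the paper's indices): comparing fault-tolerance
# techniques by mission reliability and redundancy ratio, using textbook
# formulas with an assumed unit failure rate and mission time.
import math

lam, t = 1e-4, 1000.0                 # failures/hour, mission hours (assumed)
R = math.exp(-lam * t)                # simplex unit reliability

configs = {
    "simplex":                (R,                        1.0),
    "TMR, perfect voter":     (3 * R**2 - 2 * R**3,      3.0),
    "standby spare, c=0.95":  (R * (1 + 0.95 * lam * t), 2.0),
}
for name, (rel, redundancy) in configs.items():
    print(f"{name:<24} R(t)={rel:.6f}  redundancy ratio={redundancy:.1f}")
```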

Book ChapterDOI
01 Jan 1972
TL;DR: The objective of the calibration - a “good fit” between the behavior of real and simulated jobs - was obtained by the development of a semi-automated methodology which makes use of various statistical methods.
Abstract: SOUL, a trace-driven simulator of a large batch computing system, was developed to permit controlled experiments on different hardware and software configurations for the purpose of improving system performance. SOUL's input is real data representing one day's computer workload. Its output is data describing simulated job behavior as well as a series of comparisons between the real and the simulated worlds. This paper describes the calibration methodology used for SOUL. The objective of the calibration - a “good fit” between the behavior of real and simulated jobs - was obtained by the development of a semi-automated methodology which makes use of various statistical methods.
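
The paper's specific statistical methods are not given in the abstract; the sketch below illustrates the calibration idea of quantifying the "fit" between real and simulated job behaviour, here with the relative error of the mean CPU time and a two-sample Kolmogorov-Smirnov statistic. The data and acceptance thresholds are hypothetical.

```python
# Sketch of calibration-by-comparison: quantify how well simulated job
# statistics match the real workload.  Data and thresholds are hypothetical.
real_cpu = [12.0, 30.5, 7.2, 45.0, 18.3, 22.1, 9.8, 27.4]     # seconds per job
sim_cpu  = [11.1, 33.0, 8.0, 41.2, 20.0, 21.5, 10.5, 25.9]

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic (max gap between empirical CDFs)."""
    points = sorted(set(a) | set(b))
    cdf = lambda xs, x: sum(v <= x for v in xs) / len(xs)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in points)

real_mean = sum(real_cpu) / len(real_cpu)
mean_err = abs(sum(sim_cpu) / len(sim_cpu) - real_mean) / real_mean
d = ks_statistic(real_cpu, sim_cpu)
print(f"relative error of mean CPU time: {mean_err:.1%}, KS statistic: {d:.2f}")
print("good fit" if mean_err < 0.05 and d < 0.3 else "recalibrate")
```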

Journal ArticleDOI
TL;DR: The specific topic of this paper is the network's unique hardware configuration, its general design, and the rationale behind its particular specifications.
Abstract: Three Michigan universities are currently working on a project to evolve a network that will link their educational and research computing centers. This project requires the concurrent development of several interrelated technical activities, the most important of which are the network's specialized hardware, software for the network's communications computers, and support software for each of the host computers. The specific topic of this paper is the network's unique hardware configuration, its general design, and the rationale behind its particular specifications. Also included are descriptions of special interfaces developed for the system's synchronous data sets, and the IBM system 360 and Control Data 6000 series computers.

Proceedings ArticleDOI
01 Aug 1972
TL;DR: The design and initial flight tests of the first digital fly-by-wire system to be flown in an aircraft showed highly successful system operation, although quantization of pilot's stick and trim were areas of minor concern from the piloting standpoint.
Abstract: This paper discusses the design and initial flight tests of the first digital fly-by-wire system to be flown in an aircraft. The system, which used components from the Apollo guidance system, was installed in an F-8 aircraft. A lunar module guidance computer is the central element in the three-axis, single-channel, multimode, digital, primary control system. An electrohydraulic triplex system providing unaugmented control of the F-8 aircraft is the only backup to the digital system. Emphasis is placed on the digital system in its role as a control augmentor, a logic processor, and a failure detector. A sampled-data design synthesis example is included to demonstrate the role of various analytical and simulation methods. The use of a digital system to implement conventional control laws was shown to be practical for flight. Logic functions coded as an integral part of the control laws were found to be advantageous. Verification of software required an extensive effort, but confidence in the software was achieved. Initial flight results showed highly successful system operation, although quantization of pilot's stick and trim were areas of minor concern from the piloting standpoint.
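
The F-8 control laws themselves are not reproduced here. The toy sampled-data loop below only illustrates the element the flight results flagged: the pilot's stick input is quantized by the A/D conversion before a discrete control law runs each sample period. Gains, sample rate, and quantum are assumed values.

```python
# Toy sampled-data control loop (not the F-8 control laws): the pilot stick is
# quantized by the A/D conversion, then a discrete proportional-plus-integral
# law runs once per sample period.  Gains, rates, and quantum are assumed.
def quantize(value, quantum=0.05):
    """Model the stick A/D: round the input to the nearest quantum."""
    return round(value / quantum) * quantum

def control_step(stick, rate_meas, integ, dt=0.02, kp=2.0, ki=0.5):
    cmd = quantize(stick)                  # quantized pilot command
    error = cmd - rate_meas
    integ += ki * error * dt               # discrete integrator
    surface = kp * error + integ           # control-surface command
    return surface, integ

integ, rate = 0.0, 0.0
for k in range(5):
    stick = 0.013 * k                      # slowly moving stick input
    surface, integ = control_step(stick, rate, integ)
    print(f"t={k*0.02:.2f}s stick={stick:.3f} quantized={quantize(stick):.2f} surface={surface:+.3f}")
```

The staircase behaviour of the quantized command relative to the smoothly moving stick is exactly the kind of effect reported as a minor piloting concern.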


ReportDOI
01 Nov 1972
TL;DR: Initial attempts to develop a methodology for Naval Tactical Data System software reliability analysis are reported on and the results of several statistical analyses are presented.
Abstract: The increase in importance of software in command and control and other complex systems requires increased attention to the problems of software reliability and quality control. The paper reports on initial attempts to develop a methodology for Naval Tactical Data System software reliability analysis and presents the results of several statistical analyses.

Journal ArticleDOI
TL;DR: An interactive computer system for the acquisition and reduction of data for activation analysis studies has been developed that is automatic, but achieves considerable flexibility by allowing the researcher, who need not be familiar with computers, to direct the progress of the analysis.
Abstract: An interactive computer system for the acquisition and reduction of data for activation analysis studies has been developed. It is automatic, but achieves considerable flexibility by allowing the researcher, who need not be familiar with computers, to direct the progress of the analysis. Hardware and software form an integrated system which allows the construction and maintenance of a library of standards, acquires and manipulates data under user control, and performs the calculations for quantitative analysis of trace element concentrations.
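
The quantitative step the abstract refers to is commonly done by the comparator method, in which the unknown's trace-element content follows from the ratio of its decay-corrected count rate to that of a library standard; the paper's own algorithms are not reproduced, and all values below are hypothetical (decay during the counting interval is neglected in this sketch).

```python
# Sketch of the standard comparator method for activation analysis (assumed
# here, not taken from the paper).  All numerical values are hypothetical.
import math

def decay_corrected(counts, count_time, delay, half_life):
    """Average count rate corrected back to the end of irradiation."""
    lam = math.log(2) / half_life
    return (counts / count_time) * math.exp(lam * delay)

half_life = 2.58 * 3600                       # e.g. a 2.58 h activation product, seconds
std  = decay_corrected(counts=52_000, count_time=600, delay=1800, half_life=half_life)
samp = decay_corrected(counts=31_000, count_time=600, delay=3600, half_life=half_life)

mass_element_std = 10.0e-6                    # grams of the element in the standard
mass_sample      = 0.50                       # grams of sample irradiated
mass_element = mass_element_std * samp / std
print(f"trace concentration ~ {mass_element / mass_sample * 1e6:.2f} micrograms/gram")
```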

Journal ArticleDOI
TL;DR: A large number of techniques, including hardware and software monitoring, simulation, analytical modeling, and decision-theoretic approaches, have been developed and employed to provide well-utilized, high-performance systems.
Abstract: The high cost of modern remote communications systems dictates optimum utilization of such facilities. Design and performance evaluation of these systems must not be left to intuitive or heuristic reasoning; instead, formal procedures, both analytic and empirical, must be developed to provide well-utilized, high-performance systems. To this end a large number of techniques [1,2] have been developed and employed in the recent past. Among these are hardware and software monitoring, simulation, analytical modeling, and decision-theoretic approaches.
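
One elementary example of the analytical modeling mentioned is an M/M/1 queue for a single communication line, relating line utilization to mean message response time; the sketch below is a generic illustration with assumed service and arrival rates, not a model from the paper.

```python
# Elementary analytic model for a communication facility (illustrative, not
# from the paper): an M/M/1 queue shows how mean response time degrades as
# line utilization approaches 100%.
def mm1_response_time(arrival_rate, service_rate):
    rho = arrival_rate / service_rate
    assert rho < 1, "line saturated"
    return 1.0 / (service_rate - arrival_rate), rho   # mean time in system, utilization

service_rate = 10.0                        # messages per second the line can carry
for arrival_rate in (2.0, 5.0, 8.0, 9.5):
    t, rho = mm1_response_time(arrival_rate, service_rate)
    print(f"utilization {rho:4.0%} -> mean response {t * 1000:6.1f} ms")
```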

Proceedings ArticleDOI
26 Jun 1972
TL;DR: Estimates of large-scale software development schedules and associated costs have had a history of being wrong more than right; some empirical "rules of thumb" for estimating development times, computer support, and costs are presented.
Abstract: Estimates of large scale software development schedules and associated costs have had a history of being wrong more than right. The landscape is littered with examples of such projects, including some design automation systems, which were over a year late in becoming operational and costing millions of dollars more than estimated. There are a number of reasons for this situation including poorly stated system requirements, shortage of personnel capable of managing large scale software developments, and a tendency to try and solve system problems late in the development cycle by taking advantage of the inherent flexibility of the general purpose computer. To provide a background for a discussion of estimating techniques for software development schedules and costs, a brief description will be given of the development process. Next, some empirical "rules of thumb" for estimating development times, computer support and costs will be presented. Finally some interesting schedule, cost, and computer time estimating equations will be described. These are based upon a regression analysis of a large number of software development projects in terms of observed parameters.
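
The paper's own estimating equations and coefficients are not given in the abstract. As an illustration of the general shape such regression-based equations take, the sketch below fits the usual log-log form, effort = a * size^b, to hypothetical project data.

```python
# Sketch of a regression-based estimating equation of the usual log-log form
# effort = a * size**b, fitted to hypothetical project data (the paper's own
# variables and coefficients are not reproduced here).
import math

# (delivered instructions, man-months) for hypothetical past projects
projects = [(5_000, 14), (12_000, 40), (30_000, 120), (80_000, 400), (150_000, 900)]

xs = [math.log(s) for s, _ in projects]
ys = [math.log(e) for _, e in projects]
n = len(projects)
b = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
    (n * sum(x * x for x in xs) - sum(xs) ** 2)          # least-squares slope
a = math.exp((sum(ys) - b * sum(xs)) / n)                # intercept, back-transformed

estimate = a * 50_000 ** b
print(f"effort ~ {a:.4f} * size^{b:.2f};  50K-instruction system ~ {estimate:.0f} man-months")
```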


Journal ArticleDOI
TL;DR: This paper summarizes some of the practical considerations and implementation techniques used in the development of the onboard software for an experimental navigation system, including a Kalman filter implementation in square-root form with gyro and accelerometer noises modeled as random forcing functions in the filter.
Abstract: This paper summarizes some of the practical considerations and implementation techniques used in the development of the onboard software for an experimental navigation system. This software includes (1) a Kalman filter implementation in square-root form with gyro and accelerometer noises modeled as random forcing functions in the filter; (2) operational modes for inflight alignment, ground alignment, normal aided inertial navigation operation, and postflight analysis; (3) a sophisticated time-sharing system for obtaining a very flexible input-output capability during real-time operations; and (4) a problem formulation and scaling for complete operation in single-precision arithmetic using a 24-bit word. The paper emphasizes the procedures used in formulation, scaling, and time-sharing for the real-time Kalman filter application. Some flight results are presented to illustrate the performance of the filter during real-time operation.
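
The navigation filter's own equations are not reproduced in the abstract. The sketch below shows one common square-root measurement update (Potter's form, scalar measurement), in which the covariance is carried as S with P = S Sᵀ, the better-conditioned representation that motivates square-root implementations in short-word-length, single-precision arithmetic; the state, measurement, and noise values are hypothetical.

```python
# One common square-root measurement update (Potter's form, scalar measurement),
# shown as a generic illustration -- not the paper's own formulation.  The
# covariance is carried as S with P = S @ S.T.
import numpy as np

def potter_update(x, S, z, H, R):
    """Update state x and covariance square root S with scalar measurement z = H x + v."""
    phi = S.T @ H.reshape(-1, 1)             # n x 1
    a = 1.0 / float(phi.T @ phi + R)
    gamma = 1.0 / (1.0 + np.sqrt(a * R))
    K = a * (S @ phi)                        # Kalman gain, n x 1
    x_new = x + (K * (z - float(H @ x))).ravel()
    S_new = S - gamma * (K @ phi.T)          # so that S_new @ S_new.T = (I - K H) P
    return x_new, S_new

x = np.array([0.0, 1.0])                     # hypothetical 2-state estimate
S = np.diag([2.0, 0.5])                      # square root of the initial covariance
z, H, R = 0.4, np.array([1.0, 0.0]), 0.01    # scalar position-like measurement
x, S = potter_update(x, S, z, H, R)
print("updated state:", x)
print("updated covariance:\n", S @ S.T)
```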