
Showing papers on "Software" published in 1985


Journal ArticleDOI
TL;DR: Principal requirements for the implementation of N-version software are summarized and the DEDIX distributed supervisor and testbed for the execution of N-version software is described.
Abstract: Evolution of the N-version software approach to the tolerance of design faults is reviewed. Principal requirements for the implementation of N-version software are summarized and the DEDIX distributed supervisor and testbed for the execution of N-version software is described. Goals of current research are presented and some potential benefits of the N-version approach are identified.
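
As a rough illustration of the N-version idea (a hypothetical sketch, not the DEDIX supervisor itself), independently developed versions of a function can be executed on the same input, with a voter selecting the majority result so that a design fault in one version is masked:

```python
from collections import Counter

def n_version_execute(versions, x):
    """Run N independently developed versions on the same input and
    return the majority result; raise if no majority exists."""
    results = [v(x) for v in versions]
    value, count = Counter(results).most_common(1)[0]
    if count > len(versions) // 2:
        return value
    raise RuntimeError("no majority agreement among versions")

# Three hypothetical versions of the same routine, one with a seeded fault.
v1 = lambda x: round(x ** 0.5, 6)
v2 = lambda x: round(x ** 0.5, 6)
v3 = lambda x: round(x ** 0.5, 6) + 1  # design fault

print(n_version_execute([v1, v2, v3], 2.0))  # 1.414214: the fault is masked
```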

1,093 citations



01 Jan 1985
TL;DR: It is pointed out that faults in production software are often soft (transient) and that a transaction mechanism combined with persistent process-pairs provides fault-tolerant execution -- the key to software fault-tolerance.
Abstract: An analysis of the failure statistics of a commercially available fault-tolerant system shows that administration and software are the major contributors to failure. Various approaches to software fault-tolerance are then discussed -- notably process-pairs, transactions and reliable storage. It is pointed out that faults in production software are often soft (transient) and that a transaction mechanism combined with persistent process-pairs provides fault-tolerant execution -- the key to software fault-tolerance.
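
The observation about soft faults motivates a simple retry pattern, sketched below in Python under assumed names (the paper's actual mechanisms are Tandem's process-pairs and transactions): abort on failure and re-execute, since a transient fault is unlikely to recur.

```python
def run_transaction(work, retries=3):
    """Retry a transactional unit of work: a soft (transient) fault
    that aborts one execution is unlikely to recur on the retry."""
    for attempt in range(retries):
        try:
            return work()          # commit path
        except Exception:
            continue               # abort: state rolled back, then retry
    raise RuntimeError("persistent (hard) failure")

class FlakyWork:
    """Fails once with a simulated transient fault, then succeeds."""
    def __init__(self):
        self.calls = 0
    def __call__(self):
        self.calls += 1
        if self.calls == 1:
            raise IOError("transient fault")
        return "committed"

print(run_transaction(FlakyWork()))  # 'committed' on the second attempt
```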

808 citations


Journal ArticleDOI
TL;DR: Unless computer-mediated communication systems are structured, users will be overloaded with information, but structure should be imposed by individuals and user groups according to their needs and abilities, rather than through general software features.
Abstract: Unless computer-mediated communication systems are structured, users will be overloaded with information. But structure should be imposed by individuals and user groups according to their needs and abilities, rather than through general software features.

704 citations


Journal ArticleDOI
TL;DR: Using this model, the properties required by languages and their execution environments to support dynamic configuration are determined and CONIC, the distributed system which has been developed at Imperial College, is described to illustrate the feasibility of the model.
Abstract: Dynamic system configuration is the ability to modify and extend a system while it is running. The facility is a requirement in large distributed systems where it may not be possible or economic to stop the entire system to allow modification to part of its hardware or software. It is also useful during production of the system to aid incremental integration of component parts, and during operation to aid system evolution. The paper introduces a model of the configuration process which permits dynamic incremental modification and extension. Using this model we determine the properties required by languages and their execution environments to support dynamic configuration. CONIC, the distributed system which has been developed at Imperial College with the specific objective of supporting dynamic configuration, is described to illustrate the feasibility of the model.
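
A toy sketch of the configuration model in Python (hypothetical names; CONIC's actual configuration language and runtime are far richer): components are created, linked, and removed while the rest of the system keeps running.

```python
class Configuration:
    """Minimal dynamic-configuration manager: components may be added,
    linked, and removed without stopping the rest of the system."""
    def __init__(self):
        self.components = {}   # name -> callable component
        self.links = {}        # producer name -> consumer name

    def create(self, name, component):
        self.components[name] = component

    def link(self, producer, consumer):
        self.links[producer] = consumer

    def remove(self, name):
        self.components.pop(name, None)
        self.links = {p: c for p, c in self.links.items()
                      if name not in (p, c)}

    def send(self, producer, message):
        consumer = self.links.get(producer)
        if consumer in self.components:
            self.components[consumer](message)

cfg = Configuration()
cfg.create("sensor", lambda m: None)
cfg.create("logger", print)
cfg.link("sensor", "logger")
cfg.send("sensor", "reading=42")   # delivered to logger
cfg.remove("logger")               # reconfigure while running
cfg.send("sensor", "reading=43")   # dropped: the consumer is gone
```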

360 citations


Journal ArticleDOI
TL;DR: A condition under which a multiversion system is a better strategy than relying on a single version is given and some differences between the coincident errors model developed here and the model that assumes independent failures of component versions are studied.
Abstract: Fundamental to the development of redundant software techniques (known as fault-tolerant software) is an understanding of the impact of multiple joint occurrences of errors, referred to here as coincident errors. A theoretical basis for the study of redundant software is developed which 1) provides a probabilistic framework for empirically evaluating the effectiveness of a general multiversion strategy when component versions are subject to coincident errors, and 2) permits an analytical study of the effects of these errors. An intensity function, called the intensity of coincident errors, has a central role in this analysis. This function describes the propensity of programmers to introduce design faults in such a way that software components fail together when executing in the application environment. We give a condition under which a multiversion system is a better strategy than relying on a single version and we study some differences between the coincident errors model developed here and the model that assumes independent failures of component versions.
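
In the style of this model (sketched in my own notation, not necessarily the paper's exact formulation), let $\theta(x)$ be the intensity of coincident errors: the probability that a randomly chosen version fails on input $x$. Conditional on $x$, versions fail independently, so for an input drawn from the usage distribution $Q$ the number $K$ of failed versions among $N$ satisfies

```latex
P(K = k) \;=\; \int \binom{N}{k}\,\theta(x)^{k}\,\bigl(1 - \theta(x)\bigr)^{N-k}\,dQ(x).
```

When $\theta$ is constant over inputs this reduces to the independent-failures model; variability of $\theta$ across inputs is exactly what makes joint failures more likely than independence would predict.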

339 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show how the software design technique known as information hiding, or abstraction, can be supplemented by a hierarchically structured document, which they call a module guide, intended to allow both designers and maintainers to identify easily the parts of the software that they must understand.
Abstract: This paper discusses the organization of software that is inherently complex because of very many arbitrary details that must be precisely right for the software to be correct. We show how the software design technique known as information hiding, or abstraction, can be supplemented by a hierarchically structured document, which we call a module guide. The guide is intended to allow both designers and maintainers to identify easily the parts of the software that they must understand, without reading irrelevant details about other parts of the software. The paper includes an extract from a software module guide to illustrate our proposals.

315 citations


Book ChapterDOI
01 Jun 1985
TL;DR: It is shown that a sequence of documents that should be produced on the way to producing the software can serve several purposes, and how these documents can be constructed using the same principles that should guide the software design is discussed.
Abstract: Software Engineers have been searching for the ideal software development process: a process in which programs are derived from specifications in the same way that lemmas and theorems are derived from axioms in published proofs. After explaining why we can never achieve it, this paper describes such a process. The process is described in terms of a sequence of documents that should be produced on the way to producing the software. We show that such documents can serve several purposes. They provide a basis for preliminary design review, serve as reference material during the coding, and guide the maintenance programmer in his work. We discuss how these documents can be constructed using the same principles that should guide the software design. The resulting documentation is worth much more than the "afterthought" documentation that is usually produced. If we take the care to keep all of the documents up-to-date, we can create the appearance of a fully rational design process.

227 citations


Patent
23 Jul 1985
TL;DR: In this article, a tamper-proof co-processor forming part of the computing machine is used to restrict software distributed on magnetic media to use on a single machine, with execution tied to the original medium.
Abstract: Method and apparatus which restricts software, distributed on magnetic media, to use on a single computing machine. The original medium is functionally uncopyable until it is modified by the execution of a program stored in a tamper-proof co-processor which forms a part of the computing machine. The modified software on the original medium may then be copied, but the copy is operable only on the computing machine containing the co-processor that performed the modification.

204 citations


Journal ArticleDOI
TL;DR: Through a detailed analysis of three software products and their error discovery histories, simple metrics related to the amount of data and the structural complexity of programs are found to be of value for identifying error-prone software.
Abstract: A major portion of the effort expended in developing commercial software today is associated with program testing. Schedule and/or resource constraints frequently require that testing be conducted so as to uncover the greatest number of errors possible in the time allowed. In this paper we describe a study undertaken to assess the potential usefulness of various product- and process-related measures in identifying error-prone software. Our goal was to establish an empirical basis for the efficient utilization of limited testing resources using objective, measurable criteria. Through a detailed analysis of three software products and their error discovery histories, we have found simple metrics related to the amount of data and the structural complexity of programs to be of value for this purpose.
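
In the spirit of the simple metrics the study found useful (a hypothetical sketch, not the paper's exact measures), one might count lines, distinct names referenced, and decision points per module, and give testing priority to the modules scoring highest:

```python
import ast

def complexity_metrics(source: str) -> dict:
    """Crude size, data, and structure metrics for a Python module:
    line count, distinct names referenced, and decision points."""
    tree = ast.parse(source)
    names = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    branches = sum(isinstance(n, (ast.If, ast.For, ast.While, ast.Try))
                   for n in ast.walk(tree))
    return {"lines": len(source.splitlines()),
            "names": len(names),
            "branches": branches}

print(complexity_metrics("x = 1\nif x:\n    y = x + 1\n"))
# {'lines': 3, 'names': 2, 'branches': 1}
```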

199 citations


Patent
20 Aug 1985
TL;DR: In this article, a software module is encrypted using the data encryption standard (DES) algorithm, and the key is encrypted with the public key of a public/private key algorithm.
Abstract: In order to prevent the unauthorized copying of software, a software module is encrypted using the data encryption standard (DES) algorithm, and the key is encrypted using the public key of a public/private key algorithm. To use the module it is entered into a software protection device where the private key held in a RAM 11 is used by a processor 13 to decode the DES key using instructions held in a ROM 12. Further instructions held by this ROM are used by the processor 13 to decode the module. Once the process of decoding keys and software has started, the processor 13 runs through a sequence of predetermined instructions and cannot be interrupted (except by switching off). When the sequence is complete, processor 13, or for example a host computer 30, is enabled to use the decoded software, but a switch/reset circuit 17 operates, preventing access to the RAM 11 and the ROM 12 and so preserving the secrecy of the private key and of any decoded DES key now stored in the RAM 11.
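
The hybrid scheme described (a symmetric cipher for the bulk module, a public-key cipher for the symmetric key) is still the standard pattern. A minimal sketch using the Python `cryptography` package, with AES-GCM standing in for the long-obsolete DES:

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Vendor side: encrypt the module with a fresh symmetric key,
# then encrypt (wrap) that key with the device's public key.
device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
module = b"machine code of the protected software module"
sym_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(sym_key).encrypt(nonce, module, None)
wrapped_key = device_key.public_key().encrypt(
    sym_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))

# Device side: the private key (held inside the protection device)
# unwraps the symmetric key, which then decrypts the module.
sym_key2 = device_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))
assert AESGCM(sym_key2).decrypt(nonce, ciphertext, None) == module
```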

Patent
21 Feb 1985
TL;DR: In this article, a semiconductor device (12) is used to control access to a software program resident in a computer (68), which includes a continuously running pulse generator (60) that produces an output representative of real time, a shift register permanently storing a unique number and circuitry (64) for executing an algorithm that combines real-time and the permanently stored unique number to produce a password.
Abstract: A semiconductor device (12) that functions as a key (12) to control access to a software program resident in a computer (68). The device (12) includes a continuously running pulse generator (60) that produces an output representative of real time, a shift register permanently storing a unique number, and circuitry (64) for executing an algorithm that combines real time and the permanently stored unique number to produce a password (18). The password (18) is input to the computer (68). The computer (68) is coded to execute an equivalent algorithm to produce a password (88) within the computer (68). The two passwords are compared and access to the computer program is afforded only if they bear a prescribed relationship. The computer (68) can also be coded to produce on its video display (74, 76, 78) a time-space pattern representing a stimulus number; the key (12) then includes circuitry (86) for deriving the stimulus number therefrom and circuitry (88, 66) for processing it so that the password (18) displayed by the key (12) is a function of the value of the stimulus number. The computer (68) executes a similar procedure on the stimulus number, so that access to the software program is afforded only if correspondence exists between the user-input password (18) and the password (88) generated in the computer (68).
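
The key's scheme is essentially a time-based one-time password. A hypothetical sketch (my parameter choices, not the patent's circuitry): both sides combine the current time interval with a shared secret number, and the computer tolerates one interval of clock drift when comparing:

```python
import hmac, hashlib, time

def password(secret: bytes, t: float, interval: int = 30) -> str:
    """Combine real time (quantized to intervals) with a permanently
    stored secret number to produce a 6-digit password."""
    counter = int(t // interval).to_bytes(8, "big")
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 10**6:06d}"

def grant_access(secret: bytes, user_password: str) -> bool:
    """The computer runs the equivalent algorithm and compares,
    allowing one 30-second interval of clock drift either way."""
    now = time.time()
    return any(hmac.compare_digest(password(secret, now + d * 30), user_password)
               for d in (-1, 0, 1))

secret = b"unique-number-in-shift-register"
print(grant_access(secret, password(secret, time.time())))  # True
```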

Journal ArticleDOI
TL;DR: This paper extends an optimal software release problem to both cost and reliability requirements, and the underlying model is software reliability growth described by a nonhomogeneous Poisson process.
Abstract: This paper extends an optimal software release problem to both cost and reliability requirements. The optimum software release time is determined both by minimizing a total average software cost and satisfying a software reliability requirement. The underlying model is software reliability growth described by a nonhomogeneous Poisson process.
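
One standard formulation of this kind of problem (sketched here with generic coefficients; the paper's exact cost structure may differ): with NHPP mean value function $m(t) = a(1 - e^{-bt})$, choose the release time $T$ to

```latex
\min_{T}\; C(T) \;=\; c_{1}\,m(T) \;+\; c_{2}\,\bigl(a - m(T)\bigr) \;+\; c_{3}\,T
\qquad \text{subject to} \qquad
R(x \mid T) \;=\; e^{-\,[m(T + x) - m(T)]} \;\ge\; R_{0},
```

where $c_1 < c_2$ are the costs of fixing an error before and after release, $c_3$ is the testing cost per unit time, and $R(x \mid T)$ is the probability of no failure in the interval $(T, T+x]$.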

Patent
01 Jul 1985
TL;DR: In this article, a protection subroutine with a unique reference code is emplaced in a protected software package and a validation program is included in the package, which connects with an ESD and communicates with a secure computer.
Abstract: A protection subroutine with a unique reference code is emplaced in a protected software package. The package also contains a validation program. The protection subroutine and validation program connect with an ESD, and both the ESD and the program communicate with a secure computer. Upon receipt of the software serial number, the reference code, and the ESD identifier, the computer generates a validation code which causes the protection subroutine to command execution of the protected software by its host computer.

Journal ArticleDOI
TL;DR: The implementation of a flexible data storage system for the UNIX environment that has been designed as an experimental vehicle for building database management systems is described.
Abstract: We describe the implementation of a flexible data storage system for the UNIX environment that has been designed as an experimental vehicle for building database management systems. The storage component forms a foundation upon which a variety of database systems can be constructed including support for unconventional types of data. We describe the system architecture, the design decisions incorporated within its implementation, our experiences in developing this large piece of software, and the applications that have been built on top of it.

Journal ArticleDOI
TL;DR: Design stability measures that indicate the potential ripple-effect characteristics of program modifications at the design level are presented, enabling early maintainability feedback to the software developers.
Abstract: The high cost of software during its life cycle can be attributed largely to software maintenance activities, and a major portion of these activities deals with modifications of the software. In this paper, design stability measures which indicate the potential ripple-effect characteristics of program modifications at the design level are presented. These measures can be generated at any point in the design phase of the software life cycle, enabling early maintainability feedback to the software developers. The validation of these measures and future research efforts involving the development of a user-oriented maintainability measure, which incorporates the design stability measures as well as other design measures, are discussed.
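
As a rough illustration of a ripple-effect style measure (hypothetical, not the paper's definition), one can count, for each module in the design-level dependency graph, how many other modules are transitively affected when it changes; a stable design keeps these counts small:

```python
def ripple_set(deps, module):
    """Modules transitively affected if `module` changes, given
    deps: module -> set of modules that directly depend on it."""
    affected, frontier = set(), [module]
    while frontier:
        m = frontier.pop()
        for dependent in deps.get(m, ()):
            if dependent not in affected:
                affected.add(dependent)
                frontier.append(dependent)
    return affected

deps = {"io": {"parser", "logger"}, "parser": {"ui"},
        "logger": set(), "ui": set()}
print(ripple_set(deps, "io"))   # {'parser', 'logger', 'ui'}: impact of changing io
```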

Patent
18 Nov 1985
TL;DR: In this article, a system for programming a computer provides a set of software-based virtual machines each for instructing a computer to carry out a selected operation, each virtual machine is represented by a virtual front panel displayed on a screen.
Abstract: A system for programming a computer provides a set of software-based virtual machines each for instructing a computer to carry out a selected operation. Each virtual machine is represented by a virtual front panel displayed on a screen and each virtual front panel graphically displays operator controllable values of input and output parameters utilized by the virtual machine it represents. The system is adapted to synthesize a new virtual machine for instructing the computer to perform a sequence of operations wherein each operation is carried out by the computer according to the instructions of an operator selected one of the existing virtual machines. The system also creates a new virtual front panel for displaying input and output parameters associated with the new virtual machine. The system permits the operator to program the computer by directing synthesis of a hierarchy of virtual machines.
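
A toy rendering of the synthesis idea (hypothetical names, and omitting the front panels entirely): each 'virtual machine' maps named input parameters to named outputs, and a new virtual machine is synthesized as a sequence of existing ones:

```python
def make_vm(fn):
    """A 'virtual machine': maps a dict of input parameters to outputs."""
    return fn

def synthesize(*vms):
    """Synthesize a new virtual machine as a pipeline of existing ones."""
    def composite(params):
        for vm in vms:
            params = vm(params)
        return params
    return make_vm(composite)

scale = make_vm(lambda p: {**p, "signal": [s * p["gain"] for s in p["signal"]]})
mean  = make_vm(lambda p: {**p, "mean": sum(p["signal"]) / len(p["signal"])})

measure = synthesize(scale, mean)  # a new, hierarchically defined virtual machine
print(measure({"signal": [1, 2, 3], "gain": 2.0})["mean"])  # 4.0
```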

Journal ArticleDOI
TL;DR: This work speeds up geometric algorithms for solid modeling, CAD/CAM, and robotics applications by using boundary data structures that are fast and use less storage.
Abstract: How can the software for geometric algorithms in solid modeling, CAD/CAM, and robotics applications be made faster? By using boundary data structures that are fast and use less storage.
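
A minimal sketch of one kind of boundary data structure (a half-edge representation; illustrative only, not necessarily the article's structures): each directed edge stores its oppositely directed twin, its successor around a face, and its origin vertex, so adjacency queries become pointer-chases rather than searches:

```python
from dataclasses import dataclass

@dataclass
class HalfEdge:
    origin: int                 # index of the vertex this edge leaves
    twin: "HalfEdge" = None     # oppositely directed companion edge
    next: "HalfEdge" = None     # next edge around the same face

def face_vertices(start: HalfEdge):
    """Walk one face loop: constant work per adjacency, no searching."""
    e, verts = start, []
    while True:
        verts.append(e.origin)
        e = e.next
        if e is start:
            return verts

# A single triangle face: three half-edges linked in a loop.
a, b, c = HalfEdge(0), HalfEdge(1), HalfEdge(2)
a.next, b.next, c.next = b, c, a
print(face_vertices(a))   # [0, 1, 2]
```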

Journal ArticleDOI
F. Cristian
TL;DR: A new approach is suggested for specifying, understanding, and verifying the correctness of fault-tolerant software by modeling faults as being operations that are performed at random time intervals on any computing system by the system's adverse environment.
Abstract: The design of programs that are tolerant of hardware fault occurrences and processor crashes is investigated. Using a stable storage management system as a running example, a new approach is suggested for specifying, understanding, and verifying the correctness of fault-tolerant software. The approach extends previously developed axiomatic reasoning methods to the design of fault-tolerant systems by modeling faults as being operations that are performed at random time intervals on any computing system by the system's adverse environment.

Proceedings ArticleDOI
25 Mar 1985
TL;DR: The conceptual design, analysis, synthesis and software organization of an advanced teleoperator control system with sensory feedback that features maximum autonomy of the local hand controller and remote manipulator subsystems, along with kinematic and dynamic coordination between these subsystems is presented.
Abstract: This paper presents the conceptual design, analysis, synthesis and software organization of an advanced teleoperator control system with sensory feedback. The design requirements for the system are discussed in detail and an implementation strategy is presented. The resulting system features maximum autonomy of the local hand controller and remote manipulator subsystems, along with kinematic and dynamic coordination between these subsystems. The final design emphasizes cooperation and interaction between the human operator and the computers in control of the sensor-based manipulator system. The hardware and software modules being used to implement the system at JPL are described.

Book
01 Mar 1985
TL;DR: This collection of articles documents the design of the Massively Parallel Processor, a single instruction multiple data stream (SIMD) class supercomputer with 16,384 processing units capable of over 6 billion 8-bit operations per second.
Abstract: From the Publisher: The development of parallel processing, with the attendant technology of advanced software engineering, VLSI circuits, and artificial intelligence, now allows high-performance computer systems to reach the speeds necessary to meet the challenge of future complex scientific and commercial applications. This collection of articles documents the design of one such computer, a single instruction multiple data stream (SIMD) class supercomputer with 16,384 processing units capable of over 6 billion 8-bit operations per second. It provides a complete description of the Massively Parallel Processor (MPP), including discussions of hardware and software with special emphasis on applications, algorithms, and programming. This system with its massively parallel hardware and advanced software is on the cutting edge of parallel processing research, making possible AI, database, and image processing applications that were once thought to be inconceivable. The massively parallel processor represents the first step toward the large-scale parallelism needed in the computers of tomorrow. Originally built for a variety of image-processing tasks, it is fully programmable and applicable to any problem with sizeable data demands. Contents: "History of the MPP," D. Schaefer; "Data Structures for Implementing the Classy Algorithm on the MPP," R. White; "Inversion of Positive Definite Matrices on the MPP," R. White; "LANDSAT-4 Thematic Mapper Data Processing with the MPP," R. O. Faiss; "Fluid Dynamics Modeling," E. J. Gallopoulos; "Database Management," E. Davis; "List Based Processing on the MPP," J. L. Potter; "The Massively Parallel Processor System Overview," K. E. Batcher; "Array Unit," K. E. Batcher; "Array Control Unit," K. E. Batcher; "Staging Memory," K. E. Batcher; "PE Design," J. Burkley; "Programming the MPP," J. L. Potter; "Parallel Pascal and the MPP," A. P. Reeves; "MPP System Software," K. E. Batcher; "MPP Program Development and Simulation," E. J. Gallopoulos. J. L. Potter is Associate Professor of Computer Science at Kent State University. The Massively Parallel Processor is included in the Scientific Computation Series, edited by Dennis Gannon.

Journal ArticleDOI
TL;DR: A large quantity of well-respected software is tested against a series of metrics designed to measure program lucidity, with intriguing results.
Abstract: A large quantity of well-respected software is tested against a series of metrics designed to measure program lucidity, with intriguing results. Although slanted toward software written in the C language, the measures are adaptable for analyzing most high-level languages.

Journal ArticleDOI
TL;DR: An analysis of operating system failures on an IBM 3081 running VM/SP finds three broad categories of software failures: error handling, program control or logic, and hardware-related; it is found that more than 25 percent of software failures occur in the hardware/software interface.
Abstract: This paper presents an analysis of operating system failures on an IBM 3081 running VM/SP. We find three broad categories of software failures: error handling (ERH), program control or logic (CTL), and hardware related (HS); it is found that more than 25 percent of software failures occur in the hardware/software interface. Measurements show that results on software reliability cannot be considered representative unless the system workload is taken into account. For example, it is shown that the risk of a software failure increases in a nonlinear fashion with the amount of interactive processing, as measured by parameters such as the paging rate and the amount of overhead (operating system CPU time). The overall CPU execution rate, although measured to be close to 100 percent most of the time, is not found to correlate strongly with the occurrence of failures. The paper discusses possible reasons for the observed workload failure dependency based on detailed investigations of the failure data.

Journal ArticleDOI
TL;DR: A model is proposed in which, whenever a program encounters an error, a system failure results and the software is inspected to determine and remove the error responsible for the failure; an estimation and stopping-rule procedure based on this model is developed.
Abstract: When a new computer software package is developed and all obvious errors removed, a testing procedure is often put into effect to eliminate the remaining errors in the package. One common procedure is to try the package on a set of randomly chosen problems. We suppose that whenever a program encounters an error, a system failure results. At this point the software is inspected to determine and remove the error responsible for the failure. This goes on for some time, and two problems of interest are 1) to estimate the error rate of the software at a given time t, and 2) to develop a stopping rule for determining when to discontinue the testing and declare that the software is ready for use. In this paper, a model for the above is proposed, together with an estimation and stopping-rule procedure.
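
A sketch of this style of model (my notation; the paper's estimator and stopping rule may differ): if the package initially contains errors $1, \dots, m$, error $i$ causes failures according to a Poisson process with rate $\lambda_i$, and each error is removed when first encountered, then the error rate remaining at time $t$ is

```latex
\Lambda(t) \;=\; \sum_{i=1}^{m} \lambda_{i}\,\mathbf{1}\{\text{error } i \text{ undetected by } t\},
\qquad
\mathbb{E}\bigl[\Lambda(t)\bigr] \;=\; \sum_{i=1}^{m} \lambda_{i}\, e^{-\lambda_{i} t},
```

and a natural stopping rule is to end testing once an estimate of $\Lambda(t)$ falls below an acceptable threshold.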

Patent
31 Jan 1985
TL;DR: In this paper, a software debugging analyzer nonintrusively acquires data concerning the execution of software on a real-time basis, which is stored in memory in either a sequential or random access mode.
Abstract: A software debugging analyzer nonintrusively acquires data concerning the execution of software on a real-time basis. Low-level event recognition is accomplished with programmable comparators, whose outputs are fed to high-level recognition comparators to define complex events. Dynamic recognition is provided by recognition comparators programmable on a real-time basis as variables are actuated. Acquired data is stored in memory in either a sequential or random access mode. A microprocessor translates high-level commands into event constructs and processes the acquired data into a format suitable for display to a user.

Journal ArticleDOI
TL;DR: The analysis shows that the operating system is seldom able to diagnose that a software error may be hardware-related, and the observed HW/SW errors are seen to have a specific pattern, suggesting the possibility of the use of such error patterns for intelligent error prediction and recovery.
Abstract: This paper describes an analysis of hardware-related software (HW/SW) errors on an MVS/SP operating system at Stanford University. The analysis procedure demonstrates a methodology for evaluating the interaction between hardware and software as it relates to system reliability. The paper examines the operating system's handling of HW/SW errors and also the effectiveness of recovery management. Nearly 35 percent of all observed software failures were found to be hardware-related. The analysis shows that the operating system is seldom able to diagnose that a software error may be hardware-related. The impact of HW/SW errors on the system is evaluated by measuring the effectiveness of system recovery in containing the propagation of HW/SW errors. The system failure probability for HW/SW errors is close to three times that for software errors in general. The observed HW/SW errors are seen to have a specific pattern, suggesting the possibility of the use of such error patterns for intelligent error prediction and recovery.

Book ChapterDOI
01 Jan 1985
TL;DR: Before embarking on an automated assembly project, a user would be well advised to look beyond the mechanical performance of the rival machines and to look closely at the software features offered.
Abstract: Many robot manufacturers claim that their machines are suitable for assembly operations, but before embarking on an automated assembly project, a user would be well advised to look beyond the mechanical performance of the rival machines and to look closely at the software features offered.

Journal ArticleDOI
TL;DR: In this article, Voss outlines the key factors that determine whether a new software development is likely to be successful and suggests that the software innovation process has many demands that are common to innovation in other product areas.

Journal ArticleDOI
TL;DR: A collection of new methods, invisible to the user, is described that generates good solutions to the mathematical programming problems underlying each major design component and obtains answers in seconds to minutes on a minicomputer.
Abstract: We describe the development and successful implementation of a decision support system now being used by several leading firms in the architecture and space planning industries. The system, which we call SPDS (spatial programming design system) has the following characteristics: (i) user-friendly convenience features permitting architects and space planners to operate the system without being experienced programmers; (ii) interactive capabilities allowing the user to control and to manipulate relevant parameters, orchestrating conditions to which his or her intuition provides valuable input; (iii) informative and understandable graphics, providing visual displays of interconnections that the computer itself treats in a more abstract mathematical form; (iv) convenient ways to change configurations, and to carry out ‘what if’ analyses calling on the system’s decision support capabilities; (v) a collection of new methods, invisible to the user, capable of generating good solutions to the mathematical programming problems that underlie each major design component. These new methods succeed in generating high quality solutions to a collection of complex discrete, highly nonlinear problems. While these problems could only be solved in hours, or not at all, with previously existing software, the new methods obtain answers in seconds to minutes on a minicomputer. Major users, including Dalton, Dalton, Newport, and Marshal Erdwin, report numerous advantages of the system over traditional architectural design methods.

Journal ArticleDOI
TL;DR: The state-of-the-art general purpose vehicle system dynamics software is reviewed in this article, where two representative programs, MEDYNA and NEWEUL, are described with respect to modeling options, computational methods, software engineering as well as interfaces to other software.
Abstract: This paper pursues two objectives: firstly, to review the state-of-the-art of general purpose vehicle system dynamics software and secondly, to describe two representatives, the program MEDYNA and the program NEWEUL. The general modeling requirements for vehicle dynamics software, the multibody system approach and a comparative discussion of multibody software are given. The two programs NEWEUL and MEDYNA are described with respect to modeling options, computational methods, software engineering as well as their interfaces to other software. The applicability of these programs is demonstrated on two selected examples, one from road vehicle problems and the other from wheel/rail dynamics. It is concluded that general purpose software based on multibody formalisms will play the same role for mechanical systems, especially vehicle systems, as finite element methods play for elastic structures.