
Showing papers on "Software published in 1976"


Book
01 Jan 1976
TL;DR: In this paper, the authors present an introduction to the mathematical theory underlying computer graphic applications, including transformations, projections, 2-D and 3-D curve definition schemes, and surface definitions.
Abstract: From the Publisher: This text is ideal for junior-, senior-, and graduate-level courses in computer graphics and computer-aided design taught in departments of mechanical and aeronautical engineering and computer science. It presents in a unified manner an introduction to the mathematical theory underlying computer graphic applications. It covers topics of keen interest to students in engineering and computer science: transformations, projections, 2-D and 3-D curve definition schemes, and surface definitions. It also includes techniques, such as B-splines, which are incorporated as part of the software in advanced engineering workstations. A basic knowledge of vector and matrix algebra and calculus is required.
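
As an illustration of the kind of material covered (2-D transformations), here is a minimal Python sketch using homogeneous coordinates; it is only illustrative and does not reproduce the book's own notation or examples.

import math

def make_transform(theta, tx, ty):
    """3x3 homogeneous matrix: rotate by theta (radians), then translate by (tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx],
            [s,  c, ty],
            [0,  0,  1]]

def apply(m, x, y):
    """Apply a homogeneous 2-D transform to the point (x, y)."""
    xh = m[0][0] * x + m[0][1] * y + m[0][2]
    yh = m[1][0] * x + m[1][1] * y + m[1][2]
    return xh, yh

# Rotate (1, 0) by 90 degrees about the origin, then shift by (2, 3) -> approximately (2.0, 4.0)
print(apply(make_transform(math.pi / 2, 2, 3), 1, 0))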

1,086 citations


Proceedings ArticleDOI
13 Oct 1976
TL;DR: The study reported in this paper provides for the first time a clear, well-defined framework for assessing the often slippery issues associated with software quality, via the consistent and mutually supportive sets of definitions, distinctions, guidelines, and experiences cited.
Abstract: The study reported in this paper establishes a conceptual framework and some key initial results in the analysis of the characteristics of software quality. Its main results and conclusions are:• Explicit attention to characteristics of software quality can lead to significant savings in software life-cycle costs.• The current software state-of-the-art imposes specific limitations on our ability to automatically and quantitatively evaluate the quality of software.• A definitive hierarchy of well-defined, well-differentiated characteristics of software quality is developed. Its higher-level structure reflects the actual uses to which software quality evaluation would be put; its lower-level characteristics are closely correlated with actual software metric evaluations which can be performed.• A large number of software quality-evaluation metrics have been defined, classified, and evaluated with respect to their potential benefits, quantifiability, and ease of automation.•Particular software life-cycle activities have been identified which have significant leverage on software quality.Most importantly, we believe that the study reported in this paper provides for the first time a clear, well-defined framework for assessing the often slippery issues associated with software quality, via the consistent and mutually supportive sets of definitions, distinctions, guidelines, and experiences cited. This framework is certainly not complete, but it has been brought to a point sufficient to serve as a viable basis for future refinements and extensions.

739 citations


Journal ArticleDOI
J. R. Sklaroff1
TL;DR: This paper describes how a set of off-the-shelf general purpose digital computers is being managed in a redundant avionic configuration while performing flight-critical functions for the Space Shuttle.
Abstract: This paper describes how a set of off-the-shelf general purpose digital computers is being managed in a redundant avionic configuration while performing flight-critical functions for the Space Shuttle. The description covers the architecture of the redundant computer set, associated redundancy design requirements, and the technique used to detect a failed computer and to identify this failure on-board to the crew. Significant redundancy management requirements consist of imposing a total failure coverage on all flight-critical functions, when more than two redundant computers are operating in flight, and a maximum failure coverage for limited storage and processing time, when only two are operating. The basic design technique consists of using dedicated redundancy management hardware and software to allow each computer to judge the "health" of the others by comparing computer outputs and to "vote" on the judgments. In formulating the design, hardware simplicity, operational flexibility, and minimum computer resource utilization were used as criteria.
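
The failure-detection idea, each computer judging the others by comparing outputs and voting, can be sketched as below. This is a hypothetical Python illustration, not the Shuttle implementation; the comparison tolerance and majority rule are assumptions.

def identify_failed_units(outputs, tolerance=1e-6):
    """outputs: dict mapping computer id -> its value for the same flight-critical quantity.
    A unit is voted failed when it disagrees with a majority of the other units."""
    failed = []
    ids = list(outputs)
    for i in ids:
        disagreements = sum(
            1 for j in ids
            if j != i and abs(outputs[i] - outputs[j]) > tolerance
        )
        if disagreements > (len(ids) - 1) / 2:   # disagrees with more than half of the others
            failed.append(i)
    return failed

# Four redundant computers; unit 'C3' has drifted.
print(identify_failed_units({'C1': 10.000001, 'C2': 10.000002, 'C3': 10.3, 'C4': 10.000001}))
# -> ['C3']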

184 citations


Proceedings ArticleDOI
James E. White1
07 Jun 1976
TL;DR: This paper proposes a high-level, application-independent framework for the construction of distributed systems within a resource sharing computer network that eliminates the need for application-specific communication protocols and support software, thus easing the task of the applications programmer and so encouraging the sharing of resources.
Abstract: This paper proposes a high-level, application-independent framework for the construction of distributed systems within a resource sharing computer network. The framework generalizes design techniques in use within the ARPA Computer Network. It eliminates the need for application-specific communication protocols and support software, thus easing the task of the applications programmer and so encouraging the sharing of resources. The framework consists of a network-wide protocol for invoking arbitrary named functions in a remote process, and machine-dependent system software that interfaces one applications program to another via the protocol. The protocol provides mechanisms for supplying arguments to remote functions and for retrieving their results; it also defines a small number of standard data types from which all arguments and results must be modeled. The paper further proposes that remote functions be thought of as remotely callable subroutines or procedures. This model would enable the framework to more gracefully extend the local programming environment to embrace modules on other machines.
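
The core of the proposal, invoking an arbitrary named function in a remote process with arguments built from a small set of standard data types and retrieving the result, can be sketched as below. This is a hypothetical Python illustration, not White's actual protocol; JSON stands in for the standard data types, and the function names are invented.

import json

# "Server" side: functions the remote process exposes by name.
REGISTRY = {
    'add': lambda a, b: a + b,
    'upper': lambda s: s.upper(),
}

def handle_request(message):
    """Decode a request, call the named function, and encode the reply."""
    request = json.loads(message)
    func = REGISTRY[request['procedure']]
    result = func(*request['arguments'])
    return json.dumps({'result': result})

# "Client" side: a local stub that makes the remote call look like an ordinary procedure call.
def remote_call(procedure, *arguments):
    message = json.dumps({'procedure': procedure, 'arguments': list(arguments)})
    reply = handle_request(message)        # in a real system this would cross the network
    return json.loads(reply)['result']

print(remote_call('add', 2, 3))      # -> 5
print(remote_call('upper', 'arpa'))  # -> ARPA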

125 citations


Journal ArticleDOI
TL;DR: The formal methodology of Higher Order Software (HOS), specifically aimed toward large-scale multiprogrammed/multiprocessor systems, is dedicated to systems reliability.
Abstract: The key to software reliability is to design, develop, and manage software with a formalized methodology which can be used by computer scientists and applications engineers to describe and communicate interfaces between systems. These interfaces include: software to software; software to other systems; software to management; as well as discipline to discipline within the complete software development process. The formal methodology of Higher Order Software (HOS), specifically aimed toward large-scale multiprogrammed/multiprocessor systems, is dedicated to systems reliability. With six axioms as the basis, a given system and all of its interfaces are defined as if they formed one complete and consistent computable system. Some of the derived theorems provide for: reconfiguration of real-time multiprogrammed processes, communication between functions, and prevention of data and timing conflicts.

108 citations


Book
01 Sep 1976
TL;DR: Focuses on the unreliability of computer programs and offers state-of-the-art solutions in software development, software testing, structured programming, composite design, language design, proofs of program correctness, and mathematical reliability models.
Abstract: From the Publisher: Deals constructively with recognized software problems. Focuses on the unreliability of computer programs and offers state-of-the-art solutions. Covers software development, software testing, structured programming, composite design, language design, proofs of program correctness, and mathematical reliability models. Written in an informal style for anyone whose work is affected by the unreliability of software. Examples illustrate key ideas; over 180 references are included.

77 citations


Patent
16 Sep 1976
TL;DR: In this article, a microprocessor controlled digital multimeter is described that offers significant advantages to the user: automatic calibration and correction routines ensure long term stability and simplify maintenance; self diagnostics greatly help troubleshooting and even warn of impending failures; and software implementation of logic design simplifies hardware, improves reliability, and adds capabilities such as fast auto ranging and software I/O control.
Abstract: A new microprocessor controlled digital multimeter offers significant advantages to the user. Automatic calibration and correction routines ensure long term stability and simplify maintenance; self diagnostics greatly help troubleshooting and even warn of impending failures; software implementation of logic design simplifies hardware, improves reliability and adds capabilities such as fast auto ranging and software I/O control; the ability to process data permits averaging, linearization and normalization of input measurements, minimum/maximum storage and limit detection.
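
The data-processing features listed (averaging, minimum/maximum storage, limit detection) amount to simple streaming computations over the readings. The Python sketch below is a generic illustration with made-up limits, not the instrument's firmware.

def process_readings(readings, low_limit, high_limit):
    """Accumulate average, min/max, and limit violations over a stream of measurements."""
    total = 0.0
    minimum = float('inf')
    maximum = float('-inf')
    out_of_limit = []
    for value in readings:
        total += value
        minimum = min(minimum, value)
        maximum = max(maximum, value)
        if not (low_limit <= value <= high_limit):
            out_of_limit.append(value)
    return {
        'average': total / len(readings),
        'min': minimum,
        'max': maximum,
        'out_of_limit': out_of_limit,
    }

print(process_readings([4.98, 5.01, 5.02, 5.60], low_limit=4.9, high_limit=5.1))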

72 citations


Book
01 Jan 1976
TL;DR: In this paper, a conceptual model of an evaluated computer system, the P-model, is defined using the principles of general systems theory; it provides a convenient uniform description for observing a computer system at any of these levels.
Abstract: This study concentrates on the measurement problem of a complex computer system. Several issues are attacked: system representation, evaluation and application of computer performance evaluation tools, power of a performance monitor, and design of a performance monitor. For an external observer, performance of a computer system is the quality and the quantity of service delivered by the system. However, a computer system is a hierarchy of several levels, the lowest level being the circuit level, the highest the Software Support level. Performance of the system as a whole is determined by performance of individual levels. A conceptual model of an evaluated computer system, the P-model, is defined in this study using the principles of general systems theory; it provides a convenient uniform description for observing a computer system at any of these levels. The elements of the P-model are the level components; the outputs are performance measures relevant to the particular level and the purpose of evaluation.

71 citations


Journal ArticleDOI
TL;DR: This paper aims to provide designers with a framework to help them organise and apply ergonomics and human factors literature to the design of man-computer interfaces.

56 citations



Book ChapterDOI
01 Jan 1976
TL;DR: This paper concentrates on the third step from the viewpoint of a numerical analyst working on software for elementary and special functions.
Abstract: There are three distinct steps in the development of a numerical computer program: the development of theoretical methods to perform the desired computation, the development of practical computational algorithms utilizing one or more theoretical methods, and the implementation of these practical algorithms in documented computer software. This paper concentrates on the third step from the viewpoint of a numerical analyst working on software for elementary and special functions.
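
One routine concern in that third step is demonstrating that an implementation meets its accuracy goals. The generic Python sketch below (not taken from the paper) measures the worst relative error of a truncated Taylor series for exp, standing in for a candidate routine, against the library function over a test interval.

import math

def exp_taylor(x, terms=10):
    """Truncated Taylor series for exp(x) -- a stand-in for a candidate implementation."""
    total, term = 1.0, 1.0
    for n in range(1, terms):
        term *= x / n
        total += term
    return total

# Worst observed relative error over a grid of test arguments in [0, 1].
worst = max(
    abs(exp_taylor(x) - math.exp(x)) / math.exp(x)
    for x in [i / 100.0 for i in range(101)]
)
print(f"max relative error on [0, 1]: {worst:.2e}")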

Proceedings ArticleDOI
13 Oct 1976
TL;DR: An abstract, Markov-like model is used to describe the reliability behavior of SIFT, a fault-tolerant computer in which fault tolerance is achieved primarily by software mechanisms.
Abstract: The SIFT (Software Implemented Fault Tolerance) computer is a fault-tolerant computer in which fault tolerance is achieved primarily by software mechanisms. Tasks are executed redundantly on multiple, independent processors that are loosely synchronized. Each processor is multiprogrammed over a set of distinct tasks. A system of independently accessible busses interconnects the processors. When Task A needs data from Task B, each version of A votes, using software, on the data computed by the different versions of B. (A processor cannot write into another processor; all communication is accomplished by reading.) Thus, errors due to a malfunctioning processor or bus can be confined to the faulty unit and can be masked, and the faulty unit can be identified. An executive routine effects the fault location and reconfigures the system by assigning the tasks, previously assigned to the faulty unit, to an operative unit. Since fault-tolerant computers are used in environments where reliability is at a premium, it is essential that the software of SIFT be correct. The software is realized as a hierarchy of modules in a way that significantly enhances proof of correctness. The behavior of each module is characterized by a formal specification, and the implementation of the module is verified with respect to its specification and those of modules at lower levels of the hierarchy. An abstract, Markov-like model is used to describe the reliability behavior of SIFT. This model is formally related to the specifications of the top-most modules of the hierarchy; thus the model can be shown to describe accurately the behavior of the system. At the time of writing, the verification of the system is not complete. The paper describes the design of SIFT, the reliability model, and the approach to mapping from the system to the model.

Journal ArticleDOI
01 Jan 1976
TL;DR: This review focuses on the methods for solving molecular structure problems, using the emerging capabilities of computer hardware, software, and the implementation of complete systems.
Abstract: Computers have been used for molecular structure representation and manipulation for more than a decade. This particular application of computers to scientific problems is just a part of a larger pattern in the development of computer hardware, software, and systems. This review focuses on the methods for solving molecular structure problems, using the emerging capabilities of computer hardware, software, and the implementation of complete systems. The aspects of computer organization can be segmented and defined as follows:

Journal ArticleDOI
TL;DR: A many-state Markov model provides estimates and closed form predictions of the availability and of the most probable number of errors that will have been corrected at a given time in the operation of a large software package.
Abstract: A many-state Markov model has been developed for the purpose of providing performance criteria for computer software. The model provides estimates and closed-form predictions of the availability and of the most probable number of errors that will have been corrected at a given time in the operation of a large software package. The model is based on constant rates for error occurrence λ and error correction μ. An interesting application case is when λ and μ are functions of the state of debugging achieved. This case is discussed and solved numerically. Extensions and modifications of the basic model are briefly discussed.
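
In the simplest special case of such a model, a single up state and a single down (correction) state with constant rates, the steady-state availability has a familiar closed form. The paper's many-state model generalizes this, but the two-state case illustrates the role of λ and μ:

% Two-state Markov availability model (an illustrative special case, not the paper's full model).
% Up -> Down at error-occurrence rate \lambda; Down -> Up at correction rate \mu.
\[
  \frac{dP_{\text{up}}}{dt} = -\lambda\, P_{\text{up}} + \mu\, P_{\text{down}},
  \qquad P_{\text{up}} + P_{\text{down}} = 1,
\]
\[
  \text{steady-state availability } A = \lim_{t\to\infty} P_{\text{up}}(t) = \frac{\mu}{\lambda + \mu}.
\]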

Proceedings ArticleDOI
13 Oct 1976
TL;DR: A methodology is presented for transforming system requirements into functional structure and system operating rules, viewed as the first step of a comprehensive software development methodology comprising: top level design, algorithm development, computer selection, and the translation of the functional algorithmic design into operational software.
Abstract: Top level system design is considered, with attention focused on the decomposition of system requirements into subsystem requirements. Primary interest is in the data processing subsystem. Implementation details involving operating systems, selection and configuration of computers, and choice of specific algorithms are excluded from this study. A methodology is presented for transforming system requirements into functional structure and system operating rules. This methodology is viewed as the first step of a comprehensive software development methodology comprising: top level design, algorithm development, computer selection, and the translation of the functional algorithmic design into operational software. The top level design is carried to such detail that algorithms, to be developed subsequently and to be realized ultimately with hardware or software, can be considered bounded by the interfaces of the data processing subsystem (DPS). That is, the interfaces are defined sufficiently well that the algorithm designer needs to consider neither the destination of data leaving the DPS nor the source of data entering the DPS. A system can be decomposed into four structural elements: functions, control, functional flows, and data. Each of these elements is a subject of the decomposition methodology. The inter-relationships of system functions are structured to define a partial ordering of system functions that is amenable to representation as a directed graph. The system control mechanism is defined to be a finite state machine, whose only cycles are loops, having START and END states. For real time operation END states fold onto START states. Functional flows represent each output of the control machine as a serial/parallel execution of the functions, consistent with their partial ordering. Data is used to relate the several functions within a functional flow, to drive the control mechanism, and to link control to the functional flows.
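
The decomposition elements described, functions related by a partial ordering representable as a directed graph and functional flows as executions consistent with that ordering, can be illustrated with a small Python sketch. The function names below are hypothetical and the paper's own notation is not reproduced.

# Directed graph of system functions: edges point from a function to those that must follow it.
follows = {
    'acquire_sensor_data': ['filter_data'],
    'filter_data': ['estimate_state'],
    'estimate_state': ['select_mode', 'format_display'],
    'select_mode': [],
    'format_display': [],
}

def serial_flow(graph):
    """One serial execution order consistent with the partial ordering (topological sort)."""
    incoming = {node: 0 for node in graph}
    for successors in graph.values():
        for node in successors:
            incoming[node] += 1
    ready = [node for node, count in incoming.items() if count == 0]
    order = []
    while ready:
        node = ready.pop()
        order.append(node)
        for successor in graph[node]:
            incoming[successor] -= 1
            if incoming[successor] == 0:
                ready.append(successor)
    return order

print(serial_flow(follows))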

Journal ArticleDOI
D. Edelson1
TL;DR: A chemical compiler developed for this purpose in conjunction with the BELLCHEM kinetics package is described and the implementation is outlined; execution times on a HIS-6000 computer are reported.

Proceedings ArticleDOI
01 Jan 1976
TL;DR: A triplex digital fly-by-wire flight control system was developed and installed in a NASA F-8C aircraft to provide fail-operative, full authority control; the implementation of computer, sensor, and actuator redundancy management is described.
Abstract: A triplex digital fly-by-wire flight control system was developed and then installed in a NASA F-8C aircraft to provide fail-operative, full authority control. Hardware and software redundancy management techniques were designed to detect and identify failures in the system. Control functions typical of those projected for future actively controlled vehicles were implemented. This paper describes the principal design features of the system, the implementation of computer, sensor, and actuator redundancy management, and the ground test results. An automated test program to verify sensor redundancy management software is also described.
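
A common technique for triplex sensor redundancy management is mid-value selection with a miscompare threshold; the Python sketch below illustrates that general idea and is not necessarily the scheme used in this system (the threshold and channel names are assumptions).

def mid_value_select(a, b, c, threshold):
    """Return the middle of three redundant sensor readings and flag any channel
    whose reading differs from the selected value by more than the threshold."""
    selected = sorted([a, b, c])[1]          # the middle value masks a single bad channel
    suspects = [name for name, value in (('A', a), ('B', b), ('C', c))
                if abs(value - selected) > threshold]
    return selected, suspects

# Channel C has failed hard-over; the mid-value still tracks the good channels.
print(mid_value_select(2.01, 1.99, 9.50, threshold=0.5))   # -> (2.01, ['C'])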

Proceedings ArticleDOI
01 Dec 1976
TL;DR: A six-legged robot vehicle with eighteen independently powered joints has been constructed at Ohio State University and the structure of the computer software used for interactive real-time vehicle control is described in some detail.
Abstract: A six-legged robot vehicle with eighteen independently powered joints has been constructed at Ohio State University. This paper describes design trade-offs and computer-control concepts as they relate to this machine. The structure of the computer software used for interactive real-time vehicle control is described in some detail. This software is organized so as to permit sequential on-line optimization of stability, terrain adaptability, and energy in the motion of the vehicle over uneven terrain.

Journal ArticleDOI
TL;DR: The system provides for data storage, retrieval, analysis, and display, together with utility software to handle database maintenance; the integration of processes, the diversity of data-handling capability, and the user-friendly commands are the key design features.

Proceedings ArticleDOI
13 Oct 1976
TL;DR: The Automated Testing and Load Analysis System (ATLAS) formalizes a concept of model-referenced testing for large software systems and has been successfully employed in testing over 40,000 instructions of Bell Laboratories' large No. 4 ESS software package.
Abstract: The Automated Testing and Load Analysis System (ATLAS) formalizes a concept of model-referenced testing for large software systems. A directed graph model of the software under test, describing the sequential stimulus-response behavior of the software system, forms the basis of the approach. The objective of ATLAS is to certify the software under test against the model. This objective is met by components of ATLAS that automatically identify, generate, apply, and verify the set of tests required to establish that the software has correctly realized the model. The system has been successfully employed in testing over 40,000 instructions of Bell Laboratories' large No. 4 ESS software package. Usage data and experience from this application and a critique of the approach are given.
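
The idea of driving tests from a directed-graph model of stimulus-response behavior can be sketched as follows. This is a generic Python illustration, not the ATLAS tool itself; the toy telephone-like model and names are invented. Every modeled transition is exercised and the observed response is checked against the model.

# Model: (state, stimulus) -> (expected_response, next_state)
MODEL = {
    ('idle', 'off_hook'): ('dial_tone', 'dialing'),
    ('dialing', 'digit'): ('silence', 'dialing'),
    ('dialing', 'on_hook'): ('none', 'idle'),
}

def test_against_model(software_under_test):
    """Apply every modeled stimulus in every modeled state and verify the response."""
    failures = []
    for (state, stimulus), (expected, _next_state) in MODEL.items():
        observed = software_under_test(state, stimulus)
        if observed != expected:
            failures.append((state, stimulus, expected, observed))
    return failures

# A toy implementation that realizes the model correctly.
def implementation(state, stimulus):
    return MODEL[(state, stimulus)][0]

print(test_against_model(implementation))   # -> [] (no discrepancies)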

Proceedings ArticleDOI
13 Oct 1976
TL;DR: A flexible framework, using a System Monitor, to design error-resistant software is presented, followed by a discussion of the strategies to handle errors in the module, program, and system levels.
Abstract: This paper presents a flexible framework, using a System Monitor, to design error-resistant software. The System Monitor contains the code and data for error detection, error containment, and recovery at the module level, program level, and system level. It contains five types of components: the Internal Process Supervisor, the External Process Supervisor, the Interaction Supervisor, the System Monitor Kernel, and the Maintenance Program. The functions of each component are discussed, followed by a discussion of the strategies to handle errors at the module, program, and system levels.

Journal ArticleDOI
TL;DR: This paper examines the structure both of subroutine libraries for use with some base language and of complete programming languages, and outlines the advantages and disadvantages of each, along with facilities that should be present in any software package.
Abstract: This paper describes some software packages and programming systems for computer graphics applications, in the process considering software features for both passive and interactive graphics. It examines the structure both of subroutine libraries for use with some base language and of complete programming languages, and outlines the advantages and disadvantages of each, along with facilities that should be present in any software package.


Proceedings ArticleDOI
14 Jul 1976
TL;DR: A configurable approach to software for satellite graphics in which the division of labor between the host and satellite computers can be easily changed after an application program has been written is advocated.
Abstract: This paper advocates a configurable approach to software for satellite graphics in which the division of labor between the host and satellite computers can be easily changed after an application program has been written. A software system, CAGES (Configurable Applications for Graphics Employing Satellites), implements this approach. CAGES can substantially simplify the application programmer's task of programming a host and satellite computer by making the intercomputer interface relatively invisible to him, while at the same time allowing him the efficiency and flexibility that can result from direct application programming of the satellite computer.Proper design of configurable programs is facilitated by a mathematical model defining a pairwise measure of program module inter-dependence. Experience with this model has resulted in a set of programming guidelines that further aid the application programmer in producing a suitable program structure.

Journal ArticleDOI
01 Aug 1976
TL;DR: This article describes a path that has many pieces that must fit together exactly and is a very powerful XML-to-paper path that will not cost a penny, and runs on any platform that runs Java.

01 Sep 1976
TL;DR: The proposed design overcomes many of the traditional problems of database system software and is one of the first to describe a complete data-secure computer capable of handling large databases.
Abstract: A hardware architecture for a database computer (DBC) is given in this paper. The proposed design overcomes many of the traditional problems of database system software and is one of the first to describe a complete data-secure computer capable of handling large databases.

Book ChapterDOI
01 Jan 1976
TL;DR: The data access requirements for typical sparse matrix computations are considered, and some of the main data structures used to meet these demands are reviewed.
Abstract: In this paper we consider the problem of designing and implementing computer software for sparse matrix computations. We consider the data access requirements for typical sparse matrix computations, and review some of the main data structures used to meet these demands. We also describe some tools and techniques we have found useful for developing sparse matrix software.
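
One standard data structure of the kind the paper reviews is compressed sparse row (CSR) storage, which supports row-wise access without touching zero entries. The Python sketch below is only illustrative and is not drawn from the paper.

# Compressed sparse row (CSR) storage of
#   [[4, 0, 0],
#    [0, 5, 2],
#    [1, 0, 3]]
values  = [4.0, 5.0, 2.0, 1.0, 3.0]   # nonzero entries, row by row
col_idx = [0,   1,   2,   0,   2  ]   # column of each nonzero
row_ptr = [0, 1, 3, 5]                # start of each row in `values`

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x using only the stored nonzero entries."""
    y = []
    for row in range(len(row_ptr) - 1):
        total = 0.0
        for k in range(row_ptr[row], row_ptr[row + 1]):
            total += values[k] * x[col_idx[k]]
        y.append(total)
    return y

print(csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))   # -> [4.0, 7.0, 4.0]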

Proceedings ArticleDOI
13 Oct 1976
TL;DR: The importance of verifying systems specifications before commencing any software design is described and a technique for accomplishing this objective is delineated.
Abstract: Specifications provide the fundamental link to make the transition between the concept and definition phases of the system development cycle. Straightforward, unambiguous specifications are required to ensure successful results and at the same time minimize cost overruns during the development cycle. Many of the problems currently being addressed by software engineers have their origins in the frequently inconsistent and incomplete nature of system specifications.The U.S. Army Ballistic Missile Defense Advanced Technology Center (BMDATC) is currently studying several advanced software development technologies. BMDATC's efforts are directed toward identifying and resolving the fundamental problems that plague the software community: excessive costs, unrealistic or inappropriate schedules, and inadequate performance. A primary category of the BMDATC program is Data Processing System Engineering Research. This research employs an advanced engineering approach to the generation, verification, and unambiguous communication of a complete and consistent set of system requirements. The key elements of this technology are: (1) a mathematically rigorous decomposition technology that effectively translates system requirements into a traceable graphic representation; (2) a usable System Specification Language (SSL) that supports simulation and specification generation; (3) a set of software tools that aid in the development, verification, and configuration control of the decomposed requirements; and (4) a management approach that supports the designed-in quality of the developed specification.Definitive specifications are of primordial importance to the development process in that they are both the springboard for the design process and the yardstick of the test procedures. This paper describes the importance of verifying systems specifications before commencing any software design and delineates a technique for accomplishing this objective.

Proceedings ArticleDOI
29 Mar 1976
TL;DR: The monitoring capabilities of the Microprogrammable Multi-Processor (MMP), a powerful emulator system that serves as an experimental tool for evaluating computer systems, are described.
Abstract: Emulation of systems makes it possible to combine the predictive power of simulation with the advantages of measurement carried out under a real system workload. An emulator is a microprogrammed implementation of the basic hardware machine. It can be easily instrumented to collect performance statistics on the instruction set processor (ISP) level and support performance measurement of different configurations and software of the emulated system. This paper describes the monitoring capabilities of the Microprogrammable Multi-Processor (MMP), a powerful emulator system that serves as an experimental tool for evaluating computer systems. The measurement capabilities of the MMP on various system levels are described, as well as existing performance monitoring tools and their applications. Preliminary results contrasting the Gibson mix and measured instruction frequencies on the AN/GYK-12 computer in a TACFIRE system are given.
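
Collecting instruction-frequency statistics at the ISP level amounts to tallying opcodes as the emulator's dispatch loop executes them. The Python sketch below uses a made-up three-instruction accumulator machine, not the MMP or the AN/GYK-12, and is only an illustration of the counting idea.

from collections import Counter

def emulate(program):
    """Interpret a tiny accumulator machine while tallying instruction frequencies."""
    acc = 0
    counts = Counter()
    for opcode, operand in program:
        counts[opcode] += 1                 # the monitoring hook: one tally per dispatched instruction
        if opcode == 'LOAD':
            acc = operand
        elif opcode == 'ADD':
            acc += operand
        elif opcode == 'STORE':
            pass                            # memory model omitted in this sketch
    return acc, counts

acc, counts = emulate([('LOAD', 7), ('ADD', 3), ('ADD', 2), ('STORE', 0)])
print(acc, dict(counts))   # -> 12 {'LOAD': 1, 'ADD': 2, 'STORE': 1}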

Journal ArticleDOI
TL;DR: The role of the computer as a number-crunching device in operations research (OR) is first investigated, including techniques such as simulation, which may in turn be used to improve the design and use of computer systems.