
Showing papers on "Software published in 1987"


Journal ArticleDOI
TL;DR: The faceted scheme described here is a partial solution to the classification and retrieval problem of software component reuse.
Abstract: To reuse a software component, you first have to find it. The faceted scheme described here is a partial solution to this classification and retrieval problem.
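
As a rough illustration of the faceted approach (the facet vocabulary and components below are invented, not taken from the paper), a component catalogue can be modelled as a set of facet/term pairs per component, with retrieval by matching requested facet values:

```python
# Minimal sketch of faceted classification and retrieval for reusable components.
# Facet names and catalogue entries are invented for illustration only.

CATALOG = {
    "quicksort.c":  {"function": "sort",   "objects": "arrays",  "medium": "memory"},
    "grep_like.c":  {"function": "search", "objects": "lines",   "medium": "file"},
    "btree_put.c":  {"function": "insert", "objects": "records", "medium": "disk"},
}

def retrieve(**facets):
    """Return components whose classification matches every requested facet term."""
    return [name for name, cls in CATALOG.items()
            if all(cls.get(f) == term for f, term in facets.items())]

if __name__ == "__main__":
    print(retrieve(function="sort"))                 # ['quicksort.c']
    print(retrieve(objects="lines", medium="file"))  # ['grep_like.c']
```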

723 citations


Journal ArticleDOI
TL;DR: This study applies an experimentation methodology to compare three state-of-the-practice software testing techniques: a) code reading by stepwise abstraction, b) functional testing using equivalence partitioning and boundary value analysis, and c) structural testing using 100 percent statement coverage criteria.
Abstract: This study applies an experimentation methodology to compare three state-of-the-practice software testing techniques: a) code reading by stepwise abstraction, b) functional testing using equivalence partitioning and boundary value analysis, and c) structural testing using 100 percent statement coverage criteria. The study compares the strategies in three aspects of software testing: fault detection effectiveness, fault detection cost, and classes of faults detected. Thirty-two professional programmers and 42 advanced students applied the three techniques to four unit-sized programs in a fractional factorial experimental design. The major results of this study are the following. 1) With the professional programmers, code reading detected more software faults and had a higher fault detection rate than did functional or structural testing, while functional testing detected more faults than did structural testing, but functional and structural testing were not different in fault detection rate. 2) In one advanced student subject group, code reading and functional testing were not different in faults found, but were both superior to structural testing, while in the other advanced student subject group there was no difference among the techniques. 3) With the advanced student subjects, the three techniques were not different in fault detection rate. 4) Number of faults observed, fault detection rate, and total effort in detection depended on the type of software tested. 5) Code reading detected more interface faults than did the other methods. 6) Functional testing detected more control faults than did the other methods.
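
For readers unfamiliar with the functional technique compared in the study, the toy sketch below shows equivalence partitioning plus boundary value analysis applied to a made-up function; the function and its partitions are illustrative only and do not come from the experiment:

```python
# Toy illustration of functional testing: equivalence partitioning plus
# boundary value analysis for a function that classifies an exam score.
# The function under test and the partitions are invented for illustration.

def grade(score: int) -> str:
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 60 else "fail"

# One representative per equivalence class, plus values at each boundary.
test_cases = {
    -1: ValueError, 0: "fail", 30: "fail", 59: "fail",    # invalid / fail class
    60: "pass", 85: "pass", 100: "pass", 101: ValueError, # pass class / invalid
}

for score, expected in test_cases.items():
    try:
        assert grade(score) == expected, (score, expected)
    except ValueError:
        assert expected is ValueError, (score, expected)
print("all functional test cases passed")
```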

546 citations


Journal ArticleDOI
Boehm
TL;DR: This article discusses avenues of improving productivity for both custom and mass-produced software; the topics covered include the importance of improving software productivity, measuring software productivity, analyzing software productivity, and trends in software productivity.
Abstract: This article discusses avenues of improving productivity for both custom and mass-produced software. Some of the topics covered are: the importance of improving software productivity, measuring software productivity, analyzing software productivity, improving software productivity, and trends in software productivity.

454 citations


Journal ArticleDOI

331 citations


Journal ArticleDOI
TL;DR: The architecture, implementation, and performance of the Warp machine is described, demonstrating that the Warp architecture is effective in the application domain of robot navigation as well as in other fields such as signal processing, scientific computation, and computer vision research.
Abstract: The Warp machine is a systolic array computer of linearly connected cells, each of which is a programmable processor capable of performing 10 million floating-point operations per second (10 MFLOPS). A typical Warp array includes ten cells, thus having a peak computation rate of 100 MFLOPS. The Warp array can be extended to include more cells to accommodate applications capable of using the increased computational bandwidth. Warp is integrated as an attached processor into a Unix host system. Programs for Warp are written in a high-level language supported by an optimizing compiler. The first ten-cell prototype was completed in February 1986; delivery of production machines started in April 1987. Extensive experimentation with both the prototype and production machines has demonstrated that the Warp architecture is effective in the application domain of robot navigation as well as in other fields such as signal processing, scientific computation, and computer vision research. For these applications, Warp is typically several hundred times faster than a VAX 11/780 class computer. This paper describes the architecture, implementation, and performance of the Warp machine. Each major architectural decision is discussed and evaluated with system, software, and application considerations. The programming model and tools developed for the machine are also described. The paper concludes with performance data for a large number of applications.

328 citations


Patent
22 Apr 1987
TL;DR: In this article, a support system for Computer-Aided Software Engineering (CASE) applications provides configuration management and features such as transparent retrieval of named versions of program sequences on a line-by-line basis, as well as task monitoring and reporting.
Abstract: A support system for Computer-Aided Software Engineering (CASE) applications provides configuration management and features such as transparent retrieval of named versions of program sequences on a line-by-line basis, as well as task monitoring and reporting. A modification record is maintained for all changes to the modules in the system build library, keyed by version number. Any version of a module can be obtained on a line-by-line basis, and several different versions can be retrieved simultaneously, supporting concurrent work on different versions by multiple users. Module monitoring is provided: if a module is modified while it is being monitored, all persons who might be affected are notified. Task monitoring likewise provides notification and tracking of tasks being accomplished, as well as "blueprints" to follow in the future for accomplishing the same or similar tasks.
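
The patent does not disclose its storage layout; one plausible way to sketch line-by-line retrieval of a named version from modification records is a base text plus per-version line replacements, as below (the record format and data are hypothetical, not the patented mechanism):

```python
# Rough sketch of retrieving a named version of a module line by line from
# modification records. The record format is invented for illustration.

BASE = ["int main() {", "  return 0;", "}"]

# Each version records replacement lines keyed by (1-based) line number,
# applied cumulatively on top of earlier versions.
MODS = {
    "1.1": {2: "  return 1;"},
    "1.2": {1: "int main(void) {"},
}

def lines_of(version: str):
    """Yield the module one line at a time as it stood in the given version."""
    text = list(BASE)
    for v in sorted(MODS):
        if v > version:
            break
        for lineno, new_text in MODS[v].items():
            text[lineno - 1] = new_text
    yield from text

print(list(lines_of("1.0")))  # base text only
print(list(lines_of("1.2")))  # both modification records applied
```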

320 citations


Journal ArticleDOI
TL;DR: An integrated molecular graphics and computational chemistry framework is described which has been designed primarily to handle small molecules of up to 300 atoms and provides a means of integrating software from any source into a single framework.
Abstract: An integrated molecular graphics and computational chemistry framework is described which has been designed primarily to handle small molecules of up to 300 atoms. The system provides a means of integrating software from any source into a single framework. It is split into two functional subsystems. The first subsystem, called COSMIC, runs on low-cost, serial-linked colour graphics terminals and allows the user to prepare and examine structural data and to submit them for extensive computational chemistry. Links also allow access to databases, other modelling systems and user-written modules. Much of the output from COSMIC cannot be examined with low-level graphics. A second subsystem, called ASTRAL, has been developed for the high-resolution Evans & Sutherland PS300 colour graphics terminal and is designed to manipulate complex display structures. The COSMIC minimisers, geometry investigators, molecular orbital displays, electrostatic isopotential generators and various interfaces and utilities are described.

300 citations


Book
01 Jan 1987
TL;DR: This applied, self-contained text provides detailed coverage of the practical aspects of multivariate statistical process control (MVSPC) based on the application of Hotelling's T2 statistic.
Abstract: This applied, self-contained text provides detailed coverage of the practical aspects of multivariate statistical process control (MVSPC) based on the application of Hotelling's T2 statistic. MVSPC is the application of multivariate statistical techniques to improve the quality and productivity of an industrial process. The authors, leading researchers in this area who have developed major software for this type of charting procedure, provide valuable insight into the T2 statistic. Intentionally including only a minimal amount of theory, they lead readers through the construction and monitoring phases of the T2 control statistic using numerous industrial examples taken primarily from the chemical and power industries. These examples are applied to the construction of historical data sets to serve as a point of reference for the control procedure and are also applied to the monitoring phase, where emphasis is placed on signal location and interpretation in terms of the process variables. Specifically devoted to the T2 methodology, Multivariate Statistical Process Control with Industrial Applications is the only book available that concisely and thoroughly presents such topics as how to construct a historical data set; how to check the necessary assumptions used with this procedure; how to chart the T2 statistic; how to interpret its signals; how to use the chart in the presence of autocorrelated data; and how to apply the procedure to batch processes. The book comes with a CD-ROM containing a 90-day demonstration version of the QualStat multivariate SPC software specifically designed for the application of T2 control procedures. The CD-ROM is compatible with Windows 95, Windows 98, Windows Me Millennium Edition, and Windows NT operating systems.
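
The charted quantity is the Hotelling T² statistic; for an individual observation x against a historical data set with mean x̄ and sample covariance S it is T² = (x − x̄)ᵀ S⁻¹ (x − x̄). A minimal numpy sketch follows (control limits and the QualStat software are not reproduced):

```python
# Sketch of the Hotelling T^2 statistic for an individual observation x
# against a historical data set X (rows = observations, columns = variables).
# T^2 = (x - xbar)' S^{-1} (x - xbar), with S the sample covariance matrix.
import numpy as np

def hotelling_t2(X: np.ndarray, x: np.ndarray) -> float:
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)          # sample covariance of the historical data
    d = x - xbar
    return float(d @ np.linalg.solve(S, d))

rng = np.random.default_rng(0)
historical = rng.normal(size=(200, 3))   # synthetic in-control reference data
print(hotelling_t2(historical, np.array([0.1, -0.2, 0.05])))  # small T^2: in control
print(hotelling_t2(historical, np.array([4.0, 4.0, 4.0])))    # large T^2: signal
```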

269 citations


Posted Content
TL;DR: A model of the implementation process for dedicated packages is presented, together with a research project to test the model; suggestions for package implementation are offered for both the customer and the package vendor.
Abstract: This paper presents a model of the implementation process for dedicated packages and describes a research project to test the model, undertaken with the cooperation of a major computer vendor. Data were collected from 78 individuals in 18 firms using the package and from the package vendor. The results of the study offer some support for the model, along with suggestions for package implementation for both the customer and the package vendor.

239 citations


ReportDOI
01 Sep 1987
TL;DR: This document provides guidelines and procedures for assessing the ability of potential DoD contractors to develop software in accordance with modern software engineering methods, and includes specific questions and a method for evaluating the results.
Abstract: : This document provides guidelines and procedures for assessing the ability of potential DoD contractors to develop software in accordance with modern software engineering methods. It includes specific questions and a method for evaluating the results.

223 citations


Journal ArticleDOI
01 Aug 1987
TL;DR: This note compares the performance of different computer systems while solving dense systems of linear equations using the LINPACK software in a Fortran environment.
Abstract: This note compares the performance of different computer systems while solving dense systems of linear equations using the LINPACK software in a Fortran environment. About 100 computers, ranging from a CRAY X-MP to 68000-based systems such as the Apollo and SUN workstations to IBM PCs, are compared.
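
A rough modern analogue of the measurement (not the original Fortran LINPACK code): time a dense n-by-n solve and convert to MFLOPS with the customary 2/3·n³ + 2·n² operation count:

```python
# Rough analogue of the LINPACK measurement: time the solution of a dense
# n-by-n system and report MFLOPS. This sketch uses numpy's LU-based solver,
# not the original Fortran LINPACK routines.
import time
import numpy as np

n = 1000
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

t0 = time.perf_counter()
x = np.linalg.solve(A, b)
elapsed = time.perf_counter() - t0

flops = 2.0 / 3.0 * n**3 + 2.0 * n**2      # customary LINPACK operation count
print(f"n={n}: {elapsed:.3f} s, {flops / elapsed / 1e6:.1f} MFLOPS")
print("residual norm:", np.linalg.norm(A @ x - b))
```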

Patent
Alan H. Karp
05 Aug 1987
TL;DR: In this article, the copy protection of personal computer (PC) software distributed on diskettes is assisted by providing a unique identification (ID) stored in read only memory (ROM) of a personal computer in which software on a diskette is to be used.
Abstract: The copy protection of personal computer (PC) software distributed on diskettes is assisted by providing a unique identification (ID) stored in read only memory (ROM) of a personal computer in which software on a diskette is to be used. This ID is accessible to the user of the computer. A vendor who wishes to protect his diskette-distributed software from illegal copying or use provides a source ID on the diskette. The personal computer ID is used with the source ID on the distributed diskette to produce an encoded check word, using any available encryption modality. The check word is generated and written onto the distributed diskette during installation and copied onto all backup versions made by the user's personal computer. Prior to each use of the program, the software on the diskette uses the personal computer and the source IDs and check word to verify that the software is being used on the same personal computer on which it was installed.
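
The patent leaves the encryption modality open; in the sketch below an HMAC over the machine ID and the vendor's source ID merely stands in for it, to illustrate the install-time check word and the pre-run verification (all IDs and the diskette layout are invented):

```python
# Illustrative sketch of the check-word scheme: at install time a check word is
# derived from the machine ID and the vendor's source ID and written to the
# diskette; before each run the same derivation is repeated and compared.
# HMAC-SHA256 here merely stands in for "any available encryption modality".
import hashlib
import hmac

def make_check_word(machine_id: str, source_id: str) -> str:
    return hmac.new(source_id.encode(), machine_id.encode(), hashlib.sha256).hexdigest()

def install(machine_id: str, source_id: str, diskette: dict) -> None:
    diskette["check_word"] = make_check_word(machine_id, source_id)

def verify_before_run(machine_id: str, diskette: dict) -> bool:
    expected = make_check_word(machine_id, diskette["source_id"])
    return hmac.compare_digest(expected, diskette["check_word"])

diskette = {"source_id": "VENDOR-1234"}
install("PC-ROM-ID-42", "VENDOR-1234", diskette)
print(verify_before_run("PC-ROM-ID-42", diskette))   # True: same machine
print(verify_before_run("PC-ROM-ID-99", diskette))   # False: copied elsewhere
```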

Book
31 Jul 1987
Abstract: This MIS text gives students and active managers a thorough and practical guide to IT management practices and issues. This edition covers hardware, software, networks, enterprise systems, and e-business systems, and it lays the groundwork for understanding the range of IS leadership roles and current best practices for managing IT assets.

Journal ArticleDOI
TL;DR: This paper examines three methods of creating fault-tolerant software systems and presents reliability models for each; the models are used to show that one method, the Consensus Recovery Block, is more reliable than the other two.
Abstract: In situations in which computers are used to manage life-critical situations, software errors that could arise due to inadequate or incomplete testing cannot be tolerated. This paper examines three methods of creating fault-tolerant software systems, Recovery Block, N-Version Programming, and Consensus Recovery Block, and it presents reliability models for each. The models are used to show that one method, the Consensus Recovery Block, is more reliable than the other two.
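
The Consensus Recovery Block control flow can be sketched roughly as: run the independently written versions, accept a majority result if one exists, and otherwise fall back to an acceptance test applied to each result in turn. The versions and acceptance test below are toy stand-ins; the paper's reliability models are not reproduced:

```python
# Sketch of Consensus Recovery Block control flow: run N independently written
# versions; if a majority agree, return that result; otherwise fall back to an
# acceptance test applied to each version's result in order, as in a recovery block.
from collections import Counter

def consensus_recovery_block(versions, acceptance_test, x):
    results = [v(x) for v in versions]
    value, count = Counter(results).most_common(1)[0]
    if count > len(versions) // 2:           # consensus among the versions
        return value
    for r in results:                        # no consensus: recovery-block fallback
        if acceptance_test(x, r):
            return r
    raise RuntimeError("all versions failed")

# Toy example: three versions of integer square root, one of them faulty.
versions = [
    lambda n: int(n ** 0.5),
    lambda n: int(n ** 0.5),
    lambda n: int(n ** 0.5) + 1,             # seeded fault
]
acceptance = lambda n, r: r * r <= n < (r + 1) ** 2
print(consensus_recovery_block(versions, acceptance, 24))   # 4, by majority vote
```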

Journal ArticleDOI
TL;DR: This analysis characterizes the effect of Cleanroom on the delivered product, the software development process, and the developers.
Abstract: The Cleanroom software development approach is intended to produce highly reliable software by integrating formal methods for specification and design, nonexecution-based program development, and statistically based independent testing. In an empirical study, 15 three-person teams developed versions of the same software system (800-2300 source lines); ten teams applied Cleanroom, while five applied a more traditional approach. This analysis characterizes the effect of Cleanroom on the delivered product, the software development process, and the developers.

Journal ArticleDOI
TL;DR: A Fortran static source code analyzer was developed to study 31 metrics, including a new hybrid metric introduced in this paper, and was applied to a database of 255 programs, all of which were student assignments; cross-correlation confirmed the internal consistency of those metrics belonging to the same class.
Abstract: Software metrics are computed for the purpose of evaluating certain characteristics of the software developed. A Fortran static source code analyzer, FORTRANAL, was developed to study 31 metrics, including a new hybrid metric introduced in this paper, and applied to a database of 255 programs, all of which were student assignments. Comparisons among these metrics are performed. Their cross-correlation confirms the internal consistency of some of these metrics which belong to the same class. To remedy the incompleteness of most of these metrics, the proposed metric incorporates context sensitivity to structural attributes extracted from a flow graph. It is also concluded that many volume metrics have similar performance while some control metrics surprisingly correlate well with typical volume metrics in the test samples used. A flexible class of hybrid metric can incorporate both volume and control attributes in assessing software complexity.
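
Neither FORTRANAL nor the paper's hybrid metric is reproduced here; as a flavour of what a static analyzer counts, this sketch computes one volume metric and one simple control metric for a small Fortran-like fragment (the fragment and counting rules are illustrative):

```python
# Flavour of what a static source analyzer counts: a volume metric (non-blank,
# non-comment lines) and a control metric (number of decision points) for a
# Fortran-like fragment. This is not FORTRANAL or the paper's hybrid metric.
import re

SOURCE = """\
C     COMPUTE SUM OF POSITIVE ELEMENTS
      S = 0.0
      DO 10 I = 1, N
      IF (A(I) .GT. 0.0) S = S + A(I)
   10 CONTINUE
"""

lines = [l for l in SOURCE.splitlines() if l.strip()]
code = [l for l in lines if not l.lstrip().upper().startswith("C")]   # drop comments
volume = len(code)
decisions = sum(len(re.findall(r"\b(IF|DO)\b", l.upper())) for l in code)

print("volume metric (executable lines):", volume)     # 4
print("control metric (decision points):", decisions)  # 2
```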

Proceedings ArticleDOI
Kemal Ebcioglu
01 Dec 1987
TL;DR: A compilation algorithm for efficient software pipelining of general inner loops, where the number of iterations and the time taken by each iteration may be unpredictable, due to arbitrary if-then-else statements and conditional exit statements within the loop.
Abstract: We describe a compilation algorithm for efficient software pipelining of general inner loops, where the number of iterations and the time taken by each iteration may be unpredictable, due to arbitrary if-then-else statements and conditional exit statements within the loop. As our target machine, we assume a wide instruction word architecture that allows multi-way branching in the form of if-then-else trees, and that allows conditional register transfers depending on where the microinstruction branches to (a hardware implementation proposal for such a machine is briefly described in the paper). Our compilation algorithm, which we call the pipeline scheduling technique, produces a software-pipelined version of a given inner loop, which allows a new iteration of the loop to begin on every cycle whenever dependencies and resources permit. The correctness and termination properties of the algorithm are studied in the paper.
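
The pipeline scheduling technique itself is not reproduced here; the toy schedule below only conveys the general idea of software pipelining, overlapping the load/compute/store stages of successive iterations so that a new iteration can start each cycle once the pipeline is full:

```python
# Toy illustration of software pipelining: each iteration has three stages
# (load, compute, store). The pipelined schedule overlaps stages of adjacent
# iterations so a new iteration starts every cycle once the pipeline is full.
# This conveys the idea only; it is not the paper's pipeline scheduling algorithm.

N = 5  # number of loop iterations

def sequential_schedule():
    cycles = []
    for i in range(N):
        cycles += [[f"load{i}"], [f"compute{i}"], [f"store{i}"]]
    return cycles

def pipelined_schedule():
    cycles = []
    for t in range(N + 2):                      # prologue, kernel, epilogue
        slot = []
        if t < N:
            slot.append(f"load{t}")
        if 0 <= t - 1 < N:
            slot.append(f"compute{t - 1}")
        if 0 <= t - 2 < N:
            slot.append(f"store{t - 2}")
        cycles.append(slot)
    return cycles

print("sequential:", len(sequential_schedule()), "cycles")  # 15
print("pipelined: ", len(pipelined_schedule()), "cycles")   # 7
for t, ops in enumerate(pipelined_schedule()):
    print(f"cycle {t}: {ops}")
```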

Proceedings ArticleDOI
27 Apr 1987
TL;DR: ABYSS is shown to be a general security base, in which many security applications may execute, and a novel use-once authorization mechanism, called a token, is introduced as a solution to the problem of providing authorizations without direct communication.
Abstract: ABYSS (A Basic Yorktown Security System) is an architecture for the trusted execution of application software. It supports a uniform security service across the range of computing systems. The use of ABYSS discussed in this paper is oriented towards solving the software protection problem, especially in the lower end of the market. Both current and planned software distribution channels are supportable by the architecture, and the system is nearly transparent to legitimate users. A novel use-once authorization mechanism, called a token, is introduced as a solution to the problem of providing authorizations without direct communication. Software vendors may use the system to obtain technical enforcement of virtually any terms and conditions of the sale of their software, including such things as rental software. Software may be transferred between systems, and backed up to guard against loss in case of failure. We discuss the problem of protecting software on these systems, and offer guidelines to its solution. ABYSS is shown to be a general security base, in which many security applications may execute.

Journal ArticleDOI
TL;DR: A controlled maintenance experiment was conducted involving twelve medium-size distributed software systems; six of these systems were implemented in LADY, the other six systems in an extended version of sequential Pascal.
Abstract: This paper describes a study on the impact of software structure on maintainability aspects such as comprehensibility, locality, modifiability, and reusability in a distributed system environment. The study was part of a project at the University of Kaiserslautern, West Germany, to design and implement LADY, a LAnguage for Distributed systems. The study addressed the impact of software structure from two perspectives. The language designer's perspective was to evaluate the general impact of the set of structural concepts chosen for LADY on the maintainability of software systems implemented in LADY. The language user's perspective was to derive structural criteria (metrics), measurable from LADY systems, that allow the explanation or prediction of the software maintenance behavior. A controlled maintenance experiment was conducted involving twelve medium-size distributed software systems; six of these systems were implemented in LADY, the other six systems in an extended version of sequential Pascal. The benefits of the structural LADY concepts were judged based on a comparison of the average maintenance behavior of the LADY systems and the Pascal systems; the maintenance metrics were derived by analyzing the interdependence between structure and maintenance behavior of each individual LADY system.

Journal ArticleDOI
TL;DR: A cost-reliability optimal software release problem is investigated for three existing software reliability growth models by evaluating both software cost and software reliability criteria simultaneously.

Proceedings ArticleDOI
Dewayne E. Perry
01 Mar 1987
TL;DR: The semantic interconnection model is introduced, which incorporates the advantages of the unit and syntactic interconnection models and provides extremely useful extensions to them and provides tools that are knowledgeable about the process of system construction and evolution and that work in symbiosis with the system builders to construct and evolve large software systems.
Abstract: We present a formulation of interconnection models and describe the unit and syntactic models — the primary models used for managing the evolution of large software systems. We discuss various tools that use these models and evaluate how well these models support the management of system evolution. We then introduce the semantic interconnection model. The semantic interconnection model incorporates the advantages of the unit and syntactic interconnection models and provides extremely useful extensions to them. By refining the grain of interconnections to the level of semantics (that is, to the predicates that define aspects of behavior), we provide tools that are better suited to manage the details of evolution in software systems and that provide a better understanding of the implications of changes. We do this by using the semantic interconnection model to formalize the semantics of program construction, the semantics of changes, and the semantics of version equivalence and compatibility. Thus, with this formalization, we provide tools that are knowledgeable about the process of system construction and evolution and that work in symbiosis with the system builders to construct and evolve large software systems.

Patent
27 Jan 1987
TL;DR: In this paper, a block diagram editor system and method is implemented in a computer workstation that includes a Cathode Ray Tube (CRT) and a mouse, graphics and windowing software, and an external communications interface for test instruments.
Abstract: A block diagram editor system and method is implemented in a computer workstation that includes a Cathode Ray Tube (CRT) and a mouse, graphics and windowing software, and an external communications interface for test instruments. The computer is programmed for constructing, interconnecting and displaying block diagrams of functional elements on the CRT. From prestored routines for each functional element, the software assembles and executes a program that emulates the functional operations of each element and transfers the data output from each element in turn to an input of the succeeding block, as determined by the block diagram configuration. The block functions include signal generating and analysis functions, and functions for control of various types of test instruments, which can be interactively controlled through the CRT and mouse. The computer converts desired outputs of the instruments into control settings and receives, analyzes and displays data from the instruments. Blocks can also be grouped into macroblocks.
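
The core execution idea, stripped of the patent's instrument control and graphics, can be sketched as a chain of block routines where each block's output feeds the next block's input; the blocks and wiring below are invented examples:

```python
# Sketch of executing a block diagram: each block has a prestored routine, and
# the executor runs the blocks in connection order, feeding each block's output
# to the input of the succeeding block. Blocks and wiring are invented examples.
import math

BLOCKS = {
    "source":   lambda _: [math.sin(2 * math.pi * k / 16) for k in range(16)],
    "gain":     lambda xs: [2.0 * x for x in xs],
    "analyzer": lambda xs: max(xs),            # e.g. a peak detector
}
WIRING = ["source", "gain", "analyzer"]        # a linear chain for simplicity

def run(diagram, order):
    data = None
    for name in order:                         # emulate each block in turn
        data = diagram[name](data)
    return data

print(run(BLOCKS, WIRING))                     # peak of the amplified sine wave
```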

Journal ArticleDOI
TL;DR: The Heterogeneous Computer Systems (HCS) project at the University of Washington is a major research and development effort whose goal is to simplify the interconnection of heterogeneous computer systems.
Abstract: Heterogeneity in hardware and software is an inevitable consequence of experimental computer research. At the University of Washington, the Heterogeneous Computer Systems (HCS) project is a major research and development effort whose goal is to simplify the interconnection of heterogeneous computer systems.

Patent
30 Jul 1987
TL;DR: In this paper, a system using a ray-tracing algorithm and a hierarchy of volume elements (called voxels) to process only the visible surfaces in a field of view is presented.
Abstract: A system using a ray-tracing algorithm and a hierarchy of volume elements (called voxels) to process only the visible surfaces in a field of view. In this arrangement, a dense, three-dimensional voxel data base is developed from the objects, their shadows and other features recorded, for example, in two-dimensional aerial photography. The rays are grouped into subimages and the subimages are executed as parallel tasks on a multiple instruction stream and multiple data stream computer (MIMD). The use of a three-dimensional voxel data base formed by combining three-dimensional digital terrain elevation data with two-dimensional plan view and oblique view aerial photography permits the development of a realistic and cost-effective data base. Hidden surfaces are not processed. By processing only visible surfaces, displays can now be produced depicting the nap-of-the-earth as seen in low flight of aircraft or as viewed from ground vehicles. The approach employed here is a highly-parallel data processing system solution to the nap-of-the-earth flight simulation through a high level of detail data base. The components of the system are the display algorithm and data structure, the software which implements the algorithm and data structure and creates the data base, and the hardware which executes the software. The algorithm processes only visible surfaces so that the occulting overload management problem is eliminated at the design level. The algorithm decomposes the image into subimages and processes the subimages independently.

Proceedings ArticleDOI
01 Oct 1987
TL;DR: A new logic simulation technique that uses software levelized compiled-code (LCC) for synchronous designs and experiments indicate that SSIM runs about 250 to 1,000 times faster than the AIDA event simulator that evaluates about 4,500 gates per second.
Abstract: This paper presents a new logic simulation technique that uses software levelized compiled-code (LCC) for synchronous designs. Three approaches are proposed: C source code, target machine code and interpreted code. The evaluation speed for the software LCC simulator (SSIM) is about 140,000 (gate) evaluations per second using C source code or target machine code, or 50,000 evaluations per second using interpreted code. It is about 40 to 100 times slower than the AIDA hardware LCC simulator, but is about one order of magnitude faster than a traditional software event simulator. For a 32-bit multiplier with gate activity more than 100%, experiments indicate that SSIM runs about 250 to 1,000 times faster than the AIDA event simulator that evaluates about 4,500 gates per second.
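
The SSIM code generators are not shown in the abstract; the sketch below illustrates only the levelized idea, ordering gates so each is evaluated exactly once per input vector with no event queue (the netlist format is invented):

```python
# Sketch of levelized compiled-code (LCC) simulation: gates are ordered by
# logic level so each one is evaluated exactly once per input vector, with no
# event queue. Netlist representation is invented for illustration.
GATES = {                 # name: (function, input names)
    "n1": ("AND", ("a", "b")),
    "n2": ("OR",  ("n1", "c")),
    "y":  ("NOT", ("n2",)),
}
OPS = {"AND": lambda x, y: x & y, "OR": lambda x, y: x | y, "NOT": lambda x: x ^ 1}

def levelize(gates):
    """Order gates so every gate appears after all of its fan-in gates."""
    ordered, placed = [], set()
    while len(ordered) < len(gates):
        for name, (_, ins) in gates.items():
            if name not in placed and all(i in placed or i not in gates for i in ins):
                ordered.append(name)
                placed.add(name)
    return ordered

ORDER = levelize(GATES)                 # the "compiled" straight-line order

def simulate(primary_inputs):
    values = dict(primary_inputs)
    for name in ORDER:                  # straight-line evaluation, no events
        op, ins = GATES[name]
        values[name] = OPS[op](*(values[i] for i in ins))
    return values["y"]

print(simulate({"a": 1, "b": 1, "c": 0}))   # 0
print(simulate({"a": 0, "b": 1, "c": 0}))   # 1
```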

Book ChapterDOI
01 Jan 1987
TL;DR: Tandem builds single-fault-tolerant computer systems designed for online diagnosis and maintenance and has price/performance competitive with conventional systems.
Abstract: Tandem builds single-fault-tolerant computer systems. At the hardware level, the system is designed as a loosely coupled multi-processor with fail-fast modules connected via dual paths. It is designed for online diagnosis and maintenance. A range of CPUs may be interconnected via a hierarchical fault-tolerant local network. A variety of peripherals needed for online transaction processing are attached via dual ported controllers. A novel disc subsystem allows a choice between low cost-per-Mbyte and low cost-per-access. System software provides processes and messages as the basic structuring mechanism. Processes provide software modularity and fault isolation. Process pairs tolerate hardware and transient software failures. Applications are structured as requesting processes making remote procedure calls to server processes. Process server classes utilize multi-processors. The resulting process abstractions provide a distributed system which can utilize thousands of processors. Networking protocols such as SNA, OSI, and a proprietary network are built atop this base. A relational database provides distributed data and distributed transactions. An application generator allows users to develop fault-tolerant applications as though the system were a conventional computer. The resulting system has price/performance competitive with conventional systems.

Book
03 Jan 1987
TL;DR: This report incorporates a literature review, a workshop paper, and discussion by workshop participants on the current status of research on what the users of computer systems know, and how these different forms of knowledge fit together in learning and performance.
Abstract: This report incorporates a literature review, a workshop paper, and discussion by workshop participants on the current status of research on what the users of computer systems know, and how these different forms of knowledge fit together in learning and performance. It is noted that such research is important to the problem of designing systems and training programs so that they are easy to use and the learning is efficient. Topics addressed include: (1) the development of mental models to describe the stored knowledge of users of computer systems and ways in which that knowledge is used to determine their behavior; (2) types of representations of the user's knowledge, including simple sequences and mental models (surrogates, metaphor models, glass box models, and network representations of the system); (3) how the users' knowledge affects the performance of both novices and experts; and (4) the application of what is known of the user's knowledge to practical problems in designing interfaces and in training users. A listing of 11 recommendations for further research concludes the report, and 90 references are given.

Journal ArticleDOI
TL;DR: This work presents protocols that enable software protection without causing substantial overhead in distribution and maintenance; the protocols may be implemented by a conventional cryptosystem or by a public key cryptosystem, such as the RSA.
Abstract: One of the overwhelming problems that software producers must contend with is the unauthorized use and distribution of their products. Copyright laws concerning software are rarely enforced, thereby causing major losses to the software companies. Technical means of protecting software from illegal duplication are required, but the available means are imperfect. We present protocols that enable software protection, without causing substantial overhead in distribution and maintenance. The protocols may be implemented by a conventional cryptosystem, such as the DES, or by a public key cryptosystem, such as the RSA. Both implementations are proved to satisfy required security criteria.

Journal ArticleDOI
TL;DR: The tasks that must be supported within software environments to support the use of DSSs are described, along with how existing prototype Model Management System (MMS) implementations provide this support.
Abstract: Decision Support Systems (DSSs) originally were proposed as interactive problem-solving vehicles through which models and analytical techniques could be made available to decision makers. Model management represents a line of research within the DSS field that focuses on the design and implementation of software environments to support the use of DSSs for this purpose. This paper describes the tasks that must be supported within these environments and illustrates how existing prototype Model Management System (MMS) implementations provide this support. The use of artificial intelligence techniques in such implementations is reviewed, and three scenarios are presented to show how future MMSs could be constructed using these techniques.

Book
01 Jan 1987
Abstract: From the Publisher: This one-volume text covers all important aspects of computer modelling and simulation. Based on the idea of "learning by doing," this text teaches the actual construction and use of both analogue and digital simulation models in continuous and discrete systems, while emphasizing the digital computer simulation of discrete systems. Covers the use of microprocessors and computer graphics for modelling and simulation and the availability of micro-based software. Stresses practical problem-solving with numerous diagrams and numerical examples. Also provided are sample program listings (Pascal, CSMP, GPSS, SIMSCRIPT) and output from actual computer runs.