
Showing papers on "Software" published in 1984


Journal ArticleDOI
TL;DR: An effective data collection method for evaluating software development methodologies and for studying the software development process is described and results show that data validation is a necessary part of change data collection.
Abstract: An effective data collection method for evaluating software development methodologies and for studying the software development process is described. The method uses goal-directed data collection to evaluate methodologies with respect to the claims made for them. Such claims are used as a basis for defining the goals of the data collection, establishing a list of questions of interest to be answered by data analysis, defining a set of data categorization schemes, and designing a data collection form. The data to be collected are based on the changes made to the software during development, and are obtained when the changes are made. To ensure accuracy of the data, validation is performed concurrently with software development and data collection. Validation is based on interviews with those people supplying the data. Results from using the methodology show that data validation is a necessary part of change data collection. Without it, as much as 50 percent of the data may be erroneous. Feasibility of the data collection methodology was demonstrated by applying it to five different projects in two different environments. The application showed that the methodology was both feasible and useful.

1,172 citations


Journal ArticleDOI
J. F. Kelley1
TL;DR: A six-step, iterative, empirical human factors design methodology was used to develop CAL, a natural language computer application that helps computer-naive business professionals manage their personal calendars, and to build the program's dictionaries.
Abstract: A six-step, iterative, empirical human factors design methodology was used to develop CAL, a natural language computer application to help computer-naive business professionals manage their personal calendars. Input language is processed by a simple, nonparsing algorithm with limited storage requirements and a quick response time. CAL allows unconstrained English inputs from users with no training (except for a five minute introduction to the keyboard and display) and no manual (except for a two-page overview of the system). In a controlled test of performance, CAL correctly responded to between 86 percent and 97 percent of the storage and retrieval requests it received, according to various criteria. This level of performance could never have been achieved with such a simple processing model were it not for the empirical approach used in the development of the program and its dictionaries. The tools of the engineering psychologist are clearly invaluable in the development of user-friendly software, if that software is to accommodate the unruly language of computer-naive, first-time users. The key is to elicit the cooperation of such users as partners in an iterative, empirical development process.
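
The abstract describes CAL's input handling only as a "simple, nonparsing algorithm"; the paper's actual dictionaries and matching rules are not reproduced here. The following is a rough, hypothetical sketch (invented keyword lists and function name) of how a nonparsing, keyword-driven classifier for storage versus retrieval requests might look.

# Illustrative sketch only: a keyword-driven, nonparsing classifier in the spirit
# of the approach described above. The keyword sets and function name are invented;
# CAL's actual dictionaries were built empirically from user data.
STORE_WORDS = {"schedule", "add", "book", "set", "remind", "put"}
RETRIEVE_WORDS = {"what", "when", "show", "list", "do i have", "tell"}

def classify_request(text: str) -> str:
    """Return 'store' or 'retrieve' by scanning for known keywords, without parsing."""
    lowered = text.lower()
    store_hits = sum(w in lowered for w in STORE_WORDS)
    retrieve_hits = sum(w in lowered for w in RETRIEVE_WORDS)
    return "store" if store_hits >= retrieve_hits else "retrieve"

print(classify_request("Schedule lunch with Pat on Tuesday at noon"))   # store
print(classify_request("What do I have on Tuesday afternoon?"))         # retrieve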

684 citations


Journal ArticleDOI
John D. Musa1
TL;DR: A theory of software reliability based on execution (CPU) time is outlined, together with a concomitant model of the testing and debugging process that permits execution time to be related to calendar time.
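
The TL;DR refers to Musa's execution-time theory of reliability. As a hedged illustration only, the sketch below evaluates the mean-failures and failure-intensity curves in the form commonly given for the basic execution-time model; the parameter values are assumptions, not data from the paper.

# Hedged sketch of the basic execution-time reliability model commonly attributed
# to Musa: expected failures and failure intensity as functions of CPU time tau.
# Parameter values are illustrative assumptions, not results from the paper.
import math

def expected_failures(tau, lambda0, nu0):
    """Mean failures experienced after tau units of execution (CPU) time."""
    return nu0 * (1.0 - math.exp(-(lambda0 / nu0) * tau))

def failure_intensity(tau, lambda0, nu0):
    """Failure intensity (failures per CPU hour) after tau units of execution time."""
    return lambda0 * math.exp(-(lambda0 / nu0) * tau)

lambda0, nu0 = 10.0, 100.0   # assumed: 10 failures/CPU-hour initially, 100 expected failures in total
for tau in (0, 10, 50, 100):
    print(tau, round(expected_failures(tau, lambda0, nu0), 1),
          round(failure_intensity(tau, lambda0, nu0), 3))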

644 citations


Journal ArticleDOI
TL;DR: An analysis of the distributions and relationships derived from the change data collected during development of a medium-scale software project produces some surprising insights into the factors influencing software development.
Abstract: An analysis of the distributions and relationships derived from the change data collected during development of a medium-scale software project produces some surprising insights into the factors influencing software development. Among these are the tradeoffs between modifying an existing module as opposed to creating a new one, and the relationship between module size and error proneness.

611 citations


Journal ArticleDOI
TL;DR: An approach called Draco to the construction of software systems from reusable software parts is discussed, concerned with the reuse of analysis and design information in addition to programming language code.
Abstract: This paper discusses an approach called Draco to the construction of software systems from reusable software parts. In particular we are concerned with the reuse of analysis and design information in addition to programming language code. The goal of the work on Draco has been to increase the productivity of software specialists in the construction of similar systems. The particular approach we have taken is to organize reusable software components by problem area or domain. Statements of programs in these specialized domains are then optimized by source-to-source program transformations and refined into other domains. The problems of maintaining the representational consistency of the developing program and producing efficient practical programs are discussed. Some examples from a prototype system are also given.
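
Draco itself organizes reusable components by domain and refines domain statements through source-to-source transformations. The toy sketch below is not Draco; it only illustrates the general idea of a source-to-source refinement rule, using an invented "report" domain statement and an invented rewrite rule.

# Toy illustration (not Draco): a source-to-source refinement that rewrites a
# statement in a small, invented "report" domain into lower-level code in a more
# general domain. The domain syntax and the rule are made up for this sketch.
import re

RULE = (r"PRINT TOTALS OF (\w+) BY (\w+)",
        lambda m: (f"for group in distinct({m.group(2)}):\n"
                   f"    print(group, sum(r.{m.group(1)} for r in rows if r.{m.group(2)} == group))"))

def refine(domain_stmt: str) -> str:
    """Apply the single refinement rule if it matches, otherwise leave the statement as is."""
    pattern, rewrite = RULE
    m = re.fullmatch(pattern, domain_stmt)
    return rewrite(m) if m else domain_stmt

print(refine("PRINT TOTALS OF sales BY region"))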

407 citations


Proceedings ArticleDOI
01 Jan 1984
TL;DR: The conceptual framework developed for animating algorithms is outlined, the implemented system is described, and several examples are given, drawn from the host of algorithms that have been animated.
Abstract: A software environment is described which provides facilities at a variety of levels for “animating” algorithms: exposing properties of programs by displaying multiple dynamic views of the program and associated data structures. The system is operational on a network of graphics-based, personal workstations and has been used successfully in several applications for teaching and research in computer science and mathematics. In this paper, we outline the conceptual framework that we have developed for animating algorithms, describe the system that we have implemented, and give several examples drawn from the host of algorithms that we have animated.
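
The abstract describes exposing program properties through multiple dynamic views. One common way to structure such a system (assumed here for illustration, not taken from the paper) is to have the algorithm signal interesting events to which registered views respond; the class and event names below are hypothetical.

# Hedged sketch: an algorithm signals events, and any number of registered views
# redraw in response. Names are illustrative assumptions, not the system's interface.
class Animator:
    def __init__(self):
        self.views = []
    def register(self, view):
        self.views.append(view)
    def event(self, name, **data):
        for view in self.views:
            view.update(name, data)

class TextView:
    def update(self, name, data):
        print(f"[text view] {name}: {data}")

class BarsView:
    def update(self, name, data):
        if name == "compare":
            print("[bars view] highlight bars", data["i"], data["j"])

def bubble_sort(a, anim):
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            anim.event("compare", i=j, j=j + 1)
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                anim.event("swap", i=j, j=j + 1, state=list(a))

anim = Animator()
anim.register(TextView())
anim.register(BarsView())
bubble_sort([3, 1, 2], anim)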

364 citations


Journal ArticleDOI
Barry Boehm1
TL;DR: In this article, the authors provide a good starting point for identifying and resolving software problems early in the life cycle, when they are relatively easy to handle.
Abstract: These recommendations provide a good starting point for identifying and resolving software problems early in the life cycle, when they are relatively easy to handle.

312 citations


Journal ArticleDOI
TL;DR: A viable method for software quality assessment, which integrates the capture-recapture method and the models above, is discussed, and its application to actual test data is illustrated.
Abstract: The s-shaped growth curves of detected software errors can be observed in software testing. The delayed s-shaped and inflection s-shaped software reliability growth models based on a nonhomogeneous Poisson process are discussed. The software reliability growth types of the models are investigated in terms of the error detection rate per error. In addition, a viable method for software quality assessment, which integrates the capture-recapture method with the models above, is discussed, and its application to actual test data is illustrated.
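
For readers unfamiliar with the two models named above, the sketch below evaluates the mean value functions in the form they are commonly given for the delayed S-shaped and inflection S-shaped NHPP models; the parameter values are illustrative assumptions, not the paper's test data.

# Hedged sketch: mean value functions commonly given for the two NHPP growth models
# named above (a = expected total errors, b = error detection rate, psi = inflection
# factor). Parameter values are illustrative, not taken from the paper.
import math

def delayed_s_shaped(t, a, b):
    """Expected cumulative errors detected by time t under the delayed S-shaped model."""
    return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))

def inflection_s_shaped(t, a, b, psi):
    """Expected cumulative errors detected by time t under the inflection S-shaped model."""
    return a * (1.0 - math.exp(-b * t)) / (1.0 + psi * math.exp(-b * t))

a, b, psi = 100.0, 0.1, 2.0
for t in (0, 10, 30, 60):
    print(t, round(delayed_s_shaped(t, a, b), 1),
          round(inflection_s_shaped(t, a, b, psi), 1))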

277 citations


Journal ArticleDOI
TL;DR: This paper sketches some problem areas to be addressed if the authors are to achieve the goal of devising practical software reuse systems, including information retrieval problems and finding effective methods to aid us in understanding how programs work.
Abstract: This paper explores software reuse. It discusses briefly some economic incentives for developing effective software reuse technology and notes that different kinds of software reuse, such as direct use without modification and reuse of abstract software modules after refinement, have different technological implications. It sketches some problem areas to be addressed if we are to achieve the goal of devising practical software reuse systems. These include information retrieval problems and finding effective methods to aid us in understanding how programs work. There is a philosophical epilogue which stresses the importance of having realistic expectations about the benefits of software reuse.

272 citations


01 Jan 1984
TL;DR: In this experiment, seven software teams developed versions of the same small-size (2000-4000 source instruction) application software product; prototyping yielded products with roughly equivalent performance but with about 40 percent less code and 45 percent less effort.
Abstract: In this experiment, seven software teams developed versions of the same small-size (2000-4000 source instruction) application software product. Four teams used the Specifying approach. Three teams used the Prototyping approach. The main results of the experiment were the following. 1) Prototyping yielded products with roughly equivalent performance, but with about 40 percent less code and 45 percent less effort. 2) The prototyped products rated somewhat lower on functionality and robustness, but higher on ease of use and ease of learning. 3) Specifying produced more coherent designs and software that was easier to integrate. The paper presents the experimental data supporting these and a number of additional conclusions.

267 citations


Journal ArticleDOI
TL;DR: An architecture for improving computer performance is presented and discussed, with the main feature of a high degree of decoupling between operand access and execution, which results in an implementation that has two separate instruction streams communicating via queues.
Abstract: An architecture for improving computer performance is presented and discussed. The main feature of the architecture is a high degree of decoupling between operand access and execution. This results in an implementation which has two separate instruction streams that communicate via queues. A similar architecture has been previously proposed for array processors, but in that context the software is called on to do most of the coordination and synchronization between the instruction streams. This paper emphasizes implementation features that remove this burden from the programmer. Performance comparisons with a conventional scalar architecture are given, and these show that considerable performance gains are possible. Single instruction stream versions, both physical and conceptual, are discussed with the primary goal of minimizing the differences with conventional architectures. This would allow known compilation and programming techniques to be used. Finally, the problem of deadlock in such a system is discussed, and one possible solution is given.
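
As a hedged illustration of the decoupling idea (not the paper's hardware design), the sketch below models an access stream that fills operand queues and an execute stream that drains them, so that memory access can run ahead of execution.

# Illustrative software model of a decoupled access/execute organization: the access
# stream loads operands into a queue; the execute stream consumes them and hands
# results back through a store queue. All names and programs here are invented.
from collections import deque

memory = {i: i * 10 for i in range(8)}        # toy memory
load_queue, store_queue = deque(), deque()

access_program = [("load", 1), ("load", 2), ("store_addr", 7)]
execute_program = [("add",)]                   # consume two operands, produce one result

for op, *args in access_program:               # access stream runs ahead, filling queues
    if op == "load":
        load_queue.append(memory[args[0]])
    elif op == "store_addr":
        store_queue.append(("pending_addr", args[0]))

for op, *args in execute_program:              # execute stream consumes operands from queues
    if op == "add":
        result = load_queue.popleft() + load_queue.popleft()
        _tag, addr = store_queue.popleft()
        memory[addr] = result

print(memory[7])   # 10 + 20 = 30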

Journal ArticleDOI
TL;DR: This review asserts that most one-channel QRS detectors described in the literature can be considered as having the same basic structure and a discussion of some of the current detection schemes is presented.
Abstract: The QRS detection algorithm is an essential part of any computer-based system for the analysis of ambulatory ECG recordings. This review asserts that most one-channel QRS detectors described in the literature can be considered as having the same basic structure. A discussion of some of the current detection schemes is presented with regard to this structure. Some additional features of QRS detectors are mentioned. The evaluation of performance and the problem of multichannel detection, which is now gaining importance, are also briefly treated.
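
The common structure the review refers to is typically a filtering stage, a nonlinear transformation, and a decision rule. The sketch below illustrates that generic structure only; the particular filter, transformation, and threshold are assumptions for illustration and are not taken from the review.

# Hedged sketch of a generic one-channel detector structure: a crude linear filter,
# a nonlinear (squaring) transformation, and a threshold decision rule with a
# refractory period. Filter lengths and threshold are illustrative assumptions.
def detect_qrs(samples, fs=250, threshold=0.5):
    # Linear filtering stage: a simple difference filter to emphasize steep slopes.
    filtered = [samples[i] - samples[i - 2] for i in range(2, len(samples))]
    # Nonlinear transformation: squaring makes large slopes stand out.
    squared = [x * x for x in filtered]
    # Decision rule: threshold crossings separated by a ~200 ms refractory period.
    peaks, last = [], -fs
    for i, v in enumerate(squared):
        if v > threshold and i - last > 0.2 * fs:
            peaks.append(i)
            last = i
    return peaks

ecg = [0.0] * 100
ecg[50] = 1.0                      # a single spike standing in for a QRS complex
print(detect_qrs(ecg))             # indices of detected beats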

Journal ArticleDOI
TL;DR: In this paper, seven software teams developed versions of the same small-size (2000-4000 source instruction) application software product using the Specifying approach and the Prototyping approach.
Abstract: In this experiment, seven software teams developed versions of the same small-size (2000-4000 source instruction) application software product. Four teams used the Specifying approach. Three teams used the Prototyping approach. The main results of the experiment were the following. 1) Prototyping yielded products with roughly equivalent performance, but with about 40 percent less code and 45 percent less effort. 2) The prototyped products rated somewhat lower on functionality and robustness, but higher on ease of use and ease of learning. 3) Specifying produced more coherent designs and software that was easier to integrate. The paper presents the experimental data supporting these and a number of additional conclusions.

Book
01 Jan 1984
TL;DR: The authors present the computer as the defining technology of contemporary Western culture, tracing earlier defining technologies: manual technology in the ancient world, mechanical and dynamic technology in western Europe, and electronic technology, from the clock to the computer, the "electronic brain."
Abstract: Part 1 Introduction: the measure of technological change; the computer as a defining technology; Turing's man. Part 2 Defining technologies in western culture: manual technology and the ancient world; mechanical technology and western Europe; dynamic technology and western Europe; electronic technology; from the clock to the computer; the electronic brain. Part 3 Principles of operation: the Turing machine - states and symbols; the von Neumann computer; hardware and software. Part 4 Embodied symbol - mathematics by computer: binary representation and numerical analysis; mathematics and culture; embodied mathematics. Part 5 Embodied symbol - logic by computer: truth and the von Neumann machine; the triumph of logic; the embodiment of logical thought. Part 6 Electronic space: physical space; logical space; finite space; infinite space; the geometry of electronic space. Part 7 Time and progress in the computer age: electronic clocks; time experienced and measured; progress in circles; the idea of progress. Part 8 Electronic language: natural and artificial language; the hierarchy of computer language; poetry and logic; the ancient view; the western European view; silent structures. Part 9 Electronic memory: digital memory technology; the art of memory; information retrieval and electronic power. Part 10 Creator and creation: coherence and correspondence; electronic limits; creating by hand and by machine; reason and necessity; electronic play. Part 11 Artificial intelligence: Turing's game; language, memory, and other games; the technology of making man; the electronic image of man; artifact and artificer. Part 12 Conclusion: natural man; from Socrates to Faust to Turing; living with Turing's man; invention and discovery; the computer as a tool; synthetic intelligence.

Patent
29 May 1984
TL;DR: In this article, a software vending system is described, comprising a host system with primary memory means for storing a plurality of different software programs, and peripheral vending instruments each operatively connected to the host system for interactive data communication therebetween.
Abstract: A software vending system comprising a host system including primary memory means for storing a plurality of different software programs, and a plurality of peripheral vending instruments each operatively connected to the host system for interactive data communication therebetween. Each of the peripheral vending instruments includes a selector device for selecting a desired one of the software programs, and a recording device operable to duplicate in a recording medium the selected software program transferred from the primary memory means in response to the operation of the selector device.

Journal ArticleDOI
TL;DR: An interactive on-line computer system that handles the planning requirements of orthognathic surgery--diagnosis, treatment planning and prediction of post-operative soft-tissue profile is described.
Abstract: An interactive on-line computer system is described with application in orthognathic surgery. Both the hardware and software of the system are discussed. The application of the system is outlined under the various features of the system's software. The general software collects, stores and analyses graphic data such as from cephalometric radiographs and facial and dental photographs. The specific software handles the planning requirements of orthognathic surgery—diagnosis, treatment planning and prediction of post-operative soft-tissue profile.

Book
01 Jan 1984
TL;DR: Coverage includes the 1st and 2nd Laws of Thermodynamics; fluid behavior, thermodynamic networks, heat effects, equilibrium and stability, the thermodynamics of pure substances; phase and chemical equilibrium; thermodynamic analysis of processes; physicomechanical processes, and more.
Abstract: Chemical and Process Thermodynamics is an example-rich guide to chemical engineering thermodynamics that focuses on current techniques, new applications, and today's revolutionary computerized tools. Coverage includes the 1st and 2nd Laws of Thermodynamics; fluid behavior, thermodynamic networks, heat effects, equilibrium and stability, the thermodynamics of pure substances; phase and chemical equilibrium; thermodynamic analysis of processes; physicomechanical processes, and more. The companion CD-ROM contains nine executive programs and three spreadsheets for carrying out professional-level calculations; Polymath numerical analysis software; the WASP computerized steam table; Equations of State software for visualizing thermodynamic processes as 3D PVT diagrams; and much more. The computing resources are not just useful adjuncts; their use is integrated into the text and amply illustrated with worked examples. A new chapter on the philosophy and practice of modeling thermodynamic systems has also been added.

Journal ArticleDOI
Hassan Gomaa1
TL;DR: DARTS—a design method for real-time systems—leads to a highly structured modular system with well-defined interfaces and reduced coupling between tasks.
Abstract: DARTS—a design method for real-time systems—leads to a highly structured modular system with well-defined interfaces and reduced coupling between tasks.

Patent
03 Dec 1984
TL;DR: In this article, a distributed processing unit (DPU) or drop is described which performs process control and data acquisition functions in a distributed processing control system having a data highway linking a plurality of such units.
Abstract: A distributed processing unit (DPU) or drop which performs process control and data acquisition functions in a distributed processing control system having a data highway linking a plurality of such units. A DPU functional processor accesses the local process I/O interface thereby continually receiving plant information for storage in digital form and subsequent use in the functional processor or for transmission along the data highway. DPU control programs use process values in a transparent fashion, that is without regard to whether these values were obtained through local process I/O interface or via the data highway. The DPU software structure is made up of execution software and support software. The execution software is a collection of data acquisition and process control programs which are developed at an engineer's console drop using a DPU programming language which operates in text and CRT graphic display modes, the latter programming modes allowing system documentation via hard copy graphic display printout. These programs are initiated, performed consecutively, and repeated at specified intervals. The support software initiates process loop execution. Control programs which have been presented to the DPU as representations of ladder diagrams for sequential control or process flow diagrams for continuous process control are executed in the DPU functional processor to achieve the required process control operation.
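
As a minimal illustration of the "transparent" use of process values described above (hypothetical names, not the patent's implementation), a control program might read a point by name without knowing whether the value came from local process I/O or from the data highway.

# Illustrative sketch only: the control program asks for a point by name and does
# not care whether the value was scanned from local process I/O or received over
# the data highway. Class, method, and point names are invented.
class PointDatabase:
    def __init__(self, local_io, highway):
        self.local_io = local_io          # values scanned from the local process I/O interface
        self.highway = highway            # values most recently received on the data highway
    def value(self, point_name):
        if point_name in self.local_io:
            return self.local_io[point_name]
        return self.highway[point_name]

db = PointDatabase(local_io={"FT101": 42.0}, highway={"TT205": 180.5})
# The control program uses both points identically:
print(db.value("FT101"), db.value("TT205"))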

Journal ArticleDOI
TL;DR: An approximate model is derived which enables one to account for the failures due to the design faults in a simple way when evaluating a system's dependability.
Abstract: This paper deals with evaluation of the dependability (considered as a generic term, whose main measures are reliability, availability, and maintainability) of software systems during their operational life, in contrast to most of the work performed up to now, devoted mainly to development and validation phases. The failure process due to design faults, and the behavior of a software system up to the first failure and during its life cycle are successively examined. An approximate model is derived which enables one to account for the failures due to the design faults in a simple way when evaluating a system's dependability. This model is then used for evaluating the dependability of 1) a software system tolerating design faults, and 2) a computing system with respect to physical and design faults.

Journal ArticleDOI
William S. Cleveland1
TL;DR: A detailed analysis of all graphs in one volume of Science revealed that 30% had errors, and a survey of 57 journals showed that natural science journals use far more graphs than mathematical or social science journals; such usage studies provide important information for developing new graphical methods for data presentation, guidelines, software, and the study of human graphical perception.
Abstract: Graphical communication in scientific publications can be improved; a detailed analysis of all graphs in one volume of Science revealed that 30% had errors. Graphs are used more in some disciplines than in others; a survey of 57 journals revealed natural science journals use far more graphs than mathematical or social science journals. Usage studies such as these provide important information for developing four other areas: new graphical methods for data presentation, guidelines, software, and human graphical perception.

Patent
20 Jun 1984
TL;DR: In this paper, apparatus is described for controlling the use of software in accordance with authorized software license limits, including a limit on the number of concurrent usages of a particular software, in a computer system having one or more operator terminals and a central processor containing the software.
Abstract: Apparatus for controlling the use of software in accordance with authorized software license limits, including a limit of the number of concurrent usages of a particular software in a computer system having one or more operator terminals and a central processor containing the software. The apparatus includes a receiver that monitors usage requests from the software in the central processor. A microprocessor based controller accesses authorized use data stored in an EEPROM. Depending on the propriety of usage requests, the controller and an interruptor and transmitter coupled to the central processor and its software prevents operation of the software and/or provides warning messages on the terminal screen.
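
A hedged sketch of the concurrent-usage check the patent describes, with invented names: a request to run the software is granted only while the number of active copies is within the authorized limit (held in an EEPROM in the apparatus).

# Illustrative sketch only; names are hypothetical and the check is shown in
# software rather than in the patent's microprocessor-based apparatus.
class LicenseController:
    def __init__(self, authorized_limit):
        self.limit = authorized_limit     # would be read from EEPROM in the apparatus
        self.active = 0
    def request_use(self):
        if self.active < self.limit:
            self.active += 1
            return True                   # allow the software to run
        return False                      # interrupt operation and/or warn: limit reached
    def release(self):
        self.active = max(0, self.active - 1)

ctrl = LicenseController(authorized_limit=2)
print(ctrl.request_use(), ctrl.request_use(), ctrl.request_use())   # True True False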

Journal ArticleDOI
TL;DR: This paper examines the concept of reusable software in all of its forms and assesses the current state of the art, which includes reusable design, various forms of specification systems, and systems for prototyping.
Abstract: The present crisis in software development forces us to reconsider the fundamental ways in which programming is done. One often quoted solution is to exploit more fully the idea of reusable software. It is the purpose of this paper to examine this concept in all of its forms and to assess the current state of the art. In addition to its usual meaning of reusable code, reusability includes reusable design, various forms of specification systems, so-called application generators, and systems for prototyping. We examine each approach from the perspective of the practicing engineer, and we evaluate the work in terms of how it may ultimately improve the development process for large-scale software systems.

Journal ArticleDOI
01 Jan 1984
TL;DR: Initial evaluations of the effectiveness of the SOAR architecture, obtained by compiling and simulating benchmarks, suggest that a Reduced Instruction Set Computer can provide high performance in an exploratory programming environment; SOAR's feasibility will be proven by fabricating a 35,000-transistor SOAR chip.
Abstract: Smalltalk on a RISC (SOAR) is a simple, Von Neumann computer that is designed to execute the Smalltalk-80 system much faster than existing VLSI microcomputers. The Smalltalk-80 system is a highly productive programming environment but poses tough challenges for implementors: dynamic data typing, a high level instruction set, frequent and expensive procedure calls, and object-oriented storage management. SOAR compiles programs to a low level, efficient instruction set. Parallel tag checks permit high performance for the simple common cases and cause traps to software routines for the complex cases. Parallel register initialization and multiple on-chip register windows speed procedure calls. Sophisticated software techniques relieve the hardware of the burden of managing objects. We have initial evaluations of the effectiveness of the SOAR architecture by compiling and simulating benchmarks, and will prove SOAR's feasibility by fabricating a 35,000-transistor SOAR chip. These early results suggest that a Reduced Instruction Set Computer can provide high performance in an exploratory programming environment.
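
The parallel tag check can be illustrated in software: handle the common case of adding two tagged integers directly and trap to a software routine otherwise. The sketch below is a hypothetical illustration of that idea, not SOAR's instruction set.

# Illustrative sketch only (invented names): the fast path handles tagged integer
# operands directly; any other tag combination traps to a software routine.
INT_TAG = 0

def tagged_add(x, y, trap_handler):
    xtag, xval = x
    ytag, yval = y
    if xtag == INT_TAG and ytag == INT_TAG:      # common case: both operands are small integers
        return (INT_TAG, xval + yval)
    return trap_handler(x, y)                    # complex case: trap to software

def generic_add_trap(x, y):
    # A software routine would dispatch on the operands' classes here.
    return ("object", f"send + to {x} with {y}")

print(tagged_add((INT_TAG, 2), (INT_TAG, 3), generic_add_trap))       # (0, 5)
print(tagged_add(("float", 2.5), (INT_TAG, 3), generic_add_trap))     # trapped case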

Journal ArticleDOI
TL;DR: The status of software quality assurance as a relatively new and autonomous field is described and current methods are reviewed, and future directions are indicated.
Abstract: This paper describes the status of software quality assurance as a relatively new and autonomous field. The history of its development from hardware quality assurance programs is discussed, current methods are reviewed, and future directions are indicated.

Journal ArticleDOI
TL;DR: A program package, called SEQAID, to support DNA sequencing is presented that automatically assembles long DNA sequences from short fragments with minimal user interaction and implements several new well-behaved algorithms based on a mathematical model of the problem.
Abstract: A program package, called SEQAID, to support DNA sequencing is presented. The program automatically assembles long DNA sequences from short fragments with minimal user interaction. Various tools for controlling the assembling process are also available. The main novel features of the system are that SEQAID implements several new well-behaved algorithms based on a mathematical model of the problem. It also utilizes available information on restriction fragments to detect illegitimate overlaps and to find relationships between separately assembled sequence blocks. Experiences with the system are reported, including an extremely pathological real sequence which offers an interesting benchmark for this kind of program.
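
The abstract does not spell out SEQAID's algorithms; as a generic illustration of the underlying assembly problem (an assumption, not SEQAID's method), the sketch below greedily merges fragments by their longest overlaps.

# Generic illustration of fragment assembly by greedy overlap merging; this is not
# SEQAID's algorithm, only the kind of problem the package addresses.
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def assemble(fragments):
    frags = list(fragments)
    while len(frags) > 1:
        # Pick the pair with the largest overlap and merge it.
        n, i, j = max((overlap(a, b), i, j)
                      for i, a in enumerate(frags)
                      for j, b in enumerate(frags) if i != j)
        merged = frags[i] + frags[j][n:]
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    return frags[0]

print(assemble(["ACGTAC", "TACGGA", "GGATTT"]))   # ACGTACGGATTT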

Journal ArticleDOI
Boehm1, Penedo, Stuckle, Williams, Pyster 
TL;DR: The article describes the steps that led to the creation of the software productivity project and its components and summarizes the requirements analyses on which the SPS was based.
Abstract: The software productivity system (SPS) was developed to support project activities. It involves a set of strategies, including the work environment; the evaluation and procurement of hardware equipment; the provision for immediate access to computing resources through local area networks; the building of an integrated set of tools to support the software development life cycle and all project personnel; and a user support function to transfer new technology. All of these strategies are being accomplished incrementally. The current architecture is VAX-based and uses the Unix operating system, a wideband local network, and a set of software tools. The article describes the steps that led to the creation of the software productivity project and its components and summarizes the requirements analyses on which the SPS was based.

Proceedings ArticleDOI
25 Jun 1984
TL;DR: The MIMOLA design method is a method for the design of digital processors from a very high-level behavioral specification, supported by a retargetable microcode generator and by a utilization and performance analyzer.
Abstract: The MIMOLA design method is a method for the design of digital processors from a very high-level behavioral specification. A key feature of this method is the synthesis of a processor from a description of programs which are expected to be typical for the applications of that processor. Design cycles, in which the designer tries to improve automatically generated hardware structures, are supported by a retargetable microcode generator and by a utilization and performance analyzer. This paper describes the design method, available software tools and some applications.

Journal ArticleDOI
TL;DR: This paper reports on a positive experience with a set of quantitative measures of software structure, which were used to evaluate the design and implementation of a software system which exhibits the interconnectivity of components typical of large‐scale software systems.
Abstract: The design and analysis of the structure of software systems has typically been based on purely qualitative grounds. In this paper we report on our positive experience with a set of quantitative measures of software structure. These metrics, based on the number of possible paths of information flow through a given component, were used to evaluate the design and implementation of a software system (the UNIX operating system kernel) which exhibits the interconnectivity of components typical of large-scale software systems. Several examples are presented which show the power of this technique in locating a variety of both design and implementation defects. Suggested repairs, which agree with the commonly accepted principles of structured design and programming, are presented. The effect of these alterations on the structure of the system and the quantitative measurements of that structure lead to a convincing validation of the utility of information flow metrics.
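
The metric family referred to above weights a component by the information flowing into and out of it. One widely cited formulation multiplies a procedure's length by the square of (fan-in times fan-out); whether that exact formula is the paper's is an assumption here, and the procedure data below are invented for illustration.

# Hedged sketch of an information-flow style structure metric. The formula is one
# commonly cited formulation; the procedure names and counts are invented.
def information_flow_complexity(length, fan_in, fan_out):
    """Weight a procedure by its length and the square of its information fan-in * fan-out."""
    return length * (fan_in * fan_out) ** 2

procedures = {
    "namei": {"length": 200, "fan_in": 12, "fan_out": 5},   # illustrative numbers only
    "iget":  {"length": 80,  "fan_in": 7,  "fan_out": 3},
}
for name, p in procedures.items():
    print(name, information_flow_complexity(p["length"], p["fan_in"], p["fan_out"]))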

Journal ArticleDOI
TL;DR: This paper will examine the constituent components of SCM, dwelling at some length on one of those components, configuration control, and conclude with a look at what the 1980's might have in store.
Abstract: Software configuration management (SCM) is one of the disciplines of the 1980's which grew in response to the many failures of the software industry throughout the 1970's. Over the last ten years, computers have been applied to the solution of so many complex problems that our ability to manage these applications has all too frequently failed. This has resulted in the development of a series of "new" disciplines intended to help control the software process. This paper will focus on the discipline of SCM by first placing it in its proper context with respect to the rest of the software development process, as well as to the goals of that process. It will examine the constituent components of SCM, dwelling at some length on one of those components, configuration control. It will conclude with a look at what the 1980's might have in store.