
Showing papers on "Software published in 1982"




Journal ArticleDOI
TL;DR: This paper surveys current verification, validation, and testing approaches, discusses their strengths, weaknesses, and life-cycle usage, and describes automated tools used to implement validation, verification, and testing.
Abstract: Software quality is achieved through the application of development techniques and the use of verification procedures throughout the development process. Careful consideration of specific quality attributes and validation requirements leads to the selection of a balanced collection of review, analysis, and testing techniques for use throughout the life cycle. This paper surveys current verification, validation, and testing approaches and discusses their strengths, weaknesses, and life-cycle usage. In conjunction with these, the paper describes automated tools used to implement validation, verification, and testing. In the discussion of new research thrusts, emphasis is given to the continued need to develop a stronger theoretical basis for testing and the need to employ combinations of tools and techniques that may vary over each application.

485 citations


Patent
Rob Pike
07 Oct 1982
TL;DR: In this paper, a graphic terminal is disclosed using bitmaps to represent plural overlapping displays, and graphics software is also disclosed in which the overlapping asynchronous windows or layers are manipulated by manipulating the bitmaps.
Abstract: A graphic terminal is disclosed using bitmaps to represent plural overlapping displays. Graphics software is also disclosed in which the overlapping asynchronous windows or layers are manipulated by manipulating the bitmaps. With this software, the physical screen becomes several logical screens (layers) all running simultaneously, any one of which may be interacted with at any time.
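
The patent text above describes the mechanism but publishes no code. As a rough illustration of the core idea, that each layer owns its own bitmap and the physical screen is recomposed by painting the layer bitmaps back to front, here is a minimal Python sketch; all names and the character-based "bitmap" are hypothetical, not the patented implementation.

```python
# Minimal sketch (hypothetical names): each layer keeps its own bitmap,
# and the physical screen is recomposed by painting layers back to
# front, so overlapping "logical screens" can each be drawn into
# independently, even while obscured.

class Layer:
    def __init__(self, x, y, w, h, fill):
        self.x, self.y, self.w, self.h = x, y, w, h
        # Per-layer bitmap: rows of pixel values (characters here).
        self.bitmap = [[fill] * w for _ in range(h)]

def compose(layers, screen_w, screen_h, background="."):
    """Paint layers back to front into a single screen bitmap."""
    screen = [[background] * screen_w for _ in range(screen_h)]
    for layer in layers:  # earlier layers are farther back
        for row in range(layer.h):
            for col in range(layer.w):
                sy, sx = layer.y + row, layer.x + col
                if 0 <= sy < screen_h and 0 <= sx < screen_w:
                    screen[sy][sx] = layer.bitmap[row][col]
    return screen

if __name__ == "__main__":
    layers = [Layer(0, 0, 6, 4, "A"), Layer(3, 1, 6, 4, "B")]  # B overlaps A
    layers[0].bitmap[0][0] = "a"  # drawing into a partly obscured layer
    for row in compose(layers, 12, 6):
        print("".join(row))
```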

427 citations



Patent
20 Sep 1982
TL;DR: In this paper, a method and apparatus are provided for inhibiting unauthorized copying, unauthorized usage and automated cracking of proprietary software used in computer systems, which are protected by encapsulation and/or encryption.
Abstract: A method and apparatus are provided for inhibiting unauthorized copying, unauthorized usage and automated cracking of proprietary software used in computer systems. The computer systems execute protected programs, which are protected by encapsulation and/or encryption. To provide security against unauthorized copying of software, means are provided that detect and inhibit automated cracking of protected programs. These means will destroy or make inaccessible information in the CPU during conditions when automated cracking could occur. These means will also store interrupt contexts in secret to prevent implementation of automated cracking. Additional features may be provided to allow operation as a general purpose computer system, where protected programs are distributed using public key cryptography and a means is provided to convert from this distribution form to the protected execution form.

321 citations


Book
01 Nov 1982
Controlling Software Projects: Management, Measurement and Estimation.

249 citations


Journal ArticleDOI
01 Apr 1982
TL;DR: An architecture for improving computer performance is presented whose main feature is a high degree of decoupling between operand access and execution, resulting in an implementation with two separate instruction streams that communicate via queues.
Abstract: An architecture for improving computer performance is presented and discussed. The main feature of the architecture is a high degree of decoupling between operand access and execution. This results in an implementation which has two separate instruction streams that communicate via queues. A similar architecture has been previously proposed for array processors, but in that context the software is called on to do most of the coordination and synchronization between the instruction streams. This paper emphasizes implementation features that remove this burden from the programmer. Performance comparisons with a conventional scalar architecture are given, and these show that considerable performance gains are possible. Single instruction stream versions, both physical and conceptual, are discussed with the primary goal of minimizing the differences with conventional architectures. This would allow known compilation and programming techniques to be used. Finally, the problem of deadlock in such a system is discussed, and one possible solution is given.
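
A minimal sketch of the decoupling the abstract describes, with the access stream and the execute stream communicating only through operand queues. The instruction formats and the strictly sequential driver are simplifications of my own; the point of the real architecture is that the two streams run on separate hardware and slip relative to one another.

```python
# Illustrative sketch (hypothetical names): the access stream fetches
# operands and pushes them into a queue; the execute stream pops them
# and computes.  Queues let the access stream run ahead of execution.
from collections import deque

def access_stream(memory, addresses, load_queue):
    for addr in addresses:          # access processor: operand fetch only
        load_queue.append(memory[addr])

def execute_stream(load_queue, store_queue):
    while len(load_queue) >= 2:     # execute processor: compute only
        a, b = load_queue.popleft(), load_queue.popleft()
        store_queue.append(a + b)

memory = {0: 10, 1: 32, 2: 5, 3: 7}
loads, stores = deque(), deque()
access_stream(memory, [0, 1, 2, 3], loads)   # may run ahead of execution
execute_stream(loads, stores)
print(list(stores))                          # [42, 12]
```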

239 citations


Journal ArticleDOI
TL;DR: This paper describes an object-oriented design methodology, using Ada as the implementation language, and indicates that the application of appropriate design methodologies, embodied in a high-order language, is effective in combating software depression.
Abstract: The current software depression is characterized by software that is late, erroneous, and costly. Experience indicates that the application of appropriate design methodologies, embodied in a high-order language, is appropriate in combating this depression. In particular, this paper describes an object-oriented design methodology, using Ada as the implementation language.

203 citations


Proceedings ArticleDOI
Gregory Francis Pfister1
01 Jan 1982
TL;DR: The Yorktown Simulation Engine is a special-purpose, highly-parallel programmable machine for the gate-level simulation of logic that can simulate up to one million gates at a speed of over two billion gate simulations per second.
Abstract: The Yorktown Simulation Engine (YSE) is a special-purpose, highly-parallel programmable machine for the gate-level simulation of logic. It can simulate up to one million gates at a speed of over two billion gate simulations per second; it is estimated that the IBM 3081 processor could have been simulated on the YSE at a rate of 1000 instructions per second. This is far beyond the capabilities of existing register-level software simulators. The YSE has been designed and is being constructed at the IBM T. J. Watson Research Center. This paper introduces the YSE and describes its top-level architecture.
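
The YSE itself is special-purpose hardware, so no software sketch can reproduce its throughput. The toy Python simulator below only illustrates what a single "gate simulation", the unit in which the YSE's two-billion-per-second figure is quoted, amounts to: one table-lookup evaluation of one gate. The netlist format is an assumption of this sketch.

```python
# Toy gate-level simulator (illustrative only; the YSE evaluates gates
# in parallel hardware, not in software).  Each gate evaluation is a
# small table lookup over its input values.

GATE_TABLE = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),
}

def simulate(netlist, inputs):
    """netlist: list of (output_net, gate_type, in_net_a, in_net_b),
    topologically ordered so each gate's inputs are already computed."""
    nets = dict(inputs)
    for out, kind, a, b in netlist:
        nets[out] = GATE_TABLE[kind](nets[a], nets[b])
    return nets

# Half adder: sum = a XOR b, carry = a AND b.
half_adder = [("sum", "XOR", "a", "b"), ("carry", "AND", "a", "b")]
print(simulate(half_adder, {"a": 1, "b": 1}))  # sum = 0, carry = 1
```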

166 citations


Book
01 Jan 1982
TL;DR: This book investigates efficiency at a design level that is practiced by many but discussed by few, and the operations undertaken at this level are beneath most work on algorithms and data structures yet are too complex for most current and foreseeable compilers.
Abstract: The primary task of software engineers is the cost-effective development of maintainable and useful software. There are many secondary problems lurking in that definition. One such problem arises from the term "useful": to be useful in the application at hand, software must often be efficient (that is, use little time or space). The problem we will consider in this book is building efficient software systems. There are a number of levels at which we may confront the problem of efficiency. These are defined in Section 1.2 and include the overall system design, the program's algorithms and data structures, the translation to machine code, and the underlying system software and hardware; many books discuss efficiency at each of those levels. In this book we will investigate efficiency at a design level that is practiced by many but discussed by few. This level is called "writing efficient code" and can be defined as follows: The activity of writing efficient code takes as input a high-level language program (which incorporates efficient algorithms and data structures) and produces as output a program in the same high-level language that is suitable for compilation into efficient machine code. The operations undertaken at this level are beneath most work on algorithms and data structures yet are too complex for most current and foreseeable compilers.
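
A small example of the activity the book defines, sketched in Python rather than the book's own languages: the input and the output are programs in the same high-level language, but the output avoids recomputing a loop-invariant expression, a transformation beneath the algorithm level yet not guaranteed by every compiler.

```python
import math

# "Writing efficient code": same high-level language in and out,
# but the rewritten version does strictly less work.

# Before: recomputes a loop-invariant expression on every iteration.
def distances_slow(points, scale_deg):
    out = []
    for (x, y) in points:
        k = math.cos(math.radians(scale_deg))   # invariant: same every pass
        out.append(k * math.sqrt(x * x + y * y))
    return out

# After: the invariant is hoisted out of the loop; results are identical.
def distances_fast(points, scale_deg):
    k = math.cos(math.radians(scale_deg))       # computed once
    return [k * math.sqrt(x * x + y * y) for (x, y) in points]

pts = [(3.0, 4.0), (6.0, 8.0)]
assert distances_slow(pts, 60.0) == distances_fast(pts, 60.0)
```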

129 citations


Patent
21 Sep 1982
TL;DR: In this article, the authors present a multiprocessor control system that allows full job recovery after a machine power down or after a malfunction or software crash or temporary power outage, in particular, essential variables such as the state and status of the machine and the programmed job at the time of the malfunction are maintained in nonvolatile memory.
Abstract: The present invention is a multiprocessor control system that allows full job recovery after a machine power down or after a malfunction or software crash or temporary power outage. In particular, essential variables such as the state and status of the machine and the programmed job at the time of the malfunction are maintained in nonvolatile memory. This information is continually updated in nonvolatile memory. Once the control system has reset and reinitialized all the control elements after a malfunction, the control restores or downloads all the relevant variables in the nonvolatile memory to the various control elements to maintain status. In another embodiment, the essential variables are maintained in RAM locations in a master processor and saved for downloading to the control elements.
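
A sketch of the recovery discipline the patent describes, with a JSON file standing in for the controller's nonvolatile memory; the variable names and the atomic-rename detail are illustrative assumptions, not the patented design.

```python
import json, os

# Sketch of checkpoint/restore to "nonvolatile" storage (hypothetical
# names; a JSON file stands in for the controller's nonvolatile memory).

CHECKPOINT = "job_state.json"

def save_state(state):
    """Continually update essential variables in nonvolatile storage.
    Write to a temp file and rename, so a crash mid-write cannot
    corrupt the last good checkpoint (an assumption of this sketch)."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)

def restore_state():
    """After reset and reinitialization, download the saved variables."""
    with open(CHECKPOINT) as f:
        return json.load(f)

state = {"job_id": 17, "page": 42, "machine_status": "printing"}
save_state(state)           # updated continually as the job progresses
# ... malfunction, power cycle, control elements reinitialized ...
print(restore_state())      # the job resumes from the recorded state
```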

Journal ArticleDOI
TL;DR: This paper discusses an integrated language/system whose goal is to support the construction of robust software that survives node, network, and media failures.
Abstract: Technological advances have made it possible to construct systems from collections of computers connected by a network. At present, however, there is little support for the construction and execution of software to run on such a system. Our research concerns the development of an integrated language/system whose goal is to provide the needed support. This paper discusses a number of issues that must be addressed in such a language. The major focus of our work and this paper is support for the construction of robust software that survives node, network, and media failures.

Proceedings ArticleDOI
01 Mar 1982
TL;DR: It is argued that the most effective design methodology must make simultaneous tradeoffs across all three areas: hardware, software support, and systems support.
Abstract: Most new computer architectures are concerned with maximizing performance by providing suitable instruction sets for compiled code and providing support for systems functions. We argue that the most effective design methodology must make simultaneous tradeoffs across all three areas: hardware, software support, and systems support. Recent trends lean towards extensive hardware support for both the compiler and operating systems software. However, consideration of all possible design tradeoffs may often lead to less hardware support. Several examples of this approach are presented, including: omission of condition codes, word-addressed machines, and imposing pipeline interlocks in software. The specifics and performance of these approaches are examined with respect to the MIPS processor.

Journal ArticleDOI
05 Oct 1982
TL;DR: The MIPS processor is a fast pipelined engine without pipeline interlocks, which attempts to achieve high performance with the use of a simplified instruction set, similar to those found in microengines.
Abstract: MIPS is a new single chip VLSI microprocessor. It attempts to achieve high performance with the use of a simplified instruction set, similar to those found in microengines. The processor is a fast pipelined engine without pipeline interlocks. Software solutions to several traditional hardware problems, such as providing pipeline interlocks, are used.
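
Both MIPS entries above turn on moving pipeline interlocks into software. Below is a hedged sketch of what such a reorganizer does, assuming a hypothetical instruction format and a pipeline in which a loaded value is unavailable to the immediately following instruction; the real MIPS reorganizer tries to fill the slot with a useful instruction before falling back to a NOP.

```python
# Illustrative software-interlock pass (hypothetical instruction
# format).  Assumed pipeline rule: a loaded register is not available
# to the instruction immediately after the load, so the reorganizer
# must separate a load from a consumer of its result.

def reads(instr, reg):
    """Does this instruction read the given register? (sketch rules)"""
    op, *args = instr
    srcs = args[1:] if op in ("add", "sub") else args[:1] if op == "store" else []
    return reg in srcs

def insert_interlocks(program):
    out = []
    for i, instr in enumerate(program):
        out.append(instr)
        if instr[0] == "load":
            dest = instr[1]
            nxt = program[i + 1] if i + 1 < len(program) else None
            if nxt and reads(nxt, dest):
                out.append(("nop",))   # software-provided interlock
    return out

prog = [("load", "r1", "x"), ("add", "r2", "r1", "r1")]
for ins in insert_interlocks(prog):
    print(ins)   # load; nop; add
```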

01 Jan 1982
TL;DR: The mechanisms behind the exhibited modifiability and lack of modifiability in a large commercial software system during a part of its evolution are investigated, and a dichotomy of software modularizations is proposed.
Abstract: Large software systems are often characterized by a continuing evolution, where a large number of people are involved in maintaining and extending the system. Software modifiability is a critical issue in such system evolution. It is desirable that the basic design is modifiable, and that subsequent evolution maintains this modifiability. This thesis is an investigation of the mechanisms behind the exhibited modifiability and lack of modifiability in a large commercial software system during a part of its evolution. First, the relation between modifiability and different types of modularizations is discussed, and a dichotomy of software modularizations is proposed. As a measure of modifiability at system level, i.e., disregarding the internal modifiability of modules, we use the number of modules which are influenced by the implementation of a certain system change. The implementation of each requirement in one release of the system is examined, and the underlying causes of good and bad modifiability are explained. This results in a list of factors which were found to influence modifiability.

Journal ArticleDOI
TL;DR: An overview of the language mechanisms is given, some of the major design decisions are discussed and one implementation of SR is described.
Abstract: SR is a new language for programming software containing many processes that execute in parallel. The language allows an entire software system that controls a potentially large collection of processors to be programmed as an integrated set of software modules. The key language mechanisms are resources, operations and input statements. The language supports separate compilation, type abstraction, and dynamic communication links; it also contains novel treatments of arrays and procedures. This paper gives an overview of the language mechanisms, discusses some of the major design decisions and describes one implementation.

Patent
12 Nov 1982
TL;DR: In this paper, a system and technique are presented which enable several independent software tasks, referred to as partitions, to be stored in memory and concurrently utilized without interfering with each other's operation.
Abstract: In accordance with the illustrated preferred embodiment, a system and technique are presented which enable several independent software tasks, referred to as partitions, to be stored in memory and concurrently utilized without interfering with each other's operation. This system and technique also enable physical memory to be dynamically mapped so that various dynamic structures, including partitions, can be dynamically allocated memory as needed without interfering with each other. Some interaction between partitions is also made available so that results generated by operation of one of the partitions can be utilized by other of the partitions.

Journal ArticleDOI
TL;DR: In this article, the authors provided raw displacement-time data for an object falling in the earth's gravitational field and demonstrated the results obtained using splines and digital filtering to calculate acceleration.
Abstract: The advent of the digital computer has allowed workers in the field of biomechanics to perform data smoothing and time differentiation numerically. The methods which are most commonly used are spline functions and digital filtering. This paper provides raw displacement-time data for an object falling in the earth's gravitational field and demonstrates the results obtained using splines and digital filtering to calculate acceleration. The quintic spline is recommended as a good method when sufficient core is available, while the digital filter, with some form of augmentation and/or velocity smoothing, can yield satisfactory results for the user who has a mini- or microcomputer. Listings of the digital filter and finite difference subroutines are provided.
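
A sketch of the digital-filter route the paper recommends for small machines, using NumPy/SciPy in place of the paper's FORTRAN listings. The sampling rate, cutoff, and noise level are illustrative assumptions, and filtfilt's end-padding plays a role only loosely analogous to the "augmentation" mentioned above.

```python
# Low-pass filter noisy displacement data, then differentiate twice
# with finite differences to recover acceleration (should come out
# near g = -9.81 m/s^2 for a falling object).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                                   # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
g = -9.81
displacement = 0.5 * g * t**2 + np.random.normal(0, 1e-4, t.size)

# 4th-order Butterworth low-pass; filtfilt runs forward and backward
# for zero phase lag and pads the ends of the record.
b, a = butter(4, 6.0 / (fs / 2))             # assumed 6 Hz cutoff
smoothed = filtfilt(b, a, displacement)

velocity = np.gradient(smoothed, 1.0 / fs)   # first finite difference
acceleration = np.gradient(velocity, 1.0 / fs)

print(np.mean(acceleration[20:-20]))         # ~ -9.81 away from the ends
```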

Patent
12 Nov 1982
TL;DR: In this paper, a cardiac pacemaker or muscle stimulator is described as a physiological device adapted for implantation in a human patient, characterized by having programmable means for generating data and assembling same for presentation of one or more histograms.
Abstract: A physiological device adapted for implantation in a human patient, e.g. a cardiac pacemaker or muscle stimulator, characterized by having programmable means for generating data and assembling same for presentation of one or more histograms. The implanted device has circuitry for registering the occurrence of sensed or evoked events, as well as device operating events such as cycles of operation, and means, preferably software control of a microprocessor, for classifying registered events into respective classes of one or more parameters associated with the events and for accumulating counts of events for each such class. The system also includes external apparatus for communicating programmed instructions to the device, whereby histogram selection and histogram classes are programmed, and for receiving the histogram data transmitted from the implanted device and displaying it in a convenient histogram form. Time based histograms are also generated, utilizing a software clock for continuously tracking time.

Journal ArticleDOI
TL;DR: This paper surveys the most likely changes in the programming task and in the nature of software over the short term, the medium term, and the long term.
Abstract: The nature of programming is changing. These changes will accelerate as improved software development practices and more sophisticated development tools and environments are produced. This paper surveys the most likely changes in the programming task and in the nature of software over the short term, the medium term, and the long term. In the short term, the focus is on gains in programmer productivity through improved tools and integrated development environments. In the medium term, programmers will be able to take advantage of libraries of software components and to make use of packages that generate programs automatically for certain kinds of common systems. Over the longer term, the nature of programming will change even more significantly as programmers become able to describe desired functions in a nonprocedural way, perhaps through a set of rules or formal specification languages. As these changes occur, the job of the application programmer will become increasingly analysis-oriented and software developers will be able to attack a large number of application areas which could not previously be addressed effectively.

01 Feb 1982
TL;DR: Independently generated input data was used to verify that interfailure times are very nearly exponentially distributed and to obtain good estimates of the failure rates of individual errors and demonstrate how widely they vary.
Abstract: A software experiment conducted with repetitive run sampling is reported. Independently generated input data was used to verify that interfailure times are very nearly exponentially distributed, to obtain good estimates of the failure rates of individual errors, and to demonstrate how widely those rates vary. This fact invalidates many of the popular software reliability models now in use. The log failure rate was nearly linear as a function of the number of errors corrected. A new model of software reliability is proposed that incorporates these observations.
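
The two reported observations, exponential interfailure times and a log failure rate that falls roughly linearly with the number of errors corrected, suggest a per-stage rate of the form lam_i = exp(a - b*i). The sketch below simulates that law with assumed parameters and recovers the slope; it is a reconstruction of the stated observations, not the paper's actual model or data.

```python
# If the log failure rate is linear in the number of errors corrected,
# the rate after correcting i errors is lam_i = exp(a - b*i), and each
# stage's interfailure times are exponential with that rate.
import math, random

random.seed(1)
a, b = 0.0, 0.5                      # assumed log-linear law
rates = [math.exp(a - b * i) for i in range(8)]

# Per stage, estimate the rate from the sample mean (mean ~ 1/lam).
est_log_rates = []
for lam in rates:
    times = [random.expovariate(lam) for _ in range(5000)]
    est_log_rates.append(math.log(1.0 / (sum(times) / len(times))))

# Least-squares slope of log rate vs. errors corrected recovers -b.
n = len(est_log_rates)
xs = list(range(n))
xbar, ybar = sum(xs) / n, sum(est_log_rates) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, est_log_rates)) \
        / sum((x - xbar) ** 2 for x in xs)
print(round(slope, 3))               # close to -0.5
```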

Journal ArticleDOI
TL;DR: The "state of the art" of mathematical software that solves systems of nonlinear equations is evaluated based on a comparison of eight readily available FORTRAN codes.
Abstract: The "state of the art" of mathematical software that solves systems of nonlinear equations is evaluated. The evaluation is based on a comparison of eight readily available FORTRAN codes. Theoretical and software aspects of the codes, as well as their performance on a carefully designed set of test problems, are used in the evaluation.
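
The paper's eight codes are not named here, but one FORTRAN solver family of that era, MINPACK's HYBRD, survives as the engine behind scipy.optimize.fsolve. The small system below is illustrative, not one of the paper's test problems.

```python
# Solve a small nonlinear system with fsolve (a wrapper around the
# MINPACK HYBRD family of FORTRAN solvers).
import numpy as np
from scipy.optimize import fsolve

def f(v):
    x, y = v
    return [x**2 + y**2 - 4.0,    # circle of radius 2
            x * y - 1.0]          # hyperbola

root, info, status, msg = fsolve(f, x0=[2.0, 0.3], full_output=True)
print(root, status == 1)          # one solution, plus convergence flag
print(np.allclose(f(root), 0.0, atol=1e-8))
```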

Proceedings ArticleDOI
01 Jan 1982
TL;DR: The architecture of a logic simulation machine employing distributed and parallel processing is described, which can accommodate different levels of modeling ranging from simple gates to complex functions, and support timing analysis.
Abstract: Special-purpose CAD hardware is increasingly being considered as a means to meet the challenge posed to conventional (software-based) CAD tools by the growing complexity of VLSI circuits. In this paper we describe the architecture of a logic simulation machine employing distributed and parallel processing. Our architecture can accommodate different levels of modeling ranging from simple gates to complex functions, and support timing analysis. We estimate that simulation implemented by the proposed special-purpose hardware will be between 10 and 60 times faster than currently used software algorithms running on general-purpose computers. With the available technology, a throughput of 1,000,000 gate evaluations/second can be achieved.

Proceedings ArticleDOI
01 Jan 1982
TL;DR: The Generic Sonar Model is a computer program designed to provide sonar system developers with a comprehensive modeling capability for evaluating the performance of sonar systems and investigating the ocean environment in which they operate.
Abstract: The Generic Sonar Model is a computer program designed to provide sonar system developers with a comprehensive modeling capability for evaluating the performance of sonar systems and investigating the ocean environment in which they operate. The model provides features not presently available in any single computer program. These permit cost/accuracy trade-offs for specific applications and allow the results to be interfaced with generalized warfare models. The approach adopted is to use a modular design, to adhere to a strict programming standard, and to implement existing software when practical.


Journal ArticleDOI
TL;DR: An overview of two basic versions of PANACEA, which solve "closed" networks only, is provided, and its model language is described from the point of view of its capability to describe queueing networks in a compact, natural manner.
Abstract: PANACEA is a software package that significantly extends the range of Markovian queueing networks that are computationally tractable. It solves multi-class closed, open, and mixed queueing networks. Based on an underlying theory of integral representations and asymptotic expansions, PANACEA solves queueing networks that are orders of magnitude larger than can be solved by other established algorithms. The package is finding widespread use in Bell Laboratories. It also has important software innovations. A flexible programming-language-like interface facilitates compact representation of large queueing networks. An out-of-core implementation strategy enables PANACEA to be ported to processors with modest memory. The modular structure of this software package, along with the automatic machine-generated parser, makes it easily extendable. This paper provides an overview of two basic versions of PANACEA, versions 1.0 and 1.1, which solve "closed" networks only. A description of its model language is given from the point of view of its capability to describe queueing networks in a compact, natural manner. The paper discusses the algorithms, together with their time and storage requirements, that are used in the implementation. Several numerical examples are given.
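
PANACEA's integral-representation and asymptotic-expansion algorithms are not reproduced here. As a point of reference, the sketch below implements exact Mean Value Analysis for a single-class closed network, the kind of established exact algorithm whose size limits PANACEA was built to move past; the service demands are hypothetical.

```python
# Exact Mean Value Analysis (MVA) for a single-class closed network of
# single-server queueing centers (a classical exact algorithm, not
# PANACEA's method).  demands[k] is the mean service demand at center k.

def mva(demands, n_customers):
    n_centers = len(demands)
    queue = [0.0] * n_centers                    # mean queue lengths
    for n in range(1, n_customers + 1):
        # Residence time at each center with n customers in the network.
        resid = [demands[k] * (1.0 + queue[k]) for k in range(n_centers)]
        throughput = n / sum(resid)
        queue = [throughput * resid[k] for k in range(n_centers)]
    return throughput, queue

X, q = mva(demands=[0.2, 0.4, 0.1], n_customers=10)   # hypothetical demands
print(round(X, 3), [round(v, 3) for v in q])
```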

Book ChapterDOI
07 Jun 1982
TL;DR: An attempt to abstract from the great diversity of approaches to automated deduction a core collection of operations which are common to all of them, and outline the architecture for a layered family of software tools to support the development of theorem-proving systems.
Abstract: In this paper we present an attempt to abstract from the great diversity of approaches to automated deduction a core collection of operations which are common to all of them. Implementation of this kernel of functions provides a software platform upon which a variety of theorem-proving systems can be built. We outline the architecture for a layered family of software tools to support the development of theorem-proving systems and present in some detail the functions which comprise the two lowest layers. These are the layer implementing primitive abstract data types not supported by the host language and the layer providing primitives for the manipulation of logical formulas. This layer includes the implementation of efficient unification and substitution application algorithms, structure sharing within the formula database, and efficient access to formulas via arbitrary user-defined properties. The tools are provided in a highly portable form (implemented in Pascal) in order that a diverse community of users may build on them.
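
The formula-manipulation layer described above centers on unification and substitution. Below is a compact, unoptimized Python sketch of syntactic unification (the paper's Pascal tools add structure sharing and efficiency that this deliberately omits); the tuple term representation and "?"-prefixed variables are assumptions of this sketch.

```python
# Syntactic unification sketch: terms are tuples like ("f", x, y),
# variables are strings beginning with "?".  Occurs check omitted
# for brevity.

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, subst):
    """Follow variable bindings to a term's current representative."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(s, t, subst=None):
    """Return a most general unifier extending subst, or None on clash."""
    subst = {} if subst is None else dict(subst)
    stack = [(s, t)]
    while stack:
        a, b = (walk(x, subst) for x in stack.pop())
        if a == b:
            continue
        if is_var(a):
            subst[a] = b
        elif is_var(b):
            subst[b] = a
        elif isinstance(a, tuple) and isinstance(b, tuple) and \
                len(a) == len(b) and a[0] == b[0]:
            stack.extend(zip(a[1:], b[1:]))
        else:
            return None            # function-symbol clash
    return subst

# f(?x, g(?x)) unifies with f(a, g(a)) under {?x: "a"}.
print(unify(("f", "?x", ("g", "?x")), ("f", "a", ("g", "a"))))
```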


Journal ArticleDOI
TL;DR: A review of computer control of fermentation processes using modern control techniques is presented, including on-line estimation of bioreactor parameters for feedback control.