
Showing papers on "Software" published in 1974


Journal ArticleDOI
TL;DR: The complete instruction-by-instruction simulation of one computer system on a different system is a well-known computing technique often used for software development when a hardware base is being altered.
Abstract: The complete instruction-by-instruction simulation of one computer system on a different system is a well-known computing technique. It is often used for software development when a hardware base is being altered. For example, if a programmer is developing software for some new special purpose (e.g., aerospace) computer X which is under construction and as yet unavailable, he will likely begin by writing a simulator for that computer on some available general-purpose machine G. The simulator will provide a detailed simulation of the special-purpose environment X, including its processor, memory, and I/O devices. Except for possible timing dependencies, programs which run on the “simulated machine X” can later run on the “real machine X” (when it is finally built and checked out) with identical effect. The programs running on X can be arbitrary — including code to exercise simulated I/O devices, move data and instructions anywhere in simulated memory, or execute any instruction of the simulated machine. The simulator provides a layer of software filtering which protects the resources of the machine G from being misused by programs on X.
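
A minimal sketch of the instruction-by-instruction technique the abstract describes, for a hypothetical target machine X simulated on a host G; the opcode set, word size, and memory layout are illustrative assumptions, not details from the paper. The bounds-checked memory accesses stand in for the "layer of software filtering" that protects the host's resources.

```c
/* Minimal sketch of instruction-by-instruction simulation of a hypothetical
 * machine X on a general-purpose host G.  The opcode set and word size are
 * assumptions for illustration. */
#include <stdio.h>
#include <stdint.h>

#define MEM_WORDS 256
enum { OP_HALT, OP_LOAD, OP_ADD, OP_STORE, OP_JMP };   /* hypothetical ISA */

int main(void)
{
    uint16_t mem[MEM_WORDS] = {
        /* program: LOAD 10, ADD 11, STORE 12, HALT */
        (OP_LOAD << 8) | 10, (OP_ADD << 8) | 11,
        (OP_STORE << 8) | 12, (OP_HALT << 8)
    };
    mem[10] = 2; mem[11] = 3;                  /* simulated data            */
    uint16_t pc = 0, acc = 0;

    for (;;) {                                 /* one simulated instruction  */
        uint16_t inst = mem[pc % MEM_WORDS];   /* software filtering: every  */
        uint8_t  op   = inst >> 8;             /* access stays inside mem[]  */
        uint8_t  addr = inst & 0xFF;
        pc++;
        if (op == OP_HALT)       break;
        else if (op == OP_LOAD)  acc = mem[addr % MEM_WORDS];
        else if (op == OP_ADD)   acc += mem[addr % MEM_WORDS];
        else if (op == OP_STORE) mem[addr % MEM_WORDS] = acc;
        else if (op == OP_JMP)   pc = addr;
    }
    printf("mem[12] = %u\n", mem[12]);         /* prints 5 */
    return 0;
}
```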

963 citations


Journal ArticleDOI
TL;DR: A methodology based on a 25 X 7 structural forecast matrix that has been used by TRW with good results over the past few years is presented and software information elements that experience has shown to be useful in establishing such a data base are given.
Abstract: The work of software cost forecasting falls into two parts. First we make what we call structural forecasts, and then we calculate the absolute dollar-volume forecasts. Structural forecasts describe the technology and function of a software project, but not its size. We allocate resources (costs) over the project's life cycle from the structural forecasts. Judgment, technical knowledge, and econometric research should combine in making the structural forecasts. A methodology based on a 25 X 7 structural forecast matrix that has been used by TRW with good results over the past few years is presented in this paper. With the structural forecast in hand, we go on to calculate the absolute dollar-volume forecasts. The general logic followed in "absolute" cost estimating can be based on either a mental process or an explicit algorithm. A cost estimating algorithm is presented and five traditional methods of software cost forecasting are described: top-down estimating, similarities and differences estimating, ratio estimating, standards estimating, and bottom-up estimating. All forecasting methods suffer from the need for a valid cost data base for many estimating situations. Software information elements that experience has shown to be useful in establishing such a data base are given in the body of the paper. Major pricing pitfalls are identified. Two case studies are presented that illustrate the software cost forecasting methodology and historical results. Topics for further work and study are suggested.
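
A small illustration of the two-step logic the abstract outlines: a structural forecast yields an effort figure, and costs are then allocated across life-cycle phases. The phase names, percentages, effort, and labor rate below are invented for the example; they are not the TRW 25 X 7 matrix.

```c
/* Illustrative sketch: allocating an "absolute" dollar estimate over
 * life-cycle phases from a structural forecast.  All numbers are assumed. */
#include <stdio.h>

int main(void)
{
    const char  *phase[] = { "requirements", "design", "code", "test", "integration" };
    const double share[] = { 0.10, 0.20, 0.30, 0.25, 0.15 };   /* must sum to 1.0 */
    double man_months = 40.0;      /* effort from the structural forecast (assumed) */
    double rate = 4000.0;          /* dollars per man-month (assumed)               */
    double total = man_months * rate;

    for (int i = 0; i < 5; i++)
        printf("%-13s $%8.0f\n", phase[i], total * share[i]);
    printf("%-13s $%8.0f\n", "total", total);
    return 0;
}
```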

234 citations


01 Jun 1974
TL;DR: A conceptual model of an evaluated computer system, the P-model, is defined in this study using the principles of general systems theory; it provides a convenient uniform description for observing a computer system at any of these levels.
Abstract: This study concentrates on the measurement problem of a complex computer system. Several issues are attacked: system representation, evaluation and application of computer performance evaluation tools, power of a performance monitor, and design of a performance monitor. For an external observer, performance of a computer system is the quality and the quantity of service delivered by the system. However, a computer system is a hierarchy of several levels, the lowest level being the circuit level, the highest the Software Support level. Performance of the system as a whole is determined by the performance of individual levels. A conceptual model of an evaluated computer system, the P-model, is defined in this study using the principles of general systems theory; it provides a convenient uniform description for observing a computer system at any of these levels. The elements of the P-model are the level components; the outputs are performance measures relevant to the particular level and the purpose of evaluation.

80 citations


01 Oct 1974
TL;DR: The study of software error types, techniques for locating them, and recommendations for improvement of reliability are discussed and interim results from a study of errors encountered in three large software packages are presented.
Abstract: The study of software error types, techniques for locating them, and recommendations for improvement of reliability are discussed. Interim results from a study of errors encountered in three large software packages are presented. Data collection and analysis schemes are summarized for the subject data sets, and plans for data collection on a fourth software project are outlined. Finally, a survey of present software reliability models and a summary of TRW work in this area are given.

59 citations


Journal ArticleDOI
TL;DR: Initial applications of the voice response system are in computer aided voice wiring, automatic directory assistance, and experiments on speaker verification, but the system is sufficiently modular to adapt readily to other applications.
Abstract: In this paper we discuss the issues involved in implementing an automatic computer voice response system which is capable of serving up to ten independent output channels in real time. The system has been implemented on a Data General NOVA-800 minicomputer. Individual isolated words and phrases are coded at a rate of 24 000 bits/s using a hardware adaptive differential pulse-code modulation (ADPCM) coder, and stored on a fixed-head disk as a random-access vocabulary. By exploiting the features of ADPCM coding, it is possible to create and edit automatically a vocabulary for the system from an analog tape recording of the spoken entries, with minimal operator intervention. Providing ten simultaneous, mutually independent output lines of speech required an efficient scheduling algorithm. Such an algorithm was provided by the computer manufacturer in their real-time multitasking system, which was part of their Fortran software. Thus almost all the programming required to implement this real-time system was in Fortran, thereby providing flexibility and ease in making changes to the system. Initial applications of the voice response system are in computer-aided voice wiring, automatic directory assistance, and experiments on speaker verification, but the system is sufficiently modular to adapt readily to other applications.
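
A much-simplified software illustration of the adaptive differential coding idea behind ADPCM: only the difference between a sample and its prediction is coded, and the quantizer step adapts to the signal. The 4-bit code word, adaptation constants, and toy samples are assumptions for the sketch; the paper used a hardware coder at 24 000 bits/s.

```c
/* Simplified adaptive differential PCM sketch (not the Bell Labs hardware coder). */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    double x[8] = { 0, 50, 120, 200, 230, 180, 90, 10 };   /* toy samples */
    double pred = 0.0, step = 16.0;

    for (int i = 0; i < 8; i++) {
        double diff = x[i] - pred;
        int code = (int)lround(diff / step);        /* differential coding  */
        if (code >  7) code =  7;                   /* 4-bit code word      */
        if (code < -7) code = -7;
        pred += code * step;                        /* decoder's estimate   */
        step *= (abs(code) > 4) ? 1.5 : 0.9;        /* adaptive step size   */
        if (step < 4.0)   step = 4.0;
        if (step > 256.0) step = 256.0;
        printf("x=%6.1f  code=%+d  reconstructed=%6.1f\n", x[i], code, pred);
    }
    return 0;
}
```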

50 citations



Journal ArticleDOI
TL;DR: A hardware modification, which yields improved accuracy for adapter evaluation, is described and an alternative calibration procedure is outlined which exploits this improved accuracy potential, and which requires only one impedance standard.
Abstract: Although conceptually straightforward, the application of existing automated network analyzers to the problem of adapter evaluation is inhibited by the limited accuracy of the detection process, the requirement for several impedance standards at each frequency, and software problems. A hardware modification, which yields improved accuracy for adapter evaluation, is described. An alternative calibration procedure is outlined which exploits this improved accuracy potential, and which requires only one impedance standard.

39 citations


Journal ArticleDOI
TL;DR: The last 10 to 15 years have seen the evolution of hardware diagnosis and testing from an art to a science, but software testing has not experienced the same growth.
Abstract: The last 10 to 15 years have seen the evolution of hardware diagnosis and testing from an art to a science (see Chang).1 Well conceived and well documented hardware test strategies are now available, as well as reliability measures for hardware designs. Software testing, on the other hand, has not experienced the same growth. (This is not to say that software is not being tested; many complex software systems are up and running without significant problems.) However, unlike hardware diagnosis, software testing methodology is very primitive.

30 citations


Journal ArticleDOI
TL;DR: The principles upon which Janus is based are presented, and it is shown that it is suited to a wide range of source languages and target computers.
Abstract: Janus is a symbolic language used to embody the information which is normally passed from the analysis phase of a compiler to the code generators. It is designed for transporting software: A program coded in a high level language can be translated to Janus on one computer, and the resulting output translated to assembly code on another. (The STAGE2 macro processor could be used for the second translation.) In this paper we present the principles upon which Janus is based, and show that it is suited to a wide range of source languages and target computers.
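
A sketch of the two-stage idea behind an intermediate language such as Janus: the front end emits symbolic, machine-independent operations, and a separate (possibly macro-driven) back end expands each one into assembly for the chosen target. The IR mnemonics and the two target syntaxes below are invented for illustration; they are not Janus or STAGE2 themselves.

```c
/* Intermediate-language pipeline sketch: one IR, two hypothetical targets. */
#include <stdio.h>
#include <string.h>

struct ir { const char *op, *a; };

static void expand(const struct ir *i, int stack_machine)
{
    if (stack_machine) {                              /* hypothetical stack machine */
        if (!strcmp(i->op, "LOAD"))  printf("  PUSH %s\n", i->a);
        if (!strcmp(i->op, "ADD"))   printf("  PUSH %s\n  ADD\n", i->a);
        if (!strcmp(i->op, "STORE")) printf("  POP  %s\n", i->a);
    } else {                                          /* hypothetical accumulator machine */
        if (!strcmp(i->op, "LOAD"))  printf("  LDA %s\n", i->a);
        if (!strcmp(i->op, "ADD"))   printf("  ADA %s\n", i->a);
        if (!strcmp(i->op, "STORE")) printf("  STA %s\n", i->a);
    }
}

int main(void)
{
    /* machine-independent form of: c = a + b */
    struct ir prog[] = { {"LOAD", "a"}, {"ADD", "b"}, {"STORE", "c"} };
    for (int t = 0; t < 2; t++) {
        printf("; target: %s\n", t ? "accumulator" : "stack");
        for (unsigned k = 0; k < sizeof prog / sizeof prog[0]; k++)
            expand(&prog[k], t == 0);
    }
    return 0;
}
```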

26 citations


Journal ArticleDOI
E. Douglas Jensen1
01 Dec 1974
TL;DR: This paper outlines the hardware aspects of an experimental distributed function computer intended to examine issues in the context of real-time control systems.
Abstract: Due to advances in hardware technology, processors are no longer the limiting factor in system costs--the software and configuration portions are now dominant. Hardware can be effectively applied to these problems as well: low processor costs have revitalized multiprocessors and multicomputers; the executive functions entailed by such architectures can be facilitated with powerful yet flexible hardware mechanisms. This paper outlines the hardware aspects of an experimental distributed function computer intended to examine these issues in the context of real-time control systems.

21 citations


Journal ArticleDOI
TL;DR: CDL, or Computer Design (description) Language, was created to bridge the gap between hardware and software designers and describes the computer elements and hardware algorithms at a level just above that of the electronics; this is the level commonly called the register transfer level.
Abstract: CDL, or Computer Design (description) Language, was first reported by the author in 1965. Since then, there have been some changes and many versions of simulators for different computer systems. The language was created to bridge the gap between hardware and software designers. As such, it describes the computer elements and hardware algorithms at a level just above that of the electronics; this is the level commonly called the register transfer level.

Journal ArticleDOI
01 Jun 1974
TL;DR: In this paper, an extension of the finite strip method to the analysis of prestressing forces is described, together with the use of FISBOB for the study of stress distribution in multi-cellular bridge decks.
Abstract: The general finite strip technique and recent developments and applications in the analysis of single span and multi-span box bridges are summarized. A flexibility procedure is incorporated for the solution of indeterminate bridges. Extension of the strip method to the analysis of prestressing forces is described, together with the theoretical and experimental verification of a lower order finite strip computer program (FISBOB). Application of the method to the study of stress distribution in multi-cellular bridge decks is also reported. The practical use of FISBOB is illustrated by the analysis of a post-tensioned spine box bridge.

Proceedings ArticleDOI
06 May 1974
TL;DR: An overview is presented of RADCAP, the operational associative array processor (AP) facility installed at Rome Air Development Center (RADC) and the objectives of the RADCAP facility and plans for its use.
Abstract: An overview is presented of RADCAP, the operational associative array processor (AP) facility installed at Rome Air Development Center (RADC). Basically, this facility consists of a Goodyear Aerospace STARAN associative array (parallel) processor and various peripheral devices, all interfaced with a Honeywell Information Systems (HIS) 645 sequential computer, which runs under the Multics timeshared operating system. The RADCAP hardware and software are described only briefly here since they are detailed in companion papers presented at this conference. The latter part of this paper dwells on the objectives of the RADCAP facility and plans for its use. The STARAN associative parallel processor is a processor based on an associative or content-addressable memory and a related ensemble of bit-serial processing elements. STARAN is considered to be the first practical associative processor ever produced. This claim of practicality is based on the fact that the design concept for the associative memory of STARAN allows the use of the same high-volume, standard, large-scale integrated (LSI) circuit memory devices that are in widespread use by the computer industry. In fact, every electronic component used in the STARAN associative parallel processor is available from your local components distributor. The significance of this fact is that now, for the first time, associative processors enjoy the same cost per bit of storage as does the conventional computer.
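
A conceptual sketch of the associative (content-addressable) search that such a processor performs across all array words at once; here the "parallel" compare is just a loop on a conventional machine, and the word width, mask, and data are illustrative assumptions.

```c
/* Content-addressable search sketch: every word is compared against a
 * masked key and the responders are flagged. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t word[8] = { 0x1204, 0x1310, 0x1205, 0x2204,
                         0x1207, 0x1204, 0x3311, 0x1206 };
    uint32_t key  = 0x1204;       /* value searched for           */
    uint32_t mask = 0xFF0F;       /* compare only the masked bits */
    int responder[8];

    for (int i = 0; i < 8; i++)                        /* every word compared */
        responder[i] = ((word[i] ^ key) & mask) == 0;  /* "in parallel"       */

    for (int i = 0; i < 8; i++)
        if (responder[i])
            printf("word %d responds: 0x%04X\n", i, (unsigned)word[i]);
    return 0;
}
```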

01 Jan 1974
TL;DR: This thesis includes software and hardware proposals to increase the efficiency of representing an abstract machine and providing capability based protection and a description of a crash recovery consistency problem for files which reside in several levels of storage, together with a solution that was used.
Abstract: This thesis describes a time sharing system constructed by a project at the University of California, Berkeley Campus, Computer Center. The project was of modest size, consuming about 30 man years. The resulting system was used by a number of programmers. The system was designed for a commercially available computer, the Control Data 6400 with extended core store. The system design was based on several fundamental ideas, including: specification of the entire system as an abstract machine, a capability based protection system, mapped address space, and layered implementation. The abstract machine defined by the first implementation layer provided 8 types of abstractly defined objects and about 100 actions to manipulate them. Subsequent layers provided a few very complicated additional types. Many of the fundamental ideas served us well, particularly the concept that the system defines an abstract machine, and capability based protection. However, the attempt to provide a mapped address space using unsuitable hardware was a disaster. This thesis includes software and hardware proposals to increase the efficiency of representing an abstract machine and providing capability based protection. Also included is a description of a crash recovery consistency problem for files which reside in several levels of storage, together with a solution that we used.
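
A minimal sketch of capability-based protection in the spirit the abstract describes: a program can act on an object only through a capability that names the object and carries the permitted rights. The object types, rights bits, and action names are hypothetical, not the thesis's actual design.

```c
/* Capability-check sketch: access is granted only if the capability carries
 * the rights the requested action needs. */
#include <stdio.h>

enum { RIGHT_READ = 1, RIGHT_WRITE = 2, RIGHT_DELETE = 4 };

struct capability {
    int      object_id;   /* which abstract object the holder may touch */
    unsigned rights;      /* what the holder may do to it               */
};

static int invoke(struct capability c, unsigned needed, const char *action)
{
    if ((c.rights & needed) != needed) {
        printf("denied:  %s on object %d\n", action, c.object_id);
        return -1;
    }
    printf("allowed: %s on object %d\n", action, c.object_id);
    return 0;
}

int main(void)
{
    struct capability file_cap = { 42, RIGHT_READ };   /* read-only capability */
    invoke(file_cap, RIGHT_READ,  "read");             /* allowed              */
    invoke(file_cap, RIGHT_WRITE, "write");            /* denied               */
    return 0;
}
```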

Journal ArticleDOI
TL;DR: It may be some years before computer-controlled systems are practical, but even now the water industry recognizes the importance of centralized telemetering and control systems.
Abstract: To date only a few computers have been employed for control purposes in the water industry, and on the whole results have been disappointing. Primarily, the problems have been related to overselling and improper application. Several major manufacturing companies, as well as some smaller ones, have suffered severe financial losses as a result of overzealous application of these process controllers. However, it should only be a matter of time before certain major utilities can justify computers for this purpose. In any event, the computer should be only a subsystem, thus allowing the operator to perform all control and to view all critical telemetering data without benefit of the computer. Also, except for simple logging and point-type control, the utility should consider developing its own software. Each water-distribution system has its unique control problems, and it is very unlikely that a proper software package can be developed without benefit of considerable operating history. Most significant is the fact that equipment being employed now will not become obsolete. The machinery, even more than the operators, must rely on specific and accurate data coming from the primary devices, and the method of interfacing to the computer should be mostly at the receiving end. How much this new tool will ultimately affect the industry cannot yet be fully known or appreciated. All in all, it may be some years before computer-controlled systems are practical, but even now the water industry recognizes the importance of centralized telemetering and control systems. Manufacturers must gear systems and components to the smaller utility. There are 18 000 community water utilities in the US alone; less than 3 per cent serve populations over 25 000, whereas 86 per cent serve fewer than 5 000 people. By and large, it is the small utility that can realize the most benefit from automation and, by using the building-block type of equipment, can finance its telemetering system without any large capital outlay.

Journal ArticleDOI
TL;DR: Four stages in the execution of software projects, namely the planning, coding, testing, and final usage, contain pitfalls that have been responsible for a large proportion of software failures and it is hoped this paper will help avoid some of them.
Abstract: Time and time again software projects, though undertaken by people of considerable intellectual ability, fail. This will doubtless be a continuing phenomenon, because software production is indeed a difficult task requiring wide-ranging skills. Certainly, no single paper can suddenly solve all the problems of software writing and eliminate the failures. The scope of this paper is limited to four stages in the execution of software projects, namely the planning, coding, testing, and final usage. These four crucial areas contain pitfalls that have been responsible for a large proportion of software failures, and it is hoped this paper will help avoid some of them.

Book ChapterDOI
09 Apr 1974
TL;DR: A technique (finite state testing) which effectively organises data objects into equivalence classes and exercises a module using a representative of each class is described.
Abstract: The research reported in this paper is concerned with the testing of software which is being developed in a structured way. The advantages which accrue from a well structured or modular organisation of software depend upon an ability to independently test a module well before the full development of all the modules with which it communicates. This paper describes a technique (finite state testing) which effectively organises data objects into equivalence classes and exercises a module using a representative of each class. As a technique it has an affinity with both the type checking performed by a conventional compiler and the assertion checking performed by a so-called verifying compiler. It is, however, a practical technique which has been used in experimental systems and is being incorporated in a prototype program development system.
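
A sketch of the equivalence-class idea at the heart of the technique: the input domain of a module is partitioned into classes expected to behave alike, and one representative of each class is exercised. The module under test and the classes chosen are invented for illustration, not taken from the paper.

```c
/* Equivalence-class testing sketch: one representative per input class. */
#include <stdio.h>

/* hypothetical module under test: classify an exam score */
static int grade(int score)
{
    if (score < 0 || score > 100) return -1;   /* invalid */
    if (score >= 50)              return  1;   /* pass    */
    return 0;                                  /* fail    */
}

int main(void)
{
    struct { int representative, expected; const char *class_name; } cases[] = {
        {  -3, -1, "below range (invalid)" },
        {  25,  0, "0..49 (fail)"          },
        {  70,  1, "50..100 (pass)"        },
        { 130, -1, "above range (invalid)" },
    };
    int failures = 0;
    for (unsigned i = 0; i < sizeof cases / sizeof cases[0]; i++) {
        int got = grade(cases[i].representative);
        if (got != cases[i].expected) {
            printf("FAIL class: %s\n", cases[i].class_name);
            failures++;
        }
    }
    printf("%d class(es) failed\n", failures);
    return 0;
}
```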

Journal ArticleDOI
01 Dec 1974
TL;DR: The implementation of a vector arithmetic instruction is presented to provide a more thorough insight into the operational aspects of the FCPU and the benefits derived from the structuring of this hardware product are presented.
Abstract: The proper structuring of the implementation levels in a digital computer system is an important attribute for all aspects of the product. That is to say, the design, development, testing, production and maintenance of the product are facilitated by a well-structured design of the software and hardware components. This paper discusses digital computer system structuring in general, followed by descriptions of the logical and physical architecture of the DATASAAB FCPU (Flexible Central Processing Unit). The implementation of a vector arithmetic instruction is presented to provide a more thorough insight into the operational aspects of the FCPU. Finally, the benefits derived from the structuring of this hardware product are presented.
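
A small sketch of what a vector arithmetic instruction does semantically: one instruction applies the same operation across whole operand vectors. The operand layout and function signature are assumptions for illustration; this is not the DATASAAB FCPU's actual microprogrammed implementation.

```c
/* "VADD dst, src1, src2, n" treated as one conceptual instruction. */
#include <stdio.h>

static void vadd(double *dst, const double *src1, const double *src2, int n)
{
    for (int i = 0; i < n; i++)      /* the hardware would sequence this */
        dst[i] = src1[i] + src2[i];  /* internally, element by element   */
}

int main(void)
{
    double a[4] = { 1, 2, 3, 4 }, b[4] = { 10, 20, 30, 40 }, c[4];
    vadd(c, a, b, 4);
    for (int i = 0; i < 4; i++)
        printf("%.0f ", c[i]);       /* 11 22 33 44 */
    printf("\n");
    return 0;
}
```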

01 Jan 1974
TL;DR: With the new look-up approach an entire ERTS MSS frame can be classified into 24 classes in 1.3 hours, compared to 22.5 hours required by the conventional method.
Abstract: Software employing Eppler's improved table look-up approach to pattern recognition has been developed, and results from this software are presented. The look-up table for each class is a computer representation of a hyperellipsoid in four dimensional space. During implementation of the software Eppler's look-up procedure was modified to include multiple ranges in order to accommodate hollow regions in the ellipsoids. In a typical ERTS classification run less than 6000 36-bit computer words were required to store tables for 24 classes. Classification results from the improved table look-up are identical with those produced by the conventional method, i.e., by calculation of the maximum likelihood decision rule at the moment of classification. With the new look-up approach an entire ERTS MSS frame can be classified into 24 classes in 1.3 hours, compared to 22.5 hours required by the conventional method. The new software is coded completely in FORTRAN to facilitate transfer to other digital computers.
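
A sketch of the table look-up idea the abstract compares against direct computation: the decision rule is evaluated once per quantized feature cell and stored, so classifying each pixel becomes an index operation instead of a per-pixel likelihood calculation. For brevity this uses two features, a nearest-mean rule, and a tiny grid; the paper used four features, hyperellipsoidal class regions, and ERTS MSS data.

```c
/* Table look-up classification sketch: precompute the rule over a grid,
 * then classify by indexing. */
#include <stdio.h>

#define GRID   16      /* quantization levels per feature (assumed) */
#define NCLASS 3

static const double mean[NCLASS][2] = { {3, 3}, {10, 4}, {7, 12} };
static unsigned char table[GRID][GRID];

static int classify_direct(double f0, double f1)   /* the "slow" rule */
{
    int best = 0; double bestd = 1e30;
    for (int c = 0; c < NCLASS; c++) {
        double d = (f0 - mean[c][0]) * (f0 - mean[c][0])
                 + (f1 - mean[c][1]) * (f1 - mean[c][1]);
        if (d < bestd) { bestd = d; best = c; }
    }
    return best;
}

int main(void)
{
    for (int i = 0; i < GRID; i++)                 /* build the table once */
        for (int j = 0; j < GRID; j++)
            table[i][j] = (unsigned char)classify_direct(i, j);

    double pixel[2] = { 9.4, 3.7 };                /* classify by indexing */
    int i = (int)pixel[0], j = (int)pixel[1];
    printf("lookup class=%d  direct class=%d\n",
           table[i][j], classify_direct(pixel[0], pixel[1]));
    return 0;
}
```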

Proceedings ArticleDOI
06 May 1974
TL;DR: A study to determine the expected performance of a "classical" multiprocessor consisting of K (K>1) central processor units which have access to N storage units using a discrete Fortran model of the hardware and software.
Abstract: This paper reports the results of a study to determine the expected performance of a "classical" multiprocessor. The "classical" multiprocessor as used here is a multiprocessor consisting of K (K>1) central processor units which have access to N storage units. The tool used for studying the performance is a discrete Fortran model of the hardware and software. Hardware measurements of the real system performance were available for validation of the model. The primary indicator of performance is the system thruput in millions of instructions executed per second (MIPS). Since the application of the hardware is in a real-time system, consideration is also given to response times of critical software tasks.

Journal ArticleDOI
TL;DR: Machine language subroutines can be integrated with the SKED system and can provide complex decision functions, data recording schemes, and software for new peripheral devices.
Abstract: Machine language subroutines can be integrated with the SKED system. These subroutines can shorten lengthy programs that could otherwise be handled by SKED, and can provide complex decision functions, data recording schemes, and software for new peripheral devices. Rules and examples for each function will be presented.

Journal ArticleDOI
01 Dec 1974
TL;DR: The chief attributes of the Virtual Processor placed at the disposal of each user of the SYMBOL-2R time-shared multiprocessor system are described, and the mechanisms by which SYMBOL's hardwired operating system manages processing-mode transitions for individual Virtual Processors and allocates hardware resources among competing Virtual Processors.
Abstract: This paper describes the chief attributes of the Virtual Processor placed at the disposal of each user of the SYMBOL-2R time-shared multiprocessor system, and the mechanisms by which SYMBOL's hardwired operating system manages processing-mode transitions for individual Virtual Processors and allocates hardware resources -- processors and memory space -- among competing Virtual Processors. It describes the provisions by which the unusually high-level capabilities of the hardware are augmented by software, and contrasts the structure of the software component of the operating system with that of the hardware component. Finally, it describes the hardware/software partition of resource-allocation functions, in which allocation policies are controlled by software and executed by hardware.

Journal Article
TL;DR: Preliminary results of derived parallax data from correlation of digitized grey shade data are presented along with information on parallel computers and their impact on this process.
Abstract: Digital Mapping and Digital Image Processing are discussed along with related software and hardware. Our Digital Image Manipulation and Enhancement System (DIMES) software is discussed indicating its potential as a flexible R&D tool for conducting studies in digital image processing as applied to mapping and charting problems. Preliminary results of derived parallax data from correlation of digitized grey shade data are presented along with information on parallel computers and their impact on this process.

Journal ArticleDOI
TL;DR: The development of computer software tailored to social science research is less than two decades old, yet in a very short time numerous programs, packages, and computer-based data sets have been developed.
Abstract: The development of computer software tailored to social science research is less than two decades old, yet in a very short time numerous programs, packages, and computer-based data sets have been developed. Since there has been no coordinated international or national effort to pool resources or avoid duplication of effort, most activity in the social science computing area tends to be highly localized and relatively difficult to export. Of course there are exceptions to this rule, but the field is best characterized in terms of various types of incompatibilities. The dilemma occurs in part because of the diversity within social science research itself. Appendix A contains a table

Journal ArticleDOI
John W. Boyse1
TL;DR: The execution characteristics of two types of commonly used programs in a large-scale, time-shared computer system are shown, along with paging characteristics of tasks as a function of the number of pages those tasks have in core.
Abstract: h show the execution characteristics of two types of commonly used programs in a large-scale, time-shared computer system A software monitoring facility built into the supervisor was used for data collection during normal system operation These data were analyzed, and results of this analysis are presented for a Fortran compiler and an interactive line file editorProbability distribution functions and other data are given for such things as CPU intervals, I/O intervals, and the number of such intervals during execution Empirical distributions are compared with simple theoretical distributions (exponential, hyperexponential, and geometric) Other data show paging characteristics of tasks as a function of the number of pages those tasks have in core

Journal ArticleDOI
TL;DR: A sampling software monitor designed to track CPU and channel I/O activity in OS/360 is described and a procedure for fitting the model to the system based on measurements obtained by the monitor is developed and comparative results of the adjusted model and the actual system are presented.
Abstract: The use of a software monitor in validating a cyclic server multiprogramming model of OS/360 is presented. Following a brief development of the analytic model, a sampling software monitor designed to track CPU and channel I/O activity in OS/360 is described. A procedure for fitting the model to the system based on measurements obtained by the monitor is developed and comparative results of the adjusted model and the actual system are presented for validation of the model principles.
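
A sketch of a sampling monitor in the spirit described: at each sampling instant the monitor records whether the CPU and the I/O channel are busy, and utilizations are estimated from the sample counts. The "system" being observed here is a stand-in function, not OS/360.

```c
/* Sampling software-monitor sketch: estimate CPU and channel utilization. */
#include <stdio.h>

/* stand-in for reading the machine state at sample time t */
static void read_state(int t, int *cpu_busy, int *channel_busy)
{
    *cpu_busy     = (t % 10) < 6;   /* pretend the CPU is busy 60% of the time */
    *channel_busy = (t % 10) < 3;   /* and the channel 30%, overlapping the CPU */
}

int main(void)
{
    int samples = 1000, cpu = 0, chan = 0, overlap = 0;

    for (int t = 0; t < samples; t++) {             /* one probe per sample tick */
        int c, io;
        read_state(t, &c, &io);
        cpu     += c;
        chan    += io;
        overlap += (c && io);
    }
    printf("CPU utilization      %.2f\n", (double)cpu / samples);
    printf("Channel utilization  %.2f\n", (double)chan / samples);
    printf("CPU-channel overlap  %.2f\n", (double)overlap / samples);
    return 0;
}
```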

Journal ArticleDOI
TL;DR: With the advent of multiprocessor systems, computer networks, and distributed-function architecture, additional problems are now encountered, for example, the design of computer systems cannot be restricted to a von Neumann type of architecture and a means of describing the possible concurrency between units in the system must be found.
Abstract: In the 1960's enormous progress was made in the automated design of computer systems,1 the modeling of parallel computations,2,3 and the simulation and evaluation of performance of computer systems.4 With the advent of multiprocessor systems, computer networks, and distributed-function architecture, additional problems are now encountered. For example, the design of computer systems cannot be restricted to a von Neumann type of architecture. A means of describing the possible concurrency between units in the system must be found, either through some new language developments or through some innovative model methodology. Modeling of parallel computations cannot be restricted to the design of asynchronous structures on one hand and the behavior of algorithms on the other hand, but should encompass simultaneously all of the modules of a system, hardware as well as software. In particular, orderly interprocess communications and sharing of resources have to be taken into account. Techniques to represent extant or projected systems at different levels of detail, to simulate them under various configurations, and to analyze them for correctness have to be further developed. Stochastic predictions of performance have to graduate from single to multiple resource systems.

Proceedings ArticleDOI
30 Sep 1974
TL;DR: An integral system called MIKADO, which covers the development range from the symbolic microinstruction level to manufacturing data and loadable microprograms, has been developed at Siemens.
Abstract: This paper discusses some software aids used in microprogram development at Siemens. The following aspects are typical of the microprogram design for our present day computers: a) the instruction format of microprograms is complicated and a good knowledge of the internal hardware structure is required to understand it; b) in our, and also other, microprogramming systems the possibilities of locating the microinstructions in control store are restricted (by limited branching distances, e.g.), thus making the address assignment a difficult optimizing task; c) as soon as the hardware is finished its microprograms have to be available in a sufficiently tested shape; d) above that, microprograms which are stored in read-only memory have to be tested very thoroughly before manufacturing; e) manufacturing data has to be generated for microprograms in read-only memory. These peculiar problems require additional support by computer-aided design techniques. As an aid for our microprogram design we have developed an integral system which covers the development range from the symbolic microinstruction level to manufacturing data and loadable microprograms. The system is called MIKADO.

Journal ArticleDOI
01 Apr 1974
TL;DR: Computer graphics applications, software, and hardware research are examined and evaluated in Continental Western Europe; the need for computer graphics still exists, and promising trends appear to be on the horizon.
Abstract: Computer graphics applications, software, and hardware research are examined and evaluated in Continental Western Europe. Growth has been slow. The reasons are the high cost of hardware and the complexity of the required software. The past reveals a rather dim picture. Mechanical engineering applications have shown some promising success. Electrical engineering applications have been effective mainly for PCB and IC mask layout only. Interactive graphic software has not yet effectively found its way outside Fortran. APL-G seems an interesting graphic extension of the language APL. Promising trends appear to be on the horizon. They are related to hardware and software tradeoffs, as, for example, in the TALENT System. The need for computer graphics still exists. The hardware and software tradeoffs will receive more attention. Success in computer graphics will depend upon advances in computer systems design, especially programming languages and systems. Computer graphics has yet to come of age.

Journal ArticleDOI
TL;DR: The Automatic Intercept System Operational Programs provide the logic for processing calls served by the system, and these programs also perform administrative and software correction and recovery functions.
Abstract: The Automatic Intercept System Operational Programs provide the logic for processing calls served by the system. These programs also perform administrative and software correction and recovery functions. Described are program organization, use of temporary memory, and details of call processing.