
Showing papers on "Software published in 1983"


Journal ArticleDOI
A.J. Albrecht1, J.E. Gaffney
TL;DR: In this paper, the equivalence between Albrecht's external input/output data flow representation of a program (the "function points" metric) and Halstead's [2] "software science" or "software linguistics" model of a program, as well as the "soft content" variation of Halstead's model suggested by Gaffney [7], was demonstrated.
Abstract: One of the most important problems faced by software developers and users is the prediction of the size of a programming system and its development effort. As an alternative to "size," one might deal with a measure of the "function" that the software is to perform. Albrecht [1] has developed a methodology to estimate the amount of the "function" the software is to perform, in terms of the data it is to use (absorb) and to generate (produce). The "function" is quantified as "function points," essentially, a weighted sum of the numbers of "inputs," "outputs," "master files," and "inquiries" provided to, or generated by, the software. This paper demonstrates the equivalence between Albrecht's external input/output data flow representation of a program (the "function points" metric) and Halstead's [2] "software science" or "software linguistics" model of a program, as well as the "soft content" variation of Halstead's model suggested by Gaffney [7].
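The weighted sum described above can be sketched directly. The weights below are the commonly cited Albrecht averages and are illustrative only — not necessarily the exact values used in the paper:

```python
# Sketch of the "function points" weighted sum: a weighted count of
# inputs, outputs, master files, and inquiries. Weights are the
# commonly cited Albrecht averages (assumed here for illustration).
WEIGHTS = {"inputs": 4, "outputs": 5, "master_files": 10, "inquiries": 4}

def function_points(counts):
    """counts: dict mapping each category to its raw count."""
    return sum(WEIGHTS[k] * counts.get(k, 0) for k in WEIGHTS)

fp = function_points({"inputs": 20, "outputs": 15, "master_files": 6, "inquiries": 10})
print(fp)  # 20*4 + 15*5 + 6*10 + 10*4 = 255
```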

1,560 citations


Patent
21 Nov 1983
TL;DR: A software version management system, also called system modeller, provides for automatically collecting and recompiling updated versions of component software objects comprising a software program for operation on a plurality of personal computers coupled together in a distributed software environment via a local area network as mentioned in this paper.
Abstract: A software version management system, also called system modeller, provides for automatically collecting and recompiling updated versions of component software objects comprising a software program for operation on a plurality of personal computers coupled together in a distributed software environment via a local area network. The component software objects include the source and binary files for the software program, which are stored in various different local and remote storage means throughout the environment. The component software objects are periodically updated, via a system editor, by various users at their personal computers and then stored in designated storage means. The management system includes models which are also objects. Each of the models is representative of the source versions of a particular component software object and contains object pointers including a unique name of the object, a unique identifier descriptive of the chronological updating of its current version, information as to an object's dependencies on other objects, and a pathname representative of the residence storage means of the object. Means are provided in the system editor to notify the management system when any one of the objects is being edited by a user, and the management system is responsive to such notification to track the edited objects and alter their respective models to the current version thereof.

857 citations


Journal ArticleDOI
TL;DR: This paper investigates a stochastic model for a software error detection process in which the growth curve of the number of detected software errors for the observed data is S-shaped.
Abstract: This paper investigates a stochastic model for a software error detection process in which the growth curve of the number of detected software errors for the observed data is S-shaped. The software error detection model is a nonhomogeneous Poisson process where the mean-value function has an S-shaped growth curve. The model is applied to actual software error data. Statistical inference on the unknown parameters is discussed. The model fits the observed data better than other models.
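One widely used S-shaped mean-value function for an NHPP error-detection model is the delayed S-shaped form m(t) = a(1 - (1 + bt)e^(-bt)). The sketch below uses that form for illustration; it is not necessarily the exact parameterization used in the paper:

```python
import math

def mean_errors(t, a, b):
    """Delayed S-shaped NHPP mean-value function:
    m(t) = a * (1 - (1 + b*t) * exp(-b*t)).
    a: expected total number of errors; b: error-detection rate."""
    return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))

def intensity(t, a, b):
    """Failure intensity lambda(t) = dm/dt = a * b**2 * t * exp(-b*t)."""
    return a * b * b * t * math.exp(-b * t)

# m(t) starts at 0, grows slowly, accelerates, then saturates near a:
for t in (0, 10, 30, 100):
    print(t, round(mean_errors(t, 100, 0.1), 2))
```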

780 citations


Patent
11 Jul 1983
TL;DR: In this paper, the authors propose a software authorization system where a software can be authorized for use a given number of times by a base unit after which the base unit (computer, videogame base unit, record player, videorecorder or videodisk player) cannot use that software until the manufacturer sends an authorization for additional uses to the user's base unit.
Abstract: Software (programs, videogames, music, movies, etc.) can be authorized for use a given number of times by a base unit after which the base unit (computer, videogame base unit, record player, videorecorder or videodisk player) cannot use that software until the manufacturer sends an authorization for additional uses to the user's base unit. Authorizations may be sent via telephone line, mail, or whatever form of communication is most suited to the application. Authorizations cannot be reused, for example by recording the telephone authorization signal and replaying it to the base unit. Similarly, authorizations can be made base unit specific, so that an authorization for one base unit cannot be transferred to another base unit. This invention also solves the "software piracy problem" and allows telephone sales of software as additional benefits.
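One way to picture the non-reusable, unit-specific authorization idea is a token bound to a base-unit identifier and a monotonically increasing serial number. The scheme, function names, and hashing below are assumptions for illustration — not the patent's actual encoding:

```python
import hashlib

def make_authorization(secret, base_unit_id, serial):
    """Manufacturer side: a one-time, unit-specific authorization token.
    Binding the token to the unit id prevents transfer to another unit;
    the increasing serial number prevents replaying a recorded token."""
    data = f"{base_unit_id}:{serial}".encode()
    return hashlib.sha256(secret + data).hexdigest()

class BaseUnit:
    def __init__(self, secret, unit_id):
        self.secret, self.unit_id = secret, unit_id
        self.last_serial = 0   # highest serial accepted so far
        self.uses_left = 0     # remaining authorized uses

    def accept(self, token, serial, uses):
        if serial <= self.last_serial:
            return False       # replayed (recorded) authorization
        expected = make_authorization(self.secret, self.unit_id, serial)
        if token != expected:
            return False       # forged, or meant for a different unit
        self.last_serial = serial
        self.uses_left += uses
        return True
```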

615 citations


Journal ArticleDOI
TL;DR: A theoretical approach is developed to deal with man-machine interactive systems requiring advanced decision making in unpredictable environments and consists of a three-layer control of "increasing intelligence and decreasing precision".
Abstract: A theoretical approach is developed to deal with man-machine interactive systems requiring advanced decision making in unpredictable environments. The hierarchical method consists of a three-layer control of "increasing intelligence and decreasing precision." The lowest level consists of several controllers designed for effective control with existing hardware using an approximation theory of optimal control. The next level is that of a coordinator which utilizes new computer architectures to effectively control the overall hardware system. The highest level is the organizer which supervises the performance of the overall system. The two highest levels are computer implemented, and the research involved is in developing the appropriate architecture and software to accommodate them. The lowest level, aimed at end-point control tasks, is dominated by typical hardware control methods. The coexistence of the two approaches makes the method novel. Application of intelligent control techniques to robotics and manipulative systems is considered.

294 citations


Book
01 Jan 1983
TL;DR: Numerical Methods, Software, and Analysis, Second Edition introduces the methods, tools, and ideas of numerical computation.
Abstract: Numerical Methods, Software, and Analysis, Second Edition introduces the methods, tools, and ideas of numerical computation.

256 citations


Proceedings ArticleDOI
J. F. Kelley1
12 Dec 1983
TL;DR: This research demonstrates that the methodological tools of the engineering psychologist can help build user-friendly software that accommodates the unruly language of computer-naive, first-time users by eliciting the cooperation of such users as partners in an iterative, empirical development process.
Abstract: A six-step, iterative, empirical, human factors design methodology was used to develop CAL, a natural language computer application to help computer-naive business professionals manage their personal calendars. Language is processed by a simple, non-parsing algorithm having limited storage requirements and a quick response time. CAL allows unconstrained English inputs from users with no training (except for a 5 minute introduction to the keyboard and display) and no manual (except for a two-page overview of the system). In a controlled test of performance, CAL correctly responded to between 86% and 97% of the inputs it received, according to various criteria. This research demonstrates that the methodological tools of the engineering psychologist can help build user-friendly software that accommodates the unruly language of computer-naive, first-time users by eliciting the cooperation of such users as partners in an iterative, empirical development process. The principal purpose of the research reported here was to design and test a systematic, empirical methodology for developing natural language computer applications. This paper describes that methodology and its successful use in the development of a natural language computer application: CAL, Calendar Access Language. The limited context or domain in which the application operates is the management of a personal calendar, or appointment book, data base by computer-naive business professionals.

246 citations


Journal ArticleDOI
TL;DR: This paper defines software safety and describes a technique called software fault tree analysis which can be used to analyze a design as to its safety and has been applied to a program which controls the flight and telemetry for a University of California spacecraft.
Abstract: With the increased use of software controls in critical realtime applications, a new dimension has been introduced into software reliability–the "cost" of errors. The problems of safety have become critical as these applications have increasingly included areas where the consequences of failure are serious and may involve grave dangers to human life and property. This paper defines software safety and describes a technique called software fault tree analysis which can be used to analyze a design as to its safety. The technique has been applied to a program which controls the flight and telemetry for a University of California spacecraft. A critical failure scenario was detected by the technique which had not been revealed during substantial testing of the program. Parts of this analysis are presented as an example of the use of the technique and the results are discussed.

243 citations


Patent
Carl P. Graf1, Kim Fairchild1, Karl M. Fant1, George W Rusler1, Michael O. Schroeder1 
29 Jul 1983
TL;DR: In this paper, a computer controlled imaging system involving a digital image processing and display system which has the ability to compose and construct a display scene from a library of images with sufficient processing speed to permit real-time or near real time analysis of the images by a human operator or a hardware/software equivalent thereof is described.
Abstract: The disclosure relates to a computer controlled imaging system involving a digital image processing and display system which has the ability to compose and construct a display scene from a library of images with sufficient processing speed to permit real-time or near real time analysis of the images by a human operator or a hardware/software equivalent thereof.

178 citations


Patent
18 Aug 1983
TL;DR: In this article, a bidirectional communications link, termed a customization window, comprises a series of routines including functions, procedures and status flags implemented in software, interfaced between a machine control logic (MCL) operating to control the auxiliary functions of a computerized numerical control (CNC) system and a numerical control operating to control the multi-axis motions of the system.
Abstract: A bidirectional communications link, termed a customization window, comprises a series of routines including functions, procedures and status flags implemented in software, interfaced between a machine control logic (MCL) operating to control the auxiliary functions of a computerized numerical control (CNC) system and a numerical control (NC) operating to control the multi-axis motions of the system whereby the MCL, implemented in software, can access the NC, also implemented in software, and control the system but only under conditions dictated by the NC and enforced by the routines of the customization window so that, for example, the integrity of the NC software will not be compromised by any of the MCL software which is of a user programmable type. Additionally, machine setup data and the availability of optional system control features are routed from the NC to the MCL through the customization window. Also the customization window conveys the status of the various operational conditions of the NC to the MCL.

152 citations


Journal ArticleDOI
P. Misra1
TL;DR: A case study is presented of the analysis of failure data from a Space Shuttle software project to predict the number of failures likely during a mission, and the subsequent verification of these predictions.
Abstract: Methods proposed for software reliability prediction are reviewed. A case study is then presented of the analysis of failure data from a Space Shuttle software project to predict the number of failures likely during a mission, and the subsequent verification of these predictions.

Journal ArticleDOI
TL;DR: The findings suggest that software group innovativeness can be improved by providing appropriate external information channels, but this relationship is contingent on a software group's internal environment.
Abstract: This study of forty-nine software development groups investigated the effectiveness of ten information channels, linking the software groups to potential information resources about new developments in software methodologies, as a means of facilitating software group innovativeness. While the findings suggest that software group innovativeness can be improved by providing appropriate external information channels, this relationship is contingent on a software group's internal environment. The channels most commonly provided by those organizations participating in the study tended to be those least effective in promoting innovation.

DOI
01 Jan 1983
TL;DR: The CONIC architecture for DCCS is described, concentrating on the software structure but also briefly describing the physical architecture designed to support a CONIC system.
Abstract: Distributed computer control systems (DCCS) have a number of potential advantages over centralised systems, especially where the application is itself physically distributed. A computer station can be placed close to the plant being controlled, and a communications network used to enable the stations to communicate to co-ordinate their actions. However, the software must be carefully designed to exploit the potential advantages of distribution. In the paper, the CONIC architecture for DCCS is described, concentrating on the software structure but also briefly describing the physical architecture designed to support a CONIC system. The software structure emphasises the distinction between the writing of individual software components and the construction and configuration of a system from a set of components. A modular structure is used to separate programming from configuration. Typed entry and exit ports clearly define a module interface which, like the plugs and sockets of hardware components, permits modules to be interconnected in different ways. On-line modification and extension of the system is supported by permitting the dynamic creation and interconnection of modules. Message-passing primitives are provided to permit modules to co-ordinate and synchronise control actions.
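The typed entry/exit port idea can be sketched as follows. This is an illustrative Python analogue of the plug-and-socket model, not CONIC's actual notation:

```python
import queue

class Port:
    """A typed, unidirectional message port; the type is checked on send."""
    def __init__(self, msg_type):
        self.msg_type = msg_type
        self.q = queue.Queue()

    def send(self, msg):
        if not isinstance(msg, self.msg_type):
            raise TypeError(f"expected {self.msg_type.__name__}")
        self.q.put(msg)

    def receive(self):
        return self.q.get()

def connect(exit_port, entry_port):
    """Configuration step, separate from programming: wire an exit port
    to a type-compatible entry port, like plugging into a socket."""
    if exit_port.msg_type is not entry_port.msg_type:
        raise TypeError("incompatible port types")
    exit_port.q = entry_port.q  # messages sent on exit arrive at entry
```

A sensor module's exit port of type `float` could then be connected, at configuration time, to a controller module's entry port of the same type — and reconnected differently without changing either module's code.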

Journal ArticleDOI
TL;DR: A decision procedure to determine when computer software should be released is described, based upon the cost-benefit for the entire company that has developed the software.
Abstract: A decision procedure to determine when computer software should be released is described. This procedure is based upon the cost-benefit for the entire company that has developed the software. This differs from the common practice of only minimizing the repair costs for the data processing division. Decision rules are given to determine at what time the system should be released based upon the results of testing the software. Necessary and sufficient conditions are identified which determine when the system should be released (immediately, before the deadline, at the deadline, or after the deadline). No assumptions are made about the relationship between any of the model's parameters. The model can be used whether the software was developed by a first or second party. The case where future costs are discounted is also considered.

Patent
14 Sep 1983
TL;DR: In this paper, a method and apparatus for protecting computer software using an active coded hardware apparatus which is adapted to be connected by an interface connector to a communications port of a computer is described.
Abstract: A method and apparatus are provided for protecting computer software using an active coded hardware apparatus which is adapted to be connected by an interface connector to a communications port of a computer. The computer is directed by a coded software program in which a small section of the code of the computer software interrogates the communications port periodically to determine if the active coded hardware device is present and connected. The active coded hardware device has a permanently established preset code on an active presettable counter circuit, which code is transmitted when interrogated. If the active coded hardware device is present when interrogated and the correct code is returned through the communications port of the computer, the program is permitted to continue, ensuring that the software is properly protected at all times. The active coded hardware device with its particular code and circuitry is sealed in epoxy as a deterrent against tampering. In order to violate the hardware it would be necessary to construct a duplicate of the hardware device in order to run a second copy of the software. Since the device is active, containing electrical logic elements, the difficulty of duplicating the device and its function without the benefit of circuit diagrams will be greater than that of copying the software itself. The particular hardware may be used alone or will permit daisy-chaining, allowing 2, 3 or even an entire family of other elements with their own individual codes to operate simultaneously while permitting computer peripherals to remain connected to the same port. A variety of time and logic elements may be added to the basic configuration in order to increase the difficulty of duplicating or violating the system.
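The periodic interrogate-and-compare check might look like the following sketch, where `read_port` and the preset code are hypothetical stand-ins for the real port I/O:

```python
def software_is_authorized(read_port, expected_code):
    """Interrogate the communications port and compare the returned
    code against the code preset in the hardware device. `read_port`
    is a hypothetical callable standing in for the real port I/O;
    it raises IOError if the device is absent or not responding."""
    try:
        returned = read_port()
    except IOError:
        return False   # device absent or not responding
    return returned == expected_code
```

The protected program would invoke this check on a timer and halt (or refuse to proceed) whenever it returns False.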

Journal ArticleDOI
TL;DR: This work proposes a straightforward pragmatic approach to software fault tolerance which takes advantage of the structure of real-time systems to simplify error recovery, and a classification scheme for errors is introduced.
Abstract: Real-time systems often have very high reliability requirements and are therefore prime candidates for the inclusion of fault tolerance techniques. In order to provide tolerance to software faults, some form of state restoration is usually advocated as a means of recovery. State restoration can be expensive and the cost is exacerbated for systems which utilize concurrent processes. The concurrency present in most real-time systems and the further difficulties introduced by timing constraints suggest that providing tolerance for software faults may be inordinately expensive or complex. We believe that this need not be the case, and propose a straightforward pragmatic approach to software fault tolerance which is believed to be applicable to many real-time systems. The approach takes advantage of the structure of real-time systems to simplify error recovery, and a classification scheme for errors is introduced. Responses to each type of error are proposed which allow service to be maintained.

Patent
13 Jan 1983
TL;DR: In this paper, the authors present an approach for protecting proprietary computer software against unauthorised use, which comprises a store (11) for selected data, means (10) for comparing data successively communicated by a program running on a computer (1) with data from the storage means, means such as an indelible memory (15) associated with a microprocessor (14) for storing identifying data, and transmitting means for sending stored identifying data to the computer.
Abstract: Apparatus for protecting proprietary computer software against unauthorised use comprises a store (11) for selected data, means (10) for comparing data successively communicated by a program running on a computer (1) with data from the storage means, means such as an indelible memory (15) associated with a microprocessor (14) for storing identifying data, and transmitting means (14) for sending stored identifying data to the computer. When a match is detected by the comparator, the identifying data are sent to the computer, which requires this data for continued normal running. A copy of the software cannot run on a computer without associated protection apparatus and unauthorised copies will therefore be unusable unless the protection apparatus can be obtained. For a greater degree of protection, a sequence of matches and identifying data messages may be required to allow continued normal running of a program.

15 Aug 1983
TL;DR: This work describes the implementation of several standard automatic focusing algorithms on the POPEYE system and provides experimental evaluation and comparison, which leaves the system with a valuable enhancement and provides a starting point for the Implementation of a production focusing system.
Abstract: The POPEYE system is a grey level computer vision system developed for research and development. It provides a convenient environment for research by coupling a powerful microprocessor with a large base of support software. The particulars of the system's hardware configuration and software support are given after an explanation of the desires which motivated its fabrication. In addition to providing general computation and display capabilities, the system provides open loop manual or software control over the camera parameters of pan, tilt, focus, and zoom. This offers many advantages over fixed arrangements such as the ability to investigate focusing and elementary tracking algorithms. This work describes the implementation of several standard automatic focusing algorithms on the POPEYE system and provides experimental evaluation and comparison. This leaves the system with a valuable enhancement and provides a starting point for the implementation of a production focusing system. There are many possible uses for such a system, including robotic assembly and inspection tasks. One application is the development of industrial inspection algorithms for the Factory of the Future Project. Part of this project involved the inspection of fluorescent lamp mount assemblies. Algorithms for the automated inspection of the assemblies are described which represent the solutions to difficult inspection problems currently beyond the capabilities of commercial vision systems. Suggestions for the implementation of a production focusing system are given along with suggestions for possible hardware improvements to the POPEYE system.

Journal ArticleDOI
04 Feb 1983-JAMA
TL;DR: Methods should be sought for writing clinical algorithms that represent expert consensus that could be written for any area of medical decision making that can be standardized, so that medical practice could be taught more effectively, monitored accurately, and understood better.
Abstract: The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five clinical education and patient care uses of algorithms are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared as to their clinical usefulness with decision analysis. Three objections to clinical algorithms are answered, including the one that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored accurately, and understood better. (JAMA 1983;249:627-632)

Patent
Terrance L. Lillie1
12 Sep 1983
TL;DR: In this paper, a data processing system including a method and apparatus for controlling software configurations is presented, which includes a portable media storage device, such as a floppy disk drive, and fixed information which loads a preselected portion of the portable media into the system.
Abstract: A data processing system including a method and apparatus for controlling software configurations. A plurality of software programs and routines are stored in a mass storage device. Which software is accessed by the system is controlled by non-volatile information stored in the system. The system also includes a portable media storage device, such as a floppy disk drive, and fixed information which loads a preselected portion of the portable media into the system. Two portable media are also provided. The preselected portion of the first portable media contains a bootstrap program which loads an operating system into the system. The preselected portion of the second portable media contains a program for altering the non-volatile information stored in the system so as to change the software configuration which may be accessed by the system. The program then erases the preselected portion of the second portable media after execution. In a preferred embodiment, the program also includes a counter containing a preselected number which is decremented each time the program is executed, and the program does not erase the preselected portion until the counter reaches zero.

Journal ArticleDOI
01 Mar 1983
TL;DR: The applications of computer-assisted, or comput(eriz)ed, tomography (CT) are reviewed and the major emphasis is on medical applications, but all relevant technical sciences are covered.
Abstract: The applications of computer-assisted, or comput(eriz)ed, tomography (CT) are reviewed. The major emphasis is on medical applications, but all relevant technical sciences are covered. A unified descriptive account of the underlying principles is presented (detailed reviews of algorithms and their mathematical backgrounds can be found elsewhere in this special issue). Deficiencies in existing hardware and software are identified and the possible means of remedying the more urgent of these are outlined. Promising approaches for future research and development into CT are suggested.

Dissertation
01 Mar 1983
TL;DR: This work determines the appropriate granularity of replacement in relation to the module structure of the language, examines the constraints imposed on dynamic replacement by the need to ensure behavioral consistency across replacements, and analyzes functional requirements for a replacement mechanism.
Abstract: The replacement of parts of software systems is an important aspect of programming methodology. Most of the research in this area has centered around support for modular construction and the clear separation of interface from implementation. The emphasis has been on producing easily modified static program structures. With recent increased interest in distributed systems, attention has been focused on a class of applications for which this approach to modifiability is insufficient. These are applications involving long-running, distributed computations with long-term, on-line state information. In the context of the Argus programming system, we examine a method of supporting dynamic modification of software for this class of applications. We determine the appropriate granularity of replacement in relation to the module structure of the language, examine the constraints imposed on dynamic replacement by the need to ensure behavioral consistency across replacements, and then analyze functional requirements for a replacement mechanism.

Proceedings ArticleDOI
TL;DR: The System Modeller provides automatic support for several different kinds of program development cycle in the Cedar programming system, including the daily evolution of a single module or a small group of modules modified by a single person.
Abstract: The System Modeller provides automatic support for several different kinds of program development cycle in the Cedar programming system. It handles the daily evolution of a single module or a small group of modules modified by a single person, the assembly of numerous modules into a large system with complex interconnections, and the formal release of a system. The Modeller can also efficiently locate a large number of modules in a big distributed file system, and move them from one machine to another to meet operational requirements or improve performance.

Journal ArticleDOI
TL;DR: A technique, software fault tree analysis, is described for the safety analysis of software; it interfaces with hardware fault tree analysis to allow the safety of the entire system to be maximized.

Journal ArticleDOI
TL;DR: SPIDER is a general-purpose image processing software package which consists of over 400 FORTRAN IV subroutines for various image processing algorithms and several utility programs for managing them.
Abstract: SPIDER is a general-purpose image processing software package which consists of over 400 FORTRAN IV subroutines for various image processing algorithms and several utility programs for managing them. The package was developed for the benefit of extensive interchange and accumulation of programs among research groups. Thus, high transportability of software is emphasized above all in its design concept. In effect, all the image processing subroutines are implemented to be completely free of I/O work such as file access or driving peripheral image devices. The specifications of SPIDER programs also regulate the style of comments in source programs and documentation for the user's manual. SPIDER may also be very useful as a research tool in other scientific disciplines as well as integrating fundamental algorithms in the image processing community. The design concepts, specifications, and contents of SPIDER are described.

Book
01 Jan 1983
TL;DR: This self-contained introductory text stresses a "learn by doing" approach to numerical analysis with exercises to be done on a computer or micro-computer to show why--and why not--the methods work.
Abstract: This self-contained introductory text stresses a "learn by doing" approach to numerical analysis with exercises to be done on a computer or micro-computer. Unlike "software" books, this text thoroughly explains methods used for solving problems with sufficient analysis to show why--and why not--they work. Discussions of theorem proofs are kept to a minimum.

Journal ArticleDOI
TL;DR: The architecture of a logic simulation machine employing distributed and parallel processing is described, which can accommodate different levels of modeling ranging from simple gates to complex functions, and support timing analysis.
Abstract: Special-purpose CAD hardware is increasingly being considered as a means to meet the challenge posed to conventional (software-based) CAD tools by the growing complexity of VLSI circuits. In this paper we describe the architecture of a logic simulation machine employing distributed and parallel processing. Our architecture can accommodate different levels of modeling ranging from simple gates to complex functions, and support timing analysis. We estimate that simulation implemented by the proposed special-purpose hardware will be between 10 and 60 times faster than currently used software algorithms running on general-purpose computers. With the available technology, a throughput of 1 000 000 gate evaluations/sec can be achieved.
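For contrast with the proposed special-purpose hardware, a minimal software gate-evaluation loop of the kind being accelerated might look like this. The gate set and netlist format are illustrative, not the paper's actual modeling levels:

```python
# Minimal software gate-level simulation: evaluate gates of a netlist
# in topological order. Gate types here are illustrative primitives.
GATES = {
    "AND": lambda a, b: a & b,
    "OR":  lambda a, b: a | b,
    "XOR": lambda a, b: a ^ b,
}

def simulate(netlist, inputs):
    """netlist: list of (output_net, gate_type, in1, in2) tuples in
    topological order; inputs: dict of primary input values (0/1).
    Returns the value of every net after one evaluation pass."""
    values = dict(inputs)
    for out, gate, a, b in netlist:
        values[out] = GATES[gate](values[a], values[b])
    return values

# Half adder: sum = a XOR b, carry = a AND b
net = [("sum", "XOR", "a", "b"), ("carry", "AND", "a", "b")]
print(simulate(net, {"a": 1, "b": 1}))
```

Each loop iteration is one "gate evaluation" — the unit the paper uses when estimating a throughput of 1 000 000 gate evaluations/sec for the hardware.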

Journal ArticleDOI
TL;DR: The use and acceptance of the term "software engineer" is investigated, and the functions and background of persons identified as software engineers are reported.
Abstract: The results of a survey of software development practice are reported and analyzed. The problems encountered in various phases of the software life cycle are measured and correlated with characteristics of the responding installations. The use and acceptance of the term "software engineer" is investigated, and the functions and background of persons identified as software engineers are reported. The usage of a wide variety of software engineering tools and methods is measured; conclusions are drawn concerning the usefulness of these techniques.

Journal ArticleDOI
TL;DR: The purpose of this paper is to identify significant improvements that will be made in simulation software in the next 10 years, based upon review of ongoing research efforts in programming systems.
Abstract: The availability of good software tools is vitally important to practitioners of simulation. The purpose of this paper is to identify significant improvements that will be made in simulation software in the next 10 years. While based upon review of ongoing research efforts in programming systems, a paper such as this is necessarily speculative. Substantial research effort is already underway. Numerous papers, even entire books, describe current research and comment upon trends for the future. Since most of the current research in programming systems is being conducted in other problem contexts, we must look outside the discipline of simulation for most of our examples. In doing so, we must be careful to assess the applicability of such examples to simulation, for techniques that are successful within a narrow focus may not be readily extended to such a broad discipline as simulation. There is great cause for optimism: it appears likely that simulation practitioners of the future will work in an environment comprised of well-integrated software tools. The integrated software environment of the 1990s will make present state-of-the-art simulation software tools look as primitive as the building of simulation models entirely in high-level languages like Fortran looks today.

Journal ArticleDOI
TL;DR: A commonly used model for describing software failures is presented, and it is pointed out that some of the alternative models can be obtained by assigning specific prior distributions for the parameters of this model.
Abstract: In this paper we present a commonly used model for describing software failures, and point out that some of the alternative models can be obtained by assigning specific prior distri- butions for the parameters of this model. The likelihood function of an unknown parameter of the model poses some interesting issues and problems, which can be meaningfully addressed by adopting a Bayesian point of view. We present some real life data on software failures to illustrate the usefulness of the approach taken here.