
Showing papers in "IEEE Transactions on Software Engineering in 1980"


Journal ArticleDOI
TL;DR: In this paper, the A-7 document is described as a model of a disciplined approach to requirements specification; the document is available to anyone who wishes to see a fully worked-out example of the approach.
Abstract: This paper concerns new techniques for making requirements specifications precise, concise, unambiguous, and easy to check for completeness and consistency. The techniques are well-suited for complex real-time software systems; they were developed to document the requirements of existing flight software for the Navy's A-7 aircraft. The paper outlines the information that belongs in a requirements document and discusses the objectives behind the techniques. Each technique is described and illustrated with examples from the A-7 document. The purpose of the paper is to introduce the A-7 document as a model of a disciplined approach to requirements specification; the document is available to anyone who wishes to see a fully worked-out example of the approach.

542 citations


Journal ArticleDOI
R.C. Cheung1
TL;DR: A user-oriented software reliability figure of merit is defined to measure the reliability of a software system with respect to a user environment and the effects of the user profile, which summarizes the characteristics of the users of a system, on system reliability are discussed.
Abstract: A user-oriented reliability model has been developed to measure the reliability of service that a system provides to a user community. It has been observed that in many systems, especially software systems, reliable service can be provided to a user when it is known that errors exist, provided that the service requested does not utilize the defective parts. The reliability of service, therefore, depends both on the reliability of the components and the probabilistic distribution of the utilization of the components to provide the service. In this paper, a user-oriented software reliability figure of merit is defined to measure the reliability of a software system with respect to a user environment. The effects of the user profile, which summarizes the characteristics of the users of a system, on system reliability are discussed. A simple Markov model is formulated to determine the reliability of a software system based on the reliability of each individual module and the measured intermodular transition probabilities as the user profile. Sensitivity analysis techniques are developed to determine modules most critical to system reliability. The applications of this model to develop cost-effective testing strategies and to determine the expected penalty cost of failures are also discussed. Some future refinements and extensions of the model are presented.

505 citations
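The Markov formulation described above can be sketched in miniature. The module names, reliabilities, transition probabilities, and the acyclic control structure below are illustrative assumptions, not values from the paper:

```python
# Sketch in the spirit of a user-oriented reliability model: each module has
# a reliability, and the user profile supplies intermodular transition
# probabilities. System reliability is the probability of executing every
# visited module correctly through to successful exit. The module names,
# figures, and acyclic structure are illustrative assumptions.

def system_reliability(reliability, transitions, module):
    """Probability of correct execution from `module` to successful exit.

    reliability: dict module -> probability the module executes correctly
    transitions: dict module -> list of (next_module_or_None, probability),
                 where None denotes successful exit. Structure is acyclic.
    """
    r = 0.0
    for nxt, p in transitions[module]:
        cont = 1.0 if nxt is None else system_reliability(reliability, transitions, nxt)
        r += p * cont
    return reliability[module] * r

reliability = {"A": 0.99, "B": 0.98, "C": 0.97}
transitions = {
    "A": [("B", 0.6), ("C", 0.4)],  # user profile: A calls B 60%, C 40%
    "B": [(None, 1.0)],             # B exits on completion
    "C": [(None, 1.0)],
}
print(round(system_reliability(reliability, transitions, "A"), 4))  # ~0.9662
```

With these assumed figures the result is 0.99 × (0.6 × 0.98 + 0.4 × 0.97) ≈ 0.9662; a sensitivity analysis would repeat the computation while perturbing one module's reliability at a time to find the modules most critical to the system.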


Journal ArticleDOI
TL;DR: In this article, an extended timed Petri net model is used to model the synchronization involved in real-time asynchronous concurrent systems, and procedures for predicting and verifying the system performance are presented.
Abstract: Some analysis techniques for real-time asynchronous concurrent systems are presented. In order to model clearly the synchronization involved in these systems, an extended timed Petri net model is used. The system to be studied is first modeled by a Petri net. Based on the Petri net model, a system is classified into either: 1) a consistent system; or 2) an inconsistent system. Most real-world systems fall into the first class which is further subclassified into i) decision-free systems; ii) safe persistent systems; and iii) general systems. Procedures for predicting and verifying the system performance of all three types are presented. It is found that the computational complexity involved increases in the same order as they are listed above.

503 citations
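For the decision-free subclass above, steady-state performance reduces to a ratio over directed circuits: each circuit's total firing time divided by its token count, with the slowest circuit setting the cycle time. This sketch assumes the circuits have already been enumerated; the timings and markings are made up:

```python
# Classical performance result for decision-free (marked graph) timed Petri
# nets: the steady-state cycle time equals the maximum, over directed
# circuits, of (total transition firing time in the circuit) divided by
# (tokens in the circuit). This sketch assumes the circuits are already
# enumerated; the timings and token counts are illustrative.

def cycle_time(circuits):
    """circuits: list of (sum_of_firing_times, token_count) per circuit."""
    return max(t / m for t, m in circuits)

# Two circuits: 5 time units carrying 1 token, and 9 time units carrying 2.
print(cycle_time([(5.0, 1), (9.0, 2)]))  # bottleneck circuit 5/1 -> 5.0
```

The computation rate of the system is the reciprocal of this cycle time, which is why the bottleneck circuit identifies where extra resources (tokens) pay off.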


Journal ArticleDOI
TL;DR: This paper outlines the argument why it is unlikely that anyone will find a cheaper nonlookahead memory policy that delivers significantly better performance and suggests that a working set dispatcher should be considered.
Abstract: A program's working set is the collection of segments (or pages) recently referenced. This concept has led to efficient methods for measuring a program's intrinsic memory demand; it has assisted in understanding and in modeling program behavior; and it has been used as the basis of optimal multiprogrammed memory management. The total cost of a working set dispatcher is no larger than the total cost of other common dispatchers. This paper outlines the argument why it is unlikely that anyone will find a cheaper nonlookahead memory policy that delivers significantly better performance.

405 citations
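The working set itself is simple to state: W(t, τ) is the set of distinct pages referenced in the last τ references up to time t. A minimal sketch, with an illustrative reference string and window size:

```python
# A minimal working-set computation: W(t, tau) is the set of distinct pages
# referenced in the window of the last tau references up to and including
# time t. The reference string and window size below are illustrative.

def working_set(refs, t, tau):
    """Pages referenced in the last `tau` references ending at time t."""
    start = max(0, t - tau + 1)
    return set(refs[start:t + 1])

refs = [1, 2, 1, 3, 2, 2, 4]
print(sorted(working_set(refs, t=6, tau=4)))  # window covers refs[3:7] -> [2, 3, 4]
```

A working set dispatcher would admit a program to the multiprogramming mix only while the sum of working-set sizes fits in memory.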


Journal ArticleDOI
TL;DR: This paper presents a testing strategy designed to detect errors in the control flow of a computer program, and the conditions under which this strategy is reliable are given and characterized.
Abstract: This paper presents a testing strategy designed to detect errors in the control flow of a computer program, and the conditions under which this strategy is reliable are given and characterized. The control flow statements in a computer program partition the input space into a set of mutually exclusive domains, each of which corresponds to a particular program path and consists of input data points which cause that path to be executed. The testing strategy generates test points to examine the boundaries of a domain to detect whether a domain error has occurred, as either one or more of these boundaries will have shifted or else the corresponding predicate relational operator has changed. If test points can be chosen within ε of each boundary, under the appropriate assumptions, the strategy is shown to be reliable in detecting domain errors of magnitude greater than ε. Moreover, the number of test points required to test each domain grows only linearly with both the dimensionality of the input space and the number of predicates along the path being tested.

383 citations
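The boundary-probing idea can be sketched for a single linear border. Assuming a path domain bounded by a*x + b*y ≤ c (the coefficients and ε below are illustrative, not from the paper), the strategy places one test point on the border and one a distance ε off it, so a border shift of magnitude greater than ε pushes some point into the wrong domain:

```python
# Domain-testing sketch: for a path whose border is the linear predicate
# a*x + b*y <= c, generate a point ON the border and a point just OFF it
# (distance eps on the outside). If the implemented border has shifted by
# more than eps, one of the two points executes the wrong path.
# The coefficients and eps are illustrative assumptions.

def on_off_points(a, b, c, x, eps):
    """For a fixed x, return (on_point, off_point) straddling a*x + b*y = c."""
    y_on = (c - a * x) / b                       # exactly on the border
    y_off = y_on + eps * (1 if b > 0 else -1)    # just outside the <= domain
    return (x, y_on), (x, y_off)

on, off = on_off_points(a=1.0, b=1.0, c=10.0, x=4.0, eps=0.01)
print(on)   # (4.0, 6.0) -- satisfies the predicate with equality
print(off)  # slightly above the border, so it falls in the adjacent domain
```

Because each border needs only a constant number of such points per dimension, the total test count grows linearly in the dimensionality and in the number of path predicates, as the abstract states.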


Journal ArticleDOI
TL;DR: Query-by-Pictorial-Example is a relational query language introduced for manipulating queries regarding pictorial relations as well as conventional relations.
Abstract: Query-by-Pictorial-Example is a relational query language introduced for manipulating queries regarding pictorial relations as well as conventional relations. In addition to the manipulation capabilities of the conventional query languages, queries can also be expressed in terms of pictorial examples through a display terminal. Example queries are used to illustrate the language facilities.

299 citations


Journal ArticleDOI
TL;DR: The concepts of a revealing test criterion and a revealing subdomain are proposed and used to provide a basis for constructing program tests.
Abstract: The theory of test data selection proposed by Goodenough and Gerhart is examined. In order to extend and refine this theory, the concepts of a revealing test criterion and a revealing subdomain are proposed. These notions are then used to provide a basis for constructing program tests.

235 citations


Journal ArticleDOI
TL;DR: An approach to functional testing is described in which the design of a program is viewed as an integrated collection of functions and the selection of test data depends on the functions used in the design and on the value spaces over which the functions are defined.
Abstract: An approach to functional testing is described in which the design of a program is viewed as an integrated collection of functions. The selection of test data depends on the functions used in the design and on the value spaces over which the functions are defined. The basic ideas in the method were developed during the study of a collection of scientific programs containing errors. The method was the most reliable testing technique for discovering the errors. It was found to be significantly more reliable than structural testing. The two techniques are compared and their relative advantages and limitations are discussed.

209 citations


Journal ArticleDOI
TL;DR: In this paper, measures for estimating the stability of a program and the modules of which the program is composed are presented, and an algorithm for computing these stability measures is given.
Abstract: Software maintenance is the dominant factor contributing to the high cost of software. In this paper, the software maintenance process and the important software quality attributes that affect the maintenance effort are discussed. One of the most important quality attributes of software maintainability is the stability of a program, which indicates the resistance to the potential ripple effect that the program would have when it is modified. Measures for estimating the stability of a program and the modules of which the program is composed are presented, and an algorithm for computing these stability measures is given. An algorithm for normalizing these measures is also given. Applications of these measures during the maintenance phase are discussed along with an example. An indirect validation of these stability measures is also given. Future research efforts involving application of these measures during the design phase, program restructuring based on these measures, and the development of an overall maintainability measure are also discussed.

203 citations


Journal ArticleDOI
D.L. Russell1
TL;DR: In systems of asynchronous processes that use message lists with SEND–RECEIVE primitives for interprocess communication, recovery primitives are defined to perform state restoration: MARK saves a particular point in the execution of the program; RESTORE resets the system state to an earlier point (saved by MARK); and PURGE discards redundant information when it is no longer needed for possible state restoration.
Abstract: In systems of asynchronous processes that use message lists with SEND–RECEIVE primitives for interprocess communication, recovery primitives are defined to perform state restoration: MARK saves a particular point in the execution of the program; RESTORE resets the system state to an earlier point (saved by MARK); and PURGE discards redundant information when it is no longer needed for possible state restoration.

190 citations
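A toy, single-process rendering of the three primitives (a real implementation of the paper's scheme must also undo or replay interprocess messages, which this sketch omits entirely):

```python
# Toy rendering of MARK / RESTORE / PURGE for one process. MARK checkpoints
# the state, RESTORE rolls back to the most recent checkpoint, and PURGE
# discards checkpoints once they can no longer be needed. Interprocess
# message rollback, the hard part of the paper's setting, is omitted here.
import copy

class Recoverable:
    def __init__(self, state):
        self.state = state
        self._marks = []

    def mark(self):
        self._marks.append(copy.deepcopy(self.state))  # save a recovery point

    def restore(self):
        self.state = self._marks.pop()                 # reset to saved point

    def purge(self):
        self._marks.clear()                            # discard old points

p = Recoverable({"balance": 100})
p.mark()
p.state["balance"] -= 30   # tentative update
p.restore()                # roll back to the marked point
print(p.state["balance"])  # 100
```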


Journal ArticleDOI
TL;DR: An examination of the assumptions used in early bug-counting models of software reliability shows them to be deficient and it is suggested that current theories are only the first step along what threatens to be a long road.
Abstract: An examination of the assumptions used in early bug-counting models of software reliability shows them to be deficient. Suggestions are made to improve modeling assumptions and examples are given of mathematical implementations. Model verification via real-life data is discussed and minimum requirements are presented. An example shows how these requirements may be satisfied in practice. It is suggested that current theories are only the first step along what threatens to be a long road.

Journal ArticleDOI
TL;DR: A hierarchy of structural test metrics is suggested to direct the choice and to monitor the coverage of test paths, and the use of "allegations" to prevent the static generation of many infeasible paths is reported.
Abstract: There are a number of practical difficulties in performing a path testing strategy for computer programs. One problem is in deciding which paths, out of a possible infinity, to use as test cases. A hierarchy of structural test metrics is suggested to direct the choice and to monitor the coverage of test paths. Another problem is that many of the chosen paths may be infeasible in the sense that no test data can ever execute them. Experience with the use of "allegations" to circumvent this problem and prevent the static generation of many infeasible paths is reported.

Journal ArticleDOI
TL;DR: A scheduling algorithm for a set of tasks that guarantees the time within which a task, once started, will complete is described.
Abstract: This paper describes a scheduling algorithm for a set of tasks that guarantees the time within which a task, once started, will complete. A task is started upon receipt of an external signal or the completion of other tasks. Each task has a fixed set of requirements in processor time, resources, and device operations needed for completion of its various segments. A worst case analysis of task performance is carried out. An algorithm is developed for determining the response times that can be guaranteed for a set of tasks. Operating system overhead is also accounted for.

Journal ArticleDOI
David R. Musser1
TL;DR: The main emphasis is on methods of ensuring convergence (finite and unique termination) of sets of rewrite rules and on the relation of this property to the equational and inductive proof theories of data types.
Abstract: This paper describes the data type definition facilities of the AFFIRM system for program specification and verification. Following an overview of the system, we review the rewrite rule concepts that form the theoretical basis for its data type facilities. The main emphasis is on methods of ensuring convergence (finite and unique termination) of sets of rewrite rules and on the relation of this property to the equational and inductive proof theories of data types.
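The rewrite-rule mechanics can be illustrated with a deliberately naive string rewriter. The stack rules below are illustrative; AFFIRM operates on terms, not strings, and analyzes convergence (finite and unique termination), a property this sketch simply inherits from the fact that each rule strictly shortens the term:

```python
# Toy term rewriting: apply rules until no rule matches (a normal form).
# This naive string-level rewriter terminates because every rule below
# replaces its left-hand side with something strictly shorter; real systems
# like AFFIRM work on structured terms with variables and must *prove*
# finite and unique termination. The stack rules are illustrative.

def normalize(term, rules):
    """Apply string rewrite rules to a fixpoint and return the normal form."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in term:
                term = term.replace(lhs, rhs, 1)  # rewrite one redex
                changed = True
                break
    return term

# Equations of a stack theory: pop(push(x,s)) = s and top(push(x,s)) = x,
# oriented left-to-right as rewrite rules.
rules = [("pop(push(x,s))", "s"), ("top(push(x,s))", "x")]
print(normalize("top(push(x,pop(push(x,s))))", rules))  # -> x
```

Convergence is what makes such a rule set usable as a decision procedure: any order of rule application reaches the same normal form, so equality of two terms can be decided by normalizing both.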

Journal ArticleDOI
TL;DR: Several of the *MOD distributed programming constructs are discussed as well as an interprocessor communication methodology.
Abstract: Distributed programming is characterized by high communications costs and the inability to use shared variables and procedures for interprocessor synchronization and communication. *MOD is a high-level language system which attempts to address these problems by creating an environment conducive to efficient and reliable network software construction. Several of the *MOD distributed programming constructs are discussed, as well as an interprocessor communication methodology. Examples illustrating these concepts are drawn from the areas of network communication and distributed process synchronization.

Journal ArticleDOI
TL;DR: The intuitive approach of this paper, which makes heavy use of examples, is complemented by the more formal development of the companion paper, "Redundancy in Data Structures: Some Theoretical Results."
Abstract: The increasing cost of computer system failure has stimulated interest in improving software reliability. One way to do this is by adding redundant structural data to data structures. Such redundancy can be used to detect and correct (structural) errors in instances of a data structure. The intuitive approach of this paper, which makes heavy use of examples, is complemented by the more formal development of the companion paper, "Redundancy in Data Structures: Some Theoretical Results."
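A minimal example of the idea: store both forward and backward links in a list, and a checker can detect a corrupted pointer because the two link sets must agree. The list contents below are illustrative:

```python
# Structural redundancy sketch: a doubly linked list stores each adjacency
# twice (next and prev). A consistency checker exploits this redundancy to
# detect a corrupted pointer, since the two link sets must mirror each other.
# With more redundancy (e.g., counts, identifiers) errors can also be
# corrected, not just detected. The list contents are illustrative.

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None
        self.prev = None

def build(values):
    nodes = [Node(v) for v in values]
    for a, b in zip(nodes, nodes[1:]):
        a.next, b.prev = b, a
    return nodes

def consistent(nodes):
    """Forward and backward links must agree; any mismatch flags an error."""
    return all(n.next is None or n.next.prev is n for n in nodes)

nodes = build([1, 2, 3])
print(consistent(nodes))   # True
nodes[0].next = nodes[2]   # corrupt one forward pointer
print(consistent(nodes))   # redundancy exposes the error -> False
```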

Journal ArticleDOI
TL;DR: A control flow checking scheme capable of detecting control flow errors of programs resulting from software coding errors, hardware malfunctions, or memory mutilation during the execution of the program is presented.
Abstract: A control flow checking scheme capable of detecting control flow errors of programs resulting from software coding errors, hardware malfunctions, or memory mutilation during the execution of the program is presented. In this approach, the program is partitioned into loop-free intervals and a database containing the path information in each of the loop-free intervals is derived from the detailed design. The path in each loop-free interval actually traversed at run time is recorded and then checked against the information provided in the database, and any discrepancy indicates an error. This approach is general, and can detect all uncompensated illegal branches. Any uncompensated error that occurs during the execution of a loop-free interval and manifests itself as a wrong branch within the loop-free interval or right after the completion of execution of the loop-free interval is also detectable. The approach can also be used to check the control flow in the testing phase of program development. The capabilities, limitations, implementation, and the overhead of using this approach are discussed.
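The run-time check can be sketched as a lookup against a design-time path database, one entry per loop-free interval. The interval name and path set below are illustrative:

```python
# Control-flow-checking sketch: at design time, record the legal paths
# through each loop-free interval; at run time, record the path actually
# traversed and compare. Any discrepancy indicates an uncompensated illegal
# branch. The interval name and its path database are illustrative.

LEGAL_PATHS = {
    "interval1": {
        ("entry", "b1", "b3", "exit"),
        ("entry", "b2", "b3", "exit"),
    },
}

def check(interval, observed_path):
    """True iff the recorded run-time path appears in the design database."""
    return tuple(observed_path) in LEGAL_PATHS[interval]

print(check("interval1", ["entry", "b1", "b3", "exit"]))  # legal path -> True
print(check("interval1", ["entry", "b1", "exit"]))        # illegal branch -> False
```

Partitioning at loop-free interval boundaries keeps each database entry finite even when the whole program contains loops.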

Journal ArticleDOI
TL;DR: The distributed protocol for deadlock detection in distributed databases is incorrect, and possible remedies are presented, however, the distributed protocol remains impractical because "condensations" of "transaction-wait-for" graphs make graph updates difficult to perform.
Abstract: A hierarchically organized and a distributed protocol for deadlock detection in distributed databases are presented in [1]. In this paper we show that the distributed protocol is incorrect, and present possible remedies. However, the distributed protocol remains impractical because "condensations" of "transaction-wait-for" graphs make graph updates difficult to perform. Delayed graph updates cause the occurrence of false deadlocks in this as well as in some other deadlock detection protocols for distributed systems. The performance degradation that results from false deadlocks depends on the characteristics of each protocol.
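The underlying check in such protocols is cycle detection on the transaction-wait-for graph; the false-deadlock problem arises when that graph is assembled from stale local views, so a detected cycle may never have existed globally. A sketch with an illustrative graph:

```python
# Deadlock detection on a transaction-wait-for graph: a deadlock corresponds
# to a directed cycle. In a distributed database, delayed graph updates mean
# the assembled graph can contain cycles that never existed simultaneously
# ("false deadlocks"), the pitfall discussed above. Graphs are illustrative.

def has_cycle(wait_for):
    """DFS cycle detection on a dict: node -> iterable of nodes it waits for."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in wait_for}

    def visit(n):
        color[n] = GRAY                      # on the current DFS path
        for m in wait_for.get(n, ()):
            if color.get(m, WHITE) == GRAY:  # back edge -> cycle
                return True
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK                     # fully explored
        return False

    return any(color[n] == WHITE and visit(n) for n in wait_for)

print(has_cycle({"T1": ["T2"], "T2": ["T3"], "T3": ["T1"]}))  # True (deadlock)
print(has_cycle({"T1": ["T2"], "T2": [], "T3": ["T2"]}))      # False
```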

Journal ArticleDOI
TL;DR: This paper begins by discussing in a general setting the role of type abstraction and the need for formal specifications of type abstractions, and examines in some detail two approaches to the construction of such specifications: that proposed by Hoare in his 1972 paper "Proofs of Correctness of Data Representations," and the author's own version of algebraic specifications.
Abstract: This paper, which was initially prepared to accompany a series of lectures given at the 1978 NATO International Summer School on Program Construction, is primarily tutorial in nature. It begins by discussing in a general setting the role of type abstraction and the need for formal specifications of type abstractions. It then proceeds to examine in some detail two approaches to the construction of such specifications: that proposed by Hoare in his 1972 paper "Proofs of Correctness of Data Representations," and the author's own version of algebraic specifications. The Hoare approach is presented via a discussion of its embodiment in the programming language Euclid. The discussion of the algebraic approach includes material abstracted from earlier papers as well as some new material that has yet to appear. This new material deals with parameterized types and the specification of restrictions. The paper concludes with a brief discussion of the relative merits of the two approaches to type abstraction.

Journal ArticleDOI
TL;DR: A new method for analyzing complex queueing networks is proposed: the isolation method, which studies packet switching networks with finite buffer size at each node.
Abstract: In this paper a new method for analyzing complex queueing networks is proposed: the isolation method. As an example, we study packet switching networks with finite buffer size at each node.

Journal ArticleDOI
F.N. Parr1
TL;DR: A new model of the software development process is presented and used to derive the form of the resource consumption curve of a project over its life cycle, which relates the rate of progress which can be achieved in developing software to the structure of the system being developed.
Abstract: A new model of the software development process is presented and used to derive the form of the resource consumption curve of a project over its life cycle. The function obtained differs in detail from the Rayleigh curve previously used in fitting actual project data. The main advantage of the new model is that it relates the rate of progress which can be achieved in developing software to the structure of the system being developed. This leads to a more testable theory, and it also becomes possible to predict how the use of structured programming methods may alter patterns of life cycle resource consumption.

Journal ArticleDOI
A.D. Birrell1, R.M. Needham
TL;DR: The paper explores the design issues associated with such a file server and proposes some solutions.
Abstract: A file server is a utility provided in a computer connected via a local communications network to a number of other computers. File servers exist to preserve material for the benefit of client machines or systems. It is desirable for a file server to be able to support multiple file directory and access management systems, so that the designer of a client system retains the freedom to design the system that best suits him. For example, he may wish to use the file server to support a predefined directory structure or as a swapping disk. The paper explores the design issues associated with such a file server and proposes some solutions.

Journal ArticleDOI
TL;DR: The design of a secure file system based on user controlled cryptographic (UCC) transformations is investigated and several protection implementation schemes are suggested and analyzed according to criteria such as: security, efficiency, and user convenience.
Abstract: The design of a secure file system based on user controlled cryptographic (UCC) transformations is investigated. With UCC transformations, cryptography not only complements other protection mechanisms, but can also enforce protection specifications. Files with different access permissions are enciphered by different cryptographic keys supplied by authorized users at access time. Several classes of protection policies such as: compartmentalized, hierarchical, and data dependent are discussed. Several protection implementation schemes are suggested and analyzed according to criteria such as: security, efficiency, and user convenience. These schemes provide a versatile and powerful set of design alternatives.

Journal ArticleDOI
TL;DR: A hardware failure analysis technique adapted to software yielded three rules for generating test cases sensitive to code errors, and a procedure for generating these cases is given with examples.
Abstract: A hardware failure analysis technique adapted to software yielded three rules for generating test cases sensitive to code errors. These rules, and a procedure for generating these cases, are given with examples. Areas for further study are recommended.

Journal ArticleDOI
TL;DR: Algorithms are presented for detecting errors and anomalies in programs which use synchronization constructs to implement concurrency, and these algorithms employ data flow analysis techniques.
Abstract: Algorithms are presented for detecting errors and anomalies in programs which use synchronization constructs to implement concurrency. The algorithms employ data flow analysis techniques. First used in compiler object code optimization, the techniques have more recently been used in the detection of variable usage errors in single process programs. By adapting these existing algorithms, the same classes of variable usage errors can be detected in concurrent process programs. Important classes of errors unique to concurrent process programs are also described, and algorithms for their detection are presented.

Journal ArticleDOI
TL;DR: A complexity measure based on the number of crossings, or "knots," of arcs in a linearization of the flowgraph is discussed alongside McCabe's cyclomatic complexity and Halstead's software effort.
Abstract: In attempting to describe the quality of computer software, one of the more frequently mentioned measurable attributes is complexity of the flow of control. During the past several years, there have been many attempts to quantify this aspect of computer programs, approaching the problem from such diverse points of view as graph theory and software science. Most notable measures in these areas are McCabe's cyclomatic complexity and Halstead's software effort. More recently, Woodward et al. proposed a complexity measure based on the number of crossings, or "knots," of arcs in a linearization of the flowgraph.
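The knot count is easy to state: number the statements in textual order, draw each control transfer as an arc between its endpoints, and count the pairs of arcs that must cross in this linearization, i.e., arcs (a, b) and (c, d) with a < c < b < d. A sketch over an illustrative arc set:

```python
# Knot counting sketch: statements are numbered in textual order and each
# control transfer is an arc between two statement numbers. Two arcs "knot"
# when their endpoints interleave (a < c < b < d), forcing a crossing in any
# one-sided linear layout of the arcs. The arc sets below are illustrative.

def knot_count(arcs):
    """Count interleaving pairs among arcs given as (from, to) line numbers."""
    norm = [tuple(sorted(arc)) for arc in arcs]  # direction doesn't matter
    knots = 0
    for i, (a, b) in enumerate(norm):
        for c, d in norm[i + 1:]:
            if a < c < b < d or c < a < d < b:
                knots += 1
    return knots

print(knot_count([(1, 5), (3, 8)]))  # endpoints interleave: 1 knot
print(knot_count([(1, 5), (6, 8)]))  # disjoint arcs: 0 knots
print(knot_count([(1, 8), (3, 5)]))  # nested arcs: 0 knots
```

Well-structured code tends to produce only nested or disjoint arcs (zero knots), which is why the knot count was proposed as a structuredness-sensitive complement to the cyclomatic number.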

Journal ArticleDOI
TL;DR: This paper presents a method for synthesizing or growing live and safe marked graph models of decision-free concurrent computations, which is modular in the sense that subsystems represented by arcs (and nodes) are added one by one without the need of redesigning the entire system.
Abstract: This paper presents a method for synthesizing or growing live and safe marked graph models of decision-free concurrent computations. The approach is modular in the sense that subsystems represented by arcs (and nodes) are added one by one without the need of redesigning the entire system. The following properties of marked graph models can be prescribed in the synthesis: liveness (absence of deadlocks), safeness (absence of overflows), the number of reachability classes, the maximum resource (temporary storage) requirement, computation rate (performance), as well as the numbers of arcs and states.
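Liveness for marked graphs has a classical characterization: a marking is live iff every directed circuit carries at least one token, equivalently iff the subgraph of token-free arcs is acyclic. A sketch of that check, with an illustrative two-node graph:

```python
# Liveness check for a marked graph (classical result): a marking is live
# iff every directed circuit contains at least one token. Equivalently,
# delete all token-carrying arcs; the marking is live iff what remains is
# acyclic. Acyclicity is tested with Kahn's algorithm. Graphs are illustrative.

def is_live(nodes, arcs):
    """arcs: list of (src, dst, tokens). Live iff token-free subgraph is acyclic."""
    graph = {n: [] for n in nodes}
    for src, dst, tokens in arcs:
        if tokens == 0:
            graph[src].append(dst)   # keep only token-free arcs
    indeg = {n: 0 for n in nodes}
    for n in graph:
        for m in graph[n]:
            indeg[m] += 1
    queue = [n for n in nodes if indeg[n] == 0]
    drained = 0
    while queue:
        n = queue.pop()
        drained += 1
        for m in graph[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return drained == len(list(nodes))  # all nodes drain iff acyclic

print(is_live("ab", [("a", "b", 0), ("b", "a", 1)]))  # circuit holds a token: True
print(is_live("ab", [("a", "b", 0), ("b", "a", 0)]))  # token-free circuit: False
```

A modular synthesis procedure can preserve this invariant by ensuring that every new arc either carries a token or closes no token-free circuit.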

Journal ArticleDOI
TL;DR: Communication port is an encapsulation of two language properties: "communication non-determinism" and "communication disconnect time" that provides a tool for programmers to write well-structured, modular, and efficient concurrent programs.
Abstract: A new language concept, the communication port (CP), is introduced for programming on distributed processor networks. Such a network can contain an arbitrary number of processors each with its own private storage but with no memory sharing. The processors must communicate via explicit message passing. Communication port is an encapsulation of two language properties: "communication non-determinism" and "communication disconnect time." It provides a tool for programmers to write well-structured, modular, and efficient concurrent programs. A number of examples are given in the paper to demonstrate the power of the new concepts.

Journal ArticleDOI
TL;DR: Predicate/transition-nets, a first-order extension of Petri nets, are shown to provide suitable means for concise representation of complex decentralized systems and for their rigorous formal analysis.
Abstract: In this paper, a net model for decentralized control of user accesses to a distributed database is proposed. It is developed in detail for the restricted case of updating distributed copies of a single database. Predicate/transition-nets, a first-order extension of Petri nets, are shown to provide suitable means for concise representation of complex decentralized systems and for their rigorous formal analysis. It will be demonstrated in the present paper how these net models can be constructed and interpreted in a quite natural manner and how they can be analyzed by linear algebraic methods. By this, it will be shown that the modeled distributed database system is deadlock-free and guarantees a consistent database as well as a fair and effective service to the users.

Journal ArticleDOI
TL;DR: It is argued that in the distributed database environment, structured database decomposition is attractive both for efficiency and for database security considerations.
Abstract: We present a methodology for structured database decomposition based on the relational data model. It is argued that in the distributed database environment, structured database decomposition is attractive both for efficiency and for database security considerations. Techniques for parallel processing and hashed access of structurally decomposed database are presented. Techniques for structured database decomposition to support multiple user views are also described. Structured database decomposition is most advantageous in a query only database environment with stable user views, although dynamic updates can also be handled using techniques described in this paper.