Showing papers in "IEEE Transactions on Software Engineering in 1986"


Journal ArticleDOI
TL;DR: It is shown that extremely simple adaptive load sharing policies, which collect very small amounts of system state information and which use this information in very simple ways, yield dramatic performance improvements.
Abstract: Rather than proposing a specific load sharing policy for implementation, the authors address the more fundamental question of the appropriate level of complexity for load sharing policies. It is shown that extremely simple adaptive load sharing policies, which collect very small amounts of system state information and which use this information in very simple ways, yield dramatic performance improvements. These policies in fact yield performance close to that expected from more complex policies whose viability is questionable. It is concluded that simple policies offer the greatest promise in practice, because of their combination of nearly optimal performance and inherent stability.

1,041 citations
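
As a hedged illustration of what "extremely simple" can mean here, the sketch below implements a threshold policy of the general kind studied: keep a task locally while the local queue is short, otherwise probe a few randomly chosen nodes and transfer to the first whose queue is below the threshold. Names and parameter values are illustrative, not taken from the paper.

```python
import random

def place_task(queues, origin, threshold=2, probe_limit=3):
    """Sender-initiated threshold policy: keep the task locally unless the
    local queue is at or above `threshold`; otherwise probe a few random
    nodes and transfer to the first one below the threshold."""
    if queues[origin] < threshold:
        queues[origin] += 1
        return origin
    candidates = [n for n in range(len(queues)) if n != origin]
    for node in random.sample(candidates, min(probe_limit, len(candidates))):
        if queues[node] < threshold:      # only one queue length is inspected
            queues[node] += 1
            return node
    queues[origin] += 1                   # all probes failed; run locally
    return origin

# Example: ten nodes with random backlogs
queues = [random.randint(0, 4) for _ in range(10)]
print(place_task(queues, origin=0))
```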


Journal ArticleDOI
TL;DR: The author examines the process of object-oriented development as well as the influences upon this approach from advances in abstraction mechanisms, programming languages, and hardware.
Abstract: Object-oriented development is a partial-lifecycle software development method in which the decomposition of a system is based upon the concept of an object. This method is fundamentally different from traditional functional approaches to design and serves to help manage the complexity of massive software-intensive systems. The author examines the process of object-oriented development as well as the influences upon this approach from advances in abstraction mechanisms, programming languages, and hardware. The concept of an object is central to object-oriented development and so the properties of an object are discussed. The mapping of object-oriented techniques to Ada using a design case study is considered.

998 citations



Journal ArticleDOI
TL;DR: N-version programming has been proposed as a method of incorporating fault tolerance into software and it is revealed that the programs were individually extremely reliable but that the number of tests in which more than one program failed was substantially more than expected.
Abstract: N-version programming has been proposed as a method of incorporating fault tolerance into software. Multiple versions of a program (i.e. `N') are prepared and executed in parallel. Their outputs are collected and examined by a voter, and, if they are not identical, it is assumed that the majority is correct. This method depends for its reliability improvement on the assumption that programs that have been developed independently will fail independently. An experiment is described in which the fundamental axiom is tested. In all, 27 versions of a program were prepared independently from the same specification at two universities and then subjected to one million tests. The results of the tests revealed that the programs were individually extremely reliable but that the number of tests in which more than one program failed was substantially more than expected. The results of these tests are presented along with an analysis of some of the faults that were found in the programs. Background information on the programmers used is also summarized.

789 citations
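
A minimal sketch of the voting arrangement the abstract describes (not the experiment's actual test harness): run the versions on the same input and accept the majority output, if one exists.

```python
from collections import Counter

def majority_vote(outputs):
    """Return the majority output of the N versions, or None if no strict
    majority exists. Coincident failures of independently developed
    versions are exactly what defeats this voter."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) / 2 else None

# Three hypothetical versions of the same function; one is faulty
versions = [lambda x: x * x, lambda x: x ** 2, lambda x: x * x + 1]
print(majority_vote([v(3) for v in versions]))  # -> 9
```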


Journal ArticleDOI
Michael E. Fagan1
TL;DR: Studies and experiences are presented which enhance the use of the inspection process and improve its contribution to development of defect-free software on time and at lower cost.
Abstract: Software inspection is a method of static testing to verify that software meets its requirements. It engages the developers and others in a formal process of investigation that usually detects more defects in the product-and at lower cost-than does machine testing. Studies and experiences are presented which enhance the use of the inspection process and improve its contribution to development of defect-free software on time and at lower cost. Examples of benefits are cited followed by descriptions of the inspection process and some methods of obtaining the enhanced results. Users of the method report very significant improvements in quality that are accompanied by lower development costs and greatly reduced maintenance efforts. Excellent results have been obtained by small and large organizations in all aspects of new development as well as in maintenance. There is some evidence that developers who participate in the inspection of their own product actually create fewer defects in subsequent work. Because inspections formalize the development process, productivity-enhancing and quality-enhancing tools can be adopted more easily and rapidly.

735 citations


Journal ArticleDOI
TL;DR: The authors formalize the safety analysis of timing properties in real-time systems based on a formal logic, RTL (real-time logic), which is especially suitable for reasoning about the timing behavior of systems.
Abstract: The authors formalize the safety analysis of timing properties in real-time systems. The analysis is based on a formal logic, RTL (real-time logic), which is especially suitable for reasoning about the timing behavior of systems. Given the formal specification of a system and a safety assertion to be analyzed, the goal is to relate the safety assertion to the system specification. There are three distinct cases: (1) the safety assertion is a theorem derivable from the system specification; (2) the safety assertion is unsatisfiable with respect to the system specification; or (3) the negation of the safety assertion is satisfiable under certain conditions. A systematic method for performing safety analysis is presented.

684 citations
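
The abstract does not reproduce RTL's notation; as a hedged illustration, RTL is commonly presented with an occurrence function @(e, i) giving the time of the i-th occurrence of event e, so a deadline-style safety assertion might be written:

```latex
% Every i-th response follows the i-th request within a constant deadline d:
\forall i \;.\; @(\mathit{RESPONSE}, i) - @(\mathit{REQUEST}, i) \leq d
```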


Journal ArticleDOI
Rob Strom1, S. Yemini1
TL;DR: The authors introduce a new programming language concept, called typestate, a refinement of the concept of type that determines the subset of operations permitted in a particular context.
Abstract: The authors introduce a new programming language concept, called typestate, which is a refinement of the concept of type. Whereas the type of a data object determines the set of operations permitted on the object, typestate determines the subset of these operations which is permitted in a particular context. Typestate tracking is a program analysis technique which enhances program reliability by detecting at compile-time syntactically legal but semantically undefined execution sequences. These include reading a variable before it has been initialized and dereferencing a pointer after the dynamic object has been deallocated. The authors define typestate, give examples of its application, and show how typestate checking may be embedded into a compiler. They discuss the consequences of typestate checking for software reliability and software structure, and summarize their experience in using a high-level language incorporating typestate checking.

581 citations
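
A toy sketch of typestate tracking for the read-before-initialization case mentioned above (illustrative only; real typestate checking operates over all control-flow paths at compile time):

```python
def check_typestate(program):
    """Flag uses of variables before initialization in a straight-line
    program given as a list of ('assign', var) / ('use', var) operations.
    Each variable's typestate moves from 'uninit' to 'init' on assignment."""
    state = {}                                # var -> 'uninit' or 'init'
    errors = []
    for op, var in program:
        if op == 'assign':
            state[var] = 'init'
        elif op == 'use' and state.get(var, 'uninit') != 'init':
            errors.append(f"variable {var!r} used before initialization")
    return errors

print(check_typestate([('use', 'x'), ('assign', 'x'), ('use', 'x')]))
```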


Journal ArticleDOI
TL;DR: A framework is presented for analyzing most of the experimental work performed in software engineering over the past several years, corresponding to phases of the experimentation process: definition, planning, operation, and interpretation.
Abstract: A framework is presented for analyzing most of the experimental work performed in software engineering over the past several years. The framework of experimentation consists of four categories corresponding to phases of the experimentation process: definition, planning, operation, and interpretation. A variety of experiments are described within the framework and their contribution to the software engineering discipline is discussed. Some recommendations for the application of the experimental process in software engineering are included.

572 citations


Journal ArticleDOI
TL;DR: In this article, it is proposed that, although designing a real product in that way will not be successful, it is possible to produce documentation that makes it appear that the software was designed by such a process.
Abstract: Many have sought a software design process that allows a program to be derived systematically from a precise statement of requirements. It is proposed that, although designing a real product in that way will not be successful, it is possible to produce documentation that makes it appear that the software was designed by such a process. The ideal process and the documentation that it requires are described. The authors explain why one should attempt to design according to the ideal process and why one should produce the documentation that would have been produced by that process. The contents of each of the required documents are outlined.

411 citations


Journal ArticleDOI
TL;DR: Gandalf environments integrate programming and system development, permitting interactions not available in traditional environments, and the structure and function of several existing environments are covered.
Abstract: Different programming projects require different environments, but handcrafting a separate environment for each project is not economically feasible. Gandalf solves this problem by permitting environment designers to generate families of software development environments semiautomatically without excessive cost. Environments generated using Gandalf address programming environments, which help ease the programming process, as well as system development environments, which reduce the degree to which a software project is dependent on the good will of its members. Gandalf environments integrate programming and system development, permitting interactions not available in traditional environments. The paper covers the basic characteristics of Gandalf environments, the method used to generate these environments, the structure and function of several existing environments, and ongoing research on the project.

331 citations


Journal ArticleDOI
TL;DR: Real-Time Euclid uses exception handlers and import/export lists to provide comprehensive error detection, isolation, and recovery and is felt to be well-suited for writing reliable real-time software.
Abstract: Real-Time Euclid, a language designed specifically to address reliability and guaranteed schedulability issues in real-time systems, is introduced. Real-Time Euclid uses exception handlers and import/export lists to provide comprehensive error detection, isolation, and recovery. The philosophy of the language is that every exception detectable by the hardware or the software must have an exception-handler clause associated with it. Moreover, the language definition forces every construct in the language to be time- and space-bounded. Consequently, Real-Time Euclid programs can always be analyzed for guaranteed schedulability of their processes. Thus, it is felt that Real-Time Euclid is well-suited for writing reliable real-time software.

Journal ArticleDOI
TL;DR: Some techniques are presented which form the basis of a partial solution to the problem of knowing which, if any, of the competing predictions are trustworthy in a reliability growth context.
Abstract: Different software reliability models can produce very different answers when called on to predict future reliability in a reliability growth context. Users need to know which, if any, of the competing predictions are trustworthy. Some techniques are presented which form the basis of a partial solution to this problem. Rather than attempting to decide which model is generally best, the approach adopted allows a user to decide on the most appropriate model for each application.
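
The techniques themselves are not spelled out in the abstract; one standard device in this literature for deciding whether a model's predictions can be trusted is to check the calibration of its predicted interfailure-time distributions. A sketch under that assumption (function names hypothetical):

```python
import math

def u_plot_distance(predicted_cdfs, observed_times):
    """For each failure, transform the observed interfailure time through
    the CDF the model predicted for it; if the predictions are well
    calibrated, the resulting u values should look uniform on [0, 1].
    Returns the Kolmogorov distance from uniformity as a rough score."""
    us = sorted(F(t) for F, t in zip(predicted_cdfs, observed_times))
    n = len(us)
    return max(max(abs(u - i / n), abs(u - (i + 1) / n))
               for i, u in enumerate(us))

# Toy example: the model predicted a unit-rate exponential at every step
cdfs = [lambda t: 1 - math.exp(-t)] * 5
print(u_plot_distance(cdfs, [0.2, 1.1, 0.4, 2.0, 0.9]))
```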

Journal ArticleDOI
TL;DR: An extension of the data flow diagram called the transformation schema is described, which provides a notation and formation rules for building a comprehensive system model, and a set of execution rules to allow prediction of the behavior over time of a system modeled in this way.
Abstract: The data flow diagram has been extensively used to model the data transformation aspects of proposed systems. However, previous definitions of the data flow diagram have not provided a comprehensive way to represent the interaction between the timing and control aspects of a system and its data transformation behavior. An extension of the data flow diagram called the transformation schema is described. This schema provides a notation and formation rules for building a comprehensive system model, and a set of execution rules to allow prediction of the behavior over time of a system modeled in this way. The notation and formation rules allow depiction of a system as a network of potentially concurrent `centers of activity' (transformations), and of data repositories (stores), linked by communication paths (flows). The execution rules provide a qualitative prediction rather than a quantitative one, describing the acceptance of inputs and the production of outputs by the transformations but not input and output values.

Journal ArticleDOI
TL;DR: A methodology for the rapid prototyping of process control systems which is based on an original extension to classical Petri nets, and these nets are shown to be translatable into Ada program structures concerning concurrent processes and their synchronizations.
Abstract: A methodology for the rapid prototyping of process control systems which is based on an original extension to classical Petri nets is presented. The proposed nets, called PROT nets, provide a suitable framework to support the following activities: building an operational specification model; evaluation, simulation, and validation of the model; and automatic translation into program structures. PROT nets are shown to be translatable into Ada program structures concerning concurrent processes and their synchronizations. The authors illustrate this translation in detail using, as a working example, the problem of tool handling in a flexible manufacturing system.
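
PROT nets' extensions are not reproduced in the abstract; the sketch below shows only the classical Petri-net firing rule on which they build, with a toy tool-handling marking (all names hypothetical):

```python
def fire(marking, transition):
    """Fire one Petri-net transition if it is enabled. `transition` is a
    pair (inputs, outputs) of place names; `marking` maps place -> tokens.
    A transition is enabled when every input place holds a token."""
    inputs, outputs = transition
    if all(marking.get(p, 0) > 0 for p in inputs):
        for p in inputs:
            marking[p] -= 1
        for p in outputs:
            marking[p] = marking.get(p, 0) + 1
        return True
    return False

# Tool-handling toy: a tool moves from the store onto a free machine
marking = {'tool_in_store': 1, 'machine_free': 1}
load_tool = (('tool_in_store', 'machine_free'), ('tool_on_machine',))
print(fire(marking, load_tool), marking)
```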

Journal ArticleDOI
TL;DR: In this paper, a framework for the provision of fault tolerance in asynchronous systems is introduced, which generalizes the form of simple recovery facilities supported by nested atomic actions in which the exception mechanisms only permit backward error recovery.
Abstract: A framework for the provision of fault tolerance in asynchronous systems is introduced. The proposal generalizes the form of simple recovery facilities supported by nested atomic actions in which the exception mechanisms only permit backward error recovery. It allows the construction of systems using both forward and backward error recovery and thus allows the exploitation of the complementary benefits of the two schemes. Backward recovery, forward recovery, and normal processing activities can occur concurrently within the organization proposed. Exception handling is generalized to provide a uniform basis for fault tolerance schemes within the atomic action structure. The generalization includes a resolution scheme for concurrently raised exceptions based on an exception tree and an abortion scheme that permits the termination of the internal atomic actions. An automatic resolution mechanism is outlined for exceptions in atomic actions which allows users to separate their recovery schemes from the details of the underlying algorithms.
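
As a hedged sketch of the exception-tree idea (the paper's actual resolution mechanism is richer), concurrently raised exceptions can be resolved to their nearest common ancestor in a tree of exception types:

```python
def resolve(exceptions, parent):
    """Resolve concurrently raised exceptions to the nearest (deepest)
    common ancestor in an exception tree given as a child -> parent map."""
    def ancestors(e):
        chain = [e]
        while e in parent:
            e = parent[e]
            chain.append(e)
        return chain
    common = set(ancestors(exceptions[0]))
    for e in exceptions[1:]:
        common &= set(ancestors(e))
    # the deepest common ancestor has the longest chain up to the root
    return max(common, key=lambda e: len(ancestors(e)))

tree = {'disk_error': 'io_error', 'net_error': 'io_error', 'io_error': 'failure'}
print(resolve(['disk_error', 'net_error'], tree))   # -> io_error
```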

Journal ArticleDOI
TL;DR: Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables.
Abstract: Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto logarithmic, and power law models are all special cases of exponential order statistic models, but there are many additional examples as well. Various characterizations, properties, and examples of this class of models are developed and presented.
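
A small illustration of the model class (rates hypothetical): draw one independent exponential detection time per fault, each with its own rate, and take the sorted values as the failure times of the growth process.

```python
import random

def eos_failure_times(rates, seed=None):
    """Exponential order statistic model: each fault i has an independent
    exponential detection time with its own rate; the observed failure
    times of the growth process are these values in sorted order."""
    rng = random.Random(seed)
    return sorted(rng.expovariate(r) for r in rates)

# Equal rates give the Jelinski-Moranda special case
print(eos_failure_times([0.5] * 5, seed=1))
# Unequal rates give other members of the class
print(eos_failure_times([2.0, 1.0, 0.5, 0.25, 0.125], seed=1))
```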

Journal ArticleDOI
P. A. Currit1, M. Dyer1, Harlan D. Mills1
TL;DR: A description is given of a procedure for certifying the reliability of software before its release to users, which includes a life cycle of executable product increments, representative statistical testing, and a standard estimate of the MTTF of the product at the time of its release.
Abstract: A description is given of a procedure for certifying the reliability of software before its release to users. The ingredients of this procedure are a life cycle of executable product increments, representative statistical testing, and a standard estimate of the MTTF (mean time to failure) of the product at the time of its release. The authors also discuss the development of certified software products and the derivation of a statistical model used for reliability projection. Available software test data are used to demonstrate the application of the model in the certification process.
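
The paper derives its own statistical model; purely as a toy illustration of the quantity being certified, an exponential assumption gives the familiar point estimate of MTTF from representative statistical testing:

```python
def mttf_estimate(execution_hours, failures):
    """Crude point estimate of MTTF from representative statistical
    testing, assuming exponentially distributed interfailure times.
    (Illustrative only; the paper derives a reliability-growth model.)"""
    if failures == 0:
        raise ValueError("no failures observed; report a confidence bound instead")
    return execution_hours / failures

print(mttf_estimate(execution_hours=500.0, failures=4))   # -> 125.0 hours
```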

Journal ArticleDOI
TL;DR: A general axiomatic theory of test data adequacy is developed, and five previously proposed adequacy criteria are examined to see which of the axioms are satisfied.
Abstract: A test data adequacy criterion is a set of rules used to determine whether or not sufficient testing has been performed. A general axiomatic theory of test data adequacy is developed, and five previously proposed adequacy criteria are examined to see which of the axioms are satisfied. It is shown that the axioms are consistent, but that only two of the criteria satisfy all of the axioms.
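
As a hedged sketch of the objects involved, an adequacy criterion can be phrased as a predicate on test sets; the stand-in criterion below (statement coverage, not one of the five criteria examined) also illustrates one plausible axiom, monotonicity: any superset of an adequate test set remains adequate.

```python
def adequate(coverage, tests):
    """Stand-in adequacy criterion (statement coverage): the chosen tests
    together exercise every statement. `coverage` maps each test to the
    set of statements it executes."""
    covered = set().union(*(coverage[t] for t in tests))
    return covered == set().union(*coverage.values())

coverage = {1: {'s1', 's2'}, 2: {'s3'}, 3: {'s1'}}
assert adequate(coverage, [1, 2])        # adequate
assert adequate(coverage, [1, 2, 3])     # monotonicity holds on this example
```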

Journal ArticleDOI
TL;DR: A layout algorithm is presented that allows the automatic drawing of data flow diagrams, a diagrammatic representation widely used in the functional analysis of information systems.
Abstract: A layout algorithm is presented that allows the automatic drawing of data flow diagrams, a diagrammatic representation widely used in the functional analysis of information systems. A grid standard is defined for such diagrams, and aesthetics for good readability are identified. The layout algorithm receives as input an abstract graph specifying connectivity relations between the elements of the diagram, and produces as output a corresponding diagram according to the aesthetics. The basic strategy is to build the layout incrementally: first, a good topology is constructed with few crossings between edges; subsequently, the shape of the diagram is determined in terms of the angles appearing along edges; finally, dimensions are given to the graph, yielding a grid skeleton for the diagram.

Journal ArticleDOI
TL;DR: The Jackson System Development method addresses most of the software lifecycle through a distributed network of processes that communicate by message-passing and read-only inspection of each other's data.
Abstract: The Jackson System Development (JSD) method addresses most of the software lifecycle. JSD specifications consist mainly of a distributed network of processes that communicate by message-passing and read-only inspection of each other's data. A JSD specification is therefore directly executable, at least in principle. Specifications are developed middle-out from an initial set of `model' processes. The model processes define a set of events, which limit the scope of the system, define its semantics, and form the basis for defining data and outputs. Implementation often involves reconfiguring or transforming the network to run on a smaller number of real or virtual processors. The main phases of JSD are introduced and illustrated by a small example system. The rationale for the approach is discussed.

Journal ArticleDOI
K. J. Perry1, S. Toueg2
TL;DR: The algorithm exhibits early stopping under conditions of less than maximum failure and is as efficient as the algorithm developed for the more restrictive crash-fault model in terms of time, message, and bit complexity.
Abstract: A model of distributed computation is proposed in which processes may fail by not sending or receiving the messages specified by a protocol. The solution to the Byzantine generals problem for this model is presented. The algorithm exhibits early stopping under conditions of less than maximum failure and is as efficient as the algorithm developed for the more restrictive crash-fault model in terms of time, message, and bit complexity. The authors show that extant models underestimate resiliency when faults in the communication medium are considered; the model outlined here is more accurate in this regard.

Journal ArticleDOI
TL;DR: An approximation procedure is developed for the analysis of tandem configurations consisting of single server finite queues linked in series and gives results in the form of the marginal probability distribution of the number of units in each queue of the tandem configuration.
Abstract: An approximation procedure is developed for the analysis of tandem configurations consisting of single server finite queues linked in series. External arrivals occur at the first queue which may be either finite or infinite. Departures from the queuing network may only occur from the last queue. All service times and interarrival times are assumed to be exponentially distributed. The approximation algorithm gives results in the form of the marginal probability distribution of the number of units in each queue of the tandem configuration. Other performance measures, such as mean queue-length and throughput, can be readily obtained. The approximation procedure was validated using exact and simulation data. The approximate results seem to have an acceptable error level.
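
The approximation procedure itself is not given in the abstract; the sketch below instead shows the kind of simulation such procedures are validated against, estimating the marginal queue-length distributions of exponential tandem finite queues (all parameters hypothetical):

```python
import random

def simulate_tandem(lam, mus, caps, horizon=20000.0, seed=0):
    """Simulate single-server finite queues in series with exponential
    arrivals (rate lam) and services (rates mus). A service completion at
    station i is only enabled when the next queue has room (a simple form
    of blocking). Returns time-averaged marginal queue-length distributions."""
    rng = random.Random(seed)
    n = [0] * len(mus)                      # units at each station
    dist = [dict() for _ in mus]
    t = 0.0
    while t < horizon:
        events = []
        if n[0] < caps[0]:
            events.append((lam, 0))         # external arrival to queue 0
        for i, mu in enumerate(mus):
            last = (i == len(mus) - 1)
            if n[i] > 0 and (last or n[i + 1] < caps[i + 1]):
                events.append((mu, i + 1))  # completion at station i
        total = sum(rate for rate, _ in events)
        dt = rng.expovariate(total)
        for i, q in enumerate(n):           # accumulate time in current state
            dist[i][q] = dist[i].get(q, 0.0) + dt
        t += dt
        x, acc, chosen = rng.uniform(0.0, total), 0.0, events[-1][1]
        for rate, event in events:          # pick the event by its rate
            acc += rate
            if x <= acc:
                chosen = event
                break
        if chosen == 0:
            n[0] += 1
        else:
            n[chosen - 1] -= 1
            if chosen < len(mus):
                n[chosen] += 1              # move downstream; else depart
    return [{k: v / t for k, v in d.items()} for d in dist]

print(simulate_tandem(lam=1.0, mus=[1.5, 1.2], caps=[5, 5]))
```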

Journal ArticleDOI
TL;DR: Graph theory techniques are used to systematically generate file spanning trees that provide all the required connections and can be used in a dynamic environment for efficient reliability evaluation.
Abstract: The reliability of distributed processing systems can be expressed in terms of the reliability of the processing elements that run the programs, the reliability of the processing elements holding the required files, and the reliability of the communication links used in file transfers. The authors introduce two reliability measures, namely distributed program reliability and distributed system reliability, to accurately model the reliability of distributed systems. The first measure describes the probability of successful execution of a distributed program which runs on some processing elements and needs to communicate with other processing elements for remote files, while the second measure describes the probability that all the programs of a given set can run successfully. The notion of minimal file spanning trees is introduced to efficiently evaluate these reliability measures. Graph theory techniques are used to systematically generate file spanning trees that provide all the required connections. The technique is general and can be used in a dynamic environment for efficient reliability evaluation.
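
A brute-force sketch of the first quantity defined above, distributed program reliability, computed by enumerating link states rather than by the paper's file spanning tree technique (names and probabilities hypothetical):

```python
from itertools import product

def distributed_program_reliability(links, root, files, p_link):
    """Probability that `root` can reach a holder of every needed file
    over surviving links. `links` are undirected (u, v) pairs, each alive
    independently with probability p_link; `files` maps file -> set of
    processing elements holding a copy."""
    total = 0.0
    for states in product([True, False], repeat=len(links)):
        up = [l for l, s in zip(links, states) if s]
        prob = 1.0
        for s in states:
            prob *= p_link if s else 1 - p_link
        reached, frontier = {root}, [root]
        while frontier:                      # flood fill over live links
            u = frontier.pop()
            for a, b in up:
                for x, y in ((a, b), (b, a)):
                    if x == u and y not in reached:
                        reached.add(y)
                        frontier.append(y)
        if all(reached & holders for holders in files.values()):
            total += prob
    return total

links = [('p1', 'p2'), ('p2', 'p3'), ('p1', 'p3')]
files = {'f1': {'p2'}, 'f2': {'p3'}}
print(distributed_program_reliability(links, 'p1', files, p_link=0.9))
```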

Journal ArticleDOI
Pamela Zave1, W. Schell1
TL;DR: It is shown that PAISLey is unusual in having the following desirable features: there is both synchronous and asynchronous parallelism free of mutual-exclusion problems, all computations are encapsulated, and a notable degree of simplicity is maintained.
Abstract: The executable specification language PAISLey and its environment are presented as a case study in the design of computer languages. It is shown that PAISLey is unusual (and for some features unique) in having the following desirable features: (1) there is both synchronous and asynchronous parallelism free of mutual-exclusion problems, (2) all computations are encapsulated, (3) specifications in the language can be executed no matter how incomplete they are, (4) timing constraints are executable, (5) specifications are organized so that bounded resource consumption can be guaranteed, (6) almost all forms of inconsistency can be detected by automated checking, and (7) a notable degree of simplicity is maintained. Conclusions are drawn concerning the differences between executable specification languages and programming languages, and potential uses for PAISLey are given.

Journal ArticleDOI
TL;DR: A review of the Project on Diverse Software (PODS), a collaborative software reliability research project, is presented to evaluate the merits of using diverse software, evaluate the specification language X-SPEX, and compare the productivity and reliability associated with high-level and low-level languages.
Abstract: A review of the Project on Diverse Software (PODS), a collaborative software reliability research project, is presented. The purpose of the project was to determine the effect of a number of different software development techniques on software reliability. The main objectives were to evaluate the merits of using diverse software, evaluate the specification language X-SPEX, and compare the productivity and reliability associated with high-level and low-level languages. A secondary objective was to monitor the software development process, with particular reference to the creation and detection of software faults. To achieve these objectives, an experiment was performed which simulated a normal software development process to produce three diverse programs to the same requirement. The requirement was for a reactor over-power protection (trip) system. After careful independent development and testing, the three programs were tested against each other in a special test harness to locate residual faults. The conclusions drawn from this project are discussed.

Journal ArticleDOI
TL;DR: In ARES, relational operations have been functionally augmented with an additional comparison operator that implies `approximately equal to' or `similar to' for cases in which the user expects the system to perform a flexible interpretation of the query conditions.
Abstract: In ARES, relational operations have been functionally augmented with an additional comparison operator. This operator implies `approximately equal to' or `similar to' for cases in which the user expects the system to perform a flexible interpretation of the query conditions. The functional augmentation is simply achieved by a combination of conventional relational operations. ARES is now in actual operation in a research environment, and will contribute to the next step of research toward implementation of highly intelligent data processing facilities beyond the present scope of database technology.
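
A hypothetical sketch of an "approximately equal to" selection built, as the abstract says, from a combination of conventional operations: look similarity up in an auxiliary relation and keep tuples scoring above a threshold.

```python
def select_approx(rows, column, target, similarity, threshold):
    """Approximate selection: keep rows whose `column` value is similar
    enough to `target`, where `similarity` plays the role of a
    user-supplied similarity relation (here just a lookup table)."""
    return [r for r in rows
            if similarity.get((r[column], target), 0.0) >= threshold]

cities = [{'name': 'Koeln'}, {'name': 'Cologne'}, {'name': 'Munich'}]
sim = {('Koeln', 'Cologne'): 0.9, ('Cologne', 'Cologne'): 1.0}
print(select_approx(cities, 'name', 'Cologne', sim, threshold=0.8))
```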

Journal ArticleDOI
TL;DR: Results of an empirical study of software design practices in one specific environment show that some recommended design practices, despite their intuitive appeal, are ineffective in this environment, whereas others are very effective.
Abstract: Results of an empirical study of software design practices in one specific environment are reported. The practices examined affect module size, module strength, data coupling, descendant span, unreferenced variables, and software reuse. Measures characteristic of these practices were extracted from 887 Fortran modules developed for five flight dynamics software projects monitored by the Software Engineering Laboratory. The relationship of these measures to cost and fault rate was analyzed using a contingency table procedure. The results show that some recommended design practices, despite their intuitive appeal, are ineffective in this environment, whereas others are very effective.

Journal ArticleDOI
TL;DR: It is argued that functional programs combine the clarity required for the formal specification of software designs with the ability to validate the design by execution, making them ideal for rapidly prototyping a design as it is developed.
Abstract: Functional programming has enormous potential for reducing the high cost of software development. Because of the simple mathematical basis of functional programming, it is easier to design correct programs in a purely functional style than in a traditional imperative style. It is argued that functional programs combine the clarity required for the formal specification of software designs with the ability to validate the design by execution. As such they are ideal for rapidly prototyping a design as it is developed. An example is presented which is larger than those traditionally used to explain functional programming. This example is used to illustrate a method of software design which efficiently and reliably turns an informal description of requirements into an executable formal specification.

Journal ArticleDOI
TL;DR: User software engineering (USE) is a methodology supported by automated tools for the systematic development of interactive information systems. The USE methodology gives particular attention to effective user involvement in the early stages of the software development process, concentrating on external design and the use of rapidly created and modified prototypes of the user interface, as discussed by the authors.
Abstract: User software engineering (USE) is a methodology, supported by automated tools, for the systematic development of interactive information systems. The USE methodology gives particular attention to effective user involvement in the early stages of the software development process, concentrating on external design and the use of rapidly created and modified prototypes of the user interface. The USE methodology is supported by an integrated set of graphically based tools. The USE methodology and the tools that support it are described.

Journal ArticleDOI
TL;DR: Efficient algorithms based on shortest-path methods are presented to determine the optimum assignment on a distributed system containing N heterogeneous processors.
Abstract: The problem of assigning the modules of a distributed program to the processors of a distributed system is addressed. The goal of such an assignment is to minimize the total execution and communication costs. A computational model of a distributed program containing probabilistic branches and loops is described by a directed graph whose edges represent precedence relations between modules. Efficient algorithms based on shortest-path methods are presented to determine the optimum assignment on a distributed system containing N heterogeneous processors.
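
For the special case of a simple chain of modules, where the shortest-path view is easiest to see (the paper's algorithms also handle probabilistic branches and loops), a dynamic-programming sketch with hypothetical costs:

```python
def optimal_chain_assignment(exec_cost, comm_cost):
    """Assign a chain of modules to processors by dynamic programming,
    equivalently a shortest path through a layered graph with one node
    per (module, processor) pair. exec_cost[i][p] is module i's cost on
    processor p; comm_cost[p][q] is the cost of a module boundary whose
    neighboring modules sit on processors p and q."""
    n_proc = len(exec_cost[0])
    best = list(exec_cost[0])            # cheapest way to put module 0 on p
    choice = []
    for costs in exec_cost[1:]:
        prev = [min(range(n_proc), key=lambda q: best[q] + comm_cost[q][p])
                for p in range(n_proc)]
        best = [best[prev[p]] + comm_cost[prev[p]][p] + costs[p]
                for p in range(n_proc)]
        choice.append(prev)
    # Recover the assignment by walking the recorded choices backwards
    p = min(range(n_proc), key=lambda q: best[q])
    assignment = [p]
    for prev in reversed(choice):
        p = prev[p]
        assignment.append(p)
    return best[assignment[0]], assignment[::-1]

# Two processors, three modules; crossing a processor boundary costs 2
exec_cost = [[1, 4], [5, 1], [1, 4]]
comm = [[0, 2], [2, 0]]
print(optimal_chain_assignment(exec_cost, comm))   # -> (7, [0, 0, 0])
```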