Showing papers in "IEEE Transactions on Software Engineering in 1991"


Journal ArticleDOI
TL;DR: An enumerative method is proposed to exhaustively validate the behavior of Merlin's time Petri net model, and it is applied to the specification and verification of the alternating bit protocol as a simple illustrative example.
Abstract: A description and analysis of concurrent systems, such as communication systems, whose behavior is dependent on explicit values of time is presented. An enumerative method is proposed to exhaustively validate the behavior of P. Merlin's time Petri net model (1974). This method allows formal verification of time-dependent systems. It is applied to the specification and verification of the alternating bit protocol as a simple illustrative example.

1,129 citations


Journal ArticleDOI
TL;DR: The authors' experience implementing and evaluating several protocols in the x-Kernel shows that this architecture is general enough to accommodate a wide range of protocols, yet efficient enough to perform competitively with less-structured operating systems.
Abstract: A description is given of an operating system kernel, called the x-Kernel, that provides an explicit architecture for constructing and composing network protocols. The authors' experience implementing and evaluating several protocols in the x-Kernel shows that this architecture is general enough to accommodate a wide range of protocols, yet efficient enough to perform competitively with less-structured operating systems. Experimental results demonstrating the architecture's generality and efficiency are provided. The explicit structure provided by the x-Kernel has the following advantages. First, the architecture simplifies the process of implementing protocols in the kernel, making it easier to build and test novel protocols. Second, the uniformity of the interface between protocols avoids the significant cost of changing abstractions and makes protocol performance predictable. Third, it is possible to write efficient protocols by tuning the underlying architecture rather than heavily optimizing protocols themselves.

853 citations


Journal ArticleDOI
TL;DR: Godzilla is a set of tools implementing a fault-based technique that uses algebraic constraints to describe test cases designed to find particular types of faults; it has been integrated with the Mothra testing system.
Abstract: A novel technique for automatically generating test data is presented. The technique is based on mutation analysis and creates test data that approximate relative adequacy. It is a fault-based technique that uses algebraic constraints to describe test cases designed to find particular types of faults. A set of tools (collectively called Godzilla) that automatically generates constraints and solves them to create test cases for unit and module testing has been implemented. Godzilla has been integrated with the Mothra testing system and has been used as an effective way to generate test data that kill program mutants. The authors present an initial list of constraints and discuss some of the problems that have been solved to develop the complete implementation of the technique.

824 citations
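
The constraint-based idea above can be illustrated with a toy example. The sketch below is not Godzilla or Mothra: the example program, the operator-replacement mutant, and the randomized constraint search are all assumptions chosen to show how an algebraic constraint (here, y != 0) characterizes test data that kill a particular mutant.

```python
import random

# Toy illustration of fault-based (constraint-driven) test-data generation in the
# spirit described above; the program, the mutant, and the solver are hypothetical
# and far simpler than Godzilla's actual constraint machinery.

def original(x, y):
    return x + y          # program under test

def mutant(x, y):
    return x - y          # arithmetic-operator-replacement mutant

# A test case kills this mutant only if the two expressions differ,
# which yields the algebraic "necessity" constraint: y != 0.
def necessity_constraint(x, y):
    return y != 0

def generate_killing_test(max_tries=1000):
    """Randomly search the input domain for a point satisfying the constraint."""
    for _ in range(max_tries):
        x, y = random.randint(-100, 100), random.randint(-100, 100)
        if necessity_constraint(x, y):
            return x, y
    return None

if __name__ == "__main__":
    test = generate_killing_test()
    if test is not None:
        x, y = test
        assert original(x, y) != mutant(x, y), "test case should kill the mutant"
        print("killing test case:", test)
```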


Journal ArticleDOI
TL;DR: Using the lattice of single variable decomposition slices ordered by set inclusion, it is shown how a slice-based decomposition for programs can be formed and how to delineate the effects of a proposed change by isolating those effects in a single component of the decomposition.
Abstract: Program slicing is applied to the software maintenance problem by extending the notion of a program slice (that originally required both a variable and line number) to a decomposition slice, one that captures all computation on a given variable, i.e., is independent of line numbers. Using the lattice of single variable decomposition slices ordered by set inclusion, it is shown how a slice-based decomposition for programs can be formed. One can then delineate the effects of a proposed change by isolating those effects in a single component of the decomposition. This gives maintainers a straightforward technique for determining those statements and variables which may be modified in a component and those which may not. Using the decomposition, a set of principles to prohibit changes which will interfere with unmodified components is provided. These semantically consistent changes can then be merged back into the original program in linear time.

706 citations


Journal ArticleDOI
TL;DR: A technology for automatically assembling large software libraries which promote software reuse by helping the user locate the components closest to her/his needs is described.
Abstract: A technology for automatically assembling large software libraries which promote software reuse by helping the user locate the components closest to her/his needs is described. Software libraries are automatically assembled from a set of unorganized components by using information retrieval techniques. The construction of the library is done in two steps. First, attributes are automatically extracted from natural language documentation by using an indexing scheme based on the notions of lexical affinities and quantity of information. Then a hierarchy for browsing is automatically generated using a clustering technique which draws only on the information provided by the attributes. Due to the free-text indexing scheme, tools following this approach can accept free-style natural language queries.

475 citations


Journal ArticleDOI
TL;DR: The effects of subdomain modifications on partition testing's ability to detect faults are studied, and comparisons of the fault detection capabilities of partition testing and random testing are made.
Abstract: Partition testing strategies, which divide a program's input domain into subsets with the tester selecting one or more elements from each subdomain, are analyzed. The conditions that affect the efficiency of partition testing are investigated, and comparisons of the fault detection capabilities of partition testing and random testing are made. The effects of subdomain modifications on partition testing's ability to detect faults are studied.

379 citations
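
To make the comparison concrete, the following sketch evaluates detection probabilities under a simple assumed model: the subdomain sizes and per-subdomain failure rates are invented, partition testing draws one test per subdomain, and random testing gets the same total budget. It illustrates the kind of analysis described above, not the paper's exact formulation.

```python
# Toy comparison of partition vs. random testing under a simple probabilistic
# model (assumed for illustration; not the paper's exact analysis).
# Each subdomain i has relative size p[i] and failure rate theta[i]
# (fraction of failure-causing inputs within that subdomain).

p     = [0.50, 0.30, 0.15, 0.05]   # subdomain sizes (sum to 1) -- invented
theta = [0.00, 0.00, 0.00, 0.20]   # failure rates per subdomain -- invented

n_per_subdomain = 1
n_total = n_per_subdomain * len(p)  # give random testing the same budget

# P(at least one failure found) when one test is drawn from each subdomain:
prob_partition = 1.0
for t in theta:
    prob_partition *= (1.0 - t) ** n_per_subdomain
prob_partition = 1.0 - prob_partition

# Overall failure rate seen by uniform random testing over the whole domain:
overall_theta = sum(pi * ti for pi, ti in zip(p, theta))
prob_random = 1.0 - (1.0 - overall_theta) ** n_total

print(f"partition testing: P(detect) = {prob_partition:.3f}")
print(f"random testing:    P(detect) = {prob_random:.3f}")
```

In this toy model, concentrating the failure-causing inputs in one small subdomain favors partition testing, while spreading them evenly makes the two strategies nearly indistinguishable; this is the kind of subdomain effect the study examines.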


Journal ArticleDOI
TL;DR: A high-level Petri net formalism, environment/relationship (ER) nets, which can be used to specify control, function, and timing issues, is introduced, and it is shown how time can be modeled via ER nets by providing a suitable axiomatization.
Abstract: The authors introduce a high-level Petri net formalism, environment/relationship (ER) nets, which can be used to specify control, function, and timing issues. In particular, they discuss how time can be modeled via ER nets by providing a suitable axiomatization. They use ER nets to define a time notation that is shown to generalize most time Petri-net-based formalisms that have appeared in the literature. They discuss how ER nets can be used in a specification support environment for a time-critical system and, in particular, the kind of analysis supported.

356 citations


Journal ArticleDOI
TL;DR: In this paper, a method for the selection of appropriate test case, an important issue for conformance testing of protocol implementations as well as software engineering, is presented, called the partial W-method, which is shown to have general applicability, full fault-detection power, and yields shorter test suites than the W-Method.
Abstract: A method for the selection of appropriate test case, an important issue for conformance testing of protocol implementations as well as software engineering, is presented. Called the partial W-method, it is shown to have general applicability, full fault-detection power, and yields shorter test suites than the W-method. Various other issues that have an impact on the selection of a suitable test suite including the consideration of interaction parameters, various test architectures for protocol testing and the fact that many specifications do not satisfy the assumptions made by most test selection methods (such as complete definition, a correctly implemented reset function, a limited number of states in the implementation, and determinism), are discussed. >

330 citations


Journal ArticleDOI
TL;DR: An automated tool called the Requirements Apprentice (RA), which assists a human analyst in the creation and modification of software requirements, is presented; the RA develops a coherent internal representation of a requirement from an initial set of disorganized, imprecise statements.
Abstract: An automated tool called the Requirements Apprentice (RA) which assists a human analyst in the creation and modification of software requirements is presented. Unlike most other requirements analysis tools, which start from a formal description language, the focus of the RA is on the transition between informal and formal specifications. The RA supports the earliest phases of creating a requirement, in which ambiguity, contradiction, and incompleteness are inevitable. From an artificial intelligence perspective, the central problem the RA faces is one of knowledge acquisition. The RA develops a coherent internal representation of a requirement from an initial set of disorganized, imprecise statements. To do so, the RA relies on a variety of techniques, including dependency-directed reasoning, hybrid knowledge representations, and the reuse of common forms (cliches). An annotated transcript showing an interaction with a working version of the RA is given.

280 citations


Journal ArticleDOI
TL;DR: An approach based on a modular state-transition representation of a parallel system, called the stochastic automata network (SAN), is developed; the transition matrix of the underlying Markov chain is automatically derived using tensor algebra operators, in a format that involves very limited storage cost.
Abstract: A methodology for modeling a system composed of parallel activities with synchronization points is proposed. Specifically, an approach based on a modular state-transition representation of a parallel system called the stochastic automata network (SAN) is developed. The state-space explosion is handled by a decomposition technique. The dynamic behavior of the algorithm is analyzed under Markovian assumptions. The transition matrix of the chain is automatically derived using tensor algebra operators, in a format that involves a very limited storage cost.

266 citations
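
The tensor-algebra construction referred to above can be sketched for the simplest case of two independent automata with no synchronization: the global generator is then the Kronecker sum of the local generators, so only the small local matrices need to be stored. The matrices below are invented, and the synchronization terms that the SAN formalism also supports are omitted.

```python
import numpy as np

# Minimal sketch of the tensor-algebra idea behind stochastic automata networks:
# for independent automata, the global CTMC generator is the Kronecker (tensor)
# sum of the small local generators, so only the local matrices need be stored.
# Rates and automata below are invented; synchronization terms are omitted.

Q1 = np.array([[-2.0,  2.0],          # local generator of automaton 1 (2 states)
               [ 1.0, -1.0]])
Q2 = np.array([[-3.0,  3.0,  0.0],    # local generator of automaton 2 (3 states)
               [ 0.0, -4.0,  4.0],
               [ 5.0,  0.0, -5.0]])

def kron_sum(a, b):
    """Kronecker sum: A (+) B = A (x) I + I (x) B."""
    return np.kron(a, np.eye(b.shape[0])) + np.kron(np.eye(a.shape[0]), b)

# A SAN solver never materializes the 6x6 global generator; we build it here
# only to check the tensor descriptor against direct construction.
Q_global = kron_sum(Q1, Q2)
assert np.allclose(Q_global.sum(axis=1), 0.0)   # rows of a generator sum to 0
print(Q_global)
```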


Journal ArticleDOI
TL;DR: Experimental results indicate that no performance improvements can be obtained over the scheduler versions using a one-dimensional workload descriptor, and the best single workload descriptor is the number of tasks in the run queue.
Abstract: A task scheduler based on the concept of a stochastic learning automaton, implemented on a network of Unix workstations, is described. Using an artificial, executable workload, a number of experiments were conducted to determine the effect of different workload descriptors. These workload descriptors characterize the load at one host and determine whether a newly created task is to be executed locally or remotely. Six one-dimensional workload descriptors are examined. Two more complex workload descriptors are also considered. It is shown that the best single workload descriptor is the number of tasks in the run queue. The use of the worst workload descriptor, the 1-min load average, resulted in an increase in mean response time of over 32% compared to the best descriptor. The two best workload descriptors, the number of tasks in the run queue and the system call rate, are combined to measure a host's load. Experimental results indicate that no performance improvements can be obtained over the scheduler versions using a one-dimensional workload descriptor.
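
As a rough illustration of how a one-dimensional descriptor such as the run-queue length might drive a local/remote placement decision, here is a minimal sketch. The Linux /proc/loadavg reading and the fixed threshold are assumptions made for illustration; the scheduler in the study is a stochastic learning automaton, not this rule.

```python
# Minimal sketch of a placement decision driven by a one-dimensional workload
# descriptor (number of runnable tasks), as studied above. The run-queue reading
# and the threshold are hypothetical stand-ins, not the paper's scheduler.

RUN_QUEUE_THRESHOLD = 2   # assumed tuning parameter

def run_queue_length():
    """Return the number of runnable tasks on this host (Linux-specific sketch:
    the 4th field of /proc/loadavg looks like '2/351', i.e. running/total)."""
    with open("/proc/loadavg") as f:
        running, _total = f.read().split()[3].split("/")
    return int(running)

def place_task():
    """Decide whether a newly created task should run locally or remotely."""
    if run_queue_length() <= RUN_QUEUE_THRESHOLD:
        return "local"
    return "remote"   # hand off to a remote host chosen by the scheduler

if __name__ == "__main__":
    print("placement decision:", place_task())
```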

Journal ArticleDOI
TL;DR: A language for expressing views from different viewpoints and a set of analogy heuristics for performing a syntactically oriented analysis of views are proposed; this analysis is capable of differentiating between missing information and conflicting information, thus providing support for viewpoint resolution.
Abstract: A specific technique, viewpoint resolution, is proposed as a means of providing early validation of the requirements for a complex system, and some initial empirical evidence of the effectiveness of a semi-automated implementation of the technique is provided. The technique is based on the fact that software requirements can and should be elicited from different viewpoints, and that examination of the differences resulting from them can be used as a way of assisting in the early validation of requirements. A language for expressing views from different viewpoints and a set of analogy heuristics for performing a syntactically oriented analysis of views are proposed. This analysis of views is capable of differentiating between missing information and conflicting information, thus providing support for viewpoint resolution.

Journal ArticleDOI
TL;DR: Assessing testability from program specifications is discussed, and an experiment shows that it takes less time to build and test a program developed from a domain-testable specification than a similar program developed from a nondomain-testable specification.
Abstract: The concept of domain testability of software is defined by applying the concepts of observability and controllability to software. It is shown that a domain-testable program does not exhibit any input-output inconsistencies and supports small test sets in which test outputs are easily understood. Metrics that can be used to assess the level of effort required in order to modify a program so that it is domain-testable are discussed. Assessing testability from program specifications and an experiment which shows that it takes less time to build and test a program developed from a domain-testable specification than a similar program developed from a nondomain-testable specification are also discussed.

Journal ArticleDOI
TL;DR: Using these criteria, analysis procedures can be defined for particular state-machine modeling languages to provide semantic analysis of real-time process-control software requirements.
Abstract: A set of criteria is defined to help find errors in software requirements specifications. Only analysis criteria that examine the behavioral description of the computer are considered. The behavior of the software is described in terms of observable phenomena external to the software. Particular attention is focused on the properties of robustness and lack of ambiguity. The criteria are defined using an abstract state-machine model for generality. Using these criteria, analysis procedures can be defined for particular state-machine modeling languages to provide semantic analysis of real-time process-control software requirements.

Journal ArticleDOI
TL;DR: It is recommended that every department should gain an insight into its reasons for delay in order to be able to take adequate actions for improvement.
Abstract: A study of the reasons for delay in software development is described. The aim of the study was to gain an insight into the reasons for differences between plans and reality in development activities in order to be able to take actions for improvement. A classification was used to determine the reasons. 160 activities, comprising over 15,000 hours of work, have been analyzed. The results and interpretations of the results are presented. Insight into the predominant reasons for delay enabled actions for improvements to be taken in the department concerned. Because the distribution of reasons for delay varied widely from one department to another, it is recommended that every department should gain an insight into its reasons for delay in order to be able to take adequate actions for improvement.

Journal ArticleDOI
TL;DR: A simple transformation of the metric is investigated whereby the cyclomatic complexity is divided by the size of the system in source statements, thereby determining a complexity density ratio, which is demonstrated to be a useful predictor of software maintenance productivity on a small pilot sample of maintenance projects.
Abstract: A study of the relationship between the cyclomatic complexity metric (T. McCabe, 1976) and software maintenance productivity, given that a metric that measures complexity should prove to be a useful predictor of maintenance costs, is reported. The cyclomatic complexity metric is a measure of the maximum number of linearly independent circuits in a program control graph. The current research validates previously raised concerns about the metric on a new data set. However, a simple transformation of the metric is investigated whereby the cyclomatic complexity is divided by the size of the system in source statements, thereby determining a complexity density ratio. This complexity density ratio is demonstrated to be a useful predictor of software maintenance productivity on a small pilot sample of maintenance projects.
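
The transformation itself is simple enough to state directly. In the sketch below, the summed cyclomatic complexity and the system size are assumed to be already measured; the numbers in the example are invented.

```python
# Minimal sketch of the "complexity density" transformation described above.
# Cyclomatic complexity of a control-flow graph with E edges, N nodes, and
# P connected components is v(G) = E - N + 2P (McCabe, 1976); here the
# per-module complexities are assumed to be already measured and summed.

def complexity_density(total_cyclomatic_complexity: int,
                       source_statements: int) -> float:
    """Cyclomatic complexity divided by system size in source statements."""
    if source_statements <= 0:
        raise ValueError("system size must be positive")
    return total_cyclomatic_complexity / source_statements

# Hypothetical system: 1,400 summed decision points over 20,000 source statements.
print(f"complexity density = {complexity_density(1400, 20000):.3f}")
```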

Journal ArticleDOI
TL;DR: Improvements to a fast method of generating sample values for xi in constant time are suggested; the proposed modification reduces the time required for initialization to O(n).
Abstract: Let xi be a random variable over a finite set with an arbitrary probability distribution. Improvements to a fast method of generating sample values for xi in constant time are suggested. The proposed modification reduces the time required for initialization to O(n). For a simple genetic algorithm, this improvement changes an O(g n ln n) algorithm into an O(g n) algorithm (where g is the number of generations, and n is the population size).
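
Constant-time sampling with O(n) initialization is characteristic of Walker's alias method with a Vose-style setup; the sketch below shows that standard construction, which matches the stated bounds but is not necessarily the paper's exact modification.

```python
import random

# Sketch of Walker's alias method with Vose's O(n) initialization: after setup,
# each sample takes constant time. This is a standard construction matching the
# complexity bounds discussed above, not necessarily the paper's exact variant.

class AliasSampler:
    def __init__(self, probs):
        n = len(probs)
        scaled = [p * n for p in probs]          # rescale so the mean is 1
        self.prob, self.alias = [0.0] * n, [0] * n
        small = [i for i, p in enumerate(scaled) if p < 1.0]
        large = [i for i, p in enumerate(scaled) if p >= 1.0]
        while small and large:
            s, l = small.pop(), large.pop()
            self.prob[s], self.alias[s] = scaled[s], l
            scaled[l] -= 1.0 - scaled[s]         # give the leftover mass to l
            (small if scaled[l] < 1.0 else large).append(l)
        for i in small + large:                  # leftovers are (numerically) 1
            self.prob[i] = 1.0

    def sample(self):
        """Constant-time draw: pick a column, then flip a biased coin."""
        i = random.randrange(len(self.prob))
        return i if random.random() < self.prob[i] else self.alias[i]

sampler = AliasSampler([0.1, 0.2, 0.3, 0.4])     # arbitrary finite distribution
counts = [0, 0, 0, 0]
for _ in range(100_000):
    counts[sampler.sample()] += 1
print([c / 100_000 for c in counts])             # approx. [0.1, 0.2, 0.3, 0.4]
```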

Journal ArticleDOI
TL;DR: The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is discussed and the effectiveness of multiversion software is studied by comparing estimates of the failure probability of these systems with the failure probabilities of single versions.
Abstract: The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is discussed. The effectiveness of multiversion software is studied by comparing estimates of the failure probabilities of these systems with the failure probabilities of single versions. The estimates are obtained under a model of dependent failures and compared with estimates obtained when failures are assumed to be independent. The experimental results are based on 20 versions of an aerospace application developed and independently validated by 60 programmers from 4 universities. Descriptions of the application and development process are given, together with an analysis of the 20 versions.
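
A toy calculation shows why the dependence model matters. The per-version failure probability, the common-cause probability, and the 2-out-of-3 voting configuration below are all assumptions made for illustration; they are not the study's data, its 20-version configuration, or its dependence model.

```python
# Toy illustration of why the independence assumption matters when estimating
# multiversion (2-out-of-3 voting) reliability. The per-version failure
# probability p and the common-cause probability q are invented numbers, and
# this simple common-cause model is not the dependence model of the study.

p = 0.01    # probability that a single version fails on a random input
q = 0.002   # probability of a coincident (common-cause) failure of all versions

# Independence assumption: the voter fails if at least 2 of 3 versions fail.
p_vote_independent = 3 * p**2 * (1 - p) + p**3

# Simple dependent model: with probability q all versions fail together;
# otherwise the versions fail independently as before.
p_vote_dependent = q + (1 - q) * p_vote_independent

print(f"single version:             {p:.6f}")
print(f"2-of-3 voting, independent: {p_vote_independent:.6f}")
print(f"2-of-3 voting, dependent:   {p_vote_dependent:.6f}")
```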

Journal ArticleDOI
TL;DR: The empirical results support the effectiveness of the data bindings clustering approach for localizing error-prone system structure and quantify ratios of coupling and strength in software systems.
Abstract: Using measures of data interaction called data bindings, the authors quantify ratios of coupling and strength in software systems and use the ratios to identify error-prone system structures. A 148,000-source-line system from a production environment was selected for empirical analysis. Software error data were collected from high-level system design through system testing and from field operation of the system. The authors use a set of five tools to calculate the data bindings automatically and use a clustering technique to determine a hierarchical description of each of the system's 77 subsystems. A nonparametric analysis of variance model is used to characterize subsystems and individual routines that had either many or few errors or high or low error correction effort. The empirical results support the effectiveness of the data bindings clustering approach for localizing error-prone system structure.

Journal ArticleDOI
TL;DR: It is proven that several previously proposed multidatabase transaction management mechanisms guarantee global serializability only if all participating database systems produce rigorous schedules.
Abstract: The class of transaction scheduling mechanisms in which the transaction serialization order can be determined by controlling their commitment order is defined. This class of transaction management mechanisms is important because it simplifies transaction management in a multidatabase system environment. The notion of analogous execution and serialization orders of transactions is defined, and the concept of strongly recoverable and rigorous execution schedules is introduced. It is then proven that rigorous schedulers always produce analogous execution and serialization orders. It is shown that systems using rigorous scheduling can be naturally incorporated in hierarchical transaction management mechanisms. It is proven that several previously proposed multidatabase transaction management mechanisms guarantee global serializability only if all participating database systems produce rigorous schedules.

Journal ArticleDOI
TL;DR: The development of a virtual-machine monitor (VMM) security kernel for the VAX architecture is described, focusing on how the system's hardware, microcode, and software are aimed at meeting A1-level security requirements while maintaining the standard interfaces and applications of the VMS and ULTRIX-32 operating systems.
Abstract: The development of a virtual-machine monitor (VMM) security kernel for the VAX architecture is described. The focus is on how the system's hardware, microcode, and software are aimed at meeting A1-level security requirements while maintaining the standard interfaces and applications of the VMS and ULTRIX-32 operating systems. The VAX security kernel supports multiple concurrent virtual machines on a single VAX system, providing isolation and controlled sharing of sensitive data. Rigorous engineering standards were applied during development to comply with the assurance requirements for verification and configuration management. The VAX security kernel has been developed with a heavy emphasis on performance and system management tools. The kernel performs sufficiently well that much of its development was carried out in virtual machines running on the kernel itself, rather than in a conventional time-sharing system.

Journal ArticleDOI
TL;DR: Two approaches are presented for integrating structured analysis and the Vienna development method as surrogates for informal and formal languages, respectively, and the issues that emerge from the use of the two approaches are reported.
Abstract: The differences between informal and formal requirements specification languages are noted, and the issue of bridging the gap between them is discussed. Using structured analysis (SA) and the Vienna development method (VDM) as surrogates for informal and formal languages, respectively, two approaches are presented for integrating the two. The first approach uses the SA model of a system to guide the analyst's understanding of the system and the development of the VDM specifications. The second approach proposes a rule-based method for generating VDM specifications from a set of corresponding SA specifications. The two approaches are illustrated through a simplified payroll system case. The issues that emerge from the use of the two approaches are reported.

Journal ArticleDOI
TL;DR: It is argued that a simple alternative to copying as a data movement primitive-swapping (exchanging) the values of two variables-has potentially significant advantages in the context of the design of generic reusable software components.
Abstract: The authors argue that a simple alternative to copying as a data movement primitive-swapping (exchanging) the values of two variables-has potentially significant advantages in the context of the design of generic reusable software components. Specifically, the authors claim that generic module designs based on a swapping style are superior to designs based on copying, both in terms of execution-time efficiency and with respect to the likelihood of correctness of client programs and module implementations. Furthermore, designs based on swapping are more reusable than traditional designs. Specific arguments and examples to support these positions are presented.
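
The copy-versus-swap contrast can be sketched even in Python, where copying a large value means a deep copy while swapping is a constant-time exchange of references. The container and method names below are illustrative; the original argument is made for statically typed generic modules, not for this particular class.

```python
import copy

# Sketch of the copying-vs-swapping contrast for a generic container.
# Names are illustrative; the argument above targets statically typed generic
# modules, where copying can be expensive, create unwanted aliasing, or be
# ill-defined for some types, while swapping is always cheap and well-defined.

class SwapStack:
    def __init__(self):
        self._items = []

    def push_copy(self, value):
        """Copy-based data movement: duplicates the client's value."""
        self._items.append(copy.deepcopy(value))

    def push_swap(self, holder, key):
        """Swap-based data movement: exchange the client's slot with an
        empty placeholder, so there is no duplication and no shared aliasing."""
        self._items.append(holder[key])
        holder[key] = None        # the client keeps a well-defined "empty" value

    def pop_swap(self, holder, key):
        """Give the top item back to the client by exchange."""
        holder[key], self._items[-1] = self._items[-1], holder[key]
        self._items.pop()

if __name__ == "__main__":
    s = SwapStack()
    client = {"record": {"id": 7, "payload": list(range(5))}}
    s.push_swap(client, "record")   # O(1): no deep copy; client slot is now None
    out = {"record": None}
    s.pop_swap(out, "record")       # item moves back to the client by exchange
    print(out["record"])
```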

Journal ArticleDOI
TL;DR: The model is shown to provide a unified approach in which the user's requirements and preferences are formally integrated with the technical structure of the software and its module and program reliabilities, making reliability a singular measure for performance evaluation and project control.
Abstract: A software reliability allocation model is developed. This model determines how reliable software modules and programs must be in order to maximize the user's utility, while taking into account the financial and technical constraints of the system. The model is shown to provide a unified approach in which the user's requirements and preferences are formally integrated with the technical structure of the software and its module and program reliabilities. The model determines reliability goals at the planning and design stages of the software project, making reliability a singular measure for performance evaluation and project control. An example for the application of the model is provided.

Journal ArticleDOI
TL;DR: A semidistributed approach is given for load balancing in large parallel and distributed systems which is different from the conventional centralized and fully distributed approaches and makes exclusive use of a combinatorial structure known as the Hadamard matrix.
Abstract: A semidistributed approach is given for load balancing in large parallel and distributed systems which is different from the conventional centralized and fully distributed approaches. The proposed strategy uses a two-level hierarchical control by partitioning the interconnection structure of a distributed or multiprocessor system into independent symmetric regions (spheres) centered at some control points. These control points, called schedulers, optimally schedule tasks within their spheres and maintain state information with low overhead. The authors consider interconnection structures belonging to a number of families of distance-transitive graphs for evaluation, and, using their algebraic characteristics, show that identification of spheres and their scheduling points is in general an NP-complete problem. An efficient solution for this problem is presented by making exclusive use of a combinatorial structure known as the Hadamard matrix. The performance of the proposed strategy has been evaluated and compared with an efficient fully distributed strategy through an extensive simulation study. The proposed strategy yielded much better results.
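
The combinatorial ingredient can be sketched as follows: a Hadamard matrix of order 2^k built by the Sylvester construction, whose rows split node indices into balanced halves. Using those splits as "regions" is a simplified stand-in for the paper's sphere identification, not the actual algorithm.

```python
import numpy as np

# Minimal sketch of the combinatorial ingredient mentioned above: a Hadamard
# matrix built by the Sylvester construction (orders 2^k). Using its rows to
# split node indices into two balanced halves is a simplified stand-in for the
# paper's identification of scheduling spheres, not the actual algorithm.

def hadamard(order: int) -> np.ndarray:
    """Sylvester construction: H(2n) = [[H, H], [H, -H]], starting from H(1) = [1]."""
    assert order >= 1 and order & (order - 1) == 0, "order must be a power of 2"
    h = np.array([[1]])
    while h.shape[0] < order:
        h = np.block([[h, h], [h, -h]])
    return h

H = hadamard(8)
assert np.array_equal(H @ H.T, 8 * np.eye(8, dtype=int))   # rows are orthogonal

# Each non-constant row splits the 8 node indices into two balanced regions.
for r, row in enumerate(H[1:], start=1):
    region = [i for i, v in enumerate(row) if v == 1]
    print(f"row {r}: nodes with +1 -> {region}")
```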

Journal ArticleDOI
TL;DR: In this article, the authors present a toolset for automating the main constrained expression analysis techniques, and the results of experiments with that toolset are reported. The toolset is capable of carrying out completely automated analyses of a variety of concurrent systems, starting from source code in an Ada-like design language and producing system traces displaying the properties represented by the analysts' queries.
Abstract: The constrained expression approach to analysis of concurrent software systems can be used with a variety of design and programming languages and does not require a complete enumeration of the set of reachable states of the concurrent system. The construction of a toolset automating the main constrained expression analysis techniques and the results of experiments with that toolset are reported. The toolset is capable of carrying out completely automated analyses of a variety of concurrent systems, starting from source code in an Ada-like design language and producing system traces displaying the properties represented by the analysts' queries. The strengths and weaknesses of the toolset and the approach are assessed on both theoretical and empirical grounds.

Journal ArticleDOI
TL;DR: The robustness of the PWS is demonstrated by showing that allocation policies that allocate more processors than the PWS are inferior in performance to those that never allocate more than the PWS, even at a moderately low load.
Abstract: The concept of a processor working set (PWS) as a single-value parameter for characterizing parallel program behavior is introduced. Through detailed experimental studies of different algorithms on a transputer-based multiprocessor machine, it is shown that the PWS is a robust measure for characterizing the workload of a multiprocessor system. It is shown that processor allocation strategies based on the PWS provide significantly better throughput-delay characteristics. The robustness of the PWS is further demonstrated by showing that allocation policies that allocate more processors than the PWS are inferior in performance to those that never allocate more than the PWS, even at a moderately low load. Based on the results, a simple static allocation policy is proposed that allocates the PWS at low load and adaptively fragments allocations at high load, down to one processor per job.

Journal ArticleDOI
TL;DR: The authors argue that this approach can improve the flexibility of network management systems by providing a language that is declarative and set-oriented, and it is shown that any data-manipulation language, augmented with several new capabilities, can serve as a language for specifying the aforementioned network management functions.
Abstract: The problem of managing large communication networks using statistical tests, alerts, and correlation among alerts is considered. The authors propose a model of these network management functions as data-manipulation operations. They argue that this approach can improve the flexibility of network management systems by providing a language that is declarative and set-oriented. These are properties of existing data-manipulation languages, and it is shown that any data-manipulation language, augmented with several new capabilities, can serve as a language for specifying the aforementioned network management functions. The new capabilities required are specification of events, correlation among events, and change tracking.

Journal ArticleDOI
TL;DR: A set of language-independent schedulability analysis techniques is presented; utilizing knowledge of implementation- and hardware-dependent information in a table-driven fashion, these techniques provide accurate worst-case time bounds and other schedulability information.
Abstract: A set of language-independent schedulability analysis techniques is presented. Utilizing knowledge of implementation- and hardware-dependent information in a table-driven fashion, these techniques provide accurate worst-case time bounds and other schedulability information. A prototype schedulability analyzer has been developed to demonstrate the effectiveness of these techniques. The analyzer consists of a partially language-dependent front-end, targeted at Real-Time Euclid, a real-time language specifically designed with a set of schedulability analysis provisions built in, and a language-dependent back-end. The analyzer has been used on a number of realistic real-time programs run on a multiple-microprocessor system. Predicted program performance differs only marginally from the actual performance.

Journal ArticleDOI
TL;DR: Ergodicity and throughput bound characterization are addressed for a subclass of timed and stochastic Petri nets, interleaving qualitative and quantitative theories.
Abstract: Ergodicity and throughput bound characterization are addressed for a subclass of timed and stochastic Petri nets, interleaving qualitative and quantitative theories. The nets considered represent an extension of the well-known subclass of marked graphs, defined as having a unique consistent firing count vector, independently of the stochastic interpretation of the net model. In particular, persistent and mono-T-semiflow net subclasses are considered. Upper and lower throughput bounds are computed using linear programming problems defined on the incidence matrix of the underlying net. The bounds proposed depend on the initial marking and the mean values of the delays but not on the probability distributions (thus including both the deterministic and the stochastic cases). From a different perspective, the considered net subclasses can be viewed as synchronized queuing networks; thus, the proposed bounds can also be applied to these networks.