
Showing papers in "ACM Transactions on Software Engineering and Methodology in 1999"


Journal ArticleDOI
TL;DR: The authors develop techniques, collectively called process validation, for uncovering and measuring discrepancies between process models and executions: process validation takes a process execution and a process model and measures the level of correspondence between the two.
Abstract: To a great extent, the usefulness of a formal model of a software process lies in its ability to accurately predict the behavior of the executing process. Similarly, the usefulness of an executing process lies largely in its ability to fulfill the requirements embodied in a formal model of the process. When process models and process executions diverge, something significant is happening. We have developed techniques for uncovering and measuring the discrepancies between models and executions, which we call process validation. Process validation takes a process execution and a process model, and measures the level of correspondence between the two. Our metrics are tailorable and give process engineers control over determining the severity of different types of discrepancies. The techniques provide detailed information once a high-level measurement indicates the presence of a problem. We have applied our process validation methods in an industrial case study, a portion of which is described in this article.
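To make the idea concrete, here is a minimal sketch, assuming the execution is an event trace and the model a set of allowed transitions; the function names, weighting scheme, and metric below are illustrative, not the paper's actual technique:

```python
# Illustrative sketch (not the paper's algorithm): measure how well an
# observed event trace corresponds to a process model given as a set of
# allowed (state, event) transitions, with tailorable severity weights.

def validate(trace, model, start, severity):
    """trace: list of event names; model: dict (state, event) -> next state;
    severity: dict event -> weight of a discrepancy involving that event."""
    state, discrepancies, total = start, 0.0, 0.0
    for event in trace:
        weight = severity.get(event, 1.0)
        total += weight
        if (state, event) in model:
            state = model[(state, event)]      # execution matches the model
        else:
            discrepancies += weight            # record a weighted discrepancy
    return 1.0 - discrepancies / total if total else 1.0

model = {("idle", "design"): "designed", ("designed", "review"): "reviewed",
         ("reviewed", "code"): "coded"}
severity = {"review": 2.0}                     # skipping review counts double
print(validate(["design", "code"], model, "idle", severity))  # 0.5
```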

264 citations


Journal ArticleDOI
TL;DR: It is shown that there is a coverage hierarchy to fault classes that is consistent with, and may therefore explain, experimental results on fault-based testing.
Abstract: Some varieties of specification-based testing rely upon methods for generating test cases from predicates in a software specification. These methods derive various test conditions from logic expressions, with the aim of detecting different types of faults. Some authors have presented empirical results on the ability of specification-based test generation methods to detect failures. This article describes a method for computing the conditions that must be covered by a test set for the test set to guarantee detection of a particular fault class. It is shown that there is a coverage hierarchy to fault classes that is consistent with, and may therefore explain, experimental results on fault-based testing. The method is also shown to be effective for computing MCDC-adequate tests.
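The underlying observation can be illustrated with a toy enumeration: the inputs on which a predicate and a faulty mutant of it differ are exactly the tests guaranteed to detect that fault. The paper works symbolically on specification predicates; the predicate and the fault below are invented:

```python
# Toy enumeration of fault-detecting test conditions. Collecting these inputs
# for every mutant in a fault class yields the conditions a test set must
# cover to guarantee detection of that class.
from itertools import product

def detecting_tests(original, mutant, names):
    tests = []
    for values in product([False, True], repeat=len(names)):
        env = dict(zip(names, values))
        if original(env) != mutant(env):       # the mutant is exposed here
            tests.append(env)
    return tests

orig = lambda v: v["a"] and (v["b"] or v["c"])
# Operator-reference fault: 'or' written in place of 'and'
fault = lambda v: v["a"] or (v["b"] or v["c"])
print(detecting_tests(orig, fault, ["a", "b", "c"]))
```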

161 citations


Journal ArticleDOI
TL;DR: The CRA technique is enhanced to check safety properties which may contain actions that are not globally observable, and a scheme to transform, in stages, a property that involves hidden actions to one that involves only globally observable actions is proposed.
Abstract: The software architecture of a distributed program can be represented by a hierarchical composition of subsystems, with interacting processes at the leaves of the hierarchy. Compositional reachability analysis (CRA) is a promising state reduction technique which can be automated and used in stages to derive the overall behavior of a distributed program based on its architecture. CRA is particularly suitable for the analysis of programs that are subject to evolutionary change. When a program evolves, only the behaviors of those subsystems affected by the change need be reevaluated. The technique, however, has a limitation. The properties available for analysis are constrained by the set of actions that remain globally observable. Properties involving actions encapsulated by subsystems may therefore not be analyzed. In this article, we enhance the CRA technique to check safety properties which may contain actions that are not globally observable. To achieve this, the state machine model is augmented with a special trap state labeled as π. We propose a scheme to transform, in stages, a property that involves hidden actions to one that involves only globally observable actions. The enhanced technique also includes a mechanism aiming at reducing the debugging effort. The technique is illustrated using a gas station system example.
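A toy rendering of the trap-state idea, assuming transition systems given as (state, action) maps: a safety property becomes an automaton in which every disallowed action leads to the trap state "pi", so a violation stays detectable as reachability of the trap. This is plain reachability on a product, not CRA's staged composition:

```python
# Toy safety check via a trap state (the article's pi). system/property_aut:
# dict (state, action) -> next state. Actions the property does not allow
# send its side of the product state to TRAP.
TRAP = "pi"

def check_safety(system, property_aut, init):
    frontier, seen = [init], {init}
    while frontier:
        s, p = frontier.pop()
        for (s2, a), s_next in system.items():
            if s2 != s:
                continue
            p_next = property_aut.get((p, a), TRAP)   # disallowed -> trap
            if p_next == TRAP:
                return False, (s, a)                  # safety violated by a
            if (s_next, p_next) not in seen:
                seen.add((s_next, p_next))
                frontier.append((s_next, p_next))
    return True, None

pump = {("idle", "prepay"): "ready", ("ready", "pump"): "idle"}
prop = {("p0", "prepay"): "p1", ("p1", "pump"): "p0"}  # pump only after prepay
print(check_safety(pump, prop, ("idle", "p0")))        # (True, None)
```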

106 citations


Journal ArticleDOI
Steven P. Reiss1
TL;DR: The Desert software engineering environment is a suite of tools developed to enhance programmer productivity through increased tool integration that introduces an inexpensive form of data integration to provide additional tool capabilities and information sharing among tools.
Abstract: The Desert software engineering environment is a suite of tools developed to enhance programmer productivity through increased tool integration. It introduces an inexpensive form of data integration to provide additional tool capabilities and information sharing among tools, uses a common editor to give high-quality semantic feedback and to integrate different types of software artifacts, and builds virtual files on demand to address specific tasks. All this is done in an open and extensible environment capable of handling large software systems.

101 citations


Journal ArticleDOI
TL;DR: An approach to software reliability estimation is presented that combines operational testing with stratified sampling in order to reduce the number of program executions that must be checked manually for conformance to requirements.
Abstract: A new approach to software reliability estimation is presented that combines operational testing with stratified sampling in order to reduce the number of program executions that must be checked manually for conformance to requirements. Automatic cluster analysis is applied to execution profiles in order to stratify captured operational executions. Experimental results are reported that suggest this approach can significantly reduce the cost of estimating reliability.
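A minimal sketch of the estimation step, assuming the profile clustering and the manual conformance check are already given; the data, sample size, and failure rule below are invented:

```python
# Classical stratified sampling over profile clusters: sample a few
# executions per stratum for manual checking, then combine the per-stratum
# pass rates weighted by stratum size.
import random

def stratified_reliability(strata, check, sample_size=5):
    """strata: list of lists of executions (one list per profile cluster);
    check(execution) -> True if it conforms to requirements."""
    total = sum(len(s) for s in strata)
    reliability = 0.0
    for stratum in strata:
        sample = random.sample(stratum, min(sample_size, len(stratum)))
        pass_rate = sum(check(e) for e in sample) / len(sample)
        reliability += (len(stratum) / total) * pass_rate
    return reliability

# Toy data: executions are ints; those divisible by 7 "fail".
strata = [[1, 2, 3, 4, 5, 6], [7, 14, 21], [8, 9, 10, 11]]
print(stratified_reliability(strata, check=lambda e: e % 7 != 0))
```

Because manual checking is concentrated on a small sample per stratum, the number of executions inspected by hand shrinks while the weighted estimate still reflects the whole captured population.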

80 citations


Journal ArticleDOI
TL;DR: It is argued that the incremental attitude toward the introduction of formal methods within the industry could be effective largely independently from the chosen formalism.
Abstract: We address the problem of increasing the impact of formal methods in the practice of industrial computer applications. We summarize the reasons why formal methods so far did not gain widespread use within the industrial environment despite several promising experiences. We suggest an evolutionary rather than revolutionary attitude in the introduction of formal methods in the practice of industrial applications, and we report on our long-standing experience, which involves an academic institution, Politecnico di Milano; two main industrial partners, ENEL and CISE; and occasionally a few other industries. Our approach aims at augmenting an existing and fairly deeply rooted informal industrial methodology with our original formalism, the logic specification language TRIO. On the basis of the experiences we gained, we argue that our incremental attitude toward the introduction of formal methods within the industry could be effective largely independently from the chosen formalism.

70 citations


Journal ArticleDOI
TL;DR: This article defines a six-step procedure for building a PRIME-based process-integrated environment (PIE) and illustrates how PRIME facilitates change integration on an easy-to-adapt modeling level.
Abstract: Research in process-centered environments (PCEs) has focused on project management support and has neglected method guidance for the engineers performing the (software) engineering process. It has been dominated by the search for suitable process-modeling languages and enactment mechanisms. The consequences of process orientation on the computer-based engineering environments, i.e., the interactive tools used during process performance, have been studied much less. In this article, we present the PRIME (Process Integrated Modeling Environments) framework which empowers method guidance through process-integrated tools. In contrast to the tools of PCEs, the process-integrated tools of PRIME adjust their behavior according to the current process situation and the method definitions. Process integration of PRIME tools is achieved through (1) the definition of tool models; (2) the integration of the tool models and the method definitions; (3) the interpretation of the integrated environment model by the tools, the process-aware control integration mechanism, and the enactment mechanism; and (4) the synchronization of the tools and the enactment mechanism based on a comprehensive interaction protocol. We sketch the implementation of PRIME as a reusable implementation framework which facilitates the realization of process-integrated tools as well as the process integration of external tools. We define a six-step procedure for building a PRIME-based process-integrated environment (PIE) and illustrate how PRIME facilitates change integration on an easy-to-adapt modeling level.
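A hypothetical sketch of what process integration might look like; the class names and the trivial situation model below are invented for illustration and are not the PRIME API:

```python
# Hypothetical sketch of a process-integrated tool: the tool asks the
# enactment mechanism for guidance appropriate to the current process
# situation before offering commands to the engineer, and reports back
# what was done so the situation can advance.
class Enactment:
    """Interprets the integrated environment model (here: a lookup table)."""
    def __init__(self, method_definitions):
        self.methods = method_definitions
        self.situation = "start"

    def guidance(self):
        return self.methods.get(self.situation, [])

    def notify(self, action):                 # tools report performed actions
        self.situation = action               # trivial situation update

class ProcessIntegratedTool:
    def __init__(self, enactment):
        self.enactment = enactment

    def offered_commands(self):
        # Behavior adjusts to the current situation and method definitions,
        # rather than offering a fixed command set.
        return self.enactment.guidance()

methods = {"start": ["define goals"], "define goals": ["refine scenario"]}
tool = ProcessIntegratedTool(Enactment(methods))
print(tool.offered_commands())                 # ['define goals']
tool.enactment.notify("define goals")
print(tool.offered_commands())                 # ['refine scenario']
```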

70 citations


Journal ArticleDOI
TL;DR: It is argued that the GENOA language is a simple and convenient vehicle for implementing a range of analysis tools, and that the "front-end reuse" approach of GENOA offers an important advantage for tools aimed at large software projects: the reuse of complex, expensive build procedures to run generated tools over large source bases.
Abstract: Code analysis tools provide support for such software engineering tasks as program understanding, software metrics, testing, and reengineering. In this article we describe GENOA, the framework underlying application generators such as Aria and GEN++ which have been used to generate a wide range of practical code analysis tools. This experience illustrates front-end retargetability of GENOA; we describe the features of the GENOA framework that allow it to be used with different front ends. While permitting arbitrary parse tree computations, the GENOA specification language has special, compact iteration operators that are tuned for expressing simple, polynomial-time analysis programs; in fact, there is a useful sublanguage of the GENOA language that can express precisely all (and only) polynomial-time (PTIME) analysis programs on parse trees. Thus, we argue that the GENOA language is a simple and convenient vehicle for implementing a range of analysis tools. We also argue that the “front-end reuse” approach of GENOA offers an important advantage for tools aimed at large software projects: the reuse of complex, expensive build procedures to run generated tools over large source bases. In this article, we describe the GENOA framework and our experiences with it.
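In the same spirit, a small analyzer driven by parse-tree iteration; GENOA targets C/C++ front ends with its own specification language, so Python's ast module stands in here purely for illustration:

```python
# A tiny analyzer in the spirit of a GENOA-generated tool: iterate over the
# parse tree and report functions whose parameter count exceeds a threshold.
import ast

SOURCE = """
def ok(a, b): pass
def too_wide(a, b, c, d, e, f): pass
"""

def wide_functions(source, max_params=4):
    tree = ast.parse(source)
    return [(node.name, len(node.args.args))
            for node in ast.walk(tree)                 # parse-tree iteration
            if isinstance(node, ast.FunctionDef)
            and len(node.args.args) > max_params]

print(wide_functions(SOURCE))    # [('too_wide', 6)]
```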

60 citations


Journal ArticleDOI
TL;DR: A hierarchy-aware classification schema for object-oriented code, where software components are classified according to their behavioral characteristics, such as provided services, employed algorithms, and needed data is presented.
Abstract: This article presents a hierarchy-aware classification schema for object-oriented code, where software components are classified according to their behavioral characteristics, such as provided services, employed algorithms, and needed data. In the case of reusable application frameworks, these characteristics are constructed from their model, i.e., from the description of the abstract classes specifying both the framework structure and purpose. In conventional object libraries, the characteristics are extracted semiautomatically from class interfaces. Characteristics are term pairs, weighted to represent “how well” they describe component behavior. The set of characteristics associated with a given component forms its software descriptor. A descriptor base is presented where descriptors are organized on the basis of structured relationships, such as similarity and composition. The classification is supported by a thesaurus acting as a language-independent unified lexicon. The descriptor base is conceived for developers who, besides conventionally browsing the descriptors hierarchy, can query the system, specifying a set of desired functionalities and getting a ranked set of adaptable candidates. User feedback is taken into account in order to progressively ameliorate the quality of the descriptors according to the views of the user community. Feedback is made dependent on the user typology through a user profile. Experimental results in terms of recall and precision of the retrieval mechanism against a sample code base are reported.
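A minimal sketch of descriptor-based retrieval, assuming descriptors map weighted term pairs and ranking uses cosine similarity; the components, terms, and weights are invented, and the paper's weighting, thesaurus, and feedback machinery are considerably richer:

```python
# Rank components by similarity between a query and each software descriptor,
# where a descriptor maps behavioral term pairs to weights.
from math import sqrt

def cosine(q, d):
    dot = sum(w * d.get(t, 0.0) for t, w in q.items())
    norm = (sqrt(sum(w * w for w in q.values()))
            * sqrt(sum(w * w for w in d.values())))
    return dot / norm if norm else 0.0

descriptors = {
    "ListSorter":  {("sort", "list"): 0.9, ("compare", "item"): 0.6},
    "TreePrinter": {("print", "tree"): 0.8, ("walk", "node"): 0.7},
}
query = {("sort", "list"): 1.0}                # desired functionality
ranked = sorted(descriptors, key=lambda c: cosine(query, descriptors[c]),
                reverse=True)
print(ranked)    # ['ListSorter', 'TreePrinter']
```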

58 citations


Journal ArticleDOI
TL;DR: This paper presents a structured compositional design method for discrete real-time systems that can be used to combat the combinatorial explosion of states in the verification of large systems.
Abstract: Reactive systems exhibit ongoing, possibly nonterminating, interaction with the environment. Real-time systems are reactive systems that must satisfy quantitative timing constraints. This paper presents a structured compositional design method for discrete real-time systems that can be used to combat the combinatorial explosion of states in the verification of large systems. A composition rule describes how the correctness of the system can be determined from the correctness of its modules, without knowledge of their internal structure. The advantage of compositional verification is clear. Each module is both simpler and smaller than the system itself. Composition requires the use of both model-checking and deductive techniques. A refinement rule guarantees that specifications of high-level modules are preserved by their implementations. The StateTime toolset is used to automate parts of compositional designs using a combination of model-checking and simulation. The design method is illustrated using a reactor shutdown system that cannot be verified using the StateTime toolset (due to the combinatorial explosion of states) without compositional reasoning. The reactor example also illustrates the use of the refinement rule.
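For flavor, a standard non-circular assume-guarantee rule of the kind such composition rules build on; the article's own composition and refinement rules differ in detail:

```latex
% If module M_1 satisfies guarantee G under environment assumption A, and
% module M_2 discharges that assumption unconditionally, then the parallel
% composition satisfies G without examining the modules' internal structure.
\[
\frac{\langle A \rangle\, M_1\, \langle G \rangle \qquad
      \langle \mathit{true} \rangle\, M_2\, \langle A \rangle}
     {\langle \mathit{true} \rangle\, M_1 \parallel M_2\, \langle G \rangle}
\]
```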

41 citations


Journal ArticleDOI
TL;DR: It is argued that Mobile UNITY contributes to the modular development of system specifications because of the declarative fashion in which coordination among components is specified.
Abstract: With recent advances in wireless communication technology, mobile computing is an increasingly important area of research. A mobile system is one where independently executing components may migrate through some space during the course of the computation, and where the pattern of connectivity among the components changes as they move in and out of proximity. Mobile UNITY is a notation and proof logic for specifying and reasoning about mobile systems. In this article it is argued that Mobile UNITY contributes to the modular development of system specifications because of the declarative fashion in which coordination among components is specified. The packet-forwarding mechanism at the core of the Mobile IP protocol for routing to mobile hosts is taken as an example. A Mobile UNITY model of packet forwarding and the mobile system in which it must operate is developed. Proofs of correctness properties, including important real-time properties, are outlined, and the role of formal verification in the development of protocols such as Mobile IP is discussed.
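The forwarding mechanism itself is easy to sketch independently of the Mobile UNITY formalization; the Python below only mirrors the data flow of Mobile IP home-agent tunneling, and the names are illustrative:

```python
# Packet forwarding at the core of Mobile IP: the home agent intercepts
# packets addressed to a mobile host's home address and tunnels them to the
# host's currently registered care-of address.
class HomeAgent:
    def __init__(self):
        self.bindings = {}                    # home address -> care-of address

    def register(self, home_addr, care_of):   # mobile host reports a move
        self.bindings[home_addr] = care_of

    def forward(self, packet):
        dst = packet["dst"]
        if dst in self.bindings:
            # Encapsulate the original packet and tunnel it to the host's
            # current location.
            return {"dst": self.bindings[dst], "payload": packet}
        return packet                         # host is at home; deliver as-is

agent = HomeAgent()
agent.register("mh.home.net", "visited.net.42")
print(agent.forward({"dst": "mh.home.net", "data": "hello"}))
```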

Journal ArticleDOI
TL;DR: A method is proposed (based on the use of creational design patterns) to simplify SCM by reifying the variants of an object-oriented software system into language-level objects, and newly available compilation technology makes this proposal attractive with respect to performance.
Abstract: Using solid software configuration management (SCM) is mandatory to establish and maintain the integrity of the products of a software project throughout the project's software life cycle. Even with the help of sophisticated tools, handling the various dimensions of SCM can be a daunting (and costly) task for many projects. The contribution of this article is to (1) propose a method (based on the use of creational design patterns) to simplify SCM by reifying the variants of an object-oriented software system into language-level objects and (2) show that newly available compilation technology makes this proposal attractive with respect to performance (memory footprint and execution time) by inferring which classes are needed for a specific configuration and optimizing the generated code accordingly.
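A minimal sketch of the reification idea using an Abstract Factory; the classes below are invented examples, not the article's code:

```python
# Variants reified as language-level objects: each configuration is a factory,
# so selecting a variant means selecting an object rather than editing build
# scripts, and a compiler can infer which classes a configuration needs.
from abc import ABC, abstractmethod

class Renderer(ABC):
    @abstractmethod
    def draw(self, text): ...

class GuiRenderer(Renderer):
    def draw(self, text): return f"[window] {text}"

class TextRenderer(Renderer):
    def draw(self, text): return f"[tty] {text}"

class VariantFactory(ABC):                    # one factory per configuration
    @abstractmethod
    def renderer(self) -> Renderer: ...

class GuiVariant(VariantFactory):
    def renderer(self): return GuiRenderer()

class TextVariant(VariantFactory):
    def renderer(self): return TextRenderer()

def run(variant: VariantFactory):             # client code is variant-free
    print(variant.renderer().draw("hello"))

run(GuiVariant())                             # the configuration, as an object
```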