
Showing papers in "ACM Transactions on Software Engineering and Methodology in 1997"


Journal ArticleDOI
TL;DR: The key idea is to define architectural connectors as explicit semantic entities, specified as a collection of protocols that characterize each of the participant roles in an interaction and how these roles interact.
Abstract: As software systems become more complex, the overall system structure—or software architecture—becomes a central design problem. An important step toward an engineering discipline of software is a formal basis for describing and analyzing these designs. In this article we present a formal approach to one aspect of architectural design: the interactions among components. The key idea is to define architectural connectors as explicit semantic entities. These are specified as a collection of protocols that characterize each of the participant roles in an interaction and how these roles interact. We illustrate how this scheme can be used to define a variety of common architectural connectors. We further provide a formal semantics and show how this leads to a system in which architectural compatibility can be checked in a way analogous to type-checking in programming languages.
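
To give a feel for the idea, the following sketch models a connector role and a component port as prefix-closed sets of event traces and checks compatibility by trace inclusion. This is only an illustration: the article's own formalism is a richer process-algebraic one with a correspondingly stronger compatibility check, and the names Protocol, caller_role, and port are invented for the example.

    # Sketch: connector roles as protocols, with a crude compatibility check.
    # Real formalisms use process refinement; plain trace inclusion is an
    # approximation used here only to convey the type-checking analogy.

    class Protocol:
        """A protocol approximated as a prefix-closed set of finite event traces."""
        def __init__(self, traces):
            self.traces = set(traces)

        def conforms_to(self, role):
            """A port is compatible with a role if every behavior the port
            can exhibit is one the role permits (trace inclusion)."""
            return self.traces <= role.traces

    # A client-server connector: the 'caller' role may request and then receive a reply.
    caller_role = Protocol({(), ("request",), ("request", "reply")})

    # A component port that only ever issues a single request.
    port = Protocol({(), ("request",)})

    print(port.conforms_to(caller_role))  # True: the port stays within the role's behaviors

In this reading, the type-checking analogy amounts to verifying that every port attached to a connector role stays within the behaviors that role permits.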

1,344 citations


Journal ArticleDOI
Pamela Zave, Michael Jackson
TL;DR: It is shown that all descriptions involved in requirements engineering should be descriptions of the environment and that certain control information is necessary for sound requirements engineering; the close association between domain knowledge and refinement of requirements is also explained.
Abstract: Research in requirements engineering has produced an extensive body of knowledge, but there are four areas in which the foundation of the discipline seems weak or obscure. This article shines some light in the “four dark corners,” exposing problems and proposing solutions. We show that all descriptions involved in requirements engineering should be descriptions of the environment. We show that certain control information is necessary for sound requirements engineering, and we explain the close association between domain knowledge and refinement of requirements. Together these conclusions explain the precise nature of requirements, specifications, and domain knowledge, as well as the precise nature of the relationships among them. They establish minimum standards for what information should be represented in a requirements language. They also make it possible to determine exactly what it means for requirements engineering to be successfully completed.
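
The relationships the article pins down are often summarized in a single entailment, written here in standard notation as a paraphrase rather than a quotation: with $K$ for domain knowledge, $S$ for the specification, and $R$ for the requirements,

    $S \wedge K \Rightarrow R$

that is, the specification, combined with knowledge of the environment, must be sufficient to guarantee that the requirements hold in that environment.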

769 citations


Journal ArticleDOI
TL;DR: Initial empirical studies indicate that the technique can significantly reduce the cost of regression testing modified software and is at least as precise as other safe regression test selection algorithms.
Abstract: Regression testing is an expensive but necessary maintenance activity performed on modified software to provide confidence that changes are correct and do not adversely affect other portions of the software. A regression test selection technique chooses, from an existing test set, tests that are deemed necessary to validate modified software. We present a new technique for regression test selection. Our algorithms construct control flow graphs for a procedure or program and its modified version and use these graphs to select tests that execute changed code from the original test suite. We prove that, under certain conditions, the set of tests our technique selects includes every test from the original test suite that can expose faults in the modified procedure or program. Under these conditions our algorithms are safe. Moreover, although our algorithms may select some tests that cannot expose faults, they are at least as precise as other safe regression test selection algorithms. Unlike many other regression test selection algorithms, our algorithms handle all language constructs and all types of program modifications. We have implemented our algorithms; initial empirical studies indicate that our technique can significantly reduce the cost of regression testing modified software.
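
A rough sketch of the selection step, assuming each test's coverage has been recorded as a set of control-flow-graph edges. The article's algorithm walks the two control flow graphs in parallel to identify "dangerous" edges; the simplified edge comparison and all names below are illustrative only.

    # Sketch of graph-based regression test selection over recorded edge coverage.
    # A test is re-selected if it traversed at least one edge whose target code
    # changed between the original and the modified version.

    def dangerous_edges(old_cfg, new_cfg):
        """Edges of the original CFG whose target statement differs in the new version."""
        return {edge for edge, stmt in old_cfg.items()
                if new_cfg.get(edge) != stmt}

    def select_tests(test_coverage, old_cfg, new_cfg):
        """Keep every test that traversed at least one changed (dangerous) edge."""
        changed = dangerous_edges(old_cfg, new_cfg)
        return [t for t, edges in test_coverage.items() if edges & changed]

    # Toy example: edge -> statement text at the edge's target.
    old_cfg = {("entry", "n1"): "x = 0", ("n1", "n2"): "y = x + 1"}
    new_cfg = {("entry", "n1"): "x = 0", ("n1", "n2"): "y = x + 2"}  # modified
    coverage = {"t1": {("entry", "n1")}, "t2": {("entry", "n1"), ("n1", "n2")}}

    print(select_tests(coverage, old_cfg, new_cfg))  # ['t2']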

721 citations


Journal ArticleDOI
TL;DR: This work uses formal specifications to describe the behavior of software components and, hence, to determine whether two components match, and gives precise definitions of not just exact match, but, more relevantly, various flavors of relaxed match.
Abstract: Specification matching is a way to compare two software components, based on descriptions of the components' behaviors. In the context of software reuse and library retrieval, it can help determine whether one component can be substituted for another or how one can be modified to fit the requirements of the other. In the context of object-oriented programming, it can help determine when one type is a behavioral subtype of another. We use formal specifications to describe the behavior of software components and, hence, to determine whether two components match. We give precise definitions of not just exact match, but, more relevantly, various flavors of relaxed match. These definitions capture the notions of generalization, specialization, and substitutability of software components. Since our formal specifications are pre- and postconditions written as predicates in first-order logic, we rely on theorem proving to determine match and mismatch. We give examples from our implementation of specification matching using the Larch Prover.
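
For example, writing a query specification as $Q = (pre_Q, post_Q)$ and a library component as $S = (pre_S, post_S)$, two of the matches discussed can be paraphrased as follows (these formulations restate the standard definitions; see the article for the full set of matches):

    exact pre/post match: $(pre_Q \Leftrightarrow pre_S) \wedge (post_Q \Leftrightarrow post_S)$
    plug-in match:        $(pre_Q \Rightarrow pre_S) \wedge (post_S \Rightarrow post_Q)$

Plug-in match captures substitutability: the library component may assume less and promise more than the query requires.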

568 citations


Journal ArticleDOI
TL;DR: The state of the art in the field of process-centered software engineering environments is evaluated, taking into account the systems developed by the authors in the past five years as well as the main characteristics of other well-known environments.
Abstract: Process-centered software engineering environments (PSEEs) are the most recent generation of environments supporting software development activities. They exploit an explicit representation of the process (called the process model) that specifies how to carry out software development activities, the roles and tasks of software developers, and how to use and control software development tools. A process model is therefore a vehicle to better understand and communicate the process. If it is expressed in a formal notation, it can be used to support a variety of activities such as process analysis, process simulation, and process enactment. PSEEs provide automatic support for these activities. They exploit languages based on different paradigms, such as Petri nets and rule-based systems. They include facilities to edit and analyze process models. By enacting the process model, a PSEE provides a variety of services, such as assistance for software developers, automation of routine tasks, invocation and control of software development tools, and enforcement of mandatory rules and practices. Several PSEEs have been developed, both as research projects and as commercial products. The initial deployment and exploitation of this technology have made it possible to produce a significant amount of experiences, comments, evaluations, and feedback. We still lack, however, consistent and comprehensive assessment methods that can be used to collect and organize this information. This article aims at contributing to the definition of such methods, by providing a systematic comparison grid and by accomplishing an initial evaluation of the state of the art in the field. This evaluation takes into account the systems that have been developed by the authors in the past five years, as well as the main characteristics of other well-known environments.
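
As a flavor of what "enacting a process model" means in a rule-based PSEE, here is a deliberately tiny sketch. The rules, state variables, and the enactment loop are invented for illustration and do not correspond to any particular environment surveyed in the article.

    # Illustrative-only sketch of rule-based process enactment: each rule pairs a
    # guard over the process state with a PSEE service and a state update.

    state = {"spec_reviewed": False, "code_written": False, "tests_passed": False}

    rules = [
        # (guard over the process state, service to perform, effect on the state)
        (lambda s: not s["spec_reviewed"],
         "assign spec review to analyst", {"spec_reviewed": True}),
        (lambda s: s["spec_reviewed"] and not s["code_written"],
         "notify developer to implement", {"code_written": True}),
        (lambda s: s["code_written"] and not s["tests_passed"],
         "invoke test tool", {"tests_passed": True}),
    ]

    def enact(state, rules):
        """Repeatedly fire the first enabled rule until no rule is enabled."""
        fired = True
        while fired:
            fired = False
            for guard, service, effect in rules:
                if guard(state):
                    print("PSEE service:", service)
                    state.update(effect)
                    fired = True
                    break

    enact(state, rules)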

146 citations


Journal ArticleDOI
TL;DR: An approach to choosing a retrieval method that utilizes minimal repository structure to effectively support the process of finding software components is outlined, and a retrieval system that compensates for the lack of explicit knowledge structures through a spreading activation retrieval process is demonstrated.
Abstract: Repositories for software reuse are faced with two interrelated problems: (1) acquiring the knowledge to initially construct the repository and (2) modifying the repository to meet the evolving and dynamic needs of software development organizations. Current software repository methods rely heavily on classification, which exacerbates acquisition and evolution problems by requiring costly classification and domain analysis efforts before a repository can be used effectively. This article outlines an approach that avoids these problems by choosing a retrieval method that utilizes minimal repository structure to effectively support the process of finding software components. The approach is demonstrated through a pair of proof-of-concept prototypes: PEEL, a tool to semiautomatically identify reusable components, and CodeFinder, a retrieval system that compensates for the lack of explicit knowledge structures through a spreading activation retrieval process. CodeFinder also allows component representations to be modified while users are searching for information. This mechanism adapts to the changing nature of the information in the repository and incrementally improves the repository while people use it. The combination of these techniques holds potential for designing software repositories that minimize up-front costs, effectively support the search process, and evolve with an organization's changing needs.
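
The spreading-activation idea can be sketched compactly: activation starts at the query terms and flows along weighted associations until related components accumulate enough activation to be ranked. The graph, weights, decay factor, and file names below are made up for the example; CodeFinder's actual network and retrieval process are richer.

    # Sketch of spreading-activation retrieval over a term/component network.
    import collections

    # Undirected associations between query terms and repository components.
    edges = {
        "sort": {"quicksort.c": 0.9, "mergesort.c": 0.8, "array": 0.5},
        "array": {"quicksort.c": 0.6, "binsearch.c": 0.7},
    }

    def neighbors(node):
        """Collect weighted neighbors in either direction of the association."""
        out = dict(edges.get(node, {}))
        for src, targets in edges.items():
            if node in targets:
                out[src] = targets[node]
        return out

    def spread(query_terms, steps=2, decay=0.5):
        """Propagate activation from the query terms through the network."""
        activation = collections.defaultdict(float)
        for t in query_terms:
            activation[t] = 1.0
        for _ in range(steps):
            new = collections.defaultdict(float, activation)
            for node, act in activation.items():
                for nbr, weight in neighbors(node).items():
                    new[nbr] += decay * act * weight
            activation = new
        return sorted(((n, a) for n, a in activation.items() if n.endswith(".c")),
                      key=lambda x: -x[1])

    print(spread({"sort"}))  # components ranked by accumulated activation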

142 citations


Journal ArticleDOI
TL;DR: This article considers the nature of the underlying formal models that will enable us to specify and reason about mobile computations, and employs the methods of UNITY together with a notation that is a highly modular extension of the UNITY programming notation.
Abstract: Mobile computing represents a major point of departure from the traditional distributed-computing paradigm. The potentially very large number of independent computing units, a decoupled computing style, frequent disconnections, continuous position changes, and the location-dependent nature of the behavior and communication patterns present designers with unprecedented challenges in the areas of modularity and dependability. So far, the literature on mobile computing is dominated by concerns having to do with the development of protocols and services. This article complements this perspective by considering the nature of the underlying formal models that will enable us to specify and reason about such computations. The basic research goal is to characterize fundamental issues facing mobile computing. We want to achieve this in a manner analogous to the way concepts such as shared variables and message passing help us understand distributed computing. The pragmatic objective is to develop techniques that facilitate the verification and design of dependable mobile systems. Toward this goal we employ the methods of UNITY. To focus on what is essential, we center our study on ad hoc networks, whose singular nature is bound to reveal the ultimate impact of movement on the way one computes and communicates in a mobile environment. To understand interactions we start with the UNITY concepts of union and superposition and consider direct generalizations to transient interactions. The motivation behind the transient nature of the interactions comes from the fact that components can communicate with each other only when they are within a certain range. The notation we employ is a highly modular extension of the UNITY programming notation. Reasoning about mobile computations relies on extensions to the UNITY proof logic.
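
The notion of a transient interaction can be illustrated schematically: two components share state only while a connectivity predicate holds, and movement switches the interaction on and off. The Python below is an informal model of that idea only; it is not the UNITY-based notation or proof logic the article develops, and all names in it are invented.

    # Schematic model of a transient interaction: two mobile components share a
    # variable only while they are at the same location.

    class Host:
        def __init__(self, name, location, msg=None):
            self.name, self.location, self.msg = name, location, msg

    def in_range(a, b):
        return a.location == b.location      # toy connectivity predicate

    def transient_share(a, b):
        """While connected, keep the shared variable 'msg' consistent."""
        if in_range(a, b):
            value = a.msg if a.msg is not None else b.msg
            a.msg = b.msg = value

    h1 = Host("h1", location=(0, 0), msg="hello")
    h2 = Host("h2", location=(5, 5))

    transient_share(h1, h2)
    print(h2.msg)            # None: hosts are out of range, no interaction

    h2.location = (0, 0)     # movement brings the hosts into range
    transient_share(h1, h2)
    print(h2.msg)            # 'hello': the transient sharing took effect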

123 citations


Journal ArticleDOI
TL;DR: It turns out that all major SCM models can be realized and integrated efficiently on top of the FFS, demonstrating the flexible and unifying nature of the version set model.
Abstract: Software configuration management (SCM) suffers from tight coupling between SCM versioning models and the imposed SCM processes. In order to adapt SCM tools to SCM processes, rather than vice versa, we propose a unified versioning model, the version set model. Version sets denote versions, components, and configurations by feature terms, that is, Boolean terms over (feature : value)-attributions. Through feature logic, we deduce consistency of abstract configurations as well as features of derived components and describe how features propagate in the SCM process; using feature implications, we integrate change-oriented and version-oriented SCM models. We have implemented the version set model in an SCM system called ICE, for Incremental Configuration Environment. ICE is based on a featured file system (FFS), where version sets are accessed as virtual files and directories. Using the well-known C preprocessor (CPP) representation, users can view and edit multiple versions simultaneously, while only the differences between versions are stored. It turns out that all major SCM models can be realized and integrated efficiently on top of the FFS, demonstrating the flexible and unifying nature of the version set model.
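
To make the CPP representation concrete, the sketch below selects one version from a multi-version text by evaluating #if conditions against a set of enabled features. The parsing is deliberately naive (only #if defined(...)/#else/#endif) and the function and feature names are invented; ICE's featured file system handles full feature terms and stores only the differences between versions.

    # Sketch: pick one version out of a CPP-style multi-version source text.

    source = """\
    #if defined(unix)
    open_socket();
    #else
    open_named_pipe();
    #endif
    log("started");
    """

    def select_version(text, features):
        """Keep the lines whose #if condition is satisfied by 'features'."""
        out, keep = [], [True]
        for line in text.splitlines():
            line = line.strip()
            if line.startswith("#if defined("):
                name = line[len("#if defined("):-1]
                keep.append(name in features)
            elif line.startswith("#else"):
                keep[-1] = not keep[-1]
            elif line.startswith("#endif"):
                keep.pop()
            elif all(keep):
                out.append(line)
        return "\n".join(out)

    print(select_version(source, {"unix"}))
    # open_socket();
    # log("started");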

101 citations


Journal ArticleDOI
TL;DR: This work presents a hybrid slicing technique that integrates dynamic information from a specific execution into a static slice analysis and allows the user to control the cost of hybrid slicing by limiting the amount of dynamic information used in computing the slice.
Abstract: Program slicing is an effective technique for narrowing the focus of attention to the relevant parts of a program during the debugging process. However, imprecision is a problem in static slices, since they are based on all possible executions that reach a given program point rather than the specific execution under which the program is being debugged. Dynamic slices, based on the specific execution being debugged, are precise but incur high run-time overhead due to the tracing information that is collected during the program's execution. We present a hybrid slicing technique that integrates dynamic information from a specific execution into a static slice analysis. The hybrid slice produced is more precise than the static slice and less costly than the dynamic slice. The technique exploits dynamic information that is readily available during debugging—namely, breakpoint information and the dynamic call graph. This information is integrated into a static slicing analysis to more accurately estimate the potential paths taken by the program. The breakpoints and call/return points, used as reference points, divide the execution path into intervals. By associating each statement in the slice with an execution interval, hybrid slicing provides information as to when a statement was encountered during execution. Another attractive feature of our approach is that it allows the user to control the cost of hybrid slicing by limiting the amount of dynamic information used in computing the slice. We implemented the hybrid slicing technique to demonstrate the feasibility of our approach.
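
The interval idea can be sketched in a few lines: breakpoint hits (and, in the article, call/return points) split the execution into consecutive intervals, and each sliced statement is tagged with the intervals in which it executed. The trace format and all names below are hypothetical, not the article's algorithm.

    # Sketch: tag statements in a static slice with the execution intervals
    # (delimited by breakpoint hits) in which they were actually encountered.

    def intervals_for(trace, breakpoints):
        """Map each executed statement to the interval indices where it appeared.
        A new interval starts after every breakpoint hit."""
        where = {}
        interval = 0
        for stmt in trace:
            where.setdefault(stmt, set()).add(interval)
            if stmt in breakpoints:
                interval += 1
        return where

    static_slice = {"s1", "s3", "s7"}          # from a conventional static analysis
    trace = ["s1", "s2", "s3", "s1", "s7"]     # statements hit in the debugged run
    seen = intervals_for(trace, breakpoints={"s3"})

    # Refine the static slice with the dynamic interval information.
    hybrid = {s: seen[s] for s in static_slice if s in seen}
    print(hybrid)   # e.g. {'s1': {0, 1}, 's3': {0}, 's7': {1}}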

66 citations


Journal ArticleDOI
TL;DR: The logic, methodology, and tools that comprise the prototype RTGIL environment are described and the use of the environment is illustrated with an example application.
Abstract: Concurrent real-time systems are among the most difficult systems to design because of the many possible interleavings of events and because of the timing requirements that must be satisfied. We have developed a graphical environment based on Real-Time Graphical Interval Logic (RTGIL) for specifying and reasoning about the designs of concurrent real-time systems. Specifications in the logic have an intuitive graphical representation that resembles the timing diagrams drawn by software and hardware engineers, with real-time constraints that bound the durations of intervals. The syntax-directed editor of the RTGIL environment enables the user to compose and edit graphical formulas on a workstation display; the automated theorem prover mechanically checks the validity of proofs in the logic; and the database and proof manager tracks proof dependencies and allows formulas to be stored and retrieved. This article describes the logic, methodology, and tools that comprise the prototype RTGIL environment and illustrates the use of the environment with an example application.
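
RTGIL formulas are drawn graphically, so no textual example can reproduce them exactly; the same kind of bounded-interval property can, however, be written in a conventional metric temporal logic, for instance

    $\Box\,(\mathit{request} \rightarrow \Diamond_{\leq 10}\, \mathit{response})$

that is, every request is followed by a response within 10 time units, which is the sort of bound on an interval's duration that an RTGIL timing diagram expresses pictorially.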

61 citations


Journal ArticleDOI
TL;DR: Three KBSE systems in which Description logics capture some of the requisite knowledge needed to support design, coding, and testing activities are discussed and some alternative approaches (to DLs) are surveyed.
Abstract: The increasing size and complexity of many software systems demand a greater emphasis on capturing and maintaining knowledge at many different levels within the software development process. This knowledge includes descriptions of the hardware and software components and their behavior, external and internal design specifications, and support for system testing. The knowledge-based software engineering (KBSE) research paradigm is concerned with systems that use formally represented knowledge, with associated inference procedures, to support the various subactivities of software development. As they grow in scale, KBSE systems must balance expressivity and inferential power with the real demands of knowledge base construction, maintenance, performance, and comprehensibility. Description logics (DLs) possess several features—a terminological orientation, a formal semantics, and efficient reasoning procedures—which offer an effective tradeoff of these factors. We discuss three KBSE systems in which DLs capture some of the requisite knowledge needed to support design, coding, and testing activities. We then survey some alternative approaches (to DLs) in KBSE systems. We close with a discussion of the benefits of DLs and ways to address some of their limitations.
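
The terminological style is easy to convey with a toy TBox, written in standard description-logic notation (the particular concepts and roles are invented for illustration, not drawn from the systems discussed):

    $\mathit{Driver} \sqsubseteq \mathit{Module} \sqcap \exists \mathit{controls}.\mathit{Device}$
    $\mathit{SerialDriver} \equiv \mathit{Driver} \sqcap \forall \mathit{controls}.\mathit{SerialPort}$

A DL classifier can then place SerialDriver below Driver automatically; this kind of subsumption reasoning is what makes DLs attractive for organizing and maintaining software knowledge.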

Journal ArticleDOI
TL;DR: It is shown that the conservative termination policy allows heap storage to be managed more efficiently than a less conservative policy, and that the rules for distributed termination of concurrent tasks guarantee that a task terminates only if it can no longer affect the outcome of an execution.
Abstract: This article analyzes the semantics of task dependence and termination in Ada. We use a contour model of Ada tasking in examining the implications of and possible motivation for the rules that determine when procedures and tasks terminate during execution of an Ada program. The termination rules prevent the data that belong to run-time instances of scope units from being deallocated prematurely, but they are unnecessarily conservative in this regard. For task instances that are created by invoking a storage allocator, we show that the conservative termination policy allows heap storage to be managed more efficiently than a less conservative policy. The article also examines the manner in which the termination rules affect the synchronization of concurrent tasks. Master-slave and client-server applications are considered. We show that the rules for distributed termination of concurrent tasks guarantee that a task terminates only if it can no longer affect the outcome of an execution. The article is meant to give programmers a better understanding of Ada tasking and to help language designers assess the strengths and weaknesses of the termination model.
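
As a schematic (and deliberately non-Ada) model of the dependence rule, the sketch below lets a master terminate only when every dependent task has terminated or is quiescent at a terminate alternative, mirroring the guarantee that a task terminates only when it can no longer affect the computation. The class, states, and names are invented for the illustration and do not render Ada semantics faithfully.

    # Schematic model of dependence-based termination: a subtree of tasks may
    # terminate together only when no dependent is still running.

    class Task:
        def __init__(self, name, state, dependents=()):
            self.name, self.state = name, state   # 'running' | 'at_terminate' | 'terminated'
            self.dependents = list(dependents)

    def may_terminate(master):
        """True when no task depending on 'master' can still affect the computation."""
        return all(d.state in ("terminated", "at_terminate") and may_terminate(d)
                   for d in master.dependents)

    slave = Task("slave", "at_terminate")
    master = Task("master", "running", dependents=[slave])

    print(may_terminate(master))   # True: the slave is quiescent, so the subtree may terminate
    slave.state = "running"
    print(may_terminate(master))   # False: a running dependent may still affect the outcome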

Journal ArticleDOI
TL;DR: This correspondence proves that the results of that article are incorrect, refuting the author's claim that the new versions of the W-method all have the same fault detection capability as the original.
Abstract: A previous ACM TOSEM article by Ph. Bernhard (“A Reduced Test Suite for Protocol Conformance Testing,” ACM Transactions on Software Engineering and Methodology, Vol. 3, No. 3, July 1994, pages 201-220) describes three new versions of the so-called W-method for solving the protocol-testing problem, i.e., solving the Mealy machine equivalence problem. The author claims that these versions all have the same fault detection capability as the original W-method. In this correspondence we prove that the results of that article are incorrect.