Journal ArticleDOI

A Classification and Survey of Analysis Strategies for Software Product Lines

TL;DR: A classification of product-line analyses is proposed to enable systematic research and application in software-product-line engineering, and a research agenda is developed to guide future research on product-line analyses.
Abstract: Software-product-line engineering has gained considerable momentum in recent years, both in industry and in academia. A software product line is a family of software products that share a common set of features. Software product lines challenge traditional analysis techniques, such as type checking, model checking, and theorem proving, in their quest to ensure the correctness and reliability of software. Simply creating and analyzing all products of a product line is usually not feasible, due to the potentially exponential number of valid feature combinations. Recently, researchers began to develop analysis techniques that take the distinguishing properties of software product lines into account, for example, by checking feature-related code in isolation or by exploiting variability information during analysis. The emerging field of product-line analyses is both broad and diverse, so it is difficult for researchers and practitioners to understand the similarities and differences among the proposed analyses. We propose a classification of product-line analyses to enable systematic research and application. Based on our insights from classifying and comparing a corpus of 123 research articles, we develop a research agenda to guide future research on product-line analyses.
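To make the scale problem concrete, the following sketch (with hypothetical feature names and constraints, not taken from the survey) enumerates the valid configurations of a tiny feature model by brute force; with n optional features, the candidate space already contains 2^n configurations.

```python
from itertools import product

# Hypothetical optional features of a small product line.
FEATURES = ["cache", "compress", "encrypt", "logging", "stats"]

# Illustrative cross-tree constraints; each maps a configuration
# (a dict of feature -> bool) to True when the configuration is valid.
CONSTRAINTS = [
    lambda c: not c["stats"] or c["logging"],        # stats requires logging
    lambda c: not (c["compress"] and c["encrypt"]),  # mutually exclusive here
]

def valid_configurations():
    """Enumerate all valid products -- feasible only for tiny models."""
    for values in product([False, True], repeat=len(FEATURES)):
        config = dict(zip(FEATURES, values))
        if all(check(config) for check in CONSTRAINTS):
            yield config

configs = list(valid_configurations())
print(f"{2 ** len(FEATURES)} candidates, {len(configs)} valid products")
# With n optional features there are 2^n candidates, which is why
# analyzing every product of a product line separately does not scale.
```

The analyses classified in the survey avoid exactly this enumeration, for example by checking feature-related code in isolation or by exploiting variability information during a single analysis pass.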
Citations
Proceedings ArticleDOI
31 May 2014
TL;DR: An approach is presented that uses build-time errors and a novel feature-effect heuristic to automatically extract configuration constraints from C code; the authors argue that the approach, tooling, and experimental results support researchers and practitioners working on variability-model re-engineering, evolution, and consistency-checking techniques.
Abstract: Highly-configurable systems allow users to tailor the software to their specific needs. Not all combinations of configuration options are valid though, and constraints arise for technical or non-technical reasons. Explicitly describing these constraints in a variability model allows reasoning about the supported configurations. To automate creating variability models, we need to identify the origin of such configuration constraints. We propose an approach which uses build-time errors and a novel feature-effect heuristic to automatically extract configuration constraints from C code. We conduct an empirical study on four highly-configurable open-source systems with existing variability models, having three objectives in mind: evaluate the accuracy of our approach, determine the recoverability of existing variability-model constraints using our analysis, and classify the sources of variability-model constraints. We find that both our extraction heuristics are highly accurate (93% and 77%, respectively), and that we can recover 19% of the existing variability-model constraints using our approach. However, we find that many of the remaining constraints require expert knowledge or more expensive analyses. We argue that our approach, tooling, and experimental results support researchers and practitioners working on variability-model re-engineering, evolution, and consistency-checking techniques.
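As a rough illustration of the build-time-error idea (not the authors' actual tooling, which works on real C code and presence conditions), the sketch below marks every configuration that would include an error-producing block as invalid; the configuration constraint is whatever rule excludes exactly those configurations. All feature and block names are made up.

```python
from itertools import product

FEATURES = ["A", "B", "C"]

# Hypothetical conditional code blocks: a presence condition plus a flag
# saying whether the block breaks the build (e.g., an #error directive).
BLOCKS = [
    (lambda c: c["A"] and not c["B"], True),  # #if A && !B  ->  #error
    (lambda c: c["C"], False),                # ordinary code guarded by C
]

def builds(config):
    """A configuration builds iff it includes no error block."""
    return not any(is_error for cond, is_error in BLOCKS if cond(config))

# Brute force over all configurations: every non-building one is evidence
# for a constraint (here, every config with A and not B, i.e., A requires B).
for values in product([False, True], repeat=len(FEATURES)):
    config = dict(zip(FEATURES, values))
    if not builds(config):
        print("invalid:", config)
```

The enumeration here is only for readability; an analysis of this kind must derive such constraints symbolically from presence conditions to scale to systems of realistic size.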

126 citations


Cites background from "A Classification and Survey of Analysis Strategies for Software Product Lines"

  • ...This is the classic approach to find consistency errors, which a user can subsequently fix in the implementation or in the variability model [19, 49, 50]....


  • ...Such checks have been the standard approach in previous work on finding bugs in configurable systems [19, 26, 50], where inconsistencies between the model and implementation are identified as errors....


  • ...There has been a lot of research directed at studying and ensuring the consistency of the problem and solution spaces [50]....


Proceedings ArticleDOI
15 Sep 2014
TL;DR: This study provides insights into the nature and occurrence of variability bugs in a large C software system, and shows in what ways variability affects and increases the complexity of software bugs.
Abstract: Feature-sensitive verification pursues effective analysis of the exponentially many variants of a program family. However, researchers lack examples of concrete bugs induced by variability, occurring in real large-scale systems. Such a collection of bugs is a requirement for goal-oriented research, serving to evaluate tool implementations of feature-sensitive analyses by testing them on real bugs. We present a qualitative study of 42 variability bugs collected from bug-fixing commits to the Linux kernel repository. We analyze each of the bugs, and record the results in a database. In addition, we provide self-contained simplified C99 versions of the bugs, facilitating understanding and tool evaluation. Our study provides insights into the nature and occurrence of variability bugs in a large C software system, and shows in what ways variability affects and increases the complexity of software bugs.
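The bugs in this study are C code, but the shape of a variability bug is easy to mimic. The following hypothetical sketch shows a defect that surfaces only in the one feature combination where two options interact, which is why testing a single default configuration misses it.

```python
def serve(request, *, cache_enabled=False, cache_initialized=False):
    """Toy request handler with a configuration-dependent defect."""
    cache = {} if cache_initialized else None
    if cache_enabled:
        # Defect: with cache_enabled but not cache_initialized, `cache`
        # is None and the membership test raises TypeError -- a bug that
        # exists only in this one feature combination.
        if request in cache:
            return cache[request]
        cache[request] = f"computed({request})"
    return f"computed({request})"

serve("x")                                              # fine
serve("x", cache_enabled=True, cache_initialized=True)  # fine
try:
    serve("x", cache_enabled=True)                      # the buggy combination
except TypeError as e:
    print("variability bug:", e)
```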

116 citations


Cites background from "A Classification and Survey of Analysis Strategies for Software Product Lines"

  • ...Family-based [33] analyses, a form of feature-sensitive analyses, tackle this problem by considering all configurable program variants as a single unit of analysis, instead of analyzing the individual variants separately....


Proceedings ArticleDOI
14 May 2016
TL;DR: In this article, the authors present a comparative study of 10 state-of-the-art sampling algorithms regarding their fault-detection capability and the size of their sample sets, and identify a number of technical challenges that arise when trying to avoid the limiting assumptions of previous work, which calls the practicality of certain sampling algorithms into question.
Abstract: Almost every software system provides configuration options to tailor the system to the target platform and application scenario. Often, this configurability renders the analysis of every individual system configuration infeasible. To address this problem, researchers have proposed a diverse set of sampling algorithms. We present a comparative study of 10 state-of-the-art sampling algorithms regarding their fault-detection capability and the size of their sample sets. The former is important to improve software quality and the latter to reduce the time of analysis. In a nutshell, we found that sampling algorithms with larger sample sets are able to detect more faults, but that simple algorithms with small sample sets, such as most-enabled-disabled, are the most efficient in most contexts. Furthermore, we observed that the limiting assumptions made in previous work influence the number of detected faults, the size of sample sets, and the ranking of algorithms. Finally, we have identified a number of technical challenges that arise when trying to avoid the limiting assumptions, which calls the practicality of certain sampling algorithms into question.
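For intuition about the most-enabled-disabled strategy mentioned above, here is a minimal sketch under simplifying assumptions: the feature model is invented, validity is decided by brute-force enumeration rather than the SAT-based reasoning a real sampling tool would use, and the sample set consists of just two configurations.

```python
from itertools import product

FEATURES = ["a", "b", "c", "d"]
CONSTRAINTS = [lambda cfg: not cfg["a"] or cfg["b"]]   # a requires b

def valid(cfg):
    return all(check(cfg) for check in CONSTRAINTS)

all_valid = [cfg for cfg in (dict(zip(FEATURES, v))
             for v in product([False, True], repeat=len(FEATURES)))
             if valid(cfg)]

# The whole sample set: one configuration with as many features enabled
# as validity allows, and one with as many disabled as validity allows.
most_enabled = max(all_valid, key=lambda cfg: sum(cfg.values()))
most_disabled = min(all_valid, key=lambda cfg: sum(cfg.values()))
print([most_enabled, most_disabled])
```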

114 citations

Proceedings ArticleDOI
25 Aug 2016
TL;DR: A dynamic analysis for Java based on variability-aware execution is developed and used to monitor executions of multiple small to medium-sized programs, in order to better understand interactions in practice; the study finds that the essential configuration complexity of these programs is much lower than the combinatorial explosion of the configuration space indicates.
Abstract: Quality assurance for highly-configurable systems is challenging due to the exponentially growing configuration space. Interactions among multiple options can lead to surprising behaviors, bugs, and security vulnerabilities. Analyzing all configurations systematically might be possible though if most options do not interact or interactions follow specific patterns that can be exploited by analysis tools. To better understand interactions in practice, we analyze program traces to characterize and identify where interactions occur on control flow and data. To this end, we developed a dynamic analysis for Java based on variability-aware execution and monitor executions of multiple small to medium-sized programs. We find that the essential configuration complexity of these programs is indeed much lower than the combinatorial explosion of the configuration space indicates. However, we also discover that the interaction characteristics that allow scalable and complete analyses are more nuanced than what is exploited by existing state-of-the-art quality assurance strategies.
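A core idea behind variability-aware execution is to run a program once over values that record the configurations under which they arise, splitting only where options actually interact. The sketch below is a drastically simplified, hypothetical rendering of that idea (the cited work targets Java; Python is used here only for illustration).

```python
# A "conditional value" stores one entry per configuration context, so a
# single entry can stand for many configurations at once; the execution
# only splits where an option actually influences the value.
class VValue:
    def __init__(self, entries):
        self.entries = entries          # list of (condition, value) pairs

    def map(self, f):
        """Apply f under every context without introducing new splits."""
        return VValue([(cond, f(value)) for cond, value in self.entries])

shared = VValue([(lambda cfg: True, 10)])            # one entry, all configs
split = VValue([(lambda cfg: cfg["fast"], 10),       # option "fast" interacts,
                (lambda cfg: not cfg["fast"], 20)])  # so the value splits
result = split.map(lambda v: v + 1)   # runs once per distinct value
print([value for _, value in result.entries])        # [11, 21]
```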

100 citations


Cites background from "A Classification and Survey of Analysis Strategies for Software Product Lines"

  • ...The software-product-line community has extensively investigated strategies to analyze large configuration spaces through sharing [61]....


  • ...Several quality-assurance strategies have been developed for detecting interactions in large configuration spaces [19,61]....


  • ...successfully demonstrated exploitable redundancies among configurations to share commonalities while analyzing many configurations at once [5,11,13,34,37,39,43,51,61]....


References
01 Nov 2009
TL;DR: Model checking tools, created by both academic and industrial teams, have resulted in an entirely novel approach to verification and test case generation that often enables engineers in the electronics industry to design complex systems with considerable assurance regarding the correctness of their initial designs.
Abstract: Turing Lecture from the winners of the 2007 ACM A.M. Turing Award. In 1981, Edmund M. Clarke and E. Allen Emerson, working in the USA, and Joseph Sifakis working independently in France, authored seminal papers that founded what has become the highly successful field of model checking. This verification technology provides an algorithmic means of determining whether an abstract model---representing, for example, a hardware or software design---satisfies a formal specification expressed as a temporal logic (TL) formula. Moreover, if the property does not hold, the method identifies a counterexample execution that shows the source of the problem. The progression of model checking to the point where it can be successfully used for complex systems has required the development of sophisticated means of coping with what is known as the state explosion problem. Great strides have been made on this problem over the past 28 years by what is now a very large international research community. As a result many major hardware and software companies are beginning to use model checking in practice. Examples of its use include the verification of VLSI circuits, communication protocols, software device drivers, real-time embedded systems, and security algorithms. The work of Clarke, Emerson, and Sifakis continues to be central to the success of this research area. Their work over the years has led to the creation of new logics for specification, new verification algorithms, and surprising theoretical results. Model checking tools, created by both academic and industrial teams, have resulted in an entirely novel approach to verification and test case generation. This approach, for example, often enables engineers in the electronics industry to design complex systems with considerable assurance regarding the correctness of their initial designs. Model checking promises to have an even greater impact on the hardware and software industries in the future. ---Moshe Y. Vardi, Editor-in-Chief
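To ground the description, here is a toy explicit-state checker: it explores the reachable states of a transition system by breadth-first search and, when an invariant fails, reports a counterexample execution. It illustrates only the general mechanism; industrial model checkers rely on symbolic representations and abstraction to combat the state-explosion problem mentioned above.

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """BFS over reachable states; return a counterexample path or None."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            trace = []                   # reconstruct the failing execution
            while state is not None:
                trace.append(state)
                state = parent[state]
            return list(reversed(trace))
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None                          # the invariant holds in all states

# Toy system: a counter that wraps at 5; the invariant claims it stays < 4.
print(check_invariant(0, lambda s: [(s + 1) % 5], lambda s: s < 4))
# -> [0, 1, 2, 3, 4]: the counterexample execution exposing the violation
```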

7,392 citations

Proceedings ArticleDOI
01 Jan 1977
TL;DR: In this paper, the abstract interpretation of programs is used to describe computations in another universe of abstract objects, so that the results of abstract execution give some information on the actual computations.
Abstract: A program denotes computations in some universe of objects. Abstract interpretation of programs consists in using that denotation to describe computations in another universe of abstract objects, so that the results of abstract execution give some information on the actual computations. An intuitive example (which we borrow from Sintzoff [72]) is the rule of signs. The text -1515 * 17 may be understood to denote computations on the abstract universe {(+), (-), (±)} where the semantics of arithmetic operators is defined by the rule of signs. The abstract execution -1515 * 17 → -(+) * (+) → (-) * (+) → (-), proves that -1515 * 17 is a negative number. Abstract interpretation is concerned with a particular underlying structure of the usual universe of computations (the sign, in our example). It gives a summary of some facets of the actual executions of a program. In general this summary is simple to obtain but inaccurate (e.g. -1515 + 17 → -(+) + (+) → (-) + (+) → (±)). Despite its fundamentally incomplete results, abstract interpretation allows the programmer or the compiler to answer questions which do not need full knowledge of program executions or which tolerate an imprecise answer (e.g. partial correctness proofs of programs ignoring the termination problems, type checking, program optimizations which are not carried out in the absence of certainty about their feasibility, …).
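The rule-of-signs example translates directly into code. This small sketch mirrors the abstract computations quoted above, including the precise result for -1515 * 17 and the deliberately imprecise one for -1515 + 17.

```python
POS, NEG, TOP = "(+)", "(-)", "(±)"      # the abstract universe of signs

def sign(n):
    """Abstraction: map a concrete integer to its sign (0 counts as (+))."""
    return POS if n >= 0 else NEG

def mul(a, b):
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG        # the rule of signs for *

def add(a, b):
    return a if a == b else TOP          # mixed signs: result unknown

print(mul(sign(-1515), sign(17)))  # (-): proves -1515 * 17 is negative
print(add(sign(-1515), sign(17)))  # (±): sound but deliberately imprecise
```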

6,829 citations

Journal ArticleDOI
Gregor Kiczales
TL;DR: This work proposes to use aspect-orientation to automate the calculation of statistics for database optimization and shows how nicely the update functionality can be modularized in an aspect and how easy it is to specify the exact places and the time when statistics updates should be performed to speed up complex queries.
Abstract: The performance of relational database applications often suffers. The reason is that query optimizers require accurate statistics about data in the database in order to provide optimal query execution plans. Unfortunately, the computation of these statistics must be initiated explicitly (e.g., within application code), and computing statistics takes some time. Moreover, it is not easy to decide when to update statistics of what tables in an application. A well-engineered solution requires adding source code usually in many places of an application. The issue of updating the statistics for database optimization is a crosscutting concern. Thus we propose to use aspect-orientation to automate the calculation. We show how nicely the update functionality can be modularized in an aspect and how easy it is to specify the exact places and the time when statistics updates should be performed to speed up complex queries. Due to the automatic nature, computation takes place on time for complex queries, only when necessary, and only for stale tables. The implementation language for the automated aspect-oriented statistics update concern is AspectJ, a well known and mature aspect-oriented programming language. The approach can however be implemented in any other aspect-oriented language. Unlike in traditional object-oriented pattern solutions, e.g. using the interceptor pattern, we do not have to modify existing code.
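The paper's implementation language is AspectJ; the sketch below conveys the same crosscutting idea with a Python decorator instead, and simplifies the policy by refreshing statistics immediately after each mutation rather than on demand before complex queries. All table and function names are hypothetical.

```python
stale_tables = set()

def refresh_statistics():
    """Recompute optimizer statistics, but only for stale tables."""
    for table in sorted(stale_tables):
        print(f"ANALYZE {table}")        # stand-in for the real statistics job
    stale_tables.clear()

def updates(table):
    """The 'aspect': weave a statistics update around any mutating call,
    without touching the body of the decorated function itself."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            stale_tables.add(table)      # the table is now stale
            refresh_statistics()         # simplified: refresh immediately
            return result
        return wrapper
    return decorator

@updates("orders")
def insert_order(row):
    print("inserted", row)

insert_order({"id": 1})   # prints: inserted {'id': 1} / ANALYZE orders
```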

5,161 citations


Additional excerpts

  • ...2004], and aspect-oriented programming [Kiczales et al. 1997]....


Book
07 Jan 1999

4,478 citations

ReportDOI
01 Nov 1990
TL;DR: This report establishes methods for performing a domain analysis, describes the products of the domain analysis process, and illustrates the application of domain analysis to a representative class of software systems.
Abstract: Successful software reuse requires the systematic discovery and exploitation of commonality across related software systems. By examining related software systems and the underlying theory of the class of systems they represent, domain analysis can provide a generic description of the requirements of that class of systems and a set of approaches for their implementation. This report will establish methods for performing a domain analysis and describe the products of the domain analysis process. To illustrate the application of domain analysis to a representative class of software systems, this report will provide a domain analysis of window management system software.

4,420 citations


"A Classification and Survey of Anal..." refers background or methods in this paper

  • ...Finally, we discuss the methodology of our survey in Section 2.4....


  • ...A feature is a prominent or distinctive user-visible behavior, aspect, quality, or characteristic of a software system [Kang et al. 1990]....


  • ...A feature diagram is a graphical representation of a variability model defining a hierarchy between features, in which each child feature depends on its parent feature [Kang et al. 1990]....

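Following the excerpt above: the parent-child dependency encoded in a feature diagram is mechanically checkable. This hypothetical sketch encodes a small feature tree and rejects configurations that select a child feature without its parent.

```python
# Hypothetical feature tree: child -> parent (None marks the root).
PARENT = {"db": None, "cache": "db", "lru": "cache", "stats": "db"}

def respects_hierarchy(selected):
    """Valid iff every selected feature's parent is also selected."""
    return all(PARENT[f] is None or PARENT[f] in selected for f in selected)

print(respects_hierarchy({"db", "cache", "lru"}))  # True
print(respects_hierarchy({"lru"}))                 # False: parents missing
```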