
Showing papers in "Software Quality Journal in 2012"


Journal ArticleDOI
TL;DR: An overview is provided of the approach developed by the Software Improvement Group for code analysis and quality consulting focused on software maintainability, which uses a standardized measurement model based on the ISO/IEC 9126 definition of maintainability and source code metrics.
Abstract: We provide an overview of the approach developed by the Software Improvement Group for code analysis and quality consulting focused on software maintainability. The approach uses a standardized measurement model based on the ISO/IEC 9126 definition of maintainability and source code metrics. Procedural standardization in evaluation projects further enhances the comparability of results. Individual assessments are stored in a repository that allows any system at hand to be compared to the industry-wide state of the art in code quality and maintainability. When a minimum level of software maintainability is reached, the certification body of TÜV Informationstechnik GmbH issues a Trusted Product Maintainability certificate for the software product.
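
As a rough illustration of how a metrics-based model can turn source code measurements into ratings, the sketch below maps two metrics to ordinal scores via thresholds and averages them; the metric names, threshold values, and aggregation are invented for this example and are not those of the SIG model.

```python
# Hypothetical threshold-based rating aggregation (illustrative only).
def rate(value, thresholds):
    """Map a metric value to a rating in 1..len(thresholds)+1; lower values score better."""
    rating = len(thresholds) + 1
    for limit in sorted(thresholds):
        if value <= limit:
            return rating
        rating -= 1
    return 1

metrics = {"avg_unit_size_loc": 18.0, "duplication_pct": 9.5}       # measured values
thresholds = {"avg_unit_size_loc": [15, 30, 60], "duplication_pct": [3, 5, 10]}

ratings = {name: rate(value, thresholds[name]) for name, value in metrics.items()}
maintainability = sum(ratings.values()) / len(ratings)              # naive average
print(ratings, round(maintainability, 1))
```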

168 citations


Journal ArticleDOI
TL;DR: SPL Conqueror shows how non-functional properties can be qualitatively specified and quantitatively measured in the context of SPLs and discusses the variant-derivation process in SPL Conqueror that reduces the effort of computing an optimal variant.
Abstract: A software product line (SPL) is a family of related programs of a domain. The programs of an SPL are distinguished in terms of features, which are end-user visible characteristics of programs. Based on a selection of features, stakeholders can derive tailor-made programs that satisfy functional requirements. Besides functional requirements, different application scenarios raise the need for optimizing non-functional properties of a variant. The diversity of application scenarios leads to heterogeneous optimization goals with respect to non-functional properties (e.g., performance vs. footprint vs. energy optimized variants). Hence, an SPL has to satisfy different and sometimes contradicting requirements regarding non-functional properties. Usually, the actually required non-functional properties are not known before product derivation and can vary for each application scenario and customer. Allowing stakeholders to derive optimized variants requires us to measure non-functional properties after the SPL is developed. Unfortunately, the high variability provided by SPLs complicates measurement and optimization of non-functional properties due to a large variant space. With SPL Conqueror, we provide a holistic approach to optimize non-functional properties in SPL engineering. We show how non-functional properties can be qualitatively specified and quantitatively measured in the context of SPLs. Furthermore, we discuss the variant-derivation process in SPL Conqueror that reduces the effort of computing an optimal variant. We demonstrate the applicability of our approach by means of nine case studies of a broad range of application domains (e.g., database management and operating systems). Moreover, we show that SPL Conqueror is implementation and language independent by using SPLs that are implemented with different mechanisms, such as conditional compilation and feature-oriented programming.
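
As a minimal illustration of deriving an optimized variant, the sketch below exhaustively enumerates a three-feature toy product line with invented per-feature footprint values and picks the smallest valid variant; SPL Conqueror's actual measurement and derivation process avoids such brute-force enumeration.

```python
# Toy brute-force variant optimization (feature names, constraint, and
# footprint values are made up; real SPLs are far too large for this).
from itertools import product

features = ["encryption", "compression", "statistics"]
footprint_kb = {"base": 120, "encryption": 40, "compression": 25, "statistics": 60}

def is_valid(sel):
    # invented cross-tree constraint: statistics requires compression
    return not (sel["statistics"] and not sel["compression"])

def footprint(sel):
    return footprint_kb["base"] + sum(footprint_kb[f] for f in features if sel[f])

variants = [dict(zip(features, bits)) for bits in product([False, True], repeat=len(features))]
best = min((v for v in variants if is_valid(v)), key=footprint)
print(best, footprint(best), "KB")
```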

161 citations


Journal ArticleDOI
TL;DR: A comprehensive Conceptual Modeling Quality Framework is proposed, bringing together two well-known quality frameworks: the framework of Lindland, Sindre, and Sølvberg (LSS) and that of Wand and Weber based on Bunge’s ontology (BWW).
Abstract: The goal of any modeling activity is a complete and accurate understanding of the real-world domain, within the bounds of the problem at hand and keeping in mind the goals of the stakeholders involved. High-quality representations are critical to that understanding. This paper proposes a comprehensive Conceptual Modeling Quality Framework, bringing together two well-known quality frameworks: the framework of Lindland, Sindre, and Sølvberg (LSS) and that of Wand and Weber based on Bunge's ontology (BWW). This framework builds upon the strengths of the LSS and BWW frameworks, bringing together and organizing the various quality cornerstones and then defining the many quality dimensions that connect one to another. It presents a unified view of conceptual modeling quality that can benefit both researchers and practitioners.

137 citations


Journal ArticleDOI
TL;DR: This paper reports the experience on applying t-wise techniques for SPL with two independent toolsets developed by the authors, and derives useful insights for pairwise and t-wise testing of product lines.
Abstract: Software Product Lines (SPL) are difficult to validate due to combinatorics induced by variability, which in turn leads to combinatorial explosion of the number of derivable products. Exhaustive testing in such a large product space is hardly feasible. Hence, one possible option is to test SPLs by generating test configurations that cover all possible interactions among t features (t-wise). This dramatically reduces the number of test products while ensuring reasonable SPL coverage. In this paper, we report our experience on applying t-wise techniques for SPL with two independent toolsets developed by the authors. One focuses on generality and splits the generation problem according to strategies; the other emphasizes efficient generation. To evaluate the respective merits of the approaches, measures such as the number of generated test configurations and the similarity between them are provided. By applying these measures, we were able to derive useful insights for pairwise and t-wise testing of product lines.
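
To make the coverage goal concrete, here is a naive greedy sketch that selects configurations of four optional features until every pair of feature assignments is covered; the authors' toolsets use considerably more sophisticated generation strategies.

```python
# Naive greedy pairwise (2-wise) configuration selection for four optional
# features; purely illustrative of the t-wise coverage goal.
from itertools import combinations, product

features = ["A", "B", "C", "D"]
all_configs = [dict(zip(features, bits)) for bits in product([0, 1], repeat=len(features))]

def pairs(config):
    # every pair of (feature, value) assignments exercised by this configuration
    return {((f, config[f]), (g, config[g])) for f, g in combinations(features, 2)}

uncovered = set().union(*(pairs(c) for c in all_configs))
total_pairs = len(uncovered)
suite = []
while uncovered:
    best = max(all_configs, key=lambda c: len(pairs(c) & uncovered))
    suite.append(best)
    uncovered -= pairs(best)

print(f"{len(suite)} configurations cover all {total_pairs} feature-value pairs")
```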

121 citations


Journal ArticleDOI
TL;DR: This contribution provides a mapping between feature models describing the common and variable parts of an SPL and a reusable test model in the form of statecharts and discusses adequacy criteria for SPL coverage under pairwise feature interaction testing and gives a generalization to the t-wise case.
Abstract: Testing software product lines (SPLs) is very challenging due to a high degree of variability leading to an enormous number of possible products. The vast majority of today's testing approaches for SPLs validate products individually using different kinds of reuse techniques for testing. Because of their reusability and adaptability capabilities, model-based approaches are suitable for describing variability and are therefore frequently used for implementation and testing purposes of SPLs. Due to the enormous number of possible products, individual product testing becomes increasingly infeasible. Pairwise testing offers one possibility to test a subset of all possible products. However, to the best of our knowledge, there is no contribution discussing and rating this approach in the SPL context. In this contribution, we provide a mapping between feature models describing the common and variable parts of an SPL and a reusable test model in the form of statecharts. Thereby, we interrelate feature model-based coverage criteria and test model-based coverage criteria such as control and data flow coverage and are therefore able to discuss the potentials and limitations of pairwise testing. We pay particular attention to test requirements for feature interactions, which constitute a major challenge in SPL engineering. We give a concise definition of feature dependencies and feature interactions from a testing point of view, discuss adequacy criteria for SPL coverage under pairwise feature interaction testing, and give a generalization to the t-wise case. The concept and implementation of our approach are evaluated by means of a case study from the automotive domain.

69 citations


Journal ArticleDOI
TL;DR: A systematic literature review with the objective of identifying and interpreting all the available studies from 1996 to 2010 that present quality attributes and/or measures for software product lines to provide a global vision of the state of the research within this area.
Abstract: It is widely accepted that software measures provide an appropriate mechanism for understanding, monitoring, controlling, and predicting the quality of software development projects. In software product lines (SPL), quality is even more important than in a single software product since, owing to systematic reuse, a fault or an inadequate design decision could be propagated to several products in the family. Over the last few years, a great number of quality attributes and measures for assessing the quality of SPL have been reported in the literature. However, no studies summarizing the current knowledge about them exist. This paper presents a systematic literature review with the objective of identifying and interpreting all the available studies from 1996 to 2010 that present quality attributes and/or measures for SPL. These attributes and measures have been classified using a set of criteria that includes the life cycle phase in which the measures are applied; the corresponding quality characteristics; their support for specific SPL characteristics (e.g., variability, compositionality); the procedure used to validate the measures, etc. We found 165 measures related to 97 different quality attributes. The results of the review indicated that 92% of the measures evaluate attributes that are related to maintainability. In addition, 67% of the measures are used during the design phase of Domain Engineering, and 56% are applied to evaluate the product line architecture. However, only 25% of them have been empirically validated. In conclusion, the results provide a global vision of the state of the research within this area in order to help researchers in detecting weaknesses, directing research efforts, and identifying new research lines. In particular, there is a need for new measures with which to evaluate both the quality of the artifacts produced during the entire SPL life cycle and other quality characteristics. There is also a need for more validation (both theoretical and empirical) of existing measures. In addition, our results may be useful as a reference guide for practitioners to assist them in the selection or the adaptation of existing measures for evaluating their software product lines.

68 citations


Journal ArticleDOI
TL;DR: The study results show that tailoring notations are not as mature as the industry requires if they are to provide the kind of support for process tailoring that fulfills the requirements identified, i.e., including security policies for the whole process.
Abstract: Organizations developing software-based systems or services often need to tailor process reference models--including product-oriented and project-oriented processes--to meet both their own characteristics and those of their projects. Existing process reference models, however, are often defined in a generic manner. They typically offer only limited mechanisms for adapting processes to the needs of organizational units, project goals, and project environments. This article presents a systematic literature review of peer-reviewed conference and journal articles published between 1990 and 2009. Our aim was both to identify requirements for process-tailoring notations and to analyze those tailoring mechanisms that currently exist and that consistently support process tailoring. The results show that the software engineering community has demonstrated an ever-increasing interest in software process tailoring, ranging from theoretical proposals regarding how to tailor processes to the scrutiny of practical experiences in organizations. Existing tailoring mechanisms principally permit the modeling of variations of activities, artifacts, or roles by insertion or deletion. Two types of variations have been proposed: the individual modification of process elements and the simultaneous variation of several process elements. Resolving tailoring primarily refers to selecting or deselecting optional elements or to choosing between alternatives. It is sometimes guided by explicitly defined processes and supported by tools or mechanisms from the field of knowledge engineering. The study results show that tailoring notations are not as mature as the industry requires if they are to provide the kind of support for process tailoring that fulfills the requirements identified, i.e., including security policies for the whole process, or carrying out one activity rather than another. A notation must therefore be built that takes these requirements into consideration in order to permit the representation of variant-rich processes and to use this variability to consistently support process tailoring.

54 citations


Journal ArticleDOI
TL;DR: The key benefits from applying the SOLIMVA methodology/tool within a Verification and Validation process are the ease of use and the support of a formal method consequently leading to a potential acceptance of the methodology in complex software projects.
Abstract: Natural Language (NL) deliverables suffer from ambiguity, poor understandability, incompleteness, and inconsistency. However, NL is straightforward, and stakeholders are familiar with using it to produce their software requirements documents. This paper presents a methodology, SOLIMVA, which aims at model-based test case generation considering NL requirements deliverables. The methodology is supported by a tool that makes it possible to automatically translate NL requirements into Statechart models. Once the Statecharts are derived, another tool, GTSC, is used to generate the test cases. SOLIMVA uses combinatorial designs to identify scenarios for system and acceptance testing, and it requires that a test designer define the application domain by means of a dictionary. Within the dictionary there is a Semantic Translation Model in which, among other features, a word sense disambiguation method helps in the translation process. Using a space application software product as a case study, we compared SOLIMVA with a previous manual approach developed by an expert under two aspects: test objective coverage and characteristics of the Executable Test Cases. In the first aspect, the SOLIMVA methodology not only covered the test objectives associated with the expert's scenarios but also proposed a better strategy, with test objectives clearly separated according to the directives of combinatorial designs. The Executable Test Cases derived in accordance with the SOLIMVA methodology not only possessed characteristics similar to the expert's Executable Test Cases but also predicted behaviors that did not exist in the expert's strategy. The key benefits of applying the SOLIMVA methodology/tool within a Verification and Validation process are its ease of use and, at the same time, its support for a formal method, consequently leading to a potential acceptance of the methodology in complex software projects.
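
The translation pipeline relies on a word sense disambiguation step; as a hedged sketch of what such a step can look like (not SOLIMVA's own algorithm), NLTK's simplified Lesk implementation can pick a WordNet sense for an ambiguous requirement term, assuming the wordnet corpus has been downloaded.

```python
# Sketch of a word sense disambiguation step using NLTK's simplified Lesk
# (requires: pip install nltk; nltk.download('wordnet')). The requirement
# sentence is invented.
from nltk.wsd import lesk

requirement = "the system shall switch the transmitter to the backup channel"
sense = lesk(requirement.split(), "channel", pos="n")
print(sense, "-", sense.definition() if sense else "no sense found")
```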

51 citations


Journal ArticleDOI
TL;DR: An approach for quality-aware analysis in software product lines using the orthogonal variability model (OVM) to represent variability is presented, together with the use of an off-the-shelf constraint programming solver to automatically perform the verification task.
Abstract: Software product line engineering is about producing a set of similar products in a certain domain. A variability model documents the variability amongst products in a product line. The specification of variability can be extended with quality information, such as measurable quality attributes (e.g., CPU and memory consumption) and constraints on these attributes (e.g., memory consumption should be in a range of values). However, the wrong use of constraints may cause anomalies in the specification which must be detected (e.g., the model could represent no products). Furthermore, based on such quality information, it is possible to carry out quality-aware analyses, i.e., the product line engineer may want to verify whether it is possible to build a product that satisfies a desired quality. The challenge for quality-aware specification and analysis is threefold. First, there should be a way to specify quality information in variability models. Second, it should be possible to detect anomalies in the variability specification associated with quality information. Third, there should be mechanisms to verify the variability model to extract useful information, such as the possibility to build a product that fulfils certain quality conditions (e.g., is there any product that requires less than 512 MB of memory?). In this article, we present an approach for quality-aware analysis in software product lines using the orthogonal variability model (OVM) to represent variability. We propose to map variability represented in the OVM associated with quality information to a constraint satisfaction problem and to use an off-the-shelf constraint programming solver to automatically perform the verification task. To illustrate our approach, we use a product line example from the automotive domain that was created in a national project by a leading car company. We have developed a prototype tool named FaMa-OVM, which serves as a proof of concept. We were able to identify void models, dead and false optional elements, and check whether the product line example satisfies quality conditions.
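
As a minimal sketch of the kind of check this enables, the toy model below encodes three invented features with memory-consumption attributes and one constraint, and asks the z3 SMT solver whether any product stays under 512 MB; the paper's FaMa-OVM tool targets OVM models and may use a different off-the-shelf solver.

```python
# Toy quality-aware check with the z3 solver (pip install z3-solver); feature
# names, memory values, and the constraint are invented for illustration.
from z3 import Bool, If, Implies, Solver, sat

nav, voice, traffic = Bool("navigation"), Bool("voice_control"), Bool("traffic_updates")
memory_mb = 256 + If(nav, 128, 0) + If(voice, 192, 0) + If(traffic, 96, 0)

s = Solver()
s.add(Implies(voice, nav))   # hypothetical variability constraint
s.add(memory_mb < 512)       # desired quality condition
if s.check() == sat:
    print("a product under 512 MB exists:", s.model())
else:
    print("no product satisfies the memory condition")
```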

49 citations


Journal ArticleDOI
TL;DR: A study of the relation between technical quality of software products and the issue resolution performance of their maintainers revealed that all but one of the metrics of the SIG quality model show a significant positive correlation with the resolution speed of defects, enhancements, or both.
Abstract: We performed an empirical study of the relation between technical quality of software products and the issue resolution performance of their maintainers. In particular, we tested the hypothesis that ratings for source code maintainability, as employed by the Software Improvement Group (SIG) quality model, are correlated with ratings for issue resolution speed. We tested the hypothesis for issues of type defect and of type enhancement. This study revealed that all but one of the metrics of the SIG quality model show a significant positive correlation with the resolution speed of defects, enhancements, or both.
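
For readers unfamiliar with this kind of analysis, a rank correlation between the two sets of ratings can be computed as sketched below; the data points are invented and the study's actual statistical setup is described in the paper.

```python
# Hedged sketch: Spearman rank correlation between maintainability ratings and
# issue resolution speed ratings (values are invented, not the study's data).
from scipy.stats import spearmanr

maintainability = [2.5, 3.0, 3.5, 4.0, 4.5, 2.0, 3.2, 4.1]
resolution_speed = [2.0, 2.8, 3.6, 3.9, 4.4, 2.2, 3.0, 4.3]

rho, p_value = spearmanr(maintainability, resolution_speed)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```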

48 citations


Journal ArticleDOI
TL;DR: This paper uses data from 50 projects performed at one of the largest banks in Sweden to identify factors that have an impact on software development cost and validates well-known factors for cost estimation.
Abstract: Software systems of today are often complex, making development costs difficult to estimate. This paper uses data from 50 projects performed at one of the largest banks in Sweden to identify factors that have an impact on software development cost. The relationship between factor states and project costs was assessed using ANOVA and regression analysis. Ten of the original 31 factors turned out to have an impact on software development project cost at the Swedish bank, including: number of function points, involved risk, number of budget revisions, primary platform, project priority, commissioning body's unit, commissioning body, number of project participants, project duration, and number of consultants. In order to be able to compare projects of different size and complexity, this study also considers software development productivity, defined as the amount of function points per working hour in a project. The study at the bank indicates that productivity is affected by factors such as performance of estimation and prognosis efforts, project type, number of budget revisions, existence of a testing conductor, presentation interface, and number of project participants. A discussion addressing how the productivity factors relate to cost estimation models and their factors is presented. Some of the factors found to have an impact on cost are already included in estimation models such as COCOMO II, TEAMATe, and SEER-SEM, for instance function points and software platform. Thus, this paper validates these well-known factors for cost estimation. However, several of the factors found in this study are not included in established models for software development cost estimation. Thus, this paper also provides indications for possible extensions of these models.
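
The two statistical techniques named in the abstract can be sketched as follows, with invented project data standing in for the bank's 50 projects: a one-way ANOVA for a categorical factor and a simple regression of cost on function points.

```python
# Illustrative ANOVA and regression on made-up project data (not the study's).
import numpy as np
from scipy.stats import f_oneway, linregress

# cost (person-hours) grouped by a hypothetical categorical factor (platform)
mainframe = [4200, 5100, 4800, 6100]
distributed = [2900, 3500, 3100, 2700]
web = [1800, 2400, 2100, 1900]
f_stat, p_anova = f_oneway(mainframe, distributed, web)

# regression of cost on the number of function points
function_points = np.array([120, 250, 310, 460, 520, 700])
cost = np.array([2100, 3900, 4500, 6800, 7400, 9800])
fit = linregress(function_points, cost)

print(f"ANOVA: F = {f_stat:.1f}, p = {p_anova:.4f}")
print(f"Regression: cost ~= {fit.slope:.1f} * FP + {fit.intercept:.0f}, R^2 = {fit.rvalue ** 2:.2f}")
```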

Journal ArticleDOI
TL;DR: An aspect-oriented SPL approach for the requirements phase that allows modelers to capture features, goals, and scenarios in a unified framework and to reason about stakeholders’ needs and perform trade-off analyses while considering undesirable interactions that are not obvious from the feature model is presented.
Abstract: Software Product Line Engineering concerns itself with domain engineering and application engineering. During domain engineering, the whole product family is modeled with a preferred flavor of feature models and additional models as required (e.g., domain models or scenario-based models). During application engineering, the focus shifts toward a single family member and the configuration of the member's features. Recently, aspectual concepts have been employed to better encapsulate individual features of a Software Product Line (SPL), but the existing body of SPL work does not include a unified reasoning framework that integrates aspect-oriented feature description artifacts with the capability to reason about stakeholders' goals while taking feature interactions into consideration. Goal-oriented SPL approaches have been proposed, but do not provide analysis capabilities that help modelers meet the needs of the numerous stakeholders involved in an SPL while at the same time considering feature interactions. We present an aspect-oriented SPL approach for the requirements phase that allows modelers (a) to capture features, goals, and scenarios in a unified framework and (b) to reason about stakeholders' needs and perform trade-off analyses while considering undesirable interactions that are not obvious from the feature model. The approach is based on the Aspect-oriented User Requirements Notation (AoURN) and helps identify, prioritize, and choose products based on analysis results provided by AoURN editor and analysis tools. We apply the AoURN-based SPL framework to the Via Verde SPL to demonstrate the feasibility of this approach through the selection of a Via Verde product configuration that satisfies stakeholders' needs and results in a high-level, scenario-based specification that is free from undesirable feature interactions.

Journal ArticleDOI
TL;DR: The aim in this work is to propose a theoretical harmonization process that supports organizations interested in introducing quality management and software development practices or concerned about improving those they already have, and to apply the theoretical comparison process to a real case, i.e., a Small Enterprise certified ISO 9001.
Abstract: In recent years, both the industrial and research communities in Software Engineering have shown special interest in Software Process Improvement (SPI). This is evidenced by the growing number of publications on the topic. The literature offers numerous quality frameworks for addressing SPI practices, which may be classified into two groups: ones that describe "what" should be done (ISO 9001, CMMI) and ones that describe "how" it should be done (Six Sigma, Goal Question Metrics (GQM)). When organizations decide to adopt improvement initiatives, many models may be implied, each leveraging the best practices provided, in the quest to address the improvement challenges as well as possible. At the same time, however, this may generate confusion and overlapping activities, as well as extra effort and cost. That, in turn, risks generating a series of inefficiencies and redundancies that end up leading to losses rather than to effective process improvement. Consequently, it is important to move toward a harmonization of quality frameworks, aiming to identify intersections and overlapping parts, as well as to create a multi-model improvement solution. Our aim in this work is twofold: first, we propose a theoretical harmonization process that supports organizations interested in introducing quality management and software development practices or concerned about improving those they already have. This is done with specific reference to the CMMI-DEV and ISO 9001 models in the direction "ISO to CMMI-DEV", showing how GQM is used to define operational goals that address ISO 9001 statements, reusable in CMMI appraisals. Second, we apply the theoretical comparison process to a real case, i.e., a small enterprise certified under ISO 9001.

Journal ArticleDOI
TL;DR: A rigorous and tooled approach in which techniques from Software Product Line (SPL) engineering are reused and extended to manage variability in service and workflow descriptions and the user is assisted in obtaining a consistent workflow is presented.
Abstract: The development of scientific workflows is evolving toward the systematic use of service-oriented architectures, enabling the composition of dedicated and highly parameterized software services into processing pipelines. Building consistent workflows then becomes a cumbersome and error-prone activity, as users cannot manage such large-scale variability. This paper presents a rigorous and tooled approach in which techniques from Software Product Line (SPL) engineering are reused and extended to manage variability in service and workflow descriptions. Composition can be facilitated while ensuring consistency. Services are organized in a rich catalog, which is structured as an SPL according to the common and variable concerns captured for all services. By relying on sound merging techniques on the feature models that make up the catalog, reasoning about the compatibility between connected services is made possible. Moreover, an entire workflow is then seen as a multiple SPL (i.e., a composition of several SPLs). When services are configured within it, the propagation of variability choices is automated with appropriate techniques, and the user is assisted in obtaining a consistent workflow. The proposed approach is completely supported by a combination of dedicated tools and languages. Illustrations and experimental validations are provided using medical imaging pipelines, which are representative of current scientific workflows in many domains.

Journal ArticleDOI
TL;DR: Defects found by developers had the highest fix rates while those revealed by specialized testers had the lowest; it is important to understand the diversity of individuals participating in software testing and the relevance of validation from the end users' viewpoint.
Abstract: There is a recognized disconnect between testing research and industry practice, and more studies are needed on understanding how testing is conducted in real-world circumstances instead of demonstrating the superiority of specific methods. Recent literature indicates that testing is a cross-cutting activity that involves various organizational roles rather than the sole involvement of specialized testers. This research empirically investigates how testing involves employees in varying organizational roles in software product companies. We studied the organization and values of testing using an exploratory case study methodology through interviews, defect database analysis, workshops, analyses of documentation, and informal communications at three software product companies. We analyzed which employee groups test software in the case companies, and how many defects they find. Two companies organized testing as a team effort, and one company had a specialized testing group because of its different development model. We found evidence that testing was not an action conducted only by testing specialists. Testing by individuals with customer contact and domain expertise was an important validation method. We discovered that defects found by developers had the highest fix rates while those revealed by specialized testers had the lowest. Defect importance was susceptible to organizational competition for resources (i.e., overvaluing defects of the reporter's own products or projects). We conclude that it is important to understand the diversity of individuals participating in software testing and the relevance of validation from the end users' viewpoint. Future research is required to evaluate testing approaches for diverse organizational roles. Finally, to improve defect information, we suggest increasing automation in defect data collection.

Journal ArticleDOI
TL;DR: A comprehensive investigation on the impact of data sampling followed by attribute selection on the defect predictors built with imbalanced data finds that attribute selection is more efficient when applied after data sampling, and defect prediction performance generally improves after applying data sampling and feature selection.
Abstract: A timely detection of high-risk program modules in high-assurance software is critical for avoiding the high consequences of operational failures. While software risk can initiate from external sources, such as management or outsourcing, software quality is adversely affected when internal software risks are realized, such as improper practice of standard software processes or lack of a defined software quality infrastructure. Practitioners employ various techniques to identify and rectify high-risk or low-quality program modules. Effectiveness of detecting such modules is affected by the software measurements used, making feature selection an important step during software quality prediction. We use a wrapper-based feature ranking technique to select the optimal set of software metrics to build defect prediction models. We also address the adverse effects of class imbalance (very few low-quality modules compared to high-quality modules), a practical problem observed in high-assurance systems. Applying a data sampling technique followed by feature selection is a distinctive contribution of our work. We present a comprehensive investigation on the impact of data sampling followed by attribute selection on the defect predictors built with imbalanced data. The case study data are obtained from several real-world high-assurance software projects. The key results are that attribute selection is more efficient when applied after data sampling, and defect prediction performance generally improves after applying data sampling and feature selection.
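
A rough sketch of the sampling-then-selection order on a synthetic imbalanced dataset is shown below; the paper works with real high-assurance project data and specific wrapper-based rankers, whereas this example uses scikit-learn with a simple single-feature wrapper score.

```python
# Sketch: under-sample the majority class first, then rank software metrics by a
# wrapper-style score (synthetic data; not the paper's datasets or exact method).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)

# 1) data sampling: random under-sampling of the majority (not-fault-prone) class
rng = np.random.default_rng(0)
minority = np.where(y == 1)[0]
majority = rng.choice(np.where(y == 0)[0], size=len(minority), replace=False)
idx = np.concatenate([minority, majority])
X_bal, y_bal = X[idx], y[idx]

# 2) feature (attribute) selection: score each metric by the cross-validated
#    accuracy of a small classifier trained on that metric alone
scores = [cross_val_score(DecisionTreeClassifier(max_depth=3, random_state=0),
                          X_bal[:, [j]], y_bal, cv=5).mean()
          for j in range(X_bal.shape[1])]
top_metrics = np.argsort(scores)[::-1][:5]
print("selected metric indices:", top_metrics)
```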

Journal ArticleDOI
TL;DR: A Bayesian decision support model is described, designed to help enterprise IT system decision-makers evaluate the consequences of their decisions by analyzing various scenarios, based on expert elicitation from 50 experts on IT systems availability, obtained through an electronic survey.
Abstract: Ensuring the availability of enterprise IT systems is a challenging task. The factors that can bring systems down are numerous, and their impact on various system architectures is difficult to predict. At the same time, maintaining high availability is crucial in many applications, ranging from control systems in the electric power grid, through electronic trading systems on the stock market, to specialized command and control systems for military and civilian purposes. This paper describes a Bayesian decision support model, designed to help enterprise IT system decision-makers evaluate the consequences of their decisions by analyzing various scenarios. The model is based on expert elicitation from 50 experts on IT systems availability, obtained through an electronic survey. The Bayesian model uses a leaky Noisy-OR method to weigh together the expert opinions on 16 factors affecting systems availability. Using this model, the effect of changes to a system can be estimated beforehand, providing decision support for improvement of enterprise IT systems availability. The Bayesian model thus obtained is then integrated within a standard, reliability block diagram-style mathematical model for assessing availability on the architecture level. In this model, the IT systems play the role of building blocks. The overall assessment framework thus addresses measures to ensure high availability both on the level of individual systems and on the level of the entire enterprise architecture. Examples are presented to illustrate how the framework can be used by practitioners aiming to ensure high availability.
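
The leaky Noisy-OR combination itself is compact enough to sketch: the probability that at least one active factor (or the leak) brings the system down is one minus the product of the per-factor "no failure" probabilities. The factor names and numbers below are invented; the paper elicits 16 factors from experts.

```python
# Minimal leaky Noisy-OR sketch (invented factors and probabilities).
def leaky_noisy_or(cause_probs, leak=0.01):
    """P(system down) given causal probabilities of the factors that are present."""
    p_up = 1.0 - leak
    for p in cause_probs:
        p_up *= 1.0 - p
    return 1.0 - p_up

active_factors = {"no_redundant_hardware": 0.10,
                  "poor_change_management": 0.07,
                  "no_monitoring": 0.04}
print(f"P(downtime) = {leaky_noisy_or(active_factors.values()):.3f}")
```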

Journal ArticleDOI
TL;DR: This paper presents tool support for an Experience Factory approach for disseminating and improving practices used in an organization and indicates that organizational characteristics influence how practices and experiences can be used.
Abstract: Knowledge management in software engineering and software process improvement activities poses challenges as initiatives are deployed. Most existing approaches are either too expensive to deploy or do not take an organization's specific needs into consideration. There is thus a need for scalable improvement approaches that leverage knowledge already residing in the organizations. This paper presents tool support for an Experience Factory approach for disseminating and improving practices used in an organization. Experiences from using practices in development projects are captured in postmortems and provide iteratively improved decision support for identifying what practices work well and what needs improvement. An initial evaluation of using the tool for organizational improvement has been performed, utilizing both academia and industry. The results from the evaluation indicate that organizational characteristics influence how practices and experiences can be used. Experiences collected in postmortems are estimated to have little effect on improvements to practices used throughout the organization. However, in organizations where different practices are used in different parts of the organization, making practices available together with experiences from use, as well as having context information, can influence decisions on what practices to use in projects.

Journal ArticleDOI
TL;DR: A four-phase approach is proposed to guide the construction of test case refactorings for design patterns; the approach is introduced using some well-known design patterns and its feasibility is evaluated by means of test coverage.
Abstract: In the current trend, the Extreme Programming methodology is widely adopted by small and medium-sized projects for dealing with rapidly or indefinitely changing requirements. The test-first strategy and code refactoring are important practices of Extreme Programming for rapid development and quality support. The test-first strategy emphasizes that test cases are designed before system implementation to keep the correctness of artifacts during software development, whereas refactoring is the removal of "bad smell" code for improving quality without changing its semantics. However, the test-first strategy may conflict with code refactoring in the sense that the original test cases may be broken or inefficient for testing programs that are revised by code refactoring. In general, developers revise the test cases manually since this is not complicated. However, when developers perform a pattern-based refactoring to improve quality, the effort of revising the test cases is much greater than in simple code refactoring. In our observation, a pattern-based refactoring is composed of many simple and atomic code refactorings. If we have the composition relationship and the mapping rules between code refactoring and test case refactoring, we may infer a test case revision guideline for pattern-based refactoring. Based on this idea, in this research we propose a four-phase approach to guide the construction of test case refactorings for design patterns. We also introduce our approach by using some well-known design patterns and evaluate its feasibility by means of test coverage.

Journal ArticleDOI
TL;DR: A case study of a large, industrial embedded system, giving examples of what kinds of analyses could be realized and demonstrating the feasibility of implementing such analyses, and recommendations on how to realize system-specific analyses and how to get them adopted by industry.
Abstract: In this paper, we explore the approach of utilizing system-specific static analyses of code with the goal of improving software quality for specific software systems. Specialized analyses, tailored for a particular system, make it possible to take advantage of system/domain knowledge that is not available to more generic analyses. Furthermore, analyses can be selected and/or developed in order to best meet the challenges and specific issues of the system at hand. As a result, such analyses can be used as a complement to more generic code analysis tools because they are likely to have a better impact on (business) concerns such as improving certain software quality attributes and reducing certain classes of failures. We present a case study of a large, industrial embedded system, giving examples of what kinds of analyses could be realized and demonstrating the feasibility of implementing such analyses. We synthesize lessons learned based on our case study and provide recommendations on how to realize system-specific analyses and how to get them adopted by industry.
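
As a toy illustration of what "system-specific" can mean, the sketch below encodes a project-specific rule (an invented convention that a low-level routine must only be reached through a wrapper) as a small AST-based check; the system studied in the paper is an embedded code base and its analyses are naturally different.

```python
# Toy system-specific static check using Python's ast module; the rule and the
# function names are hypothetical.
import ast

BANNED_CALL = "raw_bus_write"   # invented project convention: use safe_bus_write()

source = '''
def update_sensor(value):
    raw_bus_write(0x2A, value)
'''

for node in ast.walk(ast.parse(source)):
    if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
            and node.func.id == BANNED_CALL):
        print(f"line {node.lineno}: direct call to {BANNED_CALL}; use safe_bus_write() instead")
```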