Journal ArticleDOI

Estimating software testing complexity

01 Dec 2013-Information & Software Technology (Butterworth-Heinemann)-Vol. 55, Iss: 12, pp 2125-2139
TL;DR: This paper defines a new measure of the difficulty for a computer to generate test cases, called Branch Coverage Expectation (BCE), which correlates with code coverage much more strongly than existing complexity measures.
Abstract: Context: Complexity measures provide information about software artifacts. A measure of the difficulty of testing a piece of code would be very useful for controlling the test phase. Objective: The aim of this paper is the definition of a new measure of the difficulty for a computer to generate test cases, which we call Branch Coverage Expectation (BCE). We also analyze the most common complexity measures and the most important features of a program, in order to discover whether a relationship exists between them and the code coverage of an automatically generated test suite. Method: The definition of this measure is based on a Markov model of the program. This model is used not only to compute the BCE, but also to provide an estimation of the number of test cases needed to reach a given coverage level in the program. To check our proposal, we perform a theoretical validation and carry out an empirical validation study using 2600 test programs. Results: The results show that the previously existing measures are of little use for estimating the difficulty of testing a program, because they are not highly correlated with code coverage. Our proposed measure correlates with code coverage much more strongly than the existing complexity measures. Conclusion: The high correlation of our measure with code coverage suggests that BCE is a very promising way of measuring the difficulty of automatically testing a program, and it is useful for predicting the behavior of an automatic test case generator.
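The estimation idea in the abstract can be made concrete with a short sketch. This is not the authors' BCE computation; it simply assumes that a per-branch probability of being exercised by a random test case is already available (in the paper such probabilities come from a Markov model of the program) and derives the expected coverage and the number of test cases needed for a target level. All names are illustrative.

```python
import numpy as np

def expected_branch_coverage(branch_probs, n_tests):
    """Expected fraction of branches covered by n_tests random test cases.

    branch_probs[b] is the (assumed given) probability that one random
    test case exercises branch b; in the paper such probabilities are
    derived from a Markov model of the program.
    """
    p = np.asarray(branch_probs, dtype=float)
    return float(np.mean(1.0 - (1.0 - p) ** n_tests))

def tests_needed(branch_probs, target, max_tests=10**6):
    """Smallest number of test cases whose expected coverage meets target."""
    for n in range(1, max_tests + 1):
        if expected_branch_coverage(branch_probs, n) >= target:
            return n
    return None

# Three easy branches and one hard-to-reach branch: the rare branch
# dominates the estimated testing effort.
print(tests_needed([0.9, 0.8, 0.5, 0.01], 0.90))
```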


Citations
Proceedings ArticleDOI
20 Mar 2018
TL;DR: This paper takes the first steps towards defining machine learning models that predict the branch coverage achieved by test data generation tools, considering well-known code metrics as features.
Abstract: Software testing is a crucial component in modern continuous integration development environments. Ideally, at every commit, all the system's test cases should be executed and, moreover, new test cases should be generated for the new code. This is especially true in a Continuous Test Generation (CTG) environment, where the automatic generation of test cases is integrated into the continuous integration pipeline. Furthermore, developers want to achieve a minimum level of coverage for every build of their systems. Since it is not feasible either to execute all the test cases or to generate new ones for all the classes at every commit, developers have to select which subset of classes to test. In this context, knowing a priori the branch coverage that can be achieved with test data generation tools might give useful indications for answering such a question. In this paper, we take the first steps towards the definition of machine learning models to predict the branch coverage achieved by test data generation tools. We conduct a preliminary study considering well-known code metrics as features. Despite the simplicity of these features, our results show that using machine learning to predict branch coverage in automated testing is a viable and feasible option.
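As a hedged illustration of what such a prediction model could look like (the study's actual features and learner are not reproduced here), the following sketch trains a regressor on synthetic stand-in data, where each row holds code metrics for one class and the target is the branch coverage reached on it:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: columns mimic common code metrics
# (cyclomatic complexity, LOC, number of branches).
X = rng.uniform(size=(500, 3)) * [50, 2000, 100]
# Toy target: coverage degrades as complexity and branch count grow.
y = np.clip(1.0 - 0.004 * X[:, 0] - 0.003 * X[:, 2]
            + rng.normal(0, 0.05, 500), 0.0, 1.0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
print("cross-validated R^2:",
      cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```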

22 citations

Journal ArticleDOI
TL;DR: It is argued that knowing a priori the branch coverage that can be achieved with test-data generation tools can help developers make informed decisions, and the possibility of using source-code metrics to predict the coverage achieved by test-data generation tools is investigated.

22 citations

Journal ArticleDOI
TL;DR: Fine-grained problem handling reports provide a wider scope of possible measures for assessing the relevant development processes, and significantly enhance the evaluation and refinement of software development processes as well as reliability prediction.
Abstract: Context: Although many papers have been published on software development and defect prediction techniques, problem reports in real projects quite often differ from those described in the literature. Hence, there is still a need for deeper exploration of case studies from industry. Objective: The aim of this study is to present the impact of fine-grained problem reports on improving the evaluation of testing and maintenance processes. It is targeted at projects involving several releases and complex schemes of problem handling, and is based on our experience gained while monitoring several commercial projects. Method: Extracting certain features from detailed problem reports, we derive various measures and present analysis models which characterize and visualize the effectiveness of testing and problem resolution processes. The considered reports describe types of problems (e.g. defects), their locations in project versions and software modules, the ways they were resolved, etc. The performed analysis covers eleven projects developed in the same company. This study is exploratory research with some explanatory features. Moreover, having identified some drawbacks, we present extensions of problem reports and their analysis which have been verified in another industrial case study project. Results: Fine-grained (accurate) problem handling reports provide a wider scope of possible measures for assessing the relevant development processes. This is helpful in controlling single projects (local perspective) as well as in managing these processes in the whole company (global perspective). Conclusion: Detailed problem handling reports extend the space and quality of statistical analysis; they provide significant enhancement in the evaluation and refinement of software development processes as well as in reliability prediction.
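To make the kind of measure derivation described above concrete, here is a minimal, hypothetical sketch; the report fields and the two measures are illustrative, not the study's actual report schema:

```python
import pandas as pd

# Hypothetical fine-grained problem reports: one row per report, with
# fields the abstract mentions (problem type, affected module/release,
# whether and how the problem was resolved).
reports = pd.DataFrame({
    "type":     ["defect", "defect", "enhancement", "defect", "defect"],
    "module":   ["parser", "parser", "ui", "net", "parser"],
    "release":  ["1.0", "1.0", "1.1", "1.1", "1.1"],
    "resolved": [True, True, True, False, True],
})

# A per-module defect-count measure (local perspective) ...
per_module = (reports[reports["type"] == "defect"]
              .groupby("module").size().rename("defects"))
# ... and a resolution-effectiveness measure per release (global view).
per_release = reports.groupby("release")["resolved"].mean()
print(per_module, per_release, sep="\n")
```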

17 citations

Proceedings ArticleDOI
01 Jul 2017
TL;DR: This paper proposes an adaptive fitness function based on branch hardness; by tuning its parameters, the search hardness of generating test data, evaluated by the variation coefficient of the fitness function, can be minimized, making it more flexible than traditional counterparts.
Abstract: Search-based software testing has received great attention as a means of automating test data generation, with the goal of improving various coverage criteria. In this paper, we deal with path coverage; concretely, we focus on the path that is the most difficult to cover. One major limitation of search-based testing is an inefficient and insufficiently informed fitness function. To address this problem, we propose an adaptive fitness function based on branch hardness. Branch hardness is measured by the expected number of visits to each branch in the program, which is modeled as an absorbing discrete-time Markov chain. By tuning the parameters of branch hardness heuristically, the search hardness of generating test data, evaluated by the variation coefficient of the fitness function, can be minimized. Therefore, this new fitness function is more flexible than its traditional counterparts. In addition, we point out that the present definition of branch distance and the use of normalizing functions are problematic, and propose some improvements. Finally, an empirical study reveals promising results for our proposal.
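The expected-visits computation the abstract refers to is standard absorbing-Markov-chain theory: for a transient-to-transient transition matrix Q, the fundamental matrix N = (I - Q)^-1 gives the expected number of visits to each transient state. A minimal sketch with made-up transition probabilities (not taken from the paper):

```python
import numpy as np

# Toy control-flow chain: states 0..2 are transient (branches), and the
# program exit is absorbing. Q holds transient-to-transient transition
# probabilities; each row's missing mass flows to the absorbing exit.
Q = np.array([[0.0, 0.6, 0.3],
              [0.0, 0.0, 0.5],
              [0.2, 0.0, 0.0]])

# Fundamental matrix N = (I - Q)^-1: N[i, j] is the expected number of
# visits to transient state j when starting from state i.
N = np.linalg.inv(np.eye(3) - Q)
expected_visits_from_entry = N[0]  # start at the entry state
print(expected_visits_from_entry)  # small values = hard-to-reach branches
```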

14 citations

References
Journal ArticleDOI
TL;DR: A new, meaningful lower bound on the number of test cases needed to achieve branch coverage is introduced as the maximum number of unconstrained arcs in a ddgraph that are mutually weakly incomparable; this bound fits into the testability model of Bache and Mullerburg (1990).

22 citations


"Estimating software testing complex..." refers background in this paper

  • ...Some works try to compute the lower bound [7] of the test cases required, and other works try to provide better understanding on the testing criterion used to generate those test cases [23]....

    [...]

Journal ArticleDOI
TL;DR: An attempt has been made to compare a cognitive complexity measure, in terms of Weyuker's nine properties, with other complexity measures, such as McCabe's, Halstead's and Oviedo's.
Abstract: Weyuker's properties have been suggested as a guiding tool for identifying a good and comprehensive complexity measure. In this paper, an attempt has been made to compare a cognitive complexity measure, in terms of Weyuker's nine properties, with other complexity measures, such as McCabe's, Halstead's and Oviedo's. Our intention is to study what kind of new information about the measures these properties are able to give.
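For reference, a few of Weyuker's nine properties, in the usual notation where |P| is the complexity assigned to program P and P;Q is the composition of P and Q:

```latex
\begin{align*}
&\text{(P1)}\quad (\exists P)(\exists Q)\; |P| \neq |Q|
  && \text{not all programs are equally complex} \\
&\text{(P5)}\quad (\forall P)(\forall Q)\; |P| \le |P;Q| \text{ and } |Q| \le |P;Q|
  && \text{composition never decreases complexity} \\
&\text{(P9)}\quad (\exists P)(\exists Q)\; |P| + |Q| < |P;Q|
  && \text{composition may add complexity beyond the parts}
\end{align*}
```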

21 citations


Additional excerpts

  • ...the complexity of software [15, 26]....

    [...]

Book
11 Feb 2011
TL;DR: This book constitutes the refereed proceedings of the IFIP Working Group 2.13 conference on Open Source Software, held in Milan, Italy, on September 7-10, 2008.
Abstract: This book constitutes the refereed proceedings of IFIP Working Group 2.13 Open Source Software held in Milan, Italy on September 7-10, 2008. The IFIP series publishes state-of-the-art results in the sciences and technologies of information and communication. The scope of the series includes: foundations of computer science; software theory and practice; education; computer applications in technology; communication systems; systems modeling and optimization; information systems; computers and society; computer systems technology; security and protection in information processing systems; artificial intelligence; and human-computer interaction. Proceedings and post-proceedings of refereed international conferences in computer science and interdisciplinary fields are featured. These results often precede journal publication and represent the most current research. The principal aim of the IFIP series is to encourage education and the dissemination and exchange of information about all aspects of computing.

8 citations


Additional excerpts

  • ..."Measurement is the prerequisite to management control"....

    [...]

Proceedings ArticleDOI
03 Sep 2012
TL;DR: This work presents a new approach to infer software complexity based on the characteristics of automatically generated test cases; the corresponding test suites are automatically generated through an emergent approach for creating test data known as Evolutionary Testing.
Abstract: One characteristic that impedes software from achieving good levels of maintainability is its increasing complexity. Empirical observations have shown that, typically, the more complex the software is, the bigger the test suite is. Hence, a relevant question, which originated the main research topic of our work, has arisen: "Is there a way to correlate the complexity of the test cases used to test a software product with the complexity of the software under test?". This work presents a new approach to infer software complexity based on the characteristics of automatically generated test cases. From these characteristics, we expect to create a test case profile for a software product, which will then be correlated to the complexity, as well as to other characteristics, of the software under test. This research is expected to provide developers and software architects with means to support and validate their decisions, as well as to observe the evolution of a software product during its life-cycle. Our work focuses on object-oriented software, and the corresponding test suites will be automatically generated through an emergent approach for creating test data known as Evolutionary Testing.
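The correlation question posed above can be sketched in a few lines. The data here is synthetic, and the test-suite feature used (suite size) is only one plausible element of the test case profile the authors describe:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Hypothetical data: for each of 40 classes, a complexity metric of the
# software under test and one feature of its generated test suite
# (e.g. the number of test cases the evolutionary search produced).
complexity = rng.integers(1, 60, size=40)
suite_size = complexity * 2 + rng.normal(0, 8, size=40)

# Rank correlation between software complexity and the suite feature.
rho, p_value = spearmanr(complexity, suite_size)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")
```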

3 citations


"Estimating software testing complex..." refers background in this paper

  • ...In a recent work, Nogueira focuses on the correlation between the complexity of the software under test and the complexity of the test cases [28], but the work did not propose any estimation measure....

    [...]