Open Access · Journal Article · DOI

Estimating software testing complexity

TL;DR
The paper defines a new measure of the difficulty for a computer to generate test cases, called Branch Coverage Expectation (BCE), which is much more strongly correlated with code coverage than the existing complexity measures.
Abstract
Context: Complexity measures provide us with information about software artifacts. A measure of the difficulty of testing a piece of code would be very useful for controlling the test phase.
Objective: The aim of this paper is to define a new measure of the difficulty for a computer to generate test cases, which we call Branch Coverage Expectation (BCE). We also analyze the most common complexity measures and the most important features of a program, to discover whether a relationship exists between them and the code coverage of an automatically generated test suite.
Method: The definition of this measure is based on a Markov model of the program. The model is used not only to compute the BCE, but also to estimate the number of test cases needed to reach a given coverage level in the program. To check our proposal, we perform a theoretical validation and carry out an empirical validation study using 2600 test programs.
Results: The results show that the previously existing measures are of limited use for estimating the difficulty of testing a program, because they are not highly correlated with code coverage. Our proposed measure is much more strongly correlated with code coverage than the existing complexity measures.
Conclusion: The high correlation of our measure with code coverage suggests that BCE is a very promising way of measuring the difficulty of automatically testing a program. It is also useful for predicting the behavior of an automatic test case generator.
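The abstract describes the core idea: model the program's control flow as a Markov chain, derive the probability that a random test case exercises each branch, and from that estimate how many test cases a given coverage level requires. A minimal sketch of that idea follows; it is not the paper's exact algorithm, and the transition matrix, state layout, and coupon-collector-style estimate are illustrative assumptions.

```python
import numpy as np

# Hedged sketch (not the paper's exact BCE computation): model the
# control-flow graph as a Markov chain whose states are basic blocks,
# with the exit block absorbing (last row of P).

def branch_reach_probabilities(P, start, branches):
    """P: (n x n) transition matrix of the control-flow Markov chain.
    branches: list of (i, j) edges whose coverage we care about.
    Returns the expected traversals of each branch edge in one run,
    capped at 1.0 as a rough per-run coverage probability."""
    n = P.shape[0]
    Q = P[:-1, :-1]                       # transient part of the chain
    N = np.linalg.inv(np.eye(n - 1) - Q)  # fundamental matrix
    visits = N[start]                     # expected visits per block
    probs = {}
    for (i, j) in branches:
        probs[(i, j)] = min(1.0, visits[i] * P[i, j]) if i < n - 1 else 0.0
    return probs

def expected_tests_for_coverage(probs, target=0.95):
    """Estimate t so every branch is hit with probability >= target:
    solve 1 - (1 - p)^t >= target for each branch, take the max."""
    t_needed = 0
    for p in probs.values():
        if p <= 0.0:
            return float("inf")           # unreachable under this model
        t = 1.0 if p >= 1.0 else np.log(1.0 - target) / np.log(1.0 - p)
        t_needed = max(t_needed, int(np.ceil(t)))
    return t_needed

# Illustrative CFG: entry block 0 branches to blocks 1 and 2 with equal
# probability; both fall through to the absorbing exit block 3.
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 1.0]])
probs = branch_reach_probabilities(P, start=0, branches=[(0, 1), (0, 2)])
tests = expected_tests_for_coverage(probs, target=0.95)  # 5 for this CFG
```

Harder-to-reach branches (low per-run probability) dominate the estimate, which matches the intuition that a few hard branches determine how many test cases an automatic generator needs.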



Citations
Proceedings ArticleDOI

How high will it be? Using machine learning models to predict branch coverage in automated testing

TL;DR: The first steps are taken towards defining machine learning models to predict the branch coverage achieved by test data generation tools, using well-known code metrics as features.
Journal ArticleDOI

Branch coverage prediction in automated testing

TL;DR: It is argued that knowing a priori the branch coverage achievable with test-data generation tools can help developers make informed decisions about issues, and the possibility of using source-code metrics to predict the coverage achieved by such tools is investigated.
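Both citations above share one recipe: treat branch coverage as a regression target and static code metrics as features. A minimal sketch under that assumption follows; the metric names (cyclomatic complexity, nesting depth) and the toy data are illustrative, not taken from the papers, and ordinary least squares stands in for whatever models the papers actually use.

```python
import numpy as np

# Hedged sketch: fit a regression from static code metrics to the
# branch coverage a test generator achieves on each program.

def fit_coverage_model(X, y):
    """Ordinary least squares with an intercept column appended."""
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict_coverage(w, metrics):
    """Predict coverage in [0, 1] for one program's metric vector."""
    return float(np.clip(np.append(metrics, 1.0) @ w, 0.0, 1.0))

# Toy training set: columns are (cyclomatic complexity, nesting depth);
# targets are coverages reached by a hypothetical generator.
X = np.array([[2.0, 1.0], [5.0, 3.0], [10.0, 6.0], [3.0, 2.0]])
y = np.array([0.81, 0.65, 0.40, 0.74])
w = fit_coverage_model(X, y)
```

The point of such a model is the a priori decision it enables: if predicted coverage for a module is low, a developer can budget manual testing for it before ever running the generator.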
Journal ArticleDOI

Investigating software testing and maintenance reports: Case study

TL;DR: Fine-grained problem handling reports provide a wider range of measures for assessing the relevant development processes, significantly enhancing the evaluation and refinement of software development processes as well as reliability prediction.
Proceedings ArticleDOI

An adaptive fitness function based on branch hardness for search based testing

TL;DR: This paper proposes an adaptive fitness function based on branch hardness, evaluated by the coefficient of variation of the fitness function, so that the cost of generating test data can be minimized; it is more flexible than the traditional counterparts.
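The summary above names a concrete statistic: branch hardness judged by the coefficient of variation of the fitness function. A loose sketch of how that could drive an adaptive fitness follows; the weighting scheme (down-weighting by cv, on the assumption that a branch whose fitness landscape barely varies is hard and deserves more search effort) is my illustration, not the paper's formula.

```python
import statistics

# Hedged sketch: estimate "branch hardness" as the coefficient of
# variation (std / mean) of branch-distance values sampled over inputs.

def coefficient_of_variation(distances):
    m = statistics.fmean(distances)
    if m == 0:
        return 0.0
    return statistics.pstdev(distances) / m

def adaptive_fitness(distances_per_branch):
    """Combine per-branch mean distances, weighting each branch by
    1 / (cv + 1): a flat landscape (low cv, i.e. a hard branch)
    contributes more, steering the search toward hard branches."""
    total = 0.0
    for ds in distances_per_branch:
        cv = coefficient_of_variation(ds)
        total += statistics.fmean(ds) / (cv + 1.0)
    return total
```

A static fitness treats all branches alike; recomputing these weights each generation is what makes the function adaptive.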
References
Journal ArticleDOI

Experience in Predicting Fault-Prone Software Modules Using Complexity Metrics

TL;DR: This study shows how to utilize prediction models generated from existing projects to improve fault detection in other projects, and suggests that to improve model prediction efficiency, the selection of source datasets and the determination of cut-off values should be based on specific properties of a project.
Dissertation

Test data generation: two evolutionary approaches to mutation testing

Peter May
TL;DR: In this paper, the authors propose a method to solve the problem of unstructured data mining in the context of data mining and data visualization.
Journal ArticleDOI

The collateral coverage of data flow criteria when branch testing

TL;DR: The aim of the experiments was to investigate the extent of the collateral coverage that is achieved in respect of the data-flow testing criteria when branch testing is undertaken.