Requirements Coverage as an Adequacy Measure for Conformance Testing
References
Model checking
Applied nonparametric statistics
Is mutation an appropriate tool for testing experiments?
Frequently Asked Questions (11)
Q2. What have the authors stated about future work in "Requirements coverage as an adequacy measure for conformance testing"?
Test suites providing requirements coverage may be ineffective even with an excellent set of requirements; the authors hope to investigate this further in their future work. They also plan to define requirements coverage metrics that are more robust to the structure of the requirements.
Q3. How did the authors create the implementations that they used as a basis for generating large sets of mutants?
Using the Simulink models, the authors created implementations, which they then used as the basis for generating large sets of mutants by randomly seeding faults.
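Random fault seeding can be illustrated with a minimal sketch. The operator-swap table and the `mutate` helper below are invented for illustration; they are not the paper's actual mutation operators.

```python
import random

# Hypothetical sketch of random fault seeding: an "implementation" is a
# token list, and each mutant replaces one randomly chosen operator.
OPERATOR_SWAPS = {"and": "or", "or": "and", "<": ">=", ">=": "<", "+": "-", "-": "+"}

def mutate(program_tokens, rng):
    """Return a mutant that differs from the original at exactly one operator."""
    sites = [i for i, t in enumerate(program_tokens) if t in OPERATOR_SWAPS]
    if not sites:
        return list(program_tokens)
    i = rng.choice(sites)
    mutant = list(program_tokens)
    mutant[i] = OPERATOR_SWAPS[mutant[i]]
    return mutant

rng = random.Random(0)
original = ["a", "and", "b", "or", "c", "<", "d"]
mutants = [mutate(original, rng) for _ in range(3)]
```

Each call seeds one fault, mirroring the one-fault-per-mutant convention common in mutation testing.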
Q4. What is the role of the requirements coverage metric in the effectiveness of the generated test suites?
The rigor and robustness (with respect to requirements structure) of the requirements coverage metric used play an important role in the effectiveness of the generated test suites.
Q5. What is the way to measure adequacy of conformance test suites?
The authors found that requirements coverage is useful when used in combination with model coverage to measure the adequacy of conformance test suites.
Q6. What are the current tools for generating conformance tests?
Several tools, such as model checkers, currently provide the capability to automatically generate conformance tests from formal models [19, 7].
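The general idea behind such tools can be sketched on a toy state machine: the checker is asked to prove a "trap property" (e.g. "this state is never reached"), and the counterexample trace it returns is the test. The state machine, inputs, and BFS search below are invented stand-ins for a real model checker.

```python
from collections import deque

# Toy transition system: state -> {input: next_state}. Invented example.
TRANSITIONS = {
    "idle":    {"start": "running"},
    "running": {"pause": "paused", "stop": "idle"},
    "paused":  {"start": "running", "stop": "idle"},
}

def find_test(initial, goal):
    """BFS for an input sequence reaching `goal`. A model checker would
    return such a trace as a counterexample to the trap property
    'goal is never reached'."""
    queue = deque([(initial, [])])
    seen = {initial}
    while queue:
        state, inputs = queue.popleft()
        if state == goal:
            return inputs
        for inp, nxt in TRANSITIONS[state].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, inputs + [inp]))
    return None

test = find_test("idle", "paused")  # ["start", "pause"]
```

Real tools perform this search symbolically over the formal model rather than by explicit enumeration.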
Q7. How did the authors assess the effectiveness of the different test suites?
The authors assessed the effectiveness of the different test suites by measuring their fault finding capability, i.e., running them over the sets of mutants and measuring the number of faults detected.
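The measurement loop can be sketched in a few lines: a mutant counts as detected (killed) if any test in the suite distinguishes its output from the original's. The `fault_finding` helper and the toy programs below are illustrative, not the paper's harness.

```python
# Minimal sketch of fault-finding measurement: run every test against the
# original and each mutant, and count mutants whose output ever diverges.
def fault_finding(suite, original, mutants):
    killed = sum(
        any(original(t) != m(t) for t in suite)  # detected by some test
        for m in mutants
    )
    return killed / len(mutants)

original = lambda x: x * 2
mutants = [
    lambda x: x + 2,   # killed by input 0
    lambda x: x * 2,   # equivalent mutant: never killed
    lambda x: -x * 2,  # killed by any nonzero input
]
score = fault_finding([0, 1, 2], original, mutants)  # 2/3
```

The equivalent mutant in the list shows why fault-finding scores rarely reach 100% even for strong suites.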
Q8. How well do the UFC suites achieve MC/DC?
On the DWM 1, Vertmax Batch, and Latctl Batch systems, the UFC suites do reasonably well, achieving average MC/DC of 78.2%, 88.6%, and 80.9% respectively, compared to achievable MC/DC of 92.5%, 98%, and 99.8%.
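For readers unfamiliar with the metric, MC/DC requires that each condition be shown to independently affect its decision's outcome: for every condition there must be a pair of tests that differ only in that condition and flip the decision. A minimal sketch, using an invented three-condition decision:

```python
from itertools import product

# Invented example decision with three conditions.
def decision(a, b, c):
    return a and (b or c)

def mcdc_pairs(cond_index):
    """All independence pairs for one condition: pairs of input vectors that
    differ only in that condition yet produce different decision outcomes."""
    pairs = []
    for vec in product([False, True], repeat=3):
        flipped = list(vec)
        flipped[cond_index] = not flipped[cond_index]
        if decision(*vec) != decision(*flipped):
            pairs.append((vec, tuple(flipped)))
    return pairs

# Each of the three conditions has at least one independence pair, so full
# MC/DC is achievable for this decision; masking in larger models is what
# makes 100% unreachable in practice.
```

A coverage tool checks whether the executed tests include such a pair for every condition in every decision.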
Q9. What is the procedure for calculating the reference distribution of the observations?
When performing a permutation test, a reference distribution is obtained by calculating all possible permutations of the observations [6, 11].
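An exact two-sample permutation test for the difference in mean fault finding can be sketched directly: pool the observations, enumerate every relabeling, and compare each relabeled difference against the observed one. The data values below are made up.

```python
from itertools import combinations

def permutation_pvalue(group_a, group_b):
    """Exact two-sided permutation test on the difference of means."""
    pooled = group_a + group_b
    n = len(group_a)
    observed = sum(group_a) / n - sum(group_b) / len(group_b)
    count = total = 0
    # Reference distribution: every way of splitting the pooled values.
    for idx in combinations(range(len(pooled)), n):
        a = [pooled[i] for i in idx]
        b = [pooled[i] for i in range(len(pooled)) if i not in idx]
        diff = sum(a) / len(a) - sum(b) / len(b)
        total += 1
        if abs(diff) >= abs(observed) - 1e-12:
            count += 1
    return count / total

p = permutation_pvalue([0.9, 0.8, 0.85], [0.6, 0.65, 0.7])  # 2/20 = 0.1
```

Exhaustive enumeration is only feasible for small samples; for larger ones the reference distribution is approximated by random permutations.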
Q10. Why is the improvement in the DWM 2 system a result of the combined suites?
Under such circumstances, the fault-finding improvement observed when combining the test suites would be due solely to the increased number of test cases.
Q11. Why did the authors generate multiple mutant sets for each example?
The authors generated multiple mutant sets for each example to reduce potential bias in their results from a single mutant set whose faults happen to be very hard (or very easy) to detect.
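Drawing several independent random sets from a larger mutant pool is one simple way to realize this; the pool contents, set count, and set size below are invented for illustration.

```python
import random

# Illustrative sketch: sample several independent mutant sets from a larger
# pool so that no single unlucky draw (all very hard or very easy faults)
# dominates the fault-finding comparison.
def sample_mutant_sets(pool, num_sets, set_size, seed=0):
    rng = random.Random(seed)
    return [rng.sample(pool, set_size) for _ in range(num_sets)]

pool = list(range(200))  # stand-ins for 200 generated mutants
sets = sample_mutant_sets(pool, num_sets=3, set_size=25)
```

Averaging fault-finding results across the sets then smooths out per-set variation in fault difficulty.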