Topic

Test suite

About: Test suite is a research topic. Over its lifetime, 5,825 publications have been published within this topic, receiving 127,971 citations. The topic is also known as: testsuite.


Papers
Proceedings ArticleDOI
08 Dec 2008
TL;DR: A new symbolic execution tool, KLEE, is presented that automatically generates tests achieving high coverage on a diverse set of complex and environmentally-intensive programs, significantly beating the coverage of the developers' own hand-written test suite.
Abstract: We present a new symbolic execution tool, KLEE, capable of automatically generating tests that achieve high coverage on a diverse set of complex and environmentally-intensive programs. We used KLEE to thoroughly check all 89 stand-alone programs in the GNU COREUTILS utility suite, which form the core user-level environment installed on millions of Unix systems, and arguably are the single most heavily tested set of open-source programs in existence. KLEE-generated tests achieve high line coverage -- on average over 90% per tool (median: over 94%) -- and significantly beat the coverage of the developers' own hand-written test suite. When we did the same for 75 equivalent tools in the BUSYBOX embedded system suite, results were even better, including 100% coverage on 31 of them. We also used KLEE as a bug finding tool, applying it to 452 applications (over 430K total lines of code), where it found 56 serious bugs, including three in COREUTILS that had been missed for over 15 years. Finally, we used KLEE to crosscheck purportedly identical BUSYBOX and COREUTILS utilities, finding functional correctness errors and a myriad of inconsistencies.

2,896 citations
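
To illustrate the workflow the paper builds on, the sketch below shows the usual way a small C program is prepared for KLEE: a variable is marked symbolic so the tool can explore every feasible path and emit a concrete test case for each. The function under test (get_sign) and the file name are illustrative, not taken from the paper; only the standard klee_make_symbolic() interface from klee/klee.h is assumed.

/* get_sign.c -- minimal sketch of a program prepared for symbolic execution
 * with KLEE.  klee_make_symbolic() marks the bytes of `a` as unconstrained
 * symbolic input, so KLEE explores each feasible path through get_sign()
 * and writes a concrete test case (a .ktest file) covering that path.
 */
#include <klee/klee.h>

int get_sign(int x) {
  if (x == 0)
    return 0;
  if (x < 0)
    return -1;
  return 1;
}

int main(void) {
  int a;
  klee_make_symbolic(&a, sizeof(a), "a");
  return get_sign(a);
}

In the usual workflow this file is compiled to LLVM bitcode (for example, clang -I <klee-include-dir> -emit-llvm -c -g get_sign.c) and analyzed with klee get_sign.bc; the automatically generated tests are what drive the line-coverage figures reported above.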

Journal ArticleDOI
TL;DR: Examines the notion of test adequacy criteria together with its role in software dynamic testing, and reviews methods for the comparison and assessment of criteria.
Abstract: Objective measurement of test quality is one of the key issues in software testing. It has been a major research focus for the last two decades. Many test criteria have been proposed and studied for this purpose. Various kinds of rationales have been presented in support of one criterion or another. We survey the research work in this area. The notion of adequacy criteria is examined together with its role in software dynamic testing. A review of criteria classification is followed by a summary of the methods for comparison and assessment of criteria.

1,287 citations
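
As a concrete, deliberately tiny example of the kind of adequacy criterion this survey discusses, the sketch below computes statement-coverage adequacy: the fraction of statements exercised by at least one test. The coverage matrix and the choice of statements as the unit are invented for illustration and are not taken from the paper.

/* adequacy.c -- toy illustration of a coverage-based adequacy criterion:
 * the suite's adequacy is the fraction of statements executed by at least
 * one test.  covered[t][s] records whether test t executed statement s.
 */
#include <stdio.h>

#define NUM_TESTS 3
#define NUM_STMTS 5

double statement_adequacy(const int covered[NUM_TESTS][NUM_STMTS]) {
  int hit = 0;
  for (int s = 0; s < NUM_STMTS; s++) {
    for (int t = 0; t < NUM_TESTS; t++) {
      if (covered[t][s]) { hit++; break; }   /* statement s reached by some test */
    }
  }
  return (double)hit / NUM_STMTS;
}

int main(void) {
  const int covered[NUM_TESTS][NUM_STMTS] = {
    {1, 1, 0, 0, 0},
    {0, 1, 1, 0, 0},
    {0, 0, 0, 1, 0},
  };
  printf("statement adequacy = %.2f\n", statement_adequacy(covered));  /* prints 0.80 */
  return 0;
}

A ratio of this form can serve either as a stopping rule (testing continues until adequacy reaches a threshold) or as a measurement of how thoroughly the program has been exercised, the two roles usually distinguished for adequacy criteria.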

01 Jan 1999
TL;DR: Organizes, presents, and analyzes contemporary Multiobjective Evolutionary Algorithm (MOEA) research and associated Multiobjective Optimization Problems (MOPs), using a consistent MOEA terminology and notation to present a complete, contemporary view of the current MOEA "state of the art" and possible future research.
Abstract: This research organizes, presents, and analyzes contemporary Multiobjective Evolutionary Algorithm (MOEA) research and associated Multiobjective Optimization Problems (MOPs). Using a consistent MOEA terminology and notation, each cited MOEA's key factors are presented in tabular form for ease of MOEA identification and selection. A detailed quantitative and qualitative MOEA analysis is presented, providing a basis for conclusions about various MOEA-related issues. The traditional notion of building blocks is extended to the MOP domain in an effort to develop more effective and efficient MOEAs. Additionally, the MOEA community's limited test suites contain various functions whose origins and rationale for use are often unknown. Thus, using general test suite guidelines, appropriate MOEA test function suites are substantiated and generated. An experimental methodology incorporating a solution database and appropriate metrics is offered as a proposed evaluation framework allowing absolute comparisons of specific MOEA approaches. Taken together, this document's classifications, analyses, and new innovations present a complete, contemporary view of the current MOEA "state of the art" and possible future research. Researchers with basic EA knowledge may also use part of it as a largely self-contained introduction to MOEAs.

1,287 citations
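
Since the abstract turns on MOEA test function suites and comparisons between candidate solutions, a small illustration may help. The sketch below evaluates Schaffer's classic two-objective problem (f1(x) = x^2, f2(x) = (x-2)^2), a common entry in MOEA test suites, and checks Pareto dominance between two solutions; the function choice and the sample values are illustrative only and are not drawn from the document.

/* moea_testfn.c -- illustrative sketch: a classic bi-objective test function
 * and the Pareto-dominance relation used when comparing MOEA solutions.
 * Both objectives are minimized.
 */
#include <stdio.h>

#define NUM_OBJ 2

/* Schaffer's problem: f1(x) = x^2, f2(x) = (x - 2)^2. */
void schaffer(double x, double f[NUM_OBJ]) {
  f[0] = x * x;
  f[1] = (x - 2.0) * (x - 2.0);
}

/* Returns 1 if a dominates b: no worse in every objective and strictly
 * better in at least one (minimization). */
int dominates(const double a[NUM_OBJ], const double b[NUM_OBJ]) {
  int strictly_better = 0;
  for (int i = 0; i < NUM_OBJ; i++) {
    if (a[i] > b[i]) return 0;
    if (a[i] < b[i]) strictly_better = 1;
  }
  return strictly_better;
}

int main(void) {
  double fa[NUM_OBJ], fb[NUM_OBJ];
  schaffer(1.0, fa);   /* (1, 1): Pareto-optimal for 0 <= x <= 2 */
  schaffer(3.0, fb);   /* (9, 1): dominated by the point above   */
  printf("a dominates b: %d\n", dominates(fa, fb));   /* prints 1 */
  return 0;
}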

Journal ArticleDOI
TL;DR: Surveys test suite minimization, test case selection and test case prioritization techniques, and discusses open problems and potential directions for future research.
Abstract: Regression testing is a testing activity that is performed to provide confidence that changes do not harm the existing behaviour of the software. Test suites tend to grow in size as software evolves, often making it too costly to execute entire test suites. A number of different approaches have been studied to maximize the value of the accrued test suite: minimization, selection and prioritization. Test suite minimization seeks to eliminate redundant test cases in order to reduce the number of tests to run. Test case selection seeks to identify the test cases that are relevant to some set of recent changes. Test case prioritization seeks to order test cases in such a way that early fault detection is maximized. This paper surveys each area of minimization, selection and prioritization technique and discusses open problems and potential directions for future research. Copyright © 2010 John Wiley & Sons, Ltd.

1,276 citations
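
Of the three areas this survey covers, test suite minimization is the easiest to make concrete: a widely used baseline is the greedy set-cover heuristic, which repeatedly keeps the test covering the most still-unsatisfied requirements and drops the rest as redundant. The sketch below is illustrative only, with an invented coverage matrix; it is not code from the survey.

/* minimize.c -- greedy set-cover sketch of test suite minimization: keep the
 * test that covers the most still-uncovered requirements, repeat until no
 * remaining test adds coverage, and treat everything else as redundant.
 */
#include <stdio.h>

#define NUM_TESTS 4
#define NUM_REQS  6

int main(void) {
  /* covers[t][r] == 1 iff test t satisfies requirement r (e.g. a branch). */
  const int covers[NUM_TESTS][NUM_REQS] = {
    {1, 1, 0, 0, 0, 0},
    {0, 1, 1, 1, 0, 0},
    {0, 0, 0, 1, 1, 1},
    {1, 0, 0, 0, 0, 1},
  };
  int covered[NUM_REQS] = {0};
  int kept[NUM_TESTS]   = {0};

  for (;;) {
    int best = -1, best_gain = 0;
    for (int t = 0; t < NUM_TESTS; t++) {
      if (kept[t]) continue;
      int gain = 0;
      for (int r = 0; r < NUM_REQS; r++)
        if (covers[t][r] && !covered[r]) gain++;
      if (gain > best_gain) { best_gain = gain; best = t; }
    }
    if (best < 0) break;                    /* no remaining test adds coverage */
    kept[best] = 1;
    for (int r = 0; r < NUM_REQS; r++)
      if (covers[best][r]) covered[r] = 1;
  }

  printf("kept tests:");
  for (int t = 0; t < NUM_TESTS; t++)
    if (kept[t]) printf(" %d", t);
  printf("\n");                             /* kept tests: 0 1 2 (test 3 is redundant) */
  return 0;
}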

Journal ArticleDOI
TL;DR: Test case prioritization techniques schedule test cases for execution in an order that attempts to increase their effectiveness at meeting some performance goal, such as the rate of fault detection, a measure of how quickly faults are detected within the testing process.
Abstract: Test case prioritization techniques schedule test cases for execution in an order that attempts to increase their effectiveness at meeting some performance goal. Various goals are possible; one involves rate of fault detection, a measure of how quickly faults are detected within the testing process. An improved rate of fault detection during testing can provide faster feedback on the system under test and let software engineers begin correcting faults earlier than might otherwise be possible. One application of prioritization techniques involves regression testing, the retesting of software following modifications; in this context, prioritization techniques can take advantage of information gathered about the previous execution of test cases to obtain test case orderings. We describe several techniques for using test execution information to prioritize test cases for regression testing, including: 1) techniques that order test cases based on their total coverage of code components; 2) techniques that order test cases based on their coverage of code components not previously covered; and 3) techniques that order test cases based on their estimated ability to reveal faults in the code components that they cover. We report the results of several experiments in which we applied these techniques to various test suites for various programs and measured the rates of fault detection achieved by the prioritized test suites, comparing those rates to the rates achieved by untreated, randomly ordered, and optimally ordered suites.

1,200 citations
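
The second family of techniques in this abstract (ordering by "coverage of code components not previously covered") is the well-known additional greedy ordering; a sketch under invented coverage data follows. Resetting the covered set once full coverage is reached matches the usual description of this technique, but the data, names and component granularity here are assumptions for illustration, not taken from the paper.

/* prioritize.c -- sketch of "additional" greedy test case prioritization:
 * repeatedly schedule the not-yet-scheduled test that covers the most code
 * components left uncovered by the tests scheduled so far; once full coverage
 * is reached, reset the covered set and rank the remaining tests the same way.
 */
#include <stdio.h>
#include <string.h>

#define NUM_TESTS 4
#define NUM_UNITS 5

int main(void) {
  /* covers[t][u] == 1 iff test t exercises code component u. */
  const int covers[NUM_TESTS][NUM_UNITS] = {
    {1, 1, 1, 0, 0},
    {1, 0, 0, 0, 0},
    {0, 0, 1, 1, 0},
    {0, 1, 0, 0, 1},
  };
  int scheduled[NUM_TESTS] = {0};
  int covered[NUM_UNITS]   = {0};
  int order[NUM_TESTS];

  for (int pos = 0; pos < NUM_TESTS; pos++) {
    int best = -1, best_gain = -1;
    for (int t = 0; t < NUM_TESTS; t++) {
      if (scheduled[t]) continue;
      int gain = 0;
      for (int u = 0; u < NUM_UNITS; u++)
        if (covers[t][u] && !covered[u]) gain++;
      if (gain > best_gain) { best_gain = gain; best = t; }
    }
    if (best_gain == 0) {
      int any = 0;
      for (int u = 0; u < NUM_UNITS; u++) any |= covered[u];
      if (any) {                            /* full coverage reached: reset and re-rank */
        memset(covered, 0, sizeof covered);
        pos--;
        continue;
      }
    }
    order[pos] = best;
    scheduled[best] = 1;
    for (int u = 0; u < NUM_UNITS; u++)
      if (covers[best][u]) covered[u] = 1;
  }

  printf("execution order:");
  for (int pos = 0; pos < NUM_TESTS; pos++)
    printf(" %d", order[pos]);
  printf("\n");                             /* execution order: 0 2 3 1 */
  return 0;
}

The "total coverage" variant (technique 1 in the abstract) simply sorts tests by their overall coverage counts instead of recomputing gains against the already-covered set.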


Network Information
Related Topics (5)
Software system: 50.7K papers, 935K citations, 93% related
Software construction: 36.2K papers, 743.8K citations, 92% related
Software development: 73.8K papers, 1.4M citations, 91% related
Web service: 57.6K papers, 989K citations, 85% related
Source code: 30.1K papers, 687.1K citations, 84% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    106
2022    317
2021    270
2020    290
2019    272
2018    268