scispace - formally typeset

Random testing

About: Random testing is a research topic. Over the lifetime, 1117 publications have been published within this topic receiving 33250 citations.


Papers
Journal ArticleDOI
12 Jun 2005
TL;DR: DART is a new tool for automatically testing software that combines three main techniques: automated extraction of a program's interface with its external environment using static source-code parsing; automatic generation of a test driver that performs random testing over that interface; and dynamic analysis of how the program behaves under random testing, with automatic generation of new test inputs to direct execution systematically along alternative program paths.
Abstract: We present a new tool, named DART, for automatically testing software that combines three main techniques: (1) automated extraction of the interface of a program with its external environment using static source-code parsing; (2) automatic generation of a test driver for this interface that performs random testing to simulate the most general environment the program can operate in; and (3) dynamic analysis of how the program behaves under random testing and automatic generation of new test inputs to direct systematically the execution along alternative program paths. Together, these three techniques constitute Directed Automated Random Testing, or DART for short. The main strength of DART is thus that testing can be performed completely automatically on any program that compiles -- there is no need to write any test driver or harness code. During testing, DART detects standard errors such as program crashes, assertion violations, and non-termination. Preliminary experiments to unit test several examples of C programs are very encouraging.
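The directed-random loop at the heart of DART can be illustrated with a small sketch. The code below is not DART itself (which instruments C programs); the unit, its branch predicates, and the bounded-search "solver" are all toy stand-ins. It shows the idea: run the unit on a random input, record the branch conditions taken, then negate one condition at a time and solve for an input that drives execution down the alternative path.

```python
import random

def unit_under_test(x):
    # Toy unit: the deep branch hides a bug.
    if x > 100:
        if x % 17 == 0:
            raise AssertionError("bug reached")
        return "deep"
    return "shallow"

def trace(x):
    """Record each branch as a (predicate, outcome) pair -- a
    stand-in for the dynamic analysis DART performs via
    instrumentation of the program under test."""
    path = [(lambda v: v > 100, x > 100)]
    if x > 100:
        path.append((lambda v: v % 17 == 0, x % 17 == 0))
    return path

def solve(constraints):
    """Tiny stand-in for a constraint solver: scan a bounded
    integer range for a value matching every constraint."""
    for v in range(-1000, 1000):
        if all(pred(v) == want for pred, want in constraints):
            return v
    return None

def directed_random_test():
    """Start from a random input, then systematically flip each
    recorded branch condition to reach alternative paths."""
    seen = set()
    worklist = [random.randint(-1000, 1000)]  # random seed input
    errors = []
    while worklist:
        x = worklist.pop()
        path = trace(x)
        try:
            unit_under_test(x)
        except AssertionError as e:
            errors.append((x, str(e)))
            continue
        for i in range(len(path)):
            # Keep the path prefix, negate branch i, solve.
            prefix = path[:i] + [(path[i][0], not path[i][1])]
            v = solve(prefix)
            if v is not None and v not in seen:
                seen.add(v)
                worklist.append(v)
    return errors
```

Whatever the random starting point, the flipped constraints steer the search into the `x > 100 and x % 17 == 0` branch, so the seeded assertion violation is found without any hand-written test driver.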

2,346 citations

Proceedings ArticleDOI
01 Sep 2005
TL;DR: In this paper, the authors address the problem of automating unit testing with memory graphs as inputs, and develop a method to represent and track constraints that capture the behavior of a symbolic execution of a unit whose inputs are memory graphs.
Abstract: In unit testing, a program is decomposed into units, which are collections of functions. A part of a unit can be tested by generating inputs for a single entry function. The entry function may contain pointer arguments, in which case the inputs to the unit are memory graphs. The paper addresses the problem of automating unit testing with memory graphs as inputs. The approach used builds on previous work combining symbolic and concrete execution, and more specifically, using such a combination to generate test inputs to explore all feasible execution paths. The current work develops a method to represent and track constraints that capture the behavior of a symbolic execution of a unit with memory graphs as inputs. Moreover, an efficient constraint solver is proposed to facilitate incremental generation of such test inputs. Finally, CUTE, a tool implementing the method, is described together with the results of applying CUTE to real-world examples of C code.
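The notion of a "memory graph" as a test input can be made concrete with a small sketch. This is not CUTE's symbolic machinery; the `Node` class, the enumeration of heap shapes, and the buggy entry function are illustrative assumptions. The point is that when the entry function takes a pointer, the inputs are heap shapes (null, acyclic lists, cyclic lists), not just scalar values, and covering those shapes exposes pointer-specific bugs.

```python
class Node:
    """Heap cell for a singly linked list input."""
    def __init__(self, val, nxt=None):
        self.val = val
        self.next = nxt

def memory_graphs(max_len):
    """Enumerate small memory graphs for a list-typed parameter:
    the null pointer, acyclic lists up to max_len, and one cyclic
    variant per length. A stand-in for the shapes a tool like
    CUTE derives from pointer constraints along execution paths."""
    yield None                          # null pointer input
    for n in range(1, max_len + 1):
        nodes = [Node(i) for i in range(n)]
        for a, b in zip(nodes, nodes[1:]):
            a.next = b
        yield nodes[0]                  # acyclic list of length n
        cyc = [Node(i) for i in range(n)]
        for a, b in zip(cyc, cyc[1:]):
            a.next = b
        cyc[-1].next = cyc[0]           # back-edge: cyclic list
        yield cyc[0]

def list_length(head, limit=100):
    """Entry function under test: assumes the list terminates,
    so cyclic inputs spin until the safety limit trips."""
    n = 0
    while head is not None and n <= limit:
        n += 1
        head = head.next
    return n

# Any input whose "length" exceeds the limit signals the
# nontermination bug on cyclic memory graphs.
suspicious = [g for g in memory_graphs(3) if list_length(g) > 100]
```

Scalar-only input generation would never build the cyclic shapes; enumerating memory graphs is what reaches them.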

1,891 citations

Journal ArticleDOI
01 Sep 2000
TL;DR: QuickCheck is a tool which aids the Haskell programmer in formulating and testing properties of programs; properties can be automatically tested on random input, but it is also possible to define custom test data generators.
Abstract: QuickCheck is a tool which aids the Haskell programmer in formulating and testing properties of programs. Properties are described as Haskell functions, and can be automatically tested on random input, but it is also possible to define custom test data generators. We present a number of case studies, in which the tool was successfully used, and also point out some pitfalls to avoid. Random testing is especially suitable for functional programs because properties can be stated at a fine grain. When a function is built from separately tested components, then random testing suffices to obtain good coverage of the definition under test.
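The property-based style QuickCheck introduced can be sketched in a few lines. This is a hand-rolled Python miniature, not the Haskell API: `for_all` and `small_int_lists` are illustrative names, and a fixed seed is used so the run is reproducible. A property is just a predicate; the driver feeds it random inputs from a (possibly custom) generator and reports the first counterexample.

```python
import random

def for_all(gen, prop, runs=100, seed=0):
    """Minimal QuickCheck-style driver: run `prop` on `runs`
    random inputs drawn from `gen`; return the first
    counterexample found, or None if every run passed."""
    rng = random.Random(seed)
    for _ in range(runs):
        x = gen(rng)
        if not prop(x):
            return x
    return None

def small_int_lists(rng):
    """Custom test-data generator, as QuickCheck also allows."""
    return [rng.randint(-50, 50) for _ in range(rng.randint(0, 20))]

# True property: reversing twice is the identity.
ok = for_all(small_int_lists,
             lambda xs: list(reversed(list(reversed(xs)))) == xs)

# False property: reversal preserves order. Random testing
# should quickly surface a non-palindromic counterexample.
cex = for_all(small_int_lists,
              lambda xs: list(reversed(xs)) == xs)
```

Because properties quantify over all inputs of a type, one property replaces many hand-picked unit tests, which is why the abstract notes that random testing suits functional programs stated at a fine grain.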

1,078 citations

Book ChapterDOI
19 Oct 2009
TL;DR: Directed automated random testing (concolic testing) is described, an efficient approach which combines random and symbolic testing, and several heuristic search strategies are presented, including a novel strategy guided by the control flow graph of the program under test.
Abstract: Testing with manually generated test cases is the primary technique used in industry to improve reliability of software; in fact, such testing is reported to account for over half of the typical cost of software development. I will describe directed automated random testing (also known as concolic testing), an efficient approach which combines random and symbolic testing. Concolic testing enables automatic and systematic testing of programs, avoids redundant test cases, and does not generate false warnings. Experiments on real-world software show that concolic testing can be used to effectively catch generic errors such as assertion violations, memory leaks, uncaught exceptions, and segmentation faults. From our initial experience with concolic testing we have learned that a primary challenge in scaling concolic testing to larger programs is the combinatorial explosion of the path space. It is likely that sophisticated strategies for searching this path space are needed to generate inputs that effectively test large programs (by, e.g., achieving significant branch coverage). I will present several such heuristic search strategies, including a novel strategy guided by the control flow graph of the program under test.

827 citations

Proceedings ArticleDOI
24 May 2007
TL;DR: Experimental results indicate that feedback-directed random test generation can outperform systematic and undirected random test generation, in terms of coverage and error detection.
Abstract: We present a technique that improves random test generation by incorporating feedback obtained from executing test inputs as they are created. Our technique builds inputs incrementally by randomly selecting a method call to apply and finding arguments from among previously-constructed inputs. As soon as an input is built, it is executed and checked against a set of contracts and filters. The result of the execution determines whether the input is redundant, illegal, contract-violating, or useful for generating more inputs. The technique outputs a test suite consisting of unit tests for the classes under test. Passing tests can be used to ensure that code contracts are preserved across program changes; failing tests (that violate one or more contracts) point to potential errors that should be corrected. Our experimental results indicate that feedback-directed random test generation can outperform systematic and undirected random test generation, in terms of coverage and error detection. On four small but nontrivial data structures (used previously in the literature), our technique achieves higher or equal block and predicate coverage than model checking (with and without abstraction) and undirected random generation. On 14 large, widely-used libraries (comprising 780KLOC), feedback-directed random test generation finds many previously-unknown errors, not found by either model checking or undirected random generation.
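The feedback loop the abstract describes can be sketched in miniature. This is not the tool from the paper (Randoop); the `BoundedStack` class, its seeded contract bug, and the two-way useful/violating classification are simplifying assumptions (the real technique also filters redundant and illegal inputs). The key move is the same: extend only sequences whose execution satisfied the contracts, and report sequences that violate them.

```python
import random

class BoundedStack:
    """Toy class under test, with a seeded contract bug:
    pop() on an empty stack corrupts size instead of failing."""
    def __init__(self):
        self.items = []
        self.size = 0
    def push(self, x):
        self.items.append(x)
        self.size += 1
    def pop(self):
        if self.items:
            self.size -= 1
            return self.items.pop()
        self.size -= 1              # bug: size goes negative

def contract(s):
    """Object invariant checked after executing a sequence."""
    return s.size == len(s.items) and s.size >= 0

def feedback_directed(runs=200, seed=1):
    """Sketch of feedback-directed generation: pick a previously
    useful call sequence, extend it with one random call, execute
    the whole sequence, and classify the result. Only sequences
    that still satisfy the contract are extended further;
    violating sequences are reported as potential errors."""
    rng = random.Random(seed)
    suites, failures = [[]], []
    for _ in range(runs):
        base = rng.choice(suites)
        call = rng.choice([("push", rng.randint(0, 9)), ("pop",)])
        seq = base + [call]
        s = BoundedStack()
        for c in seq:                       # replay the sequence
            getattr(s, c[0])(*c[1:])
        if contract(s):
            suites.append(seq)              # useful: extend later
        else:
            failures.append(seq)            # contract-violating
    return failures
```

Because every base sequence already passed the contract, the final call of each reported failure is the one that broke it; here that is always a pop on an empty stack, which is exactly how feedback steers generation toward error-revealing tests instead of blindly sampling call sequences.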

815 citations


Network Information
Related Topics (5)
Software development
73.8K papers, 1.4M citations
80% related
Model checking
16.9K papers, 451.6K citations
79% related
Software system
50.7K papers, 935K citations
79% related
Component-based software engineering
24.2K papers, 461.9K citations
79% related
Software development process
23.7K papers, 420K citations
79% related
Performance
Metrics
No. of papers in the topic in previous years

Year    Papers
2023    13
2022    36
2021    24
2020    52
2019    65
2018    54