Author

Willem Visser

Bio: Willem Visser is an academic researcher at Stellenbosch University. His research focuses on model checking and symbolic execution. He has an h-index of 42 and has co-authored 133 publications receiving 7,978 citations. His previous affiliations include the Research Institute for Advanced Computer Science and Ames Research Center.


Papers
Journal Article
11 Sep 2000
TL;DR: Java PathFinder (JPF) is a verification and testing environment for Java that integrates model checking, program analysis, and testing; it uses state compression to handle large states, and partial-order and symmetry reduction, slicing, abstraction, and runtime analysis techniques to reduce the state space.
Abstract: The majority of the work carried out in the formal methods community throughout the last three decades has (for good reasons) been devoted to special languages designed to make it easier to experiment with mechanized formal methods such as theorem provers and model checkers. In this paper, we give arguments for why we believe it is time for the formal methods community to shift some of its attention towards the analysis of programs written in modern programming languages. In keeping with this philosophy, we have developed a verification and testing environment for Java, called Java PathFinder (JPF), which integrates model checking, program analysis and testing. Part of this work has consisted of building a new Java Virtual Machine that interprets Java bytecode. JPF uses state compression to handle large states, and partial order reduction, slicing, abstraction and run-time analysis techniques to reduce the state space. JPF has been applied to a real-time avionics operating system developed at Honeywell, illustrating an intricate error, and to a model of a spacecraft controller, illustrating the combination of abstraction, run-time analysis and slicing with model checking.
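As a concrete illustration of the kind of defect such a model checker targets (our example, not one from the paper), consider a lost-update race that ordinary testing rarely triggers but that exhaustive interleaving exploration finds deterministically:

```java
// A minimal sketch (not from the paper): a data race that only surfaces
// under particular thread interleavings. An explicit-state model checker
// like JPF explores all interleavings, so it finds the failing schedule
// deterministically, where plain testing usually misses it.
public class RacyCounter {
    static int count = 0;            // shared, unsynchronized state

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { int tmp = count; count = tmp + 1; });
        Thread t2 = new Thread(() -> { int tmp = count; count = tmp + 1; });
        t1.start(); t2.start();
        t1.join();  t2.join();
        // Fails on the interleaving where both threads read count == 0.
        // (Run with -ea to enable assertions.)
        assert count == 2 : "lost update: count = " + count;
    }
}
```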

1,459 citations

Book Chapter
07 Apr 2003
TL;DR: In this paper, a source-to-source translation is defined to instrument a program, enabling standard model checkers to perform symbolic execution of the program; a novel symbolic execution algorithm handles dynamically allocated structures (e.g., lists and trees), method preconditions (e.g., acyclicity), data (e.g., integers and strings), and concurrency.
Abstract: Modern software systems, which often are concurrent and manipulate complex data structures, must be extremely reliable. We present a novel framework, based on symbolic execution, for automated checking of such systems. We provide a two-fold generalization of traditional symbolic execution based approaches. First, we define a source-to-source translation to instrument a program, which enables standard model checkers to perform symbolic execution of the program. Second, we give a novel symbolic execution algorithm that handles dynamically allocated structures (e.g., lists and trees), method preconditions (e.g., acyclicity), data (e.g., integers and strings) and concurrency. The program instrumentation enables a model checker to automatically explore different program heap configurations and manipulate logical formulae on program data (using a decision procedure). We illustrate two applications of our framework: checking correctness of multi-threaded programs that take inputs from unbounded domains with complex structure, and generation of non-isomorphic test inputs that satisfy a testing criterion. Our implementation for Java uses the Java PathFinder model checker.
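A minimal sketch of the instrumentation idea (class names and representation invented here; the paper's actual translation is more general and also covers heaps and concurrency): concrete integers become expression objects, and each branch records a constraint in a path condition instead of evaluating concretely.

```java
// Sketch of the source-to-source idea (names invented): integers become
// symbolic expressions, and branch outcomes are chosen by the explorer
// while the matching constraint is added to a path condition that a
// decision procedure would later check for satisfiability.
import java.util.*;

final class Expr {                            // symbolic integer expression
    final String s;
    Expr(String s) { this.s = s; }
    static Expr sym(String n) { return new Expr(n); }
    Expr neg()                { return new Expr("-(" + s + ")"); }
    public String toString()  { return s; }
}

final class PathCondition {                   // conjunction of branch constraints
    final List<String> cs = new ArrayList<>();
    void add(String c) { cs.add(c); }
    public String toString() { return cs.isEmpty() ? "true" : String.join(" && ", cs); }
}

public class InstrumentedAbs {
    // Original:     int abs(int x) { if (x < 0) return -x; return x; }
    // Instrumented: the branch is a choice point (here a boolean parameter;
    // in a model checker, a nondeterministic choice), and the constraint
    // for the chosen direction is recorded rather than evaluated.
    static Expr abs(Expr x, boolean negBranch, PathCondition pc) {
        if (negBranch) { pc.add(x + " < 0");  return x.neg(); }
        else           { pc.add(x + " >= 0"); return x; }
    }

    public static void main(String[] args) {
        for (boolean branch : new boolean[]{ true, false }) {
            PathCondition pc = new PathCondition();
            Expr result = abs(Expr.sym("X"), branch, pc);
            System.out.println("PC: " + pc + "   result: " + result);
        }
    }
}
```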

609 citations

Proceedings Article
01 Jul 2004
TL;DR: The main contribution of this work is to show how efficient white-box test input generation can be done for code manipulating complex data, taking into account complex method preconditions.
Abstract: We show how model checking and symbolic execution can be used to generate test inputs to achieve structural coverage of code that manipulates complex data structures. We focus on obtaining branch coverage during unit testing of some of the core methods of the red-black tree implementation in the Java TreeMap library, using the Java PathFinder model checker. Three different test generation techniques are introduced and compared: straight model checking of the code; model checking used in a black-box fashion to generate all inputs up to a fixed size; and model checking used during white-box test input generation. The main contribution of this work is to show how efficient white-box test input generation can be done for code manipulating complex data, taking into account complex method preconditions.
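For contrast with the paper's white-box techniques, here is a hedged sketch of the black-box baseline it describes: bounded-exhaustive enumeration of operation sequences against java.util.TreeMap. The driver below is our illustration, not the paper's harness; a model checker reaches the same inputs through nondeterministic choice points instead of explicit recursion.

```java
// Sketch (not the paper's driver): bounded-exhaustive, black-box test
// generation. Every sequence of up to N put/remove operations over a tiny
// key domain is executed against java.util.TreeMap.
import java.util.TreeMap;

public class BoundedExhaustiveDriver {
    static final int N = 3;          // maximum sequence length
    static final int KEYS = 3;       // key domain {0, 1, 2}
    static int sequences = 0;

    static void explore(TreeMap<Integer, Integer> map, int depth) {
        sequences++;                 // counts each reached operation sequence
        if (depth == N) return;
        for (int k = 0; k < KEYS; k++) {
            TreeMap<Integer, Integer> m1 = new TreeMap<>(map);
            m1.put(k, depth);                       // op 1: put
            explore(m1, depth + 1);
            TreeMap<Integer, Integer> m2 = new TreeMap<>(map);
            m2.remove(k);                           // op 2: remove
            explore(m2, depth + 1);
        }
    }

    public static void main(String[] args) {
        explore(new TreeMap<>(), 0);
        System.out.println("executed " + sequences + " operation sequences");
    }
}
```

Even at these tiny bounds the driver runs hundreds of sequences, which is the blow-up that motivates white-box generation guided by path conditions.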

558 citations

Proceedings Article
21 May 2011
TL;DR: Symbolic execution is a program analysis technique introduced in the 70s that has received renewed interest in recent years, due to algorithmic advances and increased availability of computational power and constraint solving technology.
Abstract: We present results for the "Impact Project Focus Area" on the topic of symbolic execution as used in software testing. Symbolic execution is a program analysis technique introduced in the 1970s that has received renewed interest in recent years, due to algorithmic advances and increased availability of computational power and constraint solving technology. We review classical symbolic execution and some modern extensions such as generalized symbolic execution and dynamic test generation. We also give a preliminary assessment of its use in academia, research labs, and industry.
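A standard worked example of the classical technique (ours, not drawn from the paper): run a two-branch program on symbolic inputs and collect one path condition per feasible path.

```latex
% Classical symbolic execution of:
%     if (x > y) z = x - y; else z = y - x;
% with symbolic inputs x = X, y = Y:
\begin{align*}
\text{Path 1:}\quad & PC_1 : X > Y,    & z &= X - Y \\
\text{Path 2:}\quad & PC_2 : X \le Y,  & z &= Y - X
\end{align*}
% A constraint solver instantiates each PC (e.g. X=1, Y=0 and X=0, Y=0)
% to obtain one concrete test input per program path.
```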

331 citations

Book Chapter
24 Mar 2007
TL;DR: JPF-SE uses JPF to generate and explore symbolic execution paths and it uses off-the-shelf decision procedures to manipulate numeric constraints.
Abstract: We present JPF-SE, an extension to the Java PathFinder Model Checking framework (JPF) that enables the symbolic execution of Java programs. JPF-SE uses JPF to generate and explore symbolic execution paths and it uses off-the-shelf decision procedures to manipulate numeric constraints.

276 citations


Cited by
Book
25 Apr 2008
TL;DR: Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field.
Abstract: Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.
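Two standard examples of the property classes named above, written in LTL (our illustration, not an excerpt from the book):

```latex
% Safety (mutual exclusion): the two processes are never
% simultaneously in their critical sections.
\square\, \neg(\mathit{crit}_1 \wedge \mathit{crit}_2)

% Liveness (request-response): every request is eventually
% followed by a response.
\square (\mathit{req} \rightarrow \lozenge\, \mathit{resp})
```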

4,905 citations

Book Chapter
29 Mar 2004
TL;DR: This work introduces a temporal logic of calls and returns (CaRet) for specification and algorithmic verification of correctness requirements of structured programs and presents a tableau construction that reduces the model checking problem to the emptiness problem for a Büchi pushdown system.
Abstract: Model checking of linear temporal logic (LTL) specifications with respect to pushdown systems has been shown to be a useful tool for analysis of programs with potentially recursive procedures. LTL, however, can specify only regular properties, and properties such as correctness of procedures with respect to pre- and post-conditions, that require matching of calls and returns, are not regular. We introduce a temporal logic of calls and returns (CaRet) for specification and algorithmic verification of correctness requirements of structured programs. The formulas of CaRet are interpreted over sequences of propositional valuations tagged with special symbols call and ret. Besides the standard global temporal modalities, CaRet admits the abstract-next operator that allows a path to jump from a call to the matching return. This operator can be used to specify a variety of non-regular properties such as partial and total correctness of program blocks with respect to pre- and post-conditions. The abstract versions of the other temporal modalities can be used to specify regular properties of local paths within a procedure that skip over calls to other procedures. CaRet also admits the caller modality that jumps to the most recent pending call, and such caller modalities allow specification of a variety of security properties that involve inspection of the call-stack. Even though verifying context-free properties of pushdown systems is undecidable, we show that model checking CaRet formulas against a pushdown model is decidable. We present a tableau construction that reduces our model checking problem to the emptiness problem for a Büchi pushdown system. The complexity of model checking CaRet formulas is the same as that of checking LTL formulas, namely, polynomial in the model and singly exponential in the size of the specification.
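Schematically (our rendering, with notation only approximating the paper's), total correctness of a procedure with respect to precondition p and postcondition q uses the abstract-next operator to jump from a call to its matching return:

```latex
% If the procedure is invoked in a state satisfying p, then the matching
% return (reached via the abstract-next operator) satisfies q -- a
% non-regular property that plain LTL cannot express.
\square \big( (\mathit{call} \wedge p) \rightarrow \bigcirc^{a}\, q \big)
```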

3,516 citations

01 Jan 2003

3,093 citations

Proceedings Article
08 Dec 2008
TL;DR: A new symbolic execution tool, KLEE, capable of automatically generating tests that achieve high coverage on a diverse set of complex and environmentally-intensive programs, and significantly beat the coverage of the developers' own hand-written test suite is presented.
Abstract: We present a new symbolic execution tool, KLEE, capable of automatically generating tests that achieve high coverage on a diverse set of complex and environmentally-intensive programs. We used KLEE to thoroughly check all 89 stand-alone programs in the GNU COREUTILS utility suite, which form the core user-level environment installed on millions of Unix systems, and arguably are the single most heavily tested set of open-source programs in existence. KLEE-generated tests achieve high line coverage -- on average over 90% per tool (median: over 94%) -- and significantly beat the coverage of the developers' own hand-written test suite. When we did the same for 75 equivalent tools in the BUSYBOX embedded system suite, results were even better, including 100% coverage on 31 of them. We also used KLEE as a bug finding tool, applying it to 452 applications (over 430K total lines of code), where it found 56 serious bugs, including three in COREUTILS that had been missed for over 15 years. Finally, we used KLEE to crosscheck purportedly identical BUSYBOX and COREUTILS utilities, finding functional correctness errors and a myriad of inconsistencies.
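KLEE itself analyzes C programs, so the following Java sketch is only a language-neutral illustration of the crosschecking idea (all names invented): run two purportedly equivalent implementations on the same inputs and flag divergences, with exhaustive enumeration of a small domain standing in for symbolic exploration.

```java
// Sketch of KLEE-style crosschecking (not KLEE itself, which runs on C):
// two implementations that should agree are run on the same input and any
// divergence is reported. A symbolic executor would cover the input space
// via path constraints; this sketch just enumerates a small domain.
public class CrossCheck {
    // Two purportedly identical utilities (names invented for the example).
    static int trailingZerosA(int x) { return Integer.numberOfTrailingZeros(x); }
    static int trailingZerosB(int x) {          // hand-rolled variant with a bug
        if (x == 0) return 0;                   // should be 32: divergence here
        int n = 0;
        while ((x & 1) == 0) { x >>>= 1; n++; }
        return n;
    }

    public static void main(String[] args) {
        for (int x = -1000; x <= 1000; x++) {   // stand-in for a symbolic input
            int a = trailingZerosA(x), b = trailingZerosB(x);
            if (a != b)
                System.out.println("divergence at x=" + x + ": " + a + " vs " + b);
        }
    }
}
```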

2,896 citations

Journal Article
12 Jun 2005
TL;DR: DART is a new tool for automatically testing software that combines three main techniques: automated extraction of the interface of a program with its external environment using static source-code parsing; automatic generation of a test driver that performs random testing over this interface; and dynamic analysis of how the program behaves under random testing, with automatic generation of new test inputs to direct execution systematically along alternative program paths.
Abstract: We present a new tool, named DART, for automatically testing software that combines three main techniques: (1) automated extraction of the interface of a program with its external environment using static source-code parsing; (2) automatic generation of a test driver for this interface that performs random testing to simulate the most general environment the program can operate in; and (3) dynamic analysis of how the program behaves under random testing and automatic generation of new test inputs to direct systematically the execution along alternative program paths. Together, these three techniques constitute Directed Automated Random Testing, or DART for short. The main strength of DART is thus that testing can be performed completely automatically on any program that compiles -- there is no need to write any test driver or harness code. During testing, DART detects standard errors such as program crashes, assertion violations, and non-termination. Preliminary experiments to unit test several examples of C programs are very encouraging.
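A toy rendering of the directed search loop (our sketch under simplifying assumptions -- one integer input and a brute-force "solver"; not DART's implementation): execute concretely, record the branch constraints taken, then flip a constraint and solve it to steer the next run down an unexplored path.

```java
// Toy concolic loop in the spirit of DART (a sketch, not DART's code).
import java.util.*;

public class ToyConcolic {
    // Hypothetical unit under test: the "bug" is reached only for x == 42.
    static boolean unitUnderTest(int x) {
        if (x > 10) {
            if (x == 42) return true;           // target defect
        }
        return false;
    }

    // A recorded branch constraint on the single input x, e.g. "x > 10".
    record Branch(String op, int rhs, boolean taken) {}

    // Instrumented re-execution: same control flow, but records branches,
    // mirroring how DART observes a concrete run.
    static List<Branch> trace(int x) {
        List<Branch> t = new ArrayList<>();
        boolean b1 = x > 10;  t.add(new Branch(">", 10, b1));
        if (b1) { boolean b2 = x == 42; t.add(new Branch("==", 42, b2)); }
        return t;
    }

    static boolean eval(Branch b, int x) {
        return b.op().equals(">") ? x > b.rhs() : x == b.rhs();
    }

    // Tiny "solver": find an x satisfying the prefix with the last branch
    // flipped. Brute force over a small range stands in for a real solver.
    static Integer solve(List<Branch> prefix, Branch flipped) {
        for (int x = -100; x <= 100; x++) {
            boolean ok = eval(flipped, x) != flipped.taken();
            for (Branch b : prefix) ok &= eval(b, x) == b.taken();
            if (ok) return x;
        }
        return null;
    }

    public static void main(String[] args) {
        Deque<Integer> worklist =
            new ArrayDeque<>(List.of(new Random().nextInt(201) - 100)); // random seed input
        Set<Integer> tried = new HashSet<>();
        while (!worklist.isEmpty()) {
            int x = worklist.pop();
            if (!tried.add(x)) continue;
            System.out.println("run x=" + x + (unitUnderTest(x) ? "  -> BUG" : ""));
            List<Branch> t = trace(x);
            for (int i = t.size() - 1; i >= 0; i--) {   // negate each suffix branch
                Integer nx = solve(t.subList(0, i), t.get(i));
                if (nx != null) worklist.push(nx);
            }
        }
    }
}
```

Starting from any random input, the loop reaches x == 42 in a handful of runs, which is the "directed" part of Directed Automated Random Testing.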

2,346 citations