
Model checking

01 Sep 1996 - pp 305-349
TL;DR: Model checking tools, created by both academic and industrial teams, have resulted in an entirely novel approach to verification and test case generation that often enables engineers in the electronics industry to design complex systems with considerable assurance regarding the correctness of their initial designs.
Abstract: Turing Lecture from the winners of the 2007 ACM A.M. Turing Award. In 1981, Edmund M. Clarke and E. Allen Emerson, working in the USA, and Joseph Sifakis working independently in France, authored seminal papers that founded what has become the highly successful field of model checking. This verification technology provides an algorithmic means of determining whether an abstract model---representing, for example, a hardware or software design---satisfies a formal specification expressed as a temporal logic (TL) formula. Moreover, if the property does not hold, the method identifies a counterexample execution that shows the source of the problem. The progression of model checking to the point where it can be successfully used for complex systems has required the development of sophisticated means of coping with what is known as the state explosion problem. Great strides have been made on this problem over the past 28 years by what is now a very large international research community. As a result many major hardware and software companies are beginning to use model checking in practice. Examples of its use include the verification of VLSI circuits, communication protocols, software device drivers, real-time embedded systems, and security algorithms. The work of Clarke, Emerson, and Sifakis continues to be central to the success of this research area. Their work over the years has led to the creation of new logics for specification, new verification algorithms, and surprising theoretical results. Model checking tools, created by both academic and industrial teams, have resulted in an entirely novel approach to verification and test case generation. This approach, for example, often enables engineers in the electronics industry to design complex systems with considerable assurance regarding the correctness of their initial designs. Model checking promises to have an even greater impact on the hardware and software industries in the future. ---Moshe Y. Vardi, Editor-in-Chief
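To make the lecture's central idea concrete, here is a minimal sketch of explicit-state model checking for an invariant property (AG p): exhaustively explore the reachable states of a transition system and, if some state violates p, return the execution path to it as a counterexample. The toy system, the function names, and the BFS formulation are our illustration, not code from the lecture; real checkers use symbolic representations precisely because this kind of explicit enumeration runs into the state explosion problem.

```python
from collections import deque

def check_invariant(initial, successors, holds):
    """Check 'AG holds' by breadth-first reachability: visit every reachable
    state; on a violation, rebuild the path from the initial state and
    return it as a counterexample trace."""
    parent = {initial: None}              # predecessor map for trace rebuilding
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not holds(state):              # property violated: extract the trace
            trace = []
            while state is not None:
                trace.append(state)
                state = parent[state]
            return False, trace[::-1]
        for nxt in successors(state):     # expand unexplored successors
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return True, None                     # invariant holds in every reachable state

# Toy example: a counter modulo 8 with two nondeterministic moves must never hit 6.
ok, cex = check_invariant(0,
                          lambda s: [(s + 1) % 8, (s + 3) % 8],
                          lambda s: s != 6)
print(ok, cex)                            # False [0, 3, 6]
```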
Citations
Book
07 Jan 1999

4,478 citations

Journal ArticleDOI
TL;DR: An integrated approach to fitting psychometric functions, assessing the goodness of fit, and providing confidence intervals for the function’s parameters and other estimates derived from them, for the purposes of hypothesis testing is described.
Abstract: The psychometric function relates an observer’s performance to an independent variable, usually some physical quantity of a stimulus in a psychophysical task. This paper, together with its companion paper (Wichmann & Hill, 2001), describes an integrated approach to (1) fitting psychometric functions, (2) assessing the goodness of fit, and (3) providing confidence intervals for the function’s parameters and other estimates derived from them, for the purposes of hypothesis testing. The present paper deals with the first two topics, describing a constrained maximum-likelihood method of parameter estimation and developing several goodness-of-fit tests. Using Monte Carlo simulations, we deal with two specific difficulties that arise when fitting functions to psychophysical data. First, we note that human observers are prone to stimulus-independent errors (or lapses). We show that failure to account for this can lead to serious biases in estimates of the psychometric function’s parameters and illustrate how the problem may be overcome. Second, we note that psychophysical data sets are usually rather small by the standards required by most of the commonly applied statistical tests. We demonstrate the potential errors of applying traditional χ² methods to psychophysical data and advocate use of Monte Carlo resampling techniques that do not rely on asymptotic theory. We have made available the software to implement our methods.
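As a rough illustration of the approach the abstract describes, the sketch below fits a Weibull psychometric function with a constrained lapse-rate parameter by maximum likelihood and bootstraps a confidence interval by Monte Carlo resampling. The data, the Weibull parameterization, and the bounds are invented for the example; the authors distribute their own software, and this is not it.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

# Illustrative 2AFC data: stimulus levels, trials per level, correct counts.
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
n = np.array([40, 40, 40, 40, 40])
k = np.array([22, 25, 31, 37, 39])

def psi(x, alpha, beta, lam, gamma=0.5):
    """Weibull psychometric function with guess rate gamma and lapse rate lam."""
    F = 1.0 - np.exp(-(x / alpha) ** beta)
    return gamma + (1.0 - gamma - lam) * F

def neg_log_lik(params, k, n):
    alpha, beta, lam = params
    return -binom.logpmf(k, n, psi(x, alpha, beta, lam)).sum()

# Constrained ML fit: the lapse rate is bounded to a small interval, as the
# paper advocates, rather than fixed at zero.
bounds = [(1e-3, 20.0), (0.1, 10.0), (0.0, 0.06)]
fit = minimize(neg_log_lik, x0=[2.0, 1.5, 0.02], args=(k, n), bounds=bounds)
alpha_hat, beta_hat, lam_hat = fit.x

# Parametric bootstrap (Monte Carlo resampling) for a threshold confidence
# interval, in the spirit of the companion paper: refit simulated data sets.
rng = np.random.default_rng(0)
boot_alpha = []
for _ in range(200):
    k_sim = rng.binomial(n, psi(x, alpha_hat, beta_hat, lam_hat))
    refit = minimize(neg_log_lik, x0=fit.x, args=(k_sim, n), bounds=bounds)
    boot_alpha.append(refit.x[0])
print(alpha_hat, np.percentile(boot_alpha, [2.5, 97.5]))
```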

2,263 citations

Journal ArticleDOI
TL;DR: Different approaches to the determination of upper bounds on execution times are described and several commercially available tools and research prototypes are surveyed.
Abstract: The determination of upper bounds on execution times, commonly called worst-case execution times (WCETs), is a necessary step in the development and validation process for hard real-time systems. This problem is hard if the underlying processor architecture has components, such as caches, pipelines, branch prediction, and other speculative components. This article describes different approaches to this problem and surveys several commercially available tools and research prototypes.
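For flavor, here is a deliberately naive sketch of the static half of the problem: bound the WCET of a loop-free program as the longest path through its control-flow graph, given a cycle bound per basic block. Every name and number is invented, and the hard part the article actually surveys (deriving safe per-block bounds in the presence of caches, pipelines, and speculation, and bounding loops) is assumed away here.

```python
import functools

# Hypothetical control-flow graph: block -> successors, plus a per-block
# upper bound on execution cycles. Real tools derive these bounds from
# cache and pipeline analysis; here they are given constants.
cfg = {"entry": ["a", "b"], "a": ["exit"], "b": ["c"], "c": ["exit"], "exit": []}
cost = {"entry": 5, "a": 40, "b": 10, "c": 25, "exit": 2}

@functools.cache
def wcet(block):
    """Longest-path cycle bound from this block to program exit (acyclic CFG;
    loops would first have to be bounded and unrolled or summarized)."""
    succs = cfg[block]
    return cost[block] + (max(map(wcet, succs)) if succs else 0)

print(wcet("entry"))   # 47: the path entry -> a -> exit dominates
```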

1,946 citations

Book ChapterDOI
08 Jul 2003
TL;DR: Counterexample-guided abstraction refinement is an automatic abstraction method where the key step is to extract information from false negatives ("spurious counterexamples") due to over-approximation.
Abstract: The main practical problem in model checking is the combinatorial explosion of system states commonly known as the state explosion problem. Abstraction methods attempt to reduce the size of the state space by employing knowledge about the system and the specification in order to model only relevant features in the Kripke structure. Counterexample-guided abstraction refinement is an automatic abstraction method where, starting with a relatively small skeletal representation of the system to be verified, increasingly precise abstract representations of the system are computed. The key step is to extract information from false negatives ("spurious counterexamples") due to over-approximation.
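The loop the abstract outlines can be shown end to end on a toy example. Everything in the sketch below is illustrative: the system, the predicates, and especially the refinement step, which real tools drive from the spurious trace (e.g. by interpolation) rather than from a fixed candidate pool. Predicate abstraction over-approximates a deterministic counter, a spurious abstract counterexample is detected by concrete replay, and one refinement suffices to prove the property.

```python
from collections import deque

# Toy deterministic system, invented for illustration: x := 0; loop x := x + 2
# (mod 100). Safety property: x never equals 7 (true concretely: x stays even).
INIT, LIMIT = 0, 100
step = lambda x: (x + 2) % LIMIT
bad = lambda x: x == 7

def alpha(x, preds):
    """Abstract a concrete state to its vector of predicate truth values."""
    return tuple(p(x) for p in preds)

def abstract_counterexample(preds):
    """Build the existential abstraction by enumeration (a real checker would
    use BDDs or a SAT solver) and search for an abstract path from the
    initial abstract state to one containing a bad concrete state."""
    succ, bad_abs = {}, set()
    for x in range(LIMIT):
        succ.setdefault(alpha(x, preds), set()).add(alpha(step(x), preds))
        if bad(x):
            bad_abs.add(alpha(x, preds))
    start = alpha(INIT, preds)
    parent, queue = {start: None}, deque([start])
    while queue:
        a = queue.popleft()
        if a in bad_abs:                    # abstract violation: return the path
            path = []
            while a is not None:
                path.append(a)
                a = parent[a]
            return path[::-1]
        for b in succ.get(a, ()):
            if b not in parent:
                parent[b] = a
                queue.append(b)
    return None                             # the abstract system is safe

def spurious(path, preds):
    """Replay the (deterministic) concrete run against the abstract path: the
    counterexample is real only if the run matches every abstract step and
    actually ends in a bad concrete state."""
    x = INIT
    for a in path[:-1]:
        if alpha(x, preds) != a:
            return True
        x = step(x)
    return alpha(x, preds) != path[-1] or not bad(x)

# The CEGAR loop. Refinement here just draws the next predicate from a fixed
# pool; real tools extract one from the spurious counterexample itself.
preds, pool = [lambda x: x < 7], [lambda x: x % 2 == 0]
while True:
    cex = abstract_counterexample(preds)
    if cex is None:
        print("property holds")
        break
    if not spurious(cex, preds):
        print("real counterexample:", cex)
        break
    preds.append(pool.pop(0))               # refine the abstraction and retry
```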

1,520 citations

Proceedings ArticleDOI
12 May 2002
TL;DR: This paper presents a technique for generating and analyzing attack graphs, based on symbolic model checking algorithms, which makes their construction automatic and efficient.
Abstract: An integral part of modeling the global view of network security is constructing attack graphs. Manual attack graph construction is tedious, error-prone, and impractical for attack graphs larger than a hundred nodes. In this paper we present an automated technique for generating and analyzing attack graphs. We base our technique on symbolic model checking algorithms, letting us construct attack graphs automatically and efficiently. We also describe two analyses to help decide which attacks would be most cost-effective to guard against. We implemented our technique in a tool suite and tested it on a small network example, which includes models of a firewall and an intrusion detection system.
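A back-of-the-envelope version of the underlying model can be enumerated explicitly, as below: network states are the sets of privileges an attacker has accumulated, attack rules add a privilege when their preconditions hold, and the attack graph is the reachable state graph. The hosts, rules, and names are all hypothetical; where this sketch enumerates states directly, the paper's contribution is to construct and analyze the graph symbolically with model checking algorithms.

```python
# Hypothetical two-host network. A state is the (frozen) set of privileges the
# attacker holds; each attack rule is (name, preconditions, postcondition).
attacks = [
    ("exploit_ftp_host1", {"reach_host1"}, "user_host1"),
    ("local_root_host1",  {"user_host1"},  "root_host1"),
    ("trust_login_host2", {"root_host1"},  "user_host2"),
    ("local_root_host2",  {"user_host2"},  "root_host2"),
]
initial = frozenset({"reach_host1"})
goal = "root_host2"                      # security is violated if this is gained

# Build the attack graph: nodes are states, edges are applicable attacks.
nodes, edges, frontier = {initial}, [], [initial]
while frontier:
    state = frontier.pop()
    for name, pre, post in attacks:
        if pre <= state and post not in state:   # preconditions met, new privilege
            nxt = state | {post}
            edges.append((state, name, nxt))
            if nxt not in nodes:
                nodes.add(nxt)
                frontier.append(nxt)

print(len(nodes), "states,", len(edges), "attack edges")
for s, a, t in edges:
    if goal in t and goal not in s:
        print("attack path reaches the goal via", a)
```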

1,302 citations

References
Proceedings ArticleDOI
01 Jan 1977
TL;DR: In this paper, the abstract interpretation of programs is used to describe computations in another universe of abstract objects, so that the results of abstract execution give some information on the actual computations.
Abstract: A program denotes computations in some universe of objects. Abstract interpretation of programs consists in using that denotation to describe computations in another universe of abstract objects, so that the results of abstract execution give some information on the actual computations. An intuitive example (which we borrow from Sintzoff [72]) is the rule of signs. The text -1515 * 17 may be understood to denote computations on the abstract universe {(+), (-), (±)} where the semantics of arithmetic operators is defined by the rule of signs. The abstract execution -1515 * 17 → -(+) * (+) → (-) * (+) → (-), proves that -1515 * 17 is a negative number. Abstract interpretation is concerned with a particular underlying structure of the usual universe of computations (the sign, in our example). It gives a summary of some facets of the actual executions of a program. In general this summary is simple to obtain but inaccurate (e.g. -1515 + 17 → -(+) + (+) → (-) + (+) → (±)). Despite its fundamentally incomplete results, abstract interpretation allows the programmer or the compiler to answer questions which do not need full knowledge of program executions or which tolerate an imprecise answer (e.g. partial correctness proofs of programs ignoring the termination problems, type checking, program optimizations which are not carried out in the absence of certainty about their feasibility, …).
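The rule-of-signs example in the abstract is small enough to execute. The sketch below is our own encoding, not the paper's formalism; in particular, zero is folded into (+) to keep the toy universe at three elements, a corner the paper's lattices treat more carefully.

```python
# Rule-of-signs abstract interpretation: evaluate arithmetic over the abstract
# universe {(+), (-), (±)} instead of over the integers.
POS, NEG, TOP = "(+)", "(-)", "(±)"      # TOP: sign unknown

def sign(n):
    """Abstraction of a concrete constant (zero folded into (+) for brevity)."""
    return POS if n >= 0 else NEG

def mul(a, b):
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG        # the rule of signs is exact for *

def add(a, b):
    if a == b:
        return a                         # same signs: the sign is preserved
    return TOP                           # mixed signs: result unknown (±)

# -1515 * 17  ->  (-) * (+)  ->  (-) : proved negative without computing
print(mul(sign(-1515), sign(17)))
# -1515 + 17  ->  (-) + (+)  ->  (±) : sound but imprecise, as in the abstract
print(add(sign(-1515), sign(17)))
```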

6,829 citations


"Model checking" refers background in this paper

  • ...The convergence between Model Checking and Abstract Interpretation [14] could lead to significant breakthroughs....


Proceedings ArticleDOI
30 Sep 1977
TL;DR: A unified approach to program verification is suggested, which applies to both sequential and parallel programs, and the main proof method is that of temporal reasoning in which the time dependence of events is the basic concept.
Abstract: A unified approach to program verification is suggested, which applies to both sequential and parallel programs. The main proof method suggested is that of temporal reasoning in which the time dependence of events is the basic concept. Two formal systems are presented for providing a basis for temporal reasoning. One forms a formalization of the method of intermittent assertions, while the other is an adaptation of the tense logic system Kb, and is particularly suitable for reasoning about concurrent programs.

5,174 citations

Journal ArticleDOI
TL;DR: In this paper, the authors formulate and prove an elementary fixpoint theorem which holds in arbitrary complete lattices, and give various applications (and extensions) of this result in the theories of simply ordered sets, real functions, Boolean algebras, as well as in general set theory and topology.
Abstract: 1. A lattice-theoretical fixpoint theorem. In this section we formulate and prove an elementary fixpoint theorem which holds in arbitrary complete lattices. In the following sections we give various applications (and extensions) of this result in the theories of simply ordered sets, real functions, Boolean algebras, as well as in general set theory and topology. By a lattice we understand as usual a system 𝔄 = (A, ≤) formed by a non-empty set A and a binary relation ≤; it is assumed that ≤ establishes a partial order in A and that for any two elements a, b ∈ A there is a least upper bound (join) a ∪ b and a greatest lower bound (meet) a ∩ b. The relations ≥, <, and > are defined in the usual way in terms of ≤. The lattice 𝔄 = (A, ≤) is called complete if every subset B of A has a least upper bound ⋃B and a greatest lower bound ⋂B. Such a lattice has in particular two elements 0 and 1 defined by the formulas 0 = ⋂A and 1 = ⋃A. Given any two elements a, b ∈ A with a ≤ b, we denote by [a, b] the interval with the endpoints a and b, that is, the set of all elements x ∈ A for which a ≤ x ≤ b; in symbols, [a, b] = {x : x ∈ A and a ≤ x ≤ b}. The system ([a, b], ≤) is clearly a lattice; it is complete if 𝔄 is complete. We shall consider functions on A to A and, more generally, on a subset B of A to another subset C of A. Such a function f is called increasing if x ≤ y implies f(x) ≤ f(y). (For notions and facts concerning lattices, simply ordered systems, and Boolean algebras, consult [1].)

2,873 citations


"Model checking" refers methods in this paper

  • ...More generally, the Tarski-Knaster Theorem [42] permits the ascending iterative calculation ∪ᵢ fⁱ(false) of any temporal property r characterized as a least fixpoint μZ = f(Z), provided that f(Z) is monotone, which is ensured by Z only appearing un-negated....

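The ascending iteration that the annotation above invokes is easy to demonstrate in a finite powerset lattice, where the Tarski-Knaster theorem guarantees that iterating a monotone f from false (the empty set) converges to the least fixpoint μZ = f(Z). The transition system and the property (EF goal, i.e. μZ. goal ∨ pre(Z): the states that can reach the goal) are invented for the example.

```python
def lfp(f, bottom=frozenset()):
    """Least fixpoint of a monotone function on finite sets, by ascending
    iteration from the bottom element until f^(i+1)(false) = f^i(false)."""
    z = bottom
    while True:
        nz = f(z)
        if nz == z:
            return z
        z = nz

# Tiny transition system: state -> set of successor states.
succ = {0: {1}, 1: {2, 3}, 2: {0}, 3: {3}, 4: {2}}
goal = frozenset({2})

def f(z):
    """One step of EF goal: goal itself, plus states with a successor in Z."""
    pre = {s for s, ts in succ.items() if ts & z}
    return goal | pre

print(sorted(lfp(f)))   # [0, 1, 2, 4]: state 3 can never reach the goal
```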

Book ChapterDOI
02 Jan 1991
TL;DR: In this article, a multiaxis classification of temporal and modal logic is presented, and the formal syntax and semantics for two representative systems of propositional branching-time temporal logics are described.
Abstract: This chapter discusses temporal and modal logic. It presents a multiaxis classification of systems of temporal logic and describes the framework of linear temporal logic, which, in both its propositional and first-order forms, has been widely employed in the specification and verification of programs. It then describes the competing framework of branching temporal logic, which has also seen wide use, and explains how temporal logic structures can be used to model concurrent programs using nondeterminism and fairness. After surveying other modal and temporal logics in computer science, the chapter gives the formal syntax and semantics of Propositional Linear Temporal Logic (PLTL) and of two representative systems of propositional branching-time temporal logic.
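As a taste of the syntax and semantics the chapter formalizes, the sketch below evaluates temporal formulas over a finite trace. This is a simplification made purely for the example: the chapter's PLTL is defined over infinite paths, and the tuple encoding and operator set here are ours.

```python
def holds(phi, trace, i=0):
    """Does formula phi hold at position i of the finite trace (a list of
    sets of atomic propositions)? Operators follow the usual X/F/G/U names."""
    op = phi[0]
    if op == "ap":                        # atomic proposition
        return phi[1] in trace[i]
    if op == "not":
        return not holds(phi[1], trace, i)
    if op == "and":
        return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if op == "X":                         # next (false at the last position)
        return i + 1 < len(trace) and holds(phi[1], trace, i + 1)
    if op == "F":                         # eventually
        return any(holds(phi[1], trace, j) for j in range(i, len(trace)))
    if op == "G":                         # always
        return all(holds(phi[1], trace, j) for j in range(i, len(trace)))
    if op == "U":                         # phi[1] until phi[2]
        return any(holds(phi[2], trace, j) and
                   all(holds(phi[1], trace, m) for m in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(op)

# G(req -> F ack), written as G not(req and not F ack): every request on this
# trace is eventually acknowledged.
trace = [{"req"}, set(), {"ack"}, {"req", "ack"}]
req_implies_ack = ("G", ("not", ("and", ("ap", "req"),
                                 ("not", ("F", ("ap", "ack"))))))
print(holds(req_implies_ack, trace))      # True
```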

2,871 citations

Book ChapterDOI
22 Mar 1999
TL;DR: This paper shows how Boolean decision procedures, like Stålmarck's Method or the Davis & Putnam Procedure, can replace BDDs, and introduces a bounded model checking procedure for LTL which reduces model checking to propositional satisfiability.
Abstract: Symbolic Model Checking [3, 14] has proven to be a powerful technique for the verification of reactive systems. BDDs [2] have traditionally been used as a symbolic representation of the system. In this paper we show how Boolean decision procedures, like Stålmarck's Method [16] or the Davis & Putnam Procedure [7], can replace BDDs. This new technique avoids the space blow up of BDDs, generates counterexamples much faster, and sometimes speeds up the verification. In addition, it produces counterexamples of minimal length. We introduce a bounded model checking procedure for LTL which reduces model checking to propositional satisfiability. We show that bounded LTL model checking can be done without a tableau construction. We have implemented a model checker BMC, based on bounded model checking, and preliminary results are presented.
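In miniature, bounded model checking looks like the sketch below: unroll the transition relation k steps and ask whether some length-k execution starting in an initial state ends in a bad state, increasing k until a counterexample appears (which is why the counterexamples are of minimal length). The 3-bit counter is invented for the example, and where real BMC emits a propositional formula for a SAT procedure such as Stålmarck's method or Davis & Putnam, this sketch simply enumerates assignments, which is only feasible for toy state spaces.

```python
from itertools import product

N = 3                                     # a state is N boolean latches

def init(s):
    return s == (0, 0, 0)

def trans(s, t):
    """3-bit counter: t = s + 1 (mod 8), bits little-endian."""
    val = lambda u: u[0] + 2 * u[1] + 4 * u[2]
    return val(t) == (val(s) + 1) % 8

def good(s):
    return s != (1, 0, 1)                 # safety: the counter never reaches 5

def bmc(k):
    """Is there a path s0..sk with init(s0), k valid transitions, and a bad sk?"""
    states = list(product((0, 1), repeat=N))
    for path in product(states, repeat=k + 1):
        if (init(path[0])
                and all(trans(path[i], path[i + 1]) for i in range(k))
                and not good(path[-1])):
            return path                   # counterexample of length k
    return None

for k in range(8):                        # deepen the bound until a bug appears
    cex = bmc(k)
    if cex is not None:
        print(f"counterexample at bound k = {k}:", cex)
        break
```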

2,424 citations