Author

D. Long

Bio: D. Long is an academic researcher. The author has contributed to research on the topics of model checking and Kripke structures, has an h-index of 1, and has co-authored 1 publication receiving 7,382 citations.

Papers
01 Sep 1996
TL;DR: Model checking tools, created by both academic and industrial teams, have resulted in an entirely novel approach to verification and test case generation that often enables engineers in the electronics industry to design complex systems with considerable assurance regarding the correctness of their initial designs.
Abstract: Turing Lecture from the winners of the 2007 ACM A.M. Turing Award. In 1981, Edmund M. Clarke and E. Allen Emerson, working in the USA, and Joseph Sifakis working independently in France, authored seminal papers that founded what has become the highly successful field of model checking. This verification technology provides an algorithmic means of determining whether an abstract model---representing, for example, a hardware or software design---satisfies a formal specification expressed as a temporal logic (TL) formula. Moreover, if the property does not hold, the method identifies a counterexample execution that shows the source of the problem. The progression of model checking to the point where it can be successfully used for complex systems has required the development of sophisticated means of coping with what is known as the state explosion problem. Great strides have been made on this problem over the past 28 years by what is now a very large international research community. As a result many major hardware and software companies are beginning to use model checking in practice. Examples of its use include the verification of VLSI circuits, communication protocols, software device drivers, real-time embedded systems, and security algorithms. The work of Clarke, Emerson, and Sifakis continues to be central to the success of this research area. Their work over the years has led to the creation of new logics for specification, new verification algorithms, and surprising theoretical results. Model checking tools, created by both academic and industrial teams, have resulted in an entirely novel approach to verification and test case generation. This approach, for example, often enables engineers in the electronics industry to design complex systems with considerable assurance regarding the correctness of their initial designs. Model checking promises to have an even greater impact on the hardware and software industries in the future. ---Moshe Y. Vardi, Editor-in-Chief

7,392 citations
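
The abstract above describes model checking at a high level: exhaustively exploring a model's state space against a temporal-logic property and reporting a counterexample execution when the property fails. The sketch below is a rough illustration only: it checks a simple invariant on a tiny, invented Kripke structure by explicit-state search. The states, labels, and property are assumptions made up for this example; practical model checkers rely on symbolic representations and abstraction to cope with the state explosion problem the lecture discusses.

```python
from collections import deque

# Hypothetical Kripke structure: states, transitions, and atomic-proposition
# labels. All concrete values here are invented for illustration.
TRANSITIONS = {
    "idle":     ["trying"],
    "trying":   ["critical", "idle"],
    "critical": ["idle", "error"],   # "error" stands for a bad design state
    "error":    ["error"],
}
LABELS = {"idle": set(), "trying": set(), "critical": {"in_cs"}, "error": {"bad"}}

def check_invariant(init, safe):
    """Breadth-first exploration of the reachable states.

    Checks the invariant "every reachable state satisfies safe" (AG safe)
    and returns ("holds", None) or ("fails", counterexample_path)."""
    parent = {init: None}
    queue = deque([init])
    while queue:
        state = queue.popleft()
        if not safe(state):
            # Reconstruct the execution that leads to the violation.
            path, s = [], state
            while s is not None:
                path.append(s)
                s = parent[s]
            return "fails", list(reversed(path))
        for succ in TRANSITIONS.get(state, []):
            if succ not in parent:
                parent[succ] = state
                queue.append(succ)
    return "holds", None

result, trace = check_invariant("idle", lambda s: "bad" not in LABELS[s])
print(result, trace)   # fails ['idle', 'trying', 'critical', 'error']
```

On this toy structure the search prints a shortest execution reaching the violating state, which is the counterexample role the abstract refers to.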


Cited by
Book
07 Jan 1999

4,478 citations

Journal ArticleDOI
TL;DR: An integrated approach to fitting psychometric functions, assessing the goodness of fit, and providing confidence intervals for the function’s parameters and other estimates derived from them, for the purposes of hypothesis testing is described.
Abstract: The psychometric function relates an observer’s performance to an independent variable, usually some physical quantity of a stimulus in a psychophysical task. This paper, together with its companion paper (Wichmann & Hill, 2001), describes an integrated approach to (1) fitting psychometric functions, (2) assessing the goodness of fit, and (3) providing confidence intervals for the function’s parameters and other estimates derived from them, for the purposes of hypothesis testing. The present paper deals with the first two topics, describing a constrained maximum-likelihood method of parameter estimation and developing several goodness-of-fit tests. Using Monte Carlo simulations, we deal with two specific difficulties that arise when fitting functions to psychophysical data. First, we note that human observers are prone to stimulus-independent errors (or lapses). We show that failure to account for this can lead to serious biases in estimates of the psychometric function’s parameters and illustrate how the problem may be overcome. Second, we note that psychophysical data sets are usually rather small by the standards required by most of the commonly applied statistical tests. We demonstrate the potential errors of applying traditional χ2 methods to psychophysical data and advocate use of Monte Carlo resampling techniques that do not rely on asymptotic theory. We have made available the software to implement our methods.

2,263 citations
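
As a hedged illustration of the fitting approach summarized above, the sketch below fits a cumulative-Gaussian psychometric function with a lapse-rate parameter to invented two-alternative forced-choice data by constrained maximum likelihood. The data, parameterization, and bounds are assumptions for this example and do not reproduce the authors' published software; confidence intervals would additionally require the Monte Carlo (bootstrap) resampling described in the companion paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical 2AFC data: stimulus levels, trials per level, correct responses.
levels  = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
trials  = np.array([40, 40, 40, 40, 40])
correct = np.array([22, 25, 31, 37, 39])

def psychometric(x, alpha, beta, lam, gamma=0.5):
    """Cumulative-Gaussian psychometric function with guess rate gamma
    (0.5 for 2AFC) and lapse rate lam (stimulus-independent errors)."""
    return gamma + (1.0 - gamma - lam) * norm.cdf(x, loc=alpha, scale=beta)

def neg_log_likelihood(params):
    alpha, beta, lam = params
    p = np.clip(psychometric(levels, alpha, beta, lam), 1e-9, 1 - 1e-9)
    return -np.sum(correct * np.log(p) + (trials - correct) * np.log(1 - p))

# Constrained maximum likelihood: the lapse rate is confined to a small
# interval, in the spirit of the paper; the exact bounds are an assumption.
fit = minimize(neg_log_likelihood, x0=[2.0, 1.0, 0.01],
               bounds=[(0.1, 10.0), (0.05, 10.0), (0.0, 0.06)],
               method="L-BFGS-B")
alpha_hat, beta_hat, lam_hat = fit.x
print(f"threshold ~ {alpha_hat:.2f}, spread ~ {beta_hat:.2f}, lapse ~ {lam_hat:.3f}")
```

Leaving the lapse rate free but bounded is the key design point: without it, a handful of stimulus-independent errors at easy stimulus levels can badly bias the threshold and slope estimates.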

Journal ArticleDOI
TL;DR: Different approaches to the determination of upper bounds on execution times are described and several commercially available tools and research prototypes are surveyed.
Abstract: The determination of upper bounds on execution times, commonly called worst-case execution times (WCETs), is a necessary step in the development and validation process for hard real-time systems. This problem is hard if the underlying processor architecture has components such as caches, pipelines, branch prediction, and other speculative components. This article describes different approaches to this problem and surveys several commercially available tools and research prototypes.

1,946 citations
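
One of the simplest static techniques in the family the survey covers is a tree-based (timing-schema) calculation, in which the bound for each program construct is computed from the bounds of its parts. The sketch below is a minimal illustration under invented block costs and loop bounds; real WCET tools derive such values from a detailed processor model and from flow analysis, and none of the surveyed tools is reproduced here.

```python
# Minimal tree-based (timing-schema) WCET estimate: the bound for each
# construct is computed from the bounds of its children. Block costs (in
# cycles) and the loop bound below are invented for illustration.

def wcet(node):
    kind = node[0]
    if kind == "block":                # ("block", worst_case_cycles)
        return node[1]
    if kind == "seq":                  # ("seq", [child, child, ...])
        return sum(wcet(child) for child in node[1])
    if kind == "if":                   # ("if", condition, then_branch, else_branch)
        return wcet(node[1]) + max(wcet(node[2]), wcet(node[3]))
    if kind == "loop":                 # ("loop", max_iterations, body)
        return node[1] * wcet(node[2])
    raise ValueError(f"unknown construct: {kind!r}")

program = ("seq", [
    ("block", 12),
    ("loop", 100, ("seq", [
        ("block", 8),
        ("if", ("block", 2), ("block", 30), ("block", 5)),
    ])),
    ("block", 4),
])
print("WCET bound (cycles):", wcet(program))   # 12 + 100 * (8 + 2 + 30) + 4 = 4016
```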

Book ChapterDOI
08 Jul 2003
TL;DR: Counterexample-guided abstraction refinement is an automatic abstraction method where the key step is to extract information from false negatives ("spurious counterexamples") due to over-approximation.
Abstract: The main practical problem in model checking is the combinatorial explosion of system states commonly known as the state explosion problem. Abstraction methods attempt to reduce the size of the state space by employing knowledge about the system and the specification in order to model only relevant features in the Kripke structure. Counterexample-guided abstraction refinement is an automatic abstraction method where, starting with a relatively small skeletal representation of the system to be verified, increasingly precise abstract representations of the system are computed. The key step is to extract information from false negatives ("spurious counterexamples") due to over-approximation.

1,520 citations
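
The abstraction-refinement loop described above can be illustrated on a toy explicit-state system: build an existential abstraction from a set of predicates, model check it, try to replay any abstract counterexample on the concrete system, and refine when the counterexample turns out to be spurious. The sketch below is a self-contained illustration with an invented system and a deliberately naive refinement heuristic; real CEGAR implementations work symbolically with predicate abstraction rather than enumerating concrete states.

```python
from collections import deque

# Toy CEGAR loop over an explicit finite-state system. The system, the
# initial predicate, and the refinement heuristic are all invented.
INIT = {0}
TRANSITIONS = {0: {1}, 1: {2}, 2: {3}, 3: {3}, 4: {5}, 5: {6}, 6: {7}, 7: {7}}
BAD = {7}                      # specification: "no bad state is reachable"

def alpha(s, predicates):
    """Abstract a concrete state to its vector of predicate values."""
    return tuple(p(s) for p in predicates)

def build_abstraction(predicates):
    """Existential abstraction: an abstract edge exists whenever some pair of
    concrete states in the two abstract states has a concrete edge."""
    a_init = {alpha(s, predicates) for s in INIT}
    a_bad = {alpha(s, predicates) for s in BAD}
    a_trans = {}
    for s, succs in TRANSITIONS.items():
        for t in succs:
            a_trans.setdefault(alpha(s, predicates), set()).add(alpha(t, predicates))
    return a_init, a_trans, a_bad

def abstract_counterexample(a_init, a_trans, a_bad):
    """BFS for a path from an abstract initial state to an abstract bad state."""
    parent = {a: None for a in a_init}
    queue = deque(a_init)
    while queue:
        a = queue.popleft()
        if a in a_bad:
            path = []
            while a is not None:
                path.append(a)
                a = parent[a]
            return list(reversed(path))
        for b in a_trans.get(a, ()):
            if b not in parent:
                parent[b] = a
                queue.append(b)
    return None

def concretizable_prefix(path, predicates):
    """Replay the abstract path on the concrete system; the result equals
    len(path) exactly when the counterexample is real (not spurious)."""
    current = {s for s in INIT if alpha(s, predicates) == path[0]}
    for i in range(1, len(path)):
        current = {t for s in current for t in TRANSITIONS.get(s, ())
                   if alpha(t, predicates) == path[i]}
        if not current:
            return i
    return len(path)

def cegar(predicates, max_iters=20):
    predicates = list(predicates)
    for _ in range(max_iters):
        a_init, a_trans, a_bad = build_abstraction(predicates)
        path = abstract_counterexample(a_init, a_trans, a_bad)
        if path is None:
            return "property holds"
        k = concretizable_prefix(path, predicates)
        if k == len(path):
            return f"property fails, counterexample {path}"
        # Spurious counterexample: add a (naive) predicate separating the
        # concrete states actually reachable along the failing prefix.
        reachable = set(INIT)
        for _ in range(k - 1):
            reachable = {t for s in reachable for t in TRANSITIONS.get(s, ())}
        predicates.append(lambda s, r=frozenset(reachable): s in r)
    return "undecided"

print(cegar([lambda s: s in BAD]))   # state 7 is unreachable from 0: "property holds"
```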

Proceedings ArticleDOI
12 May 2002
TL;DR: This paper presents a technique for generating and analyzing attack graphs, based on symbolic model checking algorithms, which constructs the graphs automatically and efficiently.
Abstract: An integral part of modeling the global view of network security is constructing attack graphs. Manual attack graph construction is tedious, error-prone, and impractical for attack graphs larger than a hundred nodes. In this paper we present an automated technique for generating and analyzing attack graphs. We base our technique on symbolic model checking algorithms, letting us construct attack graphs automatically and efficiently. We also describe two analyses to help decide which attacks would be most cost-effective to guard against. We implemented our technique in a tool suite and tested it on a small network example, which includes models of a firewall and an intrusion detection system.

1,302 citations
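
In the spirit of the paper above, an attack graph can be generated by exhaustively exploring attacker states, where each state is a set of acquired privileges and each edge is an exploit whose preconditions hold. The sketch below is a toy, explicit-state illustration with an invented network model (hosts, exploits, goal); the paper itself encodes the problem for a symbolic model checker rather than enumerating states directly.

```python
from collections import deque

# Toy attack-graph generation by explicit state exploration. A state is the
# set of privileges the attacker has gained; an exploit is a rule with
# preconditions and a postcondition. The model below is entirely hypothetical.
EXPLOITS = {
    # name: (preconditions, postcondition)
    "sshd_bof(web)":  ({"user(attacker_host)"},         "user(web)"),
    "ftp_rhosts(db)": ({"user(web)"},                    "trust(db,web)"),
    "rsh_login(db)":  ({"user(web)", "trust(db,web)"},   "user(db)"),
    "local_bof(db)":  ({"user(db)"},                     "root(db)"),
}
INITIAL = frozenset({"user(attacker_host)"})
GOAL = "root(db)"

def generate_attack_graph():
    """Explore every reachable attacker state; return nodes and labeled edges."""
    nodes, edges = {INITIAL}, []
    queue = deque([INITIAL])
    while queue:
        state = queue.popleft()
        for name, (pre, post) in EXPLOITS.items():
            if pre <= state and post not in state:
                succ = frozenset(state | {post})
                edges.append((state, name, succ))
                if succ not in nodes:
                    nodes.add(succ)
                    queue.append(succ)
    return nodes, edges

nodes, edges = generate_attack_graph()
print(f"{len(nodes)} states, {len(edges)} exploit edges")
print("attacker goal reachable:", any(GOAL in s for s in nodes))
for src, exploit, dst in edges:
    if GOAL in dst:
        print("final exploit on a successful path:", exploit)
```

Analyses like the cost-effectiveness questions mentioned in the abstract would then operate on the resulting node and edge sets, for example by finding minimal sets of exploits whose removal disconnects the goal.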