
Showing papers presented at "Formal Methods in 2001"


Journal ArticleDOI
01 Jul 2001
TL;DR: Live sequence charts (LSCs) as discussed by the authors allow the distinction between possible and necessary behavior both globally, on the level of an entire chart, and locally, when specifying events, conditions and progress over time within a chart.
Abstract: While message sequence charts (MSCs) are widely used in industry to document the interworking of processes or objects, they are expressively weak, being based on the modest semantic notion of a partial ordering of events as defined, e.g., in the ITU standard. A highly expressive and rigorously defined MSC language is a must for serious, semantically meaningful tool support for use-cases and scenarios. It is also a prerequisite to addressing what we regard as one of the central problems in behavioral specification of systems: relating scenario-based inter-object specification to state-machine intra-object specification. This paper proposes an extension of MSCs, which we call live sequence charts (or LSCs), since our main extension deals with specifying “liveness”, i.e., things that must occur. In fact, LSCs allow the distinction between possible and necessary behavior both globally, on the level of an entire chart, and locally, when specifying events, conditions and progress over time within a chart. This makes it possible to specify forbidden scenarios, for example, and enables naturally specified structuring constructs such as subcharts, branching and iteration.

931 citations


Journal ArticleDOI
01 Jul 2001
TL;DR: This tutorial focuses on recent techniques that combine model checking with satisfiability solving, known as bounded model checking, which do a very fast exploration of the state space, and for some types of problems seem to offer large performance improvements over previous approaches.
Abstract: The phrase model checking refers to algorithms for exploring the state space of a transition system to determine if it obeys a specification of its intended behavior. These algorithms can perform exhaustive verification in a highly automatic manner, and, thus, have attracted much interest in industry. Model checking programs are now being commercially marketed. However, model checking has been held back by the state explosion problem, which is the problem that the number of states in a system grows exponentially in the number of system components. Much research has been devoted to ameliorating this problem. In this tutorial, we first give a brief overview of the history of model checking to date, and then focus on recent techniques that combine model checking with satisfiability solving. These techniques, known as bounded model checking, do a very fast exploration of the state space, and for some types of problems seem to offer large performance improvements over previous approaches. We review experiments with bounded model checking on both public domain and industrial designs, and propose a methodology for applying the technique in industry for invariance checking. We then summarize the pros and cons of this new technology and discuss future research efforts to extend its capabilities.

770 citations
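
To make the unrolling concrete, here is a minimal sketch of SAT-based bounded model checking in Python; the 2-bit counter, the invariant, and the use of the third-party python-sat package are our own illustrative choices, not taken from the tutorial.

```python
# A sketch of SAT-based BMC on a toy 2-bit counter (our own example,
# not the tutorial's). Requires the third-party `python-sat` package.
from pysat.solvers import Glucose3

def var(bit, step):
    # DIMACS variable for state bit `bit` (0 or 1) at time `step`.
    return 2 * step + bit + 1

def violation_at(k):
    # Build init /\ T(0,1) /\ ... /\ T(k-1,k) /\ bad(k) and solve it.
    with Glucose3() as solver:
        solver.add_clause([-var(0, 0)])      # initial state: counter == 0
        solver.add_clause([-var(1, 0)])
        for t in range(k):                   # transition: increment mod 4
            b0, b1 = var(0, t), var(1, t)
            n0, n1 = var(0, t + 1), var(1, t + 1)
            # n0 <-> not b0
            solver.add_clause([n0, b0]); solver.add_clause([-n0, -b0])
            # n1 <-> (b1 xor b0)
            solver.add_clause([-n1, b1, b0]); solver.add_clause([-n1, -b1, -b0])
            solver.add_clause([n1, -b1, b0]); solver.add_clause([n1, b1, -b0])
        solver.add_clause([var(0, k)])       # violation: counter == 3
        solver.add_clause([var(1, k)])
        if solver.solve():
            model = set(solver.get_model())
            return [(int(var(1, t) in model), int(var(0, t) in model))
                    for t in range(k + 1)]
    return None

for k in range(8):                           # deepen the bound until a hit
    trace = violation_at(k)
    if trace is not None:
        print(f"counterexample at depth {k}: {trace}")
        break
```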


Journal ArticleDOI
01 Oct 2001
TL;DR: An analysis of safety properties is presented that enables us to prevent the doubly-exponential blow up and to use the same automaton used for model checking of general properties, replacing the search for bad cycles by a search for finite bad prefixes.
Abstract: Of special interest in formal verification are safety properties, which assert that the system always stays within some allowed region. Proof rules for the verification of safety properties have been developed in the proof-based approach to verification, making verification of safety properties simpler than verification of general properties. In this paper we consider model checking of safety properties. A computation that violates a general linear property reaches a bad cycle, which witnesses the violation of the property. Accordingly, current methods and tools for model checking of linear properties are based on a search for bad cycles. A symbolic implementation of such a search involves the calculation of a nested fixed-point expression over the system's state space, and is often infeasible. Every computation that violates a safety property has a finite prefix along which the property is violated. We use this fact in order to base model checking of safety properties on a search for finite bad prefixes. Such a search can be performed using a simple forward or backward symbolic reachability check. A naive methodology that is based on such a search involves a construction of an automaton (or a tableau) that is doubly exponential in the property. We present an analysis of safety properties that enables us to prevent the doubly-exponential blow up and to use the same automaton used for model checking of general properties, replacing the search for bad cycles by a search for bad prefixes.

582 citations
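
The observation that every safety violation has a finite bad prefix reduces checking to plain reachability. A small explicit-state sketch of that idea (the paper works symbolically; the toy transition system here is invented for illustration):

```python
# A sketch of safety checking as reachability: search forward for a
# finite bad prefix instead of hunting for bad cycles.
from collections import deque

def find_bad_prefix(init, successors, is_bad):
    # Breadth-first search returns a shortest finite bad prefix, if any.
    queue = deque((s, (s,)) for s in init)
    seen = set(init)
    while queue:
        state, prefix = queue.popleft()
        if is_bad(state):
            return prefix                # finite witness of the violation
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, prefix + (nxt,)))
    return None                          # the safety property holds

# Toy system: a counter stepping by 1 or 2; safety: never exceed 4.
print(find_bad_prefix(
    init={0},
    successors=lambda s: {s + 1, s + 2} if s < 6 else set(),
    is_bad=lambda s: s > 4))             # e.g. (0, 1, 3, 5)
```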


Book ChapterDOI
12 Mar 2001
TL;DR: Houdini is presented, an annotation assistant for the modular checker ESC/Java, which generates a large number of candidate annotations and uses ESC/Java to verify or refute each of these annotations.
Abstract: A static program checker that performs modular checking can check one program module for errors without needing to analyze the entire program. Modular checking requires that each module be accompanied by annotations that specify the module. To help reduce the cost of writing specifications, this paper presents Houdini, an annotation assistant for the modular checker ESC/Java. To infer suitable ESC/Java annotations for a given program, Houdini generates a large number of candidate annotations and uses ESC/Java to verify or refute each of these annotations. The paper describes the design, implementation, and preliminary evaluation of Houdini.

423 citations
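
The heart of Houdini is a simple greatest-fixpoint loop over candidate annotations. A sketch, with a hypothetical `refuted` callback standing in for an ESC/Java run:

```python
# A sketch of the Houdini fixpoint loop. `refuted(ann, others)` stands
# in for a run of the modular checker that tries to verify annotation
# `ann` while assuming the annotations in `others`; it is hypothetical.
def houdini(candidates, refuted):
    annotations = set(candidates)
    changed = True
    while changed:
        changed = False
        for ann in sorted(annotations):
            # Removing one refuted candidate can invalidate others, so
            # the loop runs to a (greatest) fixpoint.
            if refuted(ann, annotations - {ann}):
                annotations.remove(ann)
                changed = True
    return annotations

# Toy use: the checker refutes "inv1"; the surviving set is consistent.
print(houdini({"inv1", "inv2"}, lambda ann, others: ann == "inv1"))
# {'inv2'}
```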


Proceedings Article
01 Aug 2001
TL;DR: A calculus is introduced that describes the movement of processes and devices, including movement through administrative domains.
Abstract: This chapter introduces a calculus describing the movement of processes and devices, including movement through administrative domains.

322 citations


Book ChapterDOI
12 Mar 2001
TL;DR: This work outlines an approach that is designed to provide immediate benefit at relatively low cost; its elements are a small and succinct modelling language and a fully automatic analysis scheme that can perform simulations and find errors.
Abstract: Formal methods have offered great benefits, but often at a heavy price. For everyday software development, in which the pressures of the market don't allow full-scale formal methods to be applied, a more lightweight approach is called for. I'll outline an approach that is designed to provide immediate benefit at relatively low cost. Its elements are a small and succinct modelling language, and a fully automatic analysis scheme that can perform simulations and find errors. I'll describe some recent case studies using this approach, involving naming schemes, architectural styles, and protocols for networks with changing topologies. I'll make some controversial claims about this approach and its relationship to UML and traditional formal specification approaches, and I'll barbeque some sacred cows, such as the belief that executability compromises abstraction.

177 citations


Journal ArticleDOI
01 Jan 2001
TL;DR: The objective of this paper is to show how verification of dense-time systems modeled as timed automata can be effectively performed using untimed verification techniques; two case studies, Fischer's mutual exclusion protocol and the CSMA/CD communication protocol, demonstrate the practical interest of the approach.
Abstract: The objective of this paper is to show how verification of dense-time systems modeled as timed automata can be effectively performed using untimed verification techniques. In that way, the existing rich infrastructure in algorithms and tools for the verification of untimed systems can be exploited. The paper completes the ideas introduced in (Tripakis and Yovine, 1996, in Proc. 8th Conf. Computer-Aided Verification, CAV'96, Rutgers, NJ. LNCS, Vol. 1102, Springer-Verlag, 1996, pp. 232–243). Our approach consists of two steps. First, given a timed system A, we compute a finite graph G which captures the behavior of A modulo the fact that exact time delays are abstracted away. Then, we apply untimed verification techniques on G to prove properties of A. As property-specification languages, we use both the linear-time formalism of timed Büchi automata (TBA) and the branching-time logic TCTL. Model checking A against properties specified as TBA or TCTL formulae comes down to applying, respectively, automata-emptiness or CTL model-checking algorithms on G. The abstraction of exact delays is formalized under the concept of time-abstracting bisimulations. We define three time-abstracting bisimulations which are strictly ordered with respect to their reduction power. The strongest of them preserves both linear- and branching-time properties, whereas the two weaker ones preserve only linear-time properties. The finite graph G is the quotient of A with respect to a time-abstracting bisimulation. Generating G is called minimization, and can be done by adapting a partition-refinement algorithm to the timed case. The adapted algorithm is symbolic, that is, equivalence classes are represented as simple polyhedra. When these polyhedra are not convex, operations become expensive; therefore, we develop a partition-refinement technique which preserves convexity. We have implemented the minimization algorithm in a prototype module called minim, as part of the real-time verification platform KRONOS (Bozga et al., 1998, in CAV'98). minim connects KRONOS to the CADP tool suite for the verification of untimed graphs (Fernandez et al., 1992, in 14th Int. Conf. on Software Engineering). To demonstrate the practical interest behind our approach, we present two case studies, namely, Fischer's mutual exclusion protocol and the CSMA/CD communication protocol.

171 citations
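
The minimization step is an instance of classical partition refinement. A sketch over a plain finite graph (the paper's version refines symbolically, with blocks represented as polyhedra):

```python
# A sketch of partition refinement on a plain finite graph: two states
# stay in one block only if they reach the same set of blocks.
def refine(partition, succ):
    while True:
        block_of = {s: i for i, blk in enumerate(partition) for s in blk}
        sig = lambda s: frozenset(block_of[t] for t in succ(s))
        refined = []
        for blk in partition:
            groups = {}
            for s in blk:
                groups.setdefault(sig(s), set()).add(s)
            refined.extend(frozenset(g) for g in groups.values())
        if len(refined) == len(partition):
            return refined                   # stable: a bisimulation quotient
        partition = refined

succ = {1: [2], 2: [3], 3: [3], 4: [3]}
initial = [frozenset({1, 2, 4}), frozenset({3})]   # split by labelling
print(refine(initial, lambda s: succ[s]))          # 2 and 4 stay together
```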


Journal ArticleDOI
01 Mar 2001
TL;DR: A parallel version of the explicit state enumeration verifier Murϕ for distributed memory multiprocessors and networks of workstations using the message passing paradigm shows close to linear speedups, which are largely insensitive to communication latency and bandwidth.
Abstract: With the use of state and memory reduction techniques in verification by explicit state enumeration, runtime becomes a major limiting factor. We describe a parallel version of the explicit state enumeration verifier Murϕ for distributed memory multiprocessors and networks of workstations using the message passing paradigm. In experiments with three complex cache coherence protocols on an SP2 multiprocessor and on a network of workstations at UC Berkeley, parallel Murϕ shows close to linear speedups, which are largely insensitive to communication latency and bandwidth. There is some slowdown with increasing communication overhead, for which a simple yet relatively accurate approximation formula is given. Techniques to reduce overhead and required bandwidth and to allow heterogeneity and dynamically changing load in the parallel machine are discussed, which we expect will allow good speedups when using conventional networks of workstations.

167 citations
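
The core idea that makes the parallelization work is partitioning the state space by a hash function, so each state has a unique owning node and duplicate detection stays local. A sketch, with `send` a hypothetical stand-in for the message-passing layer:

```python
# A sketch of hash-based state partitioning. `send` is a hypothetical
# stand-in for the message-passing layer (the real tool also batches
# messages), and a deployment needs a hash function that is identical
# on every node, unlike Python's per-process salted `hash` for strings.
def owner(state, nprocs):
    return hash(state) % nprocs

def expand(state, successors, my_rank, nprocs, send, seen, work_queue):
    for nxt in successors(state):
        dest = owner(nxt, nprocs)
        if dest != my_rank:
            send(dest, nxt)          # the owning node dedups and expands it
        elif nxt not in seen:
            seen.add(nxt)            # duplicate detection is purely local
            work_queue.append(nxt)
```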


Journal ArticleDOI
01 Mar 2001
TL;DR: The notions of vacuity and interesting witness are formalized, and it is shown how to detect vacuity and generate interesting witnesses in temporal model checking; a practical solution is provided for a useful subset of ACTL formulas.
Abstract: The ability to generate a counter-example is an important feature of model checking tools, because a counter-example provides information to the user in the case that the formula being checked is found to be non-valid. In this paper, we turn our attention to providing similar feedback to the user in the case that the formula is found to be valid, because valid formulas can hide real problems in the model. For instance, propositional logic formulas containing implications can suffer from antecedent failure, in which the formula is trivially valid because the pre-condition of the implication is not satisfiable. We call this vacuity, and extend the definition to cover other kinds of trivial validity. For non-vacuously valid formulas, we define an interesting witness as a non-trivial example of the validity of the formula. We formalize the notions of vacuity and interesting witness, and show how to detect vacuity and generate interesting witnesses in temporal model checking. Finally, we provide a practical solution for a useful subset of ACTL formulas.

142 citations
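
Antecedent failure, the paper's motivating special case, is easy to picture in code. A toy explicit-state sketch (the state sets and predicates are invented):

```python
# A sketch of antecedent-failure detection on an explicit state set.
# An invariant "p implies q" that holds only because p never occurs is
# flagged as vacuous; any reachable state satisfying p would start an
# interesting witness.
def check_invariant(reachable, p, q):
    holds = all(q(s) for s in reachable if p(s))
    vacuous = holds and not any(p(s) for s in reachable)
    return holds, vacuous

print(check_invariant(range(10), p=lambda s: s > 20, q=lambda s: s % 2 == 0))
# (True, True): valid, but only because the antecedent is unsatisfiable
```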


Journal ArticleDOI
01 Mar 2001
TL;DR: This work combines both approaches and develops a method for using partial-order reduction techniques in symbolic BDD-based invariant checking, and presents theoretical results to prove the correctness of the method, and experimental results to demonstrate its efficacy.
Abstract: State-space explosion is a fundamental obstacle in the formal verification of designs and protocols. Several techniques for combating this problem have emerged in the past few years, among which two are significant: partial-order reduction and symbolic state-space search. In asynchronous systems, interleavings of independent concurrent events are equivalent, and only a representative interleaving needs to be explored to verify local properties. Partial-order methods exploit this redundancy and visit only a subset of the reachable states. Symbolic techniques, on the other hand, capture the transition relation of a system and the set of reachable states as boolean functions. In many cases, these functions can be represented compactly using binary decision diagrams (BDDs). Traditionally, the two techniques have been practiced by two different schools—partial-order methods with enumerative depth-first search for the analysis of asynchronous network protocols, and symbolic breadth-first search for the analysis of synchronous hardware designs. We combine both approaches and develop a method for using partial-order reduction techniques in symbolic BDD-based invariant checking. We present theoretical results to prove the correctness of the method, and experimental results to demonstrate its efficacy.

132 citations
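
The partial-order side of the combination rests on choosing a reduced set of actions to explore. A deliberately conservative sketch of such a choice, which ignores the cycle-closing proviso that a full implementation also needs:

```python
# A conservative ample-set choice: if one enabled action is invisible
# to the property and independent of all other enabled actions,
# exploring it alone preserves local properties. A complete
# implementation also needs a cycle-closing proviso, omitted here.
def ample_actions(state, enabled, independent, invisible):
    acts = enabled(state)
    for a in acts:
        if invisible(a) and all(independent(a, b) for b in acts if b != a):
            return [a]               # one representative interleaving
    return list(acts)                # fall back to full expansion

print(ample_actions("s0",
                    enabled=lambda s: ["p1.step", "p2.step"],
                    independent=lambda a, b: True,
                    invisible=lambda a: True))   # ['p1.step']
```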


Journal ArticleDOI
01 Jul 2001
TL;DR: The design problem of output feedback controllers for Takagi–Sugeno fuzzy models is considered and sufficient conditions for the asymptotic convergence of the fuzzy observers are given.
Abstract: In this paper the design problem of output feedback controllers for Takagi–Sugeno fuzzy models is considered. As for the premise variables, we consider two cases: the outputs and the state variables. In each case, we first consider the design of observers. In the first case we give sufficient conditions for the asymptotic convergence of the fuzzy observers. In the second case we give observers for an approximation of the original system. We then propose the output feedback controllers based on state feedback controllers and observers. Two design examples are given to illustrate the theory.

Proceedings Article
12 Mar 2001
TL;DR: In this paper, an operator suite for grammar adaptation is derived from a few fundamental transformation primitives and combinators, and three groups of operators are identified, namely operators for refactoring, construction and destruction.
Abstract: We employ transformations for the adaptation of grammars. Grammars need to be adapted in grammar development, grammar maintenance, grammar reengineering, and grammar recovery. Starting from a few fundamental transformation primitives and combinators, we derive an operator suite for grammar adaptation. Three groups of operators are identified, namely operators for refactoring, construction and destruction. While refactoring is semantics-preserving in the narrow sense, transformations for construction and destruction require the consideration of relaxed notions of semantics preservation based on other grammar relations than equality of generated languages. The consideration of semantics and accompanying preservation properties is slightly complicated by the fact that we cannot insist on reduced grammars.
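
As a flavour of what a refactoring operator in such a suite does, here is a sketch of nonterminal renaming, which preserves the generated language; the grammar encoding is our own choice, not the paper's notation:

```python
# A sketch of one refactoring operator: renaming a nonterminal is
# semantics-preserving in the narrow sense, since the generated
# language is unchanged. Grammar encoding: nonterminal -> list of
# alternative right-hand sides.
def rename(grammar, old, new):
    if new in grammar:
        raise ValueError(f"nonterminal {new!r} already defined")
    subst = lambda sym: new if sym == old else sym
    return {subst(lhs): [[subst(sym) for sym in rhs] for rhs in alts]
            for lhs, alts in grammar.items()}

expr = {"E": [["E", "+", "T"], ["T"]], "T": [["id"]]}
print(rename(expr, "T", "Term"))
# {'E': [['E', '+', 'Term'], ['Term']], 'Term': [['id']]}
```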

Proceedings ArticleDOI
16 Jul 2001
TL;DR: A combination of the well-established formal specification languages Z and CSP is presented, aimed at the calculational refinement of specifications to programs written in a language similar to occam and Handel-C.
Abstract: We present a combination of the well-established formal specification languages Z and CSP; our objective is to provide support for the specification of both data and behaviour aspects of concurrent systems, and a development technique. The resulting language, Circus, distinguishes itself in that it is aimed at the calculational refinement of specifications to programs written in a language similar to occam and Handel-C. In this paper, we present Circus, the rationale for its design, and a case study in its use.

Journal ArticleDOI
01 Jul 2001
TL;DR: It is shown that lazy learning is able to provide better modeling accuracy and higher control performance at the cost of a reduced readability of the resulting approximator.
Abstract: The composition of simple local models for approximating complex nonlinear mappings is a common practice in recent modeling and control literature. This paper presents a comparative analysis of two different local approaches: the neuro-fuzzy inference system and the lazy learning approach. Neuro-fuzzy is a hybrid representation which combines the linguistic description typical of fuzzy inference systems, with learning procedures inspired by neural networks. Lazy learning is a memory-based technique that uses a query-based approach to select the best local model configuration by assessing and comparing different alternatives in cross-validation. In this paper, the two approaches are compared both as learning algorithms, and as identification modules of an adaptive control system. We show that lazy learning is able to provide better modeling accuracy and higher control performance at the cost of a reduced readability of the resulting approximator. Illustrative examples of identification and control of a nonlinear system starting from simulated data are given.
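
The lazy-learning side of the comparison is easy to sketch: nothing is fit in advance, and each query is answered from its nearest stored examples. This toy version uses a local averaging model and omits the cross-validated model selection the paper relies on:

```python
# A sketch of lazy (memory-based) prediction: the "model" is the data,
# consulted only at query time. The paper additionally selects the
# local model and its size by cross-validation, omitted here.
import numpy as np

def lazy_predict(X, y, query, k=5):
    dists = np.linalg.norm(X - query, axis=1)
    nearest = np.argsort(dists)[:k]      # indices of the k closest points
    return y[nearest].mean()             # local constant (averaging) model

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=200)
print(lazy_predict(X, y, np.array([1.0])))   # roughly sin(1) ~ 0.84
```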

Proceedings Article
01 Aug 2001
TL;DR: This chapter introduces PICCOLA, a small 'composition language' that embodies the paradigm 'Applications = Components + Scripts', and illustrates how PICCOLA offers explicit support for viewing applications as compositions of components and how separating components from their composition improves maintainability.
Abstract: Although object-oriented languages are well-suited to implement software components, they fail to shine in the construction of component-based applications, largely because object-oriented design tends to obscure a component-based architecture. We propose to tackle this problem by clearly separating component implementation and composition. In particular, we claim that application development is best supported by consciously applying the paradigm 'Applications = Components + Scripts'. In this chapter we introduce PICCOLA, a small 'composition language' that embodies this paradigm. PICCOLA models components and compositional abstractions by means of communicating concurrent agents. Flexibility, extensibility and robustness are obtained by modelling both interfaces of components and the contexts in which they live by 'forms', a special notion of extensible records. Using a concrete example, we illustrate how PICCOLA offers explicit support for viewing applications as compositions of components and show that separating components from their composition improves maintainability.

Book ChapterDOI
12 Mar 2001
TL;DR: Generic models for controlling information flow in distributed systems have been thoroughly investigated, but cannot cope with common features of secure distributed systems like channel control, information filters, or explicit downgrading.
Abstract: The development of formal security models is a difficult, time consuming, and expensive task. This development burden can be considerably reduced by using generic security models. In a security model, confidentiality as well as integrity requirements can be expressed by restrictions on the information flow. Generic models for controlling information flow in distributed systems have been thoroughly investigated. Nevertheless, the known approaches cannot cope with common features of secure distributed systems like channel control, information filters, or explicit downgrading. This limitation caused a major gap which has prevented the migration of a large body of research into practice. To bridge this gap is the main goal of this article.

Journal ArticleDOI
01 Jul 2001
TL;DR: An adaptive fuzzy control scheme for a class of continuous-time nonlinear dynamic systems for which explicit linear parameterizations of the uncertainties are either unknown or impossible is proposed.
Abstract: This paper proposes an adaptive fuzzy control scheme for a class of continuous-time nonlinear dynamic systems for which explicit linear parameterizations of the uncertainties are either unknown or impossible. To improve robustness under the approximation errors and disturbances, the proposed scheme includes a dead-zone in adaptation laws which varies its size adaptively. The assumption of known bounds on the approximation errors and disturbances is not required since those are estimated using adaptation laws. The overall adaptive scheme is proven to guarantee global uniform ultimate boundedness in the Lyapunov sense.
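
The dead-zone mechanism itself is compact. A sketch of a fixed-size dead-zone update law (in the paper the dead-zone size varies adaptively; the gains and sizes here are illustrative):

```python
# A fixed-size dead-zone update law (gains illustrative). Updates are
# frozen while the error is small, so approximation errors and
# disturbances do not cause parameter drift.
def adapt(theta, error, regressor, gamma=0.5, deadzone=0.05):
    if abs(error) <= deadzone:
        return theta                 # inside the dead zone: no adaptation
    return theta + gamma * error * regressor

theta = 0.0
for err, reg in [(0.02, 1.0), (0.3, 1.0), (-0.2, 0.5)]:
    theta = adapt(theta, err, reg)
print(theta)   # ~0.1: the first, small error was ignored
```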

Book ChapterDOI
12 Mar 2001
TL;DR: The use of quasi-boolean multi-valued logics for reasoning about systems with uncertainty or inconsistency is proposed, and semantics is given to a multi-valued extension of CTL.
Abstract: Classical logic cannot be used to effectively reason about systems with uncertainty (lack of essential information) or inconsistency (contradictory information often occurring when information is gathered from multiple sources). In this paper we propose the use of quasi-boolean multi-valued logics for reasoning about such systems. We also give semantics to a multi-valued extension of CTL, describe an implementation of a symbolic multi-valued CTL model-checker called χchek, and analyze its correctness and running time.
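
The smallest useful quasi-boolean logic is Kleene's three-valued one, which already shows the algebraic structure such a model checker exploits: conjunction and disjunction are lattice meet and join, and negation reverses the order. A sketch:

```python
# Kleene's three-valued logic, the smallest quasi-boolean lattice:
# FALSE <= MAYBE <= TRUE, with "maybe" modelling missing information.
FALSE, MAYBE, TRUE = 0, 1, 2

def AND(a, b): return min(a, b)    # lattice meet
def OR(a, b):  return max(a, b)    # lattice join
def NOT(a):    return 2 - a        # order-reversing involution

print(AND(TRUE, MAYBE) == MAYBE)   # True
print(NOT(MAYBE) == MAYBE)         # True: negation fixes "maybe"
```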

Journal ArticleDOI
01 May 2001
TL;DR: In this paper, the authors present a method of formally specifying, refining and verifying concurrent systems which uses the object-oriented state-based specification language Object-Z together with the process algebra CSP.
Abstract: This paper presents a method of formally specifying, refining and verifying concurrent systems which uses the object-oriented state-based specification language Object-Z together with the process algebra CSP. Object-Z provides a convenient way of modelling complex data structures needed to define the component processes of such systems, and CSP enables the concise specification of process interactions. The basis of the integration is a semantics of Object-Z classes identical to that of CSP processes. This allows classes specified in Object-Z to be used directly within the CSP part of the specification. In addition to specification, we also discuss refinement and verification in this model. The common semantic basis enables a unified method of refinement to be used, based upon CSP refinement. To enable state-based techniques to be used for the Object-Z components of a specification we develop state-based refinement relations which are sound and complete with respect to CSP refinement. In addition, a verification method for static and dynamic properties is presented. The method allows us to verify properties of the CSP system specification in terms of its component Object-Z classes by using the laws of the CSP operators together with the logic for Object-Z.

Journal ArticleDOI
01 Jan 2001
TL;DR: A technique is introduced that uses compositionality and dependency analysis to significantly improve the efficiency of symbolic model checking of state/event models and makes possible automated verification of large industrial designs with the use of only modest resources.
Abstract: A state/event model is a concurrent version of Mealy machines used for describing embedded reactive systems. This paper introduces a technique that uses compositionality and dependency analysis to significantly improve the efficiency of symbolic model checking of state/event models. It makes possible automated verification of large industrial designs with the use of only modest resources (less than 5 minutes on a standard PC for a model with 1421 concurrent machines). The results of the paper are being implemented in the next version of the commercial tool visualSTATE™.
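
The dependency analysis can be pictured as a transitive closure over a machine-dependency graph: a property mentioning a few machines is checked in the sub-product of just the machines it depends on. A sketch with an invented dependency map:

```python
# A sketch of the dependency-closure idea: include only the machines a
# property transitively depends on, not the whole product.
def dependency_closure(targets, deps):
    closure, frontier = set(), set(targets)
    while frontier:
        m = frontier.pop()
        if m not in closure:
            closure.add(m)
            frontier |= deps.get(m, set())
    return closure

deps = {"A": {"B"}, "B": {"C"}, "C": set(), "D": {"A"}}
print(dependency_closure({"A"}, deps))   # {'A', 'B', 'C'}; D never needed
```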

Book ChapterDOI
12 Mar 2001
TL;DR: A useful paradigm of system development is that of stepwise refinement, but many security properties proposed in the literature are not preserved under refinement (refinement paradox).
Abstract: A useful paradigm of system development is that of stepwise refinement. In contrast to other system properties, many security properties proposed in the literature are not preserved under refinement (refinement paradox).

Book ChapterDOI
12 Mar 2001
TL;DR: It is shown how to (and how not to) perform LTL model checking of CSP processes using refinement checking in general and the FDR tool in particular, and how to handle (potentially) deadlocking systems.
Abstract: We study the possibility of doing LTL model checking on CSP specifications in the context of refinement. We present evidence that the refinement-based approach to verification does not seem to be very well suited for verifying certain temporal properties. To remedy this problem, we show how to (and how not to) perform LTL model checking of CSP processes using refinement checking in general and the FDR tool in particular. We show how one can handle (potentially) deadlocking systems, discuss the validity of our approach for infinite state systems, and shed light on the relationship between "classical" model checking and refinement checking.

ReportDOI
01 Aug 2001
TL;DR: A way of defining the subtype relation that ensures that subtype objects preserve behavioral properties of their supertypes is presented, and the ramifications of the approach to subtyping on the design of type families are discussed.
Abstract: We present a way of defining the subtype relation that ensures that subtype objects preserve behavioral properties of their supertypes. The subtype relation is based on the specifications of the sub- and supertypes. Our approach handles mutable types and allows subtypes to have more methods than their supertypes. Dealing with mutable types and subtypes that extend their supertypes has surprising consequences on how to specify and reason about objects. In our approach, we discard the standard data type induction rule, we prohibit the use of an analogous history rule, and we make up for both losses by adding explicit predicates, invariants and constraints, to our type specifications. We also discuss the ramifications of our approach to subtyping on the design of type families.
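
To picture the kind of subtyping at issue: a subtype may add methods as long as they respect the supertype's invariants and history constraints. An illustrative sketch, not the paper's formal specifications:

```python
# A sketch of behavioural subtyping with an extra method. Bag's
# contract only promises that `take` removes *some* element, and its
# history constraint is that the bound never changes; Stack's extra
# method respects both, so Stack can stand in wherever Bag is expected.
class Bag:
    def __init__(self, bound):
        self._items, self._bound = [], bound   # constraint: bound is fixed
    def put(self, x):
        assert len(self._items) < self._bound, "bag is full"
        self._items.append(x)
    def take(self):
        return self._items.pop()               # some element; order unspecified

class Stack(Bag):
    def swap_top(self, x):                     # extra method, allowed: it
        top = self.take()                      # preserves Bag's invariant
        self.put(x)                            # and its history constraint
        return top
```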

Journal ArticleDOI
01 Sep 2001
TL;DR: It is shown that industrially sized applications can be modeled and verified with a verification tool to be offered as a commercial product by I-Logix, Inc.
Abstract: With the trend to partially move safety-related features from trackside equipment into on-board control software, new challenges arise in supporting such designs by formal verification capabilities, essentially entailing the need for a model-based design process. This paper reports on the usage of the STATEMATE Verification Environment to model and verify a radio-based signaling system, a trial case study offered by the German train system company DB. It shows that industrially sized applications can be modeled and verified with a verification tool to be offered as a commercial product by I-Logix, Inc.

Book ChapterDOI
12 Mar 2001
TL;DR: A discrete semantics for closed timed automata is developed to get a finite state space required by the BDD-based representation and the equivalence to the continuous semantics regarding the set of reachable locations is proved.
Abstract: To develop efficient algorithms for the reachability analysis of timed automata, a promising approach is to use binary decision diagrams (BDDs) as data structure for the representation of the explored state space. The size of a BDD is very sensitive to the ordering of the variables. We use the communication structure to deduce an estimation for the BDD size. In our experiments, this guides the choice of good variable orderings, which leads to an efficient reachability analysis. We develop a discrete semantics for closed timed automata to get a finite state space required by the BDD-based representation and we prove the equivalence to the continuous semantics regarding the set of reachable locations. An upper bound for the size of the BDD representing the transition relation and an estimation for the set of reachable configurations based on the communication structure is given. We implemented these concepts in the verification tool Rabbit [BR00]. Different case studies justify our conjecture: Polynomial reachability analysis seems to be possible for some classes of real-time models, which have a good-natured communication structure.

Book ChapterDOI
Simon Jones
12 Mar 2001
TL;DR: This talk introduces a combinator library that allows financial contracts to be described precisely, and a compositional denotational semantics that says what such contracts are worth.
Abstract: Financial and insurance contracts--options, derivatives, futures, and so on--do not sound like promising territory for functional programming and formal semantics. To our delight, however, we have discovered that insights from programming languages bear directly on the complex subject of describing and valuing a large class of contracts. In my talk I will introduce a combinator library that allows us to describe such contracts precisely, and a compositional denotational semantics that says what such contracts are worth. In fact, a wide range of programming-language tools and concepts--denotational semantics, equational reasoning, operational semantics, optimisation by transformation, and so on--turn out to be useful in this new setting. Sleep easy, though; you do not need any prior knowledge of financial engineering to understand this talk!
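
To give the flavour of the combinator approach (the talk's library targets a functional language), here is a sketch in Python with a deliberately naive valuation that ignores time, discounting, and stochastic models:

```python
# A sketch of contract combinators: contracts are built from a few
# constructors, and valuation is compositional over that structure.
# The valuation is a naive stand-in, not the talk's semantics.
from dataclasses import dataclass

@dataclass
class One:                       # receive one unit of a currency
    currency: str

@dataclass
class Scale:                     # multiply a contract's flows by k
    k: float
    contract: object

@dataclass
class Both:                      # hold two contracts at once ("and")
    left: object
    right: object

def value(c, spot):
    # Worth of contract `c`, given spot rates into our home currency.
    if isinstance(c, One):
        return spot[c.currency]
    if isinstance(c, Scale):
        return c.k * value(c.contract, spot)
    return value(c.left, spot) + value(c.right, spot)

zcb = Scale(100, One("GBP"))     # toy contract paying 100 GBP now
print(value(Both(zcb, One("USD")), {"GBP": 1.27, "USD": 1.0}))  # 128.0
```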

Journal ArticleDOI
01 Mar 2001
TL;DR: It is concluded that this heuristic can be used to advantage on “mature” designs for which the anticipated result of the verification is pass, and can result in a significant speed-up for verification runs that pass.
Abstract: We describe a new heuristic for detecting bad cycles (reachable cycles that are not confined within one or another of the designated sets of model states), a fundamental operation for model-checking algorithms. It is a variation on a standard implementation of the Emerson-Lei algorithm, which our experimental data suggests can result in a significant speed-up for verification runs that pass. We conclude that this heuristic can be used to advantage on “mature” designs for which the anticipated result of the verification is pass.
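
Bad-cycle detection rests on a greatest-fixpoint computation; over explicit sets instead of the BDDs used in practice, it looks as follows (toy graph invented for illustration):

```python
# A sketch of the fixpoint behind cycle detection: the largest subset
# of `allowed` in which every state keeps a successor inside the
# subset contains exactly the states that can stay in `allowed`
# forever, i.e. states on or leading into confined cycles.
def can_stay_forever(allowed, succ):
    current = set(allowed)
    while True:
        trimmed = {s for s in current
                   if any(t in current for t in succ(s))}
        if trimmed == current:
            return current               # greatest fixpoint reached
        current = trimmed

succ = {1: [2], 2: [1], 3: [4], 4: []}
print(can_stay_forever({1, 2, 3, 4}, lambda s: succ[s]))   # {1, 2}
```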

Journal ArticleDOI
01 Sep 2001
TL;DR: In this paper, the authors discuss the use of formal methods in the development of the control system for the Maeslant Kering, a movable dam which has to protect Rotterdam from flooding while, at (almost) the same time, not restricting ship traffic to the port.
Abstract: This paper discusses the use of formal methods in the development of the control system for the Maeslant Kering. The Maeslant Kering is the movable dam which has to protect Rotterdam from flooding while, at (almost) the same time, not restricting ship traffic to the port of Rotterdam. The control system, called BOS, completely autonomously decides about closing and opening of the barrier and, when necessary, also performs these tasks without human intervention. BOS is a safety-critical software system of the highest Safety Integrity Level according to IEC 61508. One of the reliability-increasing techniques used during its development is formal methods. This paper reports experiences obtained from using formal methods in the development of BOS. These experiences are presented in the context of Hall's famous “Seven Myths of Formal Methods”.

Proceedings ArticleDOI
Steve Dunne
16 Jul 2001
TL;DR: The single-predicate model of sequential programs is reviewed, and it is shown how it can be recast to overcome its inability always to provide an adequate description of the required behaviour of a sequential program which implements a partial decision procedure.
Abstract: We review Hoare and He's single-predicate model of sequential programs, and uncover its inability always to provide an adequate description of the required behaviour of a sequential program which implements a partial decision procedure: a significant shortcoming in an interactive era where such programs are often found as components of reactive systems. We show how the single-predicate model can be recast to overcome this limitation, using a variation on the predicate syntactic form employed by Hoare and He for their designs, which we call a prescription. We show that our prescriptions have a remarkably simple semantic characterisation by means of a single intuitively compelling healthiness condition. Our model also admits a pleasingly simple algebraic formulation which reveals it as a natural completion of Hoare and He's model; indeed, we can show the latter to be congruent to a restricted sub-model of the former.

Book ChapterDOI
12 Mar 2001
TL;DR: This paper proposes a methodology for improving the throughput of software verification by performing some consistency checks between the original code and the model, specifically, by applying software testing and introduces the notion of a neighborhood of an error trace, consisting of a tree of execution paths, where the original error trace is one of them.
Abstract: Automatic and manual software verification is based on applying mathematical methods to a model of the software. Modeling is usually done manually, thus it is prone to modeling errors. This means that errors found in the model may not correspond to real errors in the code, and that if the model is found to satisfy the checked properties, the actual code may still have some errors. For this reason, it is desirable to be able to perform some consistency checks between the actual code and the model. Exhaustive consistency checks are usually not possible, for the same reason that modeling is necessary. We propose a methodology for improving the throughput of software verification by performing some consistency checks between the original code and the model, specifically, by applying software testing. In this paper we present such a combined testing and verification methodology and demonstrate how it is applied using a set of software reliability tools. We introduce the notion of a neighborhood of an error trace, consisting of a tree of execution paths, where the original error trace is one of them. Our experience with the methodology shows that traversing the neighborhood of an error is extremely useful in locating its cause. This is crucial not only in understanding where the error stems from, but in getting an initial idea of how to redesign the code. We use as a case study a robot control system, and report on several design and modeling errors found during the verification and testing process.