
Showing papers in "International Journal on Software Tools for Technology Transfer in 2004"


Journal ArticleDOI
TL;DR: The class of A* directed search algorithms is presented, together with proposed heuristics and bitstate compression techniques for the search of safety property violations, achieving great reductions in the length of the error trails.
Abstract: The success of model checking is largely based on its ability to efficiently locate errors in software designs. If an error is found, a model checker produces a trail that shows how the error state can be reached, which greatly facilitates debugging. However, while current model checkers find error states efficiently, the counterexamples are often unnecessarily lengthy, which hampers error explanation. This is due to the use of "naive" search algorithms in the state space exploration. In this paper we present approaches to the use of heuristic search algorithms in explicit-state model checking. We present the class of A* directed search algorithms and propose heuristics together with bitstate compression techniques for the search of safety property violations. We achieve great reductions in the length of the error trails, and in some instances render problems analyzable by exploring a much smaller number of states than standard depth-first search. We then suggest an improvement of the nested depth-first search algorithm and show how it can be used together with A* to improve the search for liveness property violations. Our approach to directed explicit-state model checking has been implemented in a tool set called HSF-SPIN. We provide experimental results from the protocol validation domain using HSF-SPIN.

182 citations
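
The directed search at the core of HSF-SPIN can be conveyed with a minimal A* sketch over an explicit state graph. The `successors`, `is_error`, and `h` callbacks below are hypothetical stand-ins for the model's transition relation, the safety-violation test, and the paper's property-specific heuristics; with an admissible `h`, the first error state expanded yields a shortest error trail.

```python
import heapq
from itertools import count

def astar_error_search(init, successors, is_error, h):
    """A* search for a safety-property violation; returns an error trail or None."""
    tie = count()                          # tie-breaker so states are never compared
    open_heap = [(h(init), next(tie), 0, init, [init])]
    best_g = {init: 0}                     # cheapest known distance from the initial state
    while open_heap:
        _, _, g, state, trail = heapq.heappop(open_heap)
        if is_error(state):
            return trail                   # with admissible h, this is a shortest trail
        for succ in successors(state):
            g2 = g + 1
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(open_heap,
                               (g2 + h(succ), next(tie), g2, succ, trail + [succ]))
    return None                            # no error state is reachable
```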


Journal ArticleDOI
TL;DR: In this paper, the authors investigate the effect on efficiency of various design issues for BDD-like data structures for timed automaton (TA) state space representation and manipulation, and find that efficiency is highly sensitive to decision atom design and canonical form definition.
Abstract: We investigate the effect on efficiency of various design issues for BDD-like data structures of TA state space representation and manipulation. We find that the efficiency is highly sensitive to decision atom design and canonical form definition. We explore the two issues in detail and propose to use CRD (Clock-Restriction Diagram) for TA state space representation and present algorithms for manipulating CRD in the verification of TAs. We compare three canonical forms for zones, develop a procedure for quick zone-containment detection, and present algorithms for verification with backward reachability analysis. Three possible evaluation orderings are also considered and discussed. We implement our idea in our tool Red 4.2 and carry out experiments to compare with other tools and various strategies of Red in both forward and backward analysis. Finally, we discuss the possibility of future improvement.

107 citations
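
For background, the zone-containment test that the paper accelerates can be stated over plain difference-bound matrices (DBMs). This sketch deliberately uses DBMs rather than the paper's CRD structure; the conventions (clock index 0 for the constant zero, canonical form via all-pairs shortest paths) are the standard ones, not Red's.

```python
def canonical(D):
    """Tighten a DBM with Floyd-Warshall; D[i][j] bounds x_i - x_j,
    with clock index 0 standing for the constant 0."""
    n = len(D)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

def zone_contained(D1, D2):
    """Z1 is contained in Z2 iff, in canonical form, every bound of D1
    is at least as tight as the corresponding bound of D2."""
    n = len(D1)
    return all(D1[i][j] <= D2[i][j] for i in range(n) for j in range(n))
```

Quick containment detection matters because backward reachability repeatedly asks whether a newly computed zone is already covered by the explored set.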


Journal ArticleDOI
TL;DR: An array of heuristic model checking techniques is presented for combating the state space explosion when searching for errors, including structural heuristics that attempt to explore the structure of a program in a manner intended to expose errors efficiently.
Abstract: Model checking of software programs has two goals --- the verification of correct software and the discovery of errors in faulty software. Some techniques for dealing with the most crucial problem in model checking, the state space explosion problem, concentrate on the first of these goals. In this paper we present an array of heuristic model checking techniques for combating the state space explosion when searching for errors. Previous work on this topic has mostly focused on property-specific heuristics closely related to particular kinds of errors. We present structural heuristics that attempt to explore the structure (branching structure, thread interdependency structure, abstraction structure) of a program in a manner intended to expose errors efficiently. Experimental results show the utility of this class of heuristics. In contrast to these very general heuristics, we also present very lightweight techniques for introducing program-specific heuristic guidance.

102 citations


Journal ArticleDOI
TL;DR: The solution to the problem of modelling functionally complex communication systems at the application level, based on lightweight coordination, is extended to seamlessly capture system-level testing as well, inducing an understandable modelling paradigm of system-wide test cases that is adequate for the needs and requirements of industrial test engineers.
Abstract: In this paper, our solution to the problem of modelling functionally complex communication systems at the application level, based on lightweight coordination, is extended to seamlessly capture system-level testing as well. This extension could be realized simply by self-application: the bulk of the work for integrating system-level testing into our development environment, the ABC, concerned domain modelling, which can be done using the ABC. Therefore, the extension of the ABC to cover system-level testing was merely an application development on the basis of the ABC, illustrated here in the domain of Computer Telephony Integration. Here the adoption of a coarse-grained approach to test design, which is central to the scalability of the overall testing environment, is the enabling aspect for system-level test automation. Together with our lightweight coordination approach this induces an understandable modelling paradigm of system-wide test cases that is adequate for the needs and requirements of industrial test engineers. In particular, it enables test engineers to graphically design complex test cases that, in addition, can even be automatically checked for their intended purposes via model checking.

89 citations


Journal ArticleDOI
TL;DR: This work proposes two design patterns that provide a flexible basis for the integration of different tool data at the meta-model level and describes rule-based mechanisms providing generic solutions for managing overlapping and redundant data.
Abstract: Today’s development processes employ a variety of notations and tools, e.g., the Unified Modeling Language (UML), the Specification and Description Language (SDL), requirements databases, design tools, code generators, model checkers, etc. For better process support, the employed tools may be organized within a tool suite or integration platform, e.g., Rational Rose or Eclipse. While these tool-integration platforms usually provide GUI adaption mechanisms and functional adaption via application programming interfaces, they frequently do not provide appropriate means for data integration at the meta-model level. Thus, overlapping and redundant data from different “integrated” tools may easily become inconsistent and unusable. We propose two design patterns that provide a flexible basis for the integration of different tool data at the meta-model level. To achieve consistency between meta-models, we describe rule-based mechanisms providing generic solutions for managing overlapping and redundant data. The proposed mechanisms are widely used within the Fujaba Tool Suite. We report on our implementation and application experiences.

86 citations


Journal ArticleDOI
TL;DR: This article extends the translation scheme to typical combinations of temporal operators and uses the notions of predicated diameter and radius to obtain revised bounds for its translation scheme, giving a tight bound on the minimal completeness bound for simple liveness properties.
Abstract: Two types of temporal properties are usually distinguished: safety and liveness. Recently we have shown how to verify liveness properties of finite state systems using safety checking. In this article we extend the translation scheme to typical combinations of temporal operators. We discuss optimizations that limit the overhead of our translation. Using the notions of predicated diameter and radius we obtain revised bounds for our translation scheme. These notions also give a tight bound on the minimal completeness bound for simple liveness properties. Experimental results show the feasibility of the approach for complex examples. For one example, even an exponential speedup can be observed.

70 citations
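
The flavor of the liveness-to-safety translation can be shown for the simplest case, the property "eventually p": a counterexample is a lasso of states on which p never holds, and nondeterministically recording a guessed loop-start state turns lasso detection into ordinary reachability. This is a toy explicit-state rendering with assumed `successors` and `p` callbacks, not the paper's symbolic scheme.

```python
from collections import deque

def violates_eventually_p(init, successors, p):
    """Search for a lasso of not-p states, i.e. a counterexample to the
    liveness property 'eventually p', as plain reachability in an
    augmented state space (state, recorded loop-start or None)."""
    if p(init):
        return False                       # every run already satisfies 'eventually p'
    start = (init, None)
    seen, frontier = {start}, deque([start])
    while frontier:
        state, saved = frontier.popleft()
        nexts = [(state, state)] if saved is None else []  # guess: loop starts here
        for succ in successors(state):
            if p(succ):
                continue                   # p-states cannot lie on a bad lasso
            if succ == saved:
                return True                # loop closed: the property is violated
            nexts.append((succ, saved))
        for aug in nexts:
            if aug not in seen:
                seen.add(aug)
                frontier.append(aug)
    return False
```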


Journal ArticleDOI
TL;DR: This work describes the modeling concepts of the CASE tool AutoFocus and an approach to model-based test case generation that is based on symbolic execution with Constraint Logic Programming.
Abstract: Model-based testing relies on abstract behavior models for test case generation. These models are abstractions, i.e., simplifications. For deterministic reactive systems, test cases are sequences of input and expected output. To bridge the different levels of abstraction, input must be concretized before being applied to the system under test. The system’s output must then be abstracted before being compared to the output of the model. The concepts are discussed along the lines of a feasibility study, an in-house smart card case study. We describe the modeling concepts of the CASE tool AutoFocus and an approach to model-based test case generation that is based on symbolic execution with Constraint Logic Programming. Different search strategies and algorithms for test case generation are discussed. Besides validating the model itself, generated test cases were used to verify the actual hardware with respect to these traces.

70 citations
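
As a brute-force stand-in for the generation step: where the paper solves path constraints with Constraint Logic Programming, the sketch below simply enumerates input sequences over a small domain against a hypothetical deterministic model exposing an `initial` state and a `step(state, input) -> (state, output)` function, recording the expected outputs of each test. Input concretization and output abstraction would wrap around this when the tests are run against the real system.

```python
from itertools import product

def generate_tests(model, input_domain, depth):
    """Enumerate all input sequences up to `depth` and pair each with the
    model's expected outputs, yielding (inputs, expected_outputs) tests."""
    tests = []
    for inputs in product(input_domain, repeat=depth):
        state, outputs = model.initial, []
        for i in inputs:
            state, out = model.step(state, i)   # deterministic reactive model
            outputs.append(out)
        tests.append((inputs, outputs))
    return tests
```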


Journal ArticleDOI
TL;DR: The research goal is to develop a framework and methodology for the integrated use of formal methods in the development of embedded medical systems that require high assurance and confidence.
Abstract: Reliability of medical devices such as the CARA Infusion Pump Control System is of extreme importance given that these devices are being used on patients in critical condition. The Infusion Pump Control System includes embedded processors and accompanying embedded software for monitoring as well as controlling sensors and actuators that allow the embedded systems to interact with their environments. This nature of the Infusion Pump Control System adds to the complexity of assuring the reliability of the total system. The traditional methods of developing embedded systems are inadequate for such safety-critical devices. In this paper, we study the application of formal methods to the requirements capture and analysis of the Infusion Pump Control System. Our approach consists of two phases. The first phase is to convert the informal design requirements into a set of reference specifications using a formal system, in this case EFSMs (Extended Finite State Machines). The second phase is to translate the reference specifications to the tools supporting formal analysis, such as SCR and Hermes. This allows us to conclude properties of the reference specifications. Our research goal is to develop a framework and methodology for the integrated use of formal methods in the development of embedded medical systems that require high assurance and confidence.

65 citations


Journal ArticleDOI
TL;DR: This paper presents a concept for an integration platform that allows for the integration of modelling tools, combining their models to build up a process model and performing computer-aided studies based on this integrated process model.
Abstract: A large number of modelling tools exist for the construction and solution of mathematical models of chemical processes. Each (chemical) process modelling tool provides its own model representation and model definition functions as well as its own solution algorithms, which are used for performing computer-aided studies for the process under consideration. However, in order to support reusability of existing models and to allow for the combined use of different modelling tools for the study of complex processes, model integration is needed. This paper presents a concept for an integration platform that allows for the integration of modelling tools, combining their models to build up a process model and performing computer-aided studies based on this integrated process model. In order to illustrate the concept without getting into complicated algorithmic issues, we focus on steady-state simulation using models comprising only algebraic equations. The concept is realized in the component-based integration platform CHEOPS, which focuses on integrating and solving existing models rather than providing its own modelling capabilities.

54 citations
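
The integration concept can be sketched as successive substitution over a flowsheet whose unit models are owned by different tools and exposed only through an evaluation call. The `unit.inputs` / `unit.evaluate` interface below is hypothetical, not CHEOPS's actual API; it merely illustrates solving coupled algebraic models without access to their internals.

```python
def solve_flowsheet(units, streams, tol=1e-9, max_iter=1000):
    """Fixed-point iteration over tool-owned unit models: each evaluate()
    maps the unit's input stream values to its output stream values."""
    for _ in range(max_iter):
        worst = 0.0
        for unit in units:
            outs = unit.evaluate({name: streams[name] for name in unit.inputs})
            for name, value in outs.items():
                worst = max(worst, abs(value - streams.get(name, 0.0)))
                streams[name] = value      # propagate to downstream units
        if worst < tol:
            return streams                 # steady state reached
    raise RuntimeError("flowsheet did not converge")
```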


Journal ArticleDOI
TL;DR: This work reports on the automatic verification of timed probabilistic properties of the IEEE 1394 root contention protocol, modelled as a probabilistic timed automaton, combining two existing tools: the real-time model checker Kronos and the probabilistic model checker PRISM.
Abstract: We report on the automatic verification of timed probabilistic properties of the IEEE 1394 root contention protocol combining two existing tools: the real-time model checker Kronos and the probabilistic model checker PRISM. The system is modelled as a probabilistic timed automaton. We first use Kronos to perform a symbolic forwards reachability analysis to generate the set of states that are reachable with non-zero probability from the initial state and before the deadline expires. We then encode this information as a Markov decision process to be analyzed with PRISM. We apply this technique to compute the minimal probability of a leader being elected before a deadline, for different deadlines, and study how this minimal probability is influenced by using a biased coin and considering different wire lengths.

54 citations
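
The quantitative step can be illustrated by value iteration for minimal reachability probabilities on the Markov decision process induced by the Kronos-generated state set. The `actions` map below (state to a list of distributions, each a list of (probability, successor) pairs) is a hypothetical encoding, not PRISM's input format.

```python
def min_reach_probability(states, actions, goal, tol=1e-10):
    """Value iteration for the minimal probability of reaching `goal`
    under any scheduler of the MDP."""
    p = {s: (1.0 if s in goal else 0.0) for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s in goal or not actions[s]:
                continue                   # goal and deadlock states keep their value
            new = min(sum(pr * p[t] for pr, t in dist) for dist in actions[s])
            delta = max(delta, abs(new - p[s]))
            p[s] = new
        if delta < tol:
            return p
```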


Journal ArticleDOI
TL;DR: It is shown that statistical properties of the transition graph of a system to be verified can be exploited to improve memory or time performances of verification algorithms.
Abstract: In this paper we show that statistical properties of the transition graph of a system to be verified can be exploited to improve memory or time performances of verification algorithms. We show experimentally that protocols exhibit transition locality. That is, with respect to levels of a breadth-first state space exploration, state transitions tend to be between states belonging to close levels of the transition graph. We support our claim by measuring transition locality for the set of protocols included in the Murφ verifier distribution. We present a cache-based verification algorithm that exploits transition locality to decrease memory usage and a disk-based verification algorithm that exploits transition locality to decrease disk read accesses, thus reducing the time overhead due to disk usage. Both algorithms have been implemented within the Murφ verifier. Our experimental results show that our cache-based algorithm can typically save more than 40% of memory with an average time penalty of about 50% when using (Murφ) bit compression and 100% when using bit compression and hash compaction, whereas our disk-based verification algorithm is typically more than ten times faster than a previously proposed disk-based verification algorithm and, even when using 10% of the memory needed to complete verification, it is only between 40% and 530% (300% on average) slower than (RAM) Murφ with enough memory to complete the verification task at hand. Using just 300 MB of memory our disk-based Murφ was able to complete verification of a protocol with about 10⁹ reachable states. This would require more than 5 GB of memory using standard Murφ.
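
The cache-based idea reduces to a breadth-first search whose exact visited set is replaced by a fixed-size FIFO cache: by transition locality, most successors hit states cached in nearby levels, so evictions rarely cause re-expansion. The `successors` and `check` callbacks are hypothetical stand-ins for the Murφ transition function and invariant check, and unlike the real algorithm this toy version does not bound re-expansion, so termination rests on locality.

```python
from collections import deque, OrderedDict

def cache_based_bfs(init, successors, check, cache_size=1 << 20):
    """BFS with a bounded state cache instead of an exact visited set."""
    cache = OrderedDict({init: True})
    frontier = deque([init])
    while frontier:
        state = frontier.popleft()
        check(state)                        # e.g., assert an invariant of the protocol
        for succ in successors(state):
            if succ in cache:
                continue                    # usually hits: transitions stay in close levels
            if len(cache) >= cache_size:
                cache.popitem(last=False)   # evict the oldest (distant-level) state
            cache[succ] = True
            frontier.append(succ)
```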

Journal ArticleDOI
TL;DR: It is shown that logic programming provides an efficient implementation platform for model checking π-calculus specifications and can be used to obtain an exact encoding of the π-calculus’s transitional semantics.
Abstract: We present MMC, a model checker for mobile systems specified in the style of the π-calculus. MMC's development builds on that of XMC, a model checker for an expressive extension of Milner's value-passing calculus implemented using the XSB tabled logic-programming engine. MMC addresses the salient issues that arise in the π-calculus, including scope extrusion and intrusion and dynamic generation of new names to avoid name capture. We show that logic programming provides an efficient implementation platform for model checking π-calculus specifications and can be used to obtain an exact encoding of the π-calculus's transitional semantics. Moreover, MMC is easily extended to handle process expressions in the spi-calculus of Abadi and Gordon. Our experimental data show that MMC outperforms other known tools for model checking the π-calculus.

Journal ArticleDOI
TL;DR: This paper describes how to bridge the gap between semiformal UML models and a formal technology ensuring test case generation, and presents in minute detail the formal tool used to automatically generate test sequences, named AGATHA.
Abstract: UML-based methodologies take more and more space in the software development domain. In addition, the need to validate applications as early as possible in the development cycle is now mandatory to satisfy cost and time-to-market constraints. In this context, this paper describes, first, how to bridge the gap between semiformal UML models and a formal technology ensuring test case generation. Second, the formal tool used to automatically generate test sequences, named AGATHA, is described in minute detail. Finally, this approach is illustrated throughout by a toy example of an elevator system.

Journal ArticleDOI
TL;DR: The theoretical complexity of the operations over covering sharing trees needed in symbolic model checking is studied, along with a new heuristic rule based on structural properties of Petri Nets that can be used to efficiently prune the search during symbolic backward exploration.
Abstract: The control state reachability problem is decidable for well-structured infinite-state systems like (Lossy) Petri Nets, Vector Addition Systems, and broadcast protocols. An abstract algorithm that solves the problem is the backward reachability algorithm of [1, 21]. The algorithm computes the closure of the predecessor operator with respect to a given upward-closed set of target states. When applied to this class of verification problems, symbolic model checkers based on constraints like [7, 26] suffer from the state explosion problem. In order to tackle this problem, in [13] we introduced a new data structure, called covering sharing trees, to represent in a compact way collections of infinite sets of system configurations. In this paper, we will study the theoretical complexity of the operations over covering sharing trees needed in symbolic model checking. We will also discuss several optimizations that can be used when dealing with Petri Nets. Among them, in [14] we introduced a new heuristic rule based on structural properties of Petri Nets that can be used to efficiently prune the search during symbolic backward exploration. The combination of these techniques allowed us to turn the abstract algorithm of [1, 21] into a practical method. We have evaluated the method on several finite-state and infinite-state examples taken from the literature [2, 18, 20, 30]. In this paper, we will compare the results we obtained in our experiments with those obtained using other finite and infinite-state verification tools.
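
The abstract backward algorithm manipulates upward-closed sets of markings through their minimal elements; covering sharing trees are the paper's compact representation of exactly these sets. A plain set-of-tuples rendering of the fixpoint, with Petri net transitions given as (pre, post) vectors, might look as follows; termination follows from the well-quasi-ordering of markings.

```python
def coverable(transitions, targets, initial):
    """Backward reachability: is some marking >= a target reachable from
    `initial`? Upward-closed sets are kept via (a superset of) their
    minimal elements; a real tool would also minimize `basis`."""
    def pre_basis(m, pre, post):
        # least marking that can fire (pre, post) and land componentwise >= m
        return tuple(max(p, mi + p - q) for mi, p, q in zip(m, pre, post))

    def covered(m, basis):
        return any(all(b <= x for b, x in zip(base, m)) for base in basis)

    basis = {tuple(t) for t in targets}
    frontier = set(basis)
    while frontier:
        new = set()
        for m in frontier:
            for pre, post in transitions:
                cand = pre_basis(m, pre, post)
                if not covered(cand, basis):
                    new.add(cand)
        basis |= new
        frontier = new
    return covered(tuple(initial), basis)
```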

Journal ArticleDOI
TL;DR: This paper analyzes the tool-integration problem at different abstraction levels and discusses different views on a layered software architecture that is designed specifically for a middleware that supports the execution of distributed applications for the orchestration of human/system activities.
Abstract: Tool integration is a very difficult challenge. Problems may arise at different abstraction levels and from several sources such as heterogeneity of manipulated data, incompatible interfaces, or uncoordinated services, to name just a few examples. On the other hand, applications based on the coherent composition of activities, components, services, and data from heterogeneous sources are increasingly present in our everyday lives. Consequently, tool integration takes on increasing significance. In this paper we analyze the tool-integration problem at different abstraction levels and discuss different views on a layered software architecture that we have designed specifically for a middleware that supports the execution of distributed applications for the orchestration of human/system activities. We noticed that the agent paradigm provided a suitable technology for abstraction in tool integration. Throughout the paper, the discussion refers to a case study in the bioinformatics domain.

Journal ArticleDOI
TL;DR: αSPIN, presented in this article, is a tool for the integration of abstraction (for models and formulas) into the well-known model checker SPIN, which can reduce the state space and allow model checking of more complex systems.
Abstract: Abstraction methods have become one of the most interesting topics in the automatic verification of software systems because they can reduce the state space to be explored and allow model checking of more complex systems. Nevertheless, there is a lack of tools actually supporting this technique. One direction for abstracting a system is to transform its formal description (its model) into a simpler version specified in the same language, thus skipping the construction of a specific (model checking) tool for the abstract model. The abstraction of the model should be followed by the abstraction of the temporal formulas to be checked. This paper presents αSPIN, a tool for the integration of abstraction (for models and formulas) into the well-known model checker SPIN. We present the theoretical results supporting the implementation together with a case study.

Journal ArticleDOI
TL;DR: In this article, the authors describe a tool to verify Erlang programs, built on the μCRL and Caesar/Aldebaran toolsets, and show, by means of an industrial case study, how this tool is used.
Abstract: In this paper, we describe a tool to verify Erlang programs and show, by means of an industrial case study, how this tool is used. The tool includes a number of components, including a translation component, a state space generation component and a model checking component. To verify properties of the code, the tool first translates the Erlang code into a process algebraic specification. The outcome of the translation is made more efficient by taking advantage of the fact that software written in Erlang builds upon software design patterns such as client–server behaviours. A labelled transition system is constructed from the specification by use of the μCRL toolset. The resulting labelled transition system is model checked against a set of properties formulated in the μ-calculus using the Caesar/Aldebaran toolset. As a case study we focus on a simplified resource manager modelled on a real implementation in the control software of the AXD 301 ATM switch. Some of the key properties we verified for the program are mutual exclusion and non-starvation. Since the toolset supports only the regular alternation-free μ-calculus, some ingenuity is needed for checking the liveness property “non-starvation”. The case study has been refined step by step to provide more functionality, with each step motivated by a corresponding formal verification using model checking.

Journal ArticleDOI
TL;DR: A framework for concisely defining and evaluating the symmetry reductions currently used in software model checking, involving heap objects and processes, is presented, together with an on-the-fly state space exploration algorithm combining both techniques.
Abstract: Symmetry reduction techniques exploit symmetries that occur during the execution of a system in order to minimize its state space for efficient verification of temporal logic properties. This paper presents a framework for concisely defining and evaluating symmetry reductions currently used in software model checking, involving heap objects and processes, together with an on-the-fly state space exploration algorithm combining both techniques. Furthermore, the relation between symmetry and partial-order reductions is investigated, showing how one's strengths can be used to compensate for the other's weaknesses. The symmetry reductions presented here were implemented in the dSPIN model-checking tool. We also performed a number of experiments that show significant progress in reducing the cost of finite-state software verification.
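
A common core of heap-symmetry reduction is state canonicalization: heap objects are renumbered in the deterministic order in which they are first reached from the thread-local roots, so states differing only in allocation order collapse to one representative. The state layout below (thread locals as tuples of values, `heap` mapping object ids to tuples of (field, value) pairs) is hypothetical, and dSPIN's actual machinery is considerably richer.

```python
def canonical_state(threads, heap):
    """Return a hashable canonical form of a state, renaming heap objects
    by first-visit order; unreachable objects are dropped on the way."""
    order, worklist = {}, [v for t in threads for v in t if v in heap]
    while worklist:
        obj = worklist.pop(0)
        if obj in order:
            continue
        order[obj] = len(order)                      # canonical object number
        worklist.extend(v for _, v in heap[obj] if v in heap)

    def rename(v):
        return ("obj", order[v]) if v in order else v

    canon_threads = tuple(tuple(rename(v) for v in t) for t in threads)
    canon_heap = tuple(tuple((f, rename(v)) for f, v in heap[obj])
                       for obj in sorted(order, key=order.get))
    return canon_threads, canon_heap
```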

Journal ArticleDOI
TL;DR: This paper presents a methodology for using simulated execution to assist a theorem prover in verifying safety properties of distributed systems and describes the use in a machine-checked proof of correctness of the Paxos algorithm for distributed consensus.
Abstract: This paper presents a methodology for using simulated execution to assist a theorem prover in verifying safety properties of distributed systems. Execution-based techniques such as testing can increase confidence in an implementation, provide intuition about behavior, and detect simple errors quickly. They cannot by themselves demonstrate correctness. However, they can aid theorem provers by suggesting necessary lemmas and providing tactics to structure proofs. This paper describes the use of these techniques in a machine-checked proof of correctness of the Paxos algorithm for distributed consensus.

Journal ArticleDOI
TL;DR: This paper presents the use of directed explicit-state model checking to improve the length of already established error trails, and shows that partial-order reduction, which aims at reducing the size of the state space by exploiting the commutativity of concurrent transitions in asynchronous systems, can coexist well withdirected explicit- state model checking.
Abstract: In this paper we present work on trail improvement and partial-order reduction in the context of directed explicit-state model checking. Directed explicit-state model checking employs directed heuristic search algorithms such as A* or best-first search to improve the error-detection capabilities of explicit-state model checking. We first present the use of directed explicit-state model checking to improve the length of already established error trails. Second, we show that partial-order reduction, which aims at reducing the size of the state space by exploiting the commutativity of concurrent transitions in asynchronous systems, can coexist well with directed explicit-state model checking. Finally, we illustrate how to mitigate the excessive length of error trails produced by partial-order reduction in explicit-state model checking. In this context we also propose a combination of heuristic search and partial-order reduction to improve the length of already provided counterexamples.

Journal ArticleDOI
TL;DR: This work proposes a solution in which standard test-pattern generation technology is applied to search for concrete instances of abstract traces, addressing the problem that it is in general undecidable whether an abstract trace corresponding to a counter-example has any concrete counterparts.
Abstract: The boundaries of model-checking have been extended through the use of abstraction. These techniques are conservative, in the following sense: when the verification succeeds, the verified property is guaranteed to hold; but when it fails, it may result either from the non-satisfaction of the property, or from an overly coarse abstraction. In case of failure, it is, in general, undecidable whether an abstract trace corresponding to a counter-example has any concrete counterparts. For debugging purposes, one usually desires to go further than giving a “yes/no” answer (actually, a “yes/don’t know” answer!), and look for such concrete counter-examples. We propose a solution in which we apply standard test-pattern generation technology to search for concrete instances of abstract traces.

Journal ArticleDOI
TL;DR: This paper presents an initialization analysis that guarantees the deterministic behavior of programs and proposes a safe approximation of the property, precise enough for most dataflow programs.
Abstract: One of the appreciated features of the synchronous dataflow approach is that a program defines a perfectly deterministic behavior. But the use of the delay primitive leads to undefined values at the first cycle; thus a dataflow program is really deterministic only if it can be shown that such undefined values do not affect the behavior of the system. This paper presents an initialization analysis that guarantees the deterministic behavior of programs. This property being undecidable in general, the paper proposes a safe approximation of the property, precise enough for most dataflow programs. This is a one-bit analysis (expressions are either initialized or uninitialized) defined as a type-inference system with subtyping constraints. The analysis has been implemented in the Lucid Synchrone compiler and in a new Scade-Lustre prototype compiler at Esterel Technologies. It gives very good results in practice.
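
The one-bit idea can be demonstrated on a toy dataflow core, with type 0 meaning "has a value from the first cycle" and type 1 meaning "only from the second". The AST encoding below is hypothetical and shows only the checking direction, omitting the inference of types and subtyping constraints that the paper's system performs.

```python
def init_type(expr, env):
    """0 = initialized from cycle 0; 1 = defined only from cycle 1 on."""
    kind = expr[0]
    if kind == "const":
        return 0                            # constants are defined everywhere
    if kind == "var":
        return env[expr[1]]
    if kind == "op":                        # pointwise operator: worst operand wins
        return max(init_type(e, env) for e in expr[1])
    if kind == "pre":                       # unit delay: loses the first cycle
        if init_type(expr[1], env) != 0:
            raise TypeError("pre applied to a possibly uninitialized stream")
        return 1
    if kind == "arrow":                     # a -> b: a patches b's first cycle
        if init_type(expr[1], env) != 0:
            raise TypeError("left of -> must be initialized")
        init_type(expr[2], env)             # right side may be of type 0 or 1
        return 0
    raise ValueError(kind)

# Example: with env = {"n": 0}, the counter  n = 0 -> pre n + 1  checks to
# type 0, while the bare  pre n + 1  would be reported as type 1.
```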

Journal ArticleDOI
TL;DR: This paper describes the experience in analyzing the requirements documents for the computer-aided resuscitation algorithm (CARA) designed by the Resuscitative Unit of the Walter Reed Army Institute of Research (WRAIR), and catalogs the effort required by a novice user of formal methods tools to carry out an analysis of the requirements documents.
Abstract: The design and functional complexity of medical devices have increased during the past 50 years, evolving from the use of a metronome circuit for the initial cardiac pacemaker to functions that include electrocardiogram analysis, laser surgery, and intravenous delivery systems that adjust dosage based on patient feedback. As device functionality becomes more intricate, concerns arise regarding efficacy, safety, and reliability. It thus becomes imperative to adopt a standard or methodology to ensure that the possibility of any defect or malfunction in these devices is minimized. It is with these facts in view that regulatory bodies are interested in investigating mechanisms to certify safety-critical medical devices. These organizations advocate the use of formal methods techniques to evaluate safety-critical medical systems. However, the use of formal methods is keenly debated, with most manufacturers claiming that they are arduous and time consuming. In this paper we describe our experience in analyzing the requirements documents for the computer-aided resuscitation algorithm (CARA) designed by the Resuscitative Unit of the Walter Reed Army Institute of Research (WRAIR). We present our observations from two different angles – that of a nonbeliever in formal methods and that of a practitioner of formal methods. For the former we catalog the effort required by a novice user of formal methods tools to carry out an analysis of the requirements documents. For the latter we address issues related to choice of designs, errors discovered in the requirements, and the tool support available for analyzing requirements.

Journal ArticleDOI
Hardi Hungar, Bernhard Steffen
TL;DR: It turns out that abstract interpretation is the key to scaling known learning techniques for practical applications, that model checking may serve as a teaching aid in the learning process underlying the model construction, and that there are various synergies with other validation and verification techniques.
Abstract: In this paper, we review behavior-based model construction from a point of view characterized by verification, model checking, and abstraction. It turns out that abstract interpretation is the key to scaling known learning techniques for practical applications, that model checking may serve as a teaching aid in the learning process underlying the model construction, and that there are various synergies with other validation and verification techniques. We will illustrate our discussion by means of a realistic telecommunications scenario, where the underlying system has grown over the last two decades, the available system documentation consists of not much more than user manuals and protocol standards, and the revision cycle times are extremely short. In this situation, behavior-based model construction provides a sound basis, e.g., for test-suite design and maintenance, test organization, and test evaluation.

Journal ArticleDOI
TL;DR: It is shown in the paper that the theorem and its extension hold for the two’s complement architecture, and users should ensure that results are large enough on circuits that do not implement gradual underflow.
Abstract: Few designs, mostly those of Texas Instruments, continue to use two’s complement floating point units. Such units are simpler to build and to validate, but they do not comply with the dominant IEEE standard for floating point arithmetic. We compare some properties of the two systems in this text. Some features are lost, but others remain unchanged. One strong example is the case of Sterbenz’s theorem and our recent extension. We show in the paper that the theorem and its extension hold for the two’s complement architecture. Still, users should ensure that results are large enough on circuits that do not implement gradual underflow. Theorems have been proven and validated using the Coq proof assistant.
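
For reference, the classical statement underlying the discussion is Sterbenz's theorem; the paper's contribution is that it, and the authors' earlier extension, carry over to two's complement formats. A standard rendering of the theorem:

```latex
% Sterbenz's theorem: subtraction of nearby floating-point numbers is exact.
\begin{theorem}[Sterbenz]
  Let $x$ and $y$ be floating-point numbers in a format with gradual
  underflow such that
  \[ \frac{y}{2} \;\le\; x \;\le\; 2y . \]
  Then $x - y$ is itself representable, so the computed difference
  $\operatorname{fl}(x - y) = x - y$ incurs no rounding error.
\end{theorem}
```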

Journal ArticleDOI
TL;DR: A verification tool, the Concurrency Workbench of the New Century (CWB-NC), is used to analyze a model of the CARA system, and a technique called unit verification is developed, which entails taking small units of a system, putting them in a “verification harness” that exercises relevant executions appropriately within the unit, and then model checking these more tractable units.
Abstract: The computer-aided resuscitation algorithm, or CARA, is part of a US Army-developed automated infusion device for treating blood loss experienced by combatants injured on the battlefield. CARA is responsible for automatically stabilizing a patient’s blood pressure by infusing blood as needed based on blood pressure data the CARA system collects. The control part of the system is implemented in software, which is extremely safety critical and thus must perform correctly. This paper describes a case study in which a verification tool, the Concurrency Workbench of the New Century (CWB-NC), is used to analyze a model of the CARA system. The huge state space of CARA makes it problematic to conduct traditional “push-button” automatic verification such as model checking. Instead, we develop a technique called unit verification, which entails taking small units of a system, putting them in a “verification harness” that exercises relevant executions appropriately within the unit, and then model checking these more tractable units. For systems like CARA whose requirements are localized to individual system components or interactions between small numbers of components, unit verification offers a means of coping with huge state spaces.

Journal ArticleDOI
TL;DR: In this paper, the authors present a method for analyzing assembly programs obtained by compilation and checking safety properties on compiled programs, which is adapted to the certification of assembly or other machine-level kinds of programs.
Abstract: We present a method for analyzing assembly programs obtained by compilation and checking safety properties on compiled programs. It proceeds by analyzing the source program, translating the invariant obtained at the source level, and then checking the soundness of the translated invariant with respect to the assembly program. This process is especially adapted to the certification of assembly or other machine-level kinds of programs. Furthermore, the success of invariant checking enhances the level of confidence in the results of both the compilation and the static analysis. From a practical point of view, our method is generic in the choice of an abstract domain for representing sets of stores, and the process does not interact with the compilation itself. Hence a certification tool can be interfaced with an existing analyzer and designed so as to work with a class of compilers that do not need to be modified. Finally, a prototype was implemented to validate the approach.

Journal ArticleDOI
TL;DR: A case study of the Computer Assisted Resuscitation Algorithm (CARA) software for a casualty intravenous fluid infusion pump is presented and the effectiveness of performing rapid prototyping with parallel conceptualization to expose requirements issues is explored.
Abstract: Computer-aided prototyping evaluates and refines software requirements by defining requirements specifications, designing underlying compositional architecture, doing restricted real-time scheduling, and constructing a prototype by using reusable executable software components. This paper presents a case study of the Computer Assisted Resuscitation Algorithm (CARA) software for a casualty intravenous fluid infusion pump and explores the effectiveness of performing rapid prototyping with parallel conceptualization to expose requirements issues. Using a suite of prototyping tools, five different design model alternatives are generated based on the analysis of customer requirements documents. Further comparison is conducted with specific focus on a sample of comparative criteria: simplicity of design, safety aspects, requirements coverage, and enabling architecture. The case study demonstrates the usefulness of comparative rapid prototyping for revealing the omissions and discrepancies in the requirements document. The study also illustrates the efficiency of creating/modifying parallel models, and of reasoning about their complexity, by using the tool suite. Additional enhancements for the prototyping suite are highlighted.

Journal ArticleDOI
TL;DR: This special section is the second devoted to publishing revised versions of contributions first presented at the International SPIN Workshop Series on Model Checking Software, and three of the papers included here have been extended to include significant new content and have undergone an independent round of reviewing.
Abstract: The term “software model checking” has recently been coined to refer to a flourishing area of research in software verification – the formal, automated analysis of program source code. Software model checking is considered an important application of classical model checking, where the model of a software system is analyzed in an automated fashion for compliance with a property specification. While classical model checking assumes the existence of an abstract model of the software system to be analyzed, in software model checking the emphasis is on directly analyzing program code given in a standard programming language, such as Java or C. This introduces a variety of significant obstacles, chief among them the efficient treatment of the complex data, e.g., heap structured data, and control constructs, e.g., procedure calls and exception handling, found in modern programming languages. These obstacles can also be viewed as opportunities for adapting traditional model checking data structures and algorithms to exploit the particular semantics of programming language constructs to gain improved performance. Moreover, while classical model checking emphasizes proving a model correct as the primary objective, an increasingly widely held view is that model checkers can function effectively as anomaly detectors or bug finders, i.e., they locate and explain undesired behavior of the software. This special section is the second devoted to publishing revised versions of contributions first presented at the International SPIN Workshop Series on Model Checking Software. In recent years this series of workshops has broadened its scope from focusing on the model checker SPIN to covering software model checking technology in general. The editorial introduction by Havelund and Visser to the first STTT special section devoted to SPIN papers [11] provides an excellent overview of the foundational ideas underlying software model checking. That special section was based on papers presented at the 7th International SPIN Workshop held at Stanford University (USA) in August/September 2000. Authors of well-regarded papers from the 8th International SPIN Workshop held in Toronto (Canada), colocated with ICSE 2001 on 10–11 May 2001, and the 9th International SPIN Workshop on Model Checking Software, held 11–13 April 2002 in Grenoble (France) as a satellite event of ETAPS 2002, were invited to submit to this special issue. All three of the papers included here have been extended to include significant new content and have undergone an independent round of reviewing.

Journal ArticleDOI
TL;DR: This paper discusses the application of formal methods software engineering (FMSE) to the development of the Computer Automated Resuscitation Algorithm (CARA) medical device at Walter Reed Army Institute of Research.
Abstract: This paper discusses the application of formal methods software engineering (FMSE) to the development of the Computer Automated Resuscitation Algorithm (CARA) medical device at Walter Reed Army Institute of Research. Because this system is potentially life critical, a high level of quality was required. A formal engineering approach to the software development activities was chosen to satisfy this need. Specifically, a technique called sequence enumeration was applied to elicit and refine requirements while deriving a formal specification. The fundamentals of the specification process that was used on the project are described along with a brief summary of the project experience in the development and testing phases. The project employed recent advances in Cleanroom software engineering methods along with older box-structured development and usage-model-based statistical testing techniques.
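
The enumeration discipline behind sequence enumeration can be skeletonized as follows. The `respond` and `reduces_to` oracles are hypothetical stand-ins for the analyst's documented decisions: each stimulus sequence receives a response, and sequences declared equivalent to a previously enumerated one are recorded but not extended, which is what makes the enumeration terminate.

```python
from collections import deque

def sequence_enumeration(stimuli, respond, reduces_to):
    """Enumerate stimulus sequences in length-lexicographic order; the
    surviving (non-reduced) sequences become the states of a Mealy-style
    specification."""
    spec = {(): None}                     # sequence -> assigned response
    queue = deque([()])                   # start from the empty sequence
    while queue:
        seq = queue.popleft()
        for s in stimuli:
            ext = seq + (s,)
            spec[ext] = respond(ext)
            if reduces_to(ext) is None:   # genuinely new behavior: extend it
                queue.append(ext)
    return spec
```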