
Showing papers presented at "Formal Methods for Industrial Critical Systems in 2009"


Book ChapterDOI
04 Nov 2009
TL;DR: The IEEE 754 standard, the FLUCTUAT tool, the types of codes to be analyzed and the analysis methodology, together with code examples and analysis results are presented.
Abstract: Most modern safety-critical control programs, such as those embedded in fly-by-wire control systems, perform a lot of floating-point computations. The well-known pitfalls of IEEE 754 arithmetic make stability and accuracy analyses a requirement for this type of software. This need is traditionally addressed through a combination of testing and sophisticated intellectual analyses, but such a process is both costly and error-prone. FLUCTUAT is a static analyzer developed by CEA-LIST for studying the propagation of rounding errors in C programs. After a long-term research collaboration with CEA-LIST on this tool, Airbus now intends to use FLUCTUAT industrially, in order to automate part of the accuracy analyses of some control programs. In this paper, we present the IEEE 754 standard, the FLUCTUAT tool, the types of codes to be analyzed and the analysis methodology, together with code examples and analysis results.

154 citations
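The rounding pitfalls this abstract refers to are easy to reproduce. The following sketch (plain Python on IEEE 754 binary64 doubles, not FLUCTUAT itself) shows how a decimal constant with no exact binary representation accumulates error under naive summation:

```python
import math

def naive_sum(values):
    # Left-to-right accumulation: each addition rounds to binary64.
    total = 0.0
    for v in values:
        total += v
    return total

values = [0.1] * 10          # 0.1 is not exactly representable in binary
print(naive_sum(values))      # 0.9999999999999999: errors accumulate
print(math.fsum(values))      # 1.0: correctly rounded summation
```

Static analyzers such as FLUCTUAT bound exactly this kind of drift over the loops of control programs, where the error can grow with every iteration.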


Book ChapterDOI
27 Jul 2009
TL;DR: This work presents dynamic communicating automata with timers and events to describe properties of systems, implemented in Larva, an event-based runtime verification tool for monitoring temporal and contextual properties of Java programs.
Abstract: Given the intractability of exhaustively verifying software, the use of runtime verification, which verifies single execution paths at runtime, is becoming popular. Although the use of runtime verification is increasing in industrial settings, various challenges must still be faced to enable it to spread further. We present dynamic communicating automata with timers and events to describe properties of systems, implemented in Larva, an event-based runtime verification tool for monitoring temporal and contextual properties of Java programs. The combination of timers with dynamic automata enables the straightforward expression of various properties, including replication of properties, as illustrated by the use of Larva for the runtime monitoring of a real-life case study: an online credit card transaction system. The features of Larva are also benchmarked and compared to a number of other runtime verification tools, to assess their respective strengths in property expressivity and the overheads induced through monitoring.

104 citations
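The core idea of an event-based runtime monitor can be sketched in a few lines. The property below ("a transaction may only occur between login and logout") is a hypothetical example written as a plain state machine; it is not Larva's actual automaton notation or API:

```python
class TransactionMonitor:
    """Tiny property automaton fed by the monitored program's events."""

    def __init__(self):
        self.state = "logged_out"
        self.violations = []

    def on_event(self, event):
        if event == "login":
            self.state = "logged_in"
        elif event == "logout":
            self.state = "logged_out"
        elif event == "transaction" and self.state != "logged_in":
            # Property violated on this execution path.
            self.violations.append(event)

m = TransactionMonitor()
for e in ["login", "transaction", "logout", "transaction"]:
    m.on_event(e)
print(len(m.violations))  # 1: the transaction after logout
```

Larva adds timers, contextual replication (one automaton instance per credit card, say), and automatic instrumentation of Java programs on top of this basic scheme.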


Book ChapterDOI
27 Jul 2009
TL;DR: The main conclusion and recommendation for practitioners is to be critical of claims of dramatic improvement brought by a single sophisticated technique; rather, use many different simple techniques and combine them.
Abstract: In order to apply formal methods in practice, the practitioner has to comprehend a vast amount of research literature and realistically evaluate the practical merits of different approaches. In this paper we focus on explicit finite-state model checking and study this area from the practitioner's point of view. We provide a systematic overview of techniques for fighting state space explosion and we analyse trends in the research. We also report on our own experience with the practical performance of these techniques. Our main conclusion and recommendation for practitioners is the following: be critical of claims of dramatic improvement brought by a single sophisticated technique; rather, use many different simple techniques and combine them.

91 citations


Book ChapterDOI
04 Nov 2009
TL;DR: The communication between a developer and a domain expert (or manager) is very important for successful deployment of formal methods, and it is useful to create domain-specific visualisations.
Abstract: The communication between a developer and a domain expert (or manager) is very important for successful deployment of formal methods. On the one hand, it is crucial for the developer to get feedback from the domain expert for further development. On the other hand, the domain expert needs to check whether his expectations are met. An animation tool makes it possible to check the presence of desired functionality and to inspect the behaviour of a specification, but requires knowledge of the mathematical notation. To avoid this problem, it is useful to create domain-specific visualisations. One tool which performs this task is Brama. This tool is very important for ClearSy: it is being used in several industrial projects and has helped to obtain several contracts. However, the tool cannot be applied in conjunction with ProB. Also, creating the code that defines the mapping between a state and its graphical representation is a rather time-consuming task; it can take several weeks to develop a custom visualisation.

55 citations


Book ChapterDOI
27 Jul 2009
TL;DR: This paper presents the formal verification of a primary-to-secondary leaking (abbreviated as PRISE) safety procedure in a nuclear power plant (NPP) using the coloured Petri net (CPN) representation.
Abstract: This paper presents the formal verification of a primary-to-secondary leaking (abbreviated as PRISE) safety procedure in a nuclear power plant (NPP). The software for the PRISE is defined by the Function Block Diagram (FBD) specification method. Our approach to the formal verification of the PRISE safety procedure is based on the coloured Petri net (CPN) representation. The CPN model of the checked software is derived by reinterpretation from the FBD diagram, using a pre-developed library of CPN subnets. This results in a high-level, hierarchical coloured Petri net that has an almost identical structure to the FBD specification. The state space of the CPN model was drastically reduced by "folding" equivalent states and trajectories into equivalence classes. Some of the safety properties could be proven based on the SCC (strongly connected components) graph of the reduced state space. Other properties were proven by CTL temporal-logic-based model checking.

32 citations


Book ChapterDOI
27 Jul 2009
TL;DR: Some lessons learned are identified, showing how to develop and verify the specification and check some properties in a compositional way using theoretical results and support tools to validate this complex system.
Abstract: This paper presents an experience report on the specification and validation of a real case study in the context of the industrial CRISTAL project. The case study concerns a platoon of a new type of urban vehicles with new functionalities and services. It is specified using CSP||B, a combination of two well-known formal methods, and validated using the corresponding support tools. This large system, both distributed and embedded, typically corresponds to a multi-level composition of components that have to cooperate. We identify some lessons learned, showing how to develop and verify the specification and check some properties in a compositional way, using theoretical results and support tools to validate this complex system.

29 citations


Book ChapterDOI
04 Nov 2009
TL;DR: Formal models of the e-passport protocols are developed that enable model-based testing using the TorXakis framework and help to rigorously test electronic passports.
Abstract: Electronic passports, or e-passports for short, contain a contactless smartcard which stores digitally-signed data. To rigorously test e-passports, we developed formal models of the e-passport protocols that enable model-based testing using the TorXakis framework.

24 citations


Book ChapterDOI
27 Jul 2009
TL;DR: A new methodology for requirements validation, based on the use of formal methods, optimized to deal with properties (rather than with models), is proposed.
Abstract: Flaws in requirements may have severe impacts on the subsequent phases of the development flow. However, an effective validation of requirements can be considered a largely open problem. In this paper, we propose a new methodology for requirements validation, based on the use of formal methods. The methodology consists of three main phases: first, an informal analysis is carried out, resulting in a structured version of the requirements, where each fragment is classified according to a fixed taxonomy. In the second phase, each fragment is then mapped onto a subset of UML, with a precise semantics, and enriched with static and temporal constraints. The third phase consists of the application of specialized formal analysis techniques, optimized to deal with properties (rather than with models).

24 citations


Book ChapterDOI
04 Nov 2009
TL;DR: Industrial experience of applying the B formal method in diverse application fields (railways, automotive, smartcards, etc.) is presented.
Abstract: This article presents industrial experience of applying the B formal method in diverse application fields (railways, automotive, smartcards, etc.). While the added value of such an approach has been demonstrated over the years, using a formal method is not a panacea and requires some precautions when introduced into an industrial development cycle.

22 citations


Book ChapterDOI
04 Nov 2009
TL;DR: This extended abstract briefly surveys the key concepts and describes the experience in the application of bi-abduction to real-world applications and systems programs of over one million lines of code.
Abstract: In joint work with Cristiano Calcagno, Peter O'Hearn, and Hongseok Yang, we have introduced bi-abductive inference and its use in reasoning about heap manipulating programs [5]. This extended abstract briefly surveys the key concepts and describes our experience in the application of bi-abduction to real-world applications and systems programs of over one million lines of code.

13 citations


Book ChapterDOI
27 Jul 2009
TL;DR: This work identifies three environmental assumptions and compares the implementability of a Held_For operator in each of them, formalizing this analysis in PVS.
Abstract: There has been relatively little work on the implementability of timing requirements. We have previously provided definitions of fundamental timing operators that explicitly considered tolerances on property durations and intersample jitter. In this work we identify three environmental assumptions and compare the implementability of a Held_For operator in each of them, formalizing this analysis in PVS. We show how to design a software component that implements the Held_For operator and then verify it in PVS. This pre-verified component is then used to guide the design of more complex components and to decompose their design verification into simple inductive proofs as demonstrated through the implementation of a timing requirement for an example application.
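A Held_For(c, d) operator holds when condition c has been continuously true for at least duration d. The sampled-time sketch below is an illustrative assumption, not the paper's PVS formalization: it uses an ideal fixed sample period and ignores the tolerances and intersample jitter the paper analyses.

```python
def held_for(samples, duration, period=1):
    """samples: boolean condition value at each tick.
    Output at tick i is True iff the condition has held
    continuously for at least `duration` time units."""
    out, run = [], 0
    for c in samples:
        run = run + 1 if c else 0   # length of the current True run
        out.append(run * period >= duration)
    return out

print(held_for([True, True, True, False, True, True, True], 3))
# [False, False, True, False, False, False, True]
```

The paper's point is precisely that this idealization breaks down under real sampling: tolerances on the duration and on sample timing determine whether the operator is implementable at all.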

Book ChapterDOI
27 Jul 2009
TL;DR: A model of I/O complexity, based on the model of Aggarwal and Vitter modified for flash memories, is provided, together with an answer to when the usage of flash devices pays off and whether their further evolution in speed and capacity could broaden the range in which new algorithms outperform the old ones.
Abstract: As flash media become common and their capacities and speed grow, they are becoming a practical alternative to standard mechanical drives. So far, external memory model checking algorithms have been optimized for mechanical hard disks, corresponding to the model of Aggarwal and Vitter [1]. Since flash memories are essentially different, the model of Aggarwal and Vitter no longer describes their typical behavior. On such a different device, algorithms can have different complexity, which may lead to the design of completely new flash-memory-efficient algorithms. We provide a model for computing I/O complexity, based on the model of Aggarwal and Vitter modified for flash memories. We discuss verification algorithms optimized for this model and compare their performance with approaches known from I/O-efficient model checking on mechanical hard disks. We also answer when the usage of flash devices pays off and whether their further evolution in speed and capacity could broaden the range in which new algorithms outperform the old ones.

Book ChapterDOI
27 Jul 2009
TL;DR: This paper develops in Maude an abstract, finite-state version of the information-flow operational semantics of Java which supports finite program verification and proposes a certification technique for non-interference of Java programs based on rewriting logic.
Abstract: In this paper we propose a certification technique for non-interference of Java programs based on rewriting logic, a very general logical and semantic framework efficiently implemented in the high-level programming language Maude. Non-interference is a semantic program property that prevents illicit information flow from happening. Starting from a basic specification of the semantics of Java written in Maude, we develop an information-flow extension of this operational Java semantics which allows us to observe non-interference of Java programs. Then we develop in Maude an abstract, finite-state version of the information-flow operational semantics which supports finite program verification. As a by-product of the verification, a certificate of non-interference is delivered, which consists of a set of (abstract) rewriting proofs that can be easily checked by the code consumer using a standard rewriting logic engine.
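The property itself is easy to state concretely. A program is non-interferent when its public (low) output does not depend on secret (high) input; the hypothetical functions below illustrate the definition in plain Python, not the paper's Maude/Java machinery:

```python
def leaky(secret, public):
    # Illicit flow: the low output depends on the high input.
    return public + (1 if secret > 0 else 0)

def safe(secret, public):
    # No flow from secret to the low output.
    return public * 2

def interferes(prog, public=10):
    # Two runs that differ only in the secret input: if the public
    # output differs, an observer can learn something about the secret.
    return prog(secret=0, public=public) != prog(secret=1, public=public)

print(interferes(leaky))  # True: observable dependence on the secret
print(interferes(safe))   # False: non-interference holds on this pair
```

Testing pairs of runs like this can only refute non-interference; proving it for all inputs is what requires the semantic, abstraction-based verification the abstract describes.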

Book ChapterDOI
04 Nov 2009
TL;DR: This paper presents techniques developed to check program equivalences in the context of cryptographic software development, where specifications are typically reference implementations, and uses the fundamental notion of natural invariant to link the specification level and the interactive proof construction process.
Abstract: This paper presents techniques developed to check program equivalences in the context of cryptographic software development, where specifications are typically reference implementations. The techniques allow for the integration of interactive proof techniques (required given the difficulty and generality of the results sought) in a verification infrastructure that is capable of discharging many verification conditions automatically. To this end, the difficult results in the verification process (to be proved interactively) are isolated as a set of lemmas. The fundamental notion of natural invariant is used to link the specification level and the interactive proof construction process.

Book ChapterDOI
27 Jul 2009
TL;DR: An approach and an associated tool that have been proposed to automate the test oracle procedure of critical systems developed at Airbus have been successfully applied to several Airbus examples.
Abstract: This paper presents an approach and an associated tool that have been proposed to automate the test oracle procedure of critical systems developed at Airbus. The target tests concern the early validation of the SCADE design and are performed in a simulated environment. The proposed approach and tool have been successfully applied to several Airbus examples.

Book ChapterDOI
04 Nov 2009
TL;DR: This paper shows how Rate Transition Systems can be used as a unifying framework for the definition of the semantics of stochastic process algebras, and how RTSs help describe different languages, their differences and their similarities.
Abstract: In this paper we show how Rate Transition Systems (RTSs) can be used as a unifying framework for the definition of the semantics of stochastic process algebras. RTSs facilitate the compositional definition of such semantics by exploiting operators on the next-state functions, which are the functional counterpart of classical process algebra operators. We apply this framework to representative fragments of major stochastic process calculi, namely TIPP, PEPA and IML, and show how they solve the issue of transition multiplicity in a simple and elegant way. We moreover show how RTSs help in describing different languages, their differences and their similarities. For each calculus, we also show the formal correspondence between the RTS semantics and the standard SOS one.

Book ChapterDOI
27 Jul 2009
TL;DR: This paper describes a powerful, fully automated method to evaluate Datalog queries by using Boolean Equation Systems (Bess), and its application to object-oriented program analysis.
Abstract: This paper describes a powerful, fully automated method to evaluate Datalog queries by using Boolean Equation Systems (Bess), and its application to object-oriented program analysis. Datalog is used as a specification language for expressing complex interprocedural program analyses involving dynamically created objects. In our methodology, Datalog rules encoding a particular analysis together with a set of constraints (Datalog facts that are automatically extracted from program source code) are dynamically transformed into a Bes, whose local resolution corresponds to the demand-driven evaluation of the program analysis. This approach allows us to reuse existing general purpose verification toolboxes, such as Cadp, providing local Bes resolutions with linear-time complexity. Our evaluation technique has been implemented and successfully tested on several Java programs and Datalog analyses that demonstrate the feasibility of our approach.
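For readers unfamiliar with Datalog-style program analysis, the flavour of such a query can be shown with a toy reachability analysis evaluated by naive bottom-up fixpoint iteration (an illustrative sketch; the paper's method instead transforms the rules into a Boolean Equation System solved on demand with Cadp):

```python
# Rules: reach(X,Y) :- edge(X,Y).
#        reach(X,Z) :- reach(X,Y), edge(Y,Z).
edges = {("a", "b"), ("b", "c"), ("c", "d")}

def reachability(edges):
    reach = set(edges)                       # facts from the first rule
    while True:
        # Apply the recursive rule to everything derived so far.
        new = {(x, z) for (x, y) in reach
                      for (y2, z) in edges if y == y2}
        if new <= reach:
            return reach                     # fixpoint reached
        reach |= new

print(("a", "d") in reachability(edges))     # True
```

A demand-driven (BES-based) evaluation avoids computing the whole fixpoint when only one query, such as reach("a","d"), is of interest.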

Book ChapterDOI
04 Nov 2009
TL;DR: A dynamic partitioning scheme is described for model checking techniques that divide the state space into partitions, such as most external memory and distributed model checking algorithms; its goal is to reduce the number of transitions that link states belonging to different partitions.
Abstract: We describe a dynamic partitioning scheme usable by model checking techniques that divide the state space into partitions, such as most external memory and distributed model checking algorithms. The goal of the scheme is to reduce the number of transitions that link states belonging to different partitions, and thereby limit the amount of disk access and network communication. We report on several experiments made with our verification platform ASAP, which implements the dynamic partitioning scheme proposed in this paper.

Book ChapterDOI
04 Nov 2009
TL;DR: This paper develops a technique to prove either that non-determinism does not affect the behavior of the simulation model, or that there exists a situation in which the simulation model might produce different results.
Abstract: Cell libraries often contain a simulation model in a system design language, such as Verilog. These languages usually involve non-determinism, which, in turn, poses a challenge to their validation. Simulators often resolve such problems by using certain rules to make the specification deterministic. This, however, is not justified by the behavior of the hardware that is to be modeled. Hence, simulation might not be able to detect certain errors. In this paper we develop a technique to prove either that non-determinism does not affect the behavior of the simulation model, or that there exists a situation in which the simulation model might produce different results. To make our technique efficient, we show that the global property of equal behavior for all possible evaluations is equivalent to checking only a certain local property.

Book ChapterDOI
04 Nov 2009
TL;DR: Safe is a first-order functional language, compiled to JVM bytecode, that is useful for programming small devices; the paper presents a certified implementation, in Isabelle/HOL, of the Safe Virtual Machine (SVM) on top of the JVM.
Abstract: Safe is a first-order functional language with unusual memory management features: memory can be both explicitly and implicitly deallocated at some specific points in the program text, and there is no need for a runtime garbage collector. The final code is bytecode of the Java Virtual Machine (JVM), so the language is useful for programming small devices based on this machine. As an intermediate stage in the compiler's back-end, we have defined the Safe Virtual Machine (SVM), and have implemented this machine on top of the Java Virtual Machine (JVM). The paper presents the certified implementation of the SVM on top of the JVM. We have used the proof assistant Isabelle/HOL for this purpose.

Book ChapterDOI
04 Nov 2009
TL;DR: This work proposes a methodology to design and verify a concurrent system that splits the verification problem in two independent tasks: internal verification of shared resources, where some concurrency aspects like mutual exclusion and conditional synchronisation are isolated, and external verification of processes, where synchronisation mechanisms are not relevant.
Abstract: Testing is the most widely used approach to (partial) system validation in industry. The introduction of concurrency makes exhaustive testing extremely costly or simply impossible, requiring a shift to formal verification techniques. We propose a methodology to design and verify a concurrent system that splits the verification problem into two independent tasks: internal verification of shared resources, where some concurrency aspects like mutual exclusion and conditional synchronisation are isolated, and external verification of processes, where synchronisation mechanisms are not relevant. Our method is language independent, non-intrusive for the development process, and improves the portability of the resulting system. We demonstrate it by actually checking several properties of an example application using the TLC model checker.

Book ChapterDOI
04 Nov 2009
TL;DR: This paper sums up the integration of correct-by-construction components into the qualifiable GeneAuto automatic code generator (ACG), which transforms Simulink models into C code for safety-critical systems.
Abstract: This paper sums up the integration of correct-by-construction components into the qualifiable GeneAuto automatic code generator (ACG), which transforms Simulink models into C code for safety-critical systems. Our approach, which combines a classical development process with formal specification and verification using proof assistants, has led to preliminary fruitful exchanges with French certification authorities. The most rigorous objectives, from the qualification level and user standards, conform to the DO-178B/ED-12B recommendations for a level A development tool. The resulting tool has been applied successfully to real-size industrial use cases from various transportation-domain partners and led to the detection of requirement errors.

Book ChapterDOI
27 Jul 2009
TL;DR: A new algorithm and a new tool that combines BDD-based model checking with partial order reduction (POR) to allow the verification of models featuring asynchronous processes, with significant performance improvements over currently available tools are presented.
Abstract: Different approaches have been developed to mitigate the state space explosion of model checking techniques. Among them, symbolic verification techniques use efficient representations such as BDDs to reason over sets of states rather than over individual states. Unfortunately, past experience has shown that these techniques do not work well for loosely-synchronized models. This paper presents a new algorithm and a new tool that combines BDD-based model checking with partial order reduction (POR) to allow the verification of models featuring asynchronous processes, with significant performance improvements over currently available tools. We start from the ImProviso algorithm (Lerda et al.) for computing reachable states, which combines POR and symbolic verification. We merge it with the FwdUntil method (Iwashita et al.) that supports verification of a subset of CTL. Our algorithm has been implemented in a prototype that is applicable to action-based models and logics such as process algebras and ACTL. Experimental results on a model of an industrial application show that our method can verify properties of a large industrial model which cannot be handled by conventional model checkers.

Book ChapterDOI
27 Jul 2009
TL;DR: In this paper, the authors consider an industrial implementation of the reentrant readers-writers problem and model it using a model checker, revealing a serious error: a possible deadlock situation.
Abstract: The classic readers-writers problem has been extensively studied. This holds to a lesser degree for the reentrant version, where it is allowed to nest locking actions. Such nesting is useful when a library is created with various procedures that each start and end with a lock; allowing nesting makes it possible for these procedures to call each other. We considered an existing, widely used industrial implementation of the reentrant readers-writers problem. We modeled it using a model checker, revealing a serious error: a possible deadlock situation. The model was improved and checked satisfactorily for a fixed number of processes. To achieve a correctness result for an arbitrary number of processes, the model was converted to a theorem prover, with which it was proven correct.
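The nesting pattern the abstract describes is easy to see with Python's standard reentrant lock (an illustrative sketch, not the industrial implementation the paper analyses): each library procedure acquires the lock, and procedures call each other.

```python
import threading

class Library:
    def __init__(self):
        # A plain threading.Lock would deadlock here: the nested
        # acquire in inner() would block forever on the same thread.
        self._lock = threading.RLock()

    def inner(self):
        with self._lock:          # nested acquisition of the same lock
            return 1

    def outer(self):
        with self._lock:          # first acquisition
            return self.inner()   # calls another locking procedure

lib = Library()
print(lib.outer())  # 1: the RLock permits same-thread nested locking
```

Reentrancy fixes only same-thread nesting; the deadlock the paper found involves the interaction between multiple reader and writer threads, which is exactly where model checking pays off.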

Book ChapterDOI
27 Jul 2009
TL;DR: Structural coverage criteria for Lustre programs are extended to programs that use multiple clocks, allowing the existing coverage metrics to be applied to industrial software components, which usually operate on multiple clocks, without negatively affecting the complexity of the criteria.
Abstract: Lustre is a formal synchronous declarative language widely used for modeling and specifying safety-critical applications in the fields of avionics, transportation and energy production. Testing this kind of application is an important and demanding task during the development process. It mainly consists in generating test data and measuring the achieved coverage. A hierarchy of structural coverage criteria for Lustre programs has recently been defined to assess the thoroughness of a given test set. The criteria are based on the operator network, the graphical representation of a Lustre program, which depicts the way input flows are transformed into output flows through their propagation along the program paths. This criteria definition aimed at demonstrating the viability of such a coverage assessment approach but does not deal with all the language constructions; in particular, the use of multiple clocks has not been taken into account. In this paper, we extend the criteria to programs that use multiple clocks. Such an extension allows the existing coverage metrics to be applied to industrial software components, which usually operate on multiple clocks, without negatively affecting the complexity of the criteria.

Book ChapterDOI
04 Nov 2009
TL;DR: An explanation is given of the approach taken by Cress (Communication Representation Employing Systematic Specification) and an application to grid service composition in e-Social Science.
Abstract: Creating new services through composition of existing ones is an attractive option. However, composition can be complex, and service compatibility needs to be checked. A rigorous and industrially usable methodology is therefore desirable for creating, verifying, implementing and validating composed services. An explanation is given of the approach taken by Cress (Communication Representation Employing Systematic Specification). Formal verification and validation are performed through automated translation to Lotos (Language Of Temporal Ordering Specification). Implementation and validation are performed through automated translation to Bpel (Business Process Execution Language) and WSDL (Web Services Description Language). The approach is illustrated with an application to grid service composition in e-Social Science.

Book ChapterDOI
04 Nov 2009
TL;DR: A Linux kernel driver for I2C (Inter-Integrated Circuit) is analysed using two formal methods widely applied in industry, model checking and static analysis, revealing potential defects that were later confirmed by the developers.
Abstract: Formal methods for the analysis of system behaviour offer solutions to problems with concurrency, such as race conditions and deadlocks. We employ the two such methods that are presently most applied in industry, model checking and static analysis, on a common case study: analysing the behaviour of a Linux driver for I2C (Inter-Integrated Circuit). An industrial client provided us with the source code of the driver, which was known to contain defects. Based on the code, some documentation, and feedback from the developers, we extracted a model of the device driver. The model was checked using the mCRL2 toolset [3] and some potential defects were revealed, which were later confirmed by the developers. The errors were caused by inconsistent use of routines for interrupt enabling and disabling, resulting in unprotected references to shared memory and calls to lower-level functions. In addition, we performed checks with UNO [4], a static analysis tool that works directly with the source code. We employed UNO to statically detect the errors that were found by the dynamic analysis in the model checking phase. Based on our findings, we modified the source code to avoid the discovered potential defects. Although some errors remained unsolved, an improvement was observed in the standard tests carried out with our fixed version.

Book ChapterDOI
04 Nov 2009
TL;DR: The specification and verification of a protocol intended to facilitate communication in an experimental remotely operated vehicle used by NASA researchers is presented, and compositional techniques are demonstrated that allow for automating the tedious and usually cumbersome part of the proof, thereby making the iterative design process of protocols feasible.
Abstract: We present the specification and verification, in PVS, of a protocol intended to facilitate communication in an experimental remotely operated vehicle used by NASA researchers. The protocol is defined as a stack-layered composition of simpler protocols. It can be seen as the vertical composition of protocol layers, where each layer performs input and output message processing, and the horizontal composition of different processes concurrently inhabiting the same layer, where each process satisfies a distinct requirement. We formally prove that the protocol components satisfy certain delivery guarantees. Then, we demonstrate compositional techniques that allow us to prove that these guarantees also hold in the composed system. Although the protocol itself is not novel, the methodology employed in its verification extends existing techniques by automating the tedious and usually cumbersome part of the proof, thereby making the iterative design process of protocols feasible.

Book ChapterDOI
04 Nov 2009
TL;DR: Results of experiments show that use of the restrictions on concurrency in model checking with Java PathFinder reduces the state space size by an order of magnitude and also reduces the time needed to discover errors in Java programs.
Abstract: The main limitation of software model checking is that, due to state explosion, it does not scale to real-world multi-threaded programs. One of the reasons is that current software model checkers adhere to full semantics of programming languages, which are based on very permissive models of concurrency. Current runtime platforms for programs, however, restrict concurrency in various ways -- it is visible especially in the case of critical embedded systems, which typically involve only a single processor and use a threading model based on limited preemption. In this paper, we present a technique for addressing state explosion in model checking of Java programs for embedded systems, which exploits restrictions on concurrency common to current Java platforms for such systems. We have implemented the technique in Java PathFinder and performed a number of experiments on Purdue Collision Detector, which is a non-trivial multi-threaded Java program. Results of experiments show that use of the restrictions on concurrency in model checking with Java PathFinder reduces the state space size by an order of magnitude and also reduces the time needed to discover errors in Java programs.

Book ChapterDOI
27 Jul 2009
TL;DR: This talk will report on the use of an approach, called Instrumentation Based Verification, for checking the correctness of models of control software given in Simulink® and Stateflow®, and on a project between the Fraunhofer Center for Experimental Software Engineering and a major automotive supplier on using IBV to verify models of an exterior-lighting control system.
Abstract: This talk will report on the use of an approach, called Instrumentation Based Verification, for checking the correctness of models of control software given in Simulink® and Stateflow®. In IBV, engineers formalize requirements as so-called monitor models, whose purpose is to search executions of the main controller model for violations of required behavior. Testing is then performed on the instrumented controller model in order to check for the possibility of deviations between controller and requirements. Tools such as Reactis® provide automated support for conducting these activities, and the technique has attracted interest in automotive, aerospace and medical-device settings. The presentation will first review model-based development and IBV and their industrial motivations. It will then report on a project between the Fraunhofer Center for Experimental Software Engineering and a major automotive supplier on using IBV to verify models of an exterior-lighting control system.