
Showing papers presented at "Formal Methods for Industrial Critical Systems in 2013"


Book ChapterDOI
23 Sep 2013
TL;DR: In this paper, the authors conducted an informal survey of contractors, customers, and certification authorities in the United States aerospace domain to identify barriers to the adoption of formal methods and suggested mitigations for those barriers.
Abstract: The authors conducted an informal survey of contractors, customers, and certification authorities in the United States aerospace domain to identify barriers to the adoption of formal methods and suggested mitigations for those barriers. We surveyed 31 individuals from the following nine organizations: United States Army, Boeing, FAA, Galois, Honeywell, Lockheed Martin, NASA, Rockwell Collins, and Wind River. The top three barrier categories were education, tools, and the industrial environment (i.e., non-technical barriers with respect to personnel changes, contracts, and schedules). The top three mitigation categories were education, improving tool integration, and creating and disseminating evidence of the benefits of formal analysis. Strategies to accelerate adoption of formal methods include making formal methods a part of the undergraduate software engineering curriculum, hosting courses in formal methods for working engineers, funding the integration of tools, funding improvements to tool interfaces, and promoting/requiring the use of formal methods on future contracts.

53 citations


Book ChapterDOI
23 Sep 2013
TL;DR: The paper presents a foundational model for a relay-based protected component that can be incrementally updated to represent more advanced behaviors, such as self-checking, routine test and continuous monitoring, and provides a set of reliability assessment properties of power distribution systems that can be formally verified by PRISM.
Abstract: Relays are widely used in power distribution systems to isolate their faulty components and thus avoid disruption of power and damage to expensive equipment. The reliability of relay-based protection of power distribution systems is of utmost importance and is judged by first constructing Markovian models of individual modules and then analyzing these models analytically or using simulation. However, simulations cannot ascertain accurate results and analytical methods do not scale. To overcome these limitations, we propose a modular approach for developing Markovian models of relay-based protected components and then analyzing the reliability of the overall power distribution system by executing its individual modules in parallel using the PRISM probabilistic model checker. The paper presents a foundational model for a relay-based protected component that can be incrementally updated to represent more advanced behaviors, such as self-checking, routine test and continuous monitoring. Moreover, the paper provides a set of reliability assessment properties of power distribution systems that can be formally verified by PRISM. For illustration purposes, we present the analysis of a typical power distribution substation.
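As a minimal illustration of the kind of figure such Markovian reliability models yield (the rates here are made up, not from the paper, and PRISM would handle far richer models), the steady-state availability of a single two-state protected component can be computed directly:

```python
from fractions import Fraction

def steady_state_availability(fail_rate, repair_rate):
    """Steady-state availability of a two-state (Up/Down) CTMC: solve the
    balance equation pi_up * fail_rate = pi_down * repair_rate
    together with pi_up + pi_down = 1."""
    lam, mu = Fraction(fail_rate), Fraction(repair_rate)
    return mu / (lam + mu)

# Illustrative rates (per hour); real relay failure/repair data would differ.
print(float(steady_state_availability("1/1000", "1/10")))  # ~0.9901
```

PRISM computes such quantities symbolically for networks of interacting modules, where this closed form no longer exists.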

12 citations


Book ChapterDOI
23 Sep 2013
TL;DR: This paper compares approaches based on model enumeration, on resolution, on dependency sequents, on substitution, and on knowledge compilation with projection in the area of automotive configuration and describes two real-life applications: model counting on a set of customer-relevant options and projection of BOM (bill of materials) constraints.
Abstract: This paper evaluates different algorithms for existential Boolean quantifier elimination in the area of automotive configuration. We compare approaches based on model enumeration, on resolution, on dependency sequents, on substitution, and on knowledge compilation with projection. We describe two real-life applications: model counting on a set of customer-relevant options and projection of BOM (bill of materials) constraints. Our work includes an implementation of the presented techniques on top of state-of-the-art tools. We evaluate the different approaches on real production data from our collaboration with BMW.
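A tiny sketch of two of the techniques the paper compares, on propositional CNF (the encoding of variables as signed integers is our convention, not the paper's): resolution-based elimination of an existentially quantified variable, and model counting by enumeration.

```python
from itertools import product

def count_models(clauses, variables):
    """Count assignments over `variables` satisfying a CNF; clauses are
    sets of signed literals, e.g. {1, -2} means (x1 or not x2)."""
    n = 0
    for bits in product([False, True], repeat=len(variables)):
        val = dict(zip(variables, bits))
        if all(any(val[abs(l)] == (l > 0) for l in c) for c in clauses):
            n += 1
    return n

def eliminate(clauses, v):
    """Existentially quantify variable v by resolution (a Davis-Putnam step)."""
    pos = [c for c in clauses if v in c]
    neg = [c for c in clauses if -v in c]
    rest = [c for c in clauses if v not in c and -v not in c]
    resolvents = []
    for p in pos:
        for q in neg:
            r = (p - {v}) | (q - {-v})
            if not any(-l in r for l in r):  # drop tautologous resolvents
                resolvents.append(r)
    return rest + resolvents

# exists x3 . (x1 or x3) and (not x3 or x2)  ==  (x1 or x2)
cnf = [{1, 3}, {-3, 2}]
proj = eliminate(cnf, 3)
print(proj)                        # [{1, 2}]
print(count_models(proj, [1, 2]))  # 3 of the 4 assignments satisfy it
```

On industrial configuration formulas, enumeration explodes; that trade-off between resolution, substitution, and knowledge compilation is exactly what the paper measures.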

11 citations


Book ChapterDOI
23 Sep 2013
TL;DR: This paper uses the CADP toolbox to develop and validate a generic formal model of an SoC compliant with the recent ACE specification proposed by ARM to implement system-level coherency.
Abstract: System-on-Chip (SoC) architectures now integrate many different components, such as processors, accelerators, memory, and I/O blocks, some but not all of which may have caches. Because the validation effort with simulation-based validation techniques, as currently used in industry, grows exponentially with the complexity of the SoC, we investigate in this paper the use of formal verification techniques. More precisely, we use the CADP toolbox to develop and validate a generic formal model of an SoC compliant with the recent ACE specification proposed by ARM to implement system-level coherency.
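To give a flavor of what such coherency verification involves, here is a toy two-cache MSI-style protocol (drastically simpler than ACE, and not the paper's model) explored exhaustively, checking the single-writer invariant the way an explicit-state model checker would:

```python
from collections import deque

# Toy MSI protocol: each cache is in state I(nvalid), S(hared), or M(odified).
# On a write by cache i, all others invalidate; on a read, an M holder downgrades.
def step(state, cache, op):
    state = list(state)
    if op == "read":
        for j, s in enumerate(state):
            if j != cache and s == "M":
                state[j] = "S"
        if state[cache] == "I":
            state[cache] = "S"
    else:  # write
        for j in range(len(state)):
            state[j] = "I"
        state[cache] = "M"
    return tuple(state)

def explore(n_caches=2):
    """BFS over all reachable states, checking at most one Modified copy."""
    init = ("I",) * n_caches
    seen, queue = {init}, deque([init])
    while queue:
        st = queue.popleft()
        assert st.count("M") <= 1, f"coherence violated in {st}"
        for i in range(n_caches):
            for op in ("read", "write"):
                nxt = step(st, i, op)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen

print(sorted(explore()))  # 6 reachable states, invariant holds in all
```

CADP performs this kind of exploration compositionally on process-algebraic models, which is what makes an ACE-scale state space tractable.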

10 citations


Book ChapterDOI
23 Sep 2013
TL;DR: This work presents a new analysis framework combining the analysis of open-loop stable controllers with safety constructs (redundancy, voters, ...) and introduces the basic analysis approaches: abstract interpretation synthesizing quadratic invariants and backward analysis based on quantifier elimination and convex hull computation synthesizing linear invariants.
Abstract: Critical control systems are often built as a combination of a control core with safety mechanisms allowing it to recover from failures; for example, a PID controller used with triplicated inputs and voting. Typically these systems are designed at the model level in a synchronous language such as Lustre or Simulink, and their code is automatically generated from these models. We present a new analysis framework combining the analysis of open-loop stable controllers with safety constructs (redundancy, voters, ...). We introduce the basic analysis approaches: abstract interpretation synthesizing quadratic invariants, and backward analysis based on quantifier elimination and convex hull computation synthesizing linear invariants. We then apply the framework to a simple but representative example that no other available state-of-the-art technique is able to analyze. This contribution is another step towards the early use of formal methods for critical embedded software such as that of the aerospace industry.
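A concrete (and heavily simplified, entirely made-up) instance of the two ingredients: a majority voter masking a faulty triplicated channel, and a linear invariant of the kind such analyses synthesize, checked inductively along a run of an open-loop stable first-order system.

```python
def vote(a, b, c):
    """Majority (median) voter: masks a single faulty channel."""
    return sorted((a, b, c))[1]

def check_invariant(gain=0.5, u_max=1.0, steps=1000):
    """For the open-loop stable update x' = gain*x + u with |u| <= u_max
    and 0 < gain < 1, |x| <= u_max / (1 - gain) is an inductive invariant."""
    x = 0.0
    inv = u_max / (1 - gain)  # candidate linear invariant bound
    for t in range(steps):
        # third input channel stuck at 5.0; the voter masks the fault
        u = vote(u_max, u_max, 5.0)
        x = gain * x + u
        assert abs(x) <= inv, (t, x)
    return x

print(check_invariant())  # state converges toward u_max/(1-gain) = 2.0
```

The paper's contribution is to establish such bounds statically, for the combination of controller and voter, rather than by testing runs.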

9 citations


Book ChapterDOI
23 Sep 2013
TL;DR: In the context of specification of complex digital systems and their implementation on FPGA, a tool-based methodology is developed using a component-based approach by means of Interpreted Prioritized Time Petri nets which are formalized in this article.
Abstract: In the context of specification of complex digital systems and their implementation on FPGA, a tool-based methodology is developed using a component-based approach. The component's behavior is described by means of Interpreted Prioritized Time Petri nets which are formalized in this article. Formal analysis is used to validate the model's properties and to optimize its implementation. Our approach is illustrated on the micro machine of a distributed stimulation unit.
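As background, the core analysis on any Petri net variant is reachability over markings. The sketch below (untimed and uninterpreted, so it omits the priorities, time, and interpretation that distinguish the paper's IPTPN model) explores a trivial two-place net and checks boundedness:

```python
from collections import deque

# A transition is (consume, produce): dicts from place index to token count.
def reachable(initial, transitions, cap=100):
    """BFS over markings; asserts the net stays cap-bounded."""
    seen, queue = {initial}, deque([initial])
    while queue:
        m = queue.popleft()
        for pre, post in transitions:
            if all(m[p] >= n for p, n in pre.items()):  # transition enabled
                nxt = list(m)
                for p, n in pre.items():
                    nxt[p] -= n
                for p, n in post.items():
                    nxt[p] += n
                nxt = tuple(nxt)
                assert all(v <= cap for v in nxt), "net exceeds bound"
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen

# Two-place cycle: t0 moves the token from place 0 to 1, t1 moves it back.
net = [({0: 1}, {1: 1}), ({1: 1}, {0: 1})]
print(reachable((1, 0), net))  # {(1, 0), (0, 1)} -- the net is 1-safe
```

Boundedness of the reachability graph is one of the model properties that must hold before an FPGA implementation with fixed-width state can be generated.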

8 citations


Book ChapterDOI
23 Sep 2013
TL;DR: This paper advocates this approach, through the presentation of the validation of an industrial HDLC controller IP using synthesizable property monitors, and draws conclusions from these experiments.
Abstract: Assertion-Based Verification is widely gaining acceptance. It makes use of assertions, which are formal expressions of the expected specification or requirements. Writing assertions concurrently with the design can bring significant benefits to both the design and verification processes for digital circuits. From the concrete perspective of an industrial development flow, inserting synthesized assertion monitors and associated debug infrastructures in an FPGA-based environment can improve the debugging phases in many application domains. This paper advocates this approach, through the presentation of the validation of an industrial HDLC controller IP using synthesizable property monitors, and draws conclusions from these experiments.
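A software sketch of what a synthesized assertion monitor does (the property and interface are invented for illustration; a real monitor would be generated from a PSL/SVA assertion and synthesized to gates): every request must be acknowledged within a bounded number of clock cycles.

```python
class ReqAckMonitor:
    """Monitor for: every req is followed by ack within `limit` cycles.
    The synthesizable equivalent of this state machine is what gets
    embedded next to the design in the FPGA for on-board debugging."""
    def __init__(self, limit):
        self.limit = limit
        self.pending = None   # cycles elapsed since the outstanding req
        self.failed = False

    def clock(self, req, ack):
        if self.pending is not None:
            if ack:
                self.pending = None           # request served in time
            elif self.pending >= self.limit:
                self.failed = True            # deadline missed
            else:
                self.pending += 1
        if req and self.pending is None and not ack:
            self.pending = 0                  # start tracking a new request

m = ReqAckMonitor(limit=2)
for req, ack in [(1, 0), (0, 0), (0, 1), (0, 0)]:
    m.clock(req, ack)
print(m.failed)  # False: ack arrived within 2 cycles
```

In hardware the `failed` flag would be routed to the debug infrastructure, turning a silent protocol violation into an observable event.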

7 citations


Book ChapterDOI
23 Sep 2013
TL;DR: A formalization of PLC programs in first order logic is given, which is then used to automatically derive a predicate abstraction using SMT solving, and an abstraction called predicate scoping is employed which reduces the evaluation of predicates to certain program locations and thus can be used to exploit the cyclic scanning mode of PLC programs.
Abstract: In this paper, we present a predicate abstraction for programs for programmable logic controllers (PLCs) so as to allow for model checking safety-related properties. Our contribution is twofold: First, we give a formalization of PLC programs in first order logic, which is then used to automatically derive a predicate abstraction using SMT solving. Second, we employ an abstraction called predicate scoping which reduces the evaluation of predicates to certain program locations and thus can be used to exploit the cyclic scanning mode of PLC programs. We show the effectiveness of this approach in a small case study using programs from industry and academia.
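The essence of the approach, on a made-up PLC-style cyclic program and made-up predicates (the real method derives the abstract transitions with an SMT solver rather than by the enumeration used here):

```python
# A cyclic PLC-style program over a bounded counter; each scan cycle does:
#   count = 0 if reset else min(count + 1, 11)
# Predicates chosen for the abstraction (illustrative): p1: count > 0,
#                                                       p2: count > 10
def scan(count, reset):
    return 0 if reset else min(count + 1, 11)

def alpha(count):
    """Abstraction function: map a concrete state to predicate truth values."""
    return (count > 0, count > 10)

def abstract_reachable():
    """An abstract state is reachable if some concrete state mapping to it is.
    At scale, SMT solving replaces this explicit enumeration of cycles."""
    concrete, frontier = {0}, {0}
    while frontier:
        frontier = {scan(c, r) for c in frontier for r in (False, True)} - concrete
        concrete |= frontier
    return {alpha(c) for c in concrete}

print(sorted(abstract_reachable()))  # three abstract states are reachable
```

Predicate scoping would additionally restrict where p1 and p2 are evaluated, exploiting the fact that a PLC's state only changes between scan cycles.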

6 citations


Book ChapterDOI
23 Sep 2013
TL;DR: This work discusses the use of an SMT solver to investigate the quality of user-provided axioms, to check for inconsistencies in axioms and to verify expected relationships between axioms, for example.
Abstract: A common approach to formally checking assertions inserted into a program is to first generate verification conditions, logical sentences that, once proven, ensure the assertions are correct. Sometimes users provide axioms that get incorporated into verification conditions. Such axioms can capture aspects of the program's specification or can be hints to help automatic provers. There is always the danger of mistakes in these axioms. In the worst case these mistakes introduce inconsistencies and verification conditions become erroneously provable. We discuss here our use of an SMT solver to investigate the quality of user-provided axioms, to check for inconsistencies in axioms and to verify expected relationships between axioms, for example.
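The consistency check reduces to satisfiability: axioms are consistent iff they have a model. A propositional brute-force stand-in (an SMT solver does this for full first-order axioms; the flags and axioms below are hypothetical):

```python
from itertools import product

def consistent(axioms, atoms):
    """True iff some assignment to the propositional atoms satisfies every
    axiom; each axiom is a predicate over the assignment dict."""
    return any(
        all(ax(dict(zip(atoms, bits))) for ax in axioms)
        for bits in product([False, True], repeat=len(atoms))
    )

# Hypothetical user axioms about two flags of a container:
ok = [lambda v: not v["empty"] or v["sorted"],   # "empty implies sorted"
      lambda v: v["sorted"]]
bad = [lambda v: v["sorted"],                    # contradictory pair
       lambda v: not v["sorted"]]

print(consistent(ok, ["empty", "sorted"]))  # True: a model exists
print(consistent(bad, ["sorted"]))          # False
```

With the inconsistent pair, every verification condition would be vacuously provable, which is exactly the failure mode the paper warns against.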

4 citations


Book ChapterDOI
23 Sep 2013
TL;DR: Evidence that FM can be successfully used in industry is presented, combining a company-specific approach (success stories) with a more general approach that identifies questions of interest to many companies in various industry sectors.
Abstract: Developing complex critical software should require proper validation with regard to requirements as well as showing a high level of certainty on correctness of the resulting system. While formal methods (FM) have a large potential to address these two challenges, their current industry adoption is still hampered by a number of technical and organizational hurdles. Furthermore, many misconceptions (myths) about FM remain deeply anchored in industry. To help bring down these hurdles and myths, this paper presents evidence that FM can be successfully used in industry. The evidence repository follows two strategies to present its content. First, a company-specific approach is used where success stories describe how a given company deployed FM in one or several of its development projects. Second, a more general approach identifies questions of interest (FAQs) to many companies in various industry sectors. Success stories and FAQs are made available using a public collaborative wiki-based website open to external contributions ( http://www.fm4industry.org ).

3 citations


Book ChapterDOI
23 Sep 2013
TL;DR: This paper considers current state-of-the-art verification techniques that are based upon, or supported by, formal methods principles to ensure a high degree of assurance and the practical application of such approaches in an industrial context so as to achieve an efficient, coherent and integrated workflow.
Abstract: This paper considers current state-of-the-art verification techniques that are based upon, or supported by, formal methods principles to ensure a high degree of assurance. It considers the practical application of such approaches in an industrial context so as to achieve an efficient, coherent and integrated workflow. The key focus is a clear process that starts from software requirements and works through to the final object code on the target, ensuring key verification aims are fulfilled with a high-degree of confidence at each step. The process combines both analysis and testing to maximise the strengths and to cover the weaknesses of each. For each step, a high-level description of the approach, potential benefits, prerequisites and limitations is given. The workflow outlined considers tools, methods and the supporting processes.

Book ChapterDOI
23 Sep 2013
TL;DR: This paper specifies this access control protocol in first-order relational logic with Alloy, and verifies that it preserves the correctness of the system on which it is deployed, in the sense that the access control policy is enforced identically at all participating user sites and, accordingly, data consistency is still maintained.
Abstract: Distributed Collaborative Editors are interactive systems where several dispersed users concurrently edit shared documents. Generally, these systems rely on data replication and use a safe coordination protocol which ensures data consistency even though the users' updates are executed in any order on different copies. Controlling access in such systems is a challenging problem, as they need dynamic access changes and low-latency access to shared documents. In [1], a flexible access control protocol is proposed; it is based on replicating the shared document and its authorization policy in the local memory of each user. To deal with latency and dynamic access changes, an optimistic access control technique is used where enforcement of authorizations is retroactive. However, verifying whether the combination of the access control and coordination protocols preserves data consistency is a hard task, since it requires examining a large number of situations. In this paper, we specify this access control protocol in first-order relational logic with Alloy, and we verify that it preserves the correctness of the system on which it is deployed, in the sense that the access control policy is enforced identically at all participating user sites and, accordingly, data consistency is still maintained.

Book ChapterDOI
23 Sep 2013
TL;DR: It is shown how hybrid automata can be used to model a failing system and how backwards reachability analysis of this model and a given model of the emergency control can be used to prove the conditions under which safety switching will always succeed in ensuring fail-safe behavior.
Abstract: A fail-safe embedded system is a system that will transit to a safe state in the event of a system failure. In these situations the system will typically switch from the normal, now faulty, operational mode to an emergency control mode which will ensure the safety of the system. The switch will have a hard real-time constraint if the results of a temporal failure are catastrophic in nature. Many industry-critical systems fall into this category, such as industrial plants and vehicles. We show how hybrid automata can be used to model a failing system and how backwards reachability analysis of this model and a given model of the emergency control can be used to prove the conditions under which safety switching will always succeed in ensuring fail-safe behavior. To show the feasibility of the technique we present the prototype tool HyRev. The tool takes a description of the emergency control system and the catastrophic bad states of the system as input and produces a safety check routine with a well-defined worst-case execution time as output, which can then be run on the embedded system.
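A discrete, finite-state caricature of the backward reachability idea (HyRev works on hybrid automata with continuous dynamics; the braking model and all numbers below are invented): iterate the pre-image of the bad states under the emergency controller to find every state from which switching comes too late.

```python
def pre(bad, step, domain):
    """One backward step: states whose successor under `step` is bad."""
    return {s for s in domain if step(s) in bad}

def backward_fixpoint(bad, step, domain):
    """All states from which the emergency control still reaches a bad state."""
    unsafe = set(bad)
    while True:
        new = pre(unsafe, step, domain) - unsafe
        if not new:
            return unsafe
        unsafe |= new

# Toy braking model over integers: position x, speed v; the emergency
# controller brakes by 1 per step; hitting x = 11 (the wall) is catastrophic.
def brake(s):
    x, v = s
    return (min(x + v, 11), max(v - 1, 0))

domain = {(x, v) for x in range(12) for v in range(6)}
bad = {(x, v) for (x, v) in domain if x >= 11}
unsafe = backward_fixpoint(bad, brake, domain)
safe = domain - unsafe

# The fixpoint recovers the analytic stopping-distance condition:
# switching succeeds exactly when x + v*(v+1)//2 <= 10
assert all((x + v * (v + 1) // 2 <= 10) == ((x, v) in safe) for (x, v) in domain)
print(len(safe), "of", len(domain), "states allow a safe switch")
```

The complement of the backward-reachable bad set is precisely the region in which the safety switch is guaranteed to succeed, which is the condition HyRev compiles into a runtime safety check.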