
Showing papers presented at "Formal Methods in 2014"


Journal ArticleDOI
01 Feb 2014
TL;DR: In this article, the effect of post-weld heat treatment (PWHT) on the microstructure and mechanical properties of friction stir butt-joined AA6061 Al-alloy plates both in O and T6-temper conditions was investigated by detailed microstructural investigations and microhardness measurements, in combination with transverse tensile testing.
Abstract: In this study, the effect of post-weld heat treatment (PWHT) on the microstructure and mechanical properties of friction stir butt-joined AA6061 Al-alloy plates, in both O and T6-temper conditions, was investigated by detailed microstructural investigations and microhardness measurements, in combination with transverse tensile testing. It was determined that the PWHT might result in abnormal grain growth (AGG) in the weld zone, particularly in the joints produced in the O-temper condition, depending on the weld parameters used during friction stir welding. The PWHT generally led to an improvement in the mechanical properties even if AGG took place. Thus, the post-weld heat-treated joints exhibited mechanical properties much higher than those of the respective as-welded plates and comparable to those of the respective base plates.

137 citations


Book ChapterDOI
12 May 2014
TL;DR: This paper introduces the web-based model checker iscasMc, an easy-to-use web interface for the evaluation of Markov chains and decision processes against PCTL and PCTL* specifications that is particularly efficient in evaluating the probabilities of LTL properties.
Abstract: We introduce the web-based model checker iscasMc for probabilistic systems (see http://iscasmc.ios.ac.cn/IscasMC). This Java application offers an easy-to-use web interface for the evaluation of Markov chains and decision processes against PCTL and PCTL* specifications. Compared to PRISM or MRMC, iscasMc is particularly efficient in evaluating the probabilities of LTL properties.

99 citations


Journal ArticleDOI
01 Feb 2014
TL;DR: Based on the presented techniques, Stranger, an automata-based string analysis tool for detecting string-related security vulnerabilities in PHP applications is implemented and able to detect known/unknown vulnerabilities, and prove the absence of vulnerabilities with respect to given attack patterns.
Abstract: Verifying string manipulating programs is a crucial problem in computer security. String operations are used extensively within web applications to manipulate user input, and their erroneous use is the most common cause of security vulnerabilities in web applications. We present an automata-based approach for symbolic analysis of string manipulating programs. We use deterministic finite automata (DFAs) to represent possible values of string variables. Using forward reachability analysis we compute an over-approximation of all possible values that string variables can take at each program point. Intersecting these with a given attack pattern yields the potential attack strings if the program is vulnerable. Based on the presented techniques, we have implemented Stranger, an automata-based string analysis tool for detecting string-related security vulnerabilities in PHP applications. We evaluated Stranger on several open-source Web applications including one with 350,000+ lines of code. Stranger is able to detect known/unknown vulnerabilities, and, after inserting proper sanitization routines, prove the absence of vulnerabilities with respect to given attack patterns.
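The core check, intersecting the DFA of possible string values with an attack-pattern DFA, can be sketched in a few lines. This is a toy illustration, not Stranger's implementation or API; the automata below are invented for the example:

```python
# Toy automata-based string analysis: the over-approximated values of a
# string variable form a DFA; intersect it with an attack-pattern DFA
# and report a potential vulnerability if the intersection is non-empty.

def intersect_nonempty(dfa_a, dfa_b):
    """Product construction: do the two DFAs accept a common string?"""
    start = (dfa_a["start"], dfa_b["start"])
    seen, stack = {start}, [start]
    while stack:
        qa, qb = stack.pop()
        if qa in dfa_a["accept"] and qb in dfa_b["accept"]:
            return True  # a concrete attack string exists
        for ch in dfa_a["alphabet"] & dfa_b["alphabet"]:
            nxt = (dfa_a["delta"].get((qa, ch)), dfa_b["delta"].get((qb, ch)))
            if None not in nxt and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

# Variable values: any string over {a, <} (unsanitized input).
values = {"start": 0, "accept": {0}, "alphabet": {"a", "<"},
          "delta": {(0, "a"): 0, (0, "<"): 0}}
# Attack pattern: any string containing '<' (e.g. an injected tag).
attack = {"start": 0, "accept": {1}, "alphabet": {"a", "<"},
          "delta": {(0, "a"): 0, (0, "<"): 1, (1, "a"): 1, (1, "<"): 1}}

print(intersect_nonempty(values, attack))  # True: potentially vulnerable
```

If the intersection is non-empty, a witness string could be extracted from the product automaton as a concrete attack; sanitization shrinks the value DFA until the intersection becomes empty.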

66 citations


Journal ArticleDOI
01 Oct 2014
TL;DR: The core of the approach is a non-trivial, lattice-theoretic generalisation of the conflict-driven clause learning algorithm in modern SAT solvers to lattice-based abstractions, which allows for directly handling arithmetic and is more efficient than encoding a formula as a bit-vector as in current floating-point solvers.
Abstract: We present a bit-precise decision procedure for the theory of floating-point arithmetic. The core of our approach is a non-trivial, lattice-theoretic generalisation of the conflict-driven clause learning (CDCL) algorithm in modern SAT solvers to lattice-based abstractions. We use floating-point intervals to reason about the ranges of variables, which allows us to directly handle arithmetic and is more efficient than encoding a formula as a bit-vector as in current floating-point solvers. Interval reasoning alone is incomplete, and we obtain completeness by developing a conflict analysis algorithm that reasons natively about intervals. We have implemented this method in the MathSAT5 SMT solver and evaluated it on assertion checking problems that bound the values of program variables. Our new technique is faster than a bit-vector encoding approach on 80% of the benchmarks, and is faster by one order of magnitude or more on 60% of the benchmarks. The generalisation of CDCL we propose is widely applicable and can be used to derive abstraction-based SMT solvers for other theories.
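The interval component of the approach can be illustrated with a minimal interval-arithmetic sketch (not the MathSAT5 implementation; the variable range below is invented). It shows how interval propagation bounds an expression directly, where a bit-vector encoding would have to reason bit by bit:

```python
# Toy interval propagation: track a variable as an interval [lo, hi]
# and push it through arithmetic. The paper's procedure refines such
# intervals with CDCL-style conflict analysis; this shows only the
# sound forward propagation step.

def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def mul(a, b):
    products = [x * y for x in a for y in b]
    return (min(products), max(products))

x = (-2.0, 3.0)          # assumed range of x
y = mul(x, x)            # true range of x*x is [0, 9]
z = add(y, (1.0, 1.0))   # x*x + 1
print(y)                 # (-6.0, 9.0): sound but imprecise
print(z[1] <= 10.0)      # True: x*x + 1 <= 10 certainly holds
```

The imprecision of `y` (interval multiplication ignores that both factors are the same variable) is exactly the kind of incompleteness the paper's conflict analysis compensates for.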

66 citations


Book ChapterDOI
12 May 2014
TL;DR: VerCors as mentioned in this paper implements thread-modular static verification of concurrent programs, annotated with functional properties and heap access permissions, and supports both generic multithreaded and vector-based programming models.
Abstract: The VerCors tool implements thread-modular static verification of concurrent programs, annotated with functional properties and heap access permissions. The tool supports both generic multithreaded and vector-based programming models. In particular, it can verify multithreaded programs written in Java, specified with JML extended with separation logic. It can also verify parallelizable programs written in a toy language that supports the characteristic features of OpenCL. The tool verifies programs by first encoding the specified program into a much simpler programming language and then applying the Chalice verifier to the simplified program. In this paper we discuss both the implementation of the tool and the features of its specification language.

52 citations


Book ChapterDOI
16 Jun 2014
TL;DR: This paper is an introductory survey of available methods for the computation and representation of probabilistic counterexamples for discrete-time Markov chains and probabilistic automata, using explicit and symbolic techniques.
Abstract: This paper is an introductory survey of available methods for the computation and representation of probabilistic counterexamples for discrete-time Markov chains and probabilistic automata. In contrast to traditional model checking, probabilistic counterexamples are sets of finite paths with a critical probability mass. Such counterexamples are not obtained as a by-product of model checking, but by dedicated algorithms. We define what probabilistic counterexamples are and present approaches to how they can be generated. We discuss methods based on path enumeration, the computation of critical subsystems, and the generation of critical command sets, using both explicit and symbolic techniques.
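Path enumeration, the first family of methods surveyed, can be sketched as a best-first search over a discrete-time Markov chain that collects paths reaching a bad state until their combined probability exceeds the bound of a violated property. The DTMC and the bound below are invented for illustration:

```python
# Toy counterexample generation by path enumeration for a property
# like P<=0.5 [F bad]: enumerate reaching paths in order of
# probability until their mass exceeds the allowed bound.
import heapq

def counterexample(dtmc, start, bad, bound):
    """Collect paths reaching `bad` until their mass exceeds `bound`."""
    heap = [(-1.0, (start,))]  # max-heap on path probability (negated)
    mass, paths = 0.0, []
    while heap and mass <= bound:
        neg_p, path = heapq.heappop(heap)
        p, last = -neg_p, path[-1]
        if last == bad:
            mass += p
            paths.append((path, p))
            continue
        for succ, q in dtmc.get(last, []):
            heapq.heappush(heap, (-p * q, path + (succ,)))
    return paths, mass

# s0 -0.4-> bad, s0 -0.6-> s1, s1 -0.5-> bad, s1 -0.5-> s1 (loop)
dtmc = {"s0": [("bad", 0.4), ("s1", 0.6)],
        "s1": [("bad", 0.5), ("s1", 0.5)]}
paths, mass = counterexample(dtmc, "s0", "bad", 0.5)
print(round(mass, 10))  # 0.7: the two collected paths exceed the bound
```

The two collected finite paths form the "critical probability mass" the survey describes; symbolic methods represent such path sets without enumerating them one by one.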

50 citations


Book ChapterDOI
12 May 2014
TL;DR: Semantic collaboration, as discussed by the authors, is a methodology to specify and reason about class invariants of sequential object-oriented programs that models dependencies between collaborating objects by semantic means; combined with a simple ownership mechanism and useful default schemes, it achieves the flexibility necessary to reason about complicated inter-object dependencies.
Abstract: Modular reasoning about class invariants is challenging in the presence of collaborating objects that need to maintain global consistency. This paper presents semantic collaboration: a novel methodology to specify and reason about class invariants of sequential object-oriented programs, which models dependencies between collaborating objects by semantic means. Combined with a simple ownership mechanism and useful default schemes, semantic collaboration achieves the flexibility necessary to reason about complicated inter-object dependencies, while requiring a limited annotation burden when applied to standard specification patterns. The methodology is implemented in AutoProof, our program verifier for the Eiffel programming language, but it is applicable to any language supporting some form of representation invariants. An evaluation on several challenge problems proposed in the literature demonstrates that it can handle a variety of idiomatic collaboration patterns, and is more widely applicable than existing invariant methodologies.

48 citations


Book ChapterDOI
16 Jun 2014
TL;DR: The method described in this tutorial is based on automated run-time checking of a combination of protocol- and data-oriented properties of object-oriented programs.
Abstract: According to a 2002 study commissioned by a US government department, software bugs annually cost the US economy an estimated $59 billion. A more recent 2013 study by Cambridge University estimated that the global cost has risen to $312 billion. There exist various ways to prevent, isolate, and fix software bugs, ranging from lightweight methods that are semi-automatic to heavyweight methods that require significant user interaction. Our own method, described in this tutorial, is based on automated run-time checking of a combination of protocol- and data-oriented properties of object-oriented programs.

44 citations


Book ChapterDOI
12 May 2014
TL;DR: In this article, a survey of 40 years of formal methods for software development is presented, and the authors discuss the obstacles or hindrances to the proper integration of formal methods in university research and education as well as in industry practice.
Abstract: In this "40 years of formal methods" essay we shall first delineate, Sect. 1, what we mean by method, formal method, computer science, computing science, software engineering, and model-oriented and algebraic methods. Based on this, we shall characterize a spectrum from specification-oriented methods to analysis-oriented methods. Then, Sect. 2, we shall provide a "survey": which are the 'prerequisite works' that have enabled formal methods, Sect. 2.1, and which are, to us, the, by now, classical 'formal methods', Sect. 2.2. We then ask ourselves the question: have formal methods for software development, in the sense of this paper, been successful? Our answer is, regretfully, no! We motivate this answer, in Sect. 3.2, by discussing eight obstacles or hindrances to the proper integration of formal methods in university research and education as well as in industry practice. This "looking back" is complemented, in Sect. 3.4, by a "looking forward" at some promising developments -- besides the alleviation of the eight or more hindrances!

42 citations


Journal ArticleDOI
01 Dec 2014
TL;DR: A new enforcement paradigm is proposed in which enforcement mechanisms are time retardants: to produce a correct output sequence, additional delays are introduced between the events of the input sequence. Two new features are also introduced, including physical constraints that describe how a time retardant is physically constrained when delaying a sequence of timed events.
Abstract: Runtime enforcement is a powerful technique to ensure that a running system satisfies some desired properties. Using an enforcement monitor, an (untrustworthy) input execution (in the form of a sequence of events) is modified into an output sequence that complies with a property. Over the last decade, runtime enforcement has been mainly studied in the context of untimed properties. This paper deals with runtime enforcement of timed properties by revisiting the foundations of runtime enforcement when time between events matters. We propose a new enforcement paradigm where enforcement mechanisms are time retardants: to produce a correct output sequence, additional delays are introduced between the events of the input sequence. We consider runtime enforcement of any regular timed property defined by a timed automaton. We prove the correctness of enforcement mechanisms and prove that they enjoy two usually expected features, revisited here in the context of timed properties. The first one is soundness meaning that the output sequences (eventually) satisfy the required property. The second one is transparency, meaning that input sequences are modified in a minimal way. We also introduce two new features, (i) physical constraints that describe how a time retardant is physically constrained when delaying a sequence of timed events, and (ii) optimality, meaning that output sequences are produced as soon as possible. To facilitate the adoption and implementation of enforcement mechanisms, we describe them at several complementary abstraction levels. Our enforcement mechanisms have been implemented and our experimental results demonstrate the feasibility of runtime enforcement in a timed context and the effectiveness of the mechanisms.
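The time-retardant idea can be shown with a minimal sketch: enforce a simple timed property (consecutive events at least `min_gap` time units apart) by delaying, never suppressing, input events. The property and event names are invented; the paper's mechanisms are derived from arbitrary regular timed properties given as timed automata:

```python
# Toy time retardant: release each input event no earlier than needed
# so that consecutive outputs are at least `min_gap` apart. Delays may
# only be added, never removed (a physical constraint), order is
# preserved (transparency), and releases happen as soon as possible
# (optimality).

def enforce(events, min_gap):
    """events: list of (timestamp, action); returns the delayed output."""
    output, last = [], None
    for t, action in events:
        release = t if last is None else max(t, last + min_gap)
        output.append((release, action))  # event is delayed, not dropped
        last = release
    return output

trace = [(0, "req"), (1, "req"), (7, "req")]
print(enforce(trace, 3))  # [(0, 'req'), (3, 'req'), (7, 'req')]
```

The second event is retarded from time 1 to time 3 so the output satisfies the property (soundness), while the third, already far enough away, is passed through unchanged.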

41 citations


Book ChapterDOI
12 May 2014
TL;DR: This paper shows that if an MPI program is single-path, the problem of discovering communication deadlocks is NP-complete, and presents a novel propositional encoding scheme which captures the existence of communication deadlock.
Abstract: The Message Passing Interface (MPI) is the standard API for high-performance and scientific computing. Communication deadlocks are a frequent problem in MPI programs, and this paper addresses the problem of discovering such deadlocks. We begin by showing that if an MPI program is single-path, the problem of discovering communication deadlocks is NP-complete. We then present a novel propositional encoding scheme which captures the existence of communication deadlocks. The encoding is based on modelling executions with partial orders, and is implemented in a tool called MOPPER. The tool executes an MPI program, collects the trace, builds a formula from the trace using the propositional encoding scheme, and checks its satisfiability. Finally, we present experimental results that quantify the benefit of the approach in comparison to a dynamic analyser and demonstrate that it offers a scalable solution.
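At the heart of the encoding is a matching problem: every blocking receive must be matched to a compatible send, and wildcard receives may match any source. A brute-force toy version (with an invented trace and no ordering constraints, unlike MOPPER's partial-order encoding) looks like this:

```python
# Toy MPI matching check: each receive must match one send; a wildcard
# receive ("*") may take any source. If no complete matching exists,
# the trace can deadlock. MOPPER encodes this symbolically and lets a
# SAT solver search; here we simply brute-force all matchings.
from itertools import permutations

def can_match(sends, recvs):
    """sends: list of source ranks; recvs: expected source rank or '*'."""
    if len(sends) != len(recvs):
        return False
    return any(all(r == "*" or r == s for s, r in zip(p, recvs))
               for p in permutations(sends))

# Two sends from ranks 1 and 2; receiver expects (src=2, then any).
print(can_match([1, 2], [2, "*"]))  # True: match the send from rank 2 first
# Receiver insists on src=3, which never sends: potential deadlock.
print(can_match([1, 2], [3, "*"]))  # False
```

The brute force is factorial in the number of operations, which is exactly why a propositional encoding checked by a SAT solver scales so much better.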

Journal ArticleDOI
01 Aug 2014
TL;DR: This work exploits the Model Checking Modulo Theories framework to derive a backward reachability version of lazy abstraction that supports reasoning about arrays and shows by means of experiments that this approach can synthesize and prove universally quantified properties over arrays in a completely automatic fashion.
Abstract: Lazy abstraction with interpolation-based refinement has been shown to be a powerful technique for verifying imperative programs. In the presence of arrays, however, the method suffers from an intrinsic limitation, due to the fact that invariants needed for verification usually contain universally quantified variables, which are not present in program specifications. In this work we present an extension of the interpolation-based lazy abstraction framework in which arrays of unknown length can be handled in a natural manner. In particular, we exploit the Model Checking Modulo Theories framework to derive a backward reachability version of lazy abstraction that supports reasoning about arrays. The new approach has been implemented in a tool, called SAFARI, which has been validated on a wide range of benchmarks. We show by means of experiments that our approach can synthesize and prove universally quantified properties over arrays in a completely automatic fashion.

Proceedings ArticleDOI
20 Nov 2014
TL;DR: The aim of this new procedure is to increase the number of monitorable properties compared to those monitorable with ptDTL; the approach has been implemented on LEGO Mindstorms NXT robots communicating via Bluetooth.
Abstract: This paper studies runtime verification of distributed asynchronous systems and presents a monitor generation procedure for this purpose, which allows three-valued monitoring. The properties used in the monitors are specified in a logic that was newly created for this purpose, called Distributed Temporal Logic (DTL). DTL combines the three-valued Linear Temporal Logic (LTL3) with the past-time Distributed Temporal Logic (ptDTL), which allows marking subformulas for remote evaluation. The monitor generation presented in this paper is based on an adapted version of the LTL3 monitor generation, which integrates the ptDTL monitor construction. The aim of this new procedure is to increase the number of monitorable properties compared to the properties monitorable with ptDTL. Runtime verification using this new monitoring has been implemented on LEGO Mindstorms NXT robots communicating via Bluetooth.
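Three-valued monitoring in the LTL3 style can be sketched for the two simplest temporal properties: the monitor reports True once the property holds on every continuation of the trace, False once it is violated on every continuation, and '?' while the verdict is still open. This hand-rolled sketch is illustrative only and far simpler than DTL's monitor generation:

```python
# Toy three-valued (LTL3-style) monitors, emitting a verdict per tick.

def monitor_eventually(p, trace):
    """F p: True once p occurs; otherwise undetermined forever."""
    return [True if any(p(e) for e in trace[:i + 1]) else "?"
            for i in range(len(trace))]

def monitor_globally(p, trace):
    """G p: False once p fails; otherwise undetermined forever."""
    return [False if not all(p(e) for e in trace[:i + 1]) else "?"
            for i in range(len(trace))]

trace = ["idle", "idle", "ack", "idle"]
print(monitor_eventually(lambda e: e == "ack", trace))  # ['?', '?', True, True]
print(monitor_globally(lambda e: e != "err", trace))    # ['?', '?', '?', '?']
```

The second output never leaves '?': on a finite prefix, G p can be falsified but never finally confirmed, which is exactly the monitorability limitation the three-valued semantics makes explicit.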

Book ChapterDOI
12 May 2014
TL;DR: It is shown that for specifications that admit dominant strategies, distributed systems can be synthesized compositionally, considering one process at a time, which has dramatically better complexity and is uniformly applicable to all system architectures.
Abstract: Given the recent advances in synthesizing finite-state controllers from temporal logic specifications, the natural next goal is to synthesize more complex systems that consist of multiple distributed processes. The synthesis of distributed systems is, however, a hard and, in many cases, undecidable problem. In this paper, we investigate the synthesis problem for specifications that admit dominant strategies, i.e., strategies that perform at least as well as the best alternative strategy, although they do not necessarily win the game. We show that for such specifications, distributed systems can be synthesized compositionally, considering one process at a time. The compositional approach has dramatically better complexity and is uniformly applicable to all system architectures.

Book ChapterDOI
12 May 2014
TL;DR: This paper shows that an SMT-based program verifier can support reasoning about co-induction--handling infinite data structures, lazy function calls, and user-defined properties defined as greatest fix-points, as well as letting users write co-Inductive proofs.
Abstract: This paper shows that an SMT-based program verifier can support reasoning about co-induction--handling infinite data structures, lazy function calls, and user-defined properties defined as greatest fix-points, as well as letting users write co-inductive proofs. Moreover, the support can be packaged to provide a simple user experience. The paper describes the features for co-induction in the language and verifier Dafny, defines their translation into input for a first-order SMT solver, and reports on some encouraging initial experience.

Book ChapterDOI
12 May 2014
TL;DR: In this article, the authors consider Segala's automata and propose a distribution-based bisimulation by joining the existing equivalence and bisimilarities, which provides a uniform way of studying their characteristics.
Abstract: Probabilistic automata were introduced by Rabin in 1963 as language acceptors. Two automata are equivalent if and only if they accept each word with the same probability. In the process algebra community, on the other hand, probabilistic automata were re-proposed by Segala in 1995; these are more general than Rabin's automata. Bisimulations have been proposed for Segala's automata to characterize the equivalence between them. So far, the two notions of equivalence and their characteristics have been studied mostly independently. In this paper, we consider Segala's automata and propose a novel notion of distribution-based bisimulation by joining the existing equivalence and bisimilarities. Our bisimulation bridges the two closely related concepts in the community, and provides a uniform way of studying their characteristics. We demonstrate the utility of our definition by studying distribution-based bisimulation metrics, which give rise to a robust notion of equivalence for Rabin's automata.
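Rabin's notion of equivalence can be made concrete in a few lines: a probabilistic automaton accepts a word with the probability obtained by pushing the initial distribution through the stochastic transition matrix of each letter, and two automata are equivalent iff these probabilities agree on every word. The automaton below is invented for illustration:

```python
# Toy Rabin-style probabilistic automaton: acceptance probability of a
# word is computed by propagating a distribution over states through
# the per-letter stochastic matrices.

def accept_prob(word, start, delta, accepting):
    """delta[letter][i][j] = probability of moving i -> j on `letter`."""
    dist = list(start)
    for letter in word:
        dist = [sum(dist[i] * delta[letter][i][j] for i in range(len(dist)))
                for j in range(len(dist))]
    return sum(p for j, p in enumerate(dist) if j in accepting)

# Two states; on 'a', state 0 stays or advances with probability 0.5
# each, and state 1 (the accepting state) is absorbing.
delta = {"a": [[0.5, 0.5], [0.0, 1.0]]}
print(accept_prob("a", [1.0, 0.0], delta, {1}))   # 0.5
print(accept_prob("aa", [1.0, 0.0], delta, {1}))  # 0.75
```

Segala's automata add nondeterminism on top of this purely stochastic picture, which is why relating the two equivalence notions requires the distribution-based view the paper develops.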

Book ChapterDOI
12 May 2014
TL;DR: The present study investigates how contracts are used in the practice of software development and has found that contracts are quite stable compared to implementations; and inheritance does not significantly affect qualitative trends of contract usage.
Abstract: Contracts are a form of lightweight formal specification embedded in the program text. Being executable parts of the code, they encourage programmers to devote proper attention to specifications, and help maintain consistency between specification and implementation as the program evolves. The present study investigates how contracts are used in the practice of software development. Based on an extensive empirical analysis of 21 contract-equipped Eiffel, C#, and Java projects totaling more than 260 million lines of code over 7700 revisions, it explores, among other questions: (1) which kinds of contract elements (preconditions, postconditions, class invariants) are used more often; (2) how contracts evolve over time; (3) the relationship between implementation changes and contract changes; and (4) the role of inheritance in the process. It has found, among other results, that: the percentage of program elements that include contracts is above 33% for most projects and tends to be stable over time; there is no strong preference for a certain type of contract element; contracts are quite stable compared to implementations; and inheritance does not significantly affect qualitative trends of contract usage.
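The three contract elements counted in the study (preconditions, postconditions, class invariants) can be illustrated with plain runtime assertions; the studied projects use Eiffel, C#, and Java contract mechanisms, and the class below is an invented Python analogue:

```python
# The three contract kinds from the study, as executable checks.

class Account:
    def __init__(self):
        self.balance = 0
        self._check_invariant()

    def _check_invariant(self):  # class invariant: holds between calls
        assert self.balance >= 0, "invariant: balance non-negative"

    def deposit(self, amount):
        assert amount > 0, "precondition: amount must be positive"
        old = self.balance
        self.balance += amount
        assert self.balance == old + amount, "postcondition: balance updated"
        self._check_invariant()

acct = Account()
acct.deposit(10)
print(acct.balance)  # 10
```

Because such checks run with the program, a change to the implementation that breaks the specification fails immediately, which is the consistency-maintaining effect the study measures across revisions.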

Proceedings ArticleDOI
03 Jun 2014
TL;DR: It is shown how the formal specification language mCRL2 and its state-of-the-art toolset can be used successfully to model and analyze variability in software product lines.
Abstract: We show how the formal specification language mCRL2 and its state-of-the-art toolset can be used successfully to model and analyze variability in software product lines. The mCRL2 toolset supports parametrized modeling, model reduction and quality assurance techniques like model checking. We present a proof-of-concept, which moreover illustrates the use of data in mCRL2 and also how to exploit its data language to manage feature attributes of software product lines and quantitative constraints between attributes and features.

Journal ArticleDOI
01 Jun 2014
TL;DR: A runtime verification framework is presented that allows on-line monitoring of past-time Metric Temporal Logic (ptMTL) specifications in a discrete time setting, with observer algorithms for the time-bounded modalities of ptMTL that take advantage of the highly parallel nature of hardware designs.
Abstract: We present a runtime verification framework that allows on-line monitoring of past-time Metric Temporal Logic (ptMTL) specifications in a discrete time setting. We design observer algorithms for the time-bounded modalities of ptMTL, which take advantage of the highly parallel nature of hardware designs. The algorithms can be translated into efficient hardware blocks, which are designed for reconfigurability, thus facilitating applications of the framework in both a prototyping and a post-deployment phase of embedded real-time systems. We provide formal correctness proofs for all presented observer algorithms and analyze their time and space complexity. For example, for the most general operator considered, the time-bounded Since operator, we obtain a time complexity that is doubly logarithmic both in the point in time the operator is executed and in the operator's time bounds. This result is promising with respect to a self-contained, non-interfering monitoring approach that evaluates real-time specifications in parallel to the system-under-test. We implement our framework on a Field Programmable Gate Array platform and use extensive simulation and logic synthesis runs to assess the benefits of the approach in terms of resource usage and operating frequency.
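The flavor of a time-bounded past observer can be shown in software: deciding at each discrete tick whether p held within the last n ticks needs only the timestamp of the most recent p, the kind of constant-size state that makes hardware observers cheap. This is an illustrative sketch, not one of the paper's algorithms:

```python
# Toy observer for the time-bounded past operator "p held at some
# point within the last n ticks" over a discrete-time boolean trace.
# State: just the tick of the most recent p.

def once_within(trace, n):
    """trace: list of booleans, one per tick; returns one verdict per tick."""
    last_p = None
    out = []
    for t, p in enumerate(trace):
        if p:
            last_p = t
        out.append(last_p is not None and t - last_p <= n)
    return out

print(once_within([True, False, False, False], n=2))
# [True, True, True, False]
```

The time-bounded Since operator the paper analyzes needs more bookkeeping than a single timestamp, which is where the doubly logarithmic complexity result comes in.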

Book ChapterDOI
16 Jun 2014
TL;DR: This tutorial uses the core of a fault-tolerant distributed broadcasting algorithm as a case study to explain the concepts of the abstraction techniques, and discusses how they can be implemented.
Abstract: Recently we introduced an abstraction method for parameterized model checking of threshold-based fault-tolerant distributed algorithms. We showed how to verify distributed algorithms without fixing the size of the system a priori. As is the case for many other published abstraction techniques, transferring the theory into a running tool is a challenge. It requires understanding of several verification techniques such as parametric data and counter abstraction, finite-state model checking, and abstraction refinement. In the resulting framework, all these techniques should interact in order to achieve the highest possible degree of automation. In this tutorial we use the core of a fault-tolerant distributed broadcasting algorithm as a case study to explain the concepts of our abstraction techniques, and discuss how they can be implemented.

Book ChapterDOI
16 Jun 2014
TL;DR: The VerCors approach to verification of concurrent software is presented, and it is illustrated how permission-based separation logic is suitable to verify functional correctness properties of OpenCL kernels.
Abstract: This paper presents the VerCors approach to verification of concurrent software. It first discusses why verification of concurrent software is important, but also challenging. Then it shows how, within the VerCors project, we use permission-based separation logic to reason about multithreaded Java programs. We discuss in particular how we use the logic to support different implementations of synchronisers in verification, and how we reason about class invariance properties in a concurrent setting. Further, we also show how the approach is suited to reason about programs using a different concurrency paradigm, namely kernel programs using the Single Instruction Multiple Data paradigm. Concretely, we illustrate how permission-based separation logic is suitable to verify functional correctness properties of OpenCL kernels. All verification techniques discussed in this paper are supported by the VerCors tool set.

Book ChapterDOI
12 May 2014
TL;DR: This paper shows how the invariant generation problem for HSs with general elementary functions can be solved by several different techniques including simulation, bounded model checking (BMC) and theorem proving, using the tools Simulink/Stateflow, iSAT-ODE and Flow*, and HHL Prover.
Abstract: We report on our recent experience in applying formal methods to the verification of a descent guidance control program of a lunar lander. The powered descent process of the lander gives rise to a specific hybrid system (HS), i.e. a sampled-data control system composed of the physical plant and the embedded control program. Due to its high complexity, verification of such a system is very hard. In the paper, we show how this problem can be solved by several different techniques including simulation, bounded model checking (BMC) and theorem proving, using the tools Simulink/Stateflow, iSAT-ODE and Flow*, and HHL Prover, respectively. In particular, for the theorem-proving approach to work, we study the invariant generation problem for HSs with general elementary functions. As a preliminary attempt, we perform verification by focusing on one of the six phases of the powered descent process, namely the slow descent phase. Through such verification, trustworthiness of the lunar lander's control program is enhanced.


Book ChapterDOI
12 May 2014
TL;DR: It is shown how the metric operators of MTL, in combination with recursive definitions, can be used to specify policies to detect privilege escalation, under various fine grained constraints.
Abstract: We present a design and an implementation of a security policy specification language based on metric linear-time temporal logic (MTL). MTL features temporal operators that are indexed by time intervals, allowing one to specify timing-dependent security policies. The design of the language is driven by the problem of runtime monitoring of applications in mobile devices. A main case study is the privilege escalation attack in the Android operating system, where an app gains access to certain resources or functionalities that are not explicitly granted to it by the user, through indirect control flow. To capture these attacks, we extend MTL with recursive definitions, which are used to express call chains between apps. We then show how the metric operators of MTL, in combination with recursive definitions, can be used to specify policies to detect privilege escalation under various fine-grained constraints. We present a new algorithm, extending that for linear-time temporal logic, for monitoring safety policies written in our specification language. The monitor does not need to store the entire history of events generated by the apps, something that is crucial for practical implementations. We modified the Android OS kernel to allow us to insert our generated monitors modularly. We have tested the modified OS on an actual device, and show that it is effective in detecting policy violations.
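The recursive definitions added to MTL express call chains, and the privilege-escalation pattern itself reduces to reachability over the inter-app call graph. A toy version with invented app names and permissions (the paper's policies additionally constrain the timing of the calls):

```python
# Toy privilege-escalation check: an app without a permission reaches
# a guarded resource through a chain of inter-app calls. The recursive
# "reaches" mirrors the recursive definitions used for call chains.

def reaches(calls, src, dst, seen=None):
    """Transitive closure over the call graph (recursive, cycle-safe)."""
    seen = seen or set()
    if src == dst:
        return True
    seen.add(src)
    return any(reaches(calls, nxt, dst, seen)
               for nxt in calls.get(src, []) if nxt not in seen)

calls = {"game": ["helper"], "helper": ["contacts_api"]}
granted = {"game": set(), "helper": {"READ_CONTACTS"}}

escalation = (reaches(calls, "game", "contacts_api")
              and "READ_CONTACTS" not in granted["game"])
print(escalation)  # True: "game" reaches contacts without the permission
```

A runtime monitor evaluates such a policy incrementally over the event stream of inter-app calls rather than over a stored graph, which is why bounded-history monitoring matters in practice.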

Journal ArticleDOI
01 Dec 2014
TL;DR: A state-dependent control strategy which makes the trajectories of the analyzed system converge to finite cyclic sequences of points is proposed, which relies on a technique of decomposition of the state space into local regions where the control is uniform.
Abstract: We consider in this paper switched systems, a class of hybrid systems recently used with success in various domains such as automotive industry and power electronics. We propose a state-dependent control strategy which makes the trajectories of the analyzed system converge to finite cyclic sequences of points. Our method relies on a technique of decomposition of the state space into local regions where the control is uniform. We have implemented the procedure using zonotopes, and applied it successfully to several examples of the literature and industrial case studies in power electronics.

Book ChapterDOI
12 May 2014
TL;DR: This paper reports the development of a proof strategy that integrates the MetiTarski theorem prover as a trusted external decision procedure into the PVS theoremProver.
Abstract: This paper reports the development of a proof strategy that integrates the MetiTarski theorem prover as a trusted external decision procedure into the PVS theorem prover. The strategy automatically discharges PVS sequents containing real-valued formulas, including transcendental and special functions, by translating the sequents into first order formulas and submitting them to MetiTarski. The new strategy is considerably faster and more powerful than other strategies for nonlinear arithmetic available to PVS.

Book ChapterDOI
12 May 2014
TL;DR: This paper formally and mechanically verifies the correctness of central CloudMake algorithms with formalization and proofs done entirely in Dafny, the proof engine of which is an SMT-based program verifier.
Abstract: CloudMake is a software utility that automatically builds executable programs and libraries from source code--a modern Make utility. Its design gives rise to a number of possible optimizations, like cached builds, and the executables to be built are described using a functional programming language. This paper formally and mechanically verifies the correctness of central CloudMake algorithms. The paper defines the CloudMake language using an operational semantics, but with a twist: the central operation exec is defined axiomatically, making it pluggable so that it can be replaced by calls to compilers, linkers, and other tools. The formalization and proofs of the central CloudMake algorithms are done entirely in Dafny, the proof engine of which is an SMT-based program verifier.

Book ChapterDOI
12 May 2014
TL;DR: The first formal definition of quiescent consistency is given, its relationship with linearizability is investigated, and a proof technique is provided based on coupled simulations of a non-linearizable FIFO queue built using a diffraction tree.
Abstract: Concurrent data structures like stacks, sets or queues need to be highly optimized to provide large degrees of parallelism with reduced contention. Linearizability, a key consistency condition for concurrent objects, sometimes limits the potential for optimization. Hence algorithm designers have started to build concurrent data structures that are not linearizable but only satisfy relaxed consistency requirements. In this paper, we study quiescent consistency as proposed by Shavit and Herlihy, which is one such relaxed condition. More precisely, we give the first formal definition of quiescent consistency, investigate its relationship with linearizability, and provide a proof technique for it based on coupled simulations. We demonstrate our proof technique by verifying quiescent consistency of a non-linearizable FIFO queue built using a diffraction tree.

Book ChapterDOI
12 May 2014
TL;DR: An algorithm for checking temporal precedence properties of nonlinear switched systems that handles nonlinear predicates that arise from dynamics-based predictions used in alerting protocols for state-of-the-art transportation systems is presented.
Abstract: This paper presents an algorithm for checking temporal precedence properties of nonlinear switched systems. This class of properties subsumes bounded safety and captures requirements about visiting a sequence of predicates within given time intervals. The algorithm handles nonlinear predicates that arise from dynamics-based predictions used in alerting protocols for state-of-the-art transportation systems. It is sound and complete for nonlinear switched systems that robustly satisfy the given property. The algorithm is implemented in the Compare Execute Check Engine (C2E2) using validated simulations. As a case study, a simplified model of an alerting system for closely spaced parallel runways is considered. The proposed approach is applied to this model to check safety properties of the alerting logic for different operating conditions such as initial velocities, bank angles, aircraft longitudinal separation, and runway separation.

Book ChapterDOI
12 May 2014
TL;DR: It is shown that compliance with respect to data protection policies can be checked based on logs free of personal data, and the integration of the formal framework for accountability in a global accountability process is described.
Abstract: Accountability is increasingly recognised as a cornerstone of data protection, notably in European regulation, but the term is frequently used in a vague sense. For accountability to bring tangible benefits, the expected properties of personal data handling logs used as "accounts" and the assumptions regarding the logging process must be defined with accuracy. In this paper, we provide a formal framework for accountability and show the correctness of the log analysis with respect to abstract traces used to specify privacy policies. We also show that compliance with respect to data protection policies can be checked based on logs free of personal data, and describe the integration of our formal framework in a global accountability process.