
Showing papers presented at "Formal Methods in 2012"


Book ChapterDOI
18 Jun 2012
TL;DR: This chapter aims to provide a comprehensive view of the language, its many applications and available tool support, as well as the latest research developments and open challenges around it.
Abstract: The Object Constraint Language (OCL) started as a complement to the UML notation, with the goal of overcoming the limitations of UML (and, in general, of any graphical notation) in terms of precisely specifying detailed aspects of a system design. Since then, OCL has become a key component of any model-driven engineering (MDE) technique as the default language for expressing all kinds of (meta)model query, manipulation and specification requirements. Among many other applications, OCL is frequently used to express model transformations (as part of the source and target patterns of transformation rules), well-formedness rules (as part of the definition of new domain-specific languages), or code-generation templates (as a way to express the generation patterns and rules). This chapter aims to provide a comprehensive view of this language, its many applications and available tool support, as well as the latest research developments and open challenges around it.
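As a hedged illustration of the kind of well-formedness rule OCL expresses (the metamodel and the invariant below are invented for the example, not taken from the chapter), the following Python sketch mirrors the OCL invariant context Employee inv adult: self.age >= 18 by evaluating it over model elements:

# Minimal sketch: checking an OCL-style invariant over model elements.
# The Employee class and the 'adult' invariant are illustrative assumptions.
class Employee:
    def __init__(self, name, age):
        self.name, self.age = name, age

def inv_adult(e):          # OCL: context Employee inv adult: self.age >= 18
    return e.age >= 18

model = [Employee("ada", 36), Employee("kid", 7)]
violations = [e.name for e in model if not inv_adult(e)]
print(violations)          # ['kid'] -- the model is not well-formed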

215 citations


Journal ArticleDOI
04 May 2012
TL;DR: This article presents a comprehensive reference model, entitled FOrmal Reference Model for Self-adaptation (FORMS), that provides rigor in the manner in which self-adaptive software systems can be described and reasoned about, and that has the potential for documenting reusable architectural solutions to commonly encountered problems in this area.
Abstract: The challenges of pervasive and mobile computing environments, which are highly dynamic and unpredictable, have motivated the development of self-adaptive software systems. Although noteworthy successes have been achieved on many fronts, the construction of such systems remains significantly more challenging than traditional systems. We argue this is partially because researchers and practitioners have been struggling with the lack of a precise vocabulary for describing and reasoning about the key architectural characteristics of self-adaptive systems. Further exacerbating the situation is the fact that existing frameworks and guidelines do not provide an encompassing perspective of the different types of concerns in this setting. In this article, we present a comprehensive reference model, entitled FOrmal Reference Model for Self-adaptation (FORMS), that targets both issues. FORMS provides rigor in the manner in which such systems can be described and reasoned about. It consists of a small number of formally specified modeling elements that correspond to the key concerns in the design of self-adaptive software systems, and a set of relationships that guide their composition. We demonstrate FORMS's ability to precisely describe and reason about the architectural characteristics of distributed self-adaptive software systems through its application to several existing systems. FORMS's expressive power gives it a potential for documenting reusable architectural solutions (e.g., architectural patterns) to commonly encountered problems in this area.

194 citations


Book ChapterDOI
27 Aug 2012
TL;DR: This work introduces a new formalism for concisely capturing expressive specifications with parameters; the resulting technique is more expressive than the currently most efficient techniques while at the same time allowing for optimizations.
Abstract: Runtime verification is the process of checking a property on a trace of events produced by the execution of a computational system. Runtime verification techniques have recently focused on parametric specifications where events take data values as parameters. These techniques exist on a spectrum inhabited by both efficient and expressive techniques. These characteristics are usually shown to be conflicting - in state-of-the-art solutions, efficiency is obtained at the cost of loss of expressiveness and vice-versa. To seek a solution to this conflict we explore a new point on the spectrum by defining an alternative runtime verification approach. We introduce a new formalism for concisely capturing expressive specifications with parameters. Our technique is more expressive than the currently most efficient techniques while at the same time allowing for optimizations.
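To make parametric monitoring concrete, here is a hedged Python sketch (the property, events and slicing scheme are illustrative, not the paper's formalism): the trace is sliced per parameter binding, and each slice is checked by its own instance of a small property automaton.

# Hypothetical property: for each file f, open(f) must precede use(f).
from collections import defaultdict

def new_monitor():
    return {"state": "idle"}           # idle -> opened -> closed / error

def step(mon, event):
    s = mon["state"]
    if event == "open" and s == "idle":
        mon["state"] = "opened"
    elif event == "use" and s != "opened":
        mon["state"] = "error"
    elif event == "close" and s == "opened":
        mon["state"] = "closed"
    return mon["state"]

monitors = defaultdict(new_monitor)    # one monitor instance per binding

trace = [("open", "a.txt"), ("use", "a.txt"), ("use", "b.txt")]
for name, param in trace:
    if step(monitors[param], name) == "error":
        print(f"violation for binding {param!r}")   # b.txt used before open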

151 citations


Book ChapterDOI
27 Aug 2012
TL;DR: This article shows how the abstractions needed for automata learning can be constructed fully automatically for a restricted class of extended finite state machines in which one can test for equality of data parameters, but no operations on data are allowed.
Abstract: Abstraction is the key when learning behavioral models of realistic systems. Hence, in most practical applications where automata learning is used to construct models of software components, researchers manually define abstractions which, depending on the history, map a large set of concrete events to a small set of abstract events that can be handled by automata learning tools. In this article, we show how such abstractions can be constructed fully automatically for a restricted class of extended finite state machines in which one can test for equality of data parameters, but no operations on data are allowed. Our approach uses counterexample-guided abstraction refinement: whenever the current abstraction is too coarse and induces nondeterministic behavior, the abstraction is refined automatically. Using Tomte, a prototype tool implementing our algorithm, we have succeeded in learning – fully automatically – models of several realistic software components, including the biometric passport and the SIP protocol.
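A hedged sketch of the kind of mapper the article automates (the interface and refinement trigger are illustrative; Tomte's actual machinery differs): placed between the learner and the system, the mapper abstracts each concrete data value to either "equal to a remembered value" or "fresh", and is refined when the abstraction induces nondeterminism.

class Mapper:
    def __init__(self):
        self.registers = []            # remembered data values

    def abstract(self, action, value):
        for i, r in enumerate(self.registers):
            if value == r:
                return (action, f"=r{i}")   # equality with register i
        return (action, "fresh")

    def refine(self, value):
        # Called when observed nondeterminism shows the abstraction is
        # too coarse: start remembering this value as well.
        if value not in self.registers:
            self.registers.append(value)

m = Mapper()
print(m.abstract("login", 42))         # ('login', 'fresh')
m.refine(42)
print(m.abstract("login", 42))         # ('login', '=r0')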

139 citations


Journal ArticleDOI
01 Aug 2012
TL;DR: This paper identifies the missing Herbrand-function countermodel for false QBF, and strengthens the connection between syntactic and semantic certificates by showing that, given a true QBF, its Skolem-function model is derivable from its cube-resolution proof of satisfiability as well as from its clause-resolution proof of unsatisfiability under formula negation.
Abstract: Quantified Boolean formulae (QBF) allow compact encoding of many decision problems. Their importance motivated the development of fast QBF solvers. Certifying the results of a QBF solver not only ensures correctness, but also enables certain synthesis and verification tasks. To date the certificate of a true formula can be in the form of either a syntactic cube-resolution proof or a semantic Skolem-function model whereas that of a false formula is only in the form of a syntactic clause-resolution proof. The semantic certificate for a false QBF is missing, and the syntactic and semantic certificates are somewhat unrelated. This paper identifies the missing Herbrand-function countermodel for false QBF, and strengthens the connection between syntactic and semantic certificates by showing that, given a true QBF, its Skolem-function model is derivable from its cube-resolution proof of satisfiability as well as from its clause-resolution proof of unsatisfiability under formula negation. Consequently Skolem-function derivation can be decoupled from special Skolemization-based solvers and computed from standard search-based ones. Experimental results show strong benefits of the new method.
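A small worked example of the two kinds of semantic certificate (the formulas are illustrative, not from the paper):

\Phi = \forall x\,\exists y.\ (x \lor \lnot y) \land (\lnot x \lor y)
\quad\text{is true, with Skolem-function model } y(x) = x;

\Psi = \exists y\,\forall x.\ (x \lor \lnot y) \land (\lnot x \lor y)
\quad\text{is false, with Herbrand-function countermodel } x(y) = \lnot y,

since substituting $x = \lnot y$ falsifies the matrix for every value of $y$.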

139 citations


Journal ArticleDOI
01 Feb 2012
TL;DR: The main result is that the set of dynamic masks under which S is opaque can be finitely represented and computed in EXPTIME, and that EXPTIME is also a lower bound for the problem.
Abstract: Opacity is a security property formalizing the absence of secret information leakage, and we address in this paper the problem of synthesizing opaque systems. A secret predicate S over the runs of a system G is opaque to an external user having partial observability over G if s/he can never infer from the observation of a run of G that the run belongs to S. We choose to control the observability of events by adding a device, called a mask, between the system G and the users. We first investigate the case of static partial observability, where the set of events the user can observe is fixed a priori by a static mask. In this context, we show that checking whether a system is opaque is PSPACE-complete, which implies that computing an optimal static mask ensuring opacity is also a PSPACE-complete problem. Next, we introduce dynamic partial observability, where the set of events the user can observe changes over time and is chosen by a dynamic mask. We show how to check that a system is opaque w.r.t. a dynamic mask and also address the corresponding synthesis problem: given a system G and secret states S, compute the set of dynamic masks under which S is opaque. Our main result is that the set of such masks can be finitely represented and computed in EXPTIME, and that EXPTIME is also a lower bound. Finally, we also address the problem of computing an optimal mask.
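A hedged Python sketch of the static case (toy system and mask, invented for illustration): the secret is opaque iff every observation produced by a secret run is also produced by some non-secret run, checked here by brute force up to a bound rather than by the PSPACE-complete procedure.

# toy LTS: transitions (src, event, dst); secret runs are those ending in state 2
trans = [(0, "a", 1), (0, "b", 2), (1, "b", 2), (2, "a", 0)]
observable = {"a"}                     # static mask: the user sees only 'a'

def runs(bound):
    frontier = [((0,), ())]            # (state sequence, event sequence)
    yield from frontier                # include the empty run
    for _ in range(bound):
        frontier = [(path + (d,), evs + (e,))
                    for path, evs in frontier
                    for s, e, d in trans if s == path[-1]]
        yield from frontier

def obs(events):
    return tuple(e for e in events if e in observable)

secret_obs = {obs(evs) for path, evs in runs(4) if path[-1] == 2}
public_obs = {obs(evs) for path, evs in runs(4) if path[-1] != 2}
leaks = secret_obs - public_obs
print("opaque (up to bound)" if not leaks else f"leaking observations: {leaks}")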

132 citations


Book ChapterDOI
27 Aug 2012
TL;DR: The new version Imitator 2.5 integrates the new features of stopwatches (in addition to standard clocks) and updates (in addition to standard clock resets), as well as powerful algorithmic improvements for state space reduction, which make the tool well-suited to analyzing the robustness of solutions in several classes of preemptive scheduling problems.
Abstract: The tool Imitator implements the Inverse Method (IM) for Timed Automata (TAs). Given a TA \(\mathcal{A}\) and a tuple \(\pi_0\) of reference valuations for timings, IM synthesizes a constraint around \(\pi_0\) where \(\mathcal{A}\) behaves in the same discrete manner. This provides us with a quantitative measure of robustness of the behavior of \(\mathcal{A}\) around \(\pi_0\). The new version Imitator 2.5 integrates the new features of stopwatches (in addition to standard clocks) and updates (in addition to standard clock resets), as well as powerful algorithmic improvements for state space reduction. These new features make the tool well-suited to analyze the robustness of solutions in several classes of preemptive scheduling problems.
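Schematically, the guarantee IM provides can be stated as follows (a paraphrase of the description above, not the tool's formal definition):

\pi_0 \models K
\qquad\text{and}\qquad
\forall \pi \models K:\ \mathcal{A}[\pi] \text{ and } \mathcal{A}[\pi_0] \text{ have the same discrete (untimed) traces,}

so any timing valuation satisfying the synthesized constraint $K$ preserves the discrete behavior observed at the reference valuation.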

115 citations


Journal ArticleDOI
04 May 2012
TL;DR: This work demonstrates a robust, decentralized approach for structural adaptation in explicitly modeled problem-solving agent organizations; based on self-organization principles, it enables the autonomous agents to modify their structural relations to achieve a better allocation of tasks in a simulated task-solving environment.
Abstract: Self-organizing multi-agent systems provide a suitable paradigm for developing autonomic computing systems that manage themselves. Towards this goal, we demonstrate a robust, decentralized approach for structural adaptation in explicitly modeled problem-solving agent organizations. Based on self-organization principles, our method enables the autonomous agents to modify their structural relations to achieve a better allocation of tasks in a simulated task-solving environment. Specifically, the agents reason about when and how to adapt using only their history of interactions as guidance. We empirically show that, in a wide range of closed, open, static, and dynamic scenarios, the performance of organizations using our method is close (70–90%) to that of an idealized centralized allocation method and is considerably better (10–60%) than the current state-of-the-art decentralized approaches.

111 citations


Journal ArticleDOI
01 Feb 2012
TL;DR: Using the notion of causality introduced by Halpern and Pearl, a set of causes for the failure of the specification on the given counterexample trace is formally defined and presented to the user as a visual explanation of the failure.
Abstract: When a model does not satisfy a given specification, a counterexample is produced by the model checker to demonstrate the failure. A user must then examine the counterexample trace, in order to visually identify the failure that it demonstrates. If the trace is long, or the specification is complex, finding the failure in the trace becomes a non-trivial task. In this paper, we address the problem of analyzing a counterexample trace and highlighting the failure that it demonstrates. Using the notion of causality introduced by Halpern and Pearl, we formally define a set of causes for the failure of the specification on the given counterexample trace. These causes are marked as red dots and presented to the user as a visual explanation of the failure. We study the complexity of computing the exact set of causes, and provide a polynomial-time algorithm that approximates it. We then analyze the output of the algorithm and compare it to the one expected by the definition. The algorithm is implemented as a feature in the IBM formal verification platform RuleBase PE, where the visual explanations are an integral part of every counterexample trace. Our approach is independent of the tool that produced the counterexample, and can be applied as a light-weight external layer to any model checking tool, or used to explain simulation traces.
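The following Python sketch conveys the flavor of the analysis (heavily simplified: the paper uses Halpern and Pearl's definition, which also considers contingencies, and approximates it in polynomial time; here a cause is just a single value flip that repairs the trace):

# Finite trace of signal valuations; property: G(req -> ack).
trace = [{"req": 1, "ack": 1}, {"req": 1, "ack": 0}, {"req": 0, "ack": 0}]

def holds(tr):
    return all(not st["req"] or st["ack"] for st in tr)

causes = []
for i, state in enumerate(trace):
    for sig in state:
        flipped = [dict(st) for st in trace]
        flipped[i][sig] ^= 1
        if not holds(trace) and holds(flipped):
            causes.append((i, sig))
print(causes)   # [(1, 'req'), (1, 'ack')] -- the positions to mark as red dots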

107 citations


Proceedings ArticleDOI
02 Jun 2012
TL;DR: This paper presents EMFtoCSP, a new tool for the fully automatic, decidable and expressive verification of EMF models that uses constraint logic programming as the underlying formalism.
Abstract: The increasing popularity of MDE results in the creation of larger models and model transformations, turning the specification of MDE artefacts into an error-prone task. Therefore, mechanisms to ensure quality and the absence of errors in models are needed to assure the reliability of the MDE-based development process. Formal methods have proven their worth in the verification of software and hardware systems. However, the adoption of formal methods as a valid alternative to ensure model correctness is compromised by the inherent complexity of the problem. To circumvent this complexity, it is common to impose limitations such as reducing the types of constructs that can appear in the model, or turning the verification process from automatic into user-assisted. Since we consider these limitations to be counterproductive for the adoption of formal methods, in this paper we present EMFtoCSP, a new tool for the fully automatic, decidable and expressive verification of EMF models that uses constraint logic programming as the underlying formalism.
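A hedged Python sketch of the underlying "verification as constraint solving" idea (the toy metamodel and bounded enumeration are illustrative; EMFtoCSP's actual encoding into constraint logic programming is far richer): checking that a class diagram is satisfiable amounts to searching a bounded domain for an instance meeting the multiplicity constraints.

from itertools import product

# toy metamodel: each Employee works in exactly one Department,
# and each Department employs between 1 and 3 Employees.
for n_dep, n_emp in product(range(1, 4), range(10)):
    if n_dep * 1 <= n_emp <= n_dep * 3:
        print(f"satisfiable: {n_dep} department(s), {n_emp} employee(s)")
        break
else:
    print("unsatisfiable within bounds")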

95 citations


Book ChapterDOI
18 Jun 2012
TL;DR: This tutorial gives an introduction to the foundations of model versioning, the underlying technologies for processing models and their evolution, and the state of the art in model versioning, aiming to equip students and researchers who are new to this domain with enough information to begin contributing to this challenging research area.
Abstract: With the emergence of model-driven engineering (MDE), software models are considered central artifacts in the software engineering process, going beyond their traditional use as sketches. In MDE, models rather act as the single source of information for automatically generating executable software. This shift poses several new research challenges. One of these challenges is model versioning, which aims at enabling efficient team-based development of models. This compelling challenge has induced a very active research community, which has yielded remarkable methods and techniques ranging from model differencing to the merging of models. In this tutorial, we give an introduction to the foundations of model versioning, the underlying technologies for processing models and their evolution, as well as the state of the art in model versioning. Thereby, we aim to equip students and researchers who are new to this domain with enough information to begin contributing to this challenging research area.
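As a hedged illustration of one core ingredient, three-way merging of concurrently edited models (the flat element/property encoding is invented for the example; real model merging works on graph-structured artifacts):

# Compare each side's changes against the common ancestor and flag
# conflicting updates to the same model element property.
base  = {"state1.name": "Idle",  "state2.name": "Busy"}
left  = {"state1.name": "Ready", "state2.name": "Busy"}
right = {"state1.name": "Idle",  "state2.name": "Working"}

merged, conflicts = {}, []
for key in base:
    l_changed, r_changed = left[key] != base[key], right[key] != base[key]
    if l_changed and r_changed and left[key] != right[key]:
        conflicts.append(key)
    else:
        merged[key] = left[key] if l_changed else right[key]
print(merged, conflicts)   # both edits merge cleanly; no conflicts here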

Journal ArticleDOI
01 Apr 2012
TL;DR: This paper reconsiders BDDs as the state-space representation for bounded synthesis and shows that the new approach leads to a computation-time improvement of several orders of magnitude.
Abstract: Synthesizing finite-state systems from full linear-time temporal logic (LTL) is an ambitious way to tackle the challenge of constructing correct-by-construction systems. One particularly promising approach in this context is bounded synthesis, originally proposed by Schewe and Finkbeiner, which in turn builds upon Safraless synthesis, as described by Kupferman and Vardi. Previous implementations of these approaches performed the computation either in an explicit way or used symbolic data structures other than binary decision diagrams (BDDs). In this paper, we reconsider BDDs as the state-space representation and use them as the data structure for bounded synthesis. The key to this construction is the application of two novel optimisation techniques that significantly decrease the number of state bits in such a representation. The first technique uses signalling bits to connect sub-games representing the safety and non-safety parts of the specification. The second technique is based on a closer analysis of the step of building a safety game from a universal automaton and uses a sufficient condition to remove some so-called counters from the state space of the game. We evaluate our approach on several benchmark suites and show that the new approach leads to a computation-time improvement of several orders of magnitude.

Book ChapterDOI
27 Aug 2012
TL;DR: This paper presents a publicly available toolkit and a benchmark suite for the rigorous verification of Integer Numerical Transition Systems (INTS), which can be viewed as control-flow graphs whose edges are annotated by Presburger arithmetic formulas.
Abstract: This paper presents a publicly available toolkit and a benchmark suite for rigorous verification of Integer Numerical Transition Systems (INTS), which can be viewed as control-flow graphs whose edges are annotated by Presburger arithmetic formulas. We present Flata and Eldarica, two verification tools for INTS. The Flata system is based on precise acceleration of the transition relation, while the Eldarica system is based on predicate abstraction with interpolation-based counterexample-driven refinement. The Eldarica verifier uses the Princess theorem prover as a sound and complete interpolating prover for Presburger arithmetic. Both systems can solve several examples for which previous approaches failed, and present a useful baseline for verifying integer programs. The infrastructure is a starting point for rigorous benchmarking, competitions, and standardized communication between tools.
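For illustration (an invented example, not one from the benchmark suite), an INTS is a control-flow graph such as

q_0 \xrightarrow{\; n \ge 0 \,\wedge\, x' = 0 \;} q_1, \qquad
q_1 \xrightarrow{\; x < n \,\wedge\, x' = x + 1 \;} q_1, \qquad
q_1 \xrightarrow{\; x > n \;} q_{\mathit{err}},

where primed variables denote next-state values and unmentioned variables are unchanged. Both tools can establish that $q_{\mathit{err}}$ is unreachable: acceleration (Flata) summarizes the self-loop into the closed form $0 \le x \le n$ at $q_1$, while predicate abstraction with interpolation-based refinement (Eldarica) discovers $x \le n$ as an invariant.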

Book ChapterDOI
27 Aug 2012
TL;DR: To make the detection procedure more efficient, an abstraction is proposed that drastically reduces the size of the program model; this abstraction is shown to preserve all SCTPL∖X formulas, which are sufficient to precisely characterize malware specifications.
Abstract: Over the past decade, malware has cost more than $10 billion every year, and the cost is still increasing. Classical signature-based and emulation-based methods are becoming insufficient, since malware writers can easily obfuscate existing malware such that new variants cannot be detected by these methods. Thus, it is important to have more robust techniques for malware detection. In our previous work [24], we proposed to use model checking to identify malware. We used pushdown systems (PDSs) to model the program (this allows us to keep track of the program's stack behavior), and we defined the SCTPL logic to specify the malicious behaviors, where SCTPL can be seen as an extension of the branching-time temporal logic CTL with variables, quantifiers, and predicates over the stack. Malware detection was then reduced to SCTPL model checking of PDSs. However, in our previous work [24], the way we used SCTPL to specify malicious behaviors was not very precise. Indeed, we used the names of the registers and memory locations instead of their values. We show in this work how to sidestep this limitation and use precise SCTPL formulas that consider the values of the registers and memory locations to specify malware. Moreover, to make the detection procedure more efficient, we propose an abstraction that drastically reduces the size of the program model, and show that this abstraction preserves all SCTPL∖X formulas, where SCTPL∖X is a fragment of SCTPL that is sufficient to precisely characterize malware specifications. We implemented our techniques in a tool and applied it to automatically detect several malware samples. The experimental results are encouraging.
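To give a feel for the kind of specification involved (an illustrative formula in the spirit of SCTPL, not one taken from the paper), a self-replication behavior might be captured CTL-style with a quantified variable ranging over data values:

\exists f.\ \mathbf{EF}\,\big(\mathit{call}(\mathrm{GetModuleFileNameA}, f) \;\wedge\; \mathbf{EF}\ \mathit{call}(\mathrm{CopyFileA}, f)\big),

read: along some path the program obtains its own executable path into $f$ and later copies that file elsewhere.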

Book ChapterDOI
27 Aug 2012
TL;DR: This work proposes a technique for collaborative verification and testing that makes the compromises of static checkers explicit, so that they can be compensated for by complementary checkers or testing.
Abstract: Many mainstream static code checkers make a number of compromises to improve automation, performance, and accuracy. These compromises include not checking certain program properties as well as making implicit, unsound assumptions. Consequently, the results of such static checkers do not provide definite guarantees about program correctness, which makes it unclear which properties remain to be tested. We propose a technique for collaborative verification and testing that makes compromises of static checkers explicit such that they can be compensated for by complementary checkers or testing. Our experiments suggest that our technique finds more errors and proves more properties than static checking alone, testing alone, and combinations that do not explicitly document the compromises made by static checkers. Our technique is also useful to obtain small test suites for partially-verified programs.

Journal ArticleDOI
01 Oct 2012
TL;DR: A formal generic framework for defining and reasoning about weak memory models is implemented in the Coq proof assistant, and it is proved formally that the implementation is equivalent to the native definition for each of these models.
Abstract: We present in this paper a formal generic framework, implemented in the Coq proof assistant, for defining and reasoning about weak memory models. We first present the three axioms of our framework, with several examples as illustration and justification. Then we show how to implement several existing weak memory models in our framework, and prove formally that our implementation is equivalent to the native definition for each of these models.
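Schematically, models in such axiomatic frameworks deem a candidate execution valid when certain unions of basic relations are acyclic (a hedged sketch of the general shape; the paper's actual three axioms differ in detail):

\mathrm{valid}(E) \;\iff\; \mathrm{acyclic}\big(\mathrm{po\mbox{-}loc} \,\cup\, \mathrm{rf} \,\cup\, \mathrm{ws} \,\cup\, \mathrm{fr}\big) \;\wedge\; \mathrm{acyclic}\big(\mathrm{ghb}(E)\big),

where po-loc is program order restricted to a single location, rf, ws and fr are the communication relations, and ghb is the model-specific global-happens-before relation built from the framework's parameters.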

Book ChapterDOI
18 Jun 2012
TL;DR: This paper introduces the concept of Tract, a generalization of model transformation contracts, and shows how Tracts can be used for model transformation specification and black-box testing, and the kinds of analyses they allow.
Abstract: In this paper we present some of the key issues involved in model transformation specification and testing, discuss and classify some of the existing approaches, and introduce the concept of Tract, a generalization of model transformation contracts. We show how Tracts can be used for model transformation specification and black-box testing, and the kinds of analyses they allow. Some representative examples are used to illustrate this approach.
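A hedged Python sketch of the tract idea (the toy transformation and constraints are invented for illustration): a tract bundles constraints on source models, on target models, and on source-target pairs, and a suite of source models exercises the transformation against them in a black-box fashion.

def transform(src):                    # transformation under test
    return {"names": [p["name"].upper() for p in src["persons"]]}

def src_constraint(src):               # tract source constraint
    return all(p["name"] for p in src["persons"])

def src_trg_constraint(src, trg):      # tract source-target constraint
    return len(trg["names"]) == len(src["persons"])

suite = [{"persons": [{"name": "ada"}, {"name": "alan"}]},
         {"persons": []}]

for model in suite:
    if src_constraint(model):          # only well-formed sources apply
        assert src_trg_constraint(model, transform(model)), model
print("all tract checks passed")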

Book ChapterDOI
27 Aug 2012
TL;DR: VMC accepts a product family specified as a modal transition system, possibly with additional variability constraints, after which it can automatically generate all the family’s valid products and efficiently model check properties expressed in an action- and state-based branching-time temporal logic over products and families alike.
Abstract: We present VMC, a tool for the modeling and analysis of variability in product lines. It accepts a product family specified as a modal transition system, possibly with additional variability constraints, after which it can automatically generate all the family’s valid products, visualize the family/products as modal/labeled transition systems, and efficiently model check properties expressed in an action- and state-based branching-time temporal logic over products and families alike.

Journal ArticleDOI
01 Apr 2012
TL;DR: A new approach is explored where a model is captured at a high level of abstraction by describing it with a small set of well-defined microarchitectural primitives; some classes of properties are then automatically strengthened to make them 1-step inductive and proved with an RTL model checker.
Abstract: Microarchitectural models of communication fabrics present a challenge for verification. Due to the presence of deep pipelining, a large number of queues and distributed control, the state space of such models is usually too large for enumeration by protocol verification tools such as Murphi. On the other hand, we find that state-of-the-art RTL model checkers such as ABC have poor performance on these models, since there is very little opportunity for localization and most of the recent capacity advances in RTL model checking have come from better ways of discarding the irrelevant parts of the model. In this work we explore a new approach for verifying these models where we capture a model at a high level of abstraction by requiring that it be described using a small set of well-defined microarchitectural primitives. We exploit the high-level structure present in this description to automatically strengthen some classes of properties, in order to make them 1-step inductive, and then use an RTL model checker to prove them. In some cases, even if we cannot make the property inductive, we can dramatically reduce the number and complexity of the lemmas that are needed to make the property inductive.
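The 1-step induction obligations referred to above are the standard ones: for initial states $I$, transition relation $T$, and property $P$,

I(s) \Rightarrow P(s)
\qquad\text{and}\qquad
P(s) \wedge T(s, s') \Rightarrow P(s'),

and strengthening means proving $P \wedge L$ instead of $P$, for automatically derived auxiliary lemmas $L$, when $P$ alone fails the second check.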

Journal ArticleDOI
01 Dec 2012
TL;DR: This paper focuses on the automated generation of runtime monitors from temporal properties, and identifies four issues in monitor generation: state minimization, alphabet representation, alphabet minimization, and monitor encoding.
Abstract: SystemC is a modeling language built as an extension of C++. Its growing popularity and the increasing complexity of designs have motivated research efforts aimed at the verification of SystemC models using assertion-based verification (ABV), where the designer asserts properties that capture the design intent in a formal language such as PSL or SVA. The model then can be verified against the properties using runtime or formal verification techniques. In this paper we focus on automated generation of runtime monitors from temporal properties. Our focus is on minimizing runtime overhead, rather than monitor size or monitor-generation time. We identify four issues in monitor generation: state minimization, alphabet representation, alphabet minimization, and monitor encoding. We conduct extensive experimentation and identify a combination of settings that offers the best performance in terms of runtime overhead.
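To make the overhead concern concrete, here is a hedged Python sketch (the property and the encoding are illustrative): once monitor states and the alphabet are integer-encoded at generation time, the per-event cost at runtime is a single table lookup.

REQ, GRANT = 0b01, 0b10          # alphabet encoded as a bitmask over signals

# property (illustrative): no grant without a prior request
# states: 0 = no pending request, 1 = request pending, 2 = violation
TABLE = {
    (0, 0b00): 0, (0, REQ): 1, (0, GRANT): 2, (0, REQ | GRANT): 2,
    (1, 0b00): 1, (1, REQ): 1, (1, GRANT): 0, (1, REQ | GRANT): 1,
}

state = 0
for event in (REQ, GRANT, GRANT):    # the third event violates the property
    state = TABLE.get((state, event), 2)
    if state == 2:
        print("property violated")
        break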

Journal ArticleDOI
01 Apr 2012
TL;DR: This paper gives a constructive proof that every context-free language L has a Parikh-equivalent bounded context-free subset, via a sublanguage that has the same Parikh image as L and can be represented as a sequence of substitutions on a linear language.
Abstract: We show a new and constructive proof of the following language-theoretic result: for every context-free language L, there is a bounded context-free language $L' \subseteq L$ which has the same Parikh (commutative) image as L. Bounded languages, introduced by Ginsburg and Spanier, are subsets of regular languages of the form $w_{1}^{*}w_{2}^{*}\cdots w_{m}^{*}$ for some $w_1, \ldots, w_m \in \Sigma^*$. In particular, bounded context-free languages have nice structural and decidability properties. Our proof proceeds in two parts. First, we give a new construction that shows that each context-free language L has a subset $L_N$ that has the same Parikh image as L and that can be represented as a sequence of substitutions on a linear language. Second, we inductively construct a Parikh-equivalent bounded context-free subset of $L_N$. We show two applications of this result in model checking: to underapproximate the reachable state space of multithreaded procedural programs and to underapproximate the reachable state space of recursive counter programs. The bounded language constructed above provides a decidable underapproximation for the original problems. By iterating the construction, we get a semi-algorithm for the original problems that constructs a sequence of underapproximations such that no two underapproximations of the sequence can be compared. This provides a progress guarantee: every word $w \in L$ is in some underapproximation of the sequence, and hence a program bug is guaranteed to be found. In particular, we show that verification with bounded languages generalizes context-bounded reachability for multithreaded programs.
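A concrete instance of the theorem (a standard example, not taken from the paper): let $L$ be the Dyck language of balanced strings over $\{a, b\}$, reading $a$ as "(" and $b$ as ")". Its Parikh image is $\{(n, n) : n \ge 0\}$, and

L' = \{\, a^n b^n : n \ge 0 \,\} \subseteq a^* b^*

is a bounded context-free subset of $L$ with the same Parikh image.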

Book ChapterDOI
24 Sep 2012
TL;DR: This paper gives a tutorial introduction to ABS, a novel language for modeling feature-rich, distributed, object-oriented systems at an abstract, yet precise level; ABS has a formal semantics and has been designed with formal analyzability in mind.
Abstract: ABS (for abstract behavioral specification) is a novel language for modeling feature-rich, distributed, object-oriented systems at an abstract, yet precise level. ABS has a clear and simple concurrency model that permits synchronous as well as actor-style asynchronous communication. ABS abstracts away from specific datatype or I/O implementations, but is a fully executable language and has code generators for Java, Scala, and Maude. ABS goes beyond conventional programming languages in two important aspects. First, it embeds architectural concepts such as components or feature hierarchies and allows one to connect features with their implementation in terms of product families. In contrast to standard OO languages, code reuse in ABS is feature-based instead of inheritance-based. Second, ABS has a formal semantics and has been designed with formal analyzability in mind. This paper gives a tutorial introduction to ABS. We discuss all important design features, explain why they are present and how they are intended to be used.
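ABS code is not shown here, but its actor-style asynchronous calls returning futures (written o!m() in ABS, with the result later retrieved via get or await) can be loosely mimicked with standard futures. A hedged analogy in Python, not generated ABS code:

from concurrent.futures import ThreadPoolExecutor

class Account:                          # plays the role of an ABS object
    def __init__(self):
        self.balance = 0
    def deposit(self, n):
        self.balance += n
        return self.balance

with ThreadPoolExecutor() as pool:
    acc = Account()
    fut = pool.submit(acc.deposit, 10)  # ~ Fut<Int> f = acc!deposit(10);
    print(fut.result())                 # ~ Int r = f.get;  prints 10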

Book ChapterDOI
27 Aug 2012
TL;DR: The Kodkod high-level interface to SAT solvers is integrated into the kernel of ProB, so that predicates from B, Event-B, Z and TLA+ can be solved using a mixture of SAT solving and ProB's own constraint-solving capabilities developed using constraint logic programming.
Abstract: We present the integration of the Kodkod high-level interface to SAT solvers into the kernel of ProB. As such, predicates from B, Event-B, Z and TLA+ can be solved using a mixture of SAT solving and ProB's own constraint-solving capabilities developed using constraint logic programming: the first-order parts are dealt with by Kodkod, and the remaining parts are solved by the existing ProB kernel. We also present an empirical evaluation and analyze the respective merits of SAT solving and classical constraint solving. We also compare against using SMT solvers via recently available translators for Event-B.

Book ChapterDOI
27 Aug 2012
TL;DR: The presented technique suggests that matching logic reachability has no theoretical limitation over Hoare logic, and provides a new approach to proving Hoare logics sound.
Abstract: Matching logic reachability has been recently proposed as an alternative program verification approach. Unlike Hoare logic, where one defines a language-specific proof system that needs to be proved sound for each language separately, matching logic reachability provides a language-independent and sound proof system that directly uses the trusted operational semantics of the language as axioms. Matching logic reachability thus has a clear practical advantage: it eliminates the need for an additional semantics of the same language in order to reason about programs, and implicitly eliminates the need for tedious soundness proofs. What is not clear, however, is whether matching logic reachability is as powerful as Hoare logic. This paper introduces a technique to mechanically translate Hoare logic proof derivations into equivalent matching logic reachability proof derivations. The presented technique has two consequences: first, it suggests that matching logic reachability has no theoretical limitation over Hoare logic; and second, it provides a new approach to prove Hoare logics sound.
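The translation's central idea, stated schematically (a paraphrase; the paper's patterns carry more structure): a Hoare triple about a statement $s$ becomes a reachability rule between configurations pairing code with a state constraint,

\{P\}\; s\; \{Q\}
\quad\rightsquigarrow\quad
\langle s \rangle \wedge P \;\Rightarrow\; \langle \mathit{skip} \rangle \wedge Q,

so each Hoare-style proof step maps to rewrites justified directly by the operational semantics.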

Journal ArticleDOI
01 Apr 2012
TL;DR: This paper presents a class of relaxed memory models, defined in Coq, parameterised by the permitted local reorderings of reads and writes and by the visibility of inter- and intra-processor communications through memory.
Abstract: We present a class of relaxed memory models, defined in Coq, parameterised by the chosen permitted local reorderings of reads and writes, and by the visibility of inter- and intra-processor communications through memory (e.g. store atomicity relaxation). We prove results on the required behaviour and placement of memory fences to restore a given model (such as Sequential Consistency) from a weaker one. Based on this class of models we develop a tool, diy, that systematically and automatically generates and runs litmus tests. These tests can be used to explore the behaviour of processor implementations and the behaviour of models, and hence to compare the two against each other. We detail the results of experiments on Power and a model we base on them.
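The litmus tests diy generates are small multi-threaded programs together with a question about final values; the classic store-buffering shape, sketched here in Python purely to show the pattern (CPython will not exhibit the relaxed outcome, but the analogous machine test can on Power):

import threading

x = y = 0
r = [None, None]

def t0():
    global x
    x = 1                  # W x 1
    r[0] = y               # R y

def t1():
    global y
    y = 1                  # W y 1
    r[1] = x               # R x

a, b = threading.Thread(target=t0), threading.Thread(target=t1)
a.start(); b.start(); a.join(); b.join()
print(r)   # the outcome [0, 0] is forbidden under Sequential Consistency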

Journal ArticleDOI
01 Aug 2012
TL;DR: This work describes a transformation that enables off-the-shelf program analysis tools to naturally perform the reasoning necessary for proving temporal properties (e.g. backtracking, eventuality checking, tree counterexamples for branching-time properties, abstraction refinement, etc.).
Abstract: We describe a reduction from temporal property verification to a program analysis problem. First we present a proof system that, unlike the standard formulation, is more amenable to reasoning about infinite-state systems: disjunction is treated by partitioning, rather than enumerating, the state space and temporal operators are characterized with special sets of states called frontiers. We then describe a transformation that, with the use of procedures and nondeterminism, enables off-the-shelf program analysis tools to naturally perform the reasoning necessary for proving temporal properties (e.g. backtracking, eventuality checking, tree counterexamples for branching-time properties, abstraction refinement, etc.). Using examples drawn from the PostgreSQL database server, Apache web server, and Windows OS kernel, we demonstrate the practical viability of our work.

Journal ArticleDOI
01 Jun 2012
TL;DR: It is proved that ternary simulation, such as the practical algorithm proposed by Malik, decides the class of constructive circuits and that three-valued algebra is able to maintain correct and exact stabilization information under the UN-delay model, and thus provides an adequate electrical interpretation of Malik’s algorithm.
Abstract: We classify gate level circuits with cycles based on their stabilization behavior. We define a formal class of combinational circuits, the constructive circuits, for which signals settle to a unique value in bounded time, for any input, under a simple conservative delay model, called the up-bounded non-inertial (UN) delay. Since circuits with combinational cycles can exhibit asynchronous behavior, such as non-determinism or metastability, it is crucial to ground their analysis in a formal delay model, which previous work in this area did not do. We prove that ternary simulation, such as the practical algorithm proposed by Malik, decides the class of constructive circuits. We prove that three-valued algebra is able to maintain correct and exact stabilization information under the UN-delay model, and thus provides an adequate electrical interpretation of Malik's algorithm, which has been missing in the literature. Previous work on combinational circuits used the up-bounded inertial (UI) delay to justify ternary simulation. We show that the match is not exact and that stabilization under the UI-model, in general, cannot be decided by ternary simulation. We argue for the superiority of the UN-model for reasons of complexity, compositionality and electrical adequacy. The UN-model, in contrast to the UI-model, is consistent with the hypothesis that physical mechanisms cannot implement non-deterministic choice in bounded time. As the cornerstone of our main results we introduce UN-Logic, an axiomatic specification language for UN-delay circuits that mediates between the real-time behavior and its abstract simulation in the ternary domain. We present a symbolic simulation calculus for circuit theories expressed in UN-logic and prove it sound and complete for the UN-model. This provides, for the first time, a correctness and exactness result for the timing analysis of cyclic circuits. Our algorithm is a timed extension of Malik's pure ternary algorithm and closely related to the timed algorithm proposed by Riedel and Bruck, which however was not formally linked with real-time execution models.
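A minimal Python sketch of the ternary fixed-point simulation at issue (the cyclic circuit is an invented example): every wire starts at the unknown value X, the monotone three-valued gate functions are iterated to a fixed point, and the circuit is constructive for an input iff no wire is left at X.

X = "X"

def t_not(a):
    return X if a == X else 1 - a

def t_and(a, b):                        # 0 dominates even an unknown input
    if a == 0 or b == 0:
        return 0
    if a == X or b == X:
        return X
    return 1

def t_or(a, b):
    return t_not(t_and(t_not(a), t_not(b)))

def simulate(s, x):                     # cyclic: y = (s AND x) OR (NOT s AND y)
    y = X
    while True:
        y2 = t_or(t_and(s, x), t_and(t_not(s), y))
        if y2 == y:
            return y
        y = y2

print(simulate(1, 1))   # 1: the loop settles, constructive for this input
print(simulate(0, 1))   # X: not constructive for this input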

Proceedings ArticleDOI
Sune Wolff
02 Jun 2012
TL;DR: A way to add the use of formal methods to the agile development process Scrum is described and experiences from using a variant of the strategy in an industrial case are summarised.
Abstract: Formal methods have had a relatively low penetration in industry but have the potential for much wider use. The use of agile methods has been highly limited in the development of safety-critical systems due to the lack of formal evaluation techniques and rigorous planning. A combination of formal methods and agile development processes can potentially widen the use of formal methods in industry as well as enable the use of agile methods in the development of safety-critical systems. This paper describes a way to add the use of formal methods to the agile development process Scrum. Experiences from using a variant of the strategy in an industrial case are summarised.

Journal ArticleDOI
04 May 2012
TL;DR: This article introduces a multidimensional classification of adaptations in pervasive computing and proposes a novel peer-to-peer-based approach that provides support for the identified classes and integrates them in a multilevel architecture.
Abstract: A major characteristic of pervasive computing applications is their ability to adapt themselves to changing execution environments and physical contexts. In this article, we analyze different kinds of adaptations and introduce a multidimensional classification for them. On this basis, we propose a novel approach for peer-to-peer-based pervasive computing that provides support for the identified classes and integrates them in a multilevel architecture. We give a comprehensive overview of this architecture and its current realization in the Peer-to-Peer Pervasive Computing (3PC) project, discussing what adaptation is realized on each level, how the levels interact with each other, and how the overall system benefits from the integrated treatment of adaptation.

Journal ArticleDOI
04 May 2012
TL;DR: Distributed W-Learning is described, a reinforcement learning-based algorithm for collaborative agent-based optimization of pervasive systems that outperforms widely deployed fixed-time and simple adaptive UTC controllers under a variety of traffic loads and patterns and is suggested as a suitable basis for optimization in other large-scale systems with similar characteristics.
Abstract: This article describes Distributed W-Learning (DWL), a reinforcement learning-based algorithm for collaborative agent-based optimization of pervasive systems. DWL supports optimization towards multiple heterogeneous policies and addresses the challenges arising from the heterogeneity of the agents that are charged with implementing them. DWL learns and exploits the dependencies between agents and between policies to improve overall system performance. Instead of always executing the locally-best action, agents learn how their actions affect their immediate neighbors and execute actions suggested by neighboring agents if their importance exceeds the local action's importance when scaled using a predefined or learned collaboration coefficient. We have evaluated DWL in a simulation of an Urban Traffic Control (UTC) system, a canonical example of the large-scale pervasive systems that we are addressing. We show that DWL outperforms widely deployed fixed-time and simple adaptive UTC controllers under a variety of traffic loads and patterns. Our results also confirm that enabling collaboration between agents is beneficial as is the ability for agents to learn the degree to which it is appropriate for them to collaborate. These results suggest that DWL is a suitable basis for optimization in other large-scale systems with similar characteristics.
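A hedged Python sketch of the arbitration rule described above (the numbers, action names and fixed coefficient are invented; DWL additionally learns the W-values and, optionally, the collaboration coefficient):

C = 0.7                                  # collaboration coefficient

local = {"action": "extend_green_NS", "w": 0.9}
suggestions = [                          # neighbors' suggested actions and W-values
    {"action": "extend_green_EW", "w": 1.5},
    {"action": "all_red", "w": 0.4},
]

best = local
for s in suggestions:
    if C * s["w"] > best["w"]:           # remote importance, scaled by C
        best = {"action": s["action"], "w": C * s["w"]}
print("executing:", best["action"])      # extend_green_EW, since 0.7*1.5 > 0.9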