
Showing papers on "Specification language published in 2015"


Journal ArticleDOI
TL;DR: A consolidated view of the Frama-C platform, its main and composite analyses, and some of its industrial achievements is presented.
Abstract: Frama-C is a source code analysis platform that aims at conducting verification of industrial-size C programs. It provides its users with a collection of plug-ins that perform static analysis, deductive verification, and testing, for safety- and security-critical software. Collaborative verification across cooperating plug-ins is enabled by their integration on top of a shared kernel and data structures, and their compliance with a common specification language. This foundational article presents a consolidated view of the platform, its main and composite analyses, and some of its industrial achievements.

374 citations


Book
10 Apr 2015
TL;DR: This book provides the rapidly expanding field of cyber-physical systems with a long-needed foundational text by an established authority and is suitable for classroom use or as a reference for professionals.
Abstract: A cyber-physical system consists of a collection of computing devices communicating with one another and interacting with the physical world via sensors and actuators in a feedback loop. Increasingly, such systems are everywhere, from smart buildings to medical devices to automobiles. This textbook offers a rigorous and comprehensive introduction to the principles of design, specification, modeling, and analysis of cyber-physical systems. The book draws on a diverse set of subdisciplines, including model-based design, concurrency theory, distributed algorithms, formal methods of specification and verification, control theory, real-time systems, and hybrid systems, explaining the core ideas from each that are relevant to system design and analysis. The book explains how formal models provide mathematical abstractions to manage the complexity of a system design. It covers both synchronous and asynchronous models for concurrent computation, continuous-time models for dynamical systems, and hybrid systems for integrating discrete and continuous evolution. The role of correctness requirements in the design of reliable systems is illustrated with a range of specification formalisms and the associated techniques for formal verification. The topics include safety and liveness requirements, temporal logic, model checking, deductive verification, stability analysis of linear systems, and real-time scheduling algorithms. Principles of modeling, specification, and analysis are illustrated by constructing solutions to representative design problems from distributed algorithms, network protocols, control design, and robotics. This book provides the rapidly expanding field of cyber-physical systems with a long-needed foundational text by an established authority. It is suitable for classroom use or as a reference for professionals.

332 citations


Proceedings Article
04 May 2015
TL;DR: NoD generalizes a specialized system, SecGuru, is currently used in production to catch hundreds of configuration bugs a year, and can also scale to large header spaces because of a new filter-project operator and a symbolic header representation.
Abstract: Network Verification is a form of model checking in which a model of the network is checked for properties stated using a specification language. Existing network verification tools lack a general specification language and hardcode the network model. Hence they cannot, for example, model policies at a high level of abstraction. Neither can they model dynamic networks; even a simple packet format change requires changes to internals. Standard verification tools (e.g., model checkers) have expressive specification and modeling languages but do not scale to large header spaces. We introduce Network Optimized Datalog (NoD) as a tool for network verification in which both the specification language and modeling languages are Datalog. NoD can also scale to large header spaces because of a new filter-project operator and a symbolic header representation. As a consequence, NoD allows checking for beliefs about network reachability policies in dynamic networks. A belief is a high-level invariant (e.g., "Internal controllers cannot be accessed from the Internet") that a network operator thinks is true. Beliefs may not hold, but checking them can uncover bugs or policy exceptions with little manual effort. Refuted beliefs can be used as a basis for revised beliefs. Further, in real networks, machines are added and links fail; in the longer term, packet formats and even forwarding behaviors can change, enabled by OpenFlow and P4. NoD allows the analyst to model such dynamic networks by adding new Datalog rules. For a large Singapore data center with 820K rules, NoD checks if any guest VM can access any controller (the equivalent of 5K specific reachability invariants) in 12 minutes. NoD checks for loops in an experimental SWAN backbone network with new headers in a fraction of a second. NoD generalizes a specialized system, SecGuru, which we currently use in production to catch hundreds of configuration bugs a year.
NoD has been released as part of the publicly available Z3 SMT solver.
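To see why Datalog fits reachability checking, note that reachability is a least fixpoint of the permitted-hop relation. The Python sketch below computes that fixpoint explicitly over an invented three-hop network and an invented "belief"; real NoD works symbolically over header spaces, so this is only an illustration of the idea.

```python
# Illustrative sketch of Datalog-style reachability in the spirit of NoD.
# The network, the ACL encoding, and the belief are invented for this example.

def reachable(links, acls):
    """Transitive closure of permitted hops, computed as a least fixpoint.

    links: set of directed (src, dst) hops; acls: set of blocked hops.
    """
    permitted = {(a, b) for (a, b) in links if (a, b) not in acls}
    reach = set(permitted)
    changed = True
    while changed:  # naive fixpoint iteration until no new facts derive
        changed = False
        for (a, b) in list(reach):
            for (c, d) in permitted:
                if b == c and (a, d) not in reach:
                    reach.add((a, d))
                    changed = True
    return reach

# Belief: "internal controllers cannot be reached from the internet".
links = {("internet", "edge"), ("edge", "core"), ("core", "controller")}
acls = {("core", "controller")}  # ACL meant to protect the controller
assert ("internet", "controller") not in reachable(links, acls)  # belief holds
assert ("internet", "controller") in reachable(links, set())     # ACL removed: refuted
```

A Datalog engine computes the same closure from two rules (`reach(X,Y) :- permitted(X,Y)` and `reach(X,Z) :- reach(X,Y), permitted(Y,Z)`); the explicit loop above is just that semantics spelled out.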

164 citations


Journal ArticleDOI
TL;DR: The results show that metric first-order temporal logic can serve as an effective specification language for expressing and monitoring a wide variety of practically relevant system properties.
Abstract: Runtime monitoring is a general approach to verifying system properties at runtime by comparing system events against a specification formalizing which event sequences are allowed. We present a runtime monitoring algorithm for a safety fragment of metric first-order temporal logic that overcomes the limitations of prior monitoring algorithms with respect to the expressiveness of their property specification languages. Our approach, based on automatic structures, allows the unrestricted use of negation, universal and existential quantification over infinite domains, and the arbitrary nesting of both past and bounded future operators. Furthermore, we show how to use and optimize our approach for the common case where structures consist of only finite relations, over possibly infinite domains. We also report on case studies from the domain of security and compliance in which we empirically evaluate the presented algorithms. Taken together, our results show that metric first-order temporal logic can serve as an effective specification language for expressing and monitoring a wide variety of practically relevant system properties.
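To make the monitoring setting concrete, the sketch below checks a single invented past-time metric property over a timestamped event stream. It illustrates the style of property the paper's language covers (a metric temporal constraint with first-order parameters), not the paper's automatic-structure algorithm.

```python
# Invented example property: every withdraw(u, a) with a > 100 must be
# preceded, within `window` time units, by an approve(u) event.

def monitor(trace, window=30):
    """trace: time-ordered list of (timestamp, event, user, amount-or-None).

    Returns the violating (timestamp, user) pairs.
    """
    last_approve = {}  # user -> timestamp of most recent approve
    violations = []
    for ts, ev, user, amount in trace:
        if ev == "approve":
            last_approve[user] = ts
        elif ev == "withdraw" and amount > 100:
            ok = user in last_approve and ts - last_approve[user] <= window
            if not ok:
                violations.append((ts, user))
    return violations

trace = [
    (1, "approve", "alice", None),
    (10, "withdraw", "alice", 500),  # approved 9 units ago: ok
    (50, "withdraw", "alice", 500),  # approval stale (49 > 30): violation
    (60, "withdraw", "bob", 50),     # below threshold: unconstrained
]
assert monitor(trace) == [(50, "alice")]
```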

140 citations


Proceedings ArticleDOI
26 May 2015
TL;DR: A manipulation planning framework with linear temporal logic (LTL) specifications that allows the expression of rich and complex manipulation tasks and deals with the state-explosion problem through a novel abstraction technique.
Abstract: Manipulation planning from high-level task specifications, even though highly desirable, is a challenging problem. The large dimensionality of manipulators and complexity of task specifications make the problem computationally intractable. This work introduces a manipulation planning framework with linear temporal logic (LTL) specifications. The use of LTL as the specification language allows the expression of rich and complex manipulation tasks. The framework deals with the state-explosion problem through a novel abstraction technique. Given a robotic system, a workspace consisting of obstacles, manipulable objects, and locations of interest, and a co-safe LTL specification over the objects and locations, the framework computes a motion plan to achieve the task through a synergistic multi-layered planning architecture. The power of the framework is demonstrated through case studies, in which the planner efficiently computes plans for complex tasks. The case studies also illustrate the ability of the framework in intelligently moving away objects that block desired executions without requiring backtracking.
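As a concrete illustration of the specification style (the predicates and task below are invented, not taken from the paper's case studies), a co-safe LTL formula for a pick-and-place task might require that object o1 eventually reach location l2 while object o2 is not moved before then:

```latex
% Hypothetical co-safe LTL task specification (invented example):
% "do not move o_2 until o_1 is at \ell_2"; the until operator also
% forces at(o_1, \ell_2) to eventually hold.
\varphi \;=\; \big(\neg\,\mathit{moved}(o_2)\big)\;\mathcal{U}\;\mathit{at}(o_1,\ell_2)
```

Co-safe formulas are exactly those whose satisfaction is witnessed by a finite prefix of an execution, which is what makes them suitable as goals for a motion planner.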

87 citations


Journal ArticleDOI
01 Jun 2015
TL;DR: This work extends an expressive language, metric first-order temporal logic, with aggregation operators inspired by the aggregation operators common in database query languages like SQL, and provides a monitoring algorithm for this enriched policy specification language.
Abstract: In system monitoring, one is often interested in checking properties of aggregated data. Current policy monitoring approaches are limited in the kinds of aggregations they handle. To rectify this, we extend an expressive language, metric first-order temporal logic, with aggregation operators. Our extension is inspired by the aggregation operators common in database query languages like SQL. We provide a monitoring algorithm for this enriched policy specification language. We show that, in comparison to related data processing approaches, our language is better suited for expressing policies, and our monitoring algorithm has competitive performance.
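An SQL-style aggregation policy over a time window can be monitored with a small amount of bookkeeping. The Python sketch below checks one invented windowed-SUM policy and is only an illustration of the kind of property the extended language expresses, not the paper's algorithm.

```python
# Invented policy: the sum of a user's withdrawals over the past `window`
# time units must not exceed `limit` (a SQL-like SUM ... GROUP BY user).
from collections import defaultdict, deque

def check_sum_policy(trace, window=30, limit=1000):
    """trace: time-ordered (timestamp, user, amount) withdraw events.

    Returns timestamps/users where the windowed sum exceeds the limit.
    """
    recent = defaultdict(deque)  # user -> (ts, amount) pairs inside the window
    violations = []
    for ts, user, amount in trace:
        q = recent[user]
        q.append((ts, amount))
        while q and q[0][0] < ts - window:  # evict events outside the window
            q.popleft()
        if sum(a for _, a in q) > limit:
            violations.append((ts, user))
    return violations

trace = [(1, "alice", 600), (10, "alice", 500), (100, "alice", 500)]
assert check_sum_policy(trace) == [(10, "alice")]  # 600 + 500 > 1000 at ts=10
```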

59 citations


Proceedings Article
25 Jan 2015
TL;DR: This paper puts forward an automata-based methodology for verifying and synthesising multi-agent systems against specifications given in SL[1G], and shows that the algorithm is sound and optimal from a computational point of view.
Abstract: Strategy Logic (SL) has recently come to the fore as a useful specification language to reason about multi-agent systems. Its one-goal fragment, or SL[1G], is of particular interest as it strictly subsumes widely used logics such as ATL*, while maintaining attractive complexity features. In this paper we put forward an automata-based methodology for verifying and synthesising multi-agent systems against specifications given in SL[1G]. We show that the algorithm is sound and optimal from a computational point of view. A key feature of the approach is that all data structures and operations on them can be performed on BDDs. We report on a BDD-based model checker implementing the algorithm and evaluate its performance on the synthesis of a fair process scheduler.

58 citations


Proceedings ArticleDOI
05 Nov 2015
TL;DR: This work proposes an approach based on Natural Language Processing (NLP) for analyzing the impact of change in Natural Language (NL) requirements and proposes a quantitative measure for calculating how likely a requirements statement is to be impacted by a change under given conditions.
Abstract: Requirements are subject to frequent changes as a way to ensure that they reflect the current best understanding of a system, and to respond to factors such as new and evolving needs. Changing one requirement in a requirements specification may warrant further changes to the specification, so that the overall correctness and consistency of the specification can be maintained. A manual analysis of how a change to one requirement impacts other requirements is time-consuming and presents a challenge for large requirements specifications. We propose an approach based on Natural Language Processing (NLP) for analyzing the impact of change in Natural Language (NL) requirements. Our focus on NL requirements is motivated by the prevalent use of these requirements, particularly in industry. Our approach automatically detects and takes into account the phrasal structure of requirements statements. We argue about the importance of capturing the conditions under which change should propagate to enable more accurate change impact analysis. We propose a quantitative measure for calculating how likely a requirements statement is to be impacted by a change under given conditions. We conduct an evaluation of our approach by applying it to 14 change scenarios from two industrial case studies.

56 citations


Journal ArticleDOI
TL;DR: An integrated system for generating, troubleshooting, and executing correct-by-construction controllers for autonomous robots using natural language input, allowing non-expert users to command robots to perform high-level tasks.
Abstract: This paper presents an integrated system for generating, troubleshooting, and executing correct-by-construction controllers for autonomous robots using natural language input, allowing non-expert users to command robots to perform high-level tasks. This system unites the power of formal methods with the accessibility of natural language, providing controllers for implementable high-level task specifications, easy-to-understand feedback on those that cannot be achieved, and natural language explanation of the reason for the robot's actions during execution. The natural language system uses domain-general components that can easily be adapted to cover the vocabulary of new applications. Generation of a linear temporal logic specification from the user's natural language input uses a novel data structure that allows for subsequent mapping of logical propositions back to natural language, enabling natural language feedback about problems with the specification that are only identifiable in the logical form. We demonstrate the robustness of the natural language understanding system through a user study where participants interacted with a simulated robot in a search and rescue scenario. Automated analysis and user feedback on unimplementable specifications is demonstrated using an example involving a robot assistant in a hospital.

52 citations


Book ChapterDOI
01 Jan 2015
TL;DR: There is a diversity of ontology languages in use, among them OWL, RDF, OBO, Common Logic, and F-logic, which provide bridges from ontology modeling to applications, e.g., in software engineering and databases.
Abstract: Over the last decades, the WADT community has studied the formal specification of software (and hardware) in great detail [1, 9, 42].

48 citations


Proceedings ArticleDOI
21 Oct 2015
TL;DR: The TruffleVM is presented, a multi-language runtime that allows composing different language implementations in a seamless way and reduces the amount of required boiler-plate code to a minimum by allowing programmers to access foreign functions or objects by using the notation of the host language.
Abstract: Programmers combine different programming languages because it allows them to use the most suitable language for a given problem, to gradually migrate existing projects from one language to another, or to reuse existing source code. However, existing cross-language mechanisms suffer from complex interfaces, insufficient flexibility, or poor performance. We present the TruffleVM, a multi-language runtime that allows composing different language implementations in a seamless way. It reduces the amount of required boiler-plate code to a minimum by allowing programmers to access foreign functions or objects by using the notation of the host language. We compose language implementations that translate source code to an intermediate representation (IR), which is executed on top of a shared runtime system. Language implementations use language-independent messages that the runtime resolves at their first execution by transforming them to efficient foreign-language-specific operations. The TruffleVM avoids conversion or marshaling of foreign objects at the language boundary and allows the dynamic compiler to perform its optimizations across language boundaries, which guarantees high performance. This paper presents an implementation of our ideas based on the Truffle system and its guest language implementations JavaScript, Ruby, and C.

Journal ArticleDOI
TL;DR: This paper proposes an integration of deployment architectures, with restrictions on processing resources, into Real-Time ABS, a timed, abstract, and behavioral specification language with a formal semantics and a Java-like syntax that targets concurrent, distributed, and object-oriented systems.

Proceedings Article
01 Jan 2015
TL;DR: Ceptre can be viewed as an explication of a new methodology for understanding games based on linear logic, a formal logic concerned with resource usage, intended to enable rapid prototyping for experimental game mechanics, especially in domains that depend on procedural generation and multi-agent simulation.
Abstract: We present a rule specification language called Ceptre, intended to enable rapid prototyping for experimental game mechanics, especially in domains that depend on procedural generation and multi-agent simulation. Ceptre can be viewed as an explication of a new methodology for understanding games based on linear logic, a formal logic concerned with resource usage. We present a correspondence between gameplay and proof search in linear logic, building on prior work on generating narratives. In Ceptre, we introduce the ability to add interactivity selectively into a generative model, enabling inspection of intermediate states for debugging and exploration as well as a means of play. We claim that this methodology can support game designers and researchers in designing, analyzing, and debugging the core systems of their work in generative, multi-agent gameplay. To support this claim, we provide two case studies implemented in Ceptre, one from interactive narrative and one from a strategy-like domain.
Introduction
Today, game designers and developers have a wealth of tools available for creating executable prototypes. Freely available engines such as Twine, Unity, PuzzleScript, and Inform 7 provide carefully-crafted interfaces to creating different subdomains of games. On the other hand, to invent interesting, novel mechanics in any of these tools typically requires the use of ill-specified, general purpose languages: in Twine, manipulating state requires dipping into JavaScript; in Unity, specifying any interactive behavior requires learning a language such as C# or JavaScript; Inform 7 contains its own general-purpose imperative, functional, and logic programming languages; and PuzzleScript simply prevents the author from going outside the well-specified 2D tile grid mechanisms.
The concept of operational logics (Mateas and Wardrip-Fruin 2009), the underlying substrate of meaningful state-change operations atop which mechanics are built, was recently proposed as a missing but critical component of game design and analysis. In most prototyping tools, there is a fixed operational logic (e.g. tile grids and layers in PuzzleScript, or text command-based navigation of a space in Inform 7) atop which the designer may be inventive in terms of rules and appearance, but all rules must be formed out of entities that mean something in the given operational logic (or use a much more complex language to subvert that default logic). We lack good specification tools for inventing new operational logics, or combining elements from several. In particular, many novel systems of play arise from the investigation of interactive storytelling (Mateas and Stern 2003), multi-agent social interaction (McCoy et al. 2010), and procedurally generated behavior (Hartsook et al. 2011), and no existing tools make it especially straightforward for a novice developer to codify and reason about those systems. Ceptre is a proposal for an operational-logic-agnostic specification language that may form the basis of an accessible front-end tool. It is based on a tradition of logical frameworks (Harper, Honsell, and Plotkin 1993), which use logical formulas to represent the rules of a system (a specification). Then proof search may be used to simulate execution, answer queries, and perform analysis of the specification. In Ceptre, we specify games, generative systems, and narratives with similar payoff. Our logic of choice for representing games is linear logic (Girard 1987), unique among logics in its ability to model state change and actions without the need for a frame rule or other axiomatization of inertia for predicates that do not change in an action.
Copyright © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Using linear logic to specify a space of play closely resembles planning approaches. Planning has been used extensively to study games, especially in the interactive storytelling domain (Mateas and Stern 2003; Porteous, Cavazza, and Charles 2010; Medler and Magerko 2006) but also as a general mechanic description language (Zook and Riedl 2014). In contrast to this work, we position Ceptre as unique among game description languages in its combination of (a) direct correspondence to a pre-existing logic and proof theory, providing a portable and robust foundation for reasoning about games, and (b) use as a native authoring language, rather than as a library or auxiliary tool. On the other hand, unlike prior executable description languages (e.g. (Dormans 2011; Osborn, Grow, and Mateas 2013)), we retain the planning-like ability to specify generalized rule schema that apply in a multi-agent setting, allowing for causal analysis of event sequences.
Proceedings, The Eleventh AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE-15)
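The resource-consuming flavor of linear-logic rules can be imitated with multiset rewriting: a rule consumes its left-hand side and produces its right-hand side, with no frame axioms needed. The sketch below is an invented toy crafting domain in Python, not Ceptre syntax.

```python
# Illustrative multiset-rewriting interpreter in the spirit of linear-logic
# rules as in Ceptre. The rules and the tiny domain are invented.
from collections import Counter
import random

def applicable(state, lhs):
    return all(state[r] >= n for r, n in lhs.items())

def apply_rule(state, lhs, rhs):
    out = state.copy()
    out.subtract(lhs)   # linear: consumed resources are gone
    out.update(rhs)
    return +out         # drop zero counts

rules = {
    "chop":  ({"tree": 1}, {"wood": 2}),
    "build": ({"wood": 3}, {"house": 1}),
}

state = Counter({"tree": 2})
rng = random.Random(0)
trace = []
while True:  # committed-choice forward chaining, as in a Ceptre-style run
    enabled = [n for n, (lhs, _) in rules.items() if applicable(state, lhs)]
    if not enabled:
        break
    name = rng.choice(enabled)
    trace.append(name)
    state = apply_rule(state, *rules[name])

assert state == Counter({"wood": 1, "house": 1})
assert trace == ["chop", "chop", "build"]
```

Note there is no rule saying "a house stays a house": predicates untouched by a rule simply persist in the multiset, which is the frame-rule-free property the abstract highlights.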

Journal ArticleDOI
TL;DR: This work proposed a formal specification language for the declarative formulation of transformation properties (by means of invariants, pre-, and postconditions), from which partial oracle functions used for transformation testing were generated, and extended the usage of this specification language to the automated generation of input test models by SAT solving.
Abstract: Testing model transformations poses several challenges, among them the automatic generation of appropriate input test models and the specification of oracle functions. Most approaches for the generation of input models ensure a certain coverage of the source meta-model or the transformation implementation code, whereas oracle functions are frequently defined using query or graph languages. However, these two tasks are usually performed independently regardless of their common purpose, and sometimes, there is a gap between the properties exhibited by the generated input models and those considered by the transformations. Recently, we proposed a formal specification language for the declarative formulation of transformation properties (by means of invariants, pre-, and postconditions) from which we generated partial oracle functions used for transformation testing. Here, we extend the usage of our specification language for the automated generation of input test models by SAT solving. The testing process becomes more intentional because the generated models ensure a certain coverage of the transformation requirements. Moreover, we use the same specification to consistently derive both the input test models and the oracle functions. A set of experiments is presented, aimed at measuring the efficacy of our technique.

Proceedings ArticleDOI
17 Apr 2015
TL;DR: This work proposes a generic framework, ConfValley, to make configuration validation easy, systematic and efficient, and to allow configuration validation as an ordinary part of system deployment.
Abstract: Studies and many incidents in the headlines suggest misconfigurations remain a major cause of unavailability in large systems despite the large amount of work put into detecting, diagnosing and repairing them. In part, this is because many of the solutions are either post-mortem or too expensive to use in production cloud-scale systems. Configuration validation is the process of explicitly defining specifications and proactively checking configurations against those specifications to prevent misconfigurations from entering production. We propose a generic framework, ConfValley, to make configuration validation easy, systematic and efficient, and to allow configuration validation as an ordinary part of system deployment. ConfValley consists of a declarative language for practitioners to express configuration specifications, an inference engine that automatically generates specifications, and a checker that determines if a given configuration obeys its specifications. Our specification language expressed the configuration validation code from Microsoft Azure in 10x fewer lines, many of which were automatically inferred. Using expert-written and inferred specifications, we detected a number of configuration errors in the latest configurations deployed in Microsoft Azure.
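The core idea of declarative validation (specifications as predicates over configuration entries, plus a checker that evaluates them) can be sketched briefly. The rules and keys below are invented and are not ConfValley's actual syntax.

```python
# Illustrative sketch of declarative configuration validation in the spirit
# of ConfValley. Specification names, keys, and rules are invented.

specs = {
    "port in range":      lambda c: 1 <= c["port"] <= 65535,
    "timeout positive":   lambda c: c["timeout_ms"] > 0,
    "replicas >= quorum": lambda c: c["replicas"] >= 2 * c["quorum"] - 1,
}

def validate(config, specs):
    """Return the names of all violated specifications (empty list = valid)."""
    return [name for name, pred in specs.items() if not pred(config)]

good = {"port": 443, "timeout_ms": 5000, "replicas": 5, "quorum": 3}
bad = {"port": 443, "timeout_ms": 0, "replicas": 3, "quorum": 3}
assert validate(good, specs) == []
assert validate(bad, specs) == ["timeout positive", "replicas >= quorum"]
```

Keeping specifications as data, rather than ad hoc checking code scattered through deployment scripts, is what lets an inference engine propose new rules and a single checker run them all before deployment.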

Patent
27 Feb 2015
TL;DR: In this article, the authors present a method for accessing a document plan containing one or more messages and applying a set of lexicalization rules to each of the messages to populate the phrase specifications.
Abstract: Methods, apparatuses, and computer program products are described herein that are configured to be embodied as a configurable microplanner. In some example embodiments, a method is provided that comprises accessing a document plan containing one or more messages. The method of this embodiment may also include generating a text specification containing one or more phrase specifications that correspond to the one or more messages in the document plan. The method of this embodiment may also include applying a set of lexicalization rules to each of the one or more messages to populate the one or more phrase specifications. In some example embodiments, the set of lexicalization rules are specified using a microplanning rule specification language that is configured to hide linguistic complexities from a user. In some example embodiments, genre parameters may also be used to specify constraints that provide default behaviors for the realization process.

Book ChapterDOI
01 Jan 2015
TL;DR: The tool StaRVOOrs combines the deductive theorem prover KeY and the RV tool LARVA, and uses properties written using the ppDATE specification language which combines the control-flow property language DATE used in LARVA with Hoare triples assigned to states.
Abstract: We present the tool StaRVOOrS (Static and Runtime Verification of Object-Oriented Software), which combines static and runtime verification (RV) of Java programs. The tool automates a framework which uses partial results extracted from static verification to optimise the runtime monitoring process. StaRVOOrs combines the deductive theorem prover KeY and the RV tool LARVA, and uses properties written using the ppDATE specification language which combines the control-flow property language DATE used in LARVA with Hoare triples assigned to states. We demonstrate the effectiveness of the tool by applying it to the electronic purse application Mondex.

01 Jan 2015
TL;DR: A case study on the challenges that software engineers may face when using GR(1) synthesis to develop a reactive robotic system, in which two variants of a forklift controller were developed and deployed on a Lego robot.
Abstract: Reactive synthesis is an automated procedure to obtain a correct-by-construction reactive system from a given specification. GR(1) is a well-known fragment of linear temporal logic (LTL) where synthesis is possible using a polynomial symbolic algorithm. We conducted a case study to learn about the challenges that software engineers may face when using GR(1) synthesis for the development of a reactive robotic system. In the case study we developed two variants of a forklift controller, deployed on a Lego robot. The case study employs LTL specification patterns as an extension of the GR(1) specification language, an examination of two specification variants for execution scheduling, traceability from the synthesized controller to constraints in the specification, and generated counter strategies to support understanding reasons for unrealizability. We present the specifications we developed, our observations, and challenges faced during the case study.
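For readers unfamiliar with the fragment, GR(1) specifications have the following standard implication shape (notation follows the common presentation of GR(1), not this paper's extended pattern language):

```latex
% Environment assumptions (left) imply system guarantees (right); each side
% has an initial condition \theta, a safety constraint \rho, and justice
% (recurrence) goals J.
\Big(\theta_e \wedge \mathbf{G}\,\rho_e \wedge \bigwedge_{i} \mathbf{G}\,\mathbf{F}\,J^e_i\Big)
\;\rightarrow\;
\Big(\theta_s \wedge \mathbf{G}\,\rho_s \wedge \bigwedge_{j} \mathbf{G}\,\mathbf{F}\,J^s_j\Big)
```

Restricting specifications to this shape is what makes symbolic synthesis polynomial in the state space, which in turn makes the engineering-focused case study feasible.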

Proceedings ArticleDOI
09 Feb 2015
TL;DR: MontiCore is presented as a language workbench for the engineering of grammar-based language components that can be independently developed, are syntactically composable, and are ultimately reusable across modeling languages.
Abstract: Effective model-driven engineering of complex systems requires appropriately describing different specific system aspects. To this end, efficient integration of different heterogeneous modeling languages is essential. Modeling language integration is onerous and requires in-depth conceptual and technical knowledge and effort. Traditional modeling language integration approaches require language engineers to compose monolithic language aggregates for a specific task or project. Adapting these aggregates to different contexts requires vast effort and makes them hardly reusable. This contribution presents a method for the engineering of grammar-based language components that can be independently developed, are syntactically composable, and ultimately reusable. To this end, it introduces the concepts of language aggregation, language embedding, and language inheritance, as well as their realization in the language workbench MontiCore. The result is a generalizable, systematic, and efficient syntax-oriented composition of languages that allows the agile employment of modeling languages efficiently tailored for individual software projects.

Proceedings ArticleDOI
14 Dec 2015
TL;DR: This paper introduces an alternative approach to the formal specification, implementation, and analysis of the Partitioning Detection and Connectivity Restoration (PCR) algorithm in WSANs by modeling the WSAN as a dynamic graph and using VDM-SL to describe the formal specification of the algorithm.
Abstract: Recently, interest in wireless sensor and actor networks (WSANs) has increased tremendously. Although there has been significant improvement in WSANs, many challenges remain for critical applications. Most published work focuses on the performance analysis of non-functional properties, while the correctness of the approach, which is very important in large and complex systems, is still ignored. This paper introduces an alternative approach for the formal specification, implementation, and analysis of the Partitioning Detection and Connectivity Restoration (PCR) algorithm in WSANs. We model the WSAN as a dynamic graph and use VDM-SL to describe the formal specification of the algorithm. Invariants are used to validate the algorithm, and pre- and postconditions confirm the correctness of the operations. VDM-SL is a formal specification language used for the implementation of software systems and for detailed-level examination. The PCR algorithm specification is implemented, verified, validated, and analyzed through the VDM-SL toolbox.
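The VDM-SL discipline of guarding an operation with a precondition, checking a postcondition, and maintaining a state invariant can be approximated with plain assertions. The following Python sketch uses an invented toy restoration step on a dynamic graph; it illustrates the style, not the paper's PCR specification and not VDM-SL notation.

```python
# Invariant / pre- / postcondition style, VDM-SL-like, over a toy dynamic graph.

def connected(nodes, edges):
    """State invariant: every node is reachable from any other."""
    if not nodes:
        return True
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(m for a, b in edges for m in (a, b) if n in (a, b))
    return seen == nodes

def restore(nodes, edges, failed, new_edge):
    """Remove a failed actor and add a restoration link."""
    assert failed in nodes                    # precondition
    nodes2 = nodes - {failed}
    edges2 = {e for e in edges if failed not in e} | {new_edge}
    assert connected(nodes2, edges2)          # postcondition re-establishes invariant
    return nodes2, edges2

nodes = {"a", "b", "c"}
edges = {("a", "b"), ("b", "c")}
n2, e2 = restore(nodes, edges, failed="b", new_edge=("a", "c"))
assert connected(n2, e2)
```

In a VDM-SL toolbox these checks are evaluated automatically during animation of the specification; the assertions above play the same validating role by hand.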

Book ChapterDOI
29 Oct 2015
TL;DR: This paper describes a tool through which to automatically verify Nash equilibrium strategies for Reactive Modules Games, and makes extensive use of conventional temporal logic satisfiability and model checking techniques.
Abstract: Reactive Modules is a high-level specification language for concurrent and multi-agent systems, used in a number of practical model checking tools. Reactive Modules Games is a game-theoretic extension of Reactive Modules, in which concurrent agents in the system are assumed to act strategically in an attempt to satisfy a temporal logic formula representing their individual goal. The basic analytical concept for Reactive Modules Games is Nash equilibrium. In this paper, we describe a tool through which we can automatically verify Nash equilibrium strategies for Reactive Modules Games. Our tool takes as input a system, specified in the Reactive Modules language, a representation of players' goals expressed as CTL formulae, and a representation of players' strategies; it then checks whether these strategies form a Nash equilibrium of the Reactive Modules Game passed as input. The tool makes extensive use of conventional temporal logic satisfiability and model checking techniques. We first give an overview of the theory underpinning the tool, briefly describe its structure and implementation, and conclude by presenting a worked example analysed using the tool.
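At its core, the equilibrium check is a search over unilateral deviations. The Python sketch below reduces the setting to finite strategy sets and a 0/1 goal-satisfaction function standing in for CTL model checking; the two-player game is invented and this is not the tool's implementation.

```python
# Nash-equilibrium check for binary (goal-satisfaction) payoffs.
# The scheduling game below is invented for illustration.

def is_nash(strategies, profile, satisfied):
    """profile is a Nash equilibrium iff no player can unilaterally deviate
    and improve, i.e. flip an unsatisfied goal to satisfied.

    strategies: per-player strategy lists; satisfied(i, profile) -> bool.
    """
    for i in range(len(profile)):
        if satisfied(i, profile):
            continue  # already at maximum payoff; no profitable deviation
        for s in strategies[i]:
            dev = profile[:i] + (s,) + profile[i + 1:]
            if satisfied(i, dev):
                return False  # profitable unilateral deviation found
    return True

# Two players each choose "req" or "idle"; in this toy game, player i's goal
# ("eventually scheduled") holds iff i requests while the other player idles.
strategies = [["req", "idle"], ["req", "idle"]]
def satisfied(i, p):
    return p[i] == "req" and p[1 - i] == "idle"

assert is_nash(strategies, ("req", "idle"), satisfied)       # 1 cannot improve alone
assert not is_nash(strategies, ("idle", "idle"), satisfied)  # either could deviate to "req"
```

In the actual tool, `satisfied` is replaced by CTL model checking of each player's goal against the system restricted to the strategy profile, which is where the temporal logic machinery enters.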

Book ChapterDOI
24 Jun 2015
TL;DR: This paper presents a novel approach in which data-centric and control-oriented properties may be stated in a single formalism, amenable to both static and dynamic verification techniques, and applies the approach to Mondex, an electronic purse application.
Abstract: Static verification techniques can verify properties across all executions of a program, but powerful judgements are hard to achieve automatically. In contrast, runtime verification enjoys full automation, but cannot judge future and alternative runs. In this paper we present a novel approach in which data-centric and control-oriented properties may be stated in a single formalism, amenable to both static and dynamic verification techniques. We develop and formalise a specification notation, ppDATE, extending the control-flow property language used in the runtime verification tool Larva with pre/post-conditions, and show how specifications written in this notation can be analysed both using the deductive theorem prover KeY and the runtime verification tool Larva. Verification is performed in two steps: KeY first partially proves the data-oriented part of the specification, yielding a simplified specification that is then passed on to Larva, which checks the remaining parts, including the control-centric aspects, at runtime. We apply the approach to Mondex, an electronic purse application.
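As a loose illustration of the ppDATE idea (not the authors' notation), the following Python sketch monitors pre/post-conditions at runtime and lets statically discharged checks be skipped, mirroring the KeY-then-Larva division of labour; the toy purse and all names are hypothetical:

```python
# Hypothetical sketch: runtime-checked pre/post-conditions, where checks
# already proven statically can be switched off before deployment.

def monitored(pre, post, statically_proven=False):
    def wrap(f):
        def g(*args):
            if not statically_proven:
                assert pre(*args), "pre-condition violated"
            result = f(*args)
            if not statically_proven:
                assert post(result, *args), "post-condition violated"
            return result
        return g
    return wrap

class Purse:  # toy Mondex-style purse
    def __init__(self, balance):
        self.balance = balance

    @monitored(pre=lambda self, amt: 0 < amt <= self.balance,
               post=lambda _res, self, amt: self.balance >= 0)
    def withdraw(self, amt):
        self.balance -= amt
```

In ppDATE the conditions are attached to states of a control-flow automaton rather than to single methods, so the monitor also tracks where in the protocol the call occurs.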

Book ChapterDOI
18 Jul 2015
TL;DR: This work uses timed regular expressions with events to specify patterns that define segments of simulation traces over which measurements are to be taken, and associates measure specifications over these patterns to describe a particular type of performance evaluation to be done over the matched signal segments.
Abstract: We propose a declarative measurement specification language for quantitative performance evaluation of hybrid (discrete-continuous) systems based on simulation traces. We use timed regular expressions with events to specify patterns that define segments of simulation traces over which measurements are to be taken. In addition, we associate measure specifications over these patterns to describe a particular type of performance evaluation (maximization, average, etc.) to be done over the matched signal segments. The resulting language enables expressive and versatile specification of measurement objectives. We develop an algorithm for our measurement framework, implement it in a prototype tool, and apply it in a case study of an automotive communication protocol. Our experiments demonstrate that the proposed technique is usable with very low overhead to a typical (computationally intensive) simulation.
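A crude Python stand-in for the approach, with plain predicates in place of timed regular expressions, might look as follows: segments of a (time, value) trace are matched, and an average measure is evaluated over each matched segment (all names hypothetical):

```python
# Hypothetical sketch: match trace segments with a predicate (a crude
# stand-in for timed-regular-expression matching), then evaluate a
# measure (here: average) over each matched segment.

def match_segments(trace, predicate):
    """Return (start, end) index pairs of maximal runs where predicate holds."""
    segments, start = [], None
    for i, (_t, v) in enumerate(trace):
        if predicate(v) and start is None:
            start = i
        elif not predicate(v) and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(trace)))
    return segments

def measure_average(trace, segments):
    """Average signal value over each matched segment."""
    return [sum(v for _t, v in trace[a:b]) / (b - a) for a, b in segments]
```

The paper's language is considerably richer: patterns can mention events and timing constraints, and other measures (maximum, duration, etc.) can be attached to the same pattern.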

Proceedings ArticleDOI
21 Oct 2015
TL;DR: Veritas is a workbench that simplifies the development of sound type systems by providing a single, high-level specification language for type systems, from which it automatically tries to derive soundness proofs and efficient and correct type-checking algorithms.
Abstract: The correct definition and implementation of non-trivial type systems is difficult and requires expert knowledge, which is not available to developers of domain-specific languages (DSLs) in practice. We propose Veritas, a workbench that simplifies the development of sound type systems. Veritas provides a single, high-level specification language for type systems, from which it automatically tries to derive soundness proofs and efficient and correct type-checking algorithms. For verification, Veritas combines off-the-shelf automated first-order theorem provers with automated proof strategies specific to type systems. For deriving efficient type checkers, Veritas provides a collection of optimization strategies whose applicability to a given type system is checked through verification on a case-by-case basis. We have developed a prototypical implementation of Veritas and used it to verify type soundness of the simply-typed lambda calculus and of parts of typed SQL. Our experience suggests that many of the individual verification steps can be automated and, in particular, that a high degree of automation is possible for type systems of DSLs.
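For scale, a directly hand-written type checker for the simply-typed lambda calculus, one of the systems verified with Veritas, fits in a few lines of Python; the tuple-based term encoding below is a hypothetical illustration, not Veritas' specification language:

```python
# Hypothetical sketch: a type checker for the simply-typed lambda calculus,
# written by hand rather than derived from a Veritas specification.

def typecheck(term, env=None):
    env = env or {}
    kind = term[0]
    if kind == "lit":                      # ("lit", n) : int
        return "int"
    if kind == "var":                      # ("var", x)
        return env[term[1]]
    if kind == "lam":                      # ("lam", x, arg_type, body)
        _, x, t, body = term
        return ("fun", t, typecheck(body, {**env, x: t}))
    if kind == "app":                      # ("app", f, a)
        ft, at = typecheck(term[1], env), typecheck(term[2], env)
        assert ft[0] == "fun" and ft[1] == at, "ill-typed application"
        return ft[2]
    raise ValueError(kind)
```

Veritas' point is that both such an algorithm and its soundness proof should be derived from one high-level specification instead of being maintained separately.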

Book ChapterDOI
01 Jan 2015
TL;DR: This paper considers the relationship between two widely-used specification approaches to parametric runtime verification: trace slicing and first-order temporal logic, and introduces a technique of identifying syntactic fragments of temporal logics that admit notions of sliceability.
Abstract: Parametric runtime verification is the process of verifying properties of execution traces of (data carrying) events produced by a running system. This paper considers the relationship between two widely-used specification approaches to parametric runtime verification: trace slicing and first-order temporal logic. This work is a first step in understanding this relationship. We introduce a technique of identifying syntactic fragments of temporal logics that admit notions of sliceability. We show how to translate formulas in such fragments into automata with a slicing-based semantics. In exploring this relationship, the paper aims to allow monitoring techniques to be shared between the two approaches and initiate a wider effort to unify specification languages for runtime verification.
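The basic slicing operation can be sketched in Python as follows; this toy version only groups total, identical parameter bindings, whereas real trace slicing must also relate partial bindings (all names hypothetical):

```python
# Hypothetical sketch of trace slicing: a parametric trace of
# (event, binding) pairs is projected onto one propositional sub-trace
# per parameter valuation, which a monitor can then check independently.

from collections import defaultdict

def slice_trace(trace):
    """trace: list of (event_name, binding) where binding is a dict of
    parameter values. Returns {frozen_binding: [event_name, ...]}."""
    slices = defaultdict(list)
    for event, binding in trace:
        slices[frozenset(binding.items())].append(event)
    return dict(slices)
```

A typical use is checking a per-resource property, e.g. that every `open` on a file `f` is eventually followed by a `close` on the same `f`, by running one small automaton per slice.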

Journal ArticleDOI
TL;DR: This paper models MAHSNs as dynamic graphs and employs VDM-SL for the formal specification and verification of the LASCNN algorithm; the specification is analyzed and validated using the VDM-SL toolbox.

Journal ArticleDOI
TL;DR: A detailed syntax of GeoSpelling is proposed in this paper, based on instructions used in computer programming languages: calls to functions and flow control by conditions and loops.
Abstract: In order to tackle the ambiguities of Geometrical Product Specification (GPS), the GeoSpelling language has been developed to express the semantics of specifications. A detailed syntax of GeoSpelling is proposed in this paper. A specification is defined as a sequence of operations on the skin model. The syntax is based on instructions used in computer programming languages: calls to functions and flow control by conditions and loops. In GeoSpelling, calls to functions correspond to the declaration of operations; loops make it possible to manage a set of features with rigor, and conditions to select features from a set.
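The flavour of the proposed syntax can be mimicked in Python: a specification is a sequence of operation calls, with a loop over a feature set and a condition selecting features (the operations and feature encoding below are hypothetical stand-ins, not GeoSpelling itself):

```python
# Hypothetical sketch of the GeoSpelling syntax idea: operations as
# function calls, a loop over a feature set, and a selection condition.

def partition(skin_model):        # stand-in operation on the skin model
    return skin_model["features"]

def association(feature):         # stand-in operation on one feature
    return {"kind": feature["kind"], "size": feature["size"]}

def specify(skin_model, kind, min_size):
    selected = []
    for feature in partition(skin_model):                 # loop
        if feature["kind"] == kind and feature["size"] >= min_size:  # condition
            selected.append(association(feature))         # call to an operation
    return selected
```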

Book ChapterDOI
21 Sep 2015
TL;DR: The architecture of Checkers is described and it is demonstrated how it can be used to check proof objects by supplying the FPC specification for a subset of the inferences used by eprover and checking proofs using these inferences.
Abstract: Different theorem provers work within different formalisms and paradigms, and therefore produce various incompatible proof objects. Currently there is a big effort to establish foundational proof certificates (FPC), which would serve as a common "specification language" for all these formats. Such a framework enables the uniform checking of proof objects from many different theorem provers while relying on a small and trusted kernel to do so. Checkers is an implementation of a proof checker using foundational proof certificates. By trusting a small kernel based on focused sequent calculus on the one hand, and by supporting FPC specifications in a Prolog-like language on the other, it can be used for checking proofs from a wide range of theorem provers. The focus of this paper is on the output of equational resolution theorem provers, and to this end we specify the paramodulation rule. We describe the architecture of Checkers and demonstrate how it can be used to check proof objects by supplying the FPC specification for a subset of the inferences used by eprover and checking proofs using these inferences.
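The small-trusted-kernel idea can be sketched in Python with propositional resolution standing in for paramodulation (a deliberate simplification): the kernel only checks individual inference steps, and the certificate tells it which clauses and literal each step uses (all names hypothetical):

```python
# Hypothetical sketch of a minimal proof-checking kernel: the certificate
# (the list of steps) guides the kernel, which validates each inference.

def resolve(c1, c2, lit):
    """Check and perform one resolution step: clauses are frozensets of
    integer literals (negative = negated); lit must occur positively in c1
    and negatively in c2."""
    assert lit in c1 and -lit in c2, "certificate does not justify this step"
    return (c1 - {lit}) | (c2 - {-lit})

def check_refutation(clauses, steps):
    """steps: list of (i, j, lit) indices into the growing clause list.
    Returns True iff the steps derive the empty clause."""
    clauses = list(clauses)
    for i, j, lit in steps:
        clauses.append(resolve(clauses[i], clauses[j], lit))
    return frozenset() in clauses
```

Checkers itself realizes this pattern with a focused sequent calculus kernel and FPC specifications in a Prolog-like language, rather than hard-coding one rule as above.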

01 Jan 2015
TL;DR: This dissertation introduces a design methodology that addresses the complexity and heterogeneity of cyber-physical systems by using assume-guarantee contracts to formalize the design process and enable the realization of system architectures and control algorithms in a hierarchical and compositional way.
Abstract: Author(s): Nuzzo, Pierluigi | Advisor(s): Sangiovanni-Vincentelli, Alberto L | Abstract: The realization of large and complex cyber-physical systems (such as "smart" transportation, energy, security, and health-care systems) is creating design and verification challenges which will soon become insurmountable with the current engineering practices. These highly heterogeneous systems, tightly combining physical processes with computation, communication, and control elements, would substantially benefit from hierarchical and compositional methodologies to make their design possible, let alone optimal. Several languages and tools have been proposed over the years to enable model-based development of complex systems. However, an all-encompassing design framework that helps interconnect different tools, possibly operating on different system representations, is still missing. In this dissertation, we introduce a design methodology that addresses the complexity and heterogeneity of cyber-physical systems by using assume-guarantee contracts to formalize the design process and enable the realization of system architectures and control algorithms in a hierarchical and compositional way. In our methodology, components are specified by contracts, and systems by compositions of contracts. Contracts explicitly define the assumptions of a component on its environment and the guarantees of the component under these assumptions. Contract operations and relations, such as composition, conjunction, and refinement, allow proving that: (i) an aggregation of components is compatible, i.e. there exists a legal environment in which they can operate; (ii) a set of specifications is consistent, i.e. there exists an implementation satisfying all of them; (iii) an aggregation of components refines a specification, i.e. it implements the specification contract and is able to operate in any environment admitted by it.
While horizontal contracts are used to specify components and aggregations of components at the same level of abstraction, we introduce the notion of vertical contracts to reason about richer refinement relations and mappings between different abstraction levels, possibly described by heterogeneous architectures and behavior formalisms. Moreover, we further investigate the problem of compatibility for systems with uncontrolled inputs and controlled outputs, by establishing a link between the theory of contracts and that of interfaces, which rely on different mathematical formalisms while sharing the same objectives. From this link, we derive a new projection operator on contracts that enables the preservation of the semantics of interface composition and compatibility. Resting on the above contract framework, the design is carried out as a sequence of refinement steps from a high-level specification to an implementation built out of a library of components at the lower level. To allow for requirement analysis and early detection of inconsistencies, top-level system requirements are captured as contracts, by leveraging a front-end pattern-based specification language and a set of back-end formal languages, including mixed integer-linear constraints and temporal logic. Top-level contracts are then refined to achieve independent development of system architectures and control algorithms, by combining synthesis from requirements and optimization methods. To enable efficient architecture selection under safety and reliability constraints, we explore two optimization-based methods that use an approximate reliability analysis technique to overcome the exponential complexity of exact computations. The Integer-Linear Programming with Approximate Reliability (ILP-AR) method generates larger, monolithic optimization problems using approximate but efficient reliability computations with an explicit theoretical bound on the error.
Conversely, the Integer-Linear Programming Modulo Reliability (ILP-MR) method breaks the complex architecture selection task into a sequence of smaller optimization tasks without reliability constraints, interleaved with exact reliability checks. By relying on efficient mechanisms to prune out candidate architectures that are inconsistent with the reliability constraints, ILP-MR can run faster than ILP-AR on large problem instances. We further explore two methods to systematically design control strategies for a given architecture. The reactive synthesis-based optimal control mapping (RS-OCM) method generates controllers by combining reactive synthesis from linear temporal logic contracts with optimization techniques based on simulation and monitoring of signal temporal logic contracts. Different design concerns are then addressed by leveraging the most appropriate abstraction levels, using contracts from the pre-characterized library to accelerate verification tasks. The programming-based optimal control mapping (P-OCM) method uses, instead, a discrete-time representation of the system and a formalization of the design requirements in terms of arithmetic constraints over real numbers to cast the control problem as an optimization problem over a finite time horizon. The optimization problem is then solved with a receding horizon approach and scales better than monolithic reactive synthesis from linear temporal logic. We demonstrate, for the first time, the effectiveness of a contract-based design flow on real-life examples of industrial relevance, namely, the design of aircraft electric power distribution and environment control systems. In our framework, optimal selection of large, industrial-scale power system architectures can be performed in a few minutes. Design validation of power system controllers based on linear temporal logic contracts shows up to two orders of magnitude improvement in terms of execution time with respect to conventional techniques.
Finally, our optimization-based load management scheme allows better resource utilization than a conventional one.
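The contract operations at the core of the methodology can be sketched over a finite universe of behaviors: a contract is a saturated (assumptions, guarantees) pair, composition conjoins guarantees, and refinement weakens assumptions while strengthening guarantees. The Python encoding below is a hypothetical toy, not the dissertation's framework:

```python
# Hypothetical sketch of assume-guarantee contracts over a finite
# universe of behaviors, with composition and refinement as set operations.

def contract(universe, assume, guarantee):
    A = {b for b in universe if assume(b)}
    # saturate: behaviors outside the assumptions are trivially guaranteed
    G = {b for b in universe if guarantee(b)} | (set(universe) - A)
    return A, G

def compose(universe, c1, c2):
    A1, G1 = c1
    A2, G2 = c2
    G = G1 & G2
    A = (A1 & A2) | (set(universe) - G)
    return A, G

def refines(c1, c2):
    """c1 refines c2: weaker assumptions, stronger guarantees."""
    A1, G1 = c1
    A2, G2 = c2
    return A2 <= A1 and G1 <= G2
```

In the dissertation, assumptions and guarantees are expressed symbolically (e.g. as temporal logic formulas or mixed integer-linear constraints) rather than as explicit behavior sets.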

02 Jul 2015
TL;DR: In this technical report a semantic model is provided for the Real-Time dialect of the Vienna Development Method, which builds upon both the formal semantics provided for the ISO standard VDM Specification Language and on other work on the core of the VDM-RT notation.
Abstract: All formally defined languages need to be given an unambiguous semantics such that the meaning of all models expressed using the language is clear. In this technical report a semantic model is provided for the Real-Time dialect of the Vienna Development Method (VDM). This builds upon both the formal semantics provided for the ISO standard VDM Specification Language, and on other work on the core of the VDM-RT notation. Although none of the VDM dialects are executable in general, the primary focus of the work presented here is on the executable subset. This focus is a result of parallel work on an interpreter implementation for VDM-RT that chooses one of the possible interpretations of a given model expressed in VDM-RT, based on the semantics presented here.