
Showing papers on "Specification language" published in 2016


Proceedings ArticleDOI
11 Jan 2016
TL;DR: Demonstrates that examples can, in general, be interpreted as refinement types, and puts this observation into practice by formalizing synthesis as proof search in a sequent calculus with intersection and union refinements that is proved sound with respect to a conventional type system.
Abstract: Input-output examples have emerged as a practical and user-friendly specification mechanism for program synthesis in many environments. While example-driven tools have demonstrated tangible impact that has inspired adoption in industry, their underlying semantics are less well-understood: what are "examples" and how do they relate to other kinds of specifications? This paper demonstrates that examples can, in general, be interpreted as refinement types. Seen in this light, program synthesis is the task of finding an inhabitant of such a type. This insight provides an immediate semantic interpretation for examples. Moreover, it enables us to exploit decades of research in type theory as well as its correspondence with intuitionistic logic rather than designing ad hoc theoretical frameworks for synthesis from scratch. We put this observation into practice by formalizing synthesis as proof search in a sequent calculus with intersection and union refinements that we prove to be sound with respect to a conventional type system. In addition, we show how to handle negative examples, which arise from user feedback or counterexample-guided loops. This theory serves as the basis for a prototype implementation that extends our core language to support ML-style algebraic data types and structurally inductive functions. Users can also specify synthesis goals using polymorphic refinements and import monomorphic libraries. The prototype serves as a vehicle for empirically evaluating a number of different strategies for resolving the nondeterminism of the sequent calculus---bottom-up theorem-proving, term enumeration with refinement type checking, and combinations of both---the results of which classify, explain, and validate the design choices of existing synthesis systems. It also provides a platform for measuring the practical value of a specification language that combines "examples" with the more general expressiveness of refinements.
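
To make the "examples as refinement types" reading concrete, here is a minimal sketch (in Python, with a toy arithmetic grammar; not the paper's calculus or prototype): each input-output pair acts as a singleton refinement, their conjunction as an intersection type, and synthesis searches for an inhabitant, here by naive enumeration.

    # Minimal sketch, assuming a hypothetical grammar of arithmetic terms
    # over one variable. Each example (i, o) acts as a singleton refinement;
    # a program must satisfy the intersection of all of them.
    from itertools import count

    def terms(max_depth=2):
        # Enumerate candidate programs as (description, function) pairs.
        atoms = [("x", lambda x: x), ("0", lambda x: 0), ("1", lambda x: 1)]
        yield from atoms
        for depth in count(1):
            new = []
            for da, fa in atoms:
                for db, fb in atoms:
                    new.append((f"({da} + {db})", lambda x, a=fa, b=fb: a(x) + b(x)))
                    new.append((f"({da} * {db})", lambda x, a=fa, b=fb: a(x) * b(x)))
            yield from new
            atoms = atoms + new
            if depth >= max_depth:   # keep the sketch finite
                return

    def synthesize(examples):
        # Search for an inhabitant of the intersection of singleton refinements.
        for desc, f in terms():
            if all(f(i) == o for i, o in examples):
                return desc
        return None

    print(synthesize([(0, 1), (1, 2), (2, 3)]))  # -> "(x + 1)"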

109 citations


Proceedings ArticleDOI
26 Jun 2016
TL;DR: Presents the preliminary design of CompassQL, a specification language for querying over the space of visualizations, which defines a partial specification that describes enumeration constraints and methods for choosing, ranking, and grouping recommended visualizations.
Abstract: Creating effective visualizations requires domain familiarity as well as design and analysis expertise, and may impose a tedious specification process. To address these difficulties, many visualization tools complement manual specification with recommendations. However, designing interfaces, ranking metrics, and scalable recommender systems remain important research challenges. In this paper, we propose a common framework for facilitating the development of visualization recommender systems in the form of a specification language for querying over the space of visualizations. We present the preliminary design of CompassQL, which defines (1) a partial specification that describes enumeration constraints, and (2) methods for choosing, ranking, and grouping recommended visualizations. To demonstrate the expressivity of the language, we describe existing recommender systems in terms of CompassQL queries. Finally, we discuss the prospective benefits of a common language for future visualization recommender systems.
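
The following sketch illustrates only the shape of such a query, with invented field names and a toy ranking heuristic rather than CompassQL's actual syntax: a partial specification leaves wildcards open, an enumerator completes them, and a scoring function ranks the candidates.

    # Illustrative sketch only: a CompassQL-style partial specification with
    # wildcards ("?") that an enumerator completes and a scoring function
    # ranks. Field names and the heuristic are invented for the example.
    from itertools import product

    partial_spec = {
        "mark": "?",                     # wildcard: let the recommender choose
        "x":    {"field": "horsepower", "type": "quantitative"},
        "y":    {"field": "?",          "type": "quantitative"},
    }

    MARKS = ["point", "bar", "line"]
    FIELDS = ["miles_per_gallon", "acceleration", "weight"]

    def enumerate_specs(spec):
        for mark, yfield in product(MARKS, FIELDS):
            yield {
                "mark": mark if spec["mark"] == "?" else spec["mark"],
                "x": spec["x"],
                "y": dict(spec["y"], field=yfield) if spec["y"]["field"] == "?" else spec["y"],
            }

    def score(spec):
        # Toy effectiveness heuristic: prefer points for two quantitative axes.
        return 1.0 if spec["mark"] == "point" else 0.5

    for s in sorted(enumerate_specs(partial_spec), key=score, reverse=True)[:3]:
        print(s)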

103 citations


Posted Content
TL;DR: In this paper, a method for using deep neural networks to amortize the cost of inference in models from the family induced by universal probabilistic programming languages is introduced, establishing a framework that combines the strengths of probabilistic programming and deep learning methods.
Abstract: We introduce a method for using deep neural networks to amortize the cost of inference in models from the family induced by universal probabilistic programming languages, establishing a framework that combines the strengths of probabilistic programming and deep learning methods. We call what we do "compilation of inference" because our method transforms a denotational specification of an inference problem in the form of a probabilistic program written in a universal programming language into a trained neural network denoted in a neural network specification language. When at test time this neural network is fed observational data and executed, it performs approximate inference in the original model specified by the probabilistic program. Our training objective and learning procedure are designed to allow the trained neural network to be used as a proposal distribution in a sequential importance sampling inference engine. We illustrate our method on mixture models and Captcha solving and show significant speedups in the efficiency of inference.
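
A minimal sketch of the amortization idea, with a closed-form linear-Gaussian regressor standing in for the neural network so the example stays self-contained: sample (latent, observation) pairs from the generative model, learn a map from observations to proposal parameters, and use the learned proposal in self-normalized importance sampling.

    # Sketch under simplifying assumptions (not the paper's architecture):
    # a least-squares fit replaces the trained neural network.
    import numpy as np

    rng = np.random.default_rng(0)

    # Generative model: latent mu ~ N(0, 1); observation y ~ N(mu, 0.5^2).
    def sample_joint(n):
        mu = rng.normal(0.0, 1.0, size=n)
        return mu, rng.normal(mu, 0.5)

    # "Compilation": learn y -> E[mu | y] from simulated data.
    mu_train, y_train = sample_joint(10_000)
    A = np.vstack([y_train, np.ones_like(y_train)]).T
    coef, _, _, _ = np.linalg.lstsq(A, mu_train, rcond=None)
    resid_std = np.std(mu_train - A @ coef)

    # Test time: the learned proposal drives importance sampling.
    y_obs = 1.3
    prop_mean = coef[0] * y_obs + coef[1]
    samples = rng.normal(prop_mean, resid_std, size=5_000)

    # Unnormalized log weights (constants cancel after self-normalization).
    log_prior = -0.5 * samples**2
    log_lik = -0.5 * ((y_obs - samples) / 0.5) ** 2
    log_q = -0.5 * ((samples - prop_mean) / resid_std) ** 2
    w = np.exp(log_prior + log_lik - log_q)
    print(np.sum(w * samples) / np.sum(w))  # near the analytic mean 0.8 * y_obs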

87 citations


Book ChapterDOI
17 Jul 2016
TL;DR: This work is the first to verify the functional correctness of a practical preemptive OS kernel with machine-checkable proofs, and also verifies priority-inversion freedom (PIF) in μC/OS-II.
Abstract: We propose a practical verification framework for preemptive OS kernels. The framework models the correctness of API implementations in OS kernels as contextual refinement of their abstract specifications. It provides a specification language for defining the high-level abstract model of OS kernels, a program logic for refinement verification of concurrent kernel code with multi-level hardware interrupts, and automated tactics for developing mechanized proofs. The whole framework is developed for a practical subset of the C language. We have successfully applied it to verify key modules of the commercial preemptive OS μC/OS-II [2], including the scheduler, interrupt handlers, message queues, and mutexes. We also verify priority-inversion freedom (PIF) in μC/OS-II. All the proofs are mechanized in Coq. To our knowledge, our work is the first to verify the functional correctness of a practical preemptive OS kernel with machine-checkable proofs.

55 citations


Book ChapterDOI
27 Sep 2016
TL;DR: The language extends the specification language Lola with two new features: template stream expressions, which allow input data to be carried along the stream, and dynamic stream generation, where new monitors can be invoked during the monitoring process for the monitoring of new subtasks on their own time scale.
Abstract: We introduce Lola 2.0, a stream-based specification language for the precise description of complex security properties in network traffic. The language extends the specification language Lola with two new features: template stream expressions, which allow input data to be carried along the stream, and dynamic stream generation, where new monitors can be invoked during the monitoring process for the monitoring of new subtasks on their own time scale. Lola 2.0 is simple and expressive: it combines the ease-of-use of rule-based specification languages like Snort with the expressiveness of heavy-weight scripting languages or temporal logics previously needed for the description of complex stateful dependencies and statistical measures. Lola 2.0 specifications are monitored by incrementally constructing output streams from input streams, while maintaining a store of partially evaluated expressions. We demonstrate the flexibility and expressivity of Lola 2.0 using a prototype implementation on several practical examples.
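
The following sketch conveys the stream-based flavor only; Lola 2.0's actual syntax and its template/dynamic-stream features are richer. Output streams are computed incrementally from input streams, here flagging when a running average over a network trace crosses a threshold.

    # Spirit-of-Lola sketch (invented API, not Lola 2.0 syntax): one output
    # stream is computed step by step from an input stream of packet sizes.
    def monitor(packet_sizes, threshold=1000.0):
        total, n = 0.0, 0
        for t, size in enumerate(packet_sizes):  # one step per input event
            total += size
            n += 1
            running_avg = total / n              # an intermediate stream
            alarm = running_avg > threshold      # the output stream
            yield t, running_avg, alarm

    for t, avg, alarm in monitor([200, 400, 3000, 2500, 1800]):
        print(f"t={t} avg={avg:.1f} alarm={alarm}")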

52 citations


Proceedings ArticleDOI
01 Nov 2016
TL;DR: Proposes Electrum, an extension of the Alloy specification language with temporal logic operators, in which both rich configurations and expressive temporal properties can easily be defined, along with two alternative model-checking techniques, one bounded and the other unbounded, to verify systems expressed in the language.
Abstract: Model-checking is increasingly popular in the early phases of the software development process. To establish the correctness of a software design one must usually verify both structural and behavioral (or temporal) properties. Unfortunately, most specification languages, and accompanying model-checkers, excel only in analyzing either one or the other kind. This limits their ability to verify dynamic systems with rich configurations: systems whose state space is characterized by rich structural properties, but whose evolution is also expected to satisfy certain temporal properties. To address this problem, we first propose Electrum, an extension of the Alloy specification language with temporal logic operators, where both rich configurations and expressive temporal properties can easily be defined. Two alternative model-checking techniques are then proposed, one bounded and the other unbounded, to verify systems expressed in this language, namely to verify that every desirable temporal property holds for every possible configuration.
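
A toy illustration of the bounded technique, not Electrum syntax: for each structural configuration, unroll a nondeterministic transition relation to a fixed depth and check a temporal property over all traces.

    # Sketch with an invented two-state system: a request stays "pending"
    # until a scheduler (part of the configuration) grants it.
    def traces(init, step, k):
        # All traces of length k from `init` under nondeterministic `step`.
        frontier = [[init]]
        for _ in range(k - 1):
            frontier = [tr + [s] for tr in frontier for s in step(tr[-1])]
        return frontier

    def check(config, k=4):
        def step(state):
            if state == "granted":
                return ["granted"]
            return ["granted"] if config["fair_scheduler"] else ["pending", "granted"]

        # Temporal property "F granted": every trace eventually reaches it.
        return all(any(s == "granted" for s in tr)
                   for tr in traces("pending", step, k))

    for fair in [True, False]:
        print({"fair_scheduler": fair}, "=>", check({"fair_scheduler": fair}))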

49 citations


Journal ArticleDOI
TL;DR: Proposes a process calculus, a variant of the applied pi calculus with constructs for manipulating a global state from processes running in parallel, and shows that this language can be translated to multiset rewrite (msr) rules while preserving all security properties expressible in a dedicated first-order logic for security properties.
Abstract: Security APIs, key servers and protocols that need to keep track of the status of transactions must maintain a global, non-monotonic state, e.g., in the form of a database or register. However, most existing automated verification tools do not support the analysis of such stateful security protocols – sometimes for fundamental reasons, such as the encoding of the protocol as Horn clauses, which are inherently monotonic. A notable exception is the recent tamarin prover, which allows specifying protocols as multiset rewrite (msr) rules, a formalism expressive enough to encode state. As multiset rewriting is a "low-level" specification language with no direct support for concurrent message passing, encoding protocols correctly is a difficult and error-prone process. We propose a process calculus which is a variant of the applied pi calculus with constructs for manipulation of a global state by processes running in parallel. We show that this language can be translated to msr rules whilst preserving all security properties expressible in a dedicated first-order logic for security properties. The translation has been implemented in a prototype tool which uses the tamarin prover as a backend. We apply the tool to several case studies, among them a simplified fragment of PKCS#11, the Yubikey security token, and an optimistic contract signing protocol.
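
A minimal multiset-rewriting sketch, with invented fact names and none of tamarin's actual syntax: a state is a multiset of ground facts, and a rule fires by consuming its left-hand side and producing its right-hand side, which is how global, non-monotonic state can be encoded.

    # Sketch only: states as multisets of ground facts, one rewriting step.
    from collections import Counter

    def applicable(state, lhs):
        return all(state[f] >= n for f, n in lhs.items())

    def fire(state, lhs, rhs):
        assert applicable(state, lhs)
        new = state - Counter(lhs)   # consume the left-hand side
        new.update(rhs)              # produce the right-hand side
        return new

    # Toy "key server" step: registering a key consumes a fresh-key fact and
    # a request, and produces a store entry (the global mutable state).
    state = Counter({"FreshKey(k1)": 1, "Request(alice,k1)": 1})
    print(fire(state,
               {"FreshKey(k1)": 1, "Request(alice,k1)": 1},
               {"Store(alice,k1)": 1}))  # Counter({'Store(alice,k1)': 1})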

46 citations


Proceedings ArticleDOI
01 Dec 2016
TL;DR: Presents a sampling-based algorithm to synthesize control policies under temporal and uncertainty constraints that uses local feedback controllers to break the curse of history associated with belief-space planning, making conventional automata-based methods tractable.
Abstract: In this paper, we present a sampling-based algorithm to synthesize control policies with temporal and uncertainty constraints. We introduce a specification language called Gaussian Distribution Temporal Logic (GDTL), an extension of Boolean logic that allows us to incorporate temporal evolution and noise mitigation directly into the task specifications, e.g. “Go to region A and reduce the variance of your state estimate below 0.1 m².” Our algorithm generates a transition system in the belief space and uses local feedback controllers to break the curse of history associated with belief space planning. Furthermore, conventional automata-based methods become tractable. Switching control policies are then computed using a product Markov Decision Process (MDP) between the transition system and the Rabin automaton encoding the task specification. We present algorithms to translate a GDTL formula to a Rabin automaton and to efficiently construct the product MDP by leveraging recent results from incremental computing. Our approach is evaluated in hardware experiments using a camera network and ground robot.
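
The following sketch shows only the flavor of evaluating such a specification over a belief trajectory, with invented predicate names; GDTL's real syntax and semantics are defined in the paper. Each belief is a Gaussian (mean, covariance), and the property is "eventually in region A with total variance below 0.1".

    # Flavor-of-GDTL sketch: predicates over Gaussian beliefs, checked along
    # a finite trajectory. Region bounds and names are invented.
    import numpy as np

    def in_region_a(mean):
        return 0.0 <= mean[0] <= 1.0 and 0.0 <= mean[1] <= 1.0

    def variance_below(cov, bound=0.1):
        return np.trace(cov) < bound

    def eventually(beliefs, pred):
        return any(pred(m, c) for m, c in beliefs)

    beliefs = [
        (np.array([2.0, 2.0]), np.eye(2) * 0.5),   # far away, uncertain
        (np.array([0.5, 0.5]), np.eye(2) * 0.2),   # in A, still too uncertain
        (np.array([0.5, 0.6]), np.eye(2) * 0.02),  # in A, variance reduced
    ]

    spec = lambda m, c: in_region_a(m) and variance_below(c)
    print(eventually(beliefs, spec))  # True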

45 citations


Proceedings ArticleDOI
01 Dec 2016
TL;DR: Addresses the question of what to do when the optimal probability of achieving the specification is not satisfactory, by viewing the specification as a soft constraint and presenting a synthesis framework for MDPs that encodes and automates specification revision in a trade-off for higher probability.
Abstract: Optimal control policy synthesis for probabilistic systems from high-level specifications is increasingly often studied. One major question that is commonly faced, however, is what to do when the optimal probability of achieving the specification is not satisfactory. We address this question by viewing the specification as a soft constraint and present a synthesis framework for MDPs that encodes and automates specification revision in a trade-off for higher probability. The method uses co-safe LTL as the specification language and quantifies the revisions to the specification according to user-defined proposition costs. The framework computes a control policy that optimizes the trade-off between the probability of satisfaction and the cost of specification revision. The key idea of the method is a rule for the composition of the MDP, the automaton representing the specification, and the proposition costs such that all possible specification revisions along with their costs and probabilities of satisfaction are captured in one structure. The problem is then reduced to multi-objective optimization on an MDP. The power of the method is illustrated through simulations of a complex robotic scenario.

41 citations


Proceedings Article
01 May 2016
TL;DR: Odin is an information extraction framework that applies cascades of finite state automata over both surface text and syntactic dependency graphs, and Odin’s declarative language for writing these cascaded automata is described.
Abstract: Odin is an information extraction framework that applies cascades of finite state automata over both surface text and syntactic dependency graphs. Support for syntactic patterns allows us to concisely define relations that are otherwise difficult to express in languages such as the Common Pattern Specification Language (CPSL), which are currently limited to shallow linguistic features. The interaction of lexical and syntactic automata provides robustness and flexibility when writing extraction rules. This paper describes Odin’s declarative language for writing these cascaded automata.
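
A sketch of the underlying idea, not Odin's actual rule language: match a trigger lemma, then follow dependency edges to collect its arguments, here extracting (kinase, substrate) pairs from a parsed sentence given as dependency triples.

    # Minimal sketch: a "rule" over dependency triples (head, relation,
    # dependent). The biomedical example and labels are illustrative.
    deps = [
        ("phosphorylates", "nsubj", "MEK"),
        ("phosphorylates", "dobj", "ERK"),
        ("binds", "nsubj", "RAS"),
    ]

    def extract(deps, trigger="phosphorylates"):
        subj = [d for h, r, d in deps if h == trigger and r == "nsubj"]
        obj = [d for h, r, d in deps if h == trigger and r == "dobj"]
        return [{"kinase": s, "substrate": o} for s in subj for o in obj]

    print(extract(deps))  # [{'kinase': 'MEK', 'substrate': 'ERK'}]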

39 citations


Proceedings ArticleDOI
01 Nov 2016
TL;DR: Describes novel optimizations, backed by aggressive speculation techniques, implemented within FastR, an alternative R language implementation built on Truffle, a JVM-based language development framework developed at Oracle Labs.
Abstract: The R language, from the point of view of language design and implementation, is a unique combination of various programming language concepts. It has functional characteristics like lazy evaluation of arguments, but also allows expressions to have arbitrary side effects. Many runtime data structures, for example variable scopes and functions, are accessible and can be modified while a program executes. Several different object models allow for structured programming, but the object models can interact in surprising ways with each other and with the base operations of R. R works well in practice, but it is complex, and it is a challenge for language developers trying to improve on the current state-of-the-art, which is the reference implementation -- GNU R. The goal of this work is to demonstrate that, given the right approach and the right set of tools, it is possible to create an implementation of the R language that provides significantly better performance while keeping compatibility with the original implementation. In this paper we describe novel optimizations backed up by aggressive speculation techniques and implemented within FastR, an alternative R language implementation, utilizing Truffle -- a JVM-based language development framework developed at Oracle Labs. We also provide experimental evidence demonstrating the effectiveness of these optimizations in comparison with GNU R, as well as with the Renjin and TERR implementations of the R language.

Book ChapterDOI
01 Jan 2016
TL;DR: This chapter introduces the first set of basic design concepts along with specification language elements to formally represent them and makes a sharp distinction between the object as a carrier of a behaviour and the behaviour itself.
Abstract: This chapter introduces the first set of basic design concepts along with specification language elements to formally represent them. When considering an object of design, we make a sharp distinction between the object as a carrier of a behaviour, i.e., its possible existence as a real world object, and the behaviour itself. We use this distinction to categorise our basic design concepts.

Proceedings ArticleDOI
01 Jan 2016
TL;DR: Proposes a new technique that enhances the generation of UML models from natural-language requirements, providing automatic assistance to developers and minimizing the errors that arise in the existing process.
Abstract: The foremost problem in the software development cycle arises during requirements specification and analysis. Errors encountered during this first phase migrate to later phases, making them far more costly to fix than if they had been caught at the outset. The reason is that software requirements are specified in natural-language format. Such requirements can be transformed into a computer model using UML. To minimize the errors that arise in the existing process, we propose a new technique that generates UML models from natural-language requirements and provides automatic assistance to developers. The main aim of our paper is the production of activity diagrams and sequence diagrams from natural-language specifications. A standard POS tagger and parser analyze the input, i.e., the requirements given by the users in English, and extract phrases, activities, etc. from the text. The technique is beneficial as it reduces the gap between informal natural language and formal modeling languages. Processing involves stages such as pre-processing, part-of-speech (POS) tagging, parsing, phrase identification, and design of the UML diagrams. The application and its framework are developed in Java and tested on a few technical documents.
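
A simplified sketch of the extraction step (the paper's Java pipeline is richer): POS-tag an English requirement and collect verb-object pairs as candidate activities for an activity diagram. This uses NLTK, which needs its tokenizer and tagger data packages installed.

    # Sketch only: a crude verb-object extractor standing in for the paper's
    # phrase-identification stage.
    import nltk

    def candidate_activities(requirement):
        tagged = nltk.pos_tag(nltk.word_tokenize(requirement))
        activities = []
        for i, (word, tag) in enumerate(tagged):
            if tag.startswith("VB"):  # a verb starts a candidate activity
                # take the nearest following noun as its object, if any
                obj = next((w for w, t in tagged[i + 1:] if t.startswith("NN")), None)
                activities.append((word, obj))
        return activities

    print(candidate_activities(
        "The user submits the form and the system validates the data."))
    # e.g. [('submits', 'form'), ('validates', 'data')]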

Proceedings ArticleDOI
14 Mar 2016
TL;DR: Preliminary experiments show that some of these models used as classifiers can achieve high precision and recall and can be used to properly identify language families, languages and even deal with embedded code fragments.
Abstract: Software language identification techniques are applicable to many situations from universal IDE support to legacy code analysis. Most widely used heuristics are based on software artefact metadata such as file extensions or on grammar-based text analysis such as keyword search. In this paper we propose to use statistical language models from the natural language processing field such as n-grams, skip-grams, multinomial naive Bayes and normalised compression distance. Our preliminary experiments show that some of these models used as classifiers can achieve high precision and recall and can be used to properly identify language families, languages and even deal with embedded code fragments.
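
A minimal sketch of one of the proposed models, a character-bigram multinomial naive Bayes classifier, with toy training data; a realistic classifier would train on far larger corpora and use better smoothing.

    # Sketch only: two one-snippet "corpora" stand in for real training data.
    import math
    from collections import Counter

    def bigrams(text):
        return [text[i:i + 2] for i in range(len(text) - 1)]

    train = {
        "python": "def f(x):\n    return x + 1\nprint(f(2))",
        "java":   "public static int f(int x) { return x + 1; }",
    }
    models = {lang: Counter(bigrams(src)) for lang, src in train.items()}

    def classify(snippet):
        vocab = len(set().union(*models.values()))
        def log_prob(lang):
            counts, total = models[lang], sum(models[lang].values())
            # add-one smoothing over the combined bigram vocabulary
            return sum(math.log((counts[b] + 1) / (total + vocab))
                       for b in bigrams(snippet))
        return max(models, key=log_prob)

    print(classify("def g(y): return y * 2"))       # python
    print(classify("private void g() { y = 2; }"))  # java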

Journal ArticleDOI
TL;DR: This proposal bridges the gap between norms and mechanism design, allowing us to formally study and analyse the effect of norms and sanctions on the behaviour of rational agents, and proposes a concrete executable specification language that can be used to implement multi-agent environments.

Proceedings ArticleDOI
02 Jun 2016
TL;DR: This approach extracts potential shape predicates based on the definition of constructors of arbitrary user-defined inductive data types, and combines these predicates within an expressive first-order specification language using a lightweight data-driven learning procedure.
Abstract: This paper presents a novel automated procedure for discovering expressive shape specifications for sophisticated functional data structures. Our approach extracts potential shape predicates based on the definition of constructors of arbitrary user-defined inductive data types, and combines these predicates within an expressive first-order specification language using a lightweight data-driven learning procedure. Notably, this technique requires no programmer annotations, and is equipped with a type-based decision procedure to verify the correctness of discovered specifications. Experimental results indicate that our implementation is both efficient and effective, capable of automatically synthesizing sophisticated shape specifications over a range of complex data types, going well beyond the scope of existing solutions.

Proceedings ArticleDOI
01 Nov 2016
TL;DR: Titanium is presented, an extension of Alloy for formal analysis of evolving specifications that narrows the state space of the revised specification, thereby greatly reducing the required computational effort.
Abstract: The Alloy specification language, and the corresponding Alloy Analyzer, have received much attention in the last two decades with applications in many areas of software engineering. Increasingly, formal analyses enabled by Alloy are desired for use in an on-line mode, where the specifications are automatically kept in sync with the running, possibly changing, software system. However, given Alloy Analyzer's reliance on computationally expensive SAT solvers, an important challenge is the time it takes for such analyses to execute at runtime. The fact that in an on-line mode, the analyses are often repeated on slightly revised versions of a given specification, presents us with an opportunity to tackle this challenge. We present Titanium, an extension of Alloy for formal analysis of evolving specifications. By leveraging the results from previous analyses, Titanium narrows the state space of the revised specification, thereby greatly reducing the required computational effort. We describe the semantic basis of Titanium in terms of models specified in relational logic. We show how the approach can be realized atop an existing relational logic model finder. Our experimental results show Titanium achieves a significant speed-up over Alloy Analyzer when applied to the analysis of evolving specifications.

Book ChapterDOI
10 Oct 2016
TL;DR: Reports on the suitability of statistical model checking for the analysis of quantitative properties of product line models, in an extended treatment of earlier work by the authors.
Abstract: We report on the suitability of statistical model checking for the analysis of quantitative properties of product line models by an extended treatment of earlier work by the authors. The type of analysis that can be performed includes the likelihood of specific product behaviour, the expected average cost of products (in terms of the attributes of the products’ features) and the probability of features to be (un)installed at runtime. The product lines must be modelled in QFLan, which extends the probabilistic feature-oriented language PFLan with novel quantitative constraints among features and on behaviour and with advanced feature installation options. QFLan is a rich process-algebraic specification language whose operational behaviour interacts with a store of constraints, neatly separating product configuration from product behaviour. The resulting probabilistic configurations and probabilistic behaviour converge in a discrete-time Markov chain semantics, enabling the analysis of quantitative properties. Technically, a Maude implementation of QFLan, integrated with Microsoft’s SMT constraint solver Z3, is combined with the distributed statistical model checker MultiVeStA, developed by one of the authors. We illustrate the feasibility of our framework by applying it to a case study of a product line of bikes.

Journal ArticleDOI
Mo Li1, Shaoying Liu1
TL;DR: A new approach is put forward to allow formal specification to play more effective roles in software design by integrating specification animation-based inspection into the process of constructing formal design specifications.
Abstract: Software design has been well recognized as an important means to achieve high reliability, and formal specification can help enhance the quality of design. However, communications between the designer and the user can become difficult via formal specifications due to the potentially complex mathematical expressions in the specification. This difficulty may lead to the situation where the user may not be closely involved in the process of constructing the specification for quality assurance. To allow formal specification to play more effective roles in software design, we put forward a new approach to deal with this problem in this paper. The approach is characterized by integrating specification animation-based inspection into the process of constructing formal design specifications. We discuss the underlying principle of the approach by explaining how specification animation is utilized as a reading technique for inspection to validate, and then evolve, the current specification towards a satisfactory one. We describe a prototype software tool for the method, and present a case study to show how the method supported by the tool works in practice.

Proceedings ArticleDOI
18 Jul 2016
TL;DR: This work proposes a clear separation of concerns between C/S specification on one side, through the new rule-based description language CSml, and the algorithmic core of SE on the other side, revisited to take C/S policies into account, demonstrating the feasibility and the benefits of the method.
Abstract: Symbolic Execution (SE) is a popular and profitable approach to automatic code-based software testing. Concretization and symbolization (C/S) is a crucial part of modern SE tools, since it directly impacts the trade-offs between correctness, completeness and efficiency of the approach. Yet, C/S policies have barely been studied. We intend to remedy this situation and to establish C/S policies on a firm ground. To this end, we propose a clear separation of concerns between C/S specification on one side, through the new rule-based description language CSml, and the algorithmic core of SE on the other side, revisited to take C/S policies into account. This view is implemented on top of an existing SE tool, demonstrating the feasibility and the benefits of the method. This work paves the way for more flexible SE tools with well-documented and reusable C/S policies, as well as for a systematic study of C/S policies.

01 Jan 2016
TL;DR: Liquid Haskell is presented, a usable program verifier that aims to establish formal verification as an integral part of the development process and serves as a prototype verifier in a future where formal techniques will be used to facilitate, instead of hinder, software development.
Abstract: Code deficiencies and bugs constitute an unavoidable part of software systems. In safety-critical systems, like aircraft or medical equipment, even a single bug can lead to catastrophic impacts such as injuries or death. Formal verification can be used to statically track code deficiencies by proving or disproving correctness properties of a system. However, at its current state formal verification is a cumbersome process that is rarely used by mainstream developers, mostly because it targets non-general-purpose languages (e.g., Coq, Agda, Dafny). We present Liquid Haskell, a usable program verifier that aims to establish formal verification as an integral part of the development process. Liquid Haskell naturally integrates the specification of correctness properties as logical refinements of Haskell's types. Moreover, it uses the abstract interpretation framework of liquid types to automatically check correctness of specifications via Satisfiability Modulo Theories (SMT) solvers, requiring no explicit proofs or complicated annotations. Finally, the specification language is arbitrarily expressive, allowing the user to write general correctness properties about their code, thus turning Haskell into a theorem prover. Transforming a mature language --- with optimized libraries and highly tuned parallelism --- into a theorem prover enables us to verify a wide variety of properties on real-world applications. We used Liquid Haskell to verify shallow invariants of existing Haskell code, e.g., memory safety of the optimized string manipulation library ByteString. Moreover, we checked deep, sophisticated properties of parallel Haskell code, e.g., program equivalence of a naive string matcher and its parallelized version. Having verified about 20K lines of Haskell code, we present how Liquid Haskell serves as a prototype verifier in a future where formal techniques will be used to facilitate, instead of hinder, software development.
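
Liquid Haskell itself works on Haskell types, but the SMT-checking flavor can be sketched in a few lines with z3's Python bindings: the refinement {v : Int | v >= 0} for an absolute-value function holds iff the negation of the verification condition is unsatisfiable. This is an illustration of the general approach, not Liquid Haskell's pipeline; it requires the z3-solver package.

    # Sketch: discharge a refinement-type obligation as an SMT validity check.
    from z3 import Int, If, Not, Solver, unsat

    x = Int("x")
    abs_x = If(x >= 0, x, -x)   # SMT term for the body of abs

    s = Solver()
    s.add(Not(abs_x >= 0))      # negate the claimed refinement
    if s.check() == unsat:
        print("refinement {v | v >= 0} verified for abs")
    else:
        print("counterexample:", s.model())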

Book ChapterDOI
15 Nov 2016
TL;DR: Presents a case study of how to specify and model check a given robot algorithm in Maude, a rewriting-logic-based programming and specification language, expressing in LTL the properties the algorithm should enjoy.
Abstract: Distributed mobile computing has recently been an active field of research, resulting in a large number of algorithms. However, to the best of our knowledge, few of the designed algorithms have been formally model checked. This paper presents a case study of how to specify and model check a given robot algorithm. We specify the system in Maude, a rewriting logic-based programming and specification language. To check the correctness of the algorithm, we express in LTL the properties it should enjoy. Our analysis leads to a counterexample which implies that the proposed algorithm is not correct.

Patent
11 Jan 2016
TL;DR: In this article, the authors present systems, methods, circuits and associated computer executable code for deep-learning-based natural language understanding, wherein training of one or more neural networks includes: producing character-string input 'noise' on a per-character basis and introducing it into machine-training character-string inputs fed to a 'word tokenization and spelling correction language-model' to generate spell-corrected word-set outputs; and feeding machine-training word-set inputs, including 'right' examples of correctly semantically-tagged word sets, to a 'word semantics derivation model'.
Abstract: Disclosed are systems, methods, circuits and associated computer executable code for deep learning based natural language understanding, wherein training of one or more neural networks, includes: producing character strings inputs ‘noise’ on a per-character basis, and introducing the produced ‘noise’ into machine training character strings inputs fed to a ‘word tokenization and spelling correction language-model’, to generate spell corrected word sets outputs; feeding machine training word sets inputs, including one or more ‘right’ examples of correctly semantically-tagged word sets, to a ‘word semantics derivation model’, to generate semantically tagged sentences outputs. Upon models reaching a training ‘steady state’, the ‘word tokenization and spelling correction language-model’ is fed with input character strings representing ‘real’ linguistic user inputs, generating word sets outputs that are fed as inputs to the word semantics derivation model for generating semantically tagged sentences outputs.

Proceedings ArticleDOI
01 May 2016
TL;DR: Control of a multilevel system is developed for a discrete-event system (DES) structured by an engineering model where each subsystem has a set of children at the next-lower level and a unique parent at thenext-higher level.
Abstract: Control of a multilevel system is developed for a discrete-event system (DES) structured by an engineering model. In a multilevel system, each subsystem has a set of children at the next-lower level and a unique parent at the next-higher level. A coordinated multilevel DES is defined by the condition that a parent also is involved in the interaction of each tuple of its children. Control synthesis is carried out per subsystem. If the specification language is conditionally decomposable, conditionally controllable, and conditionally normal then there exists a set of supervisors such that the closed-loop system of the multilevel system meets the specification. The complexity gain is considerable. The examples of an MRI scanner and of a vehicle system illustrate the approach.

Journal ArticleDOI
TL;DR: Describes a process calculus with an explicit representation of resources in which processes and resources co-evolve, with the resource semantics formulated in such a way that soundness and completeness of bisimulation are obtained with respect to logical equivalence for the full range of logical connectives and modalities.

Proceedings ArticleDOI
14 Mar 2016
TL;DR: GLAsT is presented, a new learning algorithm which accepts a small set of sentences describing correctness properties and corresponding SystemVerilog Assertions (SVAs) and creates a custom formal grammar which captures the writing style and sentence structure of a specification.
Abstract: The purpose of functional verification is to ensure that a design conforms to its specification. However, large written specifications can contain hundreds of statements describing correct operation which an engineer must use to create sets of correctness properties. This laborious manual process increases both verification time and cost. In this work we present GLAsT, a new learning algorithm which accepts a small set of sentences describing correctness properties and corresponding SystemVerilog Assertions (SVAs). GLAsT creates a custom formal grammar which captures the writing style and sentence structure of a specification and facilitates the automatic translation of English specification sentences into formal SystemVerilog Assertions. We evaluate GLAsT on English sentences from two ARM AMBA bus protocols. Results show that a translation system using the formal grammar generated by GLAsT automatically generates correctly formed SVAs from the targeted AMBA specification as well as from a second, different AMBA bus specification.


01 Jan 2016
TL;DR: In this article, the authors propose a solution that allows workflows to be portable across a range of clouds through a new framework for building, dynamically deploying and enacting workflows, combining the TOSCA specification language and container-based virtualization.
Abstract: Scientific workflows are increasingly being migrated to the Cloud. However, workflow developers face the problem of which Cloud to choose and, more importantly, how to avoid vendor lock-in. This is because there are a range of Cloud platforms, each with different functionality and interfaces. In this paper we propose a solution - a system that allows workflows to be portable across a range of Clouds. This portability is achieved through a new framework for building, dynamically deploying and enacting workflows. It combines the TOSCA specification language and container-based virtualization. TOSCA is used to build a reusable and portable description of a workflow which can be automatically deployed and enacted using Docker containers. We describe a working implementation of our framework and evaluate it using a set of existing scientific workflows that illustrate the flexibility of the proposed approach.

Proceedings ArticleDOI
10 Jun 2016
TL;DR: A tool called SESAMM Specifier is presented in which a subset of the specification patterns for formal requirements specification, called SPS, is integrated into an existing industrial tool-chain, providing the necessary means for the formal specification of system requirements and the later validation of the formally expressed behavior.
Abstract: The lack of formal system specifications is a major obstacle to the widespread adoption of formal verification techniques in industrial settings. Specification patterns represent a promising approach that can fill this gap by enabling non-expert practitioners to write formal specifications based on reusing solutions to commonly occurring problems. Despite the fact that the specification patterns have been proven suitable for specification of industrial systems, there is no engineer-friendly tool support adequate for industrial adoption. In this paper, we present a tool called SESAMM Specifier in which we integrate a subset of the specification patterns for formal requirements specification, called SPS, into an existing industrial tool-chain. The tool provides the necessary means for the formal specification of system requirements and the later validation of the formally expressed behavior.
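
A simplified sketch of pattern instantiation, with templates in the style of Dwyer et al.'s specification patterns on which SPS builds; the tool's actual pattern set, scopes, and target logics differ.

    # Sketch only: map (pattern, scope) pairs to LTL templates and fill in
    # the propositions. Pattern names follow the specification-pattern
    # literature; the automotive propositions are invented.
    PATTERNS = {
        ("absence", "globally"):      "G (!{p})",
        ("universality", "globally"): "G ({p})",
        ("response", "globally"):     "G ({p} -> F ({q}))",
    }

    def instantiate(pattern, scope, **props):
        return PATTERNS[(pattern, scope)].format(**props)

    # "Globally, it is always the case that if brake_pressed holds,
    #  then brake_light eventually holds."
    print(instantiate("response", "globally", p="brake_pressed", q="brake_light"))
    # -> G (brake_pressed -> F (brake_light))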

Proceedings ArticleDOI
01 Dec 2016
TL;DR: Introduces a run-time security monitor for embedded system applications that detects both known and unknown computational cyber-attacks, is rigorous (i.e., sound and complete), eliminating false alarms, and is efficient, supporting real-time detection.
Abstract: We introduce a run-time security monitor for embedded system applications that detects both known and unknown computational cyber-attacks. Our security monitor is rigorous (i.e. sound and complete), eliminating false alarms, as well as efficient, supporting real-time detection. In contrast, conventional run-time security monitors for application software either produce (high rates of) false alarms (e.g. intrusion detection systems) or limit application performance (e.g. run-time verification systems). Such monitors are typically non-adaptive against constantly changing attacks of variable extent. Our run-time monitor detects attacks by checking the consistency between the application run-time behavior and its specified (expected) behavior model. Our specification language is based on monadic second order logic and event calculus interpreted over algebraic data structures; the application implementation can be in any programming language. Based on our defined denotational semantics of the specification language, we prove that the security monitor is sound and complete, i.e. it produces an alarm iff it detects an inconsistency between the application execution and the specified behavior. Importantly, the monitor detects not only cyber-attacks but all behavioral deviations from specification, e.g. bugs, and so is readily applicable to the security of legacy systems. Through an application of our prototype monitor to a PID controller for a feedwater tank, we demonstrate that rigorous run-time monitors employing verification techniques are effective, efficient and readily applicable to demanding real-time critical systems, without scalability limitations.
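
A toy sketch of the consistency-checking idea only; the paper's specification formalism (monadic second-order logic and event calculus over algebraic data structures) is far richer. Observed events are replayed against an expected state machine, and any deviation raises an alarm, so attacks and bugs alike are flagged. The fill/drain controller below loosely echoes the feedwater-tank case study but is invented for the example.

    # Sketch only: the expected behavior model as a (state, event) table.
    EXPECTED = {
        ("idle", "start_fill"):   "filling",
        ("filling", "level_ok"):  "idle",
        ("filling", "overflow"):  "emergency_stop",
    }

    def monitor(events, state="idle"):
        for e in events:
            nxt = EXPECTED.get((state, e))
            if nxt is None:  # observed behavior inconsistent with the model
                return f"ALARM: event '{e}' inconsistent with state '{state}'"
            state = nxt
        return f"ok, final state '{state}'"

    print(monitor(["start_fill", "level_ok"]))          # ok
    print(monitor(["start_fill", "drain_valve_open"]))  # ALARM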