Showing papers on "Specification language published in 2017"


Journal ArticleDOI
TL;DR: The underlying semantics of the specification language supported and the algorithms implemented in MCMAS, including its fairness and counterexample generation features, are presented and a detailed description of the implementation is provided.
Abstract: We present MCMAS, a model checker for the verification of multi-agent systems. MCMAS supports efficient symbolic techniques for the verification of multi-agent systems against specifications representing temporal, epistemic and strategic properties. We present the underlying semantics of the specification language supported and the algorithms implemented in MCMAS, including its fairness and counterexample generation features. We provide a detailed description of the implementation. We illustrate its use by discussing a number of examples and evaluate its performance by comparing it against other model checkers for multi-agent systems on a common case study.

208 citations


Proceedings ArticleDOI
01 Jan 2017
TL;DR: In this article, the authors take advantage of the expressive power of temporal logic (TL) to specify complex rules the robot should follow, and incorporate domain knowledge into learning, and demonstrate the proposed RL approach in a toast-placing task learned by a Baxter robot.
Abstract: Reinforcement learning (RL) depends critically on the choice of reward functions used to capture the desired behavior and constraints of a robot. Usually, these are handcrafted by an expert designer and represent heuristics for relatively simple tasks. Real-world applications typically involve more complex tasks with rich temporal and logical structure. In this paper we take advantage of the expressive power of temporal logic (TL) to specify complex rules the robot should follow, and incorporate domain knowledge into learning. We propose Truncated Linear Temporal Logic (TLTL) as a specification language that is arguably well suited for robotics applications. We show in simulated trials that learning is faster and policies obtained using the proposed approach outperform the ones learned using heuristic rewards in terms of the robustness degree, i.e., how well the tasks are satisfied. Furthermore, we demonstrate the proposed RL approach in a toast-placing task learned by a Baxter robot.
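
To make the idea of TL-based rewards concrete, here is a minimal sketch of the quantitative (robustness) semantics that turns a TLTL-style formula into a dense reward; it is not the authors' implementation, and the task, predicates, and thresholds are invented for illustration.

import numpy as np

# Minimal sketch (not the paper's code): quantitative semantics of a
# TLTL-style formula  F(reach_goal) & G(avoid_obstacle)  over a finite
# trajectory. Robustness > 0 means the formula is satisfied.

def rho_reach_goal(state, goal, tol=0.1):
    # positive when the state is within `tol` of the goal
    return tol - np.linalg.norm(state - goal)

def rho_avoid_obstacle(state, obstacle, margin=0.2):
    # positive when the state keeps at least `margin` distance from the obstacle
    return np.linalg.norm(state - obstacle) - margin

def robustness(trajectory, goal, obstacle):
    # "eventually reach_goal": max over time; "always avoid_obstacle": min over time
    eventually = max(rho_reach_goal(s, goal) for s in trajectory)
    always = min(rho_avoid_obstacle(s, obstacle) for s in trajectory)
    return min(eventually, always)   # conjunction = min

# The robustness degree can then serve directly as the episode reward.
traj = [np.array([0.0, 0.0]), np.array([0.4, 0.1]), np.array([0.9, 0.0])]
print(robustness(traj, goal=np.array([1.0, 0.0]), obstacle=np.array([0.5, 0.5])))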

111 citations


Proceedings ArticleDOI
08 May 2017
TL;DR: The verification problem for synchronous, perfect recall multi-agent systems with imperfect information against a specification language that includes strategic and epistemic operators is analysed and it is shown that if the agents' actions are public, then verification is 2exptime-complete.
Abstract: We analyse the verification problem for synchronous, perfect recall multi-agent systems with imperfect information against a specification language that includes strategic and epistemic operators. While the verification problem is undecidable, we show that if the agents' actions are public, then verification is 2exptime-complete. To illustrate the formal framework we consider two epistemic and strategic puzzles with imperfect information and public actions: the muddy children puzzle and the classic game of battleships.

55 citations


Posted Content
TL;DR: In this paper, the authors start from a core programming language for well-specified but complex actions (such as analyzing data, manipulating text, and querying databases) and let a community of users incrementally "naturalize" it, teaching the system a diverse language and using it to build hundreds of complex voxel structures.
Abstract: Our goal is to create a convenient natural language interface for performing well-specified but complex actions such as analyzing data, manipulating text, and querying databases. However, existing natural language interfaces for such tasks are quite primitive compared to the power one wields with a programming language. To bridge this gap, we start with a core programming language and allow users to "naturalize" the core language incrementally by defining alternative, more natural syntax and increasingly complex concepts in terms of compositions of simpler ones. In a voxel world, we show that a community of users can simultaneously teach a common system a diverse language and use it to build hundreds of complex voxel structures. Over the course of three days, these users went from using only the core language to using the naturalized language in 85.9% of the last 10K utterances.

48 citations


Journal ArticleDOI
TL;DR: This paper features a proof-of-concept implementation using the open source forensic framework named plaso to export data to CASE, a rational evolution of the Digital Forensic Analysis eXpression (DFAX) for representing digital forensic information and provenance.

38 citations


Journal ArticleDOI
04 Apr 2017
TL;DR: This article presents a novel approach in which data-oriented and control-oriented properties may be stated in a single formalism amenable to both static and dynamic verification techniques, and presents the applicability of this approach on two case studies.
Abstract: Static verification techniques are used to analyse and prove properties about programs before they are executed. Many of these techniques work directly on the source code and are used to verify data-oriented properties over all possible executions. The analysis is necessarily an over-approximation as the real executions of the program are not available at analysis time. In contrast, runtime verification techniques have been extensively used for control-oriented properties, analysing the current execution path of the program in a fully automatic manner. In this article, we present a novel approach in which data-oriented and control-oriented properties may be stated in a single formalism amenable to both static and dynamic verification techniques. The specification language we present to achieve this is that of ppDATEs, which enhances the control-oriented property language of DATEs with data-oriented pre/postconditions. For runtime verification of ppDATE specifications, the language is translated into a DATE. We give a formal semantics to ppDATEs, which we use to prove the correctness of our translation from ppDATEs to DATEs. We show how ppDATE specifications can be analysed using a combination of the deductive theorem prover KeY and the runtime verification tool LARVA. Verification is performed in two steps: KeY first partially proves the data-oriented part of the specification, simplifying the specification, which is then passed on to LARVA to check at runtime the remaining parts of the specification, including the control-oriented aspects. We show the applicability of our approach on two case studies.
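
As a rough illustration of combining a control-oriented automaton with data-oriented pre/postconditions in one specification, consider the following hand-written sketch; it is not ppDATE syntax, and the states, events, and conditions are invented.

# Sketch of a ppDATE-like property: a control automaton over program events,
# where individual transitions also carry data-oriented pre/postconditions.
# Not the paper's notation; names and conditions are illustrative.

class StackMonitor:
    def __init__(self):
        self.state = "empty"          # control-oriented part

    def on_push(self, size_before, size_after):
        # data-oriented postcondition attached to the transition
        assert size_after == size_before + 1, "push postcondition violated"
        self.state = "nonempty"

    def on_pop(self, size_before, size_after):
        # control-oriented check: pop is only allowed in state "nonempty"
        assert self.state == "nonempty", "pop in empty state"
        # data-oriented pre/postcondition
        assert size_before > 0 and size_after == size_before - 1
        self.state = "empty" if size_after == 0 else "nonempty"

m = StackMonitor()
m.on_push(0, 1)
m.on_pop(1, 0)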

34 citations


Posted Content
TL;DR: This work introduces RTLola, a new stream-based specification language for the description of real-time properties of reactive systems that allows for an automatic memory analysis that guides the user in identifying the computationally expensive specifications.
Abstract: We introduce RTLola, a new stream-based specification language for the description of real-time properties of reactive systems. The key feature is the integration of sliding windows over real-time intervals with aggregation functions into the language. Using sliding windows we can detach fixed-rate output streams from the varying rate input streams. We provide an efficient evaluation algorithm for the sliding windows by partitioning the windows into intervals according to a given monitor frequency. For useful aggregation functions, the intervals allow a more efficient way to compute the aggregation value by dynamically reusing interval summaries. In general, the number of input values within a single window instance can grow arbitrarily large, disallowing any guarantees on the expected memory consumption. Assuming a fixed monitor output rate, we can provide memory guarantees which can be computed a priori. Additionally, for specifications using certain classes of aggregation functions, we can perform a more precise, better memory analysis. We demonstrate the applicability of the new language on practical examples.
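
The memory guarantee can be illustrated with a small sketch of the interval (bucket) idea, assuming a fixed 1 Hz output rate and a 5-second window; this is not RTLola syntax or the tool's evaluation algorithm, only the underlying bookkeeping.

from collections import deque

# Sketch (not the RTLola implementation): a 5-second sliding sum, evaluated at
# a fixed 1 Hz output rate. The window is partitioned into one bucket per
# output period; each bucket keeps only a summary (its partial sum), so memory
# is bounded by the number of buckets, not by the number of input events.

WINDOW_SECONDS = 5

buckets = deque([0.0] * WINDOW_SECONDS, maxlen=WINDOW_SECONDS)
current_bucket = 0.0

def on_input(value):
    global current_bucket
    current_bucket += value          # fold the event into the open bucket

def on_output_tick():
    global current_bucket
    buckets.append(current_bucket)   # close the bucket, drop the oldest one
    current_bucket = 0.0
    return sum(buckets)              # aggregate over the reused summaries

on_input(2.0); on_input(3.0)
print(on_output_tick())              # 5.0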

33 citations


Journal ArticleDOI
TL;DR: The main entities of a new version (3.1) of the Nested Context Model (NCM) are presented, which concentrates on integrating support for enriched concept descriptions into the model, enabling the specification of relationships between concept descriptions and multimedia content in the hypermedia way.
Abstract: Most multimedia documents available today are agnostic to data semantics. Moreover, their specification languages offer little to ease authoring of meaningful content. In this paper, we present the main entities of a new version (3.1) of the Nested Context Model (NCM), which concentrates on integrating support for enriched concept descriptions into the model. These extensions enable the specification of relationships between concept descriptions and multimedia content in the hypermedia way, composing what we call hyperknowledge in this paper. The previous version of NCM (3.0) is a hypermedia conceptual model. NCL (Nested Context Language), which is part of international standards and ITU recommendations, was engineered according to NCM 3.0 definitions. The extensions discussed in this paper contribute not only to advances in NCL, but mainly serve as a conceptual model for hyperknowledge document engineering.

31 citations


Journal ArticleDOI
TL;DR: It is shown how codatatypes can be employed to produce compact, high-level proofs of key results in logic: the soundness and completeness of proof systems for variations of first-order logic.
Abstract: We show how codatatypes can be employed to produce compact, high-level proofs of key results in logic: the soundness and completeness of proof systems for variations of first-order logic. For the classical completeness result, we first establish an abstract property of possibly infinite derivation trees. The abstract proof can be instantiated for a wide range of Gentzen and tableau systems for various flavors of first-order logic. Soundness becomes interesting as soon as one allows infinite proofs of first-order formulas. This forms the subject of several cyclic proof systems for first-order logic augmented with inductive predicate definitions studied in the literature. All the discussed results are formalized using Isabelle/HOL's recently introduced support for codatatypes and corecursion. The development illustrates some unique features of Isabelle/HOL's new coinductive specification language such as nesting through non-free types and mixed recursion-corecursion.

31 citations


Book ChapterDOI
24 Jul 2017
TL;DR: Montre as mentioned in this paper is a monitoring tool to search patterns specified by timed regular expressions over real-time behaviors, which can be used for analyzing and reasoning about cyber-physical systems.
Abstract: We present Montre, a monitoring tool to search patterns specified by timed regular expressions over real-time behaviors. We use timed regular expressions as a compact, natural, and highly-expressive pattern specification language for monitoring applications involving quantitative timing constraints. Our tool essentially incorporates online and offline timed pattern matching algorithms so it is capable of finding all occurrences of a given pattern over both logged and streaming behaviors. Furthermore, Montre is designed to work with other tools via standard interfaces to perform more complex and versatile tasks for analyzing and reasoning about cyber-physical systems. As the first of its kind, we believe Montre will enable a new line of inquiries and techniques in these fields.
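
For intuition, a timed pattern such as "p holds continuously for a duration within [low, high]" can be matched naively as sketched below; this is neither Montre's input language nor its matching algorithm, and the signal encoding is invented.

# Sketch (not Montre itself): naive matching of the timed pattern
# "p holds continuously for a duration within [low, high]"
# over a piecewise-constant signal given as (start, end, value) segments.

def maximal_true_intervals(segments):
    intervals, current = [], None
    for start, end, value in segments:
        if value:
            current = (current[0], end) if current else (start, end)
        elif current:
            intervals.append(current)
            current = None
    if current:
        intervals.append(current)
    return intervals

def match(segments, low, high):
    # every sub-interval of a maximal true interval whose duration lies in
    # [low, high] is a match; here we report the maximal intervals that
    # admit at least one such sub-interval (i.e., are at least `low` long)
    return [(s, e) for s, e in maximal_true_intervals(segments) if e - s >= low]

signal = [(0, 1, False), (1, 4, True), (4, 6, False), (6, 12, True)]
print(match(signal, low=2, high=5))   # [(1, 4), (6, 12)]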

30 citations


Book ChapterDOI
13 Sep 2017
TL;DR: The debugging and health management support through stream runtime monitoring techniques has proven highly beneficial for system design and development, and the project has identified usability improvements to the specification language and has influenced the design of the language.
Abstract: Unmanned Aircraft Systems (UAS) with autonomous decision-making capabilities are of increasing interest for a wide range of applications such as logistics and disaster recovery. In order to ensure the correct behavior of the system and to recognize hazardous situations or system faults, we applied stream runtime monitoring techniques within the DLR ARTIS (Autonomous Research Testbed for Intelligent System) family of unmanned aircraft. We present our experience from specification elicitation, instrumentation, offline log-file analysis, and online monitoring on the flight computer on a test rig. The debugging and health management support through stream runtime monitoring techniques has proven highly beneficial for system design and development. At the same time, the project has identified usability improvements to the specification language, and has influenced the design of the language.

Journal ArticleDOI
TL;DR: This article extends transformation methods based on integer term rewriting systems to handle arbitrary data types, global variables, function calls, and arrays, and to encode safety checks, and shows that it can automatically verify memory safety and prove correctness of realistic functions.
Abstract: This article aims to develop a verification method for procedural programs via a transformation into logically constrained term rewriting systems (LCTRSs). To this end, we extend transformation methods based on integer term rewriting systems to handle arbitrary data types, global variables, function calls, and arrays, and to encode safety checks. Then we adapt existing rewriting induction methods to LCTRSs and propose a simple yet effective method to generalize equations. We show that we can automatically verify memory safety and prove correctness of realistic functions. Our approach proves equivalence between two implementations; thus, in contrast to other works, we do not require an explicit specification in a separate specification language.
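
To fix intuition, a logically constrained rewrite rule pairs a rewrite step with a constraint over the data theory, and equivalence is stated as a goal between two implementations; the rules below are a generic illustration in this style, not examples taken from the article.

% Illustrative LCTRS rules for two implementations of integer summation,
% in the usual "rule [constraint]" format (example only, not from the article):
\begin{align*}
  \mathsf{sum}(x) &\to 0                      &[\, x \le 0 \,] \\
  \mathsf{sum}(x) &\to x + \mathsf{sum}(x-1)  &[\, x > 0 \,] \\
  \mathsf{sumAcc}(x) &\to \mathsf{loop}(x, 0) \\
  \mathsf{loop}(x, a) &\to a                          &[\, x \le 0 \,] \\
  \mathsf{loop}(x, a) &\to \mathsf{loop}(x-1, a + x)  &[\, x > 0 \,]
\end{align*}
% Rewriting induction is then used to prove an equivalence goal such as
% sum(x) \approx sumAcc(x) for all integers x, without a separate specification.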

Journal ArticleDOI
John Fox
TL;DR: The foundations of the CREDO model are described, the main theoretical, technical and clinical contributions are summarized, and benefits of the cognitive approach are discussed.

Book ChapterDOI
27 Nov 2017
TL;DR: This paper presents an approach for rapidly adjustable embedded trace online monitoring of multi-core systems, called RETOM, which employs a novel online reconstruction technique that makes the program run available outside the SoC and allows for evaluating a specification formulated in the stream-based specification language TeSSLa in real time.
Abstract: This paper presents an approach for rapidly adjustable embedded trace online monitoring of multi-core systems, called RETOM. Today, most commercial multi-core SoCs provide accurate runtime information through an embedded trace unit without affecting program execution. Available debugging solutions can use it to reconstruct the run offline, but usually for up to a few seconds only. RETOM employs a novel online reconstruction technique that makes the program run available outside the SoC and allows for evaluating a specification formulated in the stream-based specification language TeSSLa in real time. The necessary computing performance is provided by an FPGA-based event processing system. In contrast to other hardware-based runtime verification techniques, changing the specification requires no circuit synthesis and thus seconds rather than minutes or hours. Therefore, iterated testing and property adjustment during development and debugging becomes feasible while preserving the option of arbitrarily extending observation time, which may be necessary to detect rarely occurring errors. Experiments show the feasibility of the approach.

Book ChapterDOI
01 Jan 2017
TL;DR: This chapter describes an annotation scheme for the markup of spatial relations, both static and dynamic, as expressed in text and other media, and reviews the annotation development process, and the adoption of the initial specification ISOspace.
Abstract: An understanding of spatial information in natural language is necessary for many computational linguistics and artificial intelligence applications. In this chapter, we describe an annotation scheme for the markup of spatial relations, both static and dynamic, as expressed in text and other media. The desiderata for such a specification language are presented along with what representational mechanisms are required for such a specification to be successful. We review the annotation development process and the adoption of the initial specification as an ISO standard, renamed ISOspace. We conclude with a discussion of the use of ISOspace in the context of the shared task SpaceEval 2015.

Book ChapterDOI
18 May 2017
TL;DR: This paper presents a pattern language for building stable analysis patterns; the objective of this language is to propose a way of achieving stability while constructing analysis patterns.
Abstract: Software analysis patterns are believed to play a major role in reducing the cost and shortening the time of software product lifecycles. However, analysis patterns have not realized their full potential. One of the common problems with today's analysis patterns is the lack of stability. In many cases, analysis patterns that model specific problems fail to model the same problem when it appears in a different context, forcing software developers to analyze the problem from scratch. As a result, the reusability of the pattern diminishes. This paper presents a pattern language for building stable analysis patterns. The objective of this language is to propose a way of achieving stability while constructing analysis patterns.

Journal ArticleDOI
12 Oct 2017
TL;DR: The ableC framework as mentioned in this paper allows programmers to import new, domain-specific, independently developed language features into their programming language, in this case C. This is possible due to two modular analyses that extension developers can apply to their language extension to check its composability.
Abstract: This paper describes an extensible language framework, ableC, that allows programmers to import new, domain-specific, independently-developed language features into their programming language, in this case C. Most importantly, this framework ensures that the language extensions will automatically compose to form a working translator that does not terminate abnormally. This is possible due to two modular analyses that extension developers can apply to their language extension to check its composability. Specifically, these ensure that the composed concrete syntax specification is non-ambiguous and the composed attribute grammar specifying the semantics is well-defined. This assurance and the expressiveness of the supported extensions is a distinguishing characteristic of the approach. The paper describes a number of techniques for specifying a host language, in this case C at the C11 standard, to make it more amenable to language extension. These include techniques that make additional extensions pass these modular analyses, refactorings of the host language to support a wider range of extensions, and the addition of semantic extension points to support, for example, operator overloading and non-local code transformations.

Proceedings ArticleDOI
17 Sep 2017
TL;DR: The results of the evaluation show the feasibility of applying the model-driven approach for trace checking in realistic settings: TEMPSY-CHECK scales linearly with respect to the length of the input trace and can analyze traces with one million events in about two seconds.
Abstract: Trace checking is a procedure for evaluating requirements over a log of events produced by a system. This paper deals with the problem of performing trace checking of temporal properties expressed in TemPsy, a pattern-based specification language. The goal of the paper is to present a scalable and practical solution for trace checking, which can be used in contexts where relying on model-driven engineering standards and tools for property checking is a fundamental prerequisite. The main contributions of the paper are: a model-driven trace checking procedure, which relies on the efficient mapping of temporal requirements written in TemPsy into OCL constraints on a conceptual model of execution traces; the implementation of this trace checking procedure in the TEMPSY-CHECK tool; the evaluation of the scalability of TEMPSY-CHECK, applied to the verification of real properties derived from a case study of our industrial partner, including a comparison with a state-of-the-art alternative technology based on temporal logic. The results of the evaluation show the feasibility of applying our model-driven approach for trace checking in realistic settings: TEMPSY-CHECK scales linearly with respect to the length of the input trace and can analyze traces with one million events in about two seconds.
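
For intuition, checking a typical pattern-based temporal property over a log reduces to a single linear scan of the trace, which is consistent with the linear scaling reported above; the sketch below is not TemPsy syntax or the OCL mapping used by TEMPSY-CHECK, and the event names are invented.

# Sketch (not TEMPSY-CHECK): linear-time check of a "response" pattern,
# "every 'request' is eventually followed by a 'reply'", over a trace given
# as an ordered list of event names.

def check_response(trace, trigger="request", response="reply"):
    pending = False                  # is there a trigger not yet answered?
    for event in trace:
        if event == trigger:
            pending = True
        elif event == response:
            pending = False
    return not pending

print(check_response(["request", "reply", "request", "reply"]))   # True
print(check_response(["request", "reply", "request"]))            # False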

Journal ArticleDOI
TL;DR: This paper introduces a novel class of hierarchical state machines, called Dynamic STate Machines (DSTMs), and proposes an approach for modelling and validating railway control systems, based on the new specification language.

Proceedings ArticleDOI
09 Jan 2017
TL;DR: This work presents a formal approach for log-analysis and monitoring for the DLR ARTIS framework using the stream-based specification language LOLA, currently developed at Saarland University, for the runtime monitoring of formal specifications.
Abstract: System health management is an important feature of autonomy, enhancing consistency checks, overall system robustness and even some degree of self-awareness. Seemingly unrelated, debugging and analysis of such complex systems is another challenge during development that should not be underrated. We propose that the so-called runtime monitoring of relevant properties and system requirements is a viable technique to support both aforementioned concepts. A suitable monitoring approach for a cyber-physical system has to be efficient and capable of supervising various specifications, possibly relating different data sources and data history. We present a formal approach for log-analysis and monitoring for the DLR ARTIS framework using the stream-based specification language LOLA, currently developed at Saarland University, for the runtime monitoring of formal specifications. We have evaluated this approach by specifying relevant properties as LOLA stream equations. While we have identified a number of possible improvements in the specification language, we have demonstrated, even with the current language, that online and offline monitoring of relevant properties is indeed possible and gives engineers a powerful tool for debugging as well as implementing health management concepts.

Journal ArticleDOI
TL;DR: This work investigates how to apply ontologies for agent-oriented software engineering by presenting a new modelling approach where multiagent systems are designed using the proposed OntoMAS ontology and describing techniques to help programmers bring their concepts into code and also generate code automatically from instantiated ontology models.
Abstract: Model-driven engineering provides abstractions and notations for improving the understanding and for supporting the modelling, coding, and verification of applications for specific domains. Ontologies, on the other hand, provide formal and explicit definitions of shared conceptualisations and enable the use of semantic reasoning. Although these areas have been developed by different communities, important synergies can be achieved when both are combined. These advantages can be explored in the development of multi-agent systems, given their complexity and the need for integrating several components that are often addressed from different angles. This work investigates how to apply ontologies for agent-oriented software engineering. Initially, we present a new modelling approach where multiagent systems are designed using the proposed OntoMAS ontology. Then, we describe techniques, implemented in a tool, to help programmers bring their concepts into code and also generate code automatically from instantiated ontology models. Several advantages can be obtained from these new approaches to model and code multi-agent systems, such as semantic reasoning to carry out inferences and verification mechanisms. But the main advantage is the unified high (knowledge) level specification language that allows modelling the three dimensions that are united in the JaCaMo framework, so that system specifications can be better communicated across development teams. The evaluations of these proposals indicate that they contribute to the different aspects of agent-oriented software engineering, such as the specification, verification, and programming of these systems.

Journal ArticleDOI
TL;DR: This research presents a method for integrating human considerations into system models through human-centered design, and new views allow systems engineers and human factors engineers to effectively communicate the role of the user during early system design trades.
Abstract: The human user is important to consider during system design. However, common system design models, such as the system modeling language, typically represent human users and operators as external a...

Proceedings ArticleDOI
01 Sep 2017
TL;DR: This paper proposes a formalization process for requirements specification and test statements, allowing us to detect redundant statements and thus reduce the efforts for specification and validation.
Abstract: Automotive systems are constantly increasing in complexity and size. Besides the increase of requirements specifications and related test specifications due to new systems and higher system interaction, we observe an increase of redundant specifications. As the predominant specification language (both for requirements and test cases) is still natural text, it is not easy to detect these redundancies. In principle, to detect these redundancies, each statement has to be compared to all others. This proves to be difficult because of the number and informal expression of the statements. In this paper we propose a solution to the problem of detecting redundant specification and test statements described in structured natural language. We propose a formalization process for requirements specification and test statements, allowing us to detect redundant statements and thus reduce the efforts for specification and validation. Specification Pattern Systems and Linear Temporal Logic provide the basis for our process. We evaluated the method in the context of Mercedes-Benz Passenger Car Development. The results show that for the investigated sample set of test statements, we could detect about 30% of test steps as redundant. This indicates the savings potential of our approach.
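
As a worked example of the kind of formalization this enables (the statements and formulas are invented, not taken from the Mercedes-Benz data set): once two test statements are instantiated from a specification pattern and translated to Linear Temporal Logic, redundancy becomes a validity check between the formulas.

% Illustrative example of redundancy detection via LTL (not from the paper's data set):
% Statement A: "Whenever the door is open, the warning lamp shall eventually be on."
% Statement B: "Whenever the door is open while driving, the warning lamp shall eventually be on."
\begin{align*}
  \varphi_A &= \mathbf{G}\,\bigl(\mathit{DoorOpen} \rightarrow \mathbf{F}\,\mathit{LampOn}\bigr) \\
  \varphi_B &= \mathbf{G}\,\bigl((\mathit{DoorOpen} \wedge \mathit{Driving}) \rightarrow \mathbf{F}\,\mathit{LampOn}\bigr)
\end{align*}
% Since \varphi_A \rightarrow \varphi_B is valid, statement B is redundant with
% respect to statement A and the corresponding test steps can be dropped.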

Journal ArticleDOI
TL;DR: This article provides a comprehensive description of this method, and presents several extensions that significantly improve the scalability of PORPLE, which include a novel algorithm design for efficiently searching for the best data placements, the use of active profiling for reducing the online-profiling overhead, and a systematic examination of a path-based performance model.
Abstract: Modern GPUs feature complex memory system designs. One GPU may contain many types of memory of different properties. The best way to place data in memory is sensitive to many factors (e.g., program inputs, architectures), making portable optimizations of GPU data placement a difficult challenge. PORPLE is a recently proposed method that overcomes the difficulties by enabling online optimizations of data placement through a three-way synergy: a specification language for memory system description, a compiler framework for data access analysis and code staging, and a runtime library for efficiently finding and materializing data placement on the fly. This article provides a comprehensive description of this method, and presents several extensions that significantly improve the scalability of PORPLE, which include a novel algorithm design for efficiently searching for the best data placements, the use of active profiling for reducing the online-profiling overhead, and a systematic examination of a path-based performance model. By automatically tailoring data placements for each execution of a GPU program, the enhanced PORPLE brings significant speedups (1.72X on average) to many GPU kernels across GPU architectures and program inputs.

Proceedings ArticleDOI
01 Jan 2017
TL;DR: This work has created a proof of concept of a co-simulation framework based on the High Level Architecture (HLA) and Functional Mock-up Interface (FMI) standards and demonstrates the incorporation of software models expressed in the Parallel Object-Oriented Specification Language (POOSL) in this framework.
Abstract: The development of cyber-physical systems (CPSs) with mechanical, electrical and software components requires a multi-disciplinary approach. Moreover, the use of models is important to support trade-offs and design decisions early in the development process. Since the different engineering disciplines use different modelling languages and tools, this calls for a co-simulation framework for discrete and continuous models. The main challenge is the proper synchronisation of time and data between these models. Given the increasing importance of software in CPSs, our work concentrates on the incorporation of software models in such a framework. We have created a proof of concept of a co-simulation framework based on the High Level Architecture (HLA) and Functional Mock-up Interface (FMI) standards. We demonstrate the incorporation of software models expressed in the Parallel Object-Oriented Specification Language (POOSL) in this framework. This allows the use of virtual prototypes of CPSs early in the development process.

Proceedings ArticleDOI
23 Oct 2017
TL;DR: The formal semantics of FlowSpec is defined, which is rooted in Monotone Frameworks, and a prototype implementation of the language is discussed, built in the Spoofax Language Workbench.
Abstract: We present FlowSpec, a declarative specification language for the domain of dataflow analysis. FlowSpec has declarative support for the specification of control flow graphs of programming languages, and dataflow analyses on these control flow graphs. We define the formal semantics of FlowSpec, which is rooted in Monotone Frameworks. We also discuss a prototype implementation of the language, built in the Spoofax Language Workbench. Finally, we evaluate the expressiveness and conciseness of the language with two case studies. These case studies are analyses for Green-Marl, an industrial, domain-specific language for graph processing. The first case study is a classical dataflow analysis, scaled to this full language. The second case study is a domain-specific analysis of Green-Marl.
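
For readers unfamiliar with Monotone Frameworks, the dataflow equations that such a specification ultimately denotes have the standard textbook form below (this is the classical formulation, not FlowSpec notation).

% Standard Monotone Framework equations (textbook form, not FlowSpec syntax):
% for each node \ell of the control flow graph, with lattice join \sqcup,
% transfer function f_\ell, and extremal value \iota at the entry nodes E:
\begin{align*}
  \mathit{Analysis}_{\circ}(\ell) &=
    \begin{cases}
      \iota & \text{if } \ell \in E \\[2pt]
      \bigsqcup \{\, \mathit{Analysis}_{\bullet}(\ell') \mid (\ell', \ell) \in \mathit{flow} \,\} & \text{otherwise}
    \end{cases} \\
  \mathit{Analysis}_{\bullet}(\ell) &= f_{\ell}\bigl(\mathit{Analysis}_{\circ}(\ell)\bigr)
\end{align*}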

Book ChapterDOI
10 Oct 2017
TL;DR: This work introduces and investigates a weighted propositional configuration logic over a commutative semiring and extends the weighted configuration logic to its first-order level and succeeds in describing architecture styles equipped with quantitative characteristics.
Abstract: We introduce and investigate a weighted propositional configuration logic over a commutative semiring. Our logic, which is proved to be sound and complete, is intended to serve as a specification language for software architectures with quantitative features. We extend the weighted configuration logic to its first-order level and succeed in describing architecture styles equipped with quantitative characteristics. We provide interesting examples of weighted architecture styles. Surprisingly, we can construct a formula, in our logic, which describes a classical problem of a different nature than that of software architectures.
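
For context, a commutative semiring supplies the two operations that interpret the logic quantitatively; the instances below are standard examples, not specific to this paper.

% A commutative semiring (K, \oplus, \otimes, 0, 1) has both operations
% commutative and associative, with \otimes distributing over \oplus.
% Two standard instances:
\begin{align*}
  &(\mathbb{N}, +, \cdot, 0, 1)
    && \text{counting semiring} \\
  &(\mathbb{R}_{\ge 0} \cup \{\infty\}, \min, +, \infty, 0)
    && \text{tropical semiring, e.g.\ minimal cost of an architecture}
\end{align*}
% In weighted logics of this kind, disjunction is typically interpreted by \oplus
% and conjunction by \otimes, so a formula evaluates to a value in K (e.g.\ a cost)
% rather than to true/false.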

Journal ArticleDOI
TL;DR: A new property of relative observability is presented, and based on this property, two algorithms are proposed: the first one, which has polynomial complexity, verifies if a regular language is relatively observable; the second algorithm computes the supremal relatively observable sublanguage of a given regular language.
Abstract: In this technical note, we present a new property of relative observability, and based on this property, we propose two algorithms: the first one, which has polynomial complexity, verifies if a regular language is relatively observable; the second algorithm computes the supremal relatively observable sublanguage of a given regular language. Although the latter has exponential complexity, it is more efficient than a recently proposed algorithm, which has double exponential complexity. Moreover, the algorithm proposed here has polynomial complexity when the automaton that marks the specification language is a state partition automaton.

Journal ArticleDOI
TL;DR: This study aims to understand how programming language syntax is employed in actual development and to explore potential applications, based on the results of a syntax usage analysis of Java, a modern, mature, and widely used programming language.

Book ChapterDOI
20 Sep 2017
TL;DR: The concepts and logical foundations of generalised test tables are presented – a specification language for reactive systems accessible for practitioners and enabling a table to capture not just a single test case but a family of similar behavioural cases.
Abstract: In industrial practice today, correctness of software is rarely verified using formal techniques. One reason is the lack of specification languages for this application area that are both comprehensible and sufficiently expressive. We present the concepts and logical foundations of generalised test tables – a specification language for reactive systems accessible for practitioners. Generalised test tables extend the concept of test tables, which are already frequently used in quality management of reactive systems. The main idea is to allow more general table entries, thus enabling a table to capture not just a single test case but a family of similar behavioural cases. The semantics of generalised test tables is based on a two-party game over infinite words.
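
A hedged sketch of the idea, in which a table row constrains inputs and outputs over a range of cycles instead of fixing concrete values (the row format, names, and the simplified conformance check below are invented for illustration; the chapter defines the actual notation and a game-based semantics over infinite words):

# Invented illustration (not the chapter's notation): a generalised test table
# as a list of rows; each row constrains inputs/outputs with predicates instead
# of concrete values and may cover a range of cycles, so one table describes a
# whole family of behavioural cases. The conformance check is deliberately
# simplified compared to the game-based semantics.

rows = [
    {"input": lambda i: i["reset"] == 1, "output": lambda o: o["q"] == 0, "dur": (1, 1)},
    {"input": lambda i: i["x"] > 10,     "output": lambda o: o["q"] == 1, "dur": (1, 3)},
]

def conforms(trace, rows):
    # trace: list of (inputs, outputs) dictionaries, one pair per cycle
    t = 0
    for row in rows:
        low, high = row["dur"]
        count = 0
        while t < len(trace) and count < high and row["input"](trace[t][0]):
            if not row["output"](trace[t][1]):
                return False          # outputs violated while the row applies
            t += 1
            count += 1
        if count < low:
            return False              # the row did not cover enough cycles
    return t == len(trace)            # every cycle was matched by some row

trace = [({"reset": 1, "x": 0}, {"q": 0}),
         ({"reset": 0, "x": 42}, {"q": 1}),
         ({"reset": 0, "x": 42}, {"q": 1})]
print(conforms(trace, rows))          # True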