scispace - formally typeset

Showing papers on "Specification language published in 2010"


Proceedings Article
11 Jul 2010
TL;DR: This work addresses the problem of selecting non-domain-specific language model training data to build auxiliary language models for use in tasks such as machine translation, by comparing the cross-entropy, according to domain-specific and non-domain-specific language models, for each sentence of the text source used to produce the latter language model.
Abstract: We address the problem of selecting non-domain-specific language model training data to build auxiliary language models for use in tasks such as machine translation. Our approach is based on comparing the cross-entropy, according to domain-specific and non-domain-specific language models, for each sentence of the text source used to produce the latter language model. We show that this produces better language models, trained on less data, than both random data selection and two other previously proposed methods.
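The cross-entropy difference criterion described above can be sketched in a few lines of Python. This toy version uses add-one-smoothed unigram models in place of the n-gram language models used in the paper, and all function names are illustrative:

```python
import math
from collections import Counter

def unigram_lm(corpus):
    """Build an add-one-smoothed unigram model from a list of sentences."""
    counts = Counter(w for s in corpus for w in s.split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 reserves mass for unseen words
    return lambda w: (counts[w] + 1) / (total + vocab)

def cross_entropy(sentence, lm):
    """Per-word cross-entropy of a sentence under a unigram model (bits)."""
    words = sentence.split()
    return -sum(math.log2(lm(w)) for w in words) / len(words)

def select(pool, in_domain, threshold):
    """Keep pool sentences whose in-domain minus general cross-entropy
    difference is below the threshold (lower = more in-domain-like),
    sorted from most to least in-domain-like."""
    lm_in = unigram_lm(in_domain)
    lm_out = unigram_lm(pool)
    scored = [(cross_entropy(s, lm_in) - cross_entropy(s, lm_out), s)
              for s in pool]
    return [s for score, s in sorted(scored) if score < threshold]
```

Sentences resembling the in-domain corpus get a low (often negative) difference and are ranked first; the threshold controls how much of the general pool is kept.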

605 citations


Proceedings ArticleDOI
17 Oct 2010
TL;DR: The architecture of Spoofax is described and idioms for high-level specifications of language semantics using rewrite rules are introduced, showing how analyses can be reused for transformations, code generation, and editor services such as error marking, reference resolving, and content completion.
Abstract: Spoofax is a language workbench for efficient, agile development of textual domain-specific languages with state-of-the-art IDE support. Spoofax integrates language processing techniques for parser generation, meta-programming, and IDE development into a single environment. It uses concise, declarative specifications for languages and IDE services. In this paper we describe the architecture of Spoofax and introduce idioms for high-level specifications of language semantics using rewrite rules, showing how analyses can be reused for transformations, code generation, and editor services such as error marking, reference resolving, and content completion. The implementation of these services is supported by language-parametric editor service classes that can be dynamically loaded by the Eclipse IDE, allowing new languages to be developed and used side-by-side in the same Eclipse environment.

422 citations


Book ChapterDOI
29 Nov 2010
TL;DR: A subject reduction property is proved which shows that well-typedness is preserved during execution; in particular, "method not understood" errors do not occur at runtime for well-typed ABS models.
Abstract: This paper presents ABS, an abstract behavioral specification language for designing executable models of distributed object-oriented systems. The language combines advanced concurrency and synchronization mechanisms for concurrent object groups with a functional language for modeling data. ABS uses asynchronous method calls, interfaces for encapsulation, and cooperative scheduling of method activations inside concurrent objects. This feature combination results in a concurrent object-oriented model which is inherently compositional. We discuss central design issues for ABS and formalize the type system and semantics of Core ABS, a calculus with the main features of ABS. For Core ABS, we prove a subject reduction property which shows that well-typedness is preserved during execution; in particular, "method not understood" errors do not occur at runtime for well-typed ABS models. Finally, we briefly discuss the tool support developed for ABS.

349 citations


Book
10 Oct 2010
TL;DR: Understanding Concurrent Systems presents a comprehensive introduction to CSP, and introduces other views of concurrency, using CSP to model and explain these, and explores the practical application of CSP.
Abstract: CSP notation has been used extensively for teaching and applying concurrency theory, ever since the publication of the text Communicating Sequential Processes by C.A.R. Hoare in 1985. Both a programming language and a specification language, the theory of CSP helps users to understand concurrent systems, and to decide whether a program meets its specification. As a member of the family of process algebras, the concepts of communication and interaction are presented in an algebraic style. An invaluable reference on the state of the art in CSP, Understanding Concurrent Systems also serves as a comprehensive introduction to the field, in addition to providing material for a number of more advanced courses. A first point of reference for anyone wanting to use CSP or learn about its theory, the book also introduces other views of concurrency, using CSP to model and explain these. The text is fully integrated with CSP-based tools such as FDR, and describes how to create new tools based on FDR. Most of the book relies on no theoretical background other than a basic knowledge of sets and sequences. Sophisticated mathematical arguments are avoided whenever possible. 
Topics and features: presents a comprehensive introduction to CSP; discusses the latest advances in CSP, covering topics of operational semantics, denotational models, finite observation models and infinite-behaviour models, and algebraic semantics; explores the practical application of CSP, including timed modelling, discrete modelling, parameterised verifications and the state explosion problem, and advanced topics in the use of FDR; examines the ability of CSP to describe and enable reasoning about parallel systems modelled in other paradigms; covers a broad variety of concurrent systems, including combinatorial, timed, priority-based, mobile, shared variable, statecharts, buffered and asynchronous systems; contains exercises and case studies to support the text; supplies further tools and information at the associated website: http://www.comlab.ox.ac.uk/ucs/. From undergraduate students of computer science in need of an introduction to the area, to researchers and practitioners desiring a more in-depth understanding of theory and practice of concurrent systems, this broad-ranging text/reference is essential reading for anyone interested in Hoare's CSP.

348 citations


Proceedings ArticleDOI
14 Dec 2010
TL;DR: This work focuses on streaming applications, i.e. applications that can be modeled as data-flow graphs, and on language features that allow a designer to describe circuits in a more natural and concise way than is possible with the language elements found in traditional hardware description languages.
Abstract: Today the hardware for embedded systems is often specified in VHDL. However, VHDL describes the system at a rather low level, which is cumbersome and may lead to design faults in large real-life applications. There is a need for higher-level abstraction mechanisms. In the embedded systems group of the University of Twente we are working on systematic and transformational methods to design hardware architectures, both multi-core and single-core. The main line in this approach is to start with a straightforward (often mathematical) specification of the problem. The next step is to find some adequate transformations on this specification, in particular to find specific optimizations, to be able to distribute the application over different cores. The result of these transformations is then translated into the functional programming language Haskell, since Haskell is close to mathematics and such a translation often is straightforward. Besides, the Haskell code is executable, so one immediately has a simulation of the intended system. Next, the resulting Haskell specification is given to a compiler, called CλaSH (for CAES Language for Synchronous Hardware), which translates the specification into VHDL. The resulting VHDL is synthesizable, so from there on standard VHDL tooling can be used for synthesis. In this work we primarily focus on streaming applications, i.e. applications that can be modeled as data-flow graphs. At the moment the CλaSH system is ready in prototype form, and in the presentation we will give several examples of how it can be used. In these examples it will be shown that the specification code is clear and concise. Furthermore, it is possible to use powerful abstraction mechanisms, such as polymorphism, higher-order functions, pattern matching, lambda abstraction, and partial application. These features allow a designer to describe circuits in a more natural and concise way than is possible with the language elements found in traditional hardware description languages. In addition, we will give some examples of transformations that are possible in a mathematical specification, and which do not suffer from the problems encountered in, e.g., automatic parallelization of nested for-loops in C programs.

340 citations


Proceedings ArticleDOI
17 Jan 2010
TL;DR: The proposed technique synthesizes programs for complicated arithmetic algorithms including Strassen's matrix multiplication and Bresenham's line drawing; several sorting algorithms; and several dynamic programming algorithms using verification tools built in the VS3 project.
Abstract: This paper describes a novel technique for the synthesis of imperative programs. Automated program synthesis has the potential to make programming and the design of systems easier by allowing programs to be specified at a higher level than executable code. In our approach, which we call proof-theoretic synthesis, the user provides an input-output functional specification, a description of the atomic operations in the programming language, and a specification of the synthesized program's looping structure, allowed stack space, and bound on usage of certain operations. Our technique synthesizes a program, if there exists one, that meets the input-output specification and uses only the given resources. The insight behind our approach is to interpret program synthesis as generalized program verification, which allows us to bring verification tools and techniques to program synthesis. Our synthesis algorithm works by creating a program with unknown statements, guards, inductive invariants, and ranking functions. It then generates constraints that relate the unknowns and enforce three kinds of requirements: partial correctness, loop termination, and well-formedness conditions on program guards. We formalize the requirements that program verification tools must meet to solve these constraints, and use tools from prior work as our synthesizers. We demonstrate the feasibility of the proposed approach by synthesizing programs in three different domains: arithmetic, sorting, and dynamic programming. Using verification tools that we previously built in the VS3 project, we are able to synthesize programs for complicated arithmetic algorithms including Strassen's matrix multiplication and Bresenham's line drawing; several sorting algorithms; and several dynamic programming algorithms.
For these programs, the median time for synthesis is 14 seconds, and the ratio of synthesis to verification time ranges between 1x and 92x (with a median of 7x), illustrating the potential of the approach.

322 citations


Proceedings ArticleDOI
12 Jul 2010
TL;DR: This paper presents TESLA, a complex event specification language that provides high expressiveness and flexibility in a rigorous framework, by offering content and temporal filters, negations, timers, aggregates, and fully customizable policies for event selection and consumption.
Abstract: The need for timely processing large amounts of information, flowing from the peripheral to the center of a system, is common to different application domains, and it has justified the development of several languages to describe how such information has to be processed. In this paper, we analyze such languages showing how most approaches lack the expressiveness required for the applications we target, or do not provide the precise semantics required to clearly state how the system should behave. Moving from these premises, we present TESLA, a complex event specification language. Each TESLA rule considers incoming data items as notifications of events and defines how certain patterns of events cause the occurrence of others, said to be "complex". TESLA has a simple syntax and a formal semantics, given in terms of a first order, metric temporal logic. It provides high expressiveness and flexibility in a rigorous framework, by offering content and temporal filters, negations, timers, aggregates, and fully customizable policies for event selection and consumption. The paper ends by showing how TESLA rules can be interpreted by a processing system, introducing an efficient event detection algorithm based on automata.
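As a rough illustration of the kind of pattern such a rule language captures (this is not TESLA's actual syntax or semantics), the following Python sketch detects a complex event whenever a `second` event follows a `first` event within a time window, using one simple selection and consumption policy; all names are hypothetical:

```python
def detect(events, first, second, within):
    """Pair each `second` event with the oldest live `first` event that
    occurred at most `within` time units earlier; pairing consumes the
    `first` event (a single-selection, consuming policy)."""
    pending, matches = [], []  # pending: timestamps of unconsumed `first` events
    for etype, t in sorted(events, key=lambda e: e[1]):
        # expire partial matches that fell out of the time window
        pending = [p for p in pending if t - p <= within]
        if etype == first:
            pending.append(t)
        elif etype == second and pending:
            matches.append((pending.pop(0), t))
    return matches
```

Changing the expiry, selection ("oldest" vs. "newest"), and consumption choices yields different rule semantics, which is exactly the kind of policy TESLA makes explicit and customizable.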

214 citations


Journal ArticleDOI
TL;DR: A computational framework for automatic synthesis of decentralized communication and control strategies for a robotic team from global specifications, which are given as temporal and logic statements about visiting regions of interest in a partitioned environment is presented.
Abstract: We present a computational framework for automatic synthesis of decentralized communication and control strategies for a robotic team from global specifications, which are given as temporal and logic statements about visiting regions of interest in a partitioned environment. We consider a purely discrete scenario, where the robots move among the vertices of a graph. However, by employing recent results on invariance and facet reachability for dynamical systems in environments with polyhedral partitions, the framework from this paper can be directly implemented for robots with continuous dynamics. While allowing for a rich specification language and guaranteeing the correctness of the solution, our approach is conservative in the sense that we might not find a solution, even if one exists. The overall amount of required computation is large. However, most of it is performed offline, before deployment. Illustrative simulations and experimental results are included.

189 citations


Proceedings ArticleDOI
05 Jul 2010
TL;DR: This paper presents USDL (Unified Service Description Language), a specification language to describe services from a business, operational and technical perspective, which plays a major role in the Internet of Services to describe tradable services which are advertised in electronic marketplaces.
Abstract: Service-oriented Architectures (SOA) and Web services leverage the technical value of solutions in the areas of distributed systems and cross-enterprise integration. The emergence of Internet marketplaces for business services is driving the need to describe services not only at a technical level, but also from a business and operational perspective. While SOA and Web services reside in an IT layer, organizations owning Internet marketplaces need to advertise and trade business services, which reside in a business layer. As a result, the gap between business and IT needs to be closed. This paper presents USDL (Unified Service Description Language), a specification language to describe services from a business, operational and technical perspective. USDL plays a major role in the Internet of Services to describe tradable services which are advertised in electronic marketplaces. The language has been tested using two service marketplaces as use cases.

151 citations


Proceedings ArticleDOI
22 Mar 2010
TL;DR: It is argued that for modern object-oriented languages, using an embedding of contracts as code is a better approach and the numerous advantages and the technical challenges as well as the status of tools that consume the embedded contracts.
Abstract: Specifying application interfaces (APIs) with information that goes beyond method argument and return types is a long-standing quest of programming language researchers and practitioners. The number of type system extensions or specification languages is a testament to that. Unfortunately, the number of such systems is also roughly equal to the number of tools that consume them. In other words, every tool comes with its own specification language. In this paper we argue that for modern object-oriented languages, using an embedding of contracts as code is a better approach. We exemplify our embedding of Code Contracts on the Microsoft managed execution platform (.NET) using the C# programming language. The embedding works as well in Visual Basic. We discuss the numerous advantages of our approach and the technical challenges, as well as the status of tools that consume the embedded contracts.
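The "contracts as code" idea can be imitated in any mainstream language. Here is a minimal Python sketch using decorators in place of the .NET Code Contracts library calls; the decorator names `requires` and `ensures` are illustrative, not part of any real API:

```python
import functools

def requires(pred, msg="precondition failed"):
    """Check a precondition on the arguments before the call."""
    def deco(f):
        @functools.wraps(f)
        def wrapper(*args, **kw):
            assert pred(*args, **kw), msg
            return f(*args, **kw)
        return wrapper
    return deco

def ensures(pred, msg="postcondition failed"):
    """Check a postcondition on the result after the call."""
    def deco(f):
        @functools.wraps(f)
        def wrapper(*args, **kw):
            result = f(*args, **kw)
            assert pred(result), msg
            return result
        return wrapper
    return deco

@requires(lambda xs: len(xs) > 0, "list must be non-empty")
@ensures(lambda r: r >= 0, "result must be non-negative")
def max_abs(xs):
    return max(abs(x) for x in xs)
```

Because the contracts are ordinary code in the host language, they need no separate parser, stay type-checked along with the program, and are visible to any tool that can read the program itself, which is the paper's central argument.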

131 citations


Journal ArticleDOI
TL;DR: A new interactive approach to prune and filter discovered rules and proposes the Rule Schema formalism extending the specification language proposed by Liu et al. for user expectations in order to improve the integration of user knowledge in the postprocessing task.
Abstract: In Data Mining, the usefulness of association rules is strongly limited by the huge amount of delivered rules. To overcome this drawback, several methods were proposed in the literature such as itemset concise representations, redundancy reduction, and postprocessing. However, being generally based on statistical information, most of these methods do not guarantee that the extracted rules are interesting for the user. Thus, it is crucial to help the decision-maker with an efficient postprocessing step in order to reduce the number of rules. This paper proposes a new interactive approach to prune and filter discovered rules. First, we propose to use ontologies in order to improve the integration of user knowledge in the postprocessing task. Second, we propose the Rule Schema formalism extending the specification language proposed by Liu et al. for user expectations. Furthermore, an interactive framework is designed to assist the user throughout the analysis task. Applying our new approach over voluminous sets of rules, we were able, by integrating domain expert knowledge in the postprocessing step, to reduce the number of rules to several dozens or less. Moreover, the quality of the filtered rules was validated by the domain expert at various points in the interactive process.

Book ChapterDOI
01 Jan 2010
TL;DR: SysML is a UML profile that not only reuses a subset of UML 2.1.1 but also provides additional extensions to better fit SE’s specific needs.
Abstract: Systems modeling language (SysML) [187] is a modeling language dedicated to systems engineering applications. It is a UML profile that not only reuses a subset of UML 2.1.1 [186] but also provides additional extensions to better fit SE's specific needs. These extensions are mainly meant to address the requirements stated in the UML for SE request for proposal (RFP) [177]. It is intended to help specify and architect complex systems and their components, and to enable their analysis, design, verification, and validation. These systems may consist of heterogeneous components such as hardware, software, information, processes, personnel, and facilities [187].

Journal ArticleDOI
TL;DR: It is demonstrated how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions, and optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering are explored.
Abstract: We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude.

Proceedings ArticleDOI
03 Oct 2010
TL;DR: A declarative EMF model query framework using the graph pattern formalism as the query specification language, which can be executed instantly, independently of the complexity of the constraint and the size of the model.
Abstract: Model-driven development tools built on industry standard platforms, such as the Eclipse Modeling Framework (EMF), heavily utilize model queries in model transformation, well-formedness constraint validation and domain-specific model execution. As these queries are executed rather frequently in interactive modeling applications, they have a significant impact on runtime performance and end user experience. However, due to their complexity, these queries can be time consuming to implement and optimize on a case-by-case basis. Consequently, there is a need for a model query framework that combines an easy-to-use and concise declarative query formalism with high runtime performance. In this paper, we propose a declarative EMF model query framework using the graph pattern formalism as the query specification language. These graph patterns describe the arrangement and properties of model elements that correspond to, e.g. a well-formedness constraint, or an application context of a model transformation rule. For improved runtime performance, we employ incremental pattern matching techniques: matches of patterns are stored and incrementally maintained upon model manipulation. As a result, query operations can be executed instantly, independently of the complexity of the constraint and the size of the model. We demonstrate our approach in an industrial (AUTOSAR) model validation context and compare it against other solutions.
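The incremental idea can be shown with a deliberately tiny Python sketch (nothing like the Rete-style graph-pattern machinery of the actual framework): the match set is kept up to date on each model change, so reading the query result is a constant-time operation regardless of model size. All names here are made up:

```python
class IncrementalQuery:
    """Maintain, incrementally, the set of model elements matching a
    predicate: each change updates the cached match set, so querying
    is a constant-time read instead of a full model traversal."""
    def __init__(self, predicate):
        self.predicate = predicate
        self.matches = set()

    def on_change(self, element, value):
        """Called by the model on every attribute update."""
        if self.predicate(value):
            self.matches.add(element)
        else:
            self.matches.discard(element)

    def on_delete(self, element):
        self.matches.discard(element)
```

The trade-off is the one the paper names: memory for the stored matches, and bookkeeping on every model manipulation, in exchange for instant query results during interactive editing.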

Proceedings Article
27 Apr 2010
TL;DR: This paper proposes OpenSAFE, a system for enabling the arbitrary direction of traffic for security monitoring applications at line rates, and describes ALARMS, a flow specification language that greatly simplifies management of network monitoring appliances.
Abstract: Administrators of today's networks are highly interested in monitoring traffic for purposes of collecting statistics, detecting intrusions, and providing forensic evidence. Unfortunately, network size and complexity can make this a daunting task. Aside from the problems in analyzing network traffic for this information--an extremely difficult task itself--a more fundamental problem exists: how to route the traffic for network analysis in a robust, high performance manner that does not impact normal network traffic. Current solutions fail to address these problems in a manner that allows high performance and easy management. In this paper, we propose OpenSAFE, a system for enabling the arbitrary direction of traffic for security monitoring applications at line rates. Additionally, we describe ALARMS, a flow specification language that greatly simplifies management of network monitoring appliances. Finally, we describe a proof-of-concept implementation that we are currently undertaking to monitor traffic across our network.

Journal ArticleDOI
TL;DR: In this paper, it is shown that for many systems, in-place logging provides a satisfactory basis for postmortem “runtime” verification of logs, where the overhead is already included in system design.
Abstract: Runtime verification as a field faces several challenges. One key challenge is how to keep the overheads associated with its application low. This is especially important in real-time critical embedded applications, where memory and CPU resources are limited. Another challenge is that of devising expressive and yet user-friendly specification languages that can attract software engineers. In this paper, it is shown that for many systems, in-place logging provides a satisfactory basis for postmortem “runtime” verification of logs, where the overhead is already included in system design. Although this approach prevents an online reaction to detected errors, possible with traditional runtime verification, it provides a powerful tool for test automation and debugging—in this case, analysis of spacecraft telemetry by ground operations teams at NASA’s Jet Propulsion Laboratory. The second challenge is addressed in the presented work through a temporal specification language, designed in collaboration with Jet Propulsion Laboratory test engineers. The specification language allows for descriptions of relationships between data-rich events (records) common in logs, and is translated into a form of automata supporting data-parameterized states. The automaton language is inspired by the rule-based language of the RULER runtime verification system. A case study is presented illustrating the use of our LOGSCOPE tool by software test engineers for the 2011 Mars Science Laboratory mission.
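The flavor of such data-parameterized log checking (not LOGSCOPE's actual specification language) can be sketched in Python: the property "every dispatched command is eventually completed" is checked postmortem over a recorded log, with the command name acting as the parameter of the automaton state. The event names are hypothetical:

```python
def check_log(log):
    """Postmortem check over a list of (kind, command) records: every
    ('dispatch', cmd) must be followed by a ('complete', cmd) later in
    the log. Returns a list of violations with their log positions."""
    open_cmds = {}   # data-parameterized state: command -> dispatch index
    violations = []
    for i, (kind, cmd) in enumerate(log):
        if kind == "dispatch":
            open_cmds[cmd] = i
        elif kind == "complete":
            if cmd in open_cmds:
                del open_cmds[cmd]
            else:
                violations.append((i, "complete without dispatch", cmd))
    # anything still open at end of log was never completed
    violations += [(i, "never completed", c) for c, i in open_cmds.items()]
    return violations
```

Because the check runs over an existing log, it adds no runtime overhead to the system under test, which is the trade-off the paper highlights.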

Book ChapterDOI
15 Jul 2010
TL;DR: In this paper, the authors present an approach to monitoring system policies using an expressive fragment of a temporal logic, which can be effectively monitored and reported on case studies in security and compliance monitoring and use these to show the adequacy of their specification language for naturally expressing complex, realistic policies.
Abstract: We present an approach to monitoring system policies. As a specification language, we use an expressive fragment of a temporal logic, which can be effectively monitored. We report on case studies in security and compliance monitoring, and use these to show the adequacy of our specification language for naturally expressing complex, realistic policies, and the practical feasibility of monitoring these policies using our monitoring algorithm.

Proceedings Article
01 May 2010
TL;DR: This paper demonstrates that there exists a theoretical model that describes most NLP approaches adeptly and introduces the concept of data driven compilation, a translation process in which the efficiency of the generated code benefits from the data given as input to the learning algorithms.
Abstract: Today's natural language processing systems are growing more complex with the need to incorporate a wider range of language resources and more sophisticated statistical methods. In many cases, it is necessary to learn a component with input that includes the predictions of other learned components or to assign simultaneously the values that would be assigned by multiple components with an expressive, data dependent structure among them. As a result, the design of systems with multiple learning components is inevitably quite technically complex, and implementations of conceptually simple NLP systems can be time consuming and prone to error. Our new modeling language, Learning Based Java (LBJ), facilitates the rapid development of systems that learn and perform inference. LBJ has already been used to build state of the art NLP systems. In this paper, we first demonstrate that there exists a theoretical model that describes most NLP approaches adeptly. Second, we show how our improvements to the LBJ language enable the programmer to describe the theoretical model succinctly. Finally, we introduce the concept of data driven compilation, a translation process in which the efficiency of the generated code benefits from the data given as input to the learning algorithms.

Proceedings ArticleDOI
20 Sep 2010
TL;DR: This work integrates scenario-based specification mining, which uses data-mining algorithms to suggest ordering constraints in the form of live sequence charts, with mining of value-based invariants, which detects likely invariants holding at specific program points.
Abstract: Specification mining takes execution traces as input and extracts likely program invariants, which can be used for comprehension, verification, and evolution related tasks. In this work we integrate scenario-based specification mining, which uses data-mining algorithms to suggest ordering constraints in the form of live sequence charts, an inter-object, visual, modal, scenario-based specification language, with mining of value-based invariants, which detects likely invariants holding at specific program points. The key to the integration is a technique we call scenario-based slicing, running on top of the mining algorithms to distinguish the scenario-specific invariants from the general ones. The resulting suggested specifications are rich, consisting of modal scenarios annotated with scenario-specific value-based invariants, referring to event parameters and participating object properties. An evaluation of our work over a number of case studies shows promising results in extracting expressive specifications from real programs, which could not be extracted previously. The more expressive the mined specifications, the higher their potential to support program comprehension and testing.

Patent
28 May 2010
TL;DR: In this article, the authors propose a system and methods for the delivery of user-controlled resources in cloud environments via a resource specification language wrapper, such as an XML (extensible markup language) wrapper.
Abstract: Embodiments relate to systems and methods for the delivery of user-controlled resources in cloud environments via a resource specification language wrapper. In embodiments, the user of a client machine may wish to contribute resources from that machine to a cloud-based network via a network connection over a limited or defined period. To expose the user-controlled resources to one or more clouds for use the user may transmit a contribution request encoding the user-controlled resources in a specification language wrapper, such as an XML (extensible markup language) wrapper. The specification language wrapper can embed the set of user-controlled resources, such as processor time, memory, and/or other resources, in an XML or other format to transmit to a marketplace engine which can place the set of user-controlled resources into a resource pool, for selection by marketplace clouds. The specification language wrapper can indicate access controls or restrictions on the contributed resources.

Journal ArticleDOI
TL;DR: The Service-Centric Monitoring Language (SECMOL), a general monitoring specification language, clearly separates concerns between data collection, data computation, and data analysis, allowing for high flexibility and scalability.
Abstract: Service-oriented systems' distributed ownership has led to an increasing focus on runtime management solutions. Service-oriented systems can change greatly after deployment, hampering their quality and reliability. Their service bindings can change, and providers can modify the internals of their services. Monitoring is critical for these systems to keep track of behavior and discover whether anomalies have occurred. The Service-Centric Monitoring Language (SECMOL), a general monitoring specification language, clearly separates concerns between data collection, data computation, and data analysis, allowing for high flexibility and scalability. SECMOL also presents a concrete projection of the model onto three monitoring frameworks.

Book ChapterDOI
20 Sep 2010
TL;DR: A comprehensive specification language and a compiler for zero-knowledge proofs of knowledge (ZK-PoK) protocols based on Σ-protocols are presented in this paper.
Abstract: Zero-knowledge proofs of knowledge (ZK-PoK) are important building blocks for numerous cryptographic applications. Although ZK-PoK have a high potential impact, their real world deployment is typically hindered by their significant complexity compared to other (non-interactive) crypto primitives. Moreover, their design and implementation are time-consuming and error-prone. We contribute to overcoming these challenges as follows: We present a comprehensive specification language and a compiler for ZK-PoK protocols based on Σ-protocols. The compiler allows the fully automatic translation of an abstract description of a proof goal into an executable implementation. Moreover, the compiler overcomes various restrictions of previous approaches, e.g., it supports the important class of exponentiation homomorphisms with hidden-order co-domain, needed for privacy-preserving applications such as DAA. Finally, our compiler is certifying, in the sense that it automatically produces a formal proof of the soundness of the compiled protocol for a large class of protocols using the Isabelle/HOL theorem prover.

Proceedings ArticleDOI
30 Sep 2010
TL;DR: This paper describes the Preach implementation including the various features that are necessary for the large models the authors target, and uses Preach to model check an industrial cache coherence protocol with approximately 30 billion states, the largest number published for a distributed explicit state model checker.
Abstract: We present Preach, an industrial-strength distributed explicit state model checker based on Murphi. The goal of this project was to develop a reliable, easy to maintain, scalable model checker that was compatible with the Murphi specification language. Preach is implemented in the concurrent functional language Erlang, chosen for its parallel programming elegance. We use the original Murphi front-end to parse the model description, a layer written in Erlang to handle the communication aspects of the algorithm, and Murphi as a back-end for state expansion and to store the hash table. This allowed a clean and simple implementation, with the core parallel algorithms written in under 1000 lines of code. This paper describes the Preach implementation, including the various features that are necessary for the large models we target. We have used Preach to model check an industrial cache coherence protocol with approximately 30 billion states. To our knowledge, this is the largest number published for a distributed explicit state model checker. Preach has been released to the public under an open source BSD license.
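The core idea behind distributed explicit-state model checking of this kind is to partition the state space by hashing: each state has a unique owner worker, so visited-state tables stay disjoint and states are "sent" to their owner for expansion. The sketch below simulates that scheme with in-process queues standing in for Erlang processes; the transition relation and all names are invented for illustration.

```python
from collections import deque
from hashlib import sha256

NUM_WORKERS = 4

def owner(state):
    # Each state is owned by exactly one worker, chosen by hashing.
    return int(sha256(repr(state).encode()).hexdigest(), 16) % NUM_WORKERS

def successors(state):
    # Toy transition relation: two counters, each incremented modulo 30.
    a, b = state
    return [((a + 1) % 30, b), (a, (b + 1) % 30)]

# Per-worker visited sets and work queues (stand-ins for worker processes).
visited = [set() for _ in range(NUM_WORKERS)]
queues = [deque() for _ in range(NUM_WORKERS)]

init = (0, 0)
queues[owner(init)].append(init)

explored = 0
while any(queues):
    for w in range(NUM_WORKERS):
        while queues[w]:
            s = queues[w].popleft()
            if s in visited[w]:
                continue
            visited[w].add(s)
            explored += 1
            for nxt in successors(s):
                queues[owner(nxt)].append(nxt)  # route state to its owner

print(explored)  # 900 reachable states (30 x 30 counter pairs)
```

Since ownership is deterministic, no global coordination is needed to decide who stores a state, which is what lets the hash table scale across machines.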

Journal ArticleDOI
01 Feb 2010
TL;DR: This paper presents a methodology to synthesize model editors equipped with automatic completion from a modeling language’s declarative specification consisting of a meta-model with a visual syntax, powered by a first-order relational logic engine implemented in ALLOY.
Abstract: Integrated development environments such as Eclipse allow users to write programs quickly by presenting a set of recommendations for code completion. Similarly, word processing tools such as Microsoft Word present corrections for grammatical errors in sentences. Both of these existing structure editors use a set of constraints, expressed in the form of a natural language grammar (language-directed editing) or a formal grammar (syntax-directed editing), to restrict/correct the user and aid document completion. Taking this idea further, in this paper we present an integrated software system capable of generating recommendations for model completion of partial models built in editors for domain-specific modeling languages. We present a methodology to synthesize model editors equipped with automatic completion from a modeling language’s declarative specification, consisting of a meta-model with a visual syntax. This meta-model-directed completion feature is powered by a first-order relational logic engine implemented in ALLOY. We incorporate automatic completion in the generative tool AToM3. We use the finite state machines modeling language as a concise running example. Our approach leverages a correct-by-construction philosophy that renders subsequent simulation of models considerably less error-prone.
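Model completion can be pictured as constraint solving over a metamodel: given a partial model, search for additions that make the metamodel's well-formedness constraints hold. The brute-force search below over a tiny FSM metamodel is a stand-in for the ALLOY relational logic engine the authors use; the states, labels, and constraint are invented for illustration.

```python
from itertools import product

# Tiny FSM "metamodel" instance: states, an initial state, and a partial
# transition map drawn by the user. Well-formedness constraint: every
# state must be reachable from the initial state.
states = ["s0", "s1", "s2"]
initial = "s0"
partial_transitions = {("s0", "a"): "s1"}  # user-drawn part of the model

def reachable(transitions):
    seen, frontier = {initial}, [initial]
    while frontier:
        s = frontier.pop()
        for (src, _), dst in transitions.items():
            if src == s and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

def complete(transitions):
    """Add at most one transition so that the constraint holds."""
    if reachable(transitions) == set(states):
        return transitions
    for src, lbl, dst in product(states, "ab", states):
        if (src, lbl) not in transitions:
            candidate = dict(transitions)
            candidate[(src, lbl)] = dst
            if reachable(candidate) == set(states):
                return candidate  # recommend this completion
    return None

model = complete(partial_transitions)
print(sorted(model.items()))
```

A relational solver like ALLOY does the same job declaratively and at scale, enumerating well-formed extensions of the partial model instead of trying candidates one by one.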

Proceedings ArticleDOI
04 Oct 2010
TL;DR: This work realizes a translator from a convenient specification language to standard Horn clauses and use the verifier ProVerif and the theorem prover SPASS to solve them, and formally proves that the abstraction is sound.
Abstract: The abstraction and over-approximation of protocols and web services by a set of Horn clauses is a very successful method in practice. It has however limitations for protocols and web services that are based on databases of keys, contracts, or even access rights, where revocation is possible, so that the set of true facts does not monotonically grow with state transitions. We extend the scope of these over-approximation methods by defining a new way of abstraction that can handle such databases, and we formally prove that the abstraction is sound. We realize a translator from a convenient specification language to standard Horn clauses and use the verifier ProVerif and the theorem prover SPASS to solve them. We show by a number of examples that this approach is practically feasible for a wide variety of verification problems of security protocols and web services.
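The Horn-clause method works by saturating a clause set to a fixpoint and checking whether a "bad" fact (e.g. a secrecy violation) becomes derivable. The minimal forward-chaining solver below illustrates that saturation on ground clauses; the toy protocol facts are invented, and real tools like ProVerif additionally handle variables via resolution.

```python
# Ground Horn clauses as (body, head) pairs: the head is derivable once
# every atom in the body is. The rules below are an invented toy example.
clauses = [
    ((), "knows(attacker, pk)"),                       # fact: public key is known
    (("knows(attacker, pk)",), "can_encrypt(attacker)"),
    (("can_encrypt(attacker)", "knows(attacker, msg)"), "secret_leaked"),
]

def saturate(clauses):
    """Forward chaining to a fixpoint: smallest set of facts closed under the clauses."""
    facts = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if head not in facts and all(b in facts for b in body):
                facts.add(head)
                changed = True
    return facts

facts = saturate(clauses)
print("secret_leaked" in facts)  # False: the attacker never learns msg
```

The paper's contribution addresses the monotonicity baked into this picture: plain saturation only ever adds facts, so revocable databases need the refined abstraction the authors define to remain sound.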

Book ChapterDOI
20 Sep 2010
TL;DR: This work presents a high-level access control specification language that allows fine-grained specification of access control permissions (at triple level) and formally define its semantics and adopts an annotation-based enforcement model, where a user can explicitly associate data items with annotations specifying whether the item is accessible or not.
Abstract: One of the current barriers towards realizing the huge potential of Future Internet is the protection of sensitive information, i.e., the ability to selectively expose (or hide) information to (from) users depending on their access privileges. Given that RDF has established itself as the de facto standard for data representation over the Web, our work focuses on controlling access to RDF data. We present a high-level access control specification language that allows fine-grained specification of access control permissions (at triple level) and formally define its semantics. We adopt an annotation-based enforcement model, where a user can explicitly associate data items with annotations specifying whether the item is accessible or not. In addition, we discuss the implementation of our framework, propose a set of dimensions that should be considered when defining a benchmark to evaluate the different access control enforcement models and present the results of our experiments conducted on different Semantic Web platforms.
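The annotation-based enforcement model described above amounts to attaching an access annotation to each triple and answering queries against the filtered view. A minimal sketch, with invented data and role names (the paper's actual annotation language is richer, supporting propagation and fine-grained permissions):

```python
# RDF-like triples, each annotated with the set of roles allowed to see it.
triples = [
    (("alice", "worksFor", "acme"), {"public"}),
    (("alice", "salary", "90000"), {"hr"}),
    (("acme", "locatedIn", "berlin"), {"public"}),
]

def accessible(triples, user_roles):
    """Enforcement: keep only triples whose annotation intersects the user's roles."""
    return [t for t, roles in triples if roles & user_roles]

public_view = accessible(triples, {"public"})
hr_view = accessible(triples, {"public", "hr"})
print(len(public_view), len(hr_view))  # 2 3
```

Because enforcement happens at the triple level, the same store can selectively expose or hide individual statements per user, which is the fine granularity the abstract emphasizes.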

DissertationDOI
01 Jan 2010
TL;DR: The communication protocol of the service is focused on and the design of correct service compositions can be systematically supported and an algorithm is presented to deduce local service descriptions from the choreography which—by design—conforms to the specification.
Abstract: Service-oriented computing (SOC) is an emerging paradigm of system design and aims at replacing complex monolithic systems by a composition of interacting systems, called services. A service encapsulates self-contained functionality and offers it over a well-defined, standardized interface. This modularization may reduce both complexity and cost. At the same time, new challenges arise with the distributed execution of services in dynamic compositions. In particular, the correctness of a service composition depends not only on the local correctness of each participating service, but also on the correct interaction between them. Unlike in a centralized monolithic system, services may change and are not completely controlled by a single party. We study the correctness of services and their composition and investigate how the design of correct service compositions can be systematically supported. We thereby focus on the communication protocol of the service, approach these questions using formal methods, and make contributions to three scenarios of SOC. The correctness of a service composition depends on the correctness of the participating services. To this end, we (1) study correctness criteria which can be expressed and checked with respect to a single service. We validate services against behavioral specifications and verify their satisfaction in any possible service composition. In case a service is incorrect, we provide diagnostic information to locate and fix the error. Even if every participating service of a service composition is correct, their interaction can still introduce problems. We (2) automatically verify the correctness of service compositions. We further support the design phase of service compositions and present algorithms to automatically complete partially specified compositions and to fix incorrect compositions. A service composition can also be derived from a specification, called a choreography.
A choreography globally specifies the observable behavior of a composition. We (3) present an algorithm to deduce local service descriptions from the choreography which, by design, conform to the specification. All results have been expressed in terms of a unifying formal model. This not only allows us to formally prove correctness, but also makes the results independent of the specifics of concrete service description languages. Furthermore, all presented algorithms have been prototypically implemented and validated in experiments based on case studies involving industrial services.
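Deriving local service descriptions from a choreography can be pictured as projection: each participant's local protocol is the sequence of send and receive actions it takes part in within the global message sequence. The sketch below shows this for a straight-line choreography with invented participants and messages; the dissertation's algorithm handles far richer behavioral models than this.

```python
# A choreography as a global sequence of (sender, receiver, label) messages.
choreography = [
    ("customer", "shop", "order"),
    ("shop", "warehouse", "reserve"),
    ("warehouse", "shop", "confirmed"),
    ("shop", "customer", "invoice"),
]

def project(choreography, participant):
    """Local service description: the participant's own send (!) and receive (?) actions."""
    local = []
    for sender, receiver, label in choreography:
        if sender == participant:
            local.append(("!", label))   # send action
        elif receiver == participant:
            local.append(("?", label))   # receive action
    return local

print(project(choreography, "shop"))
# [('?', 'order'), ('!', 'reserve'), ('?', 'confirmed'), ('!', 'invoice')]
```

Because every local description is derived from the same global specification, the composition of the projections conforms to the choreography by construction, which is the "by design" correctness claim in the abstract.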

Journal ArticleDOI
TL;DR: A formal model, an architecture, and a prototype implementation for usage control on GRID systems, using a process description language as the policy specification language and showing that it is suitable for modeling the usage policies of the original UCON model.

Book ChapterDOI
12 Nov 2010
TL;DR: A sound and complete distributed heuristic search algorithm for allocating the individual tasks in a TST to platforms and instantiates the parameters of the tasks such that all the constraints of the TST are satisfied.
Abstract: Unmanned aircraft systems (UAS's) are now becoming technologically mature enough to be integrated into civil society. An essential issue is principled mixed-initiative interaction between UAS's and human operators. Two central problems are to specify the structure and requirements of complex tasks and to assign platforms to these tasks. We have previously proposed Task Specification Trees (TST's) as a highly expressive specification language for complex multi-agent tasks that supports mixed-initiative delegation and adjustable autonomy. The main contribution of this paper is a sound and complete distributed heuristic search algorithm for allocating the individual tasks in a TST to platforms. The allocation also instantiates the parameters of the tasks such that all the constraints of the TST are satisfied. Constraints are used to model dependencies between tasks, resource usage as well as temporal and spatial requirements on complex tasks. Finally, we discuss a concrete case study with a team of unmanned aerial vehicles assisting in a challenging emergency situation.
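Allocating TST tasks to platforms under constraints can be sketched as a backtracking search: assign each task to a platform that has the required capability and spare capacity, undoing assignments when a branch dead-ends. This centralized sketch stands in for the paper's distributed heuristic search; the tasks, capabilities, and platforms are invented.

```python
# Leaf tasks and the capability each requires (invented example).
tasks = {"scan_area": "camera", "deliver_kit": "winch", "relay_comms": "radio"}
platforms = {
    "uav1": {"caps": {"camera", "radio"}, "slots": 1},
    "uav2": {"caps": {"winch"}, "slots": 1},
    "uav3": {"caps": {"radio", "camera"}, "slots": 1},
}

def allocate(remaining, assignment, load):
    """Backtracking search: capability and capacity constraints checked per step."""
    if not remaining:
        return assignment
    task, rest = remaining[0], remaining[1:]
    need = tasks[task]
    for name, p in platforms.items():
        if need in p["caps"] and load.get(name, 0) < p["slots"]:
            load[name] = load.get(name, 0) + 1
            result = allocate(rest, {**assignment, task: name}, load)
            if result is not None:
                return result
            load[name] -= 1  # undo and try the next platform
    return None

plan = allocate(list(tasks), {}, {})
print(plan)
```

The paper's algorithm additionally instantiates task parameters and checks temporal and spatial constraints, and distributes the search across the platforms themselves while remaining sound and complete.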

Patent
19 Apr 2010
TL;DR: In this article, a system modeling meta-model language model for a system is extracted from a natural language specification of the system, where syntactic structure represents a set of at least one syntactic subject.
Abstract: A system modeling meta-model language model for a system is extracted from a natural language specification of the system. Syntactic structure is extracted from the specification of a system. The syntactic structure represents a set of at least one syntactic subject. A first mapping is created between a predetermined set of the at least one syntactic subject and respective meta-model elements for a system modeling meta-model language. At least one of the meta-model elements is constructed in accordance with the mapping for each identified syntactic subject. The constructed meta-model elements are then converted into a model of the system.
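The patent's pipeline, subject extraction from natural language followed by a mapping to meta-model elements, can be caricatured in a few lines. The regex below is a crude stand-in for real syntactic analysis, and the specification text and element shapes are invented for illustration.

```python
import re

spec = ("The customer places an order. "
        "The order contains several items. "
        "The warehouse ships the order.")

def extract_subjects(text):
    """Naive subject extraction: the noun after a leading article in each sentence."""
    subjects = []
    for sentence in re.split(r"\.\s*", text):
        m = re.match(r"(?:The|A|An)\s+(\w+)", sentence)
        if m:
            subjects.append(m.group(1).lower())
    return subjects

def to_metamodel(subjects):
    """Map each distinct syntactic subject to a meta-model class element."""
    return {s: {"kind": "Class", "name": s.capitalize()} for s in subjects}

elements = to_metamodel(extract_subjects(spec))
print(sorted(elements))  # ['customer', 'order', 'warehouse']
```

In the patented method the mapping is driven by a predetermined table of subject-to-element correspondences, and the constructed elements are then converted into a model of the system.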