
Showing papers on "Formal language" published in 2015


Journal ArticleDOI
TL;DR: The state-of-the-art dynamic sign language recognition (DSLR) system for smart home interactive applications is presented; it is not only able to rule out ungrammatical sentences, but can also make predictions about missing gestures, which increases the accuracy of the recognition task.
Abstract: This paper presents the state-of-the-art dynamic sign language recognition (DSLR) system for smart home interactive applications. Our novel DSLR system comprises two main subsystems: an image processing (IP) module and a stochastic linear formal grammar (SLFG) module. Our IP module enables us to recognize the individual words of the sign language (i.e., single gestures). In this module, we used the bag-of-features (BOFs) and a local part model approach for bare-hand dynamic gesture recognition from video. We used dense sampling to extract local 3-D multiscale whole-part features. We adopted 3-D histograms of a gradient orientation descriptor to represent features. The k-means++ method was applied to cluster the visual words. Dynamic hand gesture classification was conducted using the BOFs and nonlinear support vector machine methods. We used a multiscale local part model to preserve temporal context. The SLFG module analyzes the sentences of the sign language (i.e., sequences of gestures) and determines whether or not they are syntactically valid. Therefore, the DSLR system is not only able to rule out ungrammatical sentences, but can also make predictions about missing gestures, which, in turn, increases the accuracy of our recognition task. Our IP module alone achieves an accuracy of 97% and outperforms any existing bare-hand dynamic gesture recognition system. By exploiting syntactic pattern recognition, the SLFG module raises this accuracy by 1.65%, bringing the aggregate accuracy of the DSLR system to 98.65%.
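
A toy sketch of the SLFG idea: a stochastic grammar over gesture tokens, here reduced to a bigram model, scores gesture sequences, rejects those containing unseen transitions, and fills in a missing gesture by maximising sequence probability. The gesture vocabulary and probabilities are hypothetical, not taken from the paper.

```python
# Toy stochastic "grammar" over gestures as a bigram model
# (hypothetical vocabulary and probabilities).
BIGRAM_P = {
    ("I", "want"): 0.6, ("want", "drink"): 0.5,
    ("want", "eat"): 0.4, ("drink", "water"): 0.7,
}

def sentence_prob(gestures):
    """Probability of a gesture sequence; 0 marks it ungrammatical."""
    p = 1.0
    for pair in zip(gestures, gestures[1:]):
        p *= BIGRAM_P.get(pair, 0.0)   # unseen transition => reject
    return p

def predict_missing(prefix, suffix, vocab):
    """Fill one missing gesture by maximising sequence probability."""
    return max(vocab, key=lambda g: sentence_prob(prefix + [g] + suffix))

print(sentence_prob(["I", "want", "drink", "water"]))               # 0.21
print(sentence_prob(["I", "water"]))                                 # 0.0
print(predict_missing(["I", "want"], ["water"], ["drink", "eat"]))   # drink
```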

74 citations


01 Jan 2015
TL;DR: In this paper, a survey of state complexity of individual regularity preserving language operations on regular and some subregular languages is presented, along with methods of estimation and approximation of more complex combined operations.
Abstract: Descriptional complexity is the study of the conciseness of the various models representing formal languages. The state complexity of a regular language is the size, measured by the number of states of the smallest, either deterministic or nondeterministic, finite automaton that recognises it. Operational state complexity is the study of the state complexity of operations over languages. In this survey, we review the state complexities of individual regularity preserving language operations on regular and some subregular languages. Then we revisit the state complexities of the combination of individual operations. We also review methods of estimation and approximation of state complexity of more complex combined operations.
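
For orientation, a few of the classic tight worst-case bounds reviewed in this literature, for languages accepted by DFAs with m and n states (standard results due to Yu, Zhuang, and Salomaa, quoted here only as illustration and holding for sufficiently large m and n):

```latex
\begin{align*}
  sc(L_1 \cup L_2)   &= mn, &  sc(L_1 L_2) &= m\,2^{n} - 2^{n-1},\\
  sc(L_1 \cap L_2)   &= mn, &  sc(L_1^{*}) &= 2^{n-1} + 2^{n-2},\\
  sc(\overline{L_1}) &= m,  &  sc(L_1^{R}) &= 2^{n}.
\end{align*}
```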

54 citations


Book ChapterDOI
15 Sep 2015
TL;DR: The main finding of the questionnaire was that a majority of the students think of grammar as a valuable asset in language learning, but at the same time have somewhat different understandings of grammar.
Abstract: This article presents and discusses views on grammar and its role in formal language learning amongst Finnish university students. The results are based on a questionnaire which was distributed to students at the University of Jyvaskyla as part of institutional action research. The background to the project was a feeling amongst some teachers of a growing divergence between students' and language teachers' understandings of the role of grammar in language teaching. This concern raised the need to find out how students view grammar; knowledge of students' thoughts on grammar would then help teachers adjust and adapt the way grammar is used in language teaching. The main finding of the questionnaire was that a majority of the students think of grammar as a valuable asset in language learning, but at the same time have somewhat different understandings of grammar. In this context grammar is understood as a metalinguistic set of (also normative) statements of regularities in language, which is the way most students think of grammar. Three different student perspectives on grammar are distinguished: a normative, a functional, and a structural perspective. Since not all answers in the questionnaire could be placed within these categories, a fourth category, “other”, was also included.

47 citations


Journal ArticleDOI
TL;DR: This article provides a formalization of the NautiLOD semantics, which captures both nodes and fragments of the Web of Linked Data, and presents algorithms to implement such semantics and study their computational complexity.
Abstract: The Web of Linked Data is a huge graph of distributed and interlinked datasources fueled by structured information. This new environment calls for formal languages and tools to automate navigation across datasources (nodes in such a graph) and enable semantic-aware and Web-scale search mechanisms. In this article we introduce a declarative navigational language for the Web of Linked Data graph called NautiLOD. NautiLOD enables one to specify datasources via the intertwining of navigation and querying capabilities. It also features a mechanism to specify actions (e.g., send notification messages) that obtain their parameters from datasources reached during the navigation. We provide a formalization of the NautiLOD semantics, which captures both nodes and fragments of the Web of Linked Data. We present algorithms to implement such semantics and study their computational complexity. We discuss an implementation of the features of NautiLOD in a tool called swget, which exploits current Web technologies and protocols. We report on the evaluation of swget and its comparison with related work. Finally, we show the usefulness of capturing Web fragments by providing examples in different knowledge domains.
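
The navigational core can be pictured with a short sketch that dereferences Linked Data sources and follows a single predicate breadth-first. This is not NautiLOD's actual syntax (which adds tests, alternation, and action rules); the seed URI and predicate are merely examples, and the rdflib package is assumed.

```python
# Illustrative Linked-Data navigation: dereference each reachable URI and
# follow one predicate, breadth-first, up to `depth` steps.
from rdflib import Graph, URIRef

def navigate(seed, predicate, depth):
    frontier, seen = {URIRef(seed)}, set()
    for _ in range(depth):
        nxt = set()
        for node in frontier - seen:
            seen.add(node)
            g = Graph()
            try:
                g.parse(node)          # dereference the URI on the Web
            except Exception:
                continue               # unreachable or non-RDF source
            nxt |= {o for o in g.objects(node, URIRef(predicate))
                    if isinstance(o, URIRef)}
        frontier = nxt
    return seen | frontier

nodes = navigate("http://dbpedia.org/resource/Rome",
                 "http://www.w3.org/2002/07/owl#sameAs", depth=2)
```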

45 citations


Book ChapterDOI
24 Nov 2015
TL;DR: The fundamental ambition of the paper is to displace the burden of moral reasoning from the programmer to the program itself, moving away from current computational ethics that too easily embed moral reasoning within computational engines, thereby feeding atomic answers that fail to truly represent underlying dynamics.
Abstract: In this paper, we investigate the use of high-level action languages for representing and reasoning about ethical responsibility in goal specification domains. First, we present a simplified Event Calculus formulated as a logic program under the stable model semantics in order to represent situations within Answer Set Programming. Second, we introduce a model of causality that allows us to use an answer set solver to perform reasoning over the agent's ethical responsibility. We then extend and test this framework against the Trolley Problem and the Doctrine of Double Effect. The overarching aim of the paper is to propose a general and adaptable formal language that may be employed over a variety of ethical scenarios in which the agent's responsibility must be examined and their choices determined. Our fundamental ambition is to displace the burden of moral reasoning from the programmer to the program itself, moving away from current computational ethics that too easily embed moral reasoning within computational engines, thereby feeding atomic answers that fail to truly represent underlying dynamics.
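
The inertia reasoning at the heart of a simplified Event Calculus can be illustrated as follows. The paper encodes it declaratively in Answer Set Programming; this sketch, with a made-up action vocabulary, uses plain Python only to show the intended behaviour.

```python
# Plain-Python illustration of Event Calculus inertia: a fluent holds at
# time t if it held initially or was initiated, and has not since been
# terminated ("clipped"). Action and fluent names are hypothetical.
INITIATES = {("push", "running")}
TERMINATES = {("stop", "running")}

def holds_at(fluent, t, initially, happens):
    """True iff `fluent` holds at time t, given the events in `happens`."""
    holds = fluent in initially
    for step in range(t):
        action = happens.get(step)
        if (action, fluent) in INITIATES:
            holds = True
        elif (action, fluent) in TERMINATES:
            holds = False              # the fluent is clipped
    return holds

print(holds_at("running", 3, set(), {0: "push"}))             # True
print(holds_at("running", 3, set(), {0: "push", 1: "stop"}))  # False
```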

43 citations


Proceedings ArticleDOI
17 Dec 2015
TL;DR: In this paper, the authors present a graphical tool for the development and visualization of formal specifications by people that do not have training in formal logic, which enables users to develop specifications using a graphical formalism which is then automatically translated to Metric Temporal Logic (MTL).
Abstract: One of the main barriers preventing widespread use of formal methods is the elicitation of formal specifications. Formal specifications facilitate the testing and verification process for safety critical robotic systems. However, handling the intricacies of formal languages is difficult and requires a high level of expertise in formal logics that many system developers do not have. In this work, we present a graphical tool designed for the development and visualization of formal specifications by people that do not have training in formal logic. The tool enables users to develop specifications using a graphical formalism which is then automatically translated to Metric Temporal Logic (MTL). In order to evaluate the effectiveness of our tool, we have also designed and conducted a usability study with cohorts from the academic student community and industry. Our results indicate that both groups were able to define formal requirements with high levels of accuracy. Finally, we present applications of our tool for defining specifications for operation of robotic surgery and autonomous quadcopter safe operation.
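
As an example of the target formalism, a hypothetical requirement such as "every command must be acknowledged within 5 time units" would come out of such a graphical translation as the MTL formula

```latex
\Box\big(\mathit{cmd} \rightarrow \Diamond_{[0,5]}\,\mathit{ack}\big)
```

where the interval subscript on the eventually operator carries the metric deadline.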

35 citations


Journal ArticleDOI
TL;DR: The trace semantics represents the behaviour of protected assembly code with simple abstractions, unburdened by low-level details, at the maximum degree of precision; it captures the capabilities of attackers against protected code and simplifies the formulation of a secure compiler targeting a PMA-enhanced assembly language.

30 citations


Posted Content
TL;DR: In this article, it was shown that equivalence of deterministic top-down tree-to-string transducers is decidable, thus solving a long standing open problem in formal language theory.
Abstract: We show that equivalence of deterministic top-down tree-to-string transducers is decidable, thus solving a long standing open problem in formal language theory. We also present efficient algorithms for subclasses: polynomial time for total transducers with unary output alphabet (over a given top-down regular domain language), and co-randomized polynomial time for linear transducers; these results are obtained using techniques from multi-linear algebra. For our main result, we prove that equivalence can be certified by means of inductive invariants using polynomial ideals. This allows us to construct two semi-algorithms, one searching for a proof of equivalence, one for a witness of non-equivalence. Furthermore, we extend our result to deterministic top-down tree-to-string transducers which produce output not in a free monoid but in a free group.
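
To fix intuitions, a deterministic top-down tree-to-string transducer can be modelled as rules mapping a state and an input symbol to a concatenation of output letters and recursive calls on children. The two hypothetical rule sets below happen to be equivalent; testing them on sample trees, as done here, is of course no substitute for the decision procedure the paper establishes.

```python
# Trees are tuples: (symbol, child, child, ...). Rules map (state, symbol)
# to a list of output letters and (state, child_index) recursion targets.
def run(rules, state, tree):
    sym, children = tree[0], tree[1:]
    out = []
    for piece in rules[(state, sym)]:
        if isinstance(piece, str):
            out.append(piece)                 # emit an output letter
        else:
            q, i = piece                      # recurse: state q on child i
            out.append(run(rules, q, children[i]))
    return "".join(out)

# Both transducers output one "a" per leaf, left to right; t2 just uses
# two states to do the same job.
t1 = {("q", "f"): [("q", 0), ("q", 1)], ("q", "e"): ["a"]}
t2 = {("p", "f"): [("r", 0), ("p", 1)], ("p", "e"): ["a"],
      ("r", "f"): [("r", 0), ("r", 1)], ("r", "e"): ["a"]}

tree = ("f", ("e",), ("f", ("e",), ("e",)))
assert run(t1, "q", tree) == run(t2, "p", tree) == "aaa"
```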

29 citations


Proceedings ArticleDOI
09 Mar 2015
TL;DR: This paper presents a requirement-consistency maintenance framework to produce consistent representations; it extends pure syntactic parsing by adding semantic reasoning and support for partitioning input and output variables.
Abstract: Early stages of system development involve outlining desired features such as functionality, availability, or usability. Specifications derived from these features concretize vague ideas presented in natural language. The challenge for the verification and validation of specifications arises from the syntactic and semantic gap between different representations and the need for automatic tools. In this paper, we present a requirement-consistency maintenance framework to produce consistent representations. The first part is the automatic translation from natural languages describing functionalities to formal logic with an abstraction of time. It extends pure syntactic parsing by adding semantic reasoning and support for partitioning input and output variables. The second part is the use of synthesis techniques to examine whether the requirements are consistent in terms of realizability. When the process fails, the formulas that cause the inconsistency are reported to locate the problem.
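
A hypothetical example of the kind of translation involved: the functional requirement "whenever the button is pressed, the door eventually opens" maps to the temporal-logic formula

```latex
\mathbf{G}\big(\mathit{button} \rightarrow \mathbf{F}\,\mathit{door\_open}\big)
```

and the realizability check then exposes inconsistent sets, flagging for instance $\mathbf{G}\,\mathbf{F}\,\mathit{open}$ together with $\mathbf{G}\,\neg\mathit{open}$ as jointly unrealizable.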

29 citations


Journal ArticleDOI
TL;DR: A methodology that has proven itself in building a state-of-the-art implementation of Multi-Paxos and other distributed protocols used in a deployed database system is discussed.
Abstract: Distributed programs are known to be extremely difficult to implement, test, verify, and maintain. This is due in part to the large number of possible unforeseen interactions among components, and to the difficulty of precisely specifying what the programs should accomplish in a formal language that is intuitively clear to the programmers. We discuss here a methodology that has proven itself in building a state-of-the-art implementation of Multi-Paxos and other distributed protocols used in a deployed database system. This article focuses on the basic ideas of formal EventML programming illustrated by implementing a fault-tolerant consensus protocol and showing how we prove its safety properties with the Nuprl proof assistant.

Journal ArticleDOI
TL;DR: A detailed syntax of GeoSpelling is proposed in this paper, based on instructions used in computer programming languages: calls to functions and flow control by conditions and loops.
Abstract: In order to tackle the ambiguities of Geometrical Product Specification (GPS), the GeoSpelling language has been developed to express the semantics of specifications. A detailed syntax of GeoSpelling is proposed in this paper. A specification is defined as a sequence of operations on the skin model. The syntax is based on instructions used in computer programming languages: calls to functions and flow control by conditions and loops. In GeoSpelling, a call to a function corresponds to the declaration of an operation; loops make it possible to manage a set of features with rigour, and conditions to select features from a set.

Journal ArticleDOI
TL;DR: In this article, the authors analyzed Polish children's requests and found that children who are positioned by their parents as equals at the dinner table adopt a more formal language style by using conventionally indirect requests.

Book ChapterDOI
24 Aug 2015
TL;DR: This paper focuses on the parsing mechanisms used in the large ITP libraries and the advanced mechanisms instrumenting the proof assistants to correctly understand the formal expressions in the presence of ubiquitous overloading.
Abstract: One of the first big hurdles that mathematicians encounter when considering writing formal proofs is the necessity to get acquainted with the formal terminology and the parsing mechanisms used in the large ITP libraries. This includes the large number of formal symbols, the grammar of the formal languages and the advanced mechanisms instrumenting the proof assistants to correctly understand the formal expressions in the presence of ubiquitous overloading.

Journal ArticleDOI
TL;DR: A formal design process structure called design for AM-facilitated personalisation, merging design for personalisation and design for additive manufacturing, is proposed, together with formal representations of artefact–user interaction using finite state automata and of goal-oriented user behaviours in artefact use as formal language sets based on discrete event systems.
Abstract: This research aims to develop a set of new formal design representations that captures design requirements to improve the level of personalisation while taking advantage of the additive manufacturing (AM)-enabled design freedom. We propose a formal design process structure called design for AM-facilitated personalisation that merges design for personalisation and design for additive manufacturing. Also, we propose formal representations for an artefact–user interaction using finite state automata and goal-oriented user behaviours in the artefact use as formal language sets based on discrete event systems. By adopting the relational properties of affordance, effectivity, and preference, the proposed formal representations systemically link user behaviour-related artefact properties and user preferences to the design requirements for additive manufactured artefacts. This paper presents a case study of personalised chair design and a simulation of user behaviours in the chair–user interaction.
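
The flavour of these representations can be conveyed by a toy finite state automaton for a chair–user interaction, in which goal-oriented behaviours are exactly the action sequences the automaton can execute; the states, actions, and transitions below are invented for illustration.

```python
# Toy artefact-user interaction FSA: chair states, user actions as the
# event alphabet. Hypothetical; affordances appear as defined transitions.
DELTA = {
    ("empty", "sit"): "occupied",
    ("occupied", "recline"): "reclined",
    ("occupied", "stand"): "empty",
    ("reclined", "sit_up"): "occupied",
}

def run(behaviour, state="empty"):
    """Final state after a behaviour, or None if a step is not afforded."""
    for action in behaviour:
        if (state, action) not in DELTA:
            return None
        state = DELTA[(state, action)]
    return state

print(run(["sit", "recline", "sit_up", "stand"]))  # 'empty': a valid cycle
print(run(["recline"]))                            # None: not afforded
```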

Posted Content
TL;DR: This chapter provides an introduction to some basic concepts of epistemic logic, basic formal languages, their semantics, and proof systems.
Abstract: This chapter provides an introduction to some basic concepts of epistemic logic, basic formal languages, their semantics, and proof systems. It also contains an overview of the handbook, and a brief history of epistemic logic and pointers to the literature.
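
The basic formal language and semantics in question are the standard ones: formulas are built from atomic propositions with Boolean connectives and one knowledge modality per agent, and are interpreted over Kripke models,

```latex
\varphi ::= p \mid \neg\varphi \mid (\varphi \wedge \psi) \mid K_a\varphi,
\qquad
\mathcal{M},w \models K_a\varphi
\;\iff\;
\mathcal{M},v \models \varphi \text{ for all } v \text{ such that } w\,R_a\,v .
```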

Journal ArticleDOI
TL;DR: A fairly complete theory of operator precedence languages is provided, introducing a class of automata with the same recognizing power as the generative power of their grammars and a characterization of their sentences in terms of monadic second-order logic.
Abstract: Operator precedence languages were introduced half a century ago by Robert Floyd to support deterministic and efficient parsing of context-free languages. Recently, we renewed our interest in this class of languages thanks to a few distinguishing properties that make them attractive for exploiting various modern technologies. Precisely, their local parsability enables parallel and incremental parsing, whereas their closure properties make them amenable to automatic verification techniques, including model checking. In this paper we provide a fairly complete theory of this class of languages: we introduce a class of automata with the same recognizing power as the generative power of their grammars; we provide a characterization of their sentences in terms of monadic second-order logic as has been done in previous literature for more restricted language classes such as regular, parenthesis, and input-driven ones; we investigate preserved and lost properties when extending the language sentences from finite ...
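
Recall Floyd's device: any two terminals stand in at most one of three precedence relations (⋖ yields precedence, ≐ equal in precedence, ⋗ takes precedence), and these relations drive deterministic bottom-up parsing. For the usual arithmetic-expression grammar, for instance,

```latex
+ \lessdot \times, \qquad \times \gtrdot +, \qquad + \gtrdot +,
\qquad ( \lessdot +, \qquad ( \doteq ), \qquad ( \lessdot ( .
```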

Book
01 Oct 2015
TL;DR: This book provides a thorough introduction to the subfield of theoretical computer science known as grammatical inference from a computational linguistic perspective and summarizes the major lessons and open questions that grammatical inference brings to computational linguistics.
Abstract: This book provides a thorough introduction to the subfield of theoretical computer science known as grammatical inference from a computational linguistic perspective. Grammatical inference provides principled methods for developing computationally sound algorithms that learn structure from strings of symbols. The relationship to computational linguistics is natural because many research problems in computational linguistics are learning problems on words, phrases, and sentences: What algorithm can take as input some finite amount of data (for instance a corpus, annotated or otherwise) and output a system that behaves "correctly" on specific tasks? Throughout the text, the key concepts of grammatical inference are interleaved with illustrative examples drawn from problems in computational linguistics. Special attention is paid to the notion of "learning bias." In the context of computational linguistics, such bias can be thought to reflect common (ideally universal) properties of natural languages. This bias can be incorporated either by identifying a learnable class of languages which contains the language to be learned or by using particular strategies for optimizing parameter values. Examples are drawn largely from two linguistic domains (phonology and syntax) which span major regions of the Chomsky Hierarchy (from regular to context-sensitive classes). The conclusion summarizes the major lessons and open questions that grammatical inference brings to computational linguistics.
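
A minimal instance of the learners the book studies: inferring a strictly 2-local grammar (the set of attested adjacent pairs, with word-boundary markers) from positive data only, one of the simplest learnable classes and a common model of phonotactics. The corpus is hypothetical.

```python
# Learn a strictly 2-local grammar from positive data: collect the
# attested bigrams (with '#' as the word boundary), then recognise
# exactly the words whose bigrams are all attested.
def learn_sl2(corpus):
    grammar = set()
    for word in corpus:
        padded = "#" + word + "#"
        grammar.update(zip(padded, padded[1:]))
    return grammar

def accepts(grammar, word):
    padded = "#" + word + "#"
    return all(pair in grammar for pair in zip(padded, padded[1:]))

g = learn_sl2(["ba", "baba", "bababa"])
print(accepts(g, "babababa"))   # True: only attested bigrams occur
print(accepts(g, "ab"))         # False: the bigram '#a' was never seen
```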

Journal ArticleDOI
TL;DR: It is claimed that syntactic objects are not computationally uniform, and that the computational system in charge of establishing dependencies between symbolic objects within the mind is therefore not uniform either.
Abstract: This paper argues that the theory of phrase structure a linguistic approach assumes implies taking a stance on the formal nature of the computational procedures that generate that phrase structure. We proceed by critically evaluating theories of phrase structure and labeling (which implies taking a structure as a unit for the purposes of further computations), and, building on and opposing the proposals we review, we claim that syntactic objects are not computationally uniform, and that the computational system in charge of establishing dependencies between symbolic objects within the mind is therefore not uniform either. We argue in favor of a linguistic-cognitive model which dynamically chooses different grammars based on the complexity of the input, and which is capable of assigning a mixed phrase marker to an object that presents more than one computational pattern. Empirical evidence is provided in favor of our approach to phrase structure building, and further implications for a theory of labeling and predication are discussed as prolegomena to further research.

Journal ArticleDOI
TL;DR: A generic model where transitions are labeled by elements of a finite partition of the alphabet is defined and Angluin's L* algorithm is extended for learning regular languages from examples for such automata.
Abstract: This work is concerned with regular languages defined over large alphabets, either infinite or just too large to be expressed enumeratively. We define a generic model where transitions are labeled by elements of a finite partition of the alphabet. We then extend Angluin's L* algorithm for learning regular languages from examples for such automata. We have implemented this algorithm and we demonstrate its behavior where the alphabet is a subset of the natural or real numbers. We sketch the extension of the algorithm to a class of languages over partially ordered alphabets.
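
The generic model can be pictured as a deterministic automaton whose transitions are labelled by blocks of a finite partition of a numeric alphabet rather than by single letters; the partition and automaton below are invented (the paper additionally shows how to learn such machines with an extended L*).

```python
def block(x):
    """A finite partition of the reals into three blocks."""
    if x < 0:
        return "neg"
    if x < 100:
        return "small"
    return "big"

# DFA over partition blocks: accept once a 'big' value occurs, provided
# no 'neg' value occurs first. (Hypothetical example language.)
DELTA = {
    ("q0", "small"): "q0",  ("q0", "big"): "acc",  ("q0", "neg"): "rej",
    ("acc", "small"): "acc", ("acc", "big"): "acc", ("acc", "neg"): "acc",
    ("rej", "small"): "rej", ("rej", "big"): "rej", ("rej", "neg"): "rej",
}

def accepts(word):
    state = "q0"
    for x in word:
        state = DELTA[(state, block(x))]
    return state == "acc"

print(accepts([3.5, 42, 150.2]))   # True: a 'big' value, no prior 'neg'
print(accepts([3.5, -1, 150.2]))   # False: 'neg' occurs first
```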

Proceedings ArticleDOI
01 Jul 2015
TL;DR: The NL2KR platform to build systems that can translate text to different formal languages is presented, freely available, customizable, and comes with an Interactive GUI support that is useful in the development of a translation system.
Abstract: This paper presents the NL2KR platform to build systems that can translate text to different formal languages. It is freelyavailable1, customizable, and comes with an Interactive GUI support that is useful in the development of a translation system. Our key contribution is a userfriendly system based on an interactive multistage learning algorithm. This effective algorithm employs Inverse-λ, Generalization and user provided dictionary to learn new meanings of words from sentences and their representations. Using the learned meanings, and the Generalization approach, it is able to translate new sentences. NL2KR is evaluated on two standard corpora, Jobs and GeoQuery and it exhibits state-of-the-art performance on both of them.
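
The Inverse-λ step can be shown on a toy example (not from the paper's corpora): given the meaning of a sentence and of one of its words, solve for the meaning of the other word.

```latex
\text{``John walks''} \mapsto \mathit{walks}(\mathit{john}),
\quad
\text{``John''} \mapsto \mathit{john}
\;\;\Longrightarrow\;\;
\text{``walks''} \mapsto \lambda x.\,\mathit{walks}(x),
```

since $(\lambda x.\,\mathit{walks}(x))\,\mathit{john} = \mathit{walks}(\mathit{john})$.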

Journal ArticleDOI
Ke Dou, Xi Wang, Chong Tang, Adam Ross, Kevin Sullivan
TL;DR: It is hypothesized that this approach can accelerate convergence on models that are precise and validated enough for rigorous systems engineering, and an early case study on applying this method to the Ross et al. semantic approach is presented.

10 Sep 2015
TL;DR: The aim of the current paper is to provide a basic ontology for the upcoming data protection legislation, highlighting the duties of the data controller, to ease the transition of systems and services from the existing legislation to the new one.
Abstract: Knowledge theory has made its way into modern computing through the use of models and annotations to organize knowledge. The bottom layer of knowledge organizations makes use of ontologies, which are models based on a formal language structure and designed to express the concepts pertaining to a domain and the relationships between them. The use of ontologies is popular also in the legal domain, to organize legal documents and as a support to legal reasoning. A legal topic which is currently in the limelight at the European level is data protection. Under the pressure of the last years’ technological developments, the data protection legislation has shown its weaknesses, and is currently undergoing a long and complex reform that is finally approaching its completion. The reform will urge businesses dealing with personal data to comply with the new Regulation. The aim of the current paper is to provide a basic ontology for the upcoming data protection legislation, highlighting the duties of the data controller, to ease the transition of systems and services from the existing legislation to the new one.

Book ChapterDOI
14 Oct 2015
TL;DR: This paper provides a direct formalisation of the operational semantics of the OMG standard BPMN 2.0 in terms of Labelled Transition Systems (LTS), paving the way for the use of consolidated formal reasoning techniques based on LTS, e.g., model checking.
Abstract: In the last years we have been observing a growing interest in formalising the execution semantics of business process modelling languages that, despite their lack of formal characterisation, are widely adopted in industry and academia. In this paper, we focus on the OMG standard BPMN 2.0. Specifically, we provide a direct formalisation of its operational semantics in terms of Labelled Transition Systems (LTS). This approach permits both to avoid possible misinterpretations due to the usage of natural language in the specification of the standard, and to overcome issues due to the mapping of BPMN to other formal languages, which are equipped with their own semantics. In addition, it paves the way for the use of consolidated formal reasoning techniques based on LTS, e.g., model checking. Our operational semantics is given for a relevant subset of BPMN elements, focusing on the capability to model collaborations among organisations via message exchange. Moreover, one of its distinctive aspects is its suitability for modelling business processes with arbitrary topology. This allows designers to freely specify their processes according to reality without the need of defining well-structured models. We illustrate our approach through a simple, yet realistic, running example about commercial transactions.

Dissertation
24 Jul 2015
TL;DR: This thesis presents some model-based methodologies for the modelling and verification of French railway interlocking systems (RIS); an event-based concept is brought into the modelling process of the low-level part of RIS to better describe the internal interactions of relay-based logic.
Abstract: The development and application of formal languages are a long-standing challenge within the computer science domain. One particular challenge is acceptance by industry. This thesis presents some model-based methodologies for the modelling and verification of French railway interlocking systems (RIS). The first issue is the modelling of interlocking systems by coloured Petri nets (CPNs). A generic and compact modelling framework is introduced, in which the interlocking rules are modelled in a hierarchical structure while the railway layout is modelled in a geographical perspective. Then, a modelling pattern is presented, which is a parameterized model respecting the French national rules. It is a reusable solution that can be applied in different stations. Then, an event-based concept is brought into the modelling process of the low-level part of RIS to better describe the internal interactions of relay-based logic. The second issue is the transformation of coloured Petri nets into B machines, which can help designers on the way from analysis to implementation. Firstly, a detailed mapping methodology from non-hierarchical CPNs to abstract B machine notations is presented. Then the hierarchy and the transition priority of CPNs are successively integrated into the mapping process, in order to enrich the adaptability of the transformation. This transformation is compatible with various types of colour sets, and the transformed B machines can be automatically proved by Atelier B. All these works at different levels contribute towards a global safety analysis framework.

Journal ArticleDOI
TL;DR: The canonic inverse of directed extension is used to obtain the optimal solution (the minimal primer language) to the question of under what conditions a given language of target strings can be generated from a given template language when the primer language is unknown.
Abstract: We propose and investigate a formal language operation inspired by the naturally occurring phenomenon of DNA primer extension by a DNA-template-directed DNA polymerase enzyme. Given two DNA strings u and v, where the shorter string v (called the primer) is Watson-Crick complementary to, and can thus bind to, a substring of the longer string u (called the template), the result of the primer extension is a DNA string that is complementary to a suffix of the template which starts at the binding position of the primer. The operation of DNA primer extension can be abstracted as a binary operation on two formal languages: a template language L1 and a primer language L2. We call this language operation the L1-directed extension of L2 and study the closure properties of various language classes, including the classes in the Chomsky hierarchy, under directed extension. Furthermore, we answer the question of under what conditions a given language of target strings can be generated from a given template language when the primer language is unknown. We use the canonic inverse of directed extension in order to obtain the optimal solution (the minimal primer language) to this question.
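
A small sketch of the abstracted operation; the DNA sequences are hypothetical, and the complement is taken letter-wise, ignoring the strand reversal of real DNA for simplicity.

```python
# Directed extension, abstracted: the complement of primer v binds to a
# substring of template u; each product is the complement of the suffix
# of u from a binding position on.
COMP = str.maketrans("ACGT", "TGCA")

def complement(s):
    return s.translate(COMP)

def directed_extension(template, primer):
    """All extension products of `primer` on `template` (a finite set)."""
    target, products = complement(primer), set()
    start = template.find(target)
    while start != -1:
        products.add(complement(template[start:]))
        start = template.find(target, start + 1)
    return products

print(directed_extension("ACGGTACGT", "TGCC"))  # {'TGCCATGCA'}
```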

Dissertation
31 Mar 2015
TL;DR: This thesis proves that two encodings of the lambda-calculus in the pi-calculus are in fact equivalent, and describes higher-order languages as first-order systems, proving correct the translations and up-to techniques that are specific to each language.
Abstract: The behaviours of concurrent processes can be expressed using process calculi, which are simple formal languages that let us establish precise mathematical results on the behaviours and interactions between processes. A very simple example is CCS; another is the pi-calculus, which is more expressive thanks to a name-passing mechanism. The pi-calculus supports the addition of type systems (to refine the analysis to more subtle environments) and the encoding of the lambda-calculus (which represents sequential computations). Some of these calculi, like CCS or variants of the pi-calculus such as fusion calculi, enjoy a property of symmetry. First, we use this symmetry as a tool to prove that two encodings of the lambda-calculus in the pi-calculus are in fact equivalent. Since this proof uses a type system and a form of symmetry, we wonder whether other existing symmetric calculi can support the addition of type systems. We answer this question negatively with an impossibility theorem. Investigating this theorem leads us to a fundamental constraint of these calculi that forbids types: they induce an equivalence relation on names. Relaxing this constraint into a preorder relation yields another calculus that recovers important notions of the pi-calculus that fusion calculi do not satisfy: the notions of types and of privacy of names. The first part of this thesis focuses on the study of this calculus, a pi-calculus with preorders on names. The second part of this thesis focuses on bisimulation, a proof method for equivalence of agents in higher-order languages, like the pi- or the lambda-calculi. An enhancement of this method is the powerful theory of bisimulations up to, which unfortunately only applies to first-order systems, like automata or CCS. We therefore proceed to describe higher-order languages as first-order systems. This way, we inherit the general theory of up-to techniques for these languages, by proving correct the translations and up-to techniques that are specific to each language. We give details on the approach, to provide the necessary tools for future applications of this method to other higher-order languages.

Book ChapterDOI
24 Aug 2015
TL;DR: This work presents differential bisimulation, a behavioral equivalence developed as the ODE counterpart of bisimulations for languages with probabilistic or stochastic semantics, and provides an efficient partition-refinement algorithm to compute the coarsest ODE aggregation of a model according to differential bisIMulation.
Abstract: Formal languages with semantics based on ordinary differential equations (ODEs) have emerged as a useful tool to reason about large-scale distributed systems. We present differential bisimulation, a behavioral equivalence developed as the ODE counterpart of bisimulations for languages with probabilistic or stochastic semantics. We study it in the context of a Markovian process algebra. Similarly to Markovian bisimulations yielding an aggregated Markov process in the sense of the theory of lumpability, differential bisimulation yields a partition of the ODEs underlying a process algebra term, whereby the sum of the ODE solutions of the same partition block is equal to the solution of a single (lumped) ODE. Differential bisimulation is defined in terms of two symmetries that can be verified only using syntactic checks. This enables the adaptation to a continuous-state semantics of proof techniques and algorithms for finite, discrete-state, labeled transition systems. For instance, we readily obtain a result of compositionality, and provide an efficient partition-refinement algorithm to compute the coarsest ODE aggregation of a model according to differential bisimulation.
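
The lumping effect can be seen on a two-variable toy system (not taken from the paper): when $x_1$ and $x_2$ are differentially bisimilar, their sum obeys a single lumped ODE,

```latex
\dot{x}_1 = -\lambda x_1 + \mu x_2,
\qquad
\dot{x}_2 = -\lambda x_2 + \mu x_1
\quad\Longrightarrow\quad
\frac{d}{dt}(x_1 + x_2) = (\mu - \lambda)(x_1 + x_2).
```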

Proceedings ArticleDOI
Thomas Place1
06 Jul 2015
TL;DR: In this paper, the authors investigated the quantifier alternation hierarchy of first-order logic over finite words and proposed an algorithm for the level Σ3 (formulas having at most 2 alternations beginning with an existential block), and showed that one can decide whether a regular language is definable by such a formula.
Abstract: We investigate the quantifier alternation hierarchy of first-order logic over finite words. To do so, we rely on the separation problem. For each level in the hierarchy, this problem takes two regular languages as input and asks whether there exists a formula of the level that accepts all words in the first language and no word in the second one. Usually, obtaining an algorithm that solves this problem requires a deep understanding of the level under investigation. We present such an algorithm for the level Σ3 (formulas having at most 2 alternations beginning with an existential block). We also obtain as a corollary that one can decide whether a regular language is definable by a Σ4 formula (formulas having at most 3 alternations beginning with an existential block).
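
For reference, a Σ3 sentence is one of the shape

```latex
\exists x_1 \cdots \exists x_i\;
\forall y_1 \cdots \forall y_j\;
\exists z_1 \cdots \exists z_k\;
\psi(\bar{x}, \bar{y}, \bar{z}),
\qquad \psi \text{ quantifier-free},
```

i.e., three quantifier blocks and hence two alternations, beginning with an existential block.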

Book ChapterDOI
15 Jul 2015
TL;DR: A formal language for representing reasons and claims, and a framework for inferencing with the arguments and counterarguments in this formal language are proposed.
Abstract: This paper presents a target language for representing arguments mined from natural language. The key features are the connection between possible reasons and possible claims and the recursive embedding of such connections. Given a base of these arguments and counterarguments mined from texts or dialogues, we want to be able to combine them, deconstruct them, and analyse them (for instance, to check whether the set is inconsistent). To address these needs, we propose a formal language for representing reasons and claims, and a framework for inferencing with the arguments and counterarguments in this formal language.
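
The recursive reason-claim structure described above can be rendered as a small data type; the class and example statements are illustrative only.

```python
# Minimal data type for the recursive reason-claim structure: an argument
# links reasons (atomic statements or embedded arguments) to a claim.
from dataclasses import dataclass, field

@dataclass
class Argument:
    claim: str
    reasons: list = field(default_factory=list)  # items: str or Argument

    def atoms(self):
        """All atomic statements in the argument, e.g. as input to a
        consistency check over a whole argument base."""
        out = {self.claim}
        for r in self.reasons:
            out |= r.atoms() if isinstance(r, Argument) else {r}
        return out

a = Argument("we should cycle to work",
             reasons=["cycling is healthy",
                      Argument("driving is costly",
                               reasons=["fuel is expensive"])])
print(sorted(a.atoms()))
```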