
Showing papers on "Formal grammar" published in 2015


Journal ArticleDOI
TL;DR: A state-of-the-art dynamic sign language recognition (DSLR) system for smart home interactive applications is presented, which is not only able to rule out ungrammatical sentences but can also make predictions about missing gestures, increasing the accuracy of the recognition task.
Abstract: This paper presents a state-of-the-art dynamic sign language recognition (DSLR) system for smart home interactive applications. Our novel DSLR system comprises two main subsystems: an image processing (IP) module and a stochastic linear formal grammar (SLFG) module. Our IP module enables us to recognize the individual words of the sign language (i.e., single gestures). In this module, we used the bag-of-features (BOFs) and a local part model approach for bare-hand dynamic gesture recognition from video. We used dense sampling to extract local 3-D multiscale whole-part features. We adopted 3-D histograms of gradient orientation descriptors to represent features. The $k$-means++ method was applied to cluster the visual words. Dynamic hand gesture classification was conducted using the BOFs and nonlinear support vector machine methods. We used a multiscale local part model to preserve temporal context. The SLFG module analyzes the sentences of the sign language (i.e., sequences of gestures) and determines whether or not they are syntactically valid. Therefore, the DSLR system is not only able to rule out ungrammatical sentences, but it can also make predictions about missing gestures, which, in turn, increases the accuracy of our recognition task. Our IP module alone achieves an accuracy of 97% and outperforms any existing bare-hand dynamic gesture recognition system. However, by exploiting syntactic pattern recognition, the SLFG module raises this accuracy by 1.65%. This makes the aggregate performance of the DSLR system as accurate as 98.65%.
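To make the SLFG idea concrete, here is a minimal sketch (ours, not the paper's actual grammar) of how a stochastic grammar over gesture categories can both reject ungrammatical sequences and rank candidates for a missing gesture; the categories and transition probabilities are invented for illustration.

```python
# A minimal sketch of the SLFG idea: a stochastic regular grammar over
# gesture categories. Categories and probabilities are hypothetical.

# Transition probabilities P(next_category | current_category).
TRANSITIONS = {
    "START":   {"SUBJECT": 0.9, "VERB": 0.1},
    "SUBJECT": {"VERB": 0.8, "OBJECT": 0.2},
    "VERB":    {"OBJECT": 0.7, "END": 0.3},
    "OBJECT":  {"END": 1.0},
}

def sentence_probability(categories):
    """Probability of a gesture-category sequence under the grammar."""
    prob, state = 1.0, "START"
    for cat in list(categories) + ["END"]:
        prob *= TRANSITIONS.get(state, {}).get(cat, 0.0)
        state = cat
    return prob

def predict_missing(categories, gap_index):
    """Rank candidate categories for a gap left by an unrecognized gesture."""
    candidates = ("SUBJECT", "VERB", "OBJECT")
    scored = {c: sentence_probability(categories[:gap_index] + [c] + categories[gap_index:])
              for c in candidates}
    return max(scored, key=scored.get)

print(sentence_probability(["SUBJECT", "VERB", "OBJECT"]))  # 0.504: grammatical
print(sentence_probability(["OBJECT", "SUBJECT"]))          # 0.0: ruled out
print(predict_missing(["SUBJECT", "OBJECT"], 1))            # "VERB"
```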

74 citations


Book
14 Apr 2015
TL;DR: Bettelou Los draws on explanations from both formal and functional approaches to explore how syntactic changes are the product of the interaction of many internal and external factors.
Abstract: Aimed at advanced students, this book discusses a number of approaches to charting the major developments in the syntax of English. It does not assume any knowledge of Old or Middle English or of formal syntax, although students should be familiar with traditional syntactic concepts such as verbs and nouns, subjects and objects, and linguistic concepts such as morphology and case. Bettelou Los draws on explanations from both formal and functional approaches to explore how syntactic changes are the product of the interaction of many internal and external factors.

20 citations


Journal ArticleDOI
17 Apr 2015-PLOS ONE
TL;DR: This study extends previous findings by demonstrating learning effects for nested and cross-serial dependencies with more natural stimulus materials in a classical AGL paradigm, providing a starting point for further exploring the degree to which the Chomsky Hierarchy reflects cognitive processes.
Abstract: This study investigated whether formal complexity, as described by the Chomsky Hierarchy, corresponds to cognitive complexity during language learning. According to the Chomsky Hierarchy, nested dependencies (context-free) are less complex than cross-serial dependencies (mildly context-sensitive). In two artificial grammar learning (AGL) experiments participants were presented with a language containing either nested or cross-serial dependencies. A learning effect for both types of dependencies could be observed, but no difference between dependency types emerged. These behavioral findings do not seem to reflect complexity differences as described in the Chomsky Hierarchy. This study extends previous findings in demonstrating learning effects for nested and cross-serial dependencies with more natural stimulus materials in a classical AGL paradigm after only one hour of exposure. The current findings can be taken as a starting point for further exploring the degree to which the Chomsky Hierarchy reflects cognitive processes.
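For readers unfamiliar with the two dependency types, the contrast is easy to state programmatically; the sketch below (with invented syllable pairs, not the study's stimuli) generates both patterns.

```python
# With word pairs (a_i, b_i), nested dependencies mirror the order
# (a1 a2 b2 b1, context-free), while cross-serial dependencies repeat it
# (a1 a2 b1 b2, mildly context-sensitive). Vocabulary is illustrative only.
PAIRS = [("be", "bo"), ("de", "do"), ("ge", "go")]  # hypothetical stimuli

def nested(pairs):
    a = [p[0] for p in pairs]
    b = [p[1] for p in reversed(pairs)]   # last-in, first-out pairing
    return a + b

def cross_serial(pairs):
    a = [p[0] for p in pairs]
    b = [p[1] for p in pairs]             # first-in, first-out pairing
    return a + b

print(" ".join(nested(PAIRS)))        # be de ge go do bo
print(" ".join(cross_serial(PAIRS)))  # be de ge bo do go
```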

15 citations


Proceedings ArticleDOI
10 Jun 2015
TL;DR: ML-Rules is extended to also support constraints using functions on multi-sets of species, i.e., solutions, and its expressiveness is illustrated based on a model of the cell cycle and proliferation.
Abstract: The domain-specific modeling and simulation language ML-Rules makes it possible to describe cell biological systems at different levels of organization. A model is formed by attributed and dynamically nested species, with reactions that are constrained by functions on attributes. In this paper, we extend ML-Rules to also support constraints using functions on multi-sets of species, i.e., solutions. Further, we present the formal syntax and semantics of ML-Rules, define its stochastic simulator, and illustrate its expressiveness based on a model of the cell cycle and proliferation.
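A loose Python rendering (not ML-Rules syntax) of what a constraint over a whole solution adds: the rule below fires only while a function of the entire multiset holds. Species names, counts, and the constraint itself are invented.

```python
# Sketch of a reaction whose applicability depends on a function over the
# full multiset of species ("solution"), per the paper's extension.
from collections import Counter

solution = Counter({"CyclinB": 40, "Cdk1": 25, "APC": 5})

def total_protein(sol):
    """A function on the whole solution, not on individual attributes."""
    return sum(sol.values())

def apply_rule(sol):
    """CyclinB + Cdk1 -> MPF, but only while total protein exceeds 50."""
    if sol["CyclinB"] > 0 and sol["Cdk1"] > 0 and total_protein(sol) > 50:
        sol["CyclinB"] -= 1
        sol["Cdk1"] -= 1
        sol["MPF"] += 1
        return True
    return False

while apply_rule(solution):
    pass
print(solution)  # the rule stops firing once the multiset-level constraint fails
```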

15 citations


Journal ArticleDOI
TL;DR: In this article, the ideas of algebraic dynamic programming (ADP) are generalized to a much wider scope of data structures by relaxing the concept of parsing, which allows us to formalize the conceptual complementarity of inside and outside variables in a natural way.
Abstract: Dynamic programming algorithms provide exact solutions to many problems in computational biology, such as sequence alignment, RNA folding, hidden Markov models (HMMs), and scoring of phylogenetic trees. Structurally analogous algorithms compute optimal solutions, evaluate score distributions, and perform stochastic sampling. This is explained in the theory of Algebraic Dynamic Programming (ADP) by a strict separation of state space traversal (usually represented by a context-free grammar), scoring (encoded as an algebra), and choice rule. A key ingredient in this theory is the use of yield parsers that operate on the ordered input data structure, usually strings or ordered trees. The computation of ensemble properties, such as a posteriori probabilities of HMMs or partition functions in RNA folding, requires the combination of two distinct, but intimately related algorithms, known as the inside and the outside recursion. Only the inside recursions are covered by the classical ADP theory. The ideas of ADP are generalized to a much wider scope of data structures by relaxing the concept of parsing. This allows us to formalize the conceptual complementarity of inside and outside variables in a natural way. We demonstrate that outside recursions are generically derivable from inside decomposition schemes. In addition to rephrasing the well-known algorithms for HMMs, pairwise sequence alignment, and RNA folding, we show how the traveling salesman problem (TSP) and the shortest Hamiltonian path problem can be implemented efficiently in the extended ADP framework. As a showcase application we investigate the ancient evolution of HOX gene clusters in terms of shortest Hamiltonian paths. The generalized ADP framework presented here greatly facilitates the development and implementation of dynamic programming algorithms for a wide spectrum of applications.
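As a reminder of what an inside recursion looks like in the classical setting the paper generalizes, here is a textbook Nussinov-style sketch in which the decomposition (grammar), scoring (algebra), and choice function are kept separable; it is not the paper's extended framework.

```python
# Inside recursion for toy RNA folding (base-pair maximization), with the
# ADP-style separation: the decompositions are fixed, while the scoring
# function and choice rule are swappable parameters.
from functools import lru_cache

SEQ = "GGGAAAUCC"
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def inside(seq, score=lambda: 1, choice=max):
    @lru_cache(maxsize=None)
    def S(i, j):                      # inside value for the subword seq[i..j]
        if j - i < 1:                 # empty or single base: no pairs
            return 0
        cases = [S(i + 1, j)]         # decomposition 1: base i stays unpaired
        for k in range(i + 1, j + 1): # decomposition 2: base i pairs with k
            if (seq[i], seq[k]) in PAIRS:
                cases.append(score() + S(i + 1, k - 1) + S(k + 1, j))
        return choice(cases)
    return S(0, len(seq) - 1)

print(inside(SEQ))  # maximum number of base pairs under this toy algebra
```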

13 citations


Journal ArticleDOI
TL;DR: This essay argues that the sort of compositional meaning theory that would verify GGH would not only be quite different from the theories formal semanticists construct, but would be a more fundamental theory that supersedes those theories in that it would explain why they are true when they are true, but their truth wouldn't explain its truth.
Abstract: A generative grammar for a language L generates one or more syntactic structures for each sentence of L and interprets those structures both phonologically and semantically. A widely accepted assumption in generative linguistics dating from the mid-60s, the Generative Grammar Hypothesis (GGH), is that the ability of a speaker to understand sentences of her language requires her to have tacit knowledge of a generative grammar of it, and the task of linguistic semantics in those early days was taken to be that of specifying the form that the semantic component of a generative grammar must take. Then in the 70s linguistic semantics took a curious turn. Without rejecting GGH, linguists turned away from the task of characterizing the semantic component of a generative grammar to pursue instead the Montague-inspired project of providing for natural languages the same kind of model-theoretic semantics that logicians devise for the artificial languages of formal systems of logic, and “formal semantics” continues to dominate semantics in linguistics. This essay argues that the sort of compositional meaning theory that would verify GGH would not only be quite different from the theories formal semanticists construct, but would be a more fundamental theory that supersedes those theories in that it would explain why they are true when they are true, but their truth wouldn’t explain its truth. Formal semantics has undoubtedly made important contributions to our understanding of such phenomena as anaphora and quantification, but semantics in linguistics is supposed to be the study of meaning. This means that the formal semanticist can’t be unconcerned that the kind of semantic theory for a natural language that interests her has no place in a theory of linguistic competence; for if GGH is correct, then the more fundamental semantic theory is the compositional meaning theory that is the semantic component of the internally represented generative grammar, and if that is so, then linguistic semantics has so far ignored what really ought to be its primary concern.

13 citations


Book ChapterDOI
25 Jun 2015
TL;DR: The central result of this paper is the construction of an incompressible sequence of finite word languages that is shown to transfer to tree languages and also to formal proofs in first-order predicate logic.
Abstract: We consider the problem of simultaneously compressing a finite set of words by a single grammar. The central result of this paper is the construction of an incompressible sequence of finite word languages. This result is then shown to transfer to tree languages and (via a previously established connection between proof theory and formal language theory) also to formal proofs in first-order predicate logic.
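To make the object of study concrete, the greedy Re-Pair-style pass below compresses a finite set of words with one shared grammar; computing a truly smallest grammar is hard, so this is illustration only.

```python
# Greedily factor out the most frequent adjacent symbol pair across all
# words, introducing one fresh nonterminal per round.
from collections import Counter

def repair(words):
    rules, fresh = {}, iter("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
    seqs = [list(w) for w in words]
    while True:
        digrams = Counter()
        for s in seqs:
            digrams.update(zip(s, s[1:]))
        if not digrams:
            break
        pair, count = digrams.most_common(1)[0]
        if count < 2:
            break                        # nothing repeated is left to share
        nt = next(fresh)
        rules[nt] = pair
        for idx, s in enumerate(seqs):   # rewrite every occurrence of the pair
            out, i = [], 0
            while i < len(s):
                if i + 1 < len(s) and (s[i], s[i + 1]) == pair:
                    out.append(nt)
                    i += 2
                else:
                    out.append(s[i])
                    i += 1
            seqs[idx] = out
    return rules, ["".join(s) for s in seqs]

print(repair(["abab", "ababab", "abba"]))
# ({'A': ('a', 'b'), 'B': ('A', 'A')}, ['B', 'BA', 'Aba'])
```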

12 citations


Proceedings ArticleDOI
21 Oct 2015
TL;DR: This paper presents an FLA teaching-learning methodology based on the development of simulators as an approach to clarify the formalism for the students.
Abstract: Formal languages and automata (FLA) theory has fundamental relevance to the base of knowledge in the computer science area, especially with a focus on scientific education. Usually presented as a single course, the teaching-learning process of FLA is characterized by a high level of abstraction and is considered difficult due to the complexity of language formalisms. To support the learning process, tools have been used to simulate language formalisms. However, simulation alone is not enough to reinforce the construction of an abstract concept. In this paper, we present an FLA teaching-learning methodology based on the development of simulators as an approach to clarify the formalisms for students. By developing their own simulators, students are exposed to the data structures and algorithms needed to handle the formalism. Consequently, students have the opportunity to make the concept concrete.
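In the spirit of the methodology, where students implement their own simulators, such an assignment might start from a minimal DFA simulator like the following (the example machine is ours, not the paper's).

```python
# A minimal DFA simulator of the kind students would build and then extend.
def run_dfa(delta, start, accepting, word):
    """Simulate a DFA given as a transition dict: (state, symbol) -> state."""
    state = start
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accepting

# Example machine: accepts binary strings with an even number of 1s.
EVEN_ONES = {
    ("q0", "0"): "q0", ("q0", "1"): "q1",
    ("q1", "0"): "q1", ("q1", "1"): "q0",
}
print(run_dfa(EVEN_ONES, "q0", {"q0"}, "10110"))  # False: three 1s
print(run_dfa(EVEN_ONES, "q0", {"q0"}, "1010"))   # True: two 1s
```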

10 citations


Journal ArticleDOI
TL;DR: This review discusses a wide range of current approaches with particular reference to African languages, as these have been playing a crucial role in advancing knowledge about the diversity of and recurring patterns in both meaning and form of information structural notions.
Abstract: Information structure has been one of the central topics of recent linguistic research. This review discusses a wide range of current approaches with particular reference to African languages, as these have been playing a crucial role in advancing our knowledge about the diversity of and recurring patterns in both meaning and form of information structural notions. We focus on cross-linguistic functional frameworks, the investigation of prosody, formal syntactic theories, and relevant effects of semantic interpretation. Information structure is a thriving research domain that promises to yield important advances in our general understanding of human language.

10 citations


Journal ArticleDOI
TL;DR: In this paper, a longitudinal study was carried out on a single matriculation student, where data was collected on the errors in the use of prepositions in various speaking and writing tasks over a period of two years.
Abstract: An oft-debated issue is whether or not English grammar has to be taught formally. One group insists that students will not be able to learn grammar unless they are taught formal grammar rules, while the other maintains that students will pick up grammar on their own in due course. To determine the extent of acquisition of English prepositions in the absence of formal teaching of prepositions, a longitudinal study was carried out on a single matriculation student. Data were collected on the errors in the use of prepositions in various speaking and writing tasks - essays, journals, interviews and presentations - at six-month intervals over a period of two years. An analysis of the student's use of prepositions was carried out to determine whether or not there had been any changes over this period. The results of this study show that there were indeed improvements in the use of prepositions by this student for both speaking and writing tasks. Regarding the types of errors, there were more errors of commission than errors of omission. A common error was the unnecessary use of the prepositional phrase ‘for me’. With respect to the progress made, in speaking tasks the most improvement was seen in the prepositions ‘for’, ‘in’ and ‘about’, while in writing tasks the best results were with the prepositions ‘to’, ‘of’ and ‘in’. These findings imply that grammar should be taught in a way that is compatible with the natural processes of acquisition, rather than with the use of formal grammar rules.

10 citations


01 Jan 2015
TL;DR: The authors show that disfluencies, non-sentential utterances, gestures, and many other phenomena that are ubiquitous in spoken language are rule-governed in much the same way as phenomena captured by conventional grammars.
Abstract: Much of contemporary mainstream formal grammar theory is unable to provide analyses for language as it occurs in actual spoken interaction. Its analyses are developed for a cleaned-up version of language which omits the disfluencies, non-sentential utterances, gestures, and many other phenomena that are ubiquitous in spoken language. Using evidence from linguistics, conversation analysis, multimodal communication, psychology, language acquisition, and neuroscience, we show that these aspects of language use are rule-governed in much the same way as phenomena captured by conventional grammars. Furthermore, we argue that over the past few years the theoretical tools required to provide a precise characterization of such phenomena have begun to emerge in theoretical and computational linguistics; hence, there is no reason for treating them as ‘second-class citizens’ other than pre-theoretical assumptions about what should fall under the purview of grammar. Finally, we suggest that grammar formalisms covering such phenomena would provide a better foundation not just for linguistic analysis of face-to-face interaction, but also for sister disciplines such as research on spoken dialogue systems and/or psychological work on language acquisition.

Proceedings ArticleDOI
13 Jan 2015
TL;DR: This work on formalization of language theory proves formally in the Agda dependently typed programming language that each of these transformations is correct in the sense of making progress toward normality and preserving the language of the given grammar.
Abstract: Every context-free grammar can be transformed into an equivalent one in the Chomsky normal form by a sequence of four transformations. In this work on formalization of language theory, we prove formally in the Agda dependently typed programming language that each of these transformations is correct in the sense of making progress toward normality and preserving the language of the given grammar. Also, we show that the right sequence of these transformations leads to a grammar in the Chomsky normal form (since each next transformation preserves the normality properties established by the previous ones) that accepts the same language as the given grammar. As we work in a constructive setting, soundness and completeness proofs are functions converting between parse trees in the normalized and original grammars.
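The Agda development is about machine-checked proofs, but as a plain illustration of one of the four transformations, here is the step that binarizes long right-hand sides, sketched in Python with our own grammar encoding (nonterminal → list of right-hand sides).

```python
# The "BIN" step toward Chomsky normal form: split any rule
# A -> X1 X2 ... Xn (n > 2) into a chain of binary rules.
def binarize(rules):
    out, counter = {}, [0]
    def fresh(base):
        counter[0] += 1
        return f"{base}_{counter[0]}"
    for lhs, rhss in rules.items():
        out.setdefault(lhs, [])
        for rhs in rhss:
            head, rest = lhs, list(rhs)
            while len(rest) > 2:              # A -> X1 X2 ... Xn becomes
                nt = fresh(lhs)               # A -> X1 N1, N1 -> X2 N2, ...
                out.setdefault(head, []).append([rest[0], nt])
                head, rest = nt, rest[1:]
            out.setdefault(head, []).append(rest)
    return out

G = {"S": [["NP", "VP", "PP"], ["VP"]]}
print(binarize(G))
# {'S': [['NP', 'S_1'], ['VP']], 'S_1': [['VP', 'PP']]}
```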

Journal ArticleDOI
TL;DR: This paper conducted an exploratory-interpretive study focusing on the role and effectiveness of grammar instruction and corrective feedback as controversial areas of language instruction in SLA research and L2 pedagogy.
Abstract: This exploratory-interpretive study focuses on the role and effectiveness of grammar instruction and corrective feedback, two areas of language instruction that remain the subject of considerable debate in SLA research and L2 pedagogy. Since the question today is no longer whether or not to teach grammar but rather how grammar could best be taught in classrooms, the results revealed that although most learners acknowledged the importance and potential usefulness of grammar instruction and corrective feedback for L2 acquisition, they nevertheless prioritised communication over grammar. Emphasis was mainly placed on using the language communicatively rather than focusing on grammatical accuracy. Learners expect and/or wish to transfer their grammatical knowledge into communicative language use. The results suggest the need for further opportunities for practising communicative language use rather than practising formal grammar in a very controlled way in class. While negative attit...

Book ChapterDOI
12 Oct 2015
TL;DR: This paper proposes a novel formalism that covers the core wayfinding processes, yet is modular in nature by allowing for open slots for those spatial cognitive processes that are modifiable, or not yet well understood.
Abstract: Wayfinding models can be helpful in describing, understanding, and technologically supporting the processes involved in navigation. However, current models either lack a high degree of formalization, or they are not holistic and perceptually grounded, which impedes their use for cognitive engineering. In this paper, we propose a novel formalism that covers the core wayfinding processes, yet is modular in nature by allowing for open slots for those spatial cognitive processes that are modifiable, or not yet well understood. Our model is based on a formal grammar grounded in spatial reference systems and is both interpretable in terms of observable behavior and executable to allow for empirical testing as well as the simulation of wayfinding.
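One way to read the "open slots" idea, sketched loosely below: the overall wayfinding loop is fixed while individual cognitive processes are pluggable parameters. The decomposition and slot names are ours, not the paper's formalism.

```python
# A loose sketch: wayfinding as a fixed control structure with open slots
# for localization and route choice, which can be swapped for empirical
# testing or simulation.
def wayfind(env, localize, select_next, done):
    """Wayfinding = localize, then repeat route choice until the goal."""
    state = localize(env)                # open slot: localization process
    path = [state]
    while not done(state):
        state = select_next(env, state)  # open slot: route-choice process
        path.append(state)
    return path

# A toy corridor environment with trivial slot fillers.
env = {"goal": 3}
print(wayfind(env,
              localize=lambda e: 0,
              select_next=lambda e, s: s + 1,
              done=lambda s: s == 3))  # [0, 1, 2, 3]
```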

Proceedings ArticleDOI
01 Jul 2015
TL;DR: An extension of OCL, named Reconfigurable OCL, is proposed in order to optimize the specification and validation of constraints related to different execution scenarios of a flexible system, yielding gains in validation time and allowing constraints to be expressed more quickly.
Abstract: The paper deals with the verification of reconfigurable real-time systems to be validated using the Object Constraint Language (OCL). A reconfiguration scenario is assumed to be any adaptation of the execution to the system environment according to user requirements. Nevertheless, since several behaviors can be redundant from one execution to another, OCL alone is insufficient to specify the constraints to be satisfied by this kind of system. We propose an extension of OCL, named Reconfigurable OCL (ROCL), in order to optimize the specification and validation of constraints related to different execution scenarios of a flexible system. A metamodel of the new ROCL is proposed with formal syntax and semantics. This solution yields gains in validation time and allows constraints to be expressed more quickly. The paper's contribution is applied to a case study that demonstrates the originality of this new language.

Proceedings ArticleDOI
01 Apr 2015
TL;DR: The results of this study show the superiority of the proposed method in comparison with geometrical and fractal approaches in case of the absolute and relative complexity in word production as well as the simplicity of the extracted rules.
Abstract: An L-system is a parallel rewriting system and a type of formal grammar, introduced to describe the behavior of plant cells, model the growth processes of plant development and the morphology of organisms, and generate self-similar fractals. These applications lie in the field of pattern formation. However, to the best of our knowledge, the visual formation of a language's words using L-systems has not been studied yet. This paper aims to fill that gap by introducing grammars and rules for a Persian poem by Rumi, the so-called Neyname, from which 108 words were generated. The main reason for selecting the Persian language is its nature in terms of complexity, its treatment of the baseline, and its cursive script. The results of this study show the superiority of the proposed method over geometrical and fractal approaches in terms of the absolute and relative complexity of word production, as well as the simplicity of the extracted rules.
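For reference, the core parallel-rewriting mechanism is tiny; the sketch below uses the classic Koch-curve rule rather than the paper's Persian letterform grammars, which are not reproduced here.

```python
# A minimal L-system rewriter: every symbol is rewritten in parallel at
# each iteration; symbols without a rule are copied unchanged.
def lsystem(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)  # parallel rewriting
    return s

KOCH = {"F": "F+F-F-F+F"}
print(lsystem("F", KOCH, 2))
# F+F-F-F+F+F+F-F-F+F-F+F-F-F+F-F+F-F-F+F+F+F-F-F+F
```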

Posted Content
TL;DR: This work is focused on the reimplementation of the resolution rules from Fernández (2006) with a probabilistic account of the dialogue state to provide a principled account of ambiguities in the NSU resolution process.
Abstract: Non-sentential utterances (NSUs) are utterances that lack a complete sentential form but whose meaning can be inferred from the dialogue context, such as "OK", "where?", "probably at his apartment". The interpretation of non-sentential utterances is an important problem in computational linguistics since they constitute a frequent phenomenon in dialogue and they are intrinsically context-dependent. The interpretation of NSUs is the task of retrieving their full semantic content from their form and the dialogue context. The first half of this thesis is devoted to the NSU classification task. Our work builds upon Fernández et al. (2007), which presents a series of machine-learning experiments on the classification of NSUs. We extended their approach with a combination of new features and semi-supervised learning techniques. The empirical results presented in this thesis show a modest but significant improvement over the state-of-the-art classification performance. The consecutive, yet independent, problem is how to infer an appropriate semantic representation of such NSUs on the basis of the dialogue context. Fernández (2006) formalizes this task in terms of "resolution rules" built on top of Type Theory with Records (TTR). Our work is focused on the reimplementation of the resolution rules from Fernández (2006) with a probabilistic account of the dialogue state. The probabilistic rules formalism of Lison (2014) is particularly suited for this task because, similarly to the framework developed by Ginzburg (2012) and Fernández (2006), it involves the specification of update rules on the variables of the dialogue state to capture the dynamics of the conversation. However, the probabilistic rules can also encode probabilistic knowledge, thereby providing a principled account of ambiguities in the NSU resolution process.

Proceedings ArticleDOI
09 Nov 2015
TL;DR: A hybrid approach is introduced which uses formal analogy in parallel in the language domain and in the more general word-sequence domain in order to tackle linguistic and parametric variations separately.
Abstract: In this paper, we explore the use of proportional analogy reasoning to address the problem of language transfer, that is, the production of an utterance in a defined target language given another in the source language. The application we focus on is the transfer of requests in a natural language (French) into commands in a formal language (bash). We introduce a hybrid approach which uses formal analogy in parallel in the language domain and in the more general word-sequence domain in order to tackle linguistic and parametric variations separately. Observing that parametric variations are the most unpredictable, we also apply analogy to automatically extend the available data by generating new requests from the existing ones. Our approach is language-independent; it was designed with the help of analogical reasoning to be as generic as possible.
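A toy version of solving a proportional analogy a : b :: c : x on strings conveys the flavor (formal analogy proper is defined more strictly); the French/bash pair below is an invented example, not from the paper's data.

```python
# Solve a : b :: c : x by replaying the edit from a to c onto b, using a
# single slot delimited by the longest common prefix and suffix of a and c.
def solve_analogy(a, b, c):
    p = 0
    while p < min(len(a), len(c)) and a[p] == c[p]:
        p += 1
    s = 0
    while s < min(len(a), len(c)) - p and a[len(a) - 1 - s] == c[len(c) - 1 - s]:
        s += 1
    slot_a = a[p:len(a) - s]          # the part where a and c differ
    slot_c = c[p:len(c) - s]
    if slot_a not in b:
        return None                   # the template does not transfer
    return b.replace(slot_a, slot_c)

print(solve_analogy("liste le dossier src", "ls src",
                    "liste le dossier /tmp"))  # -> "ls /tmp"
```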

Proceedings ArticleDOI
21 May 2015
TL;DR: This paper investigates an alternative approach to inferring grammars via pattern languages and elementary formal system frameworks and summarizes inferability results for subclasses of both frameworks and discusses how they map to the Chomsky hierarchy.
Abstract: Formal Language Theory for Security (LANGSEC) has proposed that formal language theory and grammars be used to define and secure protocols and parsers. The assumption is that by restricting languages to lower levels of the Chomsky hierarchy, it is easier to control and verify parser code. In this paper, we investigate an alternative approach to inferring grammars via pattern languages and elementary formal system frameworks. We summarize inferability results for subclasses of both frameworks and discuss how they map to the Chomsky hierarchy. Finally, we present initial results of pattern language learning on logged HTTP sessions and suggest future areas of research.
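As a hint of what pattern-language inference over logged HTTP sessions might look like, the naive token-level aligner below induces a single pattern (constants plus variables) from positive examples of the same shape; real pattern-learning algorithms are considerably more general, and the sample requests are invented.

```python
# Angluin-style flavor only: positions where all samples agree become
# constants, positions where they differ become variables.
def infer_pattern(examples):
    toks = [e.split() for e in examples]
    if len({len(t) for t in toks}) != 1:
        raise ValueError("this toy inferencer needs samples of equal arity")
    pattern, var = [], 0
    for column in zip(*toks):
        if len(set(column)) == 1:
            pattern.append(column[0])      # constant token
        else:
            var += 1
            pattern.append(f"x{var}")      # variable token
    return " ".join(pattern)

reqs = ["GET /a HTTP/1.1", "GET /b HTTP/1.1", "GET /c HTTP/1.1"]
print(infer_pattern(reqs))  # GET x1 HTTP/1.1
```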

Posted Content
TL;DR: This paper presents a formalization, using the Coq proof assistant, of fundamental results related to context-free grammars and languages, including closure properties, grammar simplification, and the existence of a Chomsky Normal Form.
Abstract: Context-free language theory is a subject of high importance in computer language processing technology as well as in formal language theory. This paper presents a formalization, using the Coq proof assistant, of fundamental results related to context-free grammars and languages. These include closure properties (union, concatenation and Kleene star), grammar simplification (elimination of useless symbols, inaccessible symbols, empty rules and unit rules) and the existence of a Chomsky Normal Form for context-free grammars.
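The Coq development proves such constructions correct; as an informal reminder of what, say, closure under union looks like, here is the usual construction on a dict-encoded CFG (rename the nonterminals apart, then add a fresh start symbol).

```python
# Union of two CFGs: prefix each grammar's nonterminals to keep them
# disjoint, then add S -> S1 | S2.
def union(g1, s1, g2, s2):
    g = {}
    g.update({f"1_{nt}": [[f"1_{x}" if x in g1 else x for x in rhs] for rhs in r]
              for nt, r in g1.items()})
    g.update({f"2_{nt}": [[f"2_{x}" if x in g2 else x for x in rhs] for rhs in r]
              for nt, r in g2.items()})
    g["S"] = [[f"1_{s1}"], [f"2_{s2}"]]
    return g

A = {"S": [["a", "S"], ["a"]]}      # language a+
B = {"S": [["b", "S"], ["b"]]}      # language b+
print(union(A, "S", B, "S"))        # grammar for a+ | b+
```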

01 Jan 2015
TL;DR: This paper evaluates the prospects of implementing International Technology Alliance Controlled English (ITACE) as a middleware for ontology editing and presents a prototype of a natural language conversational interface application designed to facilitate ontology editing via the formulation of CNL constructs.
Abstract: Ontologies formally represent reality in a way that limits ambiguity and facilitates automated reasoning and data fusion, but is often daunting to the non-technical user. Thus, many researchers have endeavored to hide the formal syntax and semantics of ontologies behind the constructs of Controlled Natural Languages (CNLs), which retain the formal properties of ontologies while simultaneously presenting that information in a comprehensible natural language format. In this paper, we build upon previous work in this field by evaluating prospects of implementing International Technology Alliance Controlled English (ITACE) as a middleware for ontology editing. We also discuss at length a prototype of a natural language conversational interface application designed to facilitate ontology editing via the formulation of CNL constructs. Keywords—Ontology; Controlled English; Intelligence Collection

Patent
19 May 2015
TL;DR: In this paper, a human expert creates sentences in a formal grammar to describe the state of a physical system through aspects of the behavior of such systems, and a software process combines these sentences with historical data about physical systems of the same type and uses machine learning to generate a model that detects this state in such systems.
Abstract: A human expert creates sentences in a formal grammar to describe the state of a physical system through aspects of the behavior of such systems. A software process combines these sentences with historical data about physical systems of the same type and uses machine learning to generate a model that detects this state in such systems. These models are able to detect important states of physical systems, such as states that are predictive of future failures, without needing precise guidance from a human user.
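One way to read the patent's pipeline, sketched loosely: an expert sentence in a small formal grammar is compiled into a labeling function over historical records, and those labels would then feed an off-the-shelf learner. The grammar, field names, and threshold syntax below are invented for illustration.

```python
# Compile an expert sentence in a tiny formal grammar into a predicate
# that labels historical sensor records.
import re

def compile_sentence(sentence):
    """Compile 'FIELD above VALUE [and ...]' into a record predicate."""
    clauses = [(field, float(value))
               for field, value in re.findall(r"(\w+) above ([\d.]+)", sentence)]
    return lambda rec: all(rec.get(f, 0) > v for f, v in clauses)

label = compile_sentence("temperature above 90 and vibration above 0.5")
history = [
    {"temperature": 95, "vibration": 0.7},   # would be labeled "in state"
    {"temperature": 70, "vibration": 0.9},
]
print([label(rec) for rec in history])  # [True, False]
```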

Book ChapterDOI
08 Jun 2015
TL;DR: A rapidly changing environment demands that enterprises restructure just as quickly to retain clients and stay aligned with their surroundings; virtual enterprises provide this flexibility, and the large number of services available in the Cloud makes both rapid restructuring and the assembly of new enterprises from atomic services potentially possible.
Abstract: A rapidly changing environment demands that an enterprise be able to restructure just as quickly in order to retain its clients and stay aligned with its surroundings. Virtual enterprises provide such an opportunity. The availability of a large number of diverse services in the Cloud makes both rapid restructuring of an enterprise and the assembly of new enterprises from atomic services potentially possible. It is known that automated planning algorithms are an important part of such synthesis.

01 Jan 2015
TL;DR: A new learning algorithm is proposed which automatically generates custom attribute grammars from a small set of NLAs and formal properties; the machine-generated grammar is used in the NLA translation system to create formal properties from NLAs taken from two design documents for designs in the same product family.
Abstract: Author(s): Harris, Christopher Bryant | Advisor(s): Harris, Ian G | Abstract: Verification of modern digital systems can consume up to 70% of the design cycle. Verification engineers must create formal properties which reflect correct operation from design specifications or other documents. This process of creating formal correctness properties from textual descriptions is a laborious task which is difficult to automate.In this work I investigate the creation of formal verification properties from textual descriptions written in natural language. I present two approaches that utilize natural language processing (NLP) and machine learning techniques for the automatic generation of verification properties. In the first approach a set of correctness properties expressed in natural language, called natural language assertions (NLAs), is divided into subsets of structurally similar sentences. A generalized formal property template for each subset is used to provide a mapping from each sentence to a well specified verification property. Experimental results show that this methodology reduces the number of formal properties which must be manually created by up to an order of magnitude.In the second approach I create a custom attributed formal grammar which captures the English semantics of a temporal tree logic. A translation system is implemented which uses this attribute grammar to perform a semantic parsing of NLAs. Attributes for each grammatical production in the parse tree are then used to build a fully specified formal property for each NLA. Experimental results show that valid formal properties are generated from English NLAs in over 90% of test cases. In evaluating the translation system it was observed that translation rates for NLAs are strongly dependent on the quality of the formal grammar used. High translation rates require a finely tuned custom grammar. To facilitate the creation of such a grammar I propose a new learning algorithm which automatically generates custom attribute grammars from a small set of NLAs and formal properties. This machine generated grammar is used in the NLA translation system to create formal properties from NLAs taken from two design documents for designs in the same product family. Experimental results show that the learned attribute grammar is of sufficient quality to successfully translate up to 88% of NLAs.

BookDOI
31 Jan 2015
TL;DR: The authors focus on the interplay of syntactic and semantic factors in language change, an issue so far largely neglected both in (mostly lexical) historical semantics as well as historical syntax, but recently brought into focus by grammaticalization theory and minimalist diachronic syntax.
Abstract: Bringing together diachronic research from a variety of perspectives, notably typology, formal syntax and semantics, this volume focuses on the interplay of syntactic and semantic factors in language change - an issue so far largely neglected both in (mostly lexical) historical semantics as well as historical syntax, but recently brought into focus by grammaticalization theory as well as Minimalist diachronic syntax. The contributions draw on data from numerous Indo-European languages including Vedic Sanskrit, Middle Indic, Greek as well as English and German, and discuss a range of phenomena such as change in negation markers, indefinite articles, quantifiers, modal verbs, argument structure among others. The papers analyze diachronic evidence in the light of contemporary syntactic and semantic theory, addressing the crucial question of how syntactic and semantic change are linked, and whether both are governed by similar constraints, principles and systematic mechanisms. The volume will appeal to scholars in historical linguistics and formal theories of syntax and semantics.


01 Jan 2015
TL;DR: The volume comprises papers that were presented at the 14th European conference on "Formal Description of Slavic Languages" at Masaryk University in Brno, Czech Republic, and describes interesting data patterns found in Slavic languages from the perspective of formal grammar.
Abstract: The volume comprises papers that were presented at the 14th European conference on "Formal Description of Slavic Languages" at Masaryk University in Brno, Czech Republic. The conference focuses on formal approaches to Slavic phonology, morphology, syntax and semantics. The present contributions describe interesting data patterns found in Slavic languages and analyze them from the perspective of formal grammar, including generative syntax, Distributed Morphology, formal semantics and others.


Book ChapterDOI
01 Jan 2015
TL;DR: In this paper, it was shown that a mental notion of meaning is very different from a semantic theory of meaning, and that the absence of syntax and semantics does not prevent Aristotle from developing a theory of deduction based on universal proofs, so that scientific explanations are made possible in the context of demonstrations.
Abstract: This paper shows that Aristotle’s conception of language is incompatible with a divide between syntax and semantics. Language for Aristotle is not reducible to a formal syntax of names and sentences, since these are linguistic entities if and only if they have linguistic meanings by convention. Nor is there an abstract semantics, in so far as Aristotle’s De Interpretatione resorts to a distinction between two kinds of meaning. On the one hand, linguistic meanings are by convention, so that names, verbs, and sentences are meaningful spoken sounds, and Aristotle concludes that linguistic meanings cannot be the same for all. On the other hand, non-linguistic thoughts are about mental contents, and when they are related to actual things, they are the same for all, because actual things are the same for all. For instance, the non-linguistic thought of snow is the same for all, but the linguistic meaning of the name ‘snow’ is not the same for all. While linguistic meanings rely on the learning of linguistic conventions, non-linguistic meanings are mental contents, derived from perceptions. In that respect, a mental conception of meaning is very different from a semantic theory of meaning. Nevertheless, the absence of syntax and semantics does not prevent Aristotle from developing a theory of deduction based on universal proofs, so that scientific explanations are made possible in the context of demonstrations. Accordingly, demonstrative knowledge is about explanatory middle terms, whose formalization allows Aristotle to establish a universal discourse for science.

Journal ArticleDOI
TL;DR: ModelCC is introduced, a model-based parser generator that decouples language specification from language processing, avoiding some of the problems caused by grammar-driven parser generators.
Abstract: Syntax-directed translation tools require the specification of a language by means of a formal grammar. This grammar must conform to the specific requirements of the parser generator to be used. This grammar is then annotated with semantic actions for the resulting system to perform its desired function. In this paper, we introduce ModelCC, a model-based parser generator that decouples language specification from language processing, avoiding some of the problems caused by grammar-driven parser generators. ModelCC receives a conceptual model as input, along with constraints that annotate it. It is then able to create a parser for the desired textual syntax and the generated parser fully automates the instantiation of the language conceptual model. ModelCC also includes a reference resolution mechanism so that ModelCC is able to instantiate abstract syntax graphs, rather than mere abstract syntax trees.