
Showing papers on "Chomsky hierarchy published in 2019"


Journal ArticleDOI
01 May 2019-Synthese
TL;DR: This paper argues that a central misconstrual of the formal apparatus of recursive operations, such as the set-theoretic operation merge, has led to a mathematisation of the object of inquiry, producing a strong analogy with discrete mathematics and especially arithmetic.
Abstract: The concept of linguistic infinity has had a central role to play in foundational debates within theoretical linguistics since its more formal inception in the mid-twentieth century. The conceptualist tradition, marshalled in by Chomsky and others, holds that infinity is a core explanandum and a link to the formal sciences. Realism/Platonism takes this further to argue that linguistics is in fact a formal science with an abstract ontology. In this paper, I argue that a central misconstrual of the formal apparatus of recursive operations, such as the set-theoretic operation merge, has led to a mathematisation of the object of inquiry, producing a strong analogy with discrete mathematics and especially arithmetic. The main product of this error has been the assumption that natural languages, like some formal languages, are discretely infinite. I will offer an alternative means of capturing the insights and observations related to this posit in terms of scientific modelling. My chief aim will be to draw from the larger philosophy of science literature in order to offer a position of grammars as models compatible with various foundational interpretations of linguistics while being informed by contemporary ideas on scientific modelling for the natural and social sciences.

15 citations


Journal ArticleDOI
TL;DR: This work proposes a theoretical explanation in terms of a new concept of “Merge-generability,” that is, whether the structural basis for a given dependency is provided by the fundamental operation Merge, and results indicate that Merge is indeed a fundamental operation, which comes into play especially under the Natural conditions.
Abstract: Ever since the inception of generative linguistics, various dependency patterns have been widely discussed in the literature, particularly as they pertain to the hierarchy based on "weak generation" - the so-called Chomsky Hierarchy. However, humans can create any possible dependency pattern over a sequence of symbols by artificial means (e.g., computer programming). The differences between sentences in human language and general symbol sequences have been routinely observed, but the question as to why such differences exist has barely been raised. Here, we address this problem and propose a theoretical explanation in terms of a new concept of "Merge-generability," that is, whether the structural basis for a given dependency is provided by the fundamental operation Merge. In our functional magnetic resonance imaging (fMRI) study, we tested the judgments of noun phrase (NP)-predicate (Pred) pairings in sentences of Japanese, an SOV language that allows natural, unbounded nesting configurations. We further introduced two pseudo-adverbs, which artificially force dependencies that do not conform to structures generated by Merge, i.e., non-Merge-generable ones; these adverbs enable us to manipulate Merge-generability (Natural or Artificial). By employing this novel paradigm, we obtained the following results. Firstly, the behavioral data clearly showed that the NP-Pred matching task became more demanding under the Artificial conditions than under the Natural conditions, reflecting cognitive loads that covaried with the increased number of words. Secondly, localized activation in the left frontal cortex, as well as in the left middle temporal gyrus and angular gyrus, was observed for the [Natural - Artificial] contrast, indicating specialization of these left regions for syntactic processing.
Any activation due to task difficulty was completely excluded from activations in these regions, because the Natural conditions were always easier than the Artificial ones. Finally, the [Artificial - Natural] contrast revealed activation in the dorsal portion of the left frontal cortex, together with widespread regions required for general cognitive demands. These results indicate that Merge-generable sentences are processed in these specific regions in contrast to non-Merge-generable sentences, demonstrating that Merge is indeed a fundamental operation, which comes into play especially under the Natural conditions.

6 citations


Journal ArticleDOI
TL;DR: A syntactic characterization of the languages accepted online by 1ANNs is achieved in terms of so-called cut languages, which are combined in a certain way by the usual operations, proving that languages accepted by 1ANNs with rational weights are context-sensitive (Chomsky level 1).

6 citations


Book ChapterDOI
08 Oct 2019
TL;DR: This work provides a fixpoint characterization of the languages recognized by an indexed grammar and studies possible ways to abstract, in the abstract interpretation sense, these languages and their grammars into context-free and regular languages.
Abstract: Indexed grammars are a generalization of context-free grammars and recognize a proper subset of context-sensitive languages. The class of languages recognized by indexed grammars is called the indexed languages, and it corresponds to the languages recognized by nested stack automata. For example, indexed grammars can recognize languages such as \(\{a^n b^n c^n \mid n \ge 0\}\), which is not context-free, but there are context-sensitive languages that they cannot recognize. Indexed grammars thus identify a set of languages that are more expressive than context-free languages, while having decidability results that lie in between those of context-free and context-sensitive languages. In this work we study indexed grammars in order to formalize the relation between indexed languages and the other classes of languages in the Chomsky hierarchy. To this end, we provide a fixpoint characterization of the languages recognized by an indexed grammar and we study possible ways to abstract, in the abstract interpretation sense, these languages and their grammars into context-free and regular languages.
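As a side note on the boundary discussed above: although \(\{a^n b^n c^n\}\) lies beyond context-free power, deciding membership in it is trivial. A minimal sketch in Python (the function name is ours, purely illustrative):

```python
def in_anbncn(word: str) -> bool:
    """Decide membership in {a^n b^n c^n | n >= 0}.

    The language is generated by an indexed grammar (and accepted by a
    nested stack automaton) but by no context-free grammar; membership,
    however, is checkable with a single comparison against the canonical
    form for the word's length.
    """
    n = len(word) // 3
    return len(word) == 3 * n and word == "a" * n + "b" * n + "c" * n
```

This illustrates that the Chomsky hierarchy classifies the grammars needed to *generate* a language, not the difficulty of deciding membership.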

3 citations


Journal ArticleDOI
TL;DR: The authors argue that the ability to compute phrase structure grammars is indicative of a particular kind of thought, a type of thought that is only available to cognitive systems that have access to the computations that allow the generation and interpretation of the structural descriptions of phrase structure grammars.

3 citations


Book ChapterDOI
22 Jul 2019
TL;DR: A proper counter hierarchy depending on the alphabet size is proved, and the undecidability of the emptiness problem is derived for input-driven two-counter automata.
Abstract: The model of deterministic input-driven multi-counter automata is introduced and studied. On such devices, the input letters uniquely determine the operations on the underlying data structure, which consists of multiple counters. We study the computational power of the resulting language families and compare them with known language families inside the Chomsky hierarchy. In addition, it is possible to prove a proper counter hierarchy depending on the alphabet size. This means that any input alphabet induces an upper bound which depends on the alphabet size only, such that \(k+1\) counters are more powerful than k counters as long as k is less than this bound. Interestingly, the hierarchy collapses at the level of the bound. Furthermore, we investigate the closure properties of the language families. Finally, the undecidability of the emptiness problem is derived for input-driven two-counter automata.
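The defining restriction of the model, that each input letter fixes the operation applied to the counters, can be sketched as a small simulator. The operation table and acceptance condition below are illustrative assumptions of ours, not the paper's construction; the instance shown accepts \(\{a^n b^n c^n \mid n \ge 1\}\) with two counters:

```python
# Hypothetical sketch of a deterministic input-driven two-counter
# automaton: OPS maps each input letter to a fixed pair of counter
# updates, so the input alone determines the data-structure operations.
OPS = {
    "a": (+1, 0),   # reading 'a' increments counter 1
    "b": (-1, +1),  # reading 'b' moves one unit from counter 1 to counter 2
    "c": (0, -1),   # reading 'c' decrements counter 2
}

def accepts(word: str) -> bool:
    """Accept {a^n b^n c^n | n >= 1} with letter-determined counter ops."""
    c1 = c2 = 0
    last = ""
    for letter in word:
        if letter not in OPS or (last and letter < last):
            return False          # enforce the a* b* c* letter order
        last = letter
        d1, d2 = OPS[letter]
        c1, c2 = c1 + d1, c2 + d2
        if c1 < 0 or c2 < 0:
            return False          # counters must stay non-negative
    return c1 == 0 and c2 == 0 and word != ""
```

Because the counter updates depend only on the letter read, never on the current counter values, the device is input-driven in the sense described above.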

2 citations


Book ChapterDOI
17 Sep 2019
TL;DR: In this article, it was shown that the deterministic (context-free) language consisting of the words of n zeros followed by n ones cannot be recognized offline by any 1ANN with real weights.
Abstract: We refine the analysis of binary-state neural networks with \(\alpha \) extra analog neurons (\(\alpha \)ANNs). For rational weights, it has been known that online 1ANNs accept context-sensitive languages, including examples of non-context-free languages, while offline 3ANNs are Turing complete. We now prove that the deterministic (context-free) language consisting of the words of n zeros followed by n ones cannot be recognized offline by any 1ANN with real weights. Hence, the offline 1ANNs are not Turing complete. On the other hand, we show that any deterministic language can be accepted by a 2ANN with rational weights. Thus, two extra analog units can count to any number, which is not the case with a single analog neuron.

2 citations


Posted Content
TL;DR: This review brings together accounts of the principles of structure building in language, music, and animal song, relating them to the corresponding models in formal language theory, with a special focus on evaluating the benefits of using the Chomsky hierarchy (CH).
Abstract: Human language, music and a variety of animal vocalisations constitute ways of sonic communication that exhibit remarkable structural complexity. While the complexities of language and possible parallels in animal communication have been discussed intensively, reflections on the complexity of music and animal song, and their comparisons, are underrepresented. In some ways, music and animal songs are more comparable to each other than to language, as propositional semantics cannot be used as an indicator of communicative success or well-formedness, and notions of grammaticality are less easily defined. This review brings together accounts of the principles of structure building in language, music and animal song, relating them to the corresponding models in formal language theory, with a special focus on evaluating the benefits of using the Chomsky hierarchy (CH). We further discuss common misunderstandings and shortcomings concerning the CH, as well as extensions or augmentations of it that address some of these issues, and suggest ways to move beyond it.

1 citation


Journal ArticleDOI
TL;DR: This paper considers whether Optimality Theory grammars might be constrained to generate only regular languages, and also whether the tools of formal language theory might be used for constructing phonological theories similar to those within Optimality Theory.
Abstract: Much recent work has studied phonological typology in terms of Formal Language Theory (e.g. the Chomsky hierarchy). This paper considers whether Optimality Theory grammars might be constrained to generate only regular languages, and also whether the tools of formal language theory might be used for constructing phonological theories similar to those within Optimality Theory. It offers reasons to be optimistic about the first possibility, and sceptical about the second.

1 citation


Book ChapterDOI
27 Jan 2019
TL;DR: This work briefly considers one-stranded systems, showing that they describe a subregular language family, and explores the relations with one-way multi-head finite automata to establish an infinite, dense, and strict strand hierarchy.
Abstract: Classical string assembling systems form computational models that generate strings from copies out of a finite set of assembly units. The underlying mechanism is based on piecewise assembly of a double-stranded sequence of symbols, where the upper and lower strand have to match. The generative power of such systems is driven by the power of the matching of the two strands. Here we generalize this approach to multi-stranded systems. The generative capacities and the relative power are our main interest. In particular, we briefly consider one-stranded systems and show that they describe a subregular language family. Then we explore the relations with one-way multi-head finite automata and show an infinite, dense, and strict strand hierarchy. Moreover, we consider the relations with the language families of the Chomsky hierarchy and examine the unary variants of k-stranded string assembling systems.

1 citation


Journal ArticleDOI
TL;DR: For non-Abelian free groups with finitely many generators, the following is shown: the level of a language in the Chomsky hierarchy is independent of the automatic representation, and the only context-free verbal languages are the full group itself and the language consisting of the empty word.

Book ChapterDOI
03 Jun 2019
TL;DR: This talk summarizes work towards finding a particular boundary between different self-assembly classes and some trade-offs between different types of self-assembly instructions.
Abstract: In the theory of computation, the Chomsky hierarchy provides a way to characterize the complexity of different formal languages. For each class in the hierarchy, there is a specific type of automaton which recognizes all languages in the class. These different types of automata can be viewed as standard requirements for recognizing languages in these classes. In self-assembly, the main task is to generate patterns and shapes within certain resource limitations. Is it possible to separate these tasks into different classes? If so, can we find a standard set of self-assembly instructions capable of performing all tasks in each class? In this talk, I will summarize the work towards finding a particular boundary between different self-assembly classes and some trade-offs between different types of self-assembly instructions.