
Chomsky hierarchy

About: Chomsky hierarchy is a research topic. Over the lifetime, 601 publications have been published within this topic receiving 31067 citations. The topic is also known as: Chomsky–Schützenberger hierarchy.
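The hierarchy's four levels come up repeatedly in the papers below; as a quick reference, here is a compact summary in code (standard textbook material, not drawn from any of the listed papers):

```python
# A compact summary of the four levels of the Chomsky hierarchy, from most
# restrictive to most expressive. Standard textbook material; the example
# languages are the usual canonical ones.
chomsky_hierarchy = [
    # (type, grammar class, recognizing automaton, example language)
    (3, "regular",                "finite automaton",         "a*b*"),
    (2, "context-free",           "pushdown automaton",       "a^n b^n"),
    (1, "context-sensitive",      "linear-bounded automaton", "a^n b^n c^n"),
    (0, "recursively enumerable", "Turing machine",           "any Turing-recognizable set"),
]

for t, grammar, automaton, example in chomsky_hierarchy:
    print(f"Type {t}: {grammar} grammars, recognized by a {automaton} (e.g. {example})")
```

Each level strictly contains the ones below it, which is why debates over where natural language sits (context-free or above) recur throughout this topic.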


Papers
Journal ArticleDOI
TL;DR: In searching for universal constraints on the class of natural languages, linguists have investigated a number of formal properties, including context-freeness; soon after Chomsky's hierarchy was introduced, the context-free class was commonly held to be too restrictive for natural language, interpreted both strongly (as a way of characterizing structure sets) and weakly (as a way of characterizing string sets).
Abstract: In searching for universal constraints on the class of natural languages, linguists have investigated a number of formal properties, including that of context-freeness. Soon after Chomsky’s categorization of languages into his well-known hierarchy (Chomsky, 1963), the common conception of the context-free class of languages as a tool for describing natural languages was that it was too restrictive a class — interpreted strongly (as a way of characterizing structure sets) and even weakly (as a way of characterizing string sets).

638 citations

Journal ArticleDOI
27 Apr 2006-Nature
TL;DR: It is shown that European starlings (Sturnus vulgaris) accurately recognize acoustic patterns defined by a recursive, self-embedding, context-free grammar, and this finding opens a new range of complex syntactic processing mechanisms to physiological investigation.
Abstract: Noam Chomsky's work on ‘generative grammar’ led to the concept of a set of rules that can generate a natural language with a hierarchical grammar, and the idea that this represents a uniquely human ability. In a series of experiments with European starlings, in which several types of ‘warble’ and ‘rattle’ took the place of words in a human language, the birds learnt to classify phrase structure grammars in a way that met the same criteria. Their performance can be said to be almost human on this yardstick. So if there are language processing capabilities that are uniquely human, they may be more context-free or at a higher level in the Chomsky hierarchy. Or perhaps there is no single property or processing capacity that differentiates human language from non-human communication systems.

Humans regularly produce new utterances that are understood by other members of the same language community [1]. Linguistic theories account for this ability through the use of syntactic rules (or generative grammars) that describe the acceptable structure of utterances [2]. The recursive, hierarchical embedding of language units (for example, words or phrases within larger sentences) that is part of the ability to construct new utterances minimally requires a ‘context-free’ grammar [2,3] that is more complex than the ‘finite-state’ grammars thought sufficient to specify the structure of all non-human communication signals. Recent hypotheses make the central claim that the capacity for syntactic recursion forms the computational core of a uniquely human language faculty [4,5]. Here we show that European starlings (Sturnus vulgaris) accurately recognize acoustic patterns defined by a recursive, self-embedding, context-free grammar. They are also able to classify new patterns defined by the grammar and reliably exclude agrammatical patterns. Thus, the capacity to classify sequences from recursive, centre-embedded grammars is not uniquely human. This finding opens a new range of complex syntactic processing mechanisms to physiological investigation.

510 citations
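The contrast between the two grammar classes in this study can be sketched in a few lines. This is a hypothetical illustration, not the authors' code, using 'A' and 'B' as stand-ins for the two motif types:

```python
import re

# Hypothetical illustration (not the study's materials): the two pattern
# classes compared in the starling experiments, over motif symbols 'A' and 'B'.

def finite_state(seq: str) -> bool:
    """(AB)^n, n >= 1: recognizable by a finite-state machine (a regex suffices)."""
    return re.fullmatch(r"(AB)+", seq) is not None

def context_free(seq: str) -> bool:
    """A^n B^n, n >= 1: centre-embedded, requires counting beyond finite state."""
    n = len(seq) // 2
    return n >= 1 and seq == "A" * n + "B" * n

print(finite_state("ABABAB"), context_free("ABABAB"))  # True False
print(finite_state("AAABBB"), context_free("AAABBB"))  # False True
```

The point of the experiment is that discriminating the second class from the first demands more than the finite-state machinery long assumed sufficient for non-human signals.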

Proceedings ArticleDOI
01 Jan 2004
TL;DR: PEGs address frequently felt expressiveness limitations of CFGs and REs, simplifying syntax definitions and making it unnecessary to separate their lexical and hierarchical components; although PEGs provide a rich set of operators, they are reducible to two minimal recognition schemas, TS/TDPL and gTS/GTDPL, which are here proven equivalent in effective recognition power.
Abstract: For decades we have been using Chomsky's generative system of grammars, particularly context-free grammars (CFGs) and regular expressions (REs), to express the syntax of programming languages and protocols. The power of generative grammars to express ambiguity is crucial to their original purpose of modelling natural languages, but this very power makes it unnecessarily difficult both to express and to parse machine-oriented languages using CFGs. Parsing Expression Grammars (PEGs) provide an alternative, recognition-based formal foundation for describing machine-oriented syntax, which solves the ambiguity problem by not introducing ambiguity in the first place. Where CFGs express nondeterministic choice between alternatives, PEGs instead use prioritized choice. PEGs address frequently felt expressiveness limitations of CFGs and REs, simplifying syntax definitions and making it unnecessary to separate their lexical and hierarchical components. A linear-time parser can be built for any PEG, avoiding both the complexity and fickleness of LR parsers and the inefficiency of generalized CFG parsing. While PEGs provide a rich set of operators for constructing grammars, they are reducible to two minimal recognition schemas developed around 1970, TS/TDPL and gTS/GTDPL, which are here proven equivalent in effective recognition power.

467 citations
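The prioritized-choice idea at the heart of PEGs can be sketched with toy parser combinators. This is an illustrative sketch in my own notation, not code from the paper:

```python
# Toy sketch of PEG-style prioritized choice. A parser takes (text, pos) and
# returns the new position on success or None on failure; 'choice' commits to
# the FIRST alternative that succeeds, which is what removes ambiguity.

def lit(s):
    def parse(text, pos):
        end = pos + len(s)
        return end if text[pos:end] == s else None
    return parse

def choice(*alts):
    def parse(text, pos):
        for alt in alts:          # ordered: earlier alternatives take priority
            result = alt(text, pos)
            if result is not None:
                return result
        return None
    return parse

def seq(*parts):
    def parse(text, pos):
        for part in parts:
            pos = part(text, pos)
            if pos is None:
                return None
        return pos
    return parse

# Where a CFG would treat these alternatives nondeterministically, a PEG
# simply tries the longer keyword first and commits.
keyword = choice(lit("ifx"), lit("if"))
print(keyword("ifx", 0))  # 3: the first (longer) alternative wins
print(keyword("if", 0))   # 2: falls through to the second alternative
```

The ordering of alternatives is part of the grammar's meaning here, which is exactly the design choice that makes PEGs unambiguous by construction.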

Book
01 Aug 1996
TL;DR: In this paper, the notions of union, intersection, concatenation, Kleene closure and grammar for fuzzy languages are defined as extensions of the corresponding notions in the theory of formal languages.
Abstract: A fuzzy language is defined to be a fuzzy subset of the set of strings over a finite alphabet. The notions of union, intersection, concatenation, Kleene closure and grammar for such languages are defined as extensions of the corresponding notions in the theory of formal languages. An explicit expression for the membership function of the language L(G) generated by a fuzzy grammar G is given and it is shown that any context-sensitive fuzzy grammar is recursive. For fuzzy context-free grammars, procedures for constructing the Chomsky and Greibach normal forms are outlined and illustrated by examples.

324 citations
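The set operations the abstract mentions follow the standard Zadeh-style definitions, with membership grades combined by max and min. A minimal sketch, with representation and names of my own choosing (a dict from strings to grades in [0, 1]):

```python
# Minimal sketch of fuzzy-language operations (standard max/min definitions;
# the dict representation and function names are my own, not from the book).

def union(L1, L2):
    """Union: the grade of s is the max of its grades in L1 and L2."""
    return {s: max(L1.get(s, 0.0), L2.get(s, 0.0)) for s in set(L1) | set(L2)}

def intersection(L1, L2):
    """Intersection: the grade of s is the min of its grades in L1 and L2."""
    return {s: min(L1.get(s, 0.0), L2.get(s, 0.0)) for s in set(L1) | set(L2)}

def concatenation(L1, L2):
    """Concatenation: the grade of s is the sup over splits s = uv of min(L1(u), L2(v))."""
    result = {}
    for u, g1 in L1.items():
        for v, g2 in L2.items():
            result[u + v] = max(result.get(u + v, 0.0), min(g1, g2))
    return result

L1 = {"a": 0.9, "ab": 0.4}
L2 = {"b": 0.7}
print(union(L1, {"a": 0.3})["a"])    # 0.9
print(concatenation(L1, L2)["ab"])   # 0.7 = min(0.9, 0.7)
print(concatenation(L1, L2)["abb"])  # 0.4 = min(0.4, 0.7)
```

Setting every grade to 0 or 1 recovers the ordinary crisp operations, which is the sense in which these are extensions of classical formal-language theory.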


Network Information
Related Topics (5)
- Rule-based machine translation: 8.8K papers, 240.5K citations (72% related)
- Syntax: 16.7K papers, 518.6K citations (71% related)
- Time complexity: 36K papers, 879.5K citations (71% related)
- Type (model theory): 38.9K papers, 670.5K citations (70% related)
- Semantics: 24.9K papers, 653K citations (70% related)
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  2
2022  3
2021  9
2020  8
2019  12
2018  10