
Chomsky hierarchy

About: Chomsky hierarchy is a research topic. Over the lifetime, 601 publications have been published within this topic receiving 31067 citations. The topic is also known as: Chomsky–Schützenberger hierarchy.


Papers
Dissertation
01 Jan 1965

22 citations

01 Jan 2004
TL;DR: This chapter simplifies the analysis of implementations of the table ADT by treating each query or update of a list element or tree node, or comparison of two of them, as an elementary operation.
Abstract: Abstractly, a table is a mapping (function) from keys to values. Given a search key k, table search has to find the table entry (k, v) containing that key. The found entry may be retrieved, or removed (deleted) from the table, or its value v may be updated. If the table has no such entry, a new entry with key k may be created and inserted in the table. Operations on a table also initialize a table to the empty one or indicate that an entry with the given key is absent. Insertions and deletions modify the mapping of keys onto values specified by the table.

Example 3.2. Table 3.1 presents a very popular (at least in textbooks on algorithms and data structures) table having three-letter identifiers of airports as keys and associated data, such as airport locations, as values. Each identifier has a unique integer representation k = 26^2·c0 + 26·c1 + c2, where the ci, i = 0, 1, 2, are ordinal numbers of letters in the English alphabet (A corresponds to 0, B to 1, ..., Z to 25). For example, AKL corresponds to 26^2·0 + 26·10 + 11 = 271. In total, there are 26^3 = 17576 possible different keys and entries.

Table 3.1: A map between airport codes and locations (key: code and its integer k; associated value v: city, country, state/place).

Code  k      City         Country      State / Place
AKL   271    Auckland     New Zealand
DCA   2080   Washington   USA          District of Columbia (D.C.)
FRA   3822   Frankfurt    Germany      Rheinland-Pfalz
GLA   4342   Glasgow      UK           Scotland
HKG   4998   Hong Kong    China
LAX   7459   Los Angeles  USA          California
SDF   12251  Louisville   USA          Kentucky
ORY   9930   Paris        France

As can be seen from this example, we may map the keys to integers. We deal with both static (where the database is fixed in advance and no insertions, deletions or updates are done) and dynamic (where insertions, deletions or updates are allowed) implementations of the table ADT. In all our implementations of the table ADT, we may simplify the analysis as follows. We use lists and trees as our basic containers.
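The base-26 key encoding described in Example 3.2 can be sketched in a few lines (a minimal illustration; the function name `airport_key` is ours, not the text's):

```python
def airport_key(code: str) -> int:
    # k = 26^2*c0 + 26*c1 + c2, where A maps to 0, B to 1, ..., Z to 25.
    c0, c1, c2 = (ord(ch) - ord('A') for ch in code.upper())
    return 26 * 26 * c0 + 26 * c1 + c2

print(airport_key("AKL"))  # 271, as in the example
print(airport_key("SDF"))  # 12251
```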
We treat each query or update of a list element or tree node, or a comparison of two of them, as an elementary operation. The following lemma summarizes some obvious relationships.

Lemma 3.3. Suppose that a table is built up from empty by successive insertions, and we then search for a key k uniformly at random. Let Tss(k) (respectively Tus(k)) be the time to perform a successful (respectively unsuccessful) search for k. Then
• the time taken to retrieve, delete, or update an element with key k is at least Tss(k);
• the time taken to insert an element with key k is at least Tus(k);
• Tss(k) ≤ Tus(k).
In addition,
• the worst-case value of Tss(k) equals the worst-case value of Tus(k);
• the average value of Tss(k) equals 1 plus the average of the times for the unsuccessful searches undertaken while building the table.

Proof. To insert a new element, we first try to find where it would be if it were contained in the data structure, and then perform a single insert operation into the container. To delete an element, we first find it, and then perform a delete operation on the container. Analogous statements hold for updating and retrieval. Thus, for a given state of the table formed by insertions from an empty table, the time for successful search for a given element is the time that it took for unsuccessful search for that element, as we built the table, plus 1. This means that the time for unsuccessful search is always at least the time for successful search for a given element (the same in the worst case), and the average time for successful search for an element in a table is 1 more than the average of all the times for unsuccessful searches.

If the data structure used to implement a table arranges the records in a list, the efficiency of searching depends on whether the list is sorted. In the case of a telephone book, we quickly find the desired phone number (data record) by name (key).
But it is almost hopeless to search directly for a phone number unless we have a special reverse directory where the phone number serves as a key. We discuss unsorted lists in the Exercises below, and sorted lists in the next section.

Exercises

Exercise 3.1.1. The sequential search algorithm simply starts at the head of a list and examines elements in order until it finds the desired key or reaches the end of the list. An array-based version is shown in Figure 3.1.

algorithm sequentialSearch
  Input: array a[0..n−1]; key k
begin
  for i ← 0 while i < n step i ← i + 1 do
    if a[i] = k then return i
  end for
  return not found
end

Figure 3.1: A sequential search algorithm.

Show that both successful and unsuccessful sequential search in a list of size n have worst-case and average-case time complexity Θ(n).

Exercise 3.1.2. Show that sequential search is slightly more efficient for sorted lists than for unsorted ones. What is the time complexity of successful and unsuccessful search?

3.2 Sorted lists and binary search

A sorted list implementation allows for a much better search method that uses the divide-and-conquer paradigm. The basic idea of binary search is simple. Let k be the desired key for which we want to search.
• If the list is empty, return "not found". Otherwise:
• Choose the key m of the middle element of the list. If m = k, return its record; if m > k, make a recursive call on the head sublist; if m < k, make a recursive call on the tail sublist.

Example 3.4. Figure 3.2 illustrates binary search for the key k = 42 in a list of size 16. At the first iteration, the search key 42 is compared to the key a[7] = 53 in the middle position m = (0 + 15)/2 = 7.

Figure 3.2 (the sorted array being searched):
index: 0   1   2   3   4   5   6   7   8   9   10  11  12  13  14  15
a[i]:  7   14  27  33  42  49  51  53  67  70  77  81  89  94  95  99
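The binary search procedure just described can be sketched as follows, iteratively rather than recursively, using the example array from Figure 3.2 (a minimal sketch; the names are ours):

```python
def binary_search(a, k):
    # Search the sorted list a for key k; return its index, or None if absent.
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        m = (lo + hi) // 2      # middle position
        if a[m] == k:
            return m
        elif a[m] > k:
            hi = m - 1          # continue in the head sublist
        else:
            lo = m + 1          # continue in the tail sublist
    return None                 # empty range: not found

a = [7, 14, 27, 33, 42, 49, 51, 53, 67, 70, 77, 81, 89, 94, 95, 99]
print(binary_search(a, 42))  # 4
```

As in Example 3.4, the first probe compares 42 with a[7] = 53 and discards the tail half of the array.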

22 citations

Book
01 Jan 2007
TL;DR: The Basic Design of Language: Levels of Representation and Interaction with Interfaces and A Minimalist Program for Linguistic Theory, by Noam Chomsky and Howard Lasnik, are reviewed.
Abstract: Acknowledgments. Introduction. 1. The Basic Design of Language: Levels of Representation and Interaction with Interfaces. 1.1. General Background. Minimalist Inquiries: The Framework (Noam Chomsky). Derivation by Phase (Noam Chomsky). 1.2. Levels of Representation. D-Structure, Theta-Criterion and Movement into Theta-Positions (Željko Bošković). A Minimalist Program for Linguistic Theory (Noam Chomsky). 1.3. Recent Developments: Multiple Spell-Out. A Derivational Approach to Syntactic Relations (Samuel D. Epstein, Erich M. Groat, Ruriko Kawashima, and Hisatsugu Kitahara). Minimalist Inquiries: The Framework (Noam Chomsky). Beyond Explanatory Adequacy (Noam Chomsky). 2. Eliminating Government. 2.1 Case. On the Subject of Infinitives (Howard Lasnik, with Mamoru Saito). A Minimalist Program for Linguistic Theory (Noam Chomsky). 2.1.1 Recent Developments. Minimalist Inquiries: The Framework (Noam Chomsky). 2.2 PRO. 2.2.1 Null Case. The Syntax of Nonfinite Complementation: An Economy Approach (Željko Bošković). 2.2.2 Eliminating PRO: Movement into θ-Positions. Movement and Control (Norbert Hornstein). 2.3 Locality. The Theory of Principles and Parameters (Noam Chomsky). Economy of Derivation and the Generalized Proper Binding Condition (Chris Collins). Elementary Operations and Optimal Derivations (Hisatsugu Kitahara). A Minimalist Program for Linguistic Theory (Noam Chomsky). Categories and Transformations (Noam Chomsky). Local Economy (Chris Collins). Move or Attract? (Masao Ochi). A-movement and the EPP (Željko Bošković). 2.3.1 Recent Developments: Phases. Minimalist Inquiries: The Framework (Noam Chomsky). Derivation by Phase (Noam Chomsky). Successive Cyclicity, Anti-locality, and Adposition Stranding (Klaus Abels). 3. Structure Building and Lexical Insertion. 3.1 Bare Phrase Structure. Categories and Transformations (Noam Chomsky). Beyond Explanatory Adequacy (Noam Chomsky). 3.2 Numeration and the Merge-over-Move Preference.
Minimalist Inquiries: The Framework (Noam Chomsky). 3.3 Cycle. Movement in Language: Interactions and Architectures (Norvin Richards). Minimalist Inquiries: The Framework (Noam Chomsky). 3.4 Covert Lexical Insertion. LF Movement and the Minimalist Program (Željko Bošković). 3.5 Eliminating Agr. Categories and Transformations (Noam Chomsky). 4. Verbal Morphology. 4.1 Head Movement and/or Affix Hopping? Verbal Morphology (Howard Lasnik). 4.2 Head Movement as a PF Phenomenon. Derivation by Phase (Noam Chomsky). Head-ing Toward PF (Cedric Boeckx and Sandra Stjepanović). 5. LCA/C-Command Related Issues. The Antisymmetry of Syntax (Richard S. Kayne). Categories and Transformations (Noam Chomsky). Un-principled Syntax: The Derivation of Syntactic Relations (Samuel D. Epstein). Multiple Spell-Out (Juan Uriagereka). Cyclicity and Extraction Domains (Jairo Nunes and Juan Uriagereka). 6. Copy Theory of Movement. Linearization of Chains and Sideward Movement (Jairo Nunes). Morphosyntax: The Syntax of Verbal Inflection (Jonathan Bobaljik). 7. Existential Constructions. A Minimalist Program for Linguistic Theory (Noam Chomsky). Categories and Transformations (Noam Chomsky). Last Resort (Howard Lasnik). 7.1 Recent Developments. Derivation by Phase (Noam Chomsky). Minimalist Inquiries: The Framework (Noam Chomsky). Beyond Explanatory Adequacy (Noam Chomsky). 8. Syntax/Semantics Interface. Economy and Scope (Danny Fox). Reconstruction, Binding Theory, and the Interpretation of Chains (Danny Fox). Minimalism and Quantifier Raising (Norbert Hornstein). Index.

22 citations

Proceedings ArticleDOI
10 Jul 1986
TL;DR: The absence of mirror-image constructions in human languages means that it is not enough to extend Context-free Grammars in the direction of context-sensitivity, and a class of grammars must be found which handles (context-sensitive) copying but not ( context-free) mirror images, suggesting that human linguistic processes use queues rather than stacks.
Abstract: The documentation of (unbounded-length) copying and cross-serial constructions in a few languages in the recent literature is usually taken to mean that natural languages are slightly context-sensitive. However, this ignores those copying constructions which, while productive, cannot be easily shown to apply to infinite sublanguages. To allow such finite copying constructions to be taken into account in formal modeling, it is necessary to recognize that natural languages cannot be realistically represented by formal languages of the usual sort. Rather, they must be modeled as families of formal languages or as formal languages with indefinite vocabularies. Once this is done, we see copying as a truly pervasive and fundamental process in human language. Furthermore, the absence of mirror-image constructions in human languages means that it is not enough to extend Context-free Grammars in the direction of context-sensitivity. Instead, a class of grammars must be found which handles (context-sensitive) copying but not (context-free) mirror images. This suggests that human linguistic processes use queues rather than stacks, making imperative the development of a hierarchy of Queue Grammars as a counterweight to the Chomsky Grammars. A simple class of Context-free Queue Grammars is introduced and discussed.
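The queue-versus-stack contrast the abstract draws can be made concrete with a small sketch (our own illustration, not from the paper): a queue matches the second half of a string against the first half in the same order, accepting the copy language {ww}, while a stack matches it in reverse order, accepting the mirror-image language {ww^R}.

```python
from collections import deque

def matches_copy(s: str) -> bool:
    # Queue (FIFO): second half must repeat the first half in the SAME
    # order -- the copy language { ww }.
    if len(s) % 2:
        return False
    q = deque(s[: len(s) // 2])
    return all(q.popleft() == ch for ch in s[len(s) // 2 :])

def matches_mirror(s: str) -> bool:
    # Stack (LIFO): second half must repeat the first half in REVERSE
    # order -- the mirror-image language { w w^R }.
    if len(s) % 2:
        return False
    stack = list(s[: len(s) // 2])
    return all(stack.pop() == ch for ch in s[len(s) // 2 :])

print(matches_copy("abab"))    # True: "ab" then "ab"
print(matches_mirror("abba"))  # True: "ab" then "ba"
```

This is only the string-matching half of the story; the paper's point is about grammar classes, for which it introduces Context-free Queue Grammars.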

22 citations

Journal ArticleDOI
TL;DR: This paper deals with closure properties of "slender" languages with respect to a number of operations, some of them introduced very recently.

22 citations


Network Information
Related Topics (5)
Rule-based machine translation
8.8K papers, 240.5K citations
72% related
Syntax
16.7K papers, 518.6K citations
71% related
Time complexity
36K papers, 879.5K citations
71% related
Type (model theory)
38.9K papers, 670.5K citations
70% related
Semantics
24.9K papers, 653K citations
70% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  2
2022  3
2021  9
2020  8
2019  12
2018  10