
Showing papers on "Context-sensitive grammar published in 2006"


Journal ArticleDOI
27 Apr 2006-Nature
TL;DR: It is shown that European starlings (Sturnus vulgaris) accurately recognize acoustic patterns defined by a recursive, self-embedding, context-free grammar, and this finding opens a new range of complex syntactic processing mechanisms to physiological investigation.
Abstract: Noam Chomsky's work on ‘generative grammar’ led to the concept of a set of rules that can generate a natural language with a hierarchical grammar, and the idea that this represents a uniquely human ability. In a series of experiments with European starlings, in which several types of ‘warble’ and ‘rattle’ took the place of words in a human language, the birds learnt to classify sequences from phrase-structure grammars in a way that met the same criteria. Their performance can be said to be almost human by this yardstick. So if there are language processing capabilities that are uniquely human, they may require grammars more complex than context-free, at a higher level in the Chomsky hierarchy. Or perhaps there is no single property or processing capacity that differentiates human language from non-human communication systems. Humans regularly produce new utterances that are understood by other members of the same language community [1]. Linguistic theories account for this ability through the use of syntactic rules (or generative grammars) that describe the acceptable structure of utterances [2]. The recursive, hierarchical embedding of language units (for example, words or phrases within shorter sentences) that is part of the ability to construct new utterances minimally requires a ‘context-free’ grammar [2,3] that is more complex than the ‘finite-state’ grammars thought sufficient to specify the structure of all non-human communication signals. Recent hypotheses make the central claim that the capacity for syntactic recursion forms the computational core of a uniquely human language faculty [4,5]. Here we show that European starlings (Sturnus vulgaris) accurately recognize acoustic patterns defined by a recursive, self-embedding, context-free grammar. They are also able to classify new patterns defined by the grammar and reliably exclude agrammatical patterns. Thus, the capacity to classify sequences from recursive, centre-embedded grammars is not uniquely human.
This finding opens a new range of complex syntactic processing mechanisms to physiological investigation.
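The contrast between the two pattern classes used in the study can be sketched in a few lines. The token classes A and B below are illustrative stand-ins for the motif classes from the experiments, not the authors' materials:

```python
# Sketch of the two pattern classes: (AB)^n is recognizable by a
# finite-state grammar, while A^n B^n requires a context-free one.

def is_finite_state(seq):
    """(AB)^n, n >= 1: the finite-state pattern."""
    return len(seq) >= 2 and len(seq) % 2 == 0 and \
        all(tok == 'AB'[i % 2] for i, tok in enumerate(seq))

def is_context_free(seq):
    """A^n B^n, n >= 1: the centre-embedded, context-free pattern."""
    n = len(seq) // 2
    return n >= 1 and len(seq) == 2 * n and \
        seq[:n] == ['A'] * n and seq[n:] == ['B'] * n
```

Classifying a sequence as grammatical under A^n B^n but not under (AB)^n is exactly the discrimination the starlings were trained on.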

510 citations


Proceedings ArticleDOI
08 Jun 2006
TL;DR: This work presents a new model of the translation process: quasi-synchronous grammar (QG), and evaluates the cross-entropy of QGs on unseen text and shows that a better fit to bilingual data is achieved by allowing greater syntactic divergence.
Abstract: Many syntactic models in machine translation are channels that transform one tree into another, or synchronous grammars that generate trees in parallel. We present a new model of the translation process: quasi-synchronous grammar (QG). Given a source-language parse tree T1, a QG defines a monolingual grammar that generates translations of T1. The trees T2 allowed by this monolingual grammar are inspired by pieces of substructure in T1 and aligned to T1 at those points. We describe experiments learning quasi-synchronous context-free grammars from bitext. As with other monolingual language models, we evaluate the cross-entropy of QGs on unseen text and show that a better fit to bilingual data is achieved by allowing greater syntactic divergence. When evaluated on a word alignment task, QG matches standard baselines.

112 citations


Book ChapterDOI
25 Sep 2006
TL;DR: An arc-consistency algorithm for context-free grammars, an investigation of when logic combinations of grammar constraints are tractable, and a study of where the boundaries run between regular, context-free, and context-sensitive grammar filtering.
Abstract: By introducing the Regular Membership Constraint, Gilles Pesant pioneered the idea of basing constraints on formal languages. The paper presented here is highly motivated by this work, taking the obvious next step, namely to investigate constraints based on grammars higher up in the Chomsky hierarchy. We devise an arc-consistency algorithm for context-free grammars, investigate when logic combinations of grammar constraints are tractable, show how to exploit non-constant size grammars and reorderings of languages, and study where the boundaries run between regular, context-free, and context-sensitive grammar filtering.
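The CYK-style idea behind a context-free grammar constraint can be sketched as a propagator over per-position domains. This is a simplified illustration assuming a grammar in Chomsky normal form, not the paper's exact algorithm:

```python
def cfg_propagate(n, start, unary, binary, domains):
    """
    Arc-consistency-style filtering for a grammar constraint: prune each
    variable's domain to the terminals occurring in some word of the
    language consistent with all current domains.
    unary:   dict nonterminal -> set of terminals   (A -> a rules)
    binary:  set of triples (A, B, C)               (A -> B C rules)
    domains: list of n sets of terminals
    """
    # Bottom-up CYK pass over domains: table[i][l] holds the nonterminals
    # deriving some admissible word for the span of length l starting at i.
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i in range(n):
        for A, terms in unary.items():
            if terms & domains[i]:
                table[i][1].add(A)
    for l in range(2, n + 1):
        for i in range(n - l + 1):
            for k in range(1, l):
                for (A, B, C) in binary:
                    if B in table[i][k] and C in table[i + k][l - k]:
                        table[i][l].add(A)
    if start not in table[0][n]:
        return None  # constraint unsatisfiable
    # Top-down pass: keep only entries reachable from the start symbol.
    reach = [[set() for _ in range(n + 1)] for _ in range(n)]
    reach[0][n].add(start)
    for l in range(n, 1, -1):
        for i in range(n - l + 1):
            for k in range(1, l):
                for (A, B, C) in binary:
                    if A in reach[i][l] and B in table[i][k] \
                            and C in table[i + k][l - k]:
                        reach[i][k].add(B)
                        reach[i + k][l - k].add(C)
    # A value survives if some reachable nonterminal produces it.
    return [
        {a for a in domains[i]
         if any(A in reach[i][1] and a in unary[A] for A in unary)}
        for i in range(n)
    ]
```

For instance, with the CNF grammar for a^n b^n (S -> A B | A T, T -> S B, A -> a, B -> b) and four unrestricted positions, the propagator fixes the first two positions to a and the last two to b.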

51 citations


Journal ArticleDOI
TL;DR: It is shown that nearly all of these methods for modeling RNA and protein structure are based on the same core principles and can be converted into equivalent approaches in the framework of tree-adjoining grammars and related formalisms.
Abstract: Since the first application of context-free grammars to RNA secondary structures in 1988, many researchers have used both ad hoc and formal methods from computational linguistics to model RNA and protein structure. We show how nearly all of these methods are based on the same core principles and can be converted into equivalent approaches in the framework of tree-adjoining grammars and related formalisms. We also propose some new approaches that extend these core principles in novel ways.
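As an illustration of the core idea, here is a membership test for a toy context-free grammar of fully base-paired, nested RNA structures. This is a deliberately simplified, Nussinov-style sketch, not one of the paper's formalisms:

```python
from functools import lru_cache

# Watson-Crick complementary base pairs.
PAIRS = {('a', 'u'), ('u', 'a'), ('c', 'g'), ('g', 'c')}

def fully_paired(rna):
    """
    Membership test for the toy context-free grammar
        S -> x S y S | epsilon   (for every complementary pair x..y),
    i.e. RNA strings admitting a fully base-paired, nested secondary
    structure. A standard dynamic program over subspans.
    """
    @lru_cache(maxsize=None)
    def derivable(i, j):          # can rna[i:j] be derived from S?
        if i == j:
            return True           # S -> epsilon
        return any(
            (rna[i], rna[k]) in PAIRS
            and derivable(i + 1, k) and derivable(k + 1, j)
            for k in range(i + 1, j)
        )
    return derivable(0, len(rna))
```

Here "gauc" is accepted (g pairs with c on the outside, a with u inside), while "gau" has no fully nested pairing.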

33 citations


Journal Article
TL;DR: Adaptive star grammars as mentioned in this paper are an extension of node and hyperedge replacement grammars, and they have been shown to be capable of generating every type-0 string language.
Abstract: We propose an extension of node and hyperedge replacement grammars, called adaptive star grammars, and study their basic properties. A rule in an adaptive star grammar is actually a rule schema which, via the so-called cloning operation, yields a potentially infinite number of concrete rules. Adaptive star grammars are motivated by application areas such as modeling and refactoring object-oriented programs. We prove that cloning can be applied lazily. Unrestricted adaptive star grammars are shown to be capable of generating every type-0 string language. However, we identify a reasonably large subclass for which the membership problem is decidable.

31 citations


Book ChapterDOI
17 Sep 2006
TL;DR: It is proved that cloning can be applied lazily, and a reasonably large subclass for which the membership problem is decidable is identified.
Abstract: We propose an extension of node and hyperedge replacement grammars, called adaptive star grammars, and study their basic properties. A rule in an adaptive star grammar is actually a rule schema which, via the so-called cloning operation, yields a potentially infinite number of concrete rules. Adaptive star grammars are motivated by application areas such as modeling and refactoring object-oriented programs. We prove that cloning can be applied lazily. Unrestricted adaptive star grammars are shown to be capable of generating every type-0 string language. However, we identify a reasonably large subclass for which the membership problem is decidable.

31 citations


Journal ArticleDOI
TL;DR: The generalized LR parsing algorithm for context-free grammars is extended for the case of Boolean grammars, which are a generalization of the context-free grammars with logical connectives added to the formalism of rules.
Abstract: The generalized LR parsing algorithm for context-free grammars is extended for the case of Boolean grammars, which are a generalization of the context-free grammars with logical connectives added to the formalism of rules. In addition to the standard LR operations, Shift and Reduce, the new algorithm uses a third operation called Invalidate, which reverses a previously made reduction. This operation makes the mathematical justification of the algorithm significantly different from its prototype. On the other hand, the changes in the implementation are not very substantial, and the algorithm still works in time O(n4).

27 citations


Proceedings ArticleDOI
13 Nov 2006
TL;DR: An induction method is given to infer node replacement graph grammars from various structural representations and the correctness of an inferred grammar is verified by parsing graphs not present in the training set.
Abstract: Computer programs that can be expressed in two or more dimensions are typically called visual programs. The underlying theories of visual programming languages involve graph grammars. As graph grammars are usually constructed manually, construction can be a time-consuming process that demands technical knowledge. Therefore, a technique for automatically constructing graph grammars - at least in part - is desirable. An induction method is given to infer node replacement graph grammars. The method operates on labeled graphs of broad applicability. It is evaluated by its performance on inferring graph grammars from various structural representations. The correctness of an inferred grammar is verified by parsing graphs not present in the training set.

26 citations


Book ChapterDOI
20 Sep 2006
TL;DR: This work presents the first polynomial-time algorithm for inferring Simple External Context Grammars, a class of mildly context-sensitive grammars, from positive examples.
Abstract: Natural languages contain regular, context-free, and context-sensitive syntactic constructions, yet none of these classes of formal languages can be identified in the limit from positive examples. Mildly context-sensitive languages are able to represent some context-sensitive constructions, those most common in natural languages, such as multiple agreement, crossed agreement, and duplication. These languages are attractive for natural language applications due to their expressiveness, and the fact that they are not fully context-sensitive should lead to computational advantages as well. We realize one such computational advantage by presenting the first polynomial-time algorithm for inferring Simple External Context Grammars, a class of mildly context-sensitive grammars, from positive examples.

23 citations


Proceedings ArticleDOI
17 Jul 2006
TL;DR: This paper proposes a generic mathematical formalism for the combination of various structures: strings, trees, dags, graphs and products of them that is both elementary and powerful enough to strongly simulate many grammar formalisms.
Abstract: This paper proposes a generic mathematical formalism for the combination of various structures: strings, trees, dags, graphs and products of them. The polarization of the objects of the elementary structures controls the saturation of the final structure. This formalism is both elementary and powerful enough to strongly simulate many grammar formalisms, such as rewriting systems, dependency grammars, TAG, HPSG and LFG.

22 citations


Journal ArticleDOI
TL;DR: Two results extending classical language properties into 2D are proved: non-recursive tile rewriting grammars (TRG) coincide with tiling systems (TS), and non-self-embedding TRG, suitably defined as corner grammars, generate TS languages.

Book ChapterDOI
26 Jun 2006
TL;DR: A novel semantics for boolean grammars [A. Okhotin, 2004] is proposed, which applies to all such grammars, independently of their syntax, based on the well-founded approach to negation from logic programming.
Abstract: Boolean grammars [A. Okhotin, Information and Computation 194 (2004) 19-48] are a promising extension of context-free grammars that supports conjunction and negation. In this paper we give a novel semantics for boolean grammars which applies to all such grammars, independently of their syntax. The key idea of our proposal comes from the area of negation in logic programming, and in particular from the so-called well-founded semantics which is widely accepted in this area to be the “correct” approach to negation. We show that for every boolean grammar there exists a distinguished (three-valued) language which is a model of the grammar and at the same time the least fixed point of an operator associated with the grammar. Every boolean grammar can be transformed into an equivalent (under the new semantics) grammar in normal form. Based on this normal form, we propose an O(n³) algorithm for parsing that applies to any such normalized boolean grammar. In summary, the main contribution of this paper is to provide a semantics which applies to all boolean grammars while at the same time retaining the complexity of parsing associated with this type of grammars.
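For reference, the classical O(n³) algorithm whose complexity the proposed parser retains is CYK for grammars in binary normal form. A minimal sketch of the plain context-free case follows (the Boolean extension with conjunction and negation is not shown):

```python
def cyk(word, start, unary, binary):
    """
    Classic O(n^3) CYK recognition for a context-free grammar with
    A -> a and A -> B C rules only.
    unary:  dict terminal -> set of nonterminals
    binary: dict (B, C)   -> set of nonterminals
    """
    n = len(word)
    if n == 0:
        return False
    # table[i][l] = nonterminals deriving word[i:i+l]
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, a in enumerate(word):
        table[i][1] = set(unary.get(a, ()))
    for l in range(2, n + 1):          # span length
        for i in range(n - l + 1):     # span start
            for k in range(1, l):      # split point
                for B in table[i][k]:
                    for C in table[i + k][l - k]:
                        table[i][l] |= binary.get((B, C), set())
    return start in table[0][n]
```

With the grammar S -> A B | A T, T -> S B, A -> a, B -> b (the language a^n b^n), "aabb" is accepted and "abab" is rejected.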


Journal ArticleDOI
15 Jan 2006
TL;DR: It is shown that for a given LRG there exists an LA such that they accept the same languages, and vice versa, and the equivalence between deterministic lattice-valued regular grammars and deterministic lattice-valued finite automata is shown.
Abstract: In this study, we introduce the concept of lattice-valued regular grammars. Such grammars have become a necessary tool for the analysis of fuzzy finite automata. The relationship between lattice-valued finite automata (LA) and lattice-valued regular grammars (LRG) is discussed and we obtain the following results: for a given LRG, there exists an LA such that they accept the same languages, and vice versa. We also show the equivalence between deterministic lattice-valued regular grammars and deterministic lattice-valued finite automata.
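The classical (non-lattice-valued) direction of this grammar-automaton correspondence is easy to sketch: a right-linear grammar converts directly into an NFA whose states are the nonterminals plus one accepting state. The encoding below is illustrative only:

```python
def grammar_to_nfa(rules, start):
    """
    Textbook conversion of a right-linear grammar into an NFA acceptor.
    rules: iterable of (A, a, B) encoding A -> a B, with B = None for
           a terminating rule A -> a. (Epsilon rules are not handled.)
    Returns a function testing membership in the generated language.
    """
    delta = {}
    for A, a, B in rules:
        # '$' is the single accepting state of the NFA.
        delta.setdefault((A, a), set()).add(B if B is not None else '$')

    def accepts(word):
        states = {start}
        for a in word:
            states = set().union(*(delta.get((q, a), set()) for q in states))
        return '$' in states

    return accepts
```

For example, the grammar S -> 0S | 1S | 1 generates the binary strings ending in 1, and the derived NFA accepts exactly those.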

Journal ArticleDOI
TL;DR: The algorithm computes a canonical representation of a simple language, converting its arbitrary simple grammar into prime normal form (PNF); a simple grammar is in PNF if all its nonterminals define primes.

Proceedings ArticleDOI
28 Mar 2006
TL;DR: The relationship between grammar-based compression of strings over unbounded and bounded alphabets is investigated and may provide a first step towards solving the long standing open question whether minimum grammar- based compression of binary strings is NP-complete.
Abstract: Given a string, the task of grammar-based compression is to find a small context-free grammar that generates exactly that string. We investigate the relationship between grammar-based compression of strings over unbounded and bounded alphabets. Specifically, we show how to transform a grammar for a string over an unbounded alphabet into a grammar for a block coding of that string over a fixed bounded alphabet and vice versa. From these constructions, we obtain asymptotically tight relationships between the minimum grammar sizes for strings and their block codings. Furthermore, we exploit an improved bound of our construction for overlap-free block codings to show that a polynomial time algorithm for approximating the minimum grammar for binary strings within a factor of c yields a polynomial time algorithm for approximating the minimum grammar for strings over arbitrary alphabets within a factor of 24c + ε (for arbitrary ε > 0). Currently, the latter problem is known to be NP-hard to approximate within a factor of 8569/8568. Since there is some hope to prove a nonconstant lower bound, our results may provide a first step towards solving the long standing open question whether minimum grammar-based compression of binary strings is NP-complete.
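A minimal sketch of what grammar-based compression means in practice is a greedy Re-Pair-style digram replacement (this is not the paper's block-coding construction, just an illustration of straight-line grammars):

```python
from collections import Counter

def repair_compress(s):
    """
    Greedy Re-Pair-style sketch: repeatedly replace the most frequent
    adjacent pair with a fresh nonterminal until no pair repeats.
    Returns (start sequence, rules); together they form a straight-line
    context-free grammar generating exactly s.
    """
    seq = list(s)
    rules = {}
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        pair, count = max(pairs.items(), key=lambda kv: kv[1],
                          default=((None, None), 0))
        if count < 2:
            return seq, rules
        nt = f"R{next_id}"
        next_id += 1
        rules[nt] = pair                 # rule nt -> pair[0] pair[1]
        out, i = [], 0
        while i < len(seq):              # left-to-right replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out

def expand(sym, rules):
    """Decompress: expand a symbol back into the original substring."""
    if sym not in rules:
        return sym
    left, right = rules[sym]
    return expand(left, rules) + expand(right, rules)
```

On "abababab" this yields a two-symbol start sequence with rules R0 -> ab and R1 -> R0 R0; expanding the start sequence restores the input.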


Proceedings ArticleDOI
17 Jul 2006
TL;DR: This work reflects on the experience with the Russian resource grammar trying to answer the questions: how well Russian fits into the common interface and where the line between language-independent and language-specific should be drawn.
Abstract: A resource grammar is a standard library for the GF grammar formalism. It raises the abstraction level of writing domain-specific grammars by taking care of the general grammatical rules of a language. GF resource grammars have been built in parallel for eleven languages and share a common interface, which simplifies multilingual applications. We reflect on our experience with the Russian resource grammar trying to answer the questions: how well Russian fits into the common interface and where the line between language-independent and language-specific should be drawn.

Proceedings Article
01 Jan 2006
TL;DR: A new application area for grammar inference is proposed which intends to make domain-specific language development easier and finds a second application in renovation tools for legacy software systems.
Abstract: Grammatical inference (or grammar inference) has been applied to various problems in areas such as computational biology, and speech and pattern recognition, but its application to the programming language problem domain has been limited. We propose a new application area for grammar inference which intends to make domain-specific language development easier and finds a second application in renovation tools for legacy software systems. We discuss the improvements made to our core incremental approach to inferring context-free grammars. The approach affords a number of advancements over our previous genetic-programming based inference system. We discuss the beam search heuristic for improved searching in the solution space of all grammars, the Minimum Description Length heuristic to direct the search towards simpler grammars, and the right-hand-side subset constructor operator.

Journal ArticleDOI
01 Dec 2006
TL;DR: This theoretical paper studies how to translate finite state automata into categorial grammars and back, and shows that the generalization operators employed in both domains can be compared and that their result can always be represented by generalized automata, called "recursive automata".
Abstract: In this theoretical paper, we compare the "classical" learning techniques used to infer regular grammars from positive examples with the ones used to infer categorial grammars. To this aim, we first study how to translate finite state automata into categorial grammars and back. We then show that the generalization operators employed in both domains can be compared, and that their result can always be represented by generalized automata, called "recursive automata". The relation between these generalized automata and categorial grammars is studied in detail. Finally, new learnable subclasses of categorial grammars are defined, for which learning from strings is hardly more expensive than learning from structures.

Journal Article
TL;DR: A parsing methodology to recognize a set of symbols represented by an adjacency grammar, a grammar that describes a symbol in terms of the primitives that form it and the relations among these primitives.
Abstract: Syntactic approaches to structural symbol recognition are characterized by defining symbols using a grammar. Following the grammar productions, a parser is constructed to recognize symbols: given an input, the parser detects whether or not it belongs to the language generated by the grammar, thereby recognizing the symbol. In this paper, we describe a parsing methodology to recognize a set of symbols represented by an adjacency grammar, a grammar that describes a symbol in terms of the primitives that form it and the relations among these primitives. These relations are called constraints, and they are validated using a defined cost function. The cost function approximates the degree of distortion associated with the constraint; when a symbol has been recognized, the cost associated with the symbol acts as a similarity value. The method has been evaluated qualitatively, by asking users to draw sketches, and quantitatively, using a benchmark database of sketched symbols.

01 Jan 2006
TL;DR: A new type of constraint-based grammars, Lexicalized Well-Founded Grammars (LWFGs), which allow deep language understanding and are learnable are defined, and the learnability theorem is proved, which extends significantly the class of problems learnable by Inductive Logic Programming methods.
Abstract: Computationally efficient models for natural language understanding can have a wide variety of applications, from text mining and question answering to natural language interfaces to databases. Constraint-based grammar formalisms have been widely used for deep language understanding. Yet one serious obstacle for their use in real-world applications is that these formalisms have overlooked an important requirement: learnability. Currently, there is a poor match between these grammar formalisms and existing learning methods. This dissertation defines a new type of constraint-based grammars, Lexicalized Well-Founded Grammars (LWFGs), which allow deep language understanding and are learnable. These grammars model both syntax and semantics and have constraints at the rule level for semantic composition and semantic interpretation. The interpretation constraints allow access to meaning during language processing; they establish links between linguistic expressions and the entities they refer to in the real world. We use an ontology-based interpretation, proposing a semantic representation that can be conceived as an ontology query language. This representation is sufficiently expressive to represent many aspects of language and yet sufficiently restrictive to support learning and tractable inferences. In this thesis, we propose a new relational learning model for LWFG induction. The learner is presented with a small set of positive representative examples, which consist of utterances paired with their semantic representations. We have proved that the search space for grammar induction is a complete grammar lattice, which allows the construction and generalization of the hypotheses and guarantees the uniqueness of the solution, regardless of the order of learning. We have proved a learnability theorem and have provided polynomial algorithms for LWFG induction, proving their soundness.
The learnability theorem extends significantly the class of problems learnable by Inductive Logic Programming methods. In this dissertation, we have implemented a system that represents an experimental platform for all the theoretical algorithms. The system has the practical advantage of implementing sound grammar revision and grammar merging, which allow an incremental coverage of natural language fragments. We have provided qualitative evaluations that cover the following issues: coverage of diverse and complex linguistic phenomena; terminological knowledge acquisition from natural language definitions; and handling of both precise and vague questions with precise answers at the concept level.

Proceedings Article
01 Jan 2006
TL;DR: It is advocated that two-dimensional context-free grammars can be successfully used in the analysis of images containing objects that exhibit structural relations, and it is demonstrated in a pilot study on recognition of off-line hand-written mathematical formulae that they have the potential to deal with real-life noisy images.
Abstract: This contribution advocates that two-dimensional context-free grammars can be successfully used in the analysis of images containing objects that exhibit structural relations. The idea of structural construction is further developed. The approach can be made computationally efficient and practical, and can cope with noise. We have developed and tested the method in a pilot study aiming at recognition of off-line mathematical formulae. A further novelty is that symbol segmentation in the image and structural analysis are not treated as two separate processes, which allows the system to recover from errors made in initial symbol segmentation.
The paper serves two main purposes. First, it points the reader's attention to the theory of two-dimensional (2D) languages, focusing on context-free grammars that have the potential to cope with structural relations in images. Second, it demonstrates, in a pilot study concerning recognition of off-line hand-written mathematical formulae, that 2D context-free grammars have the potential to deal with real-life noisy images. The enthusiasm for grammar-based methods in pattern recognition from the 1970s [6] has gradually faded due to their inability to cope with errors and noise. Even mathematical linguistics, in which the formal grammar approach was pioneered [4], has tended towards statistical methods since the 1990s. M.I. Schlesinger of the Ukrainian Academy of Sciences in Kiev has been developing a 2D grammar-based pattern recognition theory in the context of engineering-drawing analysis since the late 1970s; his theory was first explicated in English in the 10th chapter of the monograph [17]. The first author of this paper independently studied the theoretical limits of 2D grammars [14] and proved them to be rather restrictive.
The main motivation of the reported work is to discover to what extent 2D grammars are applicable to practical image analysis. We have chosen this application domain because formulae have a clear structure and comparable work by others exists. Approaches to mathematical formula recognition can be categorized along two directions: on-line recognition (the timing of the pen strokes is available) versus off-line recognition (only an image is available), and printed versus hand-written formulae. We deal with off-line recognition of hand-written formulae in this contribution; the approach can, of course, also be applied to printed formulae.

Journal ArticleDOI
TL;DR: The analogical conception of Chomsky normal form and Greibach normal form for linear, monadic context-free tree grammars (LM-CFTGs) is presented, which will provide deeper analyses of the class of languages generated by mildly context-sensitive grammars.
Abstract: This paper presents the analogical conception of Chomsky normal form and Greibach normal form for linear, monadic context-free tree grammars (LM-CFTGs). LM-CFTGs generate the same class of languages as four well-known mildly context-sensitive grammars. It will be shown that any LM-CFTG can be transformed into equivalent ones in both normal forms. As Chomsky normal form and Greibach normal form for context-free grammars (CFGs) play a very important role in the study of formal properties of CFGs, it is expected that the Chomsky-like normal form and the Greibach-like normal form for LM-CFTGs will provide deeper analyses of the class of languages generated by mildly context-sensitive grammars.
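For the string case that these tree-grammar normal forms generalize, the central step of the classical Chomsky normal form construction is binarization of long right-hand sides. A minimal sketch (unit and epsilon rule elimination omitted; the fresh nonterminal names `X0`, `X1`, ... are illustrative):

```python
def to_cnf_binarize(rules):
    """
    Binarization step of the Chomsky normal form construction for CFGs:
    replace every long rule A -> X1 X2 ... Xk (k > 2) by a chain of
    binary rules using fresh nonterminals.
    rules: list of (lhs, [rhs symbols]).
    """
    out, fresh = [], 0
    for lhs, rhs in rules:
        while len(rhs) > 2:
            nt = f"X{fresh}"                 # fresh chain nonterminal
            fresh += 1
            out.append((lhs, [rhs[0], nt]))  # lhs -> first-symbol nt
            lhs, rhs = nt, rhs[1:]           # continue with the tail
        out.append((lhs, rhs))
    return out
```

For example, S -> A B C D becomes S -> A X0, X0 -> B X1, X1 -> C D.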


Book ChapterDOI
20 Sep 2006
TL;DR: An algorithm is presented that identifies right-unique simple grammars in the limit from positive data and computes a conjecture in polynomial time in the size of the input data if the authors regard the cardinality of the alphabet as a constant.
Abstract: The class of very simple grammars is known to be polynomial-time identifiable in the limit from positive data. This paper introduces an extension of very simple grammars called right-unique simple grammars, and presents an algorithm that identifies right-unique simple grammars in the limit from positive data. The learning algorithm possesses the following three properties. It computes a conjecture in polynomial time in the size of the input data if we regard the cardinality of the alphabet as a constant. It always outputs a grammar which is consistent with the input data. It never changes the conjecture unless the newly provided example contradicts the previous conjecture. The algorithm has a sub-algorithm that solves the inclusion problem for a superclass of right-unique simple grammars, which is also presented in this paper.
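The consistency and conservatism properties stated here can be illustrated on a far simpler class, finite languages, where a learner with both properties is trivial. This is purely illustrative and unrelated to the paper's algorithm for right-unique simple grammars:

```python
def finite_language_learner(stream):
    """
    Toy identification-in-the-limit learner for finite languages.
    Its conjecture is always consistent with the examples seen so far,
    and it changes only when a new example contradicts it (conservatism).
    Yields the conjecture (a frozenset) after each positive example.
    """
    conjecture = frozenset()
    for example in stream:
        if example not in conjecture:        # conservative update
            conjecture = conjecture | {example}
        yield conjecture
```

On the example stream a, b, a the conjecture grows to {a}, then {a, b}, then stays unchanged.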

Journal ArticleDOI
TL;DR: This paper introduces a slight variant of DGGs, called persistent dynamic graph grammars (PDGGs), that can be encoded in PGGs preserving concurrency and is exploited to define a concurrent semantics for the Join calculus enriched with persistent messaging.
Abstract: Dynamic graph grammars (DGGs) are a reflexive extension of graph grammars that have been introduced to represent mobile reflexive systems and calculi at a convenient level of abstraction. Persistent graph grammars (PGGs) are a class of graph grammars that admits a fully satisfactory concurrent semantics, thanks to the fact that all so-called asymmetric conflicts (between items that are read by some productions and consumed by others) are avoided. In this paper we introduce a slight variant of DGGs, called persistent dynamic graph grammars (PDGGs), that can be encoded in PGGs preserving concurrency. Finally, PDGGs are exploited to define a concurrent semantics for the Join calculus enriched with persistent messaging (if preferred, the latter can be naively seen as dynamic nets with read arcs).

Journal ArticleDOI
TL;DR: Generating and accepting programmed grammars with bounded degree of non-regulation (the maximum number of elements in the success or failure fields of the underlying grammar) are considered; the results shed new light on a longstanding open problem in the theory of computational complexity.
Abstract: We consider generating and accepting programmed grammars with bounded degree of non-regulation, that is, the maximum number of elements in the success or failure fields of the underlying grammar. In particular, it is shown that this measure can be restricted to two without loss of descriptional capacity, regardless of whether arbitrary derivations or left-most derivations are considered. Moreover, in some cases, precise characterizations of the linear bounded automaton problem in terms of programmed grammars are obtained. Thus, the results presented in this paper shed new light on a longstanding open problem in the theory of computational complexity.

Journal Article
TL;DR: In this article, a few families of context-free grammars in Chomsky normal form generating the language of cyclic shifts of a fixed word are investigated with respect to their descriptional complexity, i.e., the number of nonterminal symbols ν(n) and the number of rules π(n) of a given grammar as functions of n. These ν and π happen to be functions bounded by low-degree polynomials.
Abstract: Let {a1, a2,..., an} be an alphabet of n symbols and let Cn be the language of circular or cyclic shifts of the word a1a2 ... an; so Cn = {a1a2 ... an-1an, a2a3 ... ana1, ..., ana1 ... an-2an-1}. We discuss a few families of context-free grammars Gn (n ≥ 1) in Chomsky normal form such that Gn generates Cn. The grammars in these families are investigated with respect to their descriptional complexity, i.e., we determine the number of nonterminal symbols ν(n) and the number of rules π(n) of Gn as functions of n. These ν and π happen to be functions bounded by low-degree polynomials, particularly when we restrict our attention to unambiguous grammars. Finally, we introduce a family of minimal unambiguous grammars for which ν and π are linear.
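The language Cn itself is straightforward to enumerate; the paper's contribution concerns the size of CNF grammars for it, not its generation. A one-line illustration:

```python
def cyclic_shifts(word):
    """The language C_n of all cyclic shifts of a fixed word a_1 ... a_n."""
    return {word[i:] + word[:i] for i in range(len(word))}
```

For a word of n distinct symbols, Cn contains exactly n strings, e.g. cyclic_shifts("abc") is {"abc", "bca", "cab"}.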

01 Jan 2006
TL;DR: The concept of "mildly context-sensitive" grammar formalisms, which are full-fledged and efficient for syntactic parsing, is presented, and a number of these formalisms' definitions are summarized, together with the relations between one another and, most importantly, a survey of known equivalences.
Abstract: The present work is set in the field of natural language syntactic parsing. We present the concept of "mildly context-sensitive" grammar formalisms, which are full-fledged and efficient for syntactic parsing. We summarize a number of these formalisms' definitions, together with the relations between one another and, most importantly, a survey of known equivalences. The conversion of Edward Stabler's Minimalist Grammars into Multiple Context-Free Grammars (MCFG) is presented in particular detail, along with a study of the complexity of this procedure and of its implications for parsing. This report is an adaptation of the French Master thesis that bears the same name, from Bordeaux 1 University, June 2006.