
Showing papers in "Grammars in 1998"


Journal ArticleDOI
01 Jan 1998-Grammars
TL;DR: First, the idea of associating a tree to a derivation in such a grammar is considered; this can be done in a natural way by associating parentheses to the contexts of the grammar, which yields a restriction on the derivations in a contextual grammar as well as a direct manner of defining the ambiguity of contextual grammars.
Abstract: The aim of this paper is to start investigations on the possibilities of introducing a structure in the strings generated by internal contextual grammars. First, we consider the idea of associating a tree to a derivation in such a grammar. This can be done in a natural way, by associating parentheses to the contexts of the grammar. In this way we obtain a restriction on the derivation in a contextual grammar, as well as a direct manner of defining the ambiguity of contextual grammars. Then, we consider a relation on the set of symbols appearing in a string, in the sense already used in descriptive linguistics. By starting from a set of axioms which are structured strings and adjoining to them contexts as usual in contextual grammars, but having prescribed dependences between their symbols, we obtain a set of structured strings. By imposing conditions on the structure of the strings (crossed-noncrossed dependences, a tree structure, a link structure in the sense of link grammars, etc.), we obtain a restriction on the derivation in a contextual grammar, as well as a direct manner of defining the structure of languages generated by contextual grammars. The linguistic relevance of these structures associated with strings generated by contextual grammars remains to be further explored.
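The first idea, recording each context adjoining with a pair of parentheses, can be illustrated with a short Python sketch. All names here are hypothetical, and the sketch makes a simplifying assumption not in the paper: a single context is adjoined around the leftmost occurrence of its selector.

```python
# A minimal sketch of an internal contextual grammar whose contexts carry
# parentheses, so each derivation step leaves a visible bracket structure.

def adjoin(word, selector, context):
    """Adjoin context (u, v) around the leftmost occurrence of `selector`
    in `word`, wrapping the inserted material in parentheses to record
    the derivation step."""
    u, v = context
    i = word.find(selector)
    if i < 0:
        return None  # selector does not occur: context not applicable
    j = i + len(selector)
    # the parentheses mark the adjoined context, inducing a tree structure
    return word[:i] + "(" + u + word[i:j] + v + ")" + word[j:]

# Example grammar: axiom "ab", one context ("a", "b") selected by "ab".
w = "ab"
for _ in range(2):
    w = adjoin(w, "ab", ("a", "b"))
print(w)  # the nesting of brackets records the order of adjoinings
```

Reading the brackets back as a tree gives the derivation tree of the string, and two derivations of the same string that produce different bracketings witness ambiguity in the sense of the paper.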

18 citations


Journal ArticleDOI
01 May 1998-Grammars
TL;DR: It will be shown that the languages generated by some subclasses of such grammars are strictly included in the context-free languages and that there are regular languages which cannot be generated by any bracketed contextual grammar.
Abstract: Bracketed contextual grammars are contextual grammars with an induced Dyck-structure to control the derivation process and to provide derivation trees. In this paper, we study the generative capacity and closure properties of bracketed and fully bracketed contextual grammars. It will be shown that the languages generated by some subclasses of such grammars are strictly included in the context-free languages and that there are regular languages which cannot be generated by any bracketed contextual grammar.

9 citations


Journal ArticleDOI
01 Jan 1998-Grammars
TL;DR: Sampson (1987, 1992, and 1995) argues that there is no grammatical/ungrammatical distinction, based on a study of the distribution of noun phrases in the Lancaster-Oslo/Bergen corpus of British English.
Abstract: Sampson (1987, 1992, and 1995) argues that there is no grammatical/ungrammatical distinction, based on a study (Sampson, 1987) of the distribution of noun phrases in the Lancaster-Oslo/Bergen (LOB) corpus of British English (Garside et al., 1987). As many phrases occur rarely, it is impossible to make a principled distinction between grammatical and ungrammatical phrases, Sampson claims. This paper examines Sampson's evidence against the grammatical/ungrammatical distinction. It will first be argued that another putative counter-argument to Sampson's claim (Taylor et al., 1989) is incorrect. It will then be shown that Sampson's evidence does not at all bear on the issue of the grammatical/ungrammatical distinction.

6 citations


Journal ArticleDOI
András Kornai1
01 May 1998-Grammars
TL;DR: This paper develops some figures of merit that measure how well a formal language approximates an actual language and argues that from the statistical perspective developed here even some classical results of mathematical linguistics, such as Chomsky's (1957) demonstration of the inadequacy of finite state models, are highly suspect.
Abstract: From the perspective of the linguist, the theory of formal languages serves as an abstract model to address issues such as complexity, learnability, information content, etc. which are hard to investigate directly on natural languages. One question that has not been sufficiently addressed in the literature is to what extent can a result proved on an abstract model be presumed to hold for the concrete languages that are, after all, the real object of interest in linguistics. In this paper we attempt to remedy this defect by developing some figures of merit that measure how well a formal language approximates an actual language. We will review and refine some standard notions of mathematical density to arrive at a numerical figure that shows the degree to which one language approximates another, and show how such a figure can be computed between some formal languages and empirically measured between a real language and its formal model. In the concluding section of the paper we will argue that from the statistical perspective developed here even some classical results of mathematical linguistics, such as Chomsky's (1957) demonstration of the inadequacy of finite state models, are highly suspect.
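The kind of language-to-language comparison described above can be illustrated with a crude finite approximation: the agreement rate of two membership predicates over all strings up to a bounded length. This is a hypothetical stand-in for the paper's density-based figures of merit, not the actual measures it develops.

```python
from itertools import product

def density_agreement(member1, member2, alphabet, max_len):
    """Fraction of strings of length <= max_len over `alphabet` on which
    the two language-membership predicates agree; a crude, finite proxy
    for a density-based distance between two languages."""
    total = agree = 0
    for n in range(max_len + 1):
        for tup in product(alphabet, repeat=n):
            w = "".join(tup)
            total += 1
            agree += member1(w) == member2(w)
    return agree / total

def anbn(w):
    """The non-regular language a^n b^n."""
    k = len(w) // 2
    return len(w) % 2 == 0 and w == "a" * k + "b" * k

def astar_bstar(w):
    """A regular over-approximation: all a's before all b's."""
    return w == "a" * w.count("a") + "b" * w.count("b")

print(density_agreement(anbn, astar_bstar, "ab", 6))
```

Because `a*b*` strictly contains `a^n b^n`, every disagreement is a string the regular model over-generates; the agreement rate quantifies how good the finite-state approximation is at this string length, which is the spirit of the paper's argument about finite state models.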

5 citations


Journal ArticleDOI
01 Jan 1998-Grammars
TL;DR: This paper develops and presents a set of formalisms based on the Marker Hypothesis that natural languages are “marked” for complex syntactic structure at surface form, and shows that the class of strongly marked languages does not admit all finite languages and thus escapes the hangman's noose of Gold's learnability proofs.
Abstract: It has long been known that language acquisition is only possible if information is available above and beyond the mere presence of a set of strings in the language. One commonly postulated source of such information is a (possibly innate) constraint on the syntactic forms that a grammar can take. This paper develops and presents a set of formalisms based on the Marker Hypothesis that natural languages are “marked” for complex syntactic structure at surface form. It further compares the expressivity and restrictedness of these formalisms and shows that, first, not all constraints are actually restrictive, and second, that the Marker Hypothesis, and its implicit function/content word distinction, provide strong restrictions on the form of allowable grammars. These restrictions may in turn provide evidence about its actual psychological reality and salience. In particular, the class of strongly marked languages can be demonstrated not to admit all finite languages and thus not be subject to the hangman's noose of Gold's learnability proofs, and it is conjectured that these languages may provide a computable method of inferring human-like languages.

4 citations


Journal ArticleDOI
01 May 1998-Grammars
TL;DR: Two strategies of parallel adjoining of contexts are considered for contextual grammars with choice, and Chomsky-Schützenberger-type characterizations of the context-free and recursively enumerable languages are provided.
Abstract: Two strategies of parallel adjoining of contexts are considered for contextual grammars with choice. After a short comparison between them, Chomsky-Schützenberger-type characterizations of the context-free and recursively enumerable languages are provided.

3 citations


Journal ArticleDOI
01 Jan 1998-Grammars
TL;DR: This work proposes a new approach to unification-based Mathematical and Computational Linguistics, the Lexical Object Theory, and gives a detailed description of the specification formalism, the computational model it is based on, and the inference rules on lexical objects at the specification level.
Abstract: Unification has become a major paradigm in Mathematical and Computational Linguistics. The research done in this area may be classified in four main streams: feature structures as an adequate model for the description of linguistic phenomena, typed unification, representation of feature structures, and unification algorithms. This work proposes a new approach to unification-based Mathematical and Computational Linguistics: the Lexical Object Theory. The main design criteria are based on linguistic motivation, computational efficiency and formal soundness. The first part of the work outlines the main characteristics of the Lexical Object Theory, its comprehensive orientation, and its layered structure based on the separation of the following levels: specification, transformation, typification, representation and unification. The second part concentrates on the specification level of the Lexical Object Theory. The linguistic motivation of this model is presented, as well as a detailed description of the specification formalism, the computational model it is based on, and finally, the inference rules on lexical objects at the specification level.
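Since the paper builds on unification of feature structures, a minimal sketch of that operation may help. This toy version represents feature structures as nested Python dicts and ignores reentrancy (structure sharing) and typing; it illustrates the general paradigm, not the paper's Lexical Object formalism.

```python
def unify(f, g):
    """Unify two feature structures given as nested dicts (or atomic
    values); return their most general combination, or None on a clash."""
    if isinstance(f, dict) and isinstance(g, dict):
        out = dict(f)
        for key, val in g.items():
            if key in out:
                sub = unify(out[key], val)
                if sub is None:
                    return None  # feature clash below this key
                out[key] = sub
            else:
                out[key] = val  # feature only in g: carry it over
        return out
    return f if f == g else None  # atomic values must match exactly

np_phrase = {"cat": "NP", "agr": {"num": "sg"}}
subj_req = {"agr": {"num": "sg", "per": "3"}}
print(unify(np_phrase, subj_req))  # merged structure: agreement succeeds
print(unify({"agr": {"num": "sg"}}, {"agr": {"num": "pl"}}))  # None: clash
```

The result of a successful unification subsumes both inputs, which is what makes unification a natural mechanism for combining partial lexical descriptions.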

1 citation