
Showing papers on "L-attributed grammar published in 1992"


Book ChapterDOI
01 Jan 1992
TL;DR: The Left-to-Right Inside algorithm is introduced, which computes the probability that successive applications of the grammar rewriting rules produce a word string whose initial substring is a given one.
Abstract: In automatic speech recognition, language models can be represented by Probabilistic Context Free Grammars (PCFGs). In this lecture we review some known algorithms which handle PCFGs; in particular an algorithm for the computation of the total probability that a PCFG generates a given sentence (Inside), an algorithm for finding the most probable parse tree (Viterbi), and an algorithm for the estimation of the probabilities of the rewriting rules of a PCFG given a corpus (Inside-Outside). Moreover, we introduce the Left-to-Right Inside algorithm, which computes the probability that successive applications of the grammar rewriting rules (beginning with the sentence start symbol s) produce a word string whose initial substring is a given one.
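As an illustration of the standard Inside computation the lecture reviews, here is a minimal Python sketch for a PCFG in Chomsky normal form. The rule encodings (the `unary` and `binary` dictionaries) are our own assumptions for the sketch, not the paper's notation:

```python
from collections import defaultdict

def inside_probability(words, unary, binary, start="S"):
    """Inside algorithm for a PCFG in Chomsky normal form.

    unary:  {(A, terminal): P(A -> terminal)}
    binary: {(A, B, C): P(A -> B C)}
    Returns the total probability that `start` derives `words`.
    """
    n = len(words)
    # beta[i][j][A] = inside probability of nonterminal A spanning words[i..j]
    beta = [[defaultdict(float) for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        for (A, t), p in unary.items():
            if t == w:
                beta[i][i][A] += p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                      # split point
                for (A, B, C), p in binary.items():
                    left, right = beta[i][k][B], beta[k + 1][j][C]
                    if left and right:
                        beta[i][j][A] += p * left * right
    return beta[0][n - 1][start]
```

Replacing the sum over split points and rules by a max (and recording backpointers) turns this same chart computation into the Viterbi algorithm for the most probable parse.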

218 citations


01 Mar 1992
TL;DR: This work presents a scheme for learning probabilistic dependency grammars from positive training examples plus constraints on rules, and reports the results of two experiments: the first, with minimal constraints, was unsuccessful, while the second, with significant constraints, succeeded within the bounds of the task.
Abstract: We present a scheme for learning probabilistic dependency grammars from positive training examples plus constraints on rules. In particular, we present the results of two experiments. The first, in which the constraints were minimal, was unsuccessful. The second, with significant constraints, was successful within the bounds of the task we had set.

190 citations


Proceedings ArticleDOI
23 Aug 1992
TL;DR: An algorithm for computing the probability of a sentence generated by a SLTAG and an inside-outside-like iterative algorithm for estimating the parameters of a SLTAG are introduced, and preliminary experiments showing some of the advantages of SLTAG over stochastic context-free grammars are reported.
Abstract: The notion of stochastic lexicalized tree-adjoining grammar (SLTAG) is formally defined. The parameters of a SLTAG correspond to the probability of combining two structures, each one associated with a word. The characteristics of SLTAG are unique and novel since it is lexically sensitive (as N-gram models or Hidden Markov Models) and yet hierarchical (as stochastic context-free grammars). Then, two basic algorithms for SLTAG are introduced: an algorithm for computing the probability of a sentence generated by a SLTAG and an inside-outside-like iterative algorithm for estimating the parameters of a SLTAG given a training corpus. Finally, we show how SLTAG enables the definition of a lexicalized version of stochastic context-free grammars, and we report preliminary experiments showing some of the advantages of SLTAG over stochastic context-free grammars.

169 citations


Book ChapterDOI
05 Apr 1992
TL;DR: Standard Functional Unification Grammars provide a structurally guided top-down control regime for sentence generation, but when content realization includes lexical choice this regime is no longer appropriate, for two reasons: mapping "floating" semantic elements can trigger extensive backtracking, and lexical choice requires accessing external constraint sources on demand.
Abstract: Standard Functional Unification Grammars (FUGs) provide a structurally guided top-down control regime for sentence generation. When using FUGs to perform content realization as a whole, including lexical choice, this regime is no longer appropriate for two reasons: (1) the unification of non-lexicalized semantic input with an integrated lexico-grammar requires mapping “floating” semantic elements which can trigger extensive backtracking and (2) lexical choice requires accessing external constraint sources on demand to preserve the modularity between conceptual and linguistic knowledge.

96 citations


Proceedings ArticleDOI
15 Sep 1992
TL;DR: A subclass of unrestricted relational grammars called fringe relational grammars is proposed along with an Earley-style recognition algorithm, which uses indexing methods based on fringe elements in order to take advantage of equivalence relations on parse table entries, thus avoiding redundant processing.
Abstract: Predictive, Earley-style parsing for unrestricted relational grammars faces a number of problems not present in a context-free string grammar counterpart. Here a subclass of unrestricted relational grammars called fringe relational grammars is proposed along with an Earley-style recognition algorithm. The grammar makes use of fringe elements (the minimal and maximal elements of partially ordered sets) in defining its productions. The parsing algorithm uses indexing methods based on fringe elements in order to take advantage of equivalence relations on parse table entries, thus avoiding redundant processing.

79 citations


Journal ArticleDOI
TL;DR: This paper gives a progression of automata and shows that it corresponds exactly to the language hierarchy defined with control grammars, the first member of which is the class of context-free languages.

63 citations


Journal ArticleDOI
TL;DR: By extending the context-free hypergraph grammar with a context-free grammar and a semantic domain, a syntax-directed translation device is obtained that is equivalent to the attribute grammar.
Abstract: Context-free hypergraph grammars and attribute grammars generate the same class of term languages. Extending the context-free hypergraph grammar with a context-free grammar and a semantic domain, a syntax-directed translation device is obtained that is equivalent to the attribute grammar.

44 citations


Proceedings ArticleDOI
28 Jun 1992
TL;DR: It is shown that the class of string languages generated by linear context-free rewriting systems is equal to theclass of output languages of deterministic tree-walking transducers.
Abstract: We show that the class of string languages generated by linear context-free rewriting systems is equal to the class of output languages of deterministic tree-walking transducers. From equivalences that have previously been established we know that this class of languages is also equal to the string languages generated by context-free hypergraph grammars, multicomponent tree-adjoining grammars, and multiple context-free grammars and to the class of yields of images of the regular tree languages under finite-copying top-down tree transducers.

44 citations


Proceedings ArticleDOI
30 Aug 1992
TL;DR: Graph grammars provide a useful formalism for describing structural manipulations of multidimensional data and are used most successfully in application areas other than pattern recognition.
Abstract: Graph grammars provide a useful formalism for describing structural manipulations of multidimensional data. The authors review briefly theoretical aspects of graph grammars, particularly of the embedding problem, and then summarize graph-grammar applications. Currently graph grammars are used most successfully in application areas other than pattern recognition. Widespread application of graph grammars to picture processing tasks will require research into problems of large-scale grammars, readability of grammars, and grammatical processing of uncertain data.

42 citations


Proceedings ArticleDOI
31 Mar 1992
TL;DR: The use of Common Lisp without its object language addition and the use of the X Window interface to Common Lisp (CLX) for the implementation of XTAG, a workbench for the development of tree-adjoining grammars and their parsers, are described.
Abstract: We describe a workbench (XTAG) for the development of tree-adjoining grammars and their parsers, and discuss some issues that arise in the design of the graphical interface. In contrast to string rewriting grammars generating trees, the elementary objects manipulated by a tree-adjoining grammar are extended trees (i.e. trees of depth one or more) which capture syntactic information of lexical items. The unique characteristics of tree-adjoining grammars, their elementary objects found in the lexicon (extended trees) and the derivational history of derived trees (also a tree), require a specially crafted interface in which the perspective has shifted from a string-based to a tree-based system. XTAG provides such a graphical interface in which the elementary objects are trees (or tree sets) and not symbols (or strings). The kernel of XTAG is a predictive left-to-right parser for unification-based tree-adjoining grammar [Schabes, 1991]. XTAG includes a graphical editor for trees, a graphical tree printer, utilities for manipulating and displaying feature structures for unification-based tree-adjoining grammar, facilities for keeping track of the derivational history of TAG trees combined with adjoining and substitution, a parser for unification-based tree-adjoining grammars, utilities for defining grammars and lexicons for tree-adjoining grammars, a morphological recognizer for English (75 000 stems deriving 280 000 inflected forms) and a tree-adjoining grammar for English that covers a large range of linguistic phenomena. Considerations of portability, efficiency, homogeneity and ease of maintenance led us to the use of Common Lisp without its object language addition and to the use of the X Window interface to Common Lisp (CLX) for the implementation of XTAG.
XTAG without the large morphological and syntactic lexicons is public domain software. The large morphological and syntactic lexicons can be obtained through an agreement with ACL's Data Collection Initiative. XTAG runs under Common Lisp and X Window (CLX).

40 citations


Journal ArticleDOI
TL;DR: An introductory survey on graph grammars that provide rule-based mechanisms for generating, manipulating and analyzing graphs and two potential applications of graph-grammar concepts to semantic networks are indicated.
Abstract: In the first half of this paper, we give an introductory survey on graph grammars that provide rule-based mechanisms for generating, manipulating and analyzing graphs. In the second half, two potential applications of graph-grammar concepts to semantic networks are indicated.

Journal ArticleDOI
TL;DR: Dynamic parsers and growing grammars allow syntactic-only parsing of programs written in powerful and problem-adaptable programming languages, and easily perform purely syntactic strong type checking and operator overloading.
Abstract: We define "evolving grammars" as successions of static grammars, and dynamic parsers as parsers able to follow the evolution of a grammar during source program parsing. A growing context-free grammar will progressively incorporate production rules specific to the source program under parsing and will evolve, following the context created by the source program itself, toward a program-specific context-free grammar. Dynamic parsers and growing grammars allow syntactic-only parsing of programs written in powerful and problem-adaptable programming languages. Moreover, dynamic parsers easily perform purely syntactic strong type checking and operator overloading. The language used to specify grammar evolution and residual semantic actions can be the evolving language itself. The user can introduce new syntactic operators using a bootstrap procedure supported by the previously defined syntax. A dynamic parser ("ZzParser") has been developed by us and has been successfully employed by the APE 100 INFN group to develop a programming language ("ApeseLanguage") and other system software tools for the 100 GigaFlops SIMD parallel machine under development.

01 Jan 1992
TL;DR: Suggestions are made as to how unification grammars can be developed in order to handle difficult problems such as partially free word order, bound variables for semantic interpretation and resolving feature clashes in agreement.
Abstract: In this dissertation, it is shown that declarative, feature-based, unification grammars can be used efficiently for both parsing and generation. It is also shown that radically different algorithms are not needed for these two modes of processing. Given this similarity between parsing and generation, it will be easier to maintain consistency between input and output in interactive natural language interfaces. A Prolog implementation of the unification-based parser and DAG unifier is provided. The DAG unifier includes extensions to handle disjunction and negation. The parser presented in this thesis is based on Stuart Shieber's extensions of Earley's algorithm. This algorithm is further extended in order to incorporate traces and compound lexical items. Also, the algorithm is optimized by performing the subsumption test on restricted DAGs rather than on the full DAGs that are kept in the chart. Since the subsumption test can be very time consuming, this is a significant optimization, particularly for grammars with a considerable number of (nearly) left-recursive rules. A grammar which handles quantifier scoping is presented as an example of such a grammar. For generation, the algorithm is modified in order to optimize the use of both top-down and bottom-up information. Sufficient top-down information is ensured by modifying the restriction procedure so that semantic information is not lost. Sufficient bottom-up information is ensured by making the algorithm head-driven. Generation also requires that the chart be modified so that identical phrases are not generated at different string positions. It is shown how readjustments to the chart can be made whenever a duplicate phrase is predicted. The generator in this thesis does not perform equally well with all types of grammars. Grammars employing type raising may cause the generator to go into an unconstrained search. However, given the independently motivated principles of minimal type assignment and type raising only as needed, it is shown how such unconstrained searches can be avoided. Finally, suggestions are made as to how unification grammars can be developed in order to handle difficult problems such as partially free word order, bound variables for semantic interpretation, and resolving feature clashes in agreement.
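As a rough illustration of the kind of feature-structure unification such a parser and generator rely on, here is a minimal Python sketch. It handles only nested attribute-value structures and deliberately omits the thesis's reentrant DAGs, disjunction, and negation:

```python
def unify(f1, f2):
    """Unify two feature structures represented as nested dicts.

    Atomic values must match exactly; dicts unify recursively.
    Returns the unified structure, or None on unification failure.
    (Simplified sketch: no DAG reentrancy, disjunction, or negation.)
    """
    if isinstance(f1, dict) and isinstance(f2, dict):
        result = dict(f1)                 # start from a copy of f1
        for key, val in f2.items():
            if key in result:
                sub = unify(result[key], val)
                if sub is None:           # feature clash below this key
                    return None
                result[key] = sub
            else:                         # feature only in f2: just add it
                result[key] = val
        return result
    return f1 if f1 == f2 else None       # atomic values must agree
```

For example, unifying an NP with singular agreement against a structure that adds third person succeeds and merges the information, while singular against plural fails with None, which is exactly a feature clash in agreement.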

Proceedings ArticleDOI
23 Mar 1992
TL;DR: The authors describe how speech recognition and language analysis can be tightly coupled by developing an APSG for the analysis component and deriving automatically from it a finite-state approximation that is used as the recognition language model.
Abstract: A problem with many speech understanding systems is that grammars that are more suitable for representing the relation between sentences and their meanings, such as context free grammars (CFGs) and augmented phrase structure grammars (APSGs), are computationally very demanding. On the other hand, finite state grammars are efficient, but cannot represent directly the sentence-meaning relation. The authors describe how speech recognition and language analysis can be tightly coupled by developing an APSG for the analysis component and deriving automatically from it a finite-state approximation that is used as the recognition language model. Using this technique, the authors have built an efficient translation system that is fast compared to others with comparably sized language models.

Journal ArticleDOI
TL;DR: Attribute grammars provide a formal yet intuitive notation for specifying the static semantics of programming languages and consequently have been used in various compiler generation systems, but their use need not be limited to this.
Abstract: Attribute grammars provide a formal yet intuitive notation for specifying the static semantics of programming languages and consequently have been used in various compiler generation systems. Their use, however, need not be limited to this. With a little change in perspective, many programs may be regarded as interpreters and constructed as executable attribute grammars. The major advantage is that the resulting modular declarative structure facilitates various aspects of the software development process.
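A tiny Python sketch of the "programs as executable attribute grammars" idea: each production of a small expression grammar carries a semantic rule computing a synthesized `val` attribute from its children's attributes. The grammar and names are illustrative, not taken from the paper:

```python
# Each tuple tag names a production; the body of evaluate() plays the
# role of the semantic rules, computing the synthesized attribute "val".
def evaluate(node):
    kind = node[0]
    if kind == "num":                        # Num -> digits
        return node[1]                       # val = lexical value
    if kind == "add":                        # Expr -> Expr '+' Term
        return evaluate(node[1]) + evaluate(node[2])
    if kind == "mul":                        # Term -> Term '*' Factor
        return evaluate(node[1]) * evaluate(node[2])
    raise ValueError(f"unknown production {kind!r}")

# 2 + 3 * 4 as a syntax tree
tree = ("add", ("num", 2), ("mul", ("num", 3), ("num", 4)))
```

The declarative structure the abstract mentions shows up here as one independent semantic rule per production, which is what makes such specifications modular.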

Journal ArticleDOI
TL;DR: Sakakibara's algorithm solves the structural grammatical inference problem for reversible grammars by finding the reversible context-free grammar consistent with the sample.

Proceedings ArticleDOI
15 Sep 1992
TL;DR: A two dimensional extension of the Cocke-Kasami-Younger parser for context-free languages is used to parse figures using these grammars.
Abstract: Generalized two-dimensional context-free grammars, an extension of context-free grammars to two dimensions, are described. This extension is a generalization of Tomita's two-dimensional context-free grammars (M. Tomita, 1989), and better fits into the families of graph grammars described by Crimi (1990) (relation grammars) and by Flasinski (1988) (edNLC grammars). Figure grammars are particularly useful for applications such as handwritten mathematical expressions. A two-dimensional extension of the Cocke-Kasami-Younger parser for context-free languages is used to parse figures using these grammars.
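For reference, the original one-dimensional Cocke-Kasami-Younger recognizer that the paper extends to two dimensions can be sketched in Python as follows (the rule encodings are our own, and this is the string case, not the figure-parsing extension):

```python
def cky_recognize(words, unary, binary, start="S"):
    """CKY recognizer for a context-free grammar in Chomsky normal form.

    unary:  set of (A, terminal) rules A -> terminal
    binary: set of (A, B, C) rules A -> B C
    Returns True iff `start` derives the word sequence.
    """
    n = len(words)
    # table[i][j] = set of nonterminals deriving words[i..j]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = {A for (A, t) in unary if t == w}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                 # split point
                for (A, B, C) in binary:
                    if B in table[i][k] and C in table[k + 1][j]:
                        table[i][j].add(A)
    return start in table[0][n - 1]
```

The two-dimensional generalization replaces the linear split point with decompositions of a figure into sub-figures, but the bottom-up tabular control structure is the same.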


Proceedings ArticleDOI
23 Aug 1992
TL;DR: The method allows efficient processing of grammars which use necessity operators, an extension proposed for handling locality phenomena, and is easily adapted to allow incremental processing of Lambek grammars, a possibility that has hitherto been unavailable.
Abstract: This paper describes a method for chart parsing Lambek grammars. The method is of particular interest in two regards. Firstly, it allows efficient processing of grammars which use necessity operators, an extension proposed for handling locality phenomena. Secondly, the method is easily adapted to allow incremental processing of Lambek grammars, a possibility that has hitherto been unavailable.


Journal ArticleDOI
TL;DR: It is shown that a hierarchy of non-context-free languages, called control language hierarchy (CLH), generated by control grammars can be recognized in polynomial time.

Proceedings ArticleDOI
23 Feb 1992
TL;DR: The notion of stochastic lexicalized tree-adjoining grammar (SLTAG) is defined and basic algorithms for SLTAG are designed and an iterative algorithm for estimating the parameters of a SLTAG given a training corpus is introduced.
Abstract: The notion of stochastic lexicalized tree-adjoining grammar (SLTAG) is defined and basic algorithms for SLTAG are designed. The parameters of a SLTAG correspond to the probability of combining two structures each one associated with a word. The characteristics of SLTAG are unique and novel since it is lexically sensitive (as N-gram models or Hidden Markov Models) and yet hierarchical (as stochastic context-free grammars). An algorithm for computing the probability of a sentence generated by a SLTAG is presented. Then, an iterative algorithm for estimating the parameters of a SLTAG given a training corpus is introduced.

Proceedings ArticleDOI
28 Jun 1992
TL;DR: In this paper, the precise formulation of derivation for tree-adjoining grammars has important ramifications for a wide variety of uses of the formalism, from syntactic analysis to semantic interpretation and statistical language modeling.
Abstract: The precise formulation of derivation for tree-adjoining grammars has important ramifications for a wide variety of uses of the formalism, from syntactic analysis to semantic interpretation and statistical language modeling. We argue that the definition of tree-adjoining derivation must be reformulated in order to manifest the proper linguistic dependencies in derivations. The particular proposal is both precisely characterizable, through a compilation to linear indexed grammars, and computationally operational, by virtue of an efficient algorithm for recognition and parsing.

Journal ArticleDOI
TL;DR: Examines the generation of parallel evaluators for attribute grammars, targeted to shared-memory MIMD computers, and shows how to automatically transform productions of the form X → YX into list-productions of the form X → Y+.
Abstract: Examines the generation of parallel evaluators for attribute grammars, targeted to shared-memory MIMD computers. Evaluation-time overhead due to process scheduling and synchronization is reduced by detecting coarse-grain parallelism (as opposed to the naive one-process-per-node approach). As a means to more clearly expose inherent parallelism, it is shown how to automatically transform productions of the form X → YX into list-productions of the form X → Y+. This transformation allows many simplifications to be applied to the semantic rules, which can expose a significant degree of inherent parallelism, and thus further increase the evaluator's performance. Effectively, this constitutes an extension of the concept of attribute grammars to the level of abstract syntax.

Dissertation
01 Jan 1992
TL;DR: This thesis describes two examples of user-defined syntax: a new datatype construction, the conctype, the elements of which have a very flexible syntax, and user-defined distfix operators, which give the user the possibility to extend the syntax for expressions in a programming language.
Abstract: This thesis describes two examples of user-defined syntax. The first, and most thoroughly investigated, is a new datatype construction, the conctype, the elements of which have a very flexible syntax. An embedded language can easily be introduced into a programming language using conctypes, and computations are easily expressed using the concrete syntax and a special pattern-matching form. The second example is user-defined distfix operators, which give the user the possibility to extend the syntax for expressions in a programming language. We describe both a user's view and the implementation of these two examples. In both cases, context-free grammars serve as a basis for the definition of the new syntax. A problem that is investigated is how to disambiguate grammars with precedences. To see how this should be done, we investigate which language a grammar together with precedence rules defines. For a subclass of context-free grammars we give a predicate that defines the precedence-correct syntax trees according to some precedence rules. We also give an algorithm that transforms such a grammar to an ordinary unambiguous context-free grammar and prove the correctness of the algorithm. We use the algorithm in our implementation of distfix operators. For more general grammars, we isolate one kind of ambiguity which is suitable to resolve with precedence rules. We define the generated language for such a grammar by an attribute grammar. This approach of resolving ambiguity is used in the implementation of conctypes.
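One common operational reading of precedence rules, shown here only as an illustration (precedence climbing is a standard technique and not necessarily the thesis's algorithm), builds the unique precedence-correct tree directly from a flat token sequence:

```python
def parse(tokens, prec):
    """Precedence-climbing parse of a flat list of operands and binary
    operator tokens into an unambiguous tree.

    prec: dict mapping operator token -> binding strength.
    All operators are treated as left-associative in this sketch.
    """
    pos = 0

    def parse_expr(min_prec):
        nonlocal pos
        left = tokens[pos]                 # an operand
        pos += 1
        while pos < len(tokens) and prec.get(tokens[pos], -1) >= min_prec:
            op = tokens[pos]
            pos += 1
            # +1 makes equal-precedence operators group to the left
            right = parse_expr(prec[op] + 1)
            left = (op, left, right)
        return left

    return parse_expr(0)
```

With `prec = {"+": 1, "*": 2}`, the ambiguous string 1 + 2 * 3 gets the single precedence-correct reading in which "*" binds tighter than "+".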

Proceedings ArticleDOI
01 Feb 1992
TL;DR: Attribute pattern sets provide a more expressive attribution system by using pattern matching, instead of grammar productions, to perform case analysis, and can be implemented in terms of attribute grammars in a way that maintains the dependency structure of the attribute system.
Abstract: Attribute grammars have been used for many language-oriented tasks, including the formal description of semantics and the implementation of compilation tasks from simple type checking through code generation. Despite their successful use, attribute grammars have some disadvantages, including the monolithic nature of the grammar and the fixed factoring of all attribute descriptions by a single set of grammar productions. Attribute pattern sets provide a more expressive attribution system by using pattern matching, instead of grammar productions, to perform case analysis. Attribute pattern sets can be implemented in terms of attribute grammars in a way that maintains the dependency structure of the attribute system, making it straightforward to convert many of the practical results from attribute grammar theory to similar results for attribute pattern sets.

Book ChapterDOI
01 Jan 1992
TL;DR: It is suggested that by generating designs which can be quickly assessed, and through being supplied with advice and assessment as the design proceeds, the designer can improve the design-to-product cycle.
Abstract: A method of specifying and generating a set of designed objects known to be assessable is proposed. It is suggested that by generating designs which can be quickly assessed, and through being supplied with advice and assessment as the design proceeds, the designer can improve the design-to-product cycle. The method is based upon attributed graph grammars which specify valid manipulations of feature models in feature-based design. Semantic functions compute and constrain the feature attributes, and generate a simultaneous assessment as the design progresses. Finally, an example within the domain of stress concentration prediction is presented.

Proceedings Article
30 Nov 1992
TL;DR: Here it is shown that formal languages too can be specified by Harmonic Grammars, rather than by conventional serial rewrite rule systems.
Abstract: Basic connectionist principles imply that grammars should take the form of systems of parallel soft constraints defining an optimization problem the solutions to which are the well-formed structures in the language. Such Harmonic Grammars have been successfully applied to a number of problems in the theory of natural languages. Here it is shown that formal languages too can be specified by Harmonic Grammars, rather than by conventional serial rewrite rule systems.

Proceedings ArticleDOI
23 Aug 1992
TL;DR: This work gives another kind of Proof-Nets, which is closely related to dependency structures similar to those in, for instance, Hudson (1984).
Abstract: Proof-Nets (Roorda 1990) are a good device for processing with categorial grammars, mainly because they avoid spurious ambiguities. Nevertheless, they do not provide easily readable structures and they hide the true proximity between Categorial Grammars and Dependency Grammars. We give here an other kind of Proof-Nets which is much related to Dependency Structures similar to those we meet in, for instance (Hudson 1984). These new Proof-Nets are called Connection Nets. We show that Connection Nets provide not only easily interpretable structures, but also that processing with them is more efficient.