
Showing papers on "Rule-based machine translation published in 1990"


Journal ArticleDOI
TL;DR: A statistical approach to machine translation is presented, along with its application to French-to-English translation and preliminary results.
Abstract: In this paper, we present a statistical approach to machine translation. We describe the application of our approach to translation from French to English and give preliminary results.

1,860 citations


Proceedings ArticleDOI
20 Aug 1990
TL;DR: A variant of TAGs is presented, called synchronous TAGs, which characterize correspondences between languages, to allow TAGs to be used beyond their role in syntax proper.
Abstract: The unique properties of tree-adjoining grammars (TAG) present a challenge for the application of TAGs beyond the limited confines of syntax, for instance, to the task of semantic interpretation or automatic translation of natural language. We present a variant of TAGs, called synchronous TAGs, which characterize correspondences between languages. The formalism's intended usage is to relate expressions of natural languages to their associated semantics represented in a logical form language, or to their translates in another natural language; in summary, we intend it to allow TAGs to be used beyond their role in syntax proper. We discuss the application of synchronous TAGs to concrete examples, mentioning primarily in passing some computational issues that arise in its interpretation.

342 citations


Proceedings ArticleDOI
20 Aug 1990
TL;DR: An essential problem of example-based translation is how to utilize more than one translation example for translating one source sentence, and a method to solve this problem is proposed, called matching expression, which represents the combination of fragments of translation examples.
Abstract: An essential problem of example-based translation is how to utilize more than one translation example for translating one source sentence. This paper proposes a method to solve this problem. We introduce the representation, called matching expression, which represents the combination of fragments of translation examples. The translation process consists of three steps: (1) Make the source matching expression from the source sentence. (2) Transfer the source matching expression into the target matching expression. (3) Construct the target sentence from the target matching expression. This mechanism generates some candidates of translation. To select the best translation out of them, we define the score of a translation.

249 citations
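The three-step matching-expression process described in the abstract lends itself to a small illustration. The sketch below is a hypothetical toy (the fragment inventory, the greedy covering strategy, and all data are invented for illustration), not the paper's implementation:

```python
# Toy example base: source fragments mapped to aligned target fragments.
# All fragments and words are invented for illustration.
example_base = {
    ("a", "red", "car"): ("une", "voiture", "rouge"),
    ("he", "bought"): ("il", "a", "achete"),
}

def source_matching_expression(sentence, base):
    """Step 1: greedily cover the source sentence with example fragments."""
    expr, i = [], 0
    while i < len(sentence):
        # Prefer the longest fragment that matches at position i.
        for frag in sorted(base, key=len, reverse=True):
            if tuple(sentence[i:i + len(frag)]) == frag:
                expr.append(frag)
                i += len(frag)
                break
        else:
            raise ValueError(f"no example covers {sentence[i]!r}")
    return expr

def transfer(expr, base):
    """Step 2: map each source fragment to its target fragment."""
    return [base[frag] for frag in expr]

def construct(target_expr):
    """Step 3: concatenate target fragments into the target sentence."""
    return [word for frag in target_expr for word in frag]

src = ["he", "bought", "a", "red", "car"]
expr = source_matching_expression(src, example_base)
print(construct(transfer(expr, example_base)))
# ['il', 'a', 'achete', 'une', 'voiture', 'rouge']
```

A real system would enumerate several coverings and score the resulting candidates, as the abstract notes; the greedy covering here produces only one.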


Proceedings ArticleDOI
20 Aug 1990
TL;DR: This work presents a formalism to be used for parsing where the grammar statements are closer to real text sentences and more directly address some notorious parsing problems, especially ambiguity.
Abstract: 1. Outline Grammars which are used in parsers are often directly imported from autonomous grammar theory and descriptive practice that were not exercised for the explicit purpose of parsing. Parsers have been designed for English based on e.g. Government and Binding Theory, Generalized Phrase Structure Grammar, and Lexical-Functional Grammar. We present a formalism to be used for parsing where the grammar statements are closer to real text sentences and more directly address some notorious parsing problems, especially ambiguity. The formalism is a linguistic one. It relies on transitional probabilities in an indirect way. The probabilities are not part of the description. The descriptive statements, constraints, do not have the ordinary task of defining the notion 'correct sentence in L'. They are less categorical in nature, more closely tied to morphological features, and more directly geared towards the basic task of parsing. We see this task as one of inferring surface structure from a stream of concrete tokens in a basically bottom-up mode. Constraints are formulated on the basis of extensive corpus studies. They may reflect absolute, rule-like facts, or probabilistic tendencies where a certain risk is judged to be proper to take. Constraints of the former rule-like type are of course preferable. The ensemble of constraints for language L constitutes a Constraint Grammar (CG) for L. A CG is intended to be used by the Constraint Grammar Parser CGP, implemented as a Lisp interpreter. Our input tokens to CGP are morphologically analyzed word-forms. One central idea is to maximize the use of morphological information for parsing purposes. All relevant structure is assigned directly via lexicon, morphology, and simple mappings from morphology to syntax. The task of the constraints is basically to discard as many alternatives as possible, the optimum being a fully disambiguated sentence with one syntactic reading only.
The second central idea is to treat morphological disambiguation and syntactic labelling by the same mechanism of discarding improper alternatives.

231 citations
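The core move of this formalism — starting each token with all its morphological readings and letting constraints discard improper alternatives — can be caricatured in a few lines. Everything below (tokens, tag inventory, and both constraints) is invented for illustration; in this toy, constraints consult the neighbours' original ambiguity classes rather than iterating to a fixed point:

```python
# Each token starts with the full set of its morphological readings.
tokens = [
    ("the", {"DET"}),
    ("round", {"ADJ", "NOUN", "VERB"}),
    ("table", {"NOUN", "VERB"}),
]

def discard(readings, banned):
    """Remove banned readings, but never discard the last remaining one."""
    kept = readings - banned
    return kept if kept else readings

def apply_constraints(tokens):
    out = []
    for i, (word, readings) in enumerate(tokens):
        prev = tokens[i - 1][1] if i > 0 else set()
        nxt = tokens[i + 1][1] if i + 1 < len(tokens) else set()
        # Constraint 1 (invented): no VERB reading right after a determiner.
        if "DET" in prev:
            readings = discard(readings, {"VERB"})
        # Constraint 2 (invented): an ADJ reading needs a NOUN candidate
        # somewhere to its right.
        if "NOUN" not in nxt:
            readings = discard(readings, {"ADJ"})
        out.append((word, readings))
    return out

for word, readings in apply_constraints(tokens):
    print(word, sorted(readings))
# the ['DET']
# round ['ADJ', 'NOUN']
# table ['NOUN', 'VERB']
```

Note that "round" loses its VERB reading but stays ambiguous between ADJ and NOUN: the abstract's "optimum" of one reading per token is a goal, not a guarantee.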


Journal ArticleDOI
TL;DR: An application of the syntactic method to electrocardiogram (ECG) pattern recognition and parameter measurement is presented and the performance of the resultant system has been evaluated using an annotated standard ECG library.
Abstract: An application of the syntactic method to electrocardiogram (ECG) pattern recognition and parameter measurement is presented. Solutions to the related problems of primitive pattern selection, primitive pattern extraction, linguistic representation, and pattern grammar formulation are given. Attribute grammars are used as the model for the pattern grammar because of their descriptive power, founded upon their ability to handle syntactic as well as semantic information. This approach has been implemented and the performance of the resultant system has been evaluated using an annotated standard ECG library.

224 citations


Journal ArticleDOI
TL;DR: The proposed methods are illustrated through syntactic pattern recognition experiments in which a number of strings generated by ten given (source) non-k-TSSL grammars are used to infer ten k-TSSL stochastic automata, which are further used to classify new strings generated by the same source grammars.
Abstract: The inductive inference of the class of k-testable languages in the strict sense (k-TSSL) is considered. A k-TSSL is essentially defined by a finite set of substrings of length k that are permitted to appear in the strings of the language. Given a positive sample R of strings of an unknown language, a deterministic finite-state automaton that recognizes the smallest k-TSSL containing R is obtained. The inferred automaton is shown to have a number of transitions bounded by O(m) where m is the number of substrings defining this k-TSSL, and the inference algorithm works in O(kn log m) time where n is the sum of the lengths of all the strings in R. The proposed methods are illustrated through syntactic pattern recognition experiments in which a number of strings generated by ten given (source) non-k-TSSL grammars are used to infer ten k-TSSL stochastic automata, which are further used to classify new strings generated by the same source grammars. The results of these experiments are consistent with the theory and show the ability of (stochastic) k-TSSLs to approach other classes of regular languages.

218 citations
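The inference idea in the abstract above can be approximated in a few lines. The paper builds a finite-state automaton with O(m) transitions, but the language it recognizes is equivalently characterized by the set of allowed length-k windows (with edge padding): collect the windows seen in the positive sample, then accept a new string iff all of its windows were observed. A toy set-based sketch, not the paper's algorithm:

```python
# Strict k-testable inference via allowed length-k windows.
# '#' is a padding symbol marking string edges (assumed not in the alphabet).

def windows(s, k):
    """All length-k substrings of s, padded so edges are visible."""
    padded = "#" * (k - 1) + s + "#" * (k - 1)
    return {padded[i:i + k] for i in range(len(padded) - k + 1)}

def infer(sample, k):
    """Collect every window occurring in the positive sample."""
    allowed = set()
    for s in sample:
        allowed |= windows(s, k)
    return allowed

def accepts(allowed, s, k):
    """Accept s iff all of its windows were seen in the sample."""
    return windows(s, k) <= allowed

allowed = infer(["abab", "ababab"], k=2)
print(accepts(allowed, "abababab", 2))  # True: only seen 2-windows occur
print(accepts(allowed, "abba", 2))      # False: "bb" was never observed
```

The inferred language is the smallest 2-TSSL containing the sample, which here admits (ab)^n for any n >= 1.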


01 Jan 1990
TL;DR: The view that syntactic rules are not separated from lexical items is explored, and how lexicalized grammars suggest a natural two-step parsing strategy is shown.
Abstract: Most current linguistic theories give lexical accounts of several phenomena that used to be considered purely syntactic. The information put in the lexicon is thereby increased both in amount and complexity. We explore the view that syntactic rules are not separated from lexical items. In this approach, each elementary structure is associated with a lexical item called the anchor. These structures specify extended domains of locality (as compared to context-free grammars) over which constraints can be stated. The 'grammar' consists of a lexicon where each lexical item is associated with a finite number of structures for which that item is the anchor. There are 'rules' which tell us how these structures are composed. A grammar of this form will be said to be lexicalized. The process of lexicalization of context-free grammars (CFGs) constrained by linguistic requirements forces us to use operations for combining structures that make the formalism fall in the class of mildly context sensitive languages. We show that substitution, the combining operation corresponding to CFGs, does not allow one to lexicalize CFGs but the combination of substitution and adjunction does. We show how tree-adjoining grammar (TAG) is derived from the lexicalization process of CFGs. Then we show that TAGs are closed under lexicalization and we illustrate the main structures found in a lexicalized TAG for English. The properties of TAGs permit us to encapsulate diverse syntactic phenomena in a very natural way. TAG's extended domain of locality and its factoring of recursion from local dependencies enable us to localize many syntactic dependencies (such as filler-gap) as well as semantic dependencies (such as predicate-arguments). We investigate the processing of lexicalized TAGs. We first present two general practical parsers that follow Earley-style parsing. 
They are practical parsers for TAGs because, as for CFGs, the average behavior of Earley-type parsers is superior to their worst-case complexity. They are both left-to-right bottom-up parsers that use top-down predictions but they differ in the way the top-down prediction is used. Then we explain the building of a set of deterministic bottom-up left-to-right parsers which analyze a subset of tree-adjoining languages. The LR parsing strategy for CFGs is extended to TAG by using a machine, called Bottom-up Embedded Push Down Automaton (BEPDA), that recognizes in a bottom-up fashion the set of tree-adjoining languages (and exactly this set). Finally, we show how lexicalized grammars suggest a natural two-step parsing strategy. We consider lexicalized TAGs as an instance of lexicalized grammar and we examine the effect of the two-step parsing strategy on the main types of parsing algorithms.

207 citations


Proceedings ArticleDOI
03 Apr 1990
TL;DR: An approach to implementing spoken language systems that takes full advantage of syntactic and semantic constraints provided by a natural language processing component in the speech understanding task and provides a tractable search space is discussed.
Abstract: An approach to implementing spoken language systems is discussed. This approach takes full advantage of syntactic and semantic constraints provided by a natural language processing component in the speech understanding task and provides a tractable search space. The results indicate that the approach is a promising one for large-vocabulary spoken language systems. Parse times within a factor of 20 of real time are achieved for high-perplexity syntactic grammars with resulting hidden Markov model recognition computational requirements (2500 active words/frame) that are well within the capability of high-speed multiprocessor computers or special-purpose speech recognition hardware.

195 citations


Book Chapter
01 Jan 1990
TL;DR: This paper has described a formalism, the linear context-free rewriting system (LCFR), as a first attempt to capture the closeness of the derivation structures of these formalisms, and shown that LCFRs are equivalent to multicomponent tree adjoining grammars (MCTAGs), and also briefly discussed some variants of TAG.
Abstract: Investigations of classes of grammars that are nontransformational and at the same time highly constrained are of interest both linguistically and mathematically. Context-free grammars (CFG) obviously form such a class. CFGs are not adequate (both weakly and strongly) to characterize some aspects of language structure. Thus how much more power beyond CFG is necessary to describe these phenomena is an important question. Based on certain properties of tree adjoining grammars (TAG) an approximate characterization of a class of grammars, mildly context-sensitive grammars (MCSG), has been proposed earlier. In this paper, we have described the relationship between several different grammar formalisms, all of which belong to MCSG. In particular, we have shown that head grammars (HG), combinatory categorial grammars (CCG), linear indexed grammars (LIG), and TAG are all weakly equivalent. These formalisms are all distinct from each other at least in the following aspects: (a) the formal objects and operations in each formalism, (b) the domain of locality over which dependencies are specified, (c) the degree to which recursion and the domain of dependencies are factored, and (d) the linguistic insights that are captured in the formal objects and operations in each formalism. A deeper understanding of this convergence is obtained by comparing these formalisms at the level of the derivation structures in each formalism. We have described a formalism, the linear context-free rewriting system (LCFR), as a first attempt to capture the closeness of the derivation structures of these formalisms. LCFRs thus make the notion of MCSGs more precise. We have shown that LCFRs are equivalent to multicomponent tree adjoining grammars (MCTAGs), and also briefly discussed some variants of TAGs: lexicalized TAGs, feature-structure-based TAGs, and TAGs in which local domination and linear precedence are factored, TAG(LD/LP).
"The Convergence of Mildly Context-Sensitive Grammar Formalisms," Aravind K. Joshi, K. Vijay Shanker, and David Weir. University of Pennsylvania, Department of Computer and Information Science, Technical Report MS-CIS-90-01 (LINC LAB 161). Available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/539

175 citations


Journal ArticleDOI
TL;DR: An algorithm for generating strings from logical form encodings that improves upon previous algorithms in that it places fewer restrictions on the class of grammars to which it is applicable, yet unlike top-down methods, it also permits left-recursion.
Abstract: We present an algorithm for generating strings from logical form encodings that improves upon previous algorithms in that it places fewer restrictions on the class of grammars to which it is applicable. In particular, unlike a previous bottom-up generator, it allows use of semantically nonmonotonic grammars, yet unlike top-down methods, it also permits left-recursion. The enabling design feature of the algorithm is its implicit traversal of the analysis tree for the string being generated in a semantic-head-driven fashion.

164 citations


Dissertation
01 Jan 1990
TL;DR: This thesis presents accounts of a range of linguistic phenomena in an extended categorial framework, and develops proposals for processing grammars set within this framework, which is a version of the Lambek calculus extended by the inclusion of additional type-forming operators whose logical behaviour allows for the characterization of some aspect of linguistic phenomena.
Abstract: This thesis presents accounts of a range of linguistic phenomena in an extended categorial framework, and develops proposals for processing grammars set within this framework. Linguistic phenomena whose treatment we address include word order, grammatical relations and obliqueness, extraction and island constraints, and binding. The work is set within a flexible categorial framework which is a version of the Lambek calculus (Lambek, 1958) extended by the inclusion of additional type-forming operators whose logical behaviour allows for the characterization of some aspect of linguistic phenomena. We begin with the treatment of extraction phenomena and island constraints. An account is developed in which there are many interrelated notions of boundary, and where the sensitivity of any syntactic process to a particular class of boundaries can be addressed within the grammar. We next present a new categorial treatment of word order which factors apart the specification of the order of a head's complements from the position of the head relative to them. This move has the advantage of allowing the incorporation of a treatment of grammatical relations and obliqueness, as well as providing for the treatment of Verb Second phenomena in Germanic languages. A categorial treatment of binding is then presented which integrates the preceding proposals of the thesis, handling command constraints on binding in terms of relative obliqueness and locality constraints using the account of linguistic boundaries. Attention is given to the treatment of long-distance reflexivization in Icelandic, a phenomenon of interest because of its unusual locality behaviour. Finally, a method is developed for parsing Lambek calculus grammars which avoids the efficiency problems presented by the occurrence of multiple equivalent proofs. The method involves developing a notion of normal form proof and adapting the parsing method to ensure that only normal form proofs are constructed.

Proceedings ArticleDOI
20 Aug 1990
TL;DR: This paper defines lexical transfer rules that avoid the defects of a mere word-to-word approach but still benefit from the simplicity and elegance of a lexical approach.
Abstract: Lexicalized Tree Adjoining Grammar (LTAG) is an attractive formalism for linguistic description mainly because of its extended domain of locality and its factoring recursion out from the domain of local dependencies (Joshi, 1985, Kroch and Joshi, 1985, Abeille, 1988). LTAG's extended domain of locality enables one to localize syntactic dependencies (such as filler-gap), as well as semantic dependencies (such as predicate-arguments). The aim of this paper is to show that these properties combined with the lexicalized property of LTAG are especially attractive for machine translation. The transfer between two languages, such as French and English, can be done by putting directly into correspondence large elementary units without going through some interlingual representation and without major changes to the source and target grammars. The underlying formalism for the transfer is "synchronous Tree Adjoining Grammars" (Shieber and Schabes [1990]). Transfer rules are stated as correspondences between nodes of trees of large domain of locality which are associated with words. We can thus define lexical transfer rules that avoid the defects of a mere word-to-word approach but still benefit from the simplicity and elegance of a lexical approach. We rely on the French and English LTAG grammars (Abeille [1988], Abeille [1990 (b)], Abeille et al. [1990], Abeille and Schabes [1989, 1990]) that have been designed over the past two years jointly at University of Pennsylvania and University of Paris 7-Jussieu.

Proceedings ArticleDOI
20 Aug 1990
TL;DR: TFS, a computer formalism in the class of logic formalisms which integrates a powerful type system, is introduced, and it is shown how to make use of the typing system to enforce general constraints and modularize linguistic descriptions.
Abstract: We introduce TFS, a computer formalism in the class of logic formalisms which integrates a powerful type system. Its basic data structures are typed feature structures. The type system encourages an object-oriented approach to linguistic description by providing a multiple inheritance mechanism and an inference mechanism which allows the specification of relations between levels of linguistic description defined as classes of objects. We illustrate this approach starting from a very simple DCG, and show how to make use of the typing system to enforce general constraints and modularize linguistic descriptions, and how further abstraction leads to a HPSG-like grammar.

Proceedings ArticleDOI
20 Aug 1990
TL;DR: A compendium of many of the heuristics devised for choosing the preferred parses in the DIALOGIC system is presented and two principles that seem to underlie them are proposed.
Abstract: The DIALOGIC system for syntactic analysis and semantic translation has been under development for over ten years, and during that time it has been used in a number of domains in both database interface and message-processing applications. In addition, it has been tested on a number of sentences of linguistic interest. Built into the system are facilities for ranking parses according to syntactic and selectional considerations, and over the years, as various kinds of ambiguity have become apparent, heuristics have been devised for choosing the preferred parses. Our aim in this paper is first to present a compendium of many of these heuristics and second to propose two principles that seem to underlie the heuristics. The first will be useful to researchers engaged in building grammars of similarly broad coverage. The second is of psychological interest and may be a guide for estimating parse preferences for newly discovered ambiguities for which we lack the experience to decide among them on a more empirical basis.

Journal ArticleDOI
TL;DR: The system presented here automatically generates object-oriented, syntax-directed editors for visual languages, which are described by a family of editing operations.
Abstract: Since inexpensive computers possessing sophisticated graphics were introduced in the late 1970s, program development research has focused on syntax-directed editors that are based on the grammars of their underlying languages. The system presented here automatically generates object-oriented, syntax-directed editors for visual languages, which are described by a family of editing operations.

Journal ArticleDOI
TL;DR: It is shown that this distinction between syntax and semantics can be made clearer using grammars which adapt themselves to the current program contexts, and the advantages and disadvantages of describing programming language syntax this way are described.
Abstract: This paper is a comment on two recent contributions to Sigplan Notices. In his paper, "The static semantics file", no. 25/4, Brian Meek discusses the relevance of the notion of "static semantics". The relation between a variable's declaration and the restrictions on its use, for example, is usually classified as static semantics. Meek finds the designation rather misleading since it is applied to concepts concerned with context-dependent syntax. The term "semantics" should properly only be used for aspects that have to do with real meaning, e.g., the association between program statements and their intended computation. Here I will show that this distinction between syntax and semantics can be made clearer using grammars which adapt themselves to the current program contexts. For example, declarations of new items can be described by adding new rules to the grammar and thus, within a given scope of a program, the set of valid phrases can be derived freely by means of the current set of grammar rules. This way, we get rid of some of those often quite complicated context constraints that are called static semantics. In no. 25/5, Boris Buhrsteyn presents an article, "On the modification of the formal grammar at parse time". The author suggests an approach to language recognition in which declarations of, say, variables result in an adaptation of the grammar as outlined above and in turn in an adjustment of the parsing tables. The idea can be traced back to the early sixties, and over the years several proposals for adaptable grammar formalisms have been suggested. In the following, I complement Buhrsteyn's article by giving an overview of the area and describe the advantages and disadvantages of describing programming language syntax this way.
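The grammar-adaptation idea can be caricatured in a few lines: a declaration extends the current rule set, and later phrases are checked against that set rather than by a separate static-semantics pass. This is a hypothetical miniature, not taken from either cited article:

```python
# Adaptable-grammar caricature: the nonterminal VAR starts with no
# productions; each declaration adds a production VAR -> name.
rules = {"VAR": set()}

def declare(name):
    """A declaration adds a new rule to the grammar."""
    rules["VAR"].add(name)

def valid_use(name):
    """A use is valid iff it is derivable from the current rule set."""
    return name in rules["VAR"]

declare("x")
print(valid_use("x"))  # True: VAR -> x was added by the declaration
print(valid_use("y"))  # False: no rule VAR -> y is in scope
```

Scoping would be modeled by pushing and popping rule sets on block entry and exit; the point of the sketch is only that "declared before use" becomes a syntactic fact about the current grammar.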

Patent
23 May 1990
TL;DR: A machine translation system in which document information is used to resolve uncertainty whenever a plurality of candidates for the translation arises in translating according to a translation dictionary containing rules for translation.
Abstract: A machine translation system capable of obtaining a consistent translation for an entire document by taking context into account in translating each word or sentence. In this system, document information necessary to remove uncertainty in the translation due to the presence of a plurality of candidates for the translation is utilized whenever such uncertainty arises in attempting to translate according to a translation dictionary containing rules for translation.

Proceedings ArticleDOI
06 Jun 1990
TL;DR: The view of grammar developed here is one in which unification is used for semantic interpretation, while purely formal agreement involves only a check for non-distinctness---i.e. variable-matching without variable substitution.
Abstract: Current complex-feature based grammars use a single procedure---unification---for a multitude of purposes, among them, enforcing formal agreement between purely syntactic features. This paper presents evidence from several natural languages that unification---variable-matching combined with variable substitution---is the wrong mechanism for effecting agreement. The view of grammar developed here is one in which unification is used for semantic interpretation, while purely formal agreement involves only a check for non-distinctness---i.e. variable-matching without variable substitution.
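The contrast this paper draws can be illustrated with flat feature structures, where a missing feature plays the role of an unbound variable. The dictionaries, feature names, and values below are invented for illustration, and the flat representation is a deliberate simplification of real feature structures:

```python
def nondistinct(f, g):
    """Agreement as variable-matching only: f and g must not conflict
    on any feature they both specify; nothing gets instantiated."""
    return all(f[k] == g[k] for k in f.keys() & g.keys())

def unify(f, g):
    """Full unification: matching plus substitution. On success the
    result carries every feature of both structures."""
    if not nondistinct(f, g):
        return None
    return {**f, **g}  # substitution: unspecified features get filled in

noun = {"gender": "f"}   # a noun marked only for gender
verb = {"num": "sg"}     # a verb marked only for number

print(nondistinct(noun, verb))  # True: no shared feature conflicts
print(unify(noun, verb))        # {'gender': 'f', 'num': 'sg'}

bad = {"num": "pl"}
print(nondistinct(bad, verb))   # False: num clashes
print(unify(bad, verb))         # None
```

The difference the paper cares about is visible in the successful case: non-distinctness merely licenses the combination, whereas unification additionally propagates 'sg' onto the noun and 'f' onto the verb.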

01 Jan 1990
TL;DR: The first section of this chapter contains definitions of context-free or algebraic languages by means of context-free grammars and of systems of algebraic equations as discussed by the authors, and the second section gives a description of the various families of Dyck languages, as well as a proof of the Chomsky-Schutzenberger Theorem.
Abstract: The first section of this chapter contains the definitions of context-free or algebraic languages by means of context-free grammars and of systems of algebraic equations. In the second section, we recall without proof several constructions and closure properties of context-free languages. This section contains also the iteration lemmas for context-free languages. The third section gives a description of the various families of Dyck languages. They have two definitions, as classes of certain congruences, and as languages generated by some context-free grammars. The section ends with a proof of the Chomsky-Schutzenberger Theorem. Two other languages, the Lukasiewicz language and the language of completely parenthesized arithmetic expressions, are studied in the last section.
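As a concrete anchor for the definitions mentioned above, the Dyck language over a single bracket pair (the well-nested strings) has a standard linear-time membership test via a nesting counter. This sketch is illustrative and not taken from the chapter:

```python
def is_dyck(s, opener="(", closer=")"):
    """Membership in the Dyck language over one bracket pair."""
    depth = 0
    for ch in s:
        if ch == opener:
            depth += 1
        elif ch == closer:
            depth -= 1
            if depth < 0:      # a closer with no matching opener
                return False
        else:
            return False       # the alphabet is just the two brackets
    return depth == 0          # every opener was eventually closed

print(is_dyck("(()())"))  # True
print(is_dyck("())("))    # False
```

The counter is exactly the stack height of a one-symbol pushdown automaton, which is one way to see that the Dyck language is context-free but not regular.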

Proceedings ArticleDOI
20 Aug 1990
TL;DR: A working system for interactive Japanese syntactic analysis that a human user can intervene during parsing to help the system to produce a correct parse tree.
Abstract: In this paper, we describe a working system for interactive Japanese syntactic analysis. A human user can intervene during parsing to help the system to produce a correct parse tree. Human interactions are limited to the very simple task of indicating the modifiee (governor) of a phrase, and thus a non-expert native speaker can use the system. The user is free to give any information in any order, or even to provide no information. The system is being used as the source language analyzer of a Japanese-to-English machine translation system currently under development.

Journal ArticleDOI
TL;DR: It is argued that the complexity of semantic grammars as content analysis schemes, coupled with the on-line capacity of computers to perform data quality checks, produces richer and more reliable data than do traditional content analysis methodologies.
Abstract: This article presents the problems involved in implementing a semantic grammar on a computer. Semantic grammars provide powerful content analysis schemes for collecting data from textual sources (e...

Proceedings ArticleDOI
06 Jun 1990
TL;DR: The mechanisms used by the UNITRAN machine translation system for mapping an underlying lexical-conceptual structure to a syntactic structure (and vice versa) are described and it is shown how these mechanisms coupled with a set of general linking routines solve the problem of thematic divergence in machine translation.
Abstract: Though most translation systems have some mechanism for translating certain types of divergent predicate-argument structures, they do not provide a general procedure that takes advantage of the relationship between lexical-semantic structure and syntactic structure. A divergent predicate-argument structure is one in which the predicate (e.g., the main verb) or its arguments (e.g., the subject and object) do not have the same syntactic ordering properties for both the source and target language. To account for such ordering differences, a machine translator must consider language-specific syntactic idiosyncrasies that distinguish a target language from a source language, while making use of lexical-semantic uniformities that tie the two languages together. This paper describes the mechanisms used by the UNITRAN machine translation system for mapping an underlying lexical-conceptual structure to a syntactic structure (and vice versa), and it shows how these mechanisms coupled with a set of general linking routines solve the problem of thematic divergence in machine translation.

Journal ArticleDOI
TL;DR: The architecture of the system and its use in the application environment of visual text editing (inspired by the Heidelberg icon set) enhanced with file management features are reported.
Abstract: A system to generate and interpret customized visual languages in given application areas is presented. The generation is highly automated. The user presents a set of sample visual sentences to the generator. The generator uses inference grammar techniques to produce a grammar that generalizes the initial set of sample sentences, and exploits general semantic information about the application area to determine the meaning of the visual sentences in the inferred language. The interpreter is modeled on an attribute grammar. A knowledge base, constructed during the generation of the system, is then consulted to construct the meaning of the visual sentence. The architecture of the system and its use in the application environment of visual text editing (inspired by the Heidelberg icon set) enhanced with file management features are reported.

Book ChapterDOI
05 Mar 1990
TL;DR: The main result is a characterization of the inferred grammars as “samples-composing” meaning that each sample can be derived and each rule contributes to the generation of samples in a certain way.
Abstract: In this paper, a grammatical-inference algorithm is developed with finite sets of sample graphs as inputs and hyperedge-replacement grammars as outputs. In particular, the languages generated by inferred grammars contain the input samples. Essentially, the inference procedure iterates the application of an operation which decomposes hyperedge-replacement rules according to edge-disjoint coverings of the right-hand sides of the rules. The main result is a characterization of the inferred grammars as “samples-composing” meaning that each sample can be derived and each rule contributes to the generation of samples in a certain way.

Proceedings ArticleDOI
20 Aug 1990
TL;DR: The approach can be characterised as an 'intelligent secretary with knowledge of the foreign language', which helps monolingual users to formulate the desired target-language text in the context of a (key-board) dialogue translation systems.
Abstract: This paper concerns an approach to Machine Translation which differs from the typical 'standard' approaches crucially in that it does not rely on the prior existence of a source text as a basis of the translation. Our approach can be characterised as an 'intelligent secretary with knowledge of the foreign language', which helps monolingual users to formulate the desired target-language text in the context of a (keyboard) dialogue translation system.

Proceedings ArticleDOI
06 Jun 1990
TL;DR: This paper informally describes the BEPDA, a machine that recognizes in a bottom-up fashion the set of Tree Adjoining Languages, and explains the LR parsing algorithm, and shows how to construct an LR(0) parsing table (no lookahead).
Abstract: We define a set of deterministic bottom-up left-to-right parsers which analyze a subset of Tree Adjoining Languages. The LR parsing strategy for Context Free Grammars is extended to Tree Adjoining Grammars (TAGs). We use a machine, called Bottom-up Embedded Pushdown Automaton (BEPDA), that recognizes in a bottom-up fashion the set of Tree Adjoining Languages (and exactly this set). Each parser consists of a finite state control that drives the moves of a Bottom-up Embedded Pushdown Automaton. The parsers handle deterministically some context-sensitive Tree Adjoining Languages. In this paper, we informally describe the BEPDA; then, given a parsing table, we explain the LR parsing algorithm. We then show how to construct an LR(0) parsing table (no lookahead). An example of a context-sensitive language recognized deterministically is given. Then, we explain informally the construction of SLR(1) parsing tables for BEPDA. We conclude with a discussion of our parsing method and current work.
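The paper's parsers drive a BEPDA from precomputed LR(0)/SLR(1) tables. As a loose illustration of bottom-up shift-reduce control only, for a plain context-free grammar with no table construction and no BEPDA, a naive recognizer might look like this:

```python
# Naive bottom-up shift-reduce recognizer for the toy CFG
#   S -> ( S ) | x
# Illustrative only: it reduces greedily whenever a rule's
# right-hand side appears on top of the stack, whereas the
# paper's parsers are driven by precomputed LR(0) tables.
RULES = [("S", ("(", "S", ")")), ("S", ("x",))]

def recognize(tokens):
    stack = []
    tokens = list(tokens)
    while True:
        # reduce step: does some right-hand side sit on top of the stack?
        reduced = False
        for lhs, rhs in RULES:
            n = len(rhs)
            if tuple(stack[-n:]) == rhs:
                del stack[-n:]
                stack.append(lhs)
                reduced = True
                break
        if reduced:
            continue
        if tokens:
            stack.append(tokens.pop(0))  # shift step
        else:
            return stack == ["S"]

print(recognize(list("((x))")), recognize(list("((x)")))  # True False
```

A table-driven LR parser replaces the greedy reduce search with state lookups, which is what makes deterministic parsing possible for the larger language classes discussed in the paper.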

Proceedings ArticleDOI
20 Aug 1990
TL;DR: A semi-automatic semantic disambiguator integrated in a knowledge-based machine translation system used to bridge the analysis and generation stages in machine translation.
Abstract: We describe a semi-automatic semantic disambiguator integrated in a knowledge-based machine translation system. It is used to bridge the analysis and generation stages in machine translation. The user interface of the disambiguator is built on mouse-based multiple-selection menus.

Journal ArticleDOI
TL;DR: This work focuses on environments for visual languages having a two-dimensional syntax based on attribute grammars and graphical constraints and introduces edit-semantic attributes, a new class of attributes which control the user interaction and graphic presentation.
Abstract: We review some results in the area of using meta techniques to generate language-oriented programming environments. We focus on environments for visual languages having a two-dimensional syntax based on attribute grammars and graphical constraints. We introduce edit-semantic attributes, a new class of attributes which control the user interaction and graphic presentation. We present LOGGIE, a prototype tool implementing some of the meta techniques discussed. The tool generates interactive language-oriented graphical editors. A number of applications have been generated and are presented, e.g. graphical environments for CCS, G-LOTOS and SDL.

Patent
05 Feb 1990
TL;DR: A translation apparatus is capable of translating a sentence from an original language into a sentence of a target language as mentioned in this paper, where the sentence from the original language is analyzed by a computer so that the sentence of the target language may be produced.
Abstract: A translation apparatus is capable of translating a sentence from an original language into a sentence of a target language. In the translation apparatus, the sentence from the original language is analyzed by a computer so that the sentence of the target language may be produced. Prior to the translation, if the sentence from the original language contains a word or words which have not been registered in the dictionaries used for translation, such words can be collectively indicated on a screen or output by a printer. During the translation, if the sentence of the target language contains an inappropriate word or words, each of these words can be replaced by another desired word.
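The pre-translation check described in the abstract, collecting every source word absent from the translation dictionary for batch display, can be sketched as follows; the dictionary contents and function name are invented for illustration:

```python
# Collect all source-language words missing from the translation
# dictionary so they can be shown to the user in one batch, as in
# the patent's pre-translation step. Toy dictionary for illustration.
DICTIONARY = {"the": "le", "cat": "chat", "sleeps": "dort"}

def unregistered_words(sentence):
    seen, missing = set(), []
    for word in sentence.lower().split():
        if word not in DICTIONARY and word not in seen:
            seen.add(word)       # report each unknown word once
            missing.append(word)
    return missing

print(unregistered_words("The cat sleeps on the mat"))
# ['on', 'mat'] with the toy dictionary above
```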

Proceedings ArticleDOI
20 Aug 1990
TL;DR: Lexical Grammars are a class of unification grammars which share a fixed rule component, for which there exists a simple left-recursion elimination transformation.
Abstract: Lexical Grammars are a class of unification grammars which share a fixed rule component, for which there exists a simple left-recursion elimination transformation. The parsing and generation programs are seen as two dual non-left-recursive versions of the original grammar, and are implemented through a standard top-down Prolog interpreter. Formal criteria for termination are given as conditions on lexical entries: during parsing as well as during generation the processing of a lexical entry consumes some amount of a guide; the guide used for parsing is a list of words remaining to be analyzed, while the guide for generation is a list of the semantics of constituents waiting to be generated.
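The guide discipline can be illustrated outside Prolog: if every lexical step consumes at least one element of the guide, recursion is bounded by the guide's length, which is the termination argument sketched in the abstract. The toy lexicon and grammar below are invented and are not the paper's Lexical Grammar formalism:

```python
# Guide-consuming parser sketch: each lexical step removes one word
# from the parsing guide (the list of remaining words), so the
# process terminates. Toy lexicon and grammar, for illustration only.
LEXICON = {"john": "NP", "mary": "NP", "saw": "V"}

def parse_np(guide):
    # lexical step: consumes exactly one word of the guide
    if guide and LEXICON.get(guide[0]) == "NP":
        return ("NP", guide[0]), guide[1:]
    return None, guide

def parse_s(guide):
    # S -> NP V NP, each constituent shrinking the guide
    subj, rest = parse_np(guide)
    if subj and rest and LEXICON.get(rest[0]) == "V":
        obj, rest2 = parse_np(rest[1:])
        if obj and not rest2:
            return ("S", subj, ("V", rest[0]), obj)
    return None

print(parse_s("john saw mary".split()))
# ('S', ('NP', 'john'), ('V', 'saw'), ('NP', 'mary'))
```

In the paper the same idea runs in both directions: parsing consumes a guide of remaining words, while generation consumes a guide of constituent semantics still to be realized.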