
Showing papers on "Rule-based machine translation published in 1980"


Journal ArticleDOI
01 Jan 1980
TL;DR: A pattern analysis system using attributed grammars for pattern classification and description uses a combination of syntactic and statistical pattern recognition techniques, as is demonstrated by illustrative examples and experimental results.
Abstract: Attributed grammars are defined from the pattern recognition point of view and shown to be useful for describing syntactic structures as well as semantic attributes in primitives, subpatterns, and patterns. A pattern analysis system using attributed grammars is proposed for pattern classification and description. This system extracts primitives and their attributes after preprocessing, performs syntax analysis of the resulting pattern representations, computes and extracts subpattern attributes for syntactically accepted patterns, and finally makes decisions according to the Bayes decision rule. Such a system uses a combination of syntactic and statistical pattern recognition techniques, as is demonstrated by illustrative examples and experimental results.

299 citations
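The final decision stage described in the abstract above can be sketched in a few lines: syntactically accepted patterns are classified from their attribute vectors with the Bayes decision rule. The Gaussian class-conditional models, attribute meanings, and class names below are illustrative assumptions, not taken from the paper.

```python
import math

def gaussian_pdf(x, mean, var):
    """Univariate Gaussian density."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def bayes_classify(attributes, classes):
    """Pick the class maximizing prior * likelihood over independent attributes."""
    best_class, best_score = None, -1.0
    for name, model in classes.items():
        score = model["prior"]
        for x, (mean, var) in zip(attributes, model["attrs"]):
            score *= gaussian_pdf(x, mean, var)
        if score > best_score:
            best_class, best_score = name, score
    return best_class

# Two hypothetical syntactically accepted shape classes, each described by
# the (mean, variance) of two scalar attributes.
classes = {
    "square": {"prior": 0.5, "attrs": [(1.0, 0.04), (0.0, 0.04)]},
    "circle": {"prior": 0.5, "attrs": [(1.0, 0.04), (1.0, 0.04)]},
}
print(bayes_classify([0.98, 0.9], classes))  # attribute vector lies nearest "circle"
```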


Journal ArticleDOI
TL;DR: A relationship between parallel rewriting systems and two-way machines is investigated; restrictions on the "copying power" of these devices endow them with rich structure and give insight into the issues of determinism, parallelism, and copying.

181 citations


Journal Article
TL;DR: This paper presents an approach to natural language grammars and parsing in which slots and rules for filling them play a major role, handling phenomena such as WH-movement, verb dependencies, and agreement.
Abstract: This paper presents an approach to natural language grammars and parsing in which slots and rules for filling them play a major role. The system described provides a natural way of handling a wide variety of grammatical phenomena, such as WH-movement, verb dependencies, and agreement.

83 citations


Journal ArticleDOI
01 Nov 1980
TL;DR: A characterization of the solutions to the regular grammatical inference problem and an introduction to a methodology for inferring regular grammars, based on clustering the states of a "maximal" solution, are presented.
Abstract: A characterization of the solutions to the regular grammatical inference problem and an introduction to a methodology for inferring regular grammars, based on the clustering of the states of a "maximal" solution, are presented. Examples are given with a particular design of this algorithm.

45 citations
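One way to read the "maximal" solution mentioned above is as the prefix-tree acceptor of the positive sample, whose states are then clustered into the states of a smaller automaton. The sketch below builds such an acceptor and clusters states by their suffix languages; this simplified merging criterion and all names are assumptions for illustration, not the paper's actual algorithm.

```python
def build_pta(samples):
    """Build a prefix-tree acceptor: transitions {(state, symbol): state}
    plus the set of accepting states. State 0 is the start state."""
    trans, accepting, next_state = {}, set(), 1
    for word in samples:
        state = 0
        for sym in word:
            if (state, sym) not in trans:
                trans[(state, sym)] = next_state
                next_state += 1
            state = trans[(state, sym)]
        accepting.add(state)
    return trans, accepting

def suffix_language(state, trans, accepting):
    """Set of strings accepted from `state` (the PTA is acyclic)."""
    result = {""} if state in accepting else set()
    for (s, sym), t in trans.items():
        if s == state:
            result |= {sym + rest for rest in suffix_language(t, trans, accepting)}
    return frozenset(result)

trans, accepting = build_pta(["ab", "aab", "aaab"])
states = {0} | set(trans.values())

# Cluster states by suffix language: states with identical behaviour are
# candidates for merging into a single state of the inferred automaton.
clusters = {}
for s in states:
    clusters.setdefault(suffix_language(s, trans, accepting), []).append(s)
print(len(states), "PTA states fall into", len(clusters), "clusters")
```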


Journal ArticleDOI
K.C You1, King-Sun Fu1
TL;DR: The error-correcting technique is combined with the primitive-extraction-embedding parser to recognize partially distorted shapes and the recognition of several distorted airplane shapes is demonstrated.

38 citations


Journal Article
TL;DR: A theory of understanding (parsing) texts as a process of collecting simple textual propositions into thematically and causally related units is described, based on the concept of macrostructures as proposed by Kintsch and van Dijk.
Abstract: A theory of understanding (parsing) texts as a process of collecting simple textual propositions into thematically and causally related units is described, based on the concept of macrostructures as proposed by Kintsch and van Dijk. These macrostructures are organized into tree hierarchies, and their interrelationships are described in rule-based story grammars related to the Kowalski logic based on Horn clauses. A procedure for constructing and synthesizing such trees from semantic network forms is detailed. The implementation of this procedure is capable of understanding and summarizing any story it can generate using the same basic control structure.

31 citations


Journal ArticleDOI
TL;DR: This paper systematically studies three basic classes of grammars incorporating parallel rewriting: Indian parallel grammars, Russian parallel grammars and L systems, and introduces new classes of rewriting systems (ETOL[k] systems, ETOLIP systems and ETOLRP systems).
Abstract: In this paper we systematically study three basic classes of grammars incorporating parallel rewriting: Indian parallel grammars, Russian parallel grammars and L systems. In particular, by extracting basic characteristics of these systems and combining them, we introduce new classes of rewriting systems (ETOL[k] systems, ETOLIP systems and ETOLRP systems). Among others, some results on the combinatorial structure of Indian parallel languages and on the combinatorial structures of the new classes of languages are proved. As far as ETOL systems are concerned, we prove that every ETOL language can be generated with a fixed (equal to 8) bounded degree of parallelism.

29 citations
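For readers unfamiliar with Indian parallel rewriting: in one derivation step, every occurrence of the chosen nonterminal is rewritten simultaneously by the same production. A minimal sketch (the grammar S -> SS | a is a standard textbook example, not taken from this paper):

```python
def indian_parallel_step(word, nonterminal, production):
    """One Indian-parallel step: rewrite ALL occurrences at once."""
    return word.replace(nonterminal, production)

# Grammar S -> SS | a.
w = "S"
w = indian_parallel_step(w, "S", "SS")  # S    => SS
w = indian_parallel_step(w, "S", "SS")  # SS   => SSSS  (both copies rewritten)
w = indian_parallel_step(w, "S", "a")   # SSSS => aaaa
print(w)  # aaaa
```

Because all copies of S must be rewritten together, every derivation yields a string of length 2^n, a family that is Indian parallel but not context-free.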


Book ChapterDOI
14 Jan 1980
TL;DR: Rule splitting is illustrated by examples taken from real programming languages; two varieties of rule splitting are identified and formalized, and the problems of exploiting rule splitting in a compiler writing system based on attribute grammars are explored.
Abstract: Rule splitting is a phenomenon, most clearly exhibited by attribute grammars and affix grammars, in which the syntactic structure of a phrase is constrained by its attributes. In this paper, rule splitting is illustrated by examples taken from real programming languages, and two varieties of rule splitting are identified and formalized. Implementations of rule splitting (attribute-directed parsing) are demonstrated for top-down and bottom-up parsers, both one-pass and multi-pass. Finally, the problems of exploiting rule splitting in a compiler writing system based on attribute grammars are explored.

23 citations
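A toy illustration of the phenomenon: in many real languages the parse of `name(argument)` is constrained by an attribute of `name` (e.g. whether it was declared as a function or an array), which is the kind of rule splitting discussed above. The symbol table and all names below are invented for illustration.

```python
# Attribute environment: the declared kind of each identifier.
SYMBOL_KINDS = {"sqrt": "function", "table": "array"}

def parse_application(name, argument):
    """Split the rule for `name(argument)` on the `kind` attribute of `name`."""
    kind = SYMBOL_KINDS.get(name)
    if kind == "function":
        return ("call", name, argument)   # rule variant 1: function application
    if kind == "array":
        return ("index", name, argument)  # rule variant 2: array subscript
    raise SyntaxError(f"unknown identifier {name!r}")

print(parse_application("sqrt", "x"))   # ('call', 'sqrt', 'x')
print(parse_application("table", "i"))  # ('index', 'table', 'i')
```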


Proceedings ArticleDOI
Makoto Nagao1, Jun-ichi Tsujii1, K. Mitamura1, H. Hirakawa1, M. Kume1 
30 Sep 1980
TL;DR: A machine translation system from Japanese into English is described; it aims at the translation of computer manuals and basically follows the transfer approach.
Abstract: A machine translation system from Japanese into English is described. The system aims at translation of computer manuals and basically follows the transfer approach. The design principles of the system are discussed in detail, together with the overall construction of the system. In particular, the effectiveness of lexicon-based procedures, i.e. lexicon-based analysis, transfer, and synthesis, is emphasized. Most of the linguistic phenomena are treated by using lexical descriptions and lexical rules, instead of general syntactic rules. Because Japanese and English belong to quite different language families, many more structural transfers are necessary than in MT systems among European languages. Special care has been taken in designing the transfer component. Some translation results are also given to illustrate the current abilities of the system.

13 citations


Journal ArticleDOI
TL;DR: Progress in linguistics depends on extracting as much as possible from grammars of particular languages and formulating general principles from which the facts of particular languages will follow as automatic consequences, so that the field will advance.
Abstract: The main goal of linguistic research is to develop a theory of grammar, i.e. a set of universal principles to characterize human language. Since languages vary superficially, this goal is achieved only when it is shown that superficial differences among languages can be accounted for by the theory of grammar. Clearly, then, the more that superficial differences among languages can be accounted for by universal principles, the more the field will advance. This conclusion is best stated by Perlmutter (1971): … progress in linguistics depends on extracting as much as possible from grammars of particular languages and formulating general principles from which the facts of particular languages will follow as automatic consequences….

10 citations


Proceedings ArticleDOI
Hiroshi Uchida1, Kenji Sugiyama1
30 Sep 1980
TL;DR: In this model, a node represents a concept and an arc represents a relation between concepts; together these constitute a network representing conceptual structure.
Abstract: …ing general factors in events and objects (abstraction), but excluding peculiarities of each of them (subtraction). In our model, a node represents a concept, and an arc represents a relation between concepts. This constitutes a network representing conceptual structure, and we also call such a network…

Proceedings ArticleDOI
19 Jun 1980
TL;DR: If syntactic acquisition can proceed using just positive examples, then it would seem completely unnecessary to move to any enrichment of the input data that is as yet unsupported by psycholinguistic evidence.
Abstract: A principal goal of modern linguistics is to account for the apparently rapid and uniform acquisition of syntactic knowledge, given the relatively impoverished input that evidently serves as the basis for the induction of that knowledge: the so-called projection problem. At least since Chomsky, the usual response to the projection problem has been to characterize knowledge of language as a grammar, and then proceed by restricting so severely the class of grammars available for acquisition that the induction task is greatly simplified, perhaps trivialized. "…consistent with our knowledge of what language is and of which stages the child passes through in learning it." [2, page 218] In particular, although the final psycholinguistic evidence is not yet in, children do not appear to receive negative evidence as a basis for the induction of syntactic rules. That is, they do not receive direct reinforcement for what is not a syntactically well-formed sentence (see Brown and Hanlon [3] and Newport, Gleitman, and Gleitman [4] for discussion). If syntactic acquisition can proceed using just positive examples, then it would seem completely unnecessary to move to any enrichment of the input data that is as yet unsupported by psycholinguistic evidence.

Journal ArticleDOI
TL;DR: The method of local constraints attempts to describe context-free languages in an apparently context-sensitive form which helps to retain the intuitive insights about the grammatical structure, thus allowing for the possibility of a correctness proof in the form of Knuthian semantics.

Dissertation
01 Jan 1980
TL;DR: A computer software system designed to aid composers by automating part of the composition process is described; it uses generative grammars to automate the generation of music structures.
Abstract: The application of computers in music has focused almost exclusively on problems of sound synthesis. The application of computers in the process of music composition, i.e. the generation of sound structures, remains largely unexplored. This thesis describes a computer software system designed to usefully aid composers in the process of music composition by automating part of the composition process. The composition system described uses generative grammars to automate the generation of music structures. The core of the system consists of two facilities: 1) a facility for formally and explicitly defining the grammars of music languages, i.e. the GGDL programming language, and 2) a facility for using GGDL language definitions to automatically generate utterances in the specified languages, i.e. the GGDL-Generator. An implementation of these facilities has been integrated with programs to enable sound synthesis and the graphic editing of music structures. The system, implemented on a network of computers at the Department of Computer Science, University of Edinburgh, is described. The thesis presents and evaluates some of the practical results obtained using the GGDL computer-aided composition system. It is shown how the system may be used to compose macro- and micro-sound structures. An automated digital sound synthesis instrument developed using generative grammars is also described.
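The generative core described in the abstract can be sketched as random derivation from a context-free grammar whose terminals are note events. The grammar and note names below are illustrative assumptions and are unrelated to the actual GGDL notation.

```python
import random

# Illustrative grammar: nonterminals map to lists of alternative productions;
# any symbol not in the table is a terminal note name.
GRAMMAR = {
    "phrase": [["motif", "motif"], ["motif", "cadence"]],
    "motif": [["C4", "E4", "G4"], ["D4", "F4", "A4"]],
    "cadence": [["G4", "C4"]],
}

def generate(symbol, rng):
    """Expand `symbol` into a flat list of terminal note names."""
    if symbol not in GRAMMAR:
        return [symbol]
    notes = []
    for s in rng.choice(GRAMMAR[symbol]):
        notes.extend(generate(s, rng))
    return notes

print(generate("phrase", random.Random(0)))
```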

Book ChapterDOI
K. S. Fu1
01 Jan 1980
TL;DR: Special topics discussed include primitive selection and pattern grammars, syntactic recognition and error-correcting parsing, and clustering analysis for syntactic patterns.
Abstract: The syntactic approach to pattern recognition is introduced. Special topics discussed include primitive selection and pattern grammars, syntactic recognition and error-correcting parsing, and clustering analysis for syntactic patterns.

01 Sep 1980
TL;DR: LMS is a knowledge representation formalism particularly designed for representing knowledge that can be straightforwardly expressed in natural language, a formalism for managing interconnected objects in a highly-organized, network-like memory.
Abstract: LMS ('Linguistic Memory System') is a knowledge representation formalism particularly designed for representing knowledge that can be straightforwardly expressed in natural language. Fundamentally, it is a semantic network formalism, a formalism for managing interconnected objects in a highly organized, network-like memory. XLMS is a particular LISP-based implementation of LMS, intended primarily for experimental use.

Proceedings ArticleDOI
30 Sep 1980
TL;DR: This paper attempts to systematize the natural language analysis process by using a partitioned semantic network formalism as the meaning representation and stepwise translation based on Montague Grammar.
Abstract: This paper attempts to systematize the natural language analysis process by (1) the use of a partitioned semantic network formalism as the meaning representation and (2) stepwise translation based on Montague Grammar. The meaning representation is obtained in two steps. The first step translates natural language into a logical expression. The second step interprets the logical expression to generate a network structure. We have implemented a set of programs which performs the stepwise translation. Experiments are in progress for machine translation and question answering.

Proceedings ArticleDOI
30 Sep 1980
TL;DR: This paper first briefly describes the representation language Objtalk, and illustrates how it is used for building an understanding system for processing German newspaper texts about the job market situation.
Abstract: In past years we have been applying semantic ATN-grammars - as introduced by Brown & Burton (1974) - to natural language question-answering tasks (e.g. a LISP-Tutor [Barth, 1977] and a question-answering system about the micro-world of soccer [Rathke & Sonntag, 1979]). We found that semantic grammars execute efficiently, but become large very quickly even with moderate domains of discourse. We therefore looked for ways to support parsing with domain-dependent knowledge represented in an inheritance network [Laubsch, 1979]. In this paper we first briefly describe our representation language Objtalk, and then illustrate how it is used for building an understanding system for processing German newspaper texts about the job market situation.

Proceedings ArticleDOI
30 Sep 1980
TL;DR: It is illustrated that an attribute grammar for the translation of natural language sentences into expressions of the predicate calculus language can be produced in a straightforward way, and further improvements of the resulting attribute grammar are outlined.
Abstract: Starting from an ATN-grammar and translation rules assigning expressions of a predicate calculus language to the symbols of the grammar, one can produce an attribute grammar for the translation of natural language sentences (here German) into expressions of the predicate calculus language. The paper illustrates that this can be done in a straightforward way and outlines further improvements of the resulting attribute grammar.

Journal ArticleDOI
TL;DR: This work describes a different approach to numerical training in syntactic pattern recognition, namely finding an LMSE discriminant hyperplane between sets of class samples in a space of "structural indices" determined by a context-free grammar.
Abstract: The usual approach to numerical training in syntactic pattern recognition involves estimation of the production probabilities of stochastic context-free grammars. Here we describe a different approach, namely that of finding an LMSE discriminant hyperplane between sets of class samples in a space of "structural indices" determined by a context-free grammar.
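The approach can be sketched as follows: each pattern is mapped to a vector of structural indices (here assumed, for illustration, to be counts of how often each production fires in its parse), and a least-mean-square-error hyperplane is fit between the two classes. The data and the exact feature definition below are illustrative assumptions, not the paper's.

```python
import numpy as np

# Production-usage counts for samples of class +1 and class -1 (invented data).
X = np.array([
    [3, 1, 0],  # class +1
    [4, 0, 1],  # class +1
    [0, 2, 3],  # class -1
    [1, 3, 2],  # class -1
], dtype=float)
y = np.array([1.0, 1.0, -1.0, -1.0])

# Augment with a bias term and solve min ||Xa @ w - y||^2 (the LMSE criterion).
Xa = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(Xa, y, rcond=None)

# The sign of the discriminant separates the two classes on this data.
predictions = np.sign(Xa @ w)
print(predictions)
```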

Proceedings ArticleDOI
30 Sep 1980
TL;DR: A modification of Prolog has been implemented which allows "floating terminals" to be included in a metamorphosis grammar, together with some information enabling control of the search for such a terminal in the unprocessed part of the input.
Abstract: The Prolog programming language allows the user to write powerful parsers in the form of metamorphosis grammars. However, metamorphosis grammars, as defined by Colmerauer, have to specify strictly the order of terminal and nonterminal symbols. A modification of Prolog has been implemented which allows "floating terminals" to be included in a metamorphosis grammar, together with some information enabling control of the search for such a terminal in the unprocessed part of the input. The modification is illustrated by several examples from the Polish language, and some open questions are discussed.
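The "floating terminal" idea can be sketched outside Prolog as well: instead of requiring a terminal at the current input position, the parser may consume it from anywhere within a bounded window of the unprocessed input, which helps with freer word order. The Python rendering, the window parameter, and the example below are illustrative assumptions, not the paper's mechanism.

```python
def match_floating(tokens, terminal, window=3):
    """Consume `terminal` from anywhere in the first `window` positions of
    the unprocessed input; return the remaining tokens, or None on failure."""
    for i, tok in enumerate(tokens[:window]):
        if tok == terminal:
            return tokens[:i] + tokens[i + 1:]
    return None

# The terminal is found at position 1 rather than position 0 and consumed.
print(match_floating(["nie", "widze", "go"], "widze"))  # ['nie', 'go']
print(match_floating(["nie", "widze", "go"], "ma"))     # None
```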

Book ChapterDOI
TL;DR: In this article, the authors discuss the main lines of research currently explored in the laboratory with the method known as sentence-picture verification, which makes it possible to study the nature of the linguistic representation, together with the transformative operations that people are able to apply to this representation when performing comparative tasks.
Abstract: This chapter discusses the main lines of research currently explored in the laboratory with the method known as sentence-picture verification. The sentence-picture comparison is a technique that makes it possible to study the nature of the linguistic representation, together with the transformative operations that people are able to apply to this representation when performing comparative tasks. These transformations are related to the temporal evolution and the corresponding modifications of the representation in memory. The sentence-picture verification task deals with the nature and the functional properties of the cognitive representations of linguistic and pictorial information. It also deals with the comprehension of language. The chapter provides a description of models used for linguistic information processing. In the model proposed by Chase and Clark, coding processes lead to an isomorphic representation that sentences and pictures share in common, in the form of linguistic descriptors. This model is relevant in situations in which the two terms to be compared are presented simultaneously; however, with a successive presentation paradigm, it is shown that subjects use a figurative representation of sentences to compare them with the pictures.

Proceedings ArticleDOI
30 Sep 1980
TL;DR: This paper presents a method of decomposing Japanese sentences appearing in the Patent Documents on "Pulse network" into normal forms, whereby linguistic information is analysed and classified based on the human linguistic process.
Abstract: The diversity and flexibility of language expression forms are awkward problems for the machine processing of language, such as translation, indexing and question-answering. This paper presents a method of decomposing Japanese sentences appearing in the Patent Documents on "Pulse network" into normal forms. First, the linguistic information is analysed and classified based on the human linguistic process. Then, predicate functions, phrase functions and operators are introduced as the normal forms. Finally, the decomposition procedure and some experimental results are presented.