
Showing papers on "Rule-based machine translation published in 1986"


Journal ArticleDOI
TL;DR: A formal model of the mental representation of task languages is presented: a metalanguage for defining task-action grammars (TAG), generative grammars that rewrite simple tasks into action specifications; simple complexity metrics over these grammars make predictions about the relative learnability of different task language designs.
Abstract: A formal model of the mental representation of task languages is presented. The model is a metalanguage for defining task-action grammars (TAG): generative grammars that rewrite simple tasks into action specifications. Important features of the model are (a) Identification of the "simple-tasks" that users can perform routinely and that require no control structure; (b) Representation of simple-tasks by collections of semantic components reflecting a categorization of the task world; (c) Marking of tokens in rewrite rules with the semantic features of the task world to supply selection restrictions on the rewriting of simple-tasks into action specifications. This device allows the representation of family resemblances between individual task-action mappings. Simple complexity metrics over task-action grammars make predictions about the relative learnability of different task language designs. Some empirical support for these predictions is derived from the existing empirical literature on command language learning, and from two unreported experiments. Task-action grammars also provide designers with an analytic tool for exposing the configural properties of task languages.
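The core rewriting mechanism can be sketched in a few lines of Python (a hypothetical cursor-movement task language, invented here and not one of the paper's worked examples): one rule, marked with semantic features, covers a whole family of task-action mappings.

```python
# Hypothetical editor task language; the feature names and key tokens
# are illustrative assumptions, not the paper's actual grammars.
ARROW_KEYS = {"up": "Up-arrow", "down": "Down-arrow",
              "left": "Left-arrow", "right": "Right-arrow"}

def rewrite(simple_task):
    """Rewrite a simple task (a bundle of semantic features) into an
    action specification. The 'direction' feature acts as a selection
    restriction picking the key token, so one rule captures the family
    resemblance among all four cursor-movement tasks."""
    if simple_task.get("effect") == "move-cursor":
        return ["press", ARROW_KEYS[simple_task["direction"]]]
    raise ValueError("no rewrite rule for this task")

spec = rewrite({"effect": "move-cursor", "direction": "up"})
```

On this view, a task language where all four movement tasks share one feature-marked rule should be easier to learn than one needing four unrelated rules, which is the kind of prediction TAG's complexity metrics make.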

328 citations


Book
01 Jun 1986
TL;DR: The book presents papers on natural language processing, focusing on the central issues of representation, reasoning, and recognition in syntactic models, semantic interpretation, discourse interpretation, language action and intentions, language generation, and systems.
Abstract: The book presents papers on natural language processing, focusing on the central issues of representation, reasoning, and recognition. The introduction discusses theoretical issues, historical developments, and current problems and approaches. The book presents work in syntactic models (parsing and grammars), semantic interpretation, discourse interpretation, language action and intentions, language generation, and systems.

232 citations


Journal ArticleDOI
TL;DR: The problem of grammatical inference is introduced and its potential engineering applications are demonstrated; inference algorithms for finite-state and context-free grammars are presented.
Abstract: Inference of high-dimensional grammars is discussed. Specifically, techniques for inferring tree grammars are briefly presented. The problem of inferring a stochastic grammar to model the behavior of an information source is also introduced and techniques for carrying out the inference process are presented for a class of stochastic finite-state and context-free grammars. The possible practical application of these methods is illustrated by examples.
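The flavor of inferring a stochastic finite-state grammar from an information source can be sketched as follows (an illustrative bigram-style estimator of transition probabilities, not the paper's algorithms):

```python
from collections import defaultdict

def infer_stochastic_fsg(samples):
    """Estimate transition probabilities of a simple stochastic
    finite-state grammar from example strings (illustrative only;
    not the inference techniques presented in the paper)."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in samples:
        prev = "<start>"
        for sym in list(s) + ["<end>"]:
            counts[prev][sym] += 1
            prev = sym
    # normalize counts into per-state probability distributions
    return {state: {sym: c / sum(nxt.values()) for sym, c in nxt.items()}
            for state, nxt in counts.items()}

model = infer_stochastic_fsg(["ab", "ab", "aab"])
```

Given the three samples, the model learns that every string starts with "a" and that "a" is followed by "b" three times out of four.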

189 citations


Book ChapterDOI
TL;DR: The chapter demonstrates some of the solutions that were developed in LUNAR for handling a variety of problems in semantic interpretation, especially in the interpretation of quantifiers, including a meaning representation language (MRL) that facilitates the uniform interpretation of a wide variety of linguistic constructions.
Abstract: Publisher Summary The history of communication between man and machines has followed a path of increasing provision for the convenience and ease of communication on the part of the human. From raw binary and octal numeric machine languages, through various symbolic assembly, scientific, business, and higher level languages, programming languages have increasingly adopted notations that are more natural and meaningful to a human user. The important characteristic of this trend is the elevation of the level at which instructions are specified, from the low-level details of the machine operations to high-level descriptions of the task to be done, leaving out details that can be filled in by the computer. This chapter is intended to be a discussion of a set of techniques, the problems they solve, and the relative advantages and disadvantages of several alternative approaches. The chapter demonstrates some of the solutions that were developed in LUNAR for handling a variety of problems in semantic interpretation, especially in the interpretation of quantifiers. These include a meaning representation language (MRL) that facilitates the uniform interpretation of a wide variety of linguistic constructions, the formalization of meanings in terms of procedures that define truth conditions and carry out actions, efficient techniques for performing extensional inference, techniques for organizing and applying semantic rules to construct meaning representations, and techniques for generating higher quantifiers during interpretation. The latter include methods for determining the appropriate relative scopes of quantifiers and their interactions with negation, and for handling their interactions with operators such as “average.” Other techniques are described for post-interpretive query optimization and for displaying quantifier dependencies in output. 
A number of future directions for research in natural language understanding, including some questions of the proper relationship between syntax and semantics, the partial understanding of “ungrammatical” sentences, and the role of pragmatics are also discussed later in the chapter.

174 citations


Journal ArticleDOI
King-Sun Fu1
TL;DR: A combined syntactic-semantic approach based on attributed grammars is suggested, intended to be an initial step toward unification of syntactic and statistical approaches to pattern recognition.
Abstract: The problem of pattern recognition is discussed in terms of single-entity representation versus multiple-entity representation. A combined syntactic-semantic approach based on attributed grammars is suggested. Syntax-semantics tradeoff in pattern representation is demonstrated. This approach is intended to be an initial step toward unification of syntactic and statistical approaches to pattern recognition.

166 citations


Proceedings ArticleDOI
25 Aug 1986
TL;DR: The strategies and potentials of CUGs justify their further exploration in the wider context of research on unification grammars, and approaches to selected linguistic phenomena such as long-distance dependencies, adjuncts, word order, and extraposition are discussed.
Abstract: Categorial unification grammars (CUGs) embody the essential properties of both unification and categorial grammar formalisms. Their efficient and uniform way of encoding linguistic knowledge in well-understood and widely used representations makes them attractive for computational applications and for linguistic research. In this paper, the basic concepts of CUGs and simple examples of their application will be presented. It will be argued that the strategies and potentials of CUGs justify their further exploration in the wider context of research on unification grammars. Approaches to selected linguistic phenomena such as long-distance dependencies, adjuncts, word order, and extraposition are discussed.

160 citations


Proceedings ArticleDOI
01 Jul 1986
TL;DR: This paper presents constraints on individual attributes and semantic functions of an AG that are sufficient to guarantee that a circular AG specifies a well-defined translation and that circularly-defined attribute-instances can be computed via successive approximation.
Abstract: In the traditional formulation of attribute grammars (AGs), circularities are not allowed; that is, no attribute-instance in any derivation tree may be defined in terms of itself. Elsewhere in mathematics and computing, though, circular (or recursive) definitions are commonplace, and even essential. Given appropriate constraints, recursive definitions are well-founded, and the least fixed-points they denote are computable. This is also the case for circular AGs. This paper presents constraints on individual attributes and semantic functions of an AG that are sufficient to guarantee that a circular AG specifies a well-defined translation and that circularly-defined attribute-instances can be computed via successive approximation. AGs that satisfy these constraints are called finitely recursive. An attribute evaluation paradigm is presented that incorporates successive approximation to evaluate circular attribute-instances, along with an algorithm to automatically construct such an evaluator. The attribute evaluators so produced are static in the sense that the order of evaluation at each production-instance in the derivation-tree is determined at the time that each translator is generated. A final algorithm is presented that tells which individual attributes and functions must satisfy the constraints.
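Successive approximation to a least fixed point, the evaluation idea the paper builds on, can be illustrated with a classic circular grammar property: which nonterminals are nullable. This is a sketch of the technique, not the paper's evaluator, which operates on attribute-instances in derivation trees.

```python
def nullable_nonterminals(productions):
    """Compute the least fixed point of the 'nullable' property by
    successive approximation, starting from the bottom element (the
    empty set) and iterating until the approximation is stable."""
    nullable = set()
    changed = True
    while changed:
        changed = False
        for lhs, rhs in productions:
            # a nonterminal is nullable if some production rewrites it
            # entirely to (already known) nullable symbols
            if lhs not in nullable and all(sym in nullable for sym in rhs):
                nullable.add(lhs)
                changed = True
    return nullable

# Toy grammar: S -> A B, A -> (empty), B -> A, C -> 'c'
grammar = [("S", ["A", "B"]), ("A", []), ("B", ["A"]), ("C", ["c"])]
```

The definition of "nullable" refers to itself, yet the iteration terminates because each pass only adds elements and the set is bounded, which is the sense in which suitably constrained circular definitions are well-founded.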

95 citations


Proceedings ArticleDOI
10 Jul 1986
TL;DR: This paper discusses the communication between the syntactic, semantic and pragmatic modules that is necessary for making implicit linguistic information explicit, and lets syntax and semantics recognize missing linguistic entities as implicit entities, so that reference resolution can be directed to find specific referents for the entities.
Abstract: This paper describes the SDC PUNDIT (Prolog UNDerstands Integrated Text) system for processing natural language messages. PUNDIT, written in Prolog, is a highly modular system consisting of distinct syntactic, semantic and pragmatics components. Each component draws on one or more sets of data, including a lexicon, a broad-coverage grammar of English, semantic verb decompositions, rules mapping between syntactic and semantic constituents, and a domain model. This paper discusses the communication between the syntactic, semantic and pragmatic modules that is necessary for making implicit linguistic information explicit. The key is letting syntax and semantics recognize missing linguistic entities as implicit entities, so that they can be labelled as such, and reference resolution can be directed to find specific referents for the entities. In this way the task of making implicit linguistic information explicit becomes a subset of the tasks performed by reference resolution. The success of this approach is dependent on marking missing syntactic constituents as elided and missing semantic roles as ESSENTIAL so that reference resolution can know when to look for referents.

72 citations


Book
01 Jan 1986
TL;DR: It is shown how some transformations of functional programs may be better understood by viewing the programs as inefficient implementations of attribute grammars.
Abstract: Two mappings from attribute grammars to lazy functional programs are defined. One of these mappings is an efficient implementation of attribute grammars. The other mapping yields inefficient programs. It is shown how some transformations of functional programs may be better understood by viewing the programs as inefficient implementations of attribute grammars.
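The standard example connecting attribute grammars and functional programs is "repmin": replace every leaf of a tree by the tree's minimum, which combines a synthesized attribute (the local minimum) with an inherited one (the global minimum). A straightforward two-pass Python sketch follows; the one-pass circular version is what requires the laziness discussed in the book.

```python
def tree_min(t):
    """Synthesized attribute: the minimum leaf of a binary tree,
    where a leaf is an int and a node is a pair of subtrees."""
    return t if isinstance(t, int) else min(tree_min(t[0]), tree_min(t[1]))

def replace_leaves(t, m):
    """Inherited attribute: rebuild the tree with every leaf set to m."""
    return m if isinstance(t, int) else (replace_leaves(t[0], m),
                                         replace_leaves(t[1], m))

def repmin(t):
    # Two traversals here; a lazy functional language can fuse them
    # into a single circular pass, the efficient mapping in question.
    return replace_leaves(t, tree_min(t))
```

Viewed as an attribute grammar, the two functions are the two attribute passes, and the transformation to a single traversal is exactly the kind of program transformation the abstract says is clarified by the AG view.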

49 citations


Patent
13 May 1986
TL;DR: In this paper, an operator interactive translation system for translating sentences in a first language to sentences in a second language includes a separate memory for storing translated words in the second language as learned words corresponding to input words in the first language, upon being indicated as correct equivalents by the user.
Abstract: An operator interactive translation system for translating sentences in a first language to sentences in a second language includes a separate memory for storing translated words in the second language as learned words corresponding to input words in the first language, upon being indicated as correct equivalents by the user. For each subsequent translation using sentence construction and morpheme analysis, the learned word stored in the buffer memory is selected as the first translation each time the specific input word in the first language appears in a sentence to be translated.
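The claimed mechanism reduces to a lookup order, sketched here with a hypothetical API (the real system's sentence construction and morpheme analysis are omitted):

```python
class LearnedDictionary:
    """Sketch of the patent's separate learned-word memory: a store of
    user-confirmed equivalents consulted before the main dictionary.
    The class and method names are invented for illustration."""
    def __init__(self, base):
        self.base = base        # main dictionary: word -> list of candidates
        self.learned = {}       # user-confirmed equivalents

    def confirm(self, word, translation):
        # store the user-approved equivalent as a learned word
        self.learned[word] = translation

    def translate(self, word):
        # a learned word is selected first on every subsequent translation
        if word in self.learned:
            return self.learned[word]
        candidates = self.base.get(word)
        return candidates[0] if candidates else None

d = LearnedDictionary({"bank": ["riverbank", "financial institution"]})
first = d.translate("bank")             # main dictionary's first candidate
d.confirm("bank", "financial institution")
second = d.translate("bank")            # learned equivalent now preferred
```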

49 citations


Patent
Toshio Okamoto1, Kimihito Takeda1
22 Aug 1986
TL;DR: In this paper, a machine translation system for automatically translating Japanese into a target language was proposed, where the number of rules for syntactic analysis is only the square of the number of parts of speech.
Abstract: This invention relates to a machine translation system for automatically translating Japanese into a target language. Conventional translation systems have drawbacks such as low processing efficiency because of the necessity of a cumbersome pre-editing process. In the machine translation system of the invention, the syntactic analysis, which is a principal process in machine translation, comprises the following steps: picking up an analysis rule corresponding to a combination of words from a part-of-speech matrix table that describes analysis rules corresponding to combinations of interlinked and interlinking words, in order to recognize the presence of a syntactic link; successively stacking the words between which a syntactic link is recognized to be established as a partial analysis tree; and outputting an analysis tree corresponding to the original on the basis of the stacked partial analysis trees. Therefore, the number of rules for the syntactic analysis is only the square of the number of parts of speech. Unlike the prior art, there is no need for a pre-editing process, and Japanese in any style can be efficiently translated.
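The square-of-parts-of-speech bound follows because the analysis table is indexed by pairs of parts of speech. A shift-reduce sketch with an invented two-entry matrix (illustrative only, not the patented system):

```python
# Hypothetical part-of-speech matrix: (POS of left word, POS of right
# word) -> name of the syntactic link, if one is recognized.
MATRIX = {
    ("adj", "noun"): "modify",
    ("noun", "verb"): "subject",
}

def link(words):
    """Table-driven analysis sketch: adjacent stack items whose POS
    pair appears in the matrix are combined into a partial analysis
    tree; the combined node keeps the head's part of speech."""
    stack = []
    for item in words:              # item is a (word, pos) pair
        stack.append(item)
        # greedily combine the top two stack items via the matrix table
        while len(stack) >= 2:
            (w1, p1), (w2, p2) = stack[-2], stack[-1]
            rule = MATRIX.get((p1, p2))
            if rule is None:
                break
            stack[-2:] = [((rule, w1, w2), p2)]
    return stack

result = link([("red", "adj"), ("ball", "noun"), ("bounces", "verb")])
```

With p parts of speech the matrix has at most p * p cells, which is the rule-count bound the abstract claims.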

Proceedings ArticleDOI
01 Jan 1986
TL;DR: This work introduces copy bypass attribute propagation that dynamically replaces copy rules with nonlocal dependencies, resulting in faster incremental evaluation that allows multiple subtree replacement on any noncircular attribute grammar.
Abstract: Attribute grammars require copy rules to transfer values between attribute instances distant in an attributed parse tree. We introduce copy bypass attribute propagation that dynamically replaces copy rules with nonlocal dependencies, resulting in faster incremental evaluation. An evaluation strategy is used that approximates a topological ordering of attribute instances. The result is an efficient incremental evaluator that allows multiple subtree replacement on any noncircular attribute grammar.

Journal ArticleDOI
TL;DR: A system for generating direct manipulation office systems that employ a new semantic data model to describe office entities and provides a means of generating sophisticated graphics-based user interfaces that are integrated with the underlying semantic model.
Abstract: A system for generating direct manipulation office systems is described. In these systems, the user directly manipulates graphical representations of office entities instead of dealing with these entities abstractly through a command language or menu system. These systems employ a new semantic data model to describe office entities. New techniques based on attribute grammars and incremental attribute evaluation are used to implement this data model in an efficient manner. In addition, the system provides a means of generating sophisticated graphics-based user interfaces that are integrated with the underlying semantic model. Finally, the generated systems contain a general user reversal and recovery (or undo) mechanism that allows them to be much more tolerant of human errors.

Journal ArticleDOI
TL;DR: In this article, a technique for the syntax-directed specification of compilers together with a method for proving the correctness of their parse-driven implementations is presented, and a practical class of compiler implementations are considered, consisting of those driven by LR (k) or LL(k) parsers which cause a sequence of translation routine activations to modify a suitably initialized collection of data structures (called a translation environment).
Abstract: An aspect of the interaction between compiler theory and practice is addressed. Presented is a technique for the syntax-directed specification of compilers together with a method for proving the correctness of their parse-driven implementations. The subject matter is presented in an order-algebraic framework; while not strictly necessary, this approach imposes beneficial structure and modularity on the resulting specifications and implementation correctness proofs. Compilers are specified using an order-algebraic definition of attribute grammars. A practical class of compiler implementations is considered, consisting of those driven by LR(k) or LL(k) parsers which cause a sequence of translation routine activations to modify a suitably initialized collection of data structures (called a translation environment). The implementation correctness criterion consists of appropriately comparing, for each source program, the corresponding object program (contained in the final translation environment) produced by the compiler implementation to the object program dictated by the compiler specification. Provided that suitable intermediate assertions (called translation invariants) are supplied, the program consisting of the (parse-induced) sequence of translation routine activations can be proven partially correct via standard inductive assertion methods.

Book
01 Jan 1986
TL;DR: A textbook treatment of sentence structure, covering constituents, functions and categories, the basic Verb Phrase, adverbials, auxiliary VPs, the structure of Noun Phrases, embedded sentences, wh-clauses and non-finite clauses.
Abstract: Contents: Introduction; 1. Sentence structure: constituents; 2. Sentence structure: functions; 3. Sentence structure: categories; 4. The basic Verb Phrase; 5. Adverbials and other matters; 6. More on verbs: auxiliary VPs; 7. The structure of Noun Phrases; 8. Sentences within Sentences; 9. Wh-clauses; 10. Non-finite clauses; 11. Languages, sentences and grammars; Further Reading; Index.

Patent
04 Mar 1986
TL;DR: In this paper, a different-interpretation key is added to the function keys of an input part in order to attain translation under plural interpretations with a simple constitution and to simplify the operator's editing work.

Abstract: PURPOSE: To easily obtain proper translation results by adding a different-interpretation key to the function keys of an input part, in order to attain translation under plural interpretations with a simple constitution and to simplify the operator's editing work. CONSTITUTION: An editing control part 4 detects the key input given from an input part 1 or the translation end signal sent from a translation part 5, and then displays the translated sentence candidates and the auxiliary information on a display part 8. The different-interpretation key added to the function keys of the part 1 can be operated as long as another interpretation exists, and the new interpretation is then used for a second translation. In other words, after looking at both an original sentence and its translation displayed on the part 8, the operator pushes the different-interpretation key to obtain a different interpretation. In such a case, the part 4 indicates the new interpretation to an analysis control part, the second translation is carried out via a series evolution table, and the result is displayed on the part 8. Thus it is possible to perform translation under plural interpretations and to simplify the editing operations, and proper translation results are obtained. COPYRIGHT: (C)1987,JPO&Japio

Proceedings ArticleDOI
25 Aug 1986
TL;DR: This paper presents a recent advance in multi-lingual knowledge-based machine translation (KBMT), which provides for separate syntactic and semantic knowledge sources that are integrated dynamically for parsing and generation.
Abstract: Building on the well-established premise that reliable machine translation requires a significant degree of text comprehension, this paper presents a recent advance in multi-lingual knowledge-based machine translation (KBMT). Unlike previous approaches, the current method provides for separate syntactic and semantic knowledge sources that are integrated dynamically for parsing and generation. Such a separation enables the system to have syntactic grammars, language specific but domain general, and semantic knowledge bases, domain specific but language general. Subsequently, grammars and domain knowledge are precompiled automatically in any desired combination to produce very efficient and very thorough real-time parsers. A pilot implementation of our KBMT architecture using functional grammars and entity-oriented semantics demonstrates the feasibility of the new approach.

Patent
07 May 1986
TL;DR: In this article, a machine translation processor with a keyboard with a partial translation command key, a translation processing section having dictionaries for performing translation of an original sentence, a time measuring section for measuring the actual translation time performed in the translation processing, a memory for storing both the original and translation sentences, a dividing section for dividing the original sentence into phrases in accordance with a predetermined rule for division, and a control section for controlling the translation of the original sentences such that the original is divided into phrases by the dividing section and the phrases are translated by the translation process.
Abstract: A machine translation processor according to the present invention includes an input section having a keyboard with a partial translation command key, a translation processing section having dictionaries for performing translation of an original sentence, a time measuring section for measuring the actual translation time performed in the translation processing section, a memory for storing both the original and translation sentences, a dividing section for dividing the original sentence into phrases in accordance with a predetermined rule for division, and a control section for controlling the translation of the original sentence such that when the partial translation command key is operated or the actual translation time measured reaches a predetermined maximum translation time, the original is divided into phrases by the dividing section and the phrases are translated by the translation processing section. Overall translation time is reduced by dividing the difficult original sentence into phrases that may be easily translated.

Patent
10 Feb 1986
TL;DR: In machine translation from a first language text to a second language text such as an English text, if information necessary for the translation is not directly expressed or is not sufficiently hinted at in the first-language text, such information is requested by the translation machine to supplement the first language text during inputting or pre-editing.
Abstract: In machine translation from a first language text such as a Japanese text to a second language text such as an English text, if information necessary for the translation is not directly expressed or is not sufficiently hinted at in the first language text, such information is requested by the translation machine to supplement the first language during inputting of the first language text or pre-editing thereof.

Proceedings ArticleDOI
10 Jul 1986
TL;DR: The absence of mirror-image constructions in human languages means that it is not enough to extend Context-free Grammars in the direction of context-sensitivity, and a class of grammars must be found which handles (context-sensitive) copying but not ( context-free) mirror images, suggesting that human linguistic processes use queues rather than stacks.
Abstract: The documentation of (unbounded-length) copying and cross-serial constructions in a few languages in the recent literature is usually taken to mean that natural languages are slightly context-sensitive. However, this ignores those copying constructions which, while productive, cannot be easily shown to apply to infinite sublanguages. To allow such finite copying constructions to be taken into account in formal modeling, it is necessary to recognize that natural languages cannot be realistically represented by formal languages of the usual sort. Rather, they must be modeled as families of formal languages or as formal languages with indefinite vocabularies. Once this is done, we see copying as a truly pervasive and fundamental process in human language. Furthermore, the absence of mirror-image constructions in human languages means that it is not enough to extend Context-free Grammars in the direction of context-sensitivity. Instead, a class of grammars must be found which handles (context-sensitive) copying but not (context-free) mirror images. This suggests that human linguistic processes use queues rather than stacks, making imperative the development of a hierarchy of Queue Grammars as a counterweight to the Chomsky Grammars. A simple class of Context-free Queue Grammars is introduced and discussed.
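The stack-versus-queue contrast is easy to make concrete: a queue replays symbols in the order they arrived (copying), while a stack replays them in reverse (mirror images). A minimal illustration:

```python
from collections import deque

def is_copy(s):
    """Membership in the copy language {ww}: a queue replays the first
    half in the same order, which is why copying suits queue automata."""
    if len(s) % 2:
        return False
    q = deque(s[: len(s) // 2])
    return all(q.popleft() == c for c in s[len(s) // 2:])

def is_mirror(s):
    """Membership in the mirror language {w w-reversed}: a stack replays
    the first half in reverse order, the classic context-free case."""
    if len(s) % 2:
        return False
    stack = list(s[: len(s) // 2])
    return all(stack.pop() == c for c in s[len(s) // 2:])
```

Since human languages show productive copying but (the paper argues) no mirror-image constructions, the recognizer we want behaves like `is_copy`, not `is_mirror`, which motivates the proposed hierarchy of Queue Grammars.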

01 Jul 1986
TL;DR: This manual describes the English language syntactic analyzer developed by the PROTEUS Project at New York University, and the version of Restriction Language which is used to write grammars for this analyzer.
Abstract: This manual describes the English language syntactic analyzer developed by the PROTEUS Project at New York University, and the version of Restriction Language which is used to write grammars for this analyzer. This system is a direct descendant of the Linguistic String Parser, developed by the Linguistic String Project at New York University (since 1973 in collaboration with the Computer Science Department). In particular, we have tried to maintain as much commonality as possible in the Restriction Language used for stating grammars. In developing our new implementation, we have had three objectives: 1) use LISP. The current Linguistic String Parser is implemented in FORTRAN. It is therefore quite efficient but is hard to interface to AI applications, which are usually best developed in LISP. The PROTEUS system has been entirely implemented in LISP. 2) remain small and modular. The Linguistic String Parser gradually became so large and complex that further modification was difficult. Through redesign and the elimination of some features, we have sought to return to a simpler, more easily modifiable system. 3) accommodate different analysis algorithms. One aspect of our current research is the study of alternative analysis strategies. We have therefore tried to develop a system which could accommodate different analysis algorithms. In particular, we have designed the grammar formalism to work with both top-down and bottom-up analyzers.

Proceedings ArticleDOI
Yoshihiko Nitta1
25 Aug 1986
TL;DR: A somewhat new method called "Cross Translation Test" (CTT, in short) is presented that reveals the details of the idiosyncratic gap (IG, in short) together with the moderately satisfactory potential of MT.
Abstract: Current practical machine translation systems (MT, in short), which are designed to deal with a huge amount of documents, are generally structure-bound. That is, the translation process is based on the analysis and transformation of the structure of the source sentence, not on the understanding and paraphrasing of its meaning. But each language has its own syntactic and semantic idiosyncrasies, and on this account, without understanding the total meaning of source sentences it is often difficult for MT to bridge properly the idiosyncratic gap between source and target language. A somewhat new method called "Cross Translation Test" (CTT, in short) is presented that reveals the details of the idiosyncratic gap (IG, in short) together with the moderately satisfactory potential of MT. The usefulness of a sublanguage approach to reducing the IG between source and target language is also mentioned.

Proceedings Article
01 Jan 1986
TL;DR: The analysis phase in an indirect, transfer and global approach to machine translation is studied; the analysis can be described as exhaustive, depth-first, and strategically and heuristically driven, while the grammar used is an augmented context-free grammar.
Abstract: The analysis phase in an indirect, transfer and global approach to machine translation is studied. The analysis conducted can be described as exhaustive (meaning with backtracking), depth-first, and strategically and heuristically driven, while the grammar used is an augmented context-free grammar. The problem areas, being pattern matching, ambiguities, forward propagation, checking for correctness and backtracking, are highlighted. Established results found in the literature are employed whenever adaptable, while suggestions are given otherwise.

Proceedings ArticleDOI
André Schenk1
25 Aug 1986
TL;DR: A solution to one of the problems of machine translation, namely the translation of idioms is described within the theoretical framework of the Rosetta machine translation system.
Abstract: This paper discusses one of the problems of machine translation, namely the translation of idioms. The paper describes a solution to this problem within the theoretical framework of the Rosetta machine translation system. Rosetta is an experimental translation system which uses an intermediate language and translates between Dutch, English and, in the future, Spanish.

Journal ArticleDOI
TL;DR: This report investigates the problem of efficient representation of the attributed parse tree by analyzing and comparing the strategies of two systems that have been used to automatically generate a translator from an attribute grammar: the GAG system developed at the Universitat de Karlsruhe and the LINGUIST-86 system written at Intel Corporation.
Abstract: Attribute grammars are a value-oriented, non-procedural extension to context-free grammars that facilitate the specification of translations whose domain is described by the underlying context-free grammar. Just as parsers for context-free languages can be automatically constructed from a context-free grammar, so can translators, called attribute evaluators, be automatically generated from an attribute grammar. A major obstacle to generating efficient attribute evaluators is that they typically use large amounts of memory to represent the attributed parse tree. In this report we investigate the problem of efficient representation of the attributed parse tree by analyzing and comparing the strategies of two systems that have been used to automatically generate a translator from an attribute grammar: the GAG system developed at the Universität Karlsruhe and the LINGUIST-86 system written at Intel Corporation. Our analysis will characterize the two strategies and highlight their respective strengths and weaknesses. Drawing on the insights given by this analysis, we propose a strategy for storage optimization in automatically generated attribute evaluators that not only incorporates the best features of both GAG and LINGUIST-86, but also contains novel features that address aspects of the problem that are handled poorly by both systems.

Book ChapterDOI
TL;DR: Grammars contain rules for generating sentences that are metagrammatical devices that can be used to generate rules of the grammar or to encode certain relations among them, such as redundancies in their form.
Abstract: Grammars contain rules for generating sentences. Metarules are statements about these rules. They are metagrammatical devices that can be used to generate rules of the grammar or to encode certain relations among them, such as redundancies in their form.

Journal ArticleDOI
S. A. Mehdi1
TL;DR: A computer system for syntactic parsing of Arabic sentences that contains a word analyser and a syntactic parser based on Definite Clause Grammars (DCG) formalism is described.
Abstract: This paper describes a computer system for syntactic parsing of Arabic sentences. It contains a word analyser and a syntactic parser based on Definite Clause Grammars (DCG) formalism. The system has been written in Prolog. An introduction to the Arabic language and its features is included.
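DCG rules translate almost mechanically into recursive procedures. The sketch below transliterates a toy grammar into Python generators; the described system itself is written in Prolog, and this grammar and lexicon are invented for illustration, not taken from the Arabic parser.

```python
# Toy DCG-style grammar: each category maps to alternative right-hand
# sides, mirroring Prolog clauses such as  s --> np, vp.
GRAMMAR = {
    "s":  [["np", "vp"]],
    "np": [["det", "n"], ["n"]],
    "vp": [["v", "np"], ["v"]],
}
LEXICON = {"det": {"the"}, "n": {"cat", "fish"}, "v": {"eats"}}

def parse(cat, words, i):
    """Yield every position j such that words[i:j] derives category cat,
    the generator analogue of Prolog's backtracking difference lists."""
    if cat in LEXICON:
        if i < len(words) and words[i] in LEXICON[cat]:
            yield i + 1
        return
    for rhs in GRAMMAR[cat]:
        positions = [i]
        for sym in rhs:
            positions = [j2 for j in positions for j2 in parse(sym, words, j)]
        yield from positions

def accepts(sentence):
    words = sentence.split()
    return len(words) in parse("s", words, 0)
```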

Proceedings ArticleDOI
Lisette Appelo1
25 Aug 1986
TL;DR: It is shown that a compositional approach leads to a transparent account of the complex aspects of time in natural language and can be used for the translation of temporal expressions.
Abstract: This paper discusses the translation of temporal expressions, in the framework of the machine translation system Rosetta. The translation method of Rosetta, the 'isomorphic grammar method', is based on Montague's Compositionality Principle. It is shown that a compositional approach leads to a transparent account of the complex aspects of time in natural language and can be used for the translation of temporal expressions.

Proceedings Article
11 Aug 1986
TL;DR: The reasoning behind the selection and design of a parser for the Lingo project on natural language interfaces at MCC is presented, and a variant of chart parsing that uses a best-first control structure managed on an agenda as a control structure is chosen.
Abstract: This paper presents the reasoning behind the selection and design of a parser for the Lingo project on natural language interfaces at MCC. The major factors in the selection of the parsing algorithm were the choices of having a syntactically based grammar, using a graph-unification-based representation language, using Combinatory Categorial Grammars, and adopting a one-to-many mapping from syntactic bracketings to semantic representations in certain cases. The algorithm chosen is a variant of chart parsing that uses a best-first control structure managed on an agenda. It offers flexibility for these natural language processing applications by allowing for best-first tuning of parsing for particular grammars in particular domains while at the same time allowing exhaustive enumeration of the search space during grammar development. Efficiency advantages of this choice for graph-unification-based representation languages are outlined, as well as a number of other advantages that accrue to this approach by virtue of its use of an agenda as a control structure. We also mention two useful refinements to the basic best-first chart parsing algorithm that have been implemented in the Lingo project.
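The agenda-as-priority-queue control structure can be sketched as follows (a toy scored grammar invented here; real chart parsing also records derivations and handles unification, and the score combination below is deliberately simplified):

```python
import heapq

# Hypothetical binary rules with weights: (left cat, right cat) -> (parent, score)
RULES = {("np", "vp"): ("s", 1.0), ("det", "n"): ("np", 0.9), ("v", "np"): ("vp", 0.8)}

def best_first_parse(tags):
    """Agenda-driven chart parsing sketch: edges wait on a priority
    queue and are processed best-score-first, the control structure
    described in the paper."""
    chart = set()
    agenda = []                       # max-heap via negated scores
    for i, t in enumerate(tags):
        heapq.heappush(agenda, (-1.0, i, i + 1, t))
    while agenda:
        neg, i, j, cat = heapq.heappop(agenda)
        if (i, j, cat) in chart:      # already derived with a better score
            continue
        chart.add((i, j, cat))
        # combine the popped edge with adjacent edges already in the chart;
        # the new score simply discounts the popped edge's score (simplified)
        for (k, l, c2) in list(chart):
            if l == i and (c2, cat) in RULES:
                parent, s = RULES[(c2, cat)]
                heapq.heappush(agenda, (neg * s, k, j, parent))
            if j == k and (cat, c2) in RULES:
                parent, s = RULES[(cat, c2)]
                heapq.heappush(agenda, (neg * s, i, l, parent))
    return chart

chart = best_first_parse(["det", "n", "v", "det", "n"])
```

Because the agenda is just a priority function over edges, the same loop supports best-first tuning (a sharp scoring function) and exhaustive enumeration (uniform scores), which is the flexibility the paper emphasizes.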

Journal ArticleDOI
Yoshihiko Nitta1
TL;DR: The potential and the limitations of current machine translation are discussed by comparing the output of human translation with that of virtual machine translation, showing that the main reason for the limitation or incompleteness of current practical machine translation systems is their insufficient ability to treat "structural idiosyncrasies" of sentences.