
Showing papers on "Natural language understanding published in 2000"


Patent
Mark E. Epstein1
25 Oct 2000
TL;DR: This paper applies a context-free grammar to the text input to determine substrings and corresponding parse trees, and examines each possible substring using an inventory of queries corresponding to the CFG.
Abstract: A method and system for use in a natural language understanding system for including grammars within a statistical parser. The method involves a series of steps. The invention receives a text input. The invention applies a first context-free grammar to the text input to determine substrings and corresponding parse trees, wherein the substrings and corresponding parse trees further correspond to the first context-free grammar. Additionally, the invention can examine each possible substring using an inventory of queries corresponding to the CFG.
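
The patent text describes the mechanism only abstractly. As a hedged illustration of the idea of scanning every substring for grammar coverage, here is a minimal CYK-style sketch in Python; the grammar, lexicon, and sentence are invented, and the patent's "inventory of queries" step is omitted.

```python
from itertools import product

# Toy grammar in Chomsky normal form; grammar, lexicon, and sentence
# are invented for illustration and are not from the patent.
BINARY = {("DET", "N"): "NP", ("P", "NP"): "PP", ("NP", "PP"): "NP"}
LEXICAL = {"the": "DET", "a": "DET", "flight": "N", "meal": "N",
           "to": "P", "boston": "NP"}

def covered_substrings(words):
    """Return (start, end, symbol) for every substring the CFG derives,
    mirroring the patent's idea of checking all spans for grammar matches."""
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        if w in LEXICAL:
            chart[i][i + 1].add(LEXICAL[w])
    for span in range(2, n + 1):              # widen spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):         # every split point
                for left, right in product(chart[i][k], chart[k][j]):
                    if (left, right) in BINARY:
                        chart[i][j].add(BINARY[(left, right)])
    return [(i, j, s) for i in range(n) for j in range(i + 1, n + 1)
            for s in chart[i][j]]

print(covered_substrings("the flight to boston".split()))
# includes (0, 2, 'NP'), (2, 4, 'PP'), and (0, 4, 'NP') among the spans
```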

74 citations


01 Jan 2000
TL;DR: It is shown that the most important features are those that the natural language understanding module can compute, suggesting that integrating the trained classifier into the NLU module of the How May I Help You system should be straightforward.
Abstract: While it has recently become possible to build spoken dialogue systems that interact with users in real-time in a range of domains, systems that support conversational natural language are still subject to a large number of spoken language understanding (SLU) errors. Endowing such systems with the ability to reliably distinguish SLU errors from correctly understood utterances might allow them to correct some errors automatically or to interact with users to repair them, thereby improving the system's overall performance. We report experiments on learning to automatically distinguish SLU errors in 11,787 spoken utterances collected in a field trial of AT&T's How May I Help You system interacting with live customer traffic. We apply the automatic classifier RIPPER (Cohen 96) to train an SLU classifier using features that are automatically obtainable in real-time. The classifier achieves 86% accuracy on this task, an improvement of 23% over the majority class baseline. We show that the most important features are those that the natural language understanding module can compute, suggesting that integrating the trained classifier into the NLU module of the How May I Help You system should be straightforward.
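
RIPPER induces if-then rules; as a stand-in sketch (RIPPER itself is not assumed available), the example below trains a shallow scikit-learn decision tree on the kind of real-time-obtainable features the abstract alludes to. The feature set, data, and labels are all invented for illustration.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical real-time features per utterance:
# [ASR confidence, number of recognized words, fraction in-vocabulary]
X = [
    [0.92, 5, 1.00],   # confidently recognized
    [0.35, 2, 0.50],   # low confidence, short
    [0.88, 7, 0.95],
    [0.20, 1, 0.00],
    [0.75, 4, 0.90],
    [0.40, 3, 0.60],
]
# 1 = correctly understood, 0 = SLU error (labels invented for illustration)
y = [1, 0, 1, 0, 1, 0]

# A rule learner like RIPPER induces readable if-then rules; a shallow
# decision tree is a rough, interpretable approximation of the same idea.
clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(clf.predict([[0.30, 2, 0.40]]))  # likely flagged as an SLU error
```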

58 citations


Journal ArticleDOI
TL;DR: GALEN has developed a new generation of terminology tools based on a language-independent concept reference model, using a compositional formalism that allows computer processing and multiple reuse; these tools were used to develop a new multipurpose coding system for surgical procedures in France.

50 citations


Journal ArticleDOI
TL;DR: Presents visible evidence of how everyday language interpretation and law-making expertise are closely rooted in the same highly inventive cognitive process, abduction.
Abstract: The complexity of any given cognitive phenomenon, such as “scientific discovery”, “technical expertise”, or “natural language understanding”, requires a multidisciplinary approach. Within the framework of such an approach, the paper presents visible evidence of how these very different phenomena are closely rooted in the same highly inventive cognitive process, abduction. This evidence is drawn from examples of both everyday language interpretation and law-making expertise.

44 citations


Book ChapterDOI
19 Jun 2000
TL;DR: Two components of Atlas are described: APE, the integrated planning and execution system at the heart of Atlas, and CARMEL, the natural language understanding component. Both are designed as domain-independent rule-based software, with the goal of making them extensible and reusable.
Abstract: The goal of the Atlas project is to increase the opportunities for students to construct their own knowledge by conversing (in typed form) with a natural language-based ITS. In this paper we describe two components of Atlas: APE, the integrated planning and execution system at the heart of Atlas, and CARMEL, the natural language understanding component. These components have been designed as domain-independent rule-based software, with the goal of making them both extensible and reusable. We illustrate the use of CARMEL and APE by describing Atlas-Andes, a prototype ITS built with Atlas using the Andes physics tutor as the host.

33 citations


Patent
Mark E. Epstein1
25 Oct 2000
TL;DR: In this paper, a statistical natural language understanding (NLU) model is applied to text input for identifying substrings within it, and a statistical NLU model can be selected for identifying a particular class of substring.
Abstract: A method and system for statistical parsing. The method involves a series of steps. The system can apply a statistical natural language understanding (NLU) model to text input for identifying substrings within the text input. The statistical NLU model can be selected for identifying a particular class of substring. The system can examine each identified substring using an inventory of queries corresponding to the reusable statistical NLU model.

27 citations


BookDOI
01 Jan 2000
TL;DR: This book presents a tutoring-based approach to the development of intelligent agents, bio-inspired systems, an object-oriented framework for building collaborative network agents, and diagnosis systems based on fuzzy and neural approaches.
Abstract: Preface. Acknowledgments. About the Editors. Contributors. Part 1: Intelligent Agents and Bio-Inspired Systems. 1. A tutoring based approach to the development of intelligent agents G. Tecuci, et al. 2. An object-oriented framework for building collaborative network agents L. Boloni, D.C. Marinescu. 3. Animals versus robotic autonomous agents J.E.R. Staddon, I.M. Chelaru. 4. From configurable circuits to bio-inspired systems M. Sipper, et al. Part 2: Intelligent Data Processing. 5. Fuzzy data mining A. Kandel, A. Klein. 6. Feature-oriented hybrid neural adaptive systems and applications H.-N. Teodorescu, C. Bonciu. 7. Algebraic neuro-fuzzy systems and applications H.-N. Teodorescu, D. Arotaritei. Part 3: Interfaces. 8. Neuro-fuzzy approach to natural language understanding and processing. Part I: Neuro-fuzzy device E. Ferri, G. Langholz. 9. Neuro-fuzzy approach to natural language understanding and processing. Part II: Neuro-fuzzy learning algorithms E. Ferri, G. Langholz. 10. Graph matching and similarity H. Bunke, Xiaoyi Jiang. Part 4: Applications and High-tech Management. 11. Diagnosis systems and strategies: principles, fuzzy and neural approaches P.M. Frank, T. Marcu. 12. Intelligent non-destructive testing and evaluation with industrial applications C. Morabito. 13. Managing high-tech projects. Part I D. Mlynek, P. Mali. 14. Managing high-tech projects. Part II D. Mlynek, P. Mali. Index of Terms.

24 citations


Journal ArticleDOI
TL;DR: This article provides a global overview of the main aspects of current practice in the design, implementation and evaluation of speech recognition components for Spoken Language Dialog Systems (SLDSs), and presents the results of the DISC European project related to speech recognition.
Abstract: This article provides a global overview of the main aspects of current practice in the design, implementation and evaluation of speech recognition components for Spoken Language Dialog Systems (SLDSs), and presents the results of the DISC European project related to speech recognition. DISC and its successor DISC-2 are efforts towards the definition of best practice guidelines for SLDS development and evaluation. SLDSs aim to use natural spoken input for performing an information processing task such as automated standards, call routing or travel planning and reservations. The main functions of an SLDS are speech recognition, natural language understanding, dialog management, database access and interpretation, response generation and speech synthesis. Speech recognition, which transforms the acoustic signal into a string of words, is a key technology in any SLDS.
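
The component list above amounts to a pipeline architecture. A skeletal sketch with every stage reduced to a placeholder stub (none of this is DISC code; all names and the dummy behavior are invented):

```python
def spoken_dialog_turn(audio):
    """One turn of a generic SLDS pipeline as described in the article.
    Every stage here is a placeholder stub, not a real implementation."""
    words = recognize_speech(audio)          # acoustic signal -> word string
    meaning = understand(words)              # words -> semantic frame
    action = manage_dialog(meaning)          # decide next system action
    records = query_database(action)         # task back-end access
    reply = generate_response(action, records)
    return synthesize_speech(reply)          # text -> audio (stubbed as text)

# Minimal stubs so the sketch runs end to end on a dummy input.
recognize_speech = lambda audio: "trains to boston tomorrow"
understand = lambda w: {"task": "timetable", "dest": "boston", "day": "tomorrow"}
manage_dialog = lambda m: ("lookup", m)
query_database = lambda a: ["08:15", "09:40"]
generate_response = lambda a, r: f"There are trains at {' and '.join(r)}."
synthesize_speech = lambda text: text

print(spoken_dialog_turn(b"..."))
```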

19 citations


01 Jul 2000
TL;DR: MS-NLP as mentioned in this paper is a broad-coverage natural language understanding system that has been under development in Microsoft Research since 1991, and it can produce useful linguistic analysis of any piece of text passed to it, regardless of whether that text is formal business prose, casual email, or technical writing from an obscure scientific domain.
Abstract: MS-NLP is a broad-coverage natural language understanding system that has been under development in Microsoft Research since 1991. Perhaps the most notable characteristic of this effort has been its emphasis on arbitrarily broad coverage of natural language phenomena. The system’s goal is to produce a useful linguistic analysis of any piece of text passed to it, regardless of whether that text is formal business prose, casual email, or technical writing from an obscure scientific domain. This emphasis on handling any sort of input has had interesting implications for the design of morphological and syntactic processing. Equally interesting, though, are its implications for semantic processing. The issue of polysemy and the attendant practical task of word sense disambiguation (WSD) take on entirely new dimensions in the context of a system like this, where a word might have innumerable possible meanings. A starting assumption, for example, is that MS-NLP will routinely have to interpret words and technical word senses that are not described in standard reference dictionaries.

11 citations


Proceedings Article
01 Jan 2000
TL;DR: Issues encountered in designing and implementing a natural language based call steering application for telephone service calls are studied.
Abstract: In this paper, a dialogue system for natural language based call steering is described and studied. The system is based on natural language speech recognition and understanding within a mixed-initiative dialogue. It is implemented on the Bell Labs Speech Technology Integration Platform (BLSTIP) using dialogue and natural language understanding components from BT Laboratories. A prototype system in the operator service domain [2] is described. Various approaches to improving the acoustic and language modeling for natural language based dialogue applications are described and studied. The structure of the dialogue manager, which supports mixed-initiative dialogue efficiently, is also presented. Call classification and steering experiments were performed, and the results confirm the efficacy of the proposed approach.

1. INTRODUCTION

Natural language dialogue between human and machine is a challenge. To make a natural language based dialogue system successful, various efforts are made to improve the accuracy, flexibility and robustness of the component technologies, such as speech recognition, speech understanding, dialogue generation and management, and text-to-speech synthesis. Such a complex dialogue application imposes stringent requirements on the flexibility of the system platform. One of the drawbacks of systems deployed in the past is the limitation imposed by a finite state grammar on the language a user can use to communicate with the machine. Although this constraint alleviates the complexity of recognizing human speech, it becomes an obstacle to supporting more powerful, user-friendly and flexible mixed-initiative dialogue systems. In this paper, we study issues encountered in designing and implementing a natural language based call steering application for telephone service calls. This is a complicated application: it performs a detailed diagnostic dialogue to identify the service problem the user is experiencing, such as a troubled telephone line, and provides the desired service after receiving the user's consent and confirmation [2]. In the prototype system studied in this paper, the dialogue can run through many turns. The user's natural language requests and queries are recognized through natural language based automatic speech recognition, and there is no constraint on the way the user communicates with the system. The user can make direct requests as well as describe a problem whose resolution is identified as the outcome of the dialogue. A call classifier provides natural language understanding based on the word string from the speech recognition output, and the dialogue manager uses this understanding to determine the next appropriate system action. The organization of this paper is as follows. In Section 2, the dialogue system architecture and design are presented, which support natural language based mixed-initiative dialogue applications such as call steering and movie locators. Section 3 is devoted to natural language based speech recognition and statistical language modeling for dialogue applications. Section 4 concentrates on the dialogue manager design and automatic query generation. Call classification and steering are studied in Section 5, where results are given based on a case study in a telephone service application.
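
The call classifier maps the recognizer's word string to a call type for the dialogue manager. A hedged sketch using a naive Bayes text classifier as a stand-in (the paper does not specify this classifier; the utterances and labels below are invented):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training utterances and call-type labels for illustration;
# the paper's classifier operates on ASR output in the same spirit.
utterances = [
    "my phone line is dead",
    "there is noise on the line",
    "i want to add call waiting",
    "please remove call forwarding from my account",
]
call_types = ["repair", "repair", "order_feature", "remove_feature"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(utterances, call_types)

# The dialogue manager would use this label to pick the next action.
print(model.predict(["the line has been dead since yesterday"]))
```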

11 citations


Proceedings Article
01 Jan 2000
TL;DR: The hierarchical method improves the system accuracy, reduces the computational complexity of the translation, provides additional numerical robustness during training and decoding, and permits a more efficient packaging of the components of the natural language understanding system.
Abstract: For complex natural language understanding systems with a large number of statistically confusable but semantically different formal commands, there are many difficulties in performing an accurate translation of a user input into a formal command in a single step. This paper addresses scalability issues in natural language understanding, and describes a method for performing the translation in a hierarchical manner. The hierarchical method improves the system accuracy, reduces the computational complexity of the translation, provides additional numerical robustness during training and decoding, and permits a more efficient packaging of the components of the natural language understanding system.
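
The paper describes the hierarchy abstractly. A hedged sketch of the two-stage idea, with invented command families and simple keyword scorers standing in for the paper's statistical models:

```python
# Stage 1 routes an input to a command family; stage 2 picks a formal
# command within that family only, so the second classifier never has
# to separate semantically distant commands. All names are invented.

FAMILY_KEYWORDS = {
    "mail":     {"message", "mail", "send", "inbox"},
    "calendar": {"meeting", "schedule", "appointment", "cancel"},
}
COMMAND_KEYWORDS = {
    "mail":     {"MAIL_SEND": {"send"}, "MAIL_READ": {"read", "inbox"}},
    "calendar": {"CAL_ADD": {"schedule"}, "CAL_CANCEL": {"cancel"}},
}

def score(words, keywords):
    return len(words & keywords)

def translate(utterance):
    words = set(utterance.lower().split())
    # Stage 1: few, well-separated classes.
    family = max(FAMILY_KEYWORDS, key=lambda f: score(words, FAMILY_KEYWORDS[f]))
    # Stage 2: compare only commands within the chosen family.
    cmds = COMMAND_KEYWORDS[family]
    return max(cmds, key=lambda c: score(words, cmds[c]))

print(translate("please send a message to ann"))   # -> MAIL_SEND
print(translate("cancel the meeting on friday"))   # -> CAL_CANCEL
```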

Proceedings Article
01 Jan 2000
TL;DR: The paper shows the effects of combining a stochastic grammar with a word bigram language model by log-linear interpolation, and reports attribute error rate (AER) results measured on the Philips corpus of train timetable inquiries that show a relative reduction of up to 9%.
Abstract: The paper shows the effects of combining a stochastic grammar with a word bigram language model by log-linear interpolation. It is divided into three main parts: the first part derives the stochastic grammar model and gives a sound theoretical motivation to incorporate word dependencies such as bigrams. The second part describes two different algorithmic approaches to the combination of both models by log-linear interpolation. The third part reports attribute error rate (AER) results measured on the Philips corpus of train timetable inquiries that show a relative reduction of up to 9%.

1. STOCHASTIC MODEL OF NATURAL LANGUAGE UNDERSTANDING

The Philips Natural Language Understanding (NLU) module is used in automated inquiry systems (AIS), such as train timetable enquiries [2], to analyze the word sequence of a user utterance. It does not try to find parse trees that cover the whole word sequence but breaks up the sequence into chunks, where each chunk belongs to a semantically meaningful concept. A stochastic context-free grammar is used to derive the word chunk from a concept. The chunking is useful since the spontaneous speech that occurs in dialogue applications is very ungrammatical. Thus, a robust NLU model concentrates on the useful parts of a user utterance. Other recent works also employ some kind of chunking, e.g. [6, 9]. The stochastic model of the Philips NLU module was developed by H. Aust in [1, p. 81]. Here, we show that this model can be derived from Bayes' decision rule. This derivation gives a sound theoretical motivation to incorporate word dependencies such as bigrams. Bayes' decision rule finds the most likely concept sequence $\hat{K} = \hat{k}_1, \ldots, \hat{k}_s$, given the sequence $O = o_1, \ldots, o_t$ of acoustic observations. The derivation of the concept sequence $K$ does not directly depend on the acoustic observations $O$ but on a word sequence $W = w_1, \ldots, w_N$ derived from $O$ as an intermediate result:
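
The combination itself is easy to state concretely. A hedged numeric sketch of log-linear interpolation (probabilities and the weight are invented; a real decoder would renormalize the scores and tune the weight on held-out data):

```python
import math

def loglinear(p_grammar, p_bigram, lam=0.6):
    """Unnormalized log-linear combination of two language models:
    score = lam * log P_grammar + (1 - lam) * log P_bigram."""
    return lam * math.log(p_grammar) + (1 - lam) * math.log(p_bigram)

# Two candidate hypotheses scored under both models (invented numbers):
print(loglinear(p_grammar=0.20, p_bigram=0.05))  # candidate A
print(loglinear(p_grammar=0.02, p_bigram=0.30))  # candidate B
```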


01 Jan 2000
TL;DR: This paper describes a design for and current progress towards building DAIENU, the Domain knowledge Authoring Interface for Extractive Natural language Understanding, to facilitate the rapid development of robust and efficient natural language understanding interfaces for tutoring systems.
Abstract: A variety of studies in recent years have focused on assessing the relative effectiveness of different human tutorial strategies and investigating the role of language interaction in such strategies, including (Chi et al. 1989) (Chi et al. 1994) (VanLehn, Jones, & Chi 1992) (VanLehn, Siler, & Baggett 1998) (Rosé et al. 2000). These studies leave open the questions of whether the same tutoring strategies that are effective in human tutoring can be emulated in ITSs with similar effectiveness, and whether this emulation can in fact be achieved in a cost-effective way. Without answers to these questions, the direct relevance of such findings about human tutorial dialogue to the field of intelligent tutoring is called into question. While interest in dialogue interfaces for tutoring systems is rapidly growing, real progress in this direction has been greatly hindered by the tremendous time, effort, and expertise required to construct such interfaces. This paper describes a design for and current progress towards building DAIENU, the Domain knowledge Authoring Interface for Extractive Natural language Understanding. The purpose of the DAIENU tool set is to facilitate the rapid development of robust and efficient natural language understanding interfaces for tutoring systems. The DAIENU tool set is part of the larger Knowledge Construction Dialogue Authoring Tool Suite developed in the context of the Atlas project to automate the authoring of all domain-specific knowledge sources required to build a domain-specific dialogue-based tutoring system using the general Atlas architecture (Freedman et al. 2000). In particular, the current prototype version of the tool suite automates the construction of recipes for the APE tutorial dialogue planner (Freedman 2000) and semantic rules for the LCFlex robust parser (Rosé and Lavie, to appear). The prototype KCD Authoring Tool Suite is currently operational but continuing to be developed and refined. This prototype tool suite was recently used to develop knowledge sources for implementing directed lines of reasoning targeting 50 physics rules covering all aspects of Newtonian

Journal ArticleDOI
TL;DR: This special issue brings together representative views on what has come to be known as "best practice" in the development and evaluation of spoken language dialogue systems (SLDSs) in the context of the European Esprit project DISC, which ran from June 1997 till February 2000.
Abstract: This special issue brings together representative views on what has come to be known as "best practice" in the development and evaluation of spoken language dialogue systems (SLDSs). The issue was initiated in the context of the European Esprit project DISC, which ran from June 1997 till February 2000. DISC's main goal was to identify current practice in both the development and the evaluation of SLDSs, in order to arrive at a useful definition and description of best practice. The project has resulted in a collection of guidelines which are intended for different target groups, in particular developers, deployers and customers. In the last few years, interest in SLDSs has increased enormously. At present there is a large number of systems available, many of them for commercial use. Their number is growing rapidly, and so are the variety of their functionalities and the diversity of their application domains. The tasks that advanced systems are able to perform are often more complex, less stereotypical, and are often carried out in the context of several interconnected domains of application. With these advances have come higher expectations of the naturalness and intelligence with which SLDSs fulfill their assignments, and as a consequence the interest in such systems has grown even more, both within academic and commercial circles. As far as natural human-system interaction is concerned, one significant change in SLDS design concerns the interaction between natural language understanding and dialogue management. Here we see a clear tendency towards models that incorporate a substantial amount of discourse semantics and make use of some conception of context-change. This allows for more natural interactions between the system and its human users, due on the one hand to the system's improved ability to compute the intended meaning of the user's input and on the other to the increased sophistication of the strategies it uses for planning its own responses. Such improved capacities are crucial when the system is to leave more of the initiative to the user, instead of keeping the dialogue on a narrowly circumscribed path of largely predictable exchanges. Further, there is a tendency to combine spoken language human-system interaction with other modalities of information exchange and representation (e.g., images and gestures), asking for both modality-specific and modality-integrating syntactic and semantic processing capabilities. All these developments have led to a situation in which there is a great need, shared by developers, deployers and customers alike, for effective guidelines which will enable them to make accurate and successful design and implementation decisions, in accordance with a broad consensus on what constitutes best practice in this particular engineering domain.

01 Jan 2000
TL;DR: The transformation-based parsing technique for language understanding is introduced and found to be effective in disambiguating among the various kinds of numeric expressions prevalent in the stocks domain, as well as in inferring possible semantic categories for out-of-vocabulary words.
Abstract: This work demonstrates that our natural language understanding framework can be applied across application domains and languages with ease. Approaches towards language understanding generally involve much handcrafting, e.g. in writing grammars or annotating corpora, hence portability is a desirable trait in the development of language understanding systems. Our framework for natural language understanding couples semantic tagging with Belief Networks for communicative goal inference, and has delivered promising results in the ATIS (Air Travel Information Systems) domain. This work applies the approach to the stocks domain. Furthermore, the approach is extended to Chinese, to support a biliteral / trilingual (English with two Chinese dialects) spoken dialog system known as ISIS. We introduce the transformation-based parsing technique for language understanding, and find that it is effective in disambiguating among the various kinds of numeric expressions prevalent in the stocks domain, as well as in inferring possible semantic categories for out-of-vocabulary words. The nonterminal categories produced by parsing are fed to Belief Networks trained on English or Chinese queries for inferring the user's communicative goal. Our experiments gave a goal identification performance of 94% and 93% for Chinese and English respectively.
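
A naive-Bayes-style network is one simple form of belief network for inferring a goal from parsed semantic categories. In the hedged sketch below, all categories, goals, and probabilities are invented; the paper's actual network structure and parameters are not assumed:

```python
import math

# Each semantic category is a binary present/absent evidence node.
PRIOR = {"QUOTE_QUERY": 0.5, "BUY_ORDER": 0.5}
P_CAT_GIVEN_GOAL = {
    "QUOTE_QUERY": {"STOCK_NAME": 0.9, "PRICE_WORD": 0.8, "NUMERIC": 0.2},
    "BUY_ORDER":   {"STOCK_NAME": 0.9, "PRICE_WORD": 0.3, "NUMERIC": 0.9},
}

def infer_goal(categories):
    """Pick the goal with the highest posterior given observed categories."""
    scores = {}
    for goal, prior in PRIOR.items():
        logp = math.log(prior)
        for cat, p in P_CAT_GIVEN_GOAL[goal].items():
            logp += math.log(p if cat in categories else 1.0 - p)
        scores[goal] = logp
    return max(scores, key=scores.get)

print(infer_goal({"STOCK_NAME", "PRICE_WORD"}))   # -> QUOTE_QUERY
print(infer_goal({"STOCK_NAME", "NUMERIC"}))      # -> BUY_ORDER
```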

Journal ArticleDOI
01 Aug 2000
TL;DR: P parsing algorithms that recreate the derivation structure starting with a lexicon and the surface form of a sentence are proposed, which leads to linguistically based algorithms for determining possible meanings for sentences that are ambiguous due to quantifier scope.
Abstract: In this paper, we discuss how recent theoretical linguistic research focusing on the Minimalist Program (MP) (Cho95, Mar95, Zwa94) can be used to guide the parsing of a useful range of natural language sentences and the building of a logical representation in a principles-based manner. We discuss the components of the MP and give an example derivation. We then propose parsing algorithms that recreate the derivation structure starting with a lexicon and the surface form of a sentence. Given the approximated derivation structure, MP principles are applied to generate a logical form, which leads to linguistically based algorithms for determining possible meanings for sentences that are ambiguous due to quantifier scope. In this paper, we introduce a framework for describing the grammar of natural languages due to Noam Chomsky called the Minimalist Program (MP). We investigate how to build a parser that produces syntax trees conforming to the MP and how aspects of language that have bearing on meaning, but cannot be conveniently captured during parsing, can be processed. The linguistic framework is discussed first, followed by the computational implementation. We first give a brief overview of the Minimalist Program (Chomsky 1995; Marantz 1995; Zwart 1994). The MP is the latest incarnation of Principles and Parameters grammars, which define language structure in terms of well-motivated principles and parameters that adapt the principles to various natural languages. However, in this paper, we do not describe a parser that implements the MP in all its cognitive implications, which are still not completely understood and are being investigated. We feel that it is a worthwhile exercise to use the basic principles of the MP to obtain rules and structures that enable the traditional implementation of a parser. Parsing, however, is only a part of the processing. There are many linguistic phenomena that can be handled only after a syntax tree has been obtained. We discuss, among other issues, further processing of the parse to handle issues in quantifier scoping. We think this part of the paper is interesting since it shows how vexing linguistic phenomena can be handled using simple computational techniques. In writing the parser, we use a set of sentence types that have been considered by those who have written parsers motivated by earlier versions of Principles and Parameters grammars. Merlo, in his paper on a parser based on an earlier version of Principles and Parameters grammar, called the Government and Binding Theory (GB), wrote that the set of sentences he chose constitutes a
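
One concrete piece of the post-parsing stage is enumerating quantifier-scope readings. A hedged sketch (the sentence, quantifiers, and logical-form notation are invented; this is not the paper's algorithm):

```python
from itertools import permutations

# "Every student read some book" has two scope readings; enumerating
# orderings of the parsed quantifiers makes the ambiguity explicit.
quantifiers = [("every", "x", "student"), ("some", "y", "book")]
body = "read(x, y)"

def scope_readings(quants, body):
    """Yield one logical-form string per quantifier ordering."""
    for order in permutations(quants):
        prefix = " ".join(f"{q} {v}:{r}." for q, v, r in order)
        yield f"{prefix} {body}"

for reading in scope_readings(quantifiers, body):
    print(reading)
# every x:student. some y:book. read(x, y)   (each student reads a book)
# some y:book. every x:student. read(x, y)   (one book read by all)
```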

Proceedings Article
12 Apr 2000
TL;DR: A general system architecture is presented which integrates requirements from the analysis of single sentences, as well as those of referentially linked sentences forming cohesive texts, for soundness and validity of the generated text representation structures.
Abstract: SynDiKATe comprises a family of natural language understanding systems for automatically acquiring knowledge from real-world texts (e.g., information technology test reports, medical finding reports), and for transferring their content to formal representation structures which constitute a corresponding text knowledge base. We present a general system architecture which integrates requirements from the analysis of single sentences, as well as those of referentially linked sentences forming cohesive texts. Properly accounting for text cohesion phenomena is a prerequisite for the soundness and validity of the generated text representation structures. It is also crucial for any information system application making use of automatically generated text knowledge bases in a reliable way.

Patent
30 Oct 2000
TL;DR: In this paper, a command prediction system for natural language understanding systems is presented, which includes a user interface for receiving commands from a user, and a command predictor receives commands from the user interface and predicts at least one next command which is likely to be presented by the user based on a command history.
Abstract: The present invention relates to a command prediction system for natural language understanding systems which includes a user interface for receiving commands from a user. A command predictor receives the commands from the user interface and predicts at least one next command which is likely to be presented by the user based on a command history. A probability calculator is included in the command predictor for determining a probability for each of the at least one next command based on the command history such that a list of predicted commands and their likelihood of being a next command are provided.
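
The patent's probability calculator can be illustrated with a simple bigram model over the command history; the patent does not specify an estimator, so the sketch below is an assumption-laden stand-in with invented commands:

```python
from collections import Counter, defaultdict

class CommandPredictor:
    """Estimate P(next command | previous command) from the user's
    command history using bigram counts (the estimator is assumed)."""

    def __init__(self):
        self.bigrams = defaultdict(Counter)

    def observe(self, history):
        for prev, nxt in zip(history, history[1:]):
            self.bigrams[prev][nxt] += 1

    def predict(self, last_command, k=3):
        counts = self.bigrams[last_command]
        total = sum(counts.values()) or 1
        # List of (command, probability), most likely first.
        return [(c, n / total) for c, n in counts.most_common(k)]

p = CommandPredictor()
p.observe(["open", "edit", "save", "open", "edit", "print"])
print(p.predict("edit"))   # [('save', 0.5), ('print', 0.5)]
```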

Posted Content
TL;DR: This paper presents a semantic parsing approach for unrestricted texts that obtains a case-role analysis, in which the semantic roles of the verb are identified, and that correctly identifies more than 73% of possible semantic case-roles.
Abstract: This paper presents a semantic parsing approach for unrestricted texts. Semantic parsing is one of the major bottlenecks of Natural Language Understanding (NLU) systems and usually requires building expensive resources not easily portable to other domains. Our approach obtains a case-role analysis, in which the semantic roles of the verb are identified. In order to cover all the possible syntactic realisations of a verb, our system combines its argument structure with a set of general semantically labelled diathesis models. Combining them, the system builds a set of syntactic-semantic patterns with their own case-role representation. Once the patterns are built, we use an approximate tree pattern-matching algorithm to identify the most reliable pattern for a sentence. The pattern matching is performed between the syntactic-semantic patterns and the feature-structure tree representing the morphological, syntactic and semantic information of the analysed sentence. For sentences assigned to the correct model, the semantic parsing system we present correctly identifies more than 73% of possible semantic case-roles.
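
A hedged sketch of approximate pattern matching against a feature-structure tree, with trees as nested dicts; the labels, patterns, and scoring scheme are invented and far simpler than the paper's algorithm:

```python
def match_score(pattern, tree):
    """Fraction of pattern nodes found at the same place in the tree,
    allowing partial (approximate) matches."""
    if not isinstance(pattern, dict):
        return 1.0 if pattern == tree else 0.0
    hits, total = 0.0, 0
    for key, sub in pattern.items():
        total += 1
        if isinstance(tree, dict) and key in tree:
            hits += match_score(sub, tree[key])
    return hits / total if total else 1.0

# One invented pattern per diathesis model; pick the most reliable one.
patterns = {
    "agent-theme": {"subj": {"role": "agent"}, "obj": {"role": "theme"}},
    "theme-only":  {"subj": {"role": "theme"}},
}
sentence_tree = {"subj": {"role": "agent", "num": "sg"},
                 "obj": {"role": "theme"}}

best = max(patterns, key=lambda p: match_score(patterns[p], sentence_tree))
print(best)   # -> agent-theme
```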

Patent
01 Sep 2000
TL;DR: In this article, a method and system, which may be implemented by employing a program, perform method steps of a natural language understanding (NLU) system which include tagging, 200, recognized words of a command input to the NLU system to associate the command with a context, and translating, 300, the command to at least one formal command based on the tagged words.
Abstract: A method and system, which may be implemented by employing a program, perform method steps of a natural language understanding (NLU) system which include tagging, 200, recognized words of a command input to the NLU system to associate the command with a context, and translating, 300, the command to at least one formal command based on the tagged words. A top ranked formal command is determined, 400, based on scoring of the tagged recognized words and scoring translations of the at least one formal command. Whether the top ranked formal command is accepted is determined by comparing a feature vector of the top ranked formal command to representations of feature vectors stored in an accept model. The top ranked formal command is executed, 500, if accepted and incorrect commands are prevented from execution.
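
The accept/reject step compares the top command's feature vector to vectors stored in the accept model. A hedged nearest-neighbor sketch (the features, stored vectors, threshold, and Euclidean metric are all assumptions; the patent does not specify them):

```python
import math

# Feature vectors of previously accepted commands (invented numbers).
ACCEPT_MODEL = [
    [0.9, 0.8, 0.7],
    [0.8, 0.9, 0.6],
]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def accepted(feature_vector, threshold=0.3):
    """Accept if the vector is close to something accepted before."""
    return min(distance(feature_vector, v) for v in ACCEPT_MODEL) < threshold

name, vec = "SET_ALARM", [0.85, 0.85, 0.65]   # hypothetical top command
if accepted(vec):
    print(f"execute {name}")   # close to a stored vector: run it
else:
    print(f"reject {name}")    # prevented from execution
```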

Book ChapterDOI
14 Oct 2000
TL;DR: A theoretical task-oriented natural language understanding (NLU) model is provided in this paper, together with a practical application based on it; the model and methods are also a useful reference for processing problems in other domains.
Abstract: A theoretical task-oriented natural language understanding (NLU) model is provided in this paper, together with a practical application based on it. The model is based on sentence frameworks (a regular language model for specific areas), chunks, etc., and consists of the following modules: post-processing, NLU, target searching, connection, and action. Every module is discussed in detail, taking the application as an example. The NLU module is of particular importance, and we place emphasis on it. The success of our application shows that the model provided in this paper is feasible in specific task areas. The sentence framework, together with chunks, is very important in expressing sentence meaning, and verbs are of great importance in sentence analysis. This model and the methods provided in this paper can be used to build applications in various areas. Furthermore, the model and methods are a useful reference for processing problems in other domains.
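
A sentence framework with chunk slots can be approximated by a regular pattern anchored on the verb. A hedged sketch with invented frameworks, actions, and an invented example sentence:

```python
import re

# Each framework pairs a formal action with a verb-anchored pattern
# whose named groups extract the chunks. All entries are invented.
FRAMEWORKS = [
    ("TURN_ON", re.compile(r"turn on (?P<device>.+)")),
    ("MOVE",    re.compile(r"move (?P<object>.+) to (?P<place>.+)")),
]

def understand(sentence):
    s = sentence.lower().strip()
    for action, pattern in FRAMEWORKS:
        m = pattern.fullmatch(s)
        if m:
            return action, m.groupdict()   # action plus extracted chunks
    return None, {}

print(understand("move the red box to the table"))
# ('MOVE', {'object': 'the red box', 'place': 'the table'})
```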

01 Aug 2000
TL;DR: A semantic feature representation for use in practical dialogue systems is proposed and it is argued that it can offer advantages in terms of lexicon development and portability and can also be useful for other system modules that do logical inference.
Abstract: Reasoning about semantic classes and determining compatibility of the words in a given context is an important procedure used in many modules of natural language understanding systems. However, most existing systems do not devote much attention to their ontological knowledge representations, resulting in implementations that are not portable to other domains. At the same time, statistical methods are more robust and less labor-intensive to develop, but typically result in models that are not easily interpretable by humans. We propose a semantic feature representation for use in practical dialogue systems and argue that it can offer advantages in terms of lexicon development and portability (in particular for defining selectional restrictions) and can also be useful for other system modules that do logical inference. We then propose to develop statistical methods allowing us to learn parts of our representation from corpus data.
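
Selectional restrictions over semantic features reduce to a set-inclusion check. A hedged sketch with an invented feature inventory and lexicon (the paper's actual representation is not assumed):

```python
# Each word carries a set of semantic features; a verb's argument slot
# states the features its filler must have. All entries are invented.
LEXICON = {
    "soup":    {"physical", "edible"},
    "idea":    {"abstract"},
    "student": {"physical", "animate", "human"},
}
# eat(agent, patient): agent must be animate, patient must be edible.
RESTRICTIONS = {"eat": {"agent": {"animate"}, "patient": {"edible"}}}

def compatible(verb, slot, word):
    """True if the word's features cover the slot's required features."""
    return RESTRICTIONS[verb][slot] <= LEXICON[word]

print(compatible("eat", "agent", "student"))   # True
print(compatible("eat", "patient", "soup"))    # True
print(compatible("eat", "patient", "idea"))    # False: ideas aren't edible
```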

Journal Article
TL;DR: The research content of natural language comprehension is briefly recounted, the characteristics of Chinese sentences are analysed, and a summary of the present situation and development trends of natural language understanding inside and outside of China is given.
Abstract: Natural language comprehension is one of the important subjects of artificial intelligence research, and it is difficult to study. In this paper, we first briefly recount the research content of natural language comprehension. Then we analyse the characteristics of Chinese sentences. Finally, we give a summary of the present situation and development trends of natural language understanding inside and outside of China.


Book ChapterDOI
01 Jun 2000
TL;DR: This research project offers a connectionist alternative to Buchheit's symbolic inference module for INFANT called the Connectionist Inference Mechanism (CIM), a hybrid cognitive model that combines the advantages of the symbolic approach, local representation, and parallel distributed processing.
Abstract: Previous research has shown that connectionist models are suitable for cognitive and natural language processing tasks. An inference mechanism is a key element in commonsense reasoning in a natural language understanding system. This research project offers a connectionist alternative to Buchheit's symbolic inference module for INFANT called the Connectionist Inference Mechanism (CIM). CIM is a hybrid cognitive model that combines the advantages of the symbolic approach, local representation, and parallel distributed processing. Moreover, it makes good use of its modular structure. Several modules work together in CIM, including memory, neural networks, and a binding set, to perform the inference generation. Besides rule application capability, CIM is also able to perform variable binding. A number of experiments have shown that CIM can make inferences appropriately.

Book ChapterDOI
01 Jan 2000
TL;DR: A generic device based on a combined architecture of fuzzy logic representation with neural network learning and adaptation is designed, capable of learning in real-time, based on environmental (user) feedback and past experience, and is capable of adjusting its internal perception of the world with every interaction with the environment that holds some change or innovation.
Abstract: In this and the following chapter we discuss a neuro-fuzzy approach to natural language processing. We design a generic device based on a combined architecture of fuzzy logic representation with neural network learning and adaptation. The device uses natural language to interactively communicate with its human environment; it follows the conversation's context, understands the meaning of each sentence, and then executes it. The device is capable of learning in real time, based on environmental (user) feedback and past experience, and of adjusting its internal perception of the world with every interaction with the environment that holds some change or innovation. In the following chapter, we introduce two novel fuzzy learning algorithms that the device uses to learn new abstract terms and to adjust existing ones in the course of performing its task. We present a real-world application to demonstrate the principles of the device.