
Showing papers on "Natural language understanding published in 1992"


Journal ArticleDOI
01 Mar 1992-Language
TL;DR: The role of knowledge in language comprehension is examined, along with discourse understanding and natural language understanding by computers and people.
Abstract: Contents: Preface; 1. Introduction; 2. Language and meaning: representing and remembering discourse; 3. Syntax and parsing processes; 4. The role of knowledge in language comprehension; 5. Understanding coherent discourse; 6. Theme; 7. Inference processes; 8. Understanding stories; 9. Question answering and sentence verification; 10. Natural language understanding by computers - and people. References; author index; subject index; acknowledgments.

81 citations


01 Jan 1992
TL;DR: This work proposes a probabilistic basis for natural language understanding models, and argues that probability theory provides an elegant basis for evidential interpretation, to model automatic inference in language understanding.
Abstract: This work proposes a probabilistic basis for natural language understanding models. It has become apparent that syntax and semantics need to be highly integrated, especially to understand constructs like nominal compounds, but inadequate modelling tools have hindered efforts to replace the traditional parser-interpreter pipeline architecture. Qualitatively, associative frameworks like spreading activation and marker passing produce the desired interactions, but their reliance on ad hoc numeric weights makes scaling them up to interestingly large domains difficult. On the other hand, statistical approaches ground numeric measures over large domains, but have thus far failed to incorporate the structural generalizations found in traditional models. A major reason for this is the inability of most statistical language models to represent compositional constraints; this is related to the variable binding problem in neural networks. The proposed model attacks these issues from three directions. First, it distinguishes two fundamentally different mental processing modes: automatic and controlled inference. Automatic inference is pre-attentive, subconscious, reflexive, fairly instantaneous, associative, and highly heuristic; this delimits the domain of parallel interactive processing. Automatic inference is motivated by both resource bounds and empirical criteria, and is responsible for much if not most of parsing and semantic interpretation. Second, the nature of mental representations is defined more precisely. The proposed cognitive ontology includes mental image, lexical semantic, conceptual, and lexicosyntactic modules. Automatic inference extends over all modules. The modular ontology approach accounts for a range of subtle meaning distinctions, is consistent with psycholinguistic and neural evidence, and helps reduce the complexity of the concept space. Third, probability theory provides an elegant basis for evidential interpretation, to model automatic inference in language understanding. A uniform representation for all the modules is proposed, compatible with both feature-structures and semantic networks. Probabilistic, associative extensions are then made to those frameworks. Theoretical and approximate maximum entropy methods for evaluating probabilities are proposed, as well as the basis for a normative distribution for learning and generalization.
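To make the third direction concrete: a log-linear (maximum-entropy) model scores each candidate interpretation by exponentiating a weighted sum of evidential features. The Python sketch below only illustrates that general scheme; the candidate readings, feature names, and weights are invented here and are not taken from the thesis.

import math

def maxent_probs(candidates, features, weights):
    # p(y) is proportional to exp(sum_i w_i * f_i(y))
    scores = {y: math.exp(sum(weights[f] * v for f, v in features(y).items()))
              for y in candidates}
    z = sum(scores.values())  # partition function
    return {y: s / z for y, s in scores.items()}

# Candidate readings of the nominal compound "water meter" (invented).
candidates = ["meter-that-measures-water", "meter-made-of-water"]

def features(reading):
    # Hypothetical evidential features from lexical and conceptual modules.
    return {
        "head-is-instrument": 1.0 if "measures" in reading else 0.0,
        "modifier-is-substance": 1.0,
        "implausible-made-of-liquid": 1.0 if "made-of" in reading else 0.0,
    }

weights = {"head-is-instrument": 2.0,
           "modifier-is-substance": 0.3,
           "implausible-made-of-liquid": -3.0}

print(maxent_probs(candidates, features, weights))

With these made-up weights, the strong penalty on a "made of liquid" reading outweighs the other evidence, so the instrument reading of the compound receives almost all of the probability mass.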

40 citations


Journal ArticleDOI
01 Nov 1992
TL;DR: Speech activated manipulator (SAM), a reasoning robotic system with sensory capabilities that interacts with a human partner using natural language, is described; with its error recovery rules, the system is robust and resistant to user errors.
Abstract: Speech activated manipulator (SAM), a reasoning robotic system with sensory capabilities that interacts with a human partner using natural language, is described. The robot understands, in real time, about 1041 semantically meaningful naturally spoken English language sentences using a vocabulary of about 200 words. SAM includes developments in mechanical control, real-time operating systems, multiprocessor communication and synchronization, kinematics, sensors and perception techniques, speech recognition and natural language understanding, and robotic reasoning with gripper and arm motion planning. Speech recognition is augmented with semantic analysis and evaluation to form a complete speech understanding system. Used in conjunction with error recovery rules in the robot expert and frame-based knowledge system, SAM is robust and resistant to user errors. The most interesting aspects of the SAM system are described. Observations and experiences are discussed along with some advice for those interested in building similar systems.
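The paper does not reproduce its grammar, but the overall shape of such a speech understanding loop (match the recognized words against semantically meaningful command frames, evaluate the arguments, and fall back to error recovery on failure) can be sketched as follows. All frames, vocabulary, and names below are invented for illustration:

# Toy semantic-grammar matcher in the spirit of SAM's speech understanding.
COMMAND_FRAMES = {                     # hypothetical command grammar
    ("pick", "up"): {"action": "grasp", "needs": "object"},
    ("put", "down"): {"action": "release", "needs": "object"},
    ("move", "to"): {"action": "move", "needs": "location"},
}
KNOWN_OBJECTS = {"block", "cube", "peg"}
KNOWN_LOCATIONS = {"bin", "table"}

def understand(words):
    for key, frame in COMMAND_FRAMES.items():
        if tuple(words[:len(key)]) == key:
            args = words[len(key):]
            filler = args[-1] if args else None
            domain = KNOWN_OBJECTS if frame["needs"] == "object" else KNOWN_LOCATIONS
            if filler in domain:
                return {"action": frame["action"], frame["needs"]: filler}
            # Semantic evaluation failed: trigger error recovery, not a crash.
            return {"error": "unknown " + frame["needs"] + ": " + str(filler)}
    return {"error": "no matching command frame"}

print(understand(["pick", "up", "block"]))    # {'action': 'grasp', 'object': 'block'}
print(understand(["move", "to", "bin"]))      # {'action': 'move', 'location': 'bin'}
print(understand(["pick", "up", "banana"]))   # error recovery case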

29 citations


Book ChapterDOI
05 Apr 1992
TL;DR: A model for exchange structure, the systemic flowchart model, is outlined, and it is shown how it and rhetorical structure theory (RST) can be related to each other in an integrated overall model.
Abstract: This paper identifies a major problem in planning discourse, and then points to a solution. The problem is that of the relationship between models for monologue and dialogue. Rhetorical structure theory (RST) is selected as the current best prospect for modelling monologue. A model for exchange structure is then outlined — the systemic flowchart model — and we conclude by showing how the two can be related to each other in an integrated overall model.

26 citations


Journal ArticleDOI
TL;DR: This chapter discusses database architecture for network services, a relational model for large shared data banks, and the case for orderly sharing in a database system.
Abstract: 2. Bowen, T.F., Gopal, G., Herman, G.E. and Mansfield, W.H. A scalable database architecture for network services. In Proceedings of the Eighth International Switching Symposium (Stockholm, Sweden, May 1990). 3. Codd, E.F. A relational model for large shared data banks. Commun. ACM 14, 6 (June 1970), 377-387. 4. Date, C.J. An Introduction to Database Systems. Vol. 1, Fourth Ed. Addison-Wesley, Reading, Mass., 1986. 5. Eswaran, K.P., Gray, J.N., Lorie, R.A. and Traiger, I.L. The notions of consistency and predicate locks in a database system. Commun. ACM 19, 11 (Nov. 1976), 624-633. 6. Gawlick, D. Processing hot spots in high performance systems. In Proceedings of Spring COMPCON '85. IEEE Computer Society, Los Alamitos, Calif., 1985. 7. Gold, I. and Boral, H. The power of the private workspace model. Inf. Syst. 11, 1 (1986), 1-7. 8. Goldberg, D., Nichols, D., Oki, B. and Terry, D. Using collaborative filtering to weave an information tapestry. Commun. ACM 35, 12 (Dec. 1992). 9. Herman, G. and Gopal, G. The case for orderly sharing. In Lecture Notes in Computer Science on High Performance Transaction Systems, D. Gawlick, M. Haynie, and A. Reuter, Eds.

25 citations


Journal ArticleDOI
TL;DR: STASEL (Stylistic Treatment At the Sentence Level) is an intelligent computer-assisted language instruction system designed to teach principles of syntactic style to students of English; it uses artificial intelligence techniques in natural language processing to analyze free-form input sentences interactively.
Abstract: This article describes an intelligent computer-assisted language instruction system that is designed to teach principles of syntactic style to students of English. Unlike conventional style checkers, the system performs a complete syntactic analysis of its input, and takes the student's stylistic intent into account when providing a diagnosis. Named STASEL, for Stylistic Treatment At the Sentence Level, the system is specifically developed for the teaching of style, and makes use of artificial intelligence techniques in natural language processing to analyze free-form input sentences interactively.

21 citations


Journal ArticleDOI
TL;DR: This paper inspects techniques to overcome both syntactically and semantically ill-formed input in sentence parsing and then looks briefly into more recent ideas concerning the extraction of information from texts, and the related question of the role that linguistic research plays in this game.
Abstract: Practical natural language understanding systems used to be concerned with very small miniature domains only: They knew exactly what potential text might be about, and what kind of sentence structures to expect. This optimistic assumption is no longer feasible if NLU is to scale up to deal with text that naturally occurs in the "real world". The key issue is robustness: The system needs to be prepared for cases where the input data does not correspond to the expectations encoded in the grammar. In this paper, we survey the approaches towards the robustness problem that have been developed throughout the last decade. We inspect techniques to overcome both syntactically and semantically ill-formed input in sentence parsing and then look briefly into more recent ideas concerning the extraction of information from texts, and the related question of the role that linguistic research plays in this game. Finally, the robust sentence parsing schemes are classified on a more abstract level of analysis.
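As a concrete illustration of one fallback strategy of the kind surveyed here: attempt a full parse, and when the input violates the grammar's expectations, return an analysis of the well-formed fragments rather than failing outright. The toy patterns below stand in for a real grammar:

import re

NP = re.compile(r"\b(?:the|a)\s+\w+")  # toy noun-phrase pattern
S = re.compile(r"^(?:the|a)\s+\w+\s+\w+s\s+(?:the|a)\s+\w+$")  # toy sentence pattern

def parse(sentence):
    if S.match(sentence):
        return {"status": "full-parse", "analysis": sentence}
    # Fallback: recover well-formed fragments instead of failing outright.
    return {"status": "partial-parse", "fragments": NP.findall(sentence)}

print(parse("the dog chases a cat"))        # expectations met: full parse
print(parse("uh the dog um chases cat"))    # ill-formed input: fragments only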

20 citations


01 Jan 1992
TL;DR: Some of the lessons learned in adapting an existing natural language interface to accept spoken input are presented, and the consequences and implications of this addition of speech recognition capabilities are discussed.
Abstract: The addition of speech recognition capabilities would seem to be a logical and desirable extension to a keyboard-entry natural language understanding interface. However, because of the limitations of some speech recognition technologies, this addition can affect the structure and flexibility of the interface. This paper discusses the consequences and implications of this addition, and presents some of the lessons learned in adapting an existing natural language interface to accept spoken input.

20 citations


Journal ArticleDOI
TL;DR: A system developed for interpreting medical natural language in the domain of symptoms and diagnoses from complete discharge summaries and locating the corresponding category in the International Classification of Diseases, through indexing by the Systematized Nomenclature of Medicine.
Abstract: Developing tools for natural language understanding by computers represents an important and intense field of research. This paper describes a system developed for interpreting medical natural language in the domain of symptoms and diagnoses from complete discharge summaries and locating the corresponding category in the International Classification of Diseases, through indexing by the Systematized Nomenclature of Medicine. The indexing program makes use of the MEID dictionary and some auxiliary semantic databases for identifying adjectival forms, synonyms, hypernyms and other semantic relations while searching for the longest consistent match into SNOMED. A further subdivision of the SNOMED structure was also proposed in order to find the hierarchically superior representative of a conceptual class when this association is not assigned by the related SNOMED code number. The system can be used with any language that possesses a translation of SNOMED and ICD. The knowledge base was built using a conversion file that maps the terms of the nomenclature into the classification, which can be improved by learning from users.
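The longest-consistent-match step can be pictured as follows: normalize each word through the synonym tables, then greedily prefer the longest term sequence that has an entry in the nomenclature. The terms, synonyms, and codes in this sketch are invented placeholders, not actual SNOMED content:

SYNONYMS = {"cardiac": "heart", "renal": "kidney"}
NOMENCLATURE = {                 # hypothetical term -> code mapping
    ("heart", "failure"): "T-HF001",
    ("heart",): "T-H000",
    ("kidney", "stone"): "T-KS002",
}

def normalize(word):
    return SYNONYMS.get(word, word)

def longest_match(words):
    words = [normalize(w) for w in words]
    matches, i = [], 0
    while i < len(words):
        # Try the longest span starting at position i first.
        for j in range(len(words), i, -1):
            code = NOMENCLATURE.get(tuple(words[i:j]))
            if code:
                matches.append((" ".join(words[i:j]), code))
                i = j
                break
        else:
            i += 1  # no match starting here; skip one word
    return matches

print(longest_match("patient with cardiac failure".split()))
# -> [('heart failure', 'T-HF001')], not the shorter match ('heart',)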

20 citations


Journal ArticleDOI
TL;DR: Researchers are exploring the application of artificial intelligence techniques to information retrieval with the goal of providing intelligent access to online information, and systems incorporating user modeling, natural language understanding, and expert systems technology are presented.
Abstract: Researchers are exploring the application of artificial intelligence techniques to information retrieval with the goal of providing intelligent access to online information. This article surveys several such systems to show what is possible in the lab today, and what may be possible in the library or office of tomorrow. Systems incorporating user modeling, natural language understanding, and expert systems technology are presented.

19 citations


01 Jun 1992
TL;DR: This thesis views explanation as abduction, where an abductive explanation is a consistent set of assumptions which, together with background knowledge, logically entails a set of observations.
Abstract: A diverse set of intelligent activities, including natural language understanding, diagnosis, and scientific theory formation, requires the ability to construct explanations for observed phenomena. In this thesis, we view explanation as abduction, where an abductive explanation is a consistent set of assumptions which, together with background knowledge, logically entails a set of observations.
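In a small propositional setting, this definition can be executed directly: search assumption sets, check consistency, and test whether the logical closure covers the observations. The Horn rules and assumables below are a textbook-style toy, not the thesis's system:

from itertools import combinations

RULES = [                        # (body, head): body entails head
    ({"rained"}, "grass_wet"),
    ({"sprinkler_on"}, "grass_wet"),
    ({"rained"}, "street_wet"),
]
ASSUMABLES = ["rained", "sprinkler_on"]
DISALLOWED = [{"rained", "sprinkler_on"}]   # consistency constraints

def closure(facts):
    # Forward-chain the Horn rules to a fixed point.
    facts, changed = set(facts), True
    while changed:
        changed = False
        for body, head in RULES:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

def explain(observations):
    # Prefer smaller assumption sets (minimal explanations).
    for k in range(1, len(ASSUMABLES) + 1):
        for assumed in combinations(ASSUMABLES, k):
            s = set(assumed)
            if any(bad <= s for bad in DISALLOWED):
                continue  # inconsistent assumption set
            if observations <= closure(s):
                return s
    return None

print(explain({"grass_wet", "street_wet"}))   # -> {'rained'}

Preferring smaller assumption sets, as the loop does, is one common way to choose among competing explanations.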

Journal ArticleDOI
TL;DR: This paper describes how ROBIN uses these abilities and the contextual evidence from its semantic networks to disambiguate words and infer the most plausible plan/goal analysis of the input, while using the same mechanism to smoothly re-interpret the input if later context makes an alternative interpretation more likely.
Abstract: Lexical and pragmatic ambiguity is a major source of uncertainty in natural language understanding. Symbolic models can make high-level inferences necessary for understanding text, but handle ambiguity poorly, especially when later context requires a re-interpretation of the input. Structured connectionist networks, on the other hand, can use their graded levels of activation to perform lexical disambiguation, but have trouble performing the variable bindings and inferencing necessary for language understanding. We have previously described a structured connectionist model, ROBIN, which overcomes many of these problems and allows the massively-parallel application of a large class of general knowledge rules. This paper describes how ROBIN uses these abilities and the contextual evidence from its semantic networks to disambiguate words and infer the most plausible plan/goal analysis of the input, while using the same mechanism to smoothly re-interpret the input if later context makes an alternative interpretation more likely. We present several experiments illustrating these abilities and comparing them to those of other connectionist models, and discuss several directions in which we are extending the model.
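The graded, context-driven disambiguation described here can be suggested with a few lines of spreading activation over an associative network. The network, weights, and decay below are invented; ROBIN's structured connectionist machinery (variable binding, inference paths, re-interpretation) is far richer:

EDGES = [  # undirected associative links with invented weights
    ("bank", "bank-river", 0.5), ("bank", "bank-money", 0.5),
    ("bank-river", "water", 0.8), ("bank-money", "deposit", 0.8),
    ("fishing", "water", 0.9),
]
NODES = {n for e in EDGES for n in e[:2]}
NEIGHBORS = {n: [] for n in NODES}
for a, b, w in EDGES:
    NEIGHBORS[a].append((b, w))
    NEIGHBORS[b].append((a, w))

def spread(sources, steps=3, decay=0.5):
    # Activation flows along weighted links, attenuated by decay.
    act = {n: (1.0 if n in sources else 0.0) for n in NODES}
    for _ in range(steps):
        new = dict(act)
        for node, a in act.items():
            for nbr, w in NEIGHBORS[node]:
                new[nbr] += a * w * decay
        act = new
    return act

act = spread({"bank", "fishing"})
print({s: round(act[s], 3) for s in ("bank-river", "bank-money")})

Because the context node feeds activation back toward one sense, the river reading of "bank" ends up more active than the money reading.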

Journal ArticleDOI
TL;DR: The components of the semantic network formalism of the ECO (English COnversational System) family are described, and how to superimpose organisational strategies onto the network representations is discussed, beginning with the representation of lexical information and extending to the superimposition of topical organisations in the knowledge base.
Abstract: This paper presents an overview of the semantic network formalism of the ECO (English COnversational System) family. In the paper, we describe the components of our semantic network, discussing its suitability as a representation of propositional knowledge. The use of our semantic network as a uniform representation mediating between specialised representations appropriate to particular task domains (e.g., understanding natural language) is discussed. We motivate and explain a comprehensive network formalism. Special problems with respect to the use of logical connectives, quantifiers, descriptions, modalities, and certain other constructions that fail in conventional semantic networks are systematically resolved with extensions to conventional network notations. The representation harmonizes with linear one-dimensional logical notations, illustrating the close kinship of the two notations. This kinship supports the claim that networks have inherited formal interpretability from logical notations. Several issues of network form and content, which are more fundamental than the choice of a network syntax, are addressed. These issues are: (i) primitive versus nonprimitive representations; (ii) the separation of propositional content of text from pragmatic aspects; and (iii) network normal form versus ad hoc systems. The design of computer systems for specific tasks depends in part on early commitments to these issues. The succinctness, clarity, and intuitive nature of semantic networks argue in their favour if only for purely methodological advantages. Semantic networks are readable; they suggest procedures for comprehension and inference, as well as the computer data structures which they resemble. Examples will demonstrate how associative processing algorithms and complex pattern matching operations are readily identifiable using networks. These examples are given in the context of natural language understanding utilizing networks in a state-based conceptual representation. We discuss how to superimpose organisational strategies onto the network representations, beginning with the representation of lexical information and extending to the superimposition of topical organisations in the knowledge base. Several special purpose inference mechanisms extend the topical organisation we superimpose on concepts to aid retrieval of other types of information about concepts. The use of networks is assessed and promising areas for future research are described.
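One central move in such formalisms, treating propositions themselves as network nodes so that modalities and other higher-order constructions can point at them, can be sketched briefly. The node and role names here are invented for the example, not taken from ECO:

network = {}

def add_prop(pid, pred, **roles):
    # A proposition is itself a node, so other propositions can refer to it.
    network[pid] = {"pred": pred, **roles}
    return pid

p1 = add_prop("p1", "give", agent="john", object="book", recipient="mary")
p2 = add_prop("p2", "believe", agent="sue", proposition=p1)  # p1 is reified

def describe(pid, indent=0):
    pad = "  " * indent
    node = network[pid]
    print(pad + pid + ": " + node["pred"])
    for role, value in node.items():
        if role == "pred":
            continue
        if value in network:          # the filler is a reified proposition
            print(pad + "  " + role + ":")
            describe(value, indent + 2)
        else:
            print(pad + "  " + role + " = " + value)

describe("p2")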

Book ChapterDOI
07 Sep 1992
TL;DR: The PUNDIT natural language understanding system as mentioned in this paper is a modular system implemented in Prolog which consists of distinct modules for syntactic, semantic, and pragmatic analysis for natural language processing.
Abstract: The PUNDIT natural language understanding system is a modular system implemented in Prolog which consists of distinct modules for syntactic, semantic, and pragmatic analysis. A central goal underlying PUNDIT's design is that the basic natural language processing functions should be independent of the system's domain and application. This approach to the design of natural language processing systems is motivated by the fact that, in order to be practical, natural language systems must not require reimplementation of most of the system as they are ported to different domains. Thus, our goal in the design of PUNDIT is to reduce as much as possible the amount of work that must be done to move the system to different applications.

01 Jan 1992
TL;DR: This thesis describes the design and development of a surface-level text generation system for an intelligent tutoring system for cardiovascular physiology, called CIRCSIM-TUTOR, which assists first-year medical students to master the negative feedback system that regulates blood pressure.
Abstract: This thesis describes the design and development of a surface-level text generation system for an intelligent tutoring system for cardiovascular physiology, called CIRCSIM-TUTOR, which assists first-year medical students to master the negative feedback system that regulates blood pressure. Both the natural language understanding and the generation components of the system use a Lexical Functional Grammar and lexicon that I developed especially for the cardiovascular sublanguage. The grammar and lexicon are based on a detailed sublanguage study of human tutoring sessions. The system runs in Procyon Common Lisp on a Macintosh IIci. Most previous work on surface-level generation has involved the generation of declarative sentences providing explanations in expert systems or answers to questions. To fill the needs of the tutoring dialogue, our system produces hints and questions and acknowledgments as well as explanations. Detailed algorithms are included for generating compound nominals and conjoined noun phrases and compound and complex sentences. My method of constructing relative clauses is different from any available in the literature. The Lexical Functional Grammar was developed using the Grammar Writer's Workbench developed at Xerox Palo Alto Research Center by Ronald Kaplan. The results of these tests have been implemented in the current text generator and the input understander. Lexical entries in published work about LFG contain too little information; therefore, the design of a richer lexicon including semantic relationships between words is sketched. As far as we know, our grammar and lexicon are the largest coherent set of rules and lexical entries available for English. We are developing some linguistic techniques for making tutorial dialogue as natural as possible based on the data drawn from the transcripts. The sublanguage study is based on the analysis of seven face-to-face and twenty-eight keyboard-to-keyboard tutoring sessions carried out by faculty members at Rush Medical College with first-year students. The result of our cardiovascular sublanguage study served as the basis for the design and construction of our surface-level generation system.
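As a tiny taste of the surface-level machinery, here is a conjoined-noun-phrase rule of the kind the thesis provides detailed algorithms for; this two-branch sketch is only a schematic stand-in for those algorithms, and the example phrases are illustrative:

def conjoin_nps(nps):
    # Join noun phrases with commas and a final "and".
    if len(nps) == 1:
        return nps[0]
    if len(nps) == 2:
        return nps[0] + " and " + nps[1]
    return ", ".join(nps[:-1]) + ", and " + nps[-1]

# Example variables from the cardiovascular sublanguage:
print(conjoin_nps(["heart rate", "stroke volume", "cardiac output"]))
# -> "heart rate, stroke volume, and cardiac output"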

DOI
01 Jan 1992
TL;DR: It is argued that feature-based systems (such as TDL) and DATR look compatible because of their common mathematical interpretation as graph description languages for directed graphs, but that this masks radically different modeling conventions for the graphs themselves.
Abstract: A FEATURE-BASED lexicon is especially sensible for natural language processing systems which are feature-based. Feature-based lexicons offer the following advantages: (i) having a maximally transparent (empty) interface to feature-based grammars and processors; (ii) supplying exactly the EXPRESSIVE CAPABILITY exploited in these systems; and (iii) providing concise, transparent, and elegant specification possibilities for various lexical relationships, including both inflection and derivation. The development of TYPED feature description languages allows the use of INHERITANCE in lexical description, and recent work explores the use of DEFAULT INHERITANCE as a means of easing lexical development. TDL is the implementation of a TYPE DESCRIPTION LANGUAGE based on HPSG feature logics. It is employed for both lexical and grammatical specification. As a lexical specification tool, it not only realizes these advantages, but it also separates a linguistic and a computational view of lexical contents and supplies a development environment for lexicon engineering. The most important competitor for feature-based lexical work is the very competent special purpose tool DATR, whose interface to feature-based systems is, however, inherently problematic. It is argued that feature-based systems (such as TDL) and DATR look compatible because of their common mathematical interpretation as graph description languages for directed graphs, but that this masks radically different modeling conventions for the graphs themselves. The development of TDL is continuing at the German Artificial Intelligence Center (Deutsches Forschungszentrum für Künstliche Intelligenz - DFKI) in the natural language understanding project DISCO.
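Default inheritance in a typed lexical hierarchy can be illustrated compactly: a type inherits its supertype's feature constraints unless it overrides them. The types and features below are invented and far simpler than TDL's HPSG-style feature logic:

TYPES = {  # type -> (supertype, local feature constraints)
    "verb":       (None,   {"cat": "V", "past-suffix": "ed"}),
    "trans-verb": ("verb", {"subcat": ["NP", "NP"]}),
    "sing":       ("verb", {"past-suffix": None, "past-form": "sang"}),
}

def feature_structure(t):
    # Collect features down the type chain; subtypes override defaults.
    chain = []
    while t is not None:
        chain.append(t)
        t = TYPES[t][0]
    fs = {}
    for ty in reversed(chain):        # most general type first
        fs.update(TYPES[ty][1])       # more specific types override
    return fs

print(feature_structure("trans-verb"))
# {'cat': 'V', 'past-suffix': 'ed', 'subcat': ['NP', 'NP']}
print(feature_structure("sing"))
# {'cat': 'V', 'past-suffix': None, 'past-form': 'sang'}

Here "sing" inherits the verb category but overrides the default past-tense suffix with an irregular form, which is exactly the kind of exception default inheritance is meant to ease.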

Proceedings ArticleDOI
28 May 1992
TL;DR: NLUS is a Prolog-based natural language understanding system which exploits multi-knowledge representation formalisms; it is particularly good at handling the rule-oriented type of sentences and is considered a decent prototype for experimenting with natural language understanding.
Abstract: NLUS is a Prolog-based natural language understanding system, which exploits multi-knowledge representation formalisms. Sentences input from the user are converted into semantic networks and/or production rules, whereas the grammar rules are represented in predicate logic. After processing a sentence, the software may initiate its inference engine to deduce a response. A question is also parsed and converted into a semantic structure which may contain some missing information. To find an answer to a question, the missing information may be extracted from another related semantic structure in the working memory. At present, the software can handle sentences with relatively simple syntactic structures and some restricted types of questions. However, intelligent man-machine dialogues can be realized. It is particularly good at handling the rule-oriented type of sentences and is considered a decent prototype for experimenting with natural language understanding.
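The question-answering mechanism described (a question becomes a semantic structure with missing information, filled by matching against working memory) reduces, in miniature, to matching triples with a hole. Everything below is an invented toy, not NLUS's Prolog representation:

WORKING_MEMORY = [
    ("john", "owns", "car"),
    ("car", "color", "red"),
]

def answer(question):
    # question is a triple with None marking the missing information
    for fact in WORKING_MEMORY:
        if all(q is None or q == f for q, f in zip(question, fact)):
            # Return the fillers for the missing slots.
            return [f for q, f in zip(question, fact) if q is None]
    return None

print(answer(("john", "owns", None)))   # What does John own? -> ['car']
print(answer((None, "color", "red")))   # What is red? -> ['car']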

11 Sep 1992
TL;DR: The addition of speech recognition capabilities to InterFIS is discussed, which affects the structure and flexibility of the interface because of the limitations of today's speech recognition technology.
Abstract: InterFIS is a natural language interface to the troubleshooting module of the Fault Isolation Shell (FIS), which is an expert system development tool for the diagnosis of failures in analog electronics equipment. The main functions of this FIS module are as follows: (1) to compute the probability that a particular fault hypothesis is correct after one or more tests have been performed on a particular piece of electronics equipment, and (2) to recommend the next best test based on information supplied by the diagnostician during a testing session. The original interface to FIS was standard keyboard input, where the appropriate abbreviations for all commands were displayed on the screen in a large list grouped by function. A simple graphic interface also was developed, where the user invoked the commands by clicking on screen buttons labeled according to their functional grouping. Later a natural language interface, InterFIS, was added. InterFIS is a natural language understanding interface that accepts typed English commands as input. The PROTEUS chart parser performs a syntactic analysis, producing an application-independent syntactic representation of the input sentence. This intermediate representation is mapped to domain-specific verb models by the semantic interpreter PFQAS and then converted to FIS commands by the command translator COIN. The main drawback to this interface is that typing English sentences is slow and requires the use of both hands. This report discusses the addition of speech recognition capabilities to InterFIS. Because of the limitations of today's speech recognition technology, the addition of this capability affects the structure and flexibility of the interface. The report describes the speech recognition module, and provides a brief evaluation of its performance.
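The first FIS function described above, updating fault-hypothesis probabilities after a test, is a Bayes-rule computation at heart. The hypotheses, priors, and likelihoods below are invented for illustration and are not FIS's actual model:

PRIORS = {"R1-open": 0.2, "C3-short": 0.3, "no-fault": 0.5}
# P(test fails | hypothesis), hypothetical likelihoods for one test:
LIKELIHOOD = {"R1-open": 0.9, "C3-short": 0.4, "no-fault": 0.05}

def update(priors, likelihood):
    # Bayes' rule: posterior proportional to prior times likelihood.
    unnorm = {h: priors[h] * likelihood[h] for h in priors}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

posterior = update(PRIORS, LIKELIHOOD)
print({h: round(p, 3) for h, p in posterior.items()})
# R1-open becomes the leading hypothesis after the failed test.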

Book ChapterDOI
01 Jan 1992
TL;DR: The general goals of the natural language aspects of the TRAINS project, including parsing, semantic interpretation and discourse modelling are described.
Abstract: The TRAINS project is a long-term research effort on building an intelligent planning assistant that is conversationally-proficient in natural language. The TRAINS project serves as an umbrella for research that involves pushing the state of the art in planning, natural language understanding, natural language dialog, and discourse modelling. Significant emphasis is being put on the knowledge representation issues that arise in supporting the tasks in the domain. This paper describes the general goals of the natural language aspects of the TRAINS project, including parsing, semantic interpretation and discourse modelling.

Book ChapterDOI
01 Jan 1992
TL;DR: Taxonomies of the temporal subordinating conjunctions and prepositions in English, Dutch and German are presented to help computerized natural language understanding or translation systems.
Abstract: Taxonomies of the temporal subordinating conjunctions and prepositions in English, Dutch and German are presented. These high-frequency words are highly ambiguous, so a good taxonomy is of practical importance for computerized natural language understanding or translation systems.

Journal ArticleDOI
TL;DR: The interdisciplinary Master of Science program in Artificial Intelligence at the University of Georgia is intended to prepare students for careers as developers of artificial intelligence applications or for further graduate work in artificial intelligence or related areas.
Abstract: The interdisciplinary Master of Science program in Artificial Intelligence at the University of Georgia is intended to prepare students for careers as developers of artificial intelligence applications or for further graduate work in artificial intelligence or related areas. The program includes foundational courses in computer science, linguistics, logic, philosophy, and psychology as well as specialized courses in artificial intelligence programming languages and techniques. Seminars emphasize knowledge-based systems, natural language understanding, and logic programming. Students are admitted to the program with degrees in many areas including business, computer science, education, linguistics, philosophy, and psychology. A liberal undergraduate education with some previous experience in computing is desirable. It normally takes two years to complete all prerequisites, all required courses, and the thesis.

01 Jan 1992
TL;DR: This work proposes a probabilistic basis for natural language understanding models that accounts for a range of subtle meaning distinctions, is consistent with psycholinguistic and neural evidence, and helps reduce the complexity of the concept space.
Abstract: This work proposes a probabilistic basis for natural language understanding models. It has become apparent that syntax and semantics need to be highly integrated, especially to understand constructs like nominal compounds, but inadequate modelling tools have hindered efforts to replace the traditional parser-interpreter pipeline architecture. Associative semantic networks rely on ad hoc numeric weights that make scaling them up to interestingly large domains difficult. On the other hand, most statistical approaches do not handle compositional constraints well and thus omit important structural regularities underlying language. The proposed model attacks these issues from three directions. First, it distinguishes two fundamentally different mental processing modes: automatic and controlled inference. Automatic inference is responsible for most of parsing and interpretation; it is pre-attentive, subconscious, reflexive, fairly instantaneous, associative, and highly heuristic. Second, the nature of mental representations is defined more precisely. The proposed cognitive ontology includes mental image, lexical semantic, conceptual, and lexicosyntactic modules. The modular ontology approach accounts for a range of subtle meaning distinctions, is consistent with psycholinguistic and neural evidence, and helps reduce the complexity of the concept space. Third, probability theory provides an elegant basis for evidential interpretation. The representational basis for all modules is a probabilistic extension of feature-structures and semantic networks. Theoretical and approximate maximum entropy methods for evaluating probabilities are proposed.

Proceedings ArticleDOI
TL;DR: The HIRONDELLE research project of the Banque de France intends to summarize economic surveys giving statements about a specific economic domain, using a set of primitives representing statements and a causality meta-language, based on three distinct hierarchical structures.
Abstract: The HIRONDELLE research project of the Banque de France intends to summarize economic surveys giving statements about a specific economic domain. The principal goal is the detection of causal relations between economic events appearing in the texts. We will focus on knowledge representation, based on three distinct hierarchical structures. The first one concerns the lexical items and allows inheritance of syntactic properties. Descriptions of the application domains are achieved by a taxonomy based on attribute-value models and case relations, adapted to the economic sectors. The summarization goal of this system defines a set of primitives representing statements and a causality meta-language. The semantic analysis of the texts is based on two phases. The first one leads to a propositional representation of the sentences through conceptual graph formalization, taking into account the syntactic transformations of sentences. The second one is dedicated to the summarizing role of the system, detecting paraphrastic sentences by processing syntactic and semantic transformations like negation or metonymic constructions.
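A first approximation of the causal-relation detection step is cue-pattern matching that emits a cause(X, Y) primitive with the arguments ordered by the cue's direction. The cues and sentences below are toy stand-ins for the project's conceptual-graph analysis:

import re

CAUSAL_CUES = [r"because of", r"due to", r"led to", r"caused"]

def causal_relations(sentence):
    relations = []
    for cue in CAUSAL_CUES:
        m = re.search(r"(.+?)\s+(?:" + cue + r")\s+(.+)", sentence)
        if m:
            left, right = m.group(1).strip(), m.group(2).strip()
            # "X because of Y" -> cause(Y, X); "X led to Y" -> cause(X, Y)
            if cue in ("because of", "due to"):
                relations.append(("cause", right, left))
            else:
                relations.append(("cause", left, right))
    return relations

print(causal_relations("exports fell because of weak foreign demand"))
print(causal_relations("the rate cut led to higher investment"))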

Book ChapterDOI
01 Jan 1992
TL;DR: This short paper claims that, in order to make progress in generation, much more of this kind of knowledge would have to be stored in a declarative fashion.
Abstract: Problems arise when one attempts to ‘attach’ a natural language generation system to an existing natural language understanding system instead of ‘integrating’ it as an equal partner from the beginning. Much valuable information about the complex relationships between concepts and words is not available to generation, though it is utilized by various processes and is to some degree represented in the knowledge base and within knowledge base rules. This short paper claims that, in order to make progress in generation, much more of this kind of knowledge would have to be stored in a declarative fashion. It is meant as supplementary reading to the work presented by Lang and Novak in this volume.

Book ChapterDOI
01 Jul 1992
TL;DR: As the authors state at the beginning of the paper, systems using natural language as a means of human/machine communication exhibit varying degrees of complexity; closely related to this is the degree of complexity of the NLP module.
Abstract: As we stated at the beginning of our paper, systems using natural language as a means of human/machine communication exhibit varying degrees of complexity; closely related to this is the degree of complexity of the NLP module. In any case, however, NLP is both a practically necessary and a theoretically stimulating task. A command of language is an inherent feature of human beings; if an AI system is supposed to model human intelligence, it cannot dispense with a module of natural language understanding and production. A considerable knowledge of language structure and functioning has been gathered throughout the centuries of linguistic research; it would mean reinventing the wheel if this knowledge were ignored.


Journal ArticleDOI
01 Jan 1992-Language
TL;DR: Singer's book is a good introductory text for a course in natural language processing with a focus on discourse structure and processing, and is also suitable as a desk reference for the linguist, cognitive psychologist, or computer scientist with an interest in NLP.
Abstract: Case or agr. Just as abstract Case and AGR are cover terms for syntactic features with often idiosyncratic morphological realization, thematic roles seem most useful simply as syntactic designations for positions that enter into argument and adjunct relations. This is a good book, not just for its critique of theta theory, but also for the positive contribution of further development of the Decompositional Theory. Although the book is a revised dissertation, it doesn't read like one: the style is engaging and philosophical, and the exemplification is detailed and clear. And while it does not cover every thematic-role-based proposal ever made, the book is nevertheless to be recommended to anyone who has puzzled over the content of thematic roles. [Edwin Battistella, University of Alabama in Birmingham] Psychology of language: An introduction to sentence and discourse processes. By Murray Singer. Hillsdale, NJ: Lawrence Erlbaum, 1990. Pages xi, 308. $24.95. Recent years have seen the emergence of an essentially interdisciplinary, cognitive-science approach to natural language processing, with some of the most fruitful work focusing on discourse structure and processing. Singer's book is intended as an introductory graduate textbook, though it is also suitable as a desk reference for the linguist, cognitive psychologist, or computer scientist with an interest in natural language processing. An introductory graduate textbook must accomplish several purposes: (i) familiarize students with key theoretical concepts; (ii) refer them to the fundamental literature in which those concepts are discussed; (iii) provide them with exemplars that teach them how to apply the concepts insightfully to the data; and (iv) inculcate a sense of relevance that enables students to read professional articles and evaluate their theoretical significance. S's book is probably above average by these standards. Key concepts are clearly if succinctly explained, fundamental articles are cited in each of the relevant disciplines, and examples are frequent and detailed enough to satisfy the most exacting linguist. The book's weaknesses are those endemic to any introductory text: it emphasizes consensus, spending more time summarizing established results than exploring open issues and controversies, and the passages which explain key concepts are often terse, so that the instructor may sometimes wish to assign source articles as background reading. Overall, the book appears to be a valuable introductory textbook for a course in psycholinguistics or discourse processing. It is divided into ten chapters, as follows: Ch. 1, ‘Introduction’; Ch. 2, ‘Language and meaning: Representing and remembering discourse’; Ch. 3, ‘Syntax and parsing processes’; Ch. 4, ‘The role of knowledge in language comprehension’; Ch. 5, ‘Understanding coherent discourse’; Ch. 6, ‘Theme’; Ch. 7, ‘Inference processes’; Ch. 8, ‘Understanding stories’; Ch. 9, ‘Question answering and sentence verification’; and Ch. 10, ‘Natural language understanding by computers—and people’. One of the noteworthy aspects of the book is its thorough integration of psycholinguistic data with the concepts of discourse analysis, including coherence, the given-new contract, thematic structure, and scripts. [Paul Deane, University of Central Florida.] An essay on grammar-parser relations. By Jan van de Koot. Dordrecht: Foris, 1990. Pp. xii, 152. Paper $24.50. In this monograph, K investigates the possible relations between grammars and parsers.
Any theory of the mind that postulates separate

Proceedings Article
01 Jan 1992
TL;DR: The use and the benefit of the spatial image of the world in the natural language understanding process, using a geometric representation and reconstructing a model from the scenic descriptions (in Japanese) drawing space.

Abstract: This paper describes the use and the benefit of the spatial image of the world in the natural language understanding process. The actual or purely imaginary image of the world helps us to understand the natural language texts. In order to treat the image of the described world, the authors use a geometric representation and try to reconstruct a geometric model of the global scene from the scenic descriptions (in Japanese) drawing space. An experimental computer program SPRINT is made to reconstruct a model. SPRINT extracts the qualitative spatial constraints from the text and represents them by the numerical constraints on spatial attributes of the described entities in the world. This makes it possible to express the vagueness of the spatial concepts, to accumulate fragmentary information on the memory, and to derive the maximally plausible model from a chunk of such information. In this process, the view of the observer and its transition is reflected. One can hardly treat the view without such geometric representations. The visual disappearance of the spatial entities is also discussed with respect to the view of the observer. By constructing a geometric representation of the world, these phenomena are reviewed.
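SPRINT's core move, turning qualitative spatial statements into numerical constraints and deriving a plausible model that satisfies all of them, can be miniaturized as constraint accumulation plus search. The predicates, entities, and grid below are invented; a real system would use continuous optimization over spatial attributes rather than grid search:

from itertools import product

CONSTRAINTS = []  # accumulated numerical constraints over positions

def left_of(a, b):
    CONSTRAINTS.append(lambda pos: pos[a][0] < pos[b][0])

def near(a, b, d=2):
    CONSTRAINTS.append(lambda pos: abs(pos[a][0] - pos[b][0])
                                 + abs(pos[a][1] - pos[b][1]) <= d)

# Fragmentary description: "The tree is left of the house.
#                           The pond is near the tree."
left_of("tree", "house")
near("pond", "tree")

def plausible_model(entities, size=3):
    # Search a small grid for any placement satisfying all constraints.
    cells = list(product(range(size), repeat=2))
    for placement in product(cells, repeat=len(entities)):
        pos = dict(zip(entities, placement))
        if all(c(pos) for c in CONSTRAINTS):
            return pos
    return None

print(plausible_model(["tree", "house", "pond"]))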