
Showing papers on "Question answering published in 1992"


Patent
25 Nov 1992
TL;DR: In this article, the authors describe a system capable of automatically generating an answer in response to a query or an instruction, such as a letter-writing instruction, input to the system in the form of natural language.
Abstract: A system capable of automatically generating an answer in response to a query or an instruction such as a letter writing instruction input to the system in the form of a natural language. The system understands the goal of the natural language input and retrieves information from knowledge bases to formulate a plan and an action for achieving the goal.

242 citations


Journal ArticleDOI
TL;DR: This paper will examine and review recent progress in using the lexical, syntactic, semantic and discourse levels of the language analysis for tasks like automatic and semi-automatic indexing of text, text retrieval, text abstracting and summarisation, thesaurus generation from text corpus and conceptual information retrieval.
Abstract: Techniques of automatic natural language processing have been under development since the earliest computing machines, and in recent years these techniques have proven to be robust, reliable and efficient enough to lead to commercial products in many areas. The applications include machine translation, natural language interfaces and the stylistic analysis of texts but NLP techniques have also been applied to other computing tasks besides these. In this paper we will examine and review recent progress in using the lexical, syntactic, semantic and discourse levels of the language analysis for tasks like automatic and semi-automatic indexing of text, text retrieval, text abstracting and summarisation, thesaurus generation from text corpus and conceptual information retrieval. Our own work on the application of syntactic analysis to the matching and ranking of phrases using structured representations of texts, will be included in the overview. Finally, the prospects for gains in terms of overall retrieval effectiveness or quality will be discussed.

128 citations


Journal ArticleDOI
01 Mar 1992-Language
TL;DR: This work examines the role of knowledge in language comprehension and the role of discourse understanding in natural language understanding by computers and people.
Abstract: Preface 1 Introduction 2 Language and meaning: representing and remembering discourse 3 Syntax and parsing processes 4 The role of knowledge in language comprehension 5 Understanding coherent discourse 6 Theme 7 Inference processes 8 Understanding stories 9 Question answering and sentence verification 10 Natural language understanding by computers - and people References Author index Subject index Acknowledgments

81 citations


BookDOI
Paul S. Jacobs1
01 Jul 1992
TL;DR: This chapter discusses Text Representation for Intelligent Text Retrieval: A Classification-Oriented View, and Intelligent High-Volume Text Processing Using Shallow, Domain-Specific Techniques.
Abstract: Contents: P.S. Jacobs, Introduction: Text Power and Intelligent Systems. Part I:Broad-Scale NLP. J.R. Hobbs, D.E. Appelt, J. Bear, M. Tyson, D. Magerman, Robust Processing of Real-World Natural-Language Texts. Y. Wilks, L. Guthrie, J. Guthrie, J. Cowie, Combining Weak Methods in Large-Scale Text Processing. G. Hirst, M. Ryan, Mixed-Depth Representations for Natural Language Text. D.D. McDonald, Robust Partial-Parsing Through Incremental, Multi-Algorithm Processing. Corpus-Based Thematic Analysis. Part II:"Traditional" Information Retrieval. W.B. Croft, H.R. Turtle, Text Retrieval and Inference. K.S. Jones, Assumptions and Issues in Text-Based Retrieval. D.D. Lewis, Text Representation for Intelligent Text Retrieval: A Classification-Oriented View. G. Salton, C. Buckley, Automatic Text Structuring Experiments. Part III:Emerging Applications. C. Stanfill, D.L. Waltz, Statistical Methods, Artificial Intelligence, and Information Retrieval. P.J. Hayes, Intelligent High-Volume Text Processing Using Shallow, Domain-Specific Techniques. Y.S. Maarek, Automatically Constructing Simple Help Systems from Natural Language Documentation. M.A. Hearst, Direction-Based Text Interpretation as an Information Access Refinement.

71 citations


Proceedings ArticleDOI
28 Jun 1992
TL;DR: A prototype information retrieval system which uses advanced natural language processing techniques to enhance the effectiveness of traditional key-word based document retrieval and has displayed capabilities that appear to make it superior to the purely statistical base.
Abstract: We developed a prototype information retrieval system which uses advanced natural language processing techniques to enhance the effectiveness of traditional key-word based document retrieval. The backbone of our system is a statistical retrieval engine which performs automated indexing of documents, then search and ranking in response to user queries. This core architecture is augmented with advanced natural language processing tools which are both robust and efficient. In early experiments, the augmented system has displayed capabilities that appear to make it superior to the purely statistical base.

68 citations


Journal ArticleDOI
TL;DR: This paper takes the position that information retrieval systems are fundamentally linguistic in nature - in essence, the languages of document representation and searching are dialects of natural language.
Abstract: This discussion takes the position that information retrieval systems are fundamentally linguistic in nature - in essence, the languages of document representation and searching are dialects of natural language. Because of this, the discipline of the Philosophy of Language should have some bearing on the problems of document representation and search query formulation. The philosophies of Austin, Searle, Grice and Wittgenstein are briefly examined and their relevance to information retrieval theory is discussed.

54 citations


Journal ArticleDOI
TL;DR: How knowledge is represented by QUEST's conceptual graph structures and how the procedural mechanisms operate on the knowledge structures during question answering are described.
Abstract: QUEST is a computer model of question answering that simulates answers that adults produce when they answer open-class questions (e.g., why, how, what-if) and closed-class questions (e.g., is X true or false?). QUEST has four major procedural components: (1) question interpretation, (2) identification of relevant information sources, (3) pragmatics, and (4) convergence mechanisms. The procedures operate on information sources which are represented as conceptual graph structures. These structures contain goal/plan hierarchies, causal networks, taxonomic hierarchies, spatial region hierarchies, and other forms of knowledge. This article describes how knowledge is represented by QUEST's conceptual graph structures and how the procedural mechanisms operate on the knowledge structures during question answering. The primary focus is on convergence mechanisms, which identify the small subset of nodes in the information sources that serve as relevant answers to a particular question. An important convergence mechanism is the arc search procedures, which identify legal answers to the question by pursuing particular paths of arcs in each information source. We have developed a computer model of human question answering, called QUEST. QUEST simulates the answers that people produce when they answer different types of questions, such as why, how, when, where, what-if, and yes/no verification questions. When QUEST answers a particular question, the model identifies relevant information sources and taps information within each source. Each information source is a package of world knowledge that is organized in the form of a "conceptual graph structure" containing nodes and relational arcs. The question answering (Q/A) procedures operate on these structures systematically, pursuing some paths of arcs, but not others, depending on the question category.
The success of QUEST in simulating human question answering depends critically on an appropriate organization of world knowledge structures as well as an appropriate specification of the Q/A procedures that operate on the structures. The computational foundations of QUEST were inspired by models of question answering in artificial intelligence and computational linguistics (1-8). In these models, text and world knowledge are
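The arc-search idea described in this abstract can be sketched in a few lines: starting from the queried node, follow only the arc categories that are "legal" for the question type. The graph, node labels, and arc categories below are illustrative assumptions, not the paper's actual structures.

```python
# Hypothetical sketch of QUEST-style arc search; the toy graph and the
# mapping from question categories to legal arcs are invented for illustration.
from collections import deque

# A toy conceptual graph: node -> list of (arc_type, neighbor) pairs.
GRAPH = {
    "hero leaves home": [("REASON", "hero seeks treasure"),
                         ("CONSEQUENCE", "hero meets dragon")],
    "hero seeks treasure": [("REASON", "hero is poor")],
    "hero meets dragon": [("CONSEQUENCE", "hero fights dragon")],
}

# Which arc types count as legal paths for each question category (assumed).
LEGAL_ARCS = {
    "why": {"REASON"},           # why-questions follow reason/goal arcs
    "what-if": {"CONSEQUENCE"},  # what-if questions follow consequence arcs
}

def arc_search(graph, start, question_type):
    """Collect nodes reachable from `start` via arcs legal for the question."""
    legal = LEGAL_ARCS[question_type]
    answers, queue, seen = [], deque([start]), {start}
    while queue:
        node = queue.popleft()
        for arc, nxt in graph.get(node, []):
            if arc in legal and nxt not in seen:
                seen.add(nxt)
                answers.append(nxt)
                queue.append(nxt)
    return answers

print(arc_search(GRAPH, "hero leaves home", "why"))
# -> ['hero seeks treasure', 'hero is poor']
```

The convergence effect the paper emphasizes falls out naturally here: the same graph yields disjoint answer sets for why versus what-if questions, because each question category licenses different arcs.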

49 citations


Journal ArticleDOI
TL;DR: It is argued that one way for a system to compensate for an unreliable user model is to be able to react to feedback from users about the suitability of the texts it produces, and how such a capability can actually alleviate some of the burden now placed on user modeling.
Abstract: Natural Language is a powerful medium for interacting with users, and sophisticated computer systems using natural language are becoming more prevalent. Just as human speakers show an essential, inbuilt responsiveness to their hearers, computer systems must “tailor” their utterances to users. Recognizing this, researchers devised user models and strategies for exploiting them in order to enable systems to produce the “best” answer for a particular user. Because these efforts were largely devoted to investigating how a user model could be exploited to produce better responses, systems employing them typically assumed that a detailed and correct model of the user was available a priori, and that the information needed to generate appropriate responses was included in that model. However, in practice, the completeness and accuracy of a user model cannot be guaranteed. Thus, unless systems can compensate for incorrect or incomplete user models, the impracticality of building user models will prevent much of the work on tailoring from being successfully applied in real systems. In this paper, we argue that one way for a system to compensate for an unreliable user model is to be able to react to feedback from users about the suitability of the texts it produces. We also discuss how such a capability can actually alleviate some of the burden now placed on user modeling. Finally, we present a text generation system that employs whatever information is available in its user model in an attempt to produce satisfactory texts, but is also capable of responding to the user's follow-up questions about the texts it produces.

47 citations


Proceedings ArticleDOI
23 Feb 1992
TL;DR: A language learner is described that extracts distributional information from a corpus annotated with parts of speech and is able to use this extracted information to accurately parse short sentences.
Abstract: In this paper, we present evidence that the acquisition of the phrase structure of a natural language is possible without supervision and with a very small initial grammar. We describe a language learner that extracts distributional information from a corpus annotated with parts of speech and is able to use this extracted information to accurately parse short sentences. The phrase structure learner is part of an ongoing project to determine just how much knowledge of language can be learned solely through distributional analysis.

43 citations


Proceedings ArticleDOI
01 Jun 1992
TL;DR: An automated method of classifying research project descriptions is described: a human expert classifies a sample set of projects into a set of disjoint and pre-defined classes, and then the computer learns from this sample how to classify new projects into these classes.
Abstract: In this paper we describe an automated method of classifying research project descriptions: a human expert classifies a sample set of projects into a set of disjoint and pre-defined classes, and then the computer learns from this sample how to classify new projects into these classes. Both textual and non-textual information associated with the projects are used in the learning and classification phases. Textual information is processed by two methods of analysis: a natural language analysis followed by a statistical analysis. Non-textual information is processed by a symbolic learning technique. We present the results of some experiments done on real data: two different classifications of our research projects.

40 citations


Proceedings ArticleDOI
01 Jun 1992
TL;DR: It is suggested that a hypertext book with a nonlinear structure and including a variety of navigational tools can equal or surpass conventional books as an information-seeking medium, even with minimal training.
Abstract: An important issue in the evolution of hypertext is the design of such systems to optimally support user tasks such as asking questions. Few studies have systematically compared the use of hypertext to books in seeking information, and those that have been done have not found a consistent superiority for hypertext. In addition, designers developing hypertext books have few guidelines. In the present study, users performed information-seeking tasks and answered a variety of types of questions about Sherlock Holmes stories using either a conventional paper encyclopedia or a hypertext encyclopedia. The questions varied in the amount of information needed to derive an answer (fact or inference), the location of the question's key phrase in the hypertext (entry title or entry content), and the format of the information (text or map). Accuracy and time were recorded. The hypertext group excelled in answering fact questions where the information was embedded in a text entry. The book group excelled only in answering fact questions based on maps. In spite of having far more experience using books, the book group was not significantly faster overall and did not perform as well on an incidental learning task. Our results suggest that a hypertext book with a nonlinear structure and including a variety of navigational tools can equal or surpass conventional books as an information-seeking medium, even with minimal training.

Journal ArticleDOI
TL;DR: This article examines the process of specifying a question-answering help facility in the context of UNIX mail based upon experimental expert-user facilitative dialogues, providing insights into both tutoring strategy and the linguistic forms required to generate help output.

Journal ArticleDOI
01 Oct 1992
TL;DR: This paper presents a methodology to map natural language constructs into relational algebra through E-R representation, which employs a logical form to represent the natural language queries.
Abstract: Research on accessing databases in natural language usually employs an intermediate form for the mapping process from natural language to database languages. However, much effort is needed to bridge the gap between the existing intermediate forms and the database languages. In this paper, we present a methodology to map natural language constructs into relational algebra through E-R representation. This methodology employs a logical form to represent the natural language queries. The logical form has the merits that it can be mapped from natural language constructs by referring to the Entity-Relationship conceptual schema and can be efficiently transformed into relational algebra for query execution. The whole process provides a clear and natural framework for processing natural language queries to retrieve data from database systems.
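The pipeline this abstract describes - natural language to logical form to relational algebra - can be sketched very loosely as follows. The logical-form shape, entity names, and attribute names are invented for the example; they are not taken from the paper.

```python
# Illustrative sketch only: a toy logical form for a natural language query,
# mapped into a linearized relational-algebra-style expression. The schema
# (Employee, dept, name) is a hypothetical E-R model.

def logical_form_to_algebra(lf):
    """Map a simple logical form to a projection over a selection."""
    relation = lf["entity"]                      # E-R entity -> relation name
    conds = " AND ".join(f"{a}='{v}'" for a, v in lf["constraints"].items())
    attrs = ", ".join(lf["target_attrs"])        # attributes to project
    # pi_{attrs}( sigma_{conds}( relation ) ), written linearly:
    return f"PROJECT[{attrs}](SELECT[{conds}]({relation}))"

# Assumed logical form for "Which employees work in the sales department?"
lf = {
    "entity": "Employee",
    "constraints": {"dept": "sales"},
    "target_attrs": ["name"],
}
print(logical_form_to_algebra(lf))
# -> PROJECT[name](SELECT[dept='sales'](Employee))
```

The point of the intermediate logical form, as the abstract argues, is that each half of the mapping is simple: the parser targets the E-R conceptual schema rather than the storage schema, and the algebra translation is then mechanical.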

01 Jan 1992
TL;DR: The NASA Astrophysics Data System (ADS) is establishing a service to provide access to the literature abstracts relevant to astronomy in the NASA Scientific and Technical Aerospace Reports and the International Aerospace Abstracts (together also known as NASA RECON).
Abstract: In collaboration with the NASA Scientific and Technical Information System, the NASA Astrophysics Data System (ADS) is establishing a service to provide access to the literature abstracts relevant to astronomy in the NASA Scientific and Technical Aerospace Reports and the International Aerospace Abstracts (together also known as NASA RECON). The service will include several sophisticated retrieval methods, which may be combined. Included will be methods to perform relevancy ranking from natural language queries, synonym and misspelling recognition, author name translation (e.g. for multiple transliteration possibilities), and other features. The capabilities of the current release will be shown, and the plans for the near future will be discussed.

Book ChapterDOI
03 Sep 1992
TL;DR: The Metexa system (Medical Text Analysis) for the analysis of radiological reports is presented, which combines a unification-based bottom-up parser with a relevant part of the Conceptual Graph Theory in order to yield a conceptual graph as the semantic representation of an utterance.
Abstract: In medicine large amounts of natural language documents have to be processed. Medical language is an interesting domain for the application of techniques developed in computational linguistics. Moreover, large scale applications of medical language processing raise the need to study the process of language engineering, which emphasizes some different problems than basic research. The texts found in medical applications show characteristics of a specific sublanguage that can be exploited for language processing. We present the Metexa system (Medical Text Analysis) for the analysis of radiological reports. To be able to process utterances of telegraphic style, the emphasis in system design has been put on semantic and knowledge processing components. However, a unification-based bottom-up parser is used to exploit syntactic information wherever possible. For semantic and knowledge representation a relevant part of the Conceptual Graph Theory by John Sowa has been implemented in order to yield a conceptual graph as the semantic representation of an utterance. This can be mapped e.g. to a database schema. A resolution-based inference procedure has been implemented to infer new facts from the analysed utterances.

Journal ArticleDOI
Paul S. Jacobs1
TL;DR: Trump (TRansportable Understanding Mechanism Package) is a natural language analyzer that functions in a variety of domains, in both interfaces and text processing that is capable of performing fairly extensive analysis with a minimum of customization for each application.
Abstract: Transportability has perpetually been the nemesis of natural language processing systems, in both the research and commercial sectors. During the last 20 years, the technology has not moved much closer to providing robust coverage of everyday language, and has failed to produce commercial successes beyond a few specialized interfaces and application programs. The redesign required for each application has limited the impact of natural language systems. Trump (TRansportable Understanding Mechanism Package) is a natural language analyzer that functions in a variety of domains, in both interfaces and text processing. While other similar efforts have treated transportability as a problem in knowledge engineering, Trump instead relies mainly on a "core" of knowledge about language and a set of techniques for applying that knowledge within a domain. The information about words, word meanings, and linguistic relations in this generic knowledge base guides the conceptual framework of language interpretation in each domain. Trump uses this core knowledge to piece together a conceptual representation of a natural language input by combining generic and specialized information. The result has been a language processing system that is capable of performing fairly extensive analysis with a minimum of customization for each application.

Proceedings Article
01 Jan 1992
TL;DR: The backbone of this prototype text retrieval system is a traditional statistical engine which builds inverted index files from pre-processed documents, and then searches and ranks the documents in response to user queries.
Abstract: We developed a prototype text retrieval system which uses advanced natural language processing techniques to enhance the effectiveness of key-word based document retrieval. The backbone of our system is a traditional statistical engine which builds inverted index files from pre-processed documents, and then searches and ranks the documents in response to user queries. Natural language processing is used to (1) preprocess the documents in order to extract content-carrying terms, (2) discover inter-term dependencies and build a conceptual hierarchy specific to the database domain, and (3) process users' natural language requests into effective search queries.
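The statistical backbone described here - an inverted index built from pre-processed documents, then search and ranking against queries - can be sketched minimally as below. The term-overlap scoring is a deliberate simplification; the paper's actual weighting scheme is not specified in this abstract.

```python
# Minimal sketch of an inverted-index retrieval core (assumed details:
# whitespace tokenization and raw term-overlap scoring).
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def rank(index, query):
    """Score documents by how many query terms they contain, best first."""
    scores = defaultdict(int)
    for term in query.lower().split():
        for doc_id in index.get(term, ()):
            scores[doc_id] += 1
    return sorted(scores, key=lambda d: -scores[d])

docs = {
    1: "natural language processing for retrieval",
    2: "statistical retrieval engine",
    3: "syntax of natural language",
}
idx = build_inverted_index(docs)
print(rank(idx, "natural language retrieval"))
# -> [1, 3, 2]
```

The NLP layers the abstract lists would slot in around this core: step (1) replaces the naive tokenizer, and steps (2) and (3) expand or restructure the query before it reaches `rank`.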

Book ChapterDOI
02 Jan 1992
TL;DR: Question asking, a technique both similar to and different from qualitative think-out-loud (TOL) methods for eliciting verbalizations related to thinking and problem solving, is described for studying human-computer interaction issues connected with the design and evaluation of computer systems.
Abstract: This paper describes a qualitative empirical method aimed at uncovering what computer users need to know to use a computer to accomplish tasks. The method asks users to acquire information about how to use a computer by asking questions of a more experienced user (i.e., the investigator) or a "coach" (a term we prefer). The technique is both similar to and different from qualitative think-out-loud (TOL) methods for eliciting verbalizations related to thinking and problem solving. TOL verbal protocol techniques have been widely applied in cognitive psychology (see Ericsson & Simon, 1980, 1984). Although question asking may have equally wide applicability, the focus of this paper will be on its use in studying human-computer interaction issues connected with the design and evaluation of computer systems.

Proceedings ArticleDOI
01 Jun 1992
TL;DR: A system is presented that uses Horn Clause Logic as meaning representation language, employs advanced techniques from natural Language Processing to achieve incremental extensibility, and uses methods from Logic Programming to achieve robustness in the face of insufficient data.
Abstract: Most natural language based document retrieval systems use the syntax structures of constituent phrases of documents as index terms. Many of these systems also attempt to reduce the syntactic variability of natural language by some normalisation procedure applied to these syntax structures. However, the retrieval performance of such systems remains fairly disappointing. Some systems therefore use a meaning representation language to index and retrieve documents. In this paper, a system is presented that uses Horn Clause Logic as meaning representation language, employs advanced techniques from natural Language Processing to achieve incremental extensibility, and uses methods from Logic Programming to achieve robustness in the face of insufficient data.

Patent
10 Jul 1992
TL;DR: In this article, an information retrieval system is used for retrieving information from a database, which includes a parser for parsing a natural language input query into constituent phrases as a syntax analysis result, and a retrieval execution unit for retrieving data from the database on the basis of the database retrieval formula.
Abstract: An information retrieval system is used for retrieving information from a database. The information retrieval system includes a parser for parsing a natural language input query into constituent phrases as a syntax analysis result. The system also includes a virtual table for converting phrases of the natural language query to retrieval keys that are possessed by the database. The virtual table accounts for particles that modify the phrases in the input query. A collating unit is provided in the system for preparing a database retrieval formula from the syntax analysis result by selecting a virtual table that is used to convert the phrases to the keys possessed by the database. Lastly, the system includes a retrieval execution unit for retrieving data from the database on the basis of the database retrieval formula.

Journal ArticleDOI
TL;DR: Investigations in implementing a document retrieval system based on a neural network model show that many of the standard strategies of information retrieval are applicable in a neural network model.
Abstract: The task of a document retrieval system is to match a query, perhaps in natural language, against a large number of natural language documents. Neural networks are known to be good pattern matchers. This article describes investigations in implementing a document retrieval system based on a neural network model. It shows that many of the standard strategies of information retrieval are applicable in a neural network model.
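One common way to realize the "neural network as pattern matcher" view of retrieval in this era was spreading activation: query terms activate term nodes, and activation flows over weighted links to document nodes. The network topology and weights below are invented for illustration and are not the article's actual model.

```python
# Hedged sketch of spreading-activation retrieval over an assumed
# term-document network; a single forward pass, no learning.

TERM_DOC_WEIGHTS = {            # term node -> {document node: link weight}
    "neural":    {"doc_a": 1.0, "doc_b": 0.25},
    "retrieval": {"doc_a": 0.5, "doc_b": 0.75, "doc_c": 0.625},
}

def spread_activation(query_terms):
    """Propagate unit activation from query terms to document nodes."""
    activation = {}
    for term in query_terms:
        for doc, weight in TERM_DOC_WEIGHTS.get(term, {}).items():
            activation[doc] = activation.get(doc, 0.0) + weight
    # Documents ranked by total received activation.
    return sorted(activation.items(), key=lambda kv: -kv[1])

print(spread_activation(["neural", "retrieval"]))
# -> [('doc_a', 1.5), ('doc_b', 1.0), ('doc_c', 0.625)]
```

Note how this reduces to weighted term matching when activation makes only one hop, which is one reason standard retrieval strategies carry over to the network formulation, as the abstract observes.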

Book ChapterDOI
07 Oct 1992
TL;DR: The result of this work has been the identification of general criteria for obtaining natural language restatements from a single query expressed in a graphical language on Entity-Relationship schemas.
Abstract: Various works have been proposed to simplify the interaction between casual users and databases. Overly powerful tools, however, can cause the user to lose control of the many operations performed. A natural language restatement of the query has seemed the best way to assure the user about the accuracy of the formulation of his/her intents. The result of this work has been the identification of general criteria for obtaining natural language restatements from a single query expressed in a graphical language on Entity-Relationship schemas.

Proceedings ArticleDOI
01 Nov 1992
TL;DR: The paper concludes that natural language retrieval of information in hypertext documents can provide users with both the browsing capabilities of hypertext and the semantic search capabilities of natural language query processing.
Abstract: Current hypertext systems have no intelligent means for finding specific information. When searching for specific information (as opposed to browsing), users can get disoriented in large hypertext documents and may end up following a path that takes them farther away from the information they seek. This paper describes an information retrieval system called HRS (Hypertext Retrieval System) that allows users to retrieve information in hypertext documents based on its semantic content. HRS comprises an authoring system, a browser, and a graph-based information retrieval facility. The graph-based retrieval facility allows users to retrieve specific information in hypertext documents by posing English language queries. The retrieval facility is based on the use of Conceptual Graphs, a knowledge representation scheme. The English language queries posed by users are automatically converted to Conceptual Graphs by a parser. The information in hypertext documents is also represented using Conceptual Graphs. Query processing is treated as a graph matching process, and retrieval is performed by a semantic-based search. This technology is useful for retrieval of information in large knowledge domains where a user needs to find specific information and does not know the organisation of the hypertext document or the words used in the document. The paper concludes that natural language retrieval of information in hypertext documents can provide users with both the browsing capabilities of hypertext and the semantic search capabilities of natural language query processing.
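The "query processing as graph matching" idea can be sketched by reducing conceptual graphs to sets of (concept, relation, concept) triples; a document node matches when it contains every relation in the query graph. Real conceptual graphs are far richer, and the node names and query below are invented for the example.

```python
# Illustrative sketch of graph matching over triple-set conceptual graphs
# (an assumed simplification of what HRS actually stores).

def matches(query_graph, doc_graph):
    """A document matches if it contains every triple in the query graph."""
    return query_graph <= doc_graph

def retrieve(query_graph, doc_graphs):
    """Return the names of hypertext nodes whose graphs match the query."""
    return [name for name, g in doc_graphs.items() if matches(query_graph, g)]

docs = {
    "node1": {("cat", "AGENT", "chase"), ("chase", "OBJECT", "mouse")},
    "node2": {("dog", "AGENT", "bark")},
}
# Assumed parse of the English query "Which cats chase things?"
query = {("cat", "AGENT", "chase")}
print(retrieve(query, docs))
# -> ['node1']
```

This is what lets users who "do not know the words used in the document" still find it: the match is on relational structure between concepts, not on surface keywords.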

Proceedings ArticleDOI
23 Aug 1992
TL;DR: This paper organizes an adverbial lexicon which will be useful for information retrieval and natural language processing systems.
Abstract: The adverb is the most complicated, and perhaps also the most interesting part of speech. Past research in natural language processing, however, has not dealt seriously with adverbs, though linguists have done significant work on this word class. The current paper draws on this linguistic research to organize an adverbial lexicon which will be useful for information retrieval and natural language processing systems.

Journal ArticleDOI
TL;DR: All children across the four languages appear to start answering negative questions using the English system, and while English-speaking and Korean-speaking children find true negative statements more difficult to verify than false negative statements, Japanese- speaking children find them less difficult.
Abstract: This review article examines how children verify a statement (e.g., You are a child. Right or wrong?) and answer a corresponding question (e.g., Are you a child? Yes or no?) in English, French, Japanese, and Korean. While people verify affirmative statements and answer affirmative questions similarly across the four languages, they answer negative questions differently across the four languages. In English, answering negative questions works in a way opposite to verification (e.g., Are you not a child? Yes; You are not a child. Wrong). In French, si is used in the place of the yes response in English. In Japanese and Korean, answering negative questions works in a way similar to verification (e.g., Are you not a child? No; You are not a child. Wrong). The effects of these linguistic characteristics are examined. Findings are: (1) All children across the four languages appear to start answering negative questions using the English system; (2) English-speaking children find verifying negative statements more difficult than answering the corresponding questions but Japanese-speaking children find it less difficult; and (3) while English-speaking and Korean-speaking children find true negative statements more difficult to verify than false negative statements, Japanese-speaking children find them less difficult. Language-universal and language-specific processes in verification and answering are discussed.

Book
01 Jan 1992
TL;DR: The experimental results showed that the effectiveness of the structural model was barely comparable to that of the other models and replications of this study are needed to further prove or disprove the usefulness of case relations in improving retrieval effectiveness.
Abstract: The purpose of this research is to design a document retrieval model which is a structural model based on case relations and to test how effectively a prototype of this model would perform retrieval on a test database. Case relations are a major component of case grammar proposed by linguistic theorists and developed in computational linguistics and natural language processing. The design of the structural retrieval model involves case relations and structured document representation, case relation-based natural language parsing and automatic structural indexing, and tree mapping and structural matching. In this model, a document is represented by a set of tree-like case frames in which the components of a natural language clause are assigned to different nodes called cases, and all nodes have pre-defined case relations to the verb of the clause. To implement such a structural representation by automatic means, an indexing engine was coded (using PROLOG) and developed which consists of a natural language parser and a case frame generator. In response to a natural language query, the prototype of the model (1) processes and converts the query into a set of case frames; (2) measures the structural closeness between the query and every document in a database through tree-mapping; and (3) presents the retrieved documents, according to their closeness to the query, in ranked order. A number of typical retrieval experiments have been designed to compare the structural model with the vector space model and the Boolean model. All of the model prototypes processed a set of thirty queries on a test database of 534 documents. The retrieval performance was measured using recall-precision graphs, averaged recall and precision, and statistical tests. The experimental results showed that the effectiveness of the structural model was barely comparable to that of the other models. 
The conclusions are: (1) the structural model is not more effective than other models, and (2) replications of this study are needed to further prove or disprove the usefulness of case relations in improving retrieval effectiveness.
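The case-frame representation and structural closeness measure described above can be sketched as follows. The flat frame shape, role names, and overlap-count scoring are assumptions for illustration; the dissertation's model uses tree-like frames and tree mapping.

```python
# Hypothetical sketch of case-frame retrieval: a clause becomes a frame
# mapping case roles to fillers, and documents are ranked by how many
# role/filler pairs they share with the query frame.

def frame(verb, **cases):
    """Build a flat case frame: the verb plus its case-role fillers."""
    return {"VERB": verb, **cases}

def closeness(query_frame, doc_frame):
    """Count case roles filled identically in both frames."""
    return sum(1 for role, filler in query_frame.items()
               if doc_frame.get(role) == filler)

docs = {
    "d1": frame("retrieve", AGENT="system", OBJECT="documents"),
    "d2": frame("parse", AGENT="system", OBJECT="sentence"),
}
# Assumed frame for the query "retrieve documents"
q = frame("retrieve", OBJECT="documents")
ranked = sorted(docs, key=lambda d: -closeness(q, docs[d]))
print(ranked)
# -> ['d1', 'd2']
```

Unlike a bag-of-words match, this scores "system retrieves documents" and "documents retrieve systems" differently, which is exactly the structural sensitivity the experiments set out to evaluate.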


Proceedings ArticleDOI
26 May 1992
TL;DR: The first experiment of knowledge extraction from geographic information system (KEGIS) shows how the knowledge base created by KEGIS can closely represent the ’real world’.
Abstract: Expert systems (ES) have been shown to be useful in many areas of natural resource and environmental impact studies. However, the major obstacle in the development of expert systems is the difficulty of extracting experts' knowledge into a knowledge base. An alternative approach that can overcome this obstacle is to extract domain knowledge from an information system by machine learning. This study is the first experiment of knowledge extraction from geographic information system (KEGIS). The major effort in this study is to develop a landuse expert system with a knowledge base that is generated by learning from sample data of a geographic information system (GIS). In this study, 154 sample areas were selected from Wongnute County, Inner Mongolia, for knowledge base extraction. With the landuse knowledge base, an inference engine, and a user interface, a landuse expert system has been constructed for landuse consulting. In an accuracy test, the landuse expert system provided correct suggestions in 73% of cases. This result shows that the knowledge base created by KEGIS can closely represent the 'real world'.

Book ChapterDOI
Udo Pletat1
07 Sep 1992
TL;DR: An overview of the typed predicate logic LLILOG which serves as the target language for translating the information provided in German texts into machine processible form and a flexible theorem prover for processing the information extracted from natural language texts.
Abstract: We give an overview of the typed predicate logic LLILOG which serves as the target language for translating the information provided in German texts into machine processible form. Being part of the natural language understanding system LEU/2, the knowledge representation system built around LLILOG serves different purposes. Its knowledge engineering environment has been used for modeling the semantic background knowledge for the application domain of LEU/2. The inference engine implementing LLILOG is a flexible theorem prover for processing the information extracted from natural language texts.

Proceedings ArticleDOI
31 Mar 1992
TL;DR: The MARIE system as discussed by the authors employs natural language processing techniques to identify photographic images concerning various military projects, and the captions are parsed to produce a logical form from which nouns and verbs are extracted to form the primary keywords.
Abstract: This paper briefly describes the current implementation status of an intelligent information retrieval system, MARIE, that employs natural language processing techniques. Descriptive captions are used to identify photographic images concerning various military projects. The captions are parsed to produce a logical form from which nouns and verbs are extracted to form the primary keywords. User queries are also specified in natural language. A two-phase search process employing coarse-grain and fine-grain match processes is used to find the captions that best match the query. A type hierarchy based on object-oriented programming constructs is used to represent the semantic knowledge base. This knowledge base contains knowledge of various military concepts and terminology with specifics from the Naval Weapons Center. Methods are used for creating the logical form during semantic analysis, generating the keywords to be used in the coarse-grain match process, and fine-grain matching between query and caption logical forms.