
Showing papers on "Natural language published in 1998"


Journal ArticleDOI
01 Aug 1998
TL;DR: It is shown that probabilistic methods can be used to predict topic changes in the context of the task of new event detection, and results are presented that provide further proof of concept for the use of language models in retrieval tasks.
Abstract: In today's world, there is no shortage of information. However, for a specific information need, only a small subset of all of the available information will be useful. The field of information retrieval (IR) is the study of methods to provide users with that small subset of information relevant to their needs and to do so in a timely fashion. Information sources can take many forms, but this thesis will focus on text based information systems and investigate problems germane to the retrieval of written natural language documents. Central to these problems is the notion of "topic." In other words, what are documents about? However, topics depend on the semantics of documents and retrieval systems are not endowed with knowledge of the semantics of natural language. The approach taken in this thesis will be to make use of probabilistic language models to investigate text based information retrieval and related problems. One such problem is the prediction of topic shifts in text, the topic segmentation problem. It will be shown that probabilistic methods can be used to predict topic changes in the context of the task of new event detection. Two complementary sets of features are studied individually and then combined into a single language model. The language modeling approach allows this problem to be approached in a principled way without complex semantic modeling. Next, the problem of document retrieval in response to a user query will be investigated. Models of document indexing and document retrieval have been extensively studied over the past three decades. The integration of these two classes of models has been the goal of several researchers but it is a very difficult problem. Much of the reason for this is that the indexing component requires inferences as to the semantics of documents. Instead, an approach to retrieval based on probabilistic language modeling will be presented. Models are estimated for each document individually. The approach to modeling is non-parametric and integrates the entire retrieval process into a single model. One advantage of this approach is that collection statistics, which are used heuristically for the assignment of concept probabilities in other probabilistic models, are used directly in the estimation of language model probabilities in this approach. The language modeling approach has been implemented and tested empirically and performs very well on standard test collections and query sets. In order to improve retrieval effectiveness, IR systems use additional techniques such as relevance feedback, unsupervised query expansion and structured queries. These and other techniques are discussed in terms of the language modeling approach and empirical results are given for several of the techniques developed. These results provide further proof of concept for the use of language models for retrieval tasks.
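
To make the query-likelihood idea concrete, here is a minimal sketch of ranking documents by the probability a document's language model assigns to the query. It is an illustration only: the thesis's estimator is non-parametric and differs in detail, and the function names, corpus, and smoothing weight below are illustrative assumptions.

```python
# Minimal sketch of query-likelihood retrieval with language models.
# Assumes every query term occurs somewhere in the collection.
import math
from collections import Counter

def score(query, doc_tokens, collection_counts, collection_len, lam=0.5):
    """log P(query | document) with Jelinek-Mercer smoothing against
    collection statistics, which the abstract notes are used directly
    in estimating the language model probabilities."""
    tf = Counter(doc_tokens)
    dl = len(doc_tokens)
    s = 0.0
    for term in query:
        p_doc = tf[term] / dl if dl else 0.0
        p_coll = collection_counts[term] / collection_len
        s += math.log(lam * p_doc + (1 - lam) * p_coll)
    return s

docs = {"d1": "language models for retrieval".split(),
        "d2": "topic segmentation of text".split()}
coll = Counter(t for d in docs.values() for t in d)
clen = sum(coll.values())
query = "language retrieval".split()
print(sorted(docs, key=lambda d: score(query, docs[d], coll, clen), reverse=True))
```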

2,736 citations


Proceedings ArticleDOI
01 Nov 1998
TL;DR: The effectiveness of five different automatic learning algorithms for text categorization is compared in terms of learning speed, real-time classification speed, and classification accuracy.
Abstract: Text categorization – the assignment of natural language texts to one or more predefined categories based on their content – is an important component in many information organization and management tasks. We compare the effectiveness of five different automatic learning algorithms for text categorization in terms of learning speed, real-time classification speed, and classification accuracy. We also examine training set size and alternative document representations. Very accurate text classifiers can be learned automatically from training examples. Linear Support Vector Machines (SVMs) are particularly promising because they are very accurate, quick to train, and quick to evaluate.
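
As an illustration of the kind of linear-SVM text classifier the paper evaluates, the sketch below uses modern scikit-learn rather than the authors' original tooling; the tiny corpus and category names are invented for the example.

```python
# Illustrative linear-SVM text classifier in the spirit of the paper:
# tf-idf document representation feeding a linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

train_texts = ["wheat prices rose", "stock markets fell", "corn harvest up"]
train_labels = ["grain", "finance", "grain"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(train_texts, train_labels)
print(clf.predict(["wheat harvest report"]))  # expected: ['grain']
```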

1,606 citations


Journal ArticleDOI
TL;DR: As discussed in this paper, the role of computers in language instruction has become an important issue confronting large numbers of language teachers throughout the world, and recent years have seen an explosion of interest in using computers for language teaching and learning.
Abstract: Recent years have shown an explosion of interest in using computers for language teaching and learning. A decade ago, the use of computers in the language classroom was of concern only to a small number of specialists. However, with the advent of multimedia computing and the Internet, the role of computers in language instruction has now become an important issue confronting large numbers of language teachers throughout the world.

1,072 citations


01 Jan 1998
TL;DR: This thesis demonstrates that several important kinds of natural language ambiguities can be resolved to state-of-the-art accuracies using a single statistical modeling technique based on the principle of maximum entropy.
Abstract: This thesis demonstrates that several important kinds of natural language ambiguities can be resolved to state-of-the-art accuracies using a single statistical modeling technique based on the principle of maximum entropy. We discuss the problems of sentence boundary detection, part-of-speech tagging, prepositional phrase attachment, natural language parsing, and text categorization under the maximum entropy framework. In practice, we have found that maximum entropy models offer the following advantages: State-of-the-art accuracy. The probability models for all of the tasks discussed perform at or near state-of-the-art accuracies, or outperform competing learning algorithms when trained and tested under similar conditions. Methods which outperform those presented here require much more supervision in the form of additional human involvement or additional supporting resources. Knowledge-poor features. The facts used to model the data, or features, are linguistically very simple, or "knowledge-poor", yet succeed in approximating complex linguistic relationships. Reusable software technology. The mathematics of the maximum entropy framework are essentially independent of any particular task, and a single software implementation can be used for all of the probability models in this thesis. The experiments in this thesis suggest that experimenters can obtain state-of-the-art accuracies on a wide range of natural language tasks, with little task-specific effort, by using maximum entropy probability models.
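
A conditional maximum entropy model over indicator features is mathematically equivalent to multinomial logistic regression, so the following sketch uses that equivalence; it is not the thesis's GIS-trained implementation, and the features and tags are invented for illustration.

```python
# Maxent-as-logistic-regression sketch over knowledge-poor contextual
# features, here for a toy part-of-speech tagging decision.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

X = [{"word": "the", "prev": "<s>"},
     {"word": "dog", "prev": "the"},
     {"word": "runs", "prev": "dog"}]
y = ["DET", "NOUN", "VERB"]

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X, y)
print(model.predict([{"word": "cat", "prev": "the"}]))
```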

510 citations


Proceedings ArticleDOI
10 Aug 1998
TL;DR: Novel aspects of a new natural language generator called Nitrogen are described; it has a highly flexible input representation that allows a spectrum of input from syntactic to semantic depth, and shifts the burden of many linguistic decisions to the statistical post-processor.
Abstract: We describe novel aspects of a new natural language generator called Nitrogen. This generator has a highly flexible input representation that allows a spectrum of input from syntactic to semantic depth, and shifts the burden of many linguistic decisions to the statistical post-processor. The generation algorithm is compositional, making it efficient, yet it also handles non-compositional aspects of language. Nitrogen's design makes it robust and scalable, operating with lexicons and knowledge bases of one hundred thousand entities.

463 citations


Proceedings ArticleDOI
14 Apr 1998
TL;DR: A large-vocabulary sign language interpreter with real-time continuous gesture recognition from a data glove is presented, using hidden Markov models for 51 fundamental postures, 6 orientations, and 8 motion primitives.
Abstract: A large vocabulary sign language interpreter is presented with real-time continuous gesture recognition of sign language using a data glove. Sign language, usually regarded as a natural language with formal semantic definitions and syntactic rules, is a large set of hand gestures used daily to communicate with the hearing impaired. The most critical problem, end-point detection in a stream of gesture input, is solved first; statistical analysis is then performed over four parameters of a gesture: posture, position, orientation, and motion. The authors have implemented a prototype system with a lexicon of 250 vocabularies and collected 196 training sentences in Taiwanese Sign Language (TWL). The system uses hidden Markov models (HMMs) for 51 fundamental postures, 6 orientations, and 8 motion primitives. In a signer-dependent setting, sentences of gestures based on these vocabularies can be continuously recognized in real time, with an average recognition rate of 80.4%.
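
The following toy Viterbi decoder over discrete posture symbols illustrates the kind of HMM decoding such a recognizer performs; the states, observations, and probabilities are invented, and the real system's models for postures, orientations, and motion primitives are far richer.

```python
# Toy Viterbi decoding of a posture HMM from symbolic glove readings.
def viterbi(obs, states, start_p, trans_p, emit_p):
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(V[-1], key=V[-1].get)
    return path[best]

states = ["flat_hand", "fist"]
obs = ["spread", "spread", "curl"]          # invented glove posture readings
start = {"flat_hand": 0.6, "fist": 0.4}
trans = {"flat_hand": {"flat_hand": 0.7, "fist": 0.3},
         "fist": {"flat_hand": 0.3, "fist": 0.7}}
emit = {"flat_hand": {"spread": 0.8, "curl": 0.2},
        "fist": {"spread": 0.1, "curl": 0.9}}
print(viterbi(obs, states, start, trans, emit))
```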

455 citations


Proceedings ArticleDOI
01 Sep 1998
TL;DR: The system that was developed, SUMMONS, uses the output of systems developed for the DARPA Message Understanding Conferences to generate summaries of multiple documents on the same or related events, presenting similarities and differences, contradictions, and generalizations among sources of information.
Abstract: We present a methodology for summarization of news about current events in the form of briefings that include appropriate background (historical) information. The system that we developed, SUMMONS, uses the output of systems developed for the DARPA Message Understanding Conferences to generate summaries of multiple documents on the same or related events, presenting similarities and differences, contradictions, and generalizations among sources of information. We describe the various components of the system, showing how information from multiple articles is combined, organized into a paragraph, and finally, realized as English sentences. A feature of our work is the extraction of descriptions of entities such as people and places for reuse to enhance a briefing.

425 citations


Book
01 Jan 1998
TL;DR: This book discusses patterns of co-occurrence of verb-forms in spoken and written English and offers a discourse-based re-examination of idioms in use, a traditional area of language teaching.
Abstract: Acknowledgements The author 1. Introduction 2. Spoken language and the notion of genre 3. What should we teach about the spoken language? 4. When does sentence grammar become discourse grammar? 5. Some patterns of co-occurrence of verb-forms in spoken and written English 6. Vocabulary and the spoken language 7. Idioms in use: a discourse-based re-examination of a traditional area of language teaching 8. 'So Mary was saying': speech reporting in everyday conversation Glossary References Index.

370 citations


Dissertation
01 Jan 1998
TL;DR: This thesis is an inquiry into the nature of the high-level, rhetorical structure of unrestricted natural language texts, computational means to enable its derivation, and two applications (in automatic summarization and natural language generation) that follow from the ability to build such structures automatically.
Abstract: This thesis is an inquiry into the nature of the high-level, rhetorical structure of unrestricted natural language texts, computational means to enable its derivation, and two applications (in automatic summarization and natural language generation) that follow from the ability to build such structures automatically. The thesis proposes a first-order formalization of the high-level, rhetorical structure of text. The formalization assumes that text can be sequenced into elementary units; that discourse relations hold between textual units of various sizes; that some textual units are more important to the writer's purpose than others; and that trees are a good approximation of the abstract structure of text. The formalization also introduces a linguistically motivated compositionality criterion, which is shown to hold for the text structures that are valid. The thesis proposes, analyzes theoretically, and compares empirically four algorithms for determining the valid text structures of a sequence of units among which some rhetorical relations hold. Two algorithms apply model-theoretic techniques; the other two apply proof-theoretic techniques. The formalization and the algorithms mentioned so far correspond to the theoretical facet of the thesis. An exploratory corpus analysis of cue phrases provides the means for applying the formalization to unrestricted natural language texts. A set of empirically motivated algorithms were designed in order to determine the elementary textual units of a text, to hypothesize rhetorical relations that hold among these units, and eventually, to derive the discourse structure of that text. The process that finds the discourse structure of unrestricted natural language texts is called rhetorical parsing. The thesis explores two possible applications of the text theory that it proposes. The first application concerns a discourse-based summarization system, which is shown to significantly outperform both a baseline algorithm and a commercial system. An empirical psycholinguistic experiment not only provides an objective evaluation of the summarization system, but also confirms the adequacy of using the text theory proposed here in order to determine the most important units in a text. The second application concerns a set of text planning algorithms that can be used by natural language generation systems in order to construct text plans in the cases in which the high-level communicative goal is to map an entire knowledge pool into text.

313 citations


Patent
11 Aug 1998
TL;DR: In this paper, a customer service system is described that includes a natural language device, a remote device remotely coupled to the natural language device over a network, and a database coupled to the natural language device.
Abstract: A customer service system includes a natural language device, a remote device remotely coupled to the natural language device over a network and a database coupled to the natural language device. The database has a plurality of answers stored on it that are indexed to natural language keys. The natural language device implements a natural language understanding system. The natural language device receives a natural language question over the network from the remote device. The question is analyzed using the natural language understanding system. Based on the analysis, the database is then queried. An answer to the question is received based on the query, and the answer is provided to the remote device over the network.

298 citations



Journal ArticleDOI
J.R. Bellegarda
TL;DR: A new framework is proposed to construct multispan language models for large vocabulary speech recognition, by exploiting both local and global constraints present in the language, via a paradigm first formulated in the context of information retrieval, called latent semantic analysis (LSA).
Abstract: A new framework is proposed to construct multispan language models for large vocabulary speech recognition, by exploiting both local and global constraints present in the language. While statistical n-gram modeling can readily take local constraints into account, global constraints have been more difficult to handle within a data-driven formalism. In this work, they are captured via a paradigm first formulated in the context of information retrieval, called latent semantic analysis (LSA). This paradigm seeks to automatically uncover the salient semantic relationships between words and documents in a given corpus. Such discovery relies on a parsimonious vector representation of each word and each document in a suitable, common vector space. Since in this space familiar clustering techniques can be applied, it becomes possible to derive several families of large-span language models, with various smoothing properties. Because of their semantic nature, the new language models are well suited to complement conventional, more syntactically oriented n-grams, and the combination of the two paradigms naturally yields the benefit of a multispan context. An integrative formulation is proposed for this purpose, in which the latent semantic information is used to adjust the standard n-gram probability. The performance of the resulting multispan language models, as measured by perplexity, compares favorably with the corresponding n-gram performance.
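
The sketch below illustrates only the mechanics of the LSA side: a truncated SVD of a word-document matrix yields word vectors whose cosine similarity can be used to rescale an n-gram probability. The paper's actual integrative formulation differs; the corpus, the placeholder n-gram probability, and the combination rule here are assumptions for illustration.

```python
# LSA word vectors from a word-document matrix, used to adjust an
# n-gram probability (illustrative combination only).
import numpy as np

docs = [["stocks", "fell", "markets"], ["harvest", "wheat", "grain"]]
vocab = sorted({w for d in docs for w in d})
W = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d:
        W[vocab.index(w), j] += 1.0

U, S, Vt = np.linalg.svd(W, full_matrices=False)
k = 2
word_vecs = U[:, :k] * S[:k]          # latent representation of each word

def lsa_sim(w1, w2):
    a, b = word_vecs[vocab.index(w1)], word_vecs[vocab.index(w2)]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

p_ngram = 0.01                         # placeholder n-gram probability
adjusted = p_ngram * (1 + max(0.0, lsa_sim("stocks", "markets")))
print(adjusted)
```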

Patent
03 Mar 1998
TL;DR: The universal machine translator described in this paper enables the semantic, or meaningful, translation of arbitrary languages with zero loss of the source language's meaning in the target language translation, a loss that is typical of prior-art human and machine translations.
Abstract: A universal machine translator of arbitrary languages enables the semantic, or meaningful, translation of arbitrary languages with zero loss of meaning of the source language in the target language translation, which loss is typical in prior art human and machine translations. The universal machine translator embodies universal transformations itself and comprises the means for identifying high-level grammatical constructions of a source language word stream, constructing a grammatical world model of the syntax of the source language high-level word stream, decomposing source and target languages into universal moments of meaning, or epistemic instances, translating the epistemic moments of source and target languages with substantially no loss in meaning, constructing a grammatical world model of the syntax of the target language high-level word stream, optionally adjusting the target language syntax to comply with a preferred target language grammar, and generating the translated target language word stream. The universal machine translator also comprises the means to embody arbitrary sensory/motor receptions and transmissions of arbitrary word streams, which allows universally translated communications to occur among human beings and machines.

Book
01 Jun 1998
TL;DR: This work presents DOP models for tree representations, compositional semantic representations, and non-context-free representations, together with a formal stochastic language theory.
Abstract: 1. Introduction: what are the productive units of natural language? 2. A DOP model for tree representations 3. Formal stochastic language theory 4. Parsing and disambiguation 5. Testing DOP: redundancy vs. minimality 6. Learning new words 7. Learning new structures 8. A DOP model for compositional semantic representations 9. Speech understanding and dialogue processing 10. DOP models for non-context-free representations 11. Conclusion: linguistics reconsidered References.

Proceedings ArticleDOI
10 Aug 1998
TL;DR: An overview of the distinguishing characteristics of MindNet, the steps involved in its creation, and its extension beyond dictionary text are provided.
Abstract: As a lexical knowledge base constructed automatically from the definitions and example sentences in two machine-readable dictionaries (MRDs), MindNet embodies several features that distinguish it from prior work with MRDs. It is, however, more than this static resource alone. MindNet represents a general methodology for acquiring, structuring, accessing, and exploiting semantic information from natural language text. This paper provides an overview of the distinguishing characteristics of MindNet, the steps involved in its creation, and its extension beyond dictionary text.

Proceedings ArticleDOI
13 Oct 1998
TL;DR: A preliminary typology of summaries in general is presented; a description of the current and planned modules and performance of the SUMMARIST automated multilingual text summarization system is described; and three methods to evaluate summaries are discussed.
Abstract: This paper consists of three parts: a preliminary typology of summaries in general; a description of the current and planned modules and performance of the SUMMARIST automated multilingual text summarization system being built at ISI; and a discussion of three methods to evaluate summaries.

Journal ArticleDOI
01 Mar 1998
TL;DR: The paper presents the use case model, the linguistic basis and the guided process along with the associated guidelines and support rules, and the process is illustrated with the automated teller machine (ATM) case study.
Abstract: An approach for guiding the construction of use case specifications is presented. A use case specification comprises contextual information of the use case, its change history, the complete graph of possible pathways, attached requirements and open issues. The proposed approach delivers a use case specification as an unambiguous natural language text. This is done by a stepwise and guided process which progressively transforms initial and partial natural language descriptions of scenarios into well structured, integrated use case specifications. The basis of the approach is a set of linguistic patterns and linguistic structures. The former constitutes the deep structure of the use case specification whereas the latter corresponds to the surface structures. The paper presents the use case model, the linguistic basis and the guided process along with the associated guidelines and support rules. The process is illustrated with the automated teller machine (ATM) case study.

Book ChapterDOI
21 Sep 1998
TL;DR: The paper shows that the new probabilistic interpretation of tf×idf term weighting might lead to better understanding of statistical ranking mechanisms, for example by explaining how they relate to coordination level ranking.
Abstract: This paper presents a new probabilistic model of information retrieval. The most important modeling assumption made is that documents and queries are defined by an ordered sequence of single terms. This assumption is not made in well-known existing models of information retrieval, but is essential in the field of statistical natural language processing. Advances already made in statistical natural language processing will be used in this paper to formulate a probabilistic justification for using tf×idf term weighting. The paper shows that the new probabilistic interpretation of tf×idf term weighting might lead to better understanding of statistical ranking mechanisms, for example by explaining how they relate to coordination level ranking. A pilot experiment on the Cranfield test collection indicates that the presented model outperforms the vector space model with classical tf×idf and cosine length normalisation.
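
For reference, here is the classical tf×idf weighting that the paper gives a probabilistic justification for; this sketch implements the standard formulation, not the paper's new model, and the documents and query are invented.

```python
# Classical tf×idf scoring: term frequency times log inverse document
# frequency, summed over query terms.
import math
from collections import Counter

docs = {"d1": "term weighting for retrieval".split(),
        "d2": "retrieval of documents".split()}
N = len(docs)
df = Counter()
for toks in docs.values():
    df.update(set(toks))

def tfidf_score(query, toks):
    tf = Counter(toks)
    return sum(tf[t] * math.log(N / df[t]) for t in query if df[t])

query = "term retrieval".split()
ranked = sorted(docs, key=lambda d: tfidf_score(query, docs[d]), reverse=True)
print(ranked)  # expected: ['d1', 'd2']
```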

01 Aug 1998
TL;DR: Information Extraction (IE) as mentioned in this paper extracts information about a pre-specified set of entities, relations or events from natural language texts and records this information in structured representations called templates.
Abstract: In this paper we give a synoptic view of the growth of the text processing technology of information extraction (IE) whose function is to extract information about a pre‐specified set of entities, relations or events from natural language texts and to record this information in structured representations called templates. Here we describe the nature of the IE task, review the history of the area from its origins in AI work in the 1960s and 70s till the present, discuss the techniques being used to carry out the task, describe application areas where IE systems are or are about to be at work, and conclude with a discussion of the challenges facing the area. What emerges is a picture of an exciting new text processing technology with a host of new applications, both on its own and in conjunction with other technologies, such as information retrieval, machine translation and data mining.

Book ChapterDOI
TL;DR: This paper developed an explicit unified formal treatment of all the different varieties of informational independence in linguistic semantics, which amounts to a new type of logic, which is thereby opened for investigation and called attention to several actual linguistic phenomena which instantiate informational independence and provide evidence of its ubiquity.
Abstract: Many linguists and philosophers of language may have heard of informational independence, but most, not to say virtually all, of them consider it as a marginal feature of the semantics of natural languages. Yet in reality it is a widespread phenomenon in languages like English. In this paper, we shall develop an explicit unified formal treatment of all the different varieties of informational independence in linguistic semantics. This treatment amounts to a new type of logic, which is thereby opened for investigation. We shall also call attention to several actual linguistic phenomena which instantiate informational independence and provide evidence of its ubiquity. Last but not least, we shall show that the phenomenon of informational independence prompts several highly interesting methodological problems and suggestions.

Journal ArticleDOI
01 Jun 1998-Language
TL;DR: The work in Language and Space as discussed by the authors brings together the major lines of research and the most important theoretical viewpoints in the areas of psychology, linguistics, anthropology, and neuroscience, providing a much needed synthesis across these diverse domains.
Abstract: The fifteen original contributions in Language and Space bring together the major lines of research and the most important theoretical viewpoints in the areas of psychology, linguistics, anthropology, and neuroscience, providing a much needed synthesis across these diverse domains. The study of the relationship between natural language and spatial cognition has the potential to yield answers to vexing questions about the nature of the mind, language, and culture. Each chapter gives a clear, up-to-date account of a particular research program. Overall, they address such questions as: how does the brain represent space, how many kinds of spatial representations are there, how do we learn to talk about space and what role does culture play in these matters, and should experimental tests of the relations between space and language be restricted to closed-class linguistic elements or must the role of open-class elements be considered as well? Throughout, the authors speak to each other's arguments, laying bare key areas of agreement and disagreement. Contributors: Manfred Bierwisch, Paul Bloom, Melissa Bowerman, Karen Emmorey, Merrill Garrett, Ray Jackendoff, Philip Johnson-Laird, Barbara Landau, Willem Levelt, Stephen Levinson, Gordon Logan, Jean Mandler, Lynn Nadel, John O'Keefe, Mary Peterson, Daniel Sadler, Tim Shallice, Len Talmy, Barbara Tversky.

Proceedings Article
01 Jul 1998
TL;DR: In this paper, a sparse network of linear separators is proposed for natural language disambiguation, which is based on the Winnow learning algorithm and is shown to perform well in a variety of ambiguity resolution problems.
Abstract: We analyze a few of the commonly used statistics-based and machine learning algorithms for natural language disambiguation tasks and observe that they can be recast as learning linear separators in the feature space. Each of the methods makes a priori assumptions which it employs, given the data, when searching for its hypothesis. Nevertheless, as we show, it searches a space that is as rich as the space of all linear separators. We use this to build an argument for a data driven approach which merely searches for a good linear separator in the feature space, without further assumptions on the domain or a specific problem. We present such an approach - a sparse network of linear separators, utilizing the Winnow learning algorithm - and show how to use it in a variety of ambiguity resolution problems. The learning approach presented is attribute-efficient and, therefore, appropriate for domains having a very large number of attributes. In particular, we present an extensive experimental comparison of our approach with other methods on several well-studied lexical disambiguation tasks such as context-sensitive spelling correction, prepositional phrase attachment and part of speech tagging. In all cases we show that our approach either outperforms other methods tried for these tasks or performs comparably to the best.
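
A compact implementation of the Winnow update at the core of this approach is sketched below; the paper's sparse network arranges many such units, and the feature indices and labels here are invented for illustration.

```python
# Winnow: multiplicative weight updates over binary features, mistake-driven
# and attribute-efficient (weights change only on active features).
def winnow_train(examples, n_features, alpha=2.0):
    w = [1.0] * n_features
    theta = n_features / 2
    for features, label in examples:         # label is 0 or 1
        score = sum(w[i] for i in features)
        pred = 1 if score >= theta else 0
        if pred == 0 and label == 1:          # promote active features
            for i in features:
                w[i] *= alpha
        elif pred == 1 and label == 0:        # demote active features
            for i in features:
                w[i] /= alpha
    return w, theta

# Each example: (indices of active features, label), e.g. for a
# context-sensitive spelling decision such as {weather, whether}.
examples = [([0, 2], 1), ([1, 3], 0), ([0, 3], 1), ([1, 2], 0)] * 5
w, theta = winnow_train(examples, n_features=4)
print([round(x, 2) for x in w], theta)
```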

Proceedings ArticleDOI
Jerome R. Bellegarda
12 May 1998
TL;DR: A new framework is proposed to integrate the various constraints, both local and global, that are present in the language, resulting in several families of multi-span language models for large vocabulary speech recognition.
Abstract: A new framework is proposed to integrate the various constraints, both local and global, that are present in the language. Local constraints are captured via n-gram language modeling, while global constraints are taken into account through the use of latent semantic analysis. An integrative formulation is derived for the combination of these two paradigms, resulting in several families of multi-span language models for large vocabulary speech recognition. Because of the inherent complementarity in the two types of constraints, the performance of the integrated language models, as measured by the perplexity, compares favorably with the corresponding n-gram performance.

Proceedings Article
09 Jan 1998
TL;DR: An information extraction system was adapted to act as a post-filter on the output of an IR system to improve precision on routing tasks and make it easier to write IE grammars for multiple topics.
Abstract: The authors describe an approach to applying a particular kind of Natural Language Processing (NLP) system to the TREC routing task in Information Retrieval (IR). Rather than attempting to use NLP techniques in indexing documents in a corpus, they adapted an information extraction (IE) system to act as a post-filter on the output of an IR system. The IE system was configured to score each of the top 2000 documents as determined by an IR system and, on the basis of that score, to rerank those 2000 documents. One aim was to improve precision on routing tasks. Another was to make it easier to write IE grammars for multiple topics.

Patent
28 Oct 1998
TL;DR: In this paper, an apparatus for automatically identifying command boundaries in a conversational natural language system, in accordance with the present invention, includes a speech recognizer for converting an input signal to recognized text and a boundary identifier coupled to the speech-recognizer for receiving the recognized text, the boundary identifier outputting the command if present in recognized text.
Abstract: An apparatus for automatically identifying command boundaries in a conversational natural language system, in accordance with the present invention, includes a speech recognizer for converting an input signal to recognized text and a boundary identifier coupled to the speech recognizer for receiving the recognized text and determining if a command is present in the recognized text, the boundary identifier outputting the command if present in the recognized text. A method for identifying command boundaries in a conversational natural language system is also included.

Proceedings Article
01 Jul 1998
TL;DR: This work introduces a methodology for automating the maintenance of domain-specific taxonomies based on natural language text understanding and ranks concept hypotheses according to credibility and the most credible ones are selected for assimilation into the domain knowledge base.
Abstract: We introduce a methodology for automating the maintenance of domain-specific taxonomies based on natural language text understanding. A given ontology is incrementally updated as new concepts are acquired from real-world texts. The acquisition process is centered around the linguistic and conceptual "quality" of various forms of evidence underlying the generation and refinement of concept hypotheses. On the basis of the quality of evidence, concept hypotheses are ranked according to credibility and the most credible ones are selected for assimilation into the domain knowledge base.

Journal Article
TL;DR: A model for the geometry of spatial relations was calibrated for a set of 59 English-language spatial predicates to provide a basis for high-level spatial query languages that exploit natural-language terms and to serve as a model for processing such queries.
Abstract: Spatial relations are the basis for many selections users perform when they query geographic information systems (GISs). Although such query languages use natural-language-like terms, the formal definitions of those spatial relations rarely reflect the same meaning people would apply when they communicate among each other. To bridge the gap between computational models for spatial relations and people's use of spatial terms in their natural languages, a model for the geometry of spatial relations was calibrated for a set of 59 English-language spatial predicates. The model distinguishes topological and metric properties. The calibration from sketches that were drawn by 34 human subjects identifies ten groups of spatial terms with similar properties and provides a mapping from spatial terms onto significant geometric parameters and their values. The calibration's results reemphasize the importance of topological over metric properties in the selection of English-language spatial terms. The model provides a basis for high-level spatial query languages that exploit natural-language terms and serves as a model for processing such queries.

Book ChapterDOI
01 Jan 1998
TL;DR: An overview of ORM is provided, and its advantages over Entity Relationship and traditional Object-Oriented modeling are noted.
Abstract: Object-Role Modeling (ORM) is a method for modeling and querying an information system at the conceptual level, and mapping between conceptual and logical (e.g. relational) levels. ORM comes in various flavors, including NIAM (Natural language Information Analysis Method). This article provides an overview of ORM, and notes its advantages over Entity Relationship and traditional Object-Oriented modeling.

Book
13 Aug 1998
TL;DR: This edited volume, introduced by Carruthers and Boucher, brings together chapters on language, development and evolution; on language, reasoning and concepts, including an investigation of propositional thinking in an a-propositional aphasic and an account of how language augments human computation; and on language and conscious reasoning.
Abstract: 1. Introduction: opening up options Peter Carruthers and Jill Boucher Part I: Language, development and evolution 2. Thought before language: the expression of motion events prior to the impact of a conventional language model Susan Goldin-Meadow and Ming-Yu Zheng 3. The prerequisites for language acquisition: evidence from cases of anomalous language development Jill Boucher 4. Some thoughts about the evolution of lads, with special reference to Tom and Sam Juan-Carlos Gomez 5. Thinking in language? Evolution and a modularist possibility Peter Carruthers Part II: Language, reasoning and concepts Introduction Peter Carruthers and Jill Boucher 6. Aphasic language, aphasic thought: an investigation of propositional thinking in an a-propositional aphasic Rosemary Varley 7. Representing representations Gabriel Segal 8. Magic words: how language augments human computation Andy Clark 9. The mapping between the mental and the public lexicon Dan Sperber and Deirdre Wilson 10. Convention-based semantics and the development of language Stephen Laurence Part III: Language and conscious reasoning Introduction Peter Carruthers and Jill Boucher 11. Language, thought, and the language of thought: Aunty's own argument revisited Martin Davies 12. Natural language and virtual belief Keith Frankish 13. The meta-intentional nature of executive functions and theory of mind Josef Perner 14. Reflections on language and mind Daniel Dennett.