
Showing papers on "Knowledge representation and reasoning" published in 1992


Journal ArticleDOI
TL;DR: As discussed by the authors, a mediator is a software module that exploits encoded knowledge about certain sets or subsets of data to create information for a higher layer of applications; mediation simplifies, abstracts, reduces, merges, and explains data.
Abstract: For single databases, primary hindrances for end-user access are the volume of data that is becoming available, the lack of abstraction, and the need to understand the representation of the data. When information is combined from multiple databases, the major concern is the mismatch encountered in information representation and structure. Intelligent and active use of information requires a class of software modules that mediate between the workstation applications and the databases. It is shown that mediation simplifies, abstracts, reduces, merges, and explains data. A mediator is a software module that exploits encoded knowledge about certain sets or subsets of data to create information for a higher layer of applications. A model of information processing and information system components is described. The mediator architecture, including mediator interfaces, sharing of mediator modules, distribution of mediators, and triggers for knowledge maintenance, is discussed.
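To make the mediation idea concrete, here is a minimal Python sketch, not Wiederhold's implementation: a module sitting between an application and two hypothetical data sources, resolving a representation mismatch (field names and units) and merging the results. All source names, schemas, and values are invented for illustration.

```python
# Minimal sketch of a mediator: a module between the application and two data
# sources that resolves a representation mismatch (field names, units) and
# merges the results. Sources, schemas, and values are invented.

def query_sales_db():
    # Hypothetical source 1: amounts in dollars, field "amt_usd".
    return [{"customer": "acme", "amt_usd": 1200.0},
            {"customer": "globex", "amt_usd": 800.0}]

def query_orders_db():
    # Hypothetical source 2: amounts in cents, field "total_cents".
    return [{"cust": "acme", "total_cents": 50000}]

class SalesMediator:
    """Presents one abstracted, merged revenue view over both sources."""
    def revenue_by_customer(self):
        totals = {}
        for row in query_sales_db():
            totals[row["customer"]] = totals.get(row["customer"], 0.0) + row["amt_usd"]
        for row in query_orders_db():
            totals[row["cust"]] = totals.get(row["cust"], 0.0) + row["total_cents"] / 100.0
        return totals

print(SalesMediator().revenue_by_customer())  # {'acme': 1700.0, 'globex': 800.0}
```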

2,441 citations


Journal ArticleDOI
TL;DR: It is shown that while the problem of deciding satisfiability of an S5 formula with one agent is NP-complete, the problem for many agents is PSPACE-complete and the problem becomes complete for exponential time once a common knowledge operator is added to the language.

886 citations


Journal ArticleDOI
TL;DR: A generalization of Allen's interval-based approach to temporal reasoning is presented; the notion of 'conceptual neighborhood' between qualitative relations on events is central to the approach, which uses semi-intervals rather than intervals as the basic units of knowledge.
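As background for Freksa's generalization, here is a minimal Python sketch (mine, not the paper's) that determines which of Allen's thirteen basic relations holds between two fully specified intervals; the paper's contribution is precisely the coarser, semi-interval-based knowledge and the conceptual neighborhoods that this fully specified base case does not cover.

```python
# Sketch: the Allen basic relation between closed intervals a = (a1, a2) and
# b = (b1, b2) with a1 < a2 and b1 < b2. Covers all thirteen relations.

def allen_relation(a, b):
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:  return "before"
    if b2 < a1:  return "after"
    if a2 == b1: return "meets"
    if b2 == a1: return "met-by"
    if a1 == b1 and a2 == b2: return "equal"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    return "overlaps" if a1 < b1 else "overlapped-by"

print(allen_relation((1, 3), (2, 5)))  # overlaps
```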

701 citations


Journal ArticleDOI
TL;DR: A mathematical model for conceptual knowledge systems allows us to study mathematically the representation, inference, acquisition, and communication of conceptual knowledge.
Abstract: “Concept Lattice” is the central notion of “Formal Concept Analysis”, a new area of research which is based on a set-theoretical model for concepts and conceptual hierarchies. This model yields not only a new approach to data analysis but also methods for formal representation of conceptual knowledge. These methods are outlined on three levels. First, basics on concept lattices are explained starting from simple data contexts which consist of a binary relation between objects and attributes indicating which object has which attribute. On the second level, conceptual relationships are discussed for data matrices which assign attribute values to each of the given objects. Finally, a mathematical model for conceptual knowledge systems is described. This model allows us to study mathematically the representation, inference, acquisition, and communication of conceptual knowledge.
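A minimal Python sketch of the first level described above, using a toy object-attribute context invented for illustration: it enumerates every formal concept (extent, intent) of the context by closing attribute subsets under the two derivation operators. Practical FCA implementations use much more efficient algorithms.

```python
from itertools import combinations

# A formal concept of a context is a pair (A, B) of an object set A and an
# attribute set B with A' = B and B' = A, where ' is the derivation operator.
# The toy context below (object -> attributes) is invented.
context = {
    "duck":  {"flies", "swims"},
    "eagle": {"flies", "hunts"},
    "trout": {"swims"},
}
objects = set(context)
attributes = set().union(*context.values())

def intent(objs):   # attributes shared by all objects in objs
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

def extent(attrs):  # objects having every attribute in attrs
    return {o for o in objects if attrs <= context[o]}

concepts = set()
for r in range(len(attributes) + 1):
    for attrs in combinations(sorted(attributes), r):
        ext = extent(set(attrs))
        concepts.add((frozenset(ext), frozenset(intent(ext))))

for ext, itt in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(ext), "<->", sorted(itt))
```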

590 citations


Journal ArticleDOI
TL;DR: The KL-ONE family is introduced, an overview of current research is given, some of the systems that have been developed are described, and some future research directions are outlined.
Abstract: The knowledge representation system KL-ONE has been one of the most influential and imitated knowledge representation systems in the Artificial Intelligence community. Begun at Bolt Beranek and Newman in 1978, KL-ONE pioneered the development of taxonomic representations that can automatically classify and assimilate new concepts based on a criterion of terminological subsumption. This theme generated considerable interest in both the formal community and a large community of potential users. The KL-ONE community has since expanded to include many systems at many institutions and in many different countries. This paper introduces the KL-ONE family and discusses some of the main themes explored by KL-ONE and its successors. We give an overview of current research, describe some of the systems that have been developed, and outline some future research directions.
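The classification theme can be illustrated with a deliberately tiny Python sketch: if every concept were merely a conjunction of primitive features, terminological subsumption would reduce to a subset test, and classifying a new concept would mean locating its subsumers and subsumees in the taxonomy. Real KL-ONE descriptions (roles, value and number restrictions, and so on) are far richer; the taxonomy below is invented.

```python
# Toy taxonomy: each concept is just a set of primitive features, so
# C subsumes D (C is more general) iff C's features are a subset of D's.
taxonomy = {
    "person":   {"animate", "human"},
    "parent":   {"animate", "human", "has-child"},
    "employee": {"animate", "human", "employed"},
}

def subsumes(c, d):
    return taxonomy[c] <= taxonomy[d]

def classify(name, features):
    """Insert a new concept and report its subsumers and subsumees."""
    taxonomy[name] = set(features)
    subsumers = [c for c in taxonomy if c != name and subsumes(c, name)]
    subsumees = [c for c in taxonomy if c != name and subsumes(name, c)]
    return subsumers, subsumees

print(classify("working-parent", {"animate", "human", "has-child", "employed"}))
# (['person', 'parent', 'employee'], [])
```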

384 citations


Journal ArticleDOI
TL;DR: This paper proposes the beginnings of a theory of reasoning with abstraction which captures and generalizes most previous work in the area and provides the foundations for the mechanization of abstraction inside an abstract proof checker.

309 citations


Journal ArticleDOI
TL;DR: A conceptual model called REMAP (representation and maintenance of process knowledge) that relates process knowledge to the objects that are created during the requirements engineering process has been developed and a prototype environment that provides assistance to the various stakeholders involved in the design and management of large systems has been implemented.
Abstract: Support for various stakeholders involved in software projects (designers, maintenance personnel, project managers and executives, end users) can be provided by capturing the history about design decisions in the early stages of the system's development life cycle in a structured manner. Much of this knowledge, which is called the process knowledge, involving the deliberation on alternative requirements and design decisions, is lost in the course of designing and changing such systems. Using an empirical study of problem-solving behavior of individual and groups of information systems professionals, a conceptual model called REMAP (representation and maintenance of process knowledge) that relates process knowledge to the objects that are created during the requirements engineering process has been developed. A prototype environment that provides assistance to the various stakeholders involved in the design and management of large systems has been implemented.

305 citations


MonographDOI
26 Jun 1992
TL;DR: The Logic of Typed Feature Structures is the first monograph that brings all the main theoretical ideas into one place where they can be related and compared in a unified setting and is an indispensable compendium for the researcher or graduate student working on constraint-based grammatical formalisms.
Abstract: This book develops the theory of typed feature structures, a data structure that generalizes both first-order terms and feature structures of unification-based grammars to include inheritance, typing, inequality, cycles and intensionality. The resulting synthesis serves as a logical foundation for grammars, logic programming and constraint-based reasoning systems. A logical perspective is adopted which employs an attribute-value description language along with complete equational axiomatizations of the various systems of feature structures. At the same time, efficiency concerns are kept in mind and complexity and representability results are provided. The application of feature structures to phrase structure grammars is described and completeness results are shown for standard evaluation strategies. Definite clause logic programs are treated as a special case of phrase structure grammars. Constraint systems are introduced and an enumeration technique is developed for solving arbitrary attribute-value logic constraints. This book, with its innovative approach to data structure, will be essential reading for researchers in computational linguistics, logic programming and knowledge representation. Its self-contained presentation makes it flexible enough to serve as both a research tool and a textbook.
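The core operation behind such systems, unification of feature structures, can be sketched in a few lines of Python for the plain, untyped, acyclic case, with structures as nested dicts and atoms as strings. Carpenter's typed feature structures additionally carry a type hierarchy, structure sharing, cycles, and inequations, all omitted here.

```python
class UnificationFailure(Exception):
    pass

def unify(f, g):
    """Unify two feature structures given as nested dicts with string atoms."""
    if isinstance(f, str) or isinstance(g, str):
        if f == g:
            return f
        raise UnificationFailure(f"{f!r} vs {g!r}")
    result = dict(f)
    for feature, value in g.items():
        result[feature] = unify(result[feature], value) if feature in result else value
    return result

np1 = {"agr": {"num": "sg"}, "case": "nom"}
np2 = {"agr": {"num": "sg", "per": "3"}}
print(unify(np1, np2))
# {'agr': {'num': 'sg', 'per': '3'}, 'case': 'nom'}
```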

278 citations


Journal ArticleDOI
TL;DR: This paper defines computationally efficient procedures for solving two related reasoning tasks that arise in interval and point algebras: Given (possibly indefinite) knowledge of the relationships between some intervals or points, find one or more scenarios that are consistent with the information provided, and find all the feasible relations between every pair of intervals or points.
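One of the two tasks, computing the feasible relations, can be illustrated for the point algebra with a naive path-consistency loop in Python; the relations are subsets of {<, =, >} and composition follows the standard table. The example network is invented, and van Beek's procedures are considerably more refined than this sketch.

```python
FULL = {'<', '=', '>'}
CONV = {'<': '>', '>': '<', '=': '='}
COMP = {('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): FULL,
        ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
        ('>', '<'): FULL,  ('>', '='): {'>'}, ('>', '>'): {'>'}}

def compose(r1, r2):
    return set().union(*(COMP[(a, b)] for a in r1 for b in r2)) if r1 and r2 else set()

def path_consistency(n, constraints):
    """constraints maps (i, j) to a subset of {'<','=','>'}; missing means no info."""
    R = {}
    for i in range(n):
        for j in range(n):
            if i != j:
                given = constraints.get((i, j), FULL)
                conv = {CONV[a] for a in constraints.get((j, i), FULL)}
                R[(i, j)] = given & conv
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    if len({i, j, k}) == 3:
                        tighter = R[(i, k)] & compose(R[(i, j)], R[(j, k)])
                        if tighter != R[(i, k)]:
                            R[(i, k)], changed = tighter, True
    return R

# From x < y and y <= z, the relation x < z is inferred.
print(path_consistency(3, {(0, 1): {'<'}, (1, 2): {'<', '='}})[(0, 2)])  # {'<'}
```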

266 citations


Proceedings Article
01 Jan 1992
TL;DR: In this paper, the problem of integrating Reiter's default logic into terminological representation systems is considered; it turns out that such an integration is less straightforward than expected, given that the terminological language is a decidable sublanguage of first-order logic.
Abstract: We consider the problem of integrating Reiter's default logic into terminological representation systems. It turns out that such an integration is less straightforward than we expected, considering the fact that the terminological language is a decidable sublanguage of first-order logic. Semantically, one has the unpleasant effect that the consequences of a terminological default theory may be rather unintuitive, and may even vary with the syntactic structure of equivalent concept expressions. This is due to the unsatisfactory treatment of open defaults via Skolemization in Reiter's semantics. On the algorithmic side, we show that this treatment may lead to an undecidable default consequence relation, even though our base language is decidable, and we have only finitely many (open) defaults. Because of these problems, we then consider a restricted semantics for open defaults in our terminological default theories: default rules are applied only to individuals that are explicitly present in the knowledge base. In this semantics it is possible to compute all extensions of a finite terminological default theory, which means that this type of default reasoning is decidable. We describe an algorithm for computing extensions and show how the inference procedures of terminological systems can be modified to give optimal support to this algorithm.

259 citations


Journal ArticleDOI
TL;DR: Several systems have been developed that adopt this approach: encode general knowledge in an expressive language, then dynamically construct a decision model for each particular situation or problem instance.
Abstract: In recent years there has been a growing interest among AI researchers in probabilistic and decision modelling, spurred by significant advances in representation and computation with network modelling formalisms. In applying these techniques to decision support tasks, fixed network models have proven to be inadequately expressive when a broad range of situations must be handled. Hence many researchers have sought to combine the strengths of flexible knowledge representation languages with the normative status and well-understood computational properties of decision-modelling formalisms and algorithms. One approach is to encode general knowledge in an expressive language, then dynamically construct a decision model for each particular situation or problem instance. We have developed several systems adopting this approach, which illustrate a variety of interesting techniques and design issues.

Journal ArticleDOI
TL;DR: Spatial query and spatial reasoning based on a 2D C-string representation are presented; a similarity measure is defined, and an algorithm for similarity retrieval of iconic images is proposed.

Proceedings Article
12 Jul 1992
TL;DR: This paper introduces a new operation for description logics: computing the "least common subsumer" of a pair of descriptions, which computes the largest set of commonalities between two descriptions.
Abstract: Description logics are a popular formalism for knowledge representation and reasoning. This paper introduces a new operation for description logics: computing the "least common subsumer" of a pair of descriptions. This operation computes the largest set of commonalities between two descriptions. After arguing for the usefulness of this operation, we analyze it by relating computation of the least common subsumer to the well-understood problem of testing subsumption; a close connection is shown in the restricted case of "structural subsumption". We also present a method for computing the least common subsumer of "attribute chain equalities", and analyze the tractability of computing the least common subsumer of a set of descriptions--an important operation in inductive learning.
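For the simplest structural case, where each description is a conjunction of atomic concepts represented as a set, subsumption reduces to a subset test and the least common subsumer is just the intersection of the conjunct sets. The sketch below illustrates only that degenerate case; the paper's methods for richer descriptions, such as attribute chain equalities, are not attempted here.

```python
# Descriptions as sets of atomic conjuncts. d1 is subsumed by d2 (d2 is more
# general) iff d2's conjuncts are a subset of d1's.

def subsumed_by(d1, d2):
    return d2 <= d1

def lcs(*descriptions):
    """Least common subsumer of conjunctions of atoms: their intersection."""
    return set.intersection(*descriptions)

d1 = {"person", "parent", "employed"}
d2 = {"person", "parent", "retired"}
common = lcs(d1, d2)
print(common)                                            # {'person', 'parent'} (order may vary)
print(subsumed_by(d1, common), subsumed_by(d2, common))  # True True
```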

Journal ArticleDOI
TL;DR: The scope of data modeling now extends far beyond what it was in the early days of file-oriented systems; many organizations are embarking on corporate data modeling as a part of the strategic planning activity.
Abstract: The focus of data modeling has subsequently shifted to modeling data as seen by the application and the user. Basic data abstraction concepts of classification, generalization, aggregation, and identification were applied in different combinations and different degrees to produce a plethora of "semantic" data models in the late 1970s and early 1980s. This article traces this evolution of data models and discusses the recent developments that have dominated the commercial practice of data modeling: the entity-relationship, the functional, and the object-oriented approaches. The article concludes with an overview of the current areas such as modeling of dynamic, active databases, and knowledge discovery from databases. Data modeling benefited immensely from developments in knowledge representation and terminological reasoning, and new models such as CANDIDE [10] are springing up as a result of the marriage between these two areas. Certain developments have dominated the commercial practice of data modeling (the word "data" is used in the singular throughout this article, in keeping with the convention in database literature): the entity-relationship [14] and the binary approach called NIAM [37] are two examples. The functional approach was exemplified by the DAPLEX model [34] and is having an impact on object models coupled with functions such as the Iris model, now available commercially from Hewlett-Packard in their Open ODB system. A variant of the ER model called IDEF/1X (see [25]) gained a good degree of following in some government establishments. Recently, a major effort for standardizing the representation and modeling of parts data and designs under the name Product Data Exchange using STEP (PDES) [32] is under way. STEP is the ISO standard for the Exchange of Product model data. This has brought about a renaissance of data modeling, which is being applied in diverse industries such as building and construction, electrical components, and architecture. Thus, the scope of data modeling now extends far beyond what it was in the early days of file-oriented systems; many organizations are embarking on corporate data modeling as a part of the strategic planning activity. Since this issue of Communications is devoted to different aspects of modeling that include object-oriented analysis and modeling as well as the knowledge representation area, we will not dwell heavily on it. Our focus will be on the modeling of data as applied to the design of database structures. We will highlight the current trends in object-oriented modeling of data as well as modeling of active …

01 Jan 1992
TL;DR: A unified framework is developed through an analysis of various types, aspects, and roles of knowledge relevant to knowledge-intensive problem solving and learning; the framework aims to provide an environment for discussing different approaches to such systems.
Abstract: The problem addressed in this research is that of developing a method which integrates problem solving with learning from experience within an extensive model of different knowledge types. A unified framework is developed through an analysis of various types, aspects and roles of knowledge relevant for the kind of systems described above. The framework contains a knowledge representation platform and a generic model of problem solving. It further specifies a general reasoning approach that combines reasoning within a deep model with reasoning from heuristic rules and past cases. Finally, it provides a model of learning methods that retain concrete problem solving cases in a way that makes them useful for solving similar problems later. The framework emphasizes knowledge-intensive case-based reasoning and learning as the major paradigm. A comprehensive and thorough knowledge model is the basis for generation of goal related explanations that support the reasoning and learning processes. Reasoning from heuristic rules or from 'scratch' within the deeper model is regarded partly as supporting methods to the case-based reasoning, partly as methods to 'fall back on' if the case-based method fails. The purpose of the framework is to provide an environment for discussion of different approaches to knowledge intensive problem solving and learning. Four systems focus on different methodological issues of knowledge intensive problem solving and learning. Each system represents interesting solutions to subproblems, but none of them provide a scope that is broad enough to represent the type of method requested for developing and maintaining complex applications in a practical, real world setting. CREEK specifies a structural and functional architecture based on an expressive, frame-based knowledge representation language, and an explicit model of control knowledge. It has a reasoning strategy which first attempts case-based reasoning, then rule-based reasoning, and, finally, model-based reasoning. The system interacts with the user during both problem solving and learning, e.g. by asking for confirmation or rejection of unexplained facts. The knowledge representation system, including an explicit model of basic representational constructs and basic inference methods, has been implemented. Otherwise, CREEK is an architectural specification--a system design. Its main characteristics are demonstrated by an example from the domain of diagnosis and treatment of oil well drilling fluid (mud). (Abstract shortened with permission of author.)
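At the heart of the case-based step is retrieving the stored case whose findings best match the new problem. A minimal Python sketch of weighted feature matching is given below; CREEK additionally uses its deeper knowledge model to generate explanations that support and justify such matches, which is not shown. The toy case base (loosely echoing the drilling-fluid domain) is invented.

```python
# Retrieve the most similar past case by weighted feature overlap.
case_base = [
    {"findings": {"viscosity": "high", "ph": "low", "temp": "normal"},
     "solution": "add thinner"},
    {"findings": {"viscosity": "low", "ph": "high", "temp": "high"},
     "solution": "add weighting material"},
]
weights = {"viscosity": 2.0, "ph": 1.0, "temp": 0.5}

def similarity(problem, case):
    return sum(w for f, w in weights.items()
               if problem.get(f) == case["findings"].get(f))

def retrieve(problem):
    return max(case_base, key=lambda c: similarity(problem, c))

new_problem = {"viscosity": "high", "ph": "low", "temp": "high"}
print(retrieve(new_problem)["solution"])  # add thinner
```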

Proceedings ArticleDOI
08 Mar 1992
TL;DR: The author proposes extended fuzzy cognitive maps (E-FCMs) to represent causal relationships more naturally, and computer simulation results indicate the effectiveness of the E-FCMs.
Abstract: Fuzzy cognitive maps (FCMs) have been proposed to represent causal reasoning by using numeric processing. They graphically represent uncertain causal reasoning. In the resonant states, there emerges a limit cycle or a hidden pattern, which is an FCM inference. However, there are some shortcomings concerning knowledge representation in conventional FCMs. The author proposes extended fuzzy cognitive maps (E-FCMs) to represent causal relationships more naturally. The features of the E-FCMs are nonlinear membership functions, conditional weights, and time delay weights. Computer simulation results indicate the effectiveness of the E-FCMs.
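For contrast with the proposed extensions, inference in a conventional FCM can be sketched in a few lines: concept activations are pushed through the causal weight matrix and thresholded until a fixed point or limit cycle emerges. The E-FCM features (nonlinear membership functions, conditional weights, time-delay weights) are not shown, and the weight matrix is invented.

```python
import numpy as np

W = np.array([[0.0, 0.8, 0.0],    # concept 0 promotes concept 1
              [0.0, 0.0, 0.6],    # concept 1 promotes concept 2
              [0.7, 0.0, 0.0]])   # concept 2 promotes concept 0

def step(state):
    return (state @ W > 0).astype(float)   # bivalent threshold

state = np.array([1.0, 0.0, 0.0])
for t in range(6):
    state = step(state)
    print(t, state)    # the activation circulates: a length-3 limit cycle
```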

Journal ArticleDOI
TL;DR: This paper reviews different ways of describing expert system reasoning, emphasizing the use of simple logic, set, and graph notations for making dimensional analyses of modeling languages and inference methods.

Journal ArticleDOI
01 Nov 1992
TL;DR: A representation and set of inference techniques for the dynamic construction of probabilistic and decision‐theoretic models expressed as networks are described and an interpretation of the network construction process is developed in terms of the implicit networks encoded in the database.
Abstract: We describe a representation and set of inference techniques for the dynamic construction of probabilistic and decision-theoretic models expressed as networks. In contrast to probabilistic reasoning schemes that rely on fixed models, we develop a representation that implicitly encodes a large number of possible model structures. Based on a particular query and state of information, the system constructs a customized belief net for that particular situation. We develop an interpretation of the network construction process in terms of the implicit networks encoded in the database. A companion method for constructing belief networks with decisions and values (decision networks) is also developed that uses sensitivity analysis to focus the model building process. Finally, we discuss some issues of control of model construction and describe examples of constructing networks.
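A minimal Python sketch of the query-driven construction idea, under invented fragments and with conditional probability tables and the subsequent inference omitted: the knowledge base stores network fragments as variable-to-parents entries, and only the variables that can influence the query are pulled into the situation-specific network.

```python
# Invented fragment base: variable -> list of parent variables.
FRAGMENTS = {
    "burglary": [], "earthquake": [], "traffic": ["earthquake"],
    "alarm": ["burglary", "earthquake"], "neighbor_calls": ["alarm"],
}

def construct_network(query):
    """Collect the ancestors of the query variable into a custom network."""
    nodes, frontier = {}, [query]
    while frontier:
        v = frontier.pop()
        if v not in nodes:
            nodes[v] = FRAGMENTS[v]
            frontier.extend(FRAGMENTS[v])
    return nodes

print(construct_network("neighbor_calls"))
# pulls in alarm, burglary, and earthquake, but not the irrelevant "traffic"
```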

Journal ArticleDOI
TL;DR: The authors introduce two reorganization primitives, composition and decomposition, which change the population of agents and the distribution of knowledge in an organization, and develop computational organizational self-design techniques for agents with architectures based on production systems.
Abstract: The authors introduce two reorganization primitives, composition and decomposition, which change the population of agents and the distribution of knowledge in an organization. To create these primitives, they formalize organizational knowledge, which represents knowledge of potential and necessary interactions among agents in an organization. The authors develop computational organizational self-design (OSD) techniques for agents with architectures based on production systems to take advantage of the well-understood body of theory and practice. They first extend parallel production systems, where global control exists, into distributed production systems, where problems are solved by a society of agents using distributed control. Then they introduce OSD into distributed production systems to provide adaptive work allocation. Simulation results demonstrate the effectiveness of the approach in adapting to changing environmental demands. The approach affects production system design and improves the ability to build production systems that can adapt to changing real-time constraints.

Journal ArticleDOI
TL;DR: A simple algorithm, called activation sharpening, is presented that allows a standard feed-forward backpropagation network to develop semi-distributed representations, thereby reducing the problem of catastrophic forgetting.
Abstract: A major problem with connectionist networks is that newly-learned information may completely destroy previously-learned information unless the network is continually retrained on the old information. This phenomenon, known as catastrophic forgetting, is unacceptable both for practical purposes and as a model of mind. This paper advances the claim that catastrophic forgetting is in part the result of the overlap of the system's distributed representations and can be reduced by reducing this overlap. A simple algorithm, called activation sharpening, is presented that allows a standard feed-forward backpropagation network to develop semi-distributed representations, thereby reducing the problem of catastrophic forgetting. Activation sharpening is discussed in light of recent work done by other researchers who have experimented with this and other techniques for reducing catastrophic forgetting.
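The gist of the algorithm can be sketched as follows, though this is an illustration rather than French's exact update, and the sharpening factor and number of winners are invented: after a forward pass, the k most active hidden units are nudged toward 1 and the rest toward 0, and the network is then trained toward these sharpened activations, yielding semi-distributed hidden representations with less overlap.

```python
import numpy as np

def sharpen(hidden, k=1, alpha=0.2):
    """Return a sharpened copy of hidden activations assumed to lie in [0, 1]."""
    sharpened = hidden.copy()
    winners = np.argsort(hidden)[-k:]          # indices of the k most active units
    mask = np.zeros_like(hidden, dtype=bool)
    mask[winners] = True
    sharpened[mask] += alpha * (1.0 - sharpened[mask])   # push winners toward 1
    sharpened[~mask] -= alpha * sharpened[~mask]         # push the rest toward 0
    return sharpened

h = np.array([0.55, 0.40, 0.70, 0.52])
print(sharpen(h, k=1))   # ~[0.44, 0.32, 0.76, 0.416]
```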

Journal ArticleDOI
TL;DR: The effectiveness of taxonomic reasoning techniques as an active support to knowledge acquisition and conceptual schema design is shown and an extended formalism and taxonomic inference algorithms for models giving prominence to attributes are given.
Abstract: Taxonomic reasoning is a typical task performed by many AI knowledge representation systems. In this paper, the effectiveness of taxonomic reasoning techniques as an active support to knowledge acquisition and conceptual schema design is shown. The idea developed is that by extending conceptual models with defined concepts and giving them rigorous logic semantics, it is possible to infer isa relationships between concepts on the basis of their descriptions. From a theoretical point of view, this approach makes it possible to give a formal definition for consistency and minimality of a conceptual schema. From a pragmatic point of view, it is possible to develop an active environment that allows automatic classification of a new concept in the right position of a given taxonomy, ensuring the consistency and minimality of a conceptual schema. A formalism that includes the data semantics of models giving prominence to type constructors (E/R, TAXIS, GALILEO) and algorithms for taxonomic inferences are presented; their soundness, completeness, and tractability properties are proved. Finally, an extended formalism and taxonomic inference algorithms for models giving prominence to attributes (FDM, IFO) are given.

Journal ArticleDOI
TL;DR: A formal model is presented in which one can capture various assumptions frequently made about systems, such as whether they are deterministic or nondeterministic, whether knowledge is cumulative, and whether or not the "environment" affects the state transitions of the processes.
Abstract: It has been argued that knowledge is a useful tool for designing and analyzing complex systems. The notion of knowledge that seems most relevant in this context is an external, information-based notion that can be shown to satisfy all the axioms of the modal logic S5. The properties of this notion of knowledge are examined, and it is shown that they depend crucially, and in subtle ways, on assumptions made about the system and about the language used for describing knowledge. A formal model is presented in which one can capture various assumptions frequently made about systems, such as whether they are deterministic or nondeterministic, whether knowledge is cumulative (which means that processes never "forget"), and whether or not the "environment" affects the state transitions of the processes. It is then shown that under some assumptions about the system and the language, certain states of knowledge are not attainable and the axioms of S5 do not completely characterize the properties of knowledge; extra axioms are needed. Complete axiomatizations for knowledge in a number of cases of interest are provided.
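The information-based notion itself is simple to state operationally, as in the small Python sketch below (the system and formula are invented): agent i knows a fact at a global state s exactly when the fact holds at every possible global state that i cannot distinguish from s, i.e. every state in which i has the same local state.

```python
# A global state is a tuple of local states, here for agents 0 and 1.
states = [("a", "x"), ("a", "y"), ("b", "x")]

def knows(agent, phi, s, states):
    """S5-style knowledge: phi holds in every state indistinguishable to the agent."""
    return all(phi(t) for t in states if t[agent] == s[agent])

phi = lambda s: s[1] != "y"     # "agent 1's local state is not y"

print(knows(0, phi, ("a", "x"), states))  # False: ("a", "y") looks the same to agent 0
print(knows(1, phi, ("a", "x"), states))  # True: both states with local state "x" satisfy phi
```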

Journal ArticleDOI
01 May 1992
TL;DR: The economic theory of rationality promises to equal mathematical logic in its importance for the mechanization of reasoning, as discussed by the authors, who survey the growing literature on how the basic notions of probability, utility, and rational choice, coupled with practical limitations on information and resources, influence the design and analysis of reasoning and representation systems.
Abstract: The economic theory of rationality promises to equal mathematical logic in its importance for the mechanization of reasoning. We survey the growing literature on how the basic notions of probability, utility, and rational choice, coupled with practical limitations on information and resources, influence the design and analysis of reasoning and representation systems.

Proceedings ArticleDOI
23 Mar 1992
TL;DR: An understanding system, designed for both speech and text input, has been implemented based on statistical representation of task specific semantic knowledge, which extracts words and their association to the conceptual structure of the task directly from the acoustic signal.
Abstract: An understanding system, designed for both speech and text input, has been implemented based on statistical representation of task specific semantic knowledge. The core of the system is the conceptual decoder, which extracts the words and their association to the conceptual structure of the task directly from the acoustic signal. The conceptual information, which is also used to clarify the English sentences, is encoded following a statistical paradigm. A template generator and an SQL (structured query language) translator process the sentence and produce SQL code for querying a relational database. Results of the system on the official DARPA test are given.

Journal ArticleDOI
TL;DR: It is shown that extending the concept language FL⁻ with unrestricted existential quantification makes subsumption NP-complete, the first proof of intractability for a concept language containing, whether explicitly or implicitly, no construct expressing disjunction.

Journal ArticleDOI
TL;DR: The components of an NLP system currently being developed at the Geneva Hospital and within the European Community's AIM programme are described: a Natural Language Analyser, a Conceptual Graphs Builder, a Data Base Storage component, a Query Processor, a Natural Language Generator, and a Translator.
Abstract: For medical records, the challenge for the present decade is Natural Language Processing (NLP) of texts, and the construction of an adequate Knowledge Representation. This article describes the components of an NLP system currently being developed at the Geneva Hospital and within the European Community's AIM programme. They are: a Natural Language Analyser, a Conceptual Graphs Builder, a Data Base Storage component, a Query Processor, a Natural Language Generator and, in addition, a Translator, a Diagnosis Encoding System and a Literature Indexing System. Taking advantage of a closed domain of knowledge, defined around a medical specialty, a method called proximity processing has been developed. In this situation no parser of the initial text is needed, and the system is based on semantic information about nearby words in sentences. The benefits are: easy implementation, portability between languages, robustness towards badly-formed sentences, and a sound representation using conceptual graphs.

Book ChapterDOI
Michael Villano
10 Jun 1992
TL;DR: The applicability of Knowledge Space Theory and Bayesian Belief Networks as probabilistic student models imbedded in an Intelligent Tutoring System is examined and student modeling issues such as knowledge representation, adaptive assessment, curriculum advancement, and student feedback are addressed.
Abstract: The applicability of Knowledge Space Theory (Falmagne and Doignon) and Bayesian Belief Networks (Pearl) as probabilistic student models imbedded in an Intelligent Tutoring System is examined. Student modeling issues such as knowledge representation, adaptive assessment, curriculum advancement, and student feedback are addressed. Several factors contribute to uncertainty in student modeling such as careless errors and lucky guesses, learning and forgetting, and unanticipated student response patterns. However, a probabilistic student model can represent uncertainty regarding the estimate of the student's knowledge and can be tested using empirical student data and established statistical techniques.

Book ChapterDOI
10 Jun 1992
TL;DR: The tutor's student modeling procedure, which employs an overlay of a set of several hundred programming rules, is evaluated; the resulting probability estimates are used to guide remediation and implement mastery learning.
Abstract: The ACT Programming Languages Tutor helps students as they write short computer programs. The tutor is constructed around a set of several hundred programming rules that allows the program to solve exercises step-by-step along with the student. This paper evaluates the tutor's student modeling procedure which employs an overlay of these programming rules. The tutor maintains an estimate of the probability that the student has learned each rule, based on the student's performance. These estimates are used to guide remediation and implement mastery learning. The predictive validity of these probability estimates for posttest and tutor performance is assessed.
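The rule-probability bookkeeping can be illustrated with a standard two-state knowledge-tracing update, a Bayes step on the observed action followed by a learning transition. This is a generic sketch with invented parameter values, not a claim about the tutor's exact formulas.

```python
def update(p_known, correct, p_slip=0.1, p_guess=0.2, p_learn=0.3):
    """Update P(rule learned) after one correct/incorrect application opportunity."""
    if correct:
        num = p_known * (1 - p_slip)
        den = num + (1 - p_known) * p_guess
    else:
        num = p_known * p_slip
        den = num + (1 - p_known) * (1 - p_guess)
    posterior = num / den
    return posterior + (1 - posterior) * p_learn   # chance the rule was learned this step

p = 0.4
for outcome in [True, True, False, True]:
    p = update(p, outcome)
    print(round(p, 3))
```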

Journal ArticleDOI
John F. Sowa
TL;DR: This paper surveys conceptual graphs, their development from several different traditions (including dataflow diagrams and Petri nets, which provide a computational mechanism for relating conceptual graphs to external procedures and databases), and the applications based on them.
Abstract: Conceptual graphs are a knowledge representation language designed as a synthesis of several different traditions. First are the semantic networks, which have been used in machine translation and computational linguistics for over thirty years. Second are the logic-based techniques of unification, lambda calculus, and Peirce's existential graphs. Third is the linguistic research based on Tesniere's dependency graphs and various forms of case grammar and thematic relations. Fourth are the dataflow diagrams and Petri nets, which provide a computational mechanism for relating conceptual graphs to external procedures and databases. The result is a highly expressive system of logic with a direct mapping to and from natural languages. The lambda calculus supports the definitions for a taxonomic system and provides a general mechanism for restructuring knowledge bases. With the definitional mechanisms, conceptual graphs can be used as an intermediate stage between natural languages and the rules and frames of expert systems—an important feature for knowledge acquisition and for help and explanations. During the past five years, conceptual graphs have been applied to almost every aspect of AI, ranging from expert systems and natural language to computer vision and neural networks. This paper surveys conceptual graphs, their development from each of these traditions, and the applications based on them.
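The bipartite concept/relation structure and its direct mapping to logic can be shown with a tiny Python sketch for the classic example "a cat is on a mat", i.e. [Cat]->(On)->[Mat]; the encoding is a simplification invented for this illustration, not Sowa's notation.

```python
concepts = {"x": "Cat", "y": "Mat"}        # concept nodes: variable -> type
relations = [("On", ["x", "y"])]           # relation nodes with argument lists

def to_formula(concepts, relations):
    """Map a simple conceptual graph to an existentially quantified conjunction."""
    quantifiers = " ".join(f"exists {v}" for v in concepts)
    atoms = [f"{typ}({v})" for v, typ in concepts.items()]
    atoms += [f"{rel}({', '.join(args)})" for rel, args in relations]
    return quantifiers + ". " + " & ".join(atoms)

print(to_formula(concepts, relations))
# exists x exists y. Cat(x) & Mat(y) & On(x, y)
```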

Journal ArticleDOI
TL;DR: The goal is an interactive model of composite system design incorporating deficiency-driven design, formal analysis, incremental design and rationalization, and design reuse that is used to reconstruct the design of two existing composite systems rationally.
Abstract: The design process that spans the gap between the requirements acquisition process and the implementation process, in which the basic architecture of a system is defined and functions are allocated to software, hardware, and human agents, is studied. The authors call this process composite system design. The goal is an interactive model of composite system design incorporating deficiency-driven design, formal analysis, incremental design and rationalization, and design reuse. They discuss knowledge representations and reasoning techniques that support these goals for the product (composite system) that they are designing, and for the design process. To evaluate the model, the authors report on its use to reconstruct the design of two existing composite systems rationally.