
Showing papers on "Knowledge representation and reasoning" published in 1995


Journal ArticleDOI
TL;DR: The role of ontology in supporting knowledge sharing activities is described, a set of criteria to guide the development of ontologies for these purposes is presented, and it is shown how these criteria are applied in case studies from the design of ontologies for engineering mathematics and bibliographic data.
Abstract: Recent work in Artificial Intelligence is exploring the use of formal ontologies as a way of specifying content-specific agreements for the sharing and reuse of knowledge among software entities. We take an engineering perspective on the development of such ontologies. Formal ontologies are viewed as designed artifacts, formulated for specific purposes and evaluated against objective design criteria. We describe the role of ontologies in supporting knowledge sharing activities, and then present a set of criteria to guide the development of ontologies for these purposes. We show how these criteria are applied in case studies from the design of ontologies for engineering mathematics and bibliographic data. Selected design decisions are discussed, and alternative representation choices are evaluated against the design criteria.

6,949 citations


Book
14 Aug 1995
TL;DR: Reasoning About Knowledge is the first book to provide a general discussion of approaches to reasoning about knowledge and its applications to distributed systems, artificial intelligence, and game theory.
Abstract: Contents: a model for knowledge and its properties; completeness and complexity (results and techniques); knowledge in distributed systems; actions and protocols; common knowledge, co-ordination and agreement; evolving knowledge; dealing with logical omniscience; knowledge and computation; common knowledge revisited.
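For readers wanting the formal core, the book's possible-worlds account of knowledge fits in a few lines. The following is the standard Kripke-structure semantics used in this literature (common notation, not a verbatim excerpt from the book):

```latex
% Kripke structure for n agents:
%   M = (S, \pi, \mathcal{K}_1, \ldots, \mathcal{K}_n)
% S: set of states; \pi(s): truth assignment to primitive propositions at s;
% \mathcal{K}_i \subseteq S \times S: agent i's possibility relation.
(M, s) \models K_i \varphi \iff (M, t) \models \varphi
  \ \text{for all } t \text{ such that } (s, t) \in \mathcal{K}_i
% "Everyone in group G knows" and common knowledge:
(M, s) \models E_G \varphi \iff (M, s) \models K_i \varphi \ \text{for all } i \in G
(M, s) \models C_G \varphi \iff (M, s) \models E_G^{\,k} \varphi \ \text{for all } k \geq 1
```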

4,318 citations


Journal ArticleDOI
TL;DR: The notion of the ontological level is introduced, intermediate between the epistemological and the conceptual levels discussed by Brachman, as a way to characterize a knowledge representation formalism taking into account the intended meaning of its primitives.
Abstract: The purpose of this paper is to defend the systematic introduction of formal ontological principles in the current practice of knowledge engineering, to explore the various relationships between ontology and knowledge representation, and to present the recent trends in this promising research area. According to the "modelling view" of knowledge acquisition proposed by Clancey, the modelling activity must establish a correspondence between a knowledge base and two separate subsystems: the agent's behaviour (i.e. the problem-solving expertise) and its own environment (the problem domain). Current knowledge modelling methodologies tend to focus on the former sub-system only, viewing domain knowledge as strongly dependent on the particular task at hand: in fact, AI researchers seem to have been much more interested in the nature of reasoning than in the nature of the real world. Recently, however, the potential value of task-independent knowledge bases (or "ontologies") suitable to large scale integration has been underlined in many ways. In this paper, we compare the dichotomy between reasoning and representation to the philosophical distinction between epistemology and ontology. We introduce the notion of the ontological level, intermediate between the epistemological and the conceptual levels discussed by Brachman, as a way to characterize a knowledge representation formalism taking into account the intended meaning of its primitives. We then discuss some formal ontological distinctions which may play an important role for such purpose.

1,140 citations


Book
01 Sep 1995

969 citations


01 Jan 1995
TL;DR: The Information Manifold is described, a system for browsing and querying of multiple networked information sources that demonstrates the viability of knowledge representation technology for retrieval and organization of information from disparate information sources.
Abstract: We describe the Information Manifold (IM), a system for browsing and querying of multiple networked information sources. As a first contribution, the system demonstrates the viability of knowledge representation technology for retrieval and organization of information from disparate (structured and unstructured) information sources. Such an organization allows the user to pose high-level queries that use data from multiple information sources. As a second contribution, we describe novel query processing algorithms used to combine information from multiple sources. In particular, our algorithms are guaranteed to find exactly the set of information sources relevant to a query, and to completely exploit knowledge about local closed world information (Etzioni et al. 1994).
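The guarantee about finding exactly the relevant sources is the concrete takeaway here. As a rough sketch of the idea (a deliberately simplified model of my own, not IM's actual description language or algorithms), sources advertise constrained relations and a query is matched against those constraints:

```python
# Hypothetical source descriptions: each source advertises a relation name
# plus constraints on attribute values. Under this simplified model, a
# source is relevant to a query iff the relation matches and no query
# constraint is disjoint from the source's advertised constraint.
sources = {
    "src_cs_papers":  {"relation": "paper", "year": range(1990, 1996),
                       "area": {"cs"}},
    "src_bio_papers": {"relation": "paper", "year": range(1980, 1996),
                       "area": {"bio"}},
    "src_authors":    {"relation": "author"},
}

def relevant_sources(query):
    hits = []
    for name, desc in sources.items():
        if desc["relation"] != query["relation"]:
            continue
        ok = True
        for attr, wanted in query.get("constraints", {}).items():
            advertised = desc.get(attr)
            if advertised is not None and not set(wanted) & set(advertised):
                ok = False  # constraints are disjoint: source cannot contribute
                break
        if ok:
            hits.append(name)
    return hits

print(relevant_sources({"relation": "paper",
                        "constraints": {"area": {"cs"}, "year": {1995}}}))
# -> ['src_cs_papers']
```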

337 citations


Journal ArticleDOI
TL;DR: This work indicates how one can achieve enhanced access to data and knowledge by using descriptions in languages for schema design and integration, queries, answers, updates, rules, and constraints.
Abstract: Description logics and reasoners, which are descendants of the KL-ONE language, have been studied in depth in artificial intelligence. After a brief introduction, we survey their application to the problems of information management, using the framework of an abstract information server equipped with several operations, each involving one or more languages. Specifically, we indicate how one can achieve enhanced access to data and knowledge by using descriptions in languages for schema design and integration, queries, answers, updates, rules, and constraints.

321 citations


Book
01 Mar 1995
TL;DR: This book extends the logic programming form of knowledge representation and method of inference to permit the inclusion of uncertainties such as probabilistic knowledge and fuzzy incompleteness, and describes applications to general areas of knowledge engineering, including expert and decision-support systems, evidential and case-based reasoning, fuzzy control, and databases.
Abstract: From the Publisher: Presents a theory of uncertainty, consistent with and combining the theories of probability and fuzzy sets. Extends the logic programming form of knowledge representation and method of inference to permit the inclusion of uncertainties such as probabilistic knowledge and fuzzy incompleteness. Describes the application to general areas of knowledge engineering including expert and decision-support systems, evidential and case-based reasoning, fuzzy control and databases. An accompanying disk for Macintosh and one for the IBM PC enables readers to implement the examples while following the text.

267 citations


Journal ArticleDOI
TL;DR: In this article, the problem of integrating Reiter's default logic into terminological representation systems is considered; such an integration turns out to be less straightforward than one might expect, given that the terminological language is a decidable sublanguage of first-order logic.
Abstract: We consider the problem of integrating Reiter's default logic into terminological representation systems. It turns out that such an integration is less straightforward than we expected, considering the fact that the terminological language is a decidable sublanguage of first-order logic. Semantically, one has the unpleasant effect that the consequences of a terminological default theory may be rather unintuitive, and may even vary with the syntactic structure of equivalent concept expressions. This is due to the unsatisfactory treatment of open defaults via Skolemization in Reiter's semantics. On the algorithmic side, we show that this treatment may lead to an undecidable default consequence relation, even though our base language is decidable, and we have only finitely many (open) defaults. Because of these problems, we then consider a restricted semantics for open defaults in our terminological default theories: default rules are applied only to individuals that are explicitly present in the knowledge base. In this semantics it is possible to compute all extensions of a finite terminological default theory, which means that this type of default reasoning is decidable. We describe an algorithm for computing extensions and show how the inference procedures of terminological systems can be modified to give optimal support to this algorithm.

258 citations


Journal ArticleDOI
Ron Sun
TL;DR: It is demonstrated that combining rules and similarities results in more robust reasoning models, and that many seemingly disparate patterns of commonsense reasoning are different manifestations of the same underlying process, which the integrated architecture captures to a large extent.

235 citations


Journal ArticleDOI
TL;DR: The main goal is to define a semantically well-founded logic for approximate reasoning, which is justifiable from the intuitive point of view, and to provide fast algorithms for dealing with it even when using expressive languages.

204 citations


Book
01 Jan 1995
TL;DR: The central premise of this book, that the development of LKBS should be centred on the elaboration of explicit models of law, is well demonstrated, and it is an extremely worthwhile read for anyone interested in the theoretical foundations of AI and law and knowledge representation in particular.
Abstract: Although the field of Artificial Intelligence and Law has matured considerably, there is still no comprehensive view of the field and its achievements, and no agenda or clear direction for research. Moreover, present approaches to the development of legal knowledge-based systems (LKBS) - such as the use of rule-based systems, case-based systems, or logics - have obtained somewhat limited theoretical and practical results. This book provides a critical overview of the field by describing present approaches and analysing their problems in detail. A new "modelling approach" to legal knowledge engineering is proposed to address these problems and provide an agenda for research and development. This approach applies recent developments in knowledge modelling to the law domain. The book's central premise, that the development of LKBS should be centred on the elaboration of explicit models of law, is well demonstrated, and it is an extremely worthwhile read for anyone interested in the theoretical foundations of AI and law and knowledge representation in particular.

Book ChapterDOI
David Heckerman
01 Jan 1995
TL;DR: This chapter discusses a knowledge representation, called a Bayesian network, that allows one to learn uncertain relationships in a domain by combining expert domain knowledge and statistical data.
Abstract: This chapter discusses a knowledge representation, called a Bayesian network, that allows one to learn uncertain relationships in a domain by combining expert domain knowledge and statistical data. A Bayesian network is a graphical representation of uncertain knowledge that most people find easy to construct directly from domain knowledge. In addition, the representation has formal probabilistic semantics, making it suitable for statistical manipulation. Over the past decade, the Bayesian network has become a popular representation for encoding uncertain expert knowledge in expert systems. More recently, researchers have developed methods for learning Bayesian networks from a combination of expert knowledge and data. The techniques that have been developed are new and still evolving, but they have been shown to be remarkably effective in some domains. Learning using Bayesian networks is similar to that using neural networks. The process employing Bayesian networks, however, has two important advantages: (1) one can easily encode expert knowledge in a Bayesian network, and use this knowledge to increase the efficiency and accuracy of learning; and (2) the nodes and arcs in learned Bayesian networks often correspond to recognizable distinctions and causal relationships.
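To make the representation concrete, here is a minimal sketch of a Bayesian network along the lines the chapter describes: nodes with parents and conditional probability tables (CPTs), the chain-rule joint distribution, and a brute-force posterior query. The network, variable names, and numbers are hypothetical illustrations, not taken from the chapter:

```python
from itertools import product

# Each node lists its parents and a CPT mapping parent-value tuples to
# P(node=True). All names and probabilities here are made up.
network = {
    "rain":      {"parents": [], "cpt": {(): 0.2}},
    "sprinkler": {"parents": ["rain"], "cpt": {(True,): 0.01, (False,): 0.4}},
    "wet_grass": {"parents": ["rain", "sprinkler"],
                  "cpt": {(True, True): 0.99, (True, False): 0.90,
                          (False, True): 0.90, (False, False): 0.0}},
}

def joint(assignment):
    """P(assignment) via the chain rule: product of P(node | parents)."""
    p = 1.0
    for node, spec in network.items():
        parent_vals = tuple(assignment[q] for q in spec["parents"])
        p_true = spec["cpt"][parent_vals]
        p *= p_true if assignment[node] else 1.0 - p_true
    return p

def posterior(target, evidence):
    """P(target=True | evidence) by brute-force enumeration."""
    names = list(network)
    num = den = 0.0
    for values in product([False, True], repeat=len(names)):
        a = dict(zip(names, values))
        if any(a[k] != v for k, v in evidence.items()):
            continue
        den += joint(a)
        if a[target]:
            num += joint(a)
    return num / den

print(posterior("rain", {"wet_grass": True}))  # belief in rain, given wet grass
```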

Journal ArticleDOI
TL;DR: An animation approach is described in which synthetic vision is used for navigation by a digital actor; it offers a universal way to pass the necessary information from the environment to an actor for path searching, obstacle avoidance, and internal knowledge representation with learning and forgetting characteristics.

Journal ArticleDOI
TL;DR: This paper surveys some of the ontological questions that arise in artificial intelligence, presents some answers that have been proposed by various philosophers, and applies the philosophical analysis to the clarification of some current issues in AI.
Abstract: Philosophers have spent 25 centuries debating ontological categories. Their insights are directly applicable to the analysis, design, and specification of the ontologies used in knowledge-based systems. This paper surveys some of the ontological questions that arise in artificial intelligence, some answers that have been proposed by various philosophers, and an application of the philosophical analysis to the clarification of some current issues in AI. Two philosophers who have developed the most complete systems of categories are Charles Sanders Peirce and Alfred North Whitehead. Their analyses suggest a basic structure of categories that can provide some guidelines for the design of AI systems.

Journal ArticleDOI
TL;DR: Applications such as databases, spreadsheets, semantic networks, expert systems, and multimedia/hypermedia construction tools can function as computer-based cognitive tools that act as intellectual partners with learners to expand and even amplify their thinking, thereby changing the role of learners in college classrooms to knowledge constructors rather than information reproducers.
Abstract: COGNITIVE TOOLS are computer-based applications that are normally used as productivity software. However, these applications may also function as knowledge representation formalisms that require learners to think critically when using them to represent content being studied or what they already know about a subject. Applications such as databases, spreadsheets, semantic networks, expert systems, and multimedia/hypermedia construction tools can function as computer-based cognitive tools that act as intellectual partners with learners to expand and even amplify their thinking, thereby changing the role of learners in college classrooms to knowledge constructors rather than information reproducers. Cognitive tools are examples of learning with technologies rather than from them.

Proceedings Article
20 Aug 1995
TL;DR: This paper explores the explanation of subsumption reasoning in Description Logics that are implemented using normalization methods, focusing on the perspective of knowledge engineers.
Abstract: This paper explores the explanation of subsumption reasoning in Description Logics that are implemented using normalization methods, focusing on the perspective of knowledge engineers. The notion of explanation is specified using a proof-theoretic framework for presenting the inferences supported in these systems. The problem of overly long explanations is addressed by decomposing them into smaller, independent steps, using the notions of "atomic description" and "atomic justification". Implementation aspects are explored by considering the design space and some desiderata for explanation modules. This approach has been implemented for the CLASSIC knowledge representation system.
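The normalization-based systems the paper targets (CLASSIC among them) decide subsumption structurally on normal forms, which is what makes decomposition into small explanation steps natural. Here is a minimal sketch of structural subsumption for an FL0-style fragment, my simplification rather than the paper's explanation machinery:

```python
# A concept in normal form: (atoms, roles), where atoms is a frozenset of
# primitive concept names and roles maps a role name to the normalized
# filler concept of its value restriction (ALL r C).
def subsumes(d, c):
    """True iff concept d subsumes concept c (d is more general):
    every atom of d appears in c, and every value restriction of d is
    matched by a value restriction of c whose filler is subsumed."""
    d_atoms, d_roles = d
    c_atoms, c_roles = c
    if not d_atoms <= c_atoms:
        return False
    return all(r in c_roles and subsumes(filler, c_roles[r])
               for r, filler in d_roles.items())

# D = (AND person (ALL child (AND person rich)))
# C = (AND person doctor (ALL child (AND person rich famous)))
D = (frozenset({"person"}), {"child": (frozenset({"person", "rich"}), {})})
C = (frozenset({"person", "doctor"}),
     {"child": (frozenset({"person", "rich", "famous"}), {})})
print(subsumes(D, C))  # True: C is more specific than D
```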

Proceedings Article
20 Aug 1995
TL;DR: A methodology for comparing knowledge representation formalisms in terms of their "representational succinctness," that is, their ability to express knowledge situations relatively efficiently, is developed.
Abstract: We develop a methodology for comparing knowledge representation formalisms in terms of their "representational succinctness," that is, their ability to express knowledge situations relatively efficiently. We use this framework for comparing many important formalisms for knowledge base representation: propositional logic, default logic, circumscription, and model preference defaults; and, at a lower level, Horn formulas, characteristic models, decision trees, disjunctive normal form, and conjunctive normal form. We also show that adding new variables improves the effective expressibility of certain knowledge representation formalisms.

Journal ArticleDOI
TL;DR: An extended version of propositional calculus is developed and demonstrated to be useful for nonmonotonic reasoning, for dealing with conflicting beliefs, and for coping with inconsistency generated by unreliable knowledge sources.

Book
01 Jan 1995
TL;DR: The Knowledge Acquisition and Representation Language (KARL) combines a description of a knowledge-based system at the conceptual level (a so-called model of expertise) with a description at a formal and executable level, allowing the precise and unique specification of the functionality of a knowledge-based system independent of any implementation details.
Abstract: The Knowledge Acquisition and Representation Language (KARL) combines a description of a knowledge-based system at the conceptual level (a so-called model of expertise) with a description at a formal and executable level. Thus, KARL allows the precise and unique specification of the functionality of a knowledge-based system independent of any implementation details. A KARL model of expertise contains the description of domain knowledge, inference knowledge, and procedural control knowledge. For capturing these different types of knowledge, KARL provides corresponding modeling primitives based on Frame Logic and Dynamic Logic. A declarative semantics for a complete KARL model of expertise is given by a combination of these two types of logic. In addition, an operational definition of this semantics, which relies on a fixpoint approach, is given. This operational semantics defines the basis for the implementation of the KARL interpreter, which includes appropriate algorithms for efficiently executing KARL specifications. This enables the evaluation of KARL specifications by means of testing.

Proceedings ArticleDOI
16 Jan 1995
TL;DR: Fuzzy set theory and fuzzy cognitive maps offer a suitable technique to allow symbolic reasoning in the FMEA instead of numerical methods, thus providing human-like interpretations of the system model under analysis, and they allow for the integration of multiple expert opinions.
Abstract: A failure mode and effects analysis (FMEA) seeks to determine how a system will behave in the event of a device failure. It involves the integration of several expert tasks to select components for analysis, determine failure modes, predict failure effects, propose corrective actions, etc. During an FMEA, numerical values are often not available or applicable, and qualitative thresholds and linguistic terms such as high, slightly high, low, etc., are usually more relevant to the design than numerical expressions. Fuzzy set theory and fuzzy cognitive maps provide a basis for automating much of the reasoning required to carry out an FMEA on a system. They offer a suitable technique to allow symbolic reasoning in the FMEA instead of numerical methods, thus providing human-like interpretations of the system model under analysis, and they allow for the integration of multiple expert opinions. This paper describes how fuzzy cognitive maps can be used to describe a system, its missions, failure modes, their causes and effects. The maps can then be evaluated using both numerical and graphical methods to determine the effects of a failure and the consistency of design decisions.
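As a rough illustration of the machinery involved (hypothetical concepts and weights, not taken from the paper), a fuzzy cognitive map can be evaluated by iterating state updates over signed causal links:

```python
import math

# Concepts are failure modes and effects; W[(i, j)] is the causal
# influence of concept i on concept j. All names and values are made up.
concepts = ["pump_failure", "low_pressure", "overheating", "mission_loss"]
W = {
    ("pump_failure", "low_pressure"): 0.9,
    ("low_pressure", "overheating"): 0.7,
    ("overheating", "mission_loss"): 0.8,
    ("pump_failure", "mission_loss"): 0.3,
}

def squash(x):
    """Logistic squashing keeps activations in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-5.0 * (x - 0.5)))

def run(state, steps=20):
    """Iterate the FCM update; the clamped failure mode drives the rest."""
    for _ in range(steps):
        new = {}
        for c in concepts:
            if c == "pump_failure":
                new[c] = state[c]  # clamp the initiating failure event
            else:
                incoming = sum(W.get((src, c), 0.0) * state[src]
                               for src in concepts)
                new[c] = squash(incoming)
        state = new
    return state

initial = {c: 0.0 for c in concepts}
initial["pump_failure"] = 1.0  # inject the failure
print(run(initial))  # high downstream activations indicate likely effects
```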

Journal ArticleDOI
TL;DR: The proposed algorithmic approach presents a viable option for efficiently traversing large-scale, multiple thesauri (knowledge networks) and can be adopted for automatic, multiple-thesauri consultation.
Abstract: This paper presents a framework for knowledge discovery and concept exploration. In order to enhance the concept exploration capability of knowledge-based systems and to alleviate the limitations of the manual browsing approach, we have developed two spreading activation-based algorithms for concept exploration in large, heterogeneous networks of concepts (e.g., multiple thesauri). One algorithm, which is based on the symbolic AI paradigm, performs a conventional branch-and-bound search on a semantic net representation to identify other highly relevant concepts (a serial, optimal search process). The second algorithm, which is based on the neural network approach, executes the Hopfield net parallel relaxation and convergence process to identify “convergent” concepts for some initial queries (a parallel, heuristic search process). Both algorithms can be adopted for automatic, multiple-thesauri consultation. We tested these two algorithms on a large text-based knowledge network of about 13,000 nodes (terms) and 80,000 directed links in the area of computing technologies. This knowledge network was created from two external thesauri and one automatically generated thesaurus. We conducted experiments to compare the behaviors and performances of the two algorithms with the hypertext-like browsing process. Our experiment revealed that manual browsing achieved higher term recall but lower term precision in comparison to the algorithmic systems. However, it was also a much more laborious and cognitively demanding process. In document retrieval, there were no statistically significant differences in document recall and precision between the algorithms and the manual browsing process. In light of the effort required by the manual browsing process, our proposed algorithmic approach presents a viable option for efficiently traversing large-scale, multiple thesauri (knowledge networks).
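The symbolic algorithm's flavor can be sketched as best-first spreading activation over weighted links, where a concept's score is the strength of its strongest path from the query terms. This is an illustrative reconstruction over a hypothetical mini-network, not the authors' branch-and-bound implementation or the Hopfield net variant:

```python
import heapq

# Hypothetical mini thesaurus: directed links weighted by relevance.
links = {
    "neural networks": [("machine learning", 0.8), ("hopfield nets", 0.7)],
    "machine learning": [("knowledge acquisition", 0.6)],
    "hopfield nets": [("optimization", 0.5)],
    "knowledge acquisition": [("expert systems", 0.7)],
}

def spread(seeds, top_k=5):
    """Best-first search: score(concept) = product of link weights on the
    strongest path from any seed (seeds themselves score 1.0)."""
    best = {s: 1.0 for s in seeds}
    heap = [(-1.0, s) for s in seeds]
    while heap:
        neg_score, concept = heapq.heappop(heap)
        if -neg_score < best.get(concept, 0.0):
            continue  # stale heap entry
        for neighbor, weight in links.get(concept, []):
            score = -neg_score * weight
            if score > best.get(neighbor, 0.0):
                best[neighbor] = score
                heapq.heappush(heap, (-score, neighbor))
    ranked = sorted(best.items(), key=lambda kv: -kv[1])
    return [(c, round(s, 3)) for c, s in ranked if c not in seeds][:top_k]

print(spread({"neural networks"}))
# -> [('machine learning', 0.8), ('hopfield nets', 0.7), ...]
```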

Journal ArticleDOI
TL;DR: A new conceptual clustering method is introduced which addresses the problem of clustering large amounts of structured objects; the conditions under which the method is applicable are discussed.
Abstract: An important structuring mechanism for knowledge bases is building an inheritance hierarchy of classes based on the content of their knowledge objects. This hierarchy facilitates group-related processing tasks such as answering set queries, discriminating between objects, finding similarities among objects, etc. Building this hierarchy is a difficult task for the knowledge engineer. Conceptual clustering may be used to automate or assist the engineer in the creation of such a classification structure. This article introduces a new conceptual clustering method which addresses the problem of clustering large amounts of structured objects. The conditions under which the method is applicable are discussed.

Journal ArticleDOI
Kenichi Yoshida, Hiroshi Motoda
TL;DR: A new concept-learning method is proposed that learns new concepts from inference patterns, rather than from the positive/negative examples that most conventional concept-learning methods use, and automatically generates multilevel representations from a given physical/single-level representation of a carry-chain circuit.

Journal ArticleDOI
01 Aug 1995
TL;DR: This paper considers, in detail, a particular argumentative structure in which each argument is defined as a classical inference together with the applied premises; a variety of definitions of acceptability are provided, the properties of these definitions are explored, and their inter-relationships are described.
Abstract: Classical logic has many appealing features for knowledge representation and reasoning. But unfortunately it is flawed when reasoning about inconsistent information, since anything follows from a classical inconsistency. This problem is addressed by introducing the notions of ‘argument’ and of ‘acceptability’ of an argument. These notions are used to introduce the concept of ‘argumentative structures’. Each definition of acceptability selects a subset of the set of arguments, and an argumentative structure is a subset of the power set of arguments. In this paper, we consider, in detail, a particular argumentative structure, where each argument is defined as a classical inference together with the applied premises. For such arguments, a variety of definitions of acceptability are provided, the properties of these definitions are explored, and their inter-relationship described. The definitions of acceptability induce a family of logics called argumentative logics which we explore. The relevance of this work is considered and put in a wider perspective.
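The paper's basic objects are easy to make concrete: an argument for a claim is a consistent set of premises that classically entails it, with minimal such sets of particular interest. The sketch below uses brute-force truth-table entailment over a small propositional encoding of my own; it illustrates the definitions, not the paper's proof theory or its acceptability classes:

```python
from itertools import product, combinations

# Formulas: a string is an atom; otherwise ('not', f), ('and', f, g),
# ('or', f, g), or ('imp', f, g).
def atoms(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms(g) for g in f[1:]))

def holds(f, v):
    if isinstance(f, str):
        return v[f]
    op = f[0]
    if op == "not":
        return not holds(f[1], v)
    if op == "and":
        return holds(f[1], v) and holds(f[2], v)
    if op == "or":
        return holds(f[1], v) or holds(f[2], v)
    return (not holds(f[1], v)) or holds(f[2], v)  # 'imp'

def valuations(formulas):
    names = sorted(set().union(*(atoms(f) for f in formulas)))
    for bits in product([False, True], repeat=len(names)):
        yield dict(zip(names, bits))

def entails(premises, claim):
    return all(holds(claim, v) for v in valuations(list(premises) + [claim])
               if all(holds(p, v) for p in premises))

def consistent(premises):
    return any(all(holds(p, v) for p in premises)
               for v in valuations(list(premises)))

def arguments(delta, claim):
    """Minimal consistent premise subsets of delta that entail the claim."""
    found = []
    for r in range(1, len(delta) + 1):
        for phi in combinations(delta, r):
            if any(set(f) <= set(phi) for f in found):
                continue  # a strict subset already suffices: not minimal
            if consistent(phi) and entails(phi, claim):
                found.append(phi)
    return found

# From an inconsistent base, arguments isolate consistent support for q.
delta = ["p", ("imp", "p", "q"), ("not", "p")]
print(arguments(delta, "q"))  # -> [('p', ('imp', 'p', 'q'))]
```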

Journal ArticleDOI
TL;DR: A novel unified approach for integrating explicit knowledge and learning by example in recurrent networks is proposed; explicit knowledge is injected into the network's connections using a technique based on linear programming, instead of learning from random initial weights.
Abstract: Proposes a novel unified approach for integrating explicit knowledge and learning by example in recurrent networks. The explicit knowledge is represented by automaton rules, which are directly injected into the connections of a network. This can be accomplished by using a technique based on linear programming, instead of learning from random initial weights. Learning is conceived as a refinement process and is mainly responsible for uncertain information management. We present preliminary results for problems of automatic speech recognition.
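The paper derives its weights with linear programming; as a much-simplified sketch of the same rule-injection idea, a second-order recurrent net with one-hot state and input units can be programmed by direct weight assignment (a related, standard construction, not the authors' method):

```python
import numpy as np

# Hypothetical 2-state automaton over {a, b}: delta[state][symbol] gives
# the next state (it tracks the parity of b's in the input string).
delta = {0: {"a": 0, "b": 1}, 1: {"a": 1, "b": 0}}
states, symbols = [0, 1], ["a", "b"]

# Second-order dynamics: s'_i = sigmoid(sum_jk W[i, j, k] * s_j * x_k).
# Injecting the rules: W[i, j, k] = +H if delta maps (state j, input k)
# to state i, else -H. Large H makes the transitions near-binary.
H = 8.0
W = np.full((len(states), len(states), len(symbols)), -H)
for j in states:
    for k, sym in enumerate(symbols):
        W[delta[j][sym], j, k] = +H

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def run(string):
    s = np.array([1.0, 0.0])  # start in state 0
    for sym in string:
        x = np.array([1.0 if sym == c else 0.0 for c in symbols])
        s = sigmoid(np.einsum("ijk,j,k->i", W, s, x))
    return s

print(run("abba"))  # unit 0 high: an even number of b's was seen
```

Learning, as the abstract says, would then refine these programmed weights rather than start from random ones.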

Journal ArticleDOI
01 May 1995
TL;DR: A knowledge-based approach for fuzzy information retrieval is proposed, where interval queries and weighted-interval queries are allowed for document retrieval, and knowledge is represented by a concept matrix.
Abstract: A knowledge-based approach for fuzzy information retrieval is proposed, where interval queries and weighted-interval queries are allowed for document retrieval. In this paper, knowledge is represented by a concept matrix, where the elements in a concept matrix represent relevant values between concepts. The implicit relevant values between concepts are inferred by the transitive closure of the concept matrix based on fuzzy logic. The proposed method is more flexible than previous methods due to the fact that it has the capability to deal with interval queries and weighted-interval queries.
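The key inference step, deriving implicit relevance values as the fuzzy transitive closure of the concept matrix, can be sketched with max-min composition. Concept names and values below are hypothetical:

```python
# C[i][j] is the relevance of concept i to concept j, in [0, 1].
concepts = ["database", "query", "retrieval"]
C = [
    [1.0, 0.8, 0.3],
    [0.8, 1.0, 0.7],
    [0.3, 0.7, 1.0],
]

def maxmin_compose(A, B):
    """Fuzzy composition: (A o B)[i][j] = max_k min(A[i][k], B[k][j])."""
    n = len(A)
    return [[max(min(A[i][k], B[k][j]) for k in range(n))
             for j in range(n)] for i in range(n)]

def transitive_closure(C):
    """Iterate R := R o C until fixpoint; the result holds the implicit
    relevance values between concepts."""
    R = C
    while True:
        R_next = maxmin_compose(R, C)
        if R_next == R:
            return R
        R = R_next

for row in transitive_closure(C):
    print(row)
# relevance(database, retrieval) rises from 0.3 to 0.7 via "query"
```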

Book
15 Jun 1995
TL;DR: This survey attempts to identify and describe some of the common threads that tie together work in reasoning about knowledge in such diverse fields as philosophy, economics, linguistics, artificial intelligence, and theoretical computer science.
Abstract: In this survey, I attempt to identify and describe some of the common threads that tie together work in reasoning about knowledge in such diverse fields as philosophy, economics, linguistics, artificial intelligence, and theoretical computer science, with particular emphasis on work of the past five years, particularly in computer science. It is a revised and updated version of a paper entitled "Reasoning about knowledge: a survey circa 1991", which appears in the

Journal ArticleDOI
TL;DR: The development environment, BEST (Blackboard based Expert Systems Toolkit), aims to provide the ability to produce large scale, evolvable, heterogeneous intelligent systems.
Abstract: The complexity and diversity of real world applications have forced researchers in the AI field to focus more on the integration of diverse knowledge representation and reasoning techniques for solving challenging, real world problems. Our development environment, BEST (Blackboard based Expert Systems Toolkit), aims to provide the ability to produce large scale, evolvable, heterogeneous intelligent systems. BEST incorporates the best of multiple programming paradigms in order to avoid restricting users to a single way of expressing either knowledge or data. It combines rule based programming, object oriented programming, logic programming, procedural programming and blackboard modelling in a single architecture for knowledge engineering, so that the user can tailor a style of programming to his application, using any or arbitrary combinations of methods to provide a complete solution. The deep integration of all these techniques yields a toolkit more effective, even for a specific single application, than any technique in isolation or any less fully integrated collection of techniques. Within the basic, knowledge based programming paradigm, BEST offers a multiparadigm language for representing complex knowledge, including incomplete and uncertain knowledge. Its problem solving facilities include truth maintenance, inheritance over arbitrary relations, temporal and hypothetical reasoning, opportunistic control, automatic partitioning and scheduling, and both blackboard and distributed problem solving paradigms.

Journal ArticleDOI
TL;DR: This paper presents the principles of the knowledge-based expert system for construction planning, explains the structure of its knowledge base, and discusses various implementation aspects and includes an example of the application.
Abstract: The knowledge-based expert system for construction planning is part of an automated building realization process. The system uses an object-oriented representation of the building and production rules, routines, and functions to manipulate objects and to generate the construction plan. The building is represented by its zones, functional systems, and works. The procedural knowledge employs rules for the generation of activities and their dependences. The algorithms are employed to allocate resources and to generate the construction schedule. The algorithms reflect the different objectives of managerial decision making, such as least cost, managerial efficiency, or fastest completion. This paper presents the principles of the system, explains the structure of its knowledge base, and discusses various implementation aspects. It also includes an example of the application.

Journal ArticleDOI
TL;DR: The discussion begins with the empirical model and aims at a computational model that is implementable without determining the concrete implementation tools (the design model according to KADS); a small simulation model of professional summarizing appears feasible.
Abstract: Four working steps taken from a comprehensive empirical model of expert abstracting are studied in order to prepare an explorative implementation of a simulation model. It aims at explaining the knowledge processing activities during professional summarizing. Following the case-based and holistic strategy of qualitative empirical research, we develop the main features of the simulation system by investigating in detail a small but central test case—four working steps where an expert abstractor discovers what the paper is about and drafts the topic sentence of the abstract. Following the KADS methodology of knowledge engineering, our discussion begins with the empirical model (a conceptual model in KADS terms) and aims at a computational model which is implementable without determining the concrete implementation tools (the design model according to KADS). The envisaged solution uses a blackboard system architecture with cooperating object-oriented agents representing cognitive strategies and a dynamic text representation which borrows its conceptual relations in particular from RST (Rhetorical Structure Theory). As a result of the discussion we feel that a small simulation model of professional summarizing is feasible.