
Showing papers on "Knowledge representation and reasoning published in 1998"


Book
01 Nov 1998

1,028 citations


Journal ArticleDOI
TL;DR: The Remote Agent is described: a specific autonomous agent architecture based on the principles of model-based programming, on-board deduction and search, and goal-directed closed-loop commanding, which takes a significant step toward enabling autonomous space exploration.

727 citations


Book
01 Jan 1998
TL;DR: It is important to focus on the common scientific principles and open problems arising from current tools, methodologies, and applications of ontology to provide a solid general foundation for this work.
Abstract: Research on ontology is becoming increasingly widespread in the computer science community. While this term has been rather confined to the philosophical sphere in the past, it is now gaining a specific role in areas such as Artificial Intelligence, Computational Linguistics, and Databases. Its importance has been recognized in fields as diverse as knowledge engineering, knowledge representation, qualitative modeling, language engineering, database design, information integration, object-oriented analysis, information retrieval and extraction, knowledge management and organization, and agent-based systems design. Current application areas are disparate, including enterprise integration, natural language translation, medicine, mechanical engineering, electronic commerce, geographic information systems, legal information systems, and biological information systems. Various workshops addressing the engineering aspects of ontology have been held in recent years. However, ontology, by its very nature, ought to be a unifying discipline. Insights in this field have potential impact on the whole area of information systems (taking this term in its broadest sense), as testified by the interest recently shown by international standards organizations. In order to provide a solid general foundation for this work, it is therefore important to focus on the common scientific principles and open problems arising from current tools, methodologies, and applications of ontology.

557 citations


Book ChapterDOI
04 Jul 1998
TL;DR: The current agent-oriented methodologies are introduced; the approaches they follow, the suitability of these approaches for agent modelling, and some conclusions drawn from the survey are discussed.
Abstract: This article introduces the current agent-oriented methodologies. It discusses what approaches have been followed (mainly extending existing object-oriented and knowledge engineering methodologies), the suitability of these approaches for agent modelling, and some conclusions drawn from the survey.

475 citations


Journal ArticleDOI
TL;DR: This paper points out the precise domain-specific knowledge required by each method, such as the explicit intentions of the guideline designer, and presents a machine-readable language, called Asbru, to represent and to annotate guidelines based on the task-specific ontology.
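
To make the idea of machine-readable guidelines concrete, here is a hypothetical sketch of a guideline annotated with the designer's explicit intentions and conditions. All names and values are illustrative assumptions; this is not Asbru's actual syntax.

```python
# Hypothetical guideline structure (illustrative only, not Asbru syntax).
guideline = {
    "name": "manage-hyperglycemia",                 # hypothetical plan name
    "intentions": {                                 # designer's explicit intentions
        "achieve": "blood glucose in target range within 24h",
        "avoid": "hypoglycemic episode",
    },
    "conditions": {
        "filter": "diagnosis == type-2-diabetes",   # must hold to apply the plan
        "abort": "blood glucose < 60 mg/dL",        # stop the plan immediately
        "complete": "blood glucose in 80..120 mg/dL",
    },
    "body": [                                       # sub-plans, in order
        {"do": "adjust-insulin-dose"},
        {"do": "monitor-glucose", "every": "4h"},
    ],
}
print(guideline["intentions"]["achieve"])
```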

473 citations


Journal ArticleDOI
TL;DR: It is shown that, in general, the reasoning problem for recursive CARIN-ALCNR knowledge bases is undecidable, and the constructors of ALCNR causing the undecidability are identified.

401 citations


Proceedings Article
01 Jul 1998
TL;DR: Technical design issues faced in the development of Open Knowledge Base Connectivity (OKBC) are discussed, the ways in which OKBC improves upon GFP are highlighted, and practical experiences in using it are reported.
Abstract: The technology for building large knowledge bases (KBs) is yet to witness a breakthrough so that a KB can be constructed by the assembly of prefabricated knowledge components. Knowledge components include both pieces of domain knowledge (for example, theories of economics or fault diagnosis) and KB tools (for example, editors and theorem provers). Most of the current KB development tools can only manipulate knowledge residing in the knowledge representation system (KRS) for which the tools were originally developed. Open Knowledge Base Connectivity (OKBC) is an application programming interface for accessing KRSs, and was developed to enable the construction of reusable KB tools. OKBC improves upon its predecessor, the Generic Frame Protocol (GFP), in several significant ways. OKBC can be used with a much larger range of systems because its knowledge model supports an assertional view of a KRS. OKBC provides an explicit treatment of entities that are not frames, and it has a much better way of controlling inference and specifying default values. OKBC can be used on practically any platform because it supports network transparency and has implementations for multiple programming languages. In this paper, we discuss technical design issues faced in the development of OKBC, highlight how OKBC improves upon GFP, and report on practical experiences in using it.
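
A minimal Pythonic rendering of the frame-protocol idea may help: a KB tool written against one abstract interface works with any knowledge representation system that implements it. The method names below are illustrative, not OKBC's actual operation names, and OKBC itself specifies bindings for several other languages.

```python
# Pythonic rendering of the frame-protocol idea (method names are
# illustrative, not OKBC's actual operation names).
from abc import ABC, abstractmethod

class FrameKB(ABC):
    """The uniform interface every KRS backend implements."""
    @abstractmethod
    def get_slot_values(self, frame, slot): ...
    @abstractmethod
    def put_slot_value(self, frame, slot, value): ...

class InMemoryKB(FrameKB):
    """One concrete backend; another might proxy a remote KRS."""
    def __init__(self):
        self._frames = {}                     # frame -> {slot: [values]}
    def get_slot_values(self, frame, slot):
        return self._frames.get(frame, {}).get(slot, [])
    def put_slot_value(self, frame, slot, value):
        self._frames.setdefault(frame, {}).setdefault(slot, []).append(value)

def print_slot(kb, frame, slot):
    """A 'reusable KB tool': it works against any FrameKB backend."""
    print(frame, slot, kb.get_slot_values(frame, slot))

kb = InMemoryKB()
kb.put_slot_value("Elephant", "color", "gray")
print_slot(kb, "Elephant", "color")           # Elephant color ['gray']
```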

354 citations


Journal ArticleDOI
TL;DR: PROforma is a knowledge representation language designed to support a new mode of medical knowledge publishing, based on an intuitive model of the processes of care and well-understood logical semantics.

337 citations


Proceedings Article
01 Jul 1998
TL;DR: This paper provides a language that cleanly integrates frame-based representation systems and Bayesian networks, and describes an implemented system that allows most of the main frame systems in existence today to annotate their knowledge bases with probabilistic information, and to use that information in answering probabilistic queries.
Abstract: Two of the most important threads of work in knowledge representation today are frame-based representation systems (FRS's) and Bayesian networks (BNs). FRS's provide an excellent representation for the organizational structure of large complex domains, but their applicability is limited because of their inability to deal with uncertainty and noise. BNs provide an intuitive and coherent probabilistic representation of our uncertainty, but are very limited in their ability to handle complex structured domains. In this paper, we provide a language that cleanly integrates these approaches, preserving the advantages of both. Our approach allows us to provide natural and compact definitions of probability models for a class, in a way that is local to the class frame. These models can be instantiated for any set of interconnected instances, resulting in a coherent probability distribution over the instance properties. Our language also allows us to represent important types of uncertainty that cannot be accommodated within the framework of traditional BNs: uncertainty over the set of entities present in our model, and uncertainty about the relationships between these entities. We provide an inference algorithm for our language via a reduction to inference in standard Bayesian networks. We describe an implemented system that allows most of the main frame systems in existence today to annotate their knowledge bases with probabilistic information, and to use that information in answering probabilistic queries.
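
A minimal sketch of the core idea, under our own toy names rather than the paper's language: a probability model attached to a class frame is instantiated once per instance, and the relational links between instances wire the unrolled Bayesian network together.

```python
# Toy class-level probability model, unrolled over linked instances.
from itertools import product

# Class-level CPTs. For simplicity there is a single professor, and a
# Student's success depends on the advisor's funding.
P_FUNDED = {True: 0.7, False: 0.3}
P_SUCCESS = {True: {True: 0.9, False: 0.1},    # advisor funded
             False: {True: 0.4, False: 0.6}}   # advisor not funded

students = {"s1": "prof1", "s2": "prof1"}      # instance -> advisor link

def joint(funded, success):
    """P(prof1.funded = funded and s.successful = success[s] for all s)."""
    p = P_FUNDED[funded]
    for s in students:
        p *= P_SUCCESS[funded][success[s]]
    return p

# Marginal P(s1.successful) by brute-force enumeration of the unrolled
# instance-level Bayesian network.
marg = sum(joint(f, dict(zip(students, vals)))
           for f in (True, False)
           for vals in product((True, False), repeat=len(students))
           if dict(zip(students, vals))["s1"])
print(marg)   # 0.7 * 0.9 + 0.3 * 0.4 = 0.75
```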

326 citations


Proceedings Article
02 Jun 1998
TL;DR: This work presents a novel approach to conceptual modeling for Information Integration, which allows for suitably modeling the global concepts of the application, the individual information sources, and the constraints among different sources.
Abstract: Information Integration is one of the core problems in distributed databases, cooperative information systems, and data warehousing, which are key areas in the software development industry. Two critical factors for the design and maintenance of applications requiring Information Integration are conceptual modeling of the domain, and reasoning support over the conceptual representation. We demonstrate that Knowledge Representation and Reasoning techniques can play an important role for both of these factors, by proposing a Description Logic based framework for Information Integration. We show that the development of successful Information Integration solutions requires not only resorting to very expressive Description Logics, but also significantly extending them. We present a novel approach to conceptual modeling for Information Integration, which allows for suitably modeling the global concepts of the application, the individual information sources, and the constraints among different sources. Moreover, we devise inference procedures for the fundamental reasoning services, namely relation and concept subsumption, and query containment. Finally, we present a methodological framework for Information Integration, which can be applied in several contexts, and highlights the role of reasoning services within the design process.
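
The flavor of the approach can be conveyed with a few illustrative axioms (ours, not the paper's exact formalism): global concepts constrain the domain, sources are described by inclusions over the global schema, and the reasoner derives containments between them.

```latex
% Illustrative axioms (ours, not the paper's exact formalism).
\begin{align*}
  \mathit{Customer} &\sqsubseteq \exists\mathit{name} \sqcap \exists\mathit{address}
      && \text{global schema}\\
  \mathit{Src}_1 &\sqsubseteq \mathit{Customer} \sqcap \exists\mathit{phone}
      && \text{source 1: customers with a phone}\\
  \mathit{Src}_2 &\sqsubseteq \mathit{Src}_1 \sqcap \mathit{Premium}
      && \text{source 2: premium customers of source 1}\\
  \mathit{Src}_2 &\sqsubseteq \mathit{Customer}
      && \text{derived: queries over Customer subsume } \mathit{Src}_2
\end{align*}
```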

288 citations


Book ChapterDOI
26 Mar 1998
TL;DR: A unifying Description Logic is used, which incorporates all the features needed for the logical reformulation of the data models used in the various contexts, and it is demonstrated that several popular data modeling formalisms can be expressed in terms of specific logics of the family.
Abstract: The article aims at establishing a logical approach to class-based data modeling. After a discussion on class-based formalisms for data modeling, we introduce a family of logics, called Description Logics, which stem from research on Knowledge Representation in Artificial Intelligence. The logics of this family are particularly well suited for specifying data classes and relationships among classes, and are equipped with both formal semantics and inference mechanisms. We demonstrate that several popular data modeling formalisms, including the Entity-Relationship Model, and the most common variants of object-oriented data models, can be expressed in terms of specific logics of the family. For this purpose we use a unifying Description Logic, which incorporates all the features needed for the logical reformulation of the data models used in the various contexts. We also discuss the problem of devising reasoning procedures for the unifying formalism, and show that they provide valuable supports for several important data modeling activities.
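
As a small example of the kind of translation the chapter describes, an ISA link, a typed relationship, and a cardinality constraint from an Entity-Relationship or object-oriented diagram become Description Logic inclusions (the fragment below is ours, in standard DL notation):

```latex
% A small fragment of the translation described above (example is ours).
\begin{align*}
  \mathit{Manager} &\sqsubseteq \mathit{Employee}
      && \text{ISA between classes}\\
  \mathit{Employee} &\sqsubseteq \forall\mathit{worksFor}.\mathit{Project}
      && \text{typing of the relationship}\\
  \mathit{Employee} &\sqsubseteq\; \geq 1\, \mathit{worksFor}
      && \text{cardinality: works on at least one project}
\end{align*}
```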

Journal ArticleDOI
01 May 1998
TL;DR: A characterization of information modeling techniques is offered which classifies them according to their ontologies, the type of application for which they are intended, the set of abstraction mechanisms they support, as well as the tools they provide for building, analyzing, and managing application models.
Abstract: Information modeling is concerned with the construction of computer-based symbol structures which capture the meaning of information and organize it in ways that make it understandable and useful to people. Given that information is becoming a ubiquitous, abundant and precious resource, its modeling is serving as a core technology for information systems engineering. We present a brief history of information modeling techniques in Computer Science and briefly survey such techniques developed within Knowledge Representation (Artificial Intelligence), Data Modeling (Databases), and Requirements Analysis (Software Engineering and Information Systems). We then offer a characterization of information modeling techniques which classifies them according to their ontologies, i.e., the type of application for which they are intended, the set of abstraction mechanisms (or structuring principles) they support, as well as the tools they provide for building, analyzing, and managing application models. The final component of the paper uses the proposed characterization to assess particular information modeling techniques and draw conclusions about the advances that have been achieved in the field.

Book ChapterDOI
01 Jan 1998
TL;DR: In Cognitive Psychology, the experimental study of expertise involves applying concepts and methods from a number of areas: problem-solving, learning, and ergonomics, to name just a few.
Abstract: In Cognitive Psychology, the experimental study of expertise involves applying concepts and methods from a number of areas: problem-solving, learning, and ergonomics, to name just a few. The study of expertise provides a focus for basic research on many phenomena of cognition, such as memory limitations and reasoning biases. It also provides a focus for discussion of issues in cognitive theory, such as those involving knowledge representation. The psychological study of expertise has been invigorated in recent years by the advent of expert systems, but studies of expertise can be found even in the earliest psychological research. Furthermore, a great deal of the research in the tradition of judgment and decision making can be regarded, in hindsight, as studies of expertise (e.g., linear decision models of the reasoning of economists). Clearly, the literature of psychological studies of expertise is vast.

Proceedings Article
01 Jul 1998
TL;DR: This work develops methods for mapping web sources into a simple, uniform representation that makes it efficient to integrate multiple sources, easy to maintain the resulting information agents, and easy to incorporate new sources as they become available.
Abstract: The Web is based on a browsing paradigm that makes it difficult to retrieve and integrate data from multiple sites. Today, the only way to do this is to build specialized applications, which are time-consuming to develop and difficult to maintain. We are addressing this problem by creating the technology and tools for rapidly constructing information agents that extract, query, and integrate data from web sources. Our approach is based on a simple, uniform representation that makes it efficient to integrate multiple sources. Instead of building specialized algorithms for handling web sources, we have developed methods for mapping web sources into this uniform representation. This approach builds on work from knowledge representation, machine learning and automated planning. The resulting system, called Ariadne, makes it fast and cheap to build new information agents that access existing web sources. Ariadne also makes it easy to maintain these agents and incorporate new sources as they become available.
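
The wrapper idea can be illustrated with a toy example (ours, not Ariadne's actual machinery): a semi-structured page is mapped into uniform records that an integration layer can then query like a relation.

```python
# Toy wrapper: map a semi-structured page into uniform records.
import re

PAGE = """
<li><b>Casablanca</b> (1942) - 8.5</li>
<li><b>Metropolis</b> (1927) - 8.3</li>
"""

ROW = re.compile(
    r"<b>(?P<title>[^<]+)</b>\s*\((?P<year>\d{4})\)\s*-\s*(?P<rating>[\d.]+)")

def extract(page):
    """Every match becomes a uniform record the integrator can query."""
    return [m.groupdict() for m in ROW.finditer(page)]

for record in extract(PAGE):
    print(record)  # {'title': 'Casablanca', 'year': '1942', 'rating': '8.5'}
```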

Book ChapterDOI
TL;DR: The Schematization Similarity Conjecture is proposed: to the extent that space is schematized similarly in language and cognition, language will be successful in conveying space.
Abstract: As Talmy has observed, language schematizes space; language provides a systematic framework to describe space, by selecting certain aspects of a referent scene while neglecting the others. Here, we consider the ways that space and the things in it are schematized in perception and cognition, as well as in language. We propose the Schematization Similarity Conjecture: to the extent that space is schematized similarly in language and cognition, language will be successful in conveying space. We look at the evidence in both language and perception literature to support this view. Finally, we analyze schematizations of routes conveyed in sketch maps or directions, finding parallels in the kind of information omitted and retained in both.


Journal ArticleDOI
TL;DR: The properties that any arbitration operator should satisfy are investigated in the style of Alchourron, Gardenfors, and Makinson, and actual operators for arbitration are proposed.
Abstract: Knowledge-based systems must be able to "intelligently" manage a large amount of information coming from different sources and at different moments in time. Intelligent systems must be able to cope with a changing world by adopting a "principled" strategy. Many formalisms have been put forward in the artificial intelligence (AI) and database (DB) literature to address this problem. Among them, belief revision is one of the most successful frameworks to deal with dynamically changing worlds. Formal properties of belief revision have been investigated by Alchourron, Gardenfors, and Makinson, who put forward a set of postulates stating the properties that a belief revision operator should satisfy. Among these properties, a basic assumption of revision is that the new piece of information is totally reliable and, therefore, must be in the revised knowledge base. Different principles must be applied when there are two different sources of information and each one has a different view of the situation, the two views contradicting each other. If we do not have any reason to consider any of the sources completely unreliable, the best we can do is to "merge" the two views in a new and consistent one, trying to preserve as much information as possible. We call this merging process arbitration. In this paper, we investigate the properties that any arbitration operator should satisfy. In the style of Alchourron, Gardenfors, and Makinson we propose a set of postulates, analyze their properties, and propose actual operators for arbitration.
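
For flavor, a few representative postulates in the AGM style, where A △ B denotes the arbitration of two consistent knowledge bases A and B; these are illustrative of the genre, not necessarily the paper's exact set.

```latex
% Representative AGM-style postulates (illustrative; see the paper for
% the exact set proposed there).
\begin{align*}
  &A \mathbin{\triangle} B \equiv B \mathbin{\triangle} A
      && \text{sources are treated symmetrically}\\
  &A \wedge B \not\models \bot \;\Rightarrow\;
     (A \mathbin{\triangle} B \equiv A \wedge B)
      && \text{agreement is kept in full}\\
  &A \mathbin{\triangle} B \models A \vee B
      && \text{the result stays within the two views}\\
  &A \mathbin{\triangle} B \not\models \bot
      && \text{consistency is preserved}
\end{align*}
```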

Journal ArticleDOI
Daniel Mailharro
TL;DR: The main contribution of the work is to provide an object-oriented model completely integrated in the CSP schema, with inheritance and classification mechanisms, and with specific arc consistency algorithms.
Abstract: One of the main difficulties with configuration problem solving lies in the representation of the domain knowledge because many different aspects, such as taxonomy, topology, constraints, resource balancing, component generation, etc., have to be captured in a single model. This model must be expressive, declarative, and structured enough to be easy to maintain and to be easily used by many different kinds of reasoning algorithms. This paper presents a new framework where a configuration problem is considered both as a classification problem and as a constraint satisfaction problem (CSP). Our approach deeply blends concepts from the CSP and object-oriented paradigms to adopt the strengths of both. We expose how we have integrated taxonomic reasoning in the constraint programming schema. We also introduce new constrained variables with nonfinite domains to deal with the fact that the set of components is previously unknown and is constructed during the search for a solution. Our work strongly focuses on the representation and the structuring of the domain knowledge, because the most common drawback of previous works is the difficulty of maintaining the knowledge base, which is due to a lack of structure and expressiveness of the knowledge representation model. The main contribution of our work is to provide an object-oriented model completely integrated in the CSP schema, with inheritance and classification mechanisms, and with specific arc consistency algorithms.
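
A toy rendering of the blend (our example): component types form a class taxonomy, and classification participates directly in the constraints. The paper's actual framework adds constrained variables with nonfinite domains and dedicated arc-consistency algorithms; the sketch below is plain generate-and-test.

```python
# Toy configuration: taxonomy + constraints, solved by generate-and-test.

class Storage:
    def __init__(self, name, price):
        self.name, self.price = name, price

class HDD(Storage): pass
class SSD(Storage): pass

CATALOG = [HDD("hdd-1tb", 50), SSD("ssd-512", 80), SSD("ssd-2tb", 200)]

def consistent(part, silent, budget):
    if silent and not isinstance(part, SSD):  # taxonomic constraint
        return False
    return part.price <= budget               # arithmetic constraint

def configure(silent, budget):
    """Enumerate catalog parts satisfying all constraints."""
    return [p.name for p in CATALOG if consistent(p, silent, budget)]

print(configure(silent=True, budget=100))     # ['ssd-512']
```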

Journal ArticleDOI
TL;DR: Computational methods for the rough analysis of databases are discussed; rough analysis is a relatively new mathematical tool for use in computer applications in circumstances characterized by vagueness and uncertainty.
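
The core of rough analysis is the pair of lower and upper approximations induced by indiscernibility on the chosen attributes; the definitions below are standard, while the toy table is ours.

```python
# Lower and upper approximations (standard rough-set definitions).
from collections import defaultdict

TABLE = {                       # object -> (headache, temperature)
    "p1": ("yes", "high"), "p2": ("yes", "high"),
    "p3": ("no", "high"),  "p4": ("no", "normal"),
}
FLU = {"p1", "p3"}              # target set: patients with flu

# Indiscernibility classes: objects with identical attribute values.
classes = defaultdict(set)
for obj, attrs in TABLE.items():
    classes[attrs].add(obj)

# Lower approximation: classes certainly inside the target set.
lower = {o for c in classes.values() if c <= FLU for o in c}
# Upper approximation: classes that possibly touch the target set.
upper = {o for c in classes.values() if c & FLU for o in c}

print(lower)          # {'p3'} -- certainly flu, given these attributes
print(upper - lower)  # {'p1', 'p2'} -- boundary region: possibly flu
```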

Journal ArticleDOI
TL;DR: The development of a knowledge representation model based on the SHARED object model reveals that certain aspects of artifact knowledge are essentially context-independent and that this representation can be a foundation for robust knowledge-based systems in design.
Abstract: We report on the development of a knowledge representation model, which is based on the SHARED object model reported in Shared Workspaces for Computer-Aided Collaborative Engineering (Wong, A. and Sriram, D., Technical Report, IESL 93-06, Intelligent Engineering Systems Laboratory, Department of Civil Engineering, MIT, March, 1993) and Research in Engineering Design (Wong, A. and Sriram, D., SHARED: An Information Model for Cooperative Product Development, 1993, Fall, 21-39). Our current model is implemented as a layered scheme that incorporates both an evolving artifact and its associated design process. To represent artifacts as they evolve, we define objects recursively without a pre-defined granularity on this recursive decomposition. This eliminates the need for translations between levels of abstraction in the design process. The SHARED model extends traditional OOP in three ways: 1. by allowing explicit relationship classes with inheritance hierarchies; 2. by permitting constraints to be associated with objects and relationships; and 3. by comparing 'similar' objects at three different levels (form, function and behavior). Five primitive objects define the design process: goal, plan, specification, decision and context. Goal objects achieve function, introduce constraints, introduce new artifacts or modify existing ones, and create subgoals. Plan objects order goals and link a product hierarchy to a process hierarchy. Specification objects define user inputs as constraints. Decision objects relate goals to user decisions and context objects describe the design context. Operators that are applied to design objects collectively form a representation of the design process for a given context. The representation is robust enough to effectively model four design paradigms [described in Journal of CAD (Gorti, S. and Sriram, R. D., Symbol to Form Mapping: a Framework for Conceptual Design, 1996, 28 (11), 853–870)]: top-down decomposition, step-wise refinement, bottom-up composition and constraint propagation. To demonstrate this, we represent the designs of two TV remote controllers in the SHARED architecture. The example reveals that certain aspects of artifact knowledge are essentially context-independent and that this representation can be a foundation for robust knowledge-based systems in design.
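
A condensed sketch of such a layered scheme (names ours, and heavily abridged: only two of the five process primitives are shown): artifacts decompose recursively without fixed granularity, and process objects such as goals and plans link the product hierarchy to the process hierarchy.

```python
# Condensed sketch (names ours; specification, decision, context omitted).
from dataclasses import dataclass, field

@dataclass
class Artifact:
    """Recursively decomposed, with no pre-defined granularity."""
    name: str
    parts: list["Artifact"] = field(default_factory=list)
    constraints: list[str] = field(default_factory=list)

@dataclass
class Goal:
    """Achieves function; may introduce constraints, artifacts, subgoals."""
    intent: str
    subgoals: list["Goal"] = field(default_factory=list)

@dataclass
class Plan:
    """Orders goals, linking the product hierarchy to the process one."""
    steps: list[Goal]

remote = Artifact("tv-remote",
                  parts=[Artifact("keypad"), Artifact("ir-led")],
                  constraints=["fits in one hand"])
process = Plan(steps=[Goal("emit IR codes"), Goal("minimize part count")])
print(remote.name, [p.name for p in remote.parts], len(process.steps))
```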

Journal ArticleDOI
TL;DR: This paper provides a guide and tutorial to type 2 fuzzy sets, which allow for linguistic grades of membership, thus assisting in knowledge representation, and which offer improvements on inferencing with type 1 sets.
Abstract: This paper provides a guide and tutorial to type 2 fuzzy sets. Type 2 fuzzy sets allow for linguistic grades of membership, thus assisting in knowledge representation. They also offer improvements on inferencing with type 1 sets. The various approaches to knowledge representation and inferencing are discussed, with worked examples, and some of the applications of type 2 sets are reported.
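
In the common interval-valued special case, the grade of membership at a point is itself an interval between a lower and an upper type-1 membership function (the footprint of uncertainty). The construction below is standard; the triangular shapes and numbers are our choice.

```python
# Interval type-2 membership: the grade at x is an interval, not a number.

def tri(x, a, b, c):
    """Ordinary (type-1) triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def type2_tall(x):
    """Membership of height x (cm) in 'tall' as an interval: the
    footprint of uncertainty between lower and upper type-1 functions."""
    lower = tri(x, 170.0, 185.0, 200.0)
    upper = tri(x, 160.0, 185.0, 210.0)
    return (lower, upper)

print(type2_tall(180.0))   # (0.666..., 0.8): 'roughly 0.7, uncertainly'
```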

Journal ArticleDOI
TL;DR: A design method based on constructing a genetic/evolutionary design model, an idea borrowed from natural genetics, is described, and a schema concept is introduced for the representation of design knowledge in the model.

Journal ArticleDOI
TL;DR: In this article, the authors describe the multimodal presentation system WIP, which allows the generation of alternate presentations of the same content taking into account various contextual factors, and discuss how the plan-based approach to presentation design can be exploited so that graphics generation influences the production of text.

Journal ArticleDOI
TL;DR: It is shown that the epistemic operator formalizes procedural rules, as provided in many knowledge representation systems, and enables sophisticated query formulation, including various forms of closed-world reasoning.

Book ChapterDOI
23 Sep 1998
TL;DR: This paper has a twofold goal: showing that it is indeed possible to define objective (rather than subjective) measures of discovered rule surprisingness, and proposing new ideas and methods for defining objectiveRule surprisingness measures.
Abstract: Most of the literature argues that surprisingness is an inherently subjective aspect of the discovered knowledge, which cannot be measured in objective terms. This paper departs from this view, and it has a twofold goal: (1) showing that it is indeed possible to define objective (rather than subjective) measures of discovered rule surprisingness; (2) proposing new ideas and methods for defining objective rule surprisingness measures.
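
One simple example of an objective, data-driven surprisingness signal is a rule's deviation from statistical independence, e.g. lift; this is illustrative of the genre rather than one of the paper's proposed measures.

```python
# Lift: how far a rule A -> B deviates from independence (1.0 = none).

def lift(n, n_a, n_b, n_ab):
    """Lift from contingency counts over n records."""
    expected = (n_a / n) * (n_b / n)   # P(A)P(B) under independence
    return (n_ab / n) / expected

# 1000 records: A holds in 100, B in 200, A and B together in 90.
print(lift(1000, 100, 200, 90))        # 4.5 -- far from 1, so surprising
```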

Book ChapterDOI
01 Jun 1998
TL;DR: A process model of CBR and the knowledge used during adaptation, organized according to the different knowledge containers, is introduced, and the current models of adaptation are described and illustrated in an example domain.
Abstract: This paper presents a survey of different adaptation techniques and the knowledge used during adaptation. A process model of CBR and the knowledge used, organized according to the different knowledge containers, is introduced. The current models of adaptation are described and illustrated in an example domain.
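
A toy substitution-style adaptation step (our example, in the spirit of the simpler techniques surveyed): retrieve the nearest case, then adjust its solution for the difference between the retrieved and the new problem.

```python
# Retrieve-and-adapt: nearest case plus a simple numeric adaptation rule.

CASES = [  # (area_m2, price) -- previously solved pricing cases
    (50, 100_000),
    (80, 150_000),
    (120, 210_000),
]

def solve(area):
    # Retrieve: nearest case by the single feature.
    base_area, base_price = min(CASES, key=lambda c: abs(c[0] - area))
    # Adapt: scale the retrieved price by the area ratio.
    return base_price * (area / base_area)

print(solve(90))   # nearest case is (80, 150000) -> 168750.0
```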

Journal ArticleDOI
01 Oct 1998
TL;DR: The paper describes the MIKE (Model-based and Incremental Knowledge Engineering) approach for developing knowledge-based systems, which integrates semiformal and formal specification techniques together with prototyping into a coherent framework.
Abstract: The paper describes the MIKE (Model-based and Incremental Knowledge Engineering) approach for developing knowledge-based systems. MIKE integrates semiformal and formal specification techniques together with prototyping into a coherent framework. All activities in the building process of a knowledge-based system are embedded in a cyclic process model. For the semiformal representation we use a hypermedia-based formalism which serves as a communication basis between expert and knowledge engineer during knowledge acquisition. The semiformal knowledge representation is also the basis for formalization, resulting in a formal and executable model specified in the Knowledge Acquisition and Representation Language (KARL). Since KARL is executable, the model of expertise can be developed and validated by prototyping. A smooth transition from a semiformal to a formal specification and further on to design is achieved because all the description techniques rely on the same conceptual model to describe the functional and nonfunctional aspects of the system. Thus, the system is thoroughly documented at different description levels, each of which focuses on a distinct aspect of the entire development effort. Traceability of requirements is supported by linking the different models to each other.

Book
01 Mar 1998
TL;DR: This book covers the building blocks of cognitive development in infancy, including knowledge representation and reasoning; the development of memory, from infantile amnesia to strategies for remembering and metamemory; and logical reasoning in childhood, including Piaget's theory of logical development.
Abstract: Part 1, Cognition in infancy: the building blocks of cognitive development; knowledge representation, reasoning, problem solving and learning; conceptual development; the development of causal reasoning. Part 2, The development of memory: infantile amnesia, symbolic representation and different memory systems; strategies for remembering, metamemory and cognitive development; logical reasoning in childhood; Piaget's theory of logical development.


Book ChapterDOI
TL;DR: The role of case-based reasoning presented in this paper is the collection of evidence for evidence-based medical practice; case-based reasoning enhances the system by conferring an ability to learn from experience, and thus to improve results over time.
Abstract: This paper presents the CARE-PARTNER system. Functionally, it offers, via the WWW, knowledge-support assistance to clinicians responsible for the long-term follow-up of stem-cell post-transplant patient care. CARE-PARTNER aims at implementing the concept of evidence-based medical practice, which recommends the practice of medicine based on proven and validated knowledge. From an artificial intelligence viewpoint, it proposes a multimodal reasoning framework in which case-based reasoning, rule-based reasoning and information retrieval cooperate to solve problems. The role of case-based reasoning, presented in this paper, is the collection of evidence for evidence-based medical practice. Case-based reasoning makes it possible to refine and complete the knowledge of the system, and enhances the system by conferring an ability to learn from experience, and thus to improve results over time.