
Showing papers on "Knowledge representation and reasoning" published in 2005


01 Jan 2005
TL;DR: It is claimed that any manageable formalism for natural-language temporal descriptions will have to embody such an ontology, as will any usable temporal database for knowledge about events which is to be interrogated using natural language.
Abstract: A semantics of temporal categories in language and a theory of their use in defining the temporal relations between events both require a more complex structure on the domain underlying the meaning representations than is commonly assumed. This paper proposes an ontology based on such notions as causation and consequence, rather than on purely temporal primitives. A central notion in the ontology is that of an elementary event-complex called a "nucleus." A nucleus can be thought of as an association of a goal event, or "culmination," with a "preparatory process" by which it is accomplished, and a "consequent state," which ensues. Natural-language categories like aspects, futurates, adverbials, and when-clauses are argued to change the temporal/aspectual category of propositions under the control of such a nucleic knowledge representation structure. The same concept of a nucleus plays a central role in a theory of temporal reference, and of the semantics of tense, which we follow McCawley, Partee, and Isard in regarding as an anaphoric category. We claim that any manageable formalism for natural-language temporal descriptions will have to embody such an ontology, as will any usable temporal database for knowledge about events which is to be interrogated using natural language.
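To make the nucleus structure described above concrete, here is a minimal sketch (not the authors' formalism) of the tripartite association of a preparatory process, a culmination, and a consequent state as a plain data type; the class names and the Everest example are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """A minimal stand-in for an event description."""
    description: str

@dataclass
class Nucleus:
    """Toy encoding of the nucleus: a culmination linked to the preparatory
    process that leads to it and the consequent state that ensues
    (terminology from the abstract; the structure itself is a sketch)."""
    preparatory_process: Event
    culmination: Event
    consequent_state: Event

# Example: "climb Mount Everest" as an accomplishment-style nucleus.
climb = Nucleus(
    preparatory_process=Event("climbing toward the summit"),
    culmination=Event("reaching the summit"),
    consequent_state=Event("having climbed Mount Everest"),
)
print(climb.culmination.description)
```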

853 citations


Journal ArticleDOI
TL;DR: This work considers UML class diagrams, which are one of the most important components of UML, and addresses the problem of reasoning on such diagrams, using several results developed in the field of Knowledge Representation and Reasoning regarding Description Logics (DLs), a family of logics that admit decidable reasoning procedures.

591 citations


Journal ArticleDOI
TL;DR: This presentation uses OWL to represent the mutual relationships of scientific concepts and their ancillary space, time, and environmental descriptors, with application to locating NASA Earth science data.

447 citations


Journal ArticleDOI
TL;DR: Structural semantic interconnections (SSI) is presented, which creates structural specifications of the possible senses for each word in a context and selects the best hypothesis according to a grammar G, describing relations between sense specifications.
Abstract: Word sense disambiguation (WSD) is traditionally considered an AI-hard problem. A break-through in this field would have a significant impact on many relevant Web-based applications, such as Web information retrieval, improved access to Web services, information extraction, etc. Early approaches to WSD, based on knowledge representation techniques, have been replaced in the past few years by more robust machine learning and statistical techniques. The results of recent comparative evaluations of WSD systems, however, show that these methods have inherent limitations. On the other hand, the increasing availability of large-scale, rich lexical knowledge resources seems to provide new challenges to knowledge-based approaches. In this paper, we present a method, called structural semantic interconnections (SSI), which creates structural specifications of the possible senses for each word in a context and selects the best hypothesis according to a grammar G, describing relations between sense specifications. Sense specifications are created from several available lexical resources that we integrated in part manually, in part with the help of automatic procedures. The SSI algorithm has been applied to different semantic disambiguation problems, like automatic ontology population, disambiguation of sentences in generic texts, disambiguation of words in glossary definitions. Evaluation experiments have been performed on specific knowledge domains (e.g., tourism, computer networks, enterprise interoperability), as well as on standard disambiguation test sets.
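The following toy sketch conveys only the general principle behind SSI, namely scoring each candidate sense of a word by the semantic links it shares with the candidate senses of the other words in the context. The mini lexicon is invented, and the actual SSI algorithm instead applies a grammar over interconnection patterns drawn from rich lexical resources.

```python
# Greedy toy approximation of structural sense scoring; not the SSI algorithm.
# Hypothetical mini "lexicon": word -> {sense_id: set of related concepts}.
LEXICON = {
    "bank": {
        "bank#finance": {"money", "loan", "account"},
        "bank#river":   {"river", "water", "shore"},
    },
    "deposit": {
        "deposit#finance": {"money", "account", "bank"},
        "deposit#geology": {"mineral", "sediment", "river"},
    },
}

def disambiguate(context_words):
    chosen = {}
    for word in context_words:
        best_sense, best_score = None, -1
        for sense, links in LEXICON[word].items():
            # interconnection score: overlap with the other words' sense links
            score = sum(
                len(links & other_links)
                for other in context_words if other != word
                for other_links in LEXICON[other].values()
            )
            if score > best_score:
                best_sense, best_score = sense, score
        chosen[word] = best_sense
    return chosen

print(disambiguate(["bank", "deposit"]))
# {'bank': 'bank#finance', 'deposit': 'deposit#finance'}
```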

369 citations


Journal ArticleDOI
TL;DR: This work proposes Appleseed, a novel proposal for local group trust computation that borrows many ideas from spreading activation models in psychology and relates their concepts to trust evaluation in an intuitive fashion.
Abstract: Semantic Web endeavors have mainly focused on issues pertaining to knowledge representation and ontology design. However, besides understanding information metadata stated by subjects, knowing about their credibility becomes equally crucial. Hence, trust and trust metrics, conceived as computational means to evaluate trust relationships between individuals, come into play. Our major contribution to Semantic Web trust management through this work is twofold. First, we introduce a classification scheme for trust metrics along various axes and discuss advantages and drawbacks of existing approaches for Semantic Web scenarios. Hereby, we devise an advocacy for local group trust metrics, guiding us to the second part which presents Appleseed, our novel proposal for local group trust computation. Compelling in its simplicity, Appleseed borrows many ideas from spreading activation models in psychology and relates their concepts to trust evaluation in an intuitive fashion. Moreover, we provide extensions for the Appleseed nucleus that make our trust metric handle distrust statements.
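As a flavor of local group trust computation, the sketch below propagates decaying "energy" from a trust source along weighted edges. It is a bare-bones spreading-activation illustration, not the Appleseed metric itself, which additionally uses a spreading factor, edge normalization, backward propagation to the source, convergence thresholds, and the distrust extensions mentioned in the abstract; the example graph and parameters are made up.

```python
def propagate_trust(graph, source, energy=1.0, decay=0.85, rounds=6):
    """graph: node -> {neighbor: trust edge weight in [0, 1]}."""
    trust = {}                    # accumulated trust per node
    frontier = {source: energy}   # energy injected in the current round
    for _ in range(rounds):
        next_frontier = {}
        for node, amount in frontier.items():
            edges = graph.get(node, {})
            total = sum(edges.values())
            if total == 0:
                continue
            for neighbor, weight in edges.items():
                # pass on a decayed share proportional to the edge weight
                share = decay * amount * weight / total
                trust[neighbor] = trust.get(neighbor, 0.0) + share
                next_frontier[neighbor] = next_frontier.get(neighbor, 0.0) + share
        frontier = next_frontier
    trust.pop(source, None)       # rank only nodes other than the source
    return sorted(trust.items(), key=lambda kv: kv[1], reverse=True)

web_of_trust = {
    "alice": {"bob": 0.9, "carol": 0.4},
    "bob":   {"dave": 0.8},
    "carol": {"dave": 0.5, "eve": 0.7},
}
print(propagate_trust(web_of_trust, "alice"))
```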

368 citations


Proceedings ArticleDOI
05 Dec 2005
TL;DR: A multi-hierarchical approach is presented to enable a mobile robot to acquire semantic information from its sensors, and to use it for navigation tasks, and the link between spatial and semantic information is established via anchoring.
Abstract: The success of mobile robots, and particularly of those interfacing with humans in daily environments (e.g., assistant robots), relies on the ability to manipulate information beyond simple spatial relations. We are interested in semantic information, which gives meaning to spatial information like images or geometric maps. We present a multi-hierarchical approach to enable a mobile robot to acquire semantic information from its sensors, and to use it for navigation tasks. In our approach, the link between spatial and semantic information is established via anchoring. We show experiments on a real mobile robot that demonstrate its ability to use and infer new semantic information from its environment, improving its operation.
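A schematic sketch of the idea of anchoring spatial areas to semantic categories and exploiting the link for navigation is given below. All rules, labels, and coordinates are invented for illustration; the paper's multi-hierarchical model and its anchoring mechanism are considerably richer.

```python
# Spatial layer: area id -> (navigation waypoint, objects perceived there)
spatial_map = {
    "area1": {"waypoint": (2.0, 3.5), "objects": {"bed", "wardrobe"}},
    "area2": {"waypoint": (7.1, 1.2), "objects": {"oven", "sink"}},
}

# Semantic layer: a category implied by a characteristic object (toy rules)
category_rules = {"bed": "bedroom", "oven": "kitchen"}

def anchor_categories(spatial_map, rules):
    """Infer a semantic category for each area from the objects seen in it."""
    anchors = {}
    for area, info in spatial_map.items():
        for obj in info["objects"]:
            if obj in rules:
                anchors[area] = rules[obj]
                break
    return anchors

def goal_for(category, spatial_map, anchors):
    """Resolve a symbolic goal ('go to the kitchen') to a navigation waypoint."""
    for area, cat in anchors.items():
        if cat == category:
            return spatial_map[area]["waypoint"]
    return None

anchors = anchor_categories(spatial_map, category_rules)
print(anchors)                                    # {'area1': 'bedroom', 'area2': 'kitchen'}
print(goal_for("kitchen", spatial_map, anchors))  # (7.1, 1.2)
```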

318 citations


Proceedings Article
30 Jul 2005
TL;DR: HEX programs are introduced, which are nonmonotonic logic programs admitting higher-order atoms as well as external atoms, and the well-known answer-set semantics are extended to this class of programs.
Abstract: We introduce HEX programs, which are nonmonotonic logic programs admitting higher-order atoms as well as external atoms, and we extend the well-known answer-set semantics to this class of programs. Higher-order features are widely acknowledged as useful for performing meta-reasoning, among other tasks. Furthermore, the possibility to exchange knowledge with external sources in a fully declarative framework such as Answer-Set Programming (ASP) is nowadays important, in particular in view of applications in the Semantic Web area. Through external atoms, HEX programs can model some important extensions to ASP, and are a useful KR tool for expressing various applications. Finally, complexity and implementation issues for a preliminary prototype are discussed.
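The sketch below is a drastically simplified, brute-force guess-and-check evaluator for a tiny ground program whose rule bodies may call Python functions standing in for external atoms. It uses a plain reduct-style stability check and glosses over the actual HEX semantics (FLP-style reducts, higher-order atoms, non-ground programs); the example program and the "external source" are invented.

```python
from itertools import chain, combinations

# Rules are tuples (head, positive_body, negative_body, external_body), where
# external_body is a list of functions of the candidate interpretation.

def least_model(positive_rules):
    """Least model of a program with only positive bodies."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_answer_set(program, candidate):
    # Keep a rule only if its negated atoms are false and its external atoms
    # evaluate to true w.r.t. the candidate, then drop those literals.
    reduct = [(head, pos)
              for head, pos, neg, ext in program
              if not (neg & candidate) and all(f(candidate) for f in ext)]
    return least_model(reduct) == candidate

def answer_sets(program, atoms):
    subsets = chain.from_iterable(combinations(atoms, r)
                                  for r in range(len(atoms) + 1))
    return [set(s) for s in subsets if is_answer_set(program, set(s))]

SOURCE = {"a", "b"}     # stands in for an external source, e.g. an RDF store
FLAGGED = set()

program = [
    # selected(a) :- &in_source[a], not blocked(a).
    ("selected(a)", set(), {"blocked(a)"}, [lambda interp: "a" in SOURCE]),
    # blocked(a) :- &flagged[a].
    ("blocked(a)", set(), set(), [lambda interp: "a" in FLAGGED]),
]
print(answer_sets(program, ["selected(a)", "blocked(a)"]))   # [{'selected(a)'}]
```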

249 citations


Journal ArticleDOI
TL;DR: The main fuzzy approaches for defining spatial relationships including topological (set relationships, adjacency) and metrical relations (distances, directional relative position) are reviewed.
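As a minimal example of a fuzzy metrical relation of the kind surveyed here, the degree to which two points are "near" can be modelled as a decreasing function of their distance. The trapezoidal profile and thresholds below are illustrative choices, not definitions from the paper.

```python
import math

def mu_near(p, q, full=1.0, zero=5.0):
    """Membership degree of 'p is near q': 1 below `full`, 0 beyond `zero`,
    and linearly decreasing in between (thresholds are illustrative)."""
    d = math.dist(p, q)
    if d <= full:
        return 1.0
    if d >= zero:
        return 0.0
    return (zero - d) / (zero - full)

print(mu_near((0, 0), (0.5, 0.5)))   # 1.0  (clearly near)
print(mu_near((0, 0), (3, 0)))       # 0.5  (somewhat near)
print(mu_near((0, 0), (6, 0)))       # 0.0  (not near)
```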

232 citations


Journal ArticleDOI
TL;DR: In this brief history, the beginnings of artificial intelligence are traced to philosophy, fiction, and imagination, and some early milestones include work in problem solving, which included basic work in learning, knowledge representation, and inference.
Abstract: In this brief history, the beginnings of artificial intelligence are traced to philosophy, fiction, and imagination. Early inventions in electronics, engineering, and many other disciplines have influenced AI. Some early milestones include work in problem solving, which included basic work in learning, knowledge representation, and inference, as well as demonstration programs in language understanding, translation, theorem proving, associative memory, and knowledge-based systems. The article ends with a brief examination of influential organizations and current issues facing the field.

227 citations


Book ChapterDOI
TL;DR: It is shown how concept map-based knowledge models can be used to organize repositories of information in a way that makes them easily browsable, and how concept maps can improve searching algorithms for the Web.
Abstract: Information visualization has been a research topic for many years, leading to a mature field where guidelines and practices are well established. Knowledge visualization, in contrast, is a relatively new area of research that has received more attention recently due to the interest from the business community in Knowledge Management. In this paper we present the CmapTools software as an example of how concept maps, a knowledge visualization tool, can be combined with recent technology to provide integration between knowledge and information visualizations. We show how concept map-based knowledge models can be used to organize repositories of information in a way that makes them easily browsable, and how concept maps can improve searching algorithms for the Web. We also report on how information can be used to complement knowledge models and, based on the searching algorithms, improve the process of constructing concept maps.

220 citations


Book ChapterDOI
15 Jul 2005
TL;DR: Rule-based systems are the simplest form of artificial intelligence; they represent knowledge in terms of a set of rules that tell what to do or what to conclude in different situations.
Abstract: Rule-based systems (also known as production systems or expert systems) are the simplest form of artificial intelligence. A rule-based system uses rules as the knowledge representation for knowledge coded into the system [1][3][4][13][14][16][17][18][20]. The definitions of rule-based systems depend almost entirely on expert systems, which are systems that mimic the reasoning of a human expert in solving a knowledge-intensive problem. Instead of representing knowledge in a declarative, static way as a set of things which are true, rule-based systems represent knowledge in terms of a set of rules that tell what to do or what to conclude in different situations.
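A generic textbook-style sketch of the mechanism described above (not code from the chapter) is a forward-chaining engine that repeatedly fires rules whose conditions hold until nothing new can be concluded; the facts and rules below are made up for illustration.

```python
def forward_chain(facts, rules):
    """rules: list of (conditions, conclusion); all items are plain strings."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)   # fire the rule: add its conclusion
                changed = True
    return facts

rules = [
    (["has_fever", "has_cough"], "suspect_flu"),
    (["suspect_flu"], "recommend_rest"),
]
print(forward_chain(["has_fever", "has_cough"], rules))
# {'has_fever', 'has_cough', 'suspect_flu', 'recommend_rest'}
```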

Book
22 Sep 2005
TL;DR: The book's chapters cover topics such as the Semantic Characterization of Objects, Lexicon and Knowledge Representation, and Means for Expressing Classification and Stratification, including Relational and Functional Means of Representation.
Abstract (table of contents): Knowledge Representation with MultiNet; Historical Roots; Basic Concepts; Semantic Characterization of Objects; Semantic Characterization of Situations; The Comparison of Entities; The Spatio-temporal Characterization of Entities; Modality and Negation; Quantification and Pluralities; The Role of Layer Information in Semantic Representations; Relations Between Situations; Lexicon and Knowledge Representation; Question Answering and Inferences; Software Tools for the Knowledge Engineer and Sample Applications; Comparison Between MultiNet and Other Semantic Formalisms or Knowledge Representation Paradigms; The Representational Means of MultiNet; Overview and Representational Principles; Means for Expressing Classification and Stratification; Relational and Functional Means of Representation.

Book ChapterDOI
29 May 2005
TL;DR: The paper describes the design and implementation principles of a distributed reasoning system, called DRAGO (Distributed Reasoning Architecture for a Galaxy of Ontologies), that implements such distributed decision procedure.
Abstract: The paper addresses the problem of reasoning with multiple ontologies interconnected by semantic mappings. This problem is becoming more and more relevant due to the necessity of building the interoperable Semantic Web. In contrast to the so called global reasoning approach, in this paper we propose a distributed reasoning technique that accomplishes reasoning through a combination of local reasoning chunks, internally executed in each separate ontology. Using Distributed Description Logics as a formal framework for representation of multiple semantically connected ontologies, we define a sound and complete distributed tableau-based reasoning procedure which is built as an extension to standard Description Logic tableau. Finally, the paper describes the design and implementation principles of a distributed reasoning system, called DRAGO (Distributed Reasoning Architecture for a Galaxy of Ontologies), that implements such distributed decision procedure.

BookDOI
01 Jan 2005
TL;DR: This volume presents principles of Inductive Reasoning on the Semantic Web, a Framework for Learning in AL-log, and a Geospatial World Model for the Semantic Web.
Abstract (table of contents): Architectures: SomeWhere in the Semantic Web; A Framework for Aligning Ontologies; A Revised Architecture for Semantic Web Reasoning; Semantic Web Architecture: Stack or Two Towers? Languages: Ten Theses on Logic Languages for the Semantic Web; Semantic and Computational Advantages of the Safe Integration of Ontologies and Rules; Logical Reconstruction of RDF and Ontology Languages; Marriages of Convenience: Triples and Graphs, RDF and XML in Web Querying; Descriptive Typing Rules for Xcerpt; A General Language for Evolution and Reactivity in the Semantic Web. Reasoning: Use Cases for Reasoning with Metadata or What Have Web Services to Do with Integrity Constraints?; Principles of Inductive Reasoning on the Semantic Web: A Framework for Learning in AL-log; Computational Treatment of Temporal Notions: The CTTN-System; A Geospatial World Model for the Semantic Web; Generating Contexts for Expression Data Using Pathway Queries.

Book ChapterDOI
05 Dec 2005
TL;DR: This paper describes the publicly available ‘Semantic Web for Research Communities’ (SWRC) ontology, in which research communities and relevant related concepts are modelled, and describes the design decisions that underlie the ontology.
Abstract: Representing knowledge about researchers and research communities is a prime use case for distributed, locally maintained, interlinked and highly structured information in the spirit of the Semantic Web. In this paper we describe the publicly available ‘Semantic Web for Research Communities’ (SWRC) ontology, in which research communities and relevant related concepts are modelled. We describe the design decisions that underlie the ontology and report on both experiences with and known usages of the SWRC Ontology. We believe that for making the Semantic Web reality the re-usage of ontologies and their continuous improvement by user communities is crucial. Our contribution aims to provide a description and usage guidelines to make the value of the SWRC explicit and to facilitate its re-use.

Journal ArticleDOI
TL;DR: This semantic analysis approach can be used in semantic annotation and transcoding systems, which take into consideration the user's environment, including preferences, devices used, available network bandwidth, and content identity.
Abstract: An approach to knowledge-assisted semantic video object detection based on a multimedia ontology infrastructure is presented. Semantic concepts in the context of the examined domain are defined in an ontology, enriched with qualitative attributes (e.g., color homogeneity), low-level features (e.g., color model components distribution), object spatial relations, and multimedia processing methods (e.g., color clustering). Semantic Web technologies are used for knowledge representation in the RDF(S) metadata standard. Rules in F-logic are defined to describe how tools for multimedia analysis should be applied, depending on concept attributes and low-level features, for the detection of video objects corresponding to the semantic concepts defined in the ontology. This supports flexible and managed execution of various application- and domain-independent multimedia analysis tasks. Furthermore, this semantic analysis approach can be used in semantic annotation and transcoding systems, which take into consideration the user's environment, including preferences, devices used, available network bandwidth, and content identity. The proposed approach was tested for the detection of semantic objects on video data of three different domains.

Journal ArticleDOI
TL;DR: A formal theory of robot perception as a form of abduction pins down the process whereby low-level sensor data is transformed into a symbolic representation of the external world, drawing together aspects such as incompleteness, top-down information flow, active perception, attention, and sensor fusion in a unifying framework.

Journal ArticleDOI
TL;DR: The judgment theorems of consistent sets are examined, and the discernibility matrix of a formal context is introduced, by which an approach to attribute reduction in the concept lattice is presented.
Abstract: The theory of the concept lattice is an efficient tool for knowledge representation and knowledge discovery, and is applied to many fields successfully. One focus of knowledge discovery is knowledge reduction. This paper proposes the theory of attribute reduction in the concept lattice, which extends the theory of the concept lattice. In this paper, the judgment theorems of consistent sets are examined, and the discernibility matrix of a formal context is introduced, by which we present an approach to attribute reduction in the concept lattice. The characteristics of three types of attributes are analyzed.
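The brute-force illustration below treats a subset D of attributes as consistent when the reduced formal context yields exactly the same concept extents as the full context (hence an isomorphic concept lattice), and takes a reduct to be a minimal such D. This is only meant to make the notion tangible: the paper itself characterizes consistent sets via judgment theorems and computes reducts with a discernibility matrix rather than the naive enumeration used here, and its exact definitions should be consulted.

```python
from itertools import combinations

def extents(objects, attributes, incidence):
    """All concept extents of a formal context: intersections of attribute
    columns, together with the full object set."""
    columns = [frozenset(g for g in objects if (g, m) in incidence)
               for m in attributes]
    exts = {frozenset(objects)}
    frontier = {frozenset(objects)}
    while frontier:
        new = {e & c for e in frontier for c in columns} - exts
        exts |= new
        frontier = new
    return exts

def reducts(objects, attributes, incidence):
    full = extents(objects, attributes, incidence)
    found = []
    for r in range(1, len(attributes) + 1):
        for subset in combinations(attributes, r):
            sub_incidence = {(g, m) for (g, m) in incidence if m in subset}
            if extents(objects, subset, sub_incidence) == full:
                if not any(set(prev) <= set(subset) for prev in found):
                    found.append(subset)    # keep only minimal consistent sets
    return found

objects = ["g1", "g2", "g3"]
attributes = ["a", "b", "c", "d"]
incidence = {("g1", "a"), ("g1", "b"), ("g2", "b"),
             ("g2", "c"), ("g3", "c"), ("g2", "d")}
print(reducts(objects, attributes, incidence))   # [('a', 'b', 'c')]; d is reducible
```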

Book ChapterDOI
TL;DR: This introductory article seeks to provide a conceptual framework and a preview of the contributions of this volume as to why visualization may be effective in fostering, processing and managing knowledge and information.
Abstract: Visualization has proven to be an effective strategy for supporting users in coping with complexity in knowledge- and information-rich scenarios. Up to now, however, information visualization and knowledge visualization have been distinct research areas, which have been developed independently of each other. This book aims toward bringing both approaches together and looking for synergies, which may be used for fostering learning, instruction, and problem solving. This introductory article seeks to provide a conceptual framework and a preview of the contributions of this volume. The most important concepts referred to in this book are defined and a conceptual rationale is provided as to why visualization may be effective in fostering, processing and managing knowledge and information. The basic ideas underlying knowledge visualization and information visualization are outlined. The preview of each approach addresses its basic concept, as well as how it fits into the conceptual rationale of the book. The contributions are structured according to whether they belong to one of the following basic categories: “Background”, “Knowledge Visualization”, “Information Visualization”, and “Synergies”.

Book ChapterDOI
TL;DR: A first theoretical framework and a model for the new field of knowledge visualization are presented, which describes guidelines and principles derived from professional practice and previous research on how architects successfully use complementary visualizations to transfer and create knowledge among individuals from different social, cultural, and educational backgrounds.
Abstract: This article presents synergies between the research areas information visualization and knowledge visualization from a knowledge management and a communication science perspective. It presents a first theoretical framework and a model for the new field of knowledge visualization. It describes guidelines and principles derived from our professional practice and previous research on how architects successfully use complementary visualizations to transfer and create knowledge among individuals from different social, cultural, and educational backgrounds. The findings and insights are important for researchers and practitioners in the fields of information visualization, knowledge visualization, knowledge management, information design, media didactics, instructional psychology, and communication sciences.

Proceedings ArticleDOI
25 Jul 2005
TL;DR: The purpose of the paper is to highlight the needs of users, with their individual differences, which knowledge representations must support to enable reasoning about user situational awareness (SA).
Abstract: Subsequent revisions to the JDL model modified definitions for model usefulness that stressed differentiation between fusion (estimation) and sensor management (control). Two diverging groups include one pressing for fusion automation (JDL revisions) and one advocating the role of the user (user-fusion model). The center of debate is real-world delivery of fusion systems, which requires presenting information fusion results for knowledge representation (fusion estimation) and knowledge reasoning (control management). The purpose of the paper is to highlight the needs of users, with their individual differences, which knowledge representations must support to enable reasoning about user situational awareness (SA). This position paper highlights: (1) addressing the user in system management/control, (2) assessing information quality (metrics) to support SA, (3) evaluating fusion systems to deliver user info needs, (4) planning knowledge delivery for dynamic updating, (5) designing SA interfaces to support user reasoning.

Journal ArticleDOI
TL;DR: In this paper, a general-purpose knowledge integration framework that employs Bayesian networks in integrating both low-level and semantic features is presented, and the efficacy of this framework is demonstrated via three applications involving semantic understanding of pictorial images.

Journal ArticleDOI
TL;DR: A formal method of cognitive-semantic analysis is presented for the identification and characterization of reasoning strategies deployed in medical tasks and its use through specific examples is demonstrated.

Proceedings ArticleDOI
28 Nov 2005
TL;DR: An ontology-driven model, which integrates Bayesian networks (BN) into the Ontology Web Language (OWL) to preserve the advantages of both and enable agents to act under uncertainty and complex structured open environments at the same time.
Abstract: This paper describes an ontology-driven model, which integrates Bayesian networks (BN) into the Web Ontology Language (OWL) to preserve the advantages of both. This model makes use of probability- and dependency-annotated OWL to represent uncertain information in BN structures. These extensions enhance knowledge representation in OWL and enable agents to act under uncertainty in complex, structured, open environments at the same time. This paper presents the underlying principles and scratches the surface of the decision-theoretic agent system design based on "OntoBayes".
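To illustrate the general idea of annotating ontology classes with probabilistic dependencies and reasoning over the induced Bayesian network, the sketch below stores parents and conditional probability tables in plain dictionaries and answers a query by brute-force enumeration. The variables, dependencies, and probabilities are invented; the actual OntoBayes model defines OWL extensions for such annotations, which are not reproduced here.

```python
from itertools import product

# "Annotated ontology": each boolean variable lists its parents and a CPT
# mapping parent value combinations to P(variable = True | parents).
network = {
    "Rain":      {"parents": [], "cpt": {(): 0.2}},
    "Sprinkler": {"parents": ["Rain"], "cpt": {(True,): 0.01, (False,): 0.4}},
    "WetGrass":  {"parents": ["Rain", "Sprinkler"],
                  "cpt": {(True, True): 0.99, (True, False): 0.9,
                          (False, True): 0.9, (False, False): 0.0}},
}

def joint_probability(assignment):
    p = 1.0
    for var, spec in network.items():
        parent_values = tuple(assignment[parent] for parent in spec["parents"])
        p_true = spec["cpt"][parent_values]
        p *= p_true if assignment[var] else 1.0 - p_true
    return p

def probability(query_var, evidence):
    """P(query_var = True | evidence) by enumerating all assignments."""
    variables = list(network)
    numerator = denominator = 0.0
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if any(assignment[v] != val for v, val in evidence.items()):
            continue
        p = joint_probability(assignment)
        denominator += p
        if assignment[query_var]:
            numerator += p
    return numerator / denominator

print(probability("Rain", {"WetGrass": True}))   # posterior belief in Rain
```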

Proceedings Article
30 Jul 2005
TL;DR: A relatively small change of semantics is proposed which localizes inconsistency, and preserves directionality of "knowledge import", and a characterization of inferences using a fixed-point operator which can form the basis of a cache-based implementation for local reasoners.
Abstract: We investigate a formalism for reasoning with multiple local ontologies, connected by directional semantic mappings. We propose: (1) a relatively small change of semantics which localizes inconsistency (thereby making unnecessary global satisfiability checks), and preserves directionality of "knowledge import"; (2) a characterization of inferences using a fixed-point operator, which can form the basis of a cache-based implementation for local reasoners; (3) a truly distributed tableaux algorithm for cases when the local reasoners use subsets of SHIQ. Throughout, we indicate the applicability of the results to several recent proposals for knowledge representation and reasoning that support modularity, scalability and distributed reasoning.

Journal ArticleDOI
TL;DR: This paper relates the approaches, results, and goals of this stream of research, called functional representation (FR), with the functional modeling (FM) stream in engineering, and argues that the two streams are performing research that is mutually complementary.
Abstract: This paper is an informal description of some recent insights about what a device function is, how it arises in response to needs, and how function arises from the structure of a device and the functions of its components. These results formalize and clarify a set of contending intuitions about function that researchers have had. The paper relates the approaches, results, and goals of this stream of research, called functional representation (FR), with the functional modeling (FM) stream in engineering. Despite the occurrence of the term function in the two streams, often the results and techniques in the two streams appear not to have much to do with each other. I argue that, in fact, the two streams are performing research that is mutually complementary. FR research provides the basic layer for device ontology in a formal framework that helps to clarify the meanings of terms such as function and structure, and also to support representation of device knowledge for automated reasoning. FM research provides another layer in device ontology, by attempting to identify behavior primitives that are applicable to subsets of devices, with the hope that functions can be described in those domains with an economy of terms. This can lead to useful catalogs of functions and devices in specific areas of engineering. With increased attention to formalization, the work in FM can provide domain-specific terms for FR research in knowledge representation and automated reasoning.

Journal ArticleDOI
TL;DR: This work introduces a number of natural description logics and performs a detailed analysis of their decidability and computational complexity, finding that naive extensions with key constraints easily lead to undecidability, whereas more careful extensions yield NExpTime-complete DLs for a variety of useful concrete domains.
Abstract: Many description logics (DLs) combine knowledge representation on an abstract, logical level with an interface to "concrete" domains like numbers and strings with built-in predicates such as <, +, and prefix-of. These hybrid DLs have turned out to be useful in several application areas, such as reasoning about conceptual database models. We propose to further extend such DLs with key constraints that allow the expression of statements like "US citizens are uniquely identified by their social security number". Based on this idea, we introduce a number of natural description logics and perform a detailed analysis of their decidability and computational complexity. It turns out that naive extensions with key constraints easily lead to undecidability, whereas more careful extensions yield NExpTime-complete DLs for a variety of useful concrete domains.
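As a small illustration of the kind of key constraint discussed here ("US citizens are uniquely identified by their social security number"), the sketch below checks, for a finite set of individuals, that no two distinct instances of the keyed class agree on the key attribute. This is only a data-level uniqueness check with invented names; deciding such constraints together with full terminological reasoning is precisely what the paper analyses.

```python
def violates_key(individuals, keyed_class, key_attribute):
    """individuals: list of dicts with a name, a set of classes, and values."""
    seen = {}
    for ind in individuals:
        if keyed_class not in ind["classes"]:
            continue
        key = ind.get(key_attribute)
        if key is None:
            continue
        if key in seen and seen[key] != ind["name"]:
            return (seen[key], ind["name"])     # two instances share the key
        seen[key] = ind["name"]
    return None

people = [
    {"name": "p1", "classes": {"USCitizen"}, "ssn": "123-45-6789"},
    {"name": "p2", "classes": {"USCitizen"}, "ssn": "123-45-6789"},
    {"name": "p3", "classes": {"Visitor"},   "ssn": "123-45-6789"},
]
print(violates_key(people, "USCitizen", "ssn"))   # ('p1', 'p2')
```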

Book
24 Nov 2005
TL;DR: Computational Lexical Semantics is one of the first volumes to provide models for the creation of various kinds of computerized lexicons for the automatic treatment of natural language, with applications to machine translation, automatic indexing, database front-ends, and knowledge extraction, among other things.
Abstract: From the Publisher: Lexical semantics has become a major research area within computational linguistics, drawing from psycholinguistics, knowledge representation, and computer algorithms and architecture. Research programs whose goal is the definition of large lexicons are asking what the appropriate representation structure is for different facets of lexical information. Among these facets, semantic information is probably the most complex and the least explored. Computational Lexical Semantics is one of the first volumes to provide models for the creation of various kinds of computerized lexicons for the automatic treatment of natural language, with applications to machine translation, automatic indexing, database front-ends, and knowledge extraction, among other things. It focuses on semantic issues, as seen by linguists, psychologists, and computer scientists. Besides describing academic research, it also covers ongoing industrial projects.

Journal ArticleDOI
TL;DR: This work proposes the mixture of experts for case-based reasoning (MOE4CBR), a method that combines an ensemble of CBR classifiers with spectral clustering and logistic regression that achieves higher prediction accuracy and leads to the selection of a subset of features that have meaningful relationships with their class labels.
Abstract: Case-based reasoning (CBR) is a suitable paradigm for class discovery in molecular biology, where the rules that define the domain knowledge are difficult to obtain and the number and the complexity of the rules affecting the problem are too large for formal knowledge representation. To extend the capabilities of CBR, we propose the mixture of experts for case-based reasoning (MOE4CBR), a method that combines an ensemble of CBR classifiers with spectral clustering and logistic regression. Our approach not only achieves higher prediction accuracy, but also leads to the selection of a subset of features that have meaningful relationships with their class labels. We evaluate MOE4CBR by applying the method to a CBR system called TA3 - a computational framework for CBR systems. For two ovarian mass spectrometry data sets, the prediction accuracy improves from 80 percent to 93 percent and from 90 percent to 98.4 percent, respectively. We also apply the method to leukemia and lung microarray data sets with prediction accuracy improving from 65 percent to 74 percent and from 60 percent to 70 percent, respectively. Finally, we compare our list of discovered biomarkers with the lists of selected biomarkers from other studies for the mass spectrometry data sets.
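A loose sketch of the mixture-of-experts pipeline described above is given below: cluster the training data, train one case-based classifier per cluster (a k-nearest-neighbour model stands in for a CBR system such as TA3), and let a logistic-regression gate weight the experts' predictions. It uses generic scikit-learn components and synthetic data, so it is an illustration of the pipeline under those stand-in assumptions, not the authors' MOE4CBR implementation or their feature-selection step.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.cluster import SpectralClustering
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

n_experts = 2
clusters = SpectralClustering(n_clusters=n_experts, random_state=0).fit_predict(X_train)

# Gate: predicts how much each expert should be trusted for a given sample.
gate = LogisticRegression(max_iter=1000).fit(X_train, clusters)

# Experts: one case-based (k-NN) classifier per cluster.
classes = np.unique(y_train)
experts = []
for c in range(n_experts):
    member = clusters == c
    k = min(3, int(member.sum()))
    experts.append(KNeighborsClassifier(n_neighbors=k).fit(X_train[member], y_train[member]))

def predict(X_new):
    weights = gate.predict_proba(X_new)              # shape (n_samples, n_experts)
    mixed = np.zeros((len(X_new), len(classes)))
    for c, expert in enumerate(experts):
        proba = np.zeros((len(X_new), len(classes)))
        cols = np.searchsorted(classes, expert.classes_)
        proba[:, cols] = expert.predict_proba(X_new)
        mixed += weights[:, [c]] * proba             # gate-weighted expert vote
    return classes[np.argmax(mixed, axis=1)]

print("accuracy:", np.mean(predict(X_test) == y_test))
```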