
Showing papers on "Knowledge representation and reasoning" published in 2008


Book
08 Jan 2008
TL;DR: The Handbook of Knowledge Representation is an up-to-date review of twenty-five key topics in knowledge representation written by the leaders of each field, an essential resource for students, researchers and practitioners in all areas of Artificial Intelligence.
Abstract: Knowledge Representation, which lies at the core of Artificial Intelligence, is concerned with encoding knowledge on computers to enable systems to reason automatically. The Handbook of Knowledge Representation is an up-to-date review of twenty-five key topics in knowledge representation, written by the leaders of each field. This book is an essential resource for students, researchers and practitioners in all areas of Artificial Intelligence. The publisher's highlights: make your computer smarter; handle qualitative and uncertain information; improve computational tractability to solve your problems easily.

785 citations


Proceedings Article
13 Jul 2008
TL;DR: Answer set programming (ASP) is a form of declarative programming oriented towards difficult search problems, particularly useful in knowledge-intensive applications.
Abstract: Answer set programming (ASP) is a form of declarative programming oriented towards difficult search problems. As an outgrowth of research on the use of nonmonotonic reasoning in knowledge representation, it is particularly useful in knowledge-intensive applications. ASP programs consist of rules that look like Prolog rules, but the computational mechanisms used in ASP are different: they are based on the ideas that have led to the creation of fast satisfiability solvers for propositional logic.
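
To make the guess-and-check intuition concrete, here is a minimal, naive Python sketch of answer set computation for a tiny ground program via the Gelfond-Lifschitz reduct; the encoding and helper names are ours, not any solver's API.

```python
from itertools import chain, combinations

# A ground rule is (head, positive body, negative body).
# Tiny program:  a :- not b.   b :- not a.   c :- a.
rules = [
    ("a", [], ["b"]),
    ("b", [], ["a"]),
    ("c", ["a"], []),
]
atoms = {x for h, pos, neg in rules for x in [h, *pos, *neg]}

def minimal_model(positive_rules):
    """Least model of a negation-free program, by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos, _ in positive_rules:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_answer_set(candidate):
    # Gelfond-Lifschitz reduct: drop rules whose negative body clashes
    # with the candidate, then erase the remaining negative literals.
    reduct = [(h, pos, []) for h, pos, neg in rules
              if not (set(neg) & candidate)]
    return minimal_model(reduct) == candidate

# Brute-force "guess" over all subsets of atoms, then "check".
for guess in chain.from_iterable(
        combinations(sorted(atoms), r) for r in range(len(atoms) + 1)):
    if is_answer_set(set(guess)):
        print(set(guess))   # prints {'b'} and {'a', 'c'}
```

Real ASP systems replace the brute-force guess with propagation and learning borrowed from SAT solving, which is exactly the connection the abstract points out.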

357 citations


Book
20 Oct 2008
TL;DR: This book addresses the question of how far it is possible to go in knowledge representation and reasoning by representing knowledge with graphs (in the graph theory sense) and reasoning with graph operations.
Abstract: This book addresses the question of how far it is possible to go in knowledge representation and reasoning by representing knowledge with graphs (in the graph theory sense) and reasoning with graph operations. The authors have carefully structured the book, with the first part covering basic conceptual graphs, the second developing the computational aspects, and the final section covering the kernel extensions. An appendix summarizes the basic mathematical notions. This is the first book to provide a comprehensive view of the computational facets of conceptual graphs. The mathematical prerequisites are minimal and the material presented can be used in artificial intelligence courses at graduate level upwards.
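
As a rough picture of the book's subject, a conceptual graph is a bipartite structure of concept nodes and relation nodes; the following is a minimal Python sketch of such a structure (all names are ours, not the authors' formalization).

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Concept:
    type: str            # e.g. "Cat"
    referent: str = "*"  # "*" stands for a generic (existential) referent

@dataclass
class ConceptualGraph:
    concepts: set = field(default_factory=set)
    relations: list = field(default_factory=list)  # (relation type, args)

    def add_relation(self, rel_type, *args):
        """Attach a relation node linking an ordered tuple of concepts."""
        self.concepts.update(args)
        self.relations.append((rel_type, args))

# "A cat is on a mat":  [Cat] -> (on) -> [Mat]
g = ConceptualGraph()
g.add_relation("on", Concept("Cat"), Concept("Mat"))
print(g.relations)
```

In this setting, reasoning reduces to graph operations such as projection (a graph homomorphism between conceptual graphs), which is the computational core the book develops.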

339 citations


Journal ArticleDOI
TL;DR: A hybrid formalism of MKNF+ knowledge bases is presented, which integrates DLs and rules in a coherent semantic framework by basing the semantics of the formalism on the logic of minimal knowledge and negation as failure (MKNF) by Lifschitz.
Abstract: Description logics (DLs) and rules are formalisms that emphasize different aspects of knowledge representation: whereas DLs are focused on specifying and reasoning about conceptual knowledge, rules are focused on nonmonotonic inference. Many applications, however, require features of both DLs and rules. Developing a formalism that integrates DLs and rules would be a natural outcome of a large body of research in knowledge representation and reasoning of the last two decades; however, achieving this goal is very challenging and the approaches proposed thus far have not fully reached it. In this paper, we present a hybrid formalism of MKNF+ knowledge bases, which integrates DLs and rules in a coherent semantic framework. Achieving seamless integration is nontrivial, since DLs use an open-world assumption, while the rules are based on a closed-world assumption. We overcome this discrepancy by basing the semantics of our formalism on the logic of minimal knowledge and negation as failure (MKNF) by Lifschitz. We present several algorithms for reasoning with MKNF+ knowledge bases, each suitable to different kinds of rules, and establish tight complexity bounds.
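
As a flavor of the integration (the example is ours, using the modal notation of the MKNF literature), an open-world DL axiom can coexist with a closed-world rule that uses negation as failure:

```latex
% Open-world DL axiom: natural deaths are deaths.
\mathit{NaturalDeath} \sqsubseteq \mathit{Death}

% Closed-world MKNF-style rule: treat a death as suspicious unless it
% is known (K) to be natural; "not" is negation as failure.
\mathbf{K}\,\mathit{Suspicious}(x) \;\leftarrow\;
  \mathbf{K}\,\mathit{Death}(x),\ \mathbf{not}\,\mathit{NaturalDeath}(x)
```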

289 citations


Journal ArticleDOI
TL;DR: This paper defines a specific type of semantic map, which integrates hierarchical spatial information and semantic knowledge, and describes how these semantic maps can improve task planning in two ways: extending the capabilities of the planner by reasoning about semantic information, and improving planning efficiency in large domains.

285 citations


Journal ArticleDOI
TL;DR: Multi-Entity Bayesian Networks (MEBN), a first-order language for specifying probabilistic knowledge bases as parameterized fragments of Bayesian networks, is presented, and a proof is given that MEBN can represent a probability distribution on interpretations of any finitely axiomatizable first-order theory.

281 citations


Book ChapterDOI
TL;DR: Attempto Controlled English (ACE) is a controlled natural language, i.e. a precisely defined subset of English that can automatically and unambiguously be translated into first-order logic.
Abstract: Attempto Controlled English (ACE) is a controlled natural language, i.e. a precisely defined subset of English that can automatically and unambiguously be translated into first-order logic. ACE may seem to be completely natural, but is actually a formal language; concretely, it is a first-order logic language with an English syntax. Thus ACE is both human and machine understandable. ACE was originally intended to specify software, but has since been used as a general knowledge representation language in several application domains, most recently for the semantic web. ACE is supported by a number of tools, predominantly by the Attempto Parsing Engine (APE), which translates ACE texts into Discourse Representation Structures (DRS), a variant of first-order logic. Other tools include the Attempto Reasoner RACE, the AceRules system, the ACE View plug-in for the Protege ontology editor, AceWiki, and the OWL verbaliser.
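
For illustration (our example, not one taken from the paper), the ACE sentence "Every customer owns a card." has exactly one reading, and a first-order translation along ACE's lines is:

```latex
\forall x\,\bigl(\mathit{customer}(x) \rightarrow
  \exists y\,(\mathit{card}(y) \wedge \mathit{owns}(x,y))\bigr)
```

The determiner "every" fixes the universal quantifier, and the indefinite object introduces an existential scoped inside it, which is what makes the reading unambiguous.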

264 citations


Journal ArticleDOI
TL;DR: This paper presents sound and complete algorithms for the main reasoning problems in the new probabilistic description logics; the algorithms are based on reductions to reasoning in the classical counterparts and to solving linear optimization problems.
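
The linear-optimization side of such reductions can be sketched in the style of probabilistic logic (our generic formulation, not the paper's exact system): tight probability bounds for a query φ are the optima of a linear program over distributions on models I, constrained by probabilistic axioms ψ_i with bounds [ℓ_i, u_i].

```latex
\min / \max \;\sum_{I \,\models\, \varphi} p_I
\quad\text{s.t.}\quad
\sum_{I} p_I = 1,\qquad
\ell_i \;\le\; \sum_{I \,\models\, \psi_i} p_I \;\le\; u_i,\qquad
p_I \ge 0 .
```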

260 citations


Proceedings Article
16 Sep 2008
TL;DR: In this paper, the problem of answering conjunctive queries posed over knowledge bases whose rules extend Datalog rules with existentially quantified variables in the head is addressed.
Abstract: A crucial task in Knowledge Representation is answering queries posed over a knowledge base, represented as a set of facts plus a set of rules. In this paper we address the problem of answering conjunctive queries posed over knowledge bases where rules are an extension of Datalog rules, called Datalog∃ rules, that may have existentially quantified variables in the head; such rules are traditionally called tuple-generating dependencies (TGDs) in the database literature, but they are broadly used in description logics and in ontological reasoning. In this setting, the chase algorithm is an important tool for query answering. So far, most of the research has concentrated on cases where the chase terminates. We define and study large classes of TGDs under which the query evaluation problems remain decidable even in case the chase does not terminate. We provide tight complexity bounds for such cases. Our results immediately extend to query containment.
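
A standard example of a TGD on which the chase does not terminate (our illustration, in the paper's setting):

```latex
% Every person has a father, who is again a person.
\mathit{person}(X) \;\rightarrow\; \exists Y\,
  \bigl(\mathit{father}(Y,X) \wedge \mathit{person}(Y)\bigr)
```

Chasing the single fact person(alice) keeps firing the rule on the fresh nulls it invents: person(alice), father(n1, alice), person(n1), father(n2, n1), and so on. The classes studied in the paper keep query answering decidable despite such infinite chases.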

240 citations


Book ChapterDOI
01 Jan 2008
TL;DR: This article provides a self-contained first introduction to description logics with examples before the syntax and semantics of the DL SROIQ are defined in detail.
Abstract: Publisher Summary This chapter discusses description logics (DLs), which are a family of logic-based knowledge representation languages that can be used to represent the terminological knowledge of an application domain in a structured and formally well-understood way. It discusses their provenience and history and explains the way the field has developed. It describes the basic DL ALC in some detail, including definitions of syntax, semantics, and basic reasoning services, and discusses important extensions such as inverse roles, number restrictions, and concrete domains. It discusses (1) the relationship between DLs and other formalisms, in particular first-order and modal logics, (2) the most commonly used reasoning techniques, in particular tableau, resolution, and automata-based techniques, and (3) the computational complexity of basic reasoning problems. After reviewing some of the most prominent applications of DLs, in particular ontology language applications, the chapter discusses other aspects of DL research.
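
For concreteness, the standard set-theoretic semantics of a few ALC constructors over an interpretation I = (Δ^I, ·^I) (textbook material, not a quotation from the chapter):

```latex
(C \sqcap D)^{\mathcal{I}} = C^{\mathcal{I}} \cap D^{\mathcal{I}},\qquad
(\neg C)^{\mathcal{I}} = \Delta^{\mathcal{I}} \setminus C^{\mathcal{I}},\qquad
(\exists r.C)^{\mathcal{I}} = \{\, d \mid \exists e\,.\;
  (d,e) \in r^{\mathcal{I}} \wedge e \in C^{\mathcal{I}} \,\}
```

So an axiom such as Parent ≡ Person ⊓ ∃hasChild.Person defines parents as exactly the persons with at least one child who is a person.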

239 citations


Book ChapterDOI
01 Jan 2008
TL;DR: Answer Set Prolog is a language for knowledge representation and reasoning based on the answer set/stable model semantics of logic programs that allows expressing disjunction and classical (strong) negation.
Abstract: Publisher Summary This chapter discusses Answer Set Prolog, which is a language for knowledge representation and reasoning based on the answer set/stable model semantics of logic programs. The language has roots in the declarative programming, syntax, and semantics of standard Prolog, disjunctive databases, and nonmonotonic logic. Unlike standard Prolog, it allows expressing disjunction and classical (strong) negation. It differs from many other knowledge representation languages by its ability to represent defaults. A substantial part of education consists in learning various defaults, exceptions to these defaults, and the ways of using this information to draw reasonable conclusions about the world and the consequences of one's actions. Answer Set Prolog provides a powerful logical model of this process. Its syntax allows a simple representation of defaults and their exceptions, its consequence relation characterizes the corresponding set of valid conclusions, and its inference mechanisms allow a program to find these conclusions in a reasonable amount of time.
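
The classic default example (ours) shows the interplay of the two negations: birds normally fly, and penguins are a strong exception.

```latex
\mathit{flies}(X) \;\leftarrow\; \mathit{bird}(X),\ \mathbf{not}\,\neg\mathit{flies}(X). \\
\neg\mathit{flies}(X) \;\leftarrow\; \mathit{penguin}(X). \\
\mathit{bird}(X) \;\leftarrow\; \mathit{penguin}(X).
```

With the facts bird(tweety) and penguin(opus), the unique answer set concludes flies(tweety) by default but ¬flies(opus), since the derived exception blocks the default for opus.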

Journal ArticleDOI
TL;DR: In this paper, the authors examine the possible use of description logics (DLs) as a knowledge representation and reasoning system for high-level scene interpretation, and show that aggregates can be represented by concept expressions of a description logic which provides a concrete-domain extension for quantitative temporal and spatial constraints.

Journal ArticleDOI
TL;DR: This article reviews existing similarity measures in geometric, feature, network, alignment and transformational models, and evaluates the semantic similarity models with respect to the requirements for semantic similarity measurement between geospatial data.
Abstract: Semantic similarity is central for the functioning of semantically enabled processing of geospatial data. It is used to measure the degree of potential semantic interoperability between data or different geographic information systems (GIS). Similarity is essential for dealing with vague data queries, vague concepts or natural language, and is the basis for semantic information retrieval and integration. The choice of similarity measurement strongly influences the conceptual design and the functionality of a GIS. The goal of this article is to provide a survey of theories of semantic similarity measurement and to review how these approaches – originally developed as psychological models to explain human similarity judgment – can be used in geographic information science. According to their knowledge representation and notion of similarity, we classify existing similarity measures into geometric, feature, network, alignment and transformational models. The article reviews each of these models and outlines its notion of similarity and metric properties. Afterwards, we evaluate the semantic similarity models with respect to the requirements for semantic similarity measurement between geospatial data. The article concludes by comparing the similarity measures and giving general advice on how to choose an appropriate semantic similarity measure; the advantages and disadvantages of each model point to its suitability for different tasks.
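
As one concrete instance of the feature models reviewed here, Tversky's ratio model scores similarity from common and distinctive feature sets; a small Python sketch with invented geospatial feature sets:

```python
def tversky(a_features, b_features, alpha=0.5, beta=0.5):
    """Tversky ratio model: common features weighed against distinctive
    ones. alpha = beta = 0.5 gives the symmetric Dice coefficient."""
    a, b = set(a_features), set(b_features)
    common = len(a & b)
    return common / (common + alpha * len(a - b) + beta * len(b - a))

# Hypothetical feature sets for two geospatial concepts:
canal = {"waterbody", "artificial", "navigable"}
river = {"waterbody", "natural", "navigable", "flowing"}
print(tversky(canal, river))  # 2 / 3.5 = 0.5714...
```

Choosing alpha ≠ beta makes the measure asymmetric, which is how feature models capture the human judgment that "a canal is like a river" can score differently from "a river is like a canal".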

Proceedings ArticleDOI
04 Aug 2008
TL;DR: This paper presents the plan of building the Large Knowledge Collider (LarKC), a platform for massive distributed incomplete reasoning that will remove scalability barriers, and discusses how the technologies of LarKC would move beyond the state of the art of Web-scale reasoning.
Abstract: Current semantic Web reasoning systems do not scale to the requirements of their hottest applications, such as analyzing data from millions of mobile devices, dealing with terabytes of scientific data, and content management in enterprises with thousands of knowledge workers. In this paper, we present our plan for building the Large Knowledge Collider (LarKC), a platform for massive distributed incomplete reasoning that will remove these scalability barriers. This is achieved by (i) enriching the current logic-based semantic Web reasoning methods, (ii) employing cognitively inspired approaches and techniques, and (iii) building a distributed reasoning platform and realizing it both on a high-performance computing cluster and via "computing at home". We also discuss how the technologies of LarKC would move beyond the state of the art of Web-scale reasoning.

Journal ArticleDOI
TL;DR: A literature review of clinical decision support systems, focusing on the way knowledge bases are constructed and how inference mechanisms and group decision-making methods are used in CDSSs, with particular attention to the uncertainty-handling capability of commonly used knowledge representation and inference schemes.
Abstract: This paper provides a literature review of clinical decision support systems (CDSSs), with a focus on the way knowledge bases are constructed, and how inference mechanisms and group decision-making methods are used in CDSSs. Particular attention is paid to the uncertainty-handling capability of the commonly used knowledge representation and inference schemes. The definition of what constitutes a good CDSS, and how CDSSs can be evaluated and validated, are also considered. Some future research directions for handling uncertainties in CDSSs are proposed.

Book ChapterDOI
15 Dec 2008
TL;DR: This work formally defines the new class of finitely-ground programs, allowing for a powerful (possibly recursive) use of function terms in the full ASP language with disjunction and negation, and proves that it is semi-decidable.
Abstract: Disjunctive Logic Programming (DLP) under the answer set semantics, often referred to as Answer Set Programming (ASP), is a powerful formalism for knowledge representation and reasoning (KRR). Recent years have witnessed an increasing effort to embed functions in the context of ASP. Nevertheless, at present no ASP system allows for a reasonably unrestricted use of function terms: if allowed at all, functions are either required not to be recursive or are subject to severe syntactic limitations. In this work we formally define the new class of finitely-ground programs, allowing for a powerful (possibly recursive) use of function terms in the full ASP language with disjunction and negation. We demonstrate that finitely-ground programs have nice computational properties: (i) both brave and cautious reasoning are decidable, and (ii) answer sets of finitely-ground programs are computable. Moreover, the language is highly expressive, as any computable function can be encoded by a finitely-ground program. Due to this high expressiveness, membership in the class of finitely-ground programs is clearly not decidable (we prove that it is semi-decidable). We also single out a subset of finitely-ground programs, called finite-domain programs, which are effectively recognizable, while keeping computability of both reasoning and answer set computation. We implement all results in DLV, further extending the language in order to support list and set terms, along with a rich library of built-in functions for their manipulation. The resulting ASP system is very powerful: any computable function can be encoded in a rich and fully declarative KRR language, ensuring termination on every finitely-ground program. In addition, termination is guaranteed a priori if the user asks for the finite-domain check.
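
To illustrate the dividing line with two tiny programs (our examples, not the paper's):

```latex
% Not finitely ground: the single answer set is infinite.
p(a). \qquad p(f(X)) \leftarrow p(X).
%   answer set: \{\,p(a),\ p(f(a)),\ p(f(f(a))),\ \dots\,\}

% Finitely ground: function terms occur, but only finitely many
% rule instances are relevant.
q(f(a)). \qquad r(X) \leftarrow q(f(X)).
%   answer set: \{\,q(f(a)),\ r(a)\,\}
```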

Journal ArticleDOI
01 Dec 2008
TL;DR: The article describes selected formalisms of the ContractLog KR and their adequacy for automated SLA management and presents results of experiments and examples from common industry use cases to demonstrate the expressiveness of the language and the scalability of the approach.
Abstract: Outsourcing of complex IT infrastructure to IT service providers has increased substantially during the past years. IT service providers must be able to fulfil their service-quality commitments based upon predefined Service Level Agreements (SLAs) with the service customer. They need to manage, execute and maintain thousands of SLAs for different customers and different types of services, which requires new levels of flexibility and automation not available with current technology. The complexity of contractual logic in SLAs requires new forms of knowledge representation to automatically draw inferences and execute contractual agreements. A logic-based approach provides several advantages, including automated rule chaining allowing for compact knowledge representation as well as flexibility to adapt to rapidly changing business requirements. We suggest logical formalisms for the representation and enforcement of SLA rules and describe a proof-of-concept implementation. The article describes selected formalisms of the ContractLog KR and their adequacy for automated SLA management, and presents results of experiments and examples from common industry use cases to demonstrate the expressiveness of the language and the scalability of the approach.
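
The flavor of such contractual rules can be sketched imperatively; every name and threshold below is invented for illustration and is not ContractLog syntax.

```python
from dataclasses import dataclass

@dataclass
class SlaReport:
    service: str
    availability: float   # measured monthly availability, in percent
    response_ms: float    # average response time

# Hypothetical SLA rules: each maps a measurement to a penalty (EUR).
def penalties(report: SlaReport) -> list[tuple[str, float]]:
    found = []
    if report.availability < 99.5:
        # Graduated penalty: 1000 EUR per 0.1% below the target.
        shortfall = 99.5 - report.availability
        found.append(("availability", round(shortfall / 0.1) * 1000.0))
    if report.response_ms > 200:
        found.append(("response-time", 500.0))
    return found

print(penalties(SlaReport("web-shop", 99.2, 250)))
# [('availability', 3000.0), ('response-time', 500.0)]
```

A logic-based KR like ContractLog expresses the same conditions declaratively as rules, so that new contract clauses can be added without rewriting control flow.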

Patent
29 Sep 2008
TL;DR: A knowledge representation system as mentioned in this paper is a knowledge base in which knowledge is represented in a structured, machine-readable format that encodes meaning, e.g., a knowledge graph.
Abstract: Embodiments of the present invention relate to knowledge representation systems which include a knowledge base in which knowledge is represented in a structured, machine-readable format that encodes meaning.

Journal ArticleDOI
TL;DR: This work develops techniques to build maps that represent activity and navigability of the environment, and presents two methods, the first based on hidden Markov models and the second on support vector machines.
Abstract: Robotic mapping is the process of automatically constructing an environment representation using mobile robots. We address the problem of semantic mapping, which consists of using mobile robots to create maps that represent not only metric occupancy but also other properties of the environment. Specifically, we develop techniques to build maps that represent activity and navigability of the environment. Our approach to semantic mapping is to combine machine learning techniques with standard mapping algorithms. Supervised learning methods are used to automatically associate properties of space to the desired classification patterns. We present two methods, the first based on hidden Markov models and the second on support vector machines. Both approaches have been tested and experimentally validated in two problem domains: terrain mapping and activity-based mapping.
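
A minimal sketch of the SVM variant under assumed inputs: each map cell is described by a small feature vector (the features and numbers here are invented) and labeled as navigable or not.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical per-cell features: [mean height, roughness, slope].
X_train = np.array([
    [0.02, 0.01, 0.05],   # flat floor
    [0.03, 0.02, 0.04],
    [0.45, 0.30, 0.60],   # rubble
    [0.50, 0.25, 0.70],
])
y_train = np.array([1, 1, 0, 0])  # 1 = navigable, 0 = not navigable

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)

# Classify new cells; the predictions become a semantic map layer.
print(clf.predict(np.array([[0.04, 0.02, 0.06], [0.40, 0.28, 0.65]])))
# expected: [1 0]
```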

Journal ArticleDOI
TL;DR: An ontological knowledge framework is presented that covers the healthcare domains a hospital encompasses, from medical and administrative tasks to hospital assets, medical insurance, patient records, drugs, and regulations, and makes the vision of personalized healthcare possible.

Journal ArticleDOI
TL;DR: Fuzzy logic is proposed as an effective means of meeting the challenges of cross-layer optimization in cognitive radio networks, as far as both knowledge representation and control implementation are concerned.
Abstract: The search for the ultimate architecture for cross-layer optimization in cognitive radio networks is characterized by challenges such as modularity, interpretability, imprecision, scalability, and complexity constraints. In this article we propose fuzzy logic as an effective means of meeting these challenges, as far as both knowledge representation and control implementation are concerned.

Book ChapterDOI
01 Jan 2008
TL;DR: This article surveys the notation, applications, and reasoning methods used with CGs, as well as their mapping to and from other versions of logic.
Abstract: A conceptual graph (CG) is a graph representation for logic based on the semantic networks of artificial intelligence and the existential graphs of Charles Sanders Peirce. Several versions of CGs have been designed and implemented over the past thirty years. The simplest are the typeless core CGs, which correspond to Peirce's original existential graphs. More common are the extended CGs, which are a typed superset of the core. The research CGs have explored novel techniques for reasoning, knowledge representation, and natural language semantics. The semantics of the core and extended CGs is defined by a formal mapping to and from the ISO standard for Common Logic, but the research CGs are defined by a variety of formal and informal extensions. This article surveys the notation, applications, and reasoning methods used with CGs and their mapping to and from other versions of logic.
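
The mapping to logic can be seen on Sowa's classic example "a cat is on a mat", written here in linear CG notation:

```latex
% CG (linear form):  [Cat] -> (On) -> [Mat]
% First-order reading:
\exists x\,\exists y\,\bigl(\mathit{Cat}(x) \wedge \mathit{Mat}(y)
  \wedge \mathit{On}(x,y)\bigr)
```

Concept nodes become typed, existentially quantified variables and relation nodes become predicates, which is why core CGs line up with Peirce's existential graphs.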

Book ChapterDOI
01 Jan 2008
TL;DR: This chapter presents knowledge representation and reasoning for the purpose of high-level robotic control as central to cognitive robotics, which connects cognitive robotics not only to (traditional and less cognitive) robotics but also to other areas of artificial intelligence (AI) such as planning and agent-oriented programming.
Abstract: Publisher Summary Cognitive robotics is the study of the knowledge representation and reasoning problems faced by an autonomous robot (or an agent) in a dynamic and incompletely known world. This chapter presents the idea that knowledge representation and reasoning for the purpose of high-level robotic control is central to cognitive robotics. This connects cognitive robotics not only to (traditional and less cognitive) robotics but also to other areas of artificial intelligence (AI) such as planning and agent-oriented programming. To illustrate the knowledge representation and reasoning issues relevant to high-level robotic control, the chapter discusses Reiter's variant of the situation calculus. The situation calculus also deals with actions whose effects are deterministic, that is, where there is no doubt as to which fluents change and which do not. The chapter discusses some of the knowledge representation issues that arise in the context of cognitive robotics and describes problems in automated reasoning in the same setting.
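
In Reiter's variant of the situation calculus, the frame problem is handled by successor state axioms; a textbook-style instance (our example):

```latex
\mathit{Holding}(x, \mathit{do}(a, s)) \;\equiv\;
  a = \mathit{pickup}(x) \;\vee\;
  \bigl(\mathit{Holding}(x, s) \wedge a \neq \mathit{drop}(x)\bigr)
```

The fluent Holding is true after action a exactly if a made it true, or it was already true and a did not make it false; no separate frame axioms are needed.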

Journal ArticleDOI
TL;DR: In this paper, strong consistency and weak consistency of decision formal contexts are defined respectively, the judgment theorems of consistent sets are examined, and approaches to reduction are given.
Abstract: The theory of concept lattices is an efficient tool for knowledge representation and knowledge discovery, and is applied to many fields successfully. One focus of knowledge discovery is knowledge reduction. Based on the reduction theory of classical formal contexts, this paper proposes the definition of decision formal context and its reduction theory, which extends the reduction theory of concept lattices. In this paper, strong consistency and weak consistency of decision formal contexts are defined respectively. For strongly consistent decision formal contexts, the judgment theorems of consistent sets are examined, and approaches to reduction are given. For weakly consistent decision formal contexts, implication mapping is defined, and its reduction is studied. Finally, the relation between reducts of weakly consistent decision formal contexts and reducts of implication mappings is discussed.
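
A small Python sketch of the two derivation operators behind concept lattices, over a made-up formal context; a pair (A, B) with A′ = B and B′ = A is a formal concept.

```python
# A formal context: objects mapped to their attribute sets (made up).
context = {
    "duck":  {"flies", "swims", "bird"},
    "eagle": {"flies", "bird"},
    "carp":  {"swims"},
}

def common_attributes(objects):          # A  |->  A'
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else {
        a for attrs in context.values() for a in attrs}

def common_objects(attributes):          # B  |->  B'
    return {o for o, attrs in context.items() if attributes <= attrs}

A = {"duck", "eagle"}
B = common_attributes(A)                 # {'flies', 'bird'}
print(B, common_objects(B) == A)         # ({duck, eagle}, {flies, bird})
                                         # is a formal concept -> True
```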

Journal Article
TL;DR: This work introduces a novel feature space for representing control knowledge in terms of information computed via relaxed plan extraction, which has been a major source of success for non-learning planners and gives a new way of leveraging relaxed planning techniques in the context of learning.
Abstract: A number of today's state-of-the-art planners are based on forward state-space search. The impressive performance can be attributed to progress in computing domain independent heuristics that perform well across many domains. However, it is easy to find domains where such heuristics provide poor guidance, leading to planning failure. Motivated by such failures, the focus of this paper is to investigate mechanisms for learning domain-specific knowledge to better control forward search in a given domain. While there has been a large body of work on inductive learning of control knowledge for AI planning, there is a void of work aimed at forward-state-space search. One reason for this may be that it is challenging to specify a knowledge representation for compactly representing important concepts across a wide range of domains. One of the main contributions of this work is to introduce a novel feature space for representing such control knowledge. The key idea is to define features in terms of information computed via relaxed plan extraction, which has been a major source of success for non-learning planners. This gives a new way of leveraging relaxed planning techniques in the context of learning. Using this feature space, we describe three forms of control knowledge---reactive policies (decision list rules and measures of progress) and linear heuristics---and show how to learn them and incorporate them into forward state-space search. Our empirical results show that our approaches are able to surpass state-of-the-art non-learning planners across a wide range of planning competition domains.
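
The linear-heuristic form of control knowledge can be pictured as a weighted sum over relaxed-plan-derived features; the features and weights below are invented illustrations of the idea, not the paper's actual feature space.

```python
# A state's relaxed-plan-derived features (hypothetical):
#   relaxed plan length, number of still-open goals.
def features(state):
    return [state["rp_length"], state["open_goals"]]

def linear_heuristic(weights, state):
    """Learned heuristic h(s) = w . phi(s), used to rank successors."""
    return sum(w * f for w, f in zip(weights, features(state)))

weights = [1.0, 2.5]   # would be fit from solved training problems
s1 = {"rp_length": 7, "open_goals": 2}
s2 = {"rp_length": 9, "open_goals": 1}
best = min((s1, s2), key=lambda s: linear_heuristic(weights, s))
print(best is s2)  # h(s1) = 12.0, h(s2) = 11.5 -> expand s2 first
```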

Journal ArticleDOI
TL;DR: A declarative theory of forgetting for disjunctive logic programs under answer set semantics, fully based on semantic grounds, is established, and it is shown how the semantics of inheritance logic programs and update logic programs from the literature can be characterized through forgetting.

Proceedings ArticleDOI
15 Dec 2008
TL;DR: This paper uses Wikipedia to create a concept-based representation of a text document, with each concept associated with a Wikipedia article, and exploits the semantic relatedness between concepts to find pairwise instance-level constraints for supervised clustering, guiding clustering in the direction indicated by the constraints.
Abstract: Wikipedia has been applied as a background knowledge base to various text mining problems, but very few attempts have been made to utilize it for document clustering. In this paper we propose to exploit the semantic knowledge in Wikipedia for clustering, enabling the automatic grouping of documents with similar themes. Although clustering is intrinsically unsupervised, recent research has shown that incorporating supervision improves clustering performance, even when limited supervision is provided. The approach presented in this paper applies supervision using active learning. We first utilize Wikipedia to create a concept-based representation of a text document, with each concept associated with a Wikipedia article. We then exploit the semantic relatedness between Wikipedia concepts to find pairwise instance-level constraints for supervised clustering, guiding clustering towards the direction indicated by the constraints. We test our approach on three standard text document datasets. Empirical results show that our basic document representation strategy yields performance comparable to previous attempts, and that adding constraints improves clustering performance further by up to 20%.
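
The constraint-generation step might look roughly as follows; `concept_relatedness` stands in for a Wikipedia-based relatedness measure, and the thresholds and field names are invented.

```python
from itertools import combinations

def doc_relatedness(doc_a, doc_b, concept_relatedness):
    """Average pairwise relatedness of the Wikipedia concepts annotating
    two documents (one of several possible aggregations)."""
    scores = [concept_relatedness(ca, cb)
              for ca in doc_a["concepts"] for cb in doc_b["concepts"]]
    return sum(scores) / len(scores)

def pairwise_constraints(docs, concept_relatedness,
                         must_link=0.8, cannot_link=0.2):
    """Turn strong/weak semantic relatedness into clustering constraints."""
    ml, cl = [], []
    for a, b in combinations(docs, 2):
        r = doc_relatedness(a, b, concept_relatedness)
        if r >= must_link:
            ml.append((a["id"], b["id"]))   # cluster together
        elif r <= cannot_link:
            cl.append((a["id"], b["id"]))   # keep apart
    return ml, cl
```

The resulting must-link and cannot-link pairs are then fed to a constrained clustering algorithm, which is where the "supervision" in the paper's semi-supervised setup comes from.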

16 Jun 2008
TL;DR: This paper integrates the experience gained through the use of TERMINAE in industrial and academic projects, the progress of natural language processing, and the evolution of ontology engineering, and presents the kind of conceptual model built with this method and its knowledge representation.
Abstract: Designed about ten years ago, the TERMINAE method and workbench for ontology engineering from texts have continued to evolve since then. Our investigations integrate the experience gained through its use in industrial and academic projects, the progress of natural language processing, and the evolution of ontology engineering. Several new methodological guidelines, such as the reuse of core ontologies, have been added to the method and implemented in the workbench. It has also been modified in order to be compliant with recent standards such as the OWL knowledge representation. The paper recalls the terminology engineering principles underlying TERMINAE and comments on its originality. It then presents the kind of conceptual model that is built with this method, and its knowledge representation. The method and the support provided by the workbench are detailed and illustrated with a case study in law. With regard to the state of the art, TERMINAE is one of the most supervised methods in the trend of ontology learning. This option raises epistemological issues about how language and knowledge can be articulated and the distance that separates formal ontologies from learned conceptual models.

Journal ArticleDOI
TL;DR: This research develops a fuzzy case-based reasoning (FCBR) model and explores its potential use in supporting a forecaster in predicting the future sales of a printed circuit board factory.
Abstract: Reliable prediction of sales can improve the quality of business strategy. Case-based reasoning (CBR), one of the well-known artificial intelligence (AI) techniques, has already proven its effectiveness in numerous studies. However, due to the uncertainties in knowledge representation, attribute description, and similarity measures in CBR, it is very difficult to find similar cases in case bases. In order to deal with this problem, fuzzy theories have been incorporated into CBR, allowing for more flexible and accurate models. This research develops a fuzzy case-based reasoning (FCBR) model and explores its potential use in supporting a forecaster in predicting the future sales of a printed circuit board (PCB) factory. Numerical data on various affecting factors and actual demand over the past five years of the PCB factory are collected and input into the FCBR model for monthly sales forecasting. Experimental results show the effectiveness of the FCBR model when compared with other approaches.
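
The retrieval-and-forecast step can be sketched with triangular membership functions; the attributes, spreads, and aggregation choices below are all invented for illustration.

```python
def triangular(x, left, peak, right):
    """Triangular fuzzy membership in [0, 1]."""
    if x <= left or x >= right:
        return 0.0
    return ((x - left) / (peak - left) if x <= peak
            else (right - x) / (right - peak))

def fuzzy_similarity(query, case):
    """How strongly the case value belongs to a fuzzy set centered on
    the query value (spread assumed +-20% of the query)."""
    spread = 0.2 * abs(query) or 1.0
    return triangular(case, query - spread, query, query + spread)

def forecast(query_attrs, case_base):
    """Similarity-weighted average of the sales of retrieved cases."""
    weights = []
    for case in case_base:
        sims = [fuzzy_similarity(q, c)
                for q, c in zip(query_attrs, case["attrs"])]
        weights.append(min(sims))  # conjunctive (min) aggregation
    total = sum(weights)
    return sum(w * c["sales"] for w, c in zip(weights, case_base)) / total

case_base = [{"attrs": [100, 5.0], "sales": 900},
             {"attrs": [110, 5.5], "sales": 1000}]
print(round(forecast([105, 5.2], case_base)))  # ~948
```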

Book ChapterDOI
30 Oct 2008
TL;DR: The Rule Interchange Format activity within the W3C aims to develop a standard for exchanging rules among disparate systems, especially on the Semantic Web.
Abstract: The Rule Interchange Format (RIF) activity within the World Wide Web Consortium (W3C) aims to develop a standard for exchanging rules among disparate systems, especially on the Semantic Web. The need for rule-based information processing on the Web has been felt ever since RDF was introduced in the late 1990s. As ontology development picked up pace this decade and as the limitations of OWL became apparent, rules were firmly put back on the agenda. RIF is therefore a major opportunity for the introduction of rule-based technologies into the mainstream of knowledge representation and information processing on the Web.