
Showing papers on "Knowledge representation and reasoning" published in 2001


Journal ArticleDOI
TL;DR: In this paper, the authors propose that conceptual and procedural knowledge develop in an iterative fashion and that improved problem representation is 1 mechanism underlying the relations between them, and demonstrate that children's initial conceptual knowledge predicted gains in procedural knowledge.
Abstract: The authors propose that conceptual and procedural knowledge develop in an iterative fashion and that improved problem representation is 1 mechanism underlying the relations between them. Two experiments were conducted with 5th- and 6th-grade students learning about decimal fractions. In Experiment 1, children's initial conceptual knowledge predicted gains in procedural knowledge, and gains in procedural knowledge predicted improvements in conceptual knowledge. Correct problem representations mediated the relation between initial conceptual knowledge and improved procedural knowledge. In Experiment 2, amount of support for correct problem representation was experimentally manipulated, and the manipulations led to gains in procedural knowledge. Thus, conceptual and procedural knowledge develop iteratively, and improved problem representation is 1 mechanism in this process.

1,012 citations


Book ChapterDOI
03 Dec 2001
TL;DR: The aim of this paper is to summarize and analyze some results obtained in 2000-2001 about decidable and undecidable fragments of various first-order temporal logics, give some applications in the field of knowledge representation and reasoning, and attract the attention of the 'temporal community' to a number of interesting open problems.
Abstract: The aim of this paper is to summarize and analyze some results obtained in 2000-2001 about decidable and undecidable fragments of various first-order temporal logics, give some applications in the field of knowledge representation and reasoning, and attract the attention of the 'temporal community' to a number of interesting open problems.

960 citations



Journal ArticleDOI
TL;DR: The paper is an overview of the major qualitative spatial representation and reasoning techniques, including ontological aspects, topology, distance, orientation and shape, as well as qualitative spatial reasoning, including reasoning about spatial change.
Abstract: The paper is an overview of the major qualitative spatial representation and reasoning techniques. We survey the main aspects of the representation of qualitative knowledge, including ontological aspects, topology, distance, orientation and shape. We also consider qualitative spatial reasoning, including reasoning about spatial change. Finally, there is a discussion of theoretical results and a glimpse of future work. The paper is a revised and condensed version of [33,34].

745 citations


Journal ArticleDOI
TL;DR: Within this context, category-specific deficits emerge as a result of differences in the structure and content of concepts rather than from explicit divisions of conceptual knowledge in separate stores.

460 citations


01 Jan 2001
TL;DR: This paper discusses ontologies and requirements in their current instantiations on the web today, and describes some desirable properties of ontologies.
Abstract: Ontologies have moved beyond the domains of library science, philosophy, and knowledge representation. They are now the concerns of marketing departments, CEOs, and mainstream business. Research analyst companies such as Forrester Research report on the critical roles of ontologies in support of browsing and search for e-commerce and in support of interoperability for facilitation of knowledge management and configuration. One now sees ontologies used as central controlled vocabularies that are integrated into catalogues, databases, web publications, knowledge management applications, etc. Large ontologies are essential components in many online applications including search (such as Yahoo and Lycos), e-commerce (such as Amazon and eBay), configuration (such as Dell and PC-Order), etc. One also sees ontologies that have long life spans, sometimes in multiple projects (such as UMLS, SIC codes, etc.). Such diverse usage generates many implications for ontology environments. In this paper, we will discuss ontologies and requirements in their current instantiations on the web today. We will describe some desirable properties of ontologies. We will also discuss how both simple and complex ontologies are being and may be used to support varied applications. We will conclude with a discussion of emerging trends in ontologies and their environments and briefly mention our evolving ontology evolution environment.

439 citations


Posted Content
TL;DR: A number of novel complexity results and practical algorithms for expressive DLs that provide different forms of counting quantifiers are established and it is shown that, in many cases, adding local counting in the form of qualifying number restrictions to DLs does not increase the complexity of the inference problems, even if binary coding of numbers in the input is assumed.
Abstract: Description Logics (DLs) are used in knowledge-based systems to represent and reason about terminological knowledge of the application domain in a semantically well-defined manner. In this thesis, we establish a number of novel complexity results and give practical algorithms for expressive DLs that provide different forms of counting quantifiers. We show that, in many cases, adding local counting in the form of qualifying number restrictions to DLs does not increase the complexity of the inference problems, even if binary coding of numbers in the input is assumed. On the other hand, we show that adding different forms of global counting restrictions to a logic may increase the complexity of the inference problems dramatically. We provide exact complexity results and a practical, tableau-based algorithm for the DL SHIQ, which forms the basis of the highly optimized DL system iFaCT. Finally, we describe a tableau algorithm for the clique guarded fragment (CGF), which we hope will serve as the basis for an efficient implementation of a CGF reasoner.

398 citations


Proceedings Article
04 Aug 2001
TL;DR: SHOQ(D) is an expressive description logic equipped with named individuals and concrete datatypes which has almost exactly the same expressive power as the latest web ontology languages (e.g., OIL and DAML).
Abstract: Ontologies are set to play a key role in the "Semantic Web" by providing a source of shared and precisely defined terms that can be used in descriptions of web resources. Reasoning over such descriptions will be essential if web resources are to be more accessible to automated processes. SHOQ(D) is an expressive description logic equipped with named individuals and concrete datatypes which has almost exactly the same expressive power as the latest web ontology languages (e.g., OIL and DAML). We present sound and complete reasoning services for this logic.

371 citations


Journal ArticleDOI
TL;DR: A novel, convenient fusion of natural language processing and fuzzy logic techniques for analyzing the affect content in free text and shows a good correspondence between affect sets and human judgments of affect content.
Abstract: We propose a novel, convenient fusion of natural language processing and fuzzy logic techniques for analyzing the affect content in free text. Our main goals are fast analysis and visualization of affect content for decision making. The main linguistic resource for fuzzy semantic typing is the fuzzy-affect lexicon, from which other important resources, the fuzzy thesaurus and affect category groups, are generated. Free text is tagged with affect categories from the lexicon and the affect categories' centralities and intensities are combined using techniques from fuzzy logic to produce affect sets: fuzzy sets representing the affect quality of a document. We show different aspects of affect analysis using news content and movie reviews. Our experiments show a good correspondence between affect sets and human judgments of affect content. We ascribe this to the representation of ambiguity in our fuzzy affect lexicon and the ability of fuzzy logic to deal successfully with the ambiguity of words in a natural language.
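The combination step this abstract describes can be sketched in a few lines; the lexicon entries, the centrality-times-intensity membership, and the use of max as the fuzzy union are illustrative assumptions, not the paper's exact formulation.

```python
def affect_set(tagged_words, lexicon):
    # Build a fuzzy affect set for a document: for each affect category,
    # take the fuzzy union (max) of the membership contributed by each
    # tagged word, where membership is centrality scaled by intensity.
    # The lexicon values below are invented for illustration.
    doc = {}
    for word in tagged_words:
        for category, (centrality, intensity) in lexicon.get(word, {}).items():
            membership = centrality * intensity
            doc[category] = max(doc.get(category, 0.0), membership)
    return doc

lexicon = {
    "furious": {"anger": (0.9, 1.0)},
    "annoyed": {"anger": (0.8, 0.4)},
    "happy":   {"joy":   (0.9, 0.7)},
}
print(affect_set(["furious", "annoyed", "happy"], lexicon))
```

The max operator is only one of several fuzzy aggregation choices; a bounded sum would weight repeated weak mentions more heavily.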

307 citations


Book ChapterDOI
05 Sep 2001
TL;DR: Preliminary results demonstrated that an approach which combines different learning methods in inductive logic programming (ILP) to allow a learner to produce more expressive hypotheses than those of each individual learner is promising.
Abstract: In this paper, we explored a learning approach which combines different learning methods in inductive logic programming (ILP) to allow a learner to produce more expressive hypotheses than those of each individual learner. Such a learning approach may be useful when the performance of the task depends on solving a large number of classification problems, each of which has its own characteristics that may or may not fit a particular learning method. The task of semantic parser acquisition in two different domains was attempted, and preliminary results demonstrated that such an approach is promising.

281 citations


Journal ArticleDOI
TL;DR: This paper presents a brief summary of previous work done on evaluating ontologies and the criteria (consistency, completeness, conciseness, expandability, and sensitiveness) used to evaluate and to assess ontologies.
Abstract: The evaluation of ontologies is an emerging field. At present, there is an absence of a deep core of preliminary ideas and guidelines for evaluating ontologies. This paper presents a brief summary of previous work done on evaluating ontologies and the criteria (consistency, completeness, conciseness, expandability, and sensitiveness) used to evaluate and to assess ontologies. It also addresses the possible types of errors made when domain knowledge is structured in taxonomies in an ontology and in knowledge bases: circularity errors, exhaustive and nonexhaustive class partition errors, redundancy errors, grammatical errors, semantic errors, and incompleteness errors. It also describes the process followed to evaluate the standard-units ontology already published at the Ontology Server.
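Of the error types listed above, circularity errors are the easiest to check mechanically. The sketch below flags any class that is transitively a subclass of itself; the taxonomy encoding (class name mapped to a list of superclasses) is a hypothetical one chosen for illustration.

```python
def find_circularity(taxonomy):
    # Detect circularity errors: classes that are (transitively)
    # subclasses of themselves. taxonomy maps class -> superclasses.
    def reaches_itself(start):
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            for parent in taxonomy.get(node, []):
                if parent == start:
                    return True
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return False
    return sorted(c for c in taxonomy if reaches_itself(c))

good = {"Dog": ["Mammal"], "Mammal": ["Animal"], "Animal": []}
bad = {"A": ["B"], "B": ["A"]}
print(find_circularity(good))  # []
print(find_circularity(bad))   # ['A', 'B']
```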

Book
01 Jan 2001
TL;DR: Presents a reference model architecture covering behavior generation, world modeling, value judgment, knowledge representation, and sensory processing, with applications to engineering unmanned ground vehicles and a look at future possibilities.
Abstract: Preface. Emergence of a Theory. Knowledge. Perception. Goal Seeking and Planning. A Reference Model Architecture. Behavior Generation. World Modeling, Value Judgment, and Knowledge Representation. Sensory Processing. Engineering Unmanned Ground Vehicles. Future Possibilities. References. Index.

Journal ArticleDOI
TL;DR: This work has developed methods for mapping web sources into a uniform representation that makes it simple and efficient to integrate multiple sources and makes it easy to maintain these agents and incorporate new sources as they become available.
Abstract: The Web is based on a browsing paradigm that makes it difficult to retrieve and integrate data from multiple sites. Today, the only way to do this is to build specialized applications, which are time-consuming to develop and difficult to maintain. We have addressed this problem by creating the technology and tools for rapidly constructing information agents that extract, query, and integrate data from web sources. Our approach is based on a uniform representation that makes it simple and efficient to integrate multiple sources. Instead of building specialized algorithms for handling web sources, we have developed methods for mapping web sources into this uniform representation. This approach builds on work from knowledge representation, databases, machine learning and automated planning. The resulting system, called Ariadne, makes it fast and easy to build new information agents that access existing web sources. Ariadne also makes it easy to maintain these agents and incorporate new sources as they become available.

Book ChapterDOI
19 Sep 2001
TL;DR: An ontology of place is presented that combines limited coordinate data with qualitative spatial relationships between places and has been implemented with a semantic modelling system linking non-spatial conceptual hierarchies with the place ontology.
Abstract: Geographical context is required by many information retrieval tasks in which the target of the search may be documents, images or records which are referenced to geographical space only by means of place names. Often there may be an imprecise match between the query name and the names associated with candidate sources of information. There is a need, therefore, for geographical information retrieval facilities that can rank the relevance of candidate information with respect to geographical closeness as well as semantic closeness with respect to the topic of interest. Here we present an ontology of place that combines limited coordinate data with qualitative spatial relationships between places. This parsimonious model of place is intended to support information retrieval tasks that may be global in scope. The ontology has been implemented with a semantic modelling system linking non-spatial conceptual hierarchies with the place ontology. A hierarchical distance measure is combined with Euclidean distance between place centroids to create a hybrid spatial distance measure. This can be combined with thematic distance, based on classification semantics, to create an integrated semantic closeness measure that can be used for a relevance ranking of retrieved objects.
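The hybrid spatial distance described above can be sketched as follows. The specific hierarchical-distance definition (steps to the deepest common ancestor of two part-of paths), the linear weighting, and the place data are assumptions for illustration, not the authors' exact formulation.

```python
import math

def euclidean(p, q):
    # Straight-line distance between two place centroids (x, y).
    return math.hypot(p[0] - q[0], p[1] - q[1])

def hierarchical_distance(path_a, path_b):
    # Steps from each place up to the deepest common ancestor
    # of their part-of paths in the place hierarchy.
    common = 0
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        common += 1
    return (len(path_a) - common) + (len(path_b) - common)

def hybrid_distance(a, b, alpha=0.5):
    # Weighted combination of hierarchical and Euclidean distance;
    # alpha is a hypothetical mixing parameter.
    h = hierarchical_distance(a["path"], b["path"])
    e = euclidean(a["centroid"], b["centroid"])
    return alpha * h + (1 - alpha) * e

cardiff = {"path": ["UK", "Wales", "Cardiff"], "centroid": (-3.18, 51.48)}
swansea = {"path": ["UK", "Wales", "Swansea"], "centroid": (-3.94, 51.62)}
print(hybrid_distance(cardiff, swansea))
```

In practice the two components would need normalizing onto comparable scales before mixing; the sketch omits that step.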

Journal ArticleDOI
01 Sep 2001
TL;DR: This paper intends to remove the gap between theory and practice and attempts to learn how to apply soft computing practically to industrial systems from examples/analogy, reviewing many application papers.
Abstract: Fuzzy logic, neural networks, and evolutionary computation are the core methodologies of soft computing (SC). SC is causing a paradigm shift in engineering and science fields, since it can solve problems that traditional analytic methods have been unable to solve. In addition, SC yields rich knowledge representation, flexible knowledge acquisition, and flexible knowledge processing, which enable intelligent systems to be constructed at low cost. This paper reviews applications of SC in several industrial fields to show the various innovations, made possible by the use of SC, through TR, HMIQ, and low cost. Our paper aims to close the gap between theory and practice and, by reviewing many application papers, attempts to show by example and analogy how soft computing can be applied practically to industrial systems.

Book ChapterDOI
TL;DR: The main point is that in order to reason or compute about a complex system, some information must be lost; that is, the observation of executions must be either partial or at a high level of abstraction.
Abstract: In order to contribute to the solution of the software reliability problem, tools have been designed to analyze statically the run-time behavior of programs. Because the correctness problem is undecidable, some form of approximation is needed. The purpose of abstract interpretation is to formalize this idea of approximation. We illustrate informally the application of abstraction to the semantics of programming languages as well as to static program analysis. The main point is that in order to reason or compute about a complex system, some information must be lost; that is, the observation of executions must be either partial or at a high level of abstraction. A few challenges for static program analysis by abstract interpretation are finally briefly discussed. The electronic version of this paper includes a comparison with other formal methods: typing, model-checking and deductive methods.

Book ChapterDOI
TL;DR: A modest extension to the UML infrastructure for one of the most problematic differences is proposed, the DAML concept of property which is a first-class modeling element in DAML, while UML associations are not.
Abstract: There is rapidly growing momentum for web-enabled agents that reason about and dynamically integrate the appropriate knowledge and services at run-time. The World Wide Web Consortium and the DARPA Agent Markup Language (DAML) program have been actively involved in furthering this trend. The dynamic integration of knowledge and services depends on the existence of explicit declarative semantic models (ontologies). DAML is an emerging language for specifying machine-readable ontologies on the web. DAML was designed to support tractable reasoning. We have been developing tools for developing ontologies in the Unified Modeling Language (UML) and generating DAML. This allows the many mature UML tools, models and expertise to be applied to knowledge representation systems, not only for visualizing complex ontologies but also for managing the ontology development process. Furthermore, UML has many features, such as profiles, global modularity and extension mechanisms, that have yet to be considered in DAML. Our paper identifies the similarities and differences (with examples) between UML and DAML. To reconcile these differences, we propose a modest extension to the UML infrastructure for one of the most problematic differences: the DAML concept of property, which is a first-class modeling element in DAML, while UML associations are not. For example, a DAML property can have more than one domain class. Our proposal is backward-compatible with existing UML models while enhancing its viability for ontology modeling. While we have focused on DAML in our research and development activities, the same issues apply to many of the knowledge representation languages. This is especially the case for semantic network and concept graph approaches to knowledge representation.

Proceedings ArticleDOI
01 Apr 2001
TL;DR: It is shown how RDFS can be extended to include a more expressive knowledge representation language, Ontology Inference Layer (OIL), which would enrich it with the required additional expressivity and the semantics of that language.
Abstract: Recently, widespread interest has emerged in using ontologies on the Web. Resource Description Framework Schema (RDFS) is a basic tool that enables users to define vocabulary, structure and constraints for expressing metadata about Web resources. However, it includes no provisions for formal semantics, and its expressivity is not sufficient for full-fledged ontological modeling and reasoning. In this paper, we show how RDFS can be extended with a more expressive knowledge representation language, which enriches it with the required additional expressivity and with the semantics of that language. We do this by describing the ontology language Ontology Inference Layer (OIL) as an extension of RDFS. An important advantage of our approach is that it ensures maximal sharing of metadata on the Web: even partial interpretation of an OIL ontology by less semantically aware processors will yield a correct partial interpretation of the metadata.
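The lightweight inference RDFS already provides, before an extension such as OIL adds richer constructs, can be sketched as a transitive closure over rdfs:subClassOf triples. This is a minimal illustration with made-up class names, not the paper's machinery.

```python
def rdfs_subclass_closure(triples):
    # Compute the transitive closure of rdfs:subClassOf: if A is a
    # subclass of B and B of C, infer that A is a subclass of C.
    sub = {(s, o) for (s, p, o) in triples if p == "rdfs:subClassOf"}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(sub):
            for (c, d) in list(sub):
                if b == c and (a, d) not in sub:
                    sub.add((a, d))
                    changed = True
    return sub

triples = [
    ("Dog", "rdfs:subClassOf", "Mammal"),
    ("Mammal", "rdfs:subClassOf", "Animal"),
]
print(rdfs_subclass_closure(triples))
```

An OIL-style extension adds constructs this closure cannot express, such as class disjointness and boolean concept combinations, which require a genuine reasoner.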

Journal ArticleDOI
TL;DR: The work presented here is the only one to the authors' knowledge that describes language divergence phenomena in the framework of computational linguistics through a South Asian language.
Abstract: Interlingua and transfer-based approaches to machine translation have long been in use in competing and complementary ways. The former proves economical in situations where translation among multiple languages is involved, and can be used as a knowledge-representation scheme. But given a particular interlingua, its adoption depends on its ability (a) to capture the knowledge in texts precisely and accurately and (b) to handle cross-language divergences. This paper studies the language divergence between English and Hindi and its implication to machine translation between these languages using the Universal Networking Language (UNL). UNL has been introduced by the United Nations University, Tokyo, to facilitate the transfer and exchange of information over the internet. The representation works at the level of single sentences and defines a semantic net-like structure in which nodes are word concepts and arcs are semantic relations between these concepts. The language divergences between Hindi, an Indo-European language, and English can be considered as representing the divergences between the SOV and SVO classes of languages. The work presented here is the only one to our knowledge that describes language divergence phenomena in the framework of computational linguistics through a South Asian language.

Book ChapterDOI
09 Sep 2001
TL;DR: Continuous Bayesian logic programs are introduced, which extend the recently introduced Bayesian logic programs to deal with continuous random variables and nicely separate the qualitative component from the quantitative one.
Abstract: First-order probabilistic logics combine a first-order logic with a probabilistic knowledge representation. In this context, we introduce continuous Bayesian logic programs, which extend the recently introduced Bayesian logic programs to deal with continuous random variables. Bayesian logic programs tightly integrate definite logic programs with Bayesian networks. The resulting framework nicely separates the qualitative (i.e. logical) component from the quantitative (i.e. probabilistic) one. We also show how the quantitative component can be learned using a gradient-based maximum likelihood method.
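As a toy illustration of the gradient-based maximum-likelihood step mentioned above (not the authors' algorithm), one can fit the mean of a single Gaussian node by gradient ascent on the log-likelihood; the learning rate and step count are arbitrary choices.

```python
def fit_gaussian_mean(data, lr=0.1, steps=200, sigma=1.0):
    # Gradient ascent on the log-likelihood of N(mu, sigma^2):
    # d/dmu log L = sum(x - mu) / sigma^2
    mu = 0.0
    n = len(data)
    for _ in range(steps):
        grad = sum(x - mu for x in data) / (sigma ** 2)
        mu += lr * grad / n  # step normalized by sample count
    return mu

samples = [1.8, 2.1, 2.0, 2.2, 1.9]
print(fit_gaussian_mean(samples))  # converges toward the sample mean, 2.0
```

In a full continuous Bayesian logic program, the same gradient step would run over the parameters of every conditional density shared across ground instances of a clause.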

Proceedings ArticleDOI
22 Oct 2001
TL;DR: A case study in which an ontology is attempted for a subset of art-object descriptions, namely antique furniture, using AAT as well as metadata standards as input, and the representation requirements and representational problems for the sample ontology are discussed.
Abstract: Thesauri such as the Art and Architecture Thesaurus (AAT) provide structured vocabularies for describing art objects. However, if we want to create a knowledge-rich description of an (image of an) art object, such as required by the "semantic web", thesauri turn out to provide only part of the knowledge needed. In this paper we look at problems related to capturing background knowledge for art resources. We describe a case study in which we attempt to construct an ontology for a subset of art-object descriptions, namely antique furniture, using AAT as well as metadata standards as input. We discuss the representation requirements for such an ontology as well as representational problems for our sample ontology with respect to the emerging web standards for knowledge representation (RDF, RDFS, OIL).

Book
01 Sep 2001
TL;DR: The areas covered are reasoning methods in first-order logic; equality and other built-in theories; methods of automated reasoning using induction; higher- order logic, which is used in a number of automatic and interactive proof-development systems; automated reasoning in nonclassical logics; decidable classes and model building; and implementation-related questions.
Abstract: From the Publisher: Automated reasoning has matured into one of the most advanced areas of computer science. It is used in many areas of the field, including software and hardware verification, logic and functional programming, formal methods, knowledge representation, deductive databases, and artificial intelligence. This handbook presents an overview of the fundamental ideas, techniques, and methods in automated reasoning and its applications. The material covers both theory and implementation. In addition to traditional topics, the book covers material that bridges the gap between automated reasoning and related areas. Examples include model checking, nonmonotonic reasoning, numerical constraints, description logics, and implementation of declarative programming languages. The book consists of eight parts. After an overview of the early history of automated deduction, the areas covered are reasoning methods in first-order logic; equality and other built-in theories; methods of automated reasoning using induction; higher-order logic, which is used in a number of automatic and interactive proof-development systems; automated reasoning in nonclassical logics; decidable classes and model building; and implementation-related questions.

Book ChapterDOI
13 Jul 2001
TL;DR: A content-based document representation is proposed as a starting point to build a model of the user's interests, which is language independent, allowing navigation in multilingual sites.
Abstract: SiteIF is a personal agent for a bilingual news web site that learns the user's interests from the requested pages. In this paper, we propose to use a content-based document representation as a starting point to build a model of the user's interests. Documents passed over are processed, and relevant senses (disambiguated over WordNet) are extracted and then combined to form a semantic network. A filtering procedure dynamically predicts new documents on the basis of the semantic network. There are two main advantages of a content-based approach: first, the model predictions, being based on senses rather than words, are more accurate; second, the model is language independent, allowing navigation in multilingual sites. We report the results of a comparative experiment that has been carried out to give a quantitative estimation of these improvements.

Journal ArticleDOI
TL;DR: Shows how the ontology languages of the Semantic Web can lead directly to more powerful agent-based approaches to using services offered on the Web; that is, to the realization of that speaker's "science fiction" vision.
Abstract: At a recent colloquium, a speaker referred to a "science fiction vision" that consisted of sets of agents running around the Web performing complex actions for their users. He argued that we were far from the day when this would be real and that the infrastructure was not in place to make this happen. While his latter assessment is accurate, the former is far too pessimistic. Furthermore, a crucial component of this infrastructure, a "standardized" Web ontology language, is starting to emerge. In this article, the author provides a few pointers to this emerging area and shows how the ontology languages of the Semantic Web can lead directly to more powerful agent-based approaches to using services offered on the Web; that is, to the realization of that speaker's "science fiction" vision.

Proceedings ArticleDOI
01 Oct 2001
TL;DR: Real-time FRP is presented, a statically-typed language where the time and space cost of each execution step for a given program is statically bounded and how the typed version of the language is terminating and resource-bounded is shown.
Abstract: Functional reactive programming (FRP) is a declarative programming paradigm where the basic notions are continuous, time-varying behaviors and discrete, event-based reactivity. FRP has been used successfully in many reactive programming domains such as animation, robotics, and graphical user interfaces. The success of FRP in these domains encourages us to consider its use in real-time applications, where it is crucial that the cost of running a program be bounded and known before run-time. But previous work on the semantics and implementation of FRP was not explicitly concerned with the issue of cost. In fact, the resource consumption of FRP programs in the current implementation is often hard to predict. As a first step towards addressing these concerns, this paper presents real-time FRP (RT-FRP), a statically-typed language where the time and space cost of each execution step for a given program is statically bounded. To take advantage of existing work on languages with bounded resources, we split RT-FRP into two parts: a reactive part that captures the essential ingredients of FRP programs, and a base language part that can be instantiated to any generic programming language that has been shown to be terminating and resource-bounded. This allows us to focus on the issues specific to RT-FRP, namely, two forms of recursion. After presenting the operational explanation of what can go wrong due to the presence of recursion, we show how the typed version of the language is terminating and resource-bounded. Most of our FRP programs are expressible directly in RT-FRP. The rest are expressible via a simple mechanism that integrates RT-FRP with the base language.

Book ChapterDOI
TL;DR: Based on results about knowledge representation within the theoretical framework of Formal Concept Analysis, relatively small bases for association rules from which all rules can be deduced are presented.
Abstract: Association rules are used to investigate large databases. The analyst is usually confronted with large lists of such rules and has to find the most relevant ones for his purpose. Based on results about knowledge representation within the theoretical framework of Formal Concept Analysis, we present relatively small bases for association rules from which all rules can be deduced. We also provide algorithms for their calculation.
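To make the objects of this abstract concrete, here is a brute-force sketch of rule mining with support and confidence over a toy transaction database. The paper's contribution is computing small concept-lattice-based bases from which all such rules follow; this sketch only shows what the rules themselves are, and the thresholds and data are invented.

```python
from itertools import combinations

def association_rules(transactions, min_support=0.5, min_confidence=0.8):
    # Enumerate rules LHS -> RHS over sufficiently frequent itemsets,
    # reporting support and confidence for each accepted rule.
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    rules = []
    for size in range(2, len(items) + 1):
        for itemset in combinations(items, size):
            s = support(set(itemset))
            if s < min_support:
                continue
            for k in range(1, size):
                for lhs in combinations(itemset, k):
                    conf = s / support(set(lhs))
                    if conf >= min_confidence:
                        rhs = tuple(i for i in itemset if i not in lhs)
                        rules.append((lhs, rhs, s, conf))
    return rules

db = [{"bread", "butter"}, {"bread", "butter", "milk"}, {"bread"}, {"milk"}]
for lhs, rhs, s, c in association_rules(db):
    print(lhs, "->", rhs, f"support={s:.2f} confidence={c:.2f}")
```

The exponential enumeration here is exactly what a small base avoids: the analyst inspects only the base and deduces the rest.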

Patent
23 Jul 2001
TL;DR: A knowledge tool built around a binary dataset that represents relationship patterns between objects; its use rests on a data-clustering algorithm driven by binary similarity indices derived from the binary matrix.
Abstract: A knowledge tool, which includes a binary dataset for representing relationship patterns between objects and methods of its use. The use of the binary representation is based on an algorithm of data clustering according to binary similarity indices, which are derived from the binary matrix. Applications which are based on the binary representation and its compression capability include data mining, text mining, search engines, pattern recognition, enhancing data exchange rate between computerized devices, database implementation on hardware, saving storage space and adaptive network addressing.
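The patent does not specify which binary similarity index it uses; the Jaccard index is one common choice over binary vectors and serves as an illustration of the idea.

```python
def jaccard(a, b):
    # Jaccard index for two binary vectors:
    # |positions set in both| / |positions set in either|.
    both = sum(1 for x, y in zip(a, b) if x and y)
    either = sum(1 for x, y in zip(a, b) if x or y)
    return both / either if either else 1.0

row1 = [1, 0, 1, 1, 0]
row2 = [1, 1, 1, 0, 0]
print(jaccard(row1, row2))  # 2 shared positions out of 4 set -> 0.5
```

A clustering pass would then group matrix rows whose pairwise index exceeds some threshold.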

Journal ArticleDOI
01 Dec 2001
TL;DR: An overview of the semantic integrity support in the most recent SQL-standard SQL:1999 is given, and it is shown to what extent the different concepts and language constructs proposed in this standard can be found in major commercial (object-)relational database management systems.
Abstract: The correctness of the data managed by database systems is vital to any application that utilizes data for business, research, and decision-making purposes. To guard databases against erroneous data not reflecting real-world data or business rules, semantic integrity constraints can be specified during database design. Current commercial database management systems provide various means to implement mechanisms to enforce semantic integrity constraints at database run-time. In this paper, we give an overview of the semantic integrity support in the most recent SQL standard, SQL:1999, and we show to what extent the different concepts and language constructs proposed in this standard can be found in major commercial (object-)relational database management systems. In addition, we discuss general design guidelines that point out how the semantic integrity features provided by these systems should be utilized in order to implement an effective integrity enforcing subsystem for a database.
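A minimal illustration of the kind of declarative constraint surveyed here, shown through SQLite's CHECK support rather than the full SQL:1999 feature set (the table and rule are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A CHECK constraint declares a business rule once; the engine
# rejects any row that violates it at run-time.
conn.execute("""
    CREATE TABLE employee (
        name   TEXT NOT NULL,
        salary INTEGER CHECK (salary > 0)
    )
""")
conn.execute("INSERT INTO employee VALUES ('Ada', 5000)")   # accepted
try:
    conn.execute("INSERT INTO employee VALUES ('Bob', -10)")  # rejected
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

SQL:1999 goes further, with assertions spanning multiple tables and deferrable constraint checking; support for those varies widely across the commercial systems the paper compares.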

Journal ArticleDOI
TL;DR: This paper presents an extended survey of connectionist inference systems, with particular reference to how they perform variable binding and rule-based reasoning and whether they involve distributed or localist representations.

Journal ArticleDOI
TL;DR: The authors discuss: taxonomies; information sources; future issues; business viewpoints; the e-marketplace; and B2B e-commerce standardisation and integration.
Abstract: Ontologies are the first step toward realizing the full power of online e-commerce. Ontologies enable machine-understandable semantics of data, and building this data infrastructure will enable completely new kinds of automated services. Software agents can search for products, form buyer and seller coalitions, negotiate about products, or help automatically configure products and services according to specified user requirements. The combination of machine-processable semantics of data based on ontologies and the development of many specialized reasoning services will bring the Web to its full power. The authors discuss: taxonomies; information sources; future issues; business viewpoints; the e-marketplace; and B2B e-commerce standardisation and integration.