
Showing papers on "Knowledge representation and reasoning" published in 2002


Journal ArticleDOI
01 Mar 2002
TL;DR: This book covers the design and implementation of machine learning algorithms in Java, together with the concepts, knowledge representations, and evaluation techniques needed to apply them.
Abstract: 1. What's It All About? 2. Input: Concepts, Instances, Attributes 3. Output: Knowledge Representation 4. Algorithms: The Basic Methods 5. Credibility: Evaluating What's Been Learned 6. Implementations: Real Machine Learning Schemes 7. Moving On: Engineering The Input And Output 8. Nuts And Bolts: Machine Learning Algorithms In Java 9. Looking Forward

5,936 citations
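Chapter 4 ("Algorithms: The Basic Methods") covers simple rule learners such as 1R; a minimal sketch of that method, assuming categorical attributes and a toy weather-style dataset (the data below is illustrative, not taken from the book):

```python
from collections import Counter, defaultdict

def one_r(rows, attributes, label):
    """1R: for each attribute, predict the majority class per value;
    keep the attribute whose rules make the fewest training errors."""
    best = None
    for attr in attributes:
        by_value = defaultdict(Counter)
        for row in rows:
            by_value[row[attr]][row[label]] += 1
        rule = {v: c.most_common(1)[0][0] for v, c in by_value.items()}
        errors = sum(sum(c.values()) - c[rule[v]] for v, c in by_value.items())
        if best is None or errors < best[2]:
            best = (attr, rule, errors)
    return best  # (attribute, value -> predicted class, training errors)

# toy weather-style data, illustrative only
rows = [
    {"outlook": "sunny",    "windy": "false", "play": "no"},
    {"outlook": "sunny",    "windy": "true",  "play": "no"},
    {"outlook": "overcast", "windy": "false", "play": "yes"},
    {"outlook": "rainy",    "windy": "false", "play": "yes"},
    {"outlook": "rainy",    "windy": "true",  "play": "no"},
]
attr, rule, errors = one_r(rows, ["outlook", "windy"], "play")
```

The point of 1R is that a one-attribute rule set often performs surprisingly well, which makes it a useful baseline before the book's more elaborate schemes.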


Book ChapterDOI
TL;DR: This work expands on previous work by showing how DAML-S Service Profiles, which describe service capabilities within DAML-S, can be mapped into UDDI records, thereby providing a way to record semantic information within UDDI records, and shows how this encoded information can be used within the UDDI registry to perform semantic matching.
Abstract: The web is moving from being a collection of pages toward a collection of services that interoperate through the Internet. A fundamental step toward this interoperation is the ability to locate services automatically on the basis of the functionalities they provide. Such a capability would allow services to find each other and interoperate without human intervention. Locating web services is inherently a semantic problem, because it has to abstract from the superficial differences between representations of the services provided and the services requested in order to recognize semantic similarities between the two. Current Web Services technology based on UDDI and WSDL makes no use of semantic information: it cannot match service capabilities or support service location on the basis of the functionalities sought, and therefore fails to address the problem of locating web services. Nevertheless, previous work on DAML-S, a DAML-based language for service description, shows how ontological information collected through the semantic web can be used to match service capabilities. This work expands on that by showing how DAML-S Service Profiles, which describe service capabilities within DAML-S, can be mapped into UDDI records, thereby providing a way to record semantic information within UDDI records. Furthermore, we show how this encoded information can be used within the UDDI registry to perform semantic matching.

403 citations
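The mapping the authors describe can be pictured as embedding each semantic capability field of a service profile into a UDDI record's extensible keyed entries; a toy sketch, with field and key names invented for illustration rather than taken from the actual DAML-S or UDDI schemas:

```python
def profile_to_uddi(profile):
    """Embed DAML-S-style capability fields of a service profile into a
    UDDI-like record as keyed entries (structures are simplified stand-ins,
    not the real UDDI data model)."""
    record = {"businessService": {"name": profile["serviceName"]},
              "categoryBag": []}
    # each semantic field becomes a keyed entry a registry could match on
    for field in ("input", "output", "precondition", "effect"):
        for value in profile.get(field, []):
            record["categoryBag"].append(
                {"keyName": f"DAML-S:{field}", "keyValue": value})
    return record

profile = {"serviceName": "BookSeller",
           "input": ["BookTitle"], "output": ["Price"]}
record = profile_to_uddi(profile)
```

A registry that stores such keyed entries can then perform matching over the semantic fields rather than over free-text service names alone.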


Journal ArticleDOI
TL;DR: The goal is to help developers find the most suitable language to represent the semantic information that the Semantic Web requires, solving heterogeneous data exchange in this heterogeneous environment.
Abstract: Ontologies have proven to be an essential element in many applications. They are used in agent systems, knowledge management systems, and e-commerce platforms; they also support natural language generation, intelligent information integration, semantic-based access to the Internet, and information extraction from texts, and they serve in many other applications to explicitly declare the knowledge embedded in them. However, not only are ontologies useful for applications in which knowledge plays a key role; they can also trigger a major change in current Web contents. This change is leading to the third generation of the Web, known as the Semantic Web, which has been defined as the conceptual structuring of the Web in an explicit machine-readable way. New ontology-based applications and knowledge architectures are being developed for this new Web. A common claim across all of these approaches is the need for languages to represent the semantic information that this Web requires, solving heterogeneous data exchange in this heterogeneous environment. Our goal is to help developers find the most suitable language for their representation needs.

378 citations


Book ChapterDOI
09 Jun 2002
TL;DR: This paper presents TRIPLE, a layered and modular rule language for the Semantic Web that is based on Horn logic and borrows many basic features from F-Logic but is especially designed for querying and transforming RDF models.
Abstract: This paper presents TRIPLE, a layered and modular rule language for the Semantic Web [1]. TRIPLE is based on Horn logic and borrows many basic features from F-Logic [11], but is especially designed for querying and transforming RDF models [20]. TRIPLE can be viewed as a successor of SiLRI (Simple Logic-based RDF Interpreter [5]). One of the most important differences from F-Logic and SiLRI is that TRIPLE does not have a fixed semantics for object-oriented features like classes and inheritance. Its layered architecture allows such features to be easily defined for different object-oriented and other data models like UML, Topic Maps, or RDF Schema [19]. Description logic extensions of RDF (Schema), like OIL [17] and DAML+OIL [3], that cannot be fully handled by Horn logic are provided as modules that interact with a description logic classifier, e.g. FaCT [9], resulting in a hybrid rule language. This paper sketches the syntax and semantics of TRIPLE.

370 citations
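TRIPLE's core idea, Horn rules evaluated over RDF statements, can be illustrated with naive forward chaining over triples (the vocabulary and the rule encoding below are invented for the example; they are not TRIPLE's actual syntax):

```python
def match(patterns, facts, binding):
    """Yield variable bindings unifying a list of triple patterns with facts.
    Variables are strings starting with '?'."""
    if not patterns:
        yield binding
        return
    first, rest = patterns[0], patterns[1:]
    for fact in facts:
        b = dict(binding)
        if all(b.setdefault(p, f) == f if p.startswith("?") else p == f
               for p, f in zip(first, fact)):
            yield from match(rest, facts, b)

def forward_chain(facts, rules):
    """Apply Horn rules (body, head) over triples until a fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            for b in list(match(body, facts, {})):  # materialize before mutating
                derived = tuple(b.get(t, t) for t in head)
                if derived not in facts:
                    facts.add(derived)
                    changed = True
    return facts

facts = {("Dog", "subClassOf", "Mammal"),
         ("Mammal", "subClassOf", "Animal"),
         ("fido", "type", "Dog")}
rules = [  # transitivity of subClassOf, and type propagation along it
    ([("?c", "subClassOf", "?d"), ("?d", "subClassOf", "?e")],
     ("?c", "subClassOf", "?e")),
    ([("?x", "type", "?c"), ("?c", "subClassOf", "?d")],
     ("?x", "type", "?d")),
]
result = forward_chain(facts, rules)
```

TRIPLE's layered design amounts to making rule sets like the two above pluggable per data model, instead of baking one semantics for classes and inheritance into the language.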


Book ChapterDOI
01 Oct 2002
TL;DR: MnM is presented, an annotation tool which provides both automated and semi-automated support for annotating web pages with semantic contents; it integrates a web browser with an ontology editor and provides open APIs to link to ontology servers and to integrate information extraction tools.
Abstract: An important precondition for realizing the goal of a semantic web is the ability to annotate web resources with semantic information. In order to carry out this task, users need appropriate representation languages, ontologies, and support tools. In this paper we present MnM, an annotation tool which provides both automated and semi-automated support for annotating web pages with semantic contents. MnM integrates a web browser with an ontology editor and provides open APIs to link to ontology servers and for integrating information extraction tools. MnM can be seen as an early example of the next generation of ontology editors, being web-based, oriented to semantic markup and providing mechanisms for large-scale automatic markup of web pages.

352 citations


Book
01 Jan 2002
TL;DR: This book discusses theories of world knowledge representation and the computer as a tool for storing and acquiring spatial knowledge, together with the elements of conceptual structure that underlie this process.
Abstract: Part I: Theories of World Knowledge Representation. Introduction to Part I. Representation versus Reality. Acquiring World Knowledge: The Overall Process. Storing World Knowledge: Some Elements of Conceptual Structure. Acquiring World Knowledge through Direct Experience. From Observation to Understanding. Acquiring Geographic Knowledge through Indirect Experience. How Spatial Knowledge is Encoded. Part II: The Computer as a Tool for Storing and Acquiring Spatial Knowledge. Introduction to Part II. The Computer as Medium. Storing Geographic Data. A New Perspective for Geographic Database Representation. Interacting with Databases. Issues for Implementing Advanced Geographic Databases. Epilogue: Moving Forward.

324 citations


Journal ArticleDOI
TL;DR: Centering resonance analysis (CRA) as discussed by the authors is a text analysis method that has broad scope and range and can be applied to large quantities of written text and transcribed conversation.
Abstract: Scholars increasingly theorize about the power of communication to organize and structure social collectives. However, two factors threaten to impede research on these theories: limitations in the scope and range of existing methods for studying complex systems of communication and the large volume of communication produced by even small collectives. Centering resonance analysis (CRA) is a new text analysis method that has broad scope and range and can be applied to large quantities of written text and transcribed conversation. It identifies discursively important words and represents these as a network, then uses structural properties of the network to index word importance. CRA networks can be directly visualized and can be scored for resonance with other networks to support a number of spatial analysis methods. Following a critique of existing methodologies, this paper describes the theoretical basis and operational details of CRA, describes its advantages relative to other techniques, demonstrates its face validity and representational validity, and demonstrates its utility in modeling organizational knowledge. The conclusion argues for its applicability in several organizational research contexts before describing its potential for use in a broader range of applications, including media content analysis, conversation analysis, computer simulations, and models of communication systems.

304 citations
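The central move of CRA, representing discursively important words as a network and scoring them by structural position, can be sketched as follows. This simplification links adjacent words within noun phrases and uses degree centrality; CRA proper extracts noun phrases via centering theory and uses a betweenness-based influence score:

```python
from collections import defaultdict

def word_network(phrases):
    """Link words that appear adjacently within the same noun phrase."""
    edges = defaultdict(set)
    for words in phrases:
        for a, b in zip(words, words[1:]):
            if a != b:
                edges[a].add(b)
                edges[b].add(a)
    return edges

def degree_centrality(edges):
    """Score each word by its normalized degree in the network."""
    n = len(edges)
    return {w: len(nbrs) / (n - 1) for w, nbrs in edges.items()}

# noun phrases assumed already extracted upstream (illustrative text)
phrases = [["knowledge", "representation"],
           ["representation", "language"],
           ["knowledge", "base"]]
scores = degree_centrality(word_network(phrases))
```

Words that bridge many phrases end up structurally central, which is exactly the property CRA exploits when it indexes word importance from network position rather than raw frequency.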


Book ChapterDOI
15 Jul 2002
TL;DR: This paper proposes an approach to semantic search that matches conceptual graphs, calculating semantic similarities between concepts, relations, and whole conceptual graphs from detailed similarity definitions.
Abstract: Semantic search has become a research hotspot. The combined use of linguistic ontologies and structured semantic matching is one of the promising ways to improve both recall and precision. In this paper, we propose an approach to semantic search based on matching conceptual graphs. Detailed definitions of the semantic similarities between concepts, relations, and conceptual graphs are given. Based on these definitions, we propose a conceptual graph matching algorithm that calculates the semantic similarity; its computational complexity is constrained to be polynomial. A prototype of our approach is currently under development with IBM China Research Lab.

256 citations
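Concept-level similarity over a linguistic ontology is commonly computed from taxonomy depth; a toy version in the spirit of the paper's concept similarity (the taxonomy and the Wu-Palmer-style formula are illustrative, not the authors' exact definitions):

```python
def ancestors(taxonomy, concept):
    """Chain from a concept up to its root (taxonomy maps child -> parent)."""
    chain = [concept]
    while concept in taxonomy:
        concept = taxonomy[concept]
        chain.append(concept)
    return chain

def depth(taxonomy, concept):
    return len(ancestors(taxonomy, concept)) - 1

def concept_similarity(taxonomy, a, b):
    """Wu-Palmer-style score: 2 * depth(lcs) / (depth(a) + depth(b)),
    where lcs is the least common subsumer of a and b."""
    chain_a = ancestors(taxonomy, a)
    lcs = next(c for c in ancestors(taxonomy, b) if c in chain_a)
    da, db = depth(taxonomy, a), depth(taxonomy, b)
    return 2 * depth(taxonomy, lcs) / (da + db) if da + db else 1.0

taxonomy = {"cat": "mammal", "dog": "mammal",
            "mammal": "animal", "fish": "animal"}
```

Graph-level similarity can then be built on top, e.g. by averaging the best concept-to-concept and relation-to-relation matches between the two graphs.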


Journal ArticleDOI
TL;DR: It is identified that shallow information extraction and natural language processing techniques are deployed to extract concepts or classes from free-text or semi-structured data, but relation extraction is a very complex and difficult issue to resolve and it has turned out to be the main impediment to ontology learning and applicability.
Abstract: Ontology is an important emerging discipline that has the huge potential to improve information organization, management and understanding. It has a crucial role to play in enabling content-based access, interoperability, communications, and providing qualitatively new levels of services on the next wave of web transformation in the form of the Semantic Web. The issues pertaining to ontology generation, mapping and maintenance are critical key areas that need to be understood and addressed. This survey is presented in two parts. The first part reviews the state-of-the-art techniques and work done on semi-automatic and automatic ontology generation, as well as the problems facing such research. The second complementary survey is dedicated to ontology mapping and ontology ‘evolving’. Through this survey, we have identified that shallow information extraction and natural language processing techniques are deployed to extract concepts or classes from free-text or semi-structured data. However, relation extraction remains a complex and difficult issue, and has turned out to be the main impediment to ontology learning and applicability.

247 citations


01 Jan 2002
TL;DR: In this paper, the interpretation rules are characterized as implicit rules for mapping knowledge about a base domain into a target domain, and the rules depend only on syntactic properties of the knowledge representation, and not on specific content of the domains.
Abstract: A theory of analogy must describe how the meaning of an analogy is derived from the meanings of its parts. In the structure-mapping theory, the interpretation rules are characterized as implicit rules for mapping knowledge about a base domain into a target domain. Two important features of the theory are (a) the rules depend only on syntactic properties of the knowledge representation, and not on the specific content of the domains; and (b) the theoretical framework allows analogies to be distinguished cleanly from literal similarity statements, applications of abstractions, and other kinds of comparisons.

243 citations
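The claim that the rules are purely syntactic can be made concrete: a candidate correspondence is checked only against the shape of the relational assertions, never their content. A sketch using the classic solar-system/atom example (the encoding as relation tuples is ours):

```python
def is_structural_mapping(base, target, mapping):
    """Check, purely syntactically, that mapping base objects to target
    objects carries every base assertion onto a target assertion with
    the same predicate; what the predicates mean never matters."""
    for pred, args in base:
        image = (pred, tuple(mapping[a] for a in args))
        if image not in target:
            return False
    return True

# the classic solar-system / atom analogy, encoded as relational assertions
base = {("attracts", ("sun", "planet")),
        ("revolves_around", ("planet", "sun"))}
target = {("attracts", ("nucleus", "electron")),
          ("revolves_around", ("electron", "nucleus"))}
ok = is_structural_mapping(base, target,
                           {"sun": "nucleus", "planet": "electron"})
```

Swapping the mapping (sun to electron, planet to nucleus) fails the same syntactic check, which is how structure alone rules out bad analogies.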


Journal ArticleDOI
TL;DR: In the new millennium more and more researchers will attempt to capture Type 2 representation and develop reasoning with Type 2 formulas that reveal the rich information content available in information granules, as well as expose the risk associated with the graded representation of words and computing with words.

01 Jan 2002
TL;DR: The endurants/perdurants distinction introduced in the previous section provides evidence for the general necessity of having two kinds of parthood. A basic primitive relation is defined as an immediate relation that spans multiple application domains.
Abstract: Abstract entities. The main characteristic of abstract entities is that they have neither spatial nor temporal qualities, and they are not qualities themselves. The only class of abstract entities we consider in the present version of DOLCE is that of quality regions (or simply regions). Quality spaces are special kinds of quality regions, being mereological sums of all the regions related to a certain quality type. The other examples of abstract entities reported in Figure 2 (sets and facts) are only indicative.

2.3 Basic functions and relations. According to the general methodology introduced in [Gangemi et al. 2001], before discussing the DOLCE backbone properties, we first have to introduce a set of basic primitive relations, suitable to characterize our ontological commitments as neutrally as possible. We believe that these relations should be, as much as possible:
• general enough to be applied to multiple domains;
• such that they do not rest on questionable assumptions about the ontological nature of their arguments;
• sufficiently intuitive and well studied in the philosophical literature;
• able to hold as soon as their relata are given, without mediating additional entities.
In the past, we adopted the term formal relation (as opposed to material relation) for a relation that can be applied to all possible domains. Recently, however, [Degen et al. 2001] proposed a different notion of formal relation: “A relation is formal if it holds as soon as its relata are given. Formal relations are called equivalently immediate relations, since they hold of their relata without mediating additional individuals”. The notion of basic primitive relation proposed above combines the two notions: roughly, a basic primitive relation is an immediate relation that spans multiple application domains. The axioms constraining the arguments of primitive relations and functions are reported in Table 3 and summarized in Figure 4.
1. The notion of ‘immediate relation’ seems to be equivalent to what Johansson called a ground relation [Johansson 1989]. According to Johansson, a ground relation “is derivable from its relata”. We understand that the very existence of the arguments is sufficient to conclude whether the relation holds or not. This notion also seems equivalent to that of “internal relation”.

IST Project 2001-33052 WonderWeb: Ontology Infrastructure for the Semantic Web

• Parthood: “x is part of y”: P(x, y) → (AB(x) ∨ PD(x)) ∧ (AB(y) ∨ PD(y))
• Temporary Parthood: “x is part of y during t”: P(x, y, t) → (ED(x) ∧ ED(y) ∧ T(t))
• Constitution: “x constitutes y during t”: K(x, y, t) → ((ED(x) ∨ PD(x)) ∧ (ED(y) ∨ PD(y)) ∧ T(t))
• Participation: “x participates in y during t”: PC(x, y, t) → (ED(x) ∧ PD(y) ∧ T(t))
• Quality: “x is a quality of y”: qt(x, y) → (Q(x) ∧ (Q(y) ∨ ED(y) ∨ PD(y)))
• Quale: “x is the quale of y (during t)”: ql(x, y) → (TR(x) ∧ TQ(y)); ql(x, y, t) → ((PR(x) ∨ AR(x)) ∧ (PQ(y) ∨ AQ(y)) ∧ T(t))

Table 3. Basic axioms on argument restrictions of primitives.

Parthood and Temporary Parthood. The endurants/perdurants distinction introduced in the previous section provides evidence for the general necessity of having two kinds of parthood relations: a-temporal and time-indexed parthood. The latter will hold for endurants, since for them it is necessary to know when a specific parthood relationship holds. Consider for instance the classical example of Tibbles the cat [Simons 1987]: Tail is part of Tibbles before the cut but not after it. Formally, we can write P(Tail, Tibbles, before(cut)) and ¬P(Tail, Tibbles, after(cut)). A-temporal parthood, on the other hand, will be used for entities which do not properly change in time (occurrences and abstracts). In the present version, parthood will not be defined for qualities. With respect to time-indexed parthood, two useful notions can be defined.
We shall say that an endurant is mereologically constant iff all its parts remain the same during its life, and mereologically invariant iff they remain the same across all possible worlds. For example, we usually take ordinary material objects as mereologically variable, because during their life they can lose or gain parts. On the other hand, amounts of matter are taken as mereologically invariant (all their parts are essential parts).

Dependence and Spatial Dependence. There are basically two approaches to characterizing the notion of ontological dependence:
• non-modal accounts (cf. [Fine and Smith 1983] and [Simons 1987], pp. 310-318);
• modal accounts (cf. [Simons 1987]).
Non-modal approaches treat the dependence relation as a quasi-mereological primitive whose formal properties are characterized by axioms. However, as Simons has justly observed, such axiomatizations cannot rule out non-intended interpretations that are purely topological in nature. The only way to save them is to link them with modal accounts. In a modal approach, dependence of an entity x on an entity y might be defined as follows: x depends on y iff, necessarily, y is present whenever x is present. Such a definition seems to be in harmony with commonsense intuition as well as with the philosophical tradition (Aristotle, Husserl), despite the fact that there are some cases where, as Kit Fine has shown, this characterization is vacuous. Indeed, according to the definition, everything is trivially dependent on necessarily existing or always present objects. However, Simons has shown that it is possible to exclude such vacuous examples, and while this move might be philosophically dubious, it makes perfect sense in an engineering approach to ontologies of everyday contingent objects. Our concept of dependence involves the notion of presence in time as well as modality.
We mainly use two variants of dependence, adapted from [Thomasson 1999]: specific and generic constant dependence. The former is defined both for particulars and for properties, while the latter only for properties. A particular x is specifically constantly dependent on another particular y iff, at any time t, x can't be present at t unless y is also present at t. For example, a person might be specifically constantly dependent on his or her brain. This notion is naturally extended to properties by defining that a property φ is specifically constantly dependent on a property ψ iff every φ-er is specifically constantly dependent on a ψ-er. A property φ is generically constantly dependent on a property ψ iff, for any instance x of φ, at any time t, x can't be present at t unless a certain instance y of ψ is also present at t. For example, a person might be generically constantly dependent on having a heart. We define spatial dependence as a particular kind of dependence which is grounded not only in time (presence) but also in space. The definitions are as above, with the further requirement that y has to be spatially co-localised with x in addition to being co-present. This notion is defined both for endurants and perdurants.

Constitution. Constitution has been extensively discussed in the philosophical literature:
• Doepke (cit. in [Simons 1987], p. 238): “x constitutes y at time t iff x could be a substratum of y’s destruction.”
• Simons (cit. in [Simons 1987], p. 239): “When x constitutes y, there are certain properties of x which are accidental to x, but essential to y. (...) Where the essential properties concern the type and disposition of parts, this is often a case of composition, but in other cases, such as that of body/person, it is not.”
Constitution is not identity. Consider the following classical example. I buy a portion of clay (LUMPL) at 9am. At 2pm I make a statue (GOLIATH) out of LUMPL and put GOLIATH on a table.
At 3pm I replace the left hand of GOLIATH with a new one and throw the old hand in the dustbin. There are three reasons to support the claim that LUMPL is not GOLIATH:
(i) Difference in histories: LUMPL is present at 9am, but GOLIATH is not [Thomson 1998].
(ii) Difference in persistence conditions: at 3pm GOLIATH is wholly present on the table, but LUMPL is not. A statue can undergo replacements of certain parts, but an amount (portion) of matter cannot, i.e. all parts of LUMPL are essential but not all parts of GOLIATH are essential [Thomson 1998]. LUMPL can survive a change of shape; GOLIATH cannot.
(iii) Difference in essential relational properties: it is metaphysically possible for LUMPL, but not for GOLIATH, to exist in the absence of an artworld, an artist, or anybody's intentions [Baker 2000].

Participation. The usual intuition about participation is that there are endurants “involved” in an occurrence. Linguistics has extensively investigated the relation between occurrences and their participants in order to classify verbs and verbal expressions. Fillmore's Case Grammar [Fillmore 1984] and its developments (Construction Grammar, FrameNet) are among the best attempts at building a systematic model of language-oriented participants. On the other hand, the first systematic investigation goes back at least to Aristotle, who defined four “causes” (aitiai), expressing the initiator, the destination, the instrument, and the substrate or host of an event. Sowa further specified subsets of aitiai on the basis of properties borrowed from linguistics (cf. [Sowa 1999]). In an ontology based on a strict distinction between endurants and perdurants, participation cannot simply be parthood; the participating endurants are not parts of the occurrences: only occurrences can be parts of other occurrences.
Moreover, the primitive participation we introduce is time-indexed, in order to account for the varieties of participation in time (temporary participation, constant participation).

Quality inherence and quality value. Finally, three primitive relations are introduced in order to account for qualities: a generalize
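The time-indexed parthood relation and the notion of mereological constancy can be illustrated with the Tibbles example (the encoding below is ours, not DOLCE's formal axiomatization):

```python
def parts_at(parthood, whole, t):
    """Parts of `whole` at time t, from time-indexed triples P(x, y, t)."""
    return {x for (x, y, time) in parthood if y == whole and time == t}

def mereologically_constant(parthood, whole, times):
    """An endurant is mereologically constant iff its parts stay the
    same at every time of its life."""
    snapshots = [parts_at(parthood, whole, t) for t in times]
    return all(s == snapshots[0] for s in snapshots)

# P(Tail, Tibbles, before_cut) holds, but not P(Tail, Tibbles, after_cut)
parthood = {("Tail", "Tibbles", "before_cut"),
            ("Body", "Tibbles", "before_cut"),
            ("Body", "Tibbles", "after_cut")}
constant = mereologically_constant(parthood, "Tibbles",
                                   ["before_cut", "after_cut"])
```

Tibbles loses a part across the cut, so it is mereologically variable, whereas an amount of matter would yield the same part set at every time.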

Book ChapterDOI
23 Sep 2002
TL;DR: This paper presents a probabilistic extension of SHOQ(D), called P-SHOQ(D), to allow for dealing with probabilistic ontologies in the semantic web, and presents sound and complete reasoning techniques which show in particular that reasoning in P-SHOQ(D) is decidable.
Abstract: Ontologies play a central role in the development of the semantic web, as they provide precise definitions of shared terms in web resources. One important web ontology language is DAML+OIL; it has a formal semantics and reasoning support through a mapping to the expressive description logic SHOQ(D) with the addition of inverse roles. In this paper, we present a probabilistic extension of SHOQ(D), called P-SHOQ(D), to allow for dealing with probabilistic ontologies in the semantic web. The description logic P-SHOQ(D) is based on the notion of probabilistic lexicographic entailment from probabilistic default reasoning. It allows rich probabilistic knowledge about concepts and instances to be expressed, as well as default knowledge about concepts. We also present sound and complete reasoning techniques for P-SHOQ(D), which are based on reductions to classical reasoning in SHOQ(D) and to linear programming, and which show in particular that reasoning in P-SHOQ(D) is decidable.
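The reduction to linear programming can be stated abstractly: tight bounds on the probability of a conclusion C are the optima of a linear program over probability distributions on the possible worlds (notation ours, not the paper's):

```latex
\begin{aligned}
\min / \max \quad & \sum_{w \,\models\, C} p_w \\
\text{s.t.} \quad & \sum_{w} p_w = 1, \qquad p_w \ge 0 \ \text{for all worlds } w, \\
& l_i \;\le\; \sum_{w \,\models\, \phi_i} p_w \;\le\; u_i
  \quad \text{for each probabilistic axiom } (\phi_i, [l_i, u_i]).
\end{aligned}
```

Classical SHOQ(D) reasoning supplies which worlds satisfy which formulas; the linear program then yields the entailed probability interval for C.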

Journal ArticleDOI
TL;DR: Although explicit knowledge cannot turn into implicit knowledge through practice, it is argued that explicit learning and practice often form efficient ways of mastering an L2 by creating opportunities for implicit learning.
Abstract: This article argues for the need to reconcile symbolist and connectionist accounts of (second) language learning by propounding nine claims, aimed at integrating accounts of the representation, processing and acquisition of second language (L2) knowledge. Knowledge representation is claimed to be possible both in the form of symbols and rules and in the form of networks with layers of hidden units representing knowledge in a distributed, subsymbolic way. Implicit learning is the construction of knowledge in the form of such networks. The strength of association between the network nodes changes in the beginning stages of learning with accumulating exposure, following a power law (automatization). Network parts may attain the status equivalent to ‘symbols’. Explicit learning is the deliberate construction of verbalizable knowledge in the form of symbols (concepts) and rules. The article argues for a nonnativist, emergentist view of first language learning and adopts its own version of what could be called a non-interface position in L2 learning: although explicit knowledge cannot turn into implicit knowledge through practice, it is argued that explicit learning and practice often form efficient ways of mastering an L2 by creating opportunities for implicit learning.

Journal ArticleDOI
TL;DR: A two-dimensional logic capable of describing topological relationships that change over time, called PSTL (Propositional Spatio-Temporal Logic), is constructed and it is shown that it contains decidable fragments into which various temporal extensions of the spatial logic RCC-8 can be embedded.
Abstract: In this paper we advocate the use of multi-dimensional modal logics as a framework for knowledge representation and, in particular, for representing spatio-temporal information. We construct a two-dimensional logic capable of describing topological relationships that change over time. This logic, called PSTL (Propositional Spatio-Temporal Logic), is the Cartesian product of the well-known temporal logic PTL and the modal logic S4u, which is the Lewis system S4 augmented with the universal modality. Although it is an open problem whether the full PSTL is decidable, we show that it contains decidable fragments into which various temporal extensions (both point-based and interval-based) of the spatial logic RCC-8 can be embedded. We consider known decidability and complexity results that are relevant to computation with multi-dimensional formalisms and discuss possible directions for further research.

Journal ArticleDOI
24 Jan 2002
TL;DR: This survey describes the typical components of a Go program, and discusses knowledge representation, search methods and techniques for solving specific subproblems in this domain.
Abstract: Computer Go is one of the biggest challenges faced by game programmers. This survey describes the typical components of a Go program, and discusses knowledge representation, search methods and techniques for solving specific subproblems in this domain. Along with a summary of the development of computer Go in recent years, areas for future research are pointed out.
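Among the search methods such surveys discuss, minimax with alpha-beta pruning is the common baseline; a generic sketch over an explicit game tree (not Go-specific, and far simpler than the selective search real Go programs use):

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over an explicit tree:
    leaves are numbers, interior nodes are lists of children."""
    if not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: opponent will avoid this branch
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break  # alpha cutoff
    return value

tree = [[3, 5], [2, 9], [0, 7]]
best = alphabeta(tree, float("-inf"), float("inf"), True)
```

The survey's point is that in Go the branching factor makes plain alpha-beta insufficient, which is why knowledge representation and problem decomposition matter so much in that domain.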

Journal ArticleDOI
TL;DR: The components of the Instruction-Based Learning architecture are described and issues of knowledge representation, the selection of primitives and the conversion of natural language into robot-understandable procedures are discussed.

Book ChapterDOI
27 May 2002
TL;DR: DAML+OIL is an ontology language specifically designed for use on the Web that exploits existing Web standards (XML and RDF), adding the familiar ontological primitives of object-oriented and frame-based systems, and the formal rigor of a very expressive description logic.
Abstract: Ontologies are set to play a key role in the "Semantic Web", extending syntactic interoperability to semantic interoperability by providing a source of shared and precisely defined terms. DAML+OIL is an ontology language specifically designed for use on the Web; it exploits existing Web standards (XML and RDF), adding the familiar ontological primitives of object-oriented and frame-based systems, and the formal rigor of a very expressive description logic. The logical basis of the language means that reasoning services can be provided, both to support ontology design and to make DAML+OIL described Web resources more accessible to automated processes.

Proceedings ArticleDOI
04 Nov 2002
TL;DR: A new mechanism that can generate an ontology automatically is proposed in order to make the approach scalable, together with an automatic concept selection algorithm from WordNet; the modified SOTA is observed to outperform hierarchical agglomerative clustering (HAC).
Abstract: Technology in the field of digital media generates huge amounts of non-textual information, audio, video, and images, along with more familiar textual information. The potential for exchange and retrieval of information is vast and daunting. The key problem in achieving efficient and user-friendly retrieval is the development of a search mechanism to guarantee delivery of minimal irrelevant information (high precision) while ensuring relevant information is not overlooked (high recall). The traditional solution employs keyword-based search. The only documents retrieved are those containing user specified keywords. But many documents convey desired semantic information without containing these keywords. One can overcome this problem by indexing documents according to meanings rather than words, although this will entail a way of converting words to meanings and the creation of ontology. We have solved the problem of an index structure through the design and implementation of a concept-based model using domain-dependent ontology. Ontology is a collection of concepts and their interrelationships, which provide an abstract view of an application domain. We propose a new mechanism that can generate ontology automatically in order to make our approach scalable. For this we modify the existing self-organizing tree algorithm (SOTA) that constructs a hierarchy from top to bottom. Furthermore, in order to find an appropriate concept for each node in the hierarchy we propose an automatic concept selection algorithm from WordNet called linguistic ontology. To illustrate the effectiveness of our automatic ontology construction method, we have explored our ontology construction in text documents. The Reuters21578 text document corpus has been used. We have observed that our modified SOTA outperforms hierarchical agglomerative clustering (HAC).
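The contrast between keyword and concept indexing can be sketched directly: words are first mapped to concepts (a hand-made table below stands in for the WordNet-based concept selection), and documents are indexed under concepts rather than literal keywords:

```python
def build_index(docs, word_to_concept):
    """Index documents by concept rather than by literal keyword."""
    index = {}
    for doc_id, text in docs.items():
        for word in text.lower().split():
            concept = word_to_concept.get(word, word)
            index.setdefault(concept, set()).add(doc_id)
    return index

# toy stand-in for WordNet-derived concept selection
word_to_concept = {"car": "automobile", "auto": "automobile",
                   "vehicle": "automobile"}
docs = {1: "car repair", 2: "auto sales", 3: "train schedule"}
index = build_index(docs, word_to_concept)
hits = index["automobile"]
```

A query for the concept "automobile" retrieves documents 1 and 2 even though they share no keyword, which is the recall gain concept-based retrieval is after.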

Journal ArticleDOI
TL;DR: This work studied 11 types of guideline representation models that can be used to encode guidelines in computer-interpretable formats and consistently found that primitives for representation of actions and decisions are necessary components of a guideline representation model.

Book ChapterDOI
23 Sep 2002
TL;DR: This paper describes a simple though quite powerful approach to modelling the updates of knowledge bases expressed by generalized logic programs, by means of a new language, hereby christened EVOLP (after EVOlving Logic Programs).
Abstract: Logic programming has often been considered less than adequate for modelling the dynamics of knowledge changing over time. In this paper we describe a simple though quite powerful approach to modelling the updates of knowledge bases expressed by generalized logic programs, by means of a new language, hereby christened EVOLP (after EVOlving Logic Programs). The approach was first sparked by a critical analysis of previous efforts and results in this direction [1,2,7,11], and aims to provide a simpler, and at once more general, formulation of logic program updating, which runs closer to traditional logic programming (LP) doctrine. From the syntactical point of view, evolving programs are just generalized logic programs (i.e. normal LPs plus default negation also in rule heads), extended with (possibly nested) assertions, whether in heads or bodies of rules. From the semantics viewpoint, a model-theoretic characterization is offered of the possible evolutions of such programs. These evolutions arise both from self (or internal) updating, and from external updating too, originating in the environment. This formulation sets evolving programs on a firm basis in which to express, implement, and reason about dynamic knowledge bases, and opens up a number of interesting research topics that we brush on.
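The flavour of EVOLP's self-updates can be sketched with definite rules only (no default negation, a drastic simplification of the actual language): rules whose bodies hold in the current model may have assert(...) heads that add rules to the program for the next state:

```python
def least_model(rules):
    """Least model of a definite program: rules are (head, body) with
    string heads and lists of body atoms."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if isinstance(head, str) and head not in model \
                    and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

def evolve(rules, steps):
    """Iterate: compute the current model, then add every rule whose
    assert(...) head fired, yielding the next program."""
    states = []
    for _ in range(steps):
        model = least_model(rules)
        states.append(model)
        for head, body in list(rules):
            if isinstance(head, tuple) and head[0] == "assert" \
                    and all(b in model for b in body):
                rules = rules + [head[1]]
    return states

# 'a' holds from the start; once it does, assert the rule 'b <- .'
rules = [("a", []), (("assert", ("b", [])), ["a"])]
states = evolve(rules, 2)
```

The program's knowledge grows across states, which is the self-updating behaviour EVOLP formalizes (there with full default negation and external events as well).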

Journal ArticleDOI
TL;DR: A family of extensions of Sowa's model, based on rules and constraints, keeping graph homomorphism as the basic operation is presented, including their operational semantics and relationships with FOL.
Abstract: Simple conceptual graphs are considered as the kernel of most knowledge representation formalisms built upon Sowa's model. Reasoning in this model can be expressed by a graph homomorphism called projection, whose semantics is usually given in terms of positive, conjunctive, existential FOL. We present here a family of extensions of this model, based on rules and constraints, keeping graph homomorphism as the basic operation. We focus on the formal definitions of the different models obtained, including their operational semantics and relationships with FOL, and we analyze the decidability and complexity of the associated problems (consistency and deduction). As soon as rules are involved in reasonings, these problems are not decidable, but we exhibit a condition under which they fall in the polynomial hierarchy. These results extend and complete the ones already published by the authors. Moreover we systematically study the complexity of some particular cases obtained by restricting the form of constraints and/or rules.
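The projection operation central to this abstract can be sketched as a labeled-graph homomorphism check: each query node maps to a target node whose concept type specializes the query node's type, and every relation edge must be preserved. The type hierarchy, graphs, and names below are invented for illustration; real conceptual-graph supports are richer (relation hierarchies, conformity relations).

```python
from itertools import product

# A toy "support": concept types ordered by a subsumption hierarchy.
PARENT = {"Cat": "Animal", "Dog": "Animal", "Animal": "Thing", "Person": "Thing"}

def leq(t, u):
    """True if type t is a specialization of (or equal to) type u."""
    while t is not None:
        if t == u:
            return True
        t = PARENT.get(t)
    return False

def projections(query, target):
    """Enumerate projections (label-preserving graph homomorphisms) from
    query to target.  A graph is (node_types, relation_edges), with
    edges given as (relation, src_node, dst_node) triples."""
    q_types, q_edges = query
    t_types, t_edges = target
    q_nodes = list(q_types)
    # candidate images: target nodes whose type specializes the query node's type
    cands = [[n for n in t_types if leq(t_types[n], q_types[q])] for q in q_nodes]
    found = []
    for combo in product(*cands):
        m = dict(zip(q_nodes, combo))
        if all((r, m[a], m[b]) in t_edges for (r, a, b) in q_edges):
            found.append(m)
    return found

# Query: some Animal owned by a Person.  Target: a Person owns a Cat.
query = ({"x": "Animal", "y": "Person"}, {("owns", "y", "x")})
target = ({"n1": "Cat", "n2": "Person"}, {("owns", "n2", "n1")})
print(projections(query, target))  # → [{'x': 'n1', 'y': 'n2'}]
```

The brute-force enumeration makes the NP-hardness of general projection visible; the polynomial cases the paper identifies correspond to restrictions (e.g. on graph shape) that tame exactly this search.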

Journal ArticleDOI
11 Nov 2002
TL;DR: The solution is based on a simple view definition language that allows for automatic generation of views and enables a distributed implementation of the view system that is scalable both in terms of data and load.
Abstract: We are interested in defining and querying views in a huge and highly heterogeneous XML repository (Web scale). In this context, view definitions are very large, involving lots of sources, and there is no apparent limitation to their size. This raises interesting problems that we address in the paper: (i) how to distribute views over several machines without having a negative impact on the query translation process; (ii) how to quickly select the relevant part of a view given a query; (iii) how to minimize the cost of communicating potentially large queries to the machines where they will be evaluated. The solution that we propose is based on a simple view definition language that allows for automatic generation of views. The language maps paths in the view abstract DTD to paths in the concrete source DTDs. It enables a distributed implementation of the view system that is scalable both in terms of data and load. In particular, the query translation algorithm is shown to have a good (linear) complexity.
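The view definition language the abstract describes maps abstract paths to concrete source paths, and query translation touches only the view entries relevant to the queried prefix. The sketch below is a minimal illustration of that idea; the view contents, source names, and paths are all invented.

```python
# Hypothetical path-to-path view: each abstract path maps to one or more
# (source, concrete path) pairs drawn from heterogeneous source DTDs.
VIEW = {
    "/catalog/book/title": [("srcA", "/lib/item/name"), ("srcB", "/store/bk/t")],
    "/catalog/book/author": [("srcA", "/lib/item/writer")],
}

def translate(query_path):
    """Rewrite an abstract path query into one concrete query per source,
    selecting only the view entries at or under the queried path."""
    out = {}
    for apath, targets in VIEW.items():
        if apath == query_path or apath.startswith(query_path + "/"):
            for src, cpath in targets:
                out.setdefault(src, []).append(cpath)
    return out

print(translate("/catalog/book"))
# → {'srcA': ['/lib/item/name', '/lib/item/writer'], 'srcB': ['/store/bk/t']}
```

Because selection is a prefix test over the abstract paths, the relevant fragment of a very large view can be found quickly, and each source receives only its own rewritten queries, which is the distribution property the paper is after.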

Book ChapterDOI
01 Oct 2002
TL;DR: This article proposes to use a methodology introducing a clear semantic commitment to normalize the meaning of the concepts of ontologies, and implemented this methodology in an editor, DOE, complementary to other existing tools, and used it to develop several ontologies.
Abstract: The French institute ina is interested in ontologies in order to describe the content of audiovisual documents. Methodologies and tools for building such objects exist, but few propose complete guidelines to help the user to organize the key components of ontologies: subsumption hierarchies. This article proposes to use a methodology introducing a clear semantic commitment to normalize the meaning of the concepts. We have implemented this methodology in an editor, DOE, complementary to other existing tools, and used it to develop several ontologies.

01 Jan 2002
TL;DR: As the hype of past decades fades, the current heir to the artificial intelligence legacy may well be ontologies: structured depictions or models of known (and accepted) facts, built today to make a number of applications more capable of handling complex and disparate information.
Abstract: As the hype of past decades fades, the current heir to the artificial intelligence legacy may well be ontologies. Evolving from semantic network notions, modern ontologies are proving quite useful. And they are doing so without relying on the jumble of rule-based techniques common in earlier knowledge representation efforts. These structured depictions or models of known (and accepted) facts are being built today to make a number of applications more capable of handling complex and disparate information. They appear most effective when the semantic distinctions that humans take for granted are crucial to the application's purpose. This may mean handling the common sense lurking in natural language excerpts or the expertise embedded in domain-specific explications and working repositories.

Book ChapterDOI
30 Oct 2002
TL;DR: This paper presents a specifically database-inspired approach (called DOGMA) for engineering formal ontologies, implemented as shared resources used to express agreed formal semantics for a real world domain, and claims it leads to methodological approaches that naturally extend key aspects of database modeling theory and practice.
Abstract: This paper presents a specifically database-inspired approach (called DOGMA) for engineering formal ontologies, implemented as shared resources used to express agreed formal semantics for a real world domain. We address several related key issues, such as knowledge reusability and shareability, scalability of the ontology engineering process and methodology, efficient and effective ontology storage and management, and coexistence of heterogeneous rule systems that surround an ontology mediating between it and application agents. Ontologies should represent a domain's semantics independently from "language", while any process that creates elements of such an ontology must be entirely rooted in some (natural) language, and any use of it will necessarily be through a (in general an agent's computer) language. To achieve the claims stated, we explicitly decompose ontological resources into ontology bases in the form of simple binary facts called lexons and into so-called ontological commitments in the form of description rules and constraints. Ontology bases, in a logic sense, become "representationless" mathematical objects which constitute the range of a classical interpretation mapping from a first order language, assumed to lexically represent the commitment or binding of an application or task to such an ontology base. Implementations of ontologies become database-like on-line resources in the model-theoretic sense. The resulting architecture makes it possible to materialize the (crucial) notion of commitment as a separate layer of (software agent) services, mediating between the ontology base and those application instances that commit to the ontology. We claim it also leads to methodological approaches that naturally extend key aspects of database modeling theory and practice. We discuss examples of the prototype DOGMA implementation of the ontology base server and commitment server.
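The decomposition into a lexon base and a commitment layer can be sketched very simply: the base is a flat set of binary facts, and a commitment checks an application's facts against selected lexons plus extra constraints. Everything below (the lexon contents, the 5-tuple shape, the constraint) is an invented toy, not the DOGMA data model itself.

```python
# Hypothetical lexon base: (context, term1, role, co-role, term2) facts.
LEXON_BASE = {
    ("biblio", "Book", "has", "is_of", "Title"),
    ("biblio", "Book", "written_by", "writes", "Author"),
    ("biblio", "Author", "has", "is_of", "Name"),
}

def commits_to(lexons, constraints, instance):
    """Check an application instance against a commitment: every asserted
    fact must be grounded in some lexon, and every constraint (a predicate
    over the instance) must hold."""
    for (t1, role, t2) in instance:
        if not any(l[1] == t1 and l[2] == role and l[4] == t2 for l in lexons):
            return False  # fact not grounded in the ontology base
    return all(c(instance) for c in constraints)

# Application-specific constraint: a Book description must carry a Title.
has_title = lambda inst: any(r == "has" and t2 == "Title" for (_, r, t2) in inst)

ok = commits_to(LEXON_BASE, [has_title],
                [("Book", "has", "Title"), ("Book", "written_by", "Author")])
print(ok)  # → True
```

The point of the split is visible even at this scale: several applications can share the same lexon base while each brings its own constraint layer, which is the mediating "commitment" service the abstract describes.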

Book
01 Jan 2002
TL;DR: This chapter discusses relationships in knowledge representation and reasoning from Classical Mereology to Complex Part-Whole Relations, and compares Sets of Semantic Relations in Ontologies.
Abstract: Introduction. List of Contributors. Part I: Types of Relationships. 1. Hyponymy and Its Varieties. 2. On the Semantics of Troponymy. 3. Meronymic Relationships: From Classical Mereology to Complex Part-Whole Relations. 4. The Many Facets of the Cause-Effect Relation. Part II: Relationships in Knowledge Representation and Reasoning. 5. Internally-Structured Conceptual Models in Cognitive Semantics. 6. Comparing Sets of Semantic Relations in Ontologies. 7. Identity and Subsumption. 8. Logic of Relationships. Part III: Applications of Relationships. 9. Thesaural Relations in Information Retrieval. 10. Identifying Semantic Relations in Text for Information Retrieval and Information Extraction. 11. A Conceptual Framework for the Biomedical Domain. 12. Visual Analysis and Exploration of Relationships. Index.

Book ChapterDOI
23 Sep 2002
TL;DR: This paper presents a new logic programming language for modelling Agents and Multi-Agent systems in computational logic, and introduces a novel approach to the language semantics, called the evolutionary semantics.
Abstract: This paper presents a new logic programming language for modelling Agents and Multi-Agent systems in computational logic. The basic objective of the specification of this new language has been the identification and the formalization of what we consider to be the basic patterns for reactivity, proactivity, internal "thinking", and "memory". The formalization models these concepts by introducing different kinds of events, with a suitable treatment. We introduce a novel approach to the language semantics, called the evolutionary semantics.

Journal ArticleDOI
TL;DR: This work presents the concept of explanation in a deductive way, and defines multiple revision operators with respect to sets of sentences (representing explanations), giving representation theorems.

Book ChapterDOI
08 Apr 2002
TL;DR: A semantic analysis of a recently proposed formalism for local reasoning, where a specification can concentrate on only those cells that a program accesses, shows the soundness and completeness of a rule that allows frame axioms, which describe invariant properties of portions of heap memory, to be inferred automatically.
Abstract: We present a semantic analysis of a recently proposed formalism for local reasoning, where a specification (and hence proof) can concentrate on only those cells that a program accesses. Our main results are the soundness and, in a sense, completeness of a rule that allows frame axioms, which describe invariant properties of portions of heap memory, to be inferred automatically; thus, these axioms can be avoided when writing specifications.
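The rule analyzed here is what separation logic now calls the frame rule; in the standard notation (a sketch of its usual statement, not quoted from the paper):

```latex
% Frame rule: an invariant R over heap cells that C never accesses can be
% conjoined, via the separating conjunction *, to a local specification,
% provided C modifies no variable occurring free in R.
\[
  \frac{\{P\}\; C\; \{Q\}}
       {\{P \ast R\}\; C\; \{Q \ast R\}}
  \qquad \mathrm{modifies}(C) \cap \mathrm{free}(R) = \emptyset
\]
```

The soundness and completeness results of the paper concern exactly this inference: the frame axiom R describing untouched heap need not be written by hand, because it can be added around any valid local triple.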