
Showing papers on "Knowledge representation and reasoning" published in 2006


Journal ArticleDOI
TL;DR: The experimental results confirm the solidity of DLV and highlight its potential for emerging application areas like knowledge management and information integration, and the main international projects investigating the potential of the system for industrial exploitation are described.
Abstract: Disjunctive Logic Programming (DLP) is an advanced formalism for knowledge representation and reasoning, which is very expressive in a precise mathematical sense: it allows one to express every property of finite structures that is decidable in the complexity class Σ^P_2 (NP^NP). Thus, under widely believed assumptions, DLP is strictly more expressive than normal (disjunction-free) logic programming, whose expressiveness is limited to properties decidable in NP. Importantly, apart from enlarging the class of applications which can be encoded in the language, disjunction often allows for representing problems of lower complexity in a simpler and more natural fashion. This article presents the DLV system, which is widely considered the state-of-the-art implementation of disjunctive logic programming, and addresses several aspects. As for problem solving, we provide a formal definition of its kernel language, function-free disjunctive logic programs (also known as disjunctive datalog), extended by weak constraints, which are a powerful tool to express optimization problems. We then illustrate the usage of DLV as a tool for knowledge representation and reasoning, describing a new declarative programming methodology which allows one to encode complex problems (up to Δ^P_3-complete problems) in a declarative fashion. On the foundational side, we provide a detailed analysis of the computational complexity of the language of DLV, and by deriving new complexity results we chart a complete picture of the complexity of this language and important fragments thereof. Furthermore, we illustrate the general architecture of the DLV system, which has been influenced by these results. As for applications, we overview application front-ends which have been developed on top of DLV to solve specific knowledge representation tasks, and we briefly describe the main international projects investigating the potential of the system for industrial exploitation. Finally, we report about thorough experimentation and benchmarking, which has been carried out to assess the efficiency of the system. The experimental results confirm the solidity of DLV and highlight its potential for emerging application areas like knowledge management and information integration.
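To give a flavour of the guess-and-check methodology and of weak constraints, the Python sketch below emulates by brute force what a tiny disjunctive program with one weak constraint expresses (graph 3-colouring with a preference against one colour). The encoding shown in the comments, the graph, and the cost function are illustrative assumptions; DLV itself is not used here, and its stable-model machinery is far more sophisticated than this enumeration.

# The DLP program being emulated (DLV-style syntax, shown only as a comment):
#   col(X,r) v col(X,g) v col(X,b) :- node(X).   % guess a colouring
#   :- edge(X,Y), col(X,C), col(Y,C).            % hard constraint: proper colouring
#   :~ col(X,g). [1:1]                           % weak constraint: penalise green nodes
from itertools import product

nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]
colours = ["r", "g", "b"]

best = None
for assignment in product(colours, repeat=len(nodes)):
    col = dict(zip(nodes, assignment))
    # hard constraint: adjacent nodes must get different colours
    if any(col[x] == col[y] for x, y in edges):
        continue
    # weak constraint: each green node costs 1; keep a minimum-cost model
    cost = sum(1 for x in nodes if col[x] == "g")
    if best is None or cost < best[0]:
        best = (cost, col)

print("optimal colouring:", best)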

1,306 citations


Journal ArticleDOI
TL;DR: Experiments demonstrate that the proposed method provides a similarity measure that shows a significant correlation to human intuition and can be used in a variety of applications that involve text knowledge representation and discovery.
Abstract: Sentence similarity measures play an increasingly important role in text-related research and applications in areas such as text mining, Web page retrieval, and dialogue systems. Existing methods for computing sentence similarity have been adopted from approaches used for long text documents. These methods process sentences in a very high-dimensional space and are consequently inefficient, require human input, and are not adaptable to some application domains. This paper focuses directly on computing the similarity between very short texts of sentence length. It presents an algorithm that takes account of semantic information and word order information implied in the sentences. The semantic similarity of two sentences is calculated using information from a structured lexical database and from corpus statistics. The use of a lexical database enables our method to model human common sense knowledge and the incorporation of corpus statistics allows our method to be adaptable to different domains. The proposed method can be used in a variety of applications that involve text knowledge representation and discovery. Experiments on two sets of selected sentence pairs demonstrate that the proposed method provides a similarity measure that shows a significant correlation to human intuition.
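As a rough illustration of the combination described above (not the authors' code), the sketch below mixes a semantic-vector similarity with a word-order similarity as S = delta * S_semantic + (1 - delta) * S_order. The lexical-database similarity is replaced by a hypothetical exact-match stub (word_sim), and the 0.4 matching threshold and delta = 0.85 are assumed values.

import math

def word_sim(w1, w2):
    # stub for the lexical-database word similarity used in the paper
    return 1.0 if w1 == w2 else 0.0

def order_vector(sentence, joint_words):
    # r_i = 1-based position of the joint word's best match in the sentence, or 0 if none
    vec = []
    for w in joint_words:
        best, pos = 0.0, 0
        for i, sw in enumerate(sentence, start=1):
            s = word_sim(w, sw)
            if s > best:
                best, pos = s, i
        vec.append(pos if best > 0.4 else 0)
    return vec

def sentence_similarity(s1, s2, delta=0.85):
    t1, t2 = s1.lower().split(), s2.lower().split()
    joint = sorted(set(t1) | set(t2))
    # semantic vectors: best match of each joint word against each sentence
    v1 = [max(word_sim(w, x) for x in t1) for w in joint]
    v2 = [max(word_sim(w, x) for x in t2) for w in joint]
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    s_sem = dot / norm if norm else 0.0
    # word-order similarity: 1 - ||r1 - r2|| / ||r1 + r2||
    r1, r2 = order_vector(t1, joint), order_vector(t2, joint)
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(r1, r2)))
    total = math.sqrt(sum((a + b) ** 2 for a, b in zip(r1, r2)))
    s_ord = 1.0 - diff / total if total else 1.0
    return delta * s_sem + (1 - delta) * s_ord

print(sentence_similarity("a quick brown dog", "a quick brown fox"))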

850 citations


Journal ArticleDOI
01 Mar 2006
TL;DR: A generic rule-base inference methodology using the evidential reasoning (RIMER) approach is proposed, capable of capturing vagueness, incompleteness, and nonlinear causal relationships, while traditional if-then rules can be represented as a special case.
Abstract: In this paper, a generic rule-base inference methodology using the evidential reasoning (RIMER) approach is proposed. Existing knowledge-base structures are first examined, and knowledge representation schemes under uncertainty are then briefly analyzed. Based on this analysis, a new knowledge representation scheme in a rule base is proposed using a belief structure. In this scheme, a rule base is designed with belief degrees embedded in all possible consequents of a rule. Such a rule base is capable of capturing vagueness, incompleteness, and nonlinear causal relationships, while traditional if-then rules can be represented as a special case. Other knowledge representation parameters such as the weights of both attributes and rules are also investigated in the scheme. In an established rule base, an input to an antecedent attribute is transformed into a belief distribution. Subsequently, inference in such a rule base is implemented using the evidential reasoning (ER) approach. The scheme is further extended to inference in hierarchical rule bases. A numerical study is provided to illustrate the potential applications of the proposed methodology.
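To make the representation scheme above concrete, here is a simplified sketch with invented numbers: each rule carries belief degrees over the consequents ("low"/"high"), a numeric input is transformed into matching degrees over the antecedent referential values, and activated rules are combined. The final combination here is a plain activation-weighted sum, not the recursive evidential reasoning (ER) algorithm the paper actually uses, and the single-attribute rule base is purely illustrative.

def match_degrees(x, referential_values):
    # distribute a numeric input over adjacent referential values (linear transformation)
    degrees = {v: 0.0 for v in referential_values}
    for lo, hi in zip(referential_values, referential_values[1:]):
        if lo <= x <= hi:
            degrees[lo] = (hi - x) / (hi - lo)
            degrees[hi] = 1.0 - degrees[lo]
    return degrees

# belief rule base over one antecedent attribute with referential values 0, 5, 10;
# each consequent is a belief distribution over {"low", "high"}
rules = [
    {"antecedent": 0,  "weight": 1.0, "belief": {"low": 1.0, "high": 0.0}},
    {"antecedent": 5,  "weight": 1.0, "belief": {"low": 0.4, "high": 0.6}},
    {"antecedent": 10, "weight": 0.8, "belief": {"low": 0.0, "high": 1.0}},
]

def infer(x):
    alpha = match_degrees(x, [0, 5, 10])
    # activation weight of each rule: matching degree times rule weight, normalised
    w = [r["weight"] * alpha[r["antecedent"]] for r in rules]
    total = sum(w) or 1.0
    w = [wi / total for wi in w]
    combined = {"low": 0.0, "high": 0.0}
    for wi, r in zip(w, rules):
        for c, b in r["belief"].items():
            combined[c] += wi * b
    return combined

print(infer(7.0))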

606 citations


Journal ArticleDOI
TL;DR: The current position in ontologies and how they have become institutionalized within biomedicine are reviewed, along with the benefits that greater formality might bring to ontologies and their use within biomedical informatics.
Abstract: In recent years, as a knowledge-based discipline, bioinformatics has been made more computationally amenable. After its beginnings as a technology advocated by computer scientists to overcome problems of heterogeneity, ontology has been taken up by biologists themselves as a means to consistently annotate features from genotype to phenotype. In medical informatics, artifacts called ontologies have been used for a longer period of time to produce controlled lexicons for coding schemes. In this article, we review the current position in ontologies and how they have become institutionalized within biomedicine. As the field has matured, the much older philosophical aspects of ontology have come into play. With this and the institutionalization of ontology has come greater formality. We review this trend and what benefits it might bring to ontologies and their use within biomedicine.

388 citations


Proceedings ArticleDOI
01 Mar 2006
TL;DR: Spring Symposium on Formalizing and Compiling Background Knowledge and Its Applications to Knowledge Representation and Question Answering, Stanford, CA, March 2006.
Abstract: Spring Symposium on Formalizing and Compiling Background Knowledge and Its Applications to Knowledge Representation and Question Answering, Stanford, CA, March 2006.

386 citations


Journal ArticleDOI
TL;DR: The FOGA (fuzzy ontology generation framework) is proposed for automatic generation of fuzzy ontology on uncertainty information, and a fuzzy-based technique for integrating other attributes of the database into the ontology is proposed.
Abstract: Ontology is an effective conceptualism commonly used for the semantic Web. Fuzzy logic can be incorporated into ontology to represent uncertainty information. Typically, fuzzy ontology is generated from a predefined concept hierarchy. However, constructing a concept hierarchy for a certain domain can be a difficult and tedious task. To tackle this problem, this paper proposes the FOGA (fuzzy ontology generation framework) for automatic generation of fuzzy ontology on uncertainty information. The FOGA framework comprises the following components: fuzzy formal concept analysis, concept hierarchy generation, and fuzzy ontology generation. We also discuss approximating reasoning for incremental enrichment of the ontology with new upcoming data. Finally, a fuzzy-based technique for integrating other attributes of the database into the ontology is proposed.
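As a small illustration of the fuzzy formal concept analysis ingredient mentioned above (not the FOGA implementation), the sketch below uses a one-sided fuzzy context: the intent of a crisp set of objects takes the minimum membership per attribute, and the extent of a fuzzy intent is the set of objects whose memberships dominate it. The documents and membership degrees are invented.

context = {  # fuzzy object-attribute context: membership of each object in each attribute
    "doc1": {"sports": 0.9, "finance": 0.1},
    "doc2": {"sports": 0.7, "finance": 0.3},
    "doc3": {"sports": 0.2, "finance": 0.8},
}

def intent(objects):
    # fuzzy intent: for each attribute, the minimum membership over the chosen objects
    attrs = next(iter(context.values())).keys()
    return {a: min(context[o][a] for o in objects) for a in attrs}

def extent(fuzzy_intent):
    # crisp extent: objects whose memberships dominate the fuzzy intent
    return [o for o, row in context.items()
            if all(row[a] >= d for a, d in fuzzy_intent.items())]

b = intent(["doc1", "doc2"])
print(b, extent(b))  # the pair (extent(b), b) forms a one-sided fuzzy formal concept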

376 citations


Journal Article
TL;DR: A use-case model for an architectural knowledge base, together with its underlying ontology, is described, and a small case study is presented in which available architectural knowledge is modeled in a commercial tool, the Aduna Cluster Map Viewer, which is aimed at ontology-based visualization.
Abstract: Architectural knowledge consists of architecture design as well as the design decisions, assumptions, context, and other factors that together determine why a particular solution is the way it is. Except for the architecture design part, most of the architectural knowledge usually remains hidden, tacit in the heads of the architects. We conjecture that an explicit representation of architectural knowledge is helpful for building and evolving quality systems. If we had a repository of architectural knowledge for a system, what would it ideally contain, how would we build it, and exploit it in practice? In this paper we describe a use-case model for an architectural knowledge base, together with its underlying ontology. We present a small case study in which we model available architectural knowledge in a commercial tool, the Aduna Cluster Map Viewer, which is aimed at ontology-based visualization. Putting together ontologies, use cases and tool support, we are able to reason about which types of architecting tasks can be supported, and how this can be done.

354 citations


Journal ArticleDOI
TL;DR: The chapters on reasoning under uncertainty and inexact reasoning are very well constructed and are the most attractive feature of the book; the price shouldn't be a deterrent to anyone interested in the subject, considering the book has become one of the standard textbooks on expert systems.
Abstract: Expert Systems: Principles and Programming Joseph C. Giarratano and Gary D. Riley Fourth Edition, Course Technology, Boston, MA, 2004 856 pages, $131.95 Language: English ISBN: 0534384471 Expert systems is one of the most successful, practical, and recognizable subsets of classical Artificial Intelligence. The ability to supply decisions or decision-making support in a specific domain has seen a vast application of expert systems in various fields such as healthcare, military, business, accounting, production, video games, and human resources. The theoretical and practical knowledge of expert systems is indispensable for a computer science, management information science or software engineering student. In its fourth edition, Expert Systems: Principles and Programming, as the title suggests, aims to be used as a complete textbook on this topic. The authors are respected authorities on expert systems, and were involved in the development of the popular CLIPS expert system tool which is dealt with thoroughly in this book. The book itself is divided into two major sections: the first six chapters deal with the theory of expert systems, the rationale behind their historical development and the current state of research; the next six chapters are an introduction to CLIPS and how to use it to develop practical applications. The clear division between theory and practice serves to guide the student in choosing specific topics throughout the book. Each chapter ends with a set of problems and a useful bibliography. Appendix G provides a comprehensive list of software resources and will prove to be a very valuable asset to the student interested in exploring the practical aspects of expert systems as well as those who will be developing commercial applications incorporating expert systems. The first six chapters provide an in-depth introduction to expert systems, and deal with the representation of knowledge and methods of inference and reasoning. Each of these topics is introduced from scratch—for example, when dealing with knowledge representation, logic is described from its very basics, starting from propositional logic. The chapters on reasoning under uncertainty and inexact reasoning are very well constructed and are the most attractive feature of the book for me. Topics such as fuzzy logic and Dempster-Shafer theory are explained in good detail along with their practical significance. There is a logical flow through the chapters which helps to tie in the concepts and give an understanding of the progression of topics. Also, the disadvantages and pitfalls of expert systems in general, and of specific topics, are well documented. The second section, focusing on the CLIPS expert system tool, is meant to be an aid in understanding and reinforcing the concepts of the first section, but does not require a thorough reading of that section. CLIPS, developed in part by the authors at NASA, has become quite popular as a tool for studying expert systems in many university courses as well as being used in several commercial and industrial applications. Expert system programming in CLIPS does not require much prior programming experience and can be picked up rapidly thanks to its simple syntax and the helpful examples in the book. A new feature of the fourth edition is the introduction of COOL, the CLIPS Object-Oriented Language, which allows expert systems programmers to develop their systems in an object-oriented environment.
The section is supplemented by a CD containing Windows and MacOS executables for CLIPS and reference guides—all of which can be downloaded from the Internet as well. Although the CLIPS examples deal with some problems of uncertainty in reasoning, there is no mention of using fuzzy logic or Dempster-Shafer theory in a practical setting—a serious disadvantage to the effectiveness of the book, considering the availability of tools such as FuzzyCLIPS. The book itself is expensive (on the wrong side of a hundred dollars), but that seems to be the trend for academic books these days and shouldn't be a deterrent to anyone interested in the subject, considering it has become one of the standard textbooks on expert systems. Raheel Ahmad Department of Computer Science Southern Illinois University Carbondale, IL 62901, USA

301 citations


Proceedings Article
02 Jun 2006
TL;DR: DL+log is defined, a general framework for the integration of Description Logics and disjunctive Datalog rules that allows for a tighter form of integration between DL-KBs and Datalog rules which overcomes the main representational limits of the approaches based on the safeness condition.
Abstract: The integration of Description Logics and Datalog rules presents many semantic and computational problems. In particular, reasoning in a system fully integrating Description Logics knowledge bases (DL-KBs) and Datalog programs is undecidable. Many proposals have overcome this problem through a "safeness" condition that limits the interaction between the DL-KB and the Datalog rules. Such a safe integration of Description Logics and Datalog provides for systems with decidable reasoning, at the price of a strong limitation in terms of expressive power. In this paper we define DL+log, a general framework for the integration of Description Logics and disjunctive Datalog. From the knowledge representation viewpoint, DL+log extends previous proposals, since it allows for a tighter form of integration between DL-KBs and Datalog rules which overcomes the main representational limits of the approaches based on the safeness condition. From the reasoning viewpoint, we present algorithms for reasoning in DL+log, and prove decidability and complexity of reasoning in DL+log for several Description Logics. To the best of our knowledge, DL+log constitutes the most powerful decidable combination of Description Logics and disjunctive Datalog rules proposed so far.

246 citations


Book
01 Jun 2006
TL;DR: The authors provide an in-depth examination of core text mining and link detection algorithms and operations, and examine advanced pre-processing techniques, knowledge representation considerations, and visualization approaches for text mining.
Abstract: Providing an in-depth examination of core text mining and link detection algorithms and operations, this text examines advanced pre-processing techniques, knowledge representation considerations, and visualization approaches.

206 citations


Proceedings ArticleDOI
Philip V. Ogren1
04 Jun 2006
TL;DR: A general-purpose text annotation tool called Knowtator is introduced that facilitates the manual creation of annotated corpora that can be used for evaluating or training a variety of natural language processing systems.
Abstract: A general-purpose text annotation tool called Knowtator is introduced. Knowtator facilitates the manual creation of annotated corpora that can be used for evaluating or training a variety of natural language processing systems. Building on the strengths of the widely used Protege knowledge representation system, Knowtator has been developed as a Protege plug-in that leverages Protege's knowledge representation capabilities to specify annotation schemas. Knowtator's unique advantage over other annotation tools is the ease with which complex annotation schemas (e.g. schemas which have constrained relationships between annotation types) can be defined and incorporated into use. Knowtator is available under the Mozilla Public License 1.1 at http://bionlp.sourceforge.net/Knowtator.

Book ChapterDOI
13 Nov 2006
TL;DR: An algorithm for reducing a DL knowledge base to a disjunctive datalog program is developed and shown to perform well on knowledge bases with large ABoxes but simple TBoxes; in contrast, on knowledge bases with large and complex TBoxes, existing techniques still perform better.
Abstract: Many modern applications of description logics (DLs) require answering queries over large data quantities, structured according to relatively simple ontologies. For such applications, we conjectured that reusing ideas of deductive databases might improve scalability of DL systems. Hence, in our previous work, we developed an algorithm for reducing a DL knowledge base to a disjunctive datalog program. To test our conjecture, we implemented our algorithm in a new DL reasoner KAON2, which we describe in this paper. Furthermore, we created a comprehensive test suite and used it to conduct a performance evaluation. Our results show that, on knowledge bases with large ABoxes but with simple TBoxes, our technique indeed shows good performance; in contrast, on knowledge bases with large and complex TBoxes, existing techniques still perform better. This allowed us to gain important insights into strengths and weaknesses of both approaches.

Book ChapterDOI
01 Jun 2006
TL;DR: In this framework, a critical locus of proficiency lies in the representation of experts' domain knowledge, that is, how their knowledge is organized or structured, and how their representations might differ from those of novices.
Abstract: Introduction Expertise, by definition, refers to the manifestation of skills and understanding resulting from the accumulation of a large body of knowledge. This implies that in order to understand how experts perform and why they are more capable than non-experts, we must understand the representation of their knowledge, that is, how their knowledge is organized or structured, and how their representations might differ from those of novices. For example, if a child who is fascinated with dinosaurs and has learned a lot about them correctly infers attributes of some dinosaurs that were new to them by reasoning analogically to some known dinosaurs (e.g., the shape of teeth for carnivores versus vegetarians), we would not conclude that the "expert" child has a more powerful analogical reasoning strategy. Instead, we would conclude that such a global or domain-general reasoning strategy is available to all children, but that novice children might reason analogically to some other familiar domain, such as animals (rather than dinosaurs), as our data have shown (Chi, Hutchinson, & Robin, 1989). Thus, the analogies of domain novices are less powerful not necessarily because they lack adequate analogical reasoning strategies, although they may, but because they lack the appropriate domain knowledge from which analogies can be drawn. In this framework, then, a critical locus of proficiency lies in the representation of their domain knowledge.

Journal Article
TL;DR: The Web Service Modeling Language (WSML) as mentioned in this paper is a formal language for the specification of different aspects of Semantic Web Services, starting from the intersection of Datalog and the Description Logic SHIQ.
Abstract: The Web Service Modeling Language (WSML) is a language for the specification of different aspects of Semantic Web Services. It provides a formal language for the Web Service Modeling Ontology WSMO which is based on well-known logical formalisms, specifying one coherent language framework for the semantic description of Web Services, starting from the intersection of Datalog and the Description Logic SHIQ. This core language is extended in the directions of Description Logics and Logic Programming in a principled manner with strict layering. WSML distinguishes between conceptual and logical modeling in order to support users who are not familiar with formal logic, while not restricting the expressive power of the language for the expert user. IRIs play a central role in WSML as identifiers. Furthermore, WSML defines XML and RDF serializations for inter-operation over the Semantic Web.

Journal ArticleDOI
TL;DR: Fuzzy OWL is created, a fuzzy extension to OWL that can capture imprecise and vague knowledge, and the reasoning platform, fuzzy reasoning engine (FiRE), lets Fuzzy OWL capture and reason about such knowledge.
Abstract: The semantic Web must handle information from applications that have special knowledge representation needs and that face uncertain, imprecise knowledge. More precisely, some applications deal with random information and events, others deal with imprecise and fuzzy knowledge, and still others deal with missing or distorted information - resulting in uncertainty. To deal with uncertainty in the semantic Web and its applications, many researchers have proposed extending OWL and the description logic (DL) formalisms with special mathematical frameworks. Researchers have proposed probabilistic, possibilistic, and fuzzy extensions, among others. Researchers have studied fuzzy extensions most extensively, providing impressive results on semantics, reasoning algorithms, and implementations. Building on these results, we've created a fuzzy extension to OWL called Fuzzy OWL. Fuzzy OWL can capture imprecise and vague knowledge. Moreover, our reasoning platform, fuzzy reasoning engine (FiRE), lets Fuzzy OWL capture and reason about such knowledge.
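The toy sketch below shows only the flavour of the fuzzy semantics such extensions build on (Zadeh-style min/max/complement over membership degrees); the individuals, concepts, and degrees are invented, and it does not attempt any of the reasoning services that FiRE provides.

degrees = {  # fuzzy concept assertions: degree of each individual in each atomic concept
    ("mary", "Tall"): 0.7,
    ("mary", "Athletic"): 0.4,
}

def deg(individual, concept):
    # concept is either an atomic name or a tuple ("and" | "or" | "not", ...)
    if isinstance(concept, str):
        return degrees.get((individual, concept), 0.0)
    op = concept[0]
    if op == "not":
        return 1.0 - deg(individual, concept[1])
    subs = [deg(individual, c) for c in concept[1:]]
    return min(subs) if op == "and" else max(subs)

print(deg("mary", ("and", "Tall", ("not", "Athletic"))))  # -> 0.6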

ReportDOI
TL;DR: Issues associated with Level 2 Information Fusion (Situation Assessment) are presented, including: user perception and perceptual reasoning representation, knowledge discovery process models, procedural versus logical reasoning about relationships, user-fusion interaction through performance metrics, and syntactic and semantic representations.
Abstract: Situation assessment (SA) involves deriving relations among entities, e.g., the aggregation of object states (i.e., classification and location). While SA has been recognized in the information fusion and human factors literature, there still exist open questions regarding knowledge representation and reasoning methods to afford SA. For instance, while lots of data is collected over a region of interest, how does this information get presented to an attention-constrained user? The information overload can deteriorate cognitive reasoning so a pragmatic solution to knowledge representation is needed for effective and efficient situation understanding. In this paper, we present issues associated with Level 2 Information Fusion (Situation Assessment) including: (1) user perception and perceptual reasoning representation, (2) knowledge discovery process models, (3) procedural versus logical reasoning about relationships, (4) user-fusion interaction through performance metrics, and (5) syntactic and semantic representations. While a definitive conclusion is not the aim of the paper, many critical issues are proposed in order to characterize future successful strategies for knowledge representation, presentation, and reasoning for situation assessment.

Book ChapterDOI
11 Jun 2006
TL;DR: A general method for combining and evaluating sub-programs belonging to arbitrary classes is introduced, thus enlarging the variety of programs whose execution is practicable and keeping the desirable advantages of the full language.
Abstract: Towards providing a suitable tool for building the Rule Layer of the Semantic Web, hex-programs have been introduced as a special kind of logic programs featuring capabilities for higher-order reasoning, interfacing with external sources of computation, and default negation. Their semantics is based on the notion of answer sets, providing a transparent interoperability with the Ontology Layer of the Semantic Web and full declarativity. In this paper, we identify classes of hex-programs feasible for implementation yet keeping the desirable advantages of the full language. A general method for combining and evaluating sub-programs belonging to arbitrary classes is introduced, thus enlarging the variety of programs whose execution is practicable. Implementation activity on the current prototype is also reported.

Posted Content
TL;DR: In this paper, the authors propose a logic-based approach for representing and enforcing SLA rules and describe a proof-of-concept implementation of the ContractLog KR for automated SLA management.
Abstract: Outsourcing of complex IT infrastructure to IT service providers has increased substantially during the past years. IT service providers must be able to fulfil their service-quality commitments based upon predefined Service Level Agreements (SLAs) with the service customer. They need to manage, execute and maintain thousands of SLAs for different customers and different types of services, which needs new levels of flexibility and automation not available with the current technology. The complexity of contractual logic in SLAs requires new forms of knowledge representation to automatically draw inferences and execute contractual agreements. A logic-based approach provides several advantages including automated rule chaining allowing for compact knowledge representation as well as flexibility to adapt to rapidly changing business requirements. We suggest adequate logical formalisms for representation and enforcement of SLA rules and describe a proof-of-concept implementation. The article describes selected formalisms of the ContractLog KR and their adequacy for automated SLA management and presents results of experiments to demonstrate flexibility and scalability of the approach.
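As a minimal illustration of the rule-chaining idea (the facts, thresholds, and rule names are invented and this is not the ContractLog KR or its logic engine), the sketch below derives an SLA violation and a resulting penalty obligation from measured facts by naive forward chaining, instead of hard-coding the contract procedurally.

facts = {("availability", "serviceA"): 97.2}  # measured service level, in percent

rules = [
    # (derived fact, condition over the current fact base)
    (("violated", "serviceA"),
     lambda f: f.get(("availability", "serviceA"), 100.0) < 98.0),
    (("penalty_due", "serviceA"),
     lambda f: f.get(("violated", "serviceA"), False)),
]

def forward_chain(facts, rules):
    # repeatedly fire rules whose conditions hold until no new facts are derived
    changed = True
    while changed:
        changed = False
        for head, cond in rules:
            if head not in facts and cond(facts):
                facts[head] = True
                changed = True
    return facts

print(forward_chain(dict(facts), rules))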

Journal Article
TL;DR: An ontology-based framework for bridging learning design and learning object content is described, and how this use of ontologies can result in more effective (semi-)automatic tools and services that increase the level of reusability is shown.
Abstract: The paper describes an ontology-based framework for bridging learning design and learning object content. In present solutions, researchers have proposed conceptual models and developed tools for both of those subjects, but without detailed discussions of how they can be used together. In this paper we advocate the use of ontologies to explicitly specify all learning designs, learning objects, and the relations between them, and show how this use of ontologies can result in more effective (semi-)automatic tools and services that increase the level of reusability. We first define a three-part conceptual model that introduces an intermediate level between learning design and learning objects called the learning object context. We then use ontologies to facilitate the representation of these concepts: LOCO is a new ontology based on IMS-LD, ALOCoM is an existing ontology for learning objects, and LOCO-Cite is a new ontology for the learning object contextual model. We conclude by showing the applicability of the proposed framework in a use case study.

Journal Article
TL;DR: An ontology to represent the semantics of the IMS Learning Design (IMS LD) specification, a meta-language used to describe the main elements of the learning design process, is presented and its implementation in OWL, the standard language of the Semantic Web, is provided.
Abstract: In this paper, we present an ontology to represent the semantics of the IMS Learning Design (IMS LD) specification, a meta-language used to describe the main elements of the learning design process. The motivation of this work stems from the expressiveness limitations found in the current XML-Schema implementation of the IMS LD conceptual model. To overcome these limitations, we have developed an ontology using Protege at the knowledge level. In addition, we provide its implementation in OWL, the standard language of the Semantic Web, and the set of associated axioms in first-order logic. The OWL file is available at http://www.eume.net/ontology/imsld_a.owl.

Proceedings ArticleDOI
13 Nov 2006
TL;DR: This work presents a general approach for representing the knowledge of a potential expert as a mixture of language models from associated documents, which allows the method to exploit the documents' underlying structure and complex language features.
Abstract: Enterprise corpora contain evidence of what employees work on and therefore can be used to automatically find experts on a given topic. We present a general approach for representing the knowledge of a potential expert as a mixture of language models from associated documents. First we retrieve documents given the expert's name using a generative probabilistic technique and weight the retrieved documents according to an expert-specific posterior distribution. Then we model the expert indirectly through the set of associated documents, which allows us to exploit their underlying structure and complex language features. Experiments show that our method has excellent performance on the TREC 2005 expert search task and that it effectively collects and combines evidence for expertise in a heterogeneous collection.
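A compact sketch of the document-mixture view described above, under stated assumptions: candidates are scored as score(e, q) = Σ_d p(q|d) p(d|e) with unigram document models, Jelinek-Mercer smoothing against the whole collection, and uniform candidate-document association weights. The corpus, the associations, and lam = 0.5 are invented; the paper's retrieval-based association and posterior weighting are richer than this.

from collections import Counter

docs = {
    "d1": "ontology reasoning description logics",
    "d2": "expert search enterprise corpora retrieval",
    "d3": "language models for information retrieval",
}
experts = {"alice": ["d1"], "bob": ["d2", "d3"]}  # candidate-document associations

collection = Counter(" ".join(docs.values()).split())
coll_len = sum(collection.values())

def p_term_given_doc(t, d, lam=0.5):
    # unigram document model with Jelinek-Mercer smoothing against the collection
    tf = Counter(docs[d].split())
    p_doc = tf[t] / sum(tf.values())
    p_coll = collection[t] / coll_len
    return (1 - lam) * p_doc + lam * p_coll

def score(expert, query):
    total = 0.0
    assoc = experts[expert]
    for d in assoc:
        p_q_d = 1.0
        for t in query.split():
            p_q_d *= p_term_given_doc(t, d)
        total += p_q_d * (1.0 / len(assoc))  # uniform p(d | expert)
    return total

for e in experts:
    print(e, score(e, "retrieval models"))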

Book ChapterDOI
Jian Zhou1, Li Ma2, Qiaoling Liu1, Lei Zhang2, Yong Yu1, Yue Pan2 
03 Sep 2006
TL;DR: Minerva as discussed by the authors is a storage and inference system for large-scale OWL ontologies on top of relational databases, which aims to meet scalability requirements of real applications and provide practical reasoning capability as well as high query performance.
Abstract: With the increasing use of ontologies in Semantic Web and enterprise knowledge management, it is critical to develop scalable and efficient ontology management systems. In this paper, we present Minerva, a storage and inference system for large-scale OWL ontologies on top of relational databases. It aims to meet scalability requirements of real applications and provide practical reasoning capability as well as high query performance. The method combines Description Logic reasoners for the TBox inference with logic rules for the ABox inference. Furthermore, it customizes the database schema based on inference requirements. User queries are answered by directly retrieving materialized results from the back-end database. The effective integration of ontology inference and storage is expected to improve reasoning efficiency, while querying without runtime inference guarantees satisfactory response time. Extensive experiments on the University Ontology Benchmark show the high efficiency and scalability of the Minerva system.
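The sketch below is a toy illustration of the materialization idea described above, with SQLite standing in for Minerva's relational back end and invented table names and data: ABox consequences are derived up front by a naive fixpoint over two rules (subclass transitivity and type propagation), so that queries become plain SQL retrieval with no reasoning at query time.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE subclass_of (sub TEXT, sup TEXT)")
con.execute("CREATE TABLE rdf_type (ind TEXT, cls TEXT)")
con.executemany("INSERT INTO subclass_of VALUES (?, ?)",
                [("Professor", "Faculty"), ("Faculty", "Person")])
con.execute("INSERT INTO rdf_type VALUES (?, ?)", ("anna", "Professor"))

def materialise():
    # naive fixpoint over two inference rules
    changed = True
    while changed:
        changed = False
        changed |= con.execute("""
            INSERT INTO subclass_of
            SELECT DISTINCT a.sub, b.sup
            FROM subclass_of a JOIN subclass_of b ON a.sup = b.sub
            WHERE NOT EXISTS (SELECT 1 FROM subclass_of c
                              WHERE c.sub = a.sub AND c.sup = b.sup)
        """).rowcount > 0
        changed |= con.execute("""
            INSERT INTO rdf_type
            SELECT DISTINCT t.ind, s.sup
            FROM rdf_type t JOIN subclass_of s ON t.cls = s.sub
            WHERE NOT EXISTS (SELECT 1 FROM rdf_type u
                              WHERE u.ind = t.ind AND u.cls = s.sup)
        """).rowcount > 0

materialise()
# query answering is now direct retrieval of materialized results
print(con.execute("SELECT ind FROM rdf_type WHERE cls = 'Person'").fetchall())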

01 Jan 2006
TL;DR: This thesis presents methods for introducing ontologies in information retrieval; it appears that the fuzzy set model comprises the flexibility needed when generalizing to an ontology-based retrieval model and, with the introduction of a hierarchical fuzzy aggregation principle, compound concepts can be handled in a straightforward and natural manner.
Abstract: In this thesis, we will present methods for introducing ontologies in information retrieval. The main hypothesis is that the inclusion of conceptual knowledge such as ontologies in the information retrieval process can contribute to the solution of major problems currently found in information retrieval. This utilization of ontologies has a number of challenges. Our focus is on the use of similarity measures derived from the knowledge about relations between concepts in ontologies, the recognition of semantic information in texts and the mapping of this knowledge into the ontologies in use, as well as how to fuse together the ideas of ontological similarity and ontological indexing into a realistic information retrieval scenario. To achieve the recognition of semantic knowledge in a text, shallow natural language processing is used during indexing that reveals knowledge to the level of noun phrases. Furthermore, we briefly cover the identification of semantic relations inside and between noun phrases, as well as discuss which kind of problems are caused by an increase in compoundness with respect to the structure of concepts in the evaluation of queries. Measuring similarity between concepts based on distances in the structure of the ontology is discussed. In addition, a shared nodes measure is introduced and, based on a set of intuitive similarity properties, compared to a number of different measures. In this comparison the shared nodes measure appears to be superior, though more computationally complex. Some of the major problems of shared nodes which relate to the way relations differ with respect to the degree they bring the concepts they connect closer are discussed. A generalized measure called weighted shared nodes is introduced to deal with these problems. Finally, the utilization of concept similarity in query evaluation is discussed. A semantic expansion approach that incorporates concept similarity is introduced and a generalized fuzzy set retrieval model that applies expansion during query evaluation is presented. While not commonly used in present information retrieval systems, it appears that the fuzzy set model comprises the flexibility needed when generalizing to an ontology-based retrieval model and, with the introduction of a hierarchical fuzzy aggregation principle, compound concepts can be handled in a straightforward and natural manner.
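A minimal sketch, in the spirit of the shared nodes measure discussed above: similarity is computed from the ancestor sets two concepts share in a concept hierarchy. The tiny is-a hierarchy, the symmetric weighting lam = 0.5, and the single relation type are simplifying assumptions; the thesis generalises this to weighted relations.

is_a = {  # child -> parents in a toy concept hierarchy
    "poodle": ["dog"], "dog": ["mammal"], "cat": ["mammal"],
    "mammal": ["animal"], "animal": [],
}

def ancestors(c):
    # upward closure of a concept, including the concept itself
    result = {c}
    for p in is_a.get(c, []):
        result |= ancestors(p)
    return result

def shared_nodes_sim(a, b, lam=0.5):
    sa, sb = ancestors(a), ancestors(b)
    shared = sa & sb
    return lam * len(shared) / len(sa) + (1 - lam) * len(shared) / len(sb)

print(shared_nodes_sim("poodle", "cat"))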

Journal Article
TL;DR: A pair of XSLT stylesheets have been developed to map from XML Metadata Interchange encodings of class diagrams to corresponding RDF schemas and to Java classes representing the concepts in the ontologies.
Abstract: This paper proposes the use of the Unified Modeling Language (UML) as a language for modelling ontologies for Web resources and the knowledge contained within them. To provide a mechanism for serialising and processing object diagrams representing knowledge, a pair of XSLT stylesheets have been developed to map from XML Metadata Interchange (XMI) encodings of class diagrams to corresponding RDF schemas and to Java classes representing the concepts in the ontologies. The Java code includes methods for marshalling and unmarshalling object-oriented information between in-memory data structures and RDF serialisations of that information. This provides a convenient mechanism for Java applications to share knowledge on the Web.
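Not the paper's XSLT, but a tiny Python sketch of the same mapping idea under an assumed input format: a UML-style class description is turned into RDF Schema triples (class, subclass link, and properties with domains). A real implementation would read XMI and also emit the Java accessor classes mentioned above.

classes = {"Book": {"superclass": "Document", "attributes": ["title", "isbn"]}}

def to_rdfs(classes, ns="http://example.org/onto#"):
    triples = []
    for name, desc in classes.items():
        c = ns + name
        triples.append((c, "rdf:type", "rdfs:Class"))
        if desc.get("superclass"):
            triples.append((c, "rdfs:subClassOf", ns + desc["superclass"]))
        for attr in desc.get("attributes", []):
            p = ns + attr
            triples.append((p, "rdf:type", "rdf:Property"))
            triples.append((p, "rdfs:domain", c))
    return triples

for t in to_rdfs(classes):
    print(t)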

Book ChapterDOI
01 Jan 2006
TL;DR: In this paper, a 3D laser range finder and scan matching method for the robot Kurt3D is presented, where surface attributes are extracted and incorporated in a forest of search trees in order to associate the data.
Abstract: A basic task of rescue robot systems is mapping of the environment. Localizing injured persons, guiding rescue workers and excavation equipment requires a precise 3D map of the environment. This paper presents a new 3D laser range finder and novel scan matching method for the robot Kurt3D [9]. Compared to previous machinery [12], the apex angle is enlarged to 360°. The matching is based on semantic information. Surface attributes are extracted and incorporated in a forest of search trees in order to associate the data, i.e., to establish correspondences. The new approach results in advances in speed and reliability.
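The data association step described above can be pictured as one nearest-neighbour search tree per semantic surface class. The sketch below (invented points and labels, with scipy's cKDTree standing in for the paper's forest of search trees) matches each labelled scan point only against model points carrying the same label.

import numpy as np
from scipy.spatial import cKDTree

model = {  # previously registered scan, split by semantic surface label
    "floor": np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]),
    "wall":  np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 1.0]]),
}
trees = {lab: cKDTree(pts) for lab, pts in model.items()}  # forest of search trees

def correspondences(scan_points, scan_labels):
    pairs = []
    for p, lab in zip(scan_points, scan_labels):
        dist, idx = trees[lab].query(p)  # nearest neighbour within the same class
        pairs.append((p, model[lab][idx], dist))
    return pairs

scan = np.array([[0.1, 0.0, 0.05], [0.05, 0.9, 1.1]])
for src, tgt, d in correspondences(scan, ["floor", "wall"]):
    print(src, "->", tgt, "distance", round(float(d), 3))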

Book ChapterDOI
16 Jul 2006
TL;DR: This paper describes the DOGMA-MESS methodology and system for scalable, community-grounded ontology engineering and illustrates this methodology with examples taken from a case of interorganizational competency ontology evolution in the vocational training domain.
Abstract: In this paper, we explore the process of interorganizational ontology engineering. Scalable ontology engineering is hard to do in interorganizational settings where there are many pre-existing organizational ontologies and rapidly changing collaborative requirements. A complex socio-technical process of ontology alignment and meaning negotiation is therefore required. In particular, we are interested in how to increase the efficiency and relevance of this process using context dependencies between ontological elements. We describe the DOGMA-MESS methodology and system for scalable, community-grounded ontology engineering. We illustrate this methodology with examples taken from a case of interorganizational competency ontology evolution in the vocational training domain.

Proceedings ArticleDOI
25 Mar 2006
TL;DR: This paper presents a model, a methodology and a software framework for the semantic web (Intelligent 3D Visualization Platform - I3DVP) for the development of interoperable intelligent visualization applications that support the coupling of graphics and virtual reality scenes with domain knowledge of different domains.
Abstract: A great challenge in information visualization today is to provide models and software that effectively integrate the graphics content of scenes with domain-specific knowledge so that the users can effectively query, interpret, personalize and manipulate the visualized information [1]. Moreover, it is important that the intelligent visualization applications are interoperable in the semantic web environment and thus, require that the models and software supporting them integrate state-of-the-art international standards for knowledge representation, graphics and multimedia. In this paper, we present a model, a methodology and a software framework for the semantic web (Intelligent 3D Visualization Platform - I3DVP) for the development of interoperable intelligent visualization applications that support the coupling of graphics and virtual reality scenes with domain knowledge of different domains. The graphics content and the semantics of the scenes are married into a consistent and cohesive ontological model while at the same time knowledge-based techniques for the querying, manipulation, and semantic personalization of the scenes are introduced. We also provide methods for knowledge driven information visualization and visualization-aided decision making based on inference by reasoning.

Book ChapterDOI
05 Nov 2006
TL;DR: The collection of statistical data allows us to perform analysis and report some trends and it is noted that of the largest ontologies surveyed here, most do not exceed the description logic expressivity of $\mathcal{ALC}$.
Abstract: We survey nearly 1300 OWL ontologies and RDFS schemas. The collection of statistical data allows us to perform analysis and report some trends. Though most of the documents are syntactically OWL Full, very few stay in OWL Full when they are syntactically patched by adding type triples. We also report the frequency of occurrences of OWL language constructs and the shape of class hierarchies in the ontologies. Finally, we note that of the largest ontologies surveyed here, most do not exceed the description logic expressivity of $\mathcal{ALC}$.

Book ChapterDOI
TL;DR: The principles on which the CRM is based are discussed, followed by a more detailed look at the actual mechanisms employed, and the structure is compared with other biomedical ontologies in use or proposed.
Abstract: GALEN seeks to provide re-usable terminology resources for clinical systems. The heart of GALEN is the Common Reference Model (CRM) formulated in a specialised description logic. The CRM is based on a set of principles that have evolved over the period of the project and illustrate key issues to be addressed by any large medical ontology. The principles on which the CRM is based are discussed followed by a more detailed look at the actual mechanisms employed. Finally the structure is compared with other biomedical ontologies in use or proposed.

Book
01 Jan 2006
TL;DR: Among the contributions, the MIEL++ architecture is presented, in which RDB, CGs and XML meet for the sake of risk assessment in food products.
Abstract: Invited Papers.- Formal Ontology, Knowledge Representation and Conceptual Modelling: Old Inspirations, Still Unsolved Problems.- The Persuasive Expansion - Rhetoric, Information Architecture, and Conceptual Structure.- Revision Forever!.- Ontological Constitutions for Classes and Properties.- Peirce's Contributions to the 21st Century.- Two Iconicity Notions in Peirce's Diagrammatology.- Contributed Papers.- Simple Conceptual Graphs and Simple Concept Graphs.- Rules Dependencies in Backward Chaining of Conceptual Graphs Rules.- Thresholds and Shifted Attributes in Formal Concept Analysis of Data with Fuzzy Attributes.- Formal Concept Analysis with Constraints by Closure Operators.- Mining a New Fault-Tolerant Pattern Type as an Alternative to Formal Concept Discovery.- The MIEL++ Architecture When RDB, CGs and XML Meet for the Sake of Risk Assessment in Food Products.- Some Notes on Proofs with Alpha Graphs.- DOGMA-MESS: A Meaning Evolution Support System for Interorganizational Ontology Engineering.- FCA-Based Browsing and Searching of a Collection of Images.- Semantology: Basic Methods for Knowledge Representations.- The Teridentity and Peircean Algebraic Logic.- Transaction Agent Modelling: From Experts to Concepts to Multi-Agent Systems.- Querying Formal Contexts with Answer Set Programs.- Towards an Epistemic Logic of Concepts.- Development of Intelligent Systems and Multi-Agents Systems with Amine Platform.- Ontologies in Amine Platform: Structures and Processes.- Building a Pragmatic Methodology for KR Tool Research and Development.- Simple Conceptual Graphs with Atomic Negation and Difference.- A Pattern-Based Approach to Conceptual Clustering in FOL.- Karl Popper's Critical Rationalism in Agile Software Development.- On Lattices in Access Control Models.- An Application of Relation Algebra to Lexical Databases.- A Framework for Analyzing and Testing Requirements with Actors in Conceptual Graphs.- Query-Based Multicontexts for Knowledge Base Browsing: An Evaluation.- Representation and Reasoning on Role-Based Access Control Policies with Conceptual Graphs.- Representing Wholes by Structure.