
Showing papers on "Suggested Upper Merged Ontology published in 2005"


Proceedings Article
01 Jan 2005
TL;DR: This article comprehensively reviews and provides insights on the pragmatics of ontology mapping and elaborates on a theoretical approach for defining ontology mapping.
Abstract: Ontology mapping is seen as a solution provider in today's landscape of ontology research. As the number of ontologies that are made publicly available and accessible on the Web increases steadily, so does the need for applications to use them. A single ontology is no longer enough to support the tasks envisaged by a distributed environment like the Semantic Web. Multiple ontologies need to be accessed from several applications. Mapping could provide a common layer from which several ontologies could be accessed and hence could exchange information in a semantically sound manner. Developing such mappings has been the focus of a variety of works originating from diverse communities over a number of years. In this article we comprehensively review and present these works. We also provide insights on the pragmatics of ontology mapping and elaborate on a theoretical approach for defining ontology mapping.

748 citations


01 Jan 2005
TL;DR: A survey of the state of the art in ontology evaluation, the problem of assessing a given ontology against a particular criterion, typically in order to determine which of several ontologies would best suit a particular purpose.
Abstract: An ontology is an explicit formal conceptualization of some domain of interest. Ontologies are increasingly used in various fields such as knowledge management, information extraction, and the semantic web. Ontology evaluation is the problem of assessing a given ontology from the point of view of a particular criterion of application, typically in order to determine which of several ontologies would best suit a particular purpose. This paper presents a survey of the state of the art in ontology evaluation.

641 citations


Book
01 Jul 2005
TL;DR: This volume presents current research in ontology learning, addressing three perspectives: methodologies that automatically extract information from texts and give such knowledge a structured organization (including approaches based on machine learning techniques), evaluation methods, and application scenarios.
Abstract: This volume brings together ontology learning, knowledge acquisition and other related topics. It presents current research in ontology learning, addressing three perspectives. The first perspective looks at methodologies that have been proposed to automatically extract information from texts and to give a structured organization to such knowledge, including approaches based on machine learning techniques. Then there are evaluation methods for ontology learning, aiming at defining procedures and metrics for a quantitative evaluation of the ontology learning task; and finally application scenarios that make ontology learning a challenging area in the context of real applications such as bio-informatics. According to the three perspectives mentioned above, the book is divided into three sections, each including a selection of papers addressing respectively the methods, the applications and the evaluation of ontology learning approaches.

488 citations


Journal ArticleDOI
TL;DR: An ontology for cell types that covers the prokaryotic, fungal, animal and plant worlds and is designed to be used in the context of model organism genome and other biological databases.
Abstract: We describe an ontology for cell types that covers the prokaryotic, fungal, animal and plant worlds. It includes over 680 cell types. These cell types are classified under several generic categories and are organized as a directed acyclic graph. The ontology is available in the formats adopted by the Open Biological Ontologies umbrella and is designed to be used in the context of model organism genome and other biological databases. The ontology is freely available at http://obo.sourceforge.net/ and can be viewed using standard ontology visualization tools such as OBO-Edit and COBrA.
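
A minimal sketch of the underlying data structure, an is_a hierarchy held as a directed acyclic graph with a cycle check; the cell-type names are illustrative, not taken from the ontology:

# Cell-type hierarchy as a DAG of is_a links (terms are illustrative).
from collections import defaultdict

is_a = defaultdict(set)  # child -> set of parents
is_a["neuron"].add("animal cell")
is_a["hepatocyte"].add("animal cell")
is_a["animal cell"].add("cell")

def is_acyclic():
    """Depth-first search; a node revisited on the current path is a cycle."""
    visited, path = set(), set()
    def visit(node):
        if node in path:
            return False
        if node in visited:
            return True
        visited.add(node)
        path.add(node)
        ok = all(visit(parent) for parent in is_a[node])
        path.discard(node)
        return ok
    return all(visit(node) for node in list(is_a))

print(is_acyclic())  # True for a well-formed cell-type DAG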

437 citations


Journal ArticleDOI
01 Oct 2005
TL;DR: The experimental results show that the news agent based on the fuzzy ontology can effectively operate for news summarization and an experimental website is constructed to test the approach.
Abstract: In this paper, a fuzzy ontology and its application to news summarization are presented. The fuzzy ontology with fuzzy concepts is an extension of the domain ontology with crisp concepts. It is more suitable than a crisp domain ontology for describing domain knowledge in uncertainty reasoning problems. First, the domain ontology with various events of news is predefined by domain experts. The document preprocessing mechanism will generate the meaningful terms based on the news corpus and the Chinese news dictionary defined by the domain expert. Then, the meaningful terms will be classified according to the events of the news by the term classifier. The fuzzy inference mechanism will generate the membership degrees for each fuzzy concept of the fuzzy ontology. Every fuzzy concept has a set of membership degrees associated with various events of the domain ontology. In addition, a news agent based on the fuzzy ontology is also developed for news summarization. The news agent contains five modules, including a retrieval agent, a document preprocessing mechanism, a sentence path extractor, a sentence generator, and a sentence filter to perform news summarization. Furthermore, we construct an experimental website to test the proposed approach. The experimental results show that the news agent based on the fuzzy ontology can effectively operate for news summarization.
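
A toy sketch of the central idea, a fuzzy concept carrying one membership degree per news event of the domain ontology; the terms, events and degrees are invented for illustration:

# A fuzzy concept maps each news event to a membership degree in [0, 1].
fuzzy_ontology = {
    "typhoon": {"weather": 0.9, "disaster": 0.7, "finance": 0.1},
    "stock":   {"weather": 0.0, "disaster": 0.1, "finance": 0.95},
}

def best_event(term):
    """Return the event with the highest membership degree for a term."""
    degrees = fuzzy_ontology[term]
    return max(degrees, key=degrees.get)

print(best_event("typhoon"))  # weather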

377 citations


Book Chapter
01 Jan 2005
TL;DR: This volume brings together a collection of extended versions of selected papers from two workshops on ontology learning, knowledge acquisition and related topics that were organized in the context of the European Conference on Artificial Intelligence (ECAI) 2004 and the International Conference on Knowledge Engineering and Knowledge Management (EKAW) 2004.
Abstract: This volume brings together a collection of extended versions of selected papers from two workshops on ontology learning, knowledge acquisition and related topics that were organized in the context of the European Conference on Artificial Intelligence (ECAI) 2004 and the International Conference on Knowledge Engineering and Knowledge Management (EKAW) 2004. The volume presents current research in ontology learning, addressing three perspectives: methodologies that have been proposed to automatically extract information from texts and to give a structured organization to such knowledge, including approaches based on machine learning techniques; evaluation methods for ontology learning, aiming at defining procedures and metrics for a quantitative evaluation of the ontology learning task; and finally application scenarios that make ontology learning a challenging area in the context of real applications such as bio-informatics. According to the three perspectives mentioned above, the book is divided into three sections, each including a selection of papers addressing respectively the methods, the applications and the evaluation of ontology learning approaches. However, all selected papers pay considerable attention to the evaluation perspective, as this was a central topic of the ECAI 2004 workshop out of which most of the papers in this volume originate.

292 citations


Book ChapterDOI
29 May 2005
TL;DR: A model for the semantics of change for OWL ontologies, considering structural, logical, and user-defined consistency is presented, and resolution strategies to ensure that consistency is maintained as the ontology evolves are introduced.
Abstract: Support for ontology evolution is extremely important in ontology engineering and the application of ontologies in dynamic environments. A core aspect of the evolution process is guaranteeing the consistency of the ontology when changes occur. In this paper we discuss the consistent evolution of OWL ontologies. We present a model for the semantics of change for OWL ontologies, considering structural, logical, and user-defined consistency. We introduce resolution strategies to ensure that consistency is maintained as the ontology evolves.
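
A minimal sketch of one possible resolution strategy, restoring structural consistency after a class deletion by reattaching orphaned subclasses to the deleted class's parent; the tiny taxonomy and the strategy choice are illustrative assumptions, not the paper's model:

# Repair dangling subclass links when a class is deleted.
parent_of = {"Dog": "Mammal", "Cat": "Mammal", "Mammal": "Animal"}

def delete_class(cls):
    grandparent = parent_of.pop(cls, None)
    for child, parent in list(parent_of.items()):
        if parent == cls:
            if grandparent is None:
                del parent_of[child]       # alternative: make child a root
            else:
                parent_of[child] = grandparent  # reattach to grandparent

delete_class("Mammal")
print(parent_of)  # {'Dog': 'Animal', 'Cat': 'Animal'}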

269 citations


Book ChapterDOI
TL;DR: OntoMerge, an online system for ontology merging and automated reasoning, can implement ontology translation with inputs and outputs in OWL or other web languages.
Abstract: Ontologies are a crucial tool for formally specifying the vocabulary and relationships of concepts used on the Semantic Web. In order to share information, agents that use different vocabularies must be able to translate data from one ontological framework to another. Ontology translation is required when translating datasets, generating ontology extensions, and querying through different ontologies. OntoMerge, an online system for ontology merging and automated reasoning, can implement ontology translation with inputs and outputs in OWL or other web languages. Ontology translation can be thought of in terms of formal inference in a merged ontology. The merge of two related ontologies is obtained by taking the union of the concepts and the axioms defining them, and then adding bridging axioms that relate their concepts. The resulting merged ontology then serves as an inferential medium within which translation can occur. Our internal representation, Web-PDDL, is a strongly typed first-order logic language for web applications. Using a uniform notation for all problems allows us to factor out syntactic and semantic translation problems, and focus on the latter. Syntactic translation is done by an automatic translator between Web-PDDL and OWL or other web languages. Semantic translation is implemented using an inference engine (OntoEngine) which processes assertions and queries in Web-PDDL syntax, running in either a data-driven (forward chaining) or demand-driven (backward chaining) way.
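
A toy illustration of translation as inference in a merged ontology: bridging axioms rewrite facts from one vocabulary into the other by forward chaining. The predicates and rules are invented; this is not Web-PDDL or OntoEngine:

# Bridging axioms as (source predicate, target predicate) rewrite rules.
bridging_axioms = [
    ("ns1:Author", "ns2:Writer"),
    ("ns2:Writer", "ns2:Person"),
]

def forward_chain(facts):
    """Apply bridging axioms until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for src, dst in bridging_axioms:
            for pred, arg in list(facts):
                if pred == src and (dst, arg) not in facts:
                    facts.add((dst, arg))
                    changed = True
    return facts

print(sorted(forward_chain({("ns1:Author", "melville")})))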

207 citations


Book ChapterDOI
01 Jan 2005
TL;DR: This paper presents how to build an ontology in the legal domain following the ontology development methodology METHONTOLOGY and using the ontology engineering workbench WebODE.
Abstract: This paper presents how to build an ontology in the legal domain following the ontology development methodology METHONTOLOGY and using the ontology engineering workbench WebODE. Both of them have been widely used to develop ontologies in many other domains. The ontology used to illustrate this paper has been extracted from an existing class taxonomy proposed by Breuker, and adapted to the Spanish legal domain.

203 citations


Book ChapterDOI
31 Oct 2005
TL;DR: The NRL Security Ontology is more comprehensive and better organized than existing security ontologies, capable of representing more types of security statements and can be applied to any electronic resource.
Abstract: Annotation with security-related metadata enables discovery of resources that meet security requirements. This paper presents the NRL Security Ontology, which complements existing ontologies in other domains that focus on annotation of functional aspects of resources. Types of security information that could be described include mechanisms, protocols, objectives, algorithms, and credentials in various levels of detail and specificity. The NRL Security Ontology is more comprehensive and better organized than existing security ontologies. It is capable of representing more types of security statements and can be applied to any electronic resource. The class hierarchy of the ontology makes it both easy to use and intuitive to extend. We applied this ontology to a Service Oriented Architecture to annotate security aspects of Web service descriptions and queries. A refined matching algorithm was developed to perform requirement-capability matchmaking that takes into account not only the ontology concepts, but also the properties of the concepts.
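
A minimal sketch of subsumption-based requirement-capability matching over a class hierarchy; the small security taxonomy is invented, and the paper's refined algorithm additionally weighs the properties of concepts:

# A capability matches a requirement if its concept equals or
# specializes the required concept in the taxonomy.
superclass = {
    "AES": "SymmetricEncryption",
    "SymmetricEncryption": "Encryption",
    "RSA": "AsymmetricEncryption",
    "AsymmetricEncryption": "Encryption",
}

def subsumed_by(concept, required):
    while concept is not None:
        if concept == required:
            return True
        concept = superclass.get(concept)
    return False

print(subsumed_by("AES", "Encryption"))           # True
print(subsumed_by("RSA", "SymmetricEncryption"))  # False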

183 citations


Journal Article
TL;DR: The intention of this essay is to give an overview of different methods that learn ontologies or ontology-like structures from unstructured text.
Abstract: After the vision of the Semantic Web was broadcast at the turn of the millennium, ontology became a synonym for the solution to many problems concerning the fact that computers do not understand human language: if there were an ontology and every document were marked up with it and we had agents that would understand the markup, then computers would finally be able to process our queries in a really sophisticated way. Some years later, the success of Google shows us that the vision has not come true, being hampered by the incredible amount of extra work required for the intellectual encoding of semantic mark-up – as compared to simply uploading an HTML page. To alleviate this acquisition bottleneck, the field of ontology learning has since emerged as an important sub-field of ontology engineering. It is widely accepted that ontologies can facilitate text understanding and automatic processing of textual resources. Moving from words to concepts not only mitigates data sparseness issues, but also promises appealing solutions to polysemy and homonymy by finding non-ambiguous concepts that may map to various realizations in – possibly ambiguous – words. Numerous applications using lexical-semantic databases like WordNet (Miller, 1990) and its non-English counterparts, e.g. EuroWordNet (Vossen, 1997) or CoreNet (Choi and Bae, 2004), demonstrate the utility of semantic resources for natural language processing. Learning semantic resources from text instead of manually creating them might be dangerous in terms of correctness, but has undeniable advantages: Creating resources for text processing from the texts to be processed will fit the semantic component neatly and directly to them, which will never be possible with general-purpose resources. Further, the cost per entry is greatly reduced, giving rise to much larger resources than an advocate of a manual approach could ever afford. On the other hand, none of the methods used today are good enough for creating semantic resources of any kind in a completely unsupervised fashion, although automatic methods can facilitate manual construction to a large extent. The term ontology is understood in a variety of ways and has been used in philosophy for many centuries. In contrast, the notion of ontology in the field of computer science is younger – but used almost as inconsistently when it comes to the details of the definition. The intention of this essay is to give an overview of different methods that learn ontologies or ontology-like structures from unstructured text. Ontology learning from other sources, issues in description languages, ontology editors, ontology merging and ontology evolving transcend the scope of this article. Surveys on ontology learning from text and other sources can be found in Ding and Foo (2002) and Gomez-Perez

Book ChapterDOI
06 Nov 2005
TL;DR: Omen, an Ontology Mapping ENhancer, is based on a set of meta-rules that captures the influence of the ontology structure and the existing matches to match nodes that are neighbours to matched nodes in the two ontologies.
Abstract: Most existing ontology mapping tools are inexact. Inexact ontology mapping rules, if not rectified, result in imprecision in the applications that use them. We describe a framework to probabilistically improve existing ontology mappings using a Bayesian Network. Omen, an Ontology Mapping ENhancer, is based on a set of meta-rules that captures the influence of the ontology structure and the existing matches to match nodes that are neighbours of matched nodes in the two ontologies. We have implemented a prototype ontology matcher that can either map concepts across two input ontologies or enhance existing matches between ontology concepts. Preliminary experiments demonstrate that Omen enhances existing ontology mappings in our test cases.
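
A rough illustration of the meta-rule idea, an established match raising the match probability of neighbouring node pairs; the graphs, probabilities and update rule are invented stand-ins for OMEN's actual Bayesian-network inference:

# Propagate confidence from matched nodes to their neighbours.
neighbours1 = {"car": ["wheel"]}
neighbours2 = {"auto": ["tyre"]}
match_prob = {("car", "auto"): 0.95, ("wheel", "tyre"): 0.40}

BOOST = 0.2
for (a, b), p in list(match_prob.items()):
    if p > 0.9:  # treat as an established match
        for na in neighbours1.get(a, []):
            for nb in neighbours2.get(b, []):
                old = match_prob.get((na, nb), 0.0)
                match_prob[(na, nb)] = min(1.0, old + BOOST * (1.0 - old))

print(round(match_prob[("wheel", "tyre")], 2))  # 0.40 raised to 0.52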

Journal ArticleDOI
TL;DR: This work examines cohesion metrics for ontologies, a fundamental characteristic that must be determined in order to make effective use of domain-specific ontologies.
Abstract: Recently, domain specific ontology development has been driven by research on the Semantic Web. Ontologies have been suggested for use in many application areas targeted by the Semantic Web, such as dynamic web service composition and general web service matching. Fundamental characteristics of these ontologies must be determined in order to effectively make use of them: for example, Sirin, Hendler and Parsia have suggested that determining fundamental characteristics of ontologies is important for dynamic web service composition. Our research examines cohesion metrics for ontologies. The cohesion metrics examine the fundamental quality of cohesion as it relates to ontologies.
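
A sketch of two plausible cohesion-style measurements over a class graph, the number of root classes and the number of connected components; the paper's exact metric definitions may differ:

# edges are (subclass, superclass) pairs; the taxonomy is invented.
edges = {("Dog", "Mammal"), ("Cat", "Mammal"), ("Mammal", "Animal"),
         ("Sedan", "Car")}
classes = {c for edge in edges for c in edge}

roots = classes - {child for child, _ in edges}

# Union-find over the undirected structure counts components.
parent = {c: c for c in classes}
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x
for a, b in edges:
    parent[find(a)] = find(b)
components = {find(c) for c in classes}

print(len(roots), len(components))  # 2 roots, 2 components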

Book ChapterDOI
06 Nov 2005
TL;DR: In this paper, the source and target ontologies are first translated into Bayesian networks (BN) and the concept mapping between the two ontologies is treated as evidential reasoning between the translated BNs.
Abstract: This paper presents our ongoing effort on developing a principled methodology for automatic ontology mapping based on BayesOWL, a probabilistic framework we developed for modeling uncertainty on the semantic web. In this approach, the source and target ontologies are first translated into Bayesian networks (BN); the concept mapping between the two ontologies is treated as evidential reasoning between the two translated BNs. Probabilities needed for constructing conditional probability tables (CPT) during translation and for measuring semantic similarity during mapping are learned using text classification techniques, where each concept in an ontology is associated with a set of semantically relevant text documents obtained by ontology-guided web mining. The basic ideas of this approach are validated by positive results from computer experiments on two small real-world ontologies.
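
A minimal sketch of the document-based similarity step: each concept is associated with a set of relevant text documents, and set overlap (here Jaccard) serves as a crude similarity estimate; the actual approach learns CPTs with text classifiers, and the document sets below are invented:

# Concept similarity from overlap of associated document sets.
docs = {
    "onto1:Vehicle": {"d1", "d2", "d3", "d4"},
    "onto2:Car":     {"d2", "d3", "d4", "d5"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

print(jaccard(docs["onto1:Vehicle"], docs["onto2:Car"]))  # 0.6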

Proceedings ArticleDOI
07 Nov 2005
TL;DR: Compared with existing methods, the approach can acquire an ontology from a relational database automatically by using a group of learning rules instead of a middle model, and can obtain an OWL ontology including the classes, properties, property characteristics, cardinality and instances, while none of the existing methods can acquire all of them.
Abstract: Ontology provides a shared and reusable piece of knowledge about a specific domain, and has been applied in many fields, such as the Semantic Web, e-commerce and information retrieval. However, building an ontology by hand is a very hard and error-prone task. Learning ontology from existing resources is a good solution. Because relational databases are widely used for storing data and OWL is the latest standard recommended by W3C, this paper proposes an approach for learning OWL ontology from data in a relational database. Compared with existing methods, the approach can acquire an ontology from a relational database automatically by using a group of learning rules instead of using a middle model. In addition, it can obtain an OWL ontology including the classes, properties, property characteristics, cardinality and instances, while none of the existing methods can acquire all of them. The proposed learning rules have been proven correct in practice.
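
A sketch of three typical learning rules of this kind, table to class, plain column to datatype property, foreign key to object property; the schema is invented, and the paper's full rule set (covering cardinality, property characteristics and instances) is larger:

# Derive OWL-style classes and properties from a relational schema.
schema = {
    "person": {"columns": ["id", "name", "dept_id"],
               "foreign_keys": {"dept_id": "department"}},
    "department": {"columns": ["id", "label"], "foreign_keys": {}},
}

classes, datatype_props, object_props = [], [], []
for table, meta in schema.items():
    classes.append(table.capitalize())
    for col in meta["columns"]:
        if col == "id":
            continue  # primary key, not modeled as a property here
        if col in meta["foreign_keys"]:
            target = meta["foreign_keys"][col]
            object_props.append(
                (f"has{target.capitalize()}", table.capitalize(),
                 target.capitalize()))
        else:
            datatype_props.append((col, table.capitalize()))

print(classes, datatype_props, object_props)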

Patent
20 May 2005
TL;DR: A data query system including a first storage medium with a data schema having a query language associated therewith, a second storage medium with an ontology model including classes and properties, the ontology model having an ontology-specific query language associated therewith, wherein constructs of the database schema are mapped to corresponding classes, properties or compositions of properties of the ontology model, and a query processor generating a query expressed in the data schema query language corresponding to a specified query expressed in the ontology query language.
Abstract: A data query system including a first storage medium including a data schema having a data schema query language associated therewith, a second storage medium including an ontology model including classes and properties, the ontology model having an ontology query language associated therewith, wherein constructs of the database schema are mapped to corresponding classes, properties or compositions of properties of the ontology model, and an ontology query processor generating a query expressed in the data schema query language corresponding to a specified query expressed in the ontology query language. A method is also described and claimed.
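
A toy illustration of the mapping idea (not the patented system): ontology classes and properties map to tables and columns, and an ontology-level query is rewritten in the data schema query language, here SQL; all names are invented:

# Class -> table and property -> column mapping.
mapping = {
    "Person":      ("person", None),
    "Person.name": ("person", "full_name"),
    "Person.age":  ("person", "age_years"),
}

def to_sql(cls, prop, value):
    """Rewrite an ontology-level equality query as SQL (toy, no escaping)."""
    table, _ = mapping[cls]
    _, column = mapping[f"{cls}.{prop}"]
    return f"SELECT * FROM {table} WHERE {column} = '{value}'"

print(to_sql("Person", "name", "Ada"))
# SELECT * FROM person WHERE full_name = 'Ada'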

01 Jan 2005
TL;DR: This paper describes a system to semi-automatically extend and refine ontologies by mining textual data from the Web sites of international online media, and demonstrates how spreading activation improves the result by naturally integrating the mentioned methods.
Abstract: This paper describes a system to semi-automatically extend and refine ontologies by mining textual data from the Web sites of international online media. Expanding a seed ontology creates a semantic network through co-occurrence analysis, trigger phrase analysis, and disambiguation based on the WordNet lexical dictionary. Spreading activation then processes this semantic network to find the most probable candidates for inclusion in an extended ontology. Approaches to identifying hierarchical relationships such as subsumption, head noun analysis and WordNet consultation are used to confirm and classify the found relationships. Using a seed ontology on "climate change" as an example, this paper demonstrates how spreading activation improves the result by naturally integrating the mentioned methods.
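
A minimal spreading-activation pass over a weighted semantic network; the terms, weights, decay factor and step count are invented for illustration:

# Activation starts at seed concepts and decays as it spreads; the
# highest-scoring unseen terms become candidates for the ontology.
network = {
    "climate change": {"emissions": 0.8, "kyoto": 0.6},
    "emissions":      {"carbon": 0.7},
    "kyoto":          {"protocol": 0.9},
}

def spread(seeds, decay=0.5, steps=2):
    activation = dict(seeds)
    for _ in range(steps):
        nxt = dict(activation)
        for node, act in activation.items():
            for neighbour, weight in network.get(node, {}).items():
                nxt[neighbour] = max(nxt.get(neighbour, 0.0),
                                     act * weight * decay)
        activation = nxt
    return sorted(activation.items(), key=lambda kv: -kv[1])

for term, act in spread({"climate change": 1.0}):
    print(term, round(act, 3))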

Book ChapterDOI
29 May 2005
TL;DR: This paper presents an ontology that formalizes the main concepts used in a DILIGENT ontology engineering discussion, thus enabling the tracking of arguments and the detection of inconsistencies, and enables the integration of manual, semi-automatic and automatic ontology creation approaches.
Abstract: A prerequisite to the success of the Semantic Web is shared ontologies which enable the seamless exchange of information between different parties. Engineering a shared ontology is a social process. Since its participants have slightly different views on the world, a harmonization effort requires discussing the resulting ontology. During the discussion, participants exchange arguments which may support or object to certain ontology engineering decisions. Experience from software engineering shows that tracking exchanged arguments can help users at a later stage to better understand the assumptions underlying the design decisions. Furthermore, as the constructed ontology becomes larger, ontology engineers might argue in a contradictory way without knowing so. In this paper we present an ontology which formalizes the main concepts used in a DILIGENT ontology engineering discussion and thus enables tracking arguments and allows for inconsistency detection. We provide an example which is drawn from experiments in an ontology engineering process to construct an ontology for knowledge management in our institute. Having constructed the ontology, we also show how automated ontology learning algorithms could be taken as participants in the OE discussion. Hence, we enable the integration of manual, semi-automatic and automatic ontology creation approaches.

Journal ArticleDOI
TL;DR: The ontology engineering methodology DILIGENT is presented, a methodology focussing on the evolution of ontologies instead of the initial design, thus recognizing that knowledge is a tangible and moving target.
Abstract: Purpose – Aims to present the ontology engineering methodology DILIGENT, a methodology focussing on the evolution of ontologies instead of the initial design, thus recognizing that knowledge is a tangible and moving target. Design/methodology/approach – First describes the methodology as a whole, then details one of the five main steps of DILIGENT. The second part describes case studies, either already performed or planned, and what we learned (or expect to learn) from them. Findings – The case studies revealed the strengths and weaknesses of DILIGENT. During the evolution of ontologies, arguments need to be exchanged about the suggested changes. We identify the kinds of arguments which work best for the discussion of ontology changes. Research implications – DILIGENT recognizes ontology engineering methodologies like OnToKnowledge or Methontology as proven useful for the initial design, but expands them with its strong focus on the user-centric further development of the ontology and the provided integration of automatic agents in the process of ontology evolution. Practical implications – DILIGENT distils the experience from a number of case studies and thus offers the knowledge manager a methodology to work in an ever-changing environment. Originality/value – DILIGENT is the first methodology to put its focus not on the initial development of the ontology, but on the user and his usage of the ontology, and on the changes introduced by the user. We take the user's own view seriously and enable feedback towards the evolution of the ontology, stressing the ontology's role as a shared conceptualisation.

Proceedings Article
01 Jan 2005
TL;DR: This paper focuses on patterns in the field of Ontology Engineering and proposes a classification scheme forOntology Engineering, which aims to facilitate and support reuse.
Abstract: In Software Engineering, patterns are an accepted way to facilitate and support reuse. This paper focuses on patterns in the field of Ontology Engineering and proposes a classification scheme for o ...

Journal ArticleDOI
TL;DR: A method for corpus-driven ontology design: extracting conceptual hierarchies from arbitrary domain-specific collections of texts, employing statistical techniques initially to elicit a conceptual hierarchy, which is then augmented through linguistic analysis.
Abstract: This paper discusses a method for corpus-driven ontology design: extracting conceptual hierarchies from arbitrary domain-specific collections of texts. These hierarchies can form the basis for a concept-oriented (onomasiological) terminology collection, and hence may be used as the basis for developing knowledge-based systems using ontology editors. This reference to ontology is explored in the context of collections of terms. The method presented is a hybrid of statistical and linguistic techniques, employing statistical techniques initially to elicit a conceptual hierarchy, which is then augmented through linguistic analysis. The result of such an extraction may be useful in information retrieval, knowledge management, or in the discipline of terminology science itself.
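
A sketch of a classic statistical heuristic for eliciting a hierarchy from a corpus: term A subsumes term B if A occurs in most documents containing B, but not vice versa; the paper's actual procedure may differ, and the document sets are invented:

# Document-co-occurrence subsumption test.
doc_sets = {
    "animal": {"d1", "d2", "d3", "d4"},
    "dog":    {"d1", "d2"},
}

def subsumes(a, b, threshold=0.8):
    da, db = doc_sets[a], doc_sets[b]
    overlap = len(da & db)
    return overlap / len(db) >= threshold and overlap / len(da) < threshold

print(subsumes("animal", "dog"))  # True: 'animal' is the broader term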

Journal Article
TL;DR: The main goal of this thesis is to present methodological principles for ontology engineering to guide ontology builders towards building ontologies that are both highly reusable and usable, easier to build, and smoother to maintain.
Abstract: The Internet and other open connectivity environments create a strong demand for the sharing of data semantics. Emerging ontologies are increasingly becoming essential for computer science applications. Organizations are looking towards them as vital machine-processable semantics for many application areas. An ontology, in general, is an agreed understanding (i.e. semantics) of a certain domain, axiomatized and represented formally as a logical theory in a computer resource. By sharing an ontology, autonomous and distributed applications can meaningfully communicate to exchange data and make transactions interoperate independently of their internal technologies. The main goal of this thesis is to present methodological principles for ontology engineering to guide ontology builders towards building ontologies that are both highly reusable and usable, easier to build, and smoother to maintain. First, we investigate three foundational challenges in ontology engineering (namely, ontology reusability, ontology application-independence, and ontology evolution). Based on these challenges, we derive six ontology-engineering requirements. Fulfilling these requirements is the goal and motivation of our methodological principles. Second, we present two methodological principles for ontology engineering: 1) ontology double articulation, and 2) ontology modularization. The double articulation principle suggests that an ontology be built as separate domain axiomatizations and application axiomatizations. While a domain axiomatization focuses on the characterization of the intended meaning (i.e. intended models) of a vocabulary at the domain level, application axiomatizations mainly focus on the usability of this vocabulary according to certain application/usability perspectives. An application axiomatization is intended to specify the legal models (a subset of the intended models) of the application(s)' interest. The modularization principle suggests that application axiomatizations be built in a modular manner. Axiomatizations should be developed as a set of small modules and later composed to form, and be used as, one modular axiomatization. We define a composition operator for automatic module composition. It combines all axioms introduced in the composed modules. Third, to illustrate the implementation of our methodological principles, we develop a conceptual markup language called ORM-ML, an ontology engineering tool prototype called DogmaModeler and a customer complaint ontology that serves as a real-life case study. This research is a contribution to the DOGMA research project, which is a research framework for modeling, engineering, and deploying ontologies. In addition, we find we have benefited enormously from our participation in several European projects. It was through the CCFORM project (discussed extensively in chapter 7) that we were able to test and debug many ideas that resulted in this thesis. The Network of Excellence KnowledgeWeb has also proved to be a fruitful brainstorming environment that has undoubtedly improved the quality of the analyses performed and the results obtained.

Journal ArticleDOI
01 Sep 2005
TL;DR: This work describes how to transform messages exchanged in the healthcare domain into OWL (Web Ontology Language) ontology instances, and demonstrates how to mediate between any incompatible healthcare standards that are currently in use.
Abstract: One of the most challenging problems in the healthcare domain is providing interoperability among healthcare information systems. In order to address this problem, we propose the semantic mediation of exchanged messages. Given that most of the messages exchanged in the healthcare domain are in EDI (Electronic Data Interchange) or XML format, we describe how to transform these messages into OWL (Web Ontology Language) ontology instances. The OWL message instances are then mediated through an ontology mapping tool that we developed, namely, OWLmt. OWLmt uses an OWL-QL engine, which enables the mapping tool to reason over the source ontology instances while generating the target ontology instances according to the mapping patterns defined through a GUI. Through a prototype implementation, we demonstrate how to mediate between HL7 Version 2 and HL7 Version 3 messages. However, the framework proposed is generic enough to mediate between any incompatible healthcare standards that are currently in use.

01 Jan 2005
TL;DR: This paper briefly introduces the system FOAM and its underlying techniques, and discusses the results returned from the evaluation, which were very promising and at the same time clarifying.
Abstract: This paper briefly introduces the system FOAM and its underlying techniques. We then discuss the results returned from the evaluation. They were very promising and at the same time clarifying. Concisely: labels are very important; structure helps in cases where labels do not work; dictionaries may provide additional evidence; ontology management systems need to deal with OWL-Full. The results of this paper will also be very interesting for other participants, showing specific strengths and weaknesses of our approach.

1. PRESENTATION OF THE SYSTEM

1.1 State, purpose, general statement

In recent years, we have seen a range of research work on methods proposing alignments [1; 2]. When we tried to apply these methods to some of the real-world scenarios we address in other research contributions [3], we found that existing alignment methods did not suit the given requirements:
• high quality results;
• efficiency;
• optional user-interaction;
• flexibility with respect to use cases;
• easy adjusting and parameterizing.
We wanted to provide the end-user with a tool taking ontologies as input and returning alignments (with explanations) as output that meets these requirements.

1.2 Specific techniques used

We have observed that alignment methods like QOM [4] or PROMPT [2] may be mapped onto a generic alignment process (Figure 1). Here we mention only the six major steps to clarify the approach underlying the FOAM tool; we refer to [4] for a detailed description.
1. Feature Engineering, i.e. select excerpts of the overall ontology definition to describe a specific entity. This includes individual features, e.g. labels, structural features, e.g. subsumption, but also more complex features as used in OWL, e.g. restrictions.
2. Search Step Selection, i.e. choose two entities from the two ontologies to compare (e1, e2).
3. Similarity Assessment, i.e. indicate a similarity for a given description (feature) of two entities (e.g., sim_superConcept(e1,e2)=1.0).
4. Similarity Aggregation, i.e. aggregate the multiple similarity assessments for one pair of entities into a single measure.
5. Interpretation, i.e. use all aggregated numbers, a threshold, and an interpretation strategy to propose the alignment (align(e1)='e2'). This may also include a user validation.
6. Iteration, i.e. as the similarity of one alignment influences the similarity of neighbouring entity pairs, the equality is propagated through the ontologies.
Finally, we receive alignments linking the two ontologies. This general process was extended to meet the mentioned requirements:
• High quality results were achieved through a combination of a rule-based approach and a machine learning approach. Underlying individual rules, such as "if the super-concepts are similar, the entities are similar", have been assigned weights by a machine-learnt decision tree [5]. Steps 1, 3 and 4 in particular were adjusted for this. Currently, our approach does not make use of additional background knowledge such as dictionaries.
• Efficiency was mainly achieved through an intelligent selection of candidate alignments in step 2, the search step selection [4].
• User-interaction allows the user to intervene during the interpretation step. By presenting the doubtful alignments (and only these) to the user, overall quality can be considerably increased. Yet this happens in a minimally invasive manner.
• The system can automatically set its parameters according to a list of given use cases, such as ontology merging, versioning, ontology mapping, etc. The parameters also change according to the ontologies to align; e.g., big ontologies always require the efficient approach, whereas smaller ones do not [6].
• All these parameters may also be set manually, which allows using the implementation for very specific tasks as well.
• Finally, FOAM has been implemented in Java and is freely available, and thus extensible.

1.3 Adaptations made for the contest

No special adjustments have been made for the contest; however, some elements have been deactivated. Due to the small size of the benchmark and directory ontologies, efficiency measures were not used, user-interaction was removed for the initiative, and no specific use case parameters were taken; a general alignment procedure was applied. The system used for the evaluation is a derivative of the ontology alignment tool used in last year's contests I3Con [7] and EONOAC [8].
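
A compact sketch of steps 3-5 of the generic alignment process, per-feature similarities aggregated with weights and thresholded into an alignment decision; the weights and threshold below are invented, whereas FOAM assigns weights via a machine-learnt decision tree:

# Aggregate per-feature similarities and interpret against a threshold.
def aggregate(feature_sims, weights):
    total = sum(weights.values())
    return sum(weights[f] * s for f, s in feature_sims.items()) / total

feature_sims = {"label": 0.9, "superConcept": 0.7, "instances": 0.4}
weights = {"label": 3.0, "superConcept": 1.5, "instances": 0.5}

score = aggregate(feature_sims, weights)
THRESHOLD = 0.7
print(round(score, 2), "aligned" if score >= THRESHOLD else "not aligned")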

Book ChapterDOI
06 Nov 2005
TL;DR: In this article, the authors propose a new ontology evolution approach that combines a top-down and a bottom-up approach, where the manual request for changes (top-down) by the ontology engineer is complemented with an automatic change detection mechanism (bottom-up).
Abstract: In this article, we propose a new ontology evolution approach that combines a top-down and a bottom-up approach. This means that the manual request for changes (top-down) by the ontology engineer is complemented with an automatic change detection mechanism (bottom-up). The approach is based on keeping track of the different versions of ontology concepts throughout their lifetime (called virtual versions). In this way, changes can be defined in terms of these virtual versions.

Book ChapterDOI
TL;DR: The aim of this chapter is to give a general introduction to some of the ontology languages that play a prominent role on the Semantic Web, and to discuss the formal foundations of these languages.
Abstract: The aim of this chapter is to give a general introduction to some of the ontology languages that play a prominent role on the Semantic Web, and to discuss the formal foundations of these languages. Web ontology languages will be the main carriers of the information that we will want to share and integrate.

01 Jan 2005
TL;DR: Text2Onto remains independent of a concrete target language while being able to translate the instantiated primitives into any (reasonably expressive) knowledge representation formalism, and allows a user to trace the evolution of the ontology with respect to the changes in the underlying corpus.
Abstract: In this paper we present Text2Onto, a framework for ontology learning from textual resources. Three main features distinguish Text2Onto from our earlier framework TextToOnto as well as other state-of-the-art ontology learning frameworks. First, by representing the learned knowledge at a meta-level in the form of instantiated modeling primitives within a so-called Probabilistic Ontology Model (POM), we remain independent of a concrete target language while being able to translate the instantiated primitives into any (reasonably expressive) knowledge representation formalism. Second, user interaction is a core aspect of Text2Onto, and the fact that the system calculates a confidence for each learned object allows for the design of sophisticated visualizations of the POM. Third, by incorporating strategies for data-driven change discovery, we avoid processing the whole corpus from scratch each time it changes, only selectively updating the POM according to the corpus changes instead. Besides increasing efficiency in this way, this also allows a user to trace the evolution of the ontology with respect to the changes in the underlying corpus.

Book ChapterDOI
06 Nov 2005
TL;DR: In this paper, a framework for reasoning with multi-version ontology, in which a temporal logic is developed to serve as its semantic foundation, is proposed, which can provide a solid semantic foundation which can support various requirements on multiview ontology reasoning.
Abstract: In this paper we propose a framework for reasoning with multi-version ontology, in which a temporal logic is developed to serve as its semantic foundation. We show that the temporal logic approach can provide a solid semantic foundation which can support various requirements on multi-version ontology reasoning. We have implemented the prototype of MORE (Multi-version Ontology REasoner), which is based on the proposed framework. We have tested MORE with several realistic ontologies. In this paper, we also discuss the implementation issues and report the experiments with MORE.

Book ChapterDOI
01 Jan 2005
TL;DR: The Omega ontology, a large terminological ontology obtained by remerging WordNet and Mikrokosmos, adding information from various other sources, and subordinating the result to a newly designed feature-oriented upper model is presented.
Abstract: We present the Omega ontology, a large terminological ontology obtained by remerging WordNet and Mikrokosmos, adding information from various other sources, and subordinating the result to a newly designed feature-oriented upper model. We explain the organizing principles of the representation used for Omega and discuss the methodology used to merge the constituent conceptual hierarchies. We survey a range of auxiliary knowledge sources (including instances, verb frame annotations, and domain-specific sub-ontologies) incorporated into the basic conceptual structure and applications that have benefited from Omega. Omega is available for browsing at http://omega.isi.edu/.

Proceedings ArticleDOI
25 Jul 2005
TL;DR: It is claimed that the concepts of the legal world can be used to model the social world, through the extension of the concept of legal rule to social norm and the internalization of social control mechanisms in the agent's mind, so far externalized in legal institutions.
Abstract: This paper proposes a functional ontology of reputation for agents. The goal of this ontology is twofold. First, to put together the broad knowledge about reputation produced in some areas of interest such as psychology and artificial intelligence, mainly multi-agent systems. Second, to represent that knowledge in a structured form. The functional ontology of reputation employs the primitive categories of knowledge used in the Functional Ontology of Law proposed by Valente [16]. We claim that the concepts of the legal world can be used to model the social world, through the extension of the concept of legal rule to social norm and the internalization of social control mechanisms in the agent's mind, so far externalized in legal institutions.