
Showing papers on "Upper ontology published in 2011"


Journal ArticleDOI
TL;DR: The National Center for Biomedical Ontology (NCBO) has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies via the NCBO Web services.
Abstract: The National Center for Biomedical Ontology (NCBO) is one of the National Centers for Biomedical Computing funded under the NIH Roadmap Initiative. Contributing to the national computing infrastructure, NCBO has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies (http://bioportal.bioontology.org) via the NCBO Web services. BioPortal enables community participation in the evaluation and evolution of ontology content by providing features to add mappings between terms, to add comments linked to specific ontology terms and to provide ontology reviews. The NCBO Web services (http://www.bioontology.org/wiki/index.php/NCBO_REST_services) enable this functionality and provide a uniform mechanism to access ontologies from a variety of knowledge representation formats, such as Web Ontology Language (OWL) and Open Biological and Biomedical Ontologies (OBO) format. The Web services provide multi-layered access to the ontology content, from getting all terms in an ontology to retrieving metadata about a term. Users can easily incorporate the NCBO Web services into software applications to generate semantically aware applications and to facilitate structured data collection.

692 citations


Book ChapterDOI
TL;DR: This paper reports results and lessons learned from the Ontology Alignment Evaluation Initiative (OAEI), a benchmarking initiative for ontology matching, and describes the evaluation design used in the OAEI campaigns in terms of datasets, evaluation criteria and workflows.
Abstract: In the area of semantic technologies, benchmarking and systematic evaluation is not yet as established as in other areas of computer science, e.g., information retrieval. In spite of successful attempts, more effort and experience are required in order to achieve such a level of maturity. In this paper, we report results and lessons learned from the Ontology Alignment Evaluation Initiative (OAEI), a benchmarking initiative for ontology matching. The goal of this work is twofold: on the one hand, we document the state of the art in evaluating ontology matching methods and provide potential participants of the initiative with a better understanding of the design and the underlying principles of the OAEI campaigns. On the other hand, we report experiences gained in this particular area of semantic technologies to potential developers of benchmarking for other kinds of systems. For this purpose, we describe the evaluation design used in the OAEI campaigns in terms of datasets, evaluation criteria and workflows, provide a global view on the results of the campaigns carried out from 2005 to 2010 and discuss upcoming trends, both specific to ontology matching and generally relevant for the evaluation of semantic technologies. Finally, we argue that there is a need for a further automation of benchmarking to shorten the feedback cycle for tool developers.

290 citations


Journal ArticleDOI
TL;DR: MASTRO is a Java tool for ontology-based data access (OBDA) developed at Sapienza Universita di Roma and at the Free University of Bozen-Bolzano that provides optimized algorithms for answering expressive queries, as well as features for intensional reasoning and consistency checking.
Abstract: In this paper we present MASTRO, a Java tool for ontology-based data access (OBDA) developed at Sapienza Università di Roma and at the Free University of Bozen-Bolzano. MASTRO manages OBDA systems in which the ontology is specified in DL-Lite_{A,id}, a logic of the DL-Lite family of tractable Description Logics specifically tailored to ontology-based data access, and is connected to external JDBC-enabled data management systems through semantic mappings that associate SQL queries over the external data to the elements of the ontology. Advanced forms of integrity constraints, which turned out to be very useful in practical applications, are also enabled over the ontologies. Optimized algorithms for answering expressive queries are provided, as well as features for intensional reasoning and consistency checking. MASTRO provides a proprietary API, an OWL API-compatible interface, and a plugin for the Protégé 4 ontology editor. It has been successfully used in several projects carried out in collaboration with important organizations, on which we briefly comment in this paper.

282 citations


Journal ArticleDOI
TL;DR: This paper analyses ontology-based approaches for IC computation and proposes several improvements aimed at better capturing the semantic evidence modelled in the ontology for a particular concept.
Abstract: The information content (IC) of a concept provides an estimation of its degree of generality/concreteness, a dimension that enables a better understanding of a concept's semantics. As a result, IC has been successfully applied to the automatic assessment of the semantic similarity between concepts. In the past, IC was estimated as the probability of appearance of concepts in corpora. However, the applicability and scalability of this method are hampered by corpus dependency and data sparseness. More recently, some authors proposed IC-based measures using taxonomical features extracted from an ontology for a particular concept, obtaining promising results. In this paper, we analyse these ontology-based approaches for IC computation and propose several improvements aimed at better capturing the semantic evidence modelled in the ontology for a particular concept. Our approach has been evaluated and compared with related works (both corpus- and ontology-based ones) when applied to the task of semantic similarity estimation. Results obtained for a widely used benchmark show that our method enables similarity estimations that are better correlated with human judgements than those of related works.
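The intrinsic (ontology-based) IC idea described in this abstract can be sketched in a few lines: IC is estimated from the number of hyponyms a concept subsumes rather than from corpus frequencies, and a Resnik-style similarity then takes the IC of the most informative common ancestor. The toy taxonomy, the node names, and the Seco-style IC formula are illustrative assumptions, not the paper's exact proposal.

```python
import math

# Toy taxonomy: child -> parent (a tree, for simplicity).
PARENT = {
    "cat": "mammal", "dog": "mammal",
    "mammal": "animal", "bird": "animal",
    "animal": "entity",
}

def ancestors(c):
    """Return c together with all of its ancestors up to the root."""
    out = {c}
    while c in PARENT:
        c = PARENT[c]
        out.add(c)
    return out

def hyponyms(c):
    """Count the strict descendants of c in the taxonomy."""
    return sum(1 for x in PARENT if c in (ancestors(x) - {x}))

ALL = set(PARENT) | set(PARENT.values())

def ic(c):
    """Intrinsic IC: leaves are maximally informative, the root has IC 0."""
    return 1.0 - math.log(hyponyms(c) + 1) / math.log(len(ALL))

def resnik(c1, c2):
    """Similarity as the IC of the most informative common ancestor."""
    return max(ic(a) for a in ancestors(c1) & ancestors(c2))
```

On this toy taxonomy, `resnik("cat", "dog")` exceeds `resnik("cat", "bird")`, since cat and dog share the more specific ancestor "mammal".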

256 citations


Proceedings ArticleDOI
16 Jul 2011
TL;DR: The combined approach is described, which incorporates the information given by the ontology into the data and employs query rewriting to eliminate spurious answers in ontology-based data access.
Abstract: The use of ontologies for accessing data is one of the most exciting new applications of description logics in databases and other information systems. A realistic way of realising sufficiently scalable ontology-based data access in practice is by reduction to querying relational databases. In this paper, we describe the combined approach, which incorporates the information given by the ontology into the data and employs query rewriting to eliminate spurious answers. We illustrate this approach for ontologies given in the DL-Lite family of description logics and briefly discuss the results obtained for the EL family.
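The two halves of the combined approach — materialising ontology consequences into the data, then filtering out spurious answers that involve anonymous individuals — can be sketched as follows. The mini TBox/ABox and the labelled-null convention are illustrative assumptions, not DL-Lite's actual normal form or the paper's rewriting procedure.

```python
# Toy DL-Lite-style ontology: concept inclusions and existential axioms.
TBOX_INCL = [("Professor", "Teacher")]    # Professor is a Teacher
TBOX_EXISTS = [("Teacher", "teaches")]    # every Teacher teaches something

def saturate(concepts, roles):
    """Combined step: expand the data with ontology consequences,
    introducing a labelled null for each unsatisfied existential axiom."""
    concepts, roles = set(concepts), set(roles)
    changed = True
    while changed:
        changed = False
        for a, b in TBOX_INCL:
            for c, x in list(concepts):
                if c == a and (b, x) not in concepts:
                    concepts.add((b, x)); changed = True
        for c, r in TBOX_EXISTS:
            for cc, x in list(concepts):
                if cc == c and not any(s == x for rr, s, o in roles if rr == r):
                    roles.add((r, x, f"_null:{x}")); changed = True
    return concepts, roles

def certain_answers(role, roles):
    """Filtering step (simplified): answers mentioning an anonymous
    individual are spurious and must be eliminated."""
    return {(s, o) for r, s, o in roles
            if r == role and not o.startswith("_null")}
```

With the ABox {Professor(mary)} and no role facts, saturation derives Teacher(mary) and a null teaches-successor for mary, yet `certain_answers("teaches", roles)` is empty: the null is filtered out, mirroring how rewriting eliminates spurious answers.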

165 citations


Journal ArticleDOI
TL;DR: This paper proposes a set of guidelines for importing required terms from an external resource into a target ontology, describes the methodology and its implementation, presents examples of its application, and outlines future work and extensions.
Abstract: While the Web Ontology Language OWL provides a mechanism to import ontologies, this mechanism is not always suitable. Current editing tools present challenges for working with large ontologies, and direct OWL imports can prove impractical for day-to-day development. Furthermore, external ontologies often undergo continuous change, which can introduce conflicts when they are integrated with multiple efforts. Finally, importing heterogeneous ontologies in their entirety may lead to inconsistencies or unintended inferences. In this paper we propose a set of guidelines for importing required terms from an external resource into a target ontology. We describe the methodology and its implementation, present examples of its application, and outline future work and extensions.
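A minimal sketch of the kind of selective import these guidelines describe (in the spirit of MIREOT-style term import): pull in a required term together with its superclass chain, leaving sibling terms in the source ontology behind. The disease hierarchy and all names below are invented for illustration.

```python
# Hypothetical source ontology as child -> parent edges.
SOURCE = {
    "ChronicBronchitis": "Bronchitis",
    "Bronchitis": "LungDisease",
    "LungDisease": "Disease",
    "Asthma": "LungDisease",
}

def import_term(term, source):
    """Import a term plus the chain of its superclasses up to the source
    root, without dragging in siblings or the rest of the ontology."""
    imported = {}
    while term in source:
        imported[term] = source[term]
        term = source[term]
    return imported
```

Importing "ChronicBronchitis" brings along "Bronchitis" and "LungDisease" but not the sibling "Asthma", which is the point: the target ontology gets only the terms it actually needs.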

165 citations


Book ChapterDOI
01 Jan 2011
TL;DR: This chapter presents a survey of the most relevant methods, techniques and tools used for the task of ontology learning, explaining how BOEMIE addresses problems observed in existing systems and contributes to issues that are not frequently considered by existing approaches.
Abstract: Ontology learning is the process of acquiring (constructing or integrating) an ontology (semi-)automatically. Being a knowledge acquisition task, it is a complex activity, which becomes even more complex in the context of the BOEMIE project, due to the management of multimedia resources and the multi-modal semantic interpretation that they require. The purpose of this chapter is to present a survey of the most relevant methods, techniques and tools used for the task of ontology learning. Adopting a practical perspective, an overview of the main activities involved in ontology learning is presented. This breakdown of the learning process is used as a basis for the comparative analysis of existing tools and approaches. The comparison is done along dimensions that emphasize the particular interests of the BOEMIE project. In this context, ontology learning in BOEMIE is treated and compared to the state of the art, explaining how BOEMIE addresses problems observed in existing systems and contributes to issues that are not frequently considered by existing approaches.

158 citations


Journal ArticleDOI
TL;DR: A model for linguistic grounding of ontologies called LexInfo, implemented as an OWL ontology and freely available together with an API, which allows us to associate linguistic information to elements in an ontology with respect to any level of linguistic description and expressivity.

147 citations


Journal ArticleDOI
TL;DR: This work investigates the literature on both metamodelling and ontologies in order to identify ways in which they can be made compatible and linked in such a way as to benefit both communities and create a contribution to a coherent underpinning theory for software engineering.

143 citations


Journal ArticleDOI
TL;DR: The Annotation Ontology meets critical requirements for an open, freely shareable model in OWL, of annotation metadata created against scientific documents on the Web, and will enable biomedical domain ontologies to be used quite widely to annotate the scientific literature.
Abstract: Background: There is currently a gap between the rich and expressive collection of published biomedical ontologies, and the natural language expression of biomedical papers consumed on a daily basis by scientific researchers. The purpose of this paper is to provide an open, shareable structure for dynamic integration of biomedical domain ontologies with the scientific document, in the form of an Annotation Ontology (AO), thus closing this gap and enabling application of formal biomedical ontologies directly to the literature as it emerges.

136 citations


Journal ArticleDOI
TL;DR: This article describes how to adapt a semi-automatic method for learning OWL class expressions to the ontology engineering use case and performs rigorous performance optimization of the underlying algorithms for providing instant suggestions to the user.

Journal ArticleDOI
TL;DR: Methods developed in the fields of Natural Language Processing, information extraction, information retrieval and machine learning provide techniques for automating the enrichment of an ontology from free-text documents.

Book ChapterDOI
01 Jul 2011
TL;DR: In recent decades, the use of ontologies in information systems has become more and more popular in various fields, such as web technologies, database integration, multi-agent systems, natural language processing, etc.
Abstract: In recent decades, the use of ontologies in information systems has become more and more popular in various fields, such as web technologies, database integration, multi-agent systems, natural language processing, etc. Artificial intelligence researchers initially borrowed the word “ontology” from philosophy; the word then spread to many scientific domains, and ontologies are now used in several developments.

Journal ArticleDOI
TL;DR: Preliminary results of an ongoing effort to normalize the Gene Ontology by explicitly stating the definitions of compositional classes in a form that can be used by reasoners are presented.

Proceedings ArticleDOI
16 Jul 2011
TL;DR: A new approach is reported that enables us to efficiently extract a polynomial representation of the family of all locality-based modules of an ontology, and the fundamental algorithm to pursue this task is described.
Abstract: Extracting a subset of a given ontology that captures all the ontology's knowledge about a specified set of terms is a well-understood task. This task can be based, for instance, on locality-based modules. However, a single module does not reveal the topicality, connectedness, structure, or superfluous parts of an ontology, nor the agreement between actual and intended modelling. The strong logical properties of locality-based modules suggest that the family of all such modules of an ontology can support comprehension of the ontology as a whole. However, extracting that family directly is not feasible, since the number of locality-based modules of an ontology can be exponential w.r.t. its size. In this paper we report on a new approach that enables us to efficiently extract a polynomial representation of the family of all locality-based modules of an ontology. We also describe the fundamental algorithm for this task, and report on experiments carried out and results obtained.
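As a rough intuition for module extraction — a crude stand-in for syntactic ⊥-locality, not the paper's algorithm — an axiom A ⊑ B joins the module once A enters the module's signature, and the signature grows until a fixpoint is reached. Axioms are simplified here to atomic subclass pairs.

```python
def module(axioms, seed):
    """Crude approximation of locality-based module extraction:
    starting from a seed signature, repeatedly pull in every axiom
    (lhs, rhs) whose left-hand side is already in the signature."""
    sig, mod = set(seed), set()
    changed = True
    while changed:
        changed = False
        for lhs, rhs in axioms:
            if (lhs, rhs) not in mod and lhs in sig:
                mod.add((lhs, rhs))
                sig.update({lhs, rhs})
                changed = True
    return mod
```

Seeding with {"Cat"} pulls in Cat ⊑ Mammal and then Mammal ⊑ Animal, while axioms about Dog or Plant stay outside the module — the "knowledge about a specified set of terms" intuition from the abstract.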

Proceedings ArticleDOI
24 Oct 2011
TL;DR: This talk provides an introduction to ontology-based data management, illustrating the main ideas and techniques for using an ontology to access the data layer of an information system, and discusses several important issues that are still the subject of extensive investigation, including the need for inconsistency-tolerant query answering methods and the need to support update operations expressed over the ontology.
Abstract: Ontology-based data management aims at accessing and using data by means of an ontology, i.e., a conceptual representation of the domain of interest in the underlying information system. This new paradigm provides several interesting features, many of which have already proved effective in managing complex information systems. On the other hand, several important issues remain open and constitute stimulating challenges for the research community. In this talk we first provide an introduction to ontology-based data management, illustrating the main ideas and techniques for using an ontology to access the data layer of an information system, and then we discuss several important issues that are still the subject of extensive investigation, including the need for inconsistency-tolerant query answering methods and the need to support update operations expressed over the ontology.

Journal ArticleDOI
TL;DR: The design and development of NanoParticle Ontology is discussed, which is developed within the framework of the Basic Formal Ontology (BFO), and implemented in the Ontology Web Language (OWL) using well-defined ontology design principles.

Book ChapterDOI
28 Jun 2011
TL;DR: Pythia compositionally constructs meaning representations using a vocabulary aligned to the vocabulary of a given ontology, relying on a deep linguistic analysis that allows it to construct formal queries even for complex natural language questions.
Abstract: In this paper we present the ontology-based question answering system Pythia. It compositionally constructs meaning representations using a vocabulary aligned to the vocabulary of a given ontology. In doing so it relies on a deep linguistic analysis, which allows it to construct formal queries even for complex natural language questions (e.g. involving quantification and superlatives).

Journal ArticleDOI
03 Oct 2011 - PLOS ONE
TL;DR: The work in developing an ontology of chemical information entities, with a primary focus on data-driven research and the integration of calculated properties (descriptors) of chemical entities within a semantic web context is described.
Abstract: Cheminformatics is the application of informatics techniques to solve chemical problems in silico. There are many areas in biology where cheminformatics plays an important role in computational research, including metabolism, proteomics, and systems biology. One critical aspect in the application of cheminformatics in these fields is the accurate exchange of data, which is increasingly accomplished through the use of ontologies. Ontologies are formal representations of objects and their properties using a logic-based ontology language. Many such ontologies are currently being developed to represent objects across all the domains of science. Ontologies enable the definition, classification, and support for querying objects in a particular domain, enabling intelligent computer applications to be built which support the work of scientists both within the domain of interest and across interrelated neighbouring domains. Modern chemical research relies on computational techniques to filter and organise data to maximise research productivity. The objects which are manipulated in these algorithms and procedures, as well as the algorithms and procedures themselves, enjoy a kind of virtual life within computers. We will call these information entities. Here, we describe our work in developing an ontology of chemical information entities, with a primary focus on data-driven research and the integration of calculated properties (descriptors) of chemical entities within a semantic web context. Our ontology distinguishes algorithmic, or procedural information from declarative, or factual information, and renders of particular importance the annotation of provenance to calculated data. The Chemical Information Ontology is being developed as an open collaborative project. More details, together with a downloadable OWL file, are available at http://code.google.com/p/semanticchemistry/ (license: CC-BY-SA).

Journal ArticleDOI
TL;DR: A survey of the different approaches to ontology learning from semi-structured and unstructured data is presented.
Abstract: The problem that ontology learning addresses is the knowledge acquisition bottleneck, that is, the difficulty of actually modelling the knowledge relevant to the domain of interest. Ontologies are the vehicle by which we can model and share knowledge among various applications in a specific domain. Consequently, researchers have developed several ontology learning approaches and systems. In this paper, we present a survey of the different approaches to ontology learning from semi-structured and unstructured data.

Book ChapterDOI
23 Oct 2011
TL;DR: This paper discusses several approaches to learning a matching function between two ontologies using a small set of manually aligned concepts, and evaluates them on different pairs of financial accounting standards, showing that multilingual information can indeed improve the matching quality, even in cross-lingual scenarios.
Abstract: Ontology matching is a task that has attracted considerable attention in recent years. With very few exceptions, however, research in ontology matching has focused primarily on the development of monolingual matching algorithms. As more and more resources become available in more than one language, novel algorithms are required which are capable of matching ontologies which share more than one language, or ontologies which are multilingual but do not share any languages. In this paper, we discuss several approaches to learning a matching function between two ontologies using a small set of manually aligned concepts, and evaluate them on different pairs of financial accounting standards, showing that multilingual information can indeed improve the matching quality, even in cross-lingual scenarios. In addition to this, as current research on ontology matching does not make a satisfactory distinction between multilingual and cross-lingual ontology matching, we provide precise definitions of these terms in relation to monolingual ontology matching, and quantify their effects on different matching algorithms.
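The "learn a matching function from a small set of manually aligned concepts" idea can be sketched with a single string-similarity feature and a decision threshold fitted to the aligned pairs; the paper itself uses richer multilingual features, and the accounting-flavoured label pairs below are invented for illustration.

```python
from difflib import SequenceMatcher

def sim(a, b):
    """Label-similarity feature (a stand-in for richer multilingual ones)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def learn_threshold(positives, negatives):
    """Fit a decision threshold from a small set of manually aligned
    (positive) and non-aligned (negative) concept label pairs:
    the midpoint between the two class means."""
    pos = sum(sim(a, b) for a, b in positives) / len(positives)
    neg = sum(sim(a, b) for a, b in negatives) / len(negatives)
    return (pos + neg) / 2

def match(a, b, threshold):
    """The learned matching function: align iff similarity clears the bar."""
    return sim(a, b) >= threshold
```

In a multilingual setting one would add translation-based features per language pair and combine them, but the training recipe — fit on a few hand-aligned concepts, then apply to the rest — stays the same.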

Journal ArticleDOI
TL;DR: An approach for ontology extraction on top of RDB by incorporating a concept hierarchy as background knowledge is proposed, which is more efficient than current approaches and can be applied in fields such as eGovernment, eCommerce and so on.
Abstract: The Relational Database (RDB) has been widely used as the back-end database of information systems. Containing a wealth of high-quality information, an RDB provides the conceptual model and metadata needed for ontology construction. However, most existing ontology building approaches convert the RDB schema without considering the knowledge residing in the database. This paper proposes an approach for ontology extraction on top of an RDB by incorporating a concept hierarchy as background knowledge. Incorporating the background knowledge in the building process of a Web Ontology Language (OWL) ontology gives two main advantages: (1) it accelerates the building process, thereby minimizing the conversion cost; (2) the background knowledge guides the extraction of knowledge residing in the database. An experimental simulation using a gold standard shows that the Taxonomic F-measure (TF) evaluation reaches 90% while Relation Overlap (RO) is 83.33%. In terms of processing time, this approach is more efficient than current approaches. In addition, our approach can be applied in fields such as eGovernment, eCommerce and so on.
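The baseline schema-only conversion that this paper improves upon can be sketched as below: each table becomes a class, each plain column a datatype property, and each foreign key an object property. The background-knowledge step (injecting a concept hierarchy) is the paper's contribution and is omitted here; the schema and naming conventions are invented.

```python
# Hypothetical relational schema: table -> (columns, foreign keys).
SCHEMA = {
    "customer": {"columns": ["id", "name"], "fks": {}},
    "order":    {"columns": ["id", "total"], "fks": {"customer_id": "customer"}},
}

def schema_to_owl(schema):
    """Naive table-to-ontology mapping: tables become classes, columns
    become DatatypeProperties, foreign keys become ObjectProperties."""
    axioms = []
    for table, spec in schema.items():
        axioms.append(f"Class: {table.capitalize()}")
        for col in spec["columns"]:
            axioms.append(f"DataProperty: {table}_{col}")
        for _fk, target in spec["fks"].items():
            axioms.append(f"ObjectProperty: {table}_has_{target}")
    return axioms
```

A hierarchy-aware variant would, in addition, consult the background concept hierarchy to place `Customer` and `Order` under broader classes and to prune columns that merely encode taxonomy.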

Journal ArticleDOI
TL;DR: An approach to overcome semantic inconsistencies and incompleteness of the Supply Chain Operations Reference (SCOR) model and hence improve its usefulness and expand the application domain is presented.
Abstract: Reference models play an important role in the knowledge management of various complex collaboration domains (such as supply chain networks). However, they often show a lack of semantic precision and are sometimes incomplete. In this article, we present an approach to overcome the semantic inconsistencies and incompleteness of the Supply Chain Operations Reference (SCOR) model and hence improve its usefulness and expand its application domain. First, we describe a literal Web Ontology Language (OWL) specification of SCOR concepts (and related tools) built with the intention to preserve the original approach in the classification of process reference model entities, and hence enable effective usage in the original contexts. Next, we demonstrate the system's exploitation in specific tools for SCOR framework browsing and rapid supply chain process configuration. Then, we describe the SCOR-Full ontology and its relations with relevant domain ontologies, and show how it can be exploited to improve the competence of the SCOR ontological framework. Finally, we elaborate on the potential impact of the presented approach on the interoperability of systems in supply chain networks.

Book ChapterDOI
29 May 2011
TL;DR: This work presents a solution for automatically finding schema-level links between two LOD ontologies - in the sense of ontology alignment - and shows that this solution significantly outperformed existing ontology alignment solutions on this same task.
Abstract: The Linked Open Data (LOD) is a major milestone towards realizing the Semantic Web vision, and can enable applications such as robust Question Answering (QA) systems that can answer queries requiring multiple, disparate information sources. However, realizing these applications requires relationships at both the schema and instance level, but currently the LOD only provides relationships for the latter. To address this limitation, we present a solution for automatically finding schema-level links between two LOD ontologies - in the sense of ontology alignment. Our solution, called BLOOMS+, extends our previous solution (i.e. BLOOMS) in two significant ways. BLOOMS+ 1) uses a more sophisticated metric to determine which classes between two ontologies to align, and 2) considers contextual information to further support (or reject) an alignment. We present a comprehensive evaluation of our solution using schema-level mappings from LOD ontologies to Proton (an upper level ontology) - created manually by human experts for a real world application called FactForge. We show that our solution performed well on this task. We also show that our solution significantly outperformed existing ontology alignment solutions (including our previously published work on BLOOMS) on this same task.

Book ChapterDOI
26 May 2011

Journal ArticleDOI
TL;DR: A novel method, dubbed DiShIn, that effectively exploits the multiple inheritance relationships present in many biomedical ontologies by modifying the way traditional semantic similarity measures calculate the shared information content of two ontology concepts.
Abstract: The large-scale effort in developing, maintaining and making biomedical ontologies available motivates the application of similarity measures to compare ontology concepts or, by extension, the entities described therein. A common approach, known as semantic similarity, compares ontology concepts through the information content they share in the ontology. However, different disjunctive ancestors in the ontology are frequently neglected, or not properly explored, by semantic similarity measures. This paper proposes a novel method, dubbed DiShIn, that effectively exploits the multiple inheritance relationships present in many biomedical ontologies. DiShIn calculates the shared information content of two ontology concepts, based on the information content of the disjunctive common ancestors of the concepts being compared. DiShIn identifies these disjunctive ancestors through the number of distinct paths from the concepts to their common ancestors. DiShIn was applied to Gene Ontology and its performance was evaluated against state-of-the-art measures using CESSM, a publicly available evaluation platform of protein similarity measures. By modifying the way traditional semantic similarity measures calculate the shared information content, DiShIn was able to obtain a statistically significant higher correlation between semantic and sequence similarity. Moreover, the incorporation of DiShIn in existing applications that exploit multiple inheritance would reduce their execution time.
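The core DiShIn move — choosing disjunctive common ancestors by counting distinct paths, then averaging their IC instead of taking only the most informative common ancestor — can be sketched on a toy DAG. The five-node hierarchy and the intrinsic-IC stand-in below are illustrative assumptions, not the measure's exact formulation.

```python
import math

# Toy DAG with multiple inheritance: concept -> list of parents.
PARENTS = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"], "e": ["b"]}

def ancestors(c):
    out = {c}
    for p in PARENTS[c]:
        out |= ancestors(p)
    return out

def n_paths(c, a):
    """Number of distinct upward paths from concept c to ancestor a."""
    if c == a:
        return 1
    return sum(n_paths(p, a) for p in PARENTS[c] if a in ancestors(p))

def ic(c):
    """Intrinsic IC stand-in: fewer descendants means more informative."""
    desc = sum(1 for x in PARENTS if c in ancestors(x) and x != c)
    return 1.0 - math.log(desc + 1) / math.log(len(PARENTS))

def shared_ic(c1, c2):
    """DiShIn-style shared IC: keep one common ancestor (the most
    informative) per distinct path-count difference, then average."""
    best = {}
    for a in ancestors(c1) & ancestors(c2):
        diff = abs(n_paths(c1, a) - n_paths(c2, a))
        if diff not in best or ic(a) > ic(best[diff]):
            best[diff] = a
    return sum(ic(a) for a in best.values()) / len(best)
```

For "d" (which inherits from both "b" and "c") and "e", the shared IC averages over the disjunctive ancestors "b" and "a" rather than using only the most informative common ancestor, which is what distinguishes the measure from plain Resnik.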

Journal ArticleDOI
01 Mar 2011
TL;DR: A new method, called FFCA-Merge, that combines WordNet and Fuzzy Formal Concept Analysis (FFCA) techniques for merging ontologies of the same domain is proposed, and it is shown to merge domain ontologies effectively.
Abstract: Many different contents and structures exist in constructed ontologies, including those that exist in the same domain. If extant domain ontologies can be used, time and money can be saved. However, domain knowledge changes fast. In addition, the extant domain ontologies may require updates to solve domain problems. The reuse of extant ontologies is an important topic for their application. Thus, the integration of extant domain ontologies is of considerable importance. In this paper, we propose a new method for combining the WordNet and Fuzzy Formal Concept Analysis (FFCA) techniques for merging ontologies with the same domain, called FFCA-Merge. Through the method, two extant ontologies can be converted into a fuzzy ontology. The new fuzzy ontology is more flexible than a general ontology. The experimental results indicate that our method can merge domain ontologies effectively.
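The fuzzy-union flavour of the merge can be sketched minimally: treat each ontology as a table of concept-to-attribute membership degrees and combine matched concepts with max (fuzzy union). Real FFCA-Merge aligns concepts via WordNet and builds a fuzzy concept lattice; here plain name equality stands in for the alignment step, and the data is invented.

```python
def fuzzy_merge(o1, o2):
    """Merge two fuzzy concept tables: concepts with the same name are
    unified, and attribute memberships combine by fuzzy union (max)."""
    merged = {}
    for onto in (o1, o2):
        for concept, attrs in onto.items():
            slot = merged.setdefault(concept, {})
            for attr, mu in attrs.items():
                slot[attr] = max(slot.get(attr, 0.0), mu)
    return merged
```

The result is itself a fuzzy ontology, which is the flexibility the abstract points to: membership degrees survive the merge instead of being flattened to crisp class membership.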

Journal ArticleDOI
TL;DR: OntoCmaps is presented, a domain-independent and open ontology learning tool that extracts deep semantic representations from corpora and generates rich conceptual representations in the form of concept maps and proposes an innovative filtering mechanism based on metrics from graph theory.

Journal ArticleDOI
TL;DR: This paper proposes a new ontology, called OM (Ontology of units of Measure and related concepts), which defines the complete set of concepts in the domain as distinguished in the textual standards, and can answer a wider range of competency questions than the existing approaches do.

Journal ArticleDOI
01 Jul 2011
TL;DR: This paper presents an approach to extract relevant ontology concepts and their relationships from a knowledge base of heterogeneous text documents, shows the architecture of the implemented system, and discusses experiments in a real-world context.
Abstract: Ontologies have been frequently employed in order to solve problems derived from the management of shared distributed knowledge and the efficient integration of information across different applications. However, the process of ontology building is still a lengthy and error-prone task. Therefore, a number of research studies to (semi-)automatically build ontologies from existing documents have been developed. In this paper, we present our approach to extract relevant ontology concepts and their relationships from a knowledge base of heterogeneous text documents. We also show the architecture of the implemented system and discuss the experiments in a real-world context.