
Showing papers on "Suggested Upper Merged Ontology" published in 2013


Journal ArticleDOI
TL;DR: This paper summarises ENVO’s motivation, content, structure, adoption, and governance approach.
Abstract: As biological and biomedical research increasingly reference the environmental context of the biological entities under study, the need for formalisation and standardisation of environment descriptors is growing. The Environment Ontology (ENVO; http://www.environmentontology.org) is a community-led, open project which seeks to provide an ontology for specifying a wide range of environments relevant to multiple life science disciplines and, through an open participation model, to accommodate the terminological requirements of all those needing to annotate data using ontology classes. This paper summarises ENVO’s motivation, content, structure, adoption, and governance approach. The ontology is available from http://purl.obolibrary.org/obo/envo.owl - an OBO format version is also available by switching the file suffix to “obo”.

274 citations



Journal ArticleDOI
TL;DR: This paper defines an ontology alignment process based on a memetic algorithm able to efficiently aggregate similarity measures without using a priori knowledge about ontologies under alignment and yields high performance in terms of alignment quality with respect to top-performers of well-known Ontology Alignment Evaluation Initiative campaigns.
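The aggregation step that such an alignment system tunes can be sketched in plain Python. This is a minimal illustration of weighted similarity aggregation, not the authors' implementation; the entity names, weights, and threshold are invented for the example.

```python
def aggregate(scores, weights):
    """Weighted average of per-matcher similarity scores for one entity pair."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def align(candidates, weights, threshold=0.6):
    """Keep entity pairs whose aggregated similarity clears the threshold.

    candidates: {(entity_a, entity_b): [score_from_matcher_1, ...]}
    A memetic algorithm would search the space of `weights` (and the
    threshold) to maximise alignment quality; here they are fixed.
    """
    return [pair for pair, scores in candidates.items()
            if aggregate(scores, weights) >= threshold]
```

For example, `align({("Author", "Writer"): [0.9, 0.7], ("Author", "Paper"): [0.2, 0.1]}, weights=[0.5, 0.5])` keeps only the plausible pair. The point of the paper's approach is that the weights need not be hand-set a priori.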

81 citations


Journal ArticleDOI
TL;DR: The ontology and how it is used for the personalization of user interfaces for developing transportation interactive systems by model-driven engineering is presented.
Abstract: Ontologies have been largely exploited in many domains and studies. In this paper, we present a new application of a domain ontology for generating personalized user interfaces for transportation interactive systems. The concepts, relationships and axioms of the transportation ontology are exploited during the semi-automatic generation of personalized user interfaces. Personalization deals with the capacity of adaptation of a user interface, reflecting what is known about the user and the domain application. It can be performed on the interface container presentation (e.g., layout, colors, sizes) and on the content provided as input/output (e.g., data, information, documents). In this paper, the transportation ontology is used to provide the content personalization. This paper presents the ontology and how it is used for the personalization of user interfaces for developing transportation interactive systems by model-driven engineering.

73 citations


Book ChapterDOI
26 May 2013
TL;DR: Based on well-established works in Software Engineering, the notion of ontology patterns in Ontology Engineering is revisited to introduce the idea of an ontology pattern language as a way to organize related ontology patterns.
Abstract: Ontology design patterns have been pointed out as a promising approach for ontology engineering. The goal of this paper is twofold. Firstly, based on well-established works in Software Engineering, we revisit the notion of ontology patterns in Ontology Engineering to introduce the notion of an ontology pattern language as a way to organize related ontology patterns. Secondly, we present an overview of a software process ontology pattern language.

67 citations


Journal ArticleDOI
TL;DR: A dynamic multi-strategy ontology alignment method with automatic matcher selection and dynamic similarity aggregation is proposed, and a practical ontology-driven framework for building SIL is described.

63 citations


Journal ArticleDOI
TL;DR: The goal of the Ontology Summit 2013 was to create guidance for ontology developers and users on how to evaluate ontologies.
Abstract: The goal of the Ontology Summit 2013 was to create guidance for ontology developers and users on how to evaluate ontologies. Over a period of four months a variety of approaches were discussed by participants, who represented a broad spectrum of ontology, software, and system developers and users. We explored how established best practices in systems engineering and in software engineering can be utilized in ontology development.

62 citations


21 Oct 2013
TL;DR: The goal of this paper is to clarify concepts and the terminology used in Ontology Engineering to talk about the notion of ontology patterns taking into account already well-established notions of patterns in Software Engineering.
Abstract: Ontology patterns have been pointed out as a promising approach for ontology engineering. The goal of this paper is to clarify concepts and the terminology used in Ontology Engineering to talk about the notion of ontology patterns taking into account already well-established notions of patterns in Software Engineering.

62 citations


Journal ArticleDOI
TL;DR: An ontology is a claim on/for knowledge that attempts to model what is known about a domain of discourse to build an abstract (yet extendable) philosophical and practical conceptualization of the essence of knowledge in a domain.
Abstract: An ontology is a claim on/for knowledge that attempts to model what is known about a domain of discourse. A domain ontology does not aim to exhaustively list all concepts in a domain, but rather to build an abstract (yet extendable) philosophical (yet practical) conceptualization of the essence of knowledge in a domain. At the core of any ontology is an ontological model—an architecture of how the world (in a domain) behaves (or becomes). The ontology categorizes construction knowledge across three main dimensions: concept, modality, and context. Concept encompasses five key terms: entity (further subdivided into generic and secondary), environmental element, abstract concept, attribute, and system (combinations of the previous four types). Modality is a means for generating a variety of types for each of the described concepts. Context allows for linking concepts in a variety of ways—creating different worlds.

60 citations


Journal ArticleDOI
TL;DR: The evaluation of OQuaRE is presented, performed by an international panel of experts in ontology engineering; the results include the positive and negative aspects of the current version of OQuaRE and the completeness and utility of the quality metrics included in OQuaRE.
Abstract: The increasing importance of ontologies has resulted in the development of a large number of ontologies in both coordinated and non-coordinated efforts. The number and complexity of such ontologies make it hard for ontology and tool developers to select which ontologies to use and reuse. So far, there is no mechanism for making such decisions in an informed manner. Consequently, methods for evaluating ontology quality are required. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies. OQuaRE has been applied to identify the strengths and weaknesses of different ontologies but, so far, this framework has not been evaluated itself. Therefore, in this paper we present the evaluation of OQuaRE, performed by an international panel of experts in ontology engineering. The results include the positive and negative aspects of the current version of OQuaRE, the completeness and utility of the quality metrics included in OQuaRE, and the comparison between the results of the manual evaluations done by the experts and the ones obtained by a software implementation of OQuaRE.
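Quality frameworks of this kind derive part of their metrics from the structure of the class graph. The sketch below computes a few simple structural indicators over a toy child-to-parent hierarchy; the class names and the choice of metrics are illustrative, not OQuaRE's actual metric set.

```python
# Toy class hierarchy: child -> parent (None marks the root).
hierarchy = {"Thing": None, "Agent": "Thing", "Person": "Agent",
             "Organisation": "Agent", "Student": "Person"}

def depth(cls):
    """Number of subclass edges from cls up to the root."""
    d = 0
    while hierarchy[cls] is not None:
        cls = hierarchy[cls]
        d += 1
    return d

def structural_metrics():
    """Simple structural indicators of the kind quality frameworks
    compute from an ontology's class graph (counts, leaves, depth)."""
    parents = {p for p in hierarchy.values() if p is not None}
    leaves = [c for c in hierarchy if c not in parents]
    return {"classes": len(hierarchy),
            "leaves": len(leaves),
            "max_depth": max(depth(c) for c in hierarchy)}
```

A real evaluation would compute such values over the full OWL class graph and map them onto SQuaRE-style quality characteristics.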

55 citations


Proceedings ArticleDOI
17 Nov 2013
TL;DR: This work proposes a mechanism to support evaluating whether an ontology fulfils its corresponding CQs, particularly when these ontologies are defined in OWL (Ontology Web Language), under the Description Logic formalism.
Abstract: Competency Questions (CQs) play an important role in the ontology development lifecycle, as they represent the ontology requirements. Although the main methodologies describe and use CQs, the current practice of ontology engineering makes only superficial use of them. One of the main problems that hampers their proper use lies in the lack of tools that assist users in checking whether CQs are being fulfilled by the ontology being defined, particularly when these ontologies are defined in OWL (Ontology Web Language), under the Description Logic formalism. We propose a mechanism to support evaluating whether the ontology fulfils its corresponding CQs.
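The core idea, checking that an ontology can answer its CQs, can be illustrated without a DL reasoner. In this hedged sketch, a CQ of the form "Which individuals are X?" is fulfilled when instance retrieval over a toy subclass hierarchy returns at least one answer; all class and individual names are invented.

```python
# Toy ontology: subclass edges and typed individuals.
subclass = {"ElectricCar": "Car", "Car": "Vehicle"}
instances = {"model_s": "ElectricCar", "cargo_bike": "Bicycle"}

def is_a(cls, ancestor):
    """True if cls equals ancestor or is a (transitive) subclass of it."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = subclass.get(cls)
    return False

def answers(cq_class):
    """Individuals answering the CQ 'Which individuals are <cq_class>?'"""
    return [i for i, c in instances.items() if is_a(c, cq_class)]

def cq_fulfilled(cq_class):
    # A CQ is (weakly) fulfilled if the ontology yields at least one answer;
    # over real OWL axioms a DL reasoner would perform this retrieval.
    return len(answers(cq_class)) > 0
```

Here `cq_fulfilled("Vehicle")` holds via the subclass chain, while `cq_fulfilled("Boat")` exposes a requirement the ontology does not yet cover.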

Journal ArticleDOI
TL;DR: This paper introduces a change history management framework for evolving ontologies, developed over the last couple of years: a comprehensive and methodological framework for managing issues related to change management in evolving ontologies, such as versioning, provenance, consistency, recovery, change representation and visualization.
Abstract: Knowledge constantly grows in scientific discourse and is revised over time by different stakeholders, either collaboratively or through institutionalized efforts. The body of knowledge gets structured and refined as the Communities of Practice concerned with a field of knowledge develop a deeper understanding of the issues. As a result, the knowledge model moves from a loosely clustered terminology to a semi-formal or even formal ontology. Change history management in such evolving knowledge models is an important and challenging task. Different techniques have been introduced in the research literature to solve the issue. A comprehensive solution must address various multi-faceted issues, such as ontology recovery, visualization of change effects, and keeping the evolving ontology in a consistent state. More so because the semantics of changes and the evolution behavior of the ontology are hard to comprehend. This paper introduces a change history management framework for evolving ontologies, developed over the last couple of years. It is a comprehensive and methodological framework for managing issues related to change management in evolving ontologies, such as versioning, provenance, consistency, recovery, change representation and visualization. The change history log is central to our framework and is supported by a semantically rich and formally sound change representation scheme known as the change history ontology. Changes are captured and then stored in the log in conformance with the change history ontology. The log entries are later used to revert the ontology to a previous consistent state, and to visualize the effects of changes on the ontology during its evolution. The framework is implemented to work as a plug-in for ontology repositories, such as Joseki, and ontology editors, such as Protege.
The change detection accuracy of the proposed system, Change Tracer, has been compared with that of the Changes Tab and Version Log Generator in Protege, and the Change Detection and Change Capturing facilities of the NeOn Toolkit. The proposed system has shown better accuracy than the existing systems. A comprehensive evaluation of the methodology was designed to validate the recovery operations. The accuracy of the Roll-Back and Roll-Forward algorithms was evaluated using different versions of the SWETO, CIDOC CRM, OMV, and SWRC ontologies. Experimental results and comparison with other approaches show that the change management process of the proposed system is accurate, consistent, and comprehensive in its coverage.
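The log-and-revert mechanism at the heart of such a framework can be sketched compactly. This toy version records atomic changes against a child-to-parent class map and supports a one-step roll-back; the change vocabulary and data structure are simplifications, not Change Tracer's change history ontology.

```python
class ChangeLog:
    """Record atomic changes to a child -> parent class map, undoably."""

    def __init__(self):
        self.entries = []

    def apply(self, onto, kind, cls, parent=None):
        """Apply an 'add' or 'remove' class change and log it."""
        if kind == "add":
            onto[cls] = parent
        elif kind == "remove":
            parent = onto.pop(cls)          # remember the old parent
        self.entries.append((kind, cls, parent))

    def rollback(self, onto):
        """Undo the most recent logged change (a one-step Roll-Back)."""
        kind, cls, parent = self.entries.pop()
        if kind == "add":
            del onto[cls]
        else:
            onto[cls] = parent              # restore the removed class
```

Replaying `entries` forward from a saved version gives the corresponding Roll-Forward; the paper's framework additionally stores such entries as instances of a change history ontology so they remain queryable.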

Journal ArticleDOI
TL;DR: An overview of the GO-CCO, its overall design, and some recent extensions that make use of additional spatial information are provided.
Abstract: Background: The Gene Ontology (GO) (http://www.geneontology.org/) contains a set of terms for describing the activity and actions of gene products across all kingdoms of life. Each of these activities is executed in a location within a cell or in the vicinity of a cell. In order to capture this context, the GO includes a sub-ontology called the Cellular Component (CC) ontology (GO-CCO). The primary use of this ontology is for GO annotation, but it has also been used for phenotype annotation, and for the annotation of images. Another ontology with similar scope to the GO-CCO is the Subcellular Anatomy Ontology (SAO), part of the Neuroscience Information Framework Standard (NIFSTD) suite of ontologies. The SAO also covers cell components, but in the domain of neuroscience. Description: Recently, the GO-CCO was enriched in content and links to the Biological Process and Molecular Function branches of GO as well as to other ontologies. This was achieved in several ways. We carried out an amalgamation of SAO terms with GO-CCO ones; as a result, nearly 100 new neuroscience-related terms were added to the GO. The GO-CCO also contains relationships to GO Biological Process and Molecular Function terms, as well as connecting to external ontologies such as the Cell Ontology (CL). Terms representing protein complexes in the Protein Ontology (PRO) reference GO-CCO terms for their species-generic counterparts. GO-CCO terms can also be used to search a variety of databases. Conclusions: In this publication we provide an overview of the GO-CCO, its overall design, and some recent extensions that make use of additional spatial information. One of the most recent developments of the GO-CCO was the merging in of the SAO, resulting in a single unified ontology designed to serve the needs of GO annotators as well as the specific needs of the neuroscience community.

Proceedings Article
01 Aug 2013
TL;DR: A Natural Language Generation system that converts RDF data into natural language text based on an ontology and an associated ontology lexicon, and combines the use of such a lexicon with the choice of lexical items and syntactic structures based on statistical information extracted from a domain-specific corpus is developed.
Abstract: The increasing amount of machinereadable data available in the context of the Semantic Web creates a need for methods that transform such data into human-comprehensible text. In this paper we develop and evaluate a Natural Language Generation (NLG) system that converts RDF data into natural language text based on an ontology and an associated ontology lexicon. While it follows a classical NLG pipeline, it diverges from most current NLG systems in that it exploits an ontology lexicon in order to capture context-specific lexicalisations of ontology concepts, and combines the use of such a lexicon with the choice of lexical items and syntactic structures based on statistical information extracted from a domain-specific corpus. We apply the developed approach to the cooking domain, providing both an ontology and an ontology lexicon in lemon format. Finally, we evaluate fluency and adequacy of the generated recipes with respect to two target audiences: cooking novices and advanced cooks.
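The lexicon-driven surface realisation step can be illustrated with a toy template lexicon. This is only a sketch of the idea: the property names and templates are invented, and a real system (as the paper describes) would additionally choose among competing lexicalisations and syntactic structures using corpus statistics.

```python
# Hypothetical ontology lexicon: one verbalisation template per property.
lexicon = {"hasIngredient": "{s} contains {o}",
           "cookingTime": "{s} takes {o} minutes to cook"}

def verbalise(triples):
    """Turn RDF-style (subject, property, object) triples into sentences.

    Falls back to a generic pattern for properties missing from the lexicon.
    """
    out = []
    for s, p, o in triples:
        template = lexicon.get(p, "{s} has {p} {o}")
        out.append(template.format(s=s, p=p, o=o).capitalize() + ".")
    return " ".join(out)
```

For instance, `verbalise([("risotto", "hasIngredient", "rice")])` yields "Risotto contains rice.", while an unlexicalised property degrades gracefully to the generic pattern.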

Patent
27 Nov 2013
TL;DR: In this article, a method and system for harmonizing and mediating ontologies to search across large data sources is presented; the method includes receiving a query targeting a first ontology and translating it into one or more translated queries, each targeting a respective ontology different from the first.
Abstract: A method and system for harmonizing and mediating ontologies to search across large data sources is disclosed. The method comprises receiving a query targeting a first ontology. The method further comprises translating the query into one or more translated queries, each translated query targeting a respective ontology different from the first ontology. For each of the queries, issuing the query to a respective database organized according to the respective ontology of the query, and receiving a respective result set for the query, wherein the respective result set corresponds to the respective ontology of the query. The method further comprises translating the respective result set into a translated result set corresponding to the first ontology, aggregating the result sets into an aggregated result set corresponding to the first ontology, and returning the aggregated result set corresponding to the first ontology.
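The claimed translate-query-aggregate flow can be sketched with dictionaries standing in for ontologies and databases. Everything here, the term mappings, the ontology identifiers, and the flat result format, is a hypothetical simplification of the patent's method.

```python
# Hypothetical term mappings from the query's ontology to each target ontology.
term_map = {"onto_b": {"Person": "Human"},
            "onto_c": {"Person": "Individual"}}

def translate(cls, target):
    """Map a class from the first ontology into the target's vocabulary."""
    return term_map.get(target, {}).get(cls, cls)

def federated_query(cls, databases):
    """Issue the translated query to each source and aggregate the results.

    databases: {ontology_id: {class_name: [matching records]}}
    Results are assumed to share a plain format, so translating them back
    to the first ontology reduces to collecting them.
    """
    results = []
    for target, db in databases.items():
        results.extend(db.get(translate(cls, target), []))
    return sorted(results)
```

A query for `Person` thus reaches one source as `Human` and another as `Individual`, and the caller sees a single merged result set.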

Journal ArticleDOI
01 Jan 2013
TL;DR: The experimental results show that the new method can produce significantly better classification accuracy and the high performance demonstrates that the presented ontological operations based on the ontology graph knowledge model are effectively developed.
Abstract: A new ontology learning model called domain ontology graph (DOG) is proposed in this paper. There are two key components in the DOG, i.e., the definition of the ontology graph and the ontology learning process. The former defines the ontology and knowledge conceptualization model from the domain-specific text documents; the latter offers the necessary method of semiautomatic domain ontology learning and generates the corresponding ontology graphs. Two kinds of ontological operations are also defined based on the proposed DOG, i.e., document ontology graph generation and ontology-graph-based text classification. The simulation studies focused upon Chinese text data are used to demonstrate the potential effectiveness of our proposed strategy. This is accomplished by generating DOGs to represent the domain knowledge and conducting the text classifications based on the generated ontology graph. The experimental results show that the new method can produce significantly better classification accuracy (e.g., with 92.3% in f-measure) compared with other methods (such as 86.8% in f-measure for the term-frequency-inverse-document-frequency approach). The high performance demonstrates that our presented ontological operations based on the ontology graph knowledge model are effectively developed.
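The 92.3% versus 86.8% comparison above is stated in f-measure, which is straightforward to compute. The sketch below evaluates predicted (document, label) pairs against a gold standard; the data is invented for illustration.

```python
def f_measure(predicted, gold):
    """Harmonic mean of precision and recall over (doc, label) pairs."""
    tp = len(set(predicted) & set(gold))     # correctly labelled documents
    precision = tp / len(predicted)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

With two predictions of which one is correct against two gold labels, precision and recall are both 0.5, so the f-measure is 0.5.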

Journal ArticleDOI
TL;DR: Three tools are described, LODE, Parrot and the OWLDoc-based Ontology Browser, that can be used automatically to create documentation from a well-formed OWL ontology at any stage of its development.
Abstract: Ontologies are knowledge constructs essential for creation of the Web of Data. Good documentation is required to permit people to understand ontologies and thus employ them correctly, but this is costly to create by tradition authorship methods, and is thus inefficient to create in this way until an ontology has matured into a stable structure. The authors describe three tools, LODE, Parrot and the OWLDoc-based Ontology Browser, that can be used automatically to create documentation from a well-formed OWL ontology at any stage of its development. They contrast their properties and then report on the authors' evaluation of their effectiveness and usability, determined by two task-based user testing sessions.

Journal ArticleDOI
TL;DR: This paper introduces the notion of Ontology as a Service (OaaS), whereby the ontology tailoring process is a service in the cloud: multiple sub-ontologies are extracted from various source ontologies and merged to form a complete ontology to be used by the user.
Abstract: Cloud computing is a revolution in the information technology industry. It allows computing services to be provided as utilities. The traditional cloud services include Software as a Service, Platform as a Service, Hardware/Infrastructure as a Service, and Database as a Service. In this paper, we introduce the notion of Ontology as a Service (OaaS), whereby the ontology tailoring process is a service in the cloud. This is particularly relevant as we are moving toward Cloud 2.0--multi-cloud providers offering an interoperable service to customers. To illustrate OaaS, in this paper we propose sub-ontology extraction and merging, whereby multiple sub-ontologies are extracted from various source ontologies, and then these extracted sub-ontologies are merged to form a complete ontology to be used by the user. We use the Minimum extraction method to facilitate this. A walkthrough case study using the UMLS meta-thesaurus ontology is elaborated, and its performance in the cloud is also discussed.
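The extract-then-merge idea can be sketched over child-to-parent class maps. This is a deliberately minimal stand-in, not the paper's Minimum extraction method: each sub-ontology keeps the seed classes plus their ancestors, and merging is a map union; the class names are invented.

```python
def extract(onto, seeds):
    """Minimal sub-ontology: the seed classes plus all of their ancestors.

    onto: child -> parent map (None marks a root).
    """
    keep = set()
    for cls in seeds:
        while cls is not None:
            keep.add(cls)
            cls = onto.get(cls)
    return {c: onto[c] for c in keep}

def merge(*subs):
    """Union the extracted sub-ontologies into one child -> parent map."""
    merged = {}
    for sub in subs:
        merged.update(sub)
    return merged
```

Extracting around "Flu" from one source and "Aspirin" from another, then merging, yields a single complete hierarchy rooted at the shared top class.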

Proceedings Article
03 Aug 2013
TL;DR: Predictive reasoning is tackled as the correlation and interpretation of past semantics-augmented data over exogenous ontology streams to predict future knowledge.
Abstract: Recently, ontology stream reasoning has been introduced as a multidisciplinary approach, merging synergies from Artificial Intelligence, Databases, and the World Wide Web to reason over semantics-augmented data streams. Although knowledge evolution and real-time reasoning have been largely addressed in ontology streams, the challenge of predicting their future (or missing) knowledge remains open and yet unexplored. We tackle predictive reasoning as a correlation and interpretation of past semantics-augmented data over exogenous ontology streams. Consistent predictions are constructed as Description Logics entailments by selecting and applying relevant cross-stream association rules. The experiments have shown accurate prediction with real and live stream data from Dublin City in Ireland.

Journal ArticleDOI
TL;DR: This paper describes an approach to populate an existing ontology with instance information present in the natural language text provided as input, and demonstrates heuristics to extract information from the unstructured text and to add it as structured information to the selected ontology.
Abstract: In this paper, we describe an approach to populate an existing ontology with instance information present in the natural language text provided as input. An ontology is defined as an explicit conceptualization of a shared domain. This approach starts with a list of relevant domain ontologies created by human experts, and techniques for identifying the most appropriate ontology to be extended with information from a given text. Then we demonstrate heuristics to extract information from the unstructured text and to add it as structured information to the selected ontology. This identification of the relevant ontology is critical, as it is used in identifying relevant information in the text. We extract information in the form of semantic triples from the text, guided by the concepts in the ontology. We then convert the extracted information about the semantic class instances into Resource Description Framework (RDF) and append it to the existing domain ontology. This enables us to perform more precise semantic queries over the semantic triple store thus created. We have achieved 95% accuracy of information extraction in our implementation.
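The output shape of such an extraction step, subject-predicate-object triples pulled from free text, can be shown with a naive pattern matcher. The paper's heuristics are ontology-guided; this regex stand-in only illustrates the triple format, and the relation names are invented.

```python
import re

def extract_triples(text, relations):
    """Naive pattern-based extraction: find 'X <relation> Y' word patterns.

    relations: predicate names (e.g. drawn from ontology properties) to
    look for; returns (subject, predicate, object) triples.
    """
    triples = []
    for rel in relations:
        for m in re.finditer(r"(\w+) %s (\w+)" % re.escape(rel), text):
            triples.append((m.group(1), rel, m.group(2)))
    return triples
```

Each triple would then be serialised as RDF and appended to the domain ontology's triple store for querying.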

Journal ArticleDOI
TL;DR: This paper introduces a technique to identify composite changes that not only assists in formulating ontology change log data in a more concise manner, but also helps in realizing the semantics and intent behind any applied change.
Abstract: Ontologies can support a variety of purposes, ranging from capturing the conceptual knowledge to the organisation of digital content and information. However, information systems are always subject to change and ontology change management can pose challenges. In this sense, the application and representation of ontology changes in terms of higher-level change operations can describe more meaningful semantics behind the applied change. In this paper, we propose a four-phase process that covers the operationalization, representation and detection of higher-level changes in ontology evolution life cycle. We present different levels of change operators based on the granularity and domain-specificity of changes. The first layer is based on generic atomic level change operators, whereas the next two layers are user-defined (generic/domain-specific) change patterns. We introduce layered change logs for the explicit operational representation of ontology changes. We formalised the change log using a graph-based approach. We introduce a technique to identify composite changes that not only assists in formulating ontology change log data in a more concise manner, but also helps in realizing the semantics and intent behind any applied change. Furthermore, we identify frequent change sequences that are applied as a reference to discover reusable, often domain-specific and usage-driven change patterns. We describe the pattern identification algorithms and evaluate their performance.
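One instance of the composite-change identification described above can be sketched directly: an atomic "remove" of a class immediately followed by an "add" of the same class under a new parent is really a single higher-level "move". The change-tuple format here is an invented simplification of the paper's layered change log.

```python
def detect_composites(log):
    """Collapse remove+add of the same class into a 'move' composite change.

    log: list of atomic changes (kind, class, parent).
    """
    out, i = [], 0
    while i < len(log):
        if (i + 1 < len(log)
                and log[i][0] == "remove" and log[i + 1][0] == "add"
                and log[i][1] == log[i + 1][1]):
            # Same class removed then re-added: record where it moved to.
            out.append(("move", log[i][1], log[i + 1][2]))
            i += 2
        else:
            out.append(log[i])
            i += 1
    return out
```

Recognising such patterns is what makes the log both more concise and more faithful to the editor's intent than a flat list of atomic operations.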


Proceedings ArticleDOI
12 Jun 2013
TL;DR: This paper presents a methodology for ontology design developed in the context of data integration and claims that it has made the design process much more efficient and that there is a high potential to reuse the methodology.
Abstract: Methods for the design of formal ontologies have been a focus of research since the early nineties, when their importance and conceivable practical application in the engineering sciences were first understood. However, significant customization of generic methodologies is often required when they are applied in tangible scenarios. In this paper, we present a methodology for ontology design developed in the context of data integration. In this scenario, a target ontology is applied as a mediator for distinct schemas of individual data sources and, furthermore, as a reference schema for federated data queries. The methodology has been used and evaluated in a case study aiming at the integration of buildings' energy and carbon emission related data. We claim that we have made the design process much more efficient and that there is a high potential to reuse the methodology.

Journal ArticleDOI
TL;DR: The ontology mapping method proposed in this paper uses description logic (DL)-based bridging axioms between the ontologies; atomic-concept-level similarity is taken as input to establish mappings at the level of complex concepts and roles.

Journal ArticleDOI
01 Jul 2013
TL;DR: This work has developed a methodology that helps engineers to identify the type of general ontology to be reused, find out which axioms and definitions should be reused, and make a decision, using formal concept analysis, on what general ontology is going to be reused.
Abstract: Currently, there is a great deal of well-founded explicit knowledge formalizing general notions, such as time concepts and the part_of relation. Yet, it is often the case that instead of reusing ontologies that implement such notions (the so-called general ontologies), engineers create procedural programs that implicitly implement this knowledge. They do not save time and code by reusing explicit knowledge, and devote effort to solving problems that other people have already adequately solved. Consequently, we have developed a methodology that helps engineers to: (a) identify the type of general ontology to be reused; (b) find out which axioms and definitions should be reused; (c) make a decision, using formal concept analysis, on what general ontology is going to be reused; and (d) adapt and integrate the selected general ontology in the domain ontology to be developed. To illustrate our approach we have employed use cases. For each use case, we provide a set of heuristics with examples. Each of these heuristics has been tested in either OWL or Prolog. Our methodology has been applied to develop a pharmaceutical product ontology. Additionally, we have carried out a controlled experiment with graduate students doing an MSc in Artificial Intelligence. This experiment has yielded some interesting findings concerning what kind of features future extensions of the methodology should have.

Journal ArticleDOI
TL;DR: This article represents the first attempt to develop a logical ontology with an indigenous group, and suggests future directions toward an inclusive semantic interoperability.
Abstract: There has been ample work in GIScience on the formalization of ontologies, but a relatively neglected area is the influence of language and culture on ontologies of geography. Although this subject has been investigated for conceptual ontologies using indigenous words denoting geographic features, this article represents the first attempt to develop a logical ontology with an indigenous group. The process of developing logical ontologies is here referred to as formalization. A methodology for formalizing ontologies with indigenous peoples is presented. A conceptual, human-readable ontology and a logical ontology, with axioms specified in mathematical logic, were developed using this methodology. Research was conducted with the Cree, the largest indigenous language grouping in Canada. Results show that the geospatial ontology developed from Cree geographic concepts possesses unique design considerations: no superordinate classes were found from archival sources or Cree speakers, so the ontologies are structurally flat; the ontology contains some unique classes of water bodies; and the ontology challenges our notions of the generalizability of ontologies within indigenous groups. Whereas these difficulties are not insurmountable to the establishment of a cross-cultural Geospatial Semantic Web, the current plans of the World Wide Web Consortium do not adequately address them. We suggest future directions toward an inclusive semantic interoperability.

Journal ArticleDOI
TL;DR: This paper presents research on the development of a domain ontology adaptation system for personalized knowledge search and recommendation that adapts a suitable domain ontology according to the previous browsing and reading behavior of users (i.e., the usage history log).

Proceedings Article
21 Oct 2013
TL;DR: This paper briefly describes CIDER-CL, a schema-based ontology alignment system, and comments on its results in the Ontology Alignment Evaluation Initiative 2013 campaign (OAEI'13).
Abstract: CIDER-CL is the evolution of CIDER, a schema-based ontology alignment system. Its algorithm compares each pair of ontology entities by analysing their similarity at different levels of their ontological context (linguistic description, superterms, subterms, related terms, etc.). Then, such elementary similarities are combined by means of artificial neural networks. In its current version, CIDER-CL uses SoftTFIDF for monolingual comparisons and Cross-Lingual Explicit Semantic Analysis for comparisons between entities documented in different natural languages. In this paper we briefly describe CIDER-CL and comment on its results in the Ontology Alignment Evaluation Initiative 2013 campaign (OAEI'13).

Proceedings ArticleDOI
09 Sep 2013
TL;DR: This paper discusses how ROost was developed, and presents a fragment of ROost concerning the software testing process, its activities, artifacts, and procedures.
Abstract: Software testing is a critical process for achieving product quality. Its importance is more and more recognized, and there is a growing concern with improving the accomplishment of this process. In this context, Knowledge Management emerges as an important supporting tool. However, managing relevant knowledge for reuse is difficult, and it requires some means to represent and associate semantics with a large volume of test information. In order to address this problem, we have developed a Reference Ontology on Software Testing (ROost). ROost is built reusing ontology patterns from the Software Process Ontology Pattern Language (SP-OPL). In this paper, we discuss how ROost was developed, and present a fragment of ROost concerning the software testing process, its activities, artifacts, and procedures.

Journal ArticleDOI
TL;DR: This paper reviews recent progress on geologic time ontologies, discusses further improvements, and makes recommendations for other geoscience ontology work.
Abstract: Semantic Web technologies bring innovative ideas to computer applications in geoscience. As an essential part of the Semantic Web, ontologies are increasingly discussed in the geoscience community, in which the geologic time scale is one of the topics that have received the most discussion and practice. This paper aims to carry out a review of the recent progress on geologic time ontologies, discuss further improvements, and make recommendations for other geoscience ontology works. Several models and ontologies of the geologic time scale are collected and analyzed. Items such as ontology evaluation, ontology mapping, ontology governance, ontology delivery and multilingual labels are discussed for advancing the geologic time ontologies. We hope the discussion can be useful for other geoscience ontology works, and we also make a few further recommendations, such as referring to an ontology spectrum in ontology creation, collaborative working to improve interoperability, and balancing expressivity, implementability and maintainability to achieve better applications of ontologies.