
Showing papers on "Suggested Upper Merged Ontology" published in 2011


Journal ArticleDOI
TL;DR: MASTRO is a Java tool for ontology-based data access (OBDA) developed at Sapienza Università di Roma and at the Free University of Bozen-Bolzano that provides optimized algorithms for answering expressive queries, as well as features for intensional reasoning and consistency checking.
Abstract: In this paper we present MASTRO, a Java tool for ontology-based data access (OBDA) developed at Sapienza Università di Roma and at the Free University of Bozen-Bolzano. MASTRO manages OBDA systems in which the ontology is specified in DL-Lite_{A,id}, a logic of the DL-Lite family of tractable Description Logics specifically tailored to ontology-based data access, and is connected to external JDBC-enabled data management systems through semantic mappings that associate SQL queries over the external data to the elements of the ontology. Advanced forms of integrity constraints, which turned out to be very useful in practical applications, are also enabled over the ontologies. Optimized algorithms for answering expressive queries are provided, as well as features for intensional reasoning and consistency checking. MASTRO provides a proprietary API, an OWLAPI-compatible interface, and a plugin for the Protégé 4 ontology editor. It has been successfully used in several projects carried out in collaboration with important organizations, on which we briefly comment in this paper.
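To make the mapping idea concrete, here is a minimal Python sketch of an OBDA-style mapping in which ontology elements are associated with SQL queries over the underlying source, and a query over the ontology vocabulary is unfolded into SQL. The class, property, table, and column names are invented for illustration; this is not MASTRO's mapping syntax or API.

# Minimal OBDA-style mapping sketch (hypothetical names; not MASTRO's syntax).
# Each ontology element is associated with an SQL query whose answers
# populate that element's instances.
MAPPINGS = {
    ":Employee": "SELECT emp_id FROM employees",
    ":worksFor": "SELECT emp_id, dept_id FROM employees WHERE dept_id IS NOT NULL",
}

def unfold(ontology_atom: str) -> str:
    """Rewrite a query over the ontology vocabulary into SQL over the source."""
    try:
        return MAPPINGS[ontology_atom]
    except KeyError:
        raise ValueError(f"No mapping defined for {ontology_atom}")

if __name__ == "__main__":
    # Asking for all instances of :Employee is answered by the mapped SQL query.
    print(unfold(":Employee"))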

282 citations


Journal ArticleDOI
TL;DR: This paper proposes a set of guidelines for importing required terms from an external resource into a target ontology, describes the methodology and its implementation, presents some examples of its application, and outlines future work and extensions.
Abstract: While the Web Ontology Language OWL provides a mechanism to import ontologies, this mechanism is not always suitable. Current editing tools present challenges for working with large ontologies and direct OWL imports can prove impractical for day-to-day development. Furthermore, external ontologies often undergo continuous change which can introduce conflicts when integrated with multiple efforts. Finally, importing heterogeneous ontologies in their entirety may lead to inconsistencies or unintended inferences. In this paper we propose a set of guidelines for importing required terms from an external resource into a target ontology. We describe the methodology, its implementation, present some examples of this application, and outline future work and extensions.
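A minimal sketch of the selective-import idea, copying only a needed term and its annotations into the target ontology instead of importing the external resource wholesale. It assumes the rdflib Python library; the IRIs and labels are invented, and the snippet does not reproduce the authors' actual guidelines.

from rdflib import Graph, URIRef
from rdflib.namespace import RDFS

# A stand-in for the external ontology (hypothetical IRIs; normally this would
# be parsed from the external resource's published file).
EXTERNAL_TTL = """
@prefix :     <http://example.org/external#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
:Neuron rdfs:label "neuron" ; rdfs:comment "An electrically excitable cell." .
:Axon   rdfs:label "axon" .
"""

TERM = URIRef("http://example.org/external#Neuron")

external = Graph()
external.parse(data=EXTERNAL_TTL, format="turtle")

target = Graph()   # the ontology under development

# Copy only the needed term's annotations, not the whole external ontology.
for p, o in external.predicate_objects(subject=TERM):
    if p in (RDFS.label, RDFS.comment):
        target.add((TERM, p, o))

print(target.serialize(format="turtle"))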

165 citations


Book ChapterDOI
01 Jan 2011
TL;DR: This chapter presents a survey of the most relevant methods, techniques and tools used for the task of ontology learning, explaining how BOEMIE addresses problems observed in existing systems and contributes to issues that are not frequently considered by existing approaches.
Abstract: Ontology learning is the process of acquiring (constructing or integrating) an ontology (semi-)automatically. Being a knowledge acquisition task, it is a complex activity, which becomes even more complex in the context of the BOEMIE project, due to the management of multimedia resources and the multi-modal semantic interpretation that they require. The purpose of this chapter is to present a survey of the most relevant methods, techniques and tools used for the task of ontology learning. Adopting a practical perspective, an overview of the main activities involved in ontology learning is presented. This breakdown of the learning process is used as a basis for the comparative analysis of existing tools and approaches. The comparison is done along dimensions that emphasize the particular interests of the BOEMIE project. In this context, ontology learning in BOEMIE is treated and compared to the state of the art, explaining how BOEMIE addresses problems observed in existing systems and contributes to issues that are not frequently considered by existing approaches.

158 citations


Journal ArticleDOI
TL;DR: This work investigates the literature on both metamodelling and ontologies in order to identify ways in which they can be made compatible and linked, so as to benefit both communities and contribute to a coherent underpinning theory for software engineering.

143 citations


Journal ArticleDOI
TL;DR: This restructured ontology can be used to identify immune cells by flow cytometry, supports sophisticated biological queries involving cells, and helps generate new hypotheses about cell function based on similarities to other cell types.
Abstract: The Cell Ontology (CL) is an ontology for the representation of in vivo cell types. As biological ontologies such as the CL grow in complexity, they become increasingly difficult to use and maintain. By making the information in the ontology computable, we can use automated reasoners to detect errors and assist with classification. Here we report on the generation of computable definitions for the hematopoietic cell types in the CL. Computable definitions for over 340 CL classes have been created using a genus-differentia approach. These define cell types according to multiple axes of classification such as the protein complexes found on the surface of a cell type, the biological processes participated in by a cell type, or the phenotypic characteristics associated with a cell type. We employed automated reasoners to verify the ontology and to reveal mistakes in manual curation. The implementation of this process exposed areas in the ontology where new cell type classes were needed to accommodate species-specific expression of cellular markers. Our use of reasoners also inferred new relationships within the CL, and between the CL and the contributing ontologies. This restructured ontology can be used to identify immune cells by flow cytometry, supports sophisticated biological queries involving cells, and helps generate new hypotheses about cell function based on similarities to other cell types. Use of computable definitions enhances the development of the CL and supports the interoperability of OBO ontologies.
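The genus-differentia pattern can be illustrated with a small rdflib snippet that parses an equivalent-class axiom of that shape: a hypothetical cell class defined as a Cell (genus) that has some marker on its plasma membrane (differentia). The IRIs are placeholders, not actual Cell Ontology terms.

from rdflib import Graph

# A hypothetical genus-differentia definition in Turtle: an "ExampleTCell" is
# a Cell (genus) that has some ExampleMarker on its plasma membrane (differentia).
TTL = """
@prefix :    <http://example.org/cells#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

:ExampleTCell a owl:Class ;
    owl:equivalentClass [
        a owl:Class ;
        owl:intersectionOf (
            :Cell
            [ a owl:Restriction ;
              owl:onProperty :hasPlasmaMembranePart ;
              owl:someValuesFrom :ExampleMarker ]
        )
    ] .
"""

g = Graph()
g.parse(data=TTL, format="turtle")
print(f"Parsed {len(g)} triples from the genus-differentia definition.")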

140 citations


Journal ArticleDOI
TL;DR: This article describes how to adapt a semi-automatic method for learning OWL class expressions to the ontology engineering use case, and reports rigorous performance optimization of the underlying algorithms so that instant suggestions can be provided to the user.

134 citations


Proceedings ArticleDOI
16 Jul 2011
TL;DR: A new approach is reported that enables us to efficiently extract a polynomial representation of the family of all locality-based modules of an ontology, and the fundamental algorithm to pursue this task is described.
Abstract: Extracting a subset of a given ontology that captures all the ontology's knowledge about a specified set of terms is a well-understood task. This task can be based, for instance, on locality-based modules. However, a single module does not allow us to understand the topicality, connectedness, structure, or superfluous parts of an ontology, nor the agreement between actual and intended modeling. The strong logical properties of locality-based modules suggest that the family of all such modules of an ontology can support comprehension of the ontology as a whole. However, extracting that family is not feasible, since the number of locality-based modules of an ontology can be exponential w.r.t. its size. In this paper we report on a new approach that enables us to efficiently extract a polynomial representation of the family of all locality-based modules of an ontology. We also describe the fundamental algorithm to pursue this task, and report on experiments carried out and results obtained.
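The fixpoint flavour of locality-based module extraction can be sketched for the special case of atomic SubClassOf axioms: an axiom is pulled into the module once its left-hand side falls inside the growing signature. This is a simplification of the general algorithm, and the class names are invented.

# Toy locality-style module extraction, restricted to atomic SubClassOf axioms.
# An axiom (A, B) meaning "A SubClassOf B" is treated as non-local w.r.t. a
# signature S when A is in S; its right-hand side then joins the signature.
def extract_module(axioms, signature):
    module, sig = set(), set(signature)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in axioms:
            if (lhs, rhs) not in module and lhs in sig:
                module.add((lhs, rhs))
                sig.add(rhs)          # the module's signature grows
                changed = True
    return module

if __name__ == "__main__":
    # Hypothetical class hierarchy.
    axioms = {("Dog", "Mammal"), ("Mammal", "Animal"), ("Car", "Vehicle")}
    print(extract_module(axioms, {"Dog"}))
    # -> {('Dog', 'Mammal'), ('Mammal', 'Animal')}; the Car axiom stays out.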

112 citations


Proceedings ArticleDOI
24 Oct 2011
TL;DR: This talk provides an introduction to ontology-based data management, illustrating the main ideas and techniques for using an ontology to access the data layer of an information system, and discusses several important issues that are still the subject of extensive investigation, including the need for inconsistency-tolerant query answering methods and the need to support update operations expressed over the ontology.
Abstract: Ontology-based data management aims at accessing and using data by means of an ontology, i.e., a conceptual representation of the domain of interest in the underlying information system. This new paradigm provides several interesting features, many of which have already proved effective in managing complex information systems. On the other hand, several important issues remain open, and constitute stimulating challenges for the research community. In this talk we first provide an introduction to ontology-based data management, illustrating the main ideas and techniques for using an ontology to access the data layer of an information system, and then we discuss several important issues that are still the subject of extensive investigation, including the need for inconsistency-tolerant query answering methods and the need to support update operations expressed over the ontology.

112 citations


Journal ArticleDOI
TL;DR: The design and development of the NanoParticle Ontology, which is developed within the framework of the Basic Formal Ontology (BFO) and implemented in the Web Ontology Language (OWL) using well-defined ontology design principles, is discussed.

110 citations


Book ChapterDOI
28 Jun 2011
TL;DR: Pythia compositionally constructs meaning representations using a vocabulary aligned to the vocabulary of a given ontology, relying on a deep linguistic analysis that allows it to construct formal queries even for complex natural language questions.
Abstract: In this paper we present the ontology-based question answering system Pythia. It compositionally constructs meaning representations using a vocabulary aligned to the vocabulary of a given ontology. In doing so it relies on a deep linguistic analysis, which allows it to construct formal queries even for complex natural language questions (e.g. those involving quantification and superlatives).
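For illustration only, the sketch below shows the kind of formal query such a system ultimately produces, built over a hypothetical ontology vocabulary by a simple template; this is far simpler than Pythia's compositional, grammar-based construction.

# Toy illustration of the end product of ontology-based question answering:
# a formal (SPARQL) query phrased in a given ontology's vocabulary.
# This is a plain template, NOT Pythia's compositional semantics, and the
# vocabulary IRIs are hypothetical.
def question_to_sparql(class_iri: str, property_iri: str, value: str) -> str:
    return f"""
    SELECT ?x WHERE {{
        ?x a <{class_iri}> .
        ?x <{property_iri}> "{value}" .
    }}
    """

if __name__ == "__main__":
    # "Which cities are located in Germany?" (hypothetical vocabulary)
    print(question_to_sparql("http://example.org/geo#City",
                             "http://example.org/geo#locatedIn",
                             "Germany"))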

110 citations


Journal ArticleDOI
TL;DR: A survey of the different approaches to ontology learning from semi-structured and unstructured data is presented.
Abstract: The problem that ontology learning deals with is the knowledge acquisition bottleneck, that is, the difficulty of actually modelling the knowledge relevant to the domain of interest. Ontologies are the vehicle by which we can model and share knowledge among various applications in a specific domain. Consequently, many research efforts have developed ontology learning approaches and systems. In this paper, we present a survey of the different approaches to ontology learning from semi-structured and unstructured data.

Journal ArticleDOI
TL;DR: An approach for ontology extraction on top of an RDB that incorporates a concept hierarchy as background knowledge is proposed, which is more efficient than current approaches and can be applied in fields such as eGovernment, eCommerce, and so on.
Abstract: The Relational Database (RDB) has been widely used as the back-end database of information systems. Containing a wealth of high-quality information, an RDB provides the conceptual model and metadata needed in ontology construction. However, most existing ontology building approaches convert the RDB schema without considering the knowledge residing in the database. This paper proposes an approach for ontology extraction on top of an RDB that incorporates a concept hierarchy as background knowledge. Incorporating the background knowledge in the building process of a Web Ontology Language (OWL) ontology gives two main advantages: (1) it accelerates the building process, thereby minimizing the conversion cost; and (2) the background knowledge guides the extraction of knowledge residing in the database. An experimental simulation using a gold standard shows that the Taxonomic F-measure (TF) evaluation reaches 90% while Relation Overlap (RO) is 83.33%. In terms of processing time, this approach is more efficient than current approaches. In addition, our approach can be applied in fields such as eGovernment, eCommerce, and so on.
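A minimal rdflib sketch of the basic conversion step, turning tables into classes and columns into datatype properties, and then slotting the extracted classes under a given background concept hierarchy. The schema, hierarchy, and IRIs are invented, and the paper's actual extraction rules are considerably richer.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/onto#")

# Hypothetical relational schema: table name -> column names.
schema = {"Customer": ["name", "email"], "Order": ["date", "total"]}

# Hypothetical background concept hierarchy: subclass -> superclass.
hierarchy = {"Customer": "Agent", "Order": "Event"}

g = Graph()
g.bind("ex", EX)

for table, columns in schema.items():
    cls = EX[table]
    g.add((cls, RDF.type, OWL.Class))
    for column in columns:
        prop = EX[f"{table}_{column}"]
        g.add((prop, RDF.type, OWL.DatatypeProperty))
        g.add((prop, RDFS.domain, cls))
        g.add((prop, RDFS.label, Literal(column)))

# Attach the background hierarchy so extracted classes slot under known concepts.
for sub, sup in hierarchy.items():
    g.add((EX[sup], RDF.type, OWL.Class))
    g.add((EX[sub], RDFS.subClassOf, EX[sup]))

print(g.serialize(format="turtle"))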

Book ChapterDOI
29 May 2011
TL;DR: This work presents a solution for automatically finding schema-level links between two LOD ontologies - in the sense of ontology alignment - and shows that this solution significantly outperformed existing ontology alignment solutions on the same task.
Abstract: The Linked Open Data (LOD) is a major milestone towards realizing the Semantic Web vision, and can enable applications such as robust Question Answering (QA) systems that can answer queries requiring multiple, disparate information sources. However, realizing these applications requires relationships at both the schema and instance level, but currently the LOD only provides relationships for the latter. To address this limitation, we present a solution for automatically finding schema-level links between two LOD ontologies - in the sense of ontology alignment. Our solution, called BLOOMS+, extends our previous solution (i.e. BLOOMS) in two significant ways. BLOOMS+ 1) uses a more sophisticated metric to determine which classes between two ontologies to align, and 2) considers contextual information to further support (or reject) an alignment. We present a comprehensive evaluation of our solution using schema-level mappings from LOD ontologies to Proton (an upper level ontology) - created manually by human experts for a real world application called FactForge. We show that our solution performed well on this task. We also show that our solution significantly outperformed existing ontology alignment solutions (including our previously published work on BLOOMS) on this same task.
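As a toy illustration of combining a name-based metric with contextual information, the sketch below scores a candidate class pair by token overlap plus a bonus when their parents also overlap. This is a deliberately crude stand-in, not the BLOOMS+ metric, which builds category hierarchies from Wikipedia; all names and weights are invented.

# Toy schema-alignment scoring: name overlap plus a contextual bonus when the
# candidate classes' parents also overlap. Far simpler than BLOOMS+.
def token_overlap(a: str, b: str) -> float:
    ta, tb = set(a.lower().split("_")), set(b.lower().split("_"))
    return len(ta & tb) / len(ta | tb)

def alignment_score(class_a, class_b, parents_a, parents_b) -> float:
    base = token_overlap(class_a, class_b)
    context = max((token_overlap(pa, pb) for pa in parents_a for pb in parents_b),
                  default=0.0)
    return 0.7 * base + 0.3 * context   # weights are arbitrary, for illustration

if __name__ == "__main__":
    print(alignment_score("music_artist", "artist",
                          parents_a=["agent"], parents_b=["creative_agent"]))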

Book ChapterDOI
26 May 2011

Journal ArticleDOI
TL;DR: A novel method, dubbed DiShIn, that effectively exploits the multiple inheritance relationships present in many biomedical ontologies by modifying the way traditional semantic similarity measures calculate the shared information content of two ontology concepts.
Abstract: The large-scale effort in developing, maintaining and making biomedical ontologies available motivates the application of similarity measures to compare ontology concepts or, by extension, the entities described therein. A common approach, known as semantic similarity, compares ontology concepts through the information content they share in the ontology. However, different disjunctive ancestors in the ontology are frequently neglected, or not properly explored, by semantic similarity measures. This paper proposes a novel method, dubbed DiShIn, that effectively exploits the multiple inheritance relationships present in many biomedical ontologies. DiShIn calculates the shared information content of two ontology concepts, based on the information content of the disjunctive common ancestors of the concepts being compared. DiShIn identifies these disjunctive ancestors through the number of distinct paths from the concepts to their common ancestors. DiShIn was applied to Gene Ontology and its performance was evaluated against state-of-the-art measures using CESSM, a publicly available evaluation platform of protein similarity measures. By modifying the way traditional semantic similarity measures calculate the shared information content, DiShIn was able to obtain a statistically significant higher correlation between semantic and sequence similarity. Moreover, the incorporation of DiShIn in existing applications that exploit multiple inheritance would reduce their execution time.
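A toy reading of the disjunctive-ancestor idea described above: common ancestors are grouped by the difference in the number of distinct paths from the two concepts, the most informative ancestor of each group is kept, and their information content is averaged. The DAG and IC values are invented, and this is not the authors' implementation.

from statistics import mean

# Toy DAG with multiple inheritance: concept -> set of direct parents.
PARENTS = {
    "a": set(), "b": {"a"}, "c": {"a"},
    "d": {"b", "c"}, "e": {"b"}, "f": {"d", "e"},
}
# Hypothetical information-content values (normally estimated from a corpus).
IC = {"a": 0.0, "b": 1.0, "c": 1.0, "d": 2.0, "e": 2.0, "f": 3.0}

def ancestors(c):
    result = {c}
    for p in PARENTS[c]:
        result |= ancestors(p)
    return result

def path_count(c, a):
    """Number of distinct directed paths from concept c up to ancestor a."""
    if c == a:
        return 1
    return sum(path_count(p, a) for p in PARENTS[c])

def shared_ic(c1, c2):
    common = ancestors(c1) & ancestors(c2)
    # Group common ancestors by the difference in path counts; keep the most
    # informative ancestor of each group (one reading of the disjunctive idea).
    best = {}
    for a in common:
        key = path_count(c1, a) - path_count(c2, a)
        best[key] = max(best.get(key, 0.0), IC[a])
    return mean(best.values())

if __name__ == "__main__":
    print(shared_ic("f", "d"))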

Journal ArticleDOI
TL;DR: OntoCmaps is presented, a domain-independent and open ontology learning tool that extracts deep semantic representations from corpora and generates rich conceptual representations in the form of concept maps and proposes an innovative filtering mechanism based on metrics from graph theory.

Journal ArticleDOI
01 Jul 2011
TL;DR: This paper presents an approach to extracting relevant ontology concepts and their relationships from a knowledge base of heterogeneous text documents, shows the architecture of the implemented system, and discusses experiments in a real-world context.
Abstract: Ontologies have been frequently employed in order to solve problems derived from the management of shared distributed knowledge and the efficient integration of information across different applications. However, the process of ontology building is still a lengthy and error-prone task. Therefore, a number of research studies to (semi-)automatically build ontologies from existing documents have been developed. In this paper, we present our approach to extract relevant ontology concepts and their relationships from a knowledge base of heterogeneous text documents. We also show the architecture of the implemented system and discuss the experiments in a real-world context.

Proceedings ArticleDOI
24 Oct 2011
TL;DR: This paper introduces the notion of an Ontology Stream Management System (OSMS), presents a stream-reasoning approach based on a Truth Maintenance System (TMS), and presents an optimised EL++ algorithm to reduce memory consumption.
Abstract: So far, researchers in the Description Logics and Ontology communities have mainly considered ontology reasoning services for static ontologies. The rapid development of the Semantic Web and its emerging data call for reasoning technologies for dynamic knowledge streams. Existing work on stream reasoning is focused on lightweight languages such as RDF and RDFS. In this paper, we introduce the notion of an Ontology Stream Management System (OSMS) and present a stream-reasoning approach based on a Truth Maintenance System (TMS). We present an optimised EL++ algorithm to reduce memory consumption. Our evaluations show that the optimisation enables TMS-enabled EL++ reasoning to deal with relatively large volumes of data and to update efficiently.

Journal ArticleDOI
01 Jan 2011
TL;DR: This work proposes a novel approach to facilitate the concurrent development of ontologies by different groups of experts that adapts Concurrent Versioning, a successful paradigm in software development, to allow several developers to make changes concurrently to an ontology.
Abstract: We propose a novel approach to facilitate the concurrent development of ontologies by different groups of experts. Our approach adapts Concurrent Versioning, a successful paradigm in software development, to allow several developers to make changes concurrently to an ontology. Conflict detection and resolution are based on novel techniques that take into account the structure and semantics of the ontology versions to be reconciled by using precisely-defined notions of structural and semantic differences between ontologies and by extending state-of-the-art ontology debugging and repair techniques. We also present ContentCVS, a system that implements our approach, and a preliminary empirical evaluation which suggests that our approach is both computationally feasible and useful in practice.
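The structural side of this can be sketched as a three-way diff: axioms are treated as plain tuples, two concurrently edited versions are compared against their common base, and an axiom touched by both editors is flagged. ContentCVS adds precisely defined semantic differences and repair techniques on top of this kind of structural comparison; the axioms below are invented.

# Toy structural diff for concurrent ontology editing: axioms are plain tuples,
# and two versions are compared against their common base version.
def diff(base, version):
    return version - base, base - version          # (added, removed)

def conflicts(base, mine, theirs):
    """Axioms touched (added or removed) by both editors."""
    mine_touched = (mine - base) | (base - mine)
    theirs_touched = (theirs - base) | (base - theirs)
    return mine_touched & theirs_touched

if __name__ == "__main__":
    base   = {("Dog", "subClassOf", "Animal")}
    mine   = {("Dog", "subClassOf", "Pet")}       # replaced Animal with Pet
    theirs = {("Dog", "subClassOf", "Mammal")}    # replaced Animal with Mammal
    print("added/removed by me:", diff(base, mine))
    print("potential conflicts:", conflicts(base, mine, theirs))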

Book ChapterDOI
29 May 2011
TL;DR: An investigation of the assumption that ontology developers will take a top-down approach by using a foundational ontology, because it purportedly speeds up ontology development and improves the quality and interoperability of the domain ontology, found that the 'cost' incurred in getting acquainted with a foundational ontology, compared to starting from scratch, was more than made up for in size, understandability, and interoperability already within the limited time frame of the experiment.
Abstract: There is an assumption that ontology developers will use a top-down approach by using a foundational ontology, because it purportedly speeds up ontology development and improves the quality and interoperability of the domain ontology. Informal assessment of these assumptions reveals ambiguous results that are not only open to different interpretations but also such that foundational ontology usage is not foreseen in most methodologies. Therefore, we investigated these assumptions in a controlled experiment. After a lecture about DOLCE, BFO, and part-whole relations, one-third of the participants chose to start domain ontology development with an OWLized foundational ontology. On average, those who commenced with a foundational ontology added more new classes and class axioms, and significantly fewer object properties, than those who started from scratch. No ontology contained errors regarding part-of vs. is-a. The comprehensive results show that the 'cost' incurred in spending time getting acquainted with a foundational ontology compared to starting from scratch was more than made up for in size, understandability, and interoperability already within the limited time frame of the experiment.

Book
01 Jan 2011
TL;DR: This book covers bootstrapping ontology evolution with multimedia information extraction, along with the semantic representation and interpretation of multimedia content.
Abstract: The chapters cover: Bootstrapping Ontology Evolution with Multimedia Information Extraction; Semantic Representation of Multimedia Content; Semantics Extraction from Images; Ontology Based Information Extraction from Text; Logical Formalization of Multimedia Interpretation; Ontology Population and Enrichment: State of the Art; Ontology and Instance Matching; and A Survey of Semantic Image and Video Annotation Tools.

Journal Article
TL;DR: The reengineering of part of a Software Process Ontology based on the Unified Foundational Ontology (UFO) is discussed; the reengineered part concerns standard processes, project processes, and activities, which are analyzed in the light of UFO concepts.
Abstract: During project planning, knowledge about software processes is useful in several situations: software processes are defined, activities are scheduled, and people are allocated to these activities. In this context, standard software processes are used as a basis for defining project processes, and tools are used to support scheduling, people allocation, and so on. Ideally, people and tools should share a common conceptualization regarding this domain to allow interoperability and correct use of the tools. A domain ontology can be used to define an explicit representation of this shared conceptualization. Moreover, for a domain ontology to adequately serve as a reference model, it should be built explicitly taking foundational concepts into account. This paper discusses the reengineering of part of a Software Process Ontology based on the Unified Foundational Ontology (UFO). The reengineered part concerns standard processes, project processes, and activities, which are analyzed in the light of UFO concepts.

Book ChapterDOI
23 Oct 2011
TL;DR: In this article, the authors present a solution for visualizing and navigating ontologies, which exploits an empirically-validated ontology summarization method, both to provide concise views of large ontologies and also to support a 'middle-out' ontology navigation approach, starting from the most information-rich nodes.
Abstract: Observational studies in the literature have highlighted low levels of user satisfaction in relation to the support for ontology visualization and exploration provided by current ontology engineering tools. These issues are particularly problematic for non-expert users, who rely on effective tool support to abstract from representational details and to be able to make sense of the contents and the structure of ontologies. To address these issues, we have developed a novel solution for visualizing and navigating ontologies, KC-Viz, which exploits an empirically-validated ontology summarization method, both to provide concise views of large ontologies, and also to support a 'middle-out' ontology navigation approach, starting from the most information-rich nodes (key concepts). In this paper we present the main features of KC-Viz and also discuss the encouraging results derived from a preliminary empirical evaluation, which suggest that the use of KC-Viz provides performance advantages to users tackling realistic browsing and visualization tasks. Supplementary data gathered through questionnaires also convey additional interesting findings, including evidence that prior experience in ontology engineering affects not just objective performance in ontology engineering tasks but also subjective views on the usability of ontology engineering tools.
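As a crude stand-in for key-concept extraction, the sketch below ranks classes by their number of descendants and takes the top few as navigation starting points. This is only a proxy for illustration; KC-Viz uses an empirically validated summarization measure, and the toy hierarchy is invented.

# Crude key-concept ranking: score each class by how many descendants it has.
CHILDREN = {
    "Thing": ["Agent", "Event"],
    "Agent": ["Person", "Organization"],
    "Event": ["Meeting"],
    "Person": [], "Organization": [], "Meeting": [],
}

def descendant_count(cls):
    return sum(1 + descendant_count(c) for c in CHILDREN[cls])

def key_concepts(top_n=3):
    ranked = sorted(CHILDREN, key=descendant_count, reverse=True)
    return ranked[:top_n]

if __name__ == "__main__":
    print(key_concepts())   # -> ['Thing', 'Agent', 'Event']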

Journal ArticleDOI
TL;DR: A novel approach to ontology localization with the objective of obtaining multilingual ontologies is presented, together with an extension to the Ontology Metadata Vocabulary, the so-called LexOMV, with the aim of reporting on multilinguality at the ontology metadata level.
Abstract: This paper presents a novel approach to ontology localization with the objective of obtaining multilingual ontologies. Within the ontology development process, ontology localization has been defined as the activity of adapting an ontology to a concrete linguistic and cultural community. Depending on the ontology layers (terminological and/or conceptual) involved in the ontology localization activity, three heterogeneous multilingual ontology metamodels have been identified, of which we propose one. Our proposal consists of associating the ontology metamodel with an external model for representing and structuring lexical and terminological data in different natural languages. Our model has been called the Linguistic Information Repository (LIR). The main advantages of this modelling modality lie in its flexibility, allowing (1) the enrichment of any ontology element with as much linguistic information as needed by the final application, and (2) the establishment of links among linguistic elements within and across different natural languages. The LIR model has been designed as an ontology of linguistic elements and is currently available in the Web Ontology Language (OWL). The set of lexical and terminological data that it provides to ontology elements enables the localization of any ontology to a certain linguistic and cultural universe. The LIR has been evaluated against the multilingual requirements of the Food and Agriculture Organization of the United Nations in the framework of the NeOn project. It has proven to solve multilingual representation problems related to the establishment of well-defined relations among lexicalizations within and across languages, as well as conceptualization mismatches among different languages. Finally, we present an extension to the Ontology Metadata Vocabulary, the so-called LexOMV, with the aim of reporting on multilinguality at the ontology metadata level. By adding this contribution to the LIR model, we account for multilinguality at the three levels of an ontology: the data level, the knowledge representation level, and the metadata level.
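The basic idea of enriching ontology elements with multilingual lexical information can be illustrated with language-tagged labels in rdflib; the class IRI and labels are invented, and the actual LIR model is a much richer ontology of linguistic elements rather than plain labels.

from rdflib import Graph, Literal, URIRef
from rdflib.namespace import RDFS

# Hypothetical ontology class to be localized.
RIVER = URIRef("http://example.org/onto#River")

g = Graph()
for text, lang in [("river", "en"), ("río", "es"), ("fleuve", "fr")]:
    g.add((RIVER, RDFS.label, Literal(text, lang=lang)))

# Retrieve the lexicalization for a requested language.
def label_for(graph, subject, lang):
    for o in graph.objects(subject, RDFS.label):
        if getattr(o, "language", None) == lang:
            return str(o)
    return None

print(label_for(g, RIVER, "es"))   # -> río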

Journal ArticleDOI
TL;DR: A novel, holistic approach to facilitate the involvement of domain experts in the ontology authoring process is presented here, which integrates an ontology construction methodology, the use of a controlled natural language, and appropriate tool support.

Journal ArticleDOI
TL;DR: This paper proposes the extraction of concepts, instances, and relationships from a handbook of a specific domain to quickly construct a base domain ontology, as a good starting point for expediting the development process of a more comprehensive domain ontology.

Journal ArticleDOI
01 Sep 2011
TL;DR: This paper discusses the design and development of an ontology for Modeling and Simulation called the Discrete-event Modeling Ontology (DeMO), and it presents prototype applications that demonstrate various uses and benefits that such an ontology may provide to the Modeling and Simulation community.
Abstract: Several fields have created ontologies for their subdomains. For example, the biological sciences have developed extensive ontologies such as the Gene Ontology, which is considered a great success. Ontologies could provide similar advantages to the Modeling and Simulation community. They provide a way to establish common vocabularies and capture knowledge about a particular domain with community-wide agreement. Ontologies can support significantly improved (semantic) search and browsing, integration of heterogeneous information sources, and improved knowledge discovery capabilities. This paper discusses the design and development of an ontology for Modeling and Simulation called the Discrete-event Modeling Ontology (DeMO), and it presents prototype applications that demonstrate various uses and benefits that such an ontology may provide to the Modeling and Simulation community.

Journal ArticleDOI
01 Jul 2011
TL;DR: This paper introduces a multiagent ontology mapping framework that has been designed to operate effectively in the Semantic Web environment and analyzes the challenges faced by this framework.
Abstract: Ontology mapping is a prerequisite for achieving heterogeneous data integration on the Semantic Web. The vision of the Semantic Web implies that a large number of ontologies present on the web need to be aligned before one can make use of them, for example, for question answering on the Semantic Web. At the same time, these ontologies can be used as domain-specific background knowledge by ontology mapping systems to increase mapping precision. However, these ontologies can differ in representation, quality, and size, which poses different challenges to ontology mapping. In this paper, we analyze these challenges and introduce a multiagent ontology mapping framework that has been designed to operate effectively in the Semantic Web environment.

01 Jan 2011
TL;DR: A method for automatic ontology building that uses relational database resources to improve construction efficiency is proposed, and practical experiments demonstrate the feasibility of the method and system.
Abstract: Ontology resources are increasingly important for organizing knowledge, and improving the efficiency of ontology construction has become an important task for ontology applications. How to generate an ontology automatically from database resources is an emerging task in ontology construction. Aiming at solving this problem, a method for automatic ontology building that uses relational database resources to improve efficiency is proposed in this paper. Firstly, a mapping analysis of ontologies and databases is performed. Secondly, construction rules for ontology elements based on the relational database, which are used to generate ontology concepts, properties, axioms, and instances, are put forward. Thirdly, an Ontology automatic Generation System based on Relational Database (OGSRD) is designed and implemented. Finally, practical experiments demonstrate the feasibility of the method and system.

Book ChapterDOI
06 Oct 2011
TL;DR: A by no means complete account of languages that have been used by the research community for representing ontologies is presented, and the four most popular ontology languages (KIF, OWL, RDF + RDF(S), and DAML+OIL) are reviewed.
Abstract: Nowadays a number of papers present research on the application of ontologies to business system modelling. For this purpose, formal and executable ontologies receive a lot of attention. However, the formality and executability of an ontology depend on the language used to represent it. This paper presents a by no means complete account of languages that have been used by the research community for representing ontologies. The four most popular ontology languages (KIF, OWL, RDF + RDF(S) and DAML+OIL) are reviewed, and their advantages and disadvantages are discussed. Finally, thirteen comparison criteria are distinguished and the chosen ontology languages are compared against them. The discussion is also presented in the paper.