
Showing papers on "Ontology-based data integration published in 2013"


Proceedings Article
01 Oct 2013
TL;DR: A new semantic parsing approach that learns to resolve ontological mismatches; the parser is learned from question-answer pairs, uses a probabilistic CCG to build linguistically motivated logical-form meaning representations, and includes an ontology matching model that adapts the output logical forms for each target ontology.
Abstract: We consider the challenge of learning semantic parsers that scale to large, open-domain problems, such as question answering with Freebase. In such settings, the sentences cover a wide variety of topics and include many phrases whose meaning is difficult to represent in a fixed target ontology. For example, even simple phrases such as ‘daughter’ and ‘number of people living in’ cannot be directly represented in Freebase, whose ontology instead encodes facts about gender, parenthood, and population. In this paper, we introduce a new semantic parsing approach that learns to resolve such ontological mismatches. The parser is learned from question-answer pairs, uses a probabilistic CCG to build linguistically motivated logical-form meaning representations, and includes an ontology matching model that adapts the output logical forms for each target ontology. Experiments demonstrate state-of-the-art performance on two benchmark semantic parsing datasets, including a nine-point accuracy improvement on a recent Freebase QA corpus.

341 citations


Journal ArticleDOI
TL;DR: This paper summarises ENVO’s motivation, content, structure, adoption, and governance approach.
Abstract: As biological and biomedical research increasingly reference the environmental context of the biological entities under study, the need for formalisation and standardisation of environment descriptors is growing. The Environment Ontology (ENVO; http://www.environmentontology.org) is a community-led, open project which seeks to provide an ontology for specifying a wide range of environments relevant to multiple life science disciplines and, through an open participation model, to accommodate the terminological requirements of all those needing to annotate data using ontology classes. This paper summarises ENVO’s motivation, content, structure, adoption, and governance approach. The ontology is available from http://purl.obolibrary.org/obo/envo.owl - an OBO format version is also available by switching the file suffix to “obo”.

274 citations


Book ChapterDOI
21 Oct 2013
TL;DR: It is shown that if optimal string similarity metrics are chosen, those alone can produce alignments that are competitive with the state of the art in ontology alignment systems.
Abstract: Ontology alignment is an important part of enabling the semantic web to reach its full potential. The vast majority of ontology alignment systems use one or more string similarity metrics, but often the choice of which metrics to use is not given much attention. In this work we evaluate a wide range of such metrics, along with string pre-processing strategies such as removing stop words and considering synonyms, on different types of ontologies. We also present a set of guidelines on when to use which metric. We furthermore show that if optimal string similarity metrics are chosen, those alone can produce alignments that are competitive with the state of the art in ontology alignment systems. Finally, we examine the improvements possible to an existing ontology alignment system using an automated string metric selection strategy based upon the characteristics of the ontologies to be aligned.
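
As an illustration of the metric-only matching the paper evaluates, the following sketch (not the authors' code) compares class labels from two small ontologies using two generic string similarity measures; the label lists and the 0.8 threshold are illustrative assumptions.

```python
from difflib import SequenceMatcher

def edit_similarity(a: str, b: str) -> float:
    """Normalised edit similarity via difflib's ratio (0..1)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def token_jaccard(a: str, b: str) -> float:
    """Jaccard overlap of whitespace/underscore-separated tokens."""
    ta = set(a.lower().replace("_", " ").split())
    tb = set(b.lower().replace("_", " ").split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# Hypothetical class labels from two ontologies to be aligned.
source = ["Conference Paper", "Author", "ProgramCommittee"]
target = ["conference_paper", "paper_author", "program committee member"]

THRESHOLD = 0.8  # illustrative cut-off, not a recommendation from the paper
for s in source:
    for t in target:
        score = max(edit_similarity(s, t), token_jaccard(s, t))
        if score >= THRESHOLD:
            print(f"candidate mapping: {s!r} <-> {t!r} (score={score:.2f})")
```

In a metric-selection setting like the one described, the choice between such measures (and the threshold) would itself be driven by characteristics of the ontologies being aligned.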

183 citations


Journal ArticleDOI
TL;DR: The Ontology of units of Measure and related concepts (OM), an OWL ontology of the domain of quantities and units of measure, supports making quantitative research data more explicit, so that the data can be integrated, verified and reproduced.
Abstract: This paper describes the Ontology of units of Measure and related concepts (OM), an OWL ontology of the domain of quantities and units of measure. OM supports making quantitative research data more explicit, so that the data can be integrated, verified and reproduced. The various options for modeling the domain are discussed. For example, physical quantities can be modeled as classes, instances or properties. The design choices made are based on use cases from our own projects and general experience in the field. The use cases have been implemented as tools and web services. OM is compared with QUDT, another active effort for an OWL model in this domain. We note possibilities for integration of these efforts. We also discuss the role OWL plays in our approach.
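
A minimal rdflib sketch of the kind of statement such an ontology supports: a measured quantity with a numerical value and a unit. The om namespace IRI and the class/property names used here are assumptions for illustration only, not necessarily OM's actual identifiers; check the published ontology for the real terms.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

# Assumed namespace and term names, for illustration only.
OM = Namespace("http://www.ontology-of-units-of-measure.org/resource/om-2/")
EX = Namespace("http://example.org/data#")

g = Graph()
g.bind("om", OM)
g.bind("ex", EX)

# "The mass of sample 42 is 2.5 kilogram", expressed as a quantity that
# has a measure, which in turn has a numerical value and a unit.
g.add((EX.mass_sample42, RDF.type, OM.Mass))
g.add((EX.mass_sample42, OM.hasValue, EX.measure1))
g.add((EX.measure1, RDF.type, OM.Measure))
g.add((EX.measure1, OM.hasNumericalValue, Literal("2.5", datatype=XSD.double)))
g.add((EX.measure1, OM.hasUnit, OM.kilogram))

print(g.serialize(format="turtle"))
```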

166 citations



Book ChapterDOI
02 Sep 2013
TL;DR: This paper introduces an ontology design pattern for semantic trajectories and discusses the formalization of the pattern using the Web Ontology Language (OWL) and applies the pattern to two different scenarios, personal travel and wildlife monitoring.
Abstract: Trajectory data have been used in a variety of studies, including human behavior analysis, transportation management, and wildlife tracking. While each study area introduces a different perspective, they share the need to integrate positioning data with domain-specific information. Semantic annotations are necessary to improve discovery, reuse, and integration of trajectory data from different sources. Consequently, it would be beneficial if the common structure encountered in trajectory data could be annotated based on a shared vocabulary, abstracting from domain-specific aspects. Ontology design patterns are an increasingly popular approach to define such flexible and self-contained building blocks of annotations. They appear more suitable for the annotation of interdisciplinary, multi-thematic, and multi-perspective data than the use of foundational and domain ontologies alone. In this paper, we introduce such an ontology design pattern for semantic trajectories. It was developed as a community effort across multiple disciplines and in a data-driven fashion. We discuss the formalization of the pattern using the Web Ontology Language (OWL) and apply the pattern to two different scenarios, personal travel and wildlife monitoring.
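
As a rough illustration of the fix-based structure such a pattern captures, the sketch below builds a two-fix trajectory with rdflib. The namespace and the property names (hasFix, atTime, hasLatitude, hasLongitude) are hypothetical stand-ins, not the published pattern's IRIs.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

# Hypothetical vocabulary standing in for a semantic-trajectory pattern.
TRJ = Namespace("http://example.org/trajectory#")
EX = Namespace("http://example.org/data#")

g = Graph()
g.bind("trj", TRJ)
g.bind("ex", EX)

g.add((EX.trip1, RDF.type, TRJ.Trajectory))
for i, (time, lat, lon) in enumerate(
    [("2013-09-02T08:00:00", 52.37, 4.89),
     ("2013-09-02T08:10:00", 52.38, 4.91)], start=1):
    fix = EX[f"trip1_fix{i}"]
    g.add((EX.trip1, TRJ.hasFix, fix))
    g.add((fix, RDF.type, TRJ.Fix))
    g.add((fix, TRJ.atTime, Literal(time, datatype=XSD.dateTime)))
    g.add((fix, TRJ.hasLatitude, Literal(lat, datatype=XSD.decimal)))
    g.add((fix, TRJ.hasLongitude, Literal(lon, datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```

Domain-specific information (a wildlife tag ID, a travel purpose) would then be attached to the trajectory or its fixes without changing this shared core.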

116 citations


Proceedings Article
03 Aug 2013
TL;DR: This paper proposes two new families of inconsistency-tolerant semantics which approximate the CQA semantics from above and from below and converge to it in the limit, and shows a general tractability result for all known first-order rewritable ontology languages.
Abstract: A robust system for ontology-based data access should provide meaningful answers to queries even when the data conflicts with the ontology. This can be accomplished by adopting an inconsistency-tolerant semantics, with the consistent query answering (CQA) semantics being the most prominent example. Unfortunately, query answering under the CQA semantics has been shown to be computationally intractable, even when extremely simple ontology languages are considered. In this paper, we address this problem by proposing two new families of inconsistency-tolerant semantics which approximate the CQA semantics from above and from below and converge to it in the limit. We study the data complexity of conjunctive query answering under these new semantics, and show a general tractability result for all known first-order rewritable ontology languages. We also analyze the combined complexity of query answering for ontology languages of the DL-Lite family.
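
The following toy sketch (not the paper's algorithms or semantics definitions) illustrates the repair-based intuition behind CQA-style query answering: a fact counts as a cautious answer only if it holds in every maximal consistent subset (repair) of the data. The facts and the conflict relation are invented, and the brute-force enumeration is only feasible for tiny examples.

```python
from itertools import combinations

# Toy ABox: ground facts, plus pairs of facts that the ontology forbids
# from holding together (e.g. membership in disjoint classes). Invented.
facts = {"Prof(alice)", "Student(alice)", "Teaches(alice,db)"}
conflicts = {frozenset({"Prof(alice)", "Student(alice)"})}

def consistent(subset):
    return not any(c <= subset for c in conflicts)

def repairs(facts):
    """All maximal consistent subsets of the facts (brute force)."""
    found = []
    for r in range(len(facts), -1, -1):
        for combo in combinations(sorted(facts), r):
            s = set(combo)
            if consistent(s) and not any(s < bigger for bigger in found):
                found.append(s)
    return found

reps = repairs(facts)
cautious = set.intersection(*reps)  # holds in every repair (AR-style)
brave = set.union(*reps)            # holds in at least one repair

print("repairs:", reps)
print("cautious answers:", cautious)   # {'Teaches(alice,db)'}
print("brave answers:", brave)
```

The semantics proposed in the paper approximate this kind of reading from above and below so that query answering stays tractable for first-order rewritable ontology languages.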

107 citations


Journal ArticleDOI
TL;DR: This paper defines an ontology alignment process based on a memetic algorithm that efficiently aggregates similarity measures without using a priori knowledge about the ontologies under alignment, and shows that it yields high alignment quality with respect to the top performers of well-known Ontology Alignment Evaluation Initiative campaigns.

81 citations


Journal ArticleDOI
TL;DR: A novel approach, COnto-Diff, is proposed to determine an expressive and invertible diff evolution mapping between given versions of an ontology, and it is shown how the diff results can be used for version management and annotation migration in collaborative curation.

80 citations


Journal ArticleDOI
TL;DR: The ontology is presented, together with how it is used for the personalization of user interfaces when developing transportation interactive systems by model-driven engineering.
Abstract: Ontologies have been largely exploited in many domains and studies. In this paper, we present a new application of a domain ontology for generating personalized user interfaces for transportation interactive systems. The concepts, relationships and axioms of the transportation ontology are exploited during the semi-automatic generation of personalized user interfaces. Personalization deals with the capacity of adaptation of a user interface, reflecting what is known about the user and the application domain. It can be performed on the presentation of the interface container (e.g., layout, colors, sizes) and on the content provided in its input/output (e.g., data, information, documents). In this paper, the transportation ontology is used to provide the content personalization. This paper presents the ontology and how it is used for the personalization of user interfaces when developing transportation interactive systems by model-driven engineering.

73 citations


Journal ArticleDOI
TL;DR: It is argued that IB can benefit significantly from the integration solutions enabled by Semantic Web (SW) technologies, and insight is provided into how SW technologies can be used to build open, standardized and interoperable solutions for interdisciplinary integration on a global basis.
Abstract: Integrative Biology (IB) uses experimental or computational quantitative technologies to characterize biological systems at the molecular, cellular, tissue and population levels. IB typically involves the integration of data, knowledge and capabilities across disciplinary boundaries in order to solve complex problems. We identify a series of bioinformatics problems posed by interdisciplinary integration: (i) data integration that interconnects structured data across related biomedical domains; (ii) ontology integration that brings jargons, terminologies and taxonomies from various disciplines into a unified network of ontologies; (iii) knowledge integration that integrates disparate knowledge elements from multiple sources; (iv) service integration that builds applications out of services provided by different vendors. We argue that IB can benefit significantly from the integration solutions enabled by Semantic Web (SW) technologies. The SW enables scientists to share content beyond the boundaries of applications and websites, resulting in a web of data that is meaningful and understandable to any computer. In this review, we provide insight into how SW technologies can be used to build open, standardized and interoperable solutions for interdisciplinary integration on a global basis. We present a rich set of case studies in systems biology, integrative neuroscience, bio-pharmaceutics and translational medicine to highlight the technical features and benefits of SW applications in IB.

Book ChapterDOI
26 May 2013
TL;DR: Based on well-established works in Software Engineering, the notion of ontology patterns in Ontology Engineering is revisited to introduce the idea of ontology pattern language as a way to organize related ontological patterns.
Abstract: Ontology design patterns have been pointed out as a promising approach for ontology engineering. The goal of this paper is twofold. Firstly, based on well-established works in Software Engineering, we revisit the notion of ontology patterns in Ontology Engineering to introduce the notion of ontology pattern language as a way to organize related ontology patterns. Secondly, we present an overview of a software process ontology pattern language.

Journal ArticleDOI
TL;DR: A dynamic multi-strategy ontology alignment with automatic matcher selection and dynamic similarity aggregation is proposed, and a practical ontology-driven framework for building SIL is described.

Journal ArticleDOI
TL;DR: The goal of the Ontology Summit 2013 was to create guidance for ontology developers and users on how to evaluate ontologies.
Abstract: The goal of the Ontology Summit 2013 was to create guidance for ontology developers and users on how to evaluate ontologies. Over a period of four months a variety of approaches were discussed by participants, who represented a broad spectrum of ontology, software, and system developers and users. We explored how established best practices in systems engineering and in software engineering can be utilized in ontology development.

21 Oct 2013
TL;DR: The goal of this paper is to clarify concepts and the terminology used in Ontology Engineering to talk about the notion of ontology patterns taking into account already well-established notions of patterns in Software Engineering.
Abstract: Ontology patterns have been pointed out as a promising approach for ontology engineering. The goal of this paper is to clarify concepts and the terminology used in Ontology Engineering to talk about the notion of ontology patterns taking into account already well-established notions of patterns in Software Engineering.

Journal ArticleDOI
TL;DR: An ontology is a claim on/for knowledge that attempts to model what is known about a domain of discourse to build an abstract (yet extendable) philosophical and practical conceptualization of the essence of knowledge in a domain.
Abstract: An ontology is a claim on/for knowledge that attempts to model what is known about a domain of discourse. A domain ontology does not aim to exhaustively list all concepts in a domain, but rather to build an abstract (yet extendable) philosophical (yet practical) conceptualization of the essence of knowledge in a domain. At the core of any ontology is an ontological model—an architecture of how the world (in a domain) behaves (or becomes). The ontology categorizes construction knowledge across three main dimensions: concept, modality, and context. Concept encompasses five key terms: entity (further subdivided into generic and secondary), environmental element, abstract concept, attribute, and system (combinations of the previous four types). Modality is a means for generating a variety of types for each of the described concepts. Context allows for linking concepts in a variety of ways—creating different worlds.

Journal ArticleDOI
TL;DR: The evaluation of OQuaRE, performed by an international panel of experts in ontology engineering, is presented; the results include the positive and negative aspects of the current version of OQuaRE and the completeness and utility of the quality metrics included in OQuaRE.
Abstract: The increasing importance of ontologies has resulted in the development of a large number of ontologies in both coordinated and non-coordinated efforts. The number and complexity of such ontologies make it hard for ontology and tool developers to select which ontologies to use and reuse. So far, there is no mechanism for making such decisions in an informed manner. Consequently, methods for evaluating ontology quality are required. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies. OQuaRE has been applied to identify the strengths and weaknesses of different ontologies but, so far, this framework has not been evaluated itself. Therefore, in this paper we present the evaluation of OQuaRE, performed by an international panel of experts in ontology engineering. The results include the positive and negative aspects of the current version of OQuaRE, the completeness and utility of the quality metrics included in OQuaRE, and the comparison between the results of the manual evaluations done by the experts and the ones obtained by a software implementation of OQuaRE.
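
For flavour, the sketch below computes two simple structural indicators (named class count and maximum subclass depth) over an OWL file with rdflib. These are generic illustrations of the kind of measurement a quality framework relies on, not OQuaRE's actual metric definitions.

```python
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

def structural_indicators(path: str):
    """Generic structural indicators for an ontology (illustrative only)."""
    g = Graph()
    g.parse(path)  # rdflib guesses the format from the file extension

    classes = set(g.subjects(RDF.type, OWL.Class))
    # Only keep named classes as parents, ignoring anonymous restrictions.
    parents = {c: set(g.objects(c, RDFS.subClassOf)) & classes for c in classes}

    def depth(c, seen=frozenset()):
        if c in seen or not parents.get(c):
            return 0
        return 1 + max(depth(p, seen | {c}) for p in parents[c])

    max_depth = max((depth(c) for c in classes), default=0)
    return {"classes": len(classes), "max_subclass_depth": max_depth}

# Example (assumes a local OWL file exists at this hypothetical path):
# print(structural_indicators("myontology.owl"))
```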

Journal ArticleDOI
TL;DR: An ontology-based fuzzy video semantic content model that uses spatial/temporal relations in event and concept definitions is introduced and a metaontology definition provides a wide-domain applicable rule construction standard that allows the user to construct an ontology for a given domain.
Abstract: The recent increase in the use of video-based applications has revealed the need for extracting the content in videos. Raw data and low-level features alone are not sufficient to fulfill the user's needs; that is, a deeper understanding of the content at the semantic level is required. Currently, manual techniques, which are inefficient, subjective and costly in time and limit the querying capabilities, are being used to bridge the gap between low-level representative features and high-level semantic content. Here, we propose a semantic content extraction system that allows the user to query and retrieve objects, events, and concepts that are extracted automatically. We introduce an ontology-based fuzzy video semantic content model that uses spatial/temporal relations in event and concept definitions. This metaontology definition provides a wide-domain applicable rule construction standard that allows the user to construct an ontology for a given domain. In addition to domain ontologies, we use additional rule definitions (without using ontology) to lower spatial relation computation cost and to be able to define some complex situations more effectively. The proposed framework has been fully implemented and tested on three different domains. We have obtained satisfactory precision and recall rates for object, event and concept extraction.
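
A small illustration (not the paper's model) of the fuzzy flavour of such spatial relations: a graded "left of" degree computed from two bounding boxes, which could then feed rule-based event or concept definitions. The membership function is an invented example.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box of a detected object (pixel coordinates)."""
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    @property
    def cx(self) -> float:
        return (self.x_min + self.x_max) / 2.0

def left_of_degree(a: Box, b: Box, frame_width: float) -> float:
    """Fuzzy degree in [0, 1] to which object a is left of object b.

    Invented membership function: proportional to the normalised
    horizontal distance between the two box centres.
    """
    gap = b.cx - a.cx
    return max(0.0, min(1.0, gap / (frame_width / 2.0)))

person = Box(40, 80, 100, 220)
car = Box(400, 560, 120, 240)
print(left_of_degree(person, car, frame_width=640))  # 1.0 for this setup
```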

Proceedings ArticleDOI
17 Nov 2013
TL;DR: This work proposes a mechanism to support evaluating whether an ontology fulfils its corresponding CQs, particularly when these ontologies are defined in OWL (Web Ontology Language), under the Description Logic formalism.
Abstract: Competency Questions (CQs) play an important role in the ontology development lifecycle, as they represent the ontology requirements. Although the main methodologies describe and use CQs, the current practice of ontology engineering makes only superficial use of CQs. One of the main problems that hamper their proper use lies in the lack of tools that assist users in checking whether CQs are being fulfilled by the ontology being defined, particularly when these ontologies are defined in OWL (Web Ontology Language), under the Description Logic formalism. We propose a mechanism to support evaluating whether the ontology fulfils its corresponding CQs.
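
One common way to operationalise this idea, sketched below with rdflib (not necessarily the authors' mechanism), is to pair each competency question with a SPARQL query and check that the ontology provides the vocabulary needed to answer it. The ontology fragment and the CQ are invented.

```python
from rdflib import Graph

# A tiny invented ontology fragment, inline in Turtle.
ontology_ttl = """
@prefix ex: <http://example.org/uni#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:Professor rdfs:subClassOf ex:Staff .
ex:teaches rdfs:domain ex:Professor ; rdfs:range ex:Course .
"""

g = Graph()
g.parse(data=ontology_ttl, format="turtle")

# CQ paired with an ASK query that checks whether the ontology declares
# the vocabulary needed to answer it.
competency_questions = {
    "Who teaches which course?": """
        PREFIX ex: <http://example.org/uni#>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        ASK { ex:teaches rdfs:domain ?d ; rdfs:range ?r . }
    """,
}

for cq, ask_query in competency_questions.items():
    satisfied = bool(g.query(ask_query))
    print(f"{cq!r}: {'covered' if satisfied else 'NOT covered'}")
```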

Proceedings ArticleDOI
02 May 2013
TL;DR: It is demonstrated that microtask crowdsourcing can become a scalable and efficient component in ontology-engineering workflows and that turkers can achieve accuracy as high as 90% on verifying hierarchy statements from common-sense ontologies such as WordNet.
Abstract: Ontology evaluation has proven to be one of the more difficult problems in ontology engineering. Researchers proposed numerous methods to evaluate logical correctness of an ontology, its structure, or coverage of a domain represented by a corpus. However, evaluating whether or not ontology assertions correspond to the real world remains a manual and time-consuming task. In this paper, we explore the feasibility of using microtask crowdsourcing through Amazon Mechanical Turk to evaluate ontologies. Specifically, we look at the task of verifying the subclass--superclass hierarchy in ontologies. We demonstrate that the performance of Amazon Mechanical Turk workers (turkers) on this task is comparable to the performance of undergraduate students in a formal study. We explore the effects of the type of the ontology on the performance of turkers and demonstrate that turkers can achieve accuracy as high as 90% on verifying hierarchy statements from common-sense ontologies such as WordNet. Finally, we compare the performance of turkers to the performance of domain experts on verifying statements from an ontology in the biomedical domain. We report on lessons learned about designing ontology-evaluation experiments on Amazon Mechanical Turk. Our results demonstrate that microtask crowdsourcing can become a scalable and efficient component in ontology-engineering workflows.
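
A sketch of the kind of tooling such a workflow needs (not the authors' setup): extract subclass assertions from an ontology with rdflib and write them out as yes/no verification questions that a microtask platform could publish. The file names and the question wording are assumptions.

```python
import csv
from rdflib import Graph
from rdflib.namespace import RDFS

def label_of(g, node):
    """Prefer an rdfs:label, fall back to the IRI fragment."""
    lbl = g.value(node, RDFS.label)
    return str(lbl) if lbl else str(node).rsplit("#", 1)[-1].rsplit("/", 1)[-1]

def write_verification_tasks(ontology_path: str, out_csv: str):
    g = Graph()
    g.parse(ontology_path)
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["question"])
        for sub, sup in g.subject_objects(RDFS.subClassOf):
            writer.writerow(
                [f"Is every {label_of(g, sub)} a kind of {label_of(g, sup)}? (yes/no)"]
            )

# Example (assumes a local ontology file at this hypothetical path):
# write_verification_tasks("wordnet_fragment.owl", "hierarchy_tasks.csv")
```

Aggregating several workers' answers per question (e.g. majority vote) is then what makes the verdicts comparable to student or expert judgements in a study like this one.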

Journal ArticleDOI
TL;DR: This work defines the reasoning rules to classify related learning objects to enhance the powerful deductive reasoning capabilities of the system and proposes conflict detection and resolution techniques for both semantic and structural conflicts.
Abstract: There is an increasing demand for sharing learning resources between existing learning resource systems to support reusability, exchangeability, and adaptability. The learning resources need to be annotated with ontologies into learning objects that use different metadata standards. These ontologies have introduced the problems of semantic and structural heterogeneity. This research proposes a Semantic Ontology Mapping for Interoperability of Learning Resource Systems. To enable semantic ontology mapping, this research proposes conflict detection and resolution techniques for both semantic and structural conflicts. The Semantic Bridge Ontology has been proposed as a core component for generating mapping rules to reconcile terms defined in local ontologies into terms defined in the target common ontology. This work defines the reasoning rules to classify related learning objects to enhance the powerful deductive reasoning capabilities of the system. As a consequence, ontology-based learning object metadata are generated and used by the semantic query engine to facilitate user queries of learning objects across heterogeneous learning resource systems.
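
As a schematic illustration of what such mapping rules do (not the paper's Semantic Bridge Ontology itself), the sketch below reconciles records from two invented local metadata schemas into one common vocabulary via a per-source rule table.

```python
# Invented local metadata records from two learning-resource systems.
record_lom = {"general.title": "Intro to Ontologies", "educational.level": "beginner"}
record_dc = {"dc:title": "OWL Basics", "dc:audience": "novice"}

# Per-source mapping rules: local field -> (common term, value transform).
BRIDGE_RULES = {
    "lom": {
        "general.title": ("commonTitle", lambda v: v),
        "educational.level": ("difficulty", lambda v: v.lower()),
    },
    "dc": {
        "dc:title": ("commonTitle", lambda v: v),
        "dc:audience": ("difficulty", lambda v: {"novice": "beginner"}.get(v, v)),
    },
}

def to_common(record: dict, source: str) -> dict:
    """Apply the source's bridge rules to produce common-ontology metadata."""
    rules = BRIDGE_RULES[source]
    out = {}
    for field, value in record.items():
        if field in rules:
            common_term, transform = rules[field]
            out[common_term] = transform(value)
    return out

print(to_common(record_lom, "lom"))  # {'commonTitle': 'Intro to Ontologies', 'difficulty': 'beginner'}
print(to_common(record_dc, "dc"))    # {'commonTitle': 'OWL Basics', 'difficulty': 'beginner'}
```

In the approach described, the reconciled terms would be expressed against the target common ontology and queried through a semantic query engine rather than plain dictionaries.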

Journal ArticleDOI
TL;DR: A change history management framework for evolving ontologies, developed over the last couple of years, is introduced: a comprehensive and methodological framework for managing issues related to change management in evolving ontologies, such as versioning, provenance, consistency, recovery, change representation and visualization.
Abstract: Knowledge constantly grows in scientific discourse and is revised over time by different stakeholders, either collaboratively or through institutionalized efforts. The body of knowledge gets structured and refined as the Communities of Practice concerned with a field of knowledge develop a deeper understanding of the issues. As a result, the knowledge model moves from a loosely clustered terminology to a semi-formal or even formal ontology. Change history management in such evolving knowledge models is an important and challenging task. Different techniques have been introduced in the research literature to solve the issue. A comprehensive solution must address various multi-faceted issues, such as ontology recovery, visualization of change effects, and keeping the evolving ontology in a consistent state, all the more so because the semantics of changes and the evolution behavior of the ontology are hard to comprehend. This paper introduces a change history management framework for evolving ontologies, developed over the last couple of years. It is a comprehensive and methodological framework for managing issues related to change management in evolving ontologies, such as versioning, provenance, consistency, recovery, change representation and visualization. The change history log is central to our framework and is supported by a semantically rich and formally sound change representation scheme known as the change history ontology. Changes are captured and then stored in the log in conformance with the change history ontology. The log entries are later used to revert the ontology to a previous consistent state, and to visualize the effects of change on the ontology during its evolution. The framework is implemented as a plug-in for ontology repositories, such as Joseki, and ontology editors, such as Protege. The change detection accuracy of the proposed system, Change Tracer, has been compared with that of the Changes Tab and Version Log Generator in Protege, and the Change Detection and Change Capturing features of the NeOn Toolkit. The proposed system has shown better accuracy than the existing systems. A comprehensive evaluation of the methodology was designed to validate the recovery operations. The accuracy of the Roll-Back and Roll-Forward algorithms was evaluated using different versions of the SWETO, CIDOC CRM, OMV, and SWRC ontologies. Experimental results and comparison with other approaches show that the change management process of the proposed system is accurate, consistent, and comprehensive in its coverage.
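
A stripped-down sketch of the logging and recovery idea (far simpler than the paper's change history ontology and its Roll-Back/Roll-Forward algorithms): each change is recorded as an add or remove of a triple, and roll-back replays the inverse operations in reverse order. Everything here is invented for illustration.

```python
class OntologyChangeLog:
    """Minimal change log over a set of triples, with roll-back support."""

    def __init__(self):
        self.triples = set()
        self.log = []  # list of ("add" | "remove", triple) entries

    def add(self, triple):
        if triple not in self.triples:
            self.triples.add(triple)
            self.log.append(("add", triple))

    def remove(self, triple):
        if triple in self.triples:
            self.triples.remove(triple)
            self.log.append(("remove", triple))

    def roll_back(self, n_changes: int):
        """Undo the last n_changes logged operations, newest first."""
        n_changes = min(n_changes, len(self.log))
        if n_changes <= 0:
            return
        for op, triple in reversed(self.log[-n_changes:]):
            if op == "add":
                self.triples.discard(triple)
            else:
                self.triples.add(triple)
        del self.log[-n_changes:]

store = OntologyChangeLog()
store.add(("ex:Car", "rdfs:subClassOf", "ex:Vehicle"))
store.add(("ex:Bike", "rdfs:subClassOf", "ex:Vehicle"))
store.remove(("ex:Car", "rdfs:subClassOf", "ex:Vehicle"))
store.roll_back(1)  # restores the removed Car axiom
print(store.triples)
```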

Journal ArticleDOI
TL;DR: The functionality of the NCBO Web services and widgets is incorporated into semantically aware applications for ontology development and visualization, data annotation, and data integration.
Abstract: As new biomedical technologies are developed, the amount of publicly available biomedical data continues to increase. To help manage these vast and disparate data sources, researchers have turned to the Semantic Web. Specifically, ontologies are used in data annotation, natural language processing, information retrieval, clinical decision support, and data integration tasks. The development of software applications to perform these tasks requires the integration of Web services to incorporate the wide variety of ontologies used in the health care and life sciences. The National Center for Biomedical Ontology (NCBO), a National Center for Biomedical Computing created under the NIH Roadmap, developed BioPortal, which provides access to one of the largest repositories of biomedical ontologies. The NCBO Web services provide programmatic access to these ontologies and can be grouped into four categories: Ontology, Mapping, Annotation, and Data Access. The Ontology Web services provide access to ontologies, their metadata, ontology versions, downloads, navigation of the class hierarchy (parents, children, siblings) and details of each term. The Mapping Web services provide access to the millions of ontology mappings published in BioPortal. The NCBO Annotator Web service “tags” text automatically with terms from ontologies in BioPortal, and the NCBO Resource Index Web services provide access to an ontology-based index of public, online data resources. The NCBO Widgets package the Ontology Web services for use directly in Web sites. The functionality of the NCBO Web services and widgets is incorporated into semantically aware applications for ontology development and visualization, data annotation, and data integration. This overview describes these classes of applications, discusses a few examples of each type, and indicates which NCBO Web services are used by these applications.
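
As an example of consuming one of these services, the sketch below calls the BioPortal Annotator REST endpoint with the requests library. The data.bioontology.org URL, parameter names and response shape reflect the current public API as I understand it and may differ from the 2013 interface described in the paper; a personal API key from a BioPortal account is required.

```python
import requests

BIOPORTAL_ANNOTATOR = "http://data.bioontology.org/annotator"  # assumed endpoint
API_KEY = "YOUR_BIOPORTAL_API_KEY"  # placeholder; obtain your own key

def annotate(text: str):
    """Ask the NCBO Annotator to tag free text with ontology terms."""
    resp = requests.get(
        BIOPORTAL_ANNOTATOR, params={"text": text, "apikey": API_KEY}
    )
    resp.raise_for_status()
    return resp.json()

# Example usage (requires network access and a valid API key); the
# "annotatedClass" / "@id" fields are my assumption about the JSON layout:
# for hit in annotate("melanoma of the skin"):
#     print(hit["annotatedClass"]["@id"])
```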

Journal ArticleDOI
TL;DR: An overview of the GO-CCO, its overall design, and some recent extensions that make use of additional spatial information are provided.
Abstract: Background: The Gene Ontology (GO) (http://www.geneontology.org/) contains a set of terms for describing the activity and actions of gene products across all kingdoms of life. Each of these activities is executed in a location within a cell or in the vicinity of a cell. In order to capture this context, the GO includes a sub-ontology called the Cellular Component (CC) ontology (GO-CCO). The primary use of this ontology is for GO annotation, but it has also been used for phenotype annotation, and for the annotation of images. Another ontology with similar scope to the GO-CCO is the Subcellular Anatomy Ontology (SAO), part of the Neuroscience Information Framework Standard (NIFSTD) suite of ontologies. The SAO also covers cell components, but in the domain of neuroscience. Description: Recently, the GO-CCO was enriched in content and links to the Biological Process and Molecular Function branches of GO as well as to other ontologies. This was achieved in several ways. We carried out an amalgamation of SAO terms with GO-CCO ones; as a result, nearly 100 new neuroscience-related terms were added to the GO. The GO-CCO also contains relationships to GO Biological Process and Molecular Function terms, as well as connecting to external ontologies such as the Cell Ontology (CL). Terms representing protein complexes in the Protein Ontology (PRO) reference GO-CCO terms for their species-generic counterparts. GO-CCO terms can also be used to search a variety of databases. Conclusions: In this publication we provide an overview of the GO-CCO, its overall design, and some recent extensions that make use of additional spatial information. One of the most recent developments of the GO-CCO was the merging in of the SAO, resulting in a single unified ontology designed to serve the needs of GO annotators as well as the specific needs of the neuroscience community.

Journal ArticleDOI
TL;DR: This paper provides a solution that allows query answering in data integration systems under evolving ontologies without mapping redefinition, by rewriting queries among ontology versions and then forwarding them to the underlying data integration systems to be answered.

Journal ArticleDOI
TL;DR: A hybrid ontology approach is proposed to allow for the full exchange of both feature definition semantics and geometric construction data in computer-aided design systems.

Proceedings Article
01 Aug 2013
TL;DR: A Natural Language Generation system is developed that converts RDF data into natural language text based on an ontology and an associated ontology lexicon, and combines the use of such a lexicon with the choice of lexical items and syntactic structures based on statistical information extracted from a domain-specific corpus.
Abstract: The increasing amount of machine-readable data available in the context of the Semantic Web creates a need for methods that transform such data into human-comprehensible text. In this paper we develop and evaluate a Natural Language Generation (NLG) system that converts RDF data into natural language text based on an ontology and an associated ontology lexicon. While it follows a classical NLG pipeline, it diverges from most current NLG systems in that it exploits an ontology lexicon in order to capture context-specific lexicalisations of ontology concepts, and combines the use of such a lexicon with the choice of lexical items and syntactic structures based on statistical information extracted from a domain-specific corpus. We apply the developed approach to the cooking domain, providing both an ontology and an ontology lexicon in lemon format. Finally, we evaluate fluency and adequacy of the generated recipes with respect to two target audiences: cooking novices and advanced cooks.
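
A toy illustration of lexicon-driven verbalisation, much simpler than the statistical pipeline the paper describes: each ontology property is paired with a lexicon entry supplying a sentence template, and triples are realised by filling it in. The triples, labels and lexicon entries are invented.

```python
# Invented RDF triples from a cooking-style domain.
triples = [
    ("ex:PastaCarbonara", "ex:hasIngredient", "ex:Egg"),
    ("ex:PastaCarbonara", "ex:cookingTime", "25 minutes"),
]

# Invented ontology lexicon: property IRI -> sentence template.
lexicon = {
    "ex:hasIngredient": "{subject} contains {object}.",
    "ex:cookingTime": "{subject} takes {object} to prepare.",
}

# Invented labels for individuals.
labels = {"ex:PastaCarbonara": "Pasta carbonara", "ex:Egg": "egg"}

def verbalise(s, p, o):
    template = lexicon.get(p, "{subject} {predicate} {object}.")
    return template.format(
        subject=labels.get(s, s), predicate=p, object=labels.get(o, o)
    )

for t in triples:
    print(verbalise(*t))
# Pasta carbonara contains egg.
# Pasta carbonara takes 25 minutes to prepare.
```

The system in the paper goes further by choosing among alternative lexicalisations and syntactic structures using corpus statistics rather than a single fixed template per property.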

Patent
27 Nov 2013
TL;DR: A method and system for harmonizing and mediating ontologies to search across large data sources is presented; the method includes receiving a query targeting a first ontology and translating it into one or more translated queries, each translated query targeting a respective ontology different from the first ontology.
Abstract: A method and system for harmonizing and mediating ontologies to search across large data sources is disclosed. The method comprises receiving a query targeting a first ontology. The method further comprises translating the query into one or more translated queries, each translated query targeting a respective ontology different from the first ontology. The method further comprises, for each of the queries, issuing the query to a respective database organized according to the respective ontology of the query, and receiving a respective result set for the query, wherein the respective result set corresponds to the respective ontology of the query. The method further comprises translating the respective result set into a translated result set corresponding to the first ontology, aggregating the result sets into an aggregated result set corresponding to the first ontology, and returning the aggregated result set corresponding to the first ontology.
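
A schematic sketch of the mediation flow the claim describes (not the patented implementation): query terms expressed in a first ontology are rewritten via term mappings, sent to per-ontology stores, and the results are translated back and aggregated. All mappings and data below are invented.

```python
# Term mappings from the first ontology to each target ontology (invented).
MAPPINGS = {
    "onto_B": {"Person": "Human", "worksFor": "employedBy"},
    "onto_C": {"Person": "Agent", "worksFor": "hasEmployer"},
}

# Tiny per-ontology "databases" of (class, subject, property, object) facts.
DATABASES = {
    "onto_B": [("Human", "alice", "employedBy", "acme")],
    "onto_C": [("Agent", "bob", "hasEmployer", "globex")],
}

def translate_query(query: dict, target: str) -> dict:
    m = MAPPINGS[target]
    return {"class": m[query["class"]], "property": m[query["property"]]}

def run(query: dict, target: str):
    q = translate_query(query, target)
    return [
        row for row in DATABASES[target]
        if row[0] == q["class"] and row[2] == q["property"]
    ]

def mediate(query: dict):
    """Issue the translated query everywhere, map results back, aggregate."""
    aggregated = []
    for target in MAPPINGS:
        for cls, subj, prop, obj in run(query, target):
            # Translate the result back into the first ontology's terms.
            aggregated.append((query["class"], subj, query["property"], obj))
    return aggregated

print(mediate({"class": "Person", "property": "worksFor"}))
# [('Person', 'alice', 'worksFor', 'acme'), ('Person', 'bob', 'worksFor', 'globex')]
```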

Journal ArticleDOI
TL;DR: In this scheme, the domain ontology is first constructed using GRAONTO, the group's graph-based approach to automating domain ontology construction; query semantic extension and retrieval are then adopted for semantic-based knowledge retrieval.

Journal ArticleDOI
01 Jan 2013
TL;DR: The experimental results show that the new method can produce significantly better classification accuracy, and the high performance demonstrates that the presented ontological operations based on the ontology graph knowledge model are effectively developed.
Abstract: A new ontology learning model called domain ontology graph (DOG) is proposed in this paper. There are two key components in the DOG, i.e., the definition of the ontology graph and the ontology learning process. The former defines the ontology and knowledge conceptualization model from the domain-specific text documents; the latter offers the necessary method of semiautomatic domain ontology learning and generates the corresponding ontology graphs. Two kinds of ontological operations are also defined based on the proposed DOG, i.e., document ontology graph generation and ontology-graph-based text classification. The simulation studies focused upon Chinese text data are used to demonstrate the potential effectiveness of our proposed strategy. This is accomplished by generating DOGs to represent the domain knowledge and conducting the text classifications based on the generated ontology graph. The experimental results show that the new method can produce significantly better classification accuracy (e.g., with 92.3% in f-measure) compared with other methods (such as 86.8% in f-measure for the term-frequency-inverse-document-frequency approach). The high performance demonstrates that our presented ontological operations based on the ontology graph knowledge model are effectively developed.
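
A bare-bones sketch of scoring a document against domain ontology graphs by weighted term overlap; the paper's DOG construction and learning process are considerably more involved, and real Chinese text would first need proper word segmentation. The graphs, weights and document are invented.

```python
# Invented domain ontology "graphs", reduced here to weighted term sets.
DOMAIN_GRAPHS = {
    "finance": {"stock": 0.9, "market": 0.7, "bank": 0.8, "interest": 0.6},
    "sports":  {"match": 0.9, "team": 0.8, "score": 0.7, "league": 0.6},
}

def classify(document: str) -> str:
    """Assign the domain whose ontology graph best overlaps the document."""
    tokens = document.lower().split()
    scores = {
        domain: sum(weight for term, weight in graph.items() if term in tokens)
        for domain, graph in DOMAIN_GRAPHS.items()
    }
    return max(scores, key=scores.get)

print(classify("The bank raised its interest rate and the stock market fell"))
# finance
```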