
Showing papers on "Ontology (information science)" published in 2021


Journal ArticleDOI
TL;DR: A historical archive covering the past 15 years of GO data, with a consistent format and file structure for both the ontology and annotations, has been made available to support traceability and reproducibility.
Abstract: The Gene Ontology Consortium (GOC) provides the most comprehensive resource currently available for computable knowledge regarding the functions of genes and gene products. Here, we report the advances of the consortium over the past two years. The new GO-CAM annotation framework was notably improved, and we formalized the model with a computational schema to check and validate the rapidly increasing repository of 2,838 GO-CAMs. In addition, we describe the impacts of several collaborations to refine GO and report a 10% increase in the number of GO annotations, a 25% increase in annotated gene products, and over 9,400 new scientific articles annotated. As the project matures, we continue our efforts to review older annotations in light of newer findings and to maintain consistency with other ontologies. As a result, 20,000 annotations derived from experimental data were reviewed, corresponding to 2.5% of experimental GO annotations. The website (http://geneontology.org) was redesigned for quick access to documentation, downloads and tools. To maintain an accurate resource and support traceability and reproducibility, we have made available a historical archive covering the past 15 years of GO data with a consistent format and file structure for both the ontology and annotations.

1,988 citations


Journal ArticleDOI
TL;DR: A novel healthcare monitoring framework based on the cloud environment and a big data analytics engine is proposed to precisely store and analyze healthcare data, and to improve the classification accuracy.

190 citations


Journal ArticleDOI
TL;DR: A three-module framework named "Ontology-Based Privacy-Preserving" (OBPP) is proposed to address the heterogeneity issue while preserving the privacy information of IoT devices, and can be widely applied to smart cities.

135 citations


Proceedings ArticleDOI
01 Jun 2021
TL;DR: The dataset construction framework effectively merged heterogeneous sources from open domain semantic parsing and spoken dialogue systems by utilizing techniques including tree ontology annotation, question-answer pair to declarative sentence conversion, and predicate unification, all with minimum post-editing.
Abstract: We present DART, an open domain structured DAta Record to Text generation dataset with over 82k instances (DARTs). Data-to-text annotations can be a costly process, especially when dealing with tables which are the major source of structured data and contain nontrivial structures. To this end, we propose a procedure of extracting semantic triples from tables that encodes their structures by exploiting the semantic dependencies among table headers and the table title. Our dataset construction framework effectively merged heterogeneous sources from open domain semantic parsing and spoken dialogue systems by utilizing techniques including tree ontology annotation, question-answer pair to declarative sentence conversion, and predicate unification, all with minimum post-editing. We present systematic evaluation on DART as well as new state-of-the-art results on WebNLG 2017 to show that DART (1) poses new challenges to existing data-to-text datasets and (2) facilitates out-of-domain generalization. Our data and code can be found at https://github.com/Yale-LILY/dart.

87 citations
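
A minimal sketch of the triple-extraction idea described above: flatten a table into subject-predicate-object triples by treating the column headers as predicates. The tiny table is invented, and DART's tree ontology annotation over table headers is considerably richer than this.

```python
# Sketch only: convert table rows to (subject, predicate, object) triples,
# with headers as predicates. The data below is made up for illustration.

header = ["team", "city", "stadium"]
rows = [
    ["FC Example", "Springfield", "Example Arena"],
]

def table_to_triples(header, rows, subject_col=0):
    """Use one column as the subject and the remaining headers as predicates."""
    triples = []
    for row in rows:
        subject = row[subject_col]
        for col, value in zip(header, row):
            if col != header[subject_col]:
                triples.append((subject, col, value))
    return triples

for triple in table_to_triples(header, rows):
    print(triple)   # ('FC Example', 'city', 'Springfield'), ...
```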


Journal ArticleDOI
TL;DR: An overview of the methods that use ontologies to compute similarity and incorporate them into machine learning is provided, outlining how semantic similarity measures and ontology embeddings can exploit the background knowledge in ontologies and how ontologies can provide constraints that improve machine learning models.
Abstract: Ontologies have long been employed in the life sciences to formally represent and reason over domain knowledge, and they are employed in almost every major biological database. Recently, ontologies are increasingly being used to provide background knowledge in similarity-based analysis and machine learning models. The methods employed to combine ontologies and machine learning are still novel and actively being developed. We provide an overview of the methods that use ontologies to compute similarity and incorporate them in machine learning methods; in particular, we outline how semantic similarity measures and ontology embeddings can exploit the background knowledge in ontologies and how ontologies can provide constraints that improve machine learning models. The methods and experiments we describe are available as a set of executable notebooks, and we also provide a set of slides and additional resources at https://github.com/bio-ontology-research-group/machine-learning-with-ontologies.

81 citations
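
As a concrete instance of the semantic similarity measures surveyed above, here is a minimal sketch that scores two terms by the Jaccard overlap of their ancestor sets in a toy is-a hierarchy; the terms and hierarchy are invented, not taken from any real ontology.

```python
# Minimal sketch: Jaccard similarity over ontology ancestors, one of many
# semantic similarity measures. The toy hierarchy below is illustrative only.

TOY_IS_A = {                      # child -> list of parents (a small DAG)
    "apoptosis": ["cell death"],
    "necrosis": ["cell death"],
    "cell death": ["biological process"],
    "biological process": [],
}

def ancestors(term, hierarchy):
    """Return the set containing the term and all of its ancestors."""
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(hierarchy.get(t, []))
    return seen

def jaccard_similarity(a, b, hierarchy=TOY_IS_A):
    """Semantic similarity as overlap of ancestor sets."""
    anc_a, anc_b = ancestors(a, hierarchy), ancestors(b, hierarchy)
    return len(anc_a & anc_b) / len(anc_a | anc_b)

print(jaccard_similarity("apoptosis", "necrosis"))   # 0.5: shared 'cell death' lineage
```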


Journal ArticleDOI
TL;DR: The Database of Intrinsically Disordered Proteins (DisProt) is the major repository of manually curated annotations of intrinsically disordered proteins and regions from the literature; the version 9 update brings a restyled web interface, a refactored ontology, an improved curation process and around 30% content growth.
Abstract: The Database of Intrinsically Disordered Proteins (DisProt, URL: https://disprot.org) is the major repository of manually curated annotations of intrinsically disordered proteins and regions from the literature. We report here recent updates of DisProt version 9, including a restyled web interface, refactored Intrinsically Disordered Proteins Ontology (IDPO), improvements in the curation process and significant content growth of around 30%. Higher quality and consistency of annotations is provided by a newly implemented reviewing process and training of curators. The increased curation capacity is fostered by the integration of DisProt with APICURON, a dedicated resource for the proper attribution and recognition of biocuration efforts. Better interoperability is provided through the adoption of the Minimum Information About Disorder (MIADE) standard, an active collaboration with the Gene Ontology (GO) and Evidence and Conclusion Ontology (ECO) consortia and the support of the ELIXIR infrastructure.

74 citations


Proceedings ArticleDOI
TL;DR: In this paper, the authors evaluate existing cyber-threat-intelligence-relevant ontologies, sharing standards, and taxonomies for the purpose of measuring their high-level conceptual expressivity with regards to the who, what, why, where, when, and how elements of an adversarial attack in addition to courses of action and technical indicators.
Abstract: Cyber threat intelligence is the provision of evidence-based knowledge about existing or emerging threats. Benefits of threat intelligence include increased situational awareness and efficiency in security operations and improved prevention, detection, and response capabilities. Processing, analyzing, and correlating vast amounts of threat information to derive highly contextual intelligence that can be shared and consumed in meaningful times requires machine-understandable knowledge representation formats that embed the industry-required expressivity and are unambiguous. To a large extent, this is achieved by technologies like ontologies, interoperability schemas, and taxonomies. This research evaluates existing cyber-threat-intelligence-relevant ontologies, sharing standards, and taxonomies for the purpose of measuring their high-level conceptual expressivity with regard to the who, what, why, where, when, and how elements of an adversarial attack, in addition to courses of action and technical indicators. The results confirm that little emphasis has been given to developing a comprehensive cyber threat intelligence ontology, with existing efforts being incompletely designed, non-interoperable, ambiguous, and lacking semantic reasoning capability.

70 citations


Journal ArticleDOI
TL;DR: The study concludes that, in effect, this approach works similarly to a content-based recommender system: by taking advantage of the semantic approach, the system can also recommend items according to the user's interests.
Abstract: In our work, we have presented two widely used recommendation systems. We have presented a context-aware recommender system to filter the items associated with the user's interests, coupled with a context-based recommender system to prescribe those items. In this study, the context-aware recommender system perceives the user's location, time, and company. The context-based recommender system retrieves patterns from the World Wide Web based on the user's past interactions and provides future news recommendations. We have presented different techniques to support media recommendations for smartphones, to create a framework for context awareness, to filter E-learning content, and to deliver convenient news to the user. To achieve this goal, we have used content-based and collaborative filtering and a hybrid recommender system, and implemented the Web Ontology Language (OWL). We have also used the Resource Description Framework (RDF), Java, machine learning, semantic mapping rules, and natural ontology languages to suggest items related to the user's search. In our work, we have used an E-paper to provide users with the required news. After applying the semantic reasoning approach, we have concluded that, in effect, this approach works similarly to a content-based recommender system: by taking advantage of the semantic approach, we can also recommend items according to the user's interests. In a content-based recommender system, the system provides additional options or results that rely on the user's ratings, appraisals, and interests.

66 citations
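
To illustrate the context-aware filtering step described above, here is a minimal sketch that pre-filters candidate news items by the user's current location and time of day before any content-based ranking; the items and context tags are invented, and the paper's full system is far richer.

```python
# Sketch only: context-aware pre-filtering of candidate news items.
# Items, locations and time tags below are made up for illustration.

candidates = [
    {"title": "Metro line delayed",      "location": "city_a", "time": "morning"},
    {"title": "Evening concert roundup", "location": "city_a", "time": "evening"},
    {"title": "Harbour festival opens",  "location": "city_b", "time": "morning"},
]

def context_filter(items, location, time_of_day):
    """Keep only items whose context tags match the user's current context."""
    return [i for i in items
            if i["location"] == location and i["time"] == time_of_day]

print(context_filter(candidates, location="city_a", time_of_day="morning"))
# -> [{'title': 'Metro line delayed', ...}]; content-based ranking would follow.
```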


Journal ArticleDOI
TL;DR: A central-concepts-based ontology partitioning algorithm, which borrows ideas from social networks and the Firefly Algorithm, is used to divide the ontology into several disjoint segments; results show that the alignments obtained by the method significantly outperform state-of-the-art biomedical ontology matching techniques.

63 citations


Journal ArticleDOI
TL;DR: This work supports AI-related decision-making in additive manufacturability analysis and (re-)design for AM, and guides the application of machine learning to problems related to AM design rules.
Abstract: Additive Manufacturing (AM) is becoming data-intensive while increasingly generating newly available data. The availability of AM data provides Design for AM (DfAM) with a newfound opportunity to construct AM design rules with an improved understanding of AM's influence on part qualities. To seize this opportunity, this paper proposes a novel approach for AM design rule construction based on machine learning and knowledge graphs. First, this paper presents a framework that enables i) deploying machine learning for extracting knowledge on predictive additive manufacturability from data, ii) adopting ontology with knowledge graphs as a knowledge base for storing both a priori and newfound AM knowledge, and iii) reasoning with knowledge for deriving data-driven prescriptive AM design rules. Second, this paper presents a methodology that constructs knowledge on predictive additive manufacturability and prescriptive AM design rules. In the methodology, we formalize knowledge representations, extractions, and reasoning, which enhances automated and autonomous construction and improvement of AM design rules. The methodology then employs a Classification and Regression Tree algorithm on measurement data from the National Institute of Standards and Technology to construct a Laser Powder Bed Fusion-specific design rule for overhang features. This work supports AI-related decision-making in additive manufacturability analysis and (re-)design for AM, and guides the application of machine learning to problems related to AM design rules. This work is also meaningful as it provides sharable AM design rule knowledge to the AM community.

60 citations
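
The rule-construction step can be illustrated with a minimal sketch: fit a small classification tree on synthetic overhang measurements and read the learned splits as a candidate design rule. The features, thresholds and data are invented stand-ins for the NIST measurement data used in the paper.

```python
# Sketch only: learn a toy "overhang printability" rule with a decision tree.
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic measurements: [overhang angle in degrees, overhang length in mm]
X = [[20, 2], [25, 5], [30, 8], [45, 5], [55, 3], [60, 10], [70, 6], [80, 4]]
y = [0, 0, 0, 1, 1, 1, 1, 1]          # 0 = defective overhang, 1 = printable

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned splits can be read off as a candidate design rule, e.g.
# "overhangs above roughly this angle print without support".
print(export_text(tree, feature_names=["overhang_angle_deg", "overhang_length_mm"]))
```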


Journal ArticleDOI
TL;DR: The CBRPMO contributes to the industry by extending the application of ontologies in the bridge sector to cover the rehabilitation stage, enhancing the functions of conventional ontologies, and reducing information-searching time compared to manual searching, thereby improving constraint management by automating the information-searching step.

Journal ArticleDOI
TL;DR: A graph-based, context-aware requirement elicitation approach considering contextual information within the Smart PSS is proposed, which leverages pre-defined product, service, and condition ontologies together with the DeepWalk technique to formulate those concepts as nodes, and their relationships as edges, of the proposed requirement graph.
Abstract: The paradigm of Smart product-service systems (Smart PSS) has emerged recently owing to cutting-edge Information and Communication Technology (ICT) and artificial intelligence (AI) techniques. ...

Journal ArticleDOI
TL;DR: Reflecting a modern society flooded with information, YAMATO offers a sophisticated theory of informational objects (representations), and quality and quantity are carefully organized for the sake of greater interoperability of real-world data.
Abstract: Upper ontology plays critical roles in ontology development by giving developers a guideline for how to view the target domain. Although upper ontologies such as DOLCE, BFO, GFO, SUMO and CYC have already been developed and are extensively used, a careful examination of them reveals some room for improvement in a couple of respects. This paper discusses YAMATO: Yet Another More Advanced Top-level Ontology, which has been developed to cover three features, namely quality description, representation and process/event, in a better way than existing ontologies.

Journal ArticleDOI
TL;DR: The Human Disease Ontology (DO) database (www.disease-ontology.org) has significantly expanded its disease content and enhanced its userbase and website since the DO's 2018 Nucleic Acids Research DATABASE issue paper.
Abstract: The Human Disease Ontology (DO) (www.disease-ontology.org) database has significantly expanded the disease content and enhanced our userbase and website since the DO's 2018 Nucleic Acids Research DATABASE issue paper. Conservatively, based on available resource statistics, terms from the DO have been annotated to over 1.5 million biomedical data elements and citations, a 10× increase in the past 5 years. The DO, funded as a NHGRI Genomic Resource, plays a key role in disease knowledge organization, representation, and standardization, serving as a reference framework for multiscale biomedical data integration and analysis across thousands of clinical, biomedical and computational research projects and genomic resources around the world. This update reports on the addition of 1,793 new disease terms, a 14% increase in textual definitions and the integration of 22,137 new SubClassOf axioms defining disease-to-disease connections representing the DO's complex disease classification. The DO's updated website provides multifaceted etiology searching, enhanced documentation and educational resources.

Journal ArticleDOI
TL;DR: In this article, the authors proposed an up-and-coming sensor ontology integrating technique, which uses debate mechanism (DM) to extract the ontology alignment from various alignments determined by different matchers, and utilize support strength and disprove strength in the debating process to calculate its local factor.
Abstract: In order to enhance the communication between sensor networks in the Internet of Things (IoT), it is indispensable to establish semantic connections between sensor ontologies in this field. For this purpose, this paper proposes a sensor ontology integration technique which uses a debate mechanism (DM) to extract the sensor ontology alignment from various alignments determined by different matchers. In particular, we use the correctness factor of each matcher to determine a correspondence's global factor, and utilize the support strength and disprove strength in the debating process to calculate its local factor. By comprehensively considering these two factors, the judgment factor of an entity mapping can be obtained, which is further applied in extracting the final sensor ontology alignment. This work makes use of the bibliographic track provided by the Ontology Alignment Evaluation Initiative (OAEI) and five real sensor ontologies in the experiment to assess the performance of our method. Comparisons with state-of-the-art ontology matching techniques show the robustness and effectiveness of our approach.
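
A minimal sketch of the alignment-fusion idea: combine correspondences proposed by several matchers, weighting support and disproval by each matcher's correctness. The matcher names, weights and correspondences are invented, and the paper's debate mechanism computes its global and local factors differently.

```python
# Sketch only: fuse correspondences proposed by several matchers.
# Weights, matchers and entity pairs below are illustrative.

matcher_correctness = {"string": 0.7, "structure": 0.6, "wordnet": 0.8}

# Each matcher proposes (source_entity, target_entity) correspondences.
proposals = {
    "string":    {("Sensor", "SensingDevice"), ("Platform", "System")},
    "structure": {("Sensor", "SensingDevice")},
    "wordnet":   {("Sensor", "SensingDevice"), ("Platform", "Deployment")},
}

def judge(correspondence):
    """Combine support from agreeing matchers and disproval from the others."""
    support = sum(w for m, w in matcher_correctness.items()
                  if correspondence in proposals[m])
    disprove = sum(w for m, w in matcher_correctness.items()
                   if correspondence not in proposals[m])
    return support - disprove

candidates = set().union(*proposals.values())
alignment = [c for c in candidates if judge(c) > 0]
print(alignment)   # keeps ('Sensor', 'SensingDevice'); drops contested pairs
```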

Journal ArticleDOI
TL;DR: A new fuzzy-logic-based product recommendation system dynamically predicts the products most relevant to customers in online shopping according to their current interests, and uses ontology alignment to make more accurate decisions and to predict dynamically based on the search context.

Journal ArticleDOI
TL;DR: IDO provides a simple recipe for building new pathogen-specific ontologies in a way that allows data about novel diseases to be easily compared, along multiple dimensions, with data represented by existing disease ontologies.
Abstract: BACKGROUND: Effective response to public health emergencies, such as we are now experiencing with COVID-19, requires data sharing across multiple disciplines and data systems. Ontologies offer a powerful data sharing tool, and this holds especially for those ontologies built on the design principles of the Open Biomedical Ontologies Foundry. These principles are exemplified by the Infectious Disease Ontology (IDO), a suite of interoperable ontology modules aiming to provide coverage of all aspects of the infectious disease domain. At its center is IDO Core, a disease- and pathogen-neutral ontology covering just those types of entities and relations that are relevant to infectious diseases generally. IDO Core is extended by disease and pathogen-specific ontology modules. RESULTS: To assist the integration and analysis of COVID-19 data, and viral infectious disease data more generally, we have recently developed three new IDO extensions: IDO Virus (VIDO); the Coronavirus Infectious Disease Ontology (CIDO); and an extension of CIDO focusing on COVID-19 (IDO-COVID-19). Reflecting the fact that viruses lack cellular parts, we have introduced into IDO Core the term acellular structure to cover viruses and other acellular entities studied by virologists. We now distinguish between infectious agents - organisms with an infectious disposition - and infectious structures - acellular structures with an infectious disposition. This in turn has led to various updates and refinements of IDO Core's content. We believe that our work on VIDO, CIDO, and IDO-COVID-19 can serve as a model for yielding greater conformance with ontology building best practices. CONCLUSIONS: IDO provides a simple recipe for building new pathogen-specific ontologies in a way that allows data about novel diseases to be easily compared, along multiple dimensions, with data represented by existing disease ontologies. The IDO strategy, moreover, supports ontology coordination, providing a powerful method of data integration and sharing that allows physicians, researchers, and public health organizations to respond rapidly and efficiently to current and future public health crises.

Journal ArticleDOI
TL;DR: This review paper starts by analyzing the nature of the semantic web and its requirements, and then discusses the domains where semantic web technologies play a vital role and those that drive the growth of the semantic web.
Abstract: Semantic web and its technologies have been eyed in many fields. They have the capacity to organize and link data over the web in a consistent and coherent way. Semantic web technologies consist of...

Posted ContentDOI
26 Oct 2021-Database
TL;DR: The Open Biological and Biomedical Ontologies (OBO) Foundry as discussed by the authors was created to facilitate the development, harmonization, application and sharing of ontologies, guided by a set of overarching principles.
Abstract: Biological ontologies are used to organize, curate and interpret the vast quantities of data arising from biological experiments. While this works well when using a single ontology, integrating multiple ontologies can be problematic, as they are developed independently, which can lead to incompatibilities. The Open Biological and Biomedical Ontologies (OBO) Foundry was created to address this by facilitating the development, harmonization, application and sharing of ontologies, guided by a set of overarching principles. One challenge in reaching these goals was that the OBO principles were not originally encoded in a precise fashion, and interpretation was subjective. Here, we show how we have addressed this by formally encoding the OBO principles as operational rules and implementing a suite of automated validation checks and a dashboard for objectively evaluating each ontology's compliance with each principle. This entailed a substantial effort to curate metadata across all ontologies and to coordinate with individual stakeholders. We have applied these checks across the full OBO suite of ontologies, revealing areas where individual ontologies require changes to conform to our principles. Our work demonstrates how a sizable, federated community can be organized and evaluated on objective criteria that help improve overall quality and interoperability, which is vital for the sustenance of the OBO project and towards the overall goals of making data Findable, Accessible, Interoperable, and Reusable (FAIR). Database URL http://obofoundry.org/.
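
The idea of encoding principles as operational, automatable checks can be sketched as follows; the two rules and the registry metadata below are simplified stand-ins, and the real OBO dashboard implements many more checks against the full registry.

```python
# Sketch only: principles as executable checks over ontology registry metadata.
# Field names and rules are simplified placeholders, not the OBO implementation.

def check_license(meta):
    """'Open' principle stand-in: a resolvable license URL must be declared."""
    return "license" in meta and meta["license"].startswith("http")

def check_contact(meta):
    """'Responsiveness' stand-in: a contact email must be declared."""
    return bool(meta.get("contact", {}).get("email"))

CHECKS = {"open (license)": check_license, "responsiveness (contact)": check_contact}

registry = {
    "toy_ontology": {"license": "https://creativecommons.org/licenses/by/4.0/",
                     "contact": {"email": "maintainer@example.org"}},
}

for ontology_id, meta in registry.items():
    report = {principle: check(meta) for principle, check in CHECKS.items()}
    print(ontology_id, report)   # one dashboard row: pass/fail per principle
```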

Journal ArticleDOI
TL;DR: A return to roots is proposed by defining a Model-Driven Engineering (MDE) methodology that supports automation of BDA based on model specification: customers declare the requirements to be achieved by an abstract Big Data platform, and smart engines deploy the Big Data pipeline carrying out the analytics on a specific instance of that platform.
Abstract: The Big Data revolution promises to build a data-driven ecosystem where better decisions are supported by enhanced analytics and data management. However, major hurdles still need to be overcome on the road that leads to commoditization and wide adoption of Big Data Analytics (BDA). Big Data complexity is the first factor hampering the full potential of BDA. The opacity and variety of Big Data technologies and computations, in fact, make BDA a failure-prone and resource-intensive process, which requires a trial-and-error approach. This problem is exacerbated by the fact that current solutions to Big Data application development take a bottom-up approach, where the latest technology release drives application development. Selection of the best Big Data platform, as well as of the best pipeline to execute analytics, then represents a deal breaker. In this paper, we propose a return to roots by defining a Model-Driven Engineering (MDE) methodology that supports automation of BDA based on model specification. Our approach lets customers declare requirements to be achieved by an abstract Big Data platform, while smart engines deploy the Big Data pipeline carrying out the analytics on a specific instance of such a platform. Driven by customers' requirements, our methodology is based on an OWL-S ontology of Big Data services and on a compiler transforming OWL-S service compositions into workflows that can be directly executed on the selected platform. The proposal is experimentally evaluated in a real-world scenario focusing on the threat detection system of SAP.
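
A minimal sketch of the declarative-requirements-to-pipeline idea: chain services backwards from the requested output. The tiny service catalogue is invented; the paper uses an OWL-S ontology of Big Data services and a real compiler targeting a specific platform.

```python
# Sketch only: compile a requested output into an ordered service pipeline.
# Service names and the catalogue are invented for illustration.

catalogue = {
    "ingest":  {"provides": "raw_events"},
    "cleanse": {"requires": "raw_events", "provides": "clean_events"},
    "detect":  {"requires": "clean_events", "provides": "threat_alerts"},
}

def compile_pipeline(goal, catalogue):
    """Chain services backwards from the requested output to its sources."""
    pipeline, needed = [], goal
    while needed is not None:
        service = next(name for name, s in catalogue.items()
                       if s.get("provides") == needed)
        pipeline.append(service)
        needed = catalogue[service].get("requires")
    return list(reversed(pipeline))

print(compile_pipeline("threat_alerts", catalogue))   # ['ingest', 'cleanse', 'detect']
```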

Proceedings ArticleDOI
19 Apr 2021
TL;DR: Zhang et al. explore richer and more competitive prior knowledge to model the inter-class relationship for zero-shot learning via ontology-based knowledge representation and semantic embedding.
Abstract: Zero-shot Learning (ZSL), which aims to predict for classes that have never appeared in the training data, has attracted strong research interest. The key to implementing ZSL is to leverage prior knowledge of classes that builds the semantic relationship between classes and enables the transfer of learned models (e.g., features) from training classes (i.e., seen classes) to unseen classes. However, the priors adopted by existing methods are relatively limited, with incomplete semantics. In this paper, we explore richer and more competitive prior knowledge to model the inter-class relationship for ZSL via ontology-based knowledge representation and semantic embedding. Meanwhile, to address the data imbalance between seen classes and unseen classes, we develop a generative ZSL framework with Generative Adversarial Networks (GANs). Our main findings include: (i) an ontology-enhanced ZSL framework that can be applied to different domains, such as image classification (IMGC) and knowledge graph completion (KGC); (ii) a comprehensive evaluation with multiple zero-shot datasets from different domains, where our method often achieves better performance than the state-of-the-art models. In particular, on four representative ZSL baselines of IMGC, the ontology-based class semantics outperform previous priors (e.g., the word embeddings of classes) by an average of 12.4 accuracy points in the standard ZSL across two example datasets (see Figure 4).
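
The core zero-shot prediction step can be sketched as scoring an input feature against class embeddings so that classes unseen during training remain predictable; the toy vectors below stand in for ontology-derived class semantics, and the paper's GAN-based feature synthesis is omitted.

```python
# Sketch only: nearest-class-embedding prediction, with invented 3-d vectors
# standing in for ontology-derived class semantics.
import numpy as np

class_embeddings = {
    "zebra": np.array([1.0, 0.9, 0.1]),   # unseen at training time
    "horse": np.array([1.0, 0.1, 0.1]),
    "tiger": np.array([0.1, 0.9, 0.9]),
}

def predict(image_feature):
    """Return the class whose embedding is closest by cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(class_embeddings, key=lambda c: cos(image_feature, class_embeddings[c]))

# A projected feature resembling a striped equine maps to the unseen class.
print(predict(np.array([0.9, 0.8, 0.2])))   # 'zebra'
```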

Journal ArticleDOI
TL;DR: In this paper, a random walk and word embedding based ontology embedding method named OWL2Vec* is proposed, which encodes the semantics of an OWL ontology by taking into account its graph structure, lexical information and logical constructors.
Abstract: Semantic embedding of knowledge graphs has been widely studied and used for prediction and statistical analysis tasks across various domains such as Natural Language Processing and the Semantic Web. However, less attention has been paid to developing robust methods for embedding OWL (Web Ontology Language) ontologies, which contain richer semantic information than plain knowledge graphs, and have been widely adopted in domains such as bioinformatics. In this paper, we propose a random walk and word embedding based ontology embedding method named OWL2Vec*, which encodes the semantics of an OWL ontology by taking into account its graph structure, lexical information and logical constructors. Our empirical evaluation with three real world datasets suggests that OWL2Vec* benefits from these three different aspects of an ontology in class membership prediction and class subsumption prediction tasks. Furthermore, OWL2Vec* often significantly outperforms the state-of-the-art methods in our experiments.
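
A minimal sketch of the random-walk-plus-word-embedding idea behind OWL2Vec*: generate walks over a toy graph projection of an ontology and train a skip-gram model on them (here with gensim); the graph is invented, and the real method additionally uses lexical information and logical constructors.

```python
# Sketch only: random walks over a toy ontology graph fed to a skip-gram model.
import random
from gensim.models import Word2Vec

graph = {                               # toy graph projection of an ontology
    "Virus": ["Acellular_Structure"],
    "Acellular_Structure": ["Material_Entity"],
    "Bacterium": ["Organism"],
    "Organism": ["Material_Entity"],
    "Material_Entity": [],
}

def random_walk(start, length=4):
    walk = [start]
    while len(walk) < length and graph[walk[-1]]:
        walk.append(random.choice(graph[walk[-1]]))
    return walk

walks = [random_walk(node) for node in graph for _ in range(20)]
model = Word2Vec(sentences=walks, vector_size=16, window=2, min_count=1, sg=1)
print(model.wv.most_similar("Virus", topn=2))   # nearby entities in embedding space
```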

Journal ArticleDOI
TL;DR: In this article, the authors propose an ontology-based framework that provides up-to-date information on COVID-19 patients in the vicinity, thereby providing identifiable data for remote monitoring of locality cohorts for early detection of COVID-19.

Journal ArticleDOI
01 Nov 2021
TL;DR: This work proposes a semantic personalized recommendation system (SPRS) that recommends personalized sets of videos to users depending on their previous activity on the site, exploiting a domain ontology and mapping user item content to the domain concepts.
Abstract: The past decade has seen significant growth in the number of personalized recommendation applications on the World Wide Web. Such applications aim to assist users in retrieving relevant items from a large repository of content by providing items or services of likely interest based on examined evidence of the users' preferences and desires. However, this vision is complicated by the huge amount of media-rich information available on the web. Most of the systems formulated so far use the metadata linked with the digital content, but such systems fail to generate significant recommendation results. In these circumstances, a semantic personalized recommendation system (SPRS) plays an important role in bridging the semantic gap between high-level semantic content and low-level media features. The proposed system recommends personalized sets of videos to users depending on their previous activity on the site, and exploits a domain ontology to map user item content to the domain concepts. To evaluate the performance of the framework, item prediction is executed using the proposed framework, and performance is determined by comparing the predicted and actual ratings of the items in terms of predictive accuracy metrics, precision, and recall.
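
The concept-mapping idea can be sketched as follows: build a user profile as the union of concepts from watched videos and rank unseen videos by concept overlap. The video titles and concepts are invented, and the paper's system also uses ratings and a full domain ontology.

```python
# Sketch only: concept-overlap ranking of unseen videos against a user profile.
# Titles and concept tags below are made up for illustration.

video_concepts = {
    "intro_to_neural_nets": {"machine_learning", "neural_network"},
    "cnn_for_images":       {"machine_learning", "computer_vision"},
    "baking_sourdough":     {"cooking", "bread"},
}

watched = ["intro_to_neural_nets"]

# The user profile is the union of concepts from previously watched videos.
profile = set().union(*(video_concepts[v] for v in watched))

def score(video):
    """Jaccard overlap between the profile and a video's concepts."""
    concepts = video_concepts[video]
    return len(profile & concepts) / len(profile | concepts)

unseen = [v for v in video_concepts if v not in watched]
print(sorted(unseen, key=score, reverse=True))   # ['cnn_for_images', 'baking_sourdough']
```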

Posted ContentDOI
02 Jun 2021-bioRxiv
TL;DR: The Open Biological and Biomedical Ontologies (OBO) Foundry as discussed by the authors was created to facilitate the development, harmonization, application, and sharing of ontologies, guided by a set of overarching principles.
Abstract: Biological ontologies are used to organize, curate, and interpret the vast quantities of data arising from biological experiments. While this works well when using a single ontology, integrating multiple ontologies can be problematic, as they are developed independently, which can lead to incompatibilities. The Open Biological and Biomedical Ontologies (OBO) Foundry was created to address this by facilitating the development, harmonization, application, and sharing of ontologies, guided by a set of overarching principles. One challenge in reaching these goals was that the OBO principles were not originally encoded in a precise fashion, and interpretation was subjective. Here we show how we have addressed this by formally encoding the OBO principles as operational rules and implementing a suite of automated validation checks and a dashboard for objectively evaluating each ontology's compliance with each principle. This entailed a substantial effort to curate metadata across all ontologies and to coordinate with individual stakeholders. We have applied these checks across the full OBO suite of ontologies, revealing areas where individual ontologies require changes to conform to our principles. Our work demonstrates how a sizable federated community can be organized and evaluated on objective criteria that help improve overall quality and interoperability, which is vital for the sustenance of the OBO project and towards the overall goals of making data FAIR.

Journal ArticleDOI
27 Apr 2021
TL;DR: The Open Energy Ontology (OEO) developed for the domain of energy systems analysis is presented and the advantages of using an ontology such as the OEO are demonstrated with three use cases: data representation, data annotation and interface homogenisation.
Abstract: Heterogeneous data, different definitions and incompatible models are a huge problem in many domains, with no exception for the field of energy systems analysis. Hence, it is hard to re-use results, compare model results or couple models at all. Ontologies provide a precisely defined vocabulary to build a common and shared conceptualisation of the energy domain. Here, we present the Open Energy Ontology (OEO) developed for the domain of energy systems analysis. Using the OEO provides several benefits for the community. First, it enables consistent annotation of large amounts of data from various research projects. One example is the Open Energy Platform (OEP). Adding such annotations makes data semantically searchable, exchangeable, re-usable and interoperable. Second, computational model coupling becomes much easier. The advantages of using an ontology such as the OEO are demonstrated with three use cases: data representation, data annotation and interface homogenisation. We also describe how the ontology can be used for linked open data (LOD).
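
A minimal sketch of ontology-based data annotation with RDF, roughly in the spirit of the data annotation use case above: link a dataset column to an ontology concept. The dataset URI and the ontology term IRI are placeholders, not real OEO identifiers, and the actual OEP annotation workflow differs.

```python
# Sketch only: annotate a data column with an ontology term IRI using rdflib.
# Both URIs below are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace, URIRef

DCT = Namespace("http://purl.org/dc/terms/")
g = Graph()

column = URIRef("https://example.org/dataset/wind_scenario#installed_capacity")
oeo_term = URIRef("https://example.org/oeo/PLACEHOLDER_capacity_term")  # hypothetical IRI

g.add((column, DCT.subject, oeo_term))                 # link column to ontology concept
g.add((column, DCT.description, Literal("Installed wind capacity in MW")))

print(g.serialize(format="turtle"))   # rdflib 6+ returns the Turtle text as a string
```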

Journal ArticleDOI
TL;DR: In this paper, an extended compact genetic algorithm-based ontology entity matching technique (ECGA-OEM) is proposed, which uses both the compact encoding mechanism and linkage learning approach to match the ontologies efficiently.
Abstract: Data heterogeneity is the obstacle for the resource sharing on Semantic Web (SW), and ontology is regarded as a solution to this problem. However, since different ontologies are constructed and maintained independently, there also exists the heterogeneity problem between ontologies. Ontology matching is able to identify the semantic correspondences of entities in different ontologies, which is an effective method to address the ontology heterogeneity problem. Due to huge memory consumption and long runtime, the performance of the existing ontology matching techniques requires further improvement. In this work, an extended compact genetic algorithm-based ontology entity matching technique (ECGA-OEM) is proposed, which uses both the compact encoding mechanism and linkage learning approach to match the ontologies efficiently. Compact encoding mechanism does not need to store and maintain the whole population in the memory during the evolving process, and the utilization of linkage learning protects the chromosome’s building blocks, which is able to reduce the algorithm’s running time and ensure the alignment’s quality. In the experiment, ECGA-OEM is compared with the participants of ontology alignment evaluation initiative (OAEI) and the state-of-the-art ontology matching techniques, and the experimental results show that ECGA-OEM is both effective and efficient.
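
To illustrate the compact encoding mechanism mentioned above, here is a minimal compact genetic algorithm sketch in which a single probability vector replaces the whole population; the bitstring simply selects candidate correspondences against a toy reference, whereas ECGA-OEM's fitness function and linkage learning are considerably more involved.

```python
# Sketch only: a compact genetic algorithm where a probability vector stands
# in for the population. The "reference" selection pattern is invented.
import random

reference = [1, 0, 1, 1, 0]             # toy ground truth: keep/drop each candidate

def fitness(bits):
    return sum(b == r for b, r in zip(bits, reference))

def sample(p):
    return [1 if random.random() < pi else 0 for pi in p]

p = [0.5] * len(reference)              # probability of selecting each correspondence
step = 0.1
for _ in range(200):
    a, b = sample(p), sample(p)
    winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
    for i in range(len(p)):
        if winner[i] != loser[i]:       # shift probabilities toward the winner's bits
            p[i] = min(1.0, max(0.0, p[i] + step * (1 if winner[i] else -1)))

print([round(pi, 2) for pi in p])       # converges toward the reference pattern
```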

Journal ArticleDOI
TL;DR: The multi-criteria approach adopted by this study proves that the decision support system (DSS) can make well-adapted decisions, handle the dynamic nature of a production system, and help manufacturers move closer to Zero Defect Manufacturing.

Journal ArticleDOI
TL;DR: In this paper, an ontology-based model for preventing and detecting SQLIA using ontology (SQLIO) is proposed, implementing ontology creation and a prediction-rule-based vulnerability model; the proposed methodology prevents and detects SQLIA web vulnerabilities to a great extent in a cloud environment.
Abstract: Many modern-day web applications deal with huge amounts of secured and high-impact data. As a result, security plays a major role in web application development. The security of any web application focuses on the data the application handles. The web application framework should prevent and detect web application vulnerabilities. Data are stored in a database, so the OWASP-categorized vulnerability SQL Injection Attack (SQLIA) is the most critical vulnerability for a web application. An ontology-based model for preventing and detecting SQLIA using ontology (SQLIO) is proposed, which implements ontology creation and a prediction-rule-based vulnerability model. The proposed methodology prevents and detects SQLIA web vulnerabilities to a great extent in a cloud environment.
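
A minimal sketch of rule-based screening of inputs for SQL injection patterns; the regular expressions are a few well-known signatures for illustration only, and the paper's SQLIO model derives its detection rules from an ontology rather than from a hand-written list.

```python
# Sketch only: flag inputs matching a few common SQL injection signatures.
import re

RULES = [
    r"(?i)\bunion\b\s+\bselect\b",      # UNION-based injection
    r"(?i)\bor\b\s+1\s*=\s*1",          # tautology attack
    r"--|;",                            # comment / statement break
]

def looks_like_sqlia(user_input: str) -> bool:
    """Return True if any signature matches the raw input string."""
    return any(re.search(rule, user_input) for rule in RULES)

print(looks_like_sqlia("robert'; DROP TABLE students;--"))   # True
print(looks_like_sqlia("plain product search"))              # False
```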