
Showing papers on "Semantic Web" published in 2021


Journal ArticleDOI
TL;DR: In this article, the authors trace the triumphs and challenges of two decades of Semantic Web research and applications.
Abstract: Tracing the triumphs and challenges of two decades of Semantic Web research and applications

115 citations


Book
12 Jul 2021
TL;DR: In this paper, the authors survey fundamental concepts and practical methods for creating and curating large-scale knowledge bases, including methods for discovering and canonicalizing entities and their semantic types and organizing them into clean taxonomies.
Abstract: Equipping machines with comprehensive knowledge of the world's entities and their relationships has been a long-standing goal of AI. Over the last decade, large-scale knowledge bases, also known as knowledge graphs, have been automatically constructed from web contents and text sources, and have become a key asset for search engines. This machine knowledge can be harnessed to semantically interpret textual phrases in news, social media and web tables, and contributes to question answering, natural language processing and data analytics. This article surveys fundamental concepts and practical methods for creating and curating large knowledge bases. It covers models and methods for discovering and canonicalizing entities and their semantic types and organizing them into clean taxonomies. On top of this, the article discusses the automatic extraction of entity-centric properties. To support the long-term life-cycle and the quality assurance of machine knowledge, the article presents methods for constructing open schemas and for knowledge curation. Case studies on academic projects and industrial knowledge graphs complement the survey of concepts and methods.
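
As a concrete illustration of the entity, type, and taxonomy triples such knowledge bases are built from, here is a minimal rdflib sketch; all IRIs are invented for illustration and are not from any of the surveyed knowledge graphs.

```python
# Minimal sketch (not from the surveyed systems) of the entity/type/
# taxonomy triples that large-scale knowledge bases are built from.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/kb/")  # hypothetical namespace
g = Graph()

# A canonicalized entity with a semantic type...
g.add((EX.Ada_Lovelace, RDF.type, EX.ComputerScientist))
# ...organized into a clean taxonomy via subclass links...
g.add((EX.ComputerScientist, RDFS.subClassOf, EX.Scientist))
g.add((EX.Scientist, RDFS.subClassOf, EX.Person))
# ...plus an entity-centric property extracted from text.
g.add((EX.Ada_Lovelace, EX.birthYear, Literal(1815)))

print(g.serialize(format="turtle"))
```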

53 citations


Journal ArticleDOI
TL;DR: To solve the challenges of limited depth and expressiveness in current ontologies, an enhanced reference generalized ontological model (RGOM) based on the Reference Architecture Model for Industry 4.0 (RAMI 4.0) is proposed, which can be used to generate a knowledge graph capable of providing answers in response to any real-time query.
Abstract: In recent years, due to technological advancements, the concept of Industry 4.0 (I4.0) is gaining popularity, while presenting several technical challenges being tackled by both the industrial and academic research communities. Semantic Web including Knowledge Graphs is a promising technology that can play a significant role in realizing I4.0 implementations. This paper surveys the use of the Semantic Web and Knowledge Graphs for I4.0 from different perspectives such as managing information related to equipment maintenance, resource optimization, and the provision of on-time and on-demand production and services. Moreover, to solve the challenges of limited depth and expressiveness in the current ontologies, we have proposed an enhanced reference generalized ontological model (RGOM) based on Reference Architecture Model for I4.0 (RAMI 4.0). RGOM can facilitate a range of I4.0 concepts including improved asset monitoring, production enhancement, reconfiguration of resources, process optimizations, product orders and deliveries, and the life cycle of products. Our proposed RGOM can be used to generate a knowledge graph capable of providing answers in response to any real-time query.
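
To make the real-time query claim concrete, here is a hedged sketch of querying an RGOM-derived knowledge graph with Python and rdflib; the class and property names (rgom:Machine, rgom:hasStatus) and the file name are invented for illustration and are not taken from the published RGOM.

```python
# Hypothetical query over an RGOM-style knowledge graph for asset
# monitoring; the vocabulary and file name are illustrative only.
from rdflib import Graph

g = Graph()
g.parse("factory_kg.ttl", format="turtle")  # assumed local KG export

query = """
PREFIX rgom: <http://example.org/rgom#>
SELECT ?machine ?status WHERE {
  ?machine a rgom:Machine ;
           rgom:hasStatus ?status .
  FILTER(?status != "operational")
}
"""
for machine, status in g.query(query):
    print(machine, status)  # machines needing maintenance attention
```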

50 citations


Journal ArticleDOI
TL;DR: This review paper begins by analyzing the nature of the Semantic Web and its requirements, then discusses the domains where Semantic Web technologies play a vital role and those that drive the growth of the Semantic Web.
Abstract: Semantic web and its technologies have been eyed in many fields. They have the capacity to organize and link data over the web in a consistent and coherent way. Semantic web technologies consist of...

45 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a random walk and word embedding based ontology embedding method named OWL2Vec*, which encodes the semantics of an OWL ontology by taking into account its graph structure, lexical information, and logical constructors.
Abstract: Semantic embedding of knowledge graphs has been widely studied and used for prediction and statistical analysis tasks across various domains such as Natural Language Processing and the Semantic Web. However, less attention has been paid to developing robust methods for embedding OWL (Web Ontology Language) ontologies, which contain richer semantic information than plain knowledge graphs, and have been widely adopted in domains such as bioinformatics. In this paper, we propose a random walk and word embedding based ontology embedding method named OWL2Vec*, which encodes the semantics of an OWL ontology by taking into account its graph structure, lexical information and logical constructors. Our empirical evaluation with three real world datasets suggests that OWL2Vec* benefits from these three different aspects of an ontology in class membership prediction and class subsumption prediction tasks. Furthermore, OWL2Vec* often significantly outperforms the state-of-the-art methods in our experiments.
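
The core idea, walks over the ontology graph treated as sentences for a word embedding model, can be compressed into a few lines. The sketch below is a toy illustration of that one ingredient, not the released OWL2Vec* implementation; the graph is made up, and the lexical and logical-constructor components are omitted.

```python
# Toy sketch of the random-walk + word-embedding idea behind OWL2Vec*
# (not the released code): walks over the ontology graph become
# "sentences" for Word2Vec.
import random
from gensim.models import Word2Vec

# Invented edge list standing in for an ontology's graph structure.
graph = {
    "Person": ["Student", "Professor"],
    "Student": ["Person", "enrolledIn"],
    "Professor": ["Person", "teaches"],
    "enrolledIn": ["Course"],
    "teaches": ["Course"],
    "Course": [],
}

def random_walk(start, length=4):
    walk = [start]
    for _ in range(length):
        nbrs = graph.get(walk[-1], [])
        if not nbrs:
            break
        walk.append(random.choice(nbrs))
    return walk

walks = [random_walk(node) for node in graph for _ in range(50)]
model = Word2Vec(walks, vector_size=32, window=3, min_count=1, sg=1)
print(model.wv.most_similar("Student", topn=3))
```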

40 citations


Journal ArticleDOI
TL;DR: The state of the art of ontology learning is presented, and the challenge of evaluating ontologies to make them reliable is highlighted, since evaluation is not a trivial task in this field; it actually represents a research area of its own.

36 citations


Journal ArticleDOI
TL;DR: In this paper, an extended compact genetic algorithm-based ontology entity matching technique (ECGA-OEM) is proposed, which uses both the compact encoding mechanism and linkage learning approach to match the ontologies efficiently.
Abstract: Data heterogeneity is an obstacle to resource sharing on the Semantic Web (SW), and ontology is regarded as a solution to this problem. However, since different ontologies are constructed and maintained independently, the heterogeneity problem also exists between ontologies. Ontology matching is able to identify the semantic correspondences between entities in different ontologies, which is an effective method to address the ontology heterogeneity problem. Due to huge memory consumption and long runtimes, the performance of existing ontology matching techniques requires further improvement. In this work, an extended compact genetic algorithm-based ontology entity matching technique (ECGA-OEM) is proposed, which uses both a compact encoding mechanism and a linkage learning approach to match ontologies efficiently. The compact encoding mechanism does not need to store and maintain the whole population in memory during the evolutionary process, and the utilization of linkage learning protects the chromosome’s building blocks, which reduces the algorithm’s running time and ensures the alignment’s quality. In the experiments, ECGA-OEM is compared with the participants of the Ontology Alignment Evaluation Initiative (OAEI) and state-of-the-art ontology matching techniques, and the results show that ECGA-OEM is both effective and efficient.
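
To make the compact encoding idea concrete, here is a generic textbook binary compact GA (not the ECGA-OEM implementation): the population is compressed into a single probability vector, so memory stays proportional to the genome length rather than the population size. The fitness function is a toy stand-in for alignment quality.

```python
# Generic binary compact GA (textbook version, not ECGA-OEM itself):
# the population is compressed into one probability vector.
import random

def cga(fitness, n_genes, pop_size=50, iters=2000):
    p = [0.5] * n_genes  # probability that each gene is 1
    for _ in range(iters):
        a = [int(random.random() < pi) for pi in p]
        b = [int(random.random() < pi) for pi in p]
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        for i in range(n_genes):
            if winner[i] != loser[i]:  # shift p toward the winner
                p[i] += (1 / pop_size) if winner[i] else -(1 / pop_size)
                p[i] = min(1.0, max(0.0, p[i]))
    return [int(pi > 0.5) for pi in p]

# Toy fitness: OneMax (count of 1-bits); in ontology matching the
# fitness would instead score the quality of a candidate alignment.
print(cga(sum, n_genes=20))
```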

34 citations


Journal ArticleDOI
TL;DR: The WarSampo knowledge graph (KG), a shared semantic infrastructure, and a Linked Open Data service for publishing data about WW2, with a focus on Finnish military history are presented.
Abstract: The Second World War (WW2) is arguably the most devastating catastrophe of human history, a topic of great interest not only to researchers but also to the general public. However, data about the Second World War is heterogeneous and distributed among various organizations and countries, making it hard to utilize. In order to create aggregated global views of the war, a shared ontology and data infrastructure is needed to harmonize information in various data silos. This makes it possible to share data between publishers and application developers, to support data analysis in Digital Humanities research, and to develop data-driven intelligent applications. As a first step towards these goals, this article presents the WarSampo knowledge graph (KG), a shared semantic infrastructure, and a Linked Open Data (LOD) service for publishing data about WW2, with a focus on Finnish military history. The shared semantic infrastructure is based on the idea of representing war as a spatio-temporal sequence of events that soldiers, military units, and other actors participate in. The metadata schema used is an extension of CIDOC CRM, supplemented by various military historical domain ontologies. With an infrastructure containing shared ontologies, maintaining the interlinked data brings new challenges, as one change in an ontology can propagate across several datasets that use it. To support sustainability, a repeatable automatic data transformation and linking pipeline has been created for rebuilding the whole WarSampo KG from the individual source datasets. The WarSampo KG is hosted on a data service based on W3C Semantic Web standards and best practices, including content negotiation, a SPARQL API, downloads, automatic documentation, and other services supporting the reuse of the data. The WarSampo KG, a part of the international LOD Cloud and totalling ca. 14 million triples, is in use in nine end-user application views of the WarSampo portal, which has had over 400 000 end users since its opening in 2015.
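
Data services like this one are typically consumed through their SPARQL API. The sketch below shows such a query with Python's SPARQLWrapper; the endpoint URL is a placeholder (the real address is documented by the WarSampo service itself), and the CIDOC CRM event class reflects the event-centric schema described above.

```python
# Sketch of querying a WarSampo-style SPARQL endpoint.
# The endpoint URL is a placeholder, not the service's documented one.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://example.org/warsampo/sparql")  # placeholder
sparql.setQuery("""
    PREFIX crm: <http://www.cidoc-crm.org/cidoc-crm/>
    SELECT ?event WHERE { ?event a crm:E5_Event } LIMIT 10
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["event"]["value"])
```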

30 citations


Proceedings ArticleDOI
10 May 2021
TL;DR: In this article, the authors proposed a knowledge graph enhanced architecture of the intelligent Digital Twin, offering capabilities, which are internal linking and referencing, knowledge completion, error detection, collective reasoning and semantic querying.
Abstract: Cyber-Physical Systems, characterized by networking capabilities and digital representations, offer many promising potentials for industrial automation. In an attempt to further enrich the system's digital representation by incorporating interdisciplinary models and considering a continuous and synchronized representation of it within the cyber layer, the concept of the Digital Twin emerged, enabling system monitoring, virtual commissioning, failure diagnosis, and simulations by managing the Cyber-Physical System's data along its lifecycle. To add further intelligence to the Digital Twin, the architecture of the intelligent Digital Twin was proposed. Nevertheless, managing and relating the complex and dynamic digital models as well as the heterogeneous data of the intelligent Digital Twin present open challenges. Due to their inherent extensibility and adaptability as well as their semantic expressiveness, Knowledge Graphs are a suitable concept to overcome these challenges and enable reasoning to gain new insights. Prominent applications of Knowledge Graphs are recommendation systems and exploratory search within the semantic web. However, Knowledge Graphs so far appear underexploited in the industrial domain despite their potential applicability there. Therefore, this contribution proposes a Knowledge Graph enhanced architecture of the intelligent Digital Twin, offering capabilities such as internal linking and referencing, knowledge completion, error detection, collective reasoning, and semantic querying. Based on the proposed concept, potential application fields for the Knowledge Graph enhanced intelligent Digital Twin are addressed.

27 citations


Journal ArticleDOI
Ren Li1, Tianjin Mo1, Jianxi Yang1, Shixin Jiang1, Tong Li1, Yiming Liu1 
TL;DR: This article presents a novel model, called the bridge structure and health monitoring ontology, to achieve fine-grained modeling of bridge structures, SHM systems, sensors, and sensory data from multiple perspectives, and demonstrates its usefulness on a bridge SHM big data platform.
Abstract: Structural health monitoring (SHM) systems have been extensively used to ensure the operational safety of long-span bridges. Large-scale bridge structural response and loading data observed from various sensors show obvious big data characteristics. However, serious “data island” problems, which exist in conventional SHM solutions, inevitably limit the effectiveness of sensory data analysis and information sharing. A unified bridge SHM semantic representation model is much in demand. By taking advantage of Semantic Web technologies, this article presents a novel model, called the bridge structure and health monitoring ontology, to achieve fine-grained modeling of bridge structures, SHM systems, sensors, and sensory data from multiple perspectives. A bridge SHM big data platform is used to demonstrate the model's usefulness. Several representative data accessing and rule-based reasoning scenarios are employed to illustrate the advantages of the proposed approach.
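
A flavor of such fine-grained sensor modeling, sketched with the W3C SOSA/SSN vocabulary in rdflib; the bridge-specific terms (ex:) are invented, and the paper's own ontology may model observations differently.

```python
# Hedged SOSA/SSN-style sketch of a bridge SHM sensor observation.
# sosa: terms are from the W3C SOSA vocabulary; ex: terms are invented.
from rdflib import Graph, Literal, Namespace, RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/bridge#")
g = Graph()

g.add((EX.obs42, RDF.type, SOSA.Observation))
g.add((EX.obs42, SOSA.madeBySensor, EX.strainGauge_07))
g.add((EX.obs42, SOSA.observedProperty, EX.girderStrain))
g.add((EX.obs42, SOSA.hasSimpleResult,
       Literal("412.3", datatype=XSD.float)))  # microstrain, assumed unit

print(g.serialize(format="turtle"))
```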

26 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed an OWL-based ontology (UiA eHealth Ontology/uiAeHo) model to annotate personal, physiological, behavioral, and contextual data from heterogeneous sources (sensor, questionnaire, and interview), followed by structuring and standardizing of diverse descriptions to generate meaningful, practical, personalized and contextual lifestyle recommendations based on the defined rules.
Abstract: Background: Lifestyle diseases, caused by adverse health behavior, are the foremost cause of death worldwide. An eCoach system may encourage individuals to lead a healthy lifestyle with early health risk prediction, personalized recommendation generation, and goal evaluation. Such an eCoach system needs to collect and transform distributed heterogeneous health and wellness data into meaningful information to train an artificially intelligent health risk prediction model. However, this may produce a data compatibility dilemma. Our proposed eHealth ontology can increase interoperability between different heterogeneous networks, provide situation awareness, help in data integration, and discover inferred knowledge. This “proof-of-concept” study will help sensor, questionnaire, and interview data to be more organized for health risk prediction and personalized recommendation generation, targeting obesity as a study case. Objective: The aim of this study is to develop an OWL-based ontology (UiA eHealth Ontology/UiAeHo) model to annotate personal, physiological, behavioral, and contextual data from heterogeneous sources (sensor, questionnaire, and interview), followed by the structuring and standardizing of diverse descriptions to generate meaningful, practical, personalized, and contextual lifestyle recommendations based on the defined rules. Methods: We have developed a simulator to collect dummy personal, physiological, behavioral, and contextual data related to artificial participants involved in health monitoring. We have integrated the concepts of the “Semantic Sensor Network Ontology” and the “Systematized Nomenclature of Medicine—Clinical Terms” to develop our proposed eHealth ontology. The ontology has been created using Protégé (version 5.x). We have used the Java-based “Jena Framework” (version 3.16) for building a semantic web application that includes the resource description framework (RDF) application programming interface (API), OWL API, native tuple store (tuple database), and the SPARQL (SPARQL Protocol and RDF Query Language) query engine. The logical and structural consistency of the proposed ontology has been evaluated with the “HermiT 1.4.3.x” ontology reasoner available in Protégé 5.x. Results: The proposed ontology has been implemented for the study case “obesity.” However, it can be extended further to other lifestyle diseases. The “UiA eHealth Ontology” has been constructed using logical axioms, declaration axioms, classes, object properties, and data properties. The ontology can be visualized with “OWLViz,” and the formal representation has been used to infer a participant’s health status using the “HermiT” reasoner. We have also developed a module for ontology verification that behaves like a rule-based decision support system to predict the probability of health risk, based on the evaluation of the results obtained from SPARQL queries. Furthermore, we discuss the potential lifestyle recommendation generation plan against adverse behavioral risks. Conclusions: This study has led to the creation of a meaningful, context-specific ontology to model massive, unintuitive, raw, unstructured observations for health and wellness data (eg, sensors, interviews, questionnaires) and to annotate them with semantic metadata to create a compact, intelligible abstraction for health risk prediction and individualized recommendation generation.
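
The rule-based checks described above can be approximated with a SPARQL query. The sketch below is a Python/rdflib analogue (the study itself uses the Java-based Jena stack), and the eh: vocabulary and the BMI threshold rule are illustrative assumptions, not the published UiAeHo terms.

```python
# rdflib analogue (hedged; the paper uses Jena in Java) of a rule-like
# SPARQL check that flags participants above a BMI threshold.
# The eh: vocabulary is invented for illustration.
from rdflib import Graph, Literal, Namespace, RDF, XSD

EH = Namespace("http://example.org/uiaeho#")
g = Graph()
g.add((EH.participant1, RDF.type, EH.Participant))
g.add((EH.participant1, EH.hasBMI, Literal(31.2, datatype=XSD.decimal)))

at_risk = g.query("""
    PREFIX eh: <http://example.org/uiaeho#>
    SELECT ?p WHERE { ?p eh:hasBMI ?bmi . FILTER(?bmi >= 30.0) }
""")
for (p,) in at_risk:
    print("obesity risk flagged for", p)
```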

Journal ArticleDOI
01 Jan 2021
TL;DR: In this paper, the authors introduce a dynamic knowledge-graph approach for digital twins and illustrate how this approach is by design naturally suited to realizing the vision of a Universal Digital Twin, which includes the notions of a "base world" that describes the real world and that is maintained by agents that incorporate real-time data, and of parallel worlds that support the intelligent exploration of alternative designs without affecting the base world.
Abstract: This paper introduces a dynamic knowledge-graph approach for digital twins and illustrates how this approach is by design naturally suited to realizing the vision of a Universal Digital Twin. The dynamic knowledge graph is implemented using technologies from the Semantic Web. It is composed of concepts and instances that are defined using ontologies, and of computational agents that operate on both the concepts and instances to update the dynamic knowledge graph. By construction, it is distributed, supports cross-domain interoperability, and ensures that data are connected, portable, discoverable, and queryable via a uniform interface. The knowledge graph includes the notions of a “base world” that describes the real world and that is maintained by agents that incorporate real-time data, and of “parallel worlds” that support the intelligent exploration of alternative designs without affecting the base world. Use cases are presented that demonstrate the ability of the dynamic knowledge graph to host geospatial and chemical data, control chemistry experiments, perform cross-domain simulations, and perform scenario analysis. The questions of how to make intelligent suggestions for alternative scenarios and how to ensure alignment between the scenarios considered by the knowledge graph and the goals of society are considered. Work to extend the dynamic knowledge graph to develop a digital twin of the UK to support the decarbonization of the energy system is discussed. Important directions for future research are highlighted.
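
One natural way to realize the "base world"/"parallel worlds" idea is with RDF named graphs; the rdflib sketch below is a hedged illustration of that pattern under invented IRIs, not the paper's actual implementation.

```python
# Hedged sketch: base world vs. parallel world as RDF named graphs.
# All IRIs are invented; the paper's implementation may differ.
from rdflib import Dataset, Literal, Namespace, URIRef

EX = Namespace("http://example.org/twin#")
ds = Dataset()

base = ds.graph(URIRef("http://example.org/worlds/base"))
base.add((EX.plant1, EX.powerOutputMW, Literal(40)))

# A parallel world copies the base world and explores an alternative
# design without affecting it.
alt = ds.graph(URIRef("http://example.org/worlds/scenario-1"))
for triple in base:
    alt.add(triple)
alt.set((EX.plant1, EX.powerOutputMW, Literal(55)))  # what-if change

print(base.value(EX.plant1, EX.powerOutputMW))  # 40, base unaffected
print(alt.value(EX.plant1, EX.powerOutputMW))   # 55
```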

Journal ArticleDOI
TL;DR: An overview from 1980 to 2020 of the developed research, applications, and DFX techniques for the assessment of green issues is presented to provide a coherent domain ontology that can help managers manage knowledge, improve teamwork, and make decisions in a collaborative green PDP.
Abstract: Through appropriate operations and policies, such as green processes and product development (PDP), companies can address environmental sustainability. To remain competitive, companies may adopt Design for X (DFX), an approach that considers different environmental and sustainability strategies through different factors X. Despite the availability of different DFX techniques that consider environmental issues, the decision as to which approach should be adopted remains open. This paper presents an overview, from 1980 to 2020, of the research, applications, and DFX techniques developed for the assessment of green issues. Selected DFX techniques are linked with strategies used in organizations. Following a literature analysis, a collaborative knowledge-based framework is proposed that addresses the design concepts needed to assess environmental, safety, and health concerns in the development of green products. Furthermore, as a pillar for considering the Semantic Web and an evolving approach linked with natural language processing (NLP) and artificial intelligence (AI), an ontology-based knowledge management model for green assessment is developed for the representation, acquisition, organization, and capitalization of knowledge in a computer-interpretable manner. The findings are useful for both managers and practitioners as they provide a coherent domain ontology that can help them manage knowledge, improve teamwork, and make decisions in a collaborative green PDP. In addition, an understanding is presented of the essential design considerations required to implement environmental, safety, and health issues, as well as the competencies used in the PDP. The key barriers, managerial and strategic implications, and mitigation actions are also identified in this paper.

Journal ArticleDOI
01 Apr 2021
TL;DR: A novel framework based on fuzzy ontology is proposed for information retrieval to overcome the weaknesses of the current web system and to exploit the strengths of query expansion.
Abstract: The World Wide Web (WWW) contains fuzzy information and requires soft computing techniques to deal with the context of a query. It works on the principle of keyword matching, yielding low precision and recall. The Semantic Web, an extension of the WWW, improves the information retrieval process. Query expansion is of utmost importance in information retrieval for retrieving relevant results. To overcome the weaknesses of the current web system and to exploit the strengths of query expansion, a novel framework based on fuzzy ontology is proposed for information retrieval. In the proposed framework, domain-specific knowledge is utilized for ontology construction. Pre-defined domain ontologies and the global ontology ConceptNet are used to construct a fuzzy ontology. Based on the constructed fuzzy ontology, the words most semantically related to a query are identified and the query is expanded. A fuzzy membership function is defined for the different semantic relationships present in the global ontology ConceptNet. Based on the proposed framework, queries are expanded (semantic query expansion) and evaluated on four popular search engines, namely Google, Yahoo, Bing, and Exalead. The performance metrics used are Precision, Mean Average Precision (MAP), Mean Reciprocal Rank (MRR), R-precision, and the number of documents retrieved. Web search engines are precision oriented. With the proposed framework, all metrics improve by approximately 10%: precision lies between 0.75 and 0.81 before query expansion and between 0.85 and 0.89 after it on the various search engines, and the number of documents retrieved is reduced to roughly 1/1000 of the original after query expansion.
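
The fuzzy expansion step can be pictured as thresholding membership degrees over semantic relations. In the sketch below the membership values are made up, whereas the paper derives them from ConceptNet relations via its fuzzy membership function.

```python
# Illustrative sketch of fuzzy-ontology query expansion; the membership
# degrees here are invented, not derived from ConceptNet.
membership = {  # fuzzy degree of relatedness to the query term "car"
    ("car", "automobile"): 0.9,   # synonym-like relation
    ("car", "vehicle"):    0.7,   # hypernym
    ("car", "engine"):     0.5,   # part-of
    ("car", "road"):       0.2,   # weak association
}

def expand(term, threshold=0.4):
    """Return the term plus all related words above the fuzzy threshold."""
    extra = [w for (t, w), mu in membership.items()
             if t == term and mu >= threshold]
    return [term] + extra

print(expand("car"))  # ['car', 'automobile', 'vehicle', 'engine']
```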

Journal ArticleDOI
TL;DR: A conceptual framework is developed that links cross-domain information, infers implicit knowledge, and empowers building managers with insightful assessments, which reduces burdensome intervention from the managers when compared with traditional solutions.

Journal ArticleDOI
TL;DR: An approach towards machine-based damage evaluation is proposed, applying semantic web technologies to a newly developed method for damage detection on constructions based on a large number of high-resolution images, from which georeferenced point clouds are calculated using photogrammetric methods.

Journal ArticleDOI
TL;DR: A review of the semantic web and its applications in the construction industry, along with an overview of the current LCA tools, shows that the proposed method is superior to RDBMS methods in terms of capturing semantics and can improve LCA tools by providing reliable data in the early design stages.

DOI
19 Apr 2021
TL;DR: This review focuses on recent research literature on the use of Semantic Web Technologies (SWT) in city planning and foregrounds representational, evaluative, projective, and synthetical meta-practices as constituent practices of city planning.
Abstract: This review focuses on recent research literature on the use of Semantic Web Technologies (SWT) in city planning. The review foregrounds representational, evaluative, projective, and synthetical meta-practices as constituent practices of city planning. We structure our review around these four meta-practices that we consider fundamental to those processes. We find that significant research exists in all four meta-practices. Linking across domains by combining various methods of semantic knowledge generation, processing, and management is necessary to bridge gaps between these meta-practices and will enable future Semantic City Planning Systems. Keywords: Meta-Practices; Semantic City Planning Systems; Literature Search

Journal ArticleDOI
TL;DR: Twinbase as mentioned in this paper is an open-source, Git-based Digital Twin Web server developed with user-friendliness in mind, which stores digital twin documents in a Git repository, modifies them with Git workflows, and distributes them to users via a static web server, from which the documents can be accessed via a client library or a regular web browser.
Abstract: Digital twins are expected to form a network, a “Digital Twin Web,” in the future. The Digital Twin Web follows a similar structure to the World Wide Web and consists of meta-level digital twins that are described as digital twin description documents and distributed via Digital Twin Web servers. Standards must be established before the Digital Twin Web can be used efficiently, and having an easily accessible server implementation can foster the development of those standards. Twinbase is an open-source, Git-based Digital Twin Web server developed with user-friendliness in mind. Twinbase stores digital twin documents in a Git repository, modifies them with Git workflows, and distributes them to users via a static web server, from which the documents can be accessed via a client library or a regular web browser. A demo server is available at https://dtw.twinbase.org and new server instances can be initialized free-of-charge at GitHub via its browser interface. Twinbase is built on GitHub repositories, Pages, and Actions but can be extended to support other providers or self-hosting. We describe the underlying architecture of Twinbase to support the creation of derivative and alternative server implementations. The Digital Twin Web requires permanent, globally accessible, and transferable identifiers to function properly, and to address this issue, we introduce the concept of a digital twin identifier registry. Performance measurements showed that the median response times for fetching a digital twin document from Twinbase varied between 0.4 and 1.2 seconds depending on the identifier registry.
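
Because documents are served from a static web server over plain HTTP, a twin document can be fetched even without the client library. In the sketch below the twin identifier is hypothetical, and the document's field names depend on the server, so the script only inspects whatever keys it receives.

```python
# Hedged sketch of fetching a digital twin description document from a
# Twinbase-style server; the twin identifier below is hypothetical.
import requests

twin_id = "https://dtw.twinbase.org/example-twin"  # hypothetical DTID
resp = requests.get(twin_id, headers={"Accept": "application/json"},
                    allow_redirects=True, timeout=10)
resp.raise_for_status()
doc = resp.json()  # the digital twin description document
print(sorted(doc))  # inspect whatever fields the document exposes
```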

Book ChapterDOI
TL;DR: This paper explores an alternative solution and contributes a general-purpose meta-model for converting non-RDF resources into RDF, Facade-X, which can be implemented by overriding the SERVICE operator and does not require extending the SPARQL syntax.
Abstract: The Semantic Web research community has understood since its beginning how crucial it is to equip practitioners with methods to transform non-RDF resources into RDF. Proposals focus on either engineering content transformations or accessing non-RDF resources with SPARQL. Existing solutions require users to learn specific mapping languages (e.g. RML), to know how to query and manipulate a variety of source formats (e.g. XPath, JSONPath), or to combine multiple languages (e.g. SPARQL Generate). In this paper, we explore an alternative solution and contribute a general-purpose meta-model for converting non-RDF resources into RDF: Facade-X. Our approach can be implemented by overriding the SERVICE operator and does not require extending the SPARQL syntax. We compare our approach with the state-of-the-art methods RML and SPARQL Generate and show how our solution has lower learning demands and cognitive complexity, and is cheaper to implement and maintain, while having comparable extensibility and efficiency.
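
Concretely, a Facade-X query is ordinary SPARQL whose SERVICE IRI points the engine at a non-RDF source. The sketch below follows the SERVICE IRI convention of the open-source SPARQL Anything tool, which implements Facade-X; treat the exact scheme and file name as illustrative.

```python
# Sketch of the Facade-X idea: plain SPARQL whose SERVICE IRI tells a
# Facade-X-capable engine to expose a non-RDF file as triples. The IRI
# scheme follows SPARQL Anything's convention and is illustrative.
query = """
SELECT ?slot ?value WHERE {
  SERVICE <x-sparql-anything:location=data.json> {
    ?root ?slot ?value .
  }
}
"""
print(query)  # submit to a Facade-X-capable SPARQL engine
```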

Journal ArticleDOI
TL;DR: A meta-review is performed of the main criteria adopted for assessing OEMs and of the major issues and shortcomings identified in existing methodologies, and the key issues are explored in three use cases of semantic-based DSSs in health-related fields.
Abstract: New models and technological advances are driving the digital transformation of healthcare systems. Ontologies and the Semantic Web have been recognized among the most valuable solutions to manage the massive, varied, and complex healthcare data deriving from different sources, thus acting as backbones for ontology-based Decision Support Systems (DSSs). Several contributions in the literature propose ontology engineering methodologies (OEMs) to assist the formalization and development of ontologies, by providing guidelines on tasks, activities, and stakeholders’ participation. Nevertheless, existing OEMs differ widely in their approach and often lack sufficient detail to support ontology engineers. This paper performs a meta-review of the main criteria adopted for assessing OEMs, and of the major issues and shortcomings identified in existing methodologies. The key issues requiring specific attention (i.e., the delivery of a feasibility study, the introduction of project management processes, the support for reuse, and the involvement of stakeholders) are then explored in three use cases of semantic-based DSSs in health-related fields. Results contribute to the literature on OEMs by providing insights on specific tools and approaches to be used when tackling these issues in the development of collaborative OEMs supporting DSSs.

Journal ArticleDOI
TL;DR: This work identifies some fundamental requirements for a DT service framework based on applications identified in corresponding literature and proposes a service framework architecture that utilizes Semantic Web technology and a workflow engine for service orchestration to support the fulfilment of the identified requirements.
Abstract: Digital Twins (DT) in industrial cyber-physical systems are the key enabling technology for Industry 4.0. Services are an essential part of almost every DT concept, but their interaction is usually implementation-specific since no common guidelines are available. This work identifies some fundamental requirements for a DT service framework based on applications identified in corresponding literature. Based on these requirements, a service framework architecture is proposed. The architecture utilizes Semantic Web technology and a workflow engine for service orchestration to support the fulfilment of the identified requirements. As a case study for sensor data evaluation of an industrial process, a proof-of-concept implementation is presented, showing the feasibility and suitability of the proposed DT service framework architecture.

Journal ArticleDOI
TL;DR: A Uniform Compact Genetic Algorithm (UCGA) is proposed to match bibliographic ontologies, which employs a real-valued compact encoding mechanism to improve the algorithm’s performance, a Linearly Decreasing Virtual Population (LDVP) to trade off the algorithm’s exploration and exploitation, and a local perturbation to enhance the convergence speed and avoid premature convergence.
Abstract: The Digital Library (DL) is a source of inspiration for the standards and technologies of the Semantic Web (SW), which is usually implemented using bibliographic data. To address DL’s data heterogeneity problem, it is necessary to annotate the bibliographic data with semantic information, which requires the utilization of bibliographic ontologies. In particular, a bibliographic ontology provides the domain knowledge by specifying the domain concepts and their relationships. However, due to human subjectivity, a concept in different bibliographic ontologies might be defined under different names, causing the data heterogeneity problem. To address this issue, it is necessary to find the mappings between bibliographic ontologies’ concepts, a task known as bibliographic ontology matching. In this paper, a Uniform Compact Genetic Algorithm (UCGA) is proposed to match bibliographic ontologies, which employs a real-valued compact encoding mechanism to improve the algorithm’s performance, a Linearly Decreasing Virtual Population (LDVP) to trade off the algorithm’s exploration and exploitation, and a local perturbation to enhance the convergence speed and avoid premature convergence. In addition, by using a Uniform Probability Density Function (UPDF) and Uniform Cumulative Distribution Function (UCDF), the UCGA can greatly reduce the evolutionary time and memory consumption. The experiment uses the Biblio testing cases provided by the Ontology Alignment Evaluation Initiative (OAEI) to evaluate UCGA’s performance, and the experimental results show that UCGA is both effective and efficient.
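
A hedged sketch of the real-valued ingredients named above, a uniform probability model per gene and a linearly decreasing virtual population; this is a generic illustration inspired by the description, not the paper's UCGA code, and omits the local perturbation step.

```python
# Generic real-valued compact GA sketch with a per-gene uniform model
# and a linearly decreasing virtual population (inspired by, but not
# identical to, the paper's UCGA).
import random

def ucga(fitness, n_genes, iters=1000, n_start=100, n_end=10):
    lo = [0.0] * n_genes  # lower bound of each gene's uniform model
    hi = [1.0] * n_genes  # upper bound of each gene's uniform model
    for t in range(iters):
        n = n_start + (n_end - n_start) * t / iters  # LDVP schedule
        a = [random.uniform(lo[i], hi[i]) for i in range(n_genes)]
        b = [random.uniform(lo[i], hi[i]) for i in range(n_genes)]
        w = a if fitness(a) >= fitness(b) else b
        for i in range(n_genes):  # shrink the interval toward the winner
            step = (hi[i] - lo[i]) / n
            lo[i] = min(w[i], lo[i] + step)
            hi[i] = max(w[i], hi[i] - step)
    return [(lo[i] + hi[i]) / 2 for i in range(n_genes)]

# Toy fitness; a real matcher would score an alignment built from the
# genes (e.g. similarity thresholds and aggregation weights).
print(ucga(lambda x: -sum((v - 0.7) ** 2 for v in x), n_genes=3))
```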

Journal ArticleDOI
TL;DR: MEET-LM, a methodology that aims at generating and evaluating embeddings from a text corpus preserving the co-hyponymy relations synthesised from a domain-specific taxonomy, is proposed, implemented, and applied to a real-life dataset of 2M+ vacancies related to ICT jobs.

Proceedings ArticleDOI
09 Jun 2021
TL;DR: In this paper, an ontology-based case study has been implemented for public higher education (AISHE-Onto). SPARQL queries have been applied to make reasoning with the proposed ontology.
Abstract: Electronic Government is a challenging field for the Semantic Web, and ontologies play a key role in the development of the Semantic Web. This paper describes the terminology of a university through a university ontology and focuses on creating such an ontology. An ontology-based case study has been implemented for Public Higher Education (AISHE-Onto), and SPARQL queries have been applied to reason over the proposed ontology. As a result, a successful query interface has been provided to search academic data through the AISHE-Onto semantic portal.

Journal ArticleDOI
TL;DR: An efficient epidemic prevention and anti-epidemic method is proposed, allowing for real-time situational understanding, the discovery of patients’ relationships, the analysis of the spatiotemporal distribution of patients, super spreader mining, key node analysis, and the prevention and control of high-risk groups.
Abstract: In view of the lack of data association in spatiotemporal information analysis and the lack of spatiotemporal situation analysis in knowledge graphs, this paper combines the semantic web of the geographic knowledge graph with the visual analysis model of spatial information and proposes the comprehensive utilization of geographic knowledge graph technologies and big data visual analysis. It then realizes situational analysis of COVID-19 (Coronavirus Disease 2019) and the exploration of patient relationships through interactive collaborative analysis. The main contributions of the paper are as follows. (1) Based on the characteristics of the geographic knowledge graph, a patient entity model, entity relationship types, and a knowledge representation method are proposed, and a knowledge graph of the spatiotemporal information of COVID-19 is constructed. (2) To analyse COVID-19 patients’ situations and explore their relationships, an analytical framework is designed. The framework, combining the semantic web of the geographic knowledge graph and the visual analysis model of geographic information, allows one to analyse the semantic web by using node attribute similarity calculation, key stage mining, community prediction, and other methods. (3) An efficient epidemic prevention and anti-epidemic method, which is of reference value, is proposed. It is based on experiments and the collaborative analysis of the semantic web and spatial information, allowing for real-time situational understanding, the discovery of patients’ relationships, the analysis of the spatiotemporal distribution of patients, super spreader mining, key node analysis, and the prevention and control of high-risk groups.

Proceedings ArticleDOI
19 Apr 2021
TL;DR: In this paper, the authors propose ColChain, a decentralized architecture based on blockchains that allows users to propose updates to faulty or outdated data, trace updates back to their origin, and query older versions of the data.
Abstract: One of the major obstacles that currently prevents the Semantic Web from exploiting its full potential is that the data it provides access to is sometimes not available or outdated. The reason is rooted deep within its architecture that relies on data providers to keep the data available, queryable, and up-to-date at all times – an expectation that many data providers in reality cannot live up to for an extended (or infinite) period of time. Hence, decentralized architectures have recently been proposed that use replication to keep the data available in case the data provider fails. Although this increases availability, it does not help keeping the data up-to-date or allow users to query and access previous versions of a dataset. In this paper, we therefore propose ColChain (COLlaborative knowledge CHAINs), a novel decentralized architecture based on blockchains that not only lowers the burden for the data providers but at the same time also allows users to propose updates to faulty or outdated data, trace updates back to their origin, and query older versions of the data. Our extensive experiments show that ColChain reaches these goals while achieving query processing performance comparable to the state of the art.
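
The traceability claim rests on a familiar mechanism: each update block records the hash of its predecessor. The sketch below illustrates that hash-chain idea in isolation; ColChain's actual data structures, community partitioning, and consensus are more involved.

```python
# Minimal hash-chain sketch of the idea behind tracing updates to their
# origin (not ColChain's actual implementation): each update block
# records its predecessor's hash, so tampering breaks the chain and
# older versions can be replayed block by block.
import hashlib
import json

def make_block(prev_hash, update):
    body = {"prev": prev_hash, "update": update}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

genesis = make_block("0" * 64, {"add": ("ex:s", "ex:p", "ex:o")})
block2 = make_block(genesis["hash"], {"del": ("ex:s", "ex:p", "ex:o")})

# Verify the chain link between consecutive update blocks.
assert block2["prev"] == genesis["hash"]
print(genesis["hash"][:12], "->", block2["hash"][:12])
```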

Journal ArticleDOI
TL;DR: The COVID-19 Knowledge Graph (KG), as discussed by the authors, is a standardized and computable knowledge graph that integrates knowledge extracted from biomedical literature with relevant information from curated biological databases.
Abstract: Summary The global response to the COVID-19 pandemic has led to a rapid increase of scientific literature on this deadly disease. Extracting knowledge from biomedical literature and integrating it with relevant information from curated biological databases is essential to gain insight into COVID-19 etiology, diagnosis, and treatment. We used Semantic Web technology RDF to integrate COVID-19 knowledge mined from literature by iTextMine, PubTator, and SemRep with relevant biological databases and formalized the knowledge in a standardized and computable COVID-19 Knowledge Graph (KG). We published the COVID-19 KG via a SPARQL endpoint to support federated queries on the Semantic Web and developed a knowledge portal with browsing and searching interfaces. We also developed a RESTful API to support programmatic access and provided RDF dumps for download. Availability and implementation The COVID-19 Knowledge Graph is publicly available under CC-BY 4.0 license at https://research.bioinformatics.udel.edu/covid19kg/.
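
A federated query against such an endpoint could look as follows; the endpoint path, class IRI, and second endpoint are assumptions for illustration, and the knowledge portal documents the real interfaces.

```python
# Hedged sketch of a federated query against the COVID-19 KG; the
# endpoint path, class IRI, and second endpoint are assumed.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = "https://research.bioinformatics.udel.edu/covid19kg/sparql"  # assumed path
sparql = SPARQLWrapper(endpoint)
sparql.setQuery("""
SELECT ?drug ?label WHERE {
  ?drug a <http://example.org/covid19kg/Drug> .      # assumed class IRI
  SERVICE <https://example.org/other/sparql> {       # second endpoint
    ?drug <http://www.w3.org/2000/01/rdf-schema#label> ?label .
  }
} LIMIT 10
""")
sparql.setReturnFormat(JSON)
print(sparql.query().convert())
```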

Journal ArticleDOI
TL;DR: This work presents a framework that combines supervised learning for crop type classification on satellite imagery time-series with semantic web and linked data technologies to assist in the implementation of rule sets by the European common agricultural policy (CAP).
Abstract: During the last decades, massive amounts of satellite images have become available that can be enriched with semantic annotations for the creation of value-added earth observation products. One challenge is to extract knowledge from the raw satellite data in an automated way and to effectively manage the extracted information semantically, to allow fast and accurate decisions of a spatiotemporal nature in a real operational scenario. In this work, we present a framework that combines supervised learning for crop type classification on satellite imagery time-series with semantic web and linked data technologies to assist in the implementation of rule sets of the European common agricultural policy (CAP). The framework collects georeferenced data that are available online and satellite images from the Sentinel-2 mission. We analyze image time-series that cover the entire cultivation period and link each parcel with a specific crop. On top of that, we introduce a semantic layer to facilitate a knowledge-driven management of the available information, capitalizing on ontologies for knowledge representation and semantic rules, to identify possible farmer noncompliance according to the Greening 1 (crop diversification) and SMR 1 (protection of waters against pollution caused by nitrates) rules of the CAP. Experiments show the effectiveness of the proposed integrated approach in three different scenarios for crop type monitoring and consistency checking for noncompliance with the CAP rules: the smart sampling of on-the-spot checks; the automatic detection of noncompliance with the CAP's Greening 1 rule; and the automatic detection of susceptible parcels according to the CAP's SMR 1 rule.
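
The crop diversification check can be pictured as a simple rule over per-holding crop areas. The toy Python check below uses the commonly cited thresholds for arable holdings over 30 ha (at least three crops, main crop at most 75%), whereas the paper encodes such logic as semantic rules over the ontology, and the full CAP rule has additional cases.

```python
# Toy rule check in the spirit of the Greening 1 (crop diversification)
# logic; thresholds follow the commonly cited rule for arable land over
# 30 ha, and only that case is sketched here.
def greening1_compliant(crop_areas):
    """crop_areas: mapping crop name -> cultivated area in hectares."""
    total = sum(crop_areas.values())
    if total <= 30:  # smaller holdings have different (omitted) rules
        return True
    shares = [area / total for area in crop_areas.values()]
    return len(crop_areas) >= 3 and max(shares) <= 0.75

farm = {"wheat": 40.0, "maize": 12.0, "rapeseed": 8.0}
print(greening1_compliant(farm))  # True: 3 crops, wheat is ~66.7%
```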

Journal ArticleDOI
TL;DR: An ontology-driven approach is proposed to formally conceptualize the essential elements of indicators, covering the performance, results, measures, goals, and relationships of a given business strategy; the proposed semantic model is evaluated on a real-world case study on water management.