
Showing papers on "Semantic Web published in 2022"


Journal ArticleDOI
TL;DR: The semantic context model is focused on bringing in the usage of an adaptive environment and can be mapped to individual user interface (UI) displays through smart calculations for adaptive UIs.
Abstract: Currently, many mobile devices provide various interaction styles and modes, which creates complexity in the usage of interfaces. Context offers the information base for developing Adaptive User Interface (AUI) frameworks to overcome this heterogeneity. For this purpose, an ontological model has been built for the specific context and environment. Such an ontology states the relationships among elements (e.g., classes, relations, or capacities) in a machine-understandable content representation. With these formal definitions expressed in the Web Ontology Language (OWL)/Resource Description Framework (RDF), the context mechanisms can be examined and understood by any machine or computational framework. Protégé is used to create a taxonomy in which the system is framed around four contexts: user, device, task, and environment. Competency questions and use cases are utilized for knowledge acquisition, while the information is refined through instances of the concerned parts of the context tree. The consistency of the model has been verified through reasoning software, while SPARQL querying ensured the data availability in the models for the defined use cases. The semantic context model is focused on bringing in the usage of an adaptive environment. This exploration has resulted in a versatile, scalable, and semantically verified context-learning system, which can be mapped to individual user interface (UI) displays through smart calculations for adaptive UIs.
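A context model of this kind can be queried with SPARQL to drive UI adaptation. The sketch below uses a purely hypothetical context vocabulary (the `ctx:` namespace, the user IRI, and all property names are illustrative assumptions, not the paper's actual ontology):

```sparql
PREFIX ctx: <http://example.org/context#>   # hypothetical context ontology

# Retrieve the device and environment context of a given user,
# e.g. to select an appropriate adaptive UI layout.
SELECT ?device ?screenSize ?lighting
WHERE {
  ctx:user42 ctx:usesDevice ?device ;
             ctx:inEnvironment ?env .
  ?device ctx:screenSize ?screenSize .
  ?env    ctx:lighting   ?lighting .
}
```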

49 citations


Journal ArticleDOI
Abstract: In 2012, the Open Geospatial Consortium published GeoSPARQL defining “an RDF/OWL ontology for [spatial] information”, “SPARQL extension functions” for performing spatial operations on RDF data and “RIF rules” defining entailments to be drawn from graph pattern matching. In the 8+ years since its publication, GeoSPARQL has become the most important spatial Semantic Web standard, as judged by references to it in other Semantic Web standards and its wide use for Semantic Web data. An update to GeoSPARQL was proposed in 2019 to deliver a version 1.1 with a charter to: handle outstanding change requests and source new ones from the user community and to “better present” the standard, that is to better link all the standard’s parts and better document and exemplify elements. Expected updates included new geometry representations, alignments to other ontologies, handling of new spatial referencing systems, and new artifact presentation. This paper describes motivating change requests and actual resultant updates in the candidate version 1.1 of the standard alongside reference implementations and usage examples. We also describe the theory behind particular updates, initial implementations of many parts of the standard, and our expectations for GeoSPARQL 1.1’s use.
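As an illustration of the extension functions GeoSPARQL standardizes, the following query selects features whose WKT geometry lies within a query polygon using the `geof:sfWithin` filter function; the dataset and coordinates are hypothetical, but the namespaces and function are from the standard:

```sparql
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>

# Find features whose geometry falls within a bounding polygon
# (the data and polygon coordinates are placeholders).
SELECT ?feature
WHERE {
  ?feature geo:hasGeometry/geo:asWKT ?wkt .
  FILTER (geof:sfWithin(?wkt,
    "POLYGON((24.8 60.1, 25.1 60.1, 25.1 60.3, 24.8 60.3, 24.8 60.1))"^^geo:wktLiteral))
}
```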

28 citations


Journal ArticleDOI
TL;DR: This application report presents a collaborative publication model for CH Linked Data and six design principles for creating shared data services and semantic portals for DH research and applications, suggesting the feasibility of the proposed Sampo model.
Abstract: Cultural heritage (CH) contents are typically strongly interlinked, but published in heterogeneous, distributed local data silos, making it difficult to utilize the data on a global level. Furthermore, the content is usually available only for humans to read, and not as data for Digital Humanities (DH) analyses and application development. This application report addresses these problems by presenting a collaborative publication model for CH Linked Data and six design principles for creating shared data services and semantic portals for DH research and applications. This Sampo model has evolved gradually in 2002–2021 through lessons learned when developing the Sampo series of linked data services and semantic portals in use, including MuseumFinland (2004), CultureSampo (2009), BookSampo (2011), WarSampo (2015), Norssit Alumni (2017), U.S. Congress Prosopographer (2018), NameSampo (2019), BiographySampo (2019), WarVictimSampo 1914–1922 (2019), MMM (2020), AcademySampo (2021), FindSampo (2021), WarMemoirSampo (2021), and LetterSampo (2022). The Semantic Web applications surveyed in this paper cover a wide range of application domains in CH and have attracted up to millions of users on the Semantic Web, suggesting the feasibility of the proposed Sampo model. This work shows a shift of focus in research on CH semantic portals from data aggregation and exploration systems (first-generation systems) to systems supporting DH research with data analytic tools (second-generation systems), and finally to automatic knowledge discovery and Artificial Intelligence (third-generation systems).

23 citations


Journal ArticleDOI
TL;DR: This paper provides a systematic literature review on knowledge graph creation from structured and semi-structured data sources using Semantic Web technologies and highlights the tools, methods, types of data sources, ontologies, and publication methods.
Abstract: Knowledge graphs have, for the past decade, been a hot topic both in public and private domains, typically used for large-scale integration and analysis of data using graph-based data models. One of the central concepts in this area is the Semantic Web, with the vision of providing a well-defined meaning to information and services on the Web through a set of standards. Particularly, linked data and ontologies have been quite essential for data sharing, discovery, integration, and reuse. In this paper, we provide a systematic literature review on knowledge graph creation from structured and semi-structured data sources using Semantic Web technologies. The review takes into account four prominent publication venues, namely, Extended Semantic Web Conference, International Semantic Web Conference, Journal of Web Semantics, and Semantic Web Journal. The review highlights the tools, methods, types of data sources, ontologies, and publication methods, together with the challenges, limitations, and lessons learned in the knowledge graph creation processes.

21 citations


Journal ArticleDOI
TL;DR: In this article, a knowledge-based recommendation system that uses multiple domain ontologies and operates on semantically related usage data is proposed for Massive Open Online Course (MOOC) platforms.
Abstract: With web-based education and Technology Enhanced Learning (TEL) assuming new importance, there has been a shift towards Massive Open Online Course (MOOC) platforms owing to their openness and flexible "on-the-go" nature. The previous decade has seen tremendous research in the field of Adaptive E-Learning Systems, but personalization in MOOCs remains a promising avenue. This paper discusses the scope of such personalization in a MOOC environment and proposes an approach to build a knowledge-based recommendation system that uses multiple domain ontologies and operates on semantically related usage data. The recommendation system employs cluster-based collaborative filtering in conjunction with rules written in the Semantic Web Rule Language (SWRL), and is thus truly a hybrid recommendation system. It has at its core clusters of learners, segregated by predicted learning style in accordance with the Felder-Silverman Learning Style Model (FSLSM) through the detection of tracked usage parameters. Recommendations are made down to the granularity of internal course elements, along with learning-path recommendations and general learning tips and suggestions. The study concludes with an observed positive trend in the learning experience of participants, gauged through click-through logs and explicit feedback forms. In addition, the impact of recommendation is statistically analyzed and used to improve the recommendations.
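The paper encodes its recommendation logic in SWRL; the same kind of rule can be sketched as a SPARQL CONSTRUCT that materializes recommendations for learners in a given FSLSM cluster. The `mooc:` vocabulary below is an invented placeholder, not the authors' actual ontology:

```sparql
PREFIX mooc: <http://example.org/mooc#>   # hypothetical vocabulary

# Recommend visually-oriented course elements to learners whose
# predicted FSLSM learning style places them in the "visual" cluster.
CONSTRUCT { ?learner mooc:recommendedElement ?element . }
WHERE {
  ?learner mooc:inCluster   mooc:VisualCluster ;
           mooc:enrolledIn  ?course .
  ?element mooc:presentationStyle mooc:Visual ;
           mooc:partOfCourse ?course .
}
```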

17 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose an SW technology index to standardize development, ensure that work in SW technology is well designed, and quantitatively evaluate its quality.
Abstract: Semantic web (SW) technology has been widely applied to many domains such as medicine, health care, finance, and geology. At present, researchers mainly rely on their experience and preferences to develop and evaluate work in SW technology. Although the general architecture (e.g., Tim Berners-Lee's Semantic Web Layer Cake) of SW technology was proposed many years ago and is well known, there is still no concrete guideline for standardizing the development of SW technology. In this paper, we propose an SW technology index to standardize development, ensuring that work in SW technology is well designed, and to quantitatively evaluate its quality. The index consists of 10 criteria that quantify quality as a score of [Formula: see text]. We address each criterion in detail from three aspects: (1) what is the criterion? (2) why do we consider it? and (3) how do current studies meet it? Finally, we validate the index by providing examples of how to apply it to the validation cases. We conclude that the index is a useful standard to guide and evaluate work in SW technology.

12 citations


Journal ArticleDOI
TL;DR: The Flow Systems Ontology (FSO) as discussed by the authors is an ontology for describing the composition of flow systems and their mass and energy flows, and two example models are expressed using FSO vocabulary.

12 citations


Journal ArticleDOI
TL;DR: This review paper gives a comprehensive review of the use of the semantic web in the healthcare domain, some virtual communities, and other information retrieval projects, and highlights the information necessary for a good understanding of the semantic web and its ontological frameworks.
Abstract: The semantic web is an emerging technology that helps connect different users to create their content, and it facilitates representing information in a manner that can be made understandable for computers. As the world heads towards the fourth industrial revolution, the implicit utilization of artificial-intelligence-enabled semantic web technologies paves the way for many real-time application developments. The fundamental building blocks for the utilization of semantic web technologies are ontologies, which allow sharing as well as reusing concepts in a standardized way, so that data gathered from heterogeneous sources receive a common nomenclature and duplicates can be disambiguated easily. In this context, the right utilization of ontology capabilities would further strengthen its presence in many web-based applications such as e-learning, virtual communities, social media sites, healthcare, agriculture, etc. In this paper, we give a comprehensive review of the use of the semantic web in the domain of healthcare, some virtual communities, and other information retrieval projects. As the role of the semantic web becomes pervasive in many domains, demand for it in healthcare, virtual communities, and information retrieval has gained huge momentum in recent years. To obtain the correct sense of the words or terms given in textual content, it is necessary to apply the right ontology to fix ambiguity and shun any deviations that persist in the concepts. In this review paper, we have highlighted the information necessary for a good understanding of the semantic web and its ontological frameworks.

11 citations


Journal ArticleDOI
TL;DR: The Tiny Matchmaking Engine (TME) as discussed by the authors is a matchmaking and reasoning engine for the Web Ontology Language (OWL), designed and implemented with a compact and portable C core.

10 citations


Journal ArticleDOI
TL;DR: In this paper, the authors argue that AI-based approaches underlain by semantics and machine reasoning offer a constructive way to integrate and synthesize existing scientific evidence, but that without shared semantics and common standards for machine-actionable data and models, our collective ability to build, grow, and share a collective knowledge base will remain limited.
Abstract: Abstract Progress in key social-ecological challenges of the global environmental agenda (e.g., climate change, biodiversity conservation, Sustainable Development Goals) is hampered by a lack of integration and synthesis of existing scientific evidence. Facing a fast-increasing volume of data, information remains compartmentalized to pre-defined scales and fields, rarely building its way up to collective knowledge. Today's distributed corpus of human intelligence, including the scientific publication system, cannot be exploited with the efficiency needed to meet current evidence synthesis challenges; computer-based intelligence could assist this task. Artificial Intelligence (AI)-based approaches underlain by semantics and machine reasoning offer a constructive way forward, but depend on greater understanding of these technologies by the science and policy communities and coordination of their use. By labelling web-based scientific information to become readable by both humans and computers, machines can search, organize, reuse, combine and synthesize information quickly and in novel ways. Modern open science infrastructure—i.e., public data and model repositories—is a useful starting point, but without shared semantics and common standards for machine actionable data and models, our collective ability to build, grow, and share a collective knowledge base will remain limited. The application of semantic and machine reasoning technologies by a broad community of scientists and decision makers will favour open synthesis to contribute and reuse knowledge and apply it toward decision making.

9 citations


Journal ArticleDOI
TL;DR: In this paper, a semantic approach to integrate heterogeneous data under a building information modeling (BIM) environment and enable automated safety checking through SPARQL-based reasoning is proposed.


Journal ArticleDOI
TL;DR: This article studies a unified method for data access to heterogeneous data sources with Facade-X, a meta-model implemented in a new data integration system called SPARQL Anything, which allows a single meta-model, based on RDF, to represent data from any file format expressible in BNF syntax.
Abstract: Data integration is the dominant use case for RDF Knowledge Graphs. However, Web resources come in formats with weak semantics (for example, CSV and JSON), or formats specific to a given application (for example, BibTex, HTML, and Markdown). To solve this problem, Knowledge Graph Construction (KGC) is gaining momentum due to its focus on supporting users in transforming data into RDF. However, using existing KGC frameworks results in complex data processing pipelines, which mix structural and semantic mappings, whose development and maintenance constitute a significant bottleneck for KG engineers. Such frameworks force users to rely on different tools, sometimes based on heterogeneous languages, for inspecting sources, designing mappings, and generating triples, thus making the process unnecessarily complicated. We argue that it is possible and desirable to equip KG engineers with the ability to interact with Web data formats by relying on their expertise in RDF and the well-established SPARQL query language [2]. In this article, we study a unified method for data access to heterogeneous data sources with Facade-X, a meta-model implemented in a new data integration system called SPARQL Anything. We demonstrate that our approach is theoretically sound, since it allows a single meta-model, based on RDF, to represent data from (a) any file format expressible in BNF syntax, as well as (b) any relational database. We compare our method to state-of-the-art approaches in terms of usability (cognitive complexity of the mappings) and general performance. Finally, we discuss the benefits and challenges of this novel approach by engaging with the reference user community.
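In SPARQL Anything, a non-RDF source is wrapped by the Facade-X meta-model and addressed through a SERVICE clause whose IRI carries the source location. A minimal sketch of this style of query follows; the file path and the `name` key are placeholders, while the `xyz:` data namespace and the `x-sparql-anything:` service scheme follow the system's documented conventions:

```sparql
PREFIX xyz: <http://sparql.xyz/facade-x/data/>

# Query a JSON file as if it were RDF, via the Facade-X meta-model.
# The file location below is a placeholder.
SELECT ?name
WHERE {
  SERVICE <x-sparql-anything:location=file:///data/people.json> {
    ?person xyz:name ?name .
  }
}
```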

Journal ArticleDOI
TL;DR: In this paper , a framework is provided to build an IoT-based home air quality assessment system by using type-2 fuzzy ontology so that smart home systems can make a decision and control appropriately based on predefined rules by employing the provided semantic reasoning.

Journal ArticleDOI
TL;DR: In this paper, the authors focus on recent research literature on the use of Semantic Web Technologies (SWT) in city planning and foreground representational, evaluative, projective, and synthetical meta-practices as constituent practices of city planning.
Abstract: This review focuses on recent research literature on the use of Semantic Web Technologies (SWT) in city planning. The review foregrounds representational, evaluative, projective, and synthetical meta-practices as constituent practices of city planning, and we structure our review around these four meta-practices, which we consider fundamental to those processes. We find that significant research exists in all four meta-practices. Linking across domains by combining various methods of semantic knowledge generation, processing, and management is necessary to bridge gaps between these meta-practices and will enable future Semantic City Planning Systems.

Journal ArticleDOI
TL;DR: In this article, the authors present a survey of semantic web-based education systems that enables new researchers to develop their knowledge and analyse all available possibilities for such systems.
Abstract: Educators have been calling for reform for a decade. Recent technical breakthroughs have led to various improvements in semantic web-based education systems, and development quickened after last year's COVID-19 outbreak. Many countries and educational systems now concentrate on providing students with online education, which differs greatly from traditional classroom education in that it allows students to learn at their own pace. As a consequence, education has become more dynamic, and this changing nature makes user demands difficult to identify. Many instructors suggest using machine learning, artificial intelligence, or ontologies to improve traditional teaching methods. Given the lack of survey studies examining and comparing researchers' semantic web-based teaching methodologies, we decided to conduct this survey. This paper's goal is to analyse all available possibilities for semantic web-based education systems, enabling new researchers to develop their knowledge.

Journal ArticleDOI
TL;DR: In this paper, an OWL ontology has been developed and an RDF repository has been generated to allow advanced SPARQL querying for satellite data products for Earth Observation (EO).
Abstract: Earth Observation (EO) based on Remote Sensing (RS) is gaining importance nowadays, since it offers a well-grounded technological framework for the development of advanced applications in multiple domains, such as climate change, precision agriculture, smart urbanism, and safety. This promotes the continuous generation of data-driven software facilities oriented to advanced processing, analysis, and visualization, which often offer enhanced computing capabilities. Nevertheless, the development of knowledge-driven approaches is still an open challenge in remote sensing, even though they provide human experts with domain knowledge representation, support for data standardization, and semantic integration of sources, which in turn enhance the construction of advanced on-top applications. To this end, the use of ontologies and semantic web technologies has shown high success in knowledge representation in many fields, and Earth Observation is no exception. However, as argued by the research community, there is large room for improvement in the specific case of remote sensing, where ontologies that consider the special nature and structure of different satellite and airborne data products are demanded. This article addresses part of this need by proposing a semantic model for the consolidation, integration, reasoning, and linking of data (and metadata) in the context of satellite remote sensing products for EO. With this objective, an OWL ontology has been developed and an RDF repository has been generated to allow advanced SPARQL querying. Although the proposal has been designed to consider remote sensing data products in general, the current study is mainly focused on the Sentinel-2 satellite mission from the Copernicus Programme of the European Space Agency (ESA).
Four different use cases are showcased to check the potential of the proposed semantic model in terms of ontology integration, federated querying, data analysis, and reasoning.
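A query over such a repository might filter Sentinel-2 products by acquisition date and cloud cover. The `eo:` vocabulary below is hypothetical and only illustrates the intended style of use, not the authors' actual ontology:

```sparql
PREFIX eo:  <http://example.org/eo#>       # hypothetical EO ontology
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

# Sentinel-2 products acquired in July 2021 with low cloud cover.
SELECT ?product ?date ?cloudCover
WHERE {
  ?product eo:mission              eo:Sentinel2 ;
           eo:acquisitionDate      ?date ;
           eo:cloudCoverPercentage ?cloudCover .
  FILTER (?date >= "2021-07-01"^^xsd:date && ?date < "2021-08-01"^^xsd:date)
  FILTER (?cloudCover < 10)
}
```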


Journal ArticleDOI
TL;DR: A survey of standards used in the domain of digital cultural heritage, with a focus on the Metadata Encoding and Transmission Standard (METS) created by the Library of Congress in the United States of America, is presented in this paper.
Abstract: This paper is a survey of standards being used in the domain of digital cultural heritage, with a focus on the Metadata Encoding and Transmission Standard (METS) created by the Library of Congress in the United States of America. The process of digitization of cultural heritage requires silo breaking in a number of areas. One such area is that of academic disciplines, to enable rich interdisciplinary work. This lays the foundation for the emancipation of the second form of silo: the silos of knowledge, both traditional and born-digital, held in individual institutions such as galleries, libraries, archives, and museums. Disciplinary silo breaking is the key to unlocking these institutional knowledge silos. Interdisciplinary teams, such as developers and librarians, work together to make the data accessible as open data on the "semantic web". Description logic is the area of mathematics which underpins many ontology building applications today, and creating these ontologies requires a human–machine symbiosis. Currently in the cultural heritage domain, the institutions' role is that of provider of this open data to the national aggregator, which in turn can make the data available to the trans-European aggregator known as Europeana. Current ingests to the aggregators are in the form of machine-readable cataloguing metadata, which is limited in the richness it provides to disparate object descriptions. METS can provide this richness.

Journal ArticleDOI
TL;DR: In this paper, a personalized recommender system based on ontology and web usage mining is presented, which uses item representations and user profiles based on ontologies to provide personalized services to semantic applications.
Abstract: In this research, we offer a customized recommendation system that uses item representations and user profiles based on ontologies to provide personalized services to semantic applications. Recommendation systems can use semantic reasoning capabilities to overcome present system limits and increase the quality of recommendations. The recommender makes use of domain ontologies to improve personalization: on the one hand, a domain-based inference method is used to model user interests more effectively and accurately; on the other hand, a semantic similarity method is used to improve the stemming algorithm used by our content-based filtering approach, which provides a measure of the affinity between an item and the user. Web Usage Mining is crucial in recommender systems and web personalization, and this study presents an effective recommender system based on ontology and web usage mining. The approach first extracts features from online documents and builds related concepts, then creates an ontology for the website from the concepts and relevant terms retrieved from the records. The semantic similarity of web documents is used to group them into multiple semantic themes, each with its own set of preferences. The suggested solution incorporates ontology and semantic knowledge into Web Usage Mining and personalization procedures, as well as a stemming algorithm, and achieves an overall accuracy of 90%.

Journal ArticleDOI
TL;DR: In this article, a framework for merging the highly reusable terminological components of production ontologies in an a posteriori way is presented, which combines translations, domain-specific vocabularies, and inconsistency checks with syntactic, terminological, and structural analyses to integrate knowledge representations formalized in the Web Ontology Language.

Journal ArticleDOI
TL;DR: The study showed that integrating different geospatial data sets as a knowledge base can facilitate answering sophisticated questions from different users in multi-scale urban building data integration and enrichment.
Abstract: The advent of Web 2.0 has emerged abundant but often unstructured user-generated georeferenced data, such as those from Volunteered Geographic Information (VGI) initiatives. In many cases, these da...


Journal ArticleDOI
TL;DR: In this paper, a comprehensive evaluation of popular semantic data repositories and their computational performance in managing and providing semantic support for spatial queries is provided. The results show that Virtuoso achieves the overall best performance in both non-spatial and spatial-semantic queries.

Journal ArticleDOI
TL;DR: In this paper, the authors propose the Musical Semantic Event Processing Architecture (MUSEPA), a semantically based architecture designed to meet the IoMusT requirements of low-latency communication, discoverability, interoperability, and automatic inference.

Journal ArticleDOI
TL;DR: In this article, the authors introduce Doc2KG (Document-to-Knowledge-Graph), an intelligent framework that handles both creation and real-time updating of a knowledge graph, while also exploiting domain-specific ontology standards.
Abstract: Document Management Systems (DMS) have been used for decades to store large amounts of information in textual form. Their technology paradigm is based on storing vast quantities of textual information enriched with metadata to support searchability. However, this exhibits limitations, as it treats textual information as a black box and relies exclusively on user-created metadata, a process that suffers from quality and completeness shortcomings. The use of knowledge graphs in DMS can substantially improve searchability, providing the ability to link data and enabling semantic searching. Recent approaches focus on either creating knowledge graphs from document collections or updating existing ones. In this paper, we introduce Doc2KG (Document-to-Knowledge-Graph), an intelligent framework that handles both the creation and the real-time updating of a knowledge graph, while also exploiting domain-specific ontology standards. We use DIAVGEIA (clarity), an award-winning Greek open government portal, as our case study and discuss the new capabilities the portal gains by implementing Doc2KG.

Book ChapterDOI
01 Jan 2022
TL;DR: This paper presents a stream reasoning playground that targets stream reasoning as a first-class modelling and processing feature, and presents a more generic scenario for time-series data, in which a workflow for streaming time-series data from various datasets is facilitated by mapping functions.

Journal ArticleDOI
01 Jan 2022
TL;DR: In this paper, the authors present contributions of the ANR McBIM (Communicating Material for BIM) project regarding Digital Building Twins, specifically how Semantic Web technologies allow providing explainable decision-support.
Abstract: This paper presents contributions of the ANR McBIM (Communicating Material for BIM) project regarding Digital Building Twins, specifically how Semantic Web technologies allow providing explainable decision support. Following an introduction stating our understanding of a Digital Building Twin (DBT), namely a live representation of a building's status and environment, we identify five main research domains following the study of the main research issues related to DBTs. We then present the state of the art and existing standards for digitizing the construction process, Semantic Web technologies, and wireless sensor networks. We further position the main contributions made so far in the ANR McBIM project's context according to this analysis, e.g., sensor placement in the communicating material and explainable decision support.
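Sensor observations in a Digital Building Twin are commonly modelled with the W3C/OGC SOSA/SSN ontology. The sketch below retrieves recent readings from a sensor embedded in the communicating material; the SOSA terms are standard, while the sensor IRI, the graph data, and the cutoff time are placeholders:

```sparql
PREFIX sosa: <http://www.w3.org/ns/sosa/>
PREFIX xsd:  <http://www.w3.org/2001/XMLSchema#>

# Latest observations made by a given embedded sensor
# (the sensor IRI and time threshold are placeholders).
SELECT ?obs ?result ?time
WHERE {
  ?obs a sosa:Observation ;
       sosa:madeBySensor    <http://example.org/sensor/strain-01> ;
       sosa:hasSimpleResult ?result ;
       sosa:resultTime      ?time .
  FILTER (?time > "2022-01-01T00:00:00Z"^^xsd:dateTime)
}
ORDER BY DESC(?time)
```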

Book ChapterDOI
TL;DR: In this paper, a declarative indexing framework and an associated visualization web application, KartoGraphI, are proposed to help users identify the knowledge bases relevant to a given problem and estimate their usability.
Abstract: A large number of Semantic Web knowledge bases have been developed and published on the Web. To help users identify the knowledge bases relevant to a given problem and estimate their usability, we propose a declarative indexing framework and an associated visualization Web application, KartoGraphI. It provides an overview of important characteristics for more than 400 knowledge bases, including, for instance, dataset location, SPARQL compatibility level, and shared vocabularies.

Journal ArticleDOI
29 Sep 2022-PeerJ
TL;DR: This research article catalogs an automatable task set necessary to assess and validate the portion of Wikidata relating to COVID-19 epidemiology, and demonstrates the efficiency and limitations of the proposed approach by comparing it to the features of other methods for the validation of linked web data as revealed by previous research.
Abstract: Urgent global research demands real-time dissemination of precise data. Wikidata, a collaborative and openly licensed knowledge graph available in RDF format, provides an ideal forum for exchanging structured data that can be verified and consolidated using validation schemas and bot edits. In this research article, we catalog an automatable task set necessary to assess and validate the portion of Wikidata relating to the COVID-19 epidemiology. These tasks assess statistical data and are implemented in SPARQL, a query language for semantic databases. We demonstrate the efficiency of our methods for evaluating structured non-relational information on COVID-19 in Wikidata, and its applicability in collaborative ontologies and knowledge graphs more broadly. We show the advantages and limitations of our proposed approach by comparing it to the features of other methods for the validation of linked web data as revealed by previous research.
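A validation task of the kind catalogued here can be phrased as a SPARQL query against the Wikidata endpoint, e.g. flagging regional COVID-19 items that lack a case count. The query shape (instance-of/subclass-of paths, a NOT EXISTS check) is standard Wikidata practice, but the specific class and property identifiers below are illustrative choices and should be verified against current Wikidata modelling before use:

```sparql
PREFIX wd:  <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>

# Items classified under the COVID-19 pandemic class hierarchy
# that have no "number of cases" statement (identifiers illustrative).
SELECT ?item
WHERE {
  ?item wdt:P31/wdt:P279* wd:Q84263196 .        # instance of / subclass of
  FILTER NOT EXISTS { ?item wdt:P1603 ?cases . } # no case-count statement
}
```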