
Showing papers on "Semantic Web Stack published in 2012"


Patent
12 Sep 2012
TL;DR: In this patent, an ontology-driven portal is disclosed that organizes all three categories of data according to various facets, using underlying ontologies to define each facet, and wherein any type of information can be classified and linked to other types of information.
Abstract: The patent describes a single location and application on a network where a user can organize public, group, and private/personal information and have this single location accessible to the public. A new, ontology-driven portal that organizes all three categories of data according to various “facets” using underlying ontologies to define each “facet” and wherein any type of information can be classified and linked to other types of information is disclosed. An application that enables a user to effectively utilize and manage knowledge and data the user possesses and allows other users to effectively and seamlessly benefit from the user's knowledge and data over a computer network is also disclosed. A method of processing content created by a user utilizing a semantic, ontology-driven portal on a computer network is described. The semantic portal application provides the user with a content base, such as a semantic form or meta-form, for creating a semantic posting. The semantic portal utilizes a knowledge data structure, such as a taxonomy or ontology, in preparing a semantic posting based on the information provided by the user via the content base. The semantic portal application prepares a preview of a semantic posting for evaluation by the user. The semantic posting is then either modified by the user or accepted and posted by the user for external parties to view.

452 citations


Book
01 Jan 2012
TL;DR: This book surveys Semantic Web technologies, covering ontology construction, evolution, mediation and engineering methodologies, reasoning with inconsistent ontologies, semantic annotation, Semantic Web Services, and case studies in digital libraries, the legal domain and telecommunications.
Abstract: Foreword 1 Introduction 1.1 Semantic Web Technologies 1.2 The Goal of the Semantic Web 1.3 Ontologies and Ontology Languages 1.4 Creating and Managing Ontologies 1.5 Using Ontologies 1.6 Applications 1.7 Developing the Semantic Web References 2 Knowledge Discovery for Ontology Construction 2.1 Introduction 2.2 Knowledge Discovery 2.3 Ontology Definition 2.4 Methodology for Semi-automatic Ontology Construction 2.5 Ontology Learning Scenarios 2.6 Using Knowledge Discovery for Ontology Learning 2.7 Related Work on Ontology Construction 2.8 Discussion and Conclusion Acknowledgments References 3 Semantic Annotation and Human Language Technology 3.1 Introduction 3.2 Information Extraction: A Brief Introduction 3.3 Semantic Annotation 3.4 Applying 'Traditional' IE in Semantic Web Applications 3.5 Ontology-based IE 3.6 Deterministic Ontology Authoring using Controlled Language IE 3.7 Conclusion References 4 Ontology Evolution 4.1 Introduction 4.2 Ontology Evolution: State-of-the-art 4.3 Logical Architecture 4.4 Data-driven Ontology Changes 4.5 Usage-driven Ontology Changes 4.6 Conclusion References 5 Reasoning With Inconsistent Ontologies: Framework, Prototype, and Experiment 5.1 Introduction 5.2 Brief Survey of Approaches to Reasoning with Inconsistency 5.3 Brief Survey of Causes for Inconsistency in the Semantic Web 5.4 Reasoning with Inconsistent Ontologies 5.5 Selection Functions 5.6 Strategies for Selection Functions 5.7 Syntactic Relevance-Based Selection Functions 5.8 Prototype of PION 5.9 Discussion and Conclusions Acknowledgment References 6 Ontology Mediation, Merging, and Aligning 6.1 Introduction 6.2 Approaches in Ontology Mediation 6.3 Mapping and Querying Disparate Knowledge Bases 6.4 Summary References 7 Ontologies for Knowledge Management 7.1 Introduction 7.2 Ontology Usage Scenario 7.3 Terminology 7.4 Ontologies as RDBMS Schema 7.5 Topic-ontologies versus Schema-ontologies 7.6 PROTON Ontology 7.7 Conclusion References 8 Semantic Information Access 8.1 Introduction 8.2 Knowledge Access and the Semantic Web 8.3 Natural Language Generation from Ontologies 8.4 Device Independence: Information Anywhere 8.5 SEKTAgent 8.6 Concluding Remarks References 9 Ontology Engineering Methodologies 9.1 Introduction 9.2 The Methodology Focus 9.3 Past and Current Research 9.4 DILIGENT Methodology 9.5 First Lessons Learned 9.6 Conclusion and Next Steps References 10 Semantic Web Services - Approaches and Perspectives 10.1 Semantic Web Services - A Short Overview 10.2 The WSMO Approach 10.3 The OWL-S Approach 10.4 The SWSF Approach 10.5 The IRS-III Approach 10.6 The WSDL-S Approach 10.7 Semantic Web Services Grounding: The Link Between the SWS and Existing Web Services Standards 10.8 Conclusions and Outlook References 11 Applying Semantic Technology to a Digital Library 11.1 Introduction 11.2 Digital Libraries: The State-of-the-art 11.3 A Case Study: the BT Digital Library 11.4 The Users' View 11.5 Implementing Semantic Technology in a Digital Library 11.6 Future Directions References 12 Semantic Web: A Legal Case Study 12.1 Introduction 12.2 Profile of the Users 12.3 Ontologies for Legal Knowledge 12.4 Architecture 12.5 Conclusions References 13 A Semantic Service Oriented Architecture for the Telecommunications Industry 13.1 Introduction 13.2 Introduction to Service Oriented Architectures 13.3 A Semantic Service Oriented Architecture 13.4 Semantic Mediation 13.5 Standards and Ontologies in Telecommunications 13.6 Case Study 13.7 Conclusion References 14 Conclusion and Outlook 14.1 Management of Networked Ontologies 14.2 Engineering of Networked Ontologies 14.3 Contextualizing Ontologies 14.4 Cross Media Resources 14.5 Social Semantic Desktop 14.6 Applications Index

372 citations


BookDOI
01 Jan 2012

341 citations


Proceedings ArticleDOI
05 Sep 2012
TL;DR: This paper implemented a content-based RS that leverages the data available within Linked Open Data datasets (in particular DBpedia, Freebase and LinkedMDB) in order to recommend movies to the end users.
Abstract: The World Wide Web is moving from a Web of hyper-linked documents to a Web of linked data. Thanks to the spread of the Semantic Web and to the more recent Linked Open Data (LOD) initiative, a vast amount of RDF data have been published in freely accessible datasets. These datasets are connected with each other to form the so-called Linked Open Data cloud. As of today, there are tons of RDF data available in the Web of Data, but only a few applications really exploit their potential power. In this paper we show how these data can successfully be used to develop a recommender system (RS) that relies exclusively on the information encoded in the Web of Data. We implemented a content-based RS that leverages the data available within Linked Open Data datasets (in particular DBpedia, Freebase and LinkedMDB) in order to recommend movies to the end users. We extensively evaluated the approach and validated the effectiveness of the algorithms by experimentally measuring their accuracy with precision and recall metrics.

278 citations
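
The pipeline sketched in the abstract is easy to approximate end to end. Below is a minimal, hedged sketch of one content-based step over DBpedia, assuming the public SPARQL endpoint and the SPARQLWrapper Python library; the chosen feature properties and the Jaccard similarity are illustrative stand-ins, not the paper's exact algorithm.

```python
# Minimal content-based recommendation over Linked Open Data (illustrative only).
# Assumes the public DBpedia endpoint and the SPARQLWrapper library; the Jaccard
# similarity below is a stand-in for the paper's vocabulary-based algorithm.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://dbpedia.org/sparql"

def movie_features(movie_uri):
    """Fetch content features (director, cast, subject) for a movie."""
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(f"""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        PREFIX dct: <http://purl.org/dc/terms/>
        SELECT ?p ?o WHERE {{
            <{movie_uri}> ?p ?o .
            FILTER(?p IN (dbo:director, dbo:starring, dct:subject))
        }}""")
    sparql.setReturnFormat(JSON)
    rows = sparql.query().convert()["results"]["bindings"]
    return {(r["p"]["value"], r["o"]["value"]) for r in rows}

def jaccard(a, b):
    """Overlap of two feature sets as a crude content similarity."""
    return len(a & b) / len(a | b) if a | b else 0.0

profile = movie_features("http://dbpedia.org/resource/Blade_Runner")
candidate = movie_features("http://dbpedia.org/resource/Alien_(film)")
print("similarity:", jaccard(profile, candidate))
```

In a full recommender, the user profile would aggregate features over all rated movies and candidates would be ranked by such a similarity score.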


Journal ArticleDOI
11 Jan 2012
TL;DR: Twenty-five Semantic Web and Database researchers met at the 2011 STI Semantic Summit in Riga, Latvia, July 6-8, 2011, to discuss the opportunities and challenges posed by Big Data.
Abstract: Twenty-five Semantic Web and Database researchers met at the 2011 STI Semantic Summit in Riga, Latvia, July 6-8, 2011 [1] to discuss the opportunities and challenges posed by Big Data for the Semantic Web, Semantic Technologies, and Database communities. The unanimous conclusion was that the greatest shared challenge was not only engineering Big Data, but also doing so meaningfully. The following are four expressions of that challenge from different perspectives.

228 citations


Journal ArticleDOI
01 Dec 2012
TL;DR: The model, lemon, is presented, which aims to address gaps while building on existing work, in particular the Lexical Markup Framework, the ISOcat Data Category Registry, SKOS (Simple Knowledge Organization System) and the LexInfo and LIR ontology-lexicon models.
Abstract: Lexica and terminology databases play a vital role in many NLP applications, but currently most such resources are published in application-specific formats, or with custom access interfaces, leading to the problem that much of this data is in "data silos" and hence difficult to access. The Semantic Web and in particular the Linked Data initiative provide effective solutions to this problem, as well as possibilities for data reuse by inter-lexicon linking, and incorporation of data categories by dereferenceable URIs. The Semantic Web focuses on the use of ontologies to describe semantics on the Web, but currently there is no standard for providing complex lexical information for such ontologies and for describing the relationship between the lexicon and the ontology. We present our model, lemon, which aims to address these gaps while building on existing work, in particular the Lexical Markup Framework, the ISOcat Data Category Registry, SKOS (Simple Knowledge Organization System) and the LexInfo and LIR ontology-lexicon models.

211 citations
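
To make the lexicon-ontology separation concrete, here is a hedged sketch of a single lemon-style entry built with rdflib. The property names (entry, canonicalForm, writtenRep, sense, reference) follow the published lemon model, but treat the exact namespace URI, and the example lexicon/ontology URIs, as assumptions.

```python
# Sketch of a lemon-style lexicon entry that lexicalizes an ontology class.
# Assumes rdflib; the example ontology and lexicon namespaces are hypothetical.
from rdflib import Graph, Namespace, Literal

LEMON = Namespace("http://lemon-model.net/lemon#")   # lemon core (assumed URI)
ONT = Namespace("http://example.org/ontology#")      # hypothetical ontology
LEX = Namespace("http://example.org/lexicon#")       # hypothetical lexicon

g = Graph()
g.bind("lemon", LEMON)

entry, form, sense = LEX.cat, LEX.cat_form, LEX.cat_sense
g.add((LEX.lexicon, LEMON.entry, entry))             # lexicon contains entry
g.add((entry, LEMON.canonicalForm, form))            # its canonical form...
g.add((form, LEMON.writtenRep, Literal("cat", lang="en")))  # ...written "cat"
# The sense is the pivot between the lexical entry and the ontology concept.
g.add((entry, LEMON.sense, sense))
g.add((sense, LEMON.reference, ONT.Cat))

print(g.serialize(format="turtle"))
```

The design point the model makes is visible here: the string "cat" lives entirely in the lexicon, while the meaning is delegated to the ontology via the sense's reference.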


Journal ArticleDOI
TL;DR: The semantic bookmarking and annotation facilities of Semantic Turkey are now supporting just a part of a whole methodology where different actors can cooperate in developing, building and populating ontologies while navigating the Web.
Abstract: Born four years ago as a Semantic Web extension for the web browser Firefox, Semantic Turkey pushed forward the traditional concept of links&folders-based bookmarking to a new dimension, allowing users to keep track of relevant information from visited web sites and to organize the collected content according to standard or personally defined ontologies. Today, the tool has broken the boundaries of its original intents and can be considered, under every aspect, an extensible platform for knowledge management and acquisition. The semantic bookmarking and annotation facilities of Semantic Turkey are now supporting just a part of a whole methodology where different actors, from domain experts to knowledge engineers, can cooperate in developing, building and populating ontologies while navigating the Web.

197 citations


Journal ArticleDOI
TL;DR: In this article, the authors introduce the principles and architectures of two new ontologies central to the task of semantic publishing: FaBiO, the FRBR-aligned Bibliographic Ontology, an ontology for recording and publishing bibliographic records of scholarly endeavours on the Semantic Web, and CiTO, the Citation Typing Ontology.

194 citations


Journal ArticleDOI
TL;DR: This paper attempts to gather the most notable approaches proposed so far in the literature, present them concisely in tabular format and group them under a classification scheme and explores the perspectives and future research steps for a seamless and meaningful integration of databases into the Semantic Web.
Abstract: Relational databases are considered one of the most popular storage solutions for various kinds of data and they have been recognized as a key factor in generating huge amounts of data for Semantic Web applications. Ontologies, on the other hand, are one of the key concepts and main vehicle of knowledge in the Semantic Web research area. The problem of bridging the gap between relational databases and ontologies has attracted the interest of the Semantic Web community, even from the early years of its existence and is commonly referred to as the database-to-ontology mapping problem. However, this term has been used interchangeably for referring to two distinct problems: namely, the creation of an ontology from an existing database instance and the discovery of mappings between an existing database instance and an existing ontology. In this paper, we clearly define these two problems and present the motivation, benefits, challenges and solutions for each one of them. We attempt to gather the most notable approaches proposed so far in the literature, present them concisely in tabular format and group them under a classification scheme. We finally explore the perspectives and future research steps for a seamless and meaningful integration of databases into the Semantic Web.

172 citations
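
Of the two problems the survey distinguishes, the first (creating an ontology/RDF dataset from an existing database instance) can be illustrated with a direct-mapping-style sketch. The table, base URI and URI-minting scheme below are hypothetical; real systems such as D2R generate such mappings automatically.

```python
# Toy direct mapping: one subject URI per row, one predicate per column,
# one class per table. Table and column names are hypothetical examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.execute("INSERT INTO person VALUES (1, 'Ada', 'London')")

BASE = "http://example.org/db/"  # hypothetical base URI
RDF_TYPE = "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"

def direct_mapping(conn, table):
    """Emit N-Triples for each row, assuming the first column is the key."""
    cur = conn.execute(f"SELECT * FROM {table}")
    cols = [d[0] for d in cur.description]
    for row in cur:
        subj = f"<{BASE}{table}/{row[0]}>"
        yield f"{subj} {RDF_TYPE} <{BASE}{table}> ."
        for col, val in zip(cols[1:], row[1:]):
            yield f'{subj} <{BASE}{table}#{col}> "{val}" .'

for triple in direct_mapping(conn, "person"):
    print(triple)
```

The second problem, mapping to an *existing* ontology, is exactly what this naive scheme cannot do: the minted vocabulary would still need to be aligned with the target ontology's terms.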


Book
Eero Hyvönen1
19 Oct 2012
TL;DR: This book gives an overview on why, when, and how Linked (Open) Data and Semantic Web technologies can be employed in practice in publishing CH collections and other content on the Web, and motivates and presents a general semantic portal model and publishing framework as a solution approach to distributed semantic content creation, based on an ontology infrastructure.
Abstract: Cultural Heritage (CH) data is syntactically and semantically heterogeneous, multilingual, semantically rich, and highly interlinked. It is produced in a distributed, open fashion by museums, libraries, archives, and media organizations, as well as individual persons. Managing publication of such richness and variety of content on the Web, and at the same time supporting distributed, interoperable content creation processes, poses challenges where traditional publication approaches need to be re-thought. Application of the principles and technologies of Linked Data and the Semantic Web is a new, promising approach to address these problems. This development is leading to the creation of large national and international CH portals, such as Europeana, to large open data repositories, such as the Linked Open Data Cloud, and massive publications of linked library data in the U.S., Europe, and Asia. Cultural Heritage has become one of the most successful application domains of Linked Data nd Semantic Web technologies. This book gives an overview on why, when, and how Linked (Open) Data and Semantic Web technologies can be employed in practice in publishing CH collections and other content on the Web. The text first motivates and presents a general semantic portal model and publishing framework as a solution approach to distributed semantic content creation, based on an ontology infrastructure. On the Semantic Web, such an infrastructure includes shared metadata models, ontologies, and logical reasoning, and is supported by shared ontology and other Web services alleviating the use of the new technology and linked data in legacy cataloging systems. The goal of all this is to provide layman users and researchers with new, more intelligent and usable Web applications that can be utilized by other Web applications, too, via well-defined Application Programming Interfaces (API). At the same time, it is possible to provide publishing organizations with more cost-efficient so utions for content creation and publication. This book is targeted to computer scientists, museum curators, librarians, archivists, and other CH professionals interested in Linked Data and CH applications on the Semantic Web. The text is focused on practice and applications, making it suitable to students, researchers, and practitioners developing Web services and applications of CH, as well as to CH managers willing to understand the technical issues and challenges involved in linked data publication. Table of Contents: Cultural Heritage on the Semantic Web / Portal Model for Collaborative CH Publishing / Requirements for Publishing Linked Data / Metadata Schemas / Domain Vocabularies and Ontologies / Logic Rules for Cultural Heritage / Cultural Content Creation / Semantic Services for Human and Machine Users / Conclusions

155 citations


Journal ArticleDOI
Aabhas Paliwal1, Basit Shafiq1, Jaideep Vaidya1, Hui Xiong1, Nabil R. Adam1 
TL;DR: This paper addresses the issue of web service discovery given nonexplicit service description semantics that match a specific service request and proposes a solution for achieving functional level service categorization based on an ontology framework.
Abstract: A vast majority of web services exist without explicit associated semantic descriptions. As a result many services that are relevant to a specific user service request may not be considered during service discovery. In this paper, we address the issue of web service discovery given nonexplicit service description semantics that match a specific service request. Our approach to semantic-based web service discovery involves semantic-based service categorization and semantic enhancement of the service request. We propose a solution for achieving functional level service categorization based on an ontology framework. Additionally, we utilize clustering for accurately classifying the web services based on service functionality. The semantic-based categorization is performed offline at the universal description discovery and integration (UDDI). The semantic enhancement of the service request achieves a better matching with relevant services. The service request enhancement involves expansion of additional terms (retrieved from ontology) that are deemed relevant for the requested functionality. An efficient matching of the enhanced service request with the retrieved service descriptions is achieved utilizing Latent Semantic Indexing (LSI). Our experimental results validate the effectiveness and feasibility of the proposed approach.
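
The LSI matching step at the end of that pipeline can be sketched in a few lines. The sketch below assumes scikit-learn and uses toy service descriptions; the ontology-based expansion of the request is simulated by simply appending extra terms.

```python
# Sketch of LSI-based matching of an (expanded) service request against
# service descriptions. Assumes scikit-learn; descriptions are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

services = [
    "currency conversion exchange rate service",
    "weather forecast temperature service",
    "hotel booking reservation service",
]
# Request after ontology-based term expansion (extra terms appended).
request = "convert money currency exchange"

vec = TfidfVectorizer()
X = vec.fit_transform(services + [request])
lsi = TruncatedSVD(n_components=2).fit_transform(X)  # latent semantic space

scores = cosine_similarity(lsi[-1:], lsi[:-1])[0]
for svc, score in sorted(zip(services, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {svc}")
```

Because LSI ranks in a reduced latent space rather than on raw term overlap, the expanded request can match a description even when few literal terms are shared.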

Journal ArticleDOI
TL;DR: A vision of a multilingual Web of Data is presented and the role that techniques such as ontology localization, ontology mapping, and cross-lingual ontology-based information access and presentation will play in achieving this is discussed.

Journal ArticleDOI
TL;DR: An ontology-based information extraction and retrieval system and its application to soccer domain is presented and a keyword-based semantic retrieval approach is proposed, which is improved considerably using domain-specific information extraction, inference and rules.

Journal ArticleDOI
05 Sep 2012
TL;DR: An overview of the features of techniques for storing RDF data is given, motivated by the need for efficient data storage and query processing as the number and scale of real-world Semantic Web applications increase and scalability becomes more important.
Abstract: The Semantic Web extends the principles of the Web by allowing computers to understand and easily explore the Web. In recent years RDF has been a widespread data format for the Semantic Web. There is a real need to efficiently store and retrieve RDF data as the number and scale of real-world Semantic Web applications in use increase. As datasets grow larger and more datasets are linked together, scalability becomes more important. Efficient data storage and query processing that can scale to large amounts of possibly schema-less data has become an important research topic. This paper gives an overview of the features of techniques for storing RDF data.
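
One classic storage layout such surveys cover is a triple table with multiple access-path indexes. The toy store below keeps SPO, POS and OSP dictionaries so common single-pattern lookups avoid a full scan; it is a teaching sketch, not a description of any particular system from the paper.

```python
# In-memory triple store with three permutation indexes (illustrative only).
from collections import defaultdict

class TripleStore:
    def __init__(self):
        self.spo = defaultdict(lambda: defaultdict(set))
        self.pos = defaultdict(lambda: defaultdict(set))
        self.osp = defaultdict(lambda: defaultdict(set))

    def add(self, s, p, o):
        self.spo[s][p].add(o)
        self.pos[p][o].add(s)
        self.osp[o][s].add(p)

    def match(self, s=None, p=None, o=None):
        """Yield triples matching a pattern; None acts as a wildcard."""
        if s is not None and p is not None:
            for obj in self.spo[s][p]:          # SP? via SPO index
                if o is None or o == obj:
                    yield (s, p, obj)
        elif p is not None and o is not None:
            for subj in self.pos[p][o]:         # ?PO via POS index
                yield (subj, p, o)
        else:                                   # fall back: scan SPO
            for subj, preds in self.spo.items():
                if s is not None and subj != s:
                    continue
                for pred, objs in preds.items():
                    if p is not None and pred != p:
                        continue
                    for obj in objs:
                        if o is None or obj == o:
                            yield (subj, pred, obj)

store = TripleStore()
store.add(":alice", "foaf:knows", ":bob")
store.add(":alice", "rdf:type", "foaf:Person")
print(list(store.match(s=":alice")))
```

Real stores add dictionary encoding of URIs, persistence and join optimization on top of exactly this kind of index choice.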

Journal ArticleDOI
TL;DR: A new approach to effective personalization based on Semantic Web technologies, implemented in the new version of the system, Protus 2.0, which comprises the use of an ontology and adaptation rules for knowledge representation and inference engines for reasoning.
Abstract: With the development of the Semantic Web, the use of ontologies as a formalism to describe knowledge and information in a way that can be shared on the web is becoming common. The explicit conceptualization of system components in the form of an ontology facilitates knowledge sharing, knowledge reuse, communication and collaboration, and the construction of knowledge-rich and intensive systems. The Semantic Web provides huge potential and opportunities for developing the next generation of e-learning systems. In previous work, we presented a tutoring system named Protus (PRogramming TUtoring System) that is used for learning the essence of the Java programming language. It uses principles of learning style identification and content recommendation for course personalization. This paper presents a new approach to effective personalization based on Semantic Web technologies, implemented in a new version of the system, named Protus 2.0. This comprises the use of an ontology and adaptation rules for knowledge representation and inference engines for reasoning. The functionality, structure and implementation of the Protus 2.0 ontology, as well as the syntax of the SWRL rules implemented for on-the-fly personalization, are presented in this paper.

Journal ArticleDOI
TL;DR: The Linked Stream Middleware is described, which makes it easy to integrate time-dependent data with other Linked Data sources, by enriching both sensor sources and sensor data streams with semantic descriptions, and enabling complex SPARQL-like queries across both dataset types through a novel query processing engine, along with means to mashup the data and process results.

Journal ArticleDOI
TL;DR: It is argued that machine learning research has to offer a wide variety of methods applicable to different expressivity levels of Semantic Web knowledge bases: ranging from weakly expressive but widely available knowledge bases in RDF to highly expressive first-order knowledge bases, this paper surveys statistical approaches to mining the Semantic Web.
Abstract: In the Semantic Web vision of the World Wide Web, content will not only be accessible to humans but will also be available in machine interpretable form as ontological knowledge bases. Ontological knowledge bases enable formal querying and reasoning and, consequently, a main research focus has been the investigation of how deductive reasoning can be utilized in ontological representations to enable more advanced applications. However, purely logic methods have not yet proven to be very effective for several reasons: First, there still is the unsolved problem of scalability of reasoning to Web scale. Second, logical reasoning has problems with uncertain information, which is abundant in Semantic Web data due to its distributed and heterogeneous nature. Third, the construction of ontological knowledge bases suitable for advanced reasoning techniques is complex, which ultimately results in a lack of such expressive real-world data sets with large amounts of instance data. From another perspective, the more expressive structured representations open up new opportunities for data mining, knowledge extraction and machine learning techniques. If moving towards the idea that part of the knowledge already lies in the data, inductive methods appear promising, in particular since inductive methods can inherently handle noisy, inconsistent, uncertain and missing data. While there has been broad coverage of inducing concept structures from less structured sources (text, Web pages), like in ontology learning, given the problems mentioned above, we focus on new methods for dealing with Semantic Web knowledge bases, relying on statistical inference on their standard representations. We argue that machine learning research has to offer a wide variety of methods applicable to different expressivity levels of Semantic Web knowledge bases: ranging from weakly expressive but widely available knowledge bases in RDF to highly expressive first-order knowledge bases, this paper surveys statistical approaches to mining the Semantic Web. We specifically cover similarity and distance-based methods, kernel machines, multivariate prediction models, relational graphical models and first-order probabilistic learning approaches and discuss their applicability to Semantic Web representations. Finally we present selected experiments which were conducted on Semantic Web mining tasks for some of the algorithms presented before. This is intended to show the breadth and general potential of this exciting new research and application area for data mining.
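
The similarity- and distance-based methods the survey opens with typically start by propositionalizing RDF: each resource becomes a set (or vector) of property-value features. A minimal sketch with hypothetical data, using the Dice coefficient as one of many possible set-overlap measures:

```python
# Feature-set similarity between two RDF resources (illustrative sketch).
triples = [
    (":alice", "rdf:type", ":Researcher"),
    (":alice", ":worksAt", ":KIT"),
    (":alice", ":topic", ":MachineLearning"),
    (":bob",   "rdf:type", ":Researcher"),
    (":bob",   ":worksAt", ":KIT"),
    (":bob",   ":topic",   ":Databases"),
]

def features(resource):
    """Propositionalize a resource as its set of (property, value) pairs."""
    return {(p, o) for s, p, o in triples if s == resource}

def dice(a, b):
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 0.0

print(dice(features(":alice"), features(":bob")))  # 2*2/(3+3) ≈ 0.67
```

Kernel machines and the graphical models covered later in the survey can be seen as progressively richer replacements for this flat feature extraction.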

Proceedings ArticleDOI
26 Mar 2012
TL;DR: This paper describes a general approach to exploit the wealth of already existing TEL data on the Web by allowing its exposure as Linked Data and by taking into account automated enrichment and interlinking techniques to provide rich and well-interlinked data for the educational domain.
Abstract: Research on interoperability of technology-enhanced learning (TEL) repositories throughout the last decade has led to a fragmented landscape of competing approaches, such as metadata schemas and interface mechanisms. However, so far Web-scale integration of resources is not facilitated, mainly due to the lack of take-up of shared principles, datasets and schemas. On the other hand, the Linked Data approach has emerged as the de-facto standard for sharing data on the Web and offers a large potential to solve interoperability issues in the field of TEL. In this paper, we describe a general approach to exploit the wealth of already existing TEL data on the Web by allowing its exposure as Linked Data and by taking into account automated enrichment and interlinking techniques to provide rich and well-interlinked data for the educational domain. This approach has been implemented in the context of the mEducator project where data from a number of open TEL data repositories has been integrated, exposed and enriched by following Linked Data principles.

Proceedings ArticleDOI
Larry Heck1, Dilek Hakkani-Tur1
01 Dec 2012
TL;DR: An unsupervised training approach for SLU systems that leverages the structured semantic knowledge graphs of the emerging Semantic Web using a combination of web search retrieval and syntax-based dependency parsing is proposed.
Abstract: This paper proposes an unsupervised training approach for SLU systems that leverages the structured semantic knowledge graphs of the emerging Semantic Web. The approach creates natural language surface forms of entity-relation-entity portions of knowledge graphs using a combination of web search retrieval and syntax-based dependency parsing. The new forms are used to train an SLU system in an unsupervised manner. This paper tests the approach on the problem of intent detection, and shows that the unsupervised training procedure matches the performance of supervised training over operating points important for commercial applications.
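
The core move, verbalizing entity-relation-entity triples into query-like utterances labeled by the relation, can be sketched with templates. Note this is a hedged simplification: the paper mines surface forms from web search snippets and dependency parses rather than fixed templates, and the triples and template strings below are hypothetical.

```python
# Generate labeled SLU training utterances from knowledge-graph triples.
triples = [
    ("Avatar", "directed_by", "James Cameron"),
    ("Avatar", "release_date", "2009"),
]

TEMPLATES = {  # hypothetical surface-form patterns per relation
    "directed_by": ["who directed {s}", "{s} director"],
    "release_date": ["when was {s} released", "{s} release year"],
}

def surface_forms(triples):
    for s, rel, o in triples:
        for tpl in TEMPLATES.get(rel, []):
            # Each utterance is labeled with the relation as its intent.
            yield tpl.format(s=s), rel

for utterance, intent in surface_forms(triples):
    print(f"{intent:14s} {utterance}")
```

The output pairs are exactly the kind of (utterance, intent) supervision an intent detector would otherwise need humans to annotate.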

Journal ArticleDOI
TL;DR: S-Match as mentioned in this paper is an open source semantic matching framework that tackles the semantic interoperability problem by transforming several data structures such as business catalogs, web directories, conceptual models and web services descriptions into lightweight ontologies and establishing semantic correspondences between them.
Abstract: Achieving automatic interoperability among systems with diverse data structures and languages expressing different viewpoints is a goal that has been difficult to accomplish. This paper describes S-Match, an open source semantic matching framework that tackles the semantic interoperability problem by transforming several data structures such as business catalogs, web directories, conceptual models and web services descriptions into lightweight ontologies and establishing semantic correspondences between them. The framework is the first open source semantic matching project that includes three different algorithms tailored for specific domains and provides an extensible API for developing new algorithms, including the possibility to plug in specific background knowledge according to the characteristics of each application domain.

Journal ArticleDOI
TL;DR: This paper presents a novel method for mining association rules from semantic instance data repositories expressed in RDF/(S) and OWL, which takes advantage of the schema-level knowledge encoded in the ontology to derive appropriate transactions which will later feed traditional association rules algorithms.
Abstract: The amount of ontologies and semantic annotations available on the Web is constantly growing. This new type of complex and heterogeneous graph-structured data raises new challenges for the data mining community. In this paper, we present a novel method for mining association rules from semantic instance data repositories expressed in RDF/(S) and OWL. We take advantage of the schema-level (i.e., TBox) knowledge encoded in the ontology to derive appropriate transactions which will later feed traditional association rules algorithms. This process is guided by the analyst requirements, expressed in the form of query patterns. Initial experiments performed on semantic data of a biomedical application show the usefulness and efficiency of the approach.
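
The two-stage idea, first derive transactions from RDF (one per subject, grouping the values of analyst-chosen properties), then run ordinary support/confidence mining, can be sketched as follows. The biomedical items are hypothetical, and a real system would build the transactions with SPARQL over the instance data rather than hard-code them.

```python
# Stage 2 of the paper's pipeline: plain association-rule mining over
# transactions assumed to have been derived from RDF instance data.
from itertools import combinations
from collections import Counter

transactions = [  # hypothetical: one transaction per patient
    {"diabetes", "metformin"},
    {"diabetes", "metformin", "hypertension"},
    {"hypertension", "ace_inhibitor"},
    {"diabetes", "insulin"},
]

MIN_SUPPORT, MIN_CONF = 0.5, 0.8
n = len(transactions)
counts = Counter()
for t in transactions:
    for size in (1, 2):
        counts.update(frozenset(c) for c in combinations(sorted(t), size))

for pair, cnt in counts.items():
    if len(pair) == 2 and cnt / n >= MIN_SUPPORT:
        for a, b in ((x, y) for x in pair for y in pair if x != y):
            conf = cnt / counts[frozenset([a])]
            if conf >= MIN_CONF:
                print(f"{{{a}}} => {{{b}}}  support={cnt/n:.2f} conf={conf:.2f}")
# Prints: {metformin} => {diabetes}  support=0.50 conf=1.00
```

The schema-level (TBox) guidance in the paper matters precisely in how the transactions are formed; once formed, any Apriori-style miner applies.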

Book ChapterDOI
27 May 2012
TL;DR: LODifier as mentioned in this paper is an approach that combines deep semantic analysis with named entity recognition, word sense disambiguation and controlled Semantic Web vocabularies in order to extract named entities and relations between them from text and to convert them into an RDF representation which is linked to DBpedia and WordNet.
Abstract: The automated extraction of information from text and its transformation into a formal description is an important goal in both Semantic Web research and computational linguistics. The extracted information can be used for a variety of tasks such as ontology generation, question answering and information retrieval. LODifier is an approach that combines deep semantic analysis with named entity recognition, word sense disambiguation and controlled Semantic Web vocabularies in order to extract named entities and relations between them from text and to convert them into an RDF representation which is linked to DBpedia and WordNet. We present the architecture of our tool and discuss design decisions made. An evaluation of the tool on a story link detection task gives clear evidence of its practical potential.
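
A drastically simplified echo of that pipeline, spot entity mentions, mint DBpedia URIs, emit RDF, fits in a few lines. LODifier itself uses deep parsing, proper NER and word sense disambiguation; the capitalization heuristic, the document URI and the dcterms:references relation below are illustrative stand-ins only.

```python
# Toy "text to Linked-Data RDF" pipeline (nothing like production quality).
import re

text = "Berlin is the capital of Germany."

def naive_entities(text):
    # Toy stand-in for a real NER component: capitalized words.
    return re.findall(r"\b[A-Z][a-z]+\b", text)

def dbpedia_uri(mention):
    # Naive linking: slugify the mention into a DBpedia resource URI.
    return f"http://dbpedia.org/resource/{mention.replace(' ', '_')}"

doc = "http://example.org/doc/1"  # hypothetical document URI
for mention in naive_entities(text):
    print(f"<{doc}> <http://purl.org/dc/terms/references> "
          f"<{dbpedia_uri(mention)}> .")
```

The gap between this sketch and LODifier, disambiguating "Berlin" among candidates, typing the entities, and extracting the capital-of relation itself, is exactly what the combined NER/WSD/deep-analysis stack provides.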

Book ChapterDOI
27 May 2012
TL;DR: This work presents a semi-automatic approach for building mappings that translate data in structured sources to RDF expressed in terms of a vocabulary of the user’s choice, and provides an easy to use interface that enables users to control the automated process to guide the system to produce the desired mappings.
Abstract: The Linked Data cloud contains large amounts of RDF data generated from databases. Much of this RDF data, generated using tools such as D2R, is expressed in terms of vocabularies automatically derived from the schema of the original database. The generated RDF would be significantly more useful if it were expressed in terms of commonly used vocabularies. Using today’s tools, it is labor-intensive to do this. For example, one can first use D2R to automatically generate RDF from a database and then use R2R to translate the automatically generated RDF into RDF expressed in a new vocabulary. The problem is that defining the R2R mappings is difficult and labor intensive because one needs to write the mapping rules in terms of SPARQL graph patterns. In this work, we present a semi-automatic approach for building mappings that translate data in structured sources to RDF expressed in terms of a vocabulary of the user’s choice. Our system, Karma, automatically derives these mappings, and provides an easy to use interface that enables users to control the automated process to guide the system to produce the desired mappings. In our evaluation, users need to interact with the system less than once per column (on average) in order to construct the desired mapping rules. The system then uses these mapping rules to generate semantically rich RDF for the data sources. We demonstrate Karma using a bioinformatics example and contrast it with other approaches used in that community. Bio2RDF [7] and Semantic MediaWiki Linked Data Extension (SMW-LDE) [2] are examples of efforts that integrate bioinformatics datasets by mapping them to a common vocabulary. We applied our approach to a scenario used in the SMW-LDE that integrate ABA, Uniprot, KEGG Pathway, PharmGKB and Linking Open Drug Data datasets using a
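
The D2R-then-R2R pipeline that Karma is contrasted with can be made concrete with a SPARQL CONSTRUCT mapping, which is essentially what writing R2R rules as graph patterns amounts to. The sketch below assumes rdflib; the auto-generated source vocabulary is hypothetical, while FOAF stands in for the "commonly used vocabulary" of the user's choice.

```python
# Rewriting auto-generated database RDF into a target vocabulary with a
# hand-written SPARQL CONSTRUCT mapping (the labor Karma aims to remove).
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix db: <http://example.org/d2r/person#> .
db:row1 db:full_name "Grace Hopper" .
""", format="turtle")

MAPPING = """
PREFIX db:   <http://example.org/d2r/person#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
CONSTRUCT { ?s a foaf:Person ; foaf:name ?n . }
WHERE     { ?s db:full_name ?n . }
"""

translated = g.query(MAPPING).graph  # CONSTRUCT results form a new graph
print(translated.serialize(format="turtle"))
```

Karma's contribution is deriving such mappings semi-automatically from user interactions instead of requiring them to be authored pattern by pattern.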

Journal ArticleDOI
01 Jun 2012
TL;DR: A platform for multifaceted product search using Semantic Web technology that is able to process RDFa annotated (X)HTML pages and aggregate product information coming from different Web stores.
Abstract: This paper presents a platform for multifaceted product search using Semantic Web technology. Online shops can use a ping service to submit their RDFa annotated Web pages for processing. The platform is able to process these RDFa annotated (X)HTML pages and aggregate product information coming from different Web stores. We propose solutions for the identification of products and the mapping of the categories in this process. Furthermore, when a loose vocabulary such as the Google RDFa vocabulary is used, the platform deals with the issue of heterogeneous information (e.g., currencies, rating scales, etc.).
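
One concrete heterogeneity problem the platform handles is aggregating offers with prices in different currencies and ratings on different scales. A minimal normalization sketch follows; the exchange rates are hypothetical fixed values where a real system would look them up, and the field names are invented for illustration.

```python
# Normalizing heterogeneous product data for aggregation (illustrative).
RATES_TO_EUR = {"EUR": 1.0, "USD": 0.78, "GBP": 1.25}  # assumed fixed rates

def to_eur(amount, currency):
    return amount * RATES_TO_EUR[currency]

def to_five_star(rating, scale_max):
    """Map a rating on a 0..scale_max scale onto 0..5 stars."""
    return 5.0 * rating / scale_max

offers = [  # hypothetical offers aggregated from two RDFa-annotated stores
    {"store": "A", "price": 19.99, "currency": "USD", "rating": 8, "scale": 10},
    {"store": "B", "price": 15.50, "currency": "EUR", "rating": 4, "scale": 5},
]
for o in offers:
    print(o["store"], round(to_eur(o["price"], o["currency"]), 2), "EUR,",
          to_five_star(o["rating"], o["scale"]), "stars")
```

Only after such normalization can offers for the same identified product be compared on one facet scale.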

Journal ArticleDOI
01 Feb 2012
TL;DR: This paper presents a temporal extension of the very expressive fragment SHIN(D) of the OWL Description Logic language, resulting in the temporal OWL language, and illustrates the expressiveness of the newly introduced language by using an example from the financial domain.
Abstract: Through its interoperability and reasoning capabilities, the Semantic Web opens a realm of possibilities for developing intelligent systems on the Web. The Web Ontology Language (OWL) is the most expressive standard language for modeling ontologies, the cornerstone of the Semantic Web. However, up until now, no standard way of expressing time and time-dependent information in OWL has been provided. In this paper, we present a temporal extension of the very expressive fragment SHIN(D) of the OWL Description Logic language, resulting in the temporal OWL language. Through a layered approach, we introduce three extensions: 1) concrete domains, which allow the representation of restrictions using concrete domain binary predicates; 2) temporal representation, which introduces time points, relations between time points, intervals, and Allen's 13 interval relations into the language; and 3) timeslices/fluents, which implement a perdurantist view on individuals and allow for the representation of complex temporal aspects, such as process state transitions. We illustrate the expressiveness of the newly introduced language by using an example from the financial domain.
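
Allen's 13 interval relations, the backbone of the temporal layer, are standard and easy to state over interval endpoints. A small self-contained classifier (seven base cases; the other six are inverses):

```python
# Allen's interval relations over (start, end) pairs with start < end.
def allen_relation(a, b):
    """Classify the relation of interval a to interval b."""
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:               return "before"
    if a2 == b1:              return "meets"
    if a1 < b1 < a2 < b2:     return "overlaps"
    if a1 == b1 and a2 < b2:  return "starts"
    if b1 < a1 and a2 < b2:   return "during"
    if a1 > b1 and a2 == b2:  return "finishes"
    if a == b:                return "equals"
    # Remaining cases are inverses of the six asymmetric relations above.
    return allen_relation(b, a) + "-inverse"

print(allen_relation((1, 3), (3, 6)))   # meets
print(allen_relation((2, 4), (1, 6)))   # during
```

In the OWL extension these relations become vocabulary over time points and intervals rather than Python comparisons, but the case analysis is the same.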

Proceedings ArticleDOI
17 Apr 2012
TL;DR: This paper explains why capturing functionality is the connection between those three building blocks, and introduces the functional API description format RESTdesc that creates this bridge between hypermedia APIs and the Semantic Web.
Abstract: The early visions for the Semantic Web, from the famous 2001 Scientific American article by Berners-Lee et al., feature intelligent agents that can autonomously perform tasks like discovering information, scheduling events, finding execution plans for complex operations, and in general, use reasoning techniques to come up with sense-making and traceable decisions. While today, more than ten years later, the building blocks (1) resource-oriented REST infrastructure, (2) Web APIs, and (3) Linked Data are in place, the envisioned intelligent agents have not landed yet. In this paper, we explain why capturing functionality is the connection between those three building blocks, and introduce the functional API description format RESTdesc that creates this bridge between hypermedia APIs and the Semantic Web. Rather than adding yet another component to the Semantic Web stack, RESTdesc instead offers concise descriptions that reuse existing vocabularies to guide hypermedia-driven agents. Its versatile capabilities are illustrated by a real-life agent use case for Web browsers wherein we demonstrate that RESTdesc functional descriptions are capable of fulfilling the promise of autonomous agents on the Web.

Journal ArticleDOI
TL;DR: This article proposes a novel smart Web service based on the context of things, which is implemented using a REpresentational State Transfer for Things (Thing-REST) style, to tackle the two problems.
Abstract: Combining the Semantic Web and the Ubiquitous Web, Web 3.0 is for things. The Semantic Web enables human knowledge to be machine-readable and the Ubiquitous Web allows Web services to serve any thing, forming a bridge between the virtual world and the real world. By using context, Web services can become smarter, that is, aware of the target things' or applications' physical environments or situations, and respond proactively and intelligently. Existing methods for implementing context-aware Web services on Web 2.0 mainly enumerate different implementations corresponding to different attribute values of the context, in order to improve the Quality of Service (QoS). However, things in the physical world are extremely diverse, which poses new problems for Web services: it is difficult to unify the context of things and to implement a flexible smart Web service for things. This article proposes a novel smart Web service based on the context of things, which is implemented using a REpresentational State Transfer for Things (Thing-REST) style, to tackle the two problems. In a smart Web service, the user's description (semantic context) and sensor reports (sensing context) are two channels for acquiring the context of things, which are then employed by ontology services to make the context of things machine-readable. With the guidance of domain knowledge services, event detection services can analyze things' needs particularly well through the context of things. We then propose a Thing-REST style to manage the context of things and user context, and to mash up Web services through three structures (i.e., chain, select, and merge) to implement smart Web services. A smart plant-watering service application demonstrates the effectiveness of our method.
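
The three composition structures named in the abstract (chain, select, merge) can be sketched as higher-order functions over simple service callables. The watering scenario, service names and payloads below are hypothetical illustrations of the idea, not the paper's implementation.

```python
# Chain / select / merge mashup structures over toy "services" (dict -> dict).
def chain(*services):
    """Pipe each service's output into the next (chain structure)."""
    def run(ctx):
        for svc in services:
            ctx = svc(ctx)
        return ctx
    return run

def select(predicate, if_true, if_false):
    """Route the context to one of two services (select structure)."""
    return lambda ctx: if_true(ctx) if predicate(ctx) else if_false(ctx)

def merge(*services):
    """Combine several services' outputs into one context (merge structure)."""
    def run(ctx):
        out = {}
        for svc in services:
            out.update(svc(ctx))
        return out
    return run

# Toy watering scenario: sense soil, consult weather, decide.
soil = lambda ctx: {**ctx, "moisture": 0.2}
weather = lambda ctx: {**ctx, "rain_expected": False}
water = lambda ctx: {**ctx, "action": "water the plant"}
wait = lambda ctx: {**ctx, "action": "do nothing"}

pipeline = chain(merge(soil, weather),
                 select(lambda c: c["moisture"] < 0.3 and not c["rain_expected"],
                        water, wait))
print(pipeline({"plant": "ficus"})["action"])   # water the plant
```

In the Thing-REST style these compositions would be expressed over RESTful resources rather than Python closures, but the control structures are the same three.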

Proceedings ArticleDOI
19 Sep 2012
TL;DR: This paper presents a general framework for the Semantic Web of Things, based on an evolution of classic Knowledge Base models, also providing architectural solutions for information storage, communication and processing.
Abstract: The Semantic Web of Things is a novel paradigm combining the Semantic Web and the Internet of Things, aiming to associate semantic annotations to real-world objects, locations and events. This paper presents a general framework for the Semantic Web of Things, based on an evolution of classic Knowledge Base models, also providing architectural solutions for information storage, communication and processing.

Proceedings ArticleDOI
04 Jul 2012
TL;DR: The paper accentuates the need for, and emphasizes, a framework of Semantic Smart Gateways (SSGF) in the Semantic Web of Things (SWoT), proposing an ontology learning and an ontology alignment method, respectively.
Abstract: The aim of this paper is to present the authors' proposal regarding semantic interoperability for interconnected and semantically coordinated smart entities in a Web of Things. More specifically, the paper presents a use case scenario and requirements related to the semantic registration, coordination and retrieval of smart entities. Motivated by these, the paper accentuates the need for, and emphasizes, a framework of Semantic Smart Gateways (SSGF) in the Semantic Web of Things (SWoT), proposing an ontology learning and an ontology alignment method, respectively.

Proceedings ArticleDOI
23 May 2012
TL;DR: This work demonstrates how semantic web technology provides efficient solutions for the management of complex and distributed data in heterogeneous systems, and how it can be used in medical information systems as well.
Abstract: With the increased development of cloud computing, access control policies have become an important issue in the security field of cloud computing. The Semantic web is the extension of the current Web which aims at automation, integration and reuse of data among different web applications such as cloud computing. However, Semantic web applications pose some new requirements for security mechanisms, especially in access control models. In this paper, we analyse existing access control methods and present a semantic-based access control model which considers semantic relations among different entities in the cloud computing environment. We have enriched the research on semantic web technology with role-based access control that is able to be applied in the field of medical information systems or e-Healthcare systems. This work demonstrates how semantic web technology provides efficient solutions for the management of complex and distributed data in heterogeneous systems, and how it can be used in medical information systems as well.
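
The gain from semantic relations in such a model can be shown with role-based access control over a role hierarchy: a permission check runs against the transitive closure of the hierarchy, so roles inherit what their ancestors may do. The roles, resources and hierarchy below are hypothetical e-Healthcare examples, and a real system would derive them from an ontology rather than Python dictionaries.

```python
# RBAC check against the transitive closure of a role hierarchy (sketch).
ROLE_PARENT = {"chief_physician": "physician", "physician": "staff"}
PERMISSIONS = {
    ("staff", "read", "contact_record"),
    ("physician", "read", "medical_record"),
    ("chief_physician", "write", "medical_record"),
}

def roles_of(role):
    """Yield the role and all of its ancestors in the hierarchy."""
    while role is not None:
        yield role
        role = ROLE_PARENT.get(role)

def allowed(role, action, resource):
    return any((r, action, resource) in PERMISSIONS for r in roles_of(role))

print(allowed("physician", "read", "contact_record"))   # True (inherited)
print(allowed("physician", "write", "medical_record"))  # False
```

Encoding the hierarchy as ontology subclass/subproperty relations lets a reasoner perform this closure, which is the kind of inference the paper's semantic access control model relies on.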