
Showing papers on "Semantic interoperability published in 2011"


Proceedings ArticleDOI
22 Jun 2011
TL;DR: A model-theoretical approach for semantic data compression and reliable semantic communication is investigated and it is shown that Shannon's source and channel coding theorems have semantic counterparts.
Abstract: This paper studies methods of quantitatively measuring semantic information in communication. We review existing work on quantifying semantic information, then investigate a model-theoretical approach for semantic data compression and reliable semantic communication. We relate our approach to the statistical measurement of information by Shannon, and show that Shannon's source and channel coding theorems have semantic counterparts.
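The abstract only sketches the model-theoretic idea; as a rough illustration (not taken from the paper), a Carnap/Bar-Hillel-style measure assigns a sentence more semantic information the more possible worlds it rules out, in direct analogy to Shannon's self-information:

```python
import math

def shannon_self_information(p):
    """Shannon self-information of a message with probability p, in bits."""
    return -math.log2(p)

def semantic_information(sentence_holds, worlds):
    """Model-theoretic measure: -log2 of the fraction of possible worlds
    (interpretations) in which the sentence is true."""
    satisfying = sum(1 for w in worlds if sentence_holds(w))
    return -math.log2(satisfying / len(worlds))

# Toy model: worlds are truth assignments to two propositions (a, b).
worlds = [(a, b) for a in (True, False) for b in (True, False)]

# "a and b" holds in 1 of 4 worlds -> 2 bits of semantic information.
print(semantic_information(lambda w: w[0] and w[1], worlds))  # 2.0
# "a or b" holds in 3 of 4 worlds -> less informative.
print(round(semantic_information(lambda w: w[0] or w[1], worlds), 3))  # 0.415
```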

163 citations



Journal ArticleDOI
TL;DR: The SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows, thus facilitating the intersection of Web services and Semantic Web technologies.
Abstract: The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. 
Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner very similar to data housed in static triple-stores, thus facilitating the intersection of Web services and Semantic Web technologies.
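The SADI pattern described above, in which services consume RDF instances of an input OWL class and return the same subject URIs decorated with new output properties, can be sketched with triples as plain Python tuples (all URIs and property names below are hypothetical, not from the SADI registry):

```python
# Triples are (subject, predicate, object). A SADI service receives an RDF
# graph describing input instances and returns the *same subject URIs*
# decorated with new properties.
INPUT_CLASS = ("rdf:type", "ex:GeneID")   # assumed input OWL class
OUTPUT_PREDICATE = "ex:hasSequence"        # assumed output property

def sadi_service(input_graph, lookup):
    """Consume instances of the input class, attach the output property."""
    output = set()
    for (s, p, o) in input_graph:
        if (p, o) == INPUT_CLASS and s in lookup:
            output.add((s, OUTPUT_PREDICATE, lookup[s]))
    return output

input_graph = {("ex:gene42", "rdf:type", "ex:GeneID")}
sequences = {"ex:gene42": "ATGCCG"}  # invented backing data
print(sadi_service(input_graph, sequences))
# {('ex:gene42', 'ex:hasSequence', 'ATGCCG')}
```

Because the output is attached to the input URIs, a client can chain services simply by feeding one service's output graph to the next, which is what enables the automatic workflow assembly the abstract describes.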

117 citations


Journal ArticleDOI
TL;DR: An approach to translate definitions expressed in the openEHR Archetype Definition Language to a formal representation expressed using the Ontology Web Language (OWL), providing an approach to apply the SWRL rules to concrete instances of clinical data.

111 citations


Journal ArticleDOI
TL;DR: An overview of the well-established literature dealing with user model interoperability is presented, discussing the most representative work which has provided valuable solutions to face interoperability issues and some open issues and possible future deployments in the area.
Abstract: Nowadays a large number of user-adaptive systems have been developed. Commonly, the effort to build user models is repeated across applications and domains, due to the lack of interoperability and synchronization among user-adaptive systems. There is a strong need for the next generation of user models to be interoperable, i.e. to be able to exchange user model portions and to use the exchanged information to enrich the user experience. This paper presents an overview of the well-established literature dealing with user model interoperability, discussing the most representative work which has provided valuable solutions to interoperability issues. Based on a detailed decomposition and a deep analysis of the selected work, we have isolated a set of dimensions characterizing the user model interoperability process, along which the work has been classified. Starting from this analysis, the paper presents some open issues and possible future deployments in the area.

107 citations


Proceedings ArticleDOI
29 Nov 2011
TL;DR: A comprehensive and systematic survey of Cloud computing interoperability efforts by standardization groups, industry and research community is carried out to derive an initial set of semantic interoperability requirements to be supported by existing as well as next generation Cloud systems.
Abstract: Cloud computing is a promising IT paradigm which enables the Internet's evolution into a global market of collaborating services. Cloud computing semantic interoperability plays a key role in making this a reality. To this end, a comprehensive and systematic survey of Cloud computing interoperability efforts by standardization groups, industry and the research community is carried out. The main objective of this survey is to derive an initial set of semantic interoperability requirements to be supported by existing as well as next-generation Cloud systems. The survey motivates and encourages the Cloud community to adopt a common Cloud computing interoperability framework whose core dimensions are the creation of a common data model and a standardized Cloud interface (API), which will constitute the base for the development of a semantically interoperable Cloud environment.

91 citations


Journal ArticleDOI
TL;DR: A knowledge framework is presented to use semantically enriched international product data standards, and knowledge representation elements as a basis for achieving seamless enterprise interoperability and make interoperable intelligent manufacturing systems a reality.
Abstract: Nowadays, competition is experienced not only among companies but among global supply chains and business networks. There is a demand for intelligent world-class solutions capable of reinforcing partnerships and collaborations with an improved cross-cultural understanding. However, due to the proliferation of terminology, organizations from similar business environments have trouble cooperating, and are experiencing difficulties exchanging vital information electronically, such as product and manufacturing data, even when using international standards. To address such interoperability problems, the Intelligent Manufacturing Systems program ( http://www.ims.org/content/glossary ) is providing an opportunity to develop industry-led R&D initiatives, building common semantics and integrated solutions. The SMART-fm project was one of those initiatives. It led to the development of the international standard for product data representation and exchange in the furniture sector (ISO 10303-236) and identified the challenge of semantic interoperability, today a major obstacle in modern enterprise integration. This paper presents a knowledge framework to address that challenge and make interoperable intelligent manufacturing systems a reality. It proposes to use semantically enriched international product data standards and knowledge representation elements as a basis for achieving seamless enterprise interoperability.

81 citations


Journal ArticleDOI
TL;DR: The RE-USE architecture and associated profiles are focused on defining a set of scalable, standards-based, IHE-compliant profiles that can enable single-source data collection/entry and cross-system data reuse through semantic integration.

79 citations


Journal ArticleDOI
TL;DR: The underlying knowledge base, which is based on the formal ontology OntoCAPE, is presented, and the design and implementation of a prototypical integration software are described and the application of the software prototype in a large industrial use case is reported.

76 citations


Journal ArticleDOI
TL;DR: The objective in this paper is to take a step back, and consider what semantic interoperability means, assess whether or not it has been achieved, and, if not, determine what concrete next steps can be taken to get closer.

74 citations


Journal ArticleDOI
TL;DR: A semantic world model framework for hierarchical distributed representation of knowledge in autonomous underwater systems that will enhance interoperability, independence of operation, and situation awareness of the embedded service-oriented agents for autonomous platforms.
Abstract: This paper proposes a semantic world model framework for hierarchical distributed representation of knowledge in autonomous underwater systems. This framework aims to provide a more capable and holistic system, involving semantic interoperability among all involved information sources. This will enhance interoperability, independence of operation, and situation awareness of the embedded service-oriented agents for autonomous platforms. The results obtained specifically affect mission flexibility, robustness, and autonomy. The presented framework makes use of the idea that heterogeneous real-world data of very different types must be processed by (and run through) several different layers, to be finally available in a suitable format and at the right place to be accessible by high-level decision-making agents. In this sense, the presented approach shows how to abstract away from the raw real-world data step by step by means of semantic technologies. The paper concludes by demonstrating the benefits of the framework in a real scenario. A hardware fault is simulated in a REMUS 100 AUV while performing a mission. This triggers a knowledge exchange between the status monitoring agent and the embedded adaptive mission planner agent. By using the proposed framework, both services can interchange information while remaining domain independent during their interaction with the platform. The results of this paper are readily applicable to land and air robotics.

Journal ArticleDOI
TL;DR: An approach that makes use of Semantic Web Technologies to support the assessment of open questions in eLearning courses is described, which combines domain ontologies, semantic annotations and semantic similarity measurements.

Journal ArticleDOI
TL;DR: A layered architecture of an Internet of Things framework is proposed in which a semantically enhanced overlay interlinks the other layers and facilitates secure access provision to Internet of Things-enabled services.
Abstract: The future Internet will embrace the intelligence of Web 3.0 and the omnipresence of everyday connected objects. The latter was envisioned as the Internet of Things. Security and interoperability concerns are hindering service innovations using the Internet of Things. This paper addresses secure access provision to Internet of Things-enabled services and interoperability of security attributes between different administrative domains. We propose a layered architecture of an Internet of Things framework in which a semantically enhanced overlay interlinks the other layers and facilitates secure access provision to Internet of Things-enabled services. The main element of the semantic overlay is security reasoning through ontologies and semantic rules. Finally, the interoperability of security attributes is addressed through an ontology and a machine-to-machine platform. This paper provides implementation details of the security reasoning and interoperability aspects and discusses crucial challenges in these areas.

Proceedings ArticleDOI
04 Jul 2011
TL;DR: A full stack of APIs is proposed to decouple the development of a Cloud-based application from its deployment and execution, with particular attention paid to the interoperability API aiming to provide programming language interoperability and protocol syntax or semantic enforcements.
Abstract: The federation of Cloud resources can be treated at different levels of abstraction. In this paper we focus on application programming interfaces for building Cloud-based applications using services from multiple Cloud providers. A full stack of APIs is proposed to decouple the development of a Cloud-based application from its deployment and execution. Particular attention is paid to the design of the interoperability API, which aims to provide programming-language interoperability and protocol syntax or semantic enforcements.

Journal ArticleDOI
TL;DR: The interoperability infrastructure developed in previous work by this group has been reused and extended to cover the requirements of data transformation and has been applied to ISO 13606 and openEHR.

Book ChapterDOI
13 Jun 2011
TL;DR: This chapter introduces the approach that is investigated within the Connect project and that deals with the dynamic synthesis of emergent connectors that mediate the interaction protocols executed by the networked systems.
Abstract: This chapter deals with interoperability among pervasive networked systems, in particular accounting for the heterogeneity of protocols from the application down to the middleware layer, which is mandatory for today's and even more for tomorrow's open and highly heterogeneous networks. The chapter then surveys existing approaches to middleware interoperability, further providing a formal specification so as to allow for rigorous characterization and assessment. In general, existing approaches fail to address interoperability required by today's ubiquitous and heterogeneous networking environments where interaction protocols run by networked systems need to be mediated at both application and middleware layers. To meet such a goal, this chapter introduces the approach that is investigated within the Connect project and that deals with the dynamic synthesis of emergent connectors that mediate the interaction protocols executed by the networked systems.

Proceedings ArticleDOI
29 Nov 2011
TL;DR: A PaaS semantic interoperability framework (PSIF) is introduced that studies, models and tries to resolve semantic interoperability conflicts raised during the deployment or the migration of an application by defining the following dimensions: Fundamental PaaS Entities, Types of Semantics, and Levels of Semantic Conflicts.
Abstract: Given the rapid uptake and the great diversity of PaaS offerings, understanding semantic interoperability at the PaaS level is essential for supporting inter-Cloud cooperation, seamless information exchange, and application and data portability. In this vein, this paper introduces a PaaS semantic interoperability framework (PSIF). PSIF studies, models and tries to resolve semantic interoperability conflicts raised during the deployment or the migration of an application by defining the following dimensions: Fundamental PaaS Entities, Types of Semantics, and Levels of Semantic Conflicts. The development of common PaaS models and standardized management interfaces is raised as a primary requirement in this context. PaaS architectures can then be augmented with a semantic layer that would host the common models and would be the link between heterogeneous PaaS offerings.

Proceedings ArticleDOI
27 Jun 2011
TL;DR: Some of the challenges in achieving interoperability for cloud computing are described and an adaptation of the U.S. Department of Defense's LISI Maturity Model is recommended to address cloud-to-cloud interoperability.
Abstract: Cloud computing describes a new distributed computing paradigm that allows system of systems to access a shared pool of configurable computing resources (e.g., networks, servers, storage, data, applications, and services) that can be rapidly provisioned and released over the Internet with minimal user-management effort or cloud-provider interaction. Interoperability is central to enabling sharing of resources from a pool of cloud-service providers in a seamless fashion. In this paper we describe some of the challenges in achieving interoperability for cloud computing and recommend an adaptation of the U.S. Department of Defense's LISI Maturity Model to address cloud-to-cloud interoperability.

Journal ArticleDOI
TL;DR: It is argued that a shared anchor, the biomedical reality under scrutiny, can effectively support the semantic integration of these ECG standards into a coherent ECG representation for the sake of a unified Electronic Health Record (EHR) model.

Journal ArticleDOI
TL;DR: A concept for a solution to the semantic interoperability problem in emergency management using an ontology is proposed by presenting a case study and discussing the possibility of applying the ontology to resolve semantic heterogeneity in emergency response.
Abstract: Emergency response is a complex activity involving many actors and heterogeneous spatial data. Two of the major challenges are the integration and extraction of these data and their transmission to emergency management actors. Although significant progress has been made regarding the systemic and syntactic heterogeneity of data in this context, semantic heterogeneity remains insufficiently addressed. Here, we discuss the possibility of applying ontologies to resolve semantic heterogeneity in emergency response. We propose a concept for a solution to the semantic interoperability problem in emergency management using an ontology, and illustrate it with a case study.

Journal ArticleDOI
TL;DR: This paper introduces the i* interoperability problem and derive an XML interchange format, called iStarML, as a practical solution to this problem, and details the tags and options of the interchange format.

Book ChapterDOI
21 Feb 2011
TL;DR: In this article, the authors propose the integration of traceability functionalities in information systems as a way to support such sustainability, where data, semantic, and structural mappings between partner enterprises in the complex network should be modelled as tuples and stored in a knowledge base for communication support with reasoning capabilities, thus allowing to trace, monitor and support the stability maintenance of a system's interoperable state.
Abstract: Enterprises are demanded to collaborate and establish partnerships to reach global business and markets. However, due to the different sources of models and semantics, organizations are experiencing difficulties exchanging vital information electronically and seamlessly, even when they operate in related business environments. This situation is even worse given the continuous evolution of enterprise systems and applications, whose dynamics aggravate the interoperability problem through the constant need for model adjustments and semantics harmonization. To contribute to a long-term stable, interoperable enterprise operating environment, the authors propose the integration of traceability functionalities in information systems as a way to support such sustainability. Data, semantic, and structural mappings between partner enterprises in the complex network should be modelled as tuples and stored in a knowledge base with reasoning capabilities for communication support, thus making it possible to trace, monitor and support the maintenance of a system's interoperable state.
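As a minimal sketch of the tuple-based mapping idea (the system names, element names and tuple schema are invented for illustration, not taken from the chapter), a knowledge base of mapping tuples can be traced against the partners' current models to detect where harmonization is needed after one partner evolves:

```python
# Mappings between partner models stored as tuples in a simple knowledge
# base: (source system, source element, target system, target element, relation).
mappings = {
    ("ERP_A", "custName", "ERP_B", "customer_name", "equivalent"),
    ("ERP_A", "prodCode", "ERP_B", "product_id",    "equivalent"),
}

def broken_mappings(mappings, current_elements):
    """Trace which stored mappings reference elements that no longer exist
    in the partners' current (possibly evolved) models."""
    return {m for m in mappings
            if m[1] not in current_elements.get(m[0], set())
            or m[3] not in current_elements.get(m[2], set())}

# ERP_B renames product_id -> sku: the second mapping needs harmonization.
elements = {"ERP_A": {"custName", "prodCode"},
            "ERP_B": {"customer_name", "sku"}}
for m in sorted(broken_mappings(mappings, elements)):
    print("stale:", m)
```

Monitoring the knowledge base this way is what lets the interoperable state be maintained over time rather than re-established from scratch after each model change.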

Journal ArticleDOI
TL;DR: A non-instance learning-based approach that transforms the ontology mapping problem into a binary classification problem and utilizes machine learning techniques as a solution and demonstrates that the approach can be generalized to different domains without extra training effort.
Abstract: Ontology mapping (OM) seeks to find semantic correspondences between similar elements of different ontologies. OM is critical to achieving semantic interoperability in the World Wide Web. To solve the OM problem, this article proposes a non-instance learning-based approach that transforms the OM problem into a binary classification problem and utilizes machine learning techniques as a solution. As in other machine learning-based approaches, a number of features (i.e. linguistic, structural, and web features) are generated for each mapping candidate. However, in contrast to other learning-based mapping approaches, the features proposed in our approach are generic and do not rely on the existence and sufficiency of instances. Therefore, our approach can be generalized to different domains without extra training effort. To evaluate our approach, two experiments (within-task vs cross-task) are implemented and the SVM (support vector machine) algorithm is applied. Experimental results show that our non-instance learning-based OM approach performs well on most of the OAEI benchmark tests when training and testing on the same mapping task, and that the results vary according to the similarity between the training and testing data when training and testing on different mapping tasks. Copyright © 2010 John Wiley & Sons, Ltd.
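A hedged sketch of the non-instance feature idea follows. The paper trains an SVM over linguistic, structural, and web features; the fixed-threshold classifier below is only a stand-in for the trained model, and all element names are illustrative:

```python
from difflib import SequenceMatcher

def linguistic_feature(label_a, label_b):
    """Generic, instance-free feature: string similarity of element labels."""
    return SequenceMatcher(None, label_a.lower(), label_b.lower()).ratio()

def structural_feature(parents_a, parents_b):
    """Jaccard overlap of the elements' parent-concept sets."""
    if not parents_a or not parents_b:
        return 0.0
    return len(parents_a & parents_b) / len(parents_a | parents_b)

def classify(pair, threshold=0.6):
    """Stand-in for the paper's trained SVM: a fixed threshold on the
    averaged feature vector, purely for illustration."""
    (label_a, parents_a), (label_b, parents_b) = pair
    score = (linguistic_feature(label_a, label_b)
             + structural_feature(parents_a, parents_b)) / 2
    return score >= threshold

cand = (("hasAuthor", {"Document"}), ("has_author", {"Document"}))
print(classify(cand))  # True -> predicted correspondence
```

Because the features depend only on the ontology schemas (labels and structure), never on instance data, the same trained classifier can be reused on a new domain, which is the point the abstract stresses.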

Proceedings ArticleDOI
24 Oct 2011
TL;DR: This approach introduces the concepts of Collaboration Objects, Data Ownership and Data Exchange Feedback Loops and utilizes a file based data exchange by means of AutomationML.
Abstract: Interoperability of engineering tools describes their ability to collaborate with each other across tool borders, company locations and workflow phases. It is therefore considered an important indicator of engineering efficiency — but it is rarely supported by today's heterogeneous industrial software. This paper motivates the need for interoperability, describes the state of the art and its issues, derives requirements and presents a new concept to overcome the mentioned issues. This approach introduces the concepts of Collaboration Objects, Data Ownership and Data Exchange Feedback Loops and utilizes a file-based data exchange by means of AutomationML. Finally, the paper describes the developed workflow concept together with the software functionality required for a collaboration middleware.

Journal ArticleDOI
TL;DR: RICORDO is contributing to the semantic interoperability of DMRs through ontology-based annotation by supporting more effective navigation and re-use of clinical DMRs, as well as sustaining interoperability operations based on the criterion of biological similarity.
Abstract: Background: The practice and research of medicine generates considerable quantities of data and model resources (DMRs). Although in principle biomedical resources are re-usable, in practice few can currently be shared. In particular, the clinical communities in physiology and pharmacology research, as well as medical education, (i.e. PPME communities) are facing considerable operational and technical obstacles in sharing data and models. Findings: We outline the efforts of the PPME communities to achieve automated semantic interoperability for clinical resource documentation in collaboration with the RICORDO project. Current community practices in resource documentation and knowledge management are overviewed. Furthermore, requirements and improvements sought by the PPME communities to current documentation practices are discussed. The RICORDO plan and effort in creating a representational framework and associated open software toolkit for the automated management of PPME metadata resources is also described. Conclusions: RICORDO is providing the PPME community with tools to effect, share and reason over clinical resource annotations. This work is contributing to the semantic interoperability of DMRs through ontology-based annotation by (i) supporting more effective navigation and re-use of clinical DMRs, as well as (ii) sustaining interoperability operations based on the criterion of biological similarity. Operations facilitated by RICORDO will range from automated dataset matching to model merging and managing complex simulation workflows. In effect, RICORDO is contributing to community standards for resource sharing and interoperability.

Dissertation
01 Jan 2011
TL;DR: This work represents a novel approach to integrate world-knowledge into current semantic models and a means to cross the language boundary for a better and more robust semantic relatedness representation, thus opening the door for an improved abstraction of meaning that carries the potential of ultimately imparting understanding of natural language to machines.
Abstract: While pragmatics, through its integration of situational awareness and real-world relevant knowledge, offers a high level of analysis suitable for real interpretation of natural dialogue, semantics, on the other hand, represents a lower yet more tractable and affordable linguistic level of analysis using current technologies. Generally, the understanding of semantic meaning in the literature has revolved around the famous quote “You shall know a word by the company it keeps”. In this thesis we investigate the role of context constituents in decoding the semantic meaning of the engulfing context; specifically we probe the role of salient concepts, defined as content-bearing expressions which afford encyclopedic definitions, as a suitable source of semantic clues for an unambiguous interpretation of context. Furthermore, we integrate this world knowledge in building a new and robust unsupervised semantic model and apply it to entail semantic relatedness between textual pairs, whether they are words, sentences or paragraphs. Moreover, we explore the abstraction of semantics across languages and utilize our findings in building a novel multilingual semantic relatedness model exploiting information acquired from various languages. We demonstrate the effectiveness and superiority of our mono-lingual and multi-lingual models through a comprehensive set of evaluations on specialized synthetic datasets for semantic relatedness as well as real-world applications such as paraphrase detection and short answer grading. Our work represents a novel approach to integrating world knowledge into current semantic models and a means to cross the language boundary for a better and more robust semantic relatedness representation, thus opening the door for an improved abstraction of meaning that carries the potential of ultimately imparting understanding of natural language to machines.

Journal ArticleDOI
01 Jul 2011
TL;DR: This paper illustrates how the proposed mode of interaction helps users quickly find ontologies that satisfy their needs and presents several supportive techniques including a new method of constructing virtual documents of concepts for keyword search, a popularity-based scheme to rank concepts and ontologies, and a way to generate query-relevant structured snippets.
Abstract: Web ontologies provide shared concepts for describing domain entities and thus enable semantic interoperability between applications. To facilitate concept sharing and ontology reusing, we developed Falcons Concept Search, a novel keyword-based ontology search engine. In this paper, we illustrate how the proposed mode of interaction helps users quickly find ontologies that satisfy their needs and present several supportive techniques including a new method of constructing virtual documents of concepts for keyword search, a popularity-based scheme to rank concepts and ontologies, and a way to generate query-relevant structured snippets. We also report the results of a usability evaluation as well as user feedback.
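The popularity-based ranking idea can be sketched as follows; the scoring formula, the term sets and all reference counts are invented for illustration and are not Falcons' actual algorithm or data:

```python
import math

# (concept URI, terms from its "virtual document", reference count)
concepts = [
    ("foaf:Person", {"person", "agent", "name"},   5000),
    ("ex:Person",   {"person", "human"},              3),
    ("foaf:Agent",  {"agent", "organization"},      900),
]

def rank(query_terms, concepts):
    """Score keyword matches against each concept's virtual document,
    weighted by how widely the concept is referenced (its popularity)."""
    scored = []
    for uri, terms, refs in concepts:
        overlap = len(query_terms & terms)
        if overlap:
            scored.append((overlap * math.log1p(refs), uri))
    return [uri for _, uri in sorted(scored, reverse=True)]

print(rank({"person"}, concepts))  # ['foaf:Person', 'ex:Person']
```

The popularity weight is what pushes widely reused vocabularies (here the FOAF-style concept) above obscure ones with the same keyword match, which supports the reuse goal the abstract describes.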

Book ChapterDOI
13 Jun 2011
TL;DR: This chapter examines the issue of interoperability in considerable detail, looking initially at the problem space, and in particular the key barriers to interoperability, and then moving on to the solution space, focusing on research in the middleware and semantic interoperability communities.
Abstract: Distributed systems are becoming more complex in terms of both the level of heterogeneity encountered coupled with a high level of dynamism of such systems. Taken together, this makes it very difficult to achieve the crucial property of interoperability that is enabling two arbitrary systems to work together relying only on their declared service specification. This chapter examines this issue of interoperability in considerable detail, looking initially at the problem space, and in particular the key barriers to interoperability, and then moving on to the solution space, focusing on research in the middleware and semantic interoperability communities. We argue that existing approaches are simply unable to meet the demands of the complex distributed systems of today and that the lack of integration between the work on middleware and semantic interoperability is a clear impediment to progress in this area. We outline a roadmap towards meeting the challenges of interoperability including the need for integration across these two communities, resulting in middleware solutions that are intrinsically based on semantic meaning. We also advocate a dynamic approach to interoperability based on the concept of emergent middleware.

Journal ArticleDOI
TL;DR: Results are reported from two AHRC funded research projects that investigated the use of semantic techniques to link digital archive databases, vocabularies and associated grey literature.
Abstract: Differing terminology and database structures hinder meaningful cross search of excavation datasets. Matching free text grey literature reports with datasets poses yet more challenges. Conventional search techniques are unable to cross search between archaeological datasets and Web-based grey literature. Results are reported from two AHRC funded research projects that investigated the use of semantic techniques to link digital archive databases, vocabularies and associated grey literature. STAR (Semantic Technologies for Archaeological Resources) was a collaboration between the University of Glamorgan, Hypermedia Research Unit and English Heritage (EH). The main outcome is a research Demonstrator (available online), which cross searches over excavation datasets from different database schemas, including Raunds Roman, Raunds Prehistoric, Museum of London, Silchester Roman and Stanwick sampling. The system additionally cross searches over an extract of excavation reports from the OASIS index of grey literature, operated by the Archaeology Data Service (ADS). A conceptual framework provided by the CIDOC Conceptual Reference Model (CRM) integrates the different database structures and the metadata automatically generated from the OASIS reports by natural language processing techniques. The methods employed for extracting semantic RDF representations from the datasets and the information extraction from grey literature are described. The STELLAR project provides freely available tools to reduce the costs of mapping and extracting data to semantic search systems such as the Demonstrator and to linked data representation generally. Detailed use scenarios (and a screen capture video) provide a basis for a discussion of key issues, including cost-benefits, ontology modelling, mapping, terminology control, semantic implementation and information extraction issues.
The scenarios show that semantic interoperability can be achieved by mapping and extracting different datasets and key concepts from OASIS reports to a central RDF based triple store. It is not necessary to expose the full detail of the ontological model; the Demonstrator shows that user interfaces for retrieval (or mapping) systems can be expressed using familiar archaeological concepts. Working with the CRM-EH archaeological extension of the CIDOC CRM ontology allows specific archaeological queries, while permitting interoperability at the more general CRM level, potentially extending to other areas of cultural heritage. The ability to connect published datasets with the hitherto under-utilised grey literature holds potential for meta studies, where aggregate patterns can be compared and hypotheses for future detailed investigation uncovered. Connecting the interpretation with the underlying context data via the semantic model facilitates the revisiting of previous interpretations by third parties, the possibility of juxtaposing parallel interpretations, or exposing the data to new research questions.
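The central cross-search idea, mapping differently structured datasets to one shared CRM-style vocabulary so that a single pattern query spans all sources, can be sketched with a minimal in-memory triple store. The subjects and data below are invented, and the predicates are only loosely CRM-flavoured:

```python
# One shared vocabulary for triples extracted from different sources.
triples = {
    # mapped from a Raunds-style excavation database
    ("ctx:1001", "crm:P2_has_type", "Pit"),
    ("ctx:1001", "crm:P46_contains", "find:77"),
    # extracted by NLP from an OASIS grey-literature report
    ("ctx:2002", "crm:P2_has_type", "Pit"),
}

def query(triples, s=None, p=None, o=None):
    """Match a triple pattern; None is a wildcard (like a SPARQL variable)."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# Cross search: every 'Pit' context, whichever source it came from.
print(sorted(t[0] for t in query(triples, p="crm:P2_has_type", o="Pit")))
# ['ctx:1001', 'ctx:2002']
```

Once both the structured datasets and the grey-literature extractions land in the same triple store, the query does not need to know which source schema each context originally came from, which is exactly the interoperability claim of the scenarios.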

Journal ArticleDOI
TL;DR: It is concluded that envelopment of chemical computational resources as SADI SWS facilitates interdisciplinary research by enabling the definition of computational problems in terms of ontologies and formal logical statements instead of cumbersome and application-specific tasks and workflows.
Abstract: The diversity and the largely independent nature of chemical research efforts over the past half century are, most likely, the major contributors to the current poor state of chemical computational resource and database interoperability. While open software for chemical format interconversion and database entry cross-linking have partially addressed database interoperability, computational resource integration is hindered by the great diversity of software interfaces, languages, access methods, and platforms, among others. This has, in turn, translated into limited reproducibility of computational experiments and the need for application-specific computational workflow construction and semi-automated enactment by human experts, especially where emerging interdisciplinary fields, such as systems chemistry, are pursued. Fortunately, the advent of the Semantic Web, and the very recent introduction of RESTful Semantic Web Services (SWS), may present an opportunity to integrate all of the existing computational and database resources in chemistry into a machine-understandable, unified system that draws on the entirety of the Semantic Web. We have created a prototype framework of Semantic Automated Discovery and Integration (SADI) SWS that exposes the QSAR descriptor functionality of the Chemistry Development Kit. Since each of these services has formal ontology-defined input and output classes, and each service consumes and produces RDF graphs, clients can automatically reason about the services and the available reference information necessary to complete a given overall computational task specified through a simple SPARQL query. We demonstrate this capability by carrying out QSAR analysis backed by a simple formal ontology to determine whether a given molecule is drug-like. Further, we discuss parameter-based control over the execution of SADI SWS.
Finally, we demonstrate the value of computational resource envelopment as SADI services through service reuse and ease of integration of computational functionality into formal ontologies. The work we present here may trigger a major paradigm shift in the distribution of computational resources in chemistry. We conclude that envelopment of chemical computational resources as SADI SWS facilitates interdisciplinary research by enabling the definition of computational problems in terms of ontologies and formal logical statements instead of cumbersome and application-specific tasks and workflows.
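As an illustrative stand-in for the drug-likeness determination described above (the paper's services compute QSAR descriptors via the Chemistry Development Kit and reason over a formal ontology; the Lipinski-style "rule of five" thresholds below are a well-known rule of thumb, not the paper's ontology, and the descriptor values are assumed to be precomputed):

```python
def is_drug_like(mol_weight, logp, h_donors, h_acceptors):
    """Lipinski-style check: drug-like if at most one rule is violated."""
    violations = sum([
        mol_weight > 500,   # molecular weight over 500 Da
        logp > 5,           # octanol-water partition coefficient over 5
        h_donors > 5,       # more than 5 hydrogen-bond donors
        h_acceptors > 10,   # more than 10 hydrogen-bond acceptors
    ])
    return violations <= 1

# Aspirin-like descriptor values (approximate, for illustration only).
print(is_drug_like(mol_weight=180.2, logp=1.2, h_donors=1, h_acceptors=4))
# True
```

In the SADI setting such a predicate would not be hand-coded in the client: the descriptor services and the classification rule would be discovered and chained automatically from the ontology-defined input and output classes.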