
Showing papers presented at "Semantic Web Applications and Tools for Life Sciences in 2013"


Proceedings Article
01 Jan 2013
TL;DR: A novel approach for the logical representation of notions in organic chemistry, as well as for reasoning about generic chemical reactions, is presented, showing that an expressive logical language can be useful for semantic modelling.
Abstract: We explore the advantages that can be gained from using expressive logic languages for semantic modelling of chemical reactions. First, we present a novel approach for the logical representation of notions in organic chemistry, as well as for reasoning about generic chemical reactions. Subsequently, using this new semantic modelling of reactions, we explore which reasoning problems can be solved. We focus on solving organic chemistry synthesis problems, where the goal is to synthesize a target molecule from a set of starting molecules. We argue that this problem can be reduced to a planning problem in Artificial Intelligence. We conduct an experimental study, including an empirical assessment of a PROLOG planner and two state-of-the-art planners, and investigate whether they are capable of solving a set of instances of the organic synthesis problem. We report numerical data from our study and present a comparative analysis of the planners. The novelty of our work lies in using state-of-the-art planners to solve the organic synthesis problem; its significance lies in the methodology we developed and in showing that an expressive logical language can be useful for semantic modelling.
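The reduction of synthesis to planning can be sketched as a search over sets of available molecules. The sketch below is a minimal breadth-first planner, not the paper's actual PROLOG or PDDL encoding; the reaction names are invented, and reactants are treated as remaining available after a reaction fires (a monotone simplification — real synthesis consumes material).

```python
from collections import deque

def plan_synthesis(start, target, rules):
    """Breadth-first search for a sequence of reactions producing `target`.

    rules: dict mapping a reaction name to (reactants, products), both sets
    of molecule names. Returns the list of reaction names, or None.
    Monotone model: reactants stay available after a reaction fires.
    """
    init = frozenset(start)
    queue = deque([(init, [])])
    seen = {init}
    while queue:
        state, plan = queue.popleft()
        if target in state:
            return plan
        for name, (reactants, products) in rules.items():
            if reactants <= state:          # all reactants on hand
                nxt = frozenset(state | products)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, plan + [name]))
    return None

# Hypothetical two-step synthesis: A + B -> C, then C + D -> E.
rules = {
    "step1": ({"A", "B"}, {"C"}),
    "step2": ({"C", "D"}, {"E"}),
}
print(plan_synthesis({"A", "B", "D"}, "E", rules))  # ['step1', 'step2']
```

Because the search is breadth-first, the first plan found is also a shortest one, which mirrors the planners' preference for short synthesis routes.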

7 citations


Proceedings Article
01 Jan 2013
TL;DR: A social- and context-aware multi-sensor platform is presented, which uses an ontology to integrate information from fall detection systems and sensors in the homes of the elderly.
Abstract: A social- and context-aware multi-sensor platform is presented, which uses an ontology to integrate information from fall detection systems and sensors in the homes of the elderly. This integrated contextual information makes it possible to automatically and continuously assess the fall risk of the elderly, to detect falls more accurately and identify false alarms, and to automatically notify the appropriate caregiver.
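The fusion idea — raw detector output corrected by home-sensor context before a caregiver is notified — can be illustrated with a small rule function. The sensor names, weights, and thresholds below are invented for illustration; the platform itself encodes this knowledge in an ontology and a reasoner, not in hard-coded arithmetic.

```python
def assess_fall_alarm(detector_confidence, context):
    """Fuse a raw fall-detector confidence with contextual sensor data.

    `context` keys (hypothetical): 'in_bed' from a bed pressure sensor,
    'fall_risk' as a continuously updated risk score in [0, 1].
    Returns (is_alarm, caregiver), where caregiver is who to notify.
    """
    score = detector_confidence
    if context.get("in_bed"):                 # bed sensor contradicts a fall
        score -= 0.4
    high_risk = context.get("fall_risk", 0.0) > 0.7
    if high_risk:                             # high-risk resident: be sensitive
        score += 0.2
    if score >= 0.6:
        return True, "nurse" if high_risk else "family"
    return False, None

print(assess_fall_alarm(0.8, {"in_bed": True, "fall_risk": 0.3}))   # likely false alarm
print(assess_fall_alarm(0.5, {"in_bed": False, "fall_risk": 0.9}))  # escalate to nurse
```

The first call shows the false-alarm case the abstract mentions: a strong detector signal is suppressed because the contextual bed sensor contradicts it.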

6 citations


Proceedings Article
01 Jan 2013
TL;DR: The rules used within the Open PHACTS (http://www.openphacts.org) Identity Management Service to compute co-reference chains across multiple datasets are presented, highlighting the challenges of automatically computing co-reference and the need to capture the context of the equivalence.
Abstract: This paper presents the rules used within the Open PHACTS (http://www.openphacts.org) Identity Management Service to compute co-reference chains across multiple datasets. The web of (linked) data has encouraged a proliferation of identifiers for the concepts captured in datasets, with each dataset using its own identifiers. A key data integration challenge is linking the co-referent identifiers, i.e., identifying and linking the equivalent concept in every dataset. Exacerbating this challenge, the datasets model the data differently, so when is one representation truly the same as another? Finally, different users have their own task- and domain-specific notions of equivalence, driven by their operational knowledge. Consumers of the data need to be able to choose the notion of operational equivalence to be applied in the context of their application. We highlight the challenges of automatically computing co-reference and the need to capture the context of the equivalence. This context is then used to control the co-reference computation. Ultimately, the context will enable data consumers to decide which co-references to include in their applications.
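Context-controlled co-reference chains can be sketched as a union-find over typed equivalence links, where only links whose type the consumer accepts contribute to the chains. The link tags and identifiers below are illustrative, not the actual Open PHACTS rule vocabulary.

```python
class CorefChains:
    """Co-reference chains over typed equivalence links (union-find).

    Each link carries a tag describing the kind of equivalence; a consumer
    picks which tags count as "equivalent" for their application, and chains
    are computed only over the accepted links.
    """
    def __init__(self, links, accepted_tags):
        self.parent = {}
        for a, b, tag in links:
            self.parent.setdefault(a, a)
            self.parent.setdefault(b, b)
            if tag in accepted_tags:
                self.parent[self.find(a)] = self.find(b)  # union

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def chain(self, x):
        root = self.find(x)
        return {y for y in self.parent if self.find(y) == root}

# Hypothetical links between drug identifiers in three datasets.
links = [
    ("chembl:25", "drugbank:DB00945", "exact_match"),
    ("drugbank:DB00945", "cs:2157", "close_match"),
]
strict = CorefChains(links, {"exact_match"})                 # a cautious lens
loose = CorefChains(links, {"exact_match", "close_match"})   # a permissive lens
print(len(strict.chain("chembl:25")), len(loose.chain("chembl:25")))  # 2 3
```

The same link set yields different chains under different lenses, which is exactly the "operational equivalence" choice the abstract describes.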

4 citations


Proceedings Article
01 Jan 2013
TL;DR: This study developed a template generation and visualization system based on an open-source Resource Description Framework store backend, a SmartGWT-based web user interface, and a “mind map” based tool for the visualization of generated domain templates.
Abstract: The Biomedical Research Integrated Domain Group (BRIDG) model is a formal domain analysis model for protocol-driven biomedical research, and serves as a semantic foundation for application and message development in standards developing organizations (SDOs). The increasing sophistication and complexity of the BRIDG model requires new approaches to the management and utilization of the underlying semantics to harmonize domain-specific standards. The objective of this study was to develop and evaluate a semantic web-based approach that integrates the BRIDG model with ISO 21090 data types to generate domain templates supporting clinical study metadata standards development. We developed a template generation and visualization system based on an open-source Resource Description Framework (RDF) store backend, a SmartGWT-based web user interface, and a “mind map” based tool for the visualization of generated domain templates. We also developed a RESTful web service providing access to the generated domain templates in a Clinical Information Modeling Initiative (CIMI)-compliant format. A preliminary usability study was performed to evaluate the system in terms of ease of use and its ability to meet the requirements of a selected use case.
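The core generation step — expanding each typed attribute of a domain class into the components of its data type — can be sketched as follows. The class, attribute, and type-component names are simplified stand-ins, not the real BRIDG model or the full ISO 21090 specification.

```python
# Hypothetical fragments of ISO 21090-style data types and a BRIDG-style
# class; real definitions are far richer than these stand-ins.
ISO_21090 = {
    "CD":     {"code": "string", "codeSystem": "string"},   # coded value
    "IVL_TS": {"low": "timestamp", "high": "timestamp"},    # time interval
}
BRIDG_CLASS = {
    "name": "StudySubject",
    "attributes": [("status", "CD"), ("participationPeriod", "IVL_TS")],
}

def generate_template(bridg_class, type_system):
    """Expand each typed attribute into its data-type components,
    producing a flat domain template ready for serialization."""
    return {
        "template": bridg_class["name"],
        "fields": {
            f"{attr}.{comp}": comp_type
            for attr, dtype in bridg_class["attributes"]
            for comp, comp_type in type_system[dtype].items()
        },
    }

template = generate_template(BRIDG_CLASS, ISO_21090)
print(sorted(template["fields"]))
```

A RESTful service like the one described would then serialize such a template (e.g., as JSON) for each requested domain class.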

3 citations


Proceedings Article
01 Jan 2013
TL;DR: In this paper, the Tawny-OWL library is used to generate large numbers of classes from much simpler data structures, which is highly beneficial within biomedical ontology engineering.
Abstract: Ontologies can be expensive and time-consuming to develop and maintain, especially when they are large and/or expressive. Some ontologies are, however, relatively repetitive, reusing design patterns; building these with both generic and bespoke patterns should reduce duplication and increase regularity, which in turn should reduce the cost of development. Here we report on the use of patterns in two biomedical ontologies: first, a novel ontology for karyotypes built from the ground up using a pattern-based approach; and second, our initial refactoring of the SIO ontology to make explicit use of patterns at development time. To enable this, we use the Tawny-OWL library, which enables fully programmatic development of ontologies. We show how this approach can generate large numbers of classes from much simpler data structures, which is highly beneficial within biomedical ontology engineering.
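Tawny-OWL itself is a Clojure library, but the central idea — classes generated from plain data rather than written out by hand — is language-neutral. A minimal Python sketch, emitting subclass axioms as triples rather than real OWL:

```python
def subclass_pattern(parent, names):
    """Generate one (child, rdfs:subClassOf, parent) axiom per name.

    A toy stand-in for a Tawny-OWL pattern: the class hierarchy is
    derived from a plain list instead of being hand-authored.
    """
    return [(f"{parent}{n}", "rdfs:subClassOf", parent) for n in names]

# Karyotype-style example: one class per human chromosome (1-22, X, Y).
chromosomes = [str(n) for n in range(1, 23)] + ["X", "Y"]
axioms = subclass_pattern("Chromosome", chromosomes)
print(len(axioms))   # 24
print(axioms[0])     # ('Chromosome1', 'rdfs:subClassOf', 'Chromosome')
```

Changing the source list regenerates the whole class family consistently, which is the regularity and maintenance benefit the abstract argues for.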

2 citations


Proceedings Article
01 Jan 2013
TL;DR: The goal of this analysis was to show how the CNTRO ontology and its associated Timeline Library can be used to examine recommendations for the length of drug administration; in this use case, the results support guidance for longer antiplatelet therapy.
Abstract: In this paper, we show how we have applied the Clinical Narrative Temporal Relation Ontology (CNTRO) and its associated temporal reasoning system (the CNTRO Timeline Library) to automatically identify, order, and calculate the duration of temporal events within adverse event report narratives. The objective of this research is to evaluate the feasibility of the CNTRO Timeline Library using a real clinical use case (late stent thrombosis adverse events). Narratives from late stent thrombosis adverse events documented in the Food and Drug Administration's (FDA) Manufacturer and User Facility Device Experience (MAUDE) database were used as a test case. A total of 238 annotated narratives were evaluated using the CNTRO Timeline Library, which correctly ordered events within the narratives with 95.38% accuracy. The duration function of the CNTRO Timeline Library was also evaluated and found to have 80% accuracy in determining the duration of an event across 41 narratives, and 76.6% accuracy in determining the duration between two given events across 77 narratives. This paper includes an example of how the durations calculated by the CNTRO Timeline Library can be used to examine therapeutic guidelines. Complaint narratives were separated into two groups based on a long (greater than 6 months) or short (6 months or less) duration of antiplatelet therapy administration. The duration of antiplatelet administration was then compared to the duration between stent implantation and the occurrence of late stent thrombosis. The goal of this analysis was to show how the CNTRO ontology and its associated Timeline Library could be used to examine recommendations for the length of drug administration. In this use case, the results support guidance for the use of longer antiplatelet therapy. This example validates the CNTRO system's ability to confirm known temporal trends.
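Once events have been anchored to dates, the ordering and duration computations themselves are straightforward; the hard part CNTRO addresses is extracting the temporal anchors from free-text narratives, which this sketch does not attempt. The event names and dates below are invented for illustration.

```python
from datetime import date

# Hypothetical event timestamps extracted from one adverse event narrative.
events = {
    "stent_implantation":    date(2010, 1, 15),
    "antiplatelet_stop":     date(2010, 7, 20),
    "late_stent_thrombosis": date(2011, 3, 2),
}

def timeline(events):
    """Order event names chronologically."""
    return sorted(events, key=events.get)

def duration_days(events, a, b):
    """Duration between two named events, in days."""
    return abs((events[b] - events[a]).days)

print(timeline(events))
print(duration_days(events, "stent_implantation", "antiplatelet_stop"))  # 186
```

Comparing such durations across many narratives — e.g., antiplatelet duration versus time from implantation to thrombosis — is the group analysis the abstract describes.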

2 citations


Proceedings Article
01 Dec 2013
TL;DR: The Open PHACTS Explorer is a web application that supports drug discovery via the Open PHACTS API without requiring knowledge of SPARQL or the RDF data being searched.
Abstract: The Open PHACTS Explorer is a web application that supports drug discovery via the Open PHACTS API without requiring knowledge of SPARQL or the RDF data being searched. It provides a UI layer on top of the Open PHACTS linked data cache, and also provides a JavaScript library to facilitate easy access to the Open PHACTS API.

1 citation


Proceedings Article
01 Jan 2013
TL;DR: Latent Semantic Manifold (LSM) is a novel framework: a mixture model based on concepts from topology and probability that can find the latent semantics in web data and represent them in homogeneous groups.
Abstract: Search engines have become an indispensable tool for obtaining relevant information on the Web. A search engine often generates a large number of results, including several irrelevant items that obscure comprehension of the result set. Search engines therefore need to be enhanced to discover the latent semantics in high-dimensional web data. This paper presents a novel framework, together with its implementation and evaluation. To discover the latent semantics in high-dimensional web data, we propose a framework named Latent Semantic Manifold (LSM), a mixture model based on concepts from topology and probability. The framework can find the latent semantics in web data and represent them in homogeneous groups. In our experimental evaluation, the LSM framework outperformed comparable frameworks. In addition, we used the framework to develop a tool, which was deployed for two years at two sites in Taiwan: a library and a biomedical engineering laboratory. The tool assisted researchers in performing semantic searches of the PubMed database. The evaluation and deployment of the LSM framework suggest that it could be used to enhance the functionality of currently available search engines by discovering latent semantics in high-dimensional web data.
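The abstract does not specify LSM's internals, so no faithful implementation can be given here; as a minimal illustration of the end goal — collapsing a flat result list into homogeneous groups by a similarity measure — consider this greedy token-overlap grouping (the threshold and example titles are arbitrary):

```python
def jaccard(a, b):
    """Jaccard similarity of two token sets."""
    return len(a & b) / len(a | b)

def group_results(results, threshold=0.25):
    """Greedily group search-result titles by token overlap.

    Not the LSM algorithm; just a sketch of representing results
    in homogeneous groups rather than one long ranked list.
    """
    groups = []  # list of (seed_tokens, member_titles)
    for title in results:
        tokens = set(title.lower().split())
        for seed, members in groups:
            if jaccard(tokens, seed) >= threshold:
                members.append(title)
                break
        else:
            groups.append((tokens, [title]))
    return [members for _, members in groups]

hits = [
    "amyloid beta in alzheimer disease",
    "tau protein in alzheimer disease",
    "semantic search engines",
]
print(group_results(hits))  # two Alzheimer papers grouped, one stray result
```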

Proceedings Article
01 Jan 2013
TL;DR: This work reports an application of semantic data integration with faceted search for interactive analysis of gene expression data in Alzheimer's disease; the framework can be easily extended to other biomedical domains.
Abstract: In the course of gene expression analysis, data must be interpreted by referencing knowledge bases covering genetics, pathways, diseases, and drugs. However, because these external resources are often stored in distributed databases in various formats, it is hard for biomedical scientists to use them in combination. Semantic Web technologies are well suited to integrating such heterogeneous datasets using the Resource Description Framework (RDF) and providing a faceted search interface. In this work, we report an application of semantic data integration with faceted search for interactive analysis of gene expression data in Alzheimer's disease. The framework can be easily extended to other biomedical domains.
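Faceted search over integrated RDF data reduces to two operations: listing the distinct values of a facet (predicate) and filtering subjects by a chosen value. A stdlib-only sketch over toy triples — the predicate and resource names are illustrative, not the paper's actual vocabulary:

```python
# Toy RDF-style triples linking genes to pathways and diseases.
triples = [
    ("gene:APP",  "inPathway",      "pathway:amyloid_processing"),
    ("gene:APP",  "associatedWith", "disease:alzheimer"),
    ("gene:MAPT", "associatedWith", "disease:alzheimer"),
    ("gene:TP53", "inPathway",      "pathway:apoptosis"),
]

def facet_values(triples, predicate):
    """Distinct values available for one facet (predicate)."""
    return sorted({o for _, p, o in triples if p == predicate})

def filter_by_facet(triples, predicate, value):
    """Subjects matching a selected facet value."""
    return sorted({s for s, p, o in triples if p == predicate and o == value})

print(facet_values(triples, "associatedWith"))
print(filter_by_facet(triples, "associatedWith", "disease:alzheimer"))
```

In a real deployment, both operations would be SPARQL queries against the integrated RDF store, with the facet values driving the interactive interface.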