
Papers by Nina Jeliazkova published in 2012


Journal ArticleDOI
TL;DR: This article reviews ontology developments in the field of predictive toxicology, presenting a set of perspectives that show how ontologies are being used in toxicology initiatives and applications.
Abstract: The field of predictive toxicology requires the development of open, public, computable, standardized toxicology vocabularies and ontologies to support the applications required by in silico, in vitro, and in vivo toxicology methods and related analysis and reporting activities. In this article we review ontology developments based on a set of perspectives showing how ontologies are being used in predictive toxicology initiatives and applications. Perspectives on resources and initiatives reviewed include OpenTox, eTOX, Pistoia Alliance, ToxWiz, Virtual Liver, EU-ADR, BEL, ToxML, and Bioclipse. We also review existing ontology developments in neighboring fields that can contribute to establishing an ontological framework for predictive toxicology. A significant set of resources is already available to provide a foundation for an ontological framework for 21st-century, mechanistically based toxicology research. Ontologies such as ToxWiz provide a basis for application to toxicology investigations, whereas other ontologies under development in the biological, chemical, and biomedical communities could be incorporated in an extended future framework. OpenTox has provided a semantic web framework for the implementation of such ontologies into software applications and linked data resources. Bioclipse developers have shown the benefit of interoperability obtained through ontology by being able to link their workbench application with remote OpenTox web services. Although these developments are promising, increased international coordination of efforts is greatly needed to develop a more unified, standardized, and open toxicology ontology framework.

31 citations


Journal ArticleDOI
TL;DR: The OpenTox Framework, as described in this paper, provides unified access to toxicity data, predictive models and validation procedures through a common information model based on the OpenTox ontologies, which describe predictive algorithms, models and toxicity data.
Abstract: The OpenTox Framework, developed by the partners in the OpenTox project ( http://www.opentox.org ), aims at providing unified access to toxicity data, predictive models and validation procedures. Interoperability of resources is achieved using a common information model, based on the OpenTox ontologies, describing predictive algorithms, models and toxicity data. As toxicological data may come from different, heterogeneous sources, a deployed ontology, unifying the terminology and the resources, is critical for the rational and reliable organization of the data and its automatic processing. The following related ontologies have been developed for OpenTox: a) Toxicological ontology – listing the toxicological endpoints; b) Organs system and Effects ontology – addressing organs, targets/examinations and effects observed in in vivo studies; c) ToxML ontology – representing a semi-automatic conversion of the ToxML schema; d) OpenTox ontology – representing the OpenTox framework components: chemical compounds, datasets, types of algorithms, models and validation web services; e) ToxLink – ToxCast assays ontology; and f) OpenToxipedia – a community knowledge resource on toxicology terminology. OpenTox components are made available through standardized REST web services, where every compound, data set, and predictive method has a unique resolvable address (URI), used to retrieve its Resource Description Framework (RDF) representation or to initiate the associated calculations and generate new RDF-based resources. The services support the integration of toxicity and chemical data from various sources, the generation and validation of computer models for toxic effects, and the seamless integration of new algorithms and scientifically sound validation routines, and they provide a flexible framework that allows building an arbitrary number of applications tailored to solving different problems by end users (e.g. toxicologists). The OpenTox toxicological ontology projects may be accessed via the OpenTox ontology development page at http://www.opentox.org/dev/ontology ; the OpenTox ontology is available as OWL at http://opentox.org/api/11/opentox.owl ; and the ToxML–OWL conversion utility is an open source resource available at http://ambit.svn.sourceforge.net/viewvc/ambit/branches/toxml-utils/
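
A minimal sketch of the access pattern described above (every resource has a resolvable URI whose RDF representation can be retrieved), assuming the services honor standard HTTP content negotiation for application/rdf+xml; the compound URI below is hypothetical, while the ontology URL is the one given in the abstract:

```python
# Sketch: retrieve the RDF representation of an OpenTox-style resource via its URI.
# Assumes the service answers content negotiation for RDF/XML; not an official client.
import requests
from rdflib import Graph

ONTOLOGY_URL = "http://opentox.org/api/11/opentox.owl"   # OpenTox ontology (OWL), from the text
COMPOUND_URI = "http://example.org/opentox/compound/42"  # hypothetical resource URI

def fetch_rdf(uri: str) -> Graph:
    """Request the RDF/XML representation of a resource and parse it into a graph."""
    response = requests.get(uri, headers={"Accept": "application/rdf+xml"}, timeout=30)
    response.raise_for_status()
    graph = Graph()
    graph.parse(data=response.text, format="xml")
    return graph

if __name__ == "__main__":
    g = fetch_rdf(ONTOLOGY_URL)
    print(f"Parsed {len(g)} RDF triples from {ONTOLOGY_URL}")
```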

26 citations


Journal ArticleDOI
TL;DR: This communication sets out a roadmap for the development of an integrated toxicology ontology, harnessing existing resources where applicable, and describes the stakeholders' requirements analysis from the academic and industry perspectives, timelines, and expected benefits of this initiative.
Abstract: Foreign substances can have a dramatic and unpredictable adverse effect on human health. In the development of new therapeutic agents, it is essential that the potential adverse effects of all candidates be identified as early as possible. The field of predictive toxicology strives to profile the potential for adverse effects of novel chemical substances before they occur, both with traditional in vivo experimental approaches and increasingly through the development of in vitro and computational methods which can supplement and reduce the need for animal testing. To be maximally effective, the field needs access to the largest possible knowledge base of previous toxicology findings, and such results need to be made available in a form that is interoperable, comparable, and compatible with standard toolkits. This necessitates the development of open, public, computable, and standardized toxicology vocabularies and ontologies to support the applications required by in silico, in vitro, and in vivo toxicology methods and related analysis and reporting activities. Such ontology development will support data management, model building, integrated analysis, validation and reporting, including regulatory reporting and alternative testing submission requirements as required by guidelines such as the REACH legislation, leading to new scientific advances in mechanistically based predictive toxicology. Numerous existing ontology and standards initiatives can contribute to the creation of a toxicology ontology supporting the needs of predictive toxicology and risk assessment. Additionally, new ontologies are needed to satisfy practical use cases and scenarios where gaps currently exist. Developing and integrating these resources will require a well-coordinated and sustained effort across numerous stakeholders engaged in a public-private partnership. In this communication, we set out a roadmap for the development of an integrated toxicology ontology, harnessing existing resources where applicable. We describe the stakeholders’ requirements analysis from the academic and industry perspectives, timelines, and expected benefits of this initiative, with a view to engagement with the wider community.

17 citations


Journal ArticleDOI
TL;DR: The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building, offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities.
Abstract: Introduction: The development and use of web tools in chemistry already has more than 15 years of history. Powered by advances in Internet technologies, the current generation of web systems is starting to expand into areas traditional for desktop applications. The web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web is compelling, despite the challenges. Areas covered: The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide a comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling ...

9 citations


Journal ArticleDOI
TL;DR: This paper presents a new and efficient method for identifying activity cliffs and visualizing activity landscapes; it generates a list of individual compounds ranked according to the likelihood of their involvement in the formation of activity cliffs, going beyond characterizing cliffs by structure pairs only.
Abstract: The Structure-Activity Relationships (SAR) landscape and activity cliffs concepts have their origins in medicinal chemistry and receptor-ligand interaction modelling. While intuitive, the definition of an activity cliff as a "pair of structurally similar compounds with large differences in potency" is commonly recognized as ambiguous. This paper proposes a new and efficient method for identifying activity cliffs and visualizing activity landscapes. The activity cliff definition could be improved to reflect not the cliff steepness alone, but also the rate of change of the steepness. The method requires explicitly setting similarity and activity difference thresholds, but provides means to explore multiple thresholds and to visualize in a single map how the thresholds affect the activity cliff identification. The identification of the activity cliffs is addressed by reformulating the problem as a statistical one, introducing a probabilistic measure: the likelihood of a compound having a large activity difference compared to other compounds while being highly similar to them. The likelihood is effectively a quantification of a SAS Map with defined thresholds. Calculating the likelihood relies on four counts only and does not require storing the pairwise matrix. This is a significant advantage, especially when processing large datasets. The method generates a list of individual compounds, ranked according to the likelihood of their involvement in the formation of activity cliffs, and goes beyond characterizing cliffs by structure pairs only. The visualization is implemented by considering the activity plane fixed and analysing the irregularities of the similarity itself. It provides a convenient analogy to a topographic map and may help identify the most appropriate similarity representation for each specific SAR space. The proposed method has been applied to several datasets representing different biological activities. Finally, the method is implemented as part of the existing open source Ambit package and can be accessed via an OpenTox API compliant web service and via an interactive application running within a modern, JavaScript-enabled web browser. Combined with the functionalities already offered by the OpenTox framework, like data sharing and remote calculations, it could be a useful tool for exploring chemical landscapes online.
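
A minimal sketch of one plausible reading of the four-count likelihood idea above, assuming the counts come from splitting each compound's pairs by a similarity threshold and an activity-difference threshold (a 2x2 table per compound) and estimating the probability of a large activity difference given high similarity. The exact statistic in the paper may differ, and unlike the paper's implementation this sketch keeps a full pairwise matrix for clarity on a small dataset:

```python
# Illustrative sketch only: per-compound cliff likelihood from threshold-based counts.
# The thresholds and the conditional-probability estimate are assumptions, not the
# paper's exact formulation; the paper's method avoids storing the pairwise matrix.
import numpy as np

def cliff_likelihood(similarity: np.ndarray, activity: np.ndarray,
                     sim_threshold: float = 0.8, act_threshold: float = 1.0) -> np.ndarray:
    """similarity: (n, n) pairwise similarity matrix; activity: (n,) activity values.
    Returns an (n,) array of per-compound activity-cliff likelihood estimates."""
    n = len(activity)
    act_diff = np.abs(activity[:, None] - activity[None, :])  # pairwise |delta activity|
    similar = similarity >= sim_threshold
    large_diff = act_diff >= act_threshold
    np.fill_diagonal(similar, False)                           # ignore self-pairs

    likelihood = np.zeros(n)
    for i in range(n):
        a = np.sum(similar[i] & large_diff[i])   # similar neighbors with large activity difference
        b = np.sum(similar[i] & ~large_diff[i])  # similar neighbors with small activity difference
        # The two "dissimilar" counts complete the 2x2 table but are not needed
        # for this particular conditional estimate.
        likelihood[i] = a / (a + b) if (a + b) > 0 else 0.0
    return likelihood
```

Ranking compounds by this score and sweeping the two thresholds gives the kind of per-compound, threshold-aware view of a SAS Map that the abstract describes.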

1 citation


Book ChapterDOI
01 Jan 2012
TL;DR: In analogy to the decision support already used in the pharmaceutical industry for designing new drug leads, this approach is applied in two case studies in malaria research, using a combination of local and remote predictive models.
Abstract: Computational predictive toxicology draws knowledge from many independent sources, providing a rich support tool to assess a wide variety of toxicological properties; a key example is its use to complement alternative testing methods. The integration of Bioclipse and OpenTox permits toxicity prediction based on the analysis of chemical structures, and visualization of the substructure contributions to the toxicity prediction. In analogy to the decision support already in use in the pharmaceutical industry for designing new drug leads, we apply this approach in two case studies in malaria research, using a combination of local and remote predictive models. This way, we find drug leads without predicted toxicity.
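
A minimal sketch of combining a local structural check with a remote prediction, in the spirit of the case studies above. All URIs are hypothetical; the pattern of POSTing a dataset_uri to a model resource and reading back a result URI follows the OpenTox REST style described in the second paper of this list, but real services and parameter names may differ:

```python
# Sketch: combine a local alert with a remote OpenTox-style model prediction.
# Hypothetical endpoints; not the Bioclipse implementation itself.
import requests

MODEL_URI = "http://example.org/opentox/model/toxicity-1"    # hypothetical remote model
DATASET_URI = "http://example.org/opentox/dataset/leads-42"  # hypothetical dataset of candidate leads

def predict_remote(model_uri: str, dataset_uri: str) -> str:
    """Ask a remote model to predict a dataset; returns the URI of the result (or task)."""
    response = requests.post(model_uri,
                             data={"dataset_uri": dataset_uri},
                             headers={"Accept": "text/uri-list"},
                             timeout=60)
    response.raise_for_status()
    return response.text.strip()

def keep_lead(remote_toxic: bool, local_alert: bool) -> bool:
    """Keep a lead only if neither the remote prediction nor a local structural
    alert (e.g. a substructure filter run in the workbench) flags it as toxic."""
    return not (remote_toxic or local_alert)

if __name__ == "__main__":
    result_uri = predict_remote(MODEL_URI, DATASET_URI)
    print("Prediction results available at:", result_uri)
```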