Author

Stuart Owen

Bio: Stuart Owen is an academic researcher from the University of Manchester. The author has contributed to research on the topics of Ontology (information science) and Metadata, has an h-index of 14, and has co-authored 30 publications receiving 1,987 citations.

Papers
Journal Article
TL;DR: An update to the Taverna tool suite is provided, highlighting new features and developments in the workbench and the Taverna Server.
Abstract: The Taverna workflow tool suite (http://www.taverna.org.uk) is designed to combine distributed Web Services and/or local tools into complex analysis pipelines. These pipelines can be executed on local desktop machines or through larger infrastructure (such as supercomputers, Grids or cloud environments), using the Taverna Server. In bioinformatics, Taverna workflows are typically used in the areas of high-throughput omics analyses (for example, proteomics or transcriptomics), or for evidence gathering methods involving text mining or data mining. Through Taverna, scientists have access to several thousand different tools and resources that are freely available from a large range of life science institutions. Once constructed, the workflows are reusable, executable bioinformatics protocols that can be shared, reused and repurposed. A repository of public workflows is available at http://www.myexperiment.org. This article provides an update to the Taverna tool suite, highlighting new features and developments in the workbench and the Taverna Server.
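
As an illustration of the server-side execution model the abstract describes (not part of the original article), the sketch below submits a workflow definition to a Taverna Server instance over HTTP. The server URL, credentials, and workflow file are placeholders, and the /rest/runs endpoint path is an assumption based on the Taverna Server 2 REST interface; check both against your deployment's documentation.

```python
# Hypothetical sketch: submitting a workflow to a Taverna Server REST
# interface. The base URL, credentials, and workflow file are placeholders;
# the /rest/runs resource is an assumption based on the Taverna Server 2
# REST API and should be verified against your deployment.
import requests

SERVER = "https://example.org/taverna-server"  # placeholder deployment URL

# Read a workflow definition (t2flow XML) built in the Taverna Workbench.
with open("pipeline.t2flow", "rb") as f:
    workflow = f.read()

# Create a new run by POSTing the workflow to the runs collection.
resp = requests.post(
    f"{SERVER}/rest/runs",
    data=workflow,
    headers={"Content-Type": "application/vnd.taverna.t2flow+xml"},
    auth=("user", "password"),  # placeholder credentials
)
resp.raise_for_status()

# The server is assumed to answer with the URL of the new run resource.
run_url = resp.headers["Location"]
print("Run created at:", run_url)
```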

724 citations

Journal Article
01 Feb 2013
TL;DR: This paper makes the case for a scientific data publication model on top of linked data and introduces the notion of Research Objects as first class citizens for sharing and publishing.
Abstract: Scientific data represents a significant portion of the linked open data cloud and scientists stand to benefit from the data fusion capability this will afford. Publishing linked data into the cloud, however, does not ensure the required reusability. Publishing has requirements of provenance, quality, credit, attribution and methods to provide the reproducibility that enables validation of results. In this paper we make the case for a scientific data publication model on top of linked data and introduce the notion of Research Objects as first class citizens for sharing and publishing. Highlights: we identify and characterise different aspects of reuse and reproducibility; we examine requirements for such reuse; and we propose a scientific data publication model that layers on top of linked data publishing.

368 citations

30 Jun 2010
TL;DR: Describes how the recently overhauled technical architecture of Taverna addresses issues of efficiency, scalability, and extensibility, and presents performance results based on a collection of synthetic workflows, as well as a concrete case study involving a production workflow in the area of cancer research.
Abstract: The Taverna workflow management system is an open source project with a history of widespread adoption within multiple experimental science communities, and a long-term ambition of effectively supporting the evolving need of those communities for complex, data-intensive, service-based experimental pipelines. This short paper describes how the recently overhauled technical architecture of Taverna addresses issues of efficiency, scalability, and extensibility, and presents performance results based on a collection of synthetic workflows, as well as a concrete case study involving a production workflow in the area of cancer research.

168 citations

Journal Article
TL;DR: The results suggest that genetic variations in several key MTX pathway genes may influence response to MTX in RA patients and could contribute towards a better understanding of, and ability to predict, MTX response in RA.
Abstract: Genetic polymorphisms in key methotrexate pathway genes are associated with response to treatment in rheumatoid arthritis patients

102 citations

Journal Article
TL;DR: The FAIRDOMHub is a repository for publishing FAIR (Findable, Accessible, Interoperable and Reusable) Data, Operating procedures and Models for the Systems Biology community and enables researchers to organize, share and publish data, models and protocols.
Abstract: The FAIRDOMHub is a repository for publishing FAIR (Findable, Accessible, Interoperable and Reusable) Data, Operating procedures and Models (https://fairdomhub.org/) for the Systems Biology community. It is a web-accessible repository for storing and sharing systems biology research assets. It enables researchers to organize, share and publish data, models and protocols, interlink them in the context of the systems biology investigations that produced them, and to interrogate them via API interfaces. By using the FAIRDOMHub, researchers can achieve more effective exchange with geographically distributed collaborators during projects, ensure results are sustained and preserved and generate reproducible publications that adhere to the FAIR guiding principles of data stewardship.
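
As a concrete illustration of the API access the abstract mentions, the following sketch lists publicly visible models from the FAIRDOMHub. It assumes the JSON:API-style interface exposed by the underlying SEEK software; the exact endpoint names and response layout should be verified against the current API documentation.

```python
# Minimal sketch of interrogating FAIRDOMHub programmatically, assuming the
# JSON:API-style interface of the underlying SEEK platform; verify endpoint
# names and response structure against the live API documentation.
import requests

BASE = "https://fairdomhub.org"

resp = requests.get(
    f"{BASE}/models",
    headers={"Accept": "application/vnd.api+json"},
)
resp.raise_for_status()

# JSON:API responses carry resources in a top-level "data" array.
for model in resp.json().get("data", []):
    attrs = model.get("attributes", {})
    print(model.get("id"), attrs.get("title"))
```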

99 citations


Cited by
Journal Article
TL;DR: The FAIR Data Principles are a set of data reuse principles that put specific emphasis on enhancing the ability of machines to automatically find and use data, in addition to supporting its reuse by individuals.
Abstract: There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders—representing academia, industry, funding agencies, and scholarly publishers—have come together to design and jointly endorse a concise and measurable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles and includes the rationale behind them, as well as some exemplar implementations in the community.

7,602 citations

Journal Article
TL;DR: Improvements to Galaxy's core framework, user interface, tools, and training materials enable Galaxy to be used for analyzing tens of thousands of datasets, and >5500 tools are now available from the Galaxy ToolShed.
Abstract: Galaxy (homepage: https://galaxyproject.org, main public server: https://usegalaxy.org) is a web-based scientific analysis platform used by tens of thousands of scientists across the world to analyze large biomedical datasets such as those found in genomics, proteomics, metabolomics and imaging. Started in 2005, Galaxy continues to focus on three key challenges of data-driven biomedical science: making analyses accessible to all researchers, ensuring analyses are completely reproducible, and making it simple to communicate analyses so that they can be reused and extended. During the last two years, the Galaxy team and the open-source community around Galaxy have made substantial improvements to Galaxy's core framework, user interface, tools, and training materials. Framework and user interface improvements now enable Galaxy to be used for analyzing tens of thousands of datasets, and >5500 tools are now available from the Galaxy ToolShed. The Galaxy community has led an effort to create numerous high-quality tutorials focused on common types of genomic analyses. The Galaxy developer and user communities continue to grow and be integral to Galaxy's development. The number of Galaxy public servers, developers contributing to the Galaxy framework and its tools, and users of the main Galaxy server have all increased substantially.
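
Although the abstract focuses on the web interface, Galaxy is also scriptable. Below is a minimal sketch using BioBlend, the community-maintained Python client for the Galaxy API. The server URL is the public instance named in the abstract; the API key is a placeholder you would generate in your own account settings.

```python
# Minimal sketch of driving Galaxy programmatically via BioBlend, the Python
# client for the Galaxy API (pip install bioblend). The API key below is a
# placeholder generated from your own account on the server.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://usegalaxy.org", key="YOUR_API_KEY")

# List a few of the tools installed on the server (most are distributed
# through the Galaxy ToolShed, as described in the abstract).
for tool in gi.tools.get_tools()[:10]:
    print(tool["id"], "-", tool["name"])

# Create a history to hold the datasets of a new analysis.
history = gi.histories.create_history(name="example-analysis")
print("Created history:", history["id"])
```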

2,601 citations

Journal Article
TL;DR: Galaxy seeks to make data-intensive research more accessible, transparent and reproducible by providing a Web-based environment in which users can perform computational analyses and have all of the details automatically tracked for later inspection, publication, or reuse.
Abstract: High-throughput data production technologies, particularly 'next-generation' DNA sequencing, have ushered in widespread and disruptive changes to biomedical research. Making sense of the large datasets produced by these technologies requires sophisticated statistical and computational methods, as well as substantial computational power. This has led to an acute crisis in life sciences, as researchers without informatics training attempt to perform computation-dependent analyses. Since 2005, the Galaxy project has worked to address this problem by providing a framework that makes advanced computational tools usable by non-experts. Galaxy seeks to make data-intensive research more accessible, transparent and reproducible by providing a Web-based environment in which users can perform computational analyses and have all of the details automatically tracked for later inspection, publication, or reuse. In this report we highlight recently added features enabling biomedical analyses on a large scale.

1,774 citations

Journal Article
TL;DR: Reviews current treatment strategies for rheumatoid arthritis and discusses how recent insights into its pathogenesis could ultimately lead to earlier diagnosis of RA, as well as providing new opportunities for drug treatment and for prevention through behavioral changes in high-risk individuals.
Abstract: Rheumatoid arthritis (RA) is a chronic systemic autoimmune disease that primarily affects the lining of the synovial joints and is associated with progressive disability, premature death, and socioeconomic burdens. A better understanding of how the pathological mechanisms drive the progression of RA in individuals is urgently required in order to develop therapies that will effectively treat patients at each stage of the disease. Here we dissect the etiology and pathology at specific stages: (i) triggering, (ii) maturation, (iii) targeting, and (iv) the fulminant stage, concomitant with hyperplastic synovium, cartilage damage, bone erosion, and systemic consequences. Modern pharmacologic therapies (including conventional, biological, and novel potential small-molecule disease-modifying anti-rheumatic drugs) remain the mainstay of RA treatment, and there has been significant progress toward achieving disease remission without joint deformity. Despite this, a significant proportion of RA patients do not respond effectively to current therapies and thus new drugs are urgently required. This review discusses recent advances in our understanding of RA pathogenesis and disease-modifying drugs, and provides perspectives on next-generation therapeutics for RA.

867 citations

Journal Article
TL;DR: Describes Cytoscape Automation (CA), which marries Cytoscape to highly productive workflow systems, for example Python/R in Jupyter/RStudio, and exposes over 270 Cytoscape core functions and 34 apps as REST-callable functions with standardized JSON interfaces backed by Swagger documentation.
Abstract: Cytoscape is one of the most successful network biology analysis and visualization tools, but because of its interactive nature, its role in creating reproducible, scalable, and novel workflows has been limited. We describe Cytoscape Automation (CA), which marries Cytoscape to highly productive workflow systems, for example, Python/R in Jupyter/RStudio. We expose over 270 Cytoscape core functions and 34 Cytoscape apps as REST-callable functions with standardized JSON interfaces backed by Swagger documentation. Independent projects to create and publish Python/R native CA interface libraries have reached an advanced stage, and a number of automation workflows are already published.
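
To make the REST-callable interface concrete, here is a minimal sketch (not from the original article) that talks to a locally running Cytoscape instance through CyREST. It assumes Cytoscape is open on the same machine with CyREST listening on its default port (1234); endpoint paths should be checked against the Swagger documentation the abstract mentions.

```python
# Minimal sketch of calling Cytoscape's CyREST interface from Python with
# plain HTTP. Assumes Cytoscape is running locally with CyREST on its
# default port (1234); consult the bundled Swagger docs for the full API.
import requests

BASE = "http://localhost:1234/v1"

# Confirm the connection and report the running Cytoscape version.
version = requests.get(f"{BASE}/version").json()
print("Cytoscape:", version.get("cytoscapeVersion"))

# Create a tiny network from a Cytoscape.js-style JSON document.
network = {
    "data": {"name": "example"},
    "elements": {
        "nodes": [{"data": {"id": "a"}}, {"data": {"id": "b"}}],
        "edges": [{"data": {"source": "a", "target": "b"}}],
    },
}
resp = requests.post(f"{BASE}/networks", json=network)
resp.raise_for_status()
print("Created network SUID:", resp.json().get("networkSUID"))
```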

721 citations