
Showing papers by "Rainer Breitling published in 2008"


Journal ArticleDOI
TL;DR: The processes and genes identified here present a framework for further study of the disease mechanism and provide candidate susceptibility genes and drug targets for Parkinson's disease and other α-synuclein related disorders.
Abstract: Inclusions in the brain containing alpha-synuclein are the pathological hallmark of Parkinson's disease, but how these inclusions are formed and how this links to disease is poorly understood. We have developed a C. elegans model that makes it possible to monitor, in living animals, the formation of alpha-synuclein inclusions. In worms of old age, inclusions contain aggregated alpha-synuclein, resembling a critical pathological feature. We used genome-wide RNA interference to identify processes involved in inclusion formation, and identified 80 genes that, when knocked down, resulted in a premature increase in the number of inclusions. Quality control and vesicle-trafficking genes expressed in the ER/Golgi complex and vesicular compartments were overrepresented, indicating a specific role for these processes in alpha-synuclein inclusion formation. Suppressors include aging-associated genes, such as sir-2.1/SIRT1 and lagr-1/LASS2. Altogether, our data suggest a link between alpha-synuclein inclusion formation and cellular aging, likely through an endomembrane-related mechanism. The processes and genes identified here present a framework for further study of the disease mechanism and provide candidate susceptibility genes and drug targets for Parkinson's disease and other alpha-synuclein related disorders.

371 citations


Journal ArticleDOI
TL;DR: It is concluded that careful meta-analysis is a powerful tool for integrating multiple array studies and achieves more reliable identification than an individual analysis, and rank products are more robust in gene ranking, which leads to a much higher reproducibility among independent studies.
Abstract: Motivation: The proliferation of public data repositories creates a need for meta-analysis methods to efficiently evaluate, integrate and validate related datasets produced by independent groups. A t-based approach has been proposed to integrate effect size from multiple studies by modeling both intra- and between-study variation. Recently, a non-parametric ‘rank product’ method, which is derived based on biological reasoning of fold-change criteria, has been applied to directly combine multiple datasets into one meta study. Fisher's Inverse χ2 method, which only depends on P-values from individual analyses of each dataset, has been used in a couple of medical studies. While these methods address the question from different angles, it is not clear how they compare with each other. Results: We comparatively evaluate the three methods: t-based hierarchical modeling, rank products and Fisher's Inverse χ2 test with P-values from either the t-based or the rank product method. A simulation study shows that the rank product method, in general, has higher sensitivity and selectivity than the t-based method in both individual and meta-analysis, especially in the setting of small sample size and/or large between-study variation. Not surprisingly, Fisher's χ2 method highly depends on the method used in the individual analysis. Application to real datasets demonstrates that meta-analysis achieves more reliable identification than an individual analysis, and rank products are more robust in gene ranking, which leads to a much higher reproducibility among independent studies. Though t-based meta-analysis greatly improves over the individual analysis, it suffers from a potentially large amount of false positives when P-values serve as threshold. We conclude that careful meta-analysis is a powerful tool for integrating multiple array studies. Contact: fxhong@jimmy.harvard.edu Supplementary information: Supplementary data are available at Bioinformatics online.
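The core of the rank product statistic described above can be sketched in a few lines: rank genes by fold change within each dataset, then combine the ranks by their geometric mean, so that genes consistently near the top of every study get the smallest scores. This is a toy illustration under simplified assumptions, not the authors' implementation; the published method also attaches a permutation-based significance estimate to each score.

```python
import numpy as np

def rank_product(datasets):
    """Geometric mean of per-dataset fold-change ranks, one score per gene.

    datasets: list of 1-D arrays, one fold-change value per gene, with the
    same gene order in every array. Rank 1 = most up-regulated, so a small
    rank product flags genes that are consistently near the top.
    (Toy sketch; the published method adds permutation-based significance.)
    """
    ranks = []
    for fc in datasets:
        order = np.argsort(-np.asarray(fc))    # strongest fold change first
        r = np.empty(len(fc))
        r[order] = np.arange(1, len(fc) + 1)   # assign ranks 1..n
        ranks.append(r)
    return np.prod(np.vstack(ranks), axis=0) ** (1.0 / len(ranks))

# gene 0 is the most up-regulated in both toy studies, so it scores best
scores = rank_product([np.array([3.0, 1.0, 0.5]),
                       np.array([2.5, 0.8, 1.1])])
```

Because the statistic works on ranks rather than the fold-change values themselves, it is insensitive to between-study scale differences, which is what drives the robustness reported in the abstract.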

252 citations


Journal ArticleDOI
TL;DR: This work examines quantitative trait loci for molecular traits such as gene expression or protein levels (eQTLs and pQTLs) and the existence of hotspots, where a single polymorphism leads to widespread downstream changes in the expression of distant genes.
Abstract: Genetical genomics aims at identifying quantitative trait loci (QTLs) for molecular traits such as gene expression or protein levels (eQTL and pQTL, respectively). One of the central concepts in genetical genomics is the existence of hotspots [1], where a single polymorphism leads to widespread downstream changes in the expression of distant genes, which are all mapping to the same genomic locus. Several groups have hypothesized that many genetic polymorphisms—e.g., in major regulators or transcription factors—would lead to large and consistent biological effects that would be visible as eQTL hotspots.
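The hotspot concept can be made concrete with a minimal counting sketch: tally how many distinct transcripts have an eQTL mapping to each locus, and flag loci whose count is unexpectedly high. The data, locus names and fixed threshold below are hypothetical; in practice the cut-off is derived from permutations of the genotype data rather than chosen by hand.

```python
from collections import Counter

def find_hotspots(eqtl_links, threshold=3):
    """Flag candidate eQTL hotspots.

    eqtl_links: (transcript, locus) pairs, one per significant eQTL.
    A locus is called a hotspot when more than `threshold` distinct
    transcripts map to it (an arbitrary illustrative cut-off; real
    analyses calibrate it by permuting genotypes).
    """
    per_locus = Counter(locus for _, locus in set(eqtl_links))
    return {locus: n for locus, n in per_locus.items() if n > threshold}

# four transcripts map to the same (hypothetical) locus -> one hotspot
links = [("t1", "chr2:50"), ("t2", "chr2:50"), ("t3", "chr2:50"),
         ("t4", "chr2:50"), ("t5", "chr7:10")]
hotspots = find_hotspots(links)
```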

209 citations


Journal ArticleDOI
TL;DR: Experimental high-throughput data can be used to improve and expand network reconstructions to include unexplored areas of metabolism, and data integration will play a particularly important part in exploiting the new experimental opportunities.
Abstract: The computational reconstruction and analysis of cellular models of microbial metabolism is one of the great success stories of systems biology. The extent and quality of metabolic network reconstructions is, however, limited by the current state of biochemical knowledge. Can experimental high-throughput data be used to improve and expand network reconstructions to include unexplored areas of metabolism? Recent advances in experimental technology and analytical methods bring this aim an important step closer to realization. Data integration will play a particularly important part in exploiting the new experimental opportunities.

93 citations


Journal ArticleDOI
TL;DR: A general approach is introduced that provides the foundations for a structured formal engineering of large-scale models of biochemical networks, using signal transduction as the main example.
Abstract: Quantitative models of biochemical networks (signal transduction cascades, metabolic pathways, gene regulatory circuits) are a central component of modern systems biology. Building and managing these complex models is a major challenge that can benefit from the application of formal methods adopted from theoretical computing science. Here we provide a general introduction to the field of formal modelling, which emphasizes the intuitive biochemical basis of the modelling process, but is also accessible for an audience with a background in computing science and/or model engineering. We show how signal transduction cascades can be modelled in a modular fashion, using both a qualitative approach (qualitative Petri nets) and quantitative approaches (continuous Petri nets and ordinary differential equations, ODEs). We review the major elementary building blocks of a cellular signalling model, discuss which critical design decisions have to be made during model building, and present a number of novel computational tools that can help to explore alternative modular models in an easy and intuitive manner. These tools, which are based on Petri net theory, offer convenient ways of composing hierarchical ODE models, and permit a qualitative analysis of their behaviour. We illustrate the central concepts using signal transduction as our main example. The ultimate aim is to introduce a general approach that provides the foundations for a structured formal engineering of large-scale models of biochemical networks.
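As a toy illustration of the ODE side of such cascade models (not a model taken from the paper), a two-level phosphorylation cascade under mass-action kinetics can be integrated directly: a constant signal S activates kinase A, the active form Ap activates B, and both active forms are dephosphorylated at a constant rate. All parameter values are illustrative assumptions.

```python
def simulate_cascade(k1=1.0, k2=1.0, kd=0.5, S=1.0, t_end=10.0, dt=0.001):
    """Forward-Euler integration of a minimal two-level signalling cascade.

    A -> Ap is driven by a constant signal S (rate k1*S*A); the active
    kinase Ap drives B -> Bp (rate k2*Ap*B); both active forms are
    dephosphorylated at rate kd. Parameters are illustrative only.
    """
    A, Ap, B, Bp = 1.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        dAp = k1 * S * A - kd * Ap    # net activation of level 1
        dBp = k2 * Ap * B - kd * Bp   # net activation of level 2
        A, Ap = A - dAp * dt, Ap + dAp * dt
        B, Bp = B - dBp * dt, Bp + dBp * dt
    return Ap, Bp

ap, bp = simulate_cascade()
# analytic steady state: ap -> k1*S/(k1*S + kd) = 2/3, bp -> ap/(ap + kd) = 4/7
```

The modular structure the abstract describes corresponds to the fact that each level of the cascade is the same building block (activation plus dephosphorylation), so levels can be composed hierarchically, which is what the Petri-net-based tools exploit.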

78 citations


Journal ArticleDOI
TL;DR: This open source, multi-platform software has been successfully used to interpret metabolomic experiments and will enable others using filtered, high mass accuracy mass spectrometric data sets to build and analyse networks.
Abstract: We present a Cytoscape plugin for the inference and visualization of networks from high-resolution mass spectrometry metabolomic data. The software also provides access to basic topological analysis. This open source, multi-platform software has been successfully used to interpret metabolomic experiments and will enable others using filtered, high mass accuracy mass spectrometric data sets to build and analyse networks.
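The network-inference step for such mass spectrometry data can be sketched as follows: connect two peaks whenever their exact-mass difference matches a known biochemical transformation within a ppm tolerance. The three-entry transformation table and the peak masses below are illustrative assumptions, not the plugin's code or data.

```python
# illustrative monoisotopic transformation masses (Da) -- a real table is larger
TRANSFORMS = {
    "CH2 (chain extension/methylation)": 14.01565,
    "O (oxidation/hydroxylation)": 15.99491,
    "H2O (condensation/hydration)": 18.01056,
}

def infer_edges(masses, ppm_tol=3.0):
    """Link peaks whose mass difference matches a known transformation.

    masses: accurate peak masses (Da). Returns (i, j, label) edges with
    masses[i] < masses[j], matched within ppm_tol parts per million.
    """
    edges = []
    for i, mi in enumerate(masses):
        for j, mj in enumerate(masses):
            if mj <= mi:
                continue
            diff = mj - mi
            for label, dm in TRANSFORMS.items():
                if abs(diff - dm) / mj * 1e6 <= ppm_tol:
                    edges.append((i, j, label))
    return edges

# toy peaks: glucose (180.06339) plus a +CH2 and a +H2O partner
peaks = [180.06339, 194.07904, 198.07395]
edges = infer_edges(peaks)
```

The tight ppm tolerance is why the abstract stresses filtered, high-mass-accuracy input: at low accuracy, many spurious differences would match a transformation by chance.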

61 citations


Journal ArticleDOI
TL;DR: A novel computational method is demonstrated that improves the mass accuracy of the LTQ-Orbitrap mass spectrometer from an initial ±1–2 ppm, obtained by the standard software, to an absolute median of 0.21 ppm.
Abstract: With the advent of a new generation of high-resolution mass spectrometers, the fields of proteomics and metabolomics have gained powerful new tools. In this paper, we demonstrate a novel computational method that improves the mass accuracy of the LTQ-Orbitrap mass spectrometer from an initial +/-1-2 ppm, obtained by the standard software, to an absolute median of 0.21 ppm (SD 0.21 ppm). With the increased mass accuracy it becomes much easier to match mass chromatograms in replicates and different sample types, even if compounds are detected at very low intensities. The proposed method exploits the ubiquitous presence of background ions in LC-MS profiles for accurate alignment and internal mass calibration, making it applicable for all types of MS equipment. The accuracy of this approach will facilitate many downstream systems biology applications, including mass-based molecule identification, ab initio metabolic network reconstruction, and untargeted metabolomics in general.
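The internal-calibration idea can be sketched simply: estimate the systematic ppm error from ubiquitous background ions of known composition, then remove that offset from every measured mass. The function names and the single global offset are simplifying assumptions for illustration; the published method operates on aligned chromatograms rather than one run-wide correction.

```python
import statistics

def recalibrate(observed, background_obs, background_ref):
    """Internal recalibration against ubiquitous background ions.

    background_obs / background_ref: observed vs. theoretical m/z of known
    background ions from the same run. The median relative (ppm) error of
    those ions estimates the systematic calibration offset, which is then
    divided out of every observed m/z. (Illustrative sketch: a single
    global offset, whereas the published method corrects per profile.)
    """
    ppm_err = statistics.median(
        (o - r) / r * 1e6 for o, r in zip(background_obs, background_ref))
    return [m / (1 + ppm_err * 1e-6) for m in observed]

# toy run where every background ion reads 2 ppm high
refs = [100.0, 200.0, 400.0]
obs = [m * (1 + 2e-6) for m in refs]
corrected = recalibrate([300.0006], obs, refs)
```

Using the median rather than the mean makes the offset estimate robust to the occasional background ion that is misassigned or overlapped by a real analyte peak.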

61 citations


Journal ArticleDOI
TL;DR: A generalization of genetical genomics is proposed that combines genetic and sensibly chosen environmental perturbations to study the plasticity of molecular networks. This forms a crucial step toward understanding why individuals respond differently to drugs, toxins, pathogens, nutrients and other environmental influences.

54 citations


Journal ArticleDOI
TL;DR: Prosecutor, an application that enables researchers to rapidly infer gene function from available gene expression data and functional annotations, uses a sensitive algorithm to achieve a high rate of linking genes of unknown function to annotated genes.
Abstract: Despite a plethora of functional genomic efforts, the function of many genes in sequenced genomes remains unknown. The increasing amount of microarray data for many species allows employing the guilt-by-association principle to predict function on a large scale: genes exhibiting similar expression patterns are more likely to participate in shared biological processes.
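The guilt-by-association principle can be sketched with a minimal correlation-based ranking: annotated genes are ordered by the similarity of their expression profiles to that of an unannotated gene, and the top hit's annotation is transferred as a hypothesis. The gene names and profiles are hypothetical, and Pearson correlation is a stand-in here; the Prosecutor method itself uses a more sensitive ranking algorithm.

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def predict_function(unknown_profile, annotated):
    """Rank annotated genes by expression similarity to an unknown gene.

    annotated: gene name -> expression profile over the same conditions
    as unknown_profile. The top-ranked gene's annotation becomes a
    functional hypothesis (toy version of guilt-by-association).
    """
    return sorted(annotated,
                  key=lambda g: pearson(unknown_profile, annotated[g]),
                  reverse=True)

ranking = predict_function(
    [1.0, 2.0, 3.0, 4.0],
    {"ribosomal": [1.1, 2.2, 2.9, 4.1],   # co-expressed with the unknown gene
     "stress":    [4.0, 3.0, 2.0, 1.0]})  # anti-correlated
```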

8 citations