Journal ArticleDOI

A Next Generation Connectivity Map: L1000 Platform and the First 1,000,000 Profiles.

TL;DR: The expanded CMap is reported, made possible by a new, low-cost, high-throughput reduced representation expression profiling method that is shown to be highly reproducible, comparable to RNA sequencing, and suitable for computational inference of the expression levels of 81% of non-measured transcripts.
About: This article was published in Cell on 2017-11-30 and is currently open access. It has received 1,943 citations to date.
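The TL;DR above refers to computational inference of non-measured transcripts from the roughly 1,000 directly measured landmark genes. As a rough illustration only (not the authors' actual pipeline), a multi-output linear model trained on reference profiles can impute the remaining transcriptome from landmark measurements; all data and dimensions below are toy placeholders.

```python
# Hedged sketch: infer non-measured transcripts from landmark-gene expression
# with a linear model. Toy synthetic data; the real CMap pipeline trains on
# large reference compendia and uses ~978 landmarks to infer ~12,000 genes.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_train, n_landmark, n_target = 500, 978, 50            # toy sizes
X_train = rng.normal(size=(n_train, n_landmark))         # landmark expression (profiles x genes)
true_weights = rng.normal(size=(n_landmark, n_target))
Y_train = X_train @ true_weights + rng.normal(scale=0.1, size=(n_train, n_target))

model = LinearRegression().fit(X_train, Y_train)          # one multi-output linear model
X_new = rng.normal(size=(3, n_landmark))                  # newly measured landmark profiles
Y_inferred = model.predict(X_new)                         # inferred non-measured transcripts
print(Y_inferred.shape)                                   # (3, 50)
```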
Citations
Journal ArticleDOI
TL;DR: In this paper, a graph neural network (GNN) version of the Learning under Privileged Information paradigm is proposed to predict new disease-gene associations; unlike prior approaches, it does not require the genetic features to be the same at the training and test stages.
Abstract: Motivation: Recently, machine learning models have achieved tremendous success in prioritizing candidate genes for genetic diseases. These models are able to accurately quantify the similarity among disease and genes based on the intuition that similar genes are more likely to be associated with similar diseases. However, the genetic features these methods rely on are often hard to collect due to high experimental cost and various other technical limitations. Existing solutions of this problem significantly increase the risk of overfitting and decrease the generalizability of the models. Results: In this work, we propose a graph neural network (GNN) version of the Learning under Privileged Information paradigm to predict new disease gene associations. Unlike previous gene prioritization approaches, our model does not require the genetic features to be the same at training and test stages. If a genetic feature is hard to measure and therefore missing at the test stage, our model could still efficiently incorporate its information during the training process. To implement this, we develop a Heteroscedastic Gaussian Dropout algorithm, where the dropout probability of the GNN model is determined by another GNN model with a mirrored GNN architecture. To evaluate our method, we compared our method with four state-of-the-art methods on the Online Mendelian Inheritance in Man dataset to prioritize candidate disease genes. Extensive evaluations show that our model could improve the prediction accuracy when all the features are available compared to other methods. More importantly, our model could make very accurate predictions when >90% of the features are missing at the test stage. Availability and implementation: Our method is realized with Python 3.7 and PyTorch 1.5.0, and the method and data are freely available at: https://github.com/juanshu30/Disease-Gene-Prioritization-with-Privileged-Information-and-Heteroscedastic-Dropout.
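A hedged sketch of the heteroscedastic Gaussian dropout idea described above: a mirrored network predicts a per-unit noise scale, and hidden activations are multiplied by N(1, sigma^2) noise during training. Plain PyTorch MLPs stand in for the paper's GNNs here, and all layer sizes, names, and parameterizations are illustrative rather than the authors' implementation.

```python
# Hedged sketch of heteroscedastic Gaussian dropout: a second ("mirror") network
# predicts a per-unit log-variance, and the main network's activations are
# multiplied by N(1, sigma^2) noise during training only.
import torch
import torch.nn as nn

class HeteroscedasticGaussianDropout(nn.Module):
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.main = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        # mirrored network predicting the log-variance of the multiplicative noise
        self.mirror = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                                    nn.Linear(hidden_dim, hidden_dim))

    def forward(self, x):
        h = self.main(x)
        if self.training:
            log_var = self.mirror(x)                   # per-unit noise variance
            sigma = torch.exp(0.5 * log_var)
            noise = 1.0 + sigma * torch.randn_like(h)  # multiplicative N(1, sigma^2)
            h = h * noise
        return h

layer = HeteroscedasticGaussianDropout(in_dim=16, hidden_dim=32)
out = layer(torch.randn(4, 16))   # shape (4, 32)
print(out.shape)
```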

14 citations

Posted ContentDOI
07 Sep 2017-bioRxiv
TL;DR: A first-of-its-kind public resource of proteomic responses to systematically administered perturbagens is demonstrated, which can be leveraged against public-domain external datasets to recognize therapeutic hypotheses consistent with ongoing clinical trials for the treatment of multiple myeloma and acute lymphocytic leukemia.
Abstract: Though the added value of proteomic measurements to gene expression profiling has been demonstrated, profiling of gene expression on its own remains the dominant means of understanding cellular responses to perturbation. Direct protein measurements are typically limited due to issues of cost and scale; however, the recent development of high-throughput, targeted sentinel mass spectrometry assays provides an opportunity for proteomics to contribute at a meaningful scale in high-value areas for drug development. To demonstrate the feasibility of a systematic and comprehensive library of perturbational proteomic signatures, we profiled 90 drugs (in triplicate) in six cell lines using two different proteomic assays -- one measuring global changes of epigenetic marks on histone proteins and another measuring a set of peptides reporting on the phosphoproteome -- for a total of more than 3,400 samples. This effort represents a first-of-its-kind resource for proteomics. The majority of tested drugs generated reproducible responses in both phosphosignaling and chromatin states, but we observed differences in the responses that were cell line- and assay-specific. We formalized the process of comparing response signatures within the data using a concept called connectivity, which enabled us to integrate data across cell types and assays. Furthermore, it facilitated incorporation of transcriptional signatures. Consistent connectivity among cell types revealed cellular responses that transcended cell-specific effects, while consistent connectivity among assays revealed unexpected associations between drugs that were confirmed by experimental follow-up. We further demonstrated how the resource could be leveraged against public domain external datasets to recognize therapeutic hypotheses that are consistent with ongoing clinical trials for the treatment of multiple myeloma and acute lymphocytic leukemia (ALL). These data are available for download via the Gene Expression Omnibus (accession GSE101406), and web apps for interacting with this resource are available at https://clue.io/proteomics.
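The abstract formalizes signature comparison as "connectivity". As a simplified, rank-based stand-in (not the authors' exact scoring), connectivity between two response signatures can be approximated with a rank correlation, where a strongly positive value suggests a shared response and a strongly negative value an opposing one; the data below are simulated.

```python
# Hedged sketch of a "connectivity"-style comparison between two response
# signatures, using Spearman rank correlation as a simplified stand-in.
import numpy as np
from scipy.stats import spearmanr

def connectivity_score(query: np.ndarray, reference: np.ndarray) -> float:
    """Rank correlation between two differential-response signatures
    (positive = similar response, negative = opposing response)."""
    rho, _ = spearmanr(query, reference)
    return float(rho)

rng = np.random.default_rng(1)
query = rng.normal(size=200)                      # e.g. phosphopeptide z-scores for drug A
reference = 0.7 * query + rng.normal(size=200)    # drug B in another cell line or assay
print(round(connectivity_score(query, reference), 2))
```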

14 citations

Posted ContentDOI
06 Aug 2018-bioRxiv
TL;DR: A novel method is developed that combines information from literature and structured databases and applies feature learning to generate vector-space embeddings; it can be applied to other areas in which multi-modal information is used to build predictive models.
Abstract: Drug repurposing is the problem of finding new uses for known drugs, and may either involve finding a new protein target or a new indication for a known mechanism. Several computational methods for drug repurposing exist, and many of these methods rely on combinations of different sources of information, extract hand-crafted features and use a computational model to predict targets or indications for a drug. One of the distinguishing features between different drug repurposing systems is the selection of features. Recently, a set of novel machine learning methods have become available that can efficiently learn features from datasets, and these methods can be applied, among others, to text and structured data in knowledge graphs. We developed a novel method that combines information in literature and structured databases, and applies feature learning to generate vector space embeddings. We apply our method to the identification of drug targets and indications for known drugs based on heterogeneous information about drugs, target proteins, and diseases. We demonstrate that our method is able to combine complementary information from both structured databases and from literature, and we show that our method can compete with well-established methods for drug repurposing. Our approach is generic and can be applied to other areas in which multi-modal information is used to build predictive models.
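To illustrate the general multi-modal idea (not the paper's actual embedding pipeline), the sketch below concatenates a text-derived embedding with a knowledge-graph embedding for each drug-target pair and trains a simple classifier on known interactions. All feature sources, dimensions, and labels are placeholders.

```python
# Hedged sketch: combine a literature-derived embedding and a knowledge-graph
# embedding per drug-target pair, then train a classifier on known interactions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_pairs, d_text, d_graph = 400, 64, 32
text_emb = rng.normal(size=(n_pairs, d_text))     # e.g. embeddings learned from literature
graph_emb = rng.normal(size=(n_pairs, d_graph))   # e.g. embeddings learned from a knowledge graph
X = np.hstack([text_emb, graph_emb])              # multi-modal feature vector per pair
y = rng.integers(0, 2, size=n_pairs)              # 1 = known interaction, 0 = sampled negative

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba(X[:3])[:, 1])             # predicted interaction probabilities
```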

14 citations


Cites methods from "A Next Generation Connectivity Map:..."

  • ...Additionally, omics data, in particular gene expression has been used for analyzing or inferring new drugs indications (Subramanian et al., 2017)....


Journal ArticleDOI
TL;DR: In this paper, the authors used a probabilistic Bayesian network approach to identify genes involved in inflammation, immune activation, and reduced bioenergetics associated with Major Depressive Disorder.
Abstract: Major depressive disorder (MDD) is a brain disorder often characterized by recurrent episode and remission phases. The molecular correlates of MDD have been investigated in case-control comparisons, but the biological alterations associated with illness trait (regardless of clinical phase) or current state (symptomatic and remitted phases) remain largely unknown, limiting targeted drug discovery. To characterize MDD trait- and state-dependent changes, in single or recurrent depressive episode or remission, we generated transcriptomic profiles of subgenual anterior cingulate cortex of postmortem subjects in first MDD episode (n = 20), in remission after a single episode (n = 15), in recurrent episode (n = 20), in remission after recurring episodes (n = 15) and control subject (n = 20). We analyzed the data at the gene, biological pathway, and cell-specific molecular levels, investigated putative causal events and therapeutic leads. MDD-trait was associated with genes involved in inflammation, immune activation, and reduced bioenergetics (q < 0.05) whereas MDD-states were associated with altered neuronal structure and reduced neurotransmission (q < 0.05). Cell-level deconvolution of transcriptomic data showed significant change in density of GABAergic interneurons positive for corticotropin-releasing hormone, somatostatin, or vasoactive-intestinal peptide (p < 3 × 10-3). A probabilistic Bayesian-network approach showed causal roles of immune-system-activation (q < 8.67 × 10-3), cytokine-response (q < 4.79 × 10-27) and oxidative-stress (q < 2.05 × 10-3) across MDD-phases. Gene-sets associated with these putative causal changes show inverse associations with the transcriptomic effects of dopaminergic and monoaminergic ligands. The study provides first insights into distinct cellular and molecular pathologies associated with trait- and state-MDD, on plasticity mechanisms linking the two pathologies, and on a method of drug discovery focused on putative disease-causing pathways.
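The abstract mentions cell-level deconvolution of bulk transcriptomic data. As a hedged illustration of that general idea (the study's actual deconvolution method may differ), cell-type proportions can be estimated from a bulk profile by non-negative least squares against a reference signature matrix; the matrices below are synthetic.

```python
# Hedged sketch of cell-level deconvolution: estimate cell-type proportions in a
# bulk transcriptome by non-negative least squares against reference signatures.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
n_genes, n_cell_types = 300, 4
signatures = np.abs(rng.normal(size=(n_genes, n_cell_types)))   # reference cell-type profiles
true_props = np.array([0.5, 0.3, 0.15, 0.05])                   # e.g. neurons and interneuron subtypes
bulk = signatures @ true_props + rng.normal(scale=0.01, size=n_genes)

est, _ = nnls(signatures, bulk)
print(np.round(est / est.sum(), 2))   # estimated proportions, close to true_props
```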

14 citations

Journal ArticleDOI
TL;DR: In this paper, the authors identify robust associations between various mutational signatures and drug activity across cancer cell lines; these are as numerous as associations with established genetic markers such as driver gene alterations.
Abstract: Genomic analyses have revealed mutational footprints associated with DNA maintenance gone awry, or with mutagen exposures. Because cancer therapeutics often target DNA synthesis or repair, we asked if mutational signatures make useful markers of drug sensitivity. We detect mutational signatures in cancer cell line exomes (where matched healthy tissues are not available) by adjusting for the confounding germline mutation spectra across ancestries. We identify robust associations between various mutational signatures and drug activity across cancer cell lines; these are as numerous as associations with established genetic markers such as driver gene alterations. Signatures of prior exposures to DNA damaging agents – including chemotherapy – tend to associate with drug resistance, while signatures of deficiencies in DNA repair tend to predict sensitivity towards particular therapeutics. Replication analyses across independent drug and CRISPR genetic screening data sets reveal hundreds of robust associations, which are provided as a resource for drug repurposing guided by mutational signature markers.
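As a hedged illustration of one such association test (simulated data; the paper additionally adjusts for confounders such as ancestry-related germline spectra), a mutational-signature exposure can be correlated with a drug-sensitivity readout across cell lines:

```python
# Hedged sketch: correlate a mutational-signature exposure with drug sensitivity
# (e.g. AUC) across cell lines; a negative correlation would suggest that higher
# exposure tracks with greater sensitivity.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_cell_lines = 120
signature_exposure = rng.gamma(shape=2.0, size=n_cell_lines)          # e.g. a DNA-repair-deficiency signature
drug_auc = 1.0 - 0.05 * signature_exposure + rng.normal(scale=0.1, size=n_cell_lines)

rho, pval = spearmanr(signature_exposure, drug_auc)
print(f"rho={rho:.2f}, p={pval:.1e}")
```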

14 citations

References
Journal ArticleDOI
TL;DR: The Gene Set Enrichment Analysis (GSEA) method as discussed by the authors focuses on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation.
Abstract: Although genomewide RNA expression analysis has become a routine tool in biomedical research, extracting biological insight from such information remains a major challenge. Here, we describe a powerful analytical method called Gene Set Enrichment Analysis (GSEA) for interpreting gene expression data. The method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation. We demonstrate how GSEA yields insights into several cancer-related data sets, including leukemia and lung cancer. Notably, where single-gene analysis finds little similarity between two independent studies of patient survival in lung cancer, GSEA reveals many biological pathways in common. The GSEA method is embodied in a freely available software package, together with an initial database of 1,325 biologically defined gene sets.
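A minimal sketch of the core GSEA statistic described above: a weighted Kolmogorov-Smirnov-style running sum over a ranked gene list that increases at gene-set members and decreases elsewhere. This is simplified (no permutation-based significance testing, single weighting scheme) and is not the released GSEA software.

```python
# Hedged sketch of the GSEA enrichment score: running-sum statistic over a
# ranked gene list, returning the maximum deviation from zero.
import numpy as np

def enrichment_score(ranked_genes, gene_set, correlations):
    """ranked_genes: genes sorted by correlation with phenotype (descending);
    correlations: matching scores; gene_set: set of member gene names."""
    in_set = np.array([g in gene_set for g in ranked_genes])
    weights = np.abs(np.asarray(correlations, dtype=float))
    hit = np.where(in_set, weights, 0.0)
    hit = hit / hit.sum()                          # increments at gene-set members
    miss = np.where(~in_set, 1.0, 0.0)
    miss = miss / miss.sum()                       # decrements elsewhere
    running = np.cumsum(hit - miss)
    return running[np.argmax(np.abs(running))]     # maximum deviation from zero

genes = [f"g{i}" for i in range(100)]
scores = np.linspace(2, -2, 100)                   # ranked correlation scores
print(round(enrichment_score(genes, {"g1", "g3", "g5", "g8"}, scores), 2))
```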

34,830 citations

Journal Article
TL;DR: t-SNE is a new technique that visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map; it is a variation of Stochastic Neighbor Embedding that is much easier to optimize and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
Abstract: We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets.
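A hedged usage sketch of the technique, here via scikit-learn's t-SNE implementation rather than the authors' original code; the toy clusters and parameter choices are illustrative only.

```python
# Hedged sketch: embed high-dimensional profiles into 2-D with t-SNE.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(loc=c, size=(50, 64)) for c in (0.0, 3.0, 6.0)])  # 3 toy clusters

embedding = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(X)
print(embedding.shape)   # (150, 2): one 2-D location per datapoint
```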

30,124 citations

Journal ArticleDOI
TL;DR: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data and provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments.
Abstract: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data. GEO provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments. GEO is not intended to replace in house gene expression databases that benefit from coherent data sets, and which are constructed to facilitate a particular analytic method, but rather complement these by acting as a tertiary, central data distribution hub. The three central data entities of GEO are platforms, samples and series, and were designed with gene expression and genomic hybridization experiments in mind. A platform is, essentially, a list of probes that define what set of molecules may be detected. A sample describes the set of molecules that are being probed and references a single platform used to generate its molecular abundance data. A series organizes samples into the meaningful data sets which make up an experiment. The GEO repository is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo.
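The abstract describes GEO's three central data entities: platforms, samples, and series. A hedged sketch of that data model as plain dataclasses is shown below; the field names are illustrative and are not GEO's actual schema.

```python
# Hedged sketch of GEO's platform / sample / series data model.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Platform:                      # the probes defining what molecules may be detected
    accession: str                   # e.g. "GPLxxxx"
    probes: List[str] = field(default_factory=list)

@dataclass
class Sample:                        # molecules probed, referencing a single platform
    accession: str                   # e.g. "GSMxxxx"
    platform: Optional[Platform] = None
    abundances: Dict[str, float] = field(default_factory=dict)   # probe -> measured value

@dataclass
class Series:                        # samples organized into one experiment
    accession: str                   # e.g. "GSExxxx"
    samples: List[Sample] = field(default_factory=list)

gpl = Platform("GPL0000", probes=["p1", "p2"])
gsm = Sample("GSM0000", platform=gpl, abundances={"p1": 7.2, "p2": 5.9})
gse = Series("GSE0000", samples=[gsm])
print(len(gse.samples), gse.samples[0].platform.accession)
```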

10,968 citations

Journal ArticleDOI
TL;DR: How BLAT was optimized is described, which is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences.
Abstract: Analyzing vertebrate genomes requires rapid mRNA/DNA and cross-species protein alignments. A new tool, BLAT, is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences. BLAT's speed stems from an index of all nonoverlapping K-mers in the genome. This index fits inside the RAM of inexpensive computers, and need only be computed once for each genome assembly. BLAT has several major stages. It uses the index to find regions in the genome likely to be homologous to the query sequence. It performs an alignment between homologous regions. It stitches together these aligned regions (often exons) into larger alignments (typically genes). Finally, BLAT revisits small internal exons possibly missed at the first stage and adjusts large gap boundaries that have canonical splice sites where feasible. This paper describes how BLAT was optimized. Effects on speed and sensitivity are explored for various K-mer sizes, mismatch schemes, and number of required index matches. BLAT is compared with other alignment programs on various test sets and then used in several genome-wide applications. http://genome.ucsc.edu hosts a web-based BLAT server for the human genome.
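A hedged toy sketch of the indexing idea described in the abstract: index all non-overlapping K-mers of a "genome" once, then look up a query's overlapping K-mers to find candidate homologous regions. Real BLAT adds alignment, stitching of aligned regions, and splice-site handling, none of which are modeled here.

```python
# Hedged sketch of BLAT's seeding step: a non-overlapping K-mer index of the
# genome, queried with overlapping K-mers to produce candidate (query, genome)
# seed positions.
from collections import defaultdict

def build_index(genome: str, k: int = 4) -> dict:
    index = defaultdict(list)
    for pos in range(0, len(genome) - k + 1, k):      # non-overlapping K-mers
        index[genome[pos:pos + k]].append(pos)
    return index

def candidate_hits(query: str, index: dict, k: int = 4):
    hits = []
    for qpos in range(len(query) - k + 1):            # overlapping query K-mers
        for gpos in index.get(query[qpos:qpos + k], []):
            hits.append((qpos, gpos))
    return hits

genome = "ACGTACGTTTGACCATGACGT" * 3
index = build_index(genome, k=4)
print(candidate_hits("TTGACCAT", index, k=4))          # (query_offset, genome_offset) seeds
```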

8,326 citations

Journal ArticleDOI
TL;DR: This paper proposes parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples.
Abstract: Non-biological experimental variation or “batch effects” are commonly observed across multiple batches of microarray experiments, often rendering the task of combining data from these batches difficult. The ability to combine microarray data sets is advantageous to researchers to increase statistical power to detect biological phenomena from studies where logistical considerations restrict sample size or in studies that require the sequential hybridization of arrays. In general, it is inappropriate to combine data sets without adjusting for batch effects. Methods have been proposed to filter batch effects from data, but these are often complicated and require large batch sizes (>25) to implement. Because the majority of microarray studies are conducted using much smaller sample sizes, existing methods are not sufficient. We propose parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that is robust to outliers in small sample sizes and performs comparable to existing methods for large samples. We illustrate our methods using two example data sets and show that our methods are justifiable, easy to apply, and useful in practice. Software for our method is freely available at: http://biosun1.harvard.edu/complab/batch/.
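A hedged sketch of the location/scale idea behind batch adjustment: center and rescale each gene within each batch toward the pooled distribution. This is a simplified stand-in for the paper's empirical Bayes method (ComBat), which additionally borrows information across genes to stabilize estimates in small batches; the data below are simulated.

```python
# Hedged sketch: simple per-batch location/scale adjustment of a genes x samples
# expression matrix toward the pooled per-gene mean and standard deviation.
import numpy as np

def simple_batch_adjust(expr: np.ndarray, batches: np.ndarray) -> np.ndarray:
    """expr: genes x samples matrix; batches: per-sample batch labels."""
    adjusted = expr.astype(float).copy()
    grand_mean = expr.mean(axis=1, keepdims=True)
    grand_std = expr.std(axis=1, keepdims=True) + 1e-8
    for b in np.unique(batches):
        cols = batches == b
        mu = expr[:, cols].mean(axis=1, keepdims=True)
        sd = expr[:, cols].std(axis=1, keepdims=True) + 1e-8
        adjusted[:, cols] = (expr[:, cols] - mu) / sd * grand_std + grand_mean
    return adjusted

rng = np.random.default_rng(6)
expr = rng.normal(size=(10, 12)) + np.repeat([0.0, 2.0], 6)   # second batch shifted upward
batches = np.repeat(["A", "B"], 6)
print(simple_batch_adjust(expr, batches).mean(axis=1).round(2))  # batch shift removed
```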

6,319 citations
