Journal ArticleDOI

A Next Generation Connectivity Map: L1000 Platform and the First 1,000,000 Profiles.

TL;DR: The expanded CMap is reported, made possible by a new, low-cost, high-throughput reduced representation expression profiling method that is shown to be highly reproducible, comparable to RNA sequencing, and suitable for computational inference of the expression levels of 81% of non-measured transcripts.
About: This article is published in Cell. It was published on 2017-11-30 and is currently open access. It has received 1,943 citations to date.
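The reduced-representation idea behind L1000 (measure roughly 1,000 landmark transcripts and computationally infer the rest) can be sketched with per-gene least squares on a reference compendium. The gene counts, synthetic data, and linear model below are illustrative only, not the platform's actual inference procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic reference compendium: 200 profiles, 10 "landmark" genes,
# and 5 "non-measured" genes that depend linearly on the landmarks.
n_profiles, n_landmark, n_inferred = 200, 10, 5
landmarks = rng.normal(size=(n_profiles, n_landmark))
true_weights = rng.normal(size=(n_landmark, n_inferred))
inferred_truth = landmarks @ true_weights

# Fit per-gene linear models on the reference data (least squares),
# mirroring the idea of inferring non-measured transcripts from landmarks.
weights, *_ = np.linalg.lstsq(landmarks, inferred_truth, rcond=None)

# Infer the non-measured genes for a new profile from its landmarks alone.
new_landmarks = rng.normal(size=(1, n_landmark))
predicted = new_landmarks @ weights
print(predicted.shape)  # (1, 5)
```

On this noiseless synthetic data the fitted weights recover the generating weights exactly; real inference works with far more genes and noisy measurements.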
Citations
Journal ArticleDOI
05 Apr 2022-PLOS ONE
TL;DR: A computational algorithm for the enrichment of synergistic drug combinations using gene regulatory network knowledge and an operational module unit (OMU) system which is generated from single drug genomic and phenotypic data is developed.
Abstract: Drug combination therapies can improve drug efficacy, reduce drug dosage, and overcome drug resistance in cancer treatments. Current research strategies to determine which drug combinations have a synergistic effect rely mainly on clinical or empirical experience and on screening predefined pools of drugs. Given the number of possible drug combinations, the speed and scope of finding new drug combinations are very limited with these methods. Because the number of drug combinations grows exponentially, it is difficult to test all possible combinations in the lab. Several large-scale public genomic and phenotypic resources provide data from single-drug-treated cells as well as from small-molecule-treated cells. These databases offer a wealth of information about cellular responses to drugs and an opportunity to overcome the limitations of current methods. Developing a new, advanced data processing and analysis strategy is imperative, and a computational prediction algorithm is highly desirable. In this paper, we developed a computational algorithm for the enrichment of synergistic drug combinations using gene regulatory network knowledge and an operational module unit (OMU) system, which we generate from single-drug genomic and phenotypic data. As a proof of principle, we applied the pipeline to a group of anticancer drugs and demonstrated how the algorithm can help researchers efficiently find possible synergistic drug combinations, using single-drug data to evaluate all possible drug pairs.
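The "evaluate all possible drug pairs from single-drug data" step can be sketched as exhaustive pair enumeration with a per-pair score. Everything below is a toy stand-in: the drug names, gene sets, and scoring function are invented for illustration and are not the paper's OMU method.

```python
from itertools import combinations

# Hypothetical per-drug gene "signatures" (sets of affected genes);
# all names and values here are illustrative, not from the paper.
signatures = {
    "drugA": {"TP53", "MYC", "EGFR"},
    "drugB": {"MYC", "BRCA1"},
    "drugC": {"EGFR", "KRAS", "BRAF"},
}

def toy_synergy(a, b):
    """Toy score: pairs covering more distinct genes with less
    overlap rank higher (a crude stand-in for module-level scoring)."""
    union = signatures[a] | signatures[b]
    inter = signatures[a] & signatures[b]
    return len(union) - len(inter)

# Enumerate every pair once and rank by the score.
pairs = sorted(combinations(signatures, 2),
               key=lambda p: toy_synergy(*p), reverse=True)
print(pairs[0])  # → ('drugB', 'drugC')
```

With n drugs this is n(n-1)/2 evaluations, which is why computational pre-ranking is attractive compared with testing every pair in the lab.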

5 citations

Journal ArticleDOI
TL;DR: Systems chemical biology is primed to reveal an integrated understanding of fundamental biology and to discover new chemical probes to comprehensively dissect and systematically understand that biology, thereby providing a path to novel strategies for discovering therapeutics, designing drug combinations, avoiding toxicity, and harnessing beneficial polypharmacology.
Abstract: For the past several decades, chemical biologists have been leveraging chemical principles for understanding biology, tackling disease, and biomanufacturing, while systems biologists have holistica...

5 citations

Journal ArticleDOI
TL;DR: In this article, the authors integrate data from a large compendium of transcriptomic responses to chemical exposure with a comprehensive database of chemical-protein associations to train binary classifiers that predict mechanism(s) of action from transcriptomic responses.
Abstract: The advent of high-throughput transcriptomic screening technologies has resulted in a wealth of publicly available gene expression data associated with chemical treatments. From a regulatory perspective, data sets that cover a large chemical space and contain reference chemicals offer utility for the prediction of molecular initiating events associated with chemical exposure. Here, we integrate data from a large compendium of transcriptomic responses to chemical exposure with a comprehensive database of chemical-protein associations to train binary classifiers that predict mechanism(s) of action from transcriptomic responses. First, we linked reference chemicals present in the LINCS L1000 gene expression data collection to chemical identifiers in RefChemDB, a database of chemical-protein interactions. Next, we trained binary classifiers on MCF7 human breast cancer cell line-derived gene expression profiles and chemical-protein labels, using six classification algorithms to identify optimal analysis parameters. To validate classifier accuracy, we used holdout data sets, training-excluded reference chemicals, and empirical significance testing against null models derived from permuted chemical-protein associations. To identify classifiers whose predictive performance varies across training data derived from different cellular contexts, we trained a separate set of binary classifiers on the PC3 human prostate cancer cell line. We trained classifiers using expression data associated with chemical treatments linked to 51 molecular initiating events. This analysis identified and validated 9 high-performing classifiers with empirical p-values lower than 0.05, internal accuracies ranging from 0.73 to 0.94, and holdout accuracies from 0.68 to 0.92. High-ranking predictions for training-excluded reference chemicals demonstrated that predictive accuracy extends beyond the set of chemicals used in classifier training. To explore differences in classifier performance as a function of training-data cellular context, MCF7-trained classifier accuracies were compared to those of classifiers trained on PC3 gene expression data for the same molecular initiating events. This methodology can offer insight into prioritizing candidate perturbagens of interest for targeted screens. It can also help guide the selection of relevant cellular contexts for screening classes of candidate perturbagens, using cell line-specific model performance.
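The validation scheme described above (holdout accuracy compared against a null of classifiers trained on permuted labels) can be sketched in miniature on synthetic data. The classifier, data, and scale below are stand-ins, not the study's models or pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "expression profiles": 300 samples x 50 genes, with class
# signal in the first 5 genes (a stand-in for a mechanism-of-action label).
X = rng.normal(size=(300, 50))
y = rng.integers(0, 2, size=300)
X[y == 1, :5] += 2.0

# Simple holdout split and a nearest-centroid classifier (standing in
# for the six algorithms compared in the study).
X_tr, X_ho, y_tr, y_ho = X[:200], X[200:], y[:200], y[200:]

def centroid_accuracy(X_tr, y_tr, X_ho, y_ho):
    c0, c1 = X_tr[y_tr == 0].mean(0), X_tr[y_tr == 1].mean(0)
    pred = (np.linalg.norm(X_ho - c1, axis=1)
            < np.linalg.norm(X_ho - c0, axis=1)).astype(int)
    return (pred == y_ho).mean()

holdout_acc = centroid_accuracy(X_tr, y_tr, X_ho, y_ho)

# Empirical significance via label permutation: the null model idea
# used in the paper, in miniature.
null_accs = [centroid_accuracy(X_tr, rng.permutation(y_tr), X_ho, y_ho)
             for _ in range(50)]
p_value = (1 + sum(a >= holdout_acc for a in null_accs)) / (1 + len(null_accs))
print(round(holdout_acc, 2), round(p_value, 3))
```

The permutation null answers "how well would a classifier do if the chemical-protein labels carried no information?", which is the empirical significance test the abstract refers to.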

5 citations

Posted ContentDOI
31 Aug 2021-medRxiv
TL;DR: The authors applied genomic structural equation modeling to conduct a GWAS of the new Genetics of Opioid Addiction Consortium (GENOA) data and published studies (Psychiatric Genomics Consortium, Million Veteran Program, and Partners Health), comprising 23,367 cases and effective sample size of 88,114 individuals of European ancestry.
Abstract: Opioid addiction (OA) has strong heritability, yet few genetic variant associations have been robustly identified. Only rs1799971, the A118G variant in OPRM1, has been identified as a genome-wide significant association with OA and independently replicated. We applied genomic structural equation modeling to conduct a GWAS of the new Genetics of Opioid Addiction Consortium (GENOA) data and published studies (Psychiatric Genomics Consortium, Million Veteran Program, and Partners Health), comprising 23,367 cases and effective sample size of 88,114 individuals of European ancestry. Genetic correlations among the various OA phenotypes were uniformly high (rg > 0.9). We observed the strongest evidence to date for OPRM1: lead SNP rs9478500 (p=2.56×10−9). Gene-based analyses identified novel genome-wide significant associations with PPP6C and FURIN. Variants within these loci appear to be pleiotropic for addiction and related traits.

5 citations

Proceedings ArticleDOI
01 Feb 2020
TL;DR: GLIT, a model that uses transcriptional response data together with chemical structures for drug-induced liver injury prediction, outperformed a baseline model that used only drug structure information.
Abstract: Drug-Induced Liver Injury (DILI) is a major cause of failed drug candidates in clinical trials and of withdrawal of approved drugs from the market. Therefore, machine learning-based DILI prediction can be key to increasing the success rate of drug discovery, because drug candidates predicted to potentially induce liver injury can be rejected before clinical trials. However, existing DILI prediction models mainly focus on the chemical structures of drugs. Since we cannot determine whether a drug will cause liver injury based solely on its structure, DILI prediction based on the transcriptional effect of a drug on a cell is necessary. In this paper, we propose GLIT, a model that uses transcriptional response data and chemical structures for drug-induced liver injury prediction. GLIT learns embedding vectors of drug structures and drug-induced gene expression profiles using graph attention networks on a biological knowledge graph to predict DILI. GLIT outperformed a baseline model that uses only drug structure information by 7% and 19.2% in terms of correct classification rate (CCR) and Matthews correlation coefficient (MCC), respectively. In addition, we conducted a literature survey to confirm whether the class labels predicted by GLIT for drugs in the unknown DILI class are correct.
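The two metrics named above can both be computed from a binary confusion matrix. CCR is taken here as the mean of per-class recalls (one common definition; whether the paper uses exactly this variant is an assumption), and MCC is the standard Matthews coefficient.

```python
import math

def confusion(y_true, y_pred):
    """Binary confusion counts: (TP, TN, FP, FN)."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def ccr(y_true, y_pred):
    """Correct classification rate as the mean of per-class recalls
    (an assumed definition; more robust to class imbalance than
    plain accuracy)."""
    tp, tn, fp, fn = confusion(y_true, y_pred)
    return 0.5 * (tp / (tp + fn) + tn / (tn + fp))

def mcc(y_true, y_pred):
    """Matthews correlation coefficient, in [-1, 1]."""
    tp, tn, fp, fn = confusion(y_true, y_pred)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]
print(ccr(y_true, y_pred), mcc(y_true, y_pred))  # 11/15, 7/15
```

MCC is often preferred over accuracy for DILI-style data sets because the positive (injury) class is typically the minority.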

5 citations


Cites background from "A Next Generation Connectivity Map:..."

  • ...For example, the Broad Institute’s Connectivity Map (CMap) [13] contains 1....


References
Journal ArticleDOI
TL;DR: The Gene Set Enrichment Analysis (GSEA) method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation.
Abstract: Although genomewide RNA expression analysis has become a routine tool in biomedical research, extracting biological insight from such information remains a major challenge. Here, we describe a powerful analytical method called Gene Set Enrichment Analysis (GSEA) for interpreting gene expression data. The method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation. We demonstrate how GSEA yields insights into several cancer-related data sets, including leukemia and lung cancer. Notably, where single-gene analysis finds little similarity between two independent studies of patient survival in lung cancer, GSEA reveals many biological pathways in common. The GSEA method is embodied in a freely available software package, together with an initial database of 1,325 biologically defined gene sets.
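The core of GSEA is a running-sum statistic over a ranked gene list: walk down the list, step up on gene-set hits and down on misses, and take the maximum deviation from zero. Below is a simplified, unweighted (Kolmogorov-Smirnov-like) sketch; the published method additionally weights hits by correlation and assesses significance by permutation.

```python
def enrichment_score(ranked_genes, gene_set):
    """Simplified (unweighted) GSEA running-sum statistic: the
    maximum deviation from zero while walking the ranked list."""
    hits = [g in gene_set for g in ranked_genes]
    n_hits = sum(hits)
    n_miss = len(ranked_genes) - n_hits
    up, down = 1.0 / n_hits, 1.0 / n_miss  # steps sum to +1 and -1
    running, best = 0.0, 0.0
    for hit in hits:
        running += up if hit else -down
        if abs(running) > abs(best):
            best = running
    return best

ranked = ["G1", "G2", "G3", "G4", "G5", "G6"]  # e.g. sorted by correlation
print(enrichment_score(ranked, {"G1", "G2"}))  # → 1.0  (top of the list)
print(enrichment_score(ranked, {"G5", "G6"}))  # → -1.0 (bottom of the list)
```

A set concentrated at the top of the ranking pushes the running sum high before the misses pull it back down, which is exactly the signal GSEA looks for.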

34,830 citations

Journal Article
TL;DR: A new technique called t-SNE that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map, a variation of Stochastic Neighbor Embedding that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
Abstract: We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets.
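The two ingredients that distinguish t-SNE, Gaussian affinities in the high-dimensional space and heavy-tailed Student-t affinities in the low-dimensional map, together with the KL objective, can be sketched in NumPy. This sketch uses a single fixed bandwidth instead of the per-point perplexity calibration the method actually performs, and omits the gradient-descent optimization of the map.

```python
import numpy as np

def gaussian_affinities(X, sigma=1.0):
    """High-dimensional pairwise similarities p_ij. With one fixed
    bandwidth the matrix is symmetric; real t-SNE tunes sigma per
    point to match a target perplexity."""
    d2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)
    p = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(p, 0.0)
    return p / p.sum()

def student_t_affinities(Y):
    """Low-dimensional similarities q_ij with a heavy-tailed Student-t
    kernel: the key change from SNE that relieves crowding in the map."""
    d2 = np.square(Y[:, None, :] - Y[None, :, :]).sum(-1)
    q = 1.0 / (1.0 + d2)
    np.fill_diagonal(q, 0.0)
    return q / q.sum()

def kl_divergence(p, q, eps=1e-12):
    """t-SNE's objective, KL(P || Q); the map Y is moved by gradient
    descent to minimize it."""
    mask = p > 0
    return float((p[mask] * np.log((p[mask] + eps) / (q[mask] + eps))).sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 10))  # high-dimensional points
Y = rng.normal(size=(20, 2))   # random 2-D map initialization
print(kl_divergence(gaussian_affinities(X), student_t_affinities(Y)))
```

Because the t kernel decays polynomially rather than exponentially, moderately distant points can sit farther apart in the map without incurring a large KL penalty, which is what "reducing the tendency to crowd points together" refers to.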

30,124 citations

Journal ArticleDOI
TL;DR: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data; it provides a flexible and open design that facilitates submission, storage, and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments.
Abstract: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data. GEO provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments. GEO is not intended to replace in-house gene expression databases that benefit from coherent data sets, and which are constructed to facilitate a particular analytic method, but rather to complement these by acting as a tertiary, central data distribution hub. The three central data entities of GEO are platforms, samples and series, and were designed with gene expression and genomic hybridization experiments in mind. A platform is, essentially, a list of probes that define what set of molecules may be detected. A sample describes the set of molecules that are being probed and references a single platform used to generate its molecular abundance data. A series organizes samples into the meaningful data sets which make up an experiment. The GEO repository is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo.
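The platform/sample/series data model the abstract describes can be sketched as plain data classes. The fields are illustrative, not GEO's actual schema; the GPL/GSM/GSE accession prefixes, however, are GEO's real naming conventions.

```python
from dataclasses import dataclass, field

@dataclass
class Platform:
    accession: str       # e.g. "GPL..." accessions
    probes: list         # what set of molecules may be detected

@dataclass
class Sample:
    accession: str       # e.g. "GSM..." accessions
    platform: Platform   # each sample references exactly one platform
    abundance: dict      # probe -> measured molecular abundance

@dataclass
class Series:
    accession: str                         # e.g. "GSE..." accessions
    samples: list = field(default_factory=list)  # samples making up one experiment

chip = Platform("GPL0001", ["probe_1", "probe_2"])
s1 = Sample("GSM0001", chip, {"probe_1": 5.2, "probe_2": 0.8})
series = Series("GSE0001", [s1])
print(series.samples[0].platform.accession)  # "GPL0001"
```

The key constraint mirrored here is the one the abstract states: a sample points to a single platform, while a series groups samples (possibly across platforms) into an experiment.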

10,968 citations

Journal ArticleDOI
TL;DR: How BLAT was optimized is described, which is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences.
Abstract: Analyzing vertebrate genomes requires rapid mRNA/DNA and cross-species protein alignments. A new tool, BLAT, is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences. BLAT's speed stems from an index of all nonoverlapping K-mers in the genome. This index fits inside the RAM of inexpensive computers, and need only be computed once for each genome assembly. BLAT has several major stages. It uses the index to find regions in the genome likely to be homologous to the query sequence. It performs an alignment between homologous regions. It stitches together these aligned regions (often exons) into larger alignments (typically genes). Finally, BLAT revisits small internal exons possibly missed at the first stage and adjusts large gap boundaries that have canonical splice sites where feasible. This paper describes how BLAT was optimized. Effects on speed and sensitivity are explored for various K-mer sizes, mismatch schemes, and numbers of required index matches. BLAT is compared with other alignment programs on various test sets and then used in several genome-wide applications. http://genome.ucsc.edu hosts a web-based BLAT server for the human genome.
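BLAT's core indexing trick, non-overlapping genome k-mers looked up with overlapping query k-mers, can be sketched in a few lines. This is a toy illustration of the first (hit-finding) stage only; alignment, stitching, and splice-site adjustment are omitted.

```python
from collections import defaultdict

def build_kmer_index(genome, k):
    """Index NON-overlapping k-mers (step k, not 1): the trade-off
    that lets a whole-genome index fit in RAM."""
    index = defaultdict(list)
    for pos in range(0, len(genome) - k + 1, k):
        index[genome[pos:pos + k]].append(pos)
    return index

def candidate_hits(query, index, k):
    """Slide over the query one base at a time (overlapping k-mers)
    and look each up in the genome index; shared (gpos - qpos)
    diagonals suggest a homologous region worth aligning."""
    hits = []
    for qpos in range(len(query) - k + 1):
        for gpos in index.get(query[qpos:qpos + k], []):
            hits.append((qpos, gpos))
    return hits

genome = "ACGTACGTTTGGACGT"
index = build_kmer_index(genome, k=4)
print(candidate_hits("TTGGACGT", index, k=4))
# → [(0, 8), (4, 0), (4, 4), (4, 12)]
```

Here the hits (0, 8) and (4, 12) share the diagonal gpos - qpos = 8, pointing at the true match starting at genome position 8; the others are spurious repeats that later stages would discard.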

8,326 citations

Journal ArticleDOI
TL;DR: This paper proposed parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples.
Abstract: Non-biological experimental variation, or "batch effects," is commonly observed across multiple batches of microarray experiments, often rendering the task of combining data from these batches difficult. The ability to combine microarray data sets is advantageous to researchers to increase statistical power to detect biological phenomena from studies where logistical considerations restrict sample size or in studies that require the sequential hybridization of arrays. In general, it is inappropriate to combine data sets without adjusting for batch effects. Methods have been proposed to filter batch effects from data, but these are often complicated and require large batch sizes (>25) to implement. Because the majority of microarray studies are conducted using much smaller sample sizes, existing methods are not sufficient. We propose parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples. We illustrate our methods using two example data sets and show that our methods are justifiable, easy to apply, and useful in practice. Software for our method is freely available at: http://biosun1.harvard.edu/complab/batch/.
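The location/scale idea behind batch adjustment can be sketched as per-batch standardization of each gene followed by rescaling to the overall distribution. Note this deliberately omits the paper's actual contribution, the empirical Bayes shrinkage of per-batch estimates toward a common prior, which is what stabilizes small batches.

```python
import numpy as np

def batch_adjust(X, batches):
    """Simplified location/scale batch correction: standardize each
    gene within each batch, then restore the overall gene mean/std.
    (ComBat additionally shrinks per-batch estimates via empirical
    Bayes; this sketch does not.)"""
    X = np.asarray(X, dtype=float)
    out = np.empty_like(X)
    grand_mean, grand_std = X.mean(0), X.std(0)
    for b in np.unique(batches):
        rows = batches == b
        mu, sd = X[rows].mean(0), X[rows].std(0)
        out[rows] = (X[rows] - mu) / np.where(sd == 0, 1, sd)
    return out * grand_std + grand_mean

rng = np.random.default_rng(0)
expr = rng.normal(size=(40, 3))          # 40 samples x 3 genes
batches = np.array([0] * 20 + [1] * 20)
expr[batches == 1] += 3.0                # simulated additive batch shift
adjusted = batch_adjust(expr, batches)
# After adjustment the two batches have (near-)identical gene means.
print(np.abs(adjusted[:20].mean(0) - adjusted[20:].mean(0)).max())
```

The caveat the paper addresses: with batch sizes this small, raw per-batch means and variances are noisy estimates, and shrinking them toward a pooled prior (the empirical Bayes step) prevents over-correction.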

6,319 citations
