Journal ArticleDOI

A Next Generation Connectivity Map: L1000 Platform and the First 1,000,000 Profiles.

TL;DR: The expanded CMap is reported, made possible by a new, low-cost, high-throughput reduced representation expression profiling method that is shown to be highly reproducible, comparable to RNA sequencing, and suitable for computational inference of the expression levels of 81% of non-measured transcripts.
About: This article was published in Cell on 2017-11-30 and is currently open access. It has received 1,943 citations to date.
Citations
Journal ArticleDOI
TL;DR: The DRIM system is developed, an integrative multi-omics and time-series data analysis framework that identifies perturbed sub-pathways and regulation mechanisms upon drug treatment and provides analysis results of sub-pathway regulatory mechanisms in breast cancer cell lines of different drug sensitivity.
Abstract: Pharmacogenomics is the study of how genes affect a person's response to drugs. Understanding the effect of a drug at the molecular level can therefore help in both drug discovery and personalized medicine. Over the years, transcriptome data upon drug treatment has been collected, and several databases have compiled either cancer cell multi-omics data before drug treatment together with drug sensitivity measures (IC50, AUC), or time-series transcriptomic data after drug treatment. However, analyzing transcriptome data upon drug treatment is challenging since more than 20,000 genes interact in complex ways. In addition, due to the difficulty of both time-series analysis and multi-omics integration, current methods can hardly analyze databases with such different data characteristics. One effective approach is to interpret transcriptome data in terms of well-characterized biological pathways; another is to leverage state-of-the-art methods for multi-omics data integration. In this paper, we developed Drug Response analysis Integrating Multi-omics and time-series data (DRIM), an integrative multi-omics and time-series data analysis framework that identifies perturbed sub-pathways and regulation mechanisms upon drug treatment. The system takes a drug name and cell line identifiers, or the user's own control/treatment time-series gene expression data, as input. Analysis of multi-omics data upon drug treatment is then performed from two perspectives. For the multi-omics perspective, IC50-related multi-omics potential mediator genes are determined by embedding multi-omics data into a gene-centric vector space using a tensor decomposition method and an autoencoder deep learning model; perturbed pathway analysis of the potential mediator genes is then performed. For the time-series perspective, time-varying perturbed sub-pathways upon drug treatment are constructed.
Additionally, a network involving transcription factors (TFs), multi-omics potential mediator genes, and perturbed sub-pathways is constructed, and paths from TFs to perturbed pathways are determined by an influence maximization method. To demonstrate the utility of our system, we provide analysis results on sub-pathway regulatory mechanisms in breast cancer cell lines with different drug sensitivities. DRIM is available at: http://biohealth.snu.ac.kr/software/DRIM/.
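The embedding step described above (multi-omics features mapped into a gene-centric vector space, with the resulting representation reused downstream) can be illustrated with a minimal sketch. This is not DRIM's implementation: the matrix sizes are toy values, and a single-hidden-layer linear autoencoder trained by plain gradient descent stands in for the paper's tensor decomposition and deep autoencoder model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-omics feature matrix: rows are genes, columns are concatenated
# omics features (expression, methylation, CNV, ...); all sizes illustrative.
X = rng.normal(size=(200, 12))

def train_autoencoder(X, dim=4, lr=0.02, epochs=1000):
    """One-hidden-layer linear autoencoder trained by batch gradient descent.
    The hidden activations H = X @ W1 serve as gene-centric embeddings."""
    n, d = X.shape
    W1 = rng.normal(scale=0.1, size=(d, dim))
    W2 = rng.normal(scale=0.1, size=(dim, d))
    for _ in range(epochs):
        H = X @ W1                       # encode each gene's feature vector
        err = H @ W2 - X                 # reconstruction residual
        W2 -= lr * (H.T @ err) / n
        W1 -= lr * (X.T @ (err @ W2.T)) / n
    return X @ W1, float(np.mean(err ** 2))

embeddings, mse = train_autoencoder(X)
```

In DRIM-like pipelines the embedding would feed a downstream mediator-gene analysis; here it simply compresses 12 toy features per gene into 4 latent coordinates.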

9 citations

Journal ArticleDOI
Fangping Wan, Shuya Li, Tingzhong Tian, Yipin Lei, Dan Zhao, Jianyang Zeng
TL;DR: It is demonstrated that the cell-line-specific gene expression profiles derived from the shRNA perturbation experiments performed in the LINCS L1000 project can provide useful features for predicting SL interactions in humans.
Abstract: Synthetic lethality (SL), an important type of genetic interaction, can provide useful insight into the target identification process for the development of anticancer therapeutics. Although several well-established SL gene pairs have been verified to be conserved in humans, most SL interactions remain cell-line specific. Here, we demonstrated that the cell-line-specific gene expression profiles derived from the shRNA perturbation experiments performed in the LINCS L1000 project can provide useful features for predicting SL interactions in humans. In this paper, we developed a semi-supervised neural network-based method called EXP2SL to accurately identify SL interactions from the L1000 gene expression profiles. Through a systematic evaluation on the SL datasets of three different cell lines, we demonstrated that our model achieved better performance than the baseline methods and verified the effectiveness of using the L1000 gene expression features and the semi-supervised training strategy in SL prediction.
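Because an SL relation has no ordering, any pairwise scorer built on expression features should be symmetric in its two genes. The sketch below illustrates that shape only; it is not EXP2SL: the signatures are random stand-ins for L1000 shRNA knockdown profiles, and the bilinear form M would in practice be learned from labeled SL pairs rather than drawn at random.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-ins for L1000 shRNA knockdown signatures (978 landmark
# genes per profile); real features would come from the LINCS Level 5 data.
sig = {g: rng.normal(size=978) for g in ("GENE_A", "GENE_B", "GENE_C")}

M = rng.normal(size=(978, 978))
M = (M + M.T) / 2                 # symmetrize so the score is order-invariant

def sl_score(g1, g2):
    """Symmetric bilinear score for a candidate SL pair: f(a, b) == f(b, a).
    A trained model would learn M from labeled pairs; here M is random."""
    a, b = sig[g1], sig[g2]
    return 1.0 / (1.0 + np.exp(-a @ M @ b / 978))
```

Symmetrizing M guarantees sl_score(a, b) equals sl_score(b, a), so predictions do not depend on which gene of the pair is listed first.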

9 citations


Cites background or methods from "A Next Generation Connectivity Map:..."

  • ...Notably, in the original L1000 preprocessing pipeline (Subramanian et al., 2017), the control profiles were replaced by all the profiles on the plate, called population control....


  • ...For a given cell line, suppose that there are N genes (marked as the indices 1, 2, …, N) with measured shRNA data from the LINCS L1000 project (Subramanian et al., 2017)....


  • ...Specifically, the raw data from the LINCS L1000 project were preprocessed based on the pipeline in the original paper (Subramanian et al., 2017) with minor modifications. We first directly obtained the Level 3 data from L1000, which contained the quantile normalized gene expression profiles....


  • ...…of shRNAs and obtain a robust signature for each single gene, the z-scores obtained from the replicated trials of the same shRNA were first processed using an algorithm with L1000 Level 5 data (Subramanian et al., 2017), then the same protocol was used to reduce the shRNAs targeting the same gene....


  • ...The cell-line-specific gene expression profiles resulting from the shRNA knockdown experiments in the LINCS L1000 project (Subramanian et al., 2017) were used to capture the information of cell-line-specific genetic background....

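The replicate-collapsing step quoted above (L1000 Level 5) weights each replicate profile by its agreement with the others, so that a discordant replicate contributes little to the final signature. A simplified numpy sketch of that weighting idea follows; the actual MODZ procedure in the Subramanian et al. pipeline differs in its details, so treat this as an illustration only.

```python
import numpy as np

def moderated_zscore(reps):
    """Collapse replicate z-score profiles (replicates x genes) into one
    signature, weighting each replicate by its mean rank correlation with
    the others; a simplified sketch of the Level 5 'MODZ' weighting."""
    reps = np.asarray(reps, dtype=float)
    k = reps.shape[0]
    if k == 1:
        return reps[0]
    ranks = np.argsort(np.argsort(reps, axis=1), axis=1)  # Spearman via ranks
    corr = np.corrcoef(ranks)
    w = (corr.sum(axis=1) - 1.0) / (k - 1)  # mean correlation with the others
    w = np.clip(w, 0.01, None)              # small floor, then normalize
    w /= w.sum()
    return w @ reps

# Two concordant replicates and one noisy outlier: the outlier is down-weighted.
good1 = np.array([2.0, -1.0, 0.5, 3.0])
good2 = np.array([1.8, -0.9, 0.4, 2.7])
outlier = np.array([-2.0, 1.0, 3.0, -1.0])
sig = moderated_zscore([good1, good2, outlier])
```

The collapsed signature tracks the two concordant replicates because the outlier's anti-correlated ranks drive its weight down to the floor.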

Journal ArticleDOI
TL;DR: An algorithm is developed to highlight the hub genes that play a major role in creating the upstream and downstream relationships within a given set of differentially expressed genes; it can facilitate understanding of drug-induced gene regulatory networks and help find potentially functional genes.
Abstract: Masaru Koido, Yuri Tani, Satomi Tsukahara, Yuka Okamoto and Akihiro Tomida (Cancer Chemotherapy Center, Japanese Foundation for Cancer Research, 3-8-31 Ariake, Koto-ku, Tokyo 135-8550, Japan). Correspondence to: Akihiro Tomida, email: akihiro.tomida@jfcr.or.jp. Keywords: drug-induced gene expression change; transcriptome; network analysis; data mining; LINCS. Received: March 06, 2018; Accepted: May 19, 2018; Published: June 26, 2018. It has been difficult to elucidate the structure of gene regulatory networks under anticancer drug treatment. Here, we developed an algorithm to highlight the hub genes that play a major role in creating the upstream and downstream relationships within a given set of differentially expressed genes. The directionality of the relationships between genes was defined using information from comprehensive collections of transcriptome profiles after gene knockdown and overexpression. As expected, among the drug-perturbed genes, our algorithm tended to derive plausible hub genes, such as transcription factors. Our validation experiments successfully showed the anticipated activity of a certain hub gene in establishing the gene regulatory network that was associated with cell growth inhibition. Notably, such top priority for the hub gene was not achieved by ranking fold change in expression or by conventional gene set enrichment analysis of drug-induced transcriptome data. Thus, our data-driven approach can facilitate understanding of drug-induced gene regulatory networks for finding potentially functional genes.

9 citations


Cites background or methods from "A Next Generation Connectivity Map:..."

  • ...Normalized transcriptome data of 14 compounds on OCI-LY3 cells were obtained from NCBI Gene Expression Omnibus under the series accession no....


  • ...The microarray datasets of MYC siRNA experiments were deposited in the NCBI Gene Expression Omnibus under the series accession no....


  • ...Furthermore, based on measured expression levels of landmark genes, the expression levels of ~21,000 unmeasured genes were inferred by a linear regression model, in which the weight coefficient was estimated from the substantial transcriptome data [10, 11]....


  • ...978 landmark genes were determined as informative genes from multivariate analysis using 12063 public transcriptome microarray data catalogued in the Gene Expression Omnibus [10]....


  • ...3 million cell conditions, consisting of compound treatments (multiple doses) and genetic perturbation treatments (knockdown by shRNA, overexpression, and ligand treatment) at multiple time points in several different cell lines [10]....

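The inference scheme quoted above (a linear model mapping 978 landmark readouts to the remaining transcriptome, with weights estimated from public GEO data) reduces to ordinary least squares. A toy sketch with made-up dimensions, assuming a purely linear relationship:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the reference corpus: n profiles over 10 "landmark" genes
# and 30 "unmeasured" genes that depend linearly on the landmarks (as in the
# GEO-trained model the quote describes; all dimensions are illustrative).
n, n_land, n_unmeas = 500, 10, 30
L = rng.normal(size=(n, n_land))
W_true = rng.normal(size=(n_land, n_unmeas))
U = L @ W_true + 0.05 * rng.normal(size=(n, n_unmeas))

# Estimate the weight matrix by least squares on the reference data...
W_hat, *_ = np.linalg.lstsq(L, U, rcond=None)

# ...then infer unmeasured expression for a new profile of landmark readouts.
new_landmarks = rng.normal(size=n_land)
inferred = new_landmarks @ W_hat
```

With enough reference profiles the recovered weights approach the generating ones, which is the premise behind inferring ~21,000 genes from 978 measured landmarks.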

Posted ContentDOI
02 Aug 2018, bioRxiv
TL;DR: A network-based strategy is devised that selects genes that are both well correlated with drug response and close together in the human protein interaction network, and it is confirmed that the identified drug sensitivity modules are tightly connected to the mechanisms of action of the drugs.
Abstract: The integration of large-scale drug sensitivity screens and genome-wide experiments is changing the field of pharmacogenomics, revealing molecular determinants of drug response without the need for a priori, hypothesis-driven assumptions about drug action. In particular, transcriptomic signatures of drug sensitivity may guide drug repositioning, the discovery of synergistic drug combinations and suggest new therapeutic biomarkers. However, the inherent complexity of transcriptomic signatures, with thousands of genes differentially expressed, makes them hard to interpret, giving poor mechanistic insights and hampering translation to the clinic. Here we show how network biology can help simplify transcriptomic drug signatures, filtering out irrelevant genes, accounting for tissue-specific biases and ultimately yielding functionally-coherent, less noisy drug modules. We successfully analyzed 170 drugs tested in 637 cancer cell lines, demonstrating the broad applicability of our approach and evincing an intimate relationship between modules’ gene expression levels and drugs’ mechanisms of action. Further, we have characterized multiple aspects of our transcriptomic modules. As a result, the drugs included in this study are now annotated well beyond the reductionist (target-centered) view.

9 citations


Cites background from "A Next Generation Connectivity Map:..."

  • ...The L1000 Next-generation Connectivity Map, for instance, contains about one million post-treatment gene expression signatures for about twenty thousand molecules [65]....


  • ...In this line, the release of CCL screens with readouts other than growth inhibition or proliferation rate [65, 66] will help unveil the connections between the genetic background of the cells and the phenotypic outcome of drug treatment....


Journal ArticleDOI
TL;DR: In this article, a profile-based screening approach was proposed to identify compounds that match to pathways based on the phenotypic information in public cell image data, created using the Cell Painting assay.
Abstract:


Identifying the chemical regulators of biological pathways is a time-consuming bottleneck in developing therapeutics and research compounds. Typically, thousands to millions of candidate small molecules are tested in target-based biochemical screens or phenotypic cell-based screens, both expensive experiments customized to each disease. Here, our uncustomized, virtual, profile-based screening approach instead identifies compounds that match to pathways based on the phenotypic information in public cell image data, created using the Cell Painting assay. Our straightforward correlation-based computational strategy retrospectively uncovered the expected, known small-molecule regulators for 32% of positive-control gene queries. In prospective, discovery mode, we efficiently identified new compounds related to three query genes and validated them in subsequent gene-relevant assays, including compounds that phenocopy or pheno-oppose YAP1 overexpression and kill a Yap1-dependent sarcoma cell line. This image-profile-based approach could replace many customized labor- and resource-intensive screens and accelerate the discovery of biologically and therapeutically useful compounds.
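The correlation-based matching strategy the summary describes can be sketched directly: score each library compound's morphological profile against a gene-query profile and rank the results, reading strong positive correlation as phenocopying and strong negative correlation as pheno-opposing. All profiles and names below are synthetic stand-ins, not Cell Painting data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative morphological profiles (one feature vector each) for a gene
# query and a small compound library.
query = rng.normal(size=50)                          # e.g. a gene overexpression profile
library = {"cpd_%d" % i: rng.normal(size=50) for i in range(5)}
library["phenocopy"] = query + 0.1 * rng.normal(size=50)
library["pheno_oppose"] = -query + 0.1 * rng.normal(size=50)

def match(query, library):
    """Rank compounds by Pearson correlation with the query profile;
    strong positive = phenocopy, strong negative = pheno-opposing."""
    scores = {name: float(np.corrcoef(query, prof)[0, 1])
              for name, prof in library.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranked = match(query, library)
```

The planted phenocopy lands at the top of the ranking and the planted opposer at the bottom, which is the behavior the retrospective benchmark in the summary relies on.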

9 citations

References
Journal ArticleDOI
TL;DR: The Gene Set Enrichment Analysis (GSEA) method as discussed by the authors focuses on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation.
Abstract: Although genomewide RNA expression analysis has become a routine tool in biomedical research, extracting biological insight from such information remains a major challenge. Here, we describe a powerful analytical method called Gene Set Enrichment Analysis (GSEA) for interpreting gene expression data. The method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation. We demonstrate how GSEA yields insights into several cancer-related data sets, including leukemia and lung cancer. Notably, where single-gene analysis finds little similarity between two independent studies of patient survival in lung cancer, GSEA reveals many biological pathways in common. The GSEA method is embodied in a freely available software package, together with an initial database of 1,325 biologically defined gene sets.
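At its core, GSEA computes a running-sum statistic over a ranked gene list: step up at members of the gene set, step down otherwise, and report the maximum deviation from zero. The sketch below is the unweighted variant only; the published method weights steps by each gene's correlation with the phenotype and adds permutation-based significance estimation.

```python
def enrichment_score(ranked_genes, gene_set):
    """GSEA-style running sum (unweighted variant): walk down the ranked
    list, step up on set members and down otherwise; the enrichment score
    is the running sum's maximum deviation from zero."""
    hits = [g in gene_set for g in ranked_genes]
    n, nh = len(ranked_genes), sum(hits)
    up, down = 1.0 / nh, 1.0 / (n - nh)
    running, es = 0.0, 0.0
    for h in hits:
        running += up if h else -down
        if abs(running) > abs(es):
            es = running
    return es

# A set concentrated at the top of the ranking scores near +1.
ranked = ["G%d" % i for i in range(10)]
```

Sets clustered at the top of the list give a strongly positive score, and sets clustered at the bottom give a strongly negative one.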

34,830 citations

Journal Article
TL;DR: A new technique called t-SNE visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map; it is a variation of Stochastic Neighbor Embedding that is much easier to optimize and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
Abstract: We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets.
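The "crowding" fix mentioned in the abstract comes from the low-dimensional kernel: t-SNE replaces SNE's Gaussian similarity with a Student-t kernel (one degree of freedom), whose heavy tail keeps moderately dissimilar points from being pulled into the center of the map. A small numeric comparison of the two kernels:

```python
import numpy as np

# Pairwise distances in the low-dimensional map (illustrative values).
d = np.array([0.5, 2.0, 5.0])

gauss = np.exp(-d ** 2)          # SNE-style Gaussian kernel
student_t = 1.0 / (1.0 + d ** 2) # t-SNE's Student-t (1 d.o.f.) kernel

# The heavy tail: the t kernel retains relatively more similarity mass
# at larger distances, so the ratio grows with distance.
ratio = student_t / gauss
```

Because moderate distances still carry non-negligible similarity under the t kernel, the optimizer is free to place moderately dissimilar clusters far apart instead of compressing them toward the map's center.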

30,124 citations

Journal ArticleDOI
TL;DR: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data and provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments.
Abstract: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data. GEO provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments. GEO is not intended to replace in-house gene expression databases that benefit from coherent data sets, and which are constructed to facilitate a particular analytic method, but rather complement these by acting as a tertiary, central data distribution hub. The three central data entities of GEO are platforms, samples and series, and were designed with gene expression and genomic hybridization experiments in mind. A platform is, essentially, a list of probes that define what set of molecules may be detected. A sample describes the set of molecules that are being probed and references a single platform used to generate its molecular abundance data. A series organizes samples into the meaningful data sets which make up an experiment. The GEO repository is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo.

10,968 citations

Journal ArticleDOI
TL;DR: How BLAT was optimized is described; BLAT is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments, and 50 times faster for protein alignments, at sensitivity settings typically used when comparing vertebrate sequences.
Abstract: Analyzing vertebrate genomes requires rapid mRNA/DNA and cross-species protein alignments. A new tool, BLAT, is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences. BLAT's speed stems from an index of all nonoverlapping K-mers in the genome. This index fits inside the RAM of inexpensive computers, and need only be computed once for each genome assembly. BLAT has several major stages. It uses the index to find regions in the genome likely to be homologous to the query sequence. It performs an alignment between homologous regions. It stitches together these aligned regions (often exons) into larger alignments (typically genes). Finally, BLAT revisits small internal exons possibly missed at the first stage and adjusts large gap boundaries that have canonical splice sites where feasible. This paper describes how BLAT was optimized. Effects on speed and sensitivity are explored for various K-mer sizes, mismatch schemes, and number of required index matches. BLAT is compared with other alignment programs on various test sets and then used in several genome-wide applications. http://genome.ucsc.edu hosts a web-based BLAT server for the human genome.
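The nonoverlapping k-mer index behind BLAT's speed can be sketched in a few lines of Python. The toy genome, k=4, and the (query_pos, genome_pos) seed format are illustrative; a real aligner would then extend and stitch these seed hits into alignments as the abstract describes.

```python
def build_kmer_index(genome, k=4):
    """Index NONoverlapping k-mers (stride k), as in BLAT: far fewer index
    entries than an overlapping index, at a modest cost in lookup work."""
    index = {}
    for pos in range(0, len(genome) - k + 1, k):
        index.setdefault(genome[pos:pos + k], []).append(pos)
    return index

def find_seed_hits(index, query, k=4):
    """Slide every overlapping k-mer of the query over the index; each hit
    is a (query_pos, genome_pos) seed for a candidate homologous region."""
    hits = []
    for qpos in range(len(query) - k + 1):
        for gpos in index.get(query[qpos:qpos + k], []):
            hits.append((qpos, gpos))
    return hits

genome = "ACGTACGTTTGACCAGT"
idx = build_kmer_index(genome)
hits = find_seed_hits(idx, "ACGTTTGA")
```

Only the genome is indexed at stride k, while the query is scanned at every offset, so any shared k-mer still produces at least one seed hit.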

8,326 citations

Journal ArticleDOI
TL;DR: This paper proposed parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples.
Abstract: Non-biological experimental variation or “batch effects” are commonly observed across multiple batches of microarray experiments, often rendering the task of combining data from these batches difficult. The ability to combine microarray data sets is advantageous to researchers to increase statistical power to detect biological phenomena from studies where logistical considerations restrict sample size or in studies that require the sequential hybridization of arrays. In general, it is inappropriate to combine data sets without adjusting for batch effects. Methods have been proposed to filter batch effects from data, but these are often complicated and require large batch sizes (>25) to implement. Because the majority of microarray studies are conducted using much smaller sample sizes, existing methods are not sufficient. We propose parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples. We illustrate our methods using two example data sets and show that our methods are justifiable, easy to apply, and useful in practice. Software for our method is freely available at: http://biosun1.harvard.edu/complab/batch/.
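The simplest version of the problem the abstract describes is an additive, per-gene location shift between batches. The sketch below performs only that mean adjustment; it is a deliberately reduced stand-in for the paper's empirical Bayes method, which additionally shrinks per-batch location and scale estimates across genes so the correction stays stable for small batches.

```python
import numpy as np

def center_batches(X, batches):
    """Location-only batch adjustment: subtract each batch's per-gene mean
    and add back the grand mean, so batch means coincide afterwards."""
    X = np.asarray(X, dtype=float)
    batches = np.asarray(batches)
    grand = X.mean(axis=0)
    out = X.copy()
    for b in np.unique(batches):
        mask = batches == b
        out[mask] += grand - X[mask].mean(axis=0)
    return out

rng = np.random.default_rng(4)
X = rng.normal(size=(20, 5))     # 20 samples x 5 genes, illustrative
X[:10] += 3.0                    # simulated additive shift in batch 0
adj = center_batches(X, [0] * 10 + [1] * 10)
```

After adjustment the two batches share the same per-gene mean while the overall mean of the data is preserved; the empirical Bayes machinery in the paper exists to do this reliably when batches are too small to estimate such parameters gene-by-gene.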

6,319 citations
