Journal ArticleDOI

A Next Generation Connectivity Map: L1000 Platform and the First 1,000,000 Profiles.

TL;DR: The expanded CMap is reported, made possible by a new, low-cost, high-throughput reduced representation expression profiling method that is shown to be highly reproducible, comparable to RNA sequencing, and suitable for computational inference of the expression levels of 81% of non-measured transcripts.
About: This article is published in Cell. The article was published on 2017-11-30 and is currently open access. It has received 1,943 citations to date.
Citations
Posted ContentDOI
30 Sep 2018-bioRxiv
TL;DR: A disease interaction network inferred from similarities in patients’ molecular profiles is presented, which significantly recapitulates epidemiologically documented comorbidities, providing the basis for their interpretation at a molecular level.
Abstract: Comorbidity is an impactful medical problem that is attracting increasing attention in healthcare and biomedical research. However, little is known about the molecular processes leading to the development of a specific disease in patients affected by other conditions. We present a disease interaction network inferred from similarities in patients' molecular profiles, which significantly recapitulates epidemiologically documented comorbidities, providing the basis for their interpretation at a molecular level. Furthermore, expanding on the analysis of subgroups of patients with similar molecular profiles, our approach discovers comorbidity relations not previously described, implicates distinct genes in such relations, and identifies drugs whose side effects are potentially associated with the observed comorbidities.

3 citations

Posted ContentDOI
11 Apr 2020-bioRxiv
TL;DR: A library of Food and Drug Administration-approved drugs is screened using a simple assay in the nematode C. elegans; three compounds are found to cause morphological changes, supporting the continued exploration of current medicines in a variety of model organisms to better understand drugs already prescribed to millions of patients.
Abstract: The urgent need for treatments limits studies of therapeutic drugs before approval by regulatory agencies. Analyses of drugs after approval can therefore improve our understanding of their mechanism of action and enable better therapies. We screened a library of 1443 Food and Drug Administration (FDA)-approved drugs using a simple assay in the nematode C. elegans and found three compounds that caused morphological changes. While the anticoagulant ticlopidine and the antifungal sertaconazole caused morphologically distinct pharyngeal defects upon acute exposure, the proton-pump inhibitor dexlansoprazole caused molting defects and required exposure during larval development. Such easily detectable defects in a powerful genetic model system advocate the continued exploration of current medicines using a variety of model organisms to better understand drugs already prescribed to millions of patients.

3 citations


Cites background from "A Next Generation Connectivity Map:..."

  • ...Intriguingly, the set of genes affected by ticlopidine in human cell lines are negatively connected with genes affected by lansoprazole, a racemic mixture of levolansoprazole and dexlansoprazole (2nd rank on CMap [29]), suggesting a possible convergence of these two disparate compounds on the same molecular effectors....

    [...]


Posted ContentDOI
22 Jun 2019-bioRxiv
TL;DR: This study demonstrates that IMC data expand the number of measured parameters in single cells and brings higher-dimension analysis to the field of cell-based screening in early lead compound discovery.
Abstract: In pharmaceutical research, high-content screening is an integral part of lead candidate development. Measuring drug response in vitro over 40 parameters including biomarkers, signaling molecules, cell morphological changes, proliferation indices and toxicity in a single sample could significantly enhance discovery of new therapeutics. As a proof of concept, we present a workflow for multidimensional Imaging Mass Cytometry (IMC™) and data processing with open source computational tools. CellProfiler was used to identify single cells through establishing cellular boundaries, followed by histoCAT™ (histology topography cytometry analysis toolbox) for extracting single-cell quantitative information visualized as t-SNE plots and heatmaps. Human breast cancer-derived cell lines SKBR3, HCC1143 and MCF-7 were screened for expression of cellular markers to generate digital images with a resolution comparable to conventional fluorescence microscopy. Predicted pharmacodynamic effects were measured in MCF-7 cells dosed with three target-specific compounds: growth stimulatory EGF, microtubule depolymerization agent nocodazole and genotoxic chemotherapeutic drug etoposide. We show strong pairwise correlation between nuclear markers pHistone3 S28, Ki-67, and p4E-BP1 T37/T46 in classified mitotic cells and anti-correlation with cell surface markers. Our study demonstrates that IMC data expands the number of measured parameters in single cells and brings higher-dimension analysis to the field of cell-based screening in early lead compound discovery.

3 citations

Book ChapterDOI
01 Jan 2019
TL;DR: This chapter describes in detail the main methods, applications, and computational resources for drug repositioning from transcriptome data, the data type that has driven the most progress in the field.
Abstract: Traditional drug-discovery approaches are based on the high-throughput screening of thousands of molecules simultaneously in order to identify compounds that show activity against therapeutic targets. This is a very costly and time-consuming process, and cost-effective alternative techniques, mostly based on computational approaches, are one of the main focuses of research in this sector. In this context, drug repurposing is a potential alternative for drug discovery that addresses the problem of the high-cost/efficiency ratio of traditional pipelines. Drug repurposing is based on using existing drugs for new therapeutic applications, which also saves a lot of time because they may directly translate into phase II or III clinical trials. In this context, the development of -omics techniques and the accumulation of large volumes of data, especially whole-genome gene expression data, have allowed researchers to develop new computational approaches for effective drug repurposing. In this chapter, we will describe in detail the main methods, applications, and computational resources for drug repositioning from transcriptome data, the type of data that has made the most progress in the field.
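The signature-matching idea behind this kind of transcriptome-based repurposing can be sketched in a few lines. This is a toy illustration only, not the chapter's (or CMap's) actual scoring: the function name and the rank convention are invented for the example, and real connectivity scoring uses a weighted Kolmogorov-Smirnov statistic.

```python
def connectivity_score(drug_ranks, up_genes, down_genes):
    """Toy connectivity score for signature matching.

    drug_ranks maps gene -> rank in the drug's expression signature
    (0 = most strongly up-regulated by the drug). Returns the mean
    normalized rank of the disease's up-genes minus that of its
    down-genes: negative = the drug mimics the disease signature,
    positive = the drug reverses it (a repurposing candidate).
    """
    n = len(drug_ranks)

    def mean_rank(genes):
        present = [drug_ranks[g] for g in genes if g in drug_ranks]
        return sum(present) / (n * len(present))

    return mean_rank(up_genes) - mean_rank(down_genes)
```

Ranking every drug profile in a compendium by this score, then inspecting the strongest reversers, is the basic query pattern the chapter's resources support.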

3 citations

Posted ContentDOI
03 Apr 2020-bioRxiv
TL;DR: A novel Siamese spectral-based graph convolutional network model for inferring the protein targets of chemical compounds from gene transcriptional profiles that was successfully trained to learn from known compound-target pairs by uncovering the hidden correlations between compound perturbation profiles and gene knockdown profiles.
Abstract: Computational target fishing aims to investigate the mechanism of action or the side effects of bioactive small molecules. Unfortunately, conventional ligand-based computational methods only explore a confined chemical space, and structure-based methods are limited by the availability of crystal structures. Moreover, these methods cannot describe cellular context-dependent effects and are thus not useful for exploring the targets of drugs in specific cells. To address these challenges, we propose a novel Siamese spectral-based graph convolutional network (SSGCN) model for inferring the protein targets of chemical compounds from gene transcriptional profiles. Although the gene signature of a compound perturbation only provides indirect clues of the interacting targets, the SSGCN model was successfully trained to learn from known compound-target pairs by uncovering the hidden correlations between compound perturbation profiles and gene knockdown profiles. Using a benchmark set, the model achieved impressive target inference results compared with previous methods such as Connectivity Map and ProTINA. More importantly, the powerful generalization ability of the model observed with the external LINCS phase II dataset suggests that the model is an efficient target fishing or repositioning tool for bioactive compounds.

3 citations


Cites background or methods from "A Next Generation Connectivity Map:..."

  • ...For example, the LINCS L1000 dataset (17) is a comprehensive resource of gene expression changes observed in human cell lines perturbed with small molecules and genetic constructs....

    [...]

  • ...As revealed in the original study (17), the similarity between shRNAs targeting the same gene is only slightly greater than random....

    [...]

  • ...shRNA experiments might exhibit off-target effects due to the “shared seed” sequence among shRNAs (17,39)....

    [...]

  • ...The comparative analysis-based methods infer targets based on gene signature similarities (17,24,26)....

    [...]

  • ...coverages of the target space (Supplementary Figure S1) and batch effects such as temperature, wetness and different laboratory technicians (17,49), the overall results of the SSGCN model are still highly impressive....

    [...]

References
Journal ArticleDOI
TL;DR: The Gene Set Enrichment Analysis (GSEA) method as discussed by the authors focuses on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation.
Abstract: Although genomewide RNA expression analysis has become a routine tool in biomedical research, extracting biological insight from such information remains a major challenge. Here, we describe a powerful analytical method called Gene Set Enrichment Analysis (GSEA) for interpreting gene expression data. The method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation. We demonstrate how GSEA yields insights into several cancer-related data sets, including leukemia and lung cancer. Notably, where single-gene analysis finds little similarity between two independent studies of patient survival in lung cancer, GSEA reveals many biological pathways in common. The GSEA method is embodied in a freely available software package, together with an initial database of 1,325 biologically defined gene sets.
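The core of GSEA is a Kolmogorov-Smirnov-style running sum over the ranked gene list. Below is a minimal unweighted sketch of that running sum; the published method additionally weights steps by correlation with the phenotype and assesses significance by permutation.

```python
def enrichment_score(ranked_genes, gene_set):
    """Simplified (unweighted) GSEA enrichment score.

    Walk down the ranked list, stepping up on gene-set hits and
    down on misses; the ES is the running sum's maximum deviation
    from zero (positive = set concentrated at the top).
    """
    hits = set(gene_set)
    n, n_hits = len(ranked_genes), len(hits)
    step_hit = 1.0 / n_hits
    step_miss = 1.0 / (n - n_hits)
    running, es = 0.0, 0.0
    for gene in ranked_genes:
        running += step_hit if gene in hits else -step_miss
        if abs(running) > abs(es):
            es = running
    return es
```

A set sitting entirely at the top of the ranking scores +1; one at the bottom scores -1; a set scattered uniformly stays near 0.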

34,830 citations

Journal Article
TL;DR: A new technique called t-SNE that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map, a variation of Stochastic Neighbor Embedding that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
Abstract: We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets.
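t-SNE's key departure from SNE is the heavy-tailed Student-t (Cauchy) kernel for affinities in the low-dimensional map, which is what relieves the crowding of points in the map's center. A minimal sketch of those map affinities follows; it is illustrative only, since a full implementation also needs the high-dimensional Gaussian affinities with perplexity calibration and gradient-descent optimization of the KL divergence.

```python
def student_t_affinities(points):
    """q_ij affinities of map points as used by t-SNE.

    Each pair gets an unnormalized weight 1 / (1 + d^2), the
    Student-t (Cauchy) kernel on squared Euclidean distance, then
    all q_ij are normalized to sum to 1 over the whole matrix.
    """
    n = len(points)
    q = [[0.0] * n for _ in range(n)]
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # a point has no affinity with itself
            d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            q[i][j] = 1.0 / (1.0 + d2)
            total += q[i][j]
    return [[v / total for v in row] for row in q]
```

Because the tail falls off polynomially rather than exponentially, moderately distant pairs retain non-negligible affinity, so the optimizer is free to push dissimilar clusters far apart.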

30,124 citations

Journal ArticleDOI
TL;DR: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data and provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments.
Abstract: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data. GEO provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments. GEO is not intended to replace in-house gene expression databases that benefit from coherent data sets, and which are constructed to facilitate a particular analytic method, but rather complement these by acting as a tertiary, central data distribution hub. The three central data entities of GEO are platforms, samples and series, and were designed with gene expression and genomic hybridization experiments in mind. A platform is, essentially, a list of probes that define what set of molecules may be detected. A sample describes the set of molecules that are being probed and references a single platform used to generate its molecular abundance data. A series organizes samples into the meaningful data sets which make up an experiment. The GEO repository is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo.
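The platform/sample/series relationships described above can be mirrored in a few toy records. This only illustrates the containment structure; it is not GEO's actual schema, accession format, or API.

```python
from dataclasses import dataclass, field


@dataclass
class Platform:
    """A list of probes defining what molecules can be detected."""
    accession: str
    probes: list


@dataclass
class Sample:
    """One profiled specimen; references exactly one platform."""
    accession: str
    platform: Platform
    values: dict  # probe id -> measured molecular abundance


@dataclass
class Series:
    """Organizes samples into the data set making up an experiment."""
    accession: str
    samples: list = field(default_factory=list)
```

The one-platform-per-sample constraint is what lets a series mix samples from different array designs while each sample's measurements stay interpretable against its own probe list.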

10,968 citations

Journal ArticleDOI
TL;DR: This paper describes how BLAT was optimized; BLAT is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments, and 50 times faster for protein alignments, at sensitivity settings typically used when comparing vertebrate sequences.
Abstract: Analyzing vertebrate genomes requires rapid mRNA/DNA and cross-species protein alignments. A new tool, BLAT, is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences. BLAT's speed stems from an index of all nonoverlapping K-mers in the genome. This index fits inside the RAM of inexpensive computers, and need only be computed once for each genome assembly. BLAT has several major stages. It uses the index to find regions in the genome likely to be homologous to the query sequence. It performs an alignment between homologous regions. It stitches together these aligned regions (often exons) into larger alignments (typically genes). Finally, BLAT revisits small internal exons possibly missed at the first stage and adjusts large gap boundaries that have canonical splice sites where feasible. This paper describes how BLAT was optimized. Effects on speed and sensitivity are explored for various K-mer sizes, mismatch schemes, and number of required index matches. BLAT is compared with other alignment programs on various test sets and then used in several genome-wide applications. http://genome.ucsc.edu hosts a web-based BLAT server for the human genome.
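The nonoverlapping K-mer index at the heart of BLAT's seeding stage can be sketched directly from the description above. This is a toy version: the real tool also handles mismatch schemes, a configurable number of required index matches, translated protein search, and the later stitching stages.

```python
def kmer_index(genome, k):
    """Index of nonoverlapping k-mers (stride k, as in BLAT):
    maps each k-mer to the genome positions where it starts."""
    index = {}
    for pos in range(0, len(genome) - k + 1, k):
        index.setdefault(genome[pos:pos + k], []).append(pos)
    return index


def seed_hits(index, query, k):
    """Slide over the query one base at a time, look up every
    k-mer, and return (query_offset, genome_position) seed pairs
    for later clumping into candidate homologous regions."""
    hits = []
    for i in range(len(query) - k + 1):
        for pos in index.get(query[i:i + k], []):
            hits.append((i, pos))
    return hits
```

Because the genome side is sampled every k bases while the query side is scanned at every offset, the index is k times smaller than an overlapping one yet still guarantees a seed for any exact match of length 2k-1 or more.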

8,326 citations

Journal ArticleDOI
TL;DR: This paper proposes parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples.
Abstract: SUMMARY Non-biological experimental variation or "batch effects" are commonly observed across multiple batches of microarray experiments, often rendering the task of combining data from these batches difficult. The ability to combine microarray data sets is advantageous to researchers to increase statistical power to detect biological phenomena from studies where logistical considerations restrict sample size or in studies that require the sequential hybridization of arrays. In general, it is inappropriate to combine data sets without adjusting for batch effects. Methods have been proposed to filter batch effects from data, but these are often complicated and require large batch sizes (>25) to implement. Because the majority of microarray studies are conducted using much smaller sample sizes, existing methods are not sufficient. We propose parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples. We illustrate our methods using two example data sets and show that our methods are justifiable, easy to apply, and useful in practice. Software for our method is freely available at: http://biosun1.harvard.edu/complab/batch/.
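As a point of contrast with the empirical Bayes frameworks above, the simplest possible batch adjustment is per-batch mean-centering: shift every batch so its per-gene mean matches the grand mean. This sketch is deliberately crude and is not ComBat; the paper's method also adjusts scale and shrinks the per-batch estimates toward pooled values, which is what makes it robust when batches are small.

```python
def center_batches(X, batches):
    """Location-only batch adjustment.

    X is a samples-by-genes matrix (list of rows); batches gives a
    batch label per row. Each batch is shifted so that its per-gene
    mean equals the grand mean across all samples.
    """
    n, p = len(X), len(X[0])
    grand = [sum(row[j] for row in X) / n for j in range(p)]
    out = [row[:] for row in X]
    for b in set(batches):
        rows = [i for i, lab in enumerate(batches) if lab == b]
        for j in range(p):
            batch_mean = sum(X[i][j] for i in rows) / len(rows)
            for i in rows:
                out[i][j] = X[i][j] - batch_mean + grand[j]
    return out
```

With only two or three samples per batch, these raw per-batch means are noisy, precisely the regime where the paper's shrinkage-based estimates outperform direct centering.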

6,319 citations
