Journal ArticleDOI

A Next Generation Connectivity Map: L1000 Platform and the First 1,000,000 Profiles.

TL;DR: The expanded CMap is reported, made possible by a new, low-cost, high-throughput reduced representation expression profiling method that is shown to be highly reproducible, comparable to RNA sequencing, and suitable for computational inference of the expression levels of 81% of non-measured transcripts.
About: This article was published in Cell on 2017-11-30 and is currently open access. It has received 1,943 citations to date.
Citations
Journal ArticleDOI
TL;DR: Linking drug and gene dependency together with genomic data sets uncovered contexts in which molecular networks when perturbed mediate cancer cell loss‐of‐fitness and thereby provide independent and orthogonal evidence of biomarkers for drug development.
Abstract: Low success rates during drug development are due, in part, to the difficulty of defining drug mechanism-of-action and molecular markers of therapeutic activity. Here, we integrated 199,219 drug sensitivity measurements for 397 unique anti-cancer drugs with genome-wide CRISPR loss-of-function screens in 484 cell lines to systematically investigate cellular drug mechanism-of-action. We observed an enrichment for positive associations between the profile of drug sensitivity and knockout of a drug's nominal target, and by leveraging protein-protein networks, we identified pathways underpinning drug sensitivity. This revealed an unappreciated positive association between mitochondrial E3 ubiquitin-protein ligase MARCH5 dependency and sensitivity to MCL1 inhibitors in breast cancer cell lines. We also estimated drug on-target and off-target activity, informing on specificity, potency and toxicity. Linking drug and gene dependency together with genomic data sets uncovered contexts in which molecular networks, when perturbed, mediate cancer cell loss-of-fitness and thereby provide independent and orthogonal evidence of biomarkers for drug development. This study illustrates how integrating cell line drug sensitivity with CRISPR loss-of-function screens can elucidate mechanism-of-action to advance drug development.
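The association test at the core of this study pairs a drug's sensitivity profile with a gene knockout's fitness profile across a shared panel of cell lines and looks for positive correlation. A minimal sketch of that comparison, using toy values (not data from the study) for the MARCH5/MCL1-inhibitor example:

```python
# Sketch of the drug-gene association test: correlate a drug's
# sensitivity profile with a gene knockout's fitness profile across
# cell lines. All numbers below are illustrative toy values.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy profiles over five cell lines: more negative = stronger effect.
mcl1_inhibitor_sensitivity = [-2.1, -0.3, -1.8, 0.2, -1.5]
march5_knockout_fitness = [-1.9, -0.1, -2.0, 0.4, -1.2]

r = pearson(mcl1_inhibitor_sensitivity, march5_knockout_fitness)
print(round(r, 3))  # a strong positive association
```

A high positive r here is the signature the study enriches for: cell lines killed by the drug are also the ones that lose fitness when the associated gene is knocked out.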

52 citations


Cites background or methods from "A Next Generation Connectivity Map:..."

  • ...…et al, 2007; Médard et al, 2015) and cellular thermal shift assay (Savitski et al, 2014) to measure drug– protein interactions, and multiplexed imaging or flow-cytometry to measure multiple cellular parameters upon drug treatment (Li et al, 2017; Subramanian et al, 2017; Reinecke et al, 2019)....


  • ...…Pharmacological screens (Barretina et al, 2012; Garnett et al, 2012; Iorio et al, 2016; Subramanian et al, 2017; Lee et al, 2018) have been used to profile the activity of hundreds of compounds in highly annotated collections of cancer…...


  • ...Parallel integration of gene loss-of-function screens with drug response can be used to investigate drug mechanism-of-action (Deans et al, 2016; Subramanian et al, 2017; Jost & Weissman, 2018; Wang et al, 2018; Zimmermann et al, 2018; Hustedt et al, 2019a,b)....


Journal ArticleDOI
TL;DR: This work describes an approach to scoring and creating libraries based on binding selectivity, target coverage, and induced cellular phenotypes as well as chemical structure, stage of clinical development, and user preference, and describes a mechanism of action library that optimally covers 1,852 targets in the liganded genome.

52 citations

Journal ArticleDOI
12 Feb 2021-Science
TL;DR: A machine learning approach developed to identify small molecules that broadly correct gene networks dysregulated in a human induced pluripotent stem cell disease model of a common form of heart disease involving the aortic valve, finding XCT790 was effective in broadly restoring dysregulated genes toward the normal state.
Abstract: Mapping the gene-regulatory networks dysregulated in human disease would allow the design of network-correcting therapies that treat the core disease mechanism. However, small molecules are traditionally screened for their effects on one to several outputs at most, biasing discovery and limiting the likelihood of true disease-modifying drug candidates. Here, we developed a machine-learning approach to identify small molecules that broadly correct gene networks dysregulated in a human induced pluripotent stem cell (iPSC) disease model of a common form of heart disease involving the aortic valve (AV). Gene network correction by the most efficacious therapeutic candidate, XCT790, generalized to patient-derived primary AV cells and was sufficient to prevent and treat AV disease in vivo in a mouse model. This strategy, made feasible by human iPSC technology, network analysis, and machine learning, may represent an effective path for drug discovery.
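The selection principle described above can be caricatured in a few lines: rank each candidate compound by how many disease-dysregulated genes it shifts back toward their healthy levels. This is a purely illustrative scoring rule with made-up gene labels, not the paper's actual machine-learning model:

```python
# Hedged sketch of "gene network correction" scoring: a compound is
# better if it moves more dysregulated genes toward the normal state.
# Gene names and expression values are invented for illustration.

def correction_score(disease, healthy, treated):
    moved = 0
    for g in disease:
        # Did treatment bring this gene closer to its healthy level?
        if abs(treated[g] - healthy[g]) < abs(disease[g] - healthy[g]):
            moved += 1
    return moved / len(disease)

healthy = {"g1": 1.0, "g2": 2.0, "g3": 0.5}
disease = {"g1": 3.0, "g2": 0.1, "g3": 0.5}  # g1 up, g2 down, g3 unchanged
treated = {"g1": 1.4, "g2": 1.6, "g3": 0.2}  # corrects g1 and g2, worsens g3

score = correction_score(disease, healthy, treated)
print(score)
```

Screening on a whole-network score like this, rather than one readout, is what the abstract argues reduces bias toward single-output hits.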

51 citations

Journal ArticleDOI
TL;DR: In this article, the authors performed proteomic profiling of 124 paired oesophageal cancer and adjacent non-tumour tissues and identified two subtypes that are associated with patient survival for therapeutic targeting.
Abstract: Esophageal cancer (EC) is a type of aggressive cancer without clinically relevant molecular subtypes, hindering the development of effective strategies for treatment. To define molecular subtypes of EC, we perform mass spectrometry-based proteomic and phosphoproteomic profiling of EC tumors and adjacent non-tumor tissues, revealing a catalog of proteins and phosphosites that are dysregulated in ECs. The EC cohort is stratified into two molecular subtypes—S1 and S2—based on proteomic analysis, with the S2 subtype characterized by the upregulation of spliceosomal and ribosomal proteins, and being more aggressive. Moreover, we identify a subtype signature composed of ELOA and SCAF4, and construct a subtype diagnostic and prognostic model. Potential drugs are predicted for treating patients of S2 subtype, and three candidate drugs are validated to inhibit EC. Taken together, our proteomic analysis defines molecular subtypes of EC, thus providing a potential therapeutic outlook for improving disease outcomes in patients with EC. Proteomics can aid in the identification of molecular subtypes in cancers. Here, the authors perform proteomic profiling of 124 paired oesophageal cancer and adjacent non-tumour tissues and identify two subtypes that are associated with patient survival for therapeutic targeting.

50 citations

Journal ArticleDOI
TL;DR: A 6-step workflow that integrated diverse types of toxicology data into an adverse outcome pathway (AOP) scheme for pulmonary fibrosis, coupled with a network of functional elements, resulting in a novel, open-source, AOP-linked molecular pathway.

50 citations


Cites methods from "A Next Generation Connectivity Map:..."

  • ...…data sets can be expected to be generated as part of future efforts that utilize high-throughput transcriptomics technologies similar to the TempO-seq Tox21 Phase III L1500þ or the Broad Institute Connectivity Map L1000 platform (Andersen et al. 2015; Collins et al. 2017; Subramanian et al. 2017)....


References
Journal ArticleDOI
TL;DR: The Gene Set Enrichment Analysis (GSEA) method as discussed by the authors focuses on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation.
Abstract: Although genomewide RNA expression analysis has become a routine tool in biomedical research, extracting biological insight from such information remains a major challenge. Here, we describe a powerful analytical method called Gene Set Enrichment Analysis (GSEA) for interpreting gene expression data. The method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation. We demonstrate how GSEA yields insights into several cancer-related data sets, including leukemia and lung cancer. Notably, where single-gene analysis finds little similarity between two independent studies of patient survival in lung cancer, GSEA reveals many biological pathways in common. The GSEA method is embodied in a freely available software package, together with an initial database of 1,325 biologically defined gene sets.
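The core of GSEA is a running-sum statistic over a ranked gene list: step up when the next gene belongs to the set, step down otherwise, and take the maximum deviation from zero as the enrichment score. A minimal unweighted sketch (the published method additionally weights hits by their correlation with the phenotype):

```python
# Minimal sketch of a GSEA-style running-sum enrichment score.
# Unweighted (Kolmogorov-Smirnov-like) variant; gene names are
# arbitrary examples, not a real signature.

def enrichment_score(ranked_genes, gene_set):
    hits = [g in gene_set for g in ranked_genes]
    n_hit = sum(hits)
    n_miss = len(ranked_genes) - n_hit
    up, down = 1.0 / n_hit, 1.0 / n_miss
    running, best = 0.0, 0.0
    for is_hit in hits:
        running += up if is_hit else -down
        if abs(running) > abs(best):
            best = running  # maximum deviation from zero
    return best

# Genes ranked by differential expression, most upregulated first.
ranked = ["TP53", "MYC", "EGFR", "KRAS", "BRCA1", "ACTB", "GAPDH", "TUBB"]
es = enrichment_score(ranked, {"TP53", "MYC", "KRAS"})
print(round(es, 3))
```

Because the set's members cluster near the top of the ranking, the score is strongly positive; significance in GSEA is then assessed by permutation, which this sketch omits.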

34,830 citations

Journal Article
TL;DR: A new technique called t-SNE that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map, a variation of Stochastic Neighbor Embedding that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
Abstract: We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets.
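The "crowding" fix the abstract mentions comes down to the choice of kernel: high-dimensional neighbor probabilities are Gaussian, but in the low-dimensional map t-SNE uses a heavy-tailed Student-t kernel, so moderately distant pairs keep a non-negligible similarity and need not be squeezed into the center. A small numeric sketch of that contrast:

```python
# Sketch of the two similarity kernels underlying t-SNE. The Gaussian
# is used for high-dimensional affinities; the heavy-tailed Student-t
# (1 degree of freedom) is used in the low-dimensional map.
import math

def gaussian_affinity(d2, sigma):
    # exp(-d^2 / (2 sigma^2)): decays very fast with squared distance d2
    return math.exp(-d2 / (2 * sigma ** 2))

def student_t_affinity(d2):
    # (1 + d^2)^-1: heavy tail, decays only polynomially
    return 1.0 / (1.0 + d2)

# At squared distance 9, the Gaussian similarity is near zero while
# the t kernel is still 0.1 -- distant map points feel far less
# attraction, which relieves crowding in the center of the map.
d2 = 9.0
print(gaussian_affinity(d2, 1.0), student_t_affinity(d2))
```

This mismatch of tails is exactly what lets t-SNE model many points as "far" in two dimensions without collapsing the rest of the embedding.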

30,124 citations

Journal ArticleDOI
TL;DR: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data and provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments.
Abstract: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data. GEO provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments. GEO is not intended to replace in house gene expression databases that benefit from coherent data sets, and which are constructed to facilitate a particular analytic method, but rather complement these by acting as a tertiary, central data distribution hub. The three central data entities of GEO are platforms, samples and series, and were designed with gene expression and genomic hybridization experiments in mind. A platform is, essentially, a list of probes that define what set of molecules may be detected. A sample describes the set of molecules that are being probed and references a single platform used to generate its molecular abundance data. A series organizes samples into the meaningful data sets which make up an experiment. The GEO repository is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo.
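The platform/sample/series data model described above maps naturally onto three linked record types: each sample references exactly one platform, and a series groups samples into an experiment. A sketch with illustrative field names (not GEO's actual schema; the accession formats GPL/GSM/GSE are GEO's real prefixes):

```python
# Sketch of GEO's three core entities and how they reference each
# other. Field names are illustrative, not GEO's actual schema.
from dataclasses import dataclass, field

@dataclass
class Platform:
    accession: str        # e.g. "GPL..." identifiers
    probes: list          # the set of molecules this platform can detect

@dataclass
class Sample:
    accession: str        # e.g. "GSM..." identifiers
    platform: Platform    # exactly one platform per sample
    abundances: dict      # probe -> measured molecular abundance

@dataclass
class Series:
    accession: str        # e.g. "GSE..." identifiers
    samples: list = field(default_factory=list)  # samples forming one experiment

chip = Platform("GPL0001", ["probe_a", "probe_b"])
s1 = Sample("GSM0001", chip, {"probe_a": 1.2, "probe_b": 0.4})
series = Series("GSE0001", [s1])
print(series.samples[0].platform.accession)
```

Walking from series to sample to platform, as in the last line, mirrors how a GEO query resolves what was measured and on which array.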

10,968 citations

Journal ArticleDOI
TL;DR: How BLAT was optimized is described, which is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences.
Abstract: Analyzing vertebrate genomes requires rapid mRNA/DNA and cross-species protein alignments. A new tool, BLAT, is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences. BLAT's speed stems from an index of all nonoverlapping K-mers in the genome. This index fits inside the RAM of inexpensive computers, and need only be computed once for each genome assembly. BLAT has several major stages. It uses the index to find regions in the genome likely to be homologous to the query sequence. It performs an alignment between homologous regions. It stitches together these aligned regions (often exons) into larger alignments (typically genes). Finally, BLAT revisits small internal exons possibly missed at the first stage and adjusts large gap boundaries that have canonical splice sites where feasible. This paper describes how BLAT was optimized. Effects on speed and sensitivity are explored for various K-mer sizes, mismatch schemes, and number of required index matches. BLAT is compared with other alignment programs on various test sets and then used in several genome-wide applications. http://genome.ucsc.edu hosts a web-based BLAT server for the human genome.
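The nonoverlapping K-mer index is the key space saver: cutting the genome into consecutive K-mers with stride K (rather than 1) shrinks the index by roughly a factor of K, which is what lets it fit in RAM. A toy sketch of that indexing and the seed-hit lookup, with a tiny sequence and K=4 (the real tool works genome-scale):

```python
# Sketch of BLAT's nonoverlapping K-mer index. The genome is indexed
# at stride K; the query is scanned at stride 1 so that every
# alignment phase can still find a seed hit.

def build_kmer_index(seq, k):
    index = {}
    for pos in range(0, len(seq) - k + 1, k):  # nonoverlapping: step by k
        index.setdefault(seq[pos:pos + k], []).append(pos)
    return index

def find_seed_hits(query, index, k):
    # Overlapping query k-mers; each hit is (query_pos, genome_pos).
    hits = []
    for qpos in range(len(query) - k + 1):
        for tpos in index.get(query[qpos:qpos + k], []):
            hits.append((qpos, tpos))
    return hits

genome = "ACGTACGGTTACGTAC"  # toy "genome"
idx = build_kmer_index(genome, 4)
hits = find_seed_hits("TTACGT", idx, 4)
print(hits)  # seed hits to extend, stitch, and polish in later stages
```

In BLAT proper, clusters of such seed hits nominate homologous regions, which the later stages align, stitch into gene-scale alignments, and polish at exon and splice-site boundaries.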

8,326 citations

Journal ArticleDOI
TL;DR: This paper proposed parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples.
Abstract: SUMMARY Non-biological experimental variation or "batch effects" are commonly observed across multiple batches of microarray experiments, often rendering the task of combining data from these batches difficult. The ability to combine microarray data sets is advantageous to researchers to increase statistical power to detect biological phenomena from studies where logistical considerations restrict sample size or in studies that require the sequential hybridization of arrays. In general, it is inappropriate to combine data sets without adjusting for batch effects. Methods have been proposed to filter batch effects from data, but these are often complicated and require large batch sizes (>25) to implement. Because the majority of microarray studies are conducted using much smaller sample sizes, existing methods are not sufficient. We propose parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples. We illustrate our methods using two example data sets and show that our methods are justifiable, easy to apply, and useful in practice. Software for our method is freely available at: http://biosun1.harvard.edu/complab/batch/.
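The simplest form of the adjustment the abstract describes is a location correction: center each gene within each batch so the batch means agree. The sketch below shows only that step; the actual method additionally shrinks the per-batch location and scale estimates with an empirical Bayes prior, which is what makes it stable for very small batches:

```python
# Simplified location-only batch adjustment for one gene's expression
# vector. The empirical Bayes shrinkage of the real method is omitted;
# this only illustrates what "removing a batch effect" means.
from statistics import mean

def center_batches(values, batches):
    # values[i]: one gene's expression in sample i; batches[i]: its batch label
    batch_means = {
        b: mean(v for v, bb in zip(values, batches) if bb == b)
        for b in set(batches)
    }
    grand = mean(values)
    # Subtract each batch's mean, then restore the overall level.
    return [v - batch_means[b] + grand for v, b in zip(values, batches)]

expr = [5.0, 5.2, 4.8, 7.0, 7.3, 6.7]      # batch B shifted up by ~2 units
batches = ["A", "A", "A", "B", "B", "B"]
adjusted = center_batches(expr, batches)
print([round(v, 2) for v in adjusted])
```

After adjustment the two batches share a common level, so downstream analysis no longer mistakes the processing batch for a biological signal.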

6,319 citations
