Journal ArticleDOI

A Next Generation Connectivity Map: L1000 Platform and the First 1,000,000 Profiles.

TL;DR: The expanded CMap is reported, made possible by a new, low-cost, high-throughput reduced representation expression profiling method that is shown to be highly reproducible, comparable to RNA sequencing, and suitable for computational inference of the expression levels of 81% of non-measured transcripts.
About: This article was published in Cell on 2017-11-30 and is currently open access. It has received 1,943 citations to date.
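For orientation, here is a minimal sketch of the kind of inference the TL;DR refers to: predicting non-measured transcripts from a reduced set of directly measured "landmark" genes. The plain linear model, variable names, and toy data are assumptions for illustration; the actual CMap inference model is trained on large reference compendia.

```python
# Sketch only: infer non-measured transcript levels from landmark genes.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_profiles, n_landmarks, n_inferred = 500, 978, 50   # toy sizes

# Reference data: full profiles where both landmark and target genes are measured.
X_landmarks = rng.normal(size=(n_profiles, n_landmarks))
Y_targets = X_landmarks[:, :n_inferred] @ rng.normal(size=(n_inferred, n_inferred)) \
            + 0.1 * rng.normal(size=(n_profiles, n_inferred))

# Fit one multi-output linear model mapping landmarks -> non-measured transcripts.
model = LinearRegression().fit(X_landmarks, Y_targets)

# A new L1000-style profile measures only landmarks; the rest are inferred.
new_landmarks = rng.normal(size=(1, n_landmarks))
inferred_expression = model.predict(new_landmarks)
print(inferred_expression.shape)  # (1, n_inferred)
```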
Citations
Journal ArticleDOI
TL;DR: This work presents an enhancement of the NMTF method, a shortest-path evaluation of drug-protein pairs over the protein-protein interaction network, which allows inferring novel protein targets never previously considered as drug targets and increases the information fed to the NMTF method.
Abstract: Classical drug design methodologies are extremely costly and time-consuming, with approximately 85% of newly proposed molecules failing in the first three phases of the FDA drug approval process. Thus, strategies that leverage computational methods to find alternative indications for already approved drugs are of crucial relevance. We previously demonstrated the efficacy of Non-negative Matrix Tri-Factorization (NMTF), a method that exploits both data integration and machine learning, for inferring novel indications for approved drugs. In this work, we present an enhancement of the NMTF method that consists of a shortest-path evaluation of drug-protein pairs over the protein-protein interaction network. This approach allows inferring novel protein targets that were never previously considered as drug targets, increasing the information fed to the NMTF method. This advance also enables drug-centric predictions, simultaneously identifying the therapeutic classes, protein targets and diseases associated with a particular drug. To test our methodology, we applied the NMTF and shortest-path enhancement methods to an outdated collection of data and compared the predictions against the most updated version, obtaining very good performance, with an Average Precision Score of 0.82. The data enhancement strategy increased the number of putative protein targets from 3,691 to 15,295, while slightly increasing the predictive performance of the method. Finally, we validated our top-scored predictions against the literature, finding relevant confirmation of predicted interactions between drugs and protein targets, as well as of predicted annotations between drugs and both therapeutic classes and diseases.
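A hedged sketch of the shortest-path idea described in this abstract: score candidate drug-protein pairs by their distance on a protein-protein interaction (PPI) network, so that proteins close to a drug's known targets become putative new targets. The toy network and scoring rule are illustrative assumptions, not the paper's exact procedure.

```python
import networkx as nx

# Toy PPI network: nodes are proteins, edges are interactions.
ppi = nx.Graph()
ppi.add_edges_from([("P1", "P2"), ("P2", "P3"), ("P3", "P4"), ("P2", "P5")])

known_targets = {"drugA": ["P1"]}    # known drug-target annotations
candidates = ["P3", "P4", "P5"]      # proteins never annotated as targets

for drug, targets in known_targets.items():
    for protein in candidates:
        # Shortest path length from any known target to the candidate protein.
        dist = min(
            (nx.shortest_path_length(ppi, source=t, target=protein)
             for t in targets if nx.has_path(ppi, t, protein)),
            default=float("inf"),
        )
        print(drug, protein, "PPI distance:", dist)
```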

22 citations


Cites methods from "A Next Generation Connectivity Map:..."

  • So et al. [15] proposed a framework to compute similarities between transcriptomic signatures from genome-wide association studies (GWAS) and the Connectivity Map [19].

Journal ArticleDOI
22 Mar 2019
TL;DR: This review introduces state-of-the-art free data sources, web servers, and software that can be used in TCM network pharmacology, including databases of TCM, drug targets and diseases, web servers for the prediction of drug targets, and tools for network and functional analysis.
Abstract: Traditional Chinese medicine (TCM) treats diseases in a holistic manner, and TCM formulae are multi-component, multi-target agents at the molecular level. There are thus many parallels between the key ideas of TCM pharmacology and network pharmacology. In recent years, TCM network pharmacology has developed as an interdisciplinary field combining TCM science and network pharmacology, which studies the mechanisms of TCM at the molecular level and in the context of biological networks. It provides a new research paradigm that uses modern biomedical science to interpret the mechanisms of TCM, which promises to accelerate the modernization and internationalization of TCM. In this paper we introduce state-of-the-art free data sources, web servers, and software that can be used in TCM network pharmacology, including databases of TCM, drug targets and diseases, web servers for the prediction of drug targets, and tools for network and functional analysis. This review should help experimental pharmacologists make better use of the existing data and methods in their studies of TCM.
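As a purely illustrative sketch of the kind of structure the reviewed network-analysis tools operate on, a TCM formula can be represented as a herb-compound-target graph; the node names below are placeholders, not curated annotations.

```python
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("Herb:ExampleHerb",  "Cpd:Compound1"),
    ("Cpd:Compound1",     "Target:ProteinA"),
    ("Herb:ExampleHerb",  "Cpd:Compound2"),
    ("Cpd:Compound2",     "Target:ProteinB"),
])

# Degree centrality as a crude proxy for how "multi-target" each node is.
for node, score in sorted(nx.degree_centrality(g).items(), key=lambda kv: -kv[1]):
    print(node, round(score, 2))
```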

22 citations

Journal ArticleDOI
TL;DR: The motivation to leverage GWAS for drug discovery is summarized, the critical bottlenecks in the field are outlined, and several promising strategies such as functional genomics and network-based approaches are highlighted to enhance the translational value of CAD GWAS findings in driving novel therapeutics.
Abstract: The success of genome-wide association studies (GWAS) has significantly advanced our understanding of the etiology of coronary artery disease (CAD) and opens new opportunities to reinvigorate stalling CAD drug development. However, there is a remarkable disconnect between CAD GWAS findings and commercialized drugs. While this could indicate major untapped translational and therapeutic potential in CAD GWAS, it also poses extensive technical challenges. In this review we summarize the motivation to leverage GWAS for drug discovery, outline the critical bottlenecks in the field, and highlight several promising strategies, such as functional genomics and network-based approaches, to enhance the translational value of CAD GWAS findings in driving novel therapeutics.

22 citations

Journal ArticleDOI
TL;DR: Results suggest that inhibition of BET proteins may present a novel therapeutic opportunity in the treatment of NASH and liver fibrosis.
Abstract: Non-alcoholic fatty liver disease (NAFLD) is a leading form of chronic liver disease with large unmet need. Non-alcoholic steatohepatitis (NASH), a progressive variant of NAFLD, can lead to fibrosis, cirrhosis, and hepatocellular carcinoma. To identify potential new therapeutics for NASH, we used a computational approach based on Connectivity Map (CMAP) analysis, which pointed us to bromodomain and extra-terminal motif (BET) inhibitors for treating NASH. To experimentally validate this hypothesis, we tested a small-molecule inhibitor of the BET family of proteins, GSK1210151A (I-BET151), in the STAM mouse NASH model at two different dosing timepoints (onset of NASH and progression to fibrosis). I-BET151 decreased the non-alcoholic fatty liver disease activity score (NAS), a clinical endpoint for assessing the severity of NASH, as well as progression of liver fibrosis and interferon-γ expression. Transcriptional characterization of these mice through RNA-sequencing was consistent with predictions from the CMAP analysis of a human NASH signature and pointed to alterations in molecular mechanisms related to interferon signaling and cholesterol biosynthesis, as well as reversal of gene expression patterns linked to fibrotic markers. Altogether, these results suggest that inhibition of BET proteins may present a novel therapeutic opportunity in the treatment of NASH and liver fibrosis.
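A hedged sketch of the CMAP-style reasoning used in this study: rank genes by a drug's expression changes and ask whether the disease's up-regulated genes fall near the bottom (repressed) and its down-regulated genes near the top (induced), i.e. the drug "reverses" the disease signature. The real CMap connectivity score is based on a weighted Kolmogorov-Smirnov statistic; the simple rank-difference score below is an illustrative stand-in with toy data.

```python
import numpy as np

genes = np.array([f"g{i}" for i in range(100)])
rng = np.random.default_rng(1)

drug_effect = rng.normal(size=genes.size)     # drug-induced log-fold changes
order = genes[np.argsort(-drug_effect)]       # most induced first
rank = {g: i for i, g in enumerate(order)}

disease_up = ["g1", "g5", "g7"]               # toy disease signature
disease_down = ["g2", "g3", "g9"]

# Positive score -> drug tends to reverse the disease signature.
score = np.mean([rank[g] for g in disease_up]) - np.mean([rank[g] for g in disease_down])
print("reversal score:", score)
```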

22 citations

Journal ArticleDOI
TL;DR: This review discusses key resources available for pharmacogenomics and pharmacogenetics research and highlights recent work within the field.
Abstract: The field of pharmacogenomics is an area of great potential for near-term human health impacts from the big genomic data revolution. Pharmacogenomics research momentum is building with numerous hypotheses currently being investigated through the integration of molecular profiles of different cell lines and large genomic data sets containing information on cellular and human responses to therapies. Additionally, the results of previous pharmacogenetic research efforts have been formulated into clinical guidelines that are beginning to impact how healthcare is conducted on the level of the individual patient. This trend will only continue with the recent release of new datasets containing linked genotype and electronic medical record data. This review discusses key resources available for pharmacogenomics and pharmacogenetics research and highlights recent work within the field.

22 citations

References
Journal ArticleDOI
TL;DR: The Gene Set Enrichment Analysis (GSEA) method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation.
Abstract: Although genomewide RNA expression analysis has become a routine tool in biomedical research, extracting biological insight from such information remains a major challenge. Here, we describe a powerful analytical method called Gene Set Enrichment Analysis (GSEA) for interpreting gene expression data. The method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation. We demonstrate how GSEA yields insights into several cancer-related data sets, including leukemia and lung cancer. Notably, where single-gene analysis finds little similarity between two independent studies of patient survival in lung cancer, GSEA reveals many biological pathways in common. The GSEA method is embodied in a freely available software package, together with an initial database of 1,325 biologically defined gene sets.
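A minimal sketch of the GSEA running-sum idea: walk down a ranked gene list, stepping the score up when a gene belongs to the set and down otherwise; the enrichment score is the maximum deviation from zero. This is the unweighted, KS-like form, simplified from the published method, with made-up gene names.

```python
import numpy as np

ranked_genes = ["TP53", "MYC", "EGFR", "BRCA1", "GAPDH", "ACTB", "KRAS", "PTEN"]
gene_set = {"TP53", "BRCA1", "PTEN"}

hits = np.array([g in gene_set for g in ranked_genes], dtype=float)
n_hit, n_miss = hits.sum(), len(ranked_genes) - hits.sum()

# Step up by 1/n_hit at hits, down by 1/n_miss at misses.
steps = np.where(hits == 1, 1.0 / n_hit, -1.0 / n_miss)
running_sum = np.cumsum(steps)
enrichment_score = running_sum[np.argmax(np.abs(running_sum))]
print("ES:", round(enrichment_score, 3))
```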

34,830 citations

Journal Article
TL;DR: A new technique called t-SNE visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map; it is a variation of Stochastic Neighbor Embedding that is much easier to optimize and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
Abstract: We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets.
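A usage sketch with assumed toy data and parameter choices (not taken from the paper), showing how expression profiles can be embedded in 2-D with scikit-learn's t-SNE implementation.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
profiles = rng.normal(size=(200, 978))        # e.g. 200 profiles x 978 genes

embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(profiles)
print(embedding.shape)  # (200, 2)
```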

30,124 citations

Journal ArticleDOI
TL;DR: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data; it provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments.
Abstract: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data. GEO provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments. GEO is not intended to replace in-house gene expression databases that benefit from coherent data sets, and which are constructed to facilitate a particular analytic method, but rather complement these by acting as a tertiary, central data distribution hub. The three central data entities of GEO are platforms, samples and series, and were designed with gene expression and genomic hybridization experiments in mind. A platform is, essentially, a list of probes that define what set of molecules may be detected. A sample describes the set of molecules that are being probed and references a single platform used to generate its molecular abundance data. A series organizes samples into the meaningful data sets which make up an experiment. The GEO repository is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo.
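An illustrative sketch of GEO's three central entities as described in this abstract: a platform (the probes), samples (measurements referencing one platform), and a series grouping samples into an experiment. The field names and accession strings are assumptions chosen for clarity, not GEO's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Platform:
    accession: str                   # e.g. a GPL accession
    probes: list[str]

@dataclass
class Sample:
    accession: str                   # e.g. a GSM accession
    platform: Platform
    abundances: dict[str, float]     # probe id -> measured value

@dataclass
class Series:
    accession: str                   # e.g. a GSE accession
    samples: list[Sample] = field(default_factory=list)

gpl = Platform("GPL_example", probes=["probe1", "probe2"])
gsm = Sample("GSM_example", platform=gpl, abundances={"probe1": 5.2, "probe2": 7.8})
gse = Series("GSE_example", samples=[gsm])
print(gse.accession, len(gse.samples), gse.samples[0].platform.accession)
```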

10,968 citations

Journal ArticleDOI
TL;DR: This paper describes how BLAT was optimized; BLAT is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at the sensitivity settings typically used when comparing vertebrate sequences.
Abstract: Analyzing vertebrate genomes requires rapid mRNA/DNA and cross-species protein alignments. A new tool, BLAT, is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences. BLAT's speed stems from an index of all nonoverlapping K-mers in the genome. This index fits inside the RAM of inexpensive computers, and need only be computed once for each genome assembly. BLAT has several major stages. It uses the index to find regions in the genome likely to be homologous to the query sequence. It performs an alignment between homologous regions. It stitches together these aligned regions (often exons) into larger alignments (typically genes). Finally, BLAT revisits small internal exons possibly missed at the first stage and adjusts large gap boundaries that have canonical splice sites where feasible. This paper describes how BLAT was optimized. Effects on speed and sensitivity are explored for various K-mer sizes, mismatch schemes, and number of required index matches. BLAT is compared with other alignment programs on various test sets and then used in several genome-wide applications. http://genome.ucsc.edu hosts a web-based BLAT server for the human genome.
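A toy sketch of the indexing idea described in this abstract: build a lookup table of non-overlapping k-mers in a "genome" and use query k-mer hits to locate regions likely to be homologous. Real BLAT adds alignment, stitching of exons, and splice-site adjustment on top of this; the sequences and sizes here are toy values.

```python
from collections import defaultdict

genome = "ACGTACGTTTGACGTACGTAAACCCGGGTTT"
k = 4

# Index non-overlapping k-mers: k-mer -> list of genome positions.
index = defaultdict(list)
for pos in range(0, len(genome) - k + 1, k):
    index[genome[pos:pos + k]].append(pos)

query = "ACGTACGT"
# Overlapping query k-mers vote for candidate genome regions.
hits = [pos for i in range(len(query) - k + 1) for pos in index.get(query[i:i + k], [])]
print(sorted(set(hits)))   # candidate start positions to align against
```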

8,326 citations

Journal ArticleDOI
TL;DR: This paper proposes parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples.
Abstract: Non-biological experimental variation or "batch effects" are commonly observed across multiple batches of microarray experiments, often rendering the task of combining data from these batches difficult. The ability to combine microarray data sets is advantageous to researchers to increase statistical power to detect biological phenomena from studies where logistical considerations restrict sample size or in studies that require the sequential hybridization of arrays. In general, it is inappropriate to combine data sets without adjusting for batch effects. Methods have been proposed to filter batch effects from data, but these are often complicated and require large batch sizes (>25) to implement. Because the majority of microarray studies are conducted using much smaller sample sizes, existing methods are not sufficient. We propose parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples. We illustrate our methods using two example data sets and show that our methods are justifiable, easy to apply, and useful in practice. Software for our method is freely available at: http://biosun1.harvard.edu/complab/batch/.
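A greatly simplified sketch of the batch-adjustment idea with toy data: center and scale each gene within each batch toward the pooled distribution. The published method additionally shrinks the per-batch location and scale estimates with an empirical Bayes prior, which is what makes it robust for small batches; this sketch omits that step.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
expr = pd.DataFrame(rng.normal(size=(6, 4)),
                    columns=["gene1", "gene2", "gene3", "gene4"])
batch = pd.Series(["A", "A", "A", "B", "B", "B"])

adjusted = expr.copy()
for b in batch.unique():
    idx = (batch == b).to_numpy()
    sub = expr.loc[idx]
    # Standardize within the batch, then restore the pooled mean and scale.
    z = (sub - sub.mean()) / sub.std()
    adjusted.loc[idx, :] = (z * expr.std() + expr.mean()).to_numpy()
print(adjusted.round(2))
```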

6,319 citations
