Journal ArticleDOI

A Next Generation Connectivity Map: L1000 Platform and the First 1,000,000 Profiles.

TL;DR: The expanded CMap is reported, made possible by a new, low-cost, high-throughput reduced representation expression profiling method that is shown to be highly reproducible, comparable to RNA sequencing, and suitable for computational inference of the expression levels of 81% of non-measured transcripts.
About: This article was published in Cell on 2017-11-30 and is currently open access. It has received 1,943 citations to date.
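The paper's headline capability, computationally inferring the expression of non-measured transcripts from roughly 1,000 measured landmark genes, is at heart a regression problem. Below is a minimal sketch of that idea, assuming synthetic data and a plain multi-output linear model rather than the platform's actual trained weights:

# Minimal sketch: inferring non-measured transcripts from landmark genes.
# Synthetic data stands in for real reference profiles; the actual CMap
# models were trained on large public expression compendia.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_profiles, n_landmarks, n_inferred = 500, 978, 50

landmarks = rng.normal(size=(n_profiles, n_landmarks))   # measured genes
true_weights = rng.normal(size=(n_landmarks, n_inferred))
unmeasured = landmarks @ true_weights + rng.normal(scale=0.1,
                                                   size=(n_profiles, n_inferred))

# Fit one multi-output linear model mapping landmarks -> unmeasured genes.
model = LinearRegression().fit(landmarks, unmeasured)

# Predict expression of the non-measured genes for a new profile.
new_profile = rng.normal(size=(1, n_landmarks))
predicted = model.predict(new_profile)
print(predicted.shape)   # (1, 50): one inferred value per unmeasured gene

In the real pipeline the mapping is trained once on reference expression data and then applied to every new L1000 profile.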
Citations
Journal ArticleDOI
TL;DR: This review describes network data mining algorithms that are commonly used to study drugs' MoA and to improve the understanding of the basis of chronic diseases.
Abstract: The network approach is quickly becoming a fundamental building block of computational methods aiming at elucidating the mechanism of action (MoA) and therapeutic effect of drugs. By modeling the effect of drugs and diseases on different biological networks, it is possible to better explain the interplay between disease perturbations and drug targets, as well as how drug compounds induce favorable biological responses and/or adverse effects. Omics technologies have been extensively used to generate the data needed to study the mechanisms of action of drugs and diseases. These data are often exploited to define condition-specific networks and to study whether drugs can reverse disease perturbations. In this review, we describe network data mining algorithms that are commonly used to study drugs' MoA and to improve our understanding of the basis of chronic diseases. These methods can support fundamental stages of the drug development process, including the identification of putative drug targets and the in silico screening of drug compounds and drug combinations for the treatment of diseases. We also discuss recent studies using biological and omics-driven networks to search for possible repurposed FDA-approved drug treatments for SARS-CoV-2 infections (COVID-19).
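One common primitive in the network data mining this review surveys is measuring how close a drug's targets sit to a disease module in an interaction network; shorter distances suggest a plausible mechanistic link. A toy sketch of that idea follows (the gene names and edges are invented for illustration; real analyses use genome-scale interactomes and degree-preserving randomization to turn raw distances into z-scores):

# Toy sketch of network proximity between drug targets and disease genes.
import networkx as nx

g = nx.Graph()
g.add_edges_from([("EGFR", "GRB2"), ("GRB2", "SOS1"), ("SOS1", "KRAS"),
                  ("KRAS", "BRAF"), ("BRAF", "MAP2K1"), ("MAP2K1", "MAPK1")])

drug_targets = {"EGFR"}
disease_genes = {"BRAF", "MAPK1"}

# Average shortest-path distance from each target to its nearest disease gene.
distances = [min(nx.shortest_path_length(g, t, d) for d in disease_genes)
             for t in drug_targets]
proximity = sum(distances) / len(distances)
print(proximity)   # smaller values suggest the drug acts near the disease module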

6 citations

Journal ArticleDOI
TL;DR: This work proposes to account for cell type composition when comparing transcriptomes of healthy and diseased brain samples, so that the loss of neurons can be decoupled from pathology-associated molecular effects in AD and PD brains.
Abstract: Alzheimer's disease (AD) and Parkinson's disease (PD) are the two most common neurodegenerative disorders worldwide, with age being their major risk factor. The increasing worldwide life expectancy, together with the scarcity of available treatment choices, makes it thus pressing to find the molecular basis of AD and PD so that the causing mechanisms can be targeted. To study these mechanisms, gene expression profiles have been compared between diseased and control brain tissues. However, this approach is limited by mRNA expression profiles derived for brain tissues highly reflecting their degeneration in cellular composition but not necessarily disease-related molecular states. We therefore propose to account for cell type composition when comparing transcriptomes of healthy and diseased brain samples, so that the loss of neurons can be decoupled from pathology-associated molecular effects. This approach allowed us to identify genes and pathways putatively altered systemically and in a cell-type-dependent manner in AD and PD brains. Moreover, using chemical perturbagen data, we computationally identified candidate small molecules for specifically targeting the profiled AD/PD-associated molecular alterations. Our approach therefore not only brings new insights into the disease-specific and common molecular etiologies of AD and PD but also, in these realms, foster the discovery of more specific targets for functional and therapeutic exploration.
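The adjustment the authors describe can be pictured as including estimated cell-type fractions as covariates when testing for disease effects, so a shift in neuron proportion is not mistaken for a transcriptional change. A minimal sketch on synthetic data (the fractions here are simulated; real studies estimate them by deconvolution from marker genes):

# Sketch: decoupling cell-type composition from disease effects for one gene.
# Fit expression ~ disease_status + neuron_fraction and read off the
# disease coefficient after composition is accounted for.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
disease = rng.integers(0, 2, size=n)                  # 0 = control, 1 = AD/PD
neuron_frac = 0.6 - 0.2 * disease + rng.normal(scale=0.05, size=n)

# Expression driven mostly by composition, plus a small true disease effect.
expression = 5.0 * neuron_frac + 0.3 * disease + rng.normal(scale=0.2, size=n)

X = sm.add_constant(np.column_stack([disease, neuron_frac]))
fit = sm.OLS(expression, X).fit()
print(fit.params)   # disease coefficient = composition-adjusted disease effect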

6 citations

Journal ArticleDOI
TL;DR: The results suggest that DDX3X is epigenetically repressed in tumor tissue and that lower DDX3X correlates with poor overall survival in RCC patients and with larger tumor size, lymph node metastasis, and distant metastasis (TNM staging system); they further suggest digoxin as a precise, personalized compound for treating patients with low DDX3X expression levels.
Abstract: DEAD (Asp-Glu-Ala-Asp) box polypeptide 3, X-linked (DDX3X) is a member of the DEAD-box family of RNA helicases, whose function has been shown to be involved in RNA metabolism. Recent studies further indicate its abnormal expression across cancers and its biological effects on modulating cancer progression. However, DDX3X's role in renal cell carcinoma (RCC) progression remains largely unknown. In this study, a medical informatics-based analysis using The Cancer Genome Atlas (TCGA) dataset was performed to evaluate clinical prognoses related to DDX3X. The results suggest that DDX3X is epigenetically repressed in tumor tissue and that lower DDX3X is correlated with poor overall survival in RCC patients and with larger tumor size, lymph node metastasis, and distant metastasis (TNM staging system). Knowledge-based transcriptomic analysis by Ingenuity Pathway Analysis (IPA) revealed the SPINK1-metallothionein pathway as the top canonical signaling pathway repressed by DDX3X. In addition, SPINK1 and the metallothionein gene family all serve as poor prognostic indicators, and their expression levels are inversely correlated with DDX3X in RCC. Digoxin was identified via Connectivity Map analysis (L1000) for its capability to reverse the gene signature of patients with low DDX3X. Importantly, cancer cell proliferation and migration were decreased upon digoxin treatment in RCC cells. The results of this study indicate the significance of the DDX3X-low/SPINK1-high/metallothionein-high axis for predicting poor survival outcome in RCC patients and suggest digoxin as a precise, personalized compound for treating patients with low DDX3X expression levels.
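The Connectivity Map step in this study rests on a simple idea: a compound whose expression signature anti-correlates with the patients' low-DDX3X signature is a candidate for reversing it. A minimal sketch of such a reversal score (the real L1000 query uses a weighted Kolmogorov-Smirnov-style connectivity score; plain Spearman correlation is used here for brevity, and all data are synthetic):

# Sketch: rank compounds by how strongly their signatures reverse a
# disease signature (more negative correlation = stronger reversal).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_genes = 978
disease_signature = rng.normal(size=n_genes)   # e.g. low- vs high-DDX3X contrast

compound_signatures = {
    "compound_A": -disease_signature + rng.normal(scale=0.5, size=n_genes),
    "compound_B": rng.normal(size=n_genes),
}

scores = {}
for name, sig in compound_signatures.items():
    rho, _ = spearmanr(disease_signature, sig)
    scores[name] = rho

for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(name, round(score, 3))   # compound_A should rank as the reverser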

6 citations

Journal ArticleDOI
TL;DR: In this paper, the authors provide insights into the current challenges and opportunities of applying network pharmacology to illustrate the effectiveness of traditional Chinese medicines (TCMs) against the coronavirus disease 2019 (COVID-19).
Abstract: The purpose of this perspective is to provide insights into the current challenges and opportunities of applying network pharmacology (NP) to illustrate the effectiveness of traditional Chinese medicines (TCMs) against the coronavirus disease 2019 (COVID-19). Emerging studies have indicated that the progression of COVID-19 is associated with hematologic and immunologic responses in patients, and TCMs may act against COVID-19 on both fronts. However, the underlying mechanisms remain largely unclear [1]. This perspective is intended as a brief report derived from our previous experience in investigating the efficacy of TCMs via conventional reductionism-based research methods, holistic NP, systems biology or "omics" research, and prevailing big data analysis.

6 citations

Journal ArticleDOI
Juan C. Caicedo
TL;DR: In this article, the authors evaluated the relative strength of three high-throughput data sources (chemical structures, imaging via Cell Painting, and gene-expression profiles via L1000) for predicting compound bioactivity, using a historical collection of 16,170 compounds tested in 270 assays for a total of 585,439 readouts.
Abstract: Predicting assay results for compounds virtually, using chemical structures and phenotypic profiles, has the potential to reduce the time and resources of screens for drug discovery. Here, we evaluate the relative strength of three high-throughput data sources (chemical structures, imaging with Cell Painting, and gene-expression profiles from L1000) to predict compound bioactivity using a historical collection of 16,170 compounds tested in 270 assays for a total of 585,439 readouts. All three data modalities can predict compound activity for 6-10% of assays, and in combination they predict 21% of assays with high accuracy, a 2 to 3 times higher success rate than using a single modality alone. In practice, the accuracy of predictors could be lower and still be useful, increasing the assays that can be predicted from 37% with chemical structures alone up to 64% when combined with phenotypic data. Our study shows that unbiased phenotypic profiling can be leveraged to enhance compound bioactivity prediction and accelerate the early stages of the drug-discovery process.
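The gain from combining modalities reported here can be sketched as a simple late fusion: train one predictor per data source and average their confidences. A toy version with synthetic features (real inputs would be chemical fingerprints, Cell Painting profiles, and L1000 signatures; the feature dimensions and shifts are invented):

# Sketch: late-fusion prediction of assay activity from three modalities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_compounds = 300
y = rng.integers(0, 2, size=n_compounds)   # assay readout (active/inactive)

# Stand-ins for chemical structure, Cell Painting, and L1000 features,
# each weakly informative about activity.
modalities = {m: rng.normal(size=(n_compounds, 64)) + y[:, None] * shift
              for m, shift in [("structure", 0.2), ("imaging", 0.3), ("l1000", 0.25)]}

# One classifier per modality, then average their predicted probabilities.
probs = []
for name, X in modalities.items():
    clf = LogisticRegression(max_iter=1000).fit(X[:200], y[:200])
    probs.append(clf.predict_proba(X[200:])[:, 1])
fused = np.mean(probs, axis=0)

accuracy = np.mean((fused > 0.5) == y[200:])
print(round(accuracy, 3))   # fusion typically beats any single modality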

6 citations

References
Journal ArticleDOI
TL;DR: The Gene Set Enrichment Analysis (GSEA) method as discussed by the authors focuses on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation.
Abstract: Although genomewide RNA expression analysis has become a routine tool in biomedical research, extracting biological insight from such information remains a major challenge. Here, we describe a powerful analytical method called Gene Set Enrichment Analysis (GSEA) for interpreting gene expression data. The method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation. We demonstrate how GSEA yields insights into several cancer-related data sets, including leukemia and lung cancer. Notably, where single-gene analysis finds little similarity between two independent studies of patient survival in lung cancer, GSEA reveals many biological pathways in common. The GSEA method is embodied in a freely available software package, together with an initial database of 1,325 biologically defined gene sets.
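The core of GSEA is a weighted Kolmogorov-Smirnov-style running sum over a ranked gene list: the statistic rises when it encounters members of the gene set and falls otherwise, and the enrichment score is the maximum deviation from zero. A minimal sketch of that statistic on synthetic data (permutation-based significance testing, the other half of the method, is omitted):

# Sketch of the GSEA running-sum enrichment score for one gene set.
import numpy as np

def enrichment_score(ranked_genes, weights, gene_set, p=1.0):
    """ES for gene_set against genes ranked by correlation weight."""
    in_set = np.array([g in gene_set for g in ranked_genes])
    hit_w = np.abs(np.asarray(weights)) ** p * in_set
    p_hit = np.cumsum(hit_w) / hit_w.sum()           # weighted fraction of hits seen
    p_miss = np.cumsum(~in_set) / (~in_set).sum()    # fraction of misses seen
    running = p_hit - p_miss
    return running[np.argmax(np.abs(running))]       # max deviation from zero

genes = [f"g{i}" for i in range(100)]
weights = np.linspace(2.0, -2.0, 100)                # genes ranked by correlation
print(enrichment_score(genes, weights, gene_set={"g0", "g2", "g5", "g7"}))
# A set concentrated at the top of the ranking yields a strongly positive ES.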

34,830 citations

Journal Article
TL;DR: A new technique called t-SNE visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map; it is a variation of Stochastic Neighbor Embedding that is much easier to optimize and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
Abstract: We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets.
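In practice t-SNE is used as a drop-in embedding step. A minimal sketch with scikit-learn's implementation on synthetic clustered data (perplexity, the main tuning knob, loosely sets the effective number of neighbors per point):

# Sketch: embedding high-dimensional profiles into 2-D with t-SNE.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(4)
# Three synthetic clusters in 50 dimensions, e.g. expression profiles.
X = np.vstack([rng.normal(loc=c, size=(50, 50)) for c in (-2.0, 0.0, 2.0)])

embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(X)
print(embedding.shape)   # (150, 2): one 2-D coordinate per profile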

30,124 citations

Journal ArticleDOI
TL;DR: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data; it provides a flexible and open design that facilitates the submission, storage, and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments.
Abstract: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data. GEO provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments. GEO is not intended to replace in-house gene expression databases that benefit from coherent data sets, and which are constructed to facilitate a particular analytic method, but rather complements these by acting as a tertiary, central data distribution hub. The three central data entities of GEO are platforms, samples and series, and were designed with gene expression and genomic hybridization experiments in mind. A platform is, essentially, a list of probes that define what set of molecules may be detected. A sample describes the set of molecules that are being probed and references a single platform used to generate its molecular abundance data. A series organizes samples into the meaningful data sets which make up an experiment. The GEO repository is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo.
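The platform/sample/series design described above maps naturally onto a small data model: a sample points at exactly one platform, and a series groups samples. A minimal sketch of that structure (the class names and fields are illustrative, not a GEO client API):

# Sketch of GEO's three central data entities as a toy data model.
from dataclasses import dataclass, field

@dataclass
class Platform:                  # what set of molecules can be detected
    accession: str               # e.g. "GPL96"
    probes: list[str]

@dataclass
class Sample:                    # one assayed sample, tied to a single platform
    accession: str               # e.g. "GSM12345"
    platform: Platform
    abundances: dict[str, float]

@dataclass
class Series:                    # an experiment: a meaningful group of samples
    accession: str               # e.g. "GSE1000"
    samples: list[Sample] = field(default_factory=list)

chip = Platform("GPL96", probes=["probe_1", "probe_2"])
s1 = Sample("GSM1", chip, {"probe_1": 5.2, "probe_2": 0.8})
experiment = Series("GSE1", samples=[s1])
print(experiment.samples[0].platform.accession)   # GPL96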

10,968 citations

Journal ArticleDOI
TL;DR: This paper describes how BLAT was optimized; BLAT is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences.
Abstract: Analyzing vertebrate genomes requires rapid mRNA/DNA and cross-species protein alignments. A new tool, BLAT, is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences. BLAT's speed stems from an index of all nonoverlapping K-mers in the genome. This index fits inside the RAM of inexpensive computers, and need only be computed once for each genome assembly. BLAT has several major stages. It uses the index to find regions in the genome likely to be homologous to the query sequence. It performs an alignment between homologous regions. It stitches together these aligned regions (often exons) into larger alignments (typically genes). Finally, BLAT revisits small internal exons possibly missed at the first stage and adjusts large gap boundaries that have canonical splice sites where feasible. This paper describes how BLAT was optimized. Effects on speed and sensitivity are explored for various K-mer sizes, mismatch schemes, and number of required index matches. BLAT is compared with other alignment programs on various test sets and then used in several genome-wide applications. http://genome.ucsc.edu hosts a web-based BLAT server for the human genome.
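BLAT's speed comes from the first stage described above: an in-memory index of the genome's non-overlapping K-mers that turns a query into a handful of candidate regions. A minimal sketch of that indexing-and-seeding stage on a toy sequence (alignment and stitching, the later stages, are omitted):

# Sketch of BLAT's first stage: index non-overlapping k-mers of the
# "genome", then look up the query's overlapping k-mers to find seeds.
def build_index(genome, k=4):
    index = {}
    for pos in range(0, len(genome) - k + 1, k):      # non-overlapping k-mers
        index.setdefault(genome[pos:pos + k], []).append(pos)
    return index

def find_seeds(query, index, k=4):
    seeds = []
    for qpos in range(len(query) - k + 1):            # overlapping query k-mers
        for gpos in index.get(query[qpos:qpos + k], []):
            seeds.append((qpos, gpos))                # candidate homologous region
    return seeds

genome = "ACGTACGTTTGCAACGTACGT"
print(find_seeds("ACGTACGT", build_index(genome)))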

8,326 citations

Journal ArticleDOI
TL;DR: This paper proposes parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples.
Abstract: Non-biological experimental variation or "batch effects" are commonly observed across multiple batches of microarray experiments, often rendering the task of combining data from these batches difficult. The ability to combine microarray data sets is advantageous to researchers to increase statistical power to detect biological phenomena from studies where logistical considerations restrict sample size or in studies that require the sequential hybridization of arrays. In general, it is inappropriate to combine data sets without adjusting for batch effects. Methods have been proposed to filter batch effects from data, but these are often complicated and require large batch sizes (>25) to implement. Because the majority of microarray studies are conducted using much smaller sample sizes, existing methods are not sufficient. We propose parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples. We illustrate our methods using two example data sets and show that our methods are justifiable, easy to apply, and useful in practice. Software for our method is freely available at: http://biosun1.harvard.edu/complab/batch/.
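The location-scale idea behind the method can be sketched without the empirical Bayes shrinkage that is its main contribution: estimate a per-batch, per-gene mean and variance, then remove them. A simplified sketch on synthetic data (real ComBat shrinks these batch estimates toward a common prior, which is what makes it robust for small batches):

# Simplified location-scale batch adjustment (ComBat without the
# empirical Bayes shrinkage of the per-batch estimates).
import numpy as np

rng = np.random.default_rng(5)
n_genes, n_samples = 100, 60
batch = np.repeat([0, 1, 2], n_samples // 3)

data = rng.normal(size=(n_genes, n_samples))
data += np.array([0.0, 1.5, -1.0])[batch]             # additive batch effect

adjusted = np.empty_like(data)
for b in np.unique(batch):
    cols = batch == b
    mu = data[:, cols].mean(axis=1, keepdims=True)    # per-gene batch mean
    sd = data[:, cols].std(axis=1, keepdims=True)     # per-gene batch scale
    adjusted[:, cols] = (data[:, cols] - mu) / sd

print(adjusted[:, batch == 0].mean(), adjusted[:, batch == 1].mean())  # both ~0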

6,319 citations
