Journal ArticleDOI

A Next Generation Connectivity Map: L1000 Platform and the First 1,000,000 Profiles.

TL;DR: The expanded CMap is reported, made possible by a new, low-cost, high-throughput reduced representation expression profiling method that is shown to be highly reproducible, comparable to RNA sequencing, and suitable for computational inference of the expression levels of 81% of non-measured transcripts.
About: This article is published in Cell. The article was published on 2017-11-30 and is currently open access. It has received 1,943 citations to date.
Citations
Journal ArticleDOI
Shiqi Li, Fuhui Zhang, Xiuchan Xiao, Yanzhi Guo, Zhining Wen, Menglong Li, Xuemei Pu
TL;DR: The authors constructed transcriptomics-based and network-based prediction models to quickly screen potential drug combinations for prostate cancer, and further assessed their performance by in vitro assays.
Abstract: Prostate cancer (PRAD) is a major cause of cancer-related deaths. Current monotherapies show limited efficacy due to often rapidly emerging resistance. Combination therapies could provide an alternative solution to this problem, with enhanced therapeutic effect, reduced cytotoxicity, and a delayed appearance of drug resistance. However, experimental approaches are prohibitively costly and labor-intensive for picking out synergistic combinations from the millions of possibilities. Thus, it is highly desirable to explore other efficient strategies to assist experimental research. Motivated by this challenge, we construct transcriptomics-based and network-based prediction models to quickly screen potential drug combinations for prostate cancer, and further assess their performance by in vitro assays. The transcriptomics-based method screens nine possible combinations. However, the network-based method gives discrepant results for at least three drug pairs. Further experimental results indicate dose-dependent effects of the three docetaxel-containing combinations and confirm the synergistic effects of the other six combinations predicted by the transcriptomics-based model. For the network-based predictions, in vitro tests give results opposite to two combinations (i.e., mitoxantrone-cyproheptadine and cabazitaxel-cyproheptadine). Namely, the transcriptomics-based method outperforms the network-based one for a specific disease like prostate cancer, which provides guidance for the selection of computational methods in drug combination screening. More importantly, six combinations (the three mitoxantrone-containing and the three cabazitaxel-containing combinations) are found to be promising candidates to synergistically conquer prostate cancer.

7 citations

Journal ArticleDOI
01 Mar 2022
TL;DR: In this paper , the authors discuss past, present, and future developments of pharmacogenetics methodology, focusing on three milestones: how early research established the genetic basis of drug responses, how technological progress made it possible to assess the full extent of pharmacological variants, and how multi-dimensional omics datasets can improve the identification, functional validation, and mechanistic understanding of the interplay between genes and drugs.
Abstract: The origins of pharmacogenetics date back to the 1950s, when it was established that inter-individual differences in drug response are partially determined by genetic factors. Since then, pharmacogenetics has grown into its own field, motivated by the translation of identified gene-drug interactions into therapeutic applications. Despite numerous challenges ahead, our understanding of the human pharmacogenetic landscape has greatly improved thanks to the integration of tools originating from disciplines as diverse as biochemistry, molecular biology, statistics, and computer sciences. In this review, we discuss past, present, and future developments of pharmacogenetics methodology, focusing on three milestones: how early research established the genetic basis of drug responses, how technological progress made it possible to assess the full extent of pharmacological variants, and how multi-dimensional omics datasets can improve the identification, functional validation, and mechanistic understanding of the interplay between genes and drugs. We outline novel strategies to repurpose and integrate molecular and clinical data originating from biobanks to gain insights analogous to those obtained from randomized controlled trials. Emphasizing the importance of increased diversity, we envision future directions for the field that should pave the way to the clinical implementation of pharmacogenetics.

7 citations

Journal ArticleDOI
TL;DR: The authors developed and validated an effective signature based on autophagy-, apoptosis-, and necrosis-related genes for prognostic implications in glioblastoma (GBM) patients.
Abstract: Glioblastoma (GBM) is considered the most malignant and devastating intracranial tumor without effective treatment. Autophagy, apoptosis, and necrosis, three classically known cell death pathways, can provide novel clinical and immunological insights, which may assist in designing personalized therapeutics. In this study, we developed and validated an effective signature based on autophagy-, apoptosis-, and necrosis-related genes for prognostic implications in GBM patients. Variations in the expression of genes involved in autophagy, apoptosis, and necrosis were explored in 518 GBM patients from The Cancer Genome Atlas (TCGA) database. Univariate Cox analysis, least absolute shrinkage and selection operator (LASSO) analysis, and multivariate Cox analysis were performed to construct a combined prognostic signature. Kaplan-Meier survival, receiver-operating characteristic (ROC) curves and Cox regression analyses based on overall survival (OS) and progression-free survival (PFS) were conducted to estimate the independent prognostic performance of the gene signature. The Chinese Glioma Genome Atlas (CGGA) dataset was used for external validation. Finally, we investigated the differences in the immune microenvironment between different prognostic groups and predicted potential compounds targeting each group. A 16-gene cell death index (CDI) was established. Patients were clustered into either the high-risk or the low-risk group according to the CDI score, and those in the low-risk group presented significantly longer OS and PFS than the high CDI group. ROC curves demonstrated outstanding performance of the gene signature in both the training and validation groups. Furthermore, immune cell analysis identified higher infiltration of neutrophils, macrophages, Treg, T helper cells, and aDCs, and lower infiltration of B cells in the high CDI group. Interestingly, this group also showed lower expression levels of the immune checkpoint molecules PDCD1 and CD200, and higher expression levels of PDCD1LG2, CD86, CD48 and IDO1. Our study proposes that the CDI signature can be utilized as a prognostic predictor and may guide patients' selection for preferential use of immunotherapy in GBM.

7 citations

Journal ArticleDOI
TL;DR: A novel domain-adversarial multi-task framework for integrating shared knowledge from multiple domains that first uses an adversarial strategy to learn target representations and then models nonlinear dependency among several domains.
Abstract:
Motivation: With the rapid development of high-throughput technologies, parallel acquisition of large-scale drug-informatics data provides significant opportunities to improve pharmaceutical research and development. One important application is the purpose prediction of small-molecule compounds with the objective of specifying the therapeutic properties of extensive purpose-unknown compounds and repurposing the novel therapeutic properties of FDA-approved drugs. Such a problem is extremely challenging because compound attributes include heterogeneous data with various feature patterns, such as drug fingerprints, drug physicochemical properties and drug perturbation gene expressions. Moreover, there is a complex non-linear dependency among heterogeneous data. In this study, we propose a novel domain-adversarial multi-task framework for integrating shared knowledge from multiple domains. The framework first uses an adversarial strategy to learn target representations and then models non-linear dependency among several domains.
Results: Experiments on two real-world datasets illustrate that our approach achieves an obvious improvement over competitive baselines. The novel therapeutic properties of purpose-unknown compounds that we predicted have been widely reported or brought to clinics. Furthermore, our framework can integrate various attributes beyond the three domains examined herein and can be applied in industry for screening significant numbers of small-molecule drug candidates.
Availability and implementation: The source code and datasets are available at https://github.com/JohnnyY8/DAMT-Model.
Supplementary information: Supplementary data are available at Bioinformatics online.

7 citations

References
Journal ArticleDOI
TL;DR: The Gene Set Enrichment Analysis (GSEA) method as discussed by the authors focuses on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation.
Abstract: Although genomewide RNA expression analysis has become a routine tool in biomedical research, extracting biological insight from such information remains a major challenge. Here, we describe a powerful analytical method called Gene Set Enrichment Analysis (GSEA) for interpreting gene expression data. The method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation. We demonstrate how GSEA yields insights into several cancer-related data sets, including leukemia and lung cancer. Notably, where single-gene analysis finds little similarity between two independent studies of patient survival in lung cancer, GSEA reveals many biological pathways in common. The GSEA method is embodied in a freely available software package, together with an initial database of 1,325 biologically defined gene sets.

34,830 citations
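The core of GSEA described above is a running-sum statistic over a ranked gene list. The sketch below is a simplified, unweighted variant (the published method weights steps by each gene's correlation with the phenotype and assesses significance by permutation); all gene names are illustrative:

```python
def enrichment_score(ranked_genes, gene_set):
    """Unweighted GSEA-style running-sum enrichment score.

    Walk down the ranked list; step up when a gene is in the set,
    step down otherwise, and return the maximum deviation from zero.
    """
    gene_set = set(gene_set)
    n_hits = sum(1 for g in ranked_genes if g in gene_set)
    n_miss = len(ranked_genes) - n_hits
    if n_hits == 0 or n_miss == 0:
        raise ValueError("gene_set must partially overlap the ranked list")
    hit_step = 1.0 / n_hits    # hit increments sum to +1 over all hits
    miss_step = 1.0 / n_miss   # miss decrements sum to -1 over all misses
    running, best = 0.0, 0.0
    for g in ranked_genes:
        running += hit_step if g in gene_set else -miss_step
        if abs(running) > abs(best):
            best = running
    return best

# set members clustered at the top of the ranking -> maximal positive score
ranked = ["TP53", "MYC", "EGFR", "GAPDH", "ACTB", "TUBB"]
print(round(enrichment_score(ranked, {"TP53", "MYC", "EGFR"}), 3))  # 1.0
```

A set concentrated at the bottom of the ranking yields a negative score, matching the intuition that GSEA detects coordinated shifts of a gene set toward either extreme of the list.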

Journal Article
TL;DR: A new technique called t-SNE that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map, a variation of Stochastic Neighbor Embedding that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
Abstract: We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets.

30,124 citations
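The first step of the SNE family of methods above is converting high-dimensional distances into neighbor probabilities. A minimal sketch with a fixed Gaussian bandwidth (real t-SNE tunes sigma per point by binary search to hit a target perplexity, then optimizes a low-dimensional map against these probabilities):

```python
import math

def conditional_probs(points, sigma=1.0):
    """p_{j|i}: probability that point i picks j as a neighbor, from a
    Gaussian kernel on squared Euclidean distances (fixed bandwidth here)."""
    n = len(points)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        d2 = [sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
              for j in range(n)]
        # a point never picks itself; weight decays with distance
        w = [0.0 if j == i else math.exp(-d2[j] / (2.0 * sigma ** 2))
             for j in range(n)]
        z = sum(w)
        for j in range(n):
            P[i][j] = w[j] / z  # normalize each row to a distribution
    return P

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
P = conditional_probs(pts)
print(P[0][1] > P[0][2])  # the two nearby points favor each other: True
```

The "crowding" fix that distinguishes t-SNE from SNE is to model the low-dimensional affinities with a heavy-tailed Student-t kernel instead of this Gaussian.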

Journal ArticleDOI
TL;DR: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data and provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments.
Abstract: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data. GEO provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments. GEO is not intended to replace in-house gene expression databases that benefit from coherent data sets, and which are constructed to facilitate a particular analytic method, but rather complement these by acting as a tertiary, central data distribution hub. The three central data entities of GEO are platforms, samples and series, and were designed with gene expression and genomic hybridization experiments in mind. A platform is, essentially, a list of probes that define what set of molecules may be detected. A sample describes the set of molecules that are being probed and references a single platform used to generate its molecular abundance data. A series organizes samples into the meaningful data sets which make up an experiment. The GEO repository is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo.

10,968 citations

Journal ArticleDOI
TL;DR: This paper describes how BLAT was optimized; BLAT is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments, and 50 times faster for protein alignments, at sensitivity settings typically used when comparing vertebrate sequences.
Abstract: Analyzing vertebrate genomes requires rapid mRNA/DNA and cross-species protein alignments. A new tool, BLAT, is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences. BLAT's speed stems from an index of all nonoverlapping K-mers in the genome. This index fits inside the RAM of inexpensive computers, and need only be computed once for each genome assembly. BLAT has several major stages. It uses the index to find regions in the genome likely to be homologous to the query sequence. It performs an alignment between homologous regions. It stitches together these aligned regions (often exons) into larger alignments (typically genes). Finally, BLAT revisits small internal exons possibly missed at the first stage and adjusts large gap boundaries that have canonical splice sites where feasible. This paper describes how BLAT was optimized. Effects on speed and sensitivity are explored for various K-mer sizes, mismatch schemes, and number of required index matches. BLAT is compared with other alignment programs on various test sets and then used in several genome-wide applications. http://genome.ucsc.edu hosts a web-based BLAT server for the human genome.

8,326 citations
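BLAT's first stage, as the abstract describes it, indexes the genome's non-overlapping k-mers once and then scans query k-mers against that index to find candidate homologous regions. A toy sketch of that seeding step (the stitching, alignment, and splice-site stages are omitted; sequences are illustrative):

```python
def kmer_index(genome, k=4):
    """Map each non-overlapping k-mer to its genomic positions, echoing
    BLAT's strategy of indexing the target once per assembly."""
    index = {}
    for pos in range(0, len(genome) - k + 1, k):  # non-overlapping: step by k
        index.setdefault(genome[pos:pos + k], []).append(pos)
    return index

def seed_hits(query, index, k=4):
    """Slide every (overlapping) query k-mer over the index to collect
    (query_pos, genome_pos) seed matches, as in BLAT's first stage."""
    hits = []
    for qpos in range(len(query) - k + 1):
        for gpos in index.get(query[qpos:qpos + k], []):
            hits.append((qpos, gpos))
    return hits

genome = "ACGTACGTTTTTACGT"
idx = kmer_index(genome, k=4)
print(seed_hits("ACGT", idx, k=4))  # [(0, 0), (0, 4), (0, 12)]
```

Indexing non-overlapping rather than all k-mers is what keeps the index small enough to sit in RAM, at a modest cost in seed sensitivity that the later alignment stages recover.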

Journal ArticleDOI
TL;DR: This paper proposes parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples.
Abstract: Non-biological experimental variation or "batch effects" are commonly observed across multiple batches of microarray experiments, often rendering the task of combining data from these batches difficult. The ability to combine microarray data sets is advantageous to researchers to increase statistical power to detect biological phenomena from studies where logistical considerations restrict sample size or in studies that require the sequential hybridization of arrays. In general, it is inappropriate to combine data sets without adjusting for batch effects. Methods have been proposed to filter batch effects from data, but these are often complicated and require large batch sizes (>25) to implement. Because the majority of microarray studies are conducted using much smaller sample sizes, existing methods are not sufficient. We propose parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples. We illustrate our methods using two example data sets and show that our methods are justifiable, easy to apply, and useful in practice. Software for our method is freely available at: http://biosun1.harvard.edu/complab/batch/.

6,319 citations
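The kind of batch effect this method targets is a systematic shift shared by all samples in a batch. A deliberately naive sketch, removing only each batch's mean shift for one gene (ComBat goes much further, shrinking per-batch location and scale estimates across genes with empirical Bayes, which is what makes it stable for small batches):

```python
from statistics import mean

def center_batches(values, batches):
    """Naive per-gene batch adjustment: subtract each batch's mean and
    restore the grand mean, so every batch ends up centered alike."""
    grand = mean(values)
    batch_means = {b: mean(v for v, bb in zip(values, batches) if bb == b)
                   for b in set(batches)}
    return [v - batch_means[b] + grand for v, b in zip(values, batches)]

# batch "B" measurements sit uniformly 2 units above batch "A"
vals = [1.0, 2.0, 3.0, 4.0]
batch = ["A", "A", "B", "B"]
print(center_batches(vals, batch))  # [2.0, 3.0, 2.0, 3.0]
```

After adjustment both batches share the grand mean of 2.5; the within-batch spread, which may carry real biology, is untouched. Mean-centering alone would erase biological differences confounded with batch, which is one reason the empirical Bayes shrinkage in the paper is preferred in practice.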
