Journal ArticleDOI

A Next Generation Connectivity Map: L1000 Platform and the First 1,000,000 Profiles.

TL;DR: The expanded CMap is reported, made possible by a new, low-cost, high-throughput reduced representation expression profiling method that is shown to be highly reproducible, comparable to RNA sequencing, and suitable for computational inference of the expression levels of 81% of non-measured transcripts.
About: This article is published in Cell. The article was published on 2017-11-30 and is currently open access. It has received 1,943 citations to date.
Citations
Journal ArticleDOI
TL;DR: In this paper, the role of synthetic lethality in cancer risk was investigated by quantifying the extent of co-inactivation of cancer synthetic lethal (cSL) gene pairs; normal tissues with more down-regulated cSL gene pairs were found to have lower and delayed cancer risk.
Abstract: Various characteristics of cancers exhibit tissue specificity, including lifetime cancer risk, onset age, and cancer driver genes. Previously, the large variation in cancer risk across human tissues was found to strongly correlate with the number of stem cell divisions and abnormal DNA methylation levels. Here, we study the role of synthetic lethality in cancer risk. Analyzing normal tissue transcriptomics data in the Genotype-Tissue Expression project, we quantify the extent of co-inactivation of cancer synthetic lethal (cSL) gene pairs and find that normal tissues with more down-regulated cSL gene pairs have lower and delayed cancer risk. Consistently, more cSL gene pairs become up-regulated in cells treated by carcinogens and throughout premalignant stages in vivo. We also show that the tissue specificity of numerous tumor suppressor genes is associated with the expression of their cSL partner genes across normal tissues. Overall, our findings support the possible role of synthetic lethality in tumorigenesis.
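The core quantification in this abstract, counting gene pairs whose members are both down-regulated, can be sketched in a few lines. This is a hedged illustration: the function name, the dictionary-based expression input, and the single fixed threshold are all simplifications invented here, not the paper's actual pipeline.

```python
def count_coinactivated_pairs(expr, pairs, threshold):
    """Count candidate synthetic-lethal gene pairs whose members are both
    expressed below `threshold` (i.e. jointly down-regulated).

    expr      -- dict mapping gene name -> expression value for one tissue
    pairs     -- list of (gene_a, gene_b) candidate cSL pairs
    threshold -- expression cutoff below which a gene counts as inactive
    """
    inf = float("inf")
    return sum(
        1
        for a, b in pairs
        if expr.get(a, inf) < threshold and expr.get(b, inf) < threshold
    )

# Toy example: two of the three pairs are co-inactivated.
expr = {"G1": 0.2, "G2": 0.1, "G3": 5.0, "G4": 0.3, "G5": 0.05}
pairs = [("G1", "G2"), ("G2", "G3"), ("G4", "G5")]
print(count_coinactivated_pairs(expr, pairs, threshold=1.0))  # -> 2
```

In the study itself this count, computed per tissue, is the statistic correlated against lifetime cancer risk.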

9 citations

Posted ContentDOI
12 Nov 2019-bioRxiv
TL;DR: A molecular quantitative trait locus map of gene expression and protein abundance in disease is constructed using primary chondrocytes, synoviocytes and peripheral blood from patients with osteoarthritis to identify molecularly-defined patient subgroups that correlate with clinical characteristics.
Abstract: Osteoarthritis is a serious joint disease that causes pain and functional disability for a quarter of a billion people worldwide1, with no disease-stratifying tools nor modifying therapy. Here, we use primary chondrocytes, synoviocytes and peripheral blood from patients with osteoarthritis to construct a molecular quantitative trait locus map of gene expression and protein abundance in disease. By integrating data across omics levels, we identify likely effector genes for osteoarthritis-associated genetic signals. We detect stark molecular differences between macroscopically intact (low-grade) and highly degenerated (high-grade) cartilage, reflecting activation of the extracellular matrix-receptor interaction pathway. Using unsupervised consensus clustering on transcriptome-wide sequencing, we identify molecularly-defined patient subgroups that correlate with clinical characteristics. Between-cluster differences are driven by inflammation, presenting the opportunity to stratify patients on the basis of their molecular profile for tailored intervention. We construct and validate a 7-gene classifier that reproducibly distinguishes between these disease subtypes. Finally, we identify potentially actionable compounds for disease modification and drug repositioning. Our findings contribute to both patient stratification and therapy development in this globally important area of unmet need.

9 citations

Posted ContentDOI
31 Oct 2019-bioRxiv
TL;DR: Using live-cell, time-lapse imaging, and a library of 1,833 small molecules including FDA-approved drugs and investigational agents, a large compendium of kinetic cell death ‘modulatory profiles’ for inducers of apoptosis and ferroptosis is assembled.
Abstract: Cell death can be executed by regulated apoptotic and non-apoptotic pathways, including the iron-dependent process of ferroptosis. Small molecules are essential tools for studying the regulation of cell death. Using live-cell, time-lapse imaging, and a library of 1,833 small molecules including FDA-approved drugs and investigational agents, we assemble a large compendium of kinetic cell death ‘modulatory profiles’ for inducers of apoptosis and ferroptosis. From this dataset we identified dozens of small molecule inhibitors of ferroptosis, including numerous investigational and FDA-approved drugs with unexpected off-target antioxidant or iron chelating activities. ATP-competitive mechanistic target of rapamycin (mTOR) inhibitors, by contrast, were on-target ferroptosis inhibitors. Further investigation revealed both mTOR-dependent and mTOR-independent mechanisms linking amino acid levels to the regulation of ferroptosis sensitivity in cancer cells. These results highlight widespread bioactive compound pleiotropy and link amino acid sensing to the regulation of ferroptosis.

9 citations

Journal ArticleDOI
TL;DR: In this paper, the authors used empirical learning curves for evaluating and comparing the data scaling properties of two neural networks (NNs) and two gradient boosting decision tree (GBDT) models trained on four cell line drug screening datasets.
Abstract: Motivated by the size and availability of cell line drug sensitivity data, researchers have been developing machine learning (ML) models for predicting drug response to advance cancer treatment. As drug sensitivity studies continue generating drug response data, a common question is whether the generalization performance of existing prediction models can be further improved with more training data. We utilize empirical learning curves for evaluating and comparing the data scaling properties of two neural networks (NNs) and two gradient boosting decision tree (GBDT) models trained on four cell line drug screening datasets. The learning curves are accurately fitted to a power law model, providing a framework for assessing the data scaling behavior of these models. The curves demonstrate that no single model dominates in terms of prediction performance across all datasets and training sizes, thus suggesting that the actual shape of these curves depends on the unique pair of an ML model and a dataset. The multi-input NN (mNN), in which gene expressions of cancer cells and molecular drug descriptors are input into separate subnetworks, outperforms a single-input NN (sNN), where the cell and drug features are concatenated for the input layer. In contrast, a GBDT with hyperparameter tuning exhibits superior performance as compared with both NNs at the lower range of training set sizes for two of the tested datasets, whereas the mNN consistently performs better at the higher range of training sizes. Moreover, the trajectory of the curves suggests that increasing the sample size is expected to further improve prediction scores of both NNs. These observations demonstrate the benefit of using learning curves to evaluate prediction models, providing a broader perspective on the overall data scaling characteristics. 
A fitted power law learning curve provides a forward-looking metric for analyzing prediction performance and can serve as a co-design tool to guide experimental biologists and computational scientists in the design of future experiments in prospective research studies.
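The power-law fit described above is straightforward to reproduce. The sketch below, using SciPy's `curve_fit` on synthetic noise-free scores (the sizes, coefficients, and error values are invented for illustration), fits err(n) = a·n^b + c and extrapolates to a larger training set.

```python
# Fit a power-law learning curve err(n) = a * n**b + c to
# (training-set size, validation error) points, then extrapolate.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    return a * np.power(n, b) + c

sizes = np.array([100, 250, 500, 1000, 2500, 5000], dtype=float)
errors = 2.0 * sizes ** -0.35 + 0.10   # synthetic, noise-free scores

params, _ = curve_fit(power_law, sizes, errors, p0=(1.0, -0.5, 0.0))
a, b, c = params
print(f"exponent b = {b:.2f}, irreducible error c = {c:.2f}")

# Forward-looking use: predicted error at a not-yet-collected sample size.
print(f"predicted error at n=20000: {power_law(20000, *params):.3f}")
```

The fitted exponent `b` summarizes how fast the model improves with data, and `c` estimates the error floor that more data alone cannot remove.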

9 citations

Journal ArticleDOI
Miaowei Wu1, Weilei Hu1, Guosheng Wang1, Yihan Yao1, Xiao-Fang Yu1 
TL;DR: Nicotinamide N-methyltransferase can be used as a prognostic biomarker that reflects immune infiltration levels and as a novel therapeutic target in GC.
Abstract: Gastric cancer (GC) is the third most common cause of cancer-related death in the world. Immunotherapy is a promising treatment of cancer. However, it is unclear which GC subpopulation would benefit most from immunotherapy, so it is necessary to develop effective biomarkers for predicting immunotherapy response. Nicotinamide N-methyltransferase (NNMT) is a metabolic regulator of cancer-associated fibroblast (CAF) differentiation and cancer progression. In this study, we explored the correlations of NNMT to tumor-infiltrating immune cells (TIICs) and immune marker sets in The Cancer Genome Atlas Stomach Adenocarcinoma (TCGA-STAD). Subsequently, we screened the NNMT-correlated genes and performed enrichment analysis of these genes. We eventually predicted the 19 most promising small-molecule drugs using the Connectivity Map (CMap) and Comparative Toxicogenomics Database (CTD). Nadolol, tranexamic acid, felbinac and dapsone were considered the four most promising drugs for GC. In summary, NNMT can be used as a prognostic biomarker that reflects immune infiltration levels and a novel therapeutic target in GC.

9 citations


Cites methods from "A Next Generation Connectivity Map:..."

  • ...The prognostic value of NNMT mRNA expression was explored using an online database, Kaplan-Meier Plotter2 (Szasz et al., 2016), which contained gene expression data and survival information of GC patients....


References
Journal ArticleDOI
TL;DR: The Gene Set Enrichment Analysis (GSEA) method as discussed by the authors focuses on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation.
Abstract: Although genomewide RNA expression analysis has become a routine tool in biomedical research, extracting biological insight from such information remains a major challenge. Here, we describe a powerful analytical method called Gene Set Enrichment Analysis (GSEA) for interpreting gene expression data. The method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation. We demonstrate how GSEA yields insights into several cancer-related data sets, including leukemia and lung cancer. Notably, where single-gene analysis finds little similarity between two independent studies of patient survival in lung cancer, GSEA reveals many biological pathways in common. The GSEA method is embodied in a freely available software package, together with an initial database of 1,325 biologically defined gene sets.
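The core of GSEA is a running-sum statistic over a ranked gene list. The sketch below is a deliberately simplified, unweighted version of that idea (the published method uses a weighted Kolmogorov-Smirnov-like statistic with permutation-based significance); function and gene names are invented for illustration.

```python
# Minimal unweighted running-sum enrichment score in the spirit of GSEA:
# walk down a ranked gene list, stepping up at gene-set members and down
# otherwise, and report the maximum deviation from zero.

def enrichment_score(ranked_genes, gene_set):
    gene_set = set(gene_set)
    n_hits = sum(1 for g in ranked_genes if g in gene_set)
    n_miss = len(ranked_genes) - n_hits
    hit_step = 1.0 / n_hits    # total "up" mass sums to 1
    miss_step = 1.0 / n_miss   # total "down" mass sums to 1
    running, best = 0.0, 0.0
    for g in ranked_genes:
        running += hit_step if g in gene_set else -miss_step
        if abs(running) > abs(best):
            best = running
    return best

# Set members clustered at the top of the ranking -> score near 1.0.
ranked = ["A", "B", "C", "D", "E", "F", "G", "H"]
print(enrichment_score(ranked, {"A", "B", "C"}))
```

A score near +1 means the set is concentrated at the top of the ranking; near -1, at the bottom; near 0, spread evenly.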

34,830 citations

Journal Article
TL;DR: A new technique called t-SNE that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map, a variation of Stochastic Neighbor Embedding that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
Abstract: We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets.
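In practice, t-SNE is typically applied through an off-the-shelf implementation. A minimal usage sketch with scikit-learn (assumed available; the cluster geometry and parameter choices are illustrative only) embeds a small high-dimensional point cloud into a 2-D map:

```python
# Embed a 50-dimensional point cloud into 2-D with t-SNE for plotting.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Two well-separated 50-dimensional clusters of 10 points each.
X = np.vstack([rng.normal(0, 1, (10, 50)), rng.normal(8, 1, (10, 50))])

embedding = TSNE(
    n_components=2,      # two-dimensional map for visualization
    perplexity=5,        # must be smaller than the number of points
    init="random",
    random_state=0,
).fit_transform(X)

print(embedding.shape)  # (20, 2)
```

Perplexity loosely controls the effective neighborhood size; values well below the sample count are required, and different settings can yield visibly different maps.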

30,124 citations

Journal ArticleDOI
TL;DR: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data and provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments.
Abstract: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data. GEO provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments. GEO is not intended to replace in house gene expression databases that benefit from coherent data sets, and which are constructed to facilitate a particular analytic method, but rather complement these by acting as a tertiary, central data distribution hub. The three central data entities of GEO are platforms, samples and series, and were designed with gene expression and genomic hybridization experiments in mind. A platform is, essentially, a list of probes that define what set of molecules may be detected. A sample describes the set of molecules that are being probed and references a single platform used to generate its molecular abundance data. A series organizes samples into the meaningful data sets which make up an experiment. The GEO repository is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo.
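The platform/sample/series data model described above can be made concrete with a few hypothetical dataclasses. These types and accession strings are illustrative only, not GEO's actual schema or API:

```python
# Sketch of GEO's three central entities: a platform lists probes, a
# sample references a single platform, and a series groups samples.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Platform:
    accession: str            # e.g. "GPL96" (hypothetical)
    probes: List[str]         # defines what molecules may be detected

@dataclass
class Sample:
    accession: str            # e.g. "GSM1234" (hypothetical)
    platform: Platform        # each sample references one platform
    abundances: Dict[str, float]  # probe -> molecular abundance value

@dataclass
class Series:
    accession: str            # e.g. "GSE5678" (hypothetical)
    samples: List[Sample] = field(default_factory=list)

gpl = Platform("GPL96", ["p1", "p2"])
gsm = Sample("GSM1", gpl, {"p1": 3.2, "p2": 0.4})
gse = Series("GSE1", [gsm])
print(gse.samples[0].platform.accession)  # GPL96
```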

10,968 citations

Journal ArticleDOI
TL;DR: How BLAT was optimized is described, which is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences.
Abstract: Analyzing vertebrate genomes requires rapid mRNA/DNA and cross-species protein alignments. A new tool, BLAT, is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences. BLAT's speed stems from an index of all nonoverlapping K-mers in the genome. This index fits inside the RAM of inexpensive computers, and need only be computed once for each genome assembly. BLAT has several major stages. It uses the index to find regions in the genome likely to be homologous to the query sequence. It performs an alignment between homologous regions. It stitches together these aligned regions (often exons) into larger alignments (typically genes). Finally, BLAT revisits small internal exons possibly missed at the first stage and adjusts large gap boundaries that have canonical splice sites where feasible. This paper describes how BLAT was optimized. Effects on speed and sensitivity are explored for various K-mer sizes, mismatch schemes, and number of required index matches. BLAT is compared with other alignment programs on various test sets and then used in several genome-wide applications. http://genome.ucsc.edu hosts a web-based BLAT server for the human genome.
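The nonoverlapping K-mer index at the heart of BLAT's speed is easy to sketch. The toy version below (sequence and function names invented; real BLAT adds alignment, stitching, and splice-site handling on top) records genome positions of nonoverlapping K-mers and looks up a query's overlapping K-mers to find candidate homologous regions:

```python
# Toy K-mer index in the style of BLAT's first stage.
from collections import defaultdict

def build_kmer_index(genome, k):
    index = defaultdict(list)
    # Nonoverlapping K-mers keep the index small enough to fit in RAM.
    for pos in range(0, len(genome) - k + 1, k):
        index[genome[pos:pos + k]].append(pos)
    return index

def candidate_hits(index, query, k):
    hits = []
    # The query is scanned with overlapping K-mers so a match at any
    # offset against the nonoverlapping genome index can still be found.
    for qpos in range(len(query) - k + 1):
        for gpos in index.get(query[qpos:qpos + k], []):
            hits.append((qpos, gpos))
    return hits

genome = "ACGTACGTTTGCAACGT"
index = build_kmer_index(genome, k=4)
print(candidate_hits(index, "TTGCAA", k=4))  # [(0, 8)]
```

Each hit pairs a query offset with a genome position; in the real tool, clusters of such hits seed the alignment stage.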

8,326 citations

Journal ArticleDOI
TL;DR: This paper proposed parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples.
Abstract: SUMMARY Non-biological experimental variation or "batch effects" are commonly observed across multiple batches of microarray experiments, often rendering the task of combining data from these batches difficult. The ability to combine microarray data sets is advantageous to researchers to increase statistical power to detect biological phenomena from studies where logistical considerations restrict sample size or in studies that require the sequential hybridization of arrays. In general, it is inappropriate to combine data sets without adjusting for batch effects. Methods have been proposed to filter batch effects from data, but these are often complicated and require large batch sizes (>25) to implement. Because the majority of microarray studies are conducted using much smaller sample sizes, existing methods are not sufficient. We propose parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples. We illustrate our methods using two example data sets and show that our methods are justifiable, easy to apply, and useful in practice. Software for our method is freely available at: http://biosun1.harvard.edu/complab/batch/.
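To see what "adjusting for batch effects" means at its simplest, the sketch below performs a location-only correction: each batch is shifted so its mean matches the grand mean. This is only an illustration of the idea; the actual ComBat method uses parametric or non-parametric empirical Bayes shrinkage of per-batch location and scale estimates, and the data here are invented.

```python
# Location-only batch centering (a much-simplified stand-in for ComBat).
import numpy as np

def center_batches(values, batches):
    values = np.asarray(values, dtype=float)
    batches = np.asarray(batches)
    adjusted = values.copy()
    grand_mean = values.mean()
    for b in np.unique(batches):
        mask = batches == b
        # Shift this batch so its mean matches the grand mean.
        adjusted[mask] += grand_mean - values[mask].mean()
    return adjusted

values = [1.0, 2.0, 3.0, 11.0, 12.0, 13.0]   # batch 2 shifted by +10
batches = [1, 1, 1, 2, 2, 2]
print(center_batches(values, batches))  # [6. 7. 8. 6. 7. 8.]
```

Empirical Bayes improves on this naive shift by borrowing information across genes, which is what makes the published method stable for the small batch sizes the abstract highlights.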

6,319 citations
