Journal ArticleDOI

A Next Generation Connectivity Map: L1000 Platform and the First 1,000,000 Profiles.

TL;DR: The expanded CMap is reported, made possible by a new, low-cost, high-throughput reduced representation expression profiling method that is shown to be highly reproducible, comparable to RNA sequencing, and suitable for computational inference of the expression levels of 81% of non-measured transcripts.
About: This article is published in Cell. The article was published on 2017-11-30 and is currently open access. It has received 1,943 citations to date.
Citations
Posted Content
29 Mar 2020
TL;DR: In this paper, the authors proposed six candidate drugs, including geldanamycin, panobinostat, trichostatin A, narciclasine, COL-3 and CGP-60474, that could best reverse the abnormal gene expression caused by SARS-CoV-2 spike protein-induced inhibition of ACE2 in lung cells.
Abstract: Lung injury with severe respiratory failure is the leading cause of death in COVID-19. Inhibition of ACE2 caused by spike protein of (SARS)-CoV-2 is the most plausible mechanism of lung injury in COVID-19. We proposed six candidate drugs, including geldanamycin, panobinostat, trichostatin A, narciclasine, COL-3 and CGP-60474, that could best reverse abnormal gene expression caused by (SARS)-CoV-2-induced inhibition of ACE2 in lung cells, for the promise of treating lung injuries in COVID-19.

8 citations
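The drug-repurposing strategy above ranks compounds by how strongly their expression signatures anti-correlate with (i.e. reverse) the disease signature. A minimal sketch of such a reversal score, assuming a rank-correlation formulation; this is a hypothetical simplification of CMap-style connectivity scoring, and all gene values and drug names are illustrative:

```python
import numpy as np

def rankdata(x):
    # Simple ranking (assumes no tied values).
    order = np.argsort(x)
    ranks = np.empty(len(x), dtype=float)
    ranks[order] = np.arange(1, len(x) + 1)
    return ranks

def reversal_score(disease_sig, drug_sig):
    """Spearman correlation between a disease and a drug signature.

    Strongly negative values mean the drug up-regulates genes the
    disease down-regulates (and vice versa), i.e. it 'reverses' the
    disease signature.
    """
    return float(np.corrcoef(rankdata(disease_sig),
                             rankdata(drug_sig))[0, 1])

# Toy example: drug_b perfectly reverses the disease signature,
# while drug_a mimics it.
disease = np.array([2.0, 1.5, -1.0, -2.5])
drug_a = np.array([1.8, 1.2, -0.5, -2.0])
drug_b = -disease

scores = {name: reversal_score(disease, sig)
          for name, sig in [("drug_a", drug_a), ("drug_b", drug_b)]}
best = min(scores, key=scores.get)  # most negative = best reversal
```

With real data the signatures would be genome-wide differential-expression vectors rather than four toy values.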

Journal ArticleDOI
TL;DR: The advent of human-induced pluripotent stem cell (hiPSC) technology presents an advantageous system that complements animal models of neurodegenerative diseases.
Abstract: Neurodegenerative diseases affect millions of people worldwide and are characterized by the chronic and progressive deterioration of neural function. Neurodegenerative diseases, such as Alzheimer’s disease (AD), Parkinson’s disease (PD), amyotrophic lateral sclerosis (ALS), and Huntington’s disease (HD), represent a huge social and economic burden due to increasing prevalence in our aging society, severity of symptoms, and lack of effective disease-modifying therapies. This lack of effective treatments is partly due to a lack of reliable models. Modeling neurodegenerative diseases is difficult because of poor access to human samples (restricted in general to postmortem tissue) and limited knowledge of disease mechanisms in a human context. Animal models play an instrumental role in understanding these diseases but fail to comprehensively represent the full extent of disease due to critical differences between humans and other mammals. The advent of human-induced pluripotent stem cell (hiPSC) technology presents an advantageous system that complements animal models of neurodegenerative diseases. Coupled with advances in gene-editing technologies, hiPSC-derived neural cells from patients and healthy donors now allow disease modeling using human samples that can be used for drug discovery.

8 citations

Journal ArticleDOI
TL;DR: In this paper, the authors characterized the role of LINE-1 (L1) in the metabolic reprogramming of lung cancer: L1-gene chimeric transcript events corresponded with specific metabolic processes and mitochondrial functions and were associated with genomic instability, hypomethylation, tumor stage and the tumor immune microenvironment, suggesting a potential therapeutic strategy for treating lung cancer.
Abstract: Background: Long Interspersed Nuclear Element-1 (LINE-1, L1) is increasingly regarded as a genetic risk factor for lung cancer. Transcriptionally active LINE-1 forms L1-gene chimeric transcripts (LCTs), through somatic L1 retrotransposition (LRT) or L1 antisense promoter (L1-ASP) activation, that play an oncogenic role in cancer progression. Methods: Here, we developed a Retrotransposon-gene fusion estimation program (ReFuse) to identify and quantify LCTs in RNA sequencing data from the TCGA lung cancer cohort (n = 1146) and a single-cell RNA sequencing dataset, and further validated those LCTs in an independent cohort (n = 134). We next examined the functional roles of a cancer-specific LCT (L1-FGGY) in cell proliferation and tumor progression in LUSC cell lines and mice. Results: The LCT events corresponded with specific metabolic processes and mitochondrial functions and were associated with genomic instability, hypomethylation, tumor stage and the tumor immune microenvironment (TIME). Functional analysis of a tumor-specific and frequent LCT involving FGGY (L1-FGGY) revealed that the arachidonic acid (AA) metabolic pathway was activated by the loss of FGGY through the L1-FGGY chimeric transcript to promote tumor growth, which was effectively targeted by the combined use of an anti-HIV drug (NVR) and a metabolic inhibitor (ML355). Lastly, we identified a set of transcriptomic signatures to stratify the LUSC patients at higher risk of poor outcomes who may benefit from treatment with NVR alone or combined with an anti-metabolism drug. Conclusions: This study is the first to characterize the role of L1 in the metabolic reprogramming of lung cancer and provides a rationale for L1-specific prognosis and a potential therapeutic strategy for treating lung cancer. Trial registration: Study on the mechanisms of the mobile element L1-FGGY promoting the proliferation, invasion and immune escape of lung squamous cell carcinoma through the 12-LOX/Wnt pathway, Ek2020111.
Registered 27 March 2020 ‐ Retrospectively registered.

8 citations

Journal ArticleDOI
08 May 2020
TL;DR: The Imaging-AMARETTO algorithms and software tools to systematically interrogate regulatory networks derived from multiomics data within and across related patient studies for their relevance to radiography and histopathology imaging features predicting clinical outcomes are presented.
Abstract: PURPOSE: The availability of increasing volumes of multiomics, imaging, and clinical data in complex diseases such as cancer opens opportunities for the formulation and development of computational i...

8 citations

Journal ArticleDOI
TL;DR: Here, the computational tools that have been developed to integrate cancer cell lines' genomic profiles and sensitivity to small molecule perturbations obtained from different screenings are reviewed.
Abstract: Since the pioneering NCI-60 panel of the late '80s, several major screenings of genetic profiling and drug testing in cancer cell lines have been conducted to investigate how genetic backgrounds and transcriptional patterns shape cancer's response to therapy and to identify disease-specific genes associated with drug response. Historically, pharmacogenomics screenings have been largely heterogeneous in terms of investigated cell lines, assay technologies, number of compounds, type and quality of genomic data, and methods for their computational analysis. The analysis of this enormous and heterogeneous amount of data required the development of computational methods for the integration of genomic profiles with drug responses across multiple screenings. Here, we will review the computational tools that have been developed to integrate cancer cell lines' genomic profiles and sensitivity to small molecule perturbations obtained from different screenings.

8 citations

References
Journal ArticleDOI
TL;DR: The Gene Set Enrichment Analysis (GSEA) method as discussed by the authors focuses on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation.
Abstract: Although genomewide RNA expression analysis has become a routine tool in biomedical research, extracting biological insight from such information remains a major challenge. Here, we describe a powerful analytical method called Gene Set Enrichment Analysis (GSEA) for interpreting gene expression data. The method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation. We demonstrate how GSEA yields insights into several cancer-related data sets, including leukemia and lung cancer. Notably, where single-gene analysis finds little similarity between two independent studies of patient survival in lung cancer, GSEA reveals many biological pathways in common. The GSEA method is embodied in a freely available software package, together with an initial database of 1,325 biologically defined gene sets.

34,830 citations
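The running-sum statistic at the heart of GSEA can be sketched in a few lines. This is the unweighted (Kolmogorov-Smirnov-like) variant only; the published method adds rank weighting and permutation-based significance testing, and the gene names below are illustrative:

```python
def enrichment_score(ranked_genes, gene_set):
    """Unweighted GSEA-style enrichment score.

    Walk down the ranked gene list, incrementing the running sum for
    hits (genes in the set) and decrementing for misses; the score is
    the maximum deviation of the running sum from zero.
    """
    gene_set = set(gene_set)
    n = len(ranked_genes)
    n_hits = len(gene_set & set(ranked_genes))
    hit_step = 1.0 / n_hits        # each hit moves the sum up
    miss_step = 1.0 / (n - n_hits)  # each miss moves it down
    running, best = 0.0, 0.0
    for g in ranked_genes:
        running += hit_step if g in gene_set else -miss_step
        if abs(running) > abs(best):
            best = running
    return best

# Genes ranked by correlation with a phenotype; the set clusters
# at the top of the list, so the score is maximal (1.0).
ranked = ["TP53", "MYC", "EGFR", "GAPDH", "ACTB", "TUBB"]
score = enrichment_score(ranked, {"TP53", "MYC", "EGFR"})
```

A set scattered uniformly through the list would instead yield a score near zero.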

Journal Article
TL;DR: A new technique called t-SNE that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map, a variation of Stochastic Neighbor Embedding that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
Abstract: We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets.

30,124 citations
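A typical use of t-SNE, assuming scikit-learn is available (its `TSNE` estimator implements a later Barnes-Hut variant of the method described above); the cluster layout here is a made-up toy:

```python
import numpy as np
from sklearn.manifold import TSNE

# Toy high-dimensional data: two well-separated Gaussian clusters
# of 10 points each in 50 dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (10, 50)),
               rng.normal(5, 0.1, (10, 50))])

# Perplexity (roughly, the effective number of neighbors) must be
# smaller than the number of samples.
emb = TSNE(n_components=2, perplexity=5, init="pca",
           random_state=0).fit_transform(X)
# emb has shape (20, 2): one 2-D map location per datapoint.
```

On real expression profiles, each row of `X` would be one sample's measured (or inferred) transcript levels.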

Journal ArticleDOI
TL;DR: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data and provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments.
Abstract: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data. GEO provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments. GEO is not intended to replace in-house gene expression databases that benefit from coherent data sets, and which are constructed to facilitate a particular analytic method, but rather complement these by acting as a tertiary, central data distribution hub. The three central data entities of GEO are platforms, samples and series, and were designed with gene expression and genomic hybridization experiments in mind. A platform is, essentially, a list of probes that define what set of molecules may be detected. A sample describes the set of molecules that are being probed and references a single platform used to generate its molecular abundance data. A series organizes samples into the meaningful data sets which make up an experiment. The GEO repository is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo.

10,968 citations
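The platform/sample/series data model described above can be sketched as plain data structures. This is a hypothetical rendering for illustration only; the accession strings and probe names are made up, though they follow GEO's GPL/GSM/GSE prefix convention:

```python
from dataclasses import dataclass, field

@dataclass
class Platform:
    accession: str        # e.g. a GPL-prefixed identifier
    probes: list[str]     # defines what set of molecules may be detected

@dataclass
class Sample:
    accession: str        # e.g. a GSM-prefixed identifier
    platform: Platform    # each sample references a single platform
    abundances: dict[str, float]  # probe -> molecular abundance value

@dataclass
class Series:
    accession: str        # e.g. a GSE-prefixed identifier
    samples: list[Sample] = field(default_factory=list)  # one experiment

chip = Platform("GPL-demo", ["probe_a", "probe_b"])
s1 = Sample("GSM-demo-1", chip, {"probe_a": 7.2, "probe_b": 3.1})
exp = Series("GSE-demo", [s1])
```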

Journal ArticleDOI
TL;DR: How BLAT was optimized is described, which is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences.
Abstract: Analyzing vertebrate genomes requires rapid mRNA/DNA and cross-species protein alignments. A new tool, BLAT, is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences. BLAT's speed stems from an index of all nonoverlapping K-mers in the genome. This index fits inside the RAM of inexpensive computers, and need only be computed once for each genome assembly. BLAT has several major stages. It uses the index to find regions in the genome likely to be homologous to the query sequence. It performs an alignment between homologous regions. It stitches together these aligned regions (often exons) into larger alignments (typically genes). Finally, BLAT revisits small internal exons possibly missed at the first stage and adjusts large gap boundaries that have canonical splice sites where feasible. This paper describes how BLAT was optimized. Effects on speed and sensitivity are explored for various K-mer sizes, mismatch schemes, and number of required index matches. BLAT is compared with other alignment programs on various test sets and then used in several genome-wide applications. http://genome.ucsc.edu hosts a web-based BLAT server for the human genome.

8,326 citations
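The index-then-seed idea behind BLAT's first stage can be sketched as follows. This is a toy illustration, not BLAT itself: real BLAT stores the index compactly, tolerates mismatches, and stitches seed hits into full alignments, and the sequences here are invented:

```python
def build_kmer_index(genome, k):
    """Index of all non-overlapping k-mers in the genome.

    Maps each k-mer to the genomic positions where it starts; stepping
    by k (rather than 1) is what keeps the index small enough to fit
    in RAM.
    """
    index = {}
    for pos in range(0, len(genome) - k + 1, k):
        index.setdefault(genome[pos:pos + k], []).append(pos)
    return index

def find_seed_hits(index, query, k):
    """(query_pos, genome_pos) pairs marking regions of the genome
    likely homologous to the query: every overlapping query k-mer is
    looked up in the index."""
    hits = []
    for qpos in range(len(query) - k + 1):
        for gpos in index.get(query[qpos:qpos + k], []):
            hits.append((qpos, gpos))
    return hits

genome = "ACGTACGTTTGCACGT"
index = build_kmer_index(genome, k=4)   # "ACGT" occurs at 0, 4, 12
hits = find_seed_hits(index, "ACGT", k=4)
```

Later stages would align around each seed hit and merge nearby hits into gene-sized alignments.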

Journal ArticleDOI
TL;DR: This paper proposed parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples.
Abstract: SUMMARY: Non-biological experimental variation or “batch effects” are commonly observed across multiple batches of microarray experiments, often rendering the task of combining data from these batches difficult. The ability to combine microarray data sets is advantageous to researchers to increase statistical power to detect biological phenomena from studies where logistical considerations restrict sample size or in studies that require the sequential hybridization of arrays. In general, it is inappropriate to combine data sets without adjusting for batch effects. Methods have been proposed to filter batch effects from data, but these are often complicated and require large batch sizes (>25) to implement. Because the majority of microarray studies are conducted using much smaller sample sizes, existing methods are not sufficient. We propose parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples. We illustrate our methods using two example data sets and show that our methods are justifiable, easy to apply, and useful in practice. Software for our method is freely available at: http://biosun1.harvard.edu/complab/batch/.

6,319 citations
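The batch-adjustment idea can be sketched as a per-batch location/scale standardization. This is a deliberately naive stand-in for the paper's method, which additionally shrinks the gene-wise batch estimates toward a common prior via empirical Bayes (the step that makes it robust for small batches); the data values are made up:

```python
import numpy as np

def naive_batch_adjust(X, batches):
    """Per-gene location/scale standardization within each batch.

    X: genes x samples matrix; batches: one batch label per sample.
    Each batch's gene values are centered and scaled, then mapped back
    onto the gene's overall mean and spread, removing additive and
    multiplicative batch effects.
    """
    X = np.asarray(X, dtype=float)
    batches = np.asarray(batches)
    out = np.empty_like(X)
    grand_mean = X.mean(axis=1, keepdims=True)
    grand_std = X.std(axis=1, keepdims=True)
    for b in np.unique(batches):
        cols = batches == b
        mu = X[:, cols].mean(axis=1, keepdims=True)
        sd = X[:, cols].std(axis=1, keepdims=True)
        sd[sd == 0] = 1.0  # guard against constant genes in a batch
        out[:, cols] = (X[:, cols] - mu) / sd * grand_std + grand_mean
    return out

# One gene measured in two batches; batch 2 is shifted by +5.
X = np.array([[1.0, 2.0, 3.0, 6.0, 7.0, 8.0]])
adj = naive_batch_adjust(X, ["b1", "b1", "b1", "b2", "b2", "b2"])
# After adjustment the two batch means coincide.
```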
