Journal ArticleDOI

A Next Generation Connectivity Map: L1000 Platform and the First 1,000,000 Profiles.

TL;DR: The expanded CMap is reported, made possible by a new, low-cost, high-throughput reduced representation expression profiling method that is shown to be highly reproducible, comparable to RNA sequencing, and suitable for computational inference of the expression levels of 81% of non-measured transcripts.
About: This article was published in Cell on 2017-11-30 and is currently open access. It has received 1,943 citations to date.
Citations
Posted ContentDOI
23 Oct 2020-bioRxiv
TL;DR: Validation experiments in human cell lines showed that 11 of the 16 compounds tested to date had measurable antiviral activity against SARS-CoV-2, an encouraging result as the authors continue to work towards further analysis of these predicted drugs as potential therapeutics for the treatment of COVID-19.
Abstract: The novel SARS-CoV-2 virus emerged in December 2019 and has few effective treatments. We applied a computational drug repositioning pipeline to SARS-CoV-2 differential gene expression signatures derived from publicly available data. We utilized three independent published studies to acquire or generate lists of differentially expressed genes between control and SARS-CoV-2-infected samples. Using a rank-based pattern matching strategy based on the Kolmogorov-Smirnov Statistic, the signatures were queried against drug profiles from Connectivity Map (CMap). We validated sixteen of our top predicted hits in live SARS-CoV-2 antiviral assays in either Calu-3 or 293T-ACE2 cells. Validation experiments in human cell lines showed that 11 of the 16 compounds tested to date (including clofazimine, haloperidol and others) had measurable antiviral activity against SARS-CoV-2. These initial results are encouraging as we continue to work towards a further analysis of these predicted drugs as potential therapeutics for the treatment of COVID-19.
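The rank-based pattern matching the authors describe can be sketched with a simplified, unweighted KS-style enrichment score. This is only an illustration of the idea, not the authors' pipeline: the function names, the tie-free ranking assumption, and the scoring convention (opposite enrichment of up- and down-genes flags a connectivity candidate) are all simplifications.

```python
def ks_enrichment(ranked_genes, gene_set):
    """Running-sum KS-style enrichment of gene_set in a ranked gene list.
    Positive score: the set concentrates near the top of the ranking."""
    hits = set(gene_set)
    n = len(ranked_genes)
    n_hits = sum(1 for g in ranked_genes if g in hits)
    hit_step = 1.0 / n_hits            # step up on a hit
    miss_step = 1.0 / (n - n_hits)     # step down on a miss
    running, best = 0.0, 0.0
    for g in ranked_genes:
        running += hit_step if g in hits else -miss_step
        if abs(running) > abs(best):   # keep the maximum deviation from zero
            best = running
    return best

def connectivity_score(ranked_drug_profile, up_genes, down_genes):
    """Toy connectivity score: up- and down-signatures should be enriched
    at opposite ends of a drug's ranked expression profile."""
    es_up = ks_enrichment(ranked_drug_profile, up_genes)
    es_down = ks_enrichment(ranked_drug_profile, down_genes)
    if (es_up >= 0) == (es_down >= 0):  # same direction: no clear connection
        return 0.0
    return es_up - es_down
```

For drug repositioning against a disease signature, one would look for strongly negative scores, i.e. drugs whose profiles reverse the disease's up/down pattern.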

11 citations

Journal ArticleDOI
TL;DR: A prognostic model was constructed to split EC patients into high-risk and low-risk groups with statistically different survival outcomes, indicating good potential for the prognostic signature in survival surveillance.

11 citations

Journal ArticleDOI
TL;DR: In this paper, a selective HDAC1/2 inhibitor (B390) is shown to have multifaceted therapeutic potential in pancreatic ductal adenocarcinoma (PDAC) by restoring the expression and function of DUSP2.
Abstract: Pancreatic ductal adenocarcinoma (PDAC) is a highly aggressive cancer characterized by early dissemination and poor drug response. Therefore, it is an unmet medical need to develop new strategies for treatment. As aberrant activation of ERK due to KRAS activating mutation is a driving force for PDAC, a brake system that can terminate ERK signaling represents an ideal druggable target. Herein, we demonstrate that forced expression of dual specificity phosphatase-2 (DUSP2), a specific ERK phosphatase, abrogated tumor formation and loss of Dusp2 facilitated Kras-driven PDAC progression. We report that a selective HDAC1/2 inhibitor (B390) has multifaceted therapeutic potential in PDAC by restoring the expression and function of DUSP2. In vitro study showed that treatment with B390 inhibited growth and migration abilities of PDAC cells, decreased extracellular vesicle-associated VEGF-C expression, and suppressed lymphatic endothelial cell proliferation. In vivo, B390 not only suppressed tumor growth by increasing tumor cell death, it also inhibited lymphangiogenesis and lymphovascular invasion. Taken together, our data demonstrate that B390 was able to alleviate loss of DUSP2-mediated pathologic processes, which provides the proof-of-concept evidence to demonstrate the potential of using selective HDAC1/2 inhibitors in PDAC treatment and suggests reinstating DUSP2 expression may be a strategy to subside PDAC progression.

11 citations

Journal ArticleDOI
23 Jun 2022-eLife
TL;DR: Deubiquitinating enzymes (DUBs) are proteases that remove ubiquitin conjugates from proteins, thereby regulating protein turnover; they are involved in a wide range of cellular activities and are emerging therapeutic targets for cancer and other diseases, as discussed by the authors.
Abstract: Deubiquitinating enzymes (DUBs), ~100 of which are found in human cells, are proteases that remove ubiquitin conjugates from proteins, thereby regulating protein turnover. They are involved in a wide range of cellular activities and are emerging therapeutic targets for cancer and other diseases. Drugs targeting USP1 and USP30 are in clinical development for cancer and kidney disease respectively. However, the majority of substrates and pathways regulated by DUBs remain unknown, impeding efforts to prioritize specific enzymes for research and drug development. To assemble a knowledgebase of DUB activities, co-dependent genes, and substrates, we combined targeted experiments using CRISPR libraries and inhibitors with systematic mining of functional genomic databases. Analysis of the Dependency Map, Connectivity Map, Cancer Cell Line Encyclopedia, and multiple protein-protein interaction databases yielded specific hypotheses about DUB function, a subset of which were confirmed in follow-on experiments. The data in this paper are browsable online in a newly developed DUB Portal and promise to improve understanding of DUBs as a family as well as the activities of incompletely characterized DUBs (e.g. USPL1 and USP32) and those already targeted with investigational cancer therapeutics (e.g. USP14, UCHL5, and USP7).
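The Dependency Map mining the authors describe rests on a simple idea: genes whose knockout-dependency profiles correlate across many cell lines tend to act in the same pathway. A minimal sketch, with hypothetical gene names and a toy genes-by-cell-lines matrix standing in for real DepMap data:

```python
import numpy as np

def top_codependencies(dep_matrix, gene_names, query_gene, top_n=3):
    """Rank genes by Pearson correlation of their dependency profiles with
    the query gene's profile across cell lines.
    dep_matrix: rows = genes, columns = cell lines."""
    dep = np.asarray(dep_matrix, dtype=float)
    q = dep[gene_names.index(query_gene)]
    corrs = [np.corrcoef(q, row)[0, 1] for row in dep]
    order = np.argsort(corrs)[::-1]          # most correlated first
    return [(gene_names[i], corrs[i]) for i in order
            if gene_names[i] != query_gene][:top_n]
```

High-ranking partners become hypotheses about shared function, which is the kind of prediction the follow-on experiments in the paper then test.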

11 citations

Posted ContentDOI
24 Mar 2019-bioRxiv
TL;DR: This work analyzed the D–GEX method and determined that the inference can be improved using a logistic sigmoid activation function instead of the hyperbolic tangent, and proposed a novel transformative adaptive activation function that improves the gene expression inference even further and which generalizes several existing adaptive activation functions.
Abstract: Motivation: Gene expression profiling was made cheaper by the NIH LINCS program, which profiles only ~1,000 selected landmark genes and uses them to reconstruct the whole profile. The D–GEX method employs neural networks to infer the whole profile. However, the original D–GEX can be further significantly improved. Results: We have analyzed the D–GEX method and determined that the inference can be improved using a logistic sigmoid activation function instead of the hyperbolic tangent. Moreover, we propose a novel transformative adaptive activation function that improves the gene expression inference even further and which generalizes several existing adaptive activation functions. Our improved neural network achieves an average mean absolute error of 0.1340, a significant improvement over our reimplementation of the original D–GEX, which achieves an average mean absolute error of 0.1637.
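The adaptive-activation idea can be illustrated with a generic parameterized activation. The paper's actual transformative adaptive activation function is not reproduced here; the four-parameter form below is an assumed stand-in that shows the "generalizes existing activations" property, since one parameter setting recovers tanh from the logistic sigmoid via tanh(x) = 2·sigmoid(2x) − 1.

```python
import numpy as np

def logistic(x):
    """Logistic sigmoid, the activation the authors found preferable to tanh."""
    return 1.0 / (1.0 + np.exp(-x))

class AdaptiveActivation:
    """Generic adaptive activation y = alpha * logistic(beta * x + gamma) + delta.
    In an adaptive scheme, alpha/beta/gamma/delta would be learned alongside
    the network weights; here they are fixed for illustration."""
    def __init__(self, alpha=1.0, beta=1.0, gamma=0.0, delta=0.0):
        self.alpha, self.beta, self.gamma, self.delta = alpha, beta, gamma, delta

    def __call__(self, x):
        return self.alpha * logistic(self.beta * x + self.gamma) + self.delta

# With alpha=2, beta=2, gamma=0, delta=-1 this reduces exactly to tanh(x).
```

Because the family contains both the plain sigmoid and tanh as special cases, learning the parameters lets the network interpolate between (and beyond) the two fixed choices compared in the paper.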

11 citations


Cites methods from "A Next Generation Connectivity Map:..."

  • ...In order to economically facilitate such experiments, the LINCS program resulted in the development of the L1000 Luminex bead technology that measures the expression profile of ∼1,000 selected landmark genes and then reconstructs the full gene profile of ∼10,000 target genes [5]....

References
Journal ArticleDOI
TL;DR: The Gene Set Enrichment Analysis (GSEA) method as discussed by the authors focuses on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation.
Abstract: Although genomewide RNA expression analysis has become a routine tool in biomedical research, extracting biological insight from such information remains a major challenge. Here, we describe a powerful analytical method called Gene Set Enrichment Analysis (GSEA) for interpreting gene expression data. The method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation. We demonstrate how GSEA yields insights into several cancer-related data sets, including leukemia and lung cancer. Notably, where single-gene analysis finds little similarity between two independent studies of patient survival in lung cancer, GSEA reveals many biological pathways in common. The GSEA method is embodied in a freely available software package, together with an initial database of 1,325 biologically defined gene sets.
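The core statistic of GSEA can be sketched as a weighted running sum: walking down the ranked gene list, hits in the gene set push the sum up in proportion to the magnitude of each gene's correlation with the phenotype, and misses push it down uniformly; the enrichment score is the maximum deviation. This is a simplification of the published method (no permutation-based significance testing or normalization across sets):

```python
import numpy as np

def gsea_es(ranked_genes, correlations, gene_set, p=1.0):
    """Weighted running-sum enrichment score.
    ranked_genes: genes ordered by correlation with the phenotype.
    correlations: the corresponding correlation values (same order).
    p: weighting exponent applied to |correlation|."""
    hits = np.array([g in gene_set for g in ranked_genes])
    weights = np.abs(np.asarray(correlations, dtype=float)) ** p
    hit_sum = weights[hits].sum()       # normalizer for hit steps
    n_miss = int((~hits).sum())         # misses step down uniformly
    running, es = 0.0, 0.0
    for is_hit, w in zip(hits, weights):
        running += w / hit_sum if is_hit else -1.0 / n_miss
        if abs(running) > abs(es):      # maximum deviation from zero
            es = running
    return es
```

A set concentrated at the top of the ranking scores near +1; one concentrated at the bottom scores near −1.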

34,830 citations

Journal Article
TL;DR: A new technique called t-SNE that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map, a variation of Stochastic Neighbor Embedding that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
Abstract: We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets.
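The high-dimensional side of (t-)SNE can be sketched in a few lines: Gaussian conditional probabilities p(j|i) over neighbors, symmetrized into joint probabilities. This is a deliberately simplified sketch with a single global bandwidth; the real method chooses a per-point bandwidth by a perplexity binary search, and t-SNE's distinctive step, a Student-t kernel in the low-dimensional map, is not shown here.

```python
import numpy as np

def conditional_probs(X, sigma=1.0):
    """Gaussian neighborhood probabilities p(j|i) in the input space,
    with p(i|i) = 0 and each row summing to 1."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    logits = -sq_dists / (2.0 * sigma ** 2)
    np.fill_diagonal(logits, -np.inf)   # a point is not its own neighbor
    expl = np.exp(logits - logits.max(axis=1, keepdims=True))
    return expl / expl.sum(axis=1, keepdims=True)

def joint_probs(X, sigma=1.0):
    """Symmetrized joint probabilities p_ij = (p(j|i) + p(i|j)) / 2n,
    which sum to 1 over all pairs."""
    P = conditional_probs(X, sigma)
    n = X.shape[0]
    return (P + P.T) / (2.0 * n)
```

The optimization then places map points so that a matching low-dimensional distribution minimizes the KL divergence to these joint probabilities, with the heavy-tailed Student-t kernel relieving the crowding problem the abstract mentions.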

30,124 citations

Journal ArticleDOI
TL;DR: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data; it provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments.
Abstract: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data. GEO provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments. GEO is not intended to replace in-house gene expression databases that benefit from coherent data sets, and which are constructed to facilitate a particular analytic method, but rather complement these by acting as a tertiary, central data distribution hub. The three central data entities of GEO are platforms, samples and series, and were designed with gene expression and genomic hybridization experiments in mind. A platform is, essentially, a list of probes that define what set of molecules may be detected. A sample describes the set of molecules that are being probed and references a single platform used to generate its molecular abundance data. A series organizes samples into the meaningful data sets which make up an experiment. The GEO repository is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo.
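The platform/sample/series data model described in the abstract maps naturally onto three linked record types. A minimal sketch (the accession strings below are made-up placeholders, not real GEO records):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Platform:
    accession: str                # e.g. a "GPL..." identifier (placeholder)
    probes: List[str]             # what set of molecules may be detected

@dataclass
class Sample:
    accession: str                # e.g. a "GSM..." identifier (placeholder)
    platform: Platform            # each sample references exactly one platform
    abundance: Dict[str, float]   # probe -> measured molecular abundance

@dataclass
class Series:
    accession: str                # e.g. a "GSE..." identifier (placeholder)
    samples: List[Sample] = field(default_factory=list)  # one experiment
```

The one-platform-per-sample link is what makes heterogeneous submissions tractable: a series can mix samples, but every measurement is always interpretable against its own probe list.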

10,968 citations

Journal ArticleDOI
TL;DR: This paper describes how BLAT was optimized; BLAT is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences.
Abstract: Analyzing vertebrate genomes requires rapid mRNA/DNA and cross-species protein alignments. A new tool, BLAT, is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences. BLAT's speed stems from an index of all nonoverlapping K-mers in the genome. This index fits inside the RAM of inexpensive computers, and need only be computed once for each genome assembly. BLAT has several major stages. It uses the index to find regions in the genome likely to be homologous to the query sequence. It performs an alignment between homologous regions. It stitches together these aligned regions (often exons) into larger alignments (typically genes). Finally, BLAT revisits small internal exons possibly missed at the first stage and adjusts large gap boundaries that have canonical splice sites where feasible. This paper describes how BLAT was optimized. Effects on speed and sensitivity are explored for various K-mer sizes, mismatch schemes, and number of required index matches. BLAT is compared with other alignment programs on various test sets and then used in several genome-wide applications. http://genome.ucsc.edu hosts a web-based BLAT server for the human genome.
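BLAT's first stage, the nonoverlapping k-mer index and the search for candidate homologous regions, can be sketched in a toy form. The `min_matches` threshold and the diagonal-bucketing trick are illustrative simplifications of the "number of required index matches" parameter the abstract mentions, not BLAT's exact heuristics:

```python
from collections import defaultdict

def build_kmer_index(genome, k=11):
    """Index of nonoverlapping k-mers -> genomic positions.
    Stepping by k (not 1) is what keeps the index small enough for RAM."""
    index = defaultdict(list)
    for pos in range(0, len(genome) - k + 1, k):
        index[genome[pos:pos + k]].append(pos)
    return index

def find_candidate_regions(index, query, k=11, min_matches=2):
    """Slide every overlapping k-mer of the query over the index and count
    hits per diagonal (genome position minus query offset); diagonals with
    enough hits become candidate regions for real alignment."""
    hits = defaultdict(int)
    for i in range(len(query) - k + 1):
        for pos in index.get(query[i:i + k], []):
            hits[pos - i] += 1
    return [diag for diag, n in hits.items() if n >= min_matches]
```

Because the genome is indexed nonoverlappingly but the query is scanned overlappingly, any sufficiently long exact match is guaranteed to produce index hits, which is why the cheap first stage rarely misses true homologous regions.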

8,326 citations

Journal ArticleDOI
TL;DR: This paper proposes parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples.
Abstract: SUMMARY Non-biological experimental variation or "batch effects" are commonly observed across multiple batches of microarray experiments, often rendering the task of combining data from these batches difficult. The ability to combine microarray data sets is advantageous to researchers to increase statistical power to detect biological phenomena from studies where logistical considerations restrict sample size or in studies that require the sequential hybridization of arrays. In general, it is inappropriate to combine data sets without adjusting for batch effects. Methods have been proposed to filter batch effects from data, but these are often complicated and require large batch sizes (>25) to implement. Because the majority of microarray studies are conducted using much smaller sample sizes, existing methods are not sufficient. We propose parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples. We illustrate our methods using two example data sets and show that our methods are justifiable, easy to apply, and useful in practice. Software for our method is freely available at: http://biosun1.harvard.edu/complab/batch/.
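The location/scale idea underlying this kind of batch adjustment can be sketched without the empirical Bayes machinery: standardize each gene within its batch, then restore the pooled mean and scale. Note that the shrinkage of per-batch estimates, which is the paper's actual contribution and what makes the method work for small batches, is deliberately omitted here:

```python
import numpy as np

def simple_batch_adjust(X, batches):
    """Naive location/scale batch adjustment (no empirical Bayes shrinkage).
    X: genes x samples matrix; batches: one batch label per sample column."""
    X = np.asarray(X, dtype=float)
    out = np.empty_like(X)
    grand_mean = X.mean(axis=1, keepdims=True)   # pooled per-gene mean
    grand_sd = X.std(axis=1, keepdims=True)      # pooled per-gene scale
    for b in set(batches):
        cols = [i for i, lbl in enumerate(batches) if lbl == b]
        mu = X[:, cols].mean(axis=1, keepdims=True)
        sd = X[:, cols].std(axis=1, keepdims=True)
        sd[sd == 0] = 1.0                        # guard constant genes
        # remove the batch's own location/scale, restore the pooled one
        out[:, cols] = (X[:, cols] - mu) / sd * grand_sd + grand_mean
    return out
```

With batches of only a handful of samples, the per-batch `mu` and `sd` above are noisy; shrinking them toward pooled estimates, as the paper's empirical Bayes frameworks do, is what makes the adjustment robust to outliers at small sample sizes.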

6,319 citations
