Journal Article

A Next Generation Connectivity Map: L1000 Platform and the First 1,000,000 Profiles.

TL;DR: The expanded CMap is reported, made possible by a new, low-cost, high-throughput reduced representation expression profiling method that is shown to be highly reproducible, comparable to RNA sequencing, and suitable for computational inference of the expression levels of 81% of non-measured transcripts.
About: This article was published in Cell on 2017-11-30 and is currently open access. It has received 1,943 citations to date.
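
For intuition only, below is a toy sketch of the reduced-representation idea in the TL;DR: measure a small set of landmark genes and computationally infer a non-measured transcript from them. The data are random placeholders and a plain ridge regression stands in for CMap's actual inference model, so nothing here reflects the paper's implementation.

```python
# Toy illustration: infer one "non-measured" gene from 978 measured landmark genes.
# Random data and ridge regression are stand-ins, not the CMap inference model.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_samples, n_landmarks = 2000, 978
landmarks = rng.normal(size=(n_samples, n_landmarks))            # stand-in for landmark expression
true_weights = rng.normal(size=n_landmarks) * 0.05
target_gene = landmarks @ true_weights + rng.normal(size=n_samples) * 0.1  # a non-measured gene

model = Ridge(alpha=1.0).fit(landmarks[:1500], target_gene[:1500])
print(round(model.score(landmarks[1500:], target_gene[1500:]), 3))  # held-out R^2
```
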
Citations
Book Chapter
TL;DR: This chapter presents an overview of artificial intelligence and its derivatives, giving a historical perspective, a succinct technical explanation of the underlying basis, and some examples of its applications.

3 citations

Journal Article
TL;DR: In this paper, a deep learning two-step model that transforms L1000 profiles into RNA-seq-like profiles is presented, extending the 978 measured landmark genes to the full genome space.
Abstract: The L1000 technology, a cost-effective high-throughput transcriptomics technology, has been applied to profile a collection of human cell lines for their gene expression response to >30,000 chemical and genetic perturbations. In total, there are currently over 3 million available L1000 profiles. Such a dataset is invaluable for the discovery of drug and target candidates and for inferring mechanisms of action for small molecules. The L1000 assay measures the mRNA expression of only 978 landmark genes, while 11,350 additional genes are reliably inferred computationally. The lack of full-genome coverage limits knowledge discovery for half of the human protein-coding genes, as well as the potential for integration with other transcriptomics profiling data. Here we present a deep learning two-step model that transforms L1000 profiles into RNA-seq-like profiles. The input to the model is the 978 measured landmark genes, and the output is a vector of 23,614 RNA-seq-like gene expression values. The model first transforms the landmark genes into RNA-seq-like 978-gene profiles using a modified CycleGAN model applied to unpaired data. The transformed 978 RNA-seq-like landmark genes are then extrapolated into the full genome space with a fully connected neural network model. The two-step model achieves a Pearson's correlation coefficient of 0.914 and a root mean square error of 1.167 when tested on a published paired L1000/RNA-seq dataset produced by the LINCS and GTEx programs. The processed RNA-seq-like profiles are made available for download, signature search, and gene-centric reverse search with unique case studies.

3 citations
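
The sketch below only illustrates the shape of the two-step pipeline described in the abstract above: one network maps 978 L1000 landmark values to 978 RNA-seq-like values, and a second fully connected network extrapolates them to 23,614 genes. The hidden-layer widths and the plain MLP generator are my assumptions; the paper's first step is a modified CycleGAN trained on unpaired data, whose adversarial training loop is not shown here.

```python
# Shape-only sketch of the two-step L1000 -> RNA-seq-like transformation (untrained).
import torch
import torch.nn as nn

N_LANDMARK = 978
N_FULL = 23614  # RNA-seq-like output genes reported in the abstract

step1_generator = nn.Sequential(        # L1000 landmarks -> RNA-seq-like landmarks
    nn.Linear(N_LANDMARK, 2048), nn.ReLU(),   # hidden width is an assumption
    nn.Linear(2048, N_LANDMARK),
)

step2_extrapolator = nn.Sequential(     # RNA-seq-like landmarks -> full genome space
    nn.Linear(N_LANDMARK, 4096), nn.ReLU(),   # hidden width is an assumption
    nn.Linear(4096, N_FULL),
)

def l1000_to_rnaseq(landmark_profiles: torch.Tensor) -> torch.Tensor:
    """landmark_profiles: (n_samples, 978) -> (n_samples, 23614) RNA-seq-like values."""
    rnaseq_like_landmarks = step1_generator(landmark_profiles)
    return step2_extrapolator(rnaseq_like_landmarks)

fake_batch = torch.randn(4, N_LANDMARK)           # random stand-in for L1000 profiles
print(l1000_to_rnaseq(fake_batch).shape)          # torch.Size([4, 23614])
```
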

Posted Content
13 Dec 2019 - bioRxiv
TL;DR: The largest immunome (transcriptome profiles of 40 different immune cells) is curated and integrated with disease gene networks and a drug-gene database to generate a Disease-gene IMmune cell Expression network (DIME), which allows users to explore disease-immune-cell associations and disease-drug networks to pave the way for future (pre-)clinical research.
Abstract: The immune system is crucial for the development and progression of immune-mediated and non-immune-mediated complex diseases. Studies have shown that multiple complex diseases are associated with several immunologically relevant genes. Despite such growing evidence, the effect of disease-associated genes on immune functions has not been well explored. Here, we curated the largest immunome (transcriptome profiles of 40 different immune cells) and integrated it with disease gene networks and a drug-gene database to generate a Disease-gene IMmune cell Expression network (DIME). We used the DIME network to: (1) study 13,510 genes and identify disease-associated genes and immune cells for >15,000 complex diseases; (2) study pleiotropy between various phenotypically distinct rheumatic and other non-rheumatic diseases; and (3) identify novel targets for drug repurposing and discovery. We implemented DIME as a tool (https://bitbucket.org/systemsimmunology/dime) that allows users to explore disease-immune-cell associations and disease-drug networks to pave the way for future (pre-)clinical research.

3 citations
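
As a rough illustration of the kind of disease-gene-immune-cell network DIME describes, the toy sketch below links a disease to its associated genes and those genes to the immune cells that express them, then ranks cells for the disease. The disease, gene, and cell names, edge weights, and ranking rule are all made up for the example; this is not the DIME tool or its scoring.

```python
# Toy disease-gene-immune-cell network; all nodes, edges, and the ranking rule are illustrative.
import networkx as nx

G = nx.Graph()
# disease -> associated genes (e.g. from disease gene networks)
G.add_edge("rheumatoid arthritis", "TNF", layer="disease-gene")
G.add_edge("rheumatoid arthritis", "IL6", layer="disease-gene")
# gene -> immune cells expressing it (e.g. from a curated immunome)
G.add_edge("TNF", "CD4+ T cell", layer="gene-cell", expression=8.2)
G.add_edge("TNF", "monocyte", layer="gene-cell", expression=7.5)
G.add_edge("IL6", "monocyte", layer="gene-cell", expression=9.1)

def top_cells(disease):
    """Rank immune cells by how many of the disease's associated genes they express."""
    genes = [n for n in G.neighbors(disease) if G[disease][n]["layer"] == "disease-gene"]
    counts = {}
    for g in genes:
        for c in G.neighbors(g):
            if G[g][c].get("layer") == "gene-cell":
                counts[c] = counts.get(c, 0) + 1
    return sorted(counts.items(), key=lambda kv: -kv[1])

print(top_cells("rheumatoid arthritis"))   # [('monocyte', 2), ('CD4+ T cell', 1)]
```
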

Journal Article
TL;DR: In this article, the authors explored transcriptional BC alterations, including subtype-stratified analyses, to gain a better understanding of age-related tumour biology. They found evidence of higher tumour cell proliferation in young BC patients, also when adjusting for molecular subtypes, and identified a novel age-based six-gene signature pointing to aggressive tumour features, tumour proliferation, and reduced survival, also in patient subsets with an expected good prognosis.
Abstract: Breast cancer (BC) diagnosed at ages <40 years presents with more aggressive tumour phenotypes and poorer clinical outcome compared to older BC patients. Here, we explored transcriptional BC alterations, including subtype-stratified analyses, to gain a better understanding of age-related tumour biology. We studied publicly available global BC mRNA expression (n = 3999) and proteomics data (n = 113), exploring differentially expressed genes, enriched gene sets, and gene networks in the young compared to older patients. We identified transcriptional patterns reflecting increased proliferation and oncogenic signalling in BC of the young, also in subtype-stratified analyses. Six up-regulated hub genes built a novel age-related score, significantly associated with aggressive clinicopathologic features. A high 6 Gene Proliferation Score (6GPS) demonstrated independent prognostic value when adjusted for traditional clinicopathologic variables and the molecular subtypes. The 6GPS was also significantly associated with disease-specific survival within the luminal, lymph-node-negative, and Oncotype DX intermediate subsets. We here demonstrate evidence of higher tumour cell proliferation in young BC patients, also when adjusting for molecular subtypes, and identify a novel age-based six-gene signature pointing to aggressive tumour features, tumour proliferation, and reduced survival, also in patient subsets with an expected good prognosis.

3 citations
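
The sketch below only illustrates the common way a multi-gene signature score like the 6GPS above can be computed: average the standardized expression of the signature genes per sample and split samples at the median. The six hub genes are not listed in the abstract, so placeholder gene names and random data are used; the authors' exact scoring and survival analysis are not reproduced.

```python
# Generic signature-score sketch with placeholder genes and random data (not the published 6GPS).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
signature = ["GENE1", "GENE2", "GENE3", "GENE4", "GENE5", "GENE6"]   # placeholders
expr = pd.DataFrame(rng.normal(size=(100, 6)), columns=signature)    # samples x genes

z = (expr - expr.mean()) / expr.std()     # standardize each gene
score = z.mean(axis=1)                    # per-sample signature score
high_group = score > score.median()       # e.g. "score-high" vs "score-low" strata
print(high_group.value_counts())
```
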

Journal Article
TL;DR: Wang et al. propose ScaffComb, a framework that bridges the gaps in the virtual screening of drug combinations in large-scale databases by integrating phenotypic information into molecular scaffolds, which can then be used to screen drug libraries and identify potent drug combinations.
Abstract: Combination therapy has long been used in cancer treatment to overcome the drug resistance associated with monotherapy. Growing pharmacological data and the rapid development of deep learning methods have enabled the construction of models to predict and screen drug pairs. However, the size of screened drug libraries has been restricted to hundreds or thousands of compounds. The ScaffComb framework, which aims to bridge the gaps in the virtual screening of drug combinations in large-scale databases, is proposed here. Inspired by phenotype-based drug design, ScaffComb integrates phenotypic information into molecular scaffolds, which can be used to screen the drug library and identify potent drug combinations. First, ScaffComb is validated using the US Food and Drug Administration dataset, and known drug combinations are successfully reidentified. Then, ScaffComb is applied to screen the ZINC and ChEMBL databases, which yields novel drug combinations and reveals an ability to discover new synergistic mechanisms. To our knowledge, ScaffComb is the first method to use phenotype-based virtual screening of drug combinations in large-scale chemical datasets.

3 citations
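
Scaffold-centric screening of the kind described above starts from molecular scaffolds extracted from a compound library. The sketch below only shows that preprocessing step, extracting Bemis-Murcko scaffolds with RDKit from a couple of example SMILES; ScaffComb's phenotype integration and combination screening are not reproduced here, and the compounds are arbitrary examples.

```python
# Extract Bemis-Murcko scaffolds from example compounds with RDKit (preprocessing only).
from rdkit import Chem
from rdkit.Chem.Scaffolds import MurckoScaffold

library = {
    "aspirin":   "CC(=O)Oc1ccccc1C(=O)O",
    "ibuprofen": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
}
for name, smiles in library.items():
    mol = Chem.MolFromSmiles(smiles)
    scaffold = MurckoScaffold.GetScaffoldForMol(mol)   # Bemis-Murcko scaffold
    print(name, "->", Chem.MolToSmiles(scaffold))
```
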

References
Journal Article
TL;DR: The Gene Set Enrichment Analysis (GSEA) method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation.
Abstract: Although genomewide RNA expression analysis has become a routine tool in biomedical research, extracting biological insight from such information remains a major challenge. Here, we describe a powerful analytical method called Gene Set Enrichment Analysis (GSEA) for interpreting gene expression data. The method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation. We demonstrate how GSEA yields insights into several cancer-related data sets, including leukemia and lung cancer. Notably, where single-gene analysis finds little similarity between two independent studies of patient survival in lung cancer, GSEA reveals many biological pathways in common. The GSEA method is embodied in a freely available software package, together with an initial database of 1,325 biologically defined gene sets.

34,830 citations
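
To make the GSEA idea concrete, the sketch below computes the core weighted running-sum (Kolmogorov-Smirnov-like) enrichment score over a ranked gene list. It omits the permutation-based significance testing and normalization that the published method uses, and the tiny gene list is made up for illustration.

```python
# Minimal GSEA-style enrichment score: signed maximum deviation of a weighted running sum.
import numpy as np

def enrichment_score(ranked_genes, ranked_scores, gene_set, p=1.0):
    """ranked_genes: genes sorted by correlation with phenotype; ranked_scores aligned to them."""
    in_set = np.array([g in gene_set for g in ranked_genes])
    weights = np.abs(np.asarray(ranked_scores, dtype=float)) ** p
    hit = np.where(in_set, weights, 0.0)
    hit = hit / hit.sum()                                   # P_hit increments
    miss = np.where(~in_set, 1.0 / (~in_set).sum(), 0.0)    # P_miss increments
    running = np.cumsum(hit - miss)
    return running[np.argmax(np.abs(running))]              # signed maximum deviation

genes  = ["A", "B", "C", "D", "E", "F"]                     # toy ranked list
scores = [2.0, 1.5, 1.0, -0.5, -1.0, -2.0]
print(enrichment_score(genes, scores, {"A", "B"}))          # set concentrated at the top -> positive ES
```
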

Journal Article
TL;DR: A new technique called t-SNE visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map. It is a variation of Stochastic Neighbor Embedding that is much easier to optimize and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
Abstract: We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets.

30,124 citations
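
For readers who want to try the technique on expression-like data, the sketch below embeds random high-dimensional samples into 2-D with scikit-learn's t-SNE implementation. The data and parameter values are illustrative, not tuned, and this is not the authors' original implementation.

```python
# Quick t-SNE usage sketch on random stand-in data (scikit-learn implementation).
import numpy as np
from sklearn.manifold import TSNE

X = np.random.default_rng(0).normal(size=(500, 978))   # e.g. 500 profiles x 978 genes
emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(X)
print(emb.shape)                                        # (500, 2), ready for a scatter plot
```
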

Journal Article
TL;DR: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data, and it provides a flexible and open design that facilitates submission, storage, and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments.
Abstract: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data. GEO provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments. GEO is not intended to replace in-house gene expression databases that benefit from coherent data sets, and which are constructed to facilitate a particular analytic method, but rather to complement these by acting as a tertiary, central data distribution hub. The three central data entities of GEO are platforms, samples and series, and they were designed with gene expression and genomic hybridization experiments in mind. A platform is, essentially, a list of probes that defines what set of molecules may be detected. A sample describes the set of molecules that are being probed and references a single platform used to generate its molecular abundance data. A series organizes samples into the meaningful data sets which make up an experiment. The GEO repository is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo.

10,968 citations
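
The platform/sample/series hierarchy described in the abstract can be navigated programmatically. The sketch below uses the third-party GEOparse package (not part of GEO itself); the accession is just an example series, and the fields printed are a small subset of the available metadata.

```python
# Walk GEO's series (GSE) / sample (GSM) / platform (GPL) hierarchy with GEOparse.
import GEOparse

gse = GEOparse.get_GEO(geo="GSE1563", destdir="/tmp")     # a series groups related samples
print(gse.metadata.get("title"))
for gsm_name, gsm in list(gse.gsms.items())[:3]:          # samples hold measured abundances
    print(gsm_name, gsm.metadata.get("platform_id"))      # each references one platform
for gpl_name, gpl in gse.gpls.items():                    # platform = list of probes
    print(gpl_name, gpl.metadata.get("title"))
```
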

Journal Article
TL;DR: This paper describes how BLAT was optimized; BLAT is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences.
Abstract: Analyzing vertebrate genomes requires rapid mRNA/DNA and cross-species protein alignments. A new tool, BLAT, is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences. BLAT's speed stems from an index of all nonoverlapping K-mers in the genome. This index fits inside the RAM of inexpensive computers, and need only be computed once for each genome assembly. BLAT has several major stages. It uses the index to find regions in the genome likely to be homologous to the query sequence. It performs an alignment between homologous regions. It stitches together these aligned regions (often exons) into larger alignments (typically genes). Finally, BLAT revisits small internal exons possibly missed at the first stage and adjusts large gap boundaries that have canonical splice sites where feasible. This paper describes how BLAT was optimized. Effects on speed and sensitivity are explored for various K-mer sizes, mismatch schemes, and number of required index matches. BLAT is compared with other alignment programs on various test sets and then used in several genome-wide applications. http://genome.ucsc.edu hosts a web-based BLAT server for the human genome.

8,326 citations
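
The toy sketch below shows only BLAT's core indexing idea from the abstract above: index the non-overlapping K-mers of the target once, then look up overlapping K-mers of a query to find candidate homologous regions. Real BLAT adds the alignment, stitching, and splice-site handling stages; the sequences and K-mer size here are illustrative.

```python
# Toy non-overlapping K-mer index and seed lookup (indexing idea only, not full BLAT).
from collections import defaultdict

def build_index(genome, k=11):
    index = defaultdict(list)
    for pos in range(0, len(genome) - k + 1, k):     # non-overlapping K-mers of the target
        index[genome[pos:pos + k]].append(pos)
    return index

def seed_hits(query, index, k=11):
    hits = []
    for qpos in range(len(query) - k + 1):           # overlapping K-mers of the query
        for tpos in index.get(query[qpos:qpos + k], []):
            hits.append((qpos, tpos))                # (query offset, target offset) seed
    return hits

genome = "ACGT" * 50 + "TTGACCATGGAC" + "ACGT" * 50  # toy "genome" with an embedded motif
print(seed_hits("TTGACCATGGAC", build_index(genome, k=6), k=6))   # e.g. [(4, 204)]
```
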

Journal Article
TL;DR: This paper proposes parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples.
Abstract: Non-biological experimental variation or "batch effects" are commonly observed across multiple batches of microarray experiments, often rendering the task of combining data from these batches difficult. The ability to combine microarray data sets is advantageous to researchers to increase statistical power to detect biological phenomena from studies where logistical considerations restrict sample size or in studies that require the sequential hybridization of arrays. In general, it is inappropriate to combine data sets without adjusting for batch effects. Methods have been proposed to filter batch effects from data, but these are often complicated and require large batch sizes (>25) to implement. Because the majority of microarray studies are conducted using much smaller sample sizes, existing methods are not sufficient. We propose parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples. We illustrate our methods using two example data sets and show that our methods are justifiable, easy to apply, and useful in practice. Software for our method is freely available at: http://biosun1.harvard.edu/complab/batch/.

6,319 citations
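
To show the location/scale intuition behind batch adjustment, the sketch below standardizes each gene within each batch and restores the overall gene mean and scale. This is a deliberately simplified stand-in: ComBat additionally shrinks the per-batch location and scale estimates with empirical Bayes, which is omitted here, and the data are random placeholders.

```python
# Naive per-batch location/scale adjustment (no empirical Bayes shrinkage; not ComBat itself).
import numpy as np
import pandas as pd

def naive_batch_adjust(expr: pd.DataFrame, batch: pd.Series) -> pd.DataFrame:
    """expr: samples x genes; batch: per-sample batch labels (same index as expr)."""
    grand_mean, grand_std = expr.mean(), expr.std()
    adjusted = expr.copy()
    for b in batch.unique():
        rows = batch == b
        z = (expr.loc[rows] - expr.loc[rows].mean()) / expr.loc[rows].std()
        adjusted.loc[rows] = z * grand_std + grand_mean    # restore overall gene mean/scale
    return adjusted

rng = np.random.default_rng(0)
expr = pd.DataFrame(rng.normal(size=(8, 3)), columns=["g1", "g2", "g3"])
expr.iloc[4:] += 2.0                                       # simulate an additive batch shift
print(naive_batch_adjust(expr, pd.Series(["A"] * 4 + ["B"] * 4)))
```
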
