Journal ArticleDOI

A Next Generation Connectivity Map: L1000 Platform and the First 1,000,000 Profiles.

TL;DR: The expanded CMap is reported, made possible by a new, low-cost, high-throughput reduced representation expression profiling method that is shown to be highly reproducible, comparable to RNA sequencing, and suitable for computational inference of the expression levels of 81% of non-measured transcripts.
About: This article was published in Cell on 2017-11-30 and is currently open access. It has received 1,943 citations to date.
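The central methodological claim is that a small set of directly measured landmark genes is enough to infer most of the remaining transcriptome. The sketch below illustrates that idea with a plain linear-regression model trained on synthetic stand-in data; the variable names and the data are illustrative assumptions, not the CMap inference pipeline itself.

```python
# Minimal sketch of the idea behind L1000-style inference: a linear model
# trained on reference profiles predicts a non-measured transcript from the
# ~978 landmark genes. Synthetic data stands in for a real reference compendium.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_reference_profiles, n_landmarks = 5000, 978

# Stand-ins for a reference compendium: landmark expression and one target gene.
landmark_expr = rng.normal(size=(n_reference_profiles, n_landmarks))
true_weights = rng.normal(size=n_landmarks)
target_gene_expr = landmark_expr @ true_weights + rng.normal(scale=0.5, size=n_reference_profiles)

model = LinearRegression().fit(landmark_expr, target_gene_expr)

# Given a new L1000 profile (landmarks only), infer the non-measured gene.
new_profile = rng.normal(size=(1, n_landmarks))
inferred = model.predict(new_profile)
print(f"inferred expression: {inferred[0]:.2f}")
```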
Citations
Journal ArticleDOI
TL;DR: In this paper, the authors outline their thinking, culminating in activity proposals in seven distinct but interacting scientific areas, i.e., development of additional adverse outcome pathways and AOP networks (AOPs), advanced cell culture models including organ-on-a-chip (OoC), toxicokinetic assessment with a focus on physiologically based kinetic (PBK) modelling, exposome, human susceptibility, data integration, and new concepts in human risk assessment.
Abstract: While whole-animal studies have their place in risk assessment of food and feed components, it is thought that more modern approaches such as human-focused new approach methodologies (NAMs) would bring advantages, including a greater focus on the human species, a focus on molecular mechanism and kinetics, and the possibility of addressing susceptible populations. This report outlines the thinking of the authors and culminates in activity proposals in seven distinct but interacting scientific areas, i.e., development of additional adverse outcome pathways and AOP networks (AOPs), advanced cell culture models including organ-on-a-chip (OoC), toxicokinetic assessment with a focus on physiologically based kinetic (PBK) modelling, exposome, human susceptibility, data integration, and new concepts in human risk assessment. Furthermore, the development of a Forum is proposed to facilitate the implementation of new approaches and concepts in risk assessment. The report was compiled by the project team, renowned experts in the various areas, and recommendations were discussed with EFSA and further refined following consultation with external experts via a dedicated workshop. The authors are convinced that if the recommendations are taken up, there will be a significant impact in the field, resulting in increased uptake and utilisation of these emerging technologies by all stakeholders involved.

17 citations

Journal ArticleDOI
TL;DR: It is found that miR-205 and the gastric cancer progression-related (GCPR) module play critical roles in GC progression, and that Sirolimus can suppress proliferation of gastric cancer cells in vitro.
Abstract: Gastric cancer (GC) has high morbidity and mortality rates worldwide. Abundant literature has reported several individual genes and their related pathways intimately involved in tumor progression. However, little is known about GC progression at the gene network level. Therefore, understanding the underlying mechanisms of the pathological transition from early stage to late stage is urgently needed. This study aims to identify potential vital genes and modules involved in the progression of GC. To understand the gene regulatory network of GC progression, we analyzed microRNA and messenger RNA expression profiles using several bioinformatics tools. miR-205 was identified by differential expression analysis and was further confirmed using multiple kernel learning-based Kronecker regularized least squares. Using weighted gene co-expression network analysis, the gastric cancer progression-related (GCPR) module, which has the highest correlation with cancer progression, was obtained. Kyoto Encyclopedia of Genes and Genomes pathways and biological processes of the GCPR module genes were related to cell adhesion. Meanwhile, many genes of the GCPR module were found to be targeted by miR-205, including the two hub genes SORBS1 and LPAR1. In brief, through multiple analytical methods, we found that miR-205 and the GCPR module play critical roles in GC progression. In addition, miR-205 might maintain cell adhesion by regulating SORBS1 and LPAR1. To screen potential drug candidates, the gene expression profile of the GCPR module was mapped to the Connectivity Map (CMap), and the mTOR inhibitor Sirolimus was found to be the most promising candidate. We further confirmed that Sirolimus can suppress proliferation of GC cells in vitro.
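The final step of this pipeline, mapping a disease module's expression signature against CMap to find compounds that reverse it, can be sketched with a toy scoring function. This is a simplified illustration of the general connectivity idea, not the actual CMap query algorithm; the gene names and z-scores below are made up.

```python
# Illustrative sketch of a connectivity-style query: score how strongly a drug
# signature reverses a disease signature. This is a simplified rank-free score,
# not the exact CMap algorithm; gene symbols and values are hypothetical.
import pandas as pd

# Drug signature: differential expression (e.g., z-scores) per gene after treatment.
drug_signature = pd.Series(
    {"SORBS1": -1.8, "LPAR1": -2.1, "MKI67": 0.4, "CDH1": 1.2, "MYC": -0.9, "TP53": 0.3}
)

# Disease (module) signature: genes up- and down-regulated during GC progression.
up_in_disease = ["SORBS1", "LPAR1", "MYC"]
down_in_disease = ["CDH1", "TP53"]

def reversal_score(drug_sig, up_genes, down_genes):
    """Negative when the drug pushes disease-up genes down and disease-down genes up."""
    up_effect = drug_sig.reindex(up_genes).mean()
    down_effect = drug_sig.reindex(down_genes).mean()
    return up_effect - down_effect  # more negative = stronger reversal

print(f"reversal score: {reversal_score(drug_signature, up_in_disease, down_in_disease):.2f}")
```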

17 citations

Journal ArticleDOI
TL;DR: The DeCoST framework can discriminate between FDA-approved drugs and rejected/withdrawn drugs, which is the foundation for applying DeCoST to recommend potential new treatments, and it uses a linear-quadratic regulator control technique to assess the therapeutic effect of a drug in disease-specific treatment.
Abstract: In this paper, we propose the DeCoST (Drug Repurposing from Control System Theory) framework, which applies the control-system paradigm for drug repurposing. Drug repurposing has become one of the most active areas in pharmacology over the last decade. Compared to traditional drug development, drug repurposing may provide more systematic and significantly less expensive approaches to discovering new treatments for complex diseases. Although drug repurposing techniques have rapidly evolved from 'one: disease-gene-drug' to 'multi: gene, drug' and from 'lazy guilt-by-association' to 'systematic model-based pattern matching', the mathematical systems-and-control paradigm has not been widely applied to model the systems biology connectivity among drugs, genes, and diseases. In this paradigm, our DeCoST framework, which is among the earliest approaches to drug repurposing with the control theory paradigm, applies biological and pharmaceutical knowledge to quantify rich connective data sources among drugs, genes, and diseases to construct a disease-specific mathematical model. We use the linear-quadratic regulator control technique to assess the therapeutic effect of a drug in disease-specific treatment. The DeCoST framework can discriminate between FDA-approved drugs and rejected/withdrawn drugs, which is the foundation for applying DeCoST to recommend potential new treatments. Applying DeCoST to Breast Cancer and Bladder Cancer, we reprofiled 8 promising candidate drugs for Breast Cancer ER+ (Erbitux, Flutamide, etc.), 2 drugs for Breast Cancer ER- (Daunorubicin and Donepezil), and 10 drugs for Bladder Cancer repurposing (Zafirlukast, Tenofovir, etc.).
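To make the control-theoretic framing concrete, the sketch below solves a tiny linear-quadratic regulator problem with SciPy. The dynamics, cost matrices, and the interpretation of the cost-to-go as "ease of treatment" are illustrative assumptions standing in for DeCoST's disease-specific model, not its actual construction.

```python
# Minimal LQR sketch: model disease-state dynamics as x' = A x + B u (u = drug
# perturbation) and use the LQR cost to quantify how cheaply a drug can drive
# the state back toward baseline. All matrices here are toy placeholders.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.2, 0.1], [0.0, -0.3]])   # gene-network dynamics (toy)
B = np.array([[1.0], [0.5]])              # how the drug perturbs each state (toy)
Q = np.eye(2)                             # penalty on deviation from healthy state
R = np.array([[1.0]])                     # penalty on drug "effort"

P = solve_continuous_are(A, B, Q, R)      # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)           # optimal feedback gain, u = -K x

x0 = np.array([1.0, -0.5])                # initial (diseased) deviation from baseline
cost_to_go = float(x0 @ P @ x0)           # lower cost = state is easier to control
print(f"feedback gain: {K}, cost-to-go: {cost_to_go:.3f}")
```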

17 citations


Cites result from "A Next Generation Connectivity Map:..."

  • ...In this work, we have shown the results between DeCoST and the Broad Institute CMAP, which is among the most well-known and comprehensive platforms for drug repurposing....

  • ...Unfortunately, we could not compare CMAP and DeCoST at this point....

  • ...Therefore, we believe that DeCoST could provide complementary advantages, in addition to CMAP....

  • ...Second, due to several factors in experimental design, CMAP does not contain cell lines for Breast Cancer ER- and Bladder Cancer....

  • ...DeCoST focuses primarily on recommending drugs that have never been in disease-specific clinical trials; meanwhile, CMAP (https://clue.io/repurposing-app) primarily reports on drugs that have been in early phases of clinical trials....

Journal ArticleDOI
TL;DR: A combination of orthogonal approaches can allow target identification beyond the proteome as well as aid prioritisation for resource-intensive target validation studies.

17 citations

Journal ArticleDOI
TL;DR: A screening process for predicting chemical carcinogenicity and genotoxicity and characterizing modes of actions (MoAs) of chemical perturbations using in vitro gene expression assays is developed.
Abstract: Background: Most chemicals in commerce have not been evaluated for their carcinogenic potential. The de facto gold-standard approach to carcinogen testing adopts the 2-y rodent bioassay, a time-con...

17 citations


Cites background or methods from "A Next Generation Connectivity Map:..."

  • ...Differential expression values were calculated as moderated z-scores for each landmark gene and each unique perturbation (chemical and dose combination), collapsed to a single value across replicates (Subramanian et al. 2017)....

  • ...This platform was used in the creation of the Connectivity Map (CMap) (Subramanian et al. 2017), which now includes 1.3 million perturbation profiles of drugs and small molecules and has been instrumental in the discovery of small-molecule MoAs....

  • ...Detailed cell culture, plating, treatment and lysis protocols are described in https://assets.clue.io/resources/sop-cell.pdf (Subramanian et al. 2017)....

  • ...TAS > 0.2 is the standard cutoff for sufficient bioactivity adopted by the CMap-L1000 workflow (Subramanian et al. 2017), while TAS > 0.3 and TAS > 0.4 represent more stringent thresholds we used to assess the effect of increasing bioactivity on downstream analysis, such as classification and gene set…...

  • ...…each of our signatures and each of the perturbation signatures in the CMap, which comprises ~1.3 million profiles corresponding to 19,811 drugs and small molecules, and 5,075 molecular (gene-specific knockdown and overexpression) perturbations across 3 to 77 cell lines (Subramanian et al. 2017)....

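The quoted methods above lean on CMap's moderated z-score collapse, in which replicate signatures are averaged with weights reflecting how well each replicate agrees with the others. The sketch below illustrates that correlation-weighted averaging; the clipping and normalization details are simplifications, not the exact L1000 implementation.

```python
# Sketch of a correlation-weighted replicate collapse in the spirit of CMap's
# moderated z-score: each replicate's z-score vector is weighted by its summed
# Spearman correlation with the other replicates, so discordant replicates
# contribute less. Clipping and normalization here are simplifications.
import numpy as np
from scipy.stats import spearmanr

def collapse_replicates(z):  # z: (n_replicates, n_genes) z-score matrix
    corr, _ = spearmanr(z, axis=1)          # pairwise Spearman across replicates
    corr = np.atleast_2d(corr)
    np.fill_diagonal(corr, 0.0)
    weights = corr.sum(axis=1)              # agreement of each replicate with the rest
    weights = np.clip(weights, 0.01, None)  # avoid zero/negative weights (simplification)
    weights = weights / weights.sum()
    return weights @ z                      # weighted-average signature per gene

replicates = np.array([
    [1.2, -0.8, 0.3, 2.0],
    [1.0, -1.1, 0.1, 1.7],
    [-0.2, 0.4, 0.9, -0.5],                 # a discordant replicate gets down-weighted
])
print(collapse_replicates(replicates))
```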

References
Journal ArticleDOI
TL;DR: The Gene Set Enrichment Analysis (GSEA) method as discussed by the authors focuses on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation.
Abstract: Although genomewide RNA expression analysis has become a routine tool in biomedical research, extracting biological insight from such information remains a major challenge. Here, we describe a powerful analytical method called Gene Set Enrichment Analysis (GSEA) for interpreting gene expression data. The method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation. We demonstrate how GSEA yields insights into several cancer-related data sets, including leukemia and lung cancer. Notably, where single-gene analysis finds little similarity between two independent studies of patient survival in lung cancer, GSEA reveals many biological pathways in common. The GSEA method is embodied in a freely available software package, together with an initial database of 1,325 biologically defined gene sets.
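The core of GSEA is a running-sum statistic over a ranked gene list. The sketch below implements the unweighted (classic Kolmogorov-Smirnov-like) variant as an illustration; the published method additionally weights hits by their correlation with the phenotype and estimates significance by permutation. Gene names are illustrative.

```python
# Simplified sketch of GSEA's running-sum enrichment score: walk down a ranked
# gene list, step up when a gene is in the set and down otherwise, and report
# the maximum deviation from zero.
def enrichment_score(ranked_genes, gene_set):
    gene_set = set(gene_set)
    n_hits = sum(g in gene_set for g in ranked_genes)
    n_miss = len(ranked_genes) - n_hits
    hit_step, miss_step = 1.0 / n_hits, 1.0 / n_miss
    running, best = 0.0, 0.0
    for gene in ranked_genes:
        running += hit_step if gene in gene_set else -miss_step
        if abs(running) > abs(best):
            best = running
    return best  # positive: the set is enriched at the top of the ranking

ranked = ["MYC", "CCND1", "E2F1", "GAPDH", "ACTB", "TP53", "RB1", "CDH1"]
print(enrichment_score(ranked, {"MYC", "CCND1", "E2F1"}))  # enriched near the top
```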

34,830 citations

Journal Article
TL;DR: A new technique called t-SNE visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map; it is a variation of Stochastic Neighbor Embedding that is much easier to optimize and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
Abstract: We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets.
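As a usage illustration (assuming a scikit-learn workflow rather than the authors' reference implementation), the snippet below embeds synthetic high-dimensional profiles into two dimensions:

```python
# Usage sketch: embed high-dimensional expression-like profiles into 2-D with
# t-SNE via scikit-learn. The data are synthetic clusters for illustration.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
cluster_a = rng.normal(loc=0.0, size=(100, 50))
cluster_b = rng.normal(loc=3.0, size=(100, 50))
profiles = np.vstack([cluster_a, cluster_b])

embedding = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(profiles)
print(embedding.shape)  # (200, 2): one 2-D point per profile
```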

30,124 citations

Journal ArticleDOI
TL;DR: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data and provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments.
Abstract: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data. GEO provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments. GEO is not intended to replace in-house gene expression databases that benefit from coherent data sets, and which are constructed to facilitate a particular analytic method, but rather to complement these by acting as a tertiary, central data distribution hub. The three central data entities of GEO are platforms, samples and series, and were designed with gene expression and genomic hybridization experiments in mind. A platform is, essentially, a list of probes that define what set of molecules may be detected. A sample describes the set of molecules that are being probed and references a single platform used to generate its molecular abundance data. A series organizes samples into the meaningful data sets which make up an experiment. The GEO repository is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo.
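The platform/sample/series data model can be explored programmatically. The sketch below assumes the third-party GEOparse package (not part of GEO itself); the accession number is arbitrary and attribute usage follows GEOparse's documented interface, so treat it as a sketch rather than a definitive recipe.

```python
# Sketch of GEO's platform/sample/series model via the third-party GEOparse
# package (an assumption). A series (GSE) groups samples (GSM) measured on
# one or more platforms (GPL).
import GEOparse

gse = GEOparse.get_GEO(geo="GSE1563", destdir="./")  # download and parse a series

print(gse.metadata.get("title"))                     # series-level description
for gpl_name in gse.gpls:                            # platforms: the probe definitions
    print("platform:", gpl_name)
for gsm_name, gsm in list(gse.gsms.items())[:3]:     # samples: per-array measurements
    print("sample:", gsm_name, gsm.metadata.get("title"))
```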

10,968 citations

Journal ArticleDOI
TL;DR: How BLAT was optimized is described, which is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences.
Abstract: Analyzing vertebrate genomes requires rapid mRNA/DNA and cross-species protein alignments. A new tool, BLAT, is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences. BLAT's speed stems from an index of all nonoverlapping K-mers in the genome. This index fits inside the RAM of inexpensive computers, and need only be computed once for each genome assembly. BLAT has several major stages. It uses the index to find regions in the genome likely to be homologous to the query sequence. It performs an alignment between homologous regions. It stitches together these aligned regions (often exons) into larger alignments (typically genes). Finally, BLAT revisits small internal exons possibly missed at the first stage and adjusts large gap boundaries that have canonical splice sites where feasible. This paper describes how BLAT was optimized. Effects on speed and sensitivity are explored for various K-mer sizes, mismatch schemes, and number of required index matches. BLAT is compared with other alignment programs on various test sets and then used in several genome-wide applications. http://genome.ucsc.edu hosts a web-based BLAT server for the human genome.
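The indexing stage described above is easy to illustrate: hash the genome's non-overlapping K-mers, then look up every overlapping K-mer of the query to get candidate regions. This toy sketch omits BLAT's near-match schemes, alignment, and exon stitching; the sequences are made up.

```python
# Toy sketch of the indexing idea behind BLAT: build a dictionary of
# non-overlapping K-mers from the "genome", then look up the query's K-mers
# to find candidate homologous regions.
def build_kmer_index(genome, k=4):
    index = {}
    for pos in range(0, len(genome) - k + 1, k):   # non-overlapping K-mers
        index.setdefault(genome[pos:pos + k], []).append(pos)
    return index

def candidate_hits(query, index, k=4):
    hits = []
    for offset in range(len(query) - k + 1):       # overlapping K-mers in the query
        for genome_pos in index.get(query[offset:offset + k], []):
            hits.append((genome_pos, offset))
    return hits

genome = "ACGTACGTTTGACCAGGTACGTAC"
index = build_kmer_index(genome)
print(candidate_hits("TTGACCAG", index))           # (genome position, query offset) pairs
```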

8,326 citations

Journal ArticleDOI
TL;DR: This paper proposed parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples.
Abstract: SUMMARY Non-biological experimental variation or "batch effects" are commonly observed across multiple batches of microarray experiments, often rendering the task of combining data from these batches difficult. The ability to combine microarray data sets is advantageous to researchers to increase statistical power to detect biological phenomena from studies where logistical considerations restrict sample size or in studies that require the sequential hybridization of arrays. In general, it is inappropriate to combine data sets without adjusting for batch effects. Methods have been proposed to filter batch effects from data, but these are often complicated and require large batch sizes (>25) to implement. Because the majority of microarray studies are conducted using much smaller sample sizes, existing methods are not sufficient. We propose parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples. We illustrate our methods using two example data sets and show that our methods are justifiable, easy to apply, and useful in practice. Software for our method is freely available at: http://biosun1.harvard.edu/complab/batch/.
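To convey the flavor of batch adjustment, the sketch below centers and scales each gene within each batch and then restores the overall location and scale. It deliberately omits the empirical Bayes shrinkage of batch parameters that is this paper's actual contribution, so it illustrates the problem rather than ComBat itself; the data and labels are synthetic.

```python
# Simplified batch-adjustment sketch: per-gene, per-batch standardization
# followed by restoring the overall mean and scale. ComBat additionally
# shrinks the batch parameters with an empirical Bayes prior, omitted here.
import numpy as np
import pandas as pd

def naive_batch_adjust(expr, batches):
    """expr: genes x samples DataFrame; batches: per-sample batch labels."""
    overall_mean = expr.mean(axis=1)
    overall_std = expr.std(axis=1).replace(0, 1.0)
    adjusted = expr.copy()
    for batch in pd.unique(batches):
        cols = expr.columns[np.asarray(batches) == batch]
        batch_mean = expr[cols].mean(axis=1)
        batch_std = expr[cols].std(axis=1).replace(0, 1.0)
        standardized = expr[cols].sub(batch_mean, axis=0).div(batch_std, axis=0)
        adjusted[cols] = standardized.mul(overall_std, axis=0).add(overall_mean, axis=0)
    return adjusted

rng = np.random.default_rng(0)
expr = pd.DataFrame(rng.normal(size=(5, 6)),
                    index=[f"gene{i}" for i in range(5)],
                    columns=[f"s{i}" for i in range(6)])
expr.iloc[:, 3:] += 2.0                        # simulate a shifted second batch
print(naive_batch_adjust(expr, ["A"] * 3 + ["B"] * 3).round(2))
```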

6,319 citations
