Journal ArticleDOI

A Next Generation Connectivity Map: L1000 Platform and the First 1,000,000 Profiles.

TL;DR: The expanded CMap is reported, made possible by a new, low-cost, high-throughput reduced representation expression profiling method that is shown to be highly reproducible, comparable to RNA sequencing, and suitable for computational inference of the expression levels of 81% of non-measured transcripts.
About: This article was published in Cell on 2017-11-30 and is currently open access. It has received 1,943 citations to date.
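The connectivity idea at the heart of CMap, matching a query signature of up- and down-regulated genes against a library of perturbation expression profiles, can be sketched as follows. The gene names and the simple centered-rank score below are illustrative stand-ins, not the platform's actual scoring method:

```python
# Toy CMap-style connectivity query (illustration only; the real
# platform uses a weighted enrichment statistic, not this simple
# centered-rank average).

def connectivity_score(profile, up_genes, down_genes):
    """profile: genes ordered most up-regulated first. A high score
    means the profile mimics the query signature; a negative score
    means it opposes it (the basis for connecting drugs to diseases)."""
    rank = {g: i for i, g in enumerate(profile)}
    n = len(profile) - 1
    def centered(g):  # +1 at the top of the ranking, -1 at the bottom
        return 1.0 - 2.0 * rank[g] / n
    up = sum(centered(g) for g in up_genes) / len(up_genes)
    down = sum(centered(g) for g in down_genes) / len(down_genes)
    return up - down

# A profile that pushes the query's up-genes up and down-genes down
# scores strongly positive (maximum possible is 2.0 in this sketch).
profile = ["MYC", "EGFR", "TP53", "GAPDH", "ACTB", "CDKN1A"]
score = connectivity_score(profile, up_genes=["MYC", "EGFR"],
                           down_genes=["ACTB", "CDKN1A"])
print(round(score, 6))
```

A drug whose profile reverses a disease signature would score near the negative extreme instead, which is how opposing ("therapeutic") connections are flagged.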
Citations
Journal ArticleDOI
TL;DR: The authors performed a batch query in the Connectivity Map (CMap) based on bioinformatics, identified 35 compounds with therapeutic potential, and screened out parbendazole as the most promising compound, which had an excellent inhibitory effect on the proliferation of HNSCC cell lines.

3 citations

Journal ArticleDOI
TL;DR: The authors report that prenatally witnessing the defeat of a mated partner induces anxiety-like behaviors in F1 male, but not female, offspring, indicating a sex-specific intergenerational effect.

3 citations

Journal ArticleDOI
TL;DR: The authors propose a systems biology method to investigate the carcinogenic mechanism of oral squamous cell carcinoma (OSCC) in order to identify important biomarkers as drug targets.
Abstract: In this study, we provide a systems biology method to investigate the carcinogenic mechanism of oral squamous cell carcinoma (OSCC) in order to identify some important biomarkers as drug targets. Further, a systematic drug discovery method with a deep neural network (DNN)-based drug-target interaction (DTI) model and drug design specifications is proposed to design a potential multiple-molecule drug for the medical treatment of OSCC before clinical trials. First, we use big database mining to construct the candidate genome-wide genetic and epigenetic network (GWGEN) including a protein-protein interaction network (PPIN) and a gene regulatory network (GRN) for OSCC and non-OSCC. In the next step, real GWGENs are identified for OSCC and non-OSCC by system identification and system order detection methods based on the OSCC and non-OSCC microarray data, respectively. Then, the principal network projection (PNP) method was used to extract core GWGENs of OSCC and non-OSCC from real GWGENs of OSCC and non-OSCC, respectively. Afterward, core signaling pathways were constructed through the annotation of KEGG pathways, and then the carcinogenic mechanism of OSCC was investigated by comparing the core signal pathways and their downstream abnormal cellular functions of OSCC and non-OSCC. Consequently, HES1, TCF, NF-κB and SP1 are identified as significant biomarkers of OSCC. In order to discover multiple molecular drugs for these significant biomarkers (drug targets) of the carcinogenic mechanism of OSCC, we trained a DNN-based drug-target interaction (DTI) model by DTI databases to predict candidate drugs for these significant biomarkers. Finally, drug design specifications such as adequate drug regulation ability, low toxicity and high sensitivity are employed to filter out the appropriate molecular drugs metformin, gefitinib and gallic-acid to combine as a potential multiple-molecule drug for the therapeutic treatment of OSCC.

3 citations

Journal ArticleDOI
TL;DR: The authors developed integrated traditional Chinese medicine (ITCM), the largest-to-date online pharmacotranscriptomic platform based on active ingredients of traditional Chinese medicine (TCM), for the effective screening of active ingredients.
Abstract: With the emergence of high-throughput technologies, computational screening based on gene expression profiles has become one of the most effective methods for drug discovery. More importantly, profile-based approaches remarkably enhance novel drug-disease pair discovery without relying on drug- or disease-specific prior knowledge, which has been widely used in modern medicine. However, profile-based systematic screening of active ingredients of traditional Chinese medicine (TCM) has been scarcely performed due to inadequate pharmacotranscriptomic data. Here, we develop the largest-to-date online TCM active ingredients-based pharmacotranscriptomic platform, integrated traditional Chinese medicine (ITCM), for the effective screening of active ingredients. First, we performed unified high-throughput experiments and constructed the largest data repository of 496 representative active ingredients, which was five times larger than the previous one built by our team. The transcriptome-based multi-scale analysis was also performed to elucidate their mechanism. Then, we developed six state-of-the-art signature search methods to screen active ingredients and determine the optimal signature size for all methods. Moreover, we integrated them into a screening strategy, TCM-Query, to identify the potential active ingredients for the special disease. In addition, we also comprehensively collected the TCM-related resource by literature mining. Finally, we applied ITCM to the active ingredient bavachinin and two diseases, prostate cancer and COVID-19, to demonstrate the power of drug discovery. ITCM aims to comprehensively explore the active ingredients of TCM and boost studies of pharmacological action and drug discovery. ITCM is available at http://itcm.biotcm.net.

2 citations

Journal ArticleDOI
Published 01 Apr 2021 in Life
TL;DR: In this paper, the authors inferred TF activities from transcriptomic data for almost all human TFs, defined clusters of SLE patients based on the estimated TF activities and analyzed the differential activity patterns among SLE and healthy samples in two different cohorts.
Abstract: Background: Systemic Lupus Erythematosus (SLE) is a systemic autoimmune disease with diverse clinical manifestations. Although most of the SLE-associated loci are located in regulatory regions, there is a lack of global information about transcription factor (TF) activities, the mode of regulation of the TFs, or the cell- or sample-specific regulatory circuits. The aim of this work is to decipher TFs implicated in SLE. Methods: In order to decipher regulatory mechanisms in SLE, we have inferred TF activities from transcriptomic data for almost all human TFs, defined clusters of SLE patients based on the estimated TF activities, and analyzed the differential activity patterns among SLE and healthy samples in two different cohorts. The TF activity matrix was used to stratify SLE patients and define sets of TFs with statistically significant differential activity between the disease and control samples. Results: TF activities were able to identify two main subgroups of patients characterized by distinct neutrophil-to-lymphocyte ratios (NLR), with consistent patterns in two independent datasets: one from pediatric patients and the other from adults. Furthermore, after contrasting all subgroups of patients and controls, we obtained a significant and robust list of 14 TFs implicated in the dysregulation of SLE by different mechanisms and pathways. Among them were well-known regulators of SLE, such as STAT or IRF, but others suggest new pathways that might have important roles in SLE. Conclusions: These results provide a foundation to comprehend the regulatory mechanism underlying SLE and the established regulatory factors behind SLE heterogeneity that could be potential therapeutic targets.

2 citations

References
Journal ArticleDOI
TL;DR: The Gene Set Enrichment Analysis (GSEA) method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation.
Abstract: Although genomewide RNA expression analysis has become a routine tool in biomedical research, extracting biological insight from such information remains a major challenge. Here, we describe a powerful analytical method called Gene Set Enrichment Analysis (GSEA) for interpreting gene expression data. The method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation. We demonstrate how GSEA yields insights into several cancer-related data sets, including leukemia and lung cancer. Notably, where single-gene analysis finds little similarity between two independent studies of patient survival in lung cancer, GSEA reveals many biological pathways in common. The GSEA method is embodied in a freely available software package, together with an initial database of 1,325 biologically defined gene sets.

34,830 citations
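The core of GSEA, a running-sum statistic over a ranked gene list, can be sketched as follows. This is a simplified unweighted variant; the published method weights hits by correlation with the phenotype and assesses significance by permutation:

```python
# Unweighted running-sum enrichment score, a simplified sketch of the
# GSEA statistic (illustration only; gene names are hypothetical).

def enrichment_score(ranked_genes, gene_set):
    """Walk down the ranked list, incrementing the running sum on
    gene-set hits and decrementing on misses; return the maximum
    deviation from zero (the enrichment score)."""
    hits = set(gene_set)
    n, n_hits = len(ranked_genes), len(hits)
    if n_hits == 0 or n_hits == n:
        raise ValueError("gene set must be a proper, non-empty subset")
    hit_step = 1.0 / n_hits          # increment per gene-set member
    miss_step = 1.0 / (n - n_hits)   # decrement per non-member
    running, best = 0.0, 0.0
    for gene in ranked_genes:
        running += hit_step if gene in hits else -miss_step
        if abs(running) > abs(best):
            best = running
    return best

# Genes ranked by differential expression; the set clusters at the top,
# so the score reaches the maximum of 1.0.
ranked = ["TP53", "MYC", "EGFR", "GAPDH", "ACTB", "TUBB"]
print(round(enrichment_score(ranked, {"TP53", "MYC"}), 3))
```

A set concentrated at the bottom of the ranking instead drives the running sum negative, which is how down-regulated pathways are detected.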

Journal Article
TL;DR: A new technique called t-SNE visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map; a variation of Stochastic Neighbor Embedding that is much easier to optimize, it produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
Abstract: We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets.

30,124 citations
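The heavy-tailed kernel that gives t-SNE its name, and that reduces the crowding the abstract mentions, can be illustrated directly. This sketch only compares the two low-dimensional affinity functions; it is not an implementation of the embedding itself:

```python
# The low-dimensional similarity used by t-SNE: a Student-t kernel
# with one degree of freedom, 1 / (1 + d^2), replacing the Gaussian
# of the original SNE (illustration only).

import math

def student_t_similarity(y_i, y_j):
    """Unnormalized low-dimensional affinity used by t-SNE."""
    d2 = sum((a - b) ** 2 for a, b in zip(y_i, y_j))
    return 1.0 / (1.0 + d2)

def gaussian_similarity(y_i, y_j, sigma=1.0):
    """Gaussian affinity of the earlier SNE, for comparison."""
    d2 = sum((a - b) ** 2 for a, b in zip(y_i, y_j))
    return math.exp(-d2 / (2 * sigma ** 2))

# At large distances the heavy-tailed kernel decays polynomially
# rather than exponentially, so moderately dissimilar points can sit
# far apart in the map instead of crowding its center.
a, b = (0.0, 0.0), (5.0, 0.0)
print(student_t_similarity(a, b) > gaussian_similarity(a, b))  # True
```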

Journal ArticleDOI
TL;DR: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data and provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments.
Abstract: The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data. GEO provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments. GEO is not intended to replace in-house gene expression databases that benefit from coherent data sets, and which are constructed to facilitate a particular analytic method, but rather complement these by acting as a tertiary, central data distribution hub. The three central data entities of GEO are platforms, samples and series, and were designed with gene expression and genomic hybridization experiments in mind. A platform is, essentially, a list of probes that define what set of molecules may be detected. A sample describes the set of molecules that are being probed and references a single platform used to generate its molecular abundance data. A series organizes samples into the meaningful data sets which make up an experiment. The GEO repository is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo.

10,968 citations
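The platform/sample/series data model the abstract describes can be sketched with hypothetical classes. The class and field names are illustrative only; real GEO records are accessioned as GPL/GSM/GSE identifiers:

```python
# Toy model of GEO's three central data entities (names hypothetical).

from dataclasses import dataclass, field

@dataclass
class Platform:           # a list of probes: what can be detected
    accession: str
    probes: list

@dataclass
class Sample:             # one measured sample, tied to a single platform
    accession: str
    platform: Platform
    abundances: dict      # probe -> measured molecular abundance

@dataclass
class Series:             # groups samples into one experiment
    accession: str
    samples: list = field(default_factory=list)

chip = Platform("GPL96", ["probe_A", "probe_B"])
s1 = Sample("GSM1", chip, {"probe_A": 5.2, "probe_B": 1.1})
exp = Series("GSE1", [s1])
print(len(exp.samples), exp.samples[0].platform.accession)  # 1 GPL96
```

The key constraint the model encodes is that every sample references exactly one platform, while a series can mix samples from different platforms.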

Journal ArticleDOI
TL;DR: This paper describes how BLAT was optimized; BLAT is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences.
Abstract: Analyzing vertebrate genomes requires rapid mRNA/DNA and cross-species protein alignments. A new tool, BLAT, is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences. BLAT's speed stems from an index of all nonoverlapping K-mers in the genome. This index fits inside the RAM of inexpensive computers, and need only be computed once for each genome assembly. BLAT has several major stages. It uses the index to find regions in the genome likely to be homologous to the query sequence. It performs an alignment between homologous regions. It stitches together these aligned regions (often exons) into larger alignments (typically genes). Finally, BLAT revisits small internal exons possibly missed at the first stage and adjusts large gap boundaries that have canonical splice sites where feasible. This paper describes how BLAT was optimized. Effects on speed and sensitivity are explored for various K-mer sizes, mismatch schemes, and number of required index matches. BLAT is compared with other alignment programs on various test sets and then used in several genome-wide applications. http://genome.ucsc.edu hosts a web-based BLAT server for the human genome.

8,326 citations
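The non-overlapping K-mer index that the abstract credits for BLAT's speed can be sketched as follows. This toy version only finds seed hits; it omits BLAT's mismatch schemes, region stitching, and splice-site adjustment:

```python
# Minimal sketch of a non-overlapping K-mer genome index with seed
# lookup, in the spirit of BLAT (illustration only; sequences are
# made up).

def build_kmer_index(genome, k):
    """Index the start position of every NON-overlapping k-mer in the
    genome, stepping k bases at a time. Done once per assembly."""
    index = {}
    for pos in range(0, len(genome) - k + 1, k):
        index.setdefault(genome[pos:pos + k], []).append(pos)
    return index

def find_seed_hits(query, index, k):
    """Scan the query with OVERLAPPING k-mers (one base at a time) and
    report (query_pos, genome_pos) seed hits to extend into alignments."""
    hits = []
    for qpos in range(len(query) - k + 1):
        for gpos in index.get(query[qpos:qpos + k], []):
            hits.append((qpos, gpos))
    return hits

genome = "ACGTACGTTTGCAACG"
idx = build_kmer_index(genome, k=4)
print(find_seed_hits("TTGCAA", idx, k=4))  # [(0, 8)]
```

Because the genome is indexed with a stride of k, the index holds roughly genome_length/k entries, which is what lets it fit in RAM; scanning the query with overlapping k-mers compensates for the coarser genome sampling.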

Journal ArticleDOI
TL;DR: This paper proposed parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples.
Abstract: SUMMARY Non-biological experimental variation or “batch effects” are commonly observed across multiple batches of microarray experiments, often rendering the task of combining data from these batches difficult. The ability to combine microarray data sets is advantageous to researchers to increase statistical power to detect biological phenomena from studies where logistical considerations restrict sample size or in studies that require the sequential hybridization of arrays. In general, it is inappropriate to combine data sets without adjusting for batch effects. Methods have been proposed to filter batch effects from data, but these are often complicated and require large batch sizes (>25) to implement. Because the majority of microarray studies are conducted using much smaller sample sizes, existing methods are not sufficient. We propose parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples. We illustrate our methods using two example data sets and show that our methods are justifiable, easy to apply, and useful in practice. Software for our method is freely available at: http://biosun1.harvard.edu/complab/batch/.

6,319 citations
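A toy location-and-scale batch adjustment in the spirit of this model can be sketched as follows. Note that the published method shrinks the per-batch estimates toward a common prior via empirical Bayes, which is exactly what this sketch omits:

```python
# Naive location/scale batch adjustment (illustration only; the
# published empirical Bayes shrinkage of batch parameters is omitted,
# so this sketch is NOT robust for very small batches).

from statistics import mean, pstdev

def adjust_batches(values, batches):
    """Standardize each batch to the pooled mean and pooled SD,
    removing per-batch location and scale differences."""
    grand_mean = mean(values)
    grand_sd = pstdev(values) or 1.0
    adjusted = list(values)
    for b in set(batches):
        idx = [i for i, lab in enumerate(batches) if lab == b]
        b_mean = mean(values[i] for i in idx)
        b_sd = pstdev([values[i] for i in idx]) or 1.0
        for i in idx:
            z = (values[i] - b_mean) / b_sd          # remove batch effect
            adjusted[i] = z * grand_sd + grand_mean  # restore pooled scale
    return adjusted

# Batch "b" is shifted up by 10; after adjustment the batch means agree.
vals = [1.0, 2.0, 3.0, 11.0, 12.0, 13.0]
labs = ["a", "a", "a", "b", "b", "b"]
out = adjust_batches(vals, labs)
print(round(mean(out[:3]), 6), round(mean(out[3:]), 6))
```

With only three samples per batch, the per-batch mean and SD estimates above are noisy; shrinking them across genes with an empirical Bayes prior is precisely the paper's contribution for small batches.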
