Author

Mark Gerstein

Bio: Mark Gerstein is an academic researcher from Yale University. The author has contributed to research in topics: Genome & Gene. The author has an h-index of 168, co-authored 751 publications receiving 149578 citations. Previous affiliations of Mark Gerstein include Rutgers University & Structural Genomics Consortium.
Topics: Genome, Gene, Human genome, Genomics, Pseudogene


Papers
Journal ArticleDOI
TL;DR: Suggests adaptations to current publication formats that would facilitate text mining and enable its broader use.

4 citations

Journal ArticleDOI
02 Jun 2000-Science
TL;DR: A third approach for annotating the human genome is, in a sense, already extant: extend the capabilities of the biological science literature.
Abstract: The News article “Are sequencers ready to ‘annotate’ the human genome?” by Elizabeth Pennisi (special issue on the Drosophila Genome, 24 Mar., p. [2183][1]) is especially timely and provocative. Pennisi mentions two ideas: a small group gathering at a centralized annotation jamboree, or a distributed, Web-based system that would allow anyone to contribute annotations with a “smart browser” that would merge all efforts. I favor the essence of the second proposal because it provides a more democratic and more “biological” approach to an all-important problem. There is, however, a third approach for annotating the human genome (providing at least the putative start, stop, and structure of each gene) that is, in a sense, already extant: extend the capabilities of the biological science literature. The current journal system is decentralized, yet most research articles adhere to common standards that make them ideal for annotation: (i) Each article associates a bit of annotation with a distinct time and place and with specific, responsible parties. (ii) Attentive scholarly referencing and footnoting provide a way to connect bits of annotation and allow for continuous “updates.” (iii) Peer review and editing provide a proven quality-control mechanism. (iv) Publication is an established indicator of scientific productivity; consequently, scientists already have an incentive to provide the information, whereas database submissions are often regarded as a chore. The main drawback of current journal article formats is that they are not very “computer-parseable,” or suitable for bulk annotation of thousands of genes. However, by adding sections of highly structured text to each article (that is, extended keywords drawn from a controlled vocabulary) and linking subparts of an article to relevant database identifiers, one can envision how a “literature annotation standard” could readily be interpreted by computers.
Furthermore, if an article could be linked to a large “supplementary materials” data file with simple annotations for many genes (for example, lists of all the membrane proteins in the Caenorhabditis elegans genome), one would have a mechanism for bulk annotation. Further standardization could be achieved if the article described defined ways in which the data file might be updated over time and if the supplementary materials were refereed and evaluated with the text of the article. [1]: /lookup/doi/10.1126/science.287.5461.2183
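The structured-text idea in this letter can be made concrete with a toy machine-readable record. Everything below except the cited DOI is invented for illustration; no such annotation standard is defined in the letter itself.

```python
# One hypothetical annotation record of the kind the letter envisions:
# structured fields plus a database cross-reference and a link back to
# a specific, responsible source article. All field names are made up.
annotation = {
    "gene_id": "CE00001",                     # hypothetical identifier
    "organism": "Caenorhabditis elegans",
    "claim": "predicted membrane protein",    # controlled-vocabulary term
    "evidence": "computational",
    "source_article_doi": "10.1126/science.287.5461.2183",  # DOI cited above
    "timestamp": "2000-06-02",
}
```

A bulk-annotation supplementary file, as the letter suggests, would simply be a long list of such records, one per gene.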

4 citations

Journal ArticleDOI
TL;DR: In this paper, a deep learning framework for condensing enhancers and refining boundaries with large-scale functional assays (DECODE) is proposed to solve the problem of low-resolution annotations with superfluous regions.
Abstract: Motivation Mapping distal regulatory elements, such as enhancers, is a cornerstone for elucidating how genetic variations may influence diseases. Previous enhancer-prediction methods have used either unsupervised approaches or supervised methods with limited training data. Moreover, past approaches have implemented enhancer discovery as a binary classification problem without accurate boundary detection, producing low-resolution annotations with superfluous regions and reducing the statistical power for downstream analyses (e.g. causal variant mapping and functional validations). Here, we addressed these challenges via a two-step model called Deep-learning framework for Condensing enhancers and refining boundaries with large-scale functional assays (DECODE). First, we employed direct enhancer-activity readouts from novel functional characterization assays, such as STARR-seq, to train a deep neural network for accurate cell-type-specific enhancer prediction. Second, to improve the annotation resolution, we implemented a weakly supervised object detection framework for enhancer localization with precise boundary detection (to a 10 bp resolution) using Gradient-weighted Class Activation Mapping. Results Our DECODE binary classifier outperformed a state-of-the-art enhancer prediction method by 24% in transgenic mouse validation. Furthermore, the object detection framework can condense enhancer annotations to only 13% of their original size, and these compact annotations have significantly higher conservation scores and genome-wide association study variant enrichments than the original predictions. Overall, DECODE is an effective tool for enhancer classification and precise localization. Availability and implementation DECODE source code and pre-processing scripts are available at decode.gersteinlab.org. Supplementary information Supplementary data are available at Bioinformatics online.
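The boundary-refinement step can be caricatured without a neural network: given per-bin importance scores (which in DECODE come from Gradient-weighted Class Activation Mapping over the trained classifier, but are simply invented here), condense a coarse candidate region to its high-importance core. This is a sketch of the idea, not the paper's algorithm.

```python
def refine_boundaries(importance, frac=0.5):
    """Condense a coarse candidate region to the span of bins whose
    importance reaches `frac` of the maximum. A stand-in for DECODE's
    Grad-CAM refinement step; the per-bin scores here are invented."""
    cutoff = frac * max(importance)
    keep = [i for i, v in enumerate(importance) if v >= cutoff]
    # Return a half-open (start, end) bin span, or None if nothing passes.
    return (keep[0], keep[-1] + 1) if keep else None

# Eight hypothetical 10 bp bins; only the middle three are "active",
# so the refined annotation is much smaller than the original region.
importance = [0.05, 0.10, 0.20, 0.90, 1.00, 0.80, 0.15, 0.05]
span = refine_boundaries(importance)
```

The refined span covers 3 of 8 bins, mirroring (in spirit) the paper's observation that condensed annotations are a small fraction of the original size.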

4 citations

Posted ContentDOI
01 Apr 2021-bioRxiv
TL;DR: In this article, the authors introduce a network propagation approach that entirely focuses on long tail genes with potential functional impact on cancer development, and identify sets of often overlooked, rarely to moderately mutated genes whose biological interactions significantly propel their mutation-frequency-based rank upwards during propagation in 17 cancer types.
Abstract: Introduction The diversity of genomic alterations in cancer poses challenges to fully understanding the etiologies of the disease. Recent interest in infrequent mutations, in genes that reside in the “long tail” of the mutational distribution, uncovered new genes with significant implication in cancer development. The study of these genes often requires integrative approaches with multiple types of biological data. Network propagation methods have demonstrated high efficacy in uncovering genomic patterns underlying cancer using biological interaction networks. Yet, the majority of these analyses have focused their assessment on detecting known cancer genes or identifying altered subnetworks. In this paper, we introduce a network propagation approach that entirely focuses on long tail genes with potential functional impact on cancer development. Results We identify sets of often overlooked, rarely to moderately mutated genes whose biological interactions significantly propel their mutation-frequency-based rank upwards during propagation in 17 cancer types. We call these sets “upward mobility genes” (UMGs, 28-83 genes per cancer type) and hypothesize that their significant rank improvement indicates functional importance. We report new cancer-pathway associations based on UMGs that were not previously identified using driver genes alone, validate UMGs’ role in cancer cell survival in vitro—alone and compared to other network methods—using extensive genome-wide RNAi and CRISPR data repositories, and further conduct in vitro functional screenings resulting in the validation of 8 previously unreported genes. Conclusion Our analysis extends the spectrum of cancer-relevant genes and identifies novel potential therapeutic targets.
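The "upward mobility" idea can be sketched as a random walk with restart on a toy interaction network. The graph, mutation scores, and restart parameter below are all invented for illustration; they are not the paper's actual data or pipeline.

```python
def propagate(adj, scores, restart=0.5, iters=100):
    """Random walk with restart: p <- restart*s + (1-restart)*(spread of p),
    where each gene spreads its current score evenly across its neighbors."""
    genes = sorted(scores)
    deg = {g: sum(adj.get(g, {}).values()) or 1.0 for g in genes}
    p = dict(scores)
    for _ in range(iters):
        p = {g: restart * scores[g]
             + (1 - restart) * sum(p[n] * w / deg[n]
                                   for n, w in adj.get(g, {}).items())
             for g in genes}
    return p

# Hypothetical toy network: "C" is rarely mutated but bridges two
# frequently mutated genes, so propagation pushes its rank upward,
# while the isolated gene "D" sinks despite a higher initial score.
adj = {"A": {"C": 1.0}, "B": {"C": 1.0},
       "C": {"A": 1.0, "B": 1.0}, "D": {}}
scores = {"A": 0.9, "B": 0.8, "C": 0.05, "D": 0.3}
p = propagate(adj, scores)
```

Here "C" starts ranked below "D" but finishes above it: a miniature "upward mobility gene".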

4 citations

Journal ArticleDOI
TL;DR: Describes an approach for identifying the binding sites of transcription factors on a global scale; such information is crucial for understanding how the activity of genes is controlled and, in turn, cell proliferation and differentiation.
Abstract: A nearly complete draft of the human genome has been determined, producing an enormous wealth of information (Olivier et al. 2001). However, the sequence by itself reveals little about the critical elements encoded in the DNA, and consequently, it is paramount to identify the functional elements encoded in the 3 billion base pairs and to determine how they work together to mediate complex processes such as development and responses to environmental alterations. Two essential tasks toward this goal are the identification of coding and transcriptionally active regions in the human genome and determining how they are regulated. The identification of these regions is an essential first step for the comprehensive and systematic analysis of gene and protein function. Thus far, a variety of different approaches have been used for identification of coding sequences and other functional elements in genomic DNA (Snyder and Gerstein 2003). Genes have been identified by generating and sequencing cDNAs, expressed sequence tags (ESTs), and related approaches, and then mapping the mRNA coding sequences onto genomic DNA (Lander et al. 2001). Genes have also been identified by computational methods such as motif searches, identification of long open reading frames, and comparative genomic studies to identify conserved sequences, particularly those predicted to encode proteins (Lander et al. 2001; Venter et al. 2001; Waterston et al. 2002). The availability of the full genomic DNA sequence allows the direct identification of transcribed sequences by globally interrogating all regions of the genome using genomic DNA microarrays. In addition to identification of genes, it is also of high interest to identify the elements that regulate their expression. Such information is crucial for understanding how the activity of genes is controlled and, thereby, for understanding cell proliferation and differentiation.
Approaches to analyze gene regulation in the past have been hampered by the fact that they are either not comprehensive or indirect. For example, comparative analysis of gene expression using DNA microarrays in lines expressing or lacking a factor of interest is indirect—changes in gene expression may be due to downstream effects of the factor. Recently, we have developed an approach for identifying the binding sites of transcription factors on a global scale (Iyer et al. 2001; Horak et al. 2002). This procedure involves immunoprecipitation of chromatin (ChIP) associated with a transcription factor of interest and using the associated DNA to probe a genomic DNA array containing …

3 citations


Cited by
Journal ArticleDOI
TL;DR: A new criterion for triggering the extension of word hits, combined with a new heuristic for generating gapped alignments, yields a gapped BLAST program that runs at approximately three times the speed of the original.
Abstract: The BLAST programs are widely used tools for searching protein and DNA databases for sequence similarities. For protein comparisons, a variety of definitional, algorithmic and statistical refinements described here permits the execution time of the BLAST programs to be decreased substantially while enhancing their sensitivity to weak similarities. A new criterion for triggering the extension of word hits, combined with a new heuristic for generating gapped alignments, yields a gapped BLAST program that runs at approximately three times the speed of the original. In addition, a method is introduced for automatically combining statistically significant alignments produced by BLAST into a position-specific score matrix, and searching the database using this matrix. The resulting Position-Specific Iterated BLAST (PSI-BLAST) program runs at approximately the same speed per iteration as gapped BLAST, but in many cases is much more sensitive to weak but biologically relevant sequence similarities. PSI-BLAST is used to uncover several new and interesting members of the BRCT superfamily.
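The position-specific score matrix at the heart of PSI-BLAST can be illustrated with a toy construction. The add-one pseudocounts and uniform background below are deliberate simplifications; real PSI-BLAST uses sequence weighting and substitution-matrix priors, and the alignment here is invented.

```python
import math

def build_pssm(alignment, alphabet="ACGT", pseudocount=1.0):
    """Log-odds position-specific score matrix from a gapless alignment,
    with simple add-one pseudocounts against a uniform background."""
    background = 1.0 / len(alphabet)
    pssm = []
    for i in range(len(alignment[0])):
        column = [seq[i] for seq in alignment]
        total = len(column) + pseudocount * len(alphabet)
        # Score of letter a at position i: log2(observed freq / background).
        pssm.append({a: math.log2(((column.count(a) + pseudocount) / total)
                                  / background)
                     for a in alphabet})
    return pssm

def score(pssm, seq):
    """Sum of per-position log-odds scores for a candidate sequence."""
    return sum(row[c] for row, c in zip(pssm, seq))

aln = ["ACGT", "ACGA", "ACCT"]  # hypothetical aligned hits
pssm = build_pssm(aln)
```

Iterating — search with the matrix, add new significant hits to the alignment, rebuild — is the "Position-Specific Iterated" loop the abstract describes.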

70,111 citations

Journal ArticleDOI
TL;DR: Describes the goals of the PDB, the systems in place for data deposition and access, how to obtain further information, and plans for the future development of the resource.
Abstract: The Protein Data Bank (PDB; http://www.rcsb.org/pdb/ ) is the single worldwide archive of structural data of biological macromolecules. This paper describes the goals of the PDB, the systems in place for data deposition and access, how to obtain further information, and near-term plans for the future development of the resource.

34,239 citations

Journal ArticleDOI
TL;DR: The Spliced Transcripts Alignment to a Reference (STAR) software based on a previously undescribed RNA-seq alignment algorithm that uses sequential maximum mappable seed search in uncompressed suffix arrays followed by seed clustering and stitching procedure outperforms other aligners by a factor of >50 in mapping speed.
Abstract: Motivation Accurate alignment of high-throughput RNA-seq data is a challenging and yet unsolved problem because of the non-contiguous transcript structure, relatively short read lengths and constantly increasing throughput of the sequencing technologies. Currently available RNA-seq aligners suffer from high mapping error rates, low mapping speed, read length limitation and mapping biases. Results To align our large (>80 billion reads) ENCODE Transcriptome RNA-seq dataset, we developed the Spliced Transcripts Alignment to a Reference (STAR) software based on a previously undescribed RNA-seq alignment algorithm that uses sequential maximum mappable seed search in uncompressed suffix arrays followed by a seed clustering and stitching procedure. STAR outperforms other aligners by a factor of >50 in mapping speed, aligning to the human genome 550 million 2 × 76 bp paired-end reads per hour on a modest 12-core server, while at the same time improving alignment sensitivity and precision. In addition to unbiased de novo detection of canonical junctions, STAR can discover non-canonical splices and chimeric (fusion) transcripts, and is also capable of mapping full-length RNA sequences. Using Roche 454 sequencing of reverse transcription polymerase chain reaction amplicons, we experimentally validated 1960 novel intergenic splice junctions with an 80-90% success rate, corroborating the high precision of the STAR mapping strategy. Availability and implementation STAR is implemented as a standalone C++ code. STAR is free open source software distributed under GPLv3 license and can be downloaded from http://code.google.com/p/rna-star/.

30,684 citations

Journal ArticleDOI
TL;DR: Bowtie extends previous Burrows-Wheeler techniques with a novel quality-aware backtracking algorithm that permits mismatches; multiple processor cores can be used simultaneously to achieve even greater alignment speeds.
Abstract: Bowtie is an ultrafast, memory-efficient alignment program for aligning short DNA sequence reads to large genomes. For the human genome, Burrows-Wheeler indexing allows Bowtie to align more than 25 million reads per CPU hour with a memory footprint of approximately 1.3 gigabytes. Bowtie extends previous Burrows-Wheeler techniques with a novel quality-aware backtracking algorithm that permits mismatches. Multiple processor cores can be used simultaneously to achieve even greater alignment speeds. Bowtie is open source http://bowtie.cbcb.umd.edu.
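The Burrows-Wheeler core of this style of aligner can be sketched as FM-index backward search for exact matches. Bowtie's quality-aware backtracking for mismatches is omitted here, and the text being indexed is invented; this is a teaching sketch, not Bowtie's implementation.

```python
def bwt_index(text):
    """Burrows-Wheeler transform of text+'$', plus the C table
    (for each character, how many characters in the text sort below it)."""
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    bwt = "".join(r[-1] for r in rotations)
    C, total = {}, 0
    for c in sorted(set(bwt)):
        C[c] = total
        total += bwt.count(c)
    return bwt, C

def occ(bwt, c, i):
    """Occurrences of c in bwt[:i] (a rank query; naive O(n) version)."""
    return bwt[:i].count(c)

def count_matches(bwt, C, pattern):
    """FM-index backward search: number of exact occurrences of pattern.
    Bowtie layers quality-aware backtracking on this core to tolerate
    mismatches; that extension is omitted here."""
    lo, hi = 0, len(bwt)
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + occ(bwt, c, lo)
        hi = C[c] + occ(bwt, c, hi)
        if lo >= hi:
            return 0
    return hi - lo

bwt, C = bwt_index("GATTACAGATTACA")  # toy "genome"
```

The memory efficiency the abstract mentions comes from the fact that only the BWT string plus small rank structures need be stored, not the suffixes themselves.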

20,335 citations

28 Jul 2005
TL;DR: PfPMP1 interacts with one or more receptors on infected erythrocytes, dendritic cells, and the placenta, playing a key role in adhesion and immune evasion.
Abstract: Antigenic variation enables many pathogenic microorganisms to readily evade host immune responses. Plasmodium falciparum erythrocyte membrane protein 1 (PfPMP1), expressed on the surface of infected erythrocytes, interacts with one or more receptors on infected erythrocytes, endothelial cells, dendritic cells, and the placenta, playing a key role in adhesion and immune evasion. In each haploid genome, the var gene family encodes roughly 60 members; switching transcription among different var gene variants provides the molecular basis for antigenic variation.

18,940 citations