
Showing papers by "Manolis Kellis published in 2012"


Journal ArticleDOI
TL;DR: This work has examined the completeness of the transcript annotation and found that 35% of transcriptional start sites are supported by CAGE clusters and 62% of protein-coding genes have annotated polyA sites, and over one-third of GENCODE protein-coding genes are supported by peptide hits derived from mass spectrometry spectra submitted to Peptide Atlas.
Abstract: The GENCODE Consortium aims to identify all gene features in the human genome using a combination of computational analysis, manual annotation, and experimental validation. Since the first public release of this annotation data set, few new protein-coding loci have been added, yet the number of alternative splicing transcripts annotated has steadily increased. The GENCODE 7 release contains 20,687 protein-coding and 9640 long noncoding RNA loci and has 33,977 coding transcripts not represented in UCSC genes and RefSeq. It also has the most comprehensive annotation of long noncoding RNA (lncRNA) loci publicly available with the predominant transcript form consisting of two exons. We have examined the completeness of the transcript annotation and found that 35% of transcriptional start sites are supported by CAGE clusters and 62% of protein-coding genes have annotated polyA sites. Over one-third of GENCODE protein-coding genes are supported by peptide hits derived from mass spectrometry spectra submitted to Peptide Atlas. New models derived from the Illumina Body Map 2.0 RNA-seq data identify 3689 new loci not currently in GENCODE, of which 3127 consist of two exon models indicating that they are possibly unannotated long noncoding loci. GENCODE 7 is publicly available from gencodegenes.org and via the Ensembl and UCSC Genome Browsers.

4,281 citations
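As a rough illustration of the completeness checks described in the GENCODE abstract above (e.g. the fraction of transcription start sites supported by CAGE clusters), here is a minimal Python sketch of a window-based TSS/CAGE overlap calculation. The coordinates, window size, and function name are illustrative assumptions, not GENCODE code.

```python
# Hedged sketch: fraction of annotated TSSs lying within a fixed window of a
# CAGE cluster, for a single chromosome with toy coordinates.
from bisect import bisect_left

def fraction_cage_supported(tss_positions, cage_positions, window=50):
    cage = sorted(cage_positions)
    supported = 0
    for tss in tss_positions:
        i = bisect_left(cage, tss)
        neighbors = cage[max(0, i - 1): i + 1]   # nearest clusters on either side
        if any(abs(tss - c) <= window for c in neighbors):
            supported += 1
    return supported / len(tss_positions)

tss = [1_000, 5_000, 9_990, 20_000]
cage = [1_020, 10_000, 30_000]
print(fraction_cage_supported(tss, cage))   # 2 of 4 TSSs supported -> 0.5
```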


01 Sep 2012
TL;DR: The Encyclopedia of DNA Elements project provides new insights into the organization and regulation of human genes and the genome, and is an expansive resource of functional annotations for biomedical research.

2,767 citations


Journal ArticleDOI
TL;DR: ChromHMM as mentioned in this paper is an automated computational system for learning chromatin states, characterizing their biological functions and correlations with large-scale functional datasets, and visualizing the resulting genome-wide maps of chromatin state annotations.
Abstract: Chromatin state annotation using combinations of chromatin modification patterns has emerged as a powerful approach for discovering regulatory regions and their cell type specific activity patterns, and for interpreting disease-association studies1-5. However, the computational challenge of learning chromatin state models from large numbers of chromatin modification datasets in multiple cell types still requires extensive bioinformatics expertise, making it inaccessible to the wider scientific community. To address this challenge, we have developed ChromHMM, an automated computational system for learning chromatin states, characterizing their biological functions and correlations with large-scale functional datasets, and visualizing the resulting genome-wide maps of chromatin state annotations.

2,134 citations
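ChromHMM itself learns a multivariate HMM over binarized chromatin marks; as a hedged, simplified sketch of the underlying idea (state decoding only, not the EM learning step, and not ChromHMM's actual implementation), the following assigns toy chromatin states to binarized mark vectors by Viterbi decoding under independent Bernoulli emissions. All parameters and state labels are made up.

```python
# Minimal sketch: Viterbi decoding of chromatin states from binarized marks
# under a multivariate-Bernoulli-emission HMM with invented example parameters.
import numpy as np

def viterbi_states(obs, start, trans, emit):
    """obs: (T, M) 0/1 mark matrix; start: (K,); trans: (K, K);
    emit: (K, M) per-state probability of observing each mark."""
    T, _ = obs.shape
    log_e = obs @ np.log(emit).T + (1 - obs) @ np.log(1 - emit).T   # (T, K)
    log_t = np.log(trans)
    dp = np.zeros((T, len(start)))
    back = np.zeros_like(dp, dtype=int)
    dp[0] = np.log(start) + log_e[0]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + log_t        # scores[i, j]: prev i -> cur j
        back[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0) + log_e[t]
    path = np.empty(T, dtype=int)
    path[-1] = dp[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# toy example: 2 states, 3 marks, 5 genomic bins
obs = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 1], [0, 1, 1], [0, 0, 1]])
start = np.array([0.5, 0.5])
trans = np.array([[0.9, 0.1], [0.1, 0.9]])
emit = np.array([[0.8, 0.7, 0.1],    # "active-like" toy state
                 [0.1, 0.2, 0.8]])   # "repressed-like" toy state
print(viterbi_states(obs, start, trans, emit))
```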


Journal ArticleDOI
TL;DR: HaploReg is presented, a tool for exploring annotations of the non-coding genome among the results of published GWAS or novel sets of variants, and will be useful to researchers developing mechanistic hypotheses of the impact of non-Coding variants on clinical phenotypes and normal variation.
Abstract: The resolution of genome-wide association studies (GWAS) is limited by the linkage disequilibrium (LD) structure of the population being studied. Selecting the most likely causal variants within an LD block is relatively straightforward within coding sequence, but is more difficult when all variants are intergenic. Predicting functional non-coding sequence has been recently facilitated by the availability of conservation and epigenomic information. We present HaploReg, a tool for exploring annotations of the non-coding genome among the results of published GWAS or novel sets of variants. Using LD information from the 1000 Genomes Project, linked SNPs and small indels can be visualized along with their predicted chromatin state in nine cell types, conservation across mammals and their effect on regulatory motifs. Sets of SNPs, such as those resulting from GWAS, are analyzed for an enrichment of cell type-specific enhancers. HaploReg will be useful to researchers developing mechanistic hypotheses of the impact of non-coding variants on clinical phenotypes and normal variation. The HaploReg database is available at http://compbio.mit.edu/HaploReg.

2,075 citations
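HaploReg's core step of expanding a query SNP into its LD partners can be approximated, for illustration, by computing pairwise r² from a genotype matrix. The toy genotypes, SNP IDs, and the 0.8 threshold below are assumptions made for this sketch, not HaploReg's actual pipeline or the 1000 Genomes data.

```python
# Hedged sketch: pull in LD partners for a query SNP from a genotype matrix
# (individuals x SNPs, coded 0/1/2) using pairwise r^2 >= 0.8.
import numpy as np

def ld_partners(genotypes, snp_ids, query_id, r2_threshold=0.8):
    q = snp_ids.index(query_id)
    r = np.corrcoef(genotypes, rowvar=False)[q]   # correlation of query with every SNP
    r2 = r ** 2
    return [(snp_ids[i], round(float(r2[i]), 3))
            for i in np.argsort(-r2) if r2[i] >= r2_threshold]

genotypes = np.array([[0, 0, 2, 1],
                      [1, 1, 0, 2],
                      [2, 2, 1, 0],
                      [1, 1, 2, 1],
                      [0, 0, 1, 2]])
snp_ids = ["rsA", "rsB", "rsC", "rsD"]
print(ld_partners(genotypes, snp_ids, "rsA"))
```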


Journal ArticleDOI
TL;DR: This work discusses how ChIP quality, assessed in these ways, affects different uses of ChIP-seq data and develops a set of working standards and guidelines for ChIP experiments that are updated routinely.
Abstract: Chromatin immunoprecipitation (ChIP) followed by high-throughput DNA sequencing (ChIP-seq) has become a valuable and widely used approach for mapping the genomic location of transcription-factor binding and histone modifications in living cells. Despite its widespread use, there are considerable differences in how these experiments are conducted, how the results are scored and evaluated for quality, and how the data and metadata are archived for public use. These practices affect the quality and utility of any global ChIP experiment. Through our experience in performing ChIP-seq experiments, the ENCODE and modENCODE consortia have developed a set of working standards and guidelines for ChIP experiments that are updated routinely. The current guidelines address antibody validation, experimental replication, sequencing depth, data and metadata reporting, and data quality assessment. We discuss how ChIP quality, assessed in these ways, affects different uses of ChIP-seq data. All data sets used in the analysis have been deposited for public viewing and downloading at the ENCODE (http://encodeproject.org/ENCODE/) and modENCODE (http://www.modencode.org/) portals.

1,801 citations
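One simple data-quality measure used in this setting is FRiP (fraction of reads in peaks); the sketch below computes it from read midpoints and peak intervals on a single chromosome. The read positions and peaks are invented, and this is not the ENCODE scoring code.

```python
# Hedged sketch of FRiP: fraction of read midpoints falling inside called peaks.
from bisect import bisect_right

def frip(read_positions, peaks):
    """read_positions: read midpoints; peaks: sorted, non-overlapping (start, end)."""
    starts = [s for s, _ in peaks]
    in_peaks = 0
    for pos in read_positions:
        i = bisect_right(starts, pos) - 1           # candidate peak to the left
        if i >= 0 and peaks[i][0] <= pos < peaks[i][1]:
            in_peaks += 1
    return in_peaks / len(read_positions)

reads = [105, 180, 900, 1210, 1250, 5000]
peaks = [(100, 200), (1200, 1300)]
print(frip(reads, peaks))   # 4 of 6 reads in peaks -> ~0.67
```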


Journal ArticleDOI
TL;DR: In this paper, the authors performed a comprehensive blind assessment of over 30 network inference methods on Escherichia coli, Staphylococcus aureus, Saccharomyces cerevisiae and in silico microarray data.
Abstract: Reconstructing gene regulatory networks from high-throughput data is a long-standing challenge. Through the Dialogue on Reverse Engineering Assessment and Methods (DREAM) project, we performed a comprehensive blind assessment of over 30 network inference methods on Escherichia coli, Staphylococcus aureus, Saccharomyces cerevisiae and in silico microarray data. We characterize the performance, data requirements and inherent biases of different inference approaches, and we provide guidelines for algorithm application and development. We observed that no single inference method performs optimally across all data sets. In contrast, integration of predictions from multiple inference methods shows robust and high performance across diverse data sets. We thereby constructed high-confidence networks for E. coli and S. aureus, each comprising ~1,700 transcriptional interactions at a precision of ~50%. We experimentally tested 53 previously unobserved regulatory interactions in E. coli, of which 23 (43%) were supported. Our results establish community-based methods as a powerful and robust tool for the inference of transcriptional gene regulatory networks.

1,424 citations
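The "community network" idea, integrating predictions from many inference methods, can be illustrated by averaging per-method ranks of candidate edges (a Borda-like scheme). The scores, method count, and edge names below are hypothetical; this is a sketch of the integration principle, not the DREAM5 scoring code.

```python
# Hedged sketch: combine edge confidences from several inference methods by
# average rank, so no single method dominates.
import numpy as np
from scipy.stats import rankdata

def community_ranking(score_matrix, edge_labels):
    """score_matrix: (n_methods, n_edges), higher = more confident.
    Returns edges sorted by average rank across methods (best first)."""
    ranks = np.vstack([rankdata(-row) for row in score_matrix])   # rank 1 = best
    avg = ranks.mean(axis=0)
    order = np.argsort(avg)
    return [(edge_labels[i], float(avg[i])) for i in order]

scores = np.array([[0.9, 0.2, 0.5],    # method 1
                   [0.4, 0.3, 0.8],    # method 2
                   [0.7, 0.1, 0.6]])   # method 3
edges = ["crp->lacZ", "fnr->narG", "lexA->recA"]   # toy edge names
print(community_ranking(scores, edges))
```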


01 Jul 2012
TL;DR: A comprehensive blind assessment of over 30 network inference methods on Escherichia coli, Staphylococcus aureus, Saccharomyces cerevisiae and in silico microarray data defines the performance, data requirements and inherent biases of different inference approaches, and provides guidelines for algorithm application and development.
Abstract: Reconstructing gene regulatory networks from high-throughput data is a long-standing challenge. Through the Dialogue on Reverse Engineering Assessment and Methods (DREAM) project, we performed a comprehensive blind assessment of over 30 network inference methods on Escherichia coli, Staphylococcus aureus, Saccharomyces cerevisiae and in silico microarray data. We characterize the performance, data requirements and inherent biases of different inference approaches, and we provide guidelines for algorithm application and development. We observed that no single inference method performs optimally across all data sets. In contrast, integration of predictions from multiple inference methods shows robust and high performance across diverse data sets. We thereby constructed high-confidence networks for E. coli and S. aureus, each comprising ∼1,700 transcriptional interactions at a precision of ∼50%. We experimentally tested 53 previously unobserved regulatory interactions in E. coli, of which 23 (43%) were supported. Our results establish community-based methods as a powerful and robust tool for the inference of transcriptional gene regulatory networks.

1,355 citations


Journal ArticleDOI
TL;DR: A massively parallel reporter assay (MPRA) facilitates the systematic dissection of transcriptional regulatory elements, and quantitative sequence-activity models (QSAMs) from two cellular states can be combined to design enhancer variants that optimize potentially conflicting objectives, such as maximizing induced activity while minimizing basal activity.
Abstract: An improved understanding of enhancers in mammalian genomes could facilitate the design of new regulatory elements. Melnikov et al. synthesize thousands of ~90 nt enhancer variants, assay their activity in human cells and use the data to rationally optimize synthetic enhancers.

590 citations
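The basic MPRA readout compares RNA barcode counts with plasmid (DNA) barcode counts per variant; a minimal sketch of that log-ratio activity estimate is shown below, with invented counts and a simple pseudocount rather than the paper's actual normalization.

```python
# Hedged sketch: per-variant MPRA activity as log2(RNA fraction / DNA fraction).
import math

def mpra_activity(rna_counts, dna_counts, pseudocount=1.0):
    """Both inputs: dict variant -> summed barcode counts."""
    rna_total = sum(rna_counts.values())
    dna_total = sum(dna_counts.values())
    activity = {}
    for v in dna_counts:
        rna = (rna_counts.get(v, 0) + pseudocount) / rna_total
        dna = (dna_counts[v] + pseudocount) / dna_total
        activity[v] = math.log2(rna / dna)
    return activity

rna = {"variant_001": 480, "variant_002": 60, "variant_003": 210}   # toy counts
dna = {"variant_001": 150, "variant_002": 140, "variant_003": 160}
print(mpra_activity(rna, dna))
```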


Journal ArticleDOI
TL;DR: Advances in predictive modeling can enable data-set integration to reveal pathways shared across loci and alleles, and richer regulatory models can guide the search for epistatic interactions, and new massively parallel reporter experiments can systematically validate regulatory predictions.
Abstract: Association studies provide genome-wide information about the genetic basis of complex disease, but medical research has focused primarily on protein-coding variants, owing to the difficulty of interpreting noncoding mutations. This picture has changed with advances in the systematic annotation of functional noncoding elements. Evolutionary conservation, functional genomics, chromatin state, sequence motifs and molecular quantitative trait loci all provide complementary information about the function of noncoding sequences. These functional maps can help with prioritizing variants on risk haplotypes, filtering mutations encountered in the clinic and performing systems-level analyses to reveal processes underlying disease associations. Advances in predictive modeling can enable data-set integration to reveal pathways shared across loci and alleles, and richer regulatory models can guide the search for epistatic interactions. Lastly, new massively parallel reporter experiments can systematically validate regulatory predictions. Ultimately, advances in regulatory and systems genomics can help unleash the value of whole-genome sequencing for personalized genomic risk assessment, diagnosis and treatment.

470 citations
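As a toy illustration of the variant-prioritization idea described in the review above, the sketch below ranks variants on a hypothetical risk haplotype by how many functional annotations (conservation, enhancer chromatin state, motif disruption, and so on) each one overlaps. The variants and annotation labels are invented, and real pipelines weight evidence far more carefully.

```python
# Hedged sketch: rank candidate variants by the number of overlapping
# functional annotations; more overlaps -> higher priority.
def prioritize(variants):
    """variants: dict id -> set of annotation labels. Returns ids, best first."""
    return sorted(variants, key=lambda v: len(variants[v]), reverse=True)

variants = {
    "rs111": {"conserved", "enhancer_state", "motif_disruption"},
    "rs222": {"enhancer_state"},
    "rs333": set(),
}
print(prioritize(variants))   # rs111 ranked first
```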


01 Feb 2012
TL;DR: ChromHMM is developed, an automated computational system for learning chromatin states, characterizing their biological functions and correlations with large-scale functional datasets, and visualizing the resulting genome-wide maps of chromatin state annotations.
Abstract: Chromatin state annotation using combinations of chromatin modification patterns has emerged as a powerful approach for discovering regulatory regions and their cell type specific activity patterns, and for interpreting disease-association studies1-5. However, the computational challenge of learning chromatin state models from large numbers of chromatin modification datasets in multiple cell types still requires extensive bioinformatics expertise making it inaccessible to the wider scientific community. To address this challenge, we have developed ChromHMM, an automated computational system for learning chromatin states, characterizing their biological functions and correlations with large-scale functional datasets, and visualizing the resulting genome-wide maps of chromatin state annotations.

365 citations


01 Nov 2012
TL;DR: In this paper, the authors present a systematic annotation of functional noncoding elements, which can help with prioritizing variants on risk haplotypes, filtering mutations encountered in the clinic and performing systems-level analyses to reveal processes underlying disease associations.
Abstract: Association studies provide genome-wide information about the genetic basis of complex disease, but medical research has focused primarily on protein-coding variants, owing to the difficulty of interpreting noncoding mutations. This picture has changed with advances in the systematic annotation of functional noncoding elements. Evolutionary conservation, functional genomics, chromatin state, sequence motifs and molecular quantitative trait loci all provide complementary information about the function of noncoding sequences. These functional maps can help with prioritizing variants on risk haplotypes, filtering mutations encountered in the clinic and performing systems-level analyses to reveal processes underlying disease associations. Advances in predictive modeling can enable data-set integration to reveal pathways shared across loci and alleles, and richer regulatory models can guide the search for epistatic interactions. Lastly, new massively parallel reporter experiments can systematically validate regulatory predictions. Ultimately, advances in regulatory and systems genomics can help unleash the value of whole-genome sequencing for personalized genomic risk assessment, diagnosis and treatment.

Journal ArticleDOI
TL;DR: The authors performed a meta-analysis of two independent genome-wide association studies for primary open angle glaucoma (POAG), followed by a subgroup analysis of normal-pressure glaucoma (NPG), defined by intraocular pressure (IOP) less than 22 mmHg.
Abstract: Optic nerve degeneration caused by glaucoma is a leading cause of blindness worldwide. Patients affected by the normal-pressure form of glaucoma are more likely to harbor risk alleles for glaucoma-related optic nerve disease. We have performed a meta-analysis of two independent genome-wide association studies for primary open angle glaucoma (POAG) followed by a normal-pressure glaucoma (NPG, defined by intraocular pressure (IOP) less than 22 mmHg) subgroup analysis. The single-nucleotide polymorphisms that showed the most significant associations were tested for association with a second form of glaucoma, exfoliation-syndrome glaucoma. The overall meta-analysis of the GLAUGEN and NEIGHBOR dataset results (3,146 cases and 3,487 controls) identified significant associations between two loci and POAG: the CDKN2BAS region on 9p21 (rs2157719 [G], OR=0.69 [95% CI 0.63–0.75], p=1.86×10^-18), and the SIX1/SIX6 region on chromosome 14q23 (rs10483727 [A], OR=1.32 [95% CI 1.21–1.43], p=3.87×10^-11). In the subgroup analysis, two loci were significantly associated with NPG: 9p21 containing the CDKN2BAS gene (rs2157719 [G], OR=0.58 [95% CI 0.50–0.67], p=1.17×10^-12) and a probable regulatory region on 8q22 (rs284489 [G], OR=0.62 [95% CI 0.53–0.72], p=8.88×10^-10). Both NPG loci were also nominally associated with a second type of glaucoma, exfoliation syndrome glaucoma (rs2157719 [G], OR=0.59 [95% CI 0.41–0.87], p=0.004 and rs284489 [G], OR=0.76 [95% CI 0.54–1.06], p=0.021), suggesting that these loci might contribute more generally to optic nerve degeneration in glaucoma. Because both loci influence transforming growth factor beta (TGF-beta) signaling, we performed a genomic pathway analysis that showed an association between the TGF-beta pathway and NPG (permuted p=0.009). These results suggest that neuro-protective therapies targeting TGF-beta signaling could be effective for multiple forms of glaucoma.
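The meta-analysis step described above combines per-study effect sizes; a minimal sketch of a fixed-effects (inverse-variance) meta-analysis for one SNP is shown below. The log-odds-ratios and standard errors are hypothetical stand-ins, not the GLAUGEN/NEIGHBOR values.

```python
# Hedged sketch: inverse-variance fixed-effects meta-analysis of a SNP's
# log(OR) across two studies, returning the pooled OR and two-sided p-value.
import math
from scipy.stats import norm

def fixed_effects_meta(betas, ses):
    w = [1.0 / se ** 2 for se in ses]
    beta = sum(wi * bi for wi, bi in zip(w, betas)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    z = beta / se
    return math.exp(beta), 2 * norm.sf(abs(z))

# toy study-level effects for one SNP
log_ors = [math.log(0.70), math.log(0.68)]
ses = [0.045, 0.060]
print(fixed_effects_meta(log_ors, ses))
```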


01 Apr 2012
TL;DR: It is suggested that neuro-protective therapies targeting TGF-beta signaling could be effective for multiple forms of glaucoma, as genomic pathway analysis showed an association between the TGF-beta pathway and NPG.
Abstract: Optic nerve degeneration caused by glaucoma is a leading cause of blindness worldwide. Patients affected by the normal-pressure form of glaucoma are more likely to harbor risk alleles for glaucoma-related optic nerve disease. We have performed a meta-analysis of two independent genome-wide association studies for primary open angle glaucoma (POAG) followed by a normal-pressure glaucoma (NPG, defined by intraocular pressure (IOP) less than 22 mmHg) subgroup analysis. The single-nucleotide polymorphisms that showed the most significant associations were tested for association with a second form of glaucoma, exfoliation-syndrome glaucoma. The overall meta-analysis of the GLAUGEN and NEIGHBOR dataset results (3,146 cases and 3,487 controls) identified significant associations between two loci and POAG: the CDKN2BAS region on 9p21 (rs2157719 [G], OR=0.69 [95% CI 0.63–0.75], p=1.86×10^-18), and the SIX1/SIX6 region on chromosome 14q23 (rs10483727 [A], OR=1.32 [95% CI 1.21–1.43], p=3.87×10^-11). In the subgroup analysis, two loci were significantly associated with NPG: 9p21 containing the CDKN2BAS gene (rs2157719 [G], OR=0.58 [95% CI 0.50–0.67], p=1.17×10^-12) and a probable regulatory region on 8q22 (rs284489 [G], OR=0.62 [95% CI 0.53–0.72], p=8.88×10^-10). Both NPG loci were also nominally associated with a second type of glaucoma, exfoliation syndrome glaucoma (rs2157719 [G], OR=0.59 [95% CI 0.41–0.87], p=0.004 and rs284489 [G], OR=0.76 [95% CI 0.54–1.06], p=0.021), suggesting that these loci might contribute more generally to optic nerve degeneration in glaucoma. Because both loci influence transforming growth factor beta (TGF-beta) signaling, we performed a genomic pathway analysis that showed an association between the TGF-beta pathway and NPG (permuted p=0.009). These results suggest that neuro-protective therapies targeting TGF-beta signaling could be effective for multiple forms of glaucoma.

Journal ArticleDOI
TL;DR: Two new algorithms for the DTL reconciliation problem are presented that are dramatically faster than existing algorithms, both asymptotically and in practice, and this dramatic improvement makes it possible to use D TL reconciliation for performing rigorous evolutionary analyses of large gene families and enables its use in advanced reconciliation-based gene and species tree reconstruction methods.
Abstract: Motivation: Gene family evolution is driven by evolutionary events such as speciation, gene duplication, horizontal gene transfer and gene loss, and inferring these events in the evolutionary history of a given gene family is a fundamental problem in comparative and evolutionary genomics with numerous important applications. Solving this problem requires the use of a reconciliation framework, where the input consists of a gene family phylogeny and the corresponding species phylogeny, and the goal is to reconcile the two by postulating speciation, gene duplication, horizontal gene transfer and gene loss events. This reconciliation problem is referred to as duplication-transfer-loss (DTL) reconciliation and has been extensively studied in the literature. Yet, even the fastest existing algorithms for DTL reconciliation are too slow for reconciling large gene families and for use in more sophisticated applications such as gene tree or species tree reconstruction. Results: We present two new algorithms for the DTL reconciliation problem that are dramatically faster than existing algorithms, both asymptotically and in practice. We also extend the standard DTL reconciliation model by considering distance-dependent transfer costs, which allow for more accurate reconciliation and give an efficient algorithm for DTL reconciliation under this extended model. We implemented our new algorithms and demonstrated up to 100 000-fold speed-up over existing methods, using both simulated and biological datasets. This dramatic improvement makes it possible to use DTL reconciliation for performing rigorous evolutionary analyses of large gene families and enables its use in advanced reconciliation-based gene and species tree reconstruction methods. Availability: Our programs can be freely downloaded from http://compbio.mit.edu/ranger-dtl/. Contact:[email protected]; [email protected] Supplementary information:Supplementary data are available at Bioinformatics online.
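Full DTL reconciliation also models horizontal transfers and losses, but the duplication-calling core can be illustrated with classic LCA mapping: each gene-tree node is mapped to the lowest species-tree node covering its leaf species, and a node mapping to the same species node as one of its children is flagged as a duplication. The sketch below is a toy version of that step only (no transfers, no losses, no costs) with invented trees; it is not the RANGER-DTL algorithm.

```python
# Hedged sketch: LCA mapping of a gene tree onto a species tree and counting
# duplication nodes (duplication-only, not full DTL reconciliation).
class Node:
    def __init__(self, name=None, children=()):
        self.name, self.children = name, list(children)
        self.parent, self.depth = None, 0

def index_tree(root):
    """Set parent/depth pointers; return {leaf name: leaf node}."""
    leaves, stack = {}, [root]
    while stack:
        n = stack.pop()
        if not n.children:
            leaves[n.name] = n
        for c in n.children:
            c.parent, c.depth = n, n.depth + 1
            stack.append(c)
    return leaves

def lca(a, b):
    while a.depth > b.depth: a = a.parent
    while b.depth > a.depth: b = b.parent
    while a is not b: a, b = a.parent, b.parent
    return a

def count_duplications(gene_root, species_leaf, species_of):
    mapping, dups = {}, 0
    def walk(g):
        nonlocal dups
        if not g.children:
            mapping[g] = species_leaf[species_of(g.name)]
            return
        for c in g.children:
            walk(c)
        m = mapping[g.children[0]]
        for c in g.children[1:]:
            m = lca(m, mapping[c])
        mapping[g] = m
        if any(mapping[c] is m for c in g.children):
            dups += 1
    walk(gene_root)
    return dups

# species tree ((human, mouse), fly); gene tree with two human/mouse gene copies
human, mouse, fly = Node("human"), Node("mouse"), Node("fly")
species_leaves = index_tree(Node(children=[Node(children=[human, mouse]), fly]))
gene_root = Node(children=[
    Node(children=[Node("geneA_human"), Node("geneA_mouse")]),
    Node(children=[Node("geneB_human"), Node("geneB_mouse")])])
print(count_duplications(gene_root, species_leaves, lambda n: n.split("_")[1]))  # 1
```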

Journal ArticleDOI
28 Sep 2012-Science
TL;DR: Examination of regions of the human genome that do not show conservation among mammals found evidence for human lineage-specific constraint spanning approximately 4% of the genome that is biochemically active but not directly associated with genes, suggesting continued turnover in regulatory regions.
Abstract: Although only 5% of the human genome is conserved across mammals, a substantially larger portion is biochemically active, raising the question of whether the additional elements evolve neutrally or confer a lineage-specific fitness advantage. To address this question, we integrate human variation information from the 1000 Genomes Project and activity data from the ENCODE Project. A broad range of transcribed and regulatory nonconserved elements show decreased human diversity, suggesting lineage-specific purifying selection. Conversely, conserved elements lacking activity show increased human diversity, suggesting that some recently became nonfunctional. Regulatory elements under human constraint in nonconserved regions were found near color vision and nerve-growth genes, consistent with purifying selection for recently evolved functions. Our results suggest continued turnover in regulatory regions, with at least an additional 4% of the human genome subject to lineage-specific constraint.
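A simplified flavor of the diversity comparison described above is to contrast per-element SNP density between two classes of elements with a rank-sum test. The sketch below does this with invented coordinates and only a handful of elements, whereas the actual analysis uses genome-wide 1000 Genomes variation and more careful diversity statistics.

```python
# Hedged sketch: compare SNP density between an "active but nonconserved"
# element set and a background set with a one-sided Mann-Whitney U test.
import numpy as np
from scipy.stats import mannwhitneyu

def snp_density(snp_positions, elements):
    """SNPs per base for each (start, end) element."""
    snps = np.asarray(sorted(snp_positions))
    return np.array([(np.searchsorted(snps, end) - np.searchsorted(snps, start))
                     / (end - start) for start, end in elements])

snps = [5, 40, 41, 300, 302, 310, 700, 980]              # toy SNP positions
active_nonconserved = [(600, 800), (900, 1000)]          # toy element sets
background = [(0, 100), (250, 350)]
d_active = snp_density(snps, active_nonconserved)
d_background = snp_density(snps, background)
print(d_active.mean(), d_background.mean(),
      mannwhitneyu(d_active, d_background, alternative="less").pvalue)
```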

01 Sep 2012
TL;DR: In this paper, a broad range of transcribed and regulatory nonconserved elements show decreased human diversity, suggesting lineage-specific purifying selection, while conserved elements lacking activity show increased human diversity.
Abstract: Although only 5% of the human genome is conserved across mammals, a substantially larger portion is biochemically active, raising the question of whether the additional elements evolve neutrally or confer a lineage-specific fitness advantage. To address this question, we integrate human variation information from the 1000 Genomes Project and activity data from the ENCODE Project. A broad range of transcribed and regulatory nonconserved elements show decreased human diversity, suggesting lineage-specific purifying selection. Conversely, conserved elements lacking activity show increased human diversity, suggesting that some recently became nonfunctional. Regulatory elements under human constraint in nonconserved regions were found near color vision and nerve-growth genes, consistent with purifying selection for recently evolved functions. Our results suggest continued turnover in regulatory regions, with at least an additional 4% of the human genome subject to lineage-specific constraint.

Journal ArticleDOI
TL;DR: A new probabilistic model is presented, DLCoal, that defines gene duplication and loss in a population setting, such that coalescence and ILS can be directly addressed, and the first general reconciliation method that accurately infers gene duplications and losses in the presence of ILS is developed.
Abstract: Gene phylogenies provide a rich source of information about the way evolution shapes genomes, populations, and phenotypes. In addition to substitutions, evolutionary events such as gene duplication and loss (as well as horizontal transfer) play a major role in gene evolution, and many phylogenetic models have been developed in order to reconstruct and study these events. However, these models typically make the simplifying assumption that population-related effects such as incomplete lineage sorting (ILS) are negligible. While this assumption may have been reasonable in some settings, it has become increasingly problematic as increased genome sequencing has led to denser phylogenies, where effects such as ILS are more prominent. To address this challenge, we present a new probabilistic model, DLCoal, that defines gene duplication and loss in a population setting, such that coalescence and ILS can be directly addressed. Interestingly, this model implies that in addition to the usual gene tree and species tree, there exists a third tree, the locus tree, which will likely have many applications. Using this model, we develop the first general reconciliation method that accurately infers gene duplications and losses in the presence of ILS, and we show its improved inference of orthologs, paralogs, duplications, and losses for a variety of clades, including flies, fungi, and primates. Also, our simulations show that gene duplications increase the frequency of ILS, further illustrating the importance of a joint model. Going forward, we believe that this unified model can offer insights to questions in both phylogenetics and population genetics.
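A small worked example of why ILS matters for dense phylogenies, under standard coalescent assumptions rather than the DLCoal model itself: the probability that two lineages entering an ancestral population fail to coalesce along a branch of t generations is approximately (1 - 1/(2N))^t for diploid population size N.

```python
# Hedged sketch: chance that two lineages do NOT coalesce within an ancestral
# branch, i.e. the basic ingredient of incomplete lineage sorting.
def prob_no_coalescence(t_generations, diploid_n):
    return (1.0 - 1.0 / (2.0 * diploid_n)) ** t_generations

# short internal branch relative to population size -> substantial chance of ILS
print(prob_no_coalescence(t_generations=100_000, diploid_n=50_000))   # ~0.37
```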

Journal ArticleDOI
TL;DR: This work develops and applies methods for transcriptional regulatory network inference from diverse functional genomics data sets and demonstrates the power of data integration for network inference and studies of gene regulation at the systems level.
Abstract: Gaining insights on gene regulation from large-scale functional data sets is a grand challenge in systems biology. In this article, we develop and apply methods for transcriptional regulatory network inference from diverse functional genomics data sets and demonstrate their value for gene function and gene expression prediction. We formulate the network inference problem in a machine-learning framework and use both supervised and unsupervised methods to predict regulatory edges by integrating transcription factor (TF) binding, evolutionarily conserved sequence motifs, gene expression, and chromatin modification data sets as input features. Applying these methods to Drosophila melanogaster, we predict ~300,000 regulatory edges in a network of ~600 TFs and 12,000 target genes. We validate our predictions using known regulatory interactions, gene functional annotations, tissue-specific expression, protein–protein interactions, and three-dimensional maps of chromosome conformation. We use the inferred network to identify putative functions for hundreds of previously uncharacterized genes, including many in nervous system development, which are independently confirmed based on their tissue-specific expression patterns. Last, we use the regulatory network to predict target gene expression levels as a function of TF expression, and find significantly higher predictive power for integrative networks than for motif or ChIP-based networks. Our work reveals the complementarity between physical evidence of regulatory interactions (TF binding, motif conservation) and functional evidence (coordinated expression or chromatin patterns) and demonstrates the power of data integration for network inference and studies of gene regulation at the systems level.
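The supervised formulation described above can be sketched as follows: each candidate (TF, target) pair becomes a feature vector and known interactions provide labels for a classifier. The random features, toy labels, and random-forest choice below are illustrative assumptions, not the paper's actual features or Drosophila data.

```python
# Hedged sketch: supervised edge prediction where features might encode TF
# binding signal, motif conservation, expression correlation, and chromatin
# similarity for each candidate TF-gene pair.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 4))                         # 500 candidate pairs, 4 toy features
y = (X[:, 0] + 0.5 * X[:, 2] + 0.2 * rng.random(500) > 1.0).astype(int)  # toy labels

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:400], y[:400])
edge_scores = clf.predict_proba(X[400:])[:, 1]   # confidence of a regulatory edge
print(edge_scores[:5].round(2))
```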

Journal ArticleDOI
TL;DR: An efficient algorithm is presented to iteratively calculate pseudo-energies within a formal framework to reconcile information from both prediction algorithms and probing experiments and it is demonstrated how this approach can be used in combination with SHAPE chemical probing data to improve secondary structure prediction.
Abstract: Thermodynamic folding algorithms and structure probing experiments are commonly used to determine the secondary structure of RNAs. Here we propose a formal framework to reconcile information from both prediction algorithms and probing experiments. The thermodynamic energy parameters are adjusted using ‘pseudo-energies’ to minimize the discrepancy between prediction and experiment. Our framework differs from related approaches that used pseudo-energies in several key aspects. (i) The energy model is only changed when necessary and no adjustments are made if prediction and experiment are consistent. (ii) Pseudo-energies remain biophysically interpretable and hold positional information where experiment and model disagree. (iii) The whole thermodynamic ensemble of structures is considered thus allowing to reconstruct mixtures of suboptimal structures from seemingly contradicting data. (iv) The noise of the energy model and the experimental data is explicitly modeled leading to an intuitive weighting factor through which the problem can be seen as folding with ‘soft’ constraints of different strength. We present an efficient algorithm to iteratively calculate pseudo-energies within this framework and demonstrate how this approach can be used in combination with SHAPE chemical probing data to improve secondary structure prediction. We further demonstrate that the pseudo-energies correlate
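For context, a much simpler (non-iterative) way to turn probing data into pseudo-energies is the widely used Deigan-style conversion, m·ln(reactivity + 1) + b; the paper's framework instead derives pseudo-energies iteratively from the prediction/experiment discrepancy. The sketch below implements only the simple conversion, with conventional but assumed parameter values.

```python
# Hedged sketch: convert SHAPE reactivities into per-nucleotide pseudo-energies
# (Deigan-style slope/intercept; this is NOT the iterative scheme of the paper).
import math

def shape_pseudo_energies(reactivities, m=2.6, b=-0.8):
    """High reactivity (likely unpaired) yields a positive penalty for pairing;
    negative reactivity values are treated as missing data."""
    return [m * math.log(r + 1.0) + b if r >= 0 else 0.0 for r in reactivities]

print([round(e, 2) for e in shape_pseudo_energies([0.05, 1.8, -999, 0.4])])
```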

Journal ArticleDOI
TL;DR: A metric of TFBS variability is introduced that takes into account changes in motif match associated with mutation and makes it possible to investigate TFBS functional constraints instance-by-instance as well as in sets that share common biological properties.
Abstract: Background Advances in sequencing technology have boosted population genomics and made it possible to map the positions of transcription factor binding sites (TFBSs) with high precision. Here we investigate TFBS variability by combining transcription factor binding maps generated by ENCODE, modENCODE, our previously published data and other sources with genomic variation data for human individuals and Drosophila isogenic lines.
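The motif-match component of such a variability metric can be illustrated by scoring a site against a position weight matrix before and after a single-nucleotide change; the 4-position PWM, sequences, and uniform background below are invented for the sketch.

```python
# Hedged sketch: change in PWM log-odds score caused by a SNP in a binding site.
import math

PWM = {  # position -> {base: probability}; toy 4-position motif
    0: {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    1: {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
    2: {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
    3: {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},
}

def pwm_score(seq, background=0.25):
    return sum(math.log2(PWM[i][b] / background) for i, b in enumerate(seq))

ref, alt = "AGCT", "ATCT"            # SNP at position 1: G -> T
print(pwm_score(ref), pwm_score(alt), pwm_score(alt) - pwm_score(ref))
```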

Journal ArticleDOI
TL;DR: A comparison of the epigenomes of normal and cancerous stem cells, and of pluripotent and differentiated states, shows that the presence of at least two DNMTs is strongly associated with loci targeted for DNA hypermethylation, and sheds important light on the determinants of DNA methylation and how it may become disrupted in cancer cells.

Journal ArticleDOI
TL;DR: DNA capture followed by next-generation sequencing of the translocation breakpoints revealed disruption of a single noncoding gene on chromosome 2, LINC00299, whose RNA product is expressed in all tissues measured, but most abundantly in brain.
Abstract: Large intergenic noncoding (linc) RNAs represent a newly described class of ribonucleic acid whose importance in human disease remains undefined. We identified a severely developmentally delayed 16-year-old female with karyotype 46,XX,t(2;11)(p25.1;p15.1)dn in the absence of clinically significant copy number variants (CNVs). DNA capture followed by next-generation sequencing of the translocation breakpoints revealed disruption of a single noncoding gene on chromosome 2, LINC00299, whose RNA product is expressed in all tissues measured, but most abundantly in brain. Among a series of additional, unrelated subjects referred for clinical diagnostic testing who showed CNV affecting this locus, we identified four with exon-crossing deletions in association with neurodevelopmental abnormalities. No disruption of the LINC00299 coding sequence was seen in almost 14,000 control subjects. Together, these subjects with disruption of LINC00299 implicate this particular noncoding RNA in brain development and raise the possibility that, as a class, abnormalities of lincRNAs may play a significant role in human developmental disorders.

01 Nov 2012
TL;DR: Computational methods to analyze noncoding RNAs include basic and advanced techniques to predict RNA structures, annotation of noncoding RNAs in genomic data, mining RNA-seq data for novel transcripts and prediction of transcript structures, computational aspects of microRNAs, and database resources.
Abstract: Noncoding RNAs have emerged as important key players in the cell. Understanding their surprisingly diverse range of functions is challenging for experimental and computational biology. Here, we review computational methods to analyze noncoding RNAs. The topics covered include basic and advanced techniques to predict RNA structures, annotation of noncoding RNAs in genomic data, mining RNA-seq data for novel transcripts and prediction of transcript structures, computational aspects of microRNAs, and database resources.


Journal ArticleDOI
TL;DR: In this article, a review of computational methods to analyze non-coding RNAs is presented, including basic and advanced techniques to predict RNA structures, annotation of noncoding RNA in genomic data, mining RNA-seq data for novel transcripts and prediction of transcript structures, computational aspects of microRNAs, and database resources.
Abstract: Noncoding RNAs have emerged as important key players in the cell. Understanding their surprisingly diverse range of functions is challenging for experimental and computational biology. Here, we review computational methods to analyze noncoding RNAs. The topics covered include basic and advanced techniques to predict RNA structures, annotation of noncoding RNAs in genomic data, mining RNA-seq data for novel transcripts and prediction of transcript structures, computational aspects of microRNAs, and database resources.

Journal ArticleDOI
TL;DR: It is found that genes involved in fusion and fission are enriched in signaling and development, suggesting that domain rearrangements and reuse may be crucial in these processes.
Abstract: Although the possibility of gene evolution by domain rearrangements has long been appreciated, current methods for reconstructing and systematically analyzing gene family evolution are limited to events such as duplication, loss, and sometimes, horizontal transfer. However, within the Drosophila clade, we find domain rearrangements occur in 35.9% of gene families, and thus, any comprehensive study of gene evolution in these species will need to account for such events. Here, we present a new computational model and algorithm for reconstructing gene evolution at the domain level. We develop a method for detecting homologous domains between genes and present a phylogenetic algorithm for reconstructing maximum parsimony evolutionary histories that include domain generation, duplication, loss, merge (fusion), and split (fission) events. Using this method, we find that genes involved in fusion and fission are enriched in signaling and development, suggesting that domain rearrangements and reuse may be crucial in these processes. We also find that fusion is more abundant than fission, and that fusion and fission events occur predominantly alongside duplication, with 92.5% and 34.3% of fusion and fission events retaining ancestral architectures in the duplicated copies. We provide a catalog of ∼9,000 genes that undergo domain rearrangement across nine sequenced species, along with possible mechanisms for their formation. These results dramatically expand on evolution at the subgene level and offer several insights into how new genes and functions arise between species.
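A toy version of one signal used when reconstructing such histories is to flag a gene whose domain architecture is exactly the concatenation of two other genes' architectures as a fusion candidate; the architectures below are invented, and the paper's method works on reconciled phylogenies rather than this naive all-pairs check.

```python
# Hedged sketch: flag candidate fusion genes by architecture concatenation.
from itertools import permutations

def fusion_candidates(architectures):
    """architectures: dict gene -> tuple of domain names (N- to C-terminal)."""
    hits = []
    for g, arch in architectures.items():
        for a, b in permutations([x for x in architectures if x != g], 2):
            if architectures[a] + architectures[b] == arch:
                hits.append((g, a, b))
    return hits

arch = {
    "geneX": ("PH", "SH3"),
    "geneY": ("Kinase",),
    "geneZ": ("PH", "SH3", "Kinase"),   # looks like a fusion of geneX + geneY
}
print(fusion_candidates(arch))
```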

Journal ArticleDOI
TL;DR: In this article, the authors used whole-genome approaches to sequence four Vibrio cholerae isolates from Haiti and the Dominican Republic and three additional V. cholera isolates to a high depth of coverage.
Abstract: Whole-genome sequencing is an important tool for understanding microbial evolution and identifying the emergence of functionally important variants over the course of epidemics. In October 2010, a severe cholera epidemic began in Haiti, with additional cases identified in the neighboring Dominican Republic. We used whole-genome approaches to sequence four Vibrio cholerae isolates from Haiti and the Dominican Republic and three additional V. cholerae isolates to a high depth of coverage (>2000x); four of the seven isolates were previously sequenced. Using these sequence data, we examined the effect of depth of coverage and sequencing platform on genome assembly and identification of sequence variants. We found that 50x coverage is sufficient to construct a whole-genome assembly and to accurately call most variants from 100 base pair paired-end sequencing reads. Phylogenetic analysis between the newly sequenced and thirty-three previously sequenced V. cholerae isolates indicates that the Haitian and Dominican Republic isolates are closest to strains from South Asia. The Haitian and Dominican Republic isolates form a tight cluster, with only four variants unique to individual isolates. These variants are located in the CTX region, the SXT region, and the core genome. Of the 126 mutations identified that separate the Haiti-Dominican Republic cluster from the V. cholerae reference strain (N16961), 73 are non-synonymous changes, and a number of these changes cluster in specific genes and pathways. Sequence variant analyses of V. cholerae isolates, including multiple isolates from the Haitian outbreak, identify coverage-specific and technology-specific effects on variant detection, and provide insight into genomic change and functional evolution during an epidemic.
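The depth-of-coverage point can be made concrete with a simple Poisson model of per-base coverage: at a given mean depth, what is the chance that a base is covered by fewer than 10 reads? The threshold and depths below are illustrative, and real coverage is more dispersed than Poisson.

```python
# Hedged sketch: probability of thin coverage at a single base under a Poisson
# model, for a few mean sequencing depths.
from scipy.stats import poisson

for mean_depth in (10, 30, 50):
    p_low = poisson.cdf(9, mean_depth)      # P(coverage <= 9)
    print(f"mean depth {mean_depth}x: P(base covered <10x) = {p_low:.2e}")
```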

Dissertation
01 Jan 2012
TL;DR: New computational approaches are developed to predict the binding sites of regulators using the genomes of many closely related species, to discover and characterize microRNAs, and to use static binding-site predictions in conjunction with chromatin modifications to better understand the dynamics of regulation.
Abstract: Gene regulation, the process responsible for taking a static genome and producing the diversity and complexity of life, is largely mediated through the sequence specific binding of regulators. The short, degenerate nature of the recognized elements and the unknown rules through which they interact makes deciphering gene regulation a significant challenge. In this thesis, we utilize comparative genomics and other approaches to exploit large-scale experimental datasets and better understand the sequence elements and regulators responsible for regulatory programs. In particular, we develop new computational approaches to (1) predict the binding sites of regulators using the genomes of many, closely related species; (2) understand the sequence motifs associated with transcription factors; (3) discover and characterize microRNAs, an important class of regulators; (4) use static predictions for binding sites in conjunction with chromatin modifications to better understand the dynamics of regulation; and (5) systematically validate the predicted motif instances using a massively parallel reporter assay. We find that the predictions made by our algorithms are of high quality and are comparable to those made by leading experimental approaches. Moreover, we find that experimental and computational approaches are often complementary. Regions experimentally identified to be bound by a factor can be species and cell line specific, but they lack the resolution and unbiased nature of our predictions. Experimentally identified miRNAs have unmistakable signs of being processed, but cannot provide the same insights our machine learning framework does. Further emphasizing the importance of integration, combining chromatin mark annotations and gene expression from multiple cell types with our static motif instances allows for increasing our power and making additional biologically relevant insights. We successfully apply the algorithms in this thesis to 29 mammals and 12 flies and expect them to be applicable to other clades of eukaryotic species. Moreover, we find that our performance has not yet plateaued and believe these methods will continue to be relevant as sequencing becomes increasingly commonplace and thousands of genomes become available. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
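In the spirit of the comparative binding-site predictions described in the thesis, a simplified branch-length-style conservation score can be sketched as the fraction of total branch length contributed by species in which a motif match is preserved. The species, branch lengths, and match calls below are invented, and real methods use the tree topology and alignments directly.

```python
# Hedged sketch: a simplified conservation score for a motif instance, weighting
# each species with a preserved match by its branch length.
def conservation_score(match_by_species, branch_length):
    total = sum(branch_length.values())
    conserved = sum(branch_length[s] for s, hit in match_by_species.items() if hit)
    return conserved / total

matches = {"dmel": True, "dsim": True, "dyak": True, "dpse": False, "dgri": False}
branches = {"dmel": 0.05, "dsim": 0.06, "dyak": 0.10, "dpse": 0.45, "dgri": 0.60}
print(round(conservation_score(matches, branches), 3))
```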