
Showing papers in "Nature Methods in 2010"


Journal ArticleDOI
TL;DR: An overview of the analysis pipeline and links to raw data and processed output from the runs with and without denoising are provided.
Abstract: Supplementary Figure 1 Overview of the analysis pipeline. Supplementary Table 1 Details of conventionally raised and conventionalized mouse samples. Supplementary Discussion Expanded discussion of QIIME analyses presented in the main text; Sequencing of 16S rRNA gene amplicons; QIIME analysis notes; Expanded Figure 1 legend; Links to raw data and processed output from the runs with and without denoising.

28,911 citations


Journal ArticleDOI
TL;DR: PolyPhen-2, a new method and software tool that differs from the earlier PolyPhen in its predictive features, alignment pipeline and classification method, is presented; its performance, as characterized by receiver operating characteristic curves, was consistently superior.
Abstract: To the Editor: Applications of rapidly advancing sequencing technologies exacerbate the need to interpret individual sequence variants. Sequencing of phenotyped clinical subjects will soon become a method of choice in studies of the genetic causes of Mendelian and complex diseases. New exon capture techniques will direct sequencing efforts towards the most informative and easily interpretable protein-coding fraction of the genome. Thus, the demand for computational predictions of the impact of protein sequence variants will continue to grow. Here we present a new method and the corresponding software tool, PolyPhen-2 (http://genetics.bwh.harvard.edu/pph2/), which differs from the earlier tool PolyPhen1 in its set of predictive features, alignment pipeline and method of classification (Fig. 1a). PolyPhen-2 uses eight sequence-based and three structure-based predictive features (Supplementary Table 1), which were selected automatically by an iterative greedy algorithm (Supplementary Methods). The majority of these features involve comparison of a property of the wild-type (ancestral, normal) allele and the corresponding property of the mutant (derived, disease-causing) allele, which together define an amino acid replacement. The most informative features characterize how well the two human alleles fit into the pattern of amino acid replacements within the multiple sequence alignment of homologous proteins, how distant the protein harboring the first deviation from the human wild-type allele is from the human protein, and whether the mutant allele originated at a hypermutable site2. The alignment pipeline selects the set of homologous sequences for the analysis using a clustering algorithm and then constructs and refines their multiple alignment (Supplementary Fig. 1). The functional significance of an allele replacement is predicted from its individual features (Supplementary Figs. 2–4) by a naive Bayes classifier (Supplementary Methods).
Figure 1 PolyPhen-2 pipeline and prediction accuracy. (a) Overview of the algorithm. (b) Receiver operating characteristic (ROC) curves for predictions made by PolyPhen-2 using five-fold cross-validation on HumDiv (red) and HumVar3 (light green). UniRef100 (solid ... We used two pairs of datasets to train and test PolyPhen-2. We compiled the first pair, HumDiv, from all 3,155 damaging alleles with known effects on the molecular function causing human Mendelian diseases, present in the UniProt database, together with 6,321 differences between human proteins and their closely related mammalian homologs, assumed to be non-damaging (Supplementary Methods). The second pair, HumVar3, consists of all the 13,032 human disease-causing mutations from UniProt, together with 8,946 human nsSNPs without annotated involvement in disease, which were treated as non-damaging. We found that PolyPhen-2 performance, as characterized by its receiver operating characteristic curves, was consistently superior to that of PolyPhen (Fig. 1b), and it also compared favorably with three other popular prediction tools4–6 (Fig. 1c). At a false positive rate of 20%, PolyPhen-2 achieves true positive rates of 92% and 73% on HumDiv and HumVar, respectively (Supplementary Table 2). One reason for the lower accuracy of predictions on HumVar is that the nsSNPs assumed to be non-damaging in HumVar contain a sizable fraction of mildly deleterious alleles. In contrast, most amino acid replacements assumed non-damaging in HumDiv must be close to selective neutrality. Because alleles that are even mildly but unconditionally deleterious cannot be fixed in the evolving lineage, no method based on comparative sequence analysis is ideal for discriminating between drastically and mildly deleterious mutations, which are assigned to opposite categories in HumVar. Another reason is that HumDiv uses an extra criterion to avoid possible erroneous annotations of damaging mutations.
For a mutation, PolyPhen-2 calculates the naive Bayes posterior probability that the mutation is damaging and reports estimates of the false positive rate (the chance that the mutation is classified as damaging when it is in fact non-damaging) and the true positive rate (the chance that the mutation is classified as damaging when it is indeed damaging). A mutation is also appraised qualitatively as benign, possibly damaging or probably damaging (Supplementary Methods). The user can choose between HumDiv- and HumVar-trained PolyPhen-2. Diagnosis of Mendelian diseases requires distinguishing mutations with drastic effects from all remaining human variation, including abundant mildly deleterious alleles; HumVar-trained PolyPhen-2 should therefore be used for this task. In contrast, HumDiv-trained PolyPhen-2 should be used for evaluating rare alleles at loci potentially involved in complex phenotypes, for dense mapping of regions identified by genome-wide association studies, and for analysis of natural selection from sequence data, where even mildly deleterious alleles must be treated as damaging.
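The classification step described above can be illustrated with a minimal naive Bayes sketch. PolyPhen-2's actual eleven features and trained parameters are not reproduced here; the single conservation-style feature, the per-class Gaussians and the flat priors below are all invented for illustration.

```python
import math

# Hypothetical per-class (mean, std) for one made-up feature, plus flat priors.
CLASS_STATS = {"damaging": (0.9, 0.15), "benign": (0.3, 0.25)}
PRIORS = {"damaging": 0.5, "benign": 0.5}

def gaussian_pdf(x, mean, std):
    z = (x - mean) / std
    return math.exp(-0.5 * z * z) / (std * math.sqrt(2 * math.pi))

def posterior_damaging(score):
    """P(damaging | score) by Bayes' rule with Gaussian class likelihoods."""
    joint = {c: gaussian_pdf(score, m, s) * PRIORS[c]
             for c, (m, s) in CLASS_STATS.items()}
    return joint["damaging"] / sum(joint.values())

# A score near the 'damaging' class mean yields a high posterior.
print(round(posterior_damaging(0.85), 3))
```

A qualitative appraisal like PolyPhen-2's ("benign" / "possibly damaging" / "probably damaging") could then be read off the posterior with two cutoffs.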

11,571 citations


Journal ArticleDOI
TL;DR: MutationTaster allows the efficient filtering of NGS data for alterations with high disease-causing potential and provides Perl scripts that can process data from all major platforms (Roche 454, Illumina Genome Analyzer and ABI SOLiD).
Abstract: (simple_aae) or at alterations causing complex changes in the amino acid sequence (complex_aae). To train the classifier, we generated a dataset with all available and suitable common polymorphisms and known disease-causing mutations extracted from common databases and the literature. We cross-validated the classifier five times, including all three prediction models, and obtained an overall accuracy of 91.1 ± 0.1%. We also compared MutationTaster with similar applications (Panther3, Pmut4, PolyPhen and PolyPhen-2 (ref. 5) and 'screening for non-acceptable polymorphisms' (SNAP)6), analyzing the identical 1,000 disease-linked mutations and 1,000 polymorphisms with all programs. For this comparison, we used only alterations causing single amino acid exchanges. MutationTaster performed best in terms of accuracy and speed (Table 1). A description of all training and validation procedures and detailed statistics are available in Supplementary Methods. MutationTaster can be used via an intuitive web interface to analyze single mutations as well as in batch mode. To streamline and standardize the analysis of NGS data, we provide Perl scripts that can process data from all major platforms (Roche 454, Illumina Genome Analyzer and ABI SOLiD). MutationTaster hence allows the efficient filtering of NGS data for alterations with high disease-causing potential (see Supplementary Methods for an example). Present limitations of the software comprise its inability to analyze insertions and deletions greater than 12 base pairs and alterations spanning an intron-exon border. Also, analysis of non-exonic alterations is restricted to the Kozak consensus sequence, splice sites and the poly(A) signal. We will add tests for other sequence motifs in the near future. MutationTaster is available at http://www.mutationtaster.org/.

2,628 citations


Journal ArticleDOI
TL;DR: This Review discusses promising photonic methods that have the ability to visualize cellular and subcellular components in tissues across different penetration scales, according to the tissue depth at which they operate.
Abstract: Optical microscopy has been a fundamental tool of biological discovery for more than three centuries, but its in vivo tissue imaging ability has been restricted by light scattering to superficial investigations, even when confocal or multiphoton methods are used. Recent advances in optical and optoacoustic (photoacoustic) imaging now allow imaging at depths and resolutions unprecedented for optical methods. These abilities are increasingly important to understand the dynamic interactions of cellular processes at different systems levels, a major challenge of postgenome biology. This Review discusses promising photonic methods that have the ability to visualize cellular and subcellular components in tissues across different penetration scales. The methods are classified into microscopic, mesoscopic and macroscopic approaches, according to the tissue depth at which they operate. Key characteristics associated with different imaging implementations are described and the potential of these technologies in biological applications is discussed.

1,607 citations


Journal ArticleDOI
TL;DR: The direct detection of DNA methylation, without bisulfite conversion, through single-molecule, real-time (SMRT) sequencing is described; the method is amenable to long read lengths and will likely enable mapping of methylation patterns in even highly repetitive genomic regions.
Abstract: We describe the direct detection of DNA methylation, without bisulfite conversion, through single-molecule, real-time (SMRT) sequencing. In SMRT sequencing, DNA polymerases catalyze the incorporation of fluorescently labeled nucleotides into complementary nucleic acid strands. The arrival times and durations of the resulting fluorescence pulses yield information about polymerase kinetics and allow direct detection of modified nucleotides in the DNA template, including N6-methyladenine, 5-methylcytosine and 5-hydroxymethylcytosine. Measurement of polymerase kinetics is an intrinsic part of SMRT sequencing and does not adversely affect determination of primary DNA sequence. The various modifications affect polymerase kinetics differently, allowing discrimination between them. We used these kinetic signatures to identify adenine methylation in genomic samples and found that, in combination with circular consensus sequencing, they can enable single-molecule identification of epigenetic modifications with base-pair resolution. This method is amenable to long read lengths and will likely enable mapping of methylation patterns in even highly repetitive genomic regions.

1,353 citations


Journal ArticleDOI
TL;DR: In this paper, a mixture-of-isoforms (MISO) model is proposed that estimates expression of alternatively spliced exons and isoforms and assesses confidence in these estimates.
Abstract: Through alternative splicing, most human genes express multiple isoforms that often differ in function. To infer isoform regulation from high-throughput sequencing of cDNA fragments (RNA-seq), we developed the mixture-of-isoforms (MISO) model, a statistical model that estimates expression of alternatively spliced exons and isoforms and assesses confidence in these estimates. Incorporation of mRNA fragment length distribution in paired-end RNA-seq greatly improved estimation of alternative-splicing levels. MISO also detects differentially regulated exons or isoforms. Application of MISO implicated the RNA splicing factor hnRNP H1 in the regulation of alternative cleavage and polyadenylation, a role that was supported by UV cross-linking-immunoprecipitation sequencing (CLIP-seq) analysis in human cells. Our results provide a probabilistic framework for RNA-seq analysis, give functional insights into pre-mRNA processing and yield guidelines for the optimal design of RNA-seq experiments for studies of gene and isoform expression.
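The mixture estimation at the heart of a model like MISO can be sketched with a toy expectation-maximization loop over reads that are each compatible with a subset of isoforms. The real model is Bayesian, incorporates fragment-length distributions and reports confidence intervals; the two-isoform example and read assignments below are invented.

```python
# Each read lists the isoforms it is compatible with (invented data).
reads = [{"A"}, {"A"}, {"A", "B"}, {"B"}, {"A", "B"}, {"A"}]
isoforms = ["A", "B"]

psi = {iso: 1.0 / len(isoforms) for iso in isoforms}  # initial proportions

for _ in range(100):                       # EM iterations
    counts = {iso: 0.0 for iso in isoforms}
    for compat in reads:
        total = sum(psi[i] for i in compat)
        for i in compat:                   # E-step: fractional assignment
            counts[i] += psi[i] / total
    n = sum(counts.values())
    psi = {i: counts[i] / n for i in isoforms}  # M-step: re-estimate

print({i: round(p, 2) for i, p in psi.items()})  # {'A': 0.75, 'B': 0.25}
```

The three A-only reads dominate: the ambiguous reads are split in proportion to the current estimates, so the loop converges to the maximum-likelihood proportions (0.75, 0.25).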

1,265 citations


Journal ArticleDOI
TL;DR: This work describes experiences with the leading target-enrichment technologies, the optimizations performed and the typical results obtained with each, and provides detailed protocols so that end users can find the best compromise between sensitivity, specificity and uniformity for their particular project.
Abstract: We have not yet reached a point at which routine sequencing of large numbers of whole eukaryotic genomes is feasible, and so it is often necessary to select genomic regions of interest and to enrich these regions before sequencing. There are several enrichment approaches, each with unique advantages and disadvantages. Here we describe our experiences with the leading target-enrichment technologies, the optimizations that we have performed and typical results that can be obtained using each. We also provide detailed protocols for each technology so that end users can find the best compromise between sensitivity, specificity and uniformity for their particular project.

1,068 citations


Journal ArticleDOI
TL;DR: The mouse grimace scale (MGS), a standardized behavioral coding system with high accuracy and reliability, is developed; assays involving noxious stimuli of moderate duration are accompanied by facial expressions of pain.
Abstract: Facial expression is widely used as a measure of pain in infants; whether nonhuman animals display such pain expressions has never been systematically assessed. We developed the mouse grimace scale (MGS), a standardized behavioral coding system with high accuracy and reliability; assays involving noxious stimuli of moderate duration are accompanied by facial expressions of pain. This measure of spontaneously emitted pain may provide insight into the subjective pain experience of mice.

1,043 citations


Journal ArticleDOI
TL;DR: It is demonstrated that micropost rigidity impacts cell morphology, focal adhesions, cytoskeletal contractility and stem cell differentiation, and that early changes in cytoskeletal contractility predicted later stem cell fate decisions in single cells.
Abstract: Micropost arrays can be used to modulate substrate rigidity independently of other substrate properties, permitting the study of the effects of rigidity on cell function.

1,008 citations


Journal ArticleDOI
TL;DR: Trans-ABySS, a de novo short-read transcriptome assembly and analysis pipeline that addresses variation in local read densities by assembling read substrings with varying stringencies and then merging the resulting contigs before analysis, achieves high sensitivity and specificity relative to reference-based assembly methods.
Abstract: We describe Trans-ABySS, a de novo short-read transcriptome assembly and analysis pipeline that addresses variation in local read densities by assembling read substrings with varying stringencies and then merging the resulting contigs before analysis. Analyzing 7.4 gigabases of 50-base-pair paired-end Illumina reads from an adult mouse liver poly(A) RNA library, we identified known, new and alternative structures in expressed transcripts, and achieved high sensitivity and specificity relative to reference-based assembly methods.

988 citations


Journal ArticleDOI
TL;DR: Genetically encoded light-inducible protein-interaction modules based on Arabidopsis thaliana cryptochrome 2 and CIB1 that require no exogenous ligands and dimerize on blue-light exposure with subsecond time resolution and subcellular spatial resolution are described.
Abstract: Dimerizers allowing inducible control of protein-protein interactions are powerful tools for manipulating biological processes. Here we describe genetically encoded light-inducible protein-interaction modules based on Arabidopsis thaliana cryptochrome 2 and CIB1 that require no exogenous ligands and dimerize on blue-light exposure with subsecond time resolution and subcellular spatial resolution. We demonstrate the utility of this system by inducing protein translocation, transcription and Cre recombinase-mediated DNA recombination using light.

Journal ArticleDOI
TL;DR: Both theory and experimental data showed that unweighted least-squares fitting of a Gaussian squanders one-third of the available information, that a popular formula for its precision exaggerates beyond Fisher's information limit, and that weighted least-squares may do worse, whereas maximum-likelihood fitting is practically optimal.
Abstract: We optimally localized isolated fluorescent beads and molecules imaged as diffraction-limited spots, determined the orientation of molecules and present reliable formulas for the precision of various localization methods. Both theory and experimental data showed that unweighted least-squares fitting of a Gaussian squanders one-third of the available information, a popular formula for its precision exaggerates beyond Fisher's information limit, and weighted least-squares may do worse, whereas maximum-likelihood fitting is practically optimal.
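The maximum-likelihood fitting favored above can be sketched in one dimension: simulate Poisson photon counts from a pixelated Gaussian spot and maximize the Poisson log-likelihood over candidate centre positions. The pixel grid, photon budget and spot width below are invented, and a simple grid search stands in for the fitting routines the paper evaluates.

```python
import math
import random

random.seed(1)
PIXELS = list(range(-5, 6))   # 1D pixel grid
SIGMA = 1.2                   # spot width in pixels (assumed known)
PHOTONS = 500                 # expected photons per image
TRUE_X0 = 0.3                 # true spot centre

def expected_counts(x0):
    raw = [math.exp(-0.5 * ((p - x0) / SIGMA) ** 2) for p in PIXELS]
    total = sum(raw)
    return [PHOTONS * r / total for r in raw]

def sample_poisson(lam):
    # Knuth's method; adequate for the modest rates used here.
    limit, k, prod = math.exp(-lam), 0, 1.0
    while prod > limit:
        k += 1
        prod *= random.random()
    return k - 1

def mle_centre(counts):
    # Grid-search maximizer of the Poisson log-likelihood over x0 in [-1, 1].
    best_x, best_ll = 0.0, -math.inf
    for step in range(-100, 101):
        x0 = step / 100
        mu = expected_counts(x0)
        ll = sum(n * math.log(m) - m for n, m in zip(counts, mu))
        if ll > best_ll:
            best_x, best_ll = x0, ll
    return best_x

estimates = [mle_centre([sample_poisson(m) for m in expected_counts(TRUE_X0)])
             for _ in range(50)]
mean_est = sum(estimates) / len(estimates)
crlb_std = SIGMA / math.sqrt(PHOTONS)  # idealized localization limit
print(round(mean_est, 2), round(crlb_std, 3))
```

With 500 photons the idealized limit is about 0.054 pixels; the MLE estimates scatter near that scale around the true centre, consistent with the paper's point that maximum likelihood is close to optimal.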

Journal ArticleDOI
TL;DR: The use of 'minicircle' DNA vectors, which are free of bacterial DNA and capable of high expression in cells, enables the generation of transgene-free iPSCs from adult human adipose stem cells.
Abstract: Owing to the risk of insertional mutagenesis, viral transduction has been increasingly replaced by nonviral methods to generate induced pluripotent stem cells (iPSCs). We report the use of 'minicircle' DNA, a vector type that is free of bacterial DNA and capable of high expression in cells, for this purpose. Here we use a single minicircle vector to generate transgene-free iPSCs from adult human adipose stem cells.

Journal ArticleDOI
TL;DR: In this paper, the authors developed a comprehensive computational pipeline to compare library quality metrics from any RNA-seq method, using the well-annotated Saccharomyces cerevisiae transcriptome as a benchmark.
Abstract: Strand-specific, massively parallel cDNA sequencing (RNA-seq) is a powerful tool for transcript discovery, genome annotation and expression profiling. There are multiple published methods for strand-specific RNA-seq, but no consensus exists as to how to choose between them. Here we developed a comprehensive computational pipeline to compare library quality metrics from any RNA-seq method. Using the well-annotated Saccharomyces cerevisiae transcriptome as a benchmark, we compared seven library-construction protocols, including both published and our own methods. We found marked differences in strand specificity, library complexity, evenness and continuity of coverage, agreement with known annotations and accuracy for expression profiling. Weighing each method's performance and ease, we identified the dUTP second-strand marking and the Illumina RNA ligation methods as the leading protocols, with the former benefitting from the current availability of paired-end sequencing. Our analysis provides a comprehensive benchmark, and our computational pipeline is applicable for assessment of future protocols in other organisms.
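One of the simpler library-quality metrics such a pipeline computes is strand specificity: the fraction of aligned reads whose mapped strand matches the annotated transcript strand. A minimal sketch with invented reads and annotations:

```python
# Each read: (gene it aligns to, strand it mapped to). Data invented.
reads = [("geneA", "+"), ("geneA", "+"), ("geneB", "-"), ("geneB", "+")]
annotation = {"geneA": "+", "geneB": "-"}   # annotated transcript strands

# Count reads whose mapped strand agrees with the annotation.
matches = sum(strand == annotation[gene] for gene, strand in reads)
print(matches / len(reads))   # 0.75
```

A perfectly strand-specific protocol would score near 1.0; an unstranded one near 0.5.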

Journal ArticleDOI
TL;DR: A fast method for denoising pyrosequencing reads for community 16S rRNA analysis is developed; a 2–4-fold reduction in the number of observed OTUs is seen when comparing denoised with non-denoised data.
Abstract: We developed a fast method for denoising pyrosequencing reads for community 16S rRNA analysis. We observe a 2–4-fold reduction in the number of observed OTUs (operational taxonomic units) comparing denoised with non-denoised data. Approximately 50,000 sequences can be denoised on a laptop within an hour, two orders of magnitude faster than published techniques. We demonstrate the effects of denoising on alpha and beta diversity of large 16S rRNA datasets.

Journal ArticleDOI
TL;DR: How visualization tools are being used to help interpret protein interaction, gene expression and metabolic profile data is discussed, and emerging new directions are highlighted.
Abstract: High-throughput studies of biological systems are rapidly accumulating a wealth of 'omics'-scale data. Visualization is a key aspect of both the analysis and understanding of these data, and users now have many visualization methods and tools to choose from. The challenge is to create clear, meaningful and integrated visualizations that give biological insight, without being overwhelmed by the intrinsic complexity of the data. In this review, we discuss how visualization tools are being used to help interpret protein interaction, gene expression and metabolic profile data, and we highlight emerging new directions.

Journal ArticleDOI
TL;DR: This work describes a technique to quantitatively measure three-dimensional traction forces exerted by cells fully encapsulated in well-defined elastic hydrogel matrices and revealed patterns of force generation attributable to morphologically distinct regions of cells as they extend into the surrounding matrix.
Abstract: Tracking the displacement of fluorescent beads surrounding a cell embedded in a hydrogel matrix allows quantitative measurement of the three-dimensional traction forces exerted by the cell.

Journal ArticleDOI
TL;DR: An approach to adaptive optics in microscopy wherein the rear pupil of an objective lens is segmented into subregions, and light is directed individually to each subregion to measure, by image shift, the deflection faced by each group of rays as they emerge from the objective and travel through the specimen toward the focus.
Abstract: Biological specimens are rife with optical inhomogeneities that seriously degrade imaging performance under all but the most ideal conditions. Measuring and then correcting for these inhomogeneities is the province of adaptive optics. Here we introduce an approach to adaptive optics in microscopy wherein the rear pupil of an objective lens is segmented into subregions, and light is directed individually to each subregion to measure, by image shift, the deflection faced by each group of rays as they emerge from the objective and travel through the specimen toward the focus. Applying our method to two-photon microscopy, we could recover near-diffraction-limited performance from a variety of biological and nonbiological samples exhibiting aberrations large or small and smoothly varying or abruptly changing. In particular, results from fixed mouse cortical slices illustrate our ability to improve signal and resolution to depths of 400 µm.
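The image-shift measurement underlying this scheme can be sketched in one dimension: the displacement of a subregion's image relative to a reference is taken as the argmax of their cross-correlation. The signals below are invented, and real implementations work on 2D images with subpixel interpolation.

```python
ref     = [0, 1, 4, 9, 4, 1, 0, 0, 0, 0]   # reference spot image (1D, invented)
shifted = [0, 0, 0, 1, 4, 9, 4, 1, 0, 0]   # same spot displaced by 2 pixels

def best_shift(a, b, max_shift=4):
    """Displacement s of b relative to a that maximizes the cross-correlation."""
    def corr(s):
        return sum(a[i] * b[i + s]
                   for i in range(len(a)) if 0 <= i + s < len(b))
    return max(range(-max_shift, max_shift + 1), key=corr)

print(best_shift(ref, shifted))   # 2
```

In the adaptive-optics context, each subregion's measured shift gives the local wavefront tilt, which the corrective element then cancels.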

Journal ArticleDOI
TL;DR: Waltz, as mentioned in this paper, is a web-based tool that uses a position-specific scoring matrix to identify amyloid-forming sequences, allowing users to better distinguish between amyloid sequences and amorphous beta-sheet aggregates.
Abstract: Protein aggregation results in beta-sheet-like assemblies that adopt either a variety of amorphous morphologies or ordered amyloid-like structures. These differences in structure also reflect biological differences; amyloid and amorphous beta-sheet aggregates have different chaperone affinities, accumulate in different cellular locations and are degraded by different mechanisms. Further, amyloid function depends entirely on a high intrinsic degree of order. Here we experimentally explored the sequence space of amyloid hexapeptides and used the derived data to build Waltz, a web-based tool that uses a position-specific scoring matrix to determine amyloid-forming sequences. Waltz allows users to identify and better distinguish between amyloid sequences and amorphous beta-sheet aggregates and allowed us to identify amyloid-forming regions in functional amyloids.
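Scoring a hexapeptide against a position-specific scoring matrix (PSSM), as Waltz does, reduces to summing a per-position weight for each residue. Waltz's actual matrix was trained on experimental amyloid data and is not reproduced here; the sparse weights and threshold below are invented, tuned so that a known amyloid-forming hexapeptide such as STVIIE scores highly.

```python
# Invented log-odds-style weights for a few residues at each of six positions;
# residues not listed contribute 0.
PSSM = [
    {"S": 1.1, "N": 0.8, "Q": 0.6},
    {"T": 0.9, "V": 0.7},
    {"F": 1.3, "I": 1.0, "V": 0.8},
    {"I": 1.2, "V": 0.9},
    {"I": 1.0, "E": -1.5},
    {"E": 0.7, "G": -0.8},
]

def waltz_like_score(hexapeptide, threshold=3.0):
    """Sum per-position weights; classify as amyloidogenic above threshold."""
    assert len(hexapeptide) == 6
    score = sum(PSSM[i].get(aa, 0.0) for i, aa in enumerate(hexapeptide))
    return score, score >= threshold

print(waltz_like_score("STVIIE"))   # high score -> flagged amyloidogenic
```

Sliding such a window along a full protein sequence then yields candidate amyloid-forming regions, which is how a PSSM-based tool scans whole proteins.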

Journal ArticleDOI
TL;DR: This method discriminates the specimen-related scattered background from signal fluorescence, thereby removing out-of-focus light and optimizing the contrast of in-focus structures, and provides rapid control of the illumination pattern, exceptional imaging quality and high imaging speeds.
Abstract: The combination of digital scanned laser light sheet microscopy and incoherent structured illumination allows intrinsic removal of scattered background fluorescence from the desired fluorescent signal. This provides substantial advantages for imaging nontransparent organisms and allowed reconstruction of a fly digital embryo from a developing Drosophila embryo.

Journal ArticleDOI
TL;DR: A method to accurately quantify human tumor proteomes is described by combining a mixture of five stable-isotope labeling by amino acids in cell culture (SILAC)-labeled cell lines with human carcinoma tissue, which broadens the scope of SILAC-based proteomics.
Abstract: We describe a method to accurately quantify human tumor proteomes by combining a mixture of five stable-isotope labeling by amino acids in cell culture (SILAC)-labeled cell lines with human carcinoma tissue. This generated hundreds of thousands of isotopically labeled peptides in appropriate amounts to serve as internal standards for mass spectrometry-based analysis. By decoupling the labeling from the measurement, this super-SILAC method broadens the scope of SILAC-based proteomics.

Journal ArticleDOI
TL;DR: This work tracked the performance of >600,000 variants of a human WW domain after three and six rounds of selection by phage display for binding to its peptide ligand, providing a general means for understanding how protein function relates to sequence.
Abstract: We present a large-scale approach to investigate the functional consequences of sequence variation in a protein. The approach entails the display of hundreds of thousands of protein variants, moderate selection for activity, and high throughput DNA sequencing to quantify the performance of each variant. Using this strategy, we tracked the performance of >600,000 variants of a human WW domain after three and six rounds of selection by phage display for binding to its peptide ligand. Binding properties of these variants defined a high-resolution map of mutational preference across the WW domain; each position possessed unique features that could not be captured by a few representative mutations. Our approach could be applied to many in vitro or in vivo protein assays, providing a general means for understanding how protein function relates to sequence.
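The per-variant bookkeeping behind such a selection experiment reduces to enrichment ratios: compare each variant's frequency after selection with its input frequency, normalized to wild type. The counts below are invented, and the published analysis spans multiple selection rounds and hundreds of thousands of variants.

```python
import math

# Invented sequencing counts before and after one round of selection.
counts_input = {"WT": 1000, "A12V": 900, "W17A": 950, "G5S": 800}
counts_selected = {"WT": 1200, "A12V": 1000, "W17A": 30, "G5S": 850}

def enrichment(variant):
    """log2 ratio of post- to pre-selection frequency, relative to wild type."""
    n_in = sum(counts_input.values())
    n_sel = sum(counts_selected.values())
    freq_ratio = (counts_selected[variant] / n_sel) / (counts_input[variant] / n_in)
    wt_ratio = (counts_selected["WT"] / n_sel) / (counts_input["WT"] / n_in)
    return math.log2(freq_ratio / wt_ratio)

scores = {v: round(enrichment(v), 2) for v in counts_input}
print(scores)   # W17A is strongly depleted, i.e. binding-defective here
```

Aggregating such scores per position and substitution is what produces the mutational-preference map described above.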

Journal ArticleDOI
TL;DR: An iterative algorithm is described that converges to the maximum likelihood estimate of the position and intensity of a single fluorophore and efficiently computes and achieves the Cramér-Rao lower bound, an essential tool for parameter estimation.
Abstract: We describe an iterative algorithm that converges to the maximum likelihood estimate of the position and intensity of a single fluorophore. Our technique efficiently computes and achieves the Cramér–Rao lower bound, an essential tool for parameter estimation. An implementation of the algorithm on graphics processing unit hardware achieved more than 10^5 combined fits and Cramér–Rao lower bound calculations per second, enabling real-time data analysis for super-resolution imaging and other applications.

Journal ArticleDOI
TL;DR: This work validated csSAM with predesigned mixtures and applied it to whole-blood gene expression datasets from stable post-transplant kidney transplant recipients and those experiencing acute transplant rejection, which revealed hundreds of differentially expressed genes that were otherwise undetectable.
Abstract: We describe cell type-specific significance analysis of microarrays (csSAM) for analyzing differential gene expression for each cell type in a biological sample from microarray data and relative cell-type frequencies. First, we validated csSAM with predesigned mixtures and then applied it to whole-blood gene expression datasets from stable post-transplant kidney transplant recipients and those experiencing acute transplant rejection, which revealed hundreds of differentially expressed genes that were otherwise undetectable.
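The deconvolution step underlying csSAM models the bulk expression of each gene across samples as a frequency-weighted sum of unknown cell-type-specific expression levels, recoverable by least squares. A minimal two-cell-type sketch with invented, noise-free frequencies and expression values (the actual method adds significance analysis across groups):

```python
# Per-sample cell-type frequencies (e.g. from cell counting)... (invented)
freqs = [(0.7, 0.3), (0.5, 0.5), (0.2, 0.8), (0.6, 0.4)]
# ...and bulk expression of one gene per sample, generated here without
# noise from true per-type levels (10, 2).
bulk = [0.7 * 10 + 0.3 * 2, 0.5 * 10 + 0.5 * 2,
        0.2 * 10 + 0.8 * 2, 0.6 * 10 + 0.4 * 2]

def deconvolve(freqs, bulk):
    """Solve min ||F x - y||^2 for two cell types via the normal equations."""
    a11 = sum(f1 * f1 for f1, _ in freqs)
    a12 = sum(f1 * f2 for f1, f2 in freqs)
    a22 = sum(f2 * f2 for _, f2 in freqs)
    b1 = sum(f1 * y for (f1, _), y in zip(freqs, bulk))
    b2 = sum(f2 * y for (_, f2), y in zip(freqs, bulk))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

print(deconvolve(freqs, bulk))   # recovers roughly (10.0, 2.0)
```

Comparing the deconvolved per-type levels between patient groups, gene by gene, is what exposes cell-type-specific differential expression hidden in the bulk measurements.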

Journal ArticleDOI
TL;DR: In vivo imaging in mouse neocortex is reported with greatly improved temporal resolution using random-access scanning with acousto-optic deflectors, uncovering spatiotemporal trial-to-trial variability of sensory responses in barrel cortex and visual cortex.
Abstract: Two-photon calcium imaging of neuronal populations enables optical recording of spiking activity in living animals, but standard laser scanners are too slow to accurately determine spike times. Here we report in vivo imaging in mouse neocortex with greatly improved temporal resolution using random-access scanning with acousto-optic deflectors. We obtained fluorescence measurements from 34-91 layer 2/3 neurons at a 180-490 Hz sampling rate. We detected single action potential-evoked calcium transients with signal-to-noise ratios of 2-5 and determined spike times with near-millisecond precision and 5-15 ms confidence intervals. An automated 'peeling' algorithm enabled reconstruction of complex spike trains from fluorescence traces up to 20-30 Hz frequency, uncovering spatiotemporal trial-to-trial variability of sensory responses in barrel cortex and visual cortex. By revealing spike sequences in neuronal populations on a fast time scale, high-speed calcium imaging will facilitate optical studies of information processing in brain microcircuits.
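The 'peeling' reconstruction can be caricatured in a few lines: repeatedly locate the largest remaining fluorescence peak, record it as a spike and subtract a stereotyped unit transient at that time. The transient shape, threshold and spike times below are invented, and the published algorithm additionally handles noise, indicator saturation and much faster spike trains.

```python
import math

def transient(t):
    """Stereotyped unit calcium transient (invented decay constant)."""
    return math.exp(-t / 5.0) if t >= 0 else 0.0

spike_times = [3, 10]          # ground truth used to build the synthetic trace
trace = [sum(transient(t - s) for s in spike_times) for t in range(30)]

recovered = []
residual = trace[:]
while max(residual) > 0.5:                   # detection threshold (invented)
    t0 = residual.index(max(residual))       # largest remaining peak
    recovered.append(t0)
    # Peel off one stereotyped transient at the detected spike time.
    residual = [r - transient(t - t0) for t, r in enumerate(residual)]

print(sorted(recovered))   # [3, 10]
```

On this noise-free trace the peeling loop recovers both spike times exactly; the value of the approach in the paper is that it disentangles overlapping transients in fast, noisy recordings.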

Journal ArticleDOI
TL;DR: It was found that picking up mice by the tail induced aversion and high anxiety, whereas use of tunnels or open hand led to voluntary approach, low anxiety and acceptance of physical restraint.
Abstract: Mice handled by their tails show high levels of anxiety and stress compared to mice handled in cupped hands or in a transparent tunnel.

Journal ArticleDOI
TL;DR: A method based on crude synthetic peptide libraries for the high-throughput development of SRM assays is described; the power of the approach is illustrated by generating and applying validated SRM assays for all Saccharomyces cerevisiae kinases and phosphatases.
Abstract: Selected reaction monitoring (SRM) uses sensitive and specific mass spectrometric assays to measure target analytes across multiple samples, but it has not been broadly applied in proteomics owing to the tedious assay development process for each protein. We describe a method based on crude synthetic peptide libraries for the high-throughput development of SRM assays. We illustrate the power of the approach by generating and applying validated SRM assays for all Saccharomyces cerevisiae kinases and phosphatases.

Journal ArticleDOI
TL;DR: Mass spectrometry has evolved and matured to a level where it is able to assess the complexity of the human proteome and some of the expected challenges ahead and promising strategies for success are discussed.
Abstract: Mass spectrometry has evolved and matured to a level where it is able to assess the complexity of the human proteome. We discuss some of the expected challenges ahead and promising strategies for success.

Journal ArticleDOI
TL;DR: GenePRIMP, as discussed in this paper, is a computational process that performs evidence-based evaluation of gene models in prokaryotic genomes and reports anomalies including inconsistent start sites, missed genes and split genes.
Abstract: We present 'gene prediction improvement pipeline' (GenePRIMP; http://geneprimp.jgi-psf.org/), a computational process that performs evidence-based evaluation of gene models in prokaryotic genomes and reports anomalies including inconsistent start sites, missed genes and split genes. We found that manual curation of gene models using the anomaly reports generated by GenePRIMP improved their quality, and demonstrate the applicability of GenePRIMP in improving finishing quality and comparing different genome-sequencing and annotation technologies.

Journal ArticleDOI
TL;DR: Although conventional microscopes have a resolution limited by diffraction to about half the wavelength of light, several recent advances have led to microscopy methods that achieve roughly tenfold improvements in resolution.
Abstract: To the Editor: Although conventional microscopes have a resolution limited by diffraction to about half the wavelength of light, several recent advances have led to microscopy methods that achieve roughly tenfold improvements in resolution. Among them, photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) have become particularly popular, as they only require relatively simple and affordable modifications to a standard total internal reflection fluorescence (TIRF) microscope and have been extended to three-dimensional (3D) super-resolution and multicolor imaging.