scispace - formally typeset
Journal ArticleDOI

Sequencing technologies-the next generation

01 Jan 2010-Nature Reviews Genetics (Nature Publishing Group)-Vol. 11, Iss: 1, pp 31-46
TL;DR: A technical review of template preparation, sequencing and imaging, genome alignment and assembly approaches, and recent advances in current and near-term commercially available NGS instruments is presented.
Abstract: Demand has never been greater for revolutionary technologies that deliver fast, inexpensive and accurate genome information. This challenge has catalysed the development of next-generation sequencing (NGS) technologies. The inexpensive production of large volumes of sequence data is the primary advantage over conventional methods. Here, I present a technical review of template preparation, sequencing and imaging, genome alignment and assembly approaches, and recent advances in current and near-term commercially available NGS instruments. I also outline the broad range of applications for NGS technologies, in addition to providing guidelines for platform selection to address biological questions of interest.

Summary (1 min read)


Summary

  • DNA sequencing is one of the most important platforms for studying biological systems today.
  • High-throughput next-generation sequencing technologies deliver fast, inexpensive, and accurate genome information.
  • Next-generation sequencing can produce over 100 times more data than methods based on Sanger sequencing.
  • The next-generation sequencing technologies offered by Illumina/Solexa, ABI/SOLiD, 454/Roche, and Helicos have provided unprecedented opportunities for high-throughput functional genomics research.
  • Next-generation sequencing technologies offer novel and rapid ways for genome-wide characterization and profiling of mRNAs, transcription factor regions, and DNA patterns.

DNA sequencing is one of the most important platforms for studying biological systems today. High-throughput next-generation sequencing technologies deliver fast, inexpensive, and accurate genome information. Next-generation sequencing can produce over 100 times more data than methods based on Sanger sequencing. The next-generation sequencing technologies offered by Illumina/Solexa, ABI/SOLiD, 454/Roche, and Helicos have provided unprecedented opportunities for high-throughput functional genomics research. Next-generation sequencing technologies offer novel and rapid ways for genome-wide characterization and profiling of mRNAs, transcription factor regions, and DNA patterns.
Fig. 7) Plot of the frequency of each percentage covered for all nodes. BLAST is in blue, MUMmer is in green.
Sequencing Technologies – the Next Generation, Michael L. Metzker
Next Generation Sequencing Pipeline Development and Data Analysis
Fig. 9) Plot of the coverage of each node. BLAST points are blue, MUMmer points are red.
Fig. 6) Plot of the frequency of each percentage covered for all contigs. BLAST is in blue, MUMmer is in green.
454/Roche – 454 Life Sciences is a biotechnology company, part of Roche, based in Branford, Connecticut. It develops ultra-fast, high-throughput DNA sequencing methods and tools.
Illumina/Solexa – Illumina develops and manufactures integrated systems for the analysis of genetic variation. Solexa was founded to develop genome sequencing technology.
ABI/SOLiD – SOLiD (Sequencing by Oligonucleotide Ligation and Detection) is a next-generation DNA sequencing technology developed by Life Technologies and commercially available since 2006. It generates hundreds of millions to billions of short sequence reads at a time.
Helicos – Helicos's technology images the extension of individual DNA molecules using a defined primer and individual fluorescently labeled nucleotides that contain a "virtual terminator" preventing incorporation of multiple nucleotides per cycle.
Julian Pierre¹, Jordan Taylor², Amit Upadhyay³, Bhanu Rekepalli³
Fig. 8) Plot of the coverage of each contig. BLAST points are blue, MUMmer points are red.

Using the coverage of each individual contig ID, the results for both BLAST and MUMmer were plotted. While BLAST hit more contigs, more of the contigs hit by MUMmer had higher coverage.

Using the data gathered from both BLAST and MUMmer, the frequency of the amount covered for each contig was plotted. From Fig. 6, it can be inferred that MUMmer hit contigs more accurately.
Fig 4) from main.g2.bx.psu.edu
Once the results were found using both the BLAST and MUMmer search tools, we created a program to see which alignment tool had the most hits per contig. The database file contains 160,749 contigs and the query file contains 552,305 nodes. BLAST returned a total of 123,070 hits and MUMmer returned a total of 121,829 hits. From the results, MUMmer hit more accurately than BLAST, while BLAST hit more contigs than MUMmer.
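The hit-counting step can be sketched as follows: parse BLAST's tabular output (outfmt 6, one hit per line, subject ID in the second column) and count hits per contig. This is a minimal sketch with made-up sample records, not the exact program or files used here.

```python
from collections import Counter

# Minimal sketch: count BLAST hits per contig from tabular (outfmt 6)
# records, where column 1 is the query (node) and column 2 the subject
# (contig). The sample records below are hypothetical.
sample_blast = """\
node_1\tcontig_7\t98.5\t120\t2\t0\t1\t120\t300\t419\t1e-50\t220
node_2\tcontig_7\t97.0\t110\t3\t0\t1\t110\t10\t119\t1e-45\t200
node_3\tcontig_9\t95.2\t100\t5\t0\t1\t100\t50\t149\t1e-40\t180
"""

def hits_per_contig(tabular_text):
    """Return a Counter mapping contig ID -> number of hits."""
    counts = Counter()
    for line in tabular_text.splitlines():
        if not line.strip():
            continue
        fields = line.split("\t")
        counts[fields[1]] += 1  # subject (contig) ID
    return counts

counts = hits_per_contig(sample_blast)
print(counts["contig_7"])  # 2
```

The same counting logic applies to MUMmer output once its coordinate report is parsed into (query, reference) pairs.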
In next-generation sequencing, data analysis is one of the most expensive processes. While the cost of genome sequencing continues to fall, the cost of analyzing the data remains high. In the future, the "$1,000 genome will come with a $20,000 analysis price tag."
The same process was done with the nodes. From Fig. 7, it can be inferred that BLAST hit nodes more accurately. However, there are more BLAST results with lower coverage.
The future of next-generation sequencing can be broken down into a variety of categories, such as personalized medicine, biofuels, climate change, and other life-science fields.

Personalized medicine is a medical model that proposes customizing medical decisions to the individual patient.

Biofuels present a source of alternative energy. Microalgal biofuels use algae to synthesize the fuel. To optimize the process, an understanding of the gene-function relationship of algae would prove helpful.

Climate-change research builds theoretical models that use past climate data to make future projections.

In conclusion, we hope the knowledge we have gained will contribute to fields such as these.
The same process was done with the nodes. While BLAST hit more nodes, more of the nodes hit by BLAST had lower coverage.
1 Texas Southern University, 2 Austin Peay State University, 3 University of Tennessee
Next-generation sequencing uses a wide array of tools to obtain results based on the genome sequence. The most widely used tools are BLAST, HMMER, and MUMmer.

BLAST (Basic Local Alignment Search Tool) is a sequence similarity search tool developed at the NIH (National Institutes of Health). It is used to find similar regions in different sequences and then compare their similarities.

MUMmer (named for the maximal unique matches it anchors on) is a system for rapidly aligning entire genomes. It can also align incomplete genomes and can easily handle thousands of contigs from a shotgun sequencing project.

HMMER is used for searching sequence databases for homologs of protein sequences and for making protein sequence alignments. It implements methods based on probabilistic models called profile hidden Markov models (HMMs).
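As a rough illustration of how two of these tools are typically driven in a pipeline, the sketch below assembles (but does not run) typical command lines for BLAST (makeblastdb plus blastn with tabular outfmt 6) and MUMmer (nucmer). The file names are hypothetical placeholders, and exact flags should be checked against each tool's documentation.

```python
# Sketch: build typical command lines for a contig-vs-node comparison.
# File names here are hypothetical; flags follow the common BLAST+ and
# MUMmer usage but should be verified against the installed versions.

def blast_commands(db_fasta, query_fasta, out_tsv):
    """Format the database once, then run blastn with tabular output."""
    return [
        ["makeblastdb", "-in", db_fasta, "-dbtype", "nucl"],
        ["blastn", "-query", query_fasta, "-db", db_fasta,
         "-outfmt", "6", "-out", out_tsv],
    ]

def mummer_command(ref_fasta, query_fasta, prefix):
    """nucmer aligns the query against the reference; output goes to <prefix>.delta."""
    return ["nucmer", "--prefix", prefix, ref_fasta, query_fasta]

for cmd in blast_commands("contigs.fasta", "nodes.fasta", "hits.tsv"):
    print(" ".join(cmd))
print(" ".join(mummer_command("contigs.fasta", "nodes.fasta", "out")))
```

Returning argument lists rather than shelling out directly keeps the pipeline steps easy to log, test, and hand to a scheduler.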
Genome Assembly

Sequence analysis refers to the process of subjecting a DNA, RNA, or peptide sequence to a wide range of analytical methods to:
  • Compare sequences to find similarities and infer whether they are homologous
  • Identify features of the sequence, such as gene structure, distribution, introns and exons, and regulation of gene expression
  • Identify sequence differences and variations, such as mutations
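As a toy illustration of the comparison step, the sketch below computes percent identity between two equal-length aligned sequences and lists the positions that differ (candidate mutations). Real analyses would first produce the alignment with a tool such as BLAST; the sequences here are made up.

```python
def compare_aligned(seq_a, seq_b):
    """Percent identity and mismatch positions for two aligned sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    mismatches = [(i + 1, a, b)  # 1-based position, base in A, base in B
                  for i, (a, b) in enumerate(zip(seq_a, seq_b)) if a != b]
    identity = 100.0 * (len(seq_a) - len(mismatches)) / len(seq_a)
    return identity, mismatches

# Hypothetical 7-bp alignment with a single substitution at position 3.
identity, muts = compare_aligned("GATTACA", "GACTACA")
print(muts)  # [(3, 'T', 'C')]
```

A single substitution in a 7-bp alignment gives roughly 85.7% identity, which is the kind of per-site difference a variant-calling step would flag.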
Fig. 1) This figure shows three different next-generation sequencing methods. [2]
Fig. 2) Taken from A Hitchhiker's Guide to Next-Generation Sequencing, by Gabe Rudy.
Fig. 3) Taken from bio.davidson.edu/courses. Shows alignment results for yeast.
Fig. 5) From jcvi.org. Shows the mapping of chr6 of a human genome.
Julian Pierre – julz_pierre@yahoo.com
Jordan Taylor – jtaylor74@my.apsu.edu
Amit Upadhyay – aupadhy1@utk.edu
Bhanu Rekepalli – brekapal@utk.edu
http://www.roche.com/research_and_development/r_d_overview/r_d_sites.htm?id=18
http://www.pnas.org/content/99/6/3712/F1.expansion.html
http://www.yerkes.emory.edu/nhp_genomics_core/Services/Sequencing.html
http://www.illumina.com/technology/solexa_technology.ilmn
http://blast.ncbi.nlm.nih.gov/Blast.cgi
https://main.g2.bx.psu.edu/u/dan/p/fastq
http://ori.dhhs.gov/education/products/n_illinois_u/datamanagement/datopic.htmll
http://www.jcvi.org/medicago/include/images/chr6.BamHI.maps.jpg
Gabe Rudy, (2010) A Hitchhiker's Guide to Next-Generation Sequencing, :1-9, Golden Helix
[1] John D. McPherson, (2009) Next-Generation Gap, 6:1-4, Nature Methods Supplement
[2] Michael L. Metzker, (2010) Sequencing Technologies – the Next Generation, 11:1-5, Nature Reviews
Md. Fakruddin, Khanjada Shahnewaj Bin Mannan, (2012) Next Generation Sequencing Technologies – Principles and Prospects, 6:1-9, Research and Reviews in Biosciences
Misra N., Panda P. K., Parida B. K., Mishra B. K., (2012) Phylogenomic Study of Lipid Genes Involved in Microalgal Biofuel Production – Candidate Gene Mining and Metabolic Pathway Analyses, Evolutionary Bioinformatics 8:545-564, doi: 10.4137/EBO.S10159
Galaxy is an open, web-based platform for data-intensive biomedical research. It can be used on its own free public server, where you can perform, reproduce, and share complete analyses. An example of how Galaxy presents its data is shown in Fig. 4.
Two FASTA files related to the same nucleotide sequence
were input into both BLAST and MUMmer and the results
were parsed into tables. Then, the coverage of all hit contigs
and nodes from both programs was found.
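The coverage computation described above can be sketched as interval bookkeeping: merge the aligned intervals reported for each contig and divide the covered length by the contig length. The contig length and intervals below are hypothetical.

```python
def percent_covered(contig_length, intervals):
    """Percentage of a contig covered by at least one alignment.

    intervals: (start, end) pairs, 1-based inclusive, possibly overlapping.
    """
    covered = 0
    last_end = 0
    for start, end in sorted(intervals):
        start = max(start, last_end + 1)  # clip overlap with earlier runs
        if end > last_end:
            covered += end - start + 1
            last_end = end
    return 100.0 * covered / contig_length

# Hypothetical contig of length 200 hit by two overlapping alignments:
# bases 1-100 and 51-150 together cover 150 of 200 bases.
print(percent_covered(200, [(1, 100), (51, 150)]))  # 75.0
```

Running this per contig ID for both the BLAST and MUMmer tables yields the coverage values that were binned and plotted in Figs. 6-9.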
Citations
Journal ArticleDOI
TL;DR: It is concluded that starting DNA quality is an important consideration for RADSeq; however, the approach remains robust until genomic DNA is extensively degraded.
Abstract: Degraded DNA from suboptimal field sampling is common in molecular ecology. However, its impact on techniques that use restriction site associated next-generation DNA sequencing (RADSeq, GBS) is unknown. We experimentally examined the effects of in situ DNA degradation on data generation for a modified double-digest RADSeq approach (3RAD). We generated libraries using genomic DNA serially extracted from the muscle tissue of 8 individual lake whitefish (Coregonus clupeaformis) following 0-, 12-, 48- and 96-h incubation at room temperature post-euthanasia. This treatment of the tissue resulted in input DNA that ranged in quality from nearly intact to highly sheared. All samples were sequenced as a multiplexed pool on an Illumina MiSeq. Libraries created from low to moderately degraded DNA (12-48 h) performed well. In contrast, the number of RADtags per individual, number of variable sites, and percentage of identical RADtags retained were all dramatically reduced when libraries were made using highly degraded DNA (96-h group). This reduction in performance was largely due to a significant and unexpected loss of raw reads as a result of poor quality scores. Our findings remained consistent after changes in restriction enzymes, modified fold coverage values (2- to 16-fold), and additional read-length trimming. We conclude that starting DNA quality is an important consideration for RADSeq; however, the approach remains robust until genomic DNA is extensively degraded.

110 citations


Cites background or methods from "Sequencing technologies-the next ge..."

  • ...…shifting molecular ecology studies away from traditional markers towards large numbers of single-nucleotide polymorphisms (SNPs; reviewed by: Mardis 2008; Metzker 2010; Angeloni et al. 2012; Ekblom & Galindo 2011; see also: Ackerman et al. 2011; Barchi et al. 2011; Larson et al. 2013, 2014)....


  • ...However, recent advances in DNA sequencing technology are rapidly shifting molecular ecology studies away from traditional markers towards large numbers of single-nucleotide polymorphisms (SNPs; reviewed by: Mardis 2008; Metzker 2010; Angeloni et al. 2012; Ekblom & Galindo 2011; see also: Ackerman et al. 2011; Barchi et al. 2011; Larson et al. 2013, 2014)....


  • ...Third generation sequencing technologies, such as the PacBio SMRT system, have been used in the field of forensics to overcome contamination and degradation issues, as they allow for longer read lengths (Metzker 2010; Glenn 2011)....


Journal ArticleDOI
TL;DR: Digital procedures are based on the limiting dilution of biological samples into individual compartments, such as droplets of a water-in-oil emulsion, and rely on the discrete counting of a given event, providing absolute, quantitative data; this approach could become an essential diagnostic tool for the study of diseases as well as for patient management.

110 citations

Journal ArticleDOI
TL;DR: There has been a recent surge in the use of genome-wide methodologies to identify and annotate the transcriptional regulatory elements in the human genome, and several aspects of enhancer function have been shown to be more widespread than was previously appreciated.
Abstract: There has been a recent surge in the use of genome-wide methodologies to identify and annotate the transcriptional regulatory elements in the human genome. Here we review some of these methodologies and the conceptual insights about transcription regulation that have been gained from the use of genome-wide studies. It has become clear that the binding of transcription factors is itself a highly regulated process, and binding does not always appear to have functional consequences. Numerous properties have now been associated with regulatory elements that may be useful in their identification. Several aspects of enhancer function have been shown to be more widespread than was previously appreciated, including the highly combinatorial nature of transcription factor binding, the postinitiation regulation of many target genes, and the binding of enhancers at early stages to maintain their competence during development. Going forward, the integration of multiple genome-wide data sets should become a standard approach to elucidate higher-order regulatory interactions.

110 citations

Book ChapterDOI
06 Feb 2013
TL;DR: The emerging consensus among systematists and evolutionary biologists was based on the need to distinguish between a non-operational, ontological definition of species, versus the empirical (operational) data needed to test their reality, and the multiple empirical criteria simply emphasized the many contingent properties.
Abstract: A decade ago Sites and Marshall [1] described the empirical practice of species delimitation as "a Renaissance issue in systematic biology". At the time there was an odd disconnect between the two frequently stated empirical goals of systematic biology: the discovery of (1) monophyletic groups (clades) and relationships within these at all hierarchical levels above species; and (2) lineages (species); compared to the actual practice of the discipline. While much of systematic biology had been devoted to the first goal, the second goal had until recently been largely ignored [2], despite the fact that species are routinely used as the basic units of analysis in biogeography, ecology, evolutionary biology, and conservation biology [3,4]. However, Sites and Marshall [1] noted "signs of a Renaissance" at the time of their review, which was precipitated in part by others emphasizing the need to distinguish between a non-operational, ontological definition of species, versus the empirical (operational) data needed to test their reality [5-7]. De Queiroz [7] (p. 60) noted that "All modern species definitions either explicitly or implicitly equate species with segments of population level evolutionary lineages." De Queiroz also noted that this was a revised version of Simpson's "evolutionary species concept", which defines a species as "a lineage (an ancestral-descendent sequence of populations) evolving separately from others and with its own evolutionary role and tendencies" ([8], p. 153), and called this a General Lineage Concept (GLC) of species ([7], p. 65). De Queiroz [9] further emphasized that the multiple empirical criteria simply reflect the many contingent properties (differences in genetic or morphological features, adaptive zones or ecological niches, mate-recognition systems, reproductive compatibility, monophyly, etc.) of diverging populations associated with different evolutionary processes operating in various geographic contexts [10,11]. Sites and Marshall [1] noted that the emerging consensus among systematists and evolutionary biologists was based on the utility of this distinction (ontological definition vs. empirical species delimitation [SDL] meth-

110 citations


Cites background from "Sequencing technologies-the next ge..."

  • ..., SOLiD 454, Illumina, Solexa, etc; [83]), and con‐ tinuing today with the recently introduced third-generation ‘nanopore’ sequencing [84,85]....


Journal ArticleDOI
17 Jun 2011-Leukemia
TL;DR: This multicenter analysis demonstrated that amplicon-based deep sequencing is technically feasible, achieves high concordance across multiple laboratories and allows a broad and in-depth molecular characterization of cancer specimens with high diagnostic sensitivity.
Abstract: Massively parallel pyrosequencing allows sensitive deep sequencing to detect molecular aberrations. Thus far, data are limited on the technical performance in a clinical diagnostic setting. Here, we investigated as an international consortium the robustness, precision and reproducibility of amplicon next-generation deep sequencing across 10 laboratories in eight countries. In a cohort of 18 chronic myelomonocytic leukemia patients, mutational analyses were performed on TET2, a frequently mutated gene in myeloproliferative neoplasms. Additionally, hotspot regions of CBL and KRAS were investigated. The study was executed using GS FLX sequencing instruments and the small volume 454 Life Sciences Titanium emulsion PCR setup. We report a high concordance in mutation detection across all laboratories, including a robust detection of novel variants that were undetected by standard Sanger sequencing. The sensitivity to detect low-level variants, present at frequencies as low as 1-2% compared with the 20% threshold for Sanger-based sequencing, is increased. Together with the output of high-quality long reads and fast run time, we demonstrate the utility of deep sequencing in clinical applications. In conclusion, this multicenter analysis demonstrated that amplicon-based deep sequencing is technically feasible, achieves high concordance across multiple laboratories and allows a broad and in-depth molecular characterization of cancer specimens with high diagnostic sensitivity.

110 citations

References
Journal ArticleDOI
TL;DR: The RNA-Seq approach to transcriptome profiling that uses deep-sequencing technologies provides a far more precise measurement of levels of transcripts and their isoforms than other methods.
Abstract: RNA-Seq is a recently developed approach to transcriptome profiling that uses deep-sequencing technologies. Studies using this method have already altered our view of the extent and complexity of eukaryotic transcriptomes. RNA-Seq also provides a far more precise measurement of levels of transcripts and their isoforms than other methods. This article describes the RNA-Seq approach, the challenges associated with its application, and the advances made so far in characterizing several eukaryote transcriptomes.

11,528 citations


"Sequencing technologies-the next ge..." refers background in this paper

  • ...For example, in gene-expression studies microarrays are now being replaced by seq-based methods , which can identify and quantify rare transcripts without prior knowledge of a particular gene and can provide information regarding alternative splicing and sequence variation in identified gene...


Journal ArticleDOI
TL;DR: Velvet represents a new approach to assembly that can leverage very short reads in combination with read pairs to produce useful assemblies and is in close agreement with simulated results without read-pair information.
Abstract: We have developed a new set of algorithms, collectively called "Velvet," to manipulate de Bruijn graphs for genomic sequence assembly. A de Bruijn graph is a compact representation based on short words (k-mers) that is ideal for high coverage, very short read (25-50 bp) data sets. Applying Velvet to very short reads and paired-ends information only, one can produce contigs of significant length, up to 50-kb N50 length in simulations of prokaryotic data and 3-kb N50 on simulated mammalian BACs. When applied to real Solexa data sets without read pairs, Velvet generated contigs of approximately 8 kb in a prokaryote and 2 kb in a mammalian BAC, in close agreement with our simulated results without read-pair information. Velvet represents a new approach to assembly that can leverage very short reads in combination with read pairs to produce useful assemblies.

9,389 citations

Journal ArticleDOI
15 Sep 2005-Nature
TL;DR: A scalable, highly parallel sequencing system with raw throughput significantly greater than that of state-of-the-art capillary electrophoresis instruments with 96% coverage at 99.96% accuracy in one run of the machine is described.
Abstract: The proliferation of large-scale DNA-sequencing projects in recent years has driven a search for alternative methods to reduce time and cost. Here we describe a scalable, highly parallel sequencing system with raw throughput significantly greater than that of state-of-the-art capillary electrophoresis instruments. The apparatus uses a novel fibre-optic slide of individual wells and is able to sequence 25 million bases, at 99% or better accuracy, in one four-hour run. To achieve an approximately 100-fold increase in throughput over current Sanger sequencing technology, we have developed an emulsion method for DNA amplification and an instrument for sequencing by synthesis using a pyrosequencing protocol optimized for solid support and picolitre-scale volumes. Here we show the utility, throughput, accuracy and robustness of this system by shotgun sequencing and de novo assembly of the Mycoplasma genitalium genome with 96% coverage at 99.96% accuracy in one run of the machine.

8,434 citations

Journal ArticleDOI
20 Feb 2009-Cell
TL;DR: This work has revealed unexpected diversity in their biogenesis pathways and the regulatory mechanisms that they access, which has direct implications for fundamental biology as well as disease etiology and treatment.

4,490 citations


"Sequencing technologies-the next ge..." refers background in this paper

  • ...and to elucidate the role of non-coding RNAs in health and diseas...


Journal ArticleDOI
20 Feb 2009-Cell
TL;DR: The evolution of long noncoding RNAs and their roles in transcriptional regulation, epigenetic gene regulation, and disease are reviewed.

4,277 citations


"Sequencing technologies-the next ge..." refers background in this paper

  • ...and to elucidate the role of non-coding RNAs in health and diseas...
