Author

Paolo Carnevali

Bio: Paolo Carnevali is an academic researcher. The author has contributed to research on topics including the genome and the human genome, has an h-index of 5, and has co-authored 6 publications receiving 1,400 citations.

Papers
Journal ArticleDOI
01 Jan 2010 - Science
TL;DR: A genome sequencing platform that achieves efficient imaging and low reagent consumption with combinatorial probe anchor ligation chemistry to independently assay each base from patterned nanoarrays of self-assembling DNA nanoballs is described.
Abstract: Genome sequencing of large numbers of individuals promises to advance the understanding, treatment, and prevention of human diseases, among other applications. We describe a genome sequencing platform that achieves efficient imaging and low reagent consumption with combinatorial probe anchor ligation chemistry to independently assay each base from patterned nanoarrays of self-assembling DNA nanoballs. We sequenced three human genomes with this platform, generating an average of 45- to 87-fold coverage per genome and identifying 3.2 to 4.5 million sequence variants per genome. Validation of one genome data set demonstrates a sequence accuracy of about 1 false variant per 100 kilobases. The high accuracy, affordable cost of $4400 for sequencing consumables, and scalability of this platform enable complete human genome sequencing for the detection of rare variants in large-scale genetic studies.
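As a rough sanity check on the reported accuracy, the figure of about 1 false variant per 100 kilobases implies on the order of tens of thousands of false calls genome-wide; a back-of-envelope sketch follows (the callable-genome size below is an assumed illustrative value, not a figure from the paper):

    # Back-of-envelope check of the reported accuracy of ~1 false variant
    # per 100 kilobases (the callable-genome size is an assumed value).
    callable_bases = 2.9e9          # assumed callable bases per human genome
    false_per_base = 1 / 100_000    # ~1 false variant per 100 kb (from the abstract)
    variants_called = 3.2e6         # lower end of variants reported per genome

    expected_false = callable_bases * false_per_base
    print(f"Expected false variant calls per genome: ~{expected_false:,.0f}")
    print(f"Implied false-call fraction: ~{expected_false / variants_called:.1%}")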

1,343 citations

Patent
28 Apr 2010
TL;DR: In this article, the authors present an approach for calling variations in a sample polynucleotide sequence compared to a reference polynucleotide sequence. The approach locates local areas in the reference where a likelihood exists that one or more bases of the sample polynucleotide sequence are changed from the corresponding bases in the reference polynucleotide sequence.
Abstract: Embodiments for calling variations in a sample polynucleotide sequence compared to a reference polynucleotide sequence are provided. Aspects of the embodiments include executing an application on at least one computer that locates local areas in the reference polynucleotide sequence where a likelihood exists that one or more bases of the sample polynucleotide sequence are changed from corresponding bases in the reference polynucleotide sequence, where the likelihood is determined at least in part based on mapped mated reads of the sample polynucleotide sequence; generating at least one sequence hypothesis for each of the local areas, and optimizing the at least one sequence hypothesis for at least a portion of the local areas to find one or more optimized sequence hypotheses of high probability for the local areas; and analyzing the optimized sequence hypotheses to identify a series of variation calls in the sample polynucleotide sequence.
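A minimal sketch of the general idea of scoring alternative sequence hypotheses for a local area against overlapping reads, assuming a simple independent per-base error model; the functions, error rate, and exhaustive search below are illustrative assumptions, not the method claimed in the patent.

    import math
    from itertools import product

    def read_log_likelihood(read, hypothesis, error_rate=0.01):
        """Log-probability of a read given a hypothesized local sequence,
        under a simple independent per-base error model (illustrative only)."""
        score = 0.0
        for r, h in zip(read, hypothesis):
            score += math.log(1 - error_rate) if r == h else math.log(error_rate)
        return score

    def best_hypothesis(reads, length):
        """Exhaustively score every sequence of a given (small) length and
        return the most likely one; real callers search far more selectively."""
        best, best_score = None, float("-inf")
        for hypothesis in ("".join(p) for p in product("ACGT", repeat=length)):
            score = sum(read_log_likelihood(r, hypothesis) for r in reads)
            if score > best_score:
                best, best_score = hypothesis, score
        return best, best_score

    # Toy example: three reads over a 4-base local area, one with a sequencing error.
    reads = ["ACGT", "ACGT", "ACTT"]
    print(best_hypothesis(reads, 4))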

61 citations

Journal ArticleDOI
TL;DR: The Human Pangenome Reference Consortium (HPRC), as mentioned in this paper, was formed with the goal of creating high-quality, cost-effective, diploid genome assemblies for a pangenome reference that represents human genetic diversity.
Abstract: The current human reference genome, GRCh38, represents over 20 years of effort to generate a high-quality assembly, which has benefitted society1,2. However, it still has many gaps and errors, and does not represent a biological genome as it is a blend of multiple individuals3,4. Recently, a high-quality telomere-to-telomere reference, CHM13, was generated with the latest long-read technologies, but it was derived from a hydatidiform mole cell line with a nearly homozygous genome5. To address these limitations, the Human Pangenome Reference Consortium formed with the goal of creating high-quality, cost-effective, diploid genome assemblies for a pangenome reference that represents human genetic diversity6. Here, in our first scientific report, we determined which combination of current genome sequencing and assembly approaches yield the most complete and accurate diploid genome assembly with minimal manual curation. Approaches that used highly accurate long reads and parent-child data with graph-based haplotype phasing during assembly outperformed those that did not. Developing a combination of the top-performing methods, we generated our first high-quality diploid reference assembly, containing only approximately four gaps per chromosome on average, with most chromosomes within ±1% of the length of CHM13. Nearly 48% of protein-coding genes have non-synonymous amino acid changes between haplotypes, and centromeric regions showed the highest diversity. Our findings serve as a foundation for assembling near-complete diploid human genomes at scale for a pangenome reference to capture global genetic variation from single nucleotides to structural rearrangements.

56 citations

Patent
11 Oct 2011
TL;DR: In this article, the authors present methods, apparatuses, and systems for identification of junctions (e.g., resulting from large-scale rearrangements) of a sequenced genome with respect to a human genome reference sequence.
Abstract: Methods, apparatuses, and systems for identification of junctions (e.g., resulting from large-scale rearrangements) of a sequenced genome with respect to a human genome reference sequence are provided. For example, false positives can be distinguished from actual junctions. Such false positives can result from many sources, including mismapping, chimeric reactions among the DNA of a sample, and problems with the reference genome. As part of the filtering process, a base pair resolution (or near base pair resolution) of a junction can be provided. In various implementations, junctions can be identified using discordant mate pairs and/or using a statistical analysis of the length distributions of fragments for local regions of the sample genome. Clinically significant junctions can also be identified so that further analysis can be focused on genomic regions that may have more of an impact on the health of a patient.
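A minimal sketch of the discordant-mate-pair idea mentioned above, assuming the library's expected insert-size mean and standard deviation are known in advance; the data layout and the 4-sigma threshold are illustrative assumptions, not the patented method.

    def discordant_pairs(mate_pairs, insert_mean, insert_sd, n_sigma=4):
        """Flag mate pairs whose implied fragment length falls far outside the
        library's expected insert-size distribution; the 4-sigma threshold is
        an arbitrary illustrative choice."""
        flagged = []
        for start, end in mate_pairs:
            length = abs(end - start)
            if abs(length - insert_mean) > n_sigma * insert_sd:
                flagged.append((start, end, length))
        return flagged

    # Toy library with ~500 bp inserts plus one pair spanning a putative junction.
    pairs = [(100, 610), (2_000, 2_480), (5_000, 5_520), (7_000, 57_000)]
    print(discordant_pairs(pairs, insert_mean=500, insert_sd=30))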

29 citations

Patent
17 Sep 2012
TL;DR: In this paper, a variant score is determined for a variant relative to a reference, and that score is used to derive a calibrated score indicating the likelihood that the variant call is correct.
Abstract: After DNA fragments are sequenced and mapped to a reference, various hypotheses for the sequences in a variant region can be scored to find which sequence hypotheses are more likely. A hypothesis can include a specific variable fraction for the plurality of alleles that comprise the sequence hypothesis in the region. A likelihood of each hypothesis can be determined using a probability that accounts for the fraction of the alleles specified in the respective sequence hypothesis. Thus, other hypotheses besides standard homozygous and equal heterozygous (i.e., one chromosome with A and one with B in a cell) can be explored by explicitly including the variable fractions of the alleles as a parameter in the optimization. Also, a variant score can be determined for a variant relative to a reference. The variant score can be used to determine a variant calibrated score indicating a likelihood that the variant call is correct.
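A minimal sketch of scoring allele-fraction hypotheses beyond the standard homozygous and 50/50 heterozygous cases, assuming a simple binomial read-count model; the function names, error model, and fractions below are illustrative assumptions rather than the patent's actual likelihood computation.

    from math import comb, log

    def log_likelihood(alt_reads, total_reads, alt_fraction, error_rate=0.01):
        """Log-likelihood of observing alt_reads alternate-allele reads out of
        total_reads, given a hypothesized alternate-allele fraction and a simple
        symmetric sequencing-error model (illustrative assumptions)."""
        p_alt = alt_fraction * (1 - error_rate) + (1 - alt_fraction) * error_rate
        return (log(comb(total_reads, alt_reads))
                + alt_reads * log(p_alt)
                + (total_reads - alt_reads) * log(1 - p_alt))

    # Compare hypotheses beyond the standard 0 / 0.5 / 1 allele fractions,
    # e.g. a 0.25 fraction such as might arise in a mixed or mosaic sample.
    observed_alt, depth = 12, 48
    for fraction in (0.0, 0.25, 0.5, 1.0):
        print(fraction, round(log_likelihood(observed_alt, depth, fraction), 2))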

23 citations


Cited by
Journal ArticleDOI
TL;DR: The GATK programming framework enables developers and analysts to quickly and easily write efficient and robust NGS tools, many of which have already been incorporated into large-scale sequencing projects like the 1000 Genomes Project and The Cancer Genome Atlas.
Abstract: Next-generation DNA sequencing (NGS) projects, such as the 1000 Genomes Project, are already revolutionizing our understanding of genetic variation among individuals. However, the massive data sets generated by NGS—the 1000 Genome pilot alone includes nearly five terabases—make writing feature-rich, efficient, and robust analysis tools difficult for even computationally sophisticated individuals. Indeed, many professionals are limited in the scope and the ease with which they can answer scientific questions by the complexity of accessing and manipulating the data produced by these machines. Here, we discuss our Genome Analysis Toolkit (GATK), a structured programming framework designed to ease the development of efficient and robust analysis tools for next-generation DNA sequencers using the functional programming philosophy of MapReduce. The GATK provides a small but rich set of data access patterns that encompass the majority of analysis tool needs. Separating specific analysis calculations from common data management infrastructure enables us to optimize the GATK framework for correctness, stability, and CPU and memory efficiency and to enable distributed and shared memory parallelization. We highlight the capabilities of the GATK by describing the implementation and application of robust, scale-tolerant tools like coverage calculators and single nucleotide polymorphism (SNP) calling. We conclude that the GATK programming framework enables developers and analysts to quickly and easily write efficient and robust NGS tools, many of which have already been incorporated into large-scale sequencing projects like the 1000 Genomes Project and The Cancer Genome Atlas.
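A minimal sketch of the map/reduce style of per-position analysis described above, using a toy coverage calculator; this is not GATK code, and the read representation is an assumption for illustration.

    from collections import Counter
    from functools import reduce

    def map_read(read):
        """Map step: emit a per-position coverage count for one aligned read,
        represented here as a (start, length) tuple (a simplified stand-in)."""
        start, length = read
        return Counter(range(start, start + length))

    def reduce_counts(a, b):
        """Reduce step: merge per-position coverage counts from two shards."""
        return a + b

    # Map each read (or shard of reads), then reduce the partial results
    # into a single coverage table.
    reads = [(100, 50), (120, 50), (130, 50)]
    coverage = reduce(reduce_counts, map(map_read, reads), Counter())
    print(coverage[125], coverage[145], coverage[169])   # depths 2, 3, 2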

20,557 citations

28 Jul 2005
TL;DR: PfPMP1 interacts with one or more receptors on infected erythrocytes, dendritic cells, and the placenta, playing a key role in adhesion and immune evasion.

Abstract: Antigenic variation allows many pathogenic microorganisms to readily evade host immune responses. Plasmodium falciparum erythrocyte surface protein 1 (PfPMP1), expressed on the surface of infected erythrocytes, interacts with one or more receptors on infected erythrocytes, endothelial cells, dendritic cells, and the placenta, and plays a key role in adhesion and immune evasion. Each haploid genome encodes roughly 60 members of the var gene family, and switching transcription among different var gene variants provides the molecular basis for antigenic variation.

18,940 citations

Journal ArticleDOI
TL;DR: A unified analytic framework to discover and genotype variation among multiple samples simultaneously that achieves sensitive and specific results across five sequencing technologies and three distinct, canonical experimental designs is presented.
Abstract: Recent advances in sequencing technology make it possible to comprehensively catalogue genetic variation in population samples, creating a foundation for understanding human disease, ancestry and evolution. The amounts of raw data produced are prodigious and many computational steps are required to translate this output into high-quality variant calls. We present a unified analytic framework to discover and genotype variation among multiple samples simultaneously that achieves sensitive and specific results across five sequencing technologies and three distinct, canonical experimental designs. Our process includes (1) initial read mapping; (2) local realignment around indels; (3) base quality score recalibration; (4) SNP discovery and genotyping to find all potential variants; and (5) machine learning to separate true segregating variation from machine artifacts common to next-generation sequencing technologies. We discuss the application of these tools, instantiated in the Genome Analysis Toolkit (GATK), to deep whole-genome, whole-exome capture, and multi-sample low-pass (~4×) 1000 Genomes Project datasets.
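A minimal sketch of step (3), base quality score recalibration, comparing reported base qualities with empirical mismatch rates at sites assumed to be non-variant; the toy data, smoothing, and Phred conversion below are illustrative assumptions, not the GATK implementation.

    import math
    from collections import defaultdict

    def recalibrate(observations):
        """Estimate an empirical Phred quality for each reported-quality bin by
        counting observed mismatches at sites assumed to be non-variant
        (a toy illustration of base quality score recalibration)."""
        counts = defaultdict(lambda: [0, 0])       # reported_q -> [mismatches, total]
        for reported_q, is_mismatch in observations:
            counts[reported_q][0] += int(is_mismatch)
            counts[reported_q][1] += 1
        table = {}
        for reported_q, (mismatches, total) in counts.items():
            error_rate = (mismatches + 1) / (total + 2)    # smoothed estimate
            table[reported_q] = round(-10 * math.log10(error_rate), 1)
        return table

    # Toy input: (reported quality, observed mismatch?) pairs at known sites.
    observations = ([(30, False)] * 9990 + [(30, True)] * 10 +
                    [(25, False)] * 9900 + [(25, True)] * 100)
    print(recalibrate(observations))   # reported Q30 -> ~Q29.6, reported Q25 -> ~Q20.0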

10,056 citations

Journal ArticleDOI
01 Nov 2012 - Nature
TL;DR: It is shown that evolutionary conservation and coding consequence are key determinants of the strength of purifying selection, that rare-variant load varies substantially across biological pathways, and that each individual contains hundreds of rare non-coding variants at conserved sites, such as motif-disrupting changes in transcription-factor-binding sites.
Abstract: By characterizing the geographic and functional spectrum of human genetic variation, the 1000 Genomes Project aims to build a resource to help to understand the genetic contribution to disease. Here we describe the genomes of 1,092 individuals from 14 populations, constructed using a combination of low-coverage whole-genome and exome sequencing. By developing methods to integrate information across several algorithms and diverse data sources, we provide a validated haplotype map of 38 million single nucleotide polymorphisms, 1.4 million short insertions and deletions, and more than 14,000 larger deletions. We show that individuals from different populations carry different profiles of rare and common variants, and that low-frequency variants show substantial geographic differentiation, which is further increased by the action of purifying selection. We show that evolutionary conservation and coding consequence are key determinants of the strength of purifying selection, that rare-variant load varies substantially across biological pathways, and that each individual contains hundreds of rare non-coding variants at conserved sites, such as motif-disrupting changes in transcription-factor-binding sites. This resource, which captures up to 98% of accessible single nucleotide polymorphisms at a frequency of 1% in related populations, enables analysis of common and low-frequency variants in individuals from diverse, including admixed, populations.

7,710 citations

Journal ArticleDOI
TL;DR: A technical review of template preparation, sequencing and imaging, genome alignment and assembly approaches, and recent advances in current and near-term commercially available NGS instruments is presented.
Abstract: Demand has never been greater for revolutionary technologies that deliver fast, inexpensive and accurate genome information. This challenge has catalysed the development of next-generation sequencing (NGS) technologies. The inexpensive production of large volumes of sequence data is the primary advantage over conventional methods. Here, I present a technical review of template preparation, sequencing and imaging, genome alignment and assembly approaches, and recent advances in current and near-term commercially available NGS instruments. I also outline the broad range of applications for NGS technologies, in addition to providing guidelines for platform selection to address biological questions of interest.

7,023 citations