Author

François Pompanon

Bio: François Pompanon is an academic researcher at the University of Savoy. He has contributed to research on topics including Population and Domestication, has an h-index of 42, and has co-authored 82 publications receiving 13,199 citations. His previous affiliations include Joseph Fourier University and the Centre national de la recherche scientifique.


Papers
Journal Article
TL;DR: Four case studies are considered, representing a wide variety of population genetics investigations that differ in their sampling strategies, in the type of organism studied (plant or animal) and in the molecular markers used [microsatellites or amplified fragment length polymorphisms (AFLPs)]; the genotyping error rate is estimated for each.
Abstract: Genotyping errors occur when the genotype determined after molecular analysis does not correspond to the real genotype of the individual under consideration. Virtually every genetic data set includes some erroneous genotypes, but genotyping errors remain a taboo subject in population genetics, even though they might greatly bias the final conclusions, especially for studies based on individual identification. Here, we consider four case studies representing a large variety of population genetics investigations differing in their sampling strategies (noninvasive or traditional), in the type of organism studied (plant or animal) and the molecular markers used [microsatellites or amplified fragment length polymorphisms (AFLPs)]. In these data sets, the estimated genotyping error rate ranges from 0.8% for microsatellite loci from bear tissues to 2.6% for AFLP loci from dwarf birch leaves. Main sources of errors were allelic dropouts for microsatellites and differences in peak intensities for AFLPs, but in both cases human factors were non-negligible error generators. Therefore, tracking genotyping errors and identifying their causes are necessary to clean up the data sets and validate the final results according to the precision required. In addition, we propose the outline of a protocol designed to limit and quantify genotyping errors at each step of the genotyping process. In particular, we recommend (i) several efficient precautions to prevent contaminations and technical artefacts; (ii) systematic use of blind samples and automation; (iii) experience and rigor for laboratory work and scoring; and (iv) systematic reporting of the error rate in population genetics studies.
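A minimal Python sketch of the error-rate bookkeeping this protocol implies, assuming genotypes are recorded as allele-pair tuples and each duplicate pair re-scores one locus; the data layout and function names are our illustration, not the paper's:

def allelic_mismatches(g1, g2):
    # Count allele differences between two genotypes given as (a, b) tuples.
    return sum(a != b for a, b in zip(sorted(g1), sorted(g2)))

def per_allele_error_rate(duplicate_pairs):
    # duplicate_pairs: list of (genotype_1, genotype_2) from re-genotyped
    # sample/locus combinations; returns mismatching alleles / alleles compared.
    if not duplicate_pairs:
        return 0.0
    mismatches = sum(allelic_mismatches(g1, g2) for g1, g2 in duplicate_pairs)
    return mismatches / (2 * len(duplicate_pairs))

# One allelic dropout (allele 148 missed on re-scoring) across two loci:
pairs = [((148, 152), (152, 152)), ((100, 104), (100, 104))]
print(per_allele_error_rate(pairs))  # 1 mismatch / 4 alleles = 0.25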

1,391 citations

Journal Article
TL;DR: A novel approach, based on the limited persistence of DNA in the environment, detects the presence of a species in fresh water; specific primers that amplify short mitochondrial DNA sequences track the presence of a frog in controlled environments and natural wetlands.
Abstract: The assessment of species distribution is a first critical phase of biodiversity studies and is necessary to many disciplines such as biogeography, conservation biology and ecology. However, several species are difficult to detect, especially during particular time periods or developmental stages, potentially biasing study outcomes. Here we present a novel approach, based on the limited persistence of DNA in the environment, to detect the presence of a species in fresh water. We used specific primers that amplify short mitochondrial DNA sequences to track the presence of a frog (Rana catesbeiana) in controlled environments and natural wetlands. A multi-sampling approach allowed for species detection in all environments where it was present, even at low densities. The reliability of the results was demonstrated by the identification of amplified DNA fragments, using traditional sequencing and parallel pyrosequencing techniques. As the environment can retain the molecular imprint of inhabiting species, our approach allows the reliable detection of secretive organisms in wetlands without direct observation. Combined with massive sequencing and the development of DNA barcodes that enable species identification, this approach opens new perspectives for the assessment of current biodiversity from environmental samples.
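The benefit of the multi-sampling strategy can be made concrete with a back-of-the-envelope model: if each water sample independently detects the target DNA with probability p (a modelling assumption of ours, not a figure from the paper), then n samples detect the species with probability 1 - (1 - p)^n. A short Python sketch:

def detection_probability(p_single, n_samples):
    # Probability that at least one of n independent samples detects the target.
    return 1.0 - (1.0 - p_single) ** n_samples

for n in (1, 3, 6, 12):
    print(n, round(detection_probability(0.3, n), 3))
# 1 0.3, 3 0.657, 6 0.882, 12 0.986: a weak per-sample signal becomes a
# near-certain detection once enough samples are taken.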

1,226 citations

Journal Article
TL;DR: DNA metabarcoding has enormous near-term potential to boost data acquisition in biodiversity research: further progress in DNA sequencing will eliminate the currently required DNA amplification step, and comprehensive taxonomic reference libraries can be built from the well-curated DNA extract collections maintained by standardized barcoding initiatives.
Abstract: Virtually all empirical ecological studies require species identification during data collection. DNA metabarcoding refers to the automated identification of multiple species from a single bulk sample containing entire organisms or from a single environmental sample containing degraded DNA (soil, water, faeces, etc.). It can be implemented for both modern and ancient environmental samples. The availability of next-generation sequencing platforms and the ecologists' need for high-throughput taxon identification have facilitated the emergence of DNA metabarcoding. The potential power of DNA metabarcoding as it is implemented today is limited mainly by its dependency on PCR and by the considerable investment needed to build comprehensive taxonomic reference libraries. Further developments associated with the impressive progress in DNA sequencing will eliminate the currently required DNA amplification step, and comprehensive taxonomic reference libraries composed of whole organellar genomes and repetitive ribosomal nuclear DNA can be built based on the well-curated DNA extract collections maintained by standardized barcoding initiatives. The near-term future of DNA metabarcoding has an enormous potential to boost data acquisition in biodiversity research.
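The identification step at the core of DNA metabarcoding reduces to matching each read against a taxonomic reference library. A deliberately simplified Python sketch using exact-match lookup and invented reference entries; real pipelines use alignment or k-mer classifiers against curated databases:

from collections import Counter

reference = {  # barcode sequence -> taxon; entries are hypothetical
    "ACGTTGCA": "Rana catesbeiana",
    "ACGTTGGA": "Rana temporaria",
}

reads = ["ACGTTGCA", "ACGTTGCA", "ACGTTGGA", "ACGTNNCA"]

counts = Counter(reads)  # dereplication: unique sequences with abundances
for seq, n in counts.items():
    print(seq, n, reference.get(seq, "unassigned"))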

1,216 citations

Journal Article
TL;DR: A protocol for estimating error rates is proposed, and it is recommended that these measures be systematically reported to attest to the reliability of published genotyping studies.
Abstract: Although genotyping errors affect most data and can markedly influence the biological conclusions of a study, they are too often neglected. Errors have various causes, but their occurrence and effect can be limited by considering these causes in the production and analysis of the data. Procedures that have been developed for dealing with errors in linkage studies, forensic analyses and non-invasive genotyping should be applied more broadly to any genetic study. We propose a protocol for estimating error rates and recommend that these measures be systematically reported to attest to the reliability of published genotyping studies.
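One estimator consistent with this recommendation, in our notation rather than the paper's: if m allelic mismatches are observed when r single-locus genotypes are independently repeated, the per-allele error rate is

\hat{e} = \frac{m}{2r}

and a multilocus rate follows by summing m and 2r over loci. Reporting \hat{e} together with the number of replicates is what makes the reliability of a published data set auditable.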

1,143 citations

Journal Article
TL;DR: The power and pitfalls of NGS diet methods are reviewed; the critical factors to take into account when choosing or designing a suitable barcode are presented; and the validation of data accuracy, including the viability of producing quantitative data, is discussed.
Abstract: The analysis of food webs and their dynamics facilitates understanding of the mechanistic processes behind community ecology and ecosystem functions. Having accurate techniques for determining dietary ranges and components is critical for this endeavour. While visual analyses and early molecular approaches are highly labour intensive and often lack resolution, recent DNA-based approaches potentially provide more accurate methods for dietary studies. A suite of approaches have been used based on the identification of consumed species by characterization of DNA present in gut or faecal samples. In one approach, a standardized DNA region (DNA barcode) is PCR amplified, amplicons are sequenced and then compared to a reference database for identification. Initially, this involved sequencing clones from PCR products, and studies were limited in scale because of the costs and effort required. The recent development of next generation sequencing (NGS) has made this approach much more powerful, by allowing the direct characterization of dozens of samples with several thousand sequences per PCR product, and has the potential to reveal many consumed species simultaneously (DNA metabarcoding). Continual improvement of NGS technologies, on-going decreases in costs and current massive expansion of reference databases make this approach promising. Here we review the power and pitfalls of NGS diet methods. We present the critical factors to take into account when choosing or designing a suitable barcode. Then, we consider both technical and analytical aspects of NGS diet studies. Finally, we discuss the validation of data accuracy including the viability of producing quantitative data.
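As a minimal illustration of the quantification question, a Python sketch turning taxon-assigned reads from one faecal sample into relative read abundances; the taxa and counts are invented, and, per the review's caveat, such proportions are at best semi-quantitative proxies for diet composition:

from collections import Counter

assigned_reads = ["Festuca rubra", "Festuca rubra", "Carex sp.", "Festuca rubra"]

counts = Counter(assigned_reads)
total = sum(counts.values())
for taxon, n in counts.items():
    print(f"{taxon}: {n}/{total} reads = {n / total:.0%}")
# Festuca rubra: 3/4 reads = 75%
# Carex sp.: 1/4 reads = 25%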

958 citations


Cited by
Journal Article
Fumio Tajima
30 Oct 1989 - Genomics
TL;DR: It is suggested that natural selection against large insertions/deletions is so weak that a large amount of variation is maintained in a population.

11,521 citations

Journal Article
TL;DR: For the next few weeks the course will explore a field that is actually older than classical population genetics, although the approach taken involves the use of population-genetic machinery.
Abstract: So far in this course we have dealt entirely with the evolution of characters that are controlled by simple Mendelian inheritance at a single locus. There are notes on the course website about gametic disequilibrium and how allele frequencies change at two loci simultaneously, but we didn’t discuss them. In every example we’ve considered we’ve imagined that we could understand something about evolution by examining the evolution of a single gene. That’s the domain of classical population genetics. For the next few weeks we’re going to be exploring a field that’s actually older than classical population genetics, although the approach we’ll be taking to it involves the use of population genetic machinery. If you know a little about the history of evolutionary biology, you may know that after the rediscovery of Mendel’s work in 1900 there was a heated debate between the “biometricians” (e.g., Galton and Pearson) and the “Mendelians” (e.g., de Vries, Correns, Bateson, and Morgan). Biometricians asserted that the really important variation in evolution didn’t follow Mendelian rules. Height, weight, skin color, and similar traits seemed to

9,847 citations

01 Aug 2000
TL;DR: A Bioentrepreneur course on the assessment of medical technology in the context of commercialization, addressing many issues unique to biomedical products.
Abstract: BIOE 402. Medical Technology Assessment. 2 or 3 hours. Bioentrepreneur course. Assessment of medical technology in the context of commercialization. Objectives, competition, market share, funding, pricing, manufacturing, growth, and intellectual property; many issues unique to biomedical products. Course Information: 2 undergraduate hours. 3 graduate hours. Prerequisite(s): Junior standing or above and consent of the instructor.

4,833 citations

Journal Article
TL;DR: This paper showed that the likelihood equations used by versions 1.0 and 2.0 of CERVUS to accommodate genotyping error miscalculate the probability of observing an erroneous genotype.
Abstract: Genotypes are frequently used to identify parentage. Such analysis is notoriously vulnerable to genotyping error, and there is ongoing debate regarding how to solve this problem. Many scientists have used the computer program CERVUS to estimate parentage, and have taken advantage of its option to allow for genotyping error. In this study, we show that the likelihood equations used by versions 1.0 and 2.0 of CERVUS to accommodate genotyping error miscalculate the probability of observing an erroneous genotype. Computer simulation and reanalysis of paternity in Rum red deer show that correcting this error increases success in paternity assignment, and that there is a clear benefit to accommodating genotyping errors when errors are present. A new version of CERVUS (3.0) implementing the corrected likelihood equations is available at http://www.fieldgenetics.com.
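For intuition, a hedged Python sketch of a genotype-replacement error model of the general kind CERVUS implements; this is our simplification for illustration, not the corrected equations from the paper. With error rate e, the observed genotype equals the true one with probability 1 - e and is otherwise an independent draw from population genotype frequencies:

def p_observed_given_true(observed, true, e, genotype_freq):
    # Replacement model: correct call with probability 1 - e, otherwise a
    # draw from the population genotype frequencies.
    match = 1.0 if observed == true else 0.0
    return (1.0 - e) * match + e * genotype_freq[observed]

freqs = {("A", "A"): 0.25, ("A", "B"): 0.50, ("B", "B"): 0.25}  # toy values
print(p_observed_given_true(("A", "B"), ("A", "A"), e=0.01, genotype_freq=freqs))
# 0.01 * 0.50 = 0.005: small, but ignoring such terms distorts the
# paternity likelihoods the paper corrects.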

4,562 citations

01 Feb 2015
TL;DR: In this article, the authors describe the integrative analysis of 111 reference human epigenomes generated as part of the NIH Roadmap Epigenomics Consortium, profiled for histone modification patterns, DNA accessibility, DNA methylation and RNA expression.
Abstract: The reference human genome sequence set the stage for studies of genetic variation and its association with human disease, but epigenomic studies lack a similar reference. To address this need, the NIH Roadmap Epigenomics Consortium generated the largest collection so far of human epigenomes for primary cells and tissues. Here we describe the integrative analysis of 111 reference human epigenomes generated as part of the programme, profiled for histone modification patterns, DNA accessibility, DNA methylation and RNA expression. We establish global maps of regulatory elements, define regulatory modules of coordinated activity, and identify their likely activators and repressors. We show that disease- and trait-associated genetic variants are enriched in tissue-specific epigenomic marks, revealing biologically relevant cell types for diverse human traits, and providing a resource for interpreting the molecular basis of human disease. Our results demonstrate the central role of epigenomic information for understanding gene regulation, cellular differentiation and human disease.
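The enrichment analyses described here come down to a contingency-table test per mark and cell type. A Python sketch with invented counts (the Consortium's actual pipeline is more elaborate), testing whether trait-associated variants fall inside a tissue-specific mark more often than background variants do:

from scipy.stats import fisher_exact

# Rows: trait-associated vs. background variants; columns: inside vs.
# outside the mark. All counts are invented for illustration.
table = [[40, 60],
         [2000, 8000]]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")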

4,409 citations