Author

Anne-Lise Develle

Bio: Anne-Lise Develle is an academic researcher at the University of Savoy. She has contributed to research on topics including the Holocene and glacial periods, has an h-index of 15, and has co-authored 52 publications receiving 4,109 citations. Previous affiliations of Anne-Lise Develle include Aix-Marseille University and the Centre national de la recherche scientifique.


Papers
Journal ArticleDOI
01 May 2017
TL;DR: ROZA was developed under the umbrella of LTER-France (Long Term Ecological Research) to facilitate the re-use of data and samples, and will favor the use of paleodata by non-paleodata scientists, in particular ecologists.
Abstract: Managing paleoscience data is highly challenging due to the multiplicity of actors in play, types of sampling, analysis, post-analysis treatments, statistics, etc. However, well-structured curation of data would permit innovative developments based on data and/or sample re-use, such as meta-analyses or the development of new proxies on previously studied cores. In this paper, we present two recent initiatives that allowed us to tackle this objective at the French national level: the “National Cyber Core Repository” (NCCR) and the “LTER-France retro-observatory” (ROZA). NCCR was developed under the umbrella of the French National Center for Coring and Drilling (C2FN) thanks to the national excellence equipment project CLIMCOR. It aims at gathering on a single website the locations and metadata of any scientific coring/drilling performed by French teams or using French facilities, whatever the type of archive (lake/marine sediment, ice, etc.). It uses international standards, notably IGSN (for samples), ORCID (for persons) and DOI (for campaigns). NCCR follows the INSPIRE ISO 19115 standard to catalogue the data. For continental sediment, NCCR may be fed directly in the field through a specifically developed mobile application. Based on NCCR, further initiatives may be led. In particular, under the umbrella of LTER-France (Long Term Ecological Research), we developed ROZA to facilitate the re-use of data and samples. The idea is to capitalize on the knowledge of a given lake from which several sediment cores can be taken through time. To that end, we selected at least one lake from each of the 13 areas composing the LTER-France network. To enter the database, a set of mandatory data must be provided in a pre-determined format. The insertion of ROZA within the LTER network will thus favor the use of paleodata by non-paleodata scientists, in particular ecologists.
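The mandatory-field requirement described in the abstract can be sketched as a minimal validation step. The field names and identifier values below are illustrative assumptions, not the actual NCCR/ROZA schema:

```python
# Hypothetical sketch of a core-metadata record in the spirit of NCCR/ROZA.
# Field names and values are illustrative, not the actual database schema.

MANDATORY_FIELDS = {"igsn", "orcid", "campaign_doi", "lake", "coring_date"}

def missing_fields(record: dict) -> list:
    """Return the mandatory fields absent from a record (empty list = valid)."""
    return sorted(MANDATORY_FIELDS - record.keys())

record = {
    "igsn": "IEFRA0001",                         # sample identifier (IGSN; made-up value)
    "orcid": "0000-0002-1825-0097",              # person identifier (ORCID example value)
    "campaign_doi": "10.0000/example-campaign",  # campaign identifier (placeholder DOI)
    "lake": "Lac de la Thuile",
    "coring_date": "2013-06-15",
}

print(missing_fields(record))          # [] -> record satisfies the mandatory set
print(missing_fields({"lake": "X"}))   # lists the four missing identifiers
```

Rejecting records with a non-empty missing list is one simple way a database could enforce the pre-determined format before ingestion.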

3,648 citations

Journal ArticleDOI
TL;DR: A retro-observation approach based on lake sediment records to monitor micropollutants and to evaluate the long-term succession and diffuse transfer of herbicide, fungicide, and insecticide treatments in a vineyard catchment in France revealed how changes in these practices affect storage conditions and, consequently, the pesticides' transfer dynamics.
Abstract: Agricultural pesticide use has increased worldwide during the last several decades, but the long-term fate, storage, and transfer dynamics of pesticides in a changing environment are poorly understood. Many pesticides have been progressively banned, but in numerous cases, these molecules are stable and may persist in soils, sediments, and ice. Many studies have addressed the question of their possible remobilization as a result of global change. In this article, we present a retro-observation approach based on lake sediment records to monitor micropollutants and to evaluate the long-term succession and diffuse transfer of herbicide, fungicide, and insecticide treatments in a vineyard catchment in France. The sediment allows for a reliable reconstruction of past pesticide use through time, validated by the historical introduction, use, and banning of these organic and inorganic pesticides in local vineyards. Our results also revealed how changes in these practices affect storage conditions and, consequently, the pesticides' transfer dynamics. For example, the use of postemergence herbicides (glyphosate), which induce an increase in soil erosion, led to a release of a banned remnant pesticide (dichlorodiphenyltrichloroethane, DDT), which had been previously stored in vineyard soil, back into the environment. Management strategies of ecotoxicological risk would be well served by recognition of the diversity of compounds stored in various environmental sinks, such as agricultural soil, and their capability to become sources when environmental conditions change.

104 citations

Journal ArticleDOI
TL;DR: This work combined plant and mammal DNA metabarcoding analyses with sedimentological and geochemical analyses from three lake-catchment systems that are characterised by different erosion dynamics to elucidate and assess issues relating to DNA sources and transfer processes.
Abstract: Over the last decade, an increasing number of studies have used lake sediment DNA to trace past landscape changes, agricultural activities or human presence. However, the processes responsible for lake sediment formation and sediment properties might affect DNA records via taphonomic and analytical processes. It is crucial to understand these processes to ensure reliable interpretations for "palaeo" studies. Here, we combined plant and mammal DNA metabarcoding analyses with sedimentological and geochemical analyses from three lake-catchment systems that are characterised by different erosion dynamics. The new insights derived from this approach elucidate and assess issues relating to DNA sources and transfer processes. The sources of eroded materials strongly affect the "catchment-DNA" concentration in the sediments. For instance, erosion of upper organic and organo-mineral soil horizons provides a higher amount of plant DNA in lake sediments than deep horizons, bare soils or glacial flours. Moreover, high erosion rates, along with a well-developed hydrographic network, are proposed as factors positively affecting the representation of the catchment flora. The development of open and agricultural landscapes, which favour erosion, could thus bias the reconstructed landscape trajectory but help the recording of these human activities. Regarding domestic animals, pastoral practices and animal behaviour might affect their DNA record because they control the type of DNA source ("point" vs. "diffuse").

72 citations

Journal ArticleDOI
TL;DR: Lake La Thuile, in the Northern French Prealps (874 m asl), provides an 18-m-long sedimentary sequence spanning the entire Lateglacial/Holocene period.
Abstract: Lake La Thuile, in the Northern French Prealps (874 m asl), provides an 18-m-long sedimentary sequence spanning the entire Lateglacial/Holocene period. The high-resolution multi-proxy (sedimento…

63 citations

Journal ArticleDOI
TL;DR: In this paper, the first 14C-dated isotope record from the northern Levant is presented, based on oxygen isotopes from ostracod shells from lacustrine-palustrine deposits accumulated in a small karstic, hydrologically open basin (Yammouneh), located on the eastern flank of Mount Lebanon.

61 citations


Cited by
Journal ArticleDOI
TL;DR: Data access is improved with the release of a new RESTful API to support high-throughput programmatic access, an improved web interface and a new summary statistics database.
Abstract: The GWAS Catalog delivers a high-quality curated collection of all published genome-wide association studies, enabling investigations to identify causal variants, understand disease mechanisms, and establish targets for novel therapies. The scope of the Catalog has also expanded to targeted and exome arrays, with 1,000 new associations added for these technologies. As of September 2018, the Catalog contains 5,687 GWAS comprising 71,673 variant-trait associations from 3,567 publications. New content includes 284 full P-value summary statistics datasets for genome-wide and new targeted array studies, representing 6 × 10⁹ individual variant-trait statistics. In the last 12 months, the Catalog's user interface was accessed by ∼90,000 unique users who viewed >1 million pages. We have improved data access with the release of a new RESTful API to support high-throughput programmatic access, an improved web interface and a new summary statistics database. Summary statistics provision is supported by a new format proposed as a community standard for summary statistics data representation. This format was derived from our experience in standardizing heterogeneous submissions, mapping formats and in harmonizing content. Availability: https://www.ebi.ac.uk/gwas/.
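As a sketch of the high-throughput programmatic access the abstract mentions, the snippet below builds a request URL for one study. Only the site root (https://www.ebi.ac.uk/gwas/) comes from the abstract; the API base path and the `studies/{accession}` endpoint layout are assumptions for illustration:

```python
# Sketch of programmatic access via the new RESTful API.
# The API base path and the studies endpoint are assumptions for
# illustration; only the site root comes from the availability note.
from urllib.parse import urljoin, quote

GWAS_API = "https://www.ebi.ac.uk/gwas/rest/api/"  # assumed API base

def study_url(accession: str) -> str:
    """Build the URL for one study resource (hypothetical endpoint path)."""
    return urljoin(GWAS_API, "studies/" + quote(accession))

print(study_url("GCST000001"))
# -> https://www.ebi.ac.uk/gwas/rest/api/studies/GCST000001
```

An actual request would then be a plain HTTP GET on that URL returning JSON, under the same assumption about the endpoint layout.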

2,878 citations

Journal ArticleDOI
29 Mar 2021-BMJ
TL;DR: The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement was developed to facilitate transparent and complete reporting of systematic reviews, and has been updated (to PRISMA 2020) to reflect recent advances in systematic review methodology and terminology.
Abstract: The methods and results of systematic reviews should be reported in sufficient detail to allow users to assess the trustworthiness and applicability of the review findings. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement was developed to facilitate transparent and complete reporting of systematic reviews and has been updated (to PRISMA 2020) to reflect recent advances in systematic review methodology and terminology. Here, we present the explanation and elaboration paper for PRISMA 2020, where we explain why reporting of each item is recommended, present bullet points that detail the reporting recommendations, and present examples from published reviews. We hope that changes to the content and structure of PRISMA 2020 will facilitate uptake of the guideline and lead to more transparent, complete, and accurate reporting of systematic reviews.

2,217 citations

Journal ArticleDOI
TL;DR: A historical archive covering the past 15 years of GO data, with a consistent format and file structure for both the ontology and annotations, has been made available to support traceability and reproducibility.
Abstract: The Gene Ontology Consortium (GOC) provides the most comprehensive resource currently available for computable knowledge regarding the functions of genes and gene products. Here, we report the advances of the consortium over the past two years. The new GO-CAM annotation framework was notably improved, and we formalized the model with a computational schema to check and validate the rapidly increasing repository of 2,838 GO-CAMs. In addition, we describe the impacts of several collaborations to refine GO and report a 10% increase in the number of GO annotations, a 25% increase in annotated gene products, and over 9,400 new scientific articles annotated. As the project matures, we continue our efforts to review older annotations in light of newer findings and to maintain consistency with other ontologies. As a result, 20,000 annotations derived from experimental data were reviewed, corresponding to 2.5% of experimental GO annotations. The website (http://geneontology.org) was redesigned for quick access to documentation, downloads and tools. To maintain an accurate resource and support traceability and reproducibility, we have made available a historical archive covering the past 15 years of GO data with a consistent format and file structure for both the ontology and annotations.

1,988 citations

Journal ArticleDOI
TL;DR: The ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments) have been updated and their information reorganised to facilitate use in practice, helping to ensure that researchers, reviewers, and journal editors are better equipped to improve the rigour and transparency of the scientific process and thus reproducibility.
Abstract: Reproducible science requires transparent reporting. The ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments) were originally developed in 2010 to improve the reporting of animal research. They consist of a checklist of information to include in publications describing in vivo experiments to enable others to scrutinise the work adequately, evaluate its methodological rigour, and reproduce the methods and results. Despite considerable levels of endorsement by funders and journals over the years, adherence to the guidelines has been inconsistent, and the anticipated improvements in the quality of reporting in animal research publications have not been achieved. Here, we introduce ARRIVE 2.0. The guidelines have been updated and information reorganised to facilitate their use in practice. We used a Delphi exercise to prioritise and divide the items of the guidelines into 2 sets, the “ARRIVE Essential 10,” which constitutes the minimum requirement, and the “Recommended Set,” which describes the research context. This division facilitates improved reporting of animal research by supporting a stepwise approach to implementation. This helps journal editors and reviewers verify that the most important items are being reported in manuscripts. We have also developed the accompanying Explanation and Elaboration document, which serves (1) to explain the rationale behind each item in the guidelines, (2) to clarify key concepts, and (3) to provide illustrative examples. We aim, through these changes, to help ensure that researchers, reviewers, and journal editors are better equipped to improve the rigour and transparency of the scientific process and thus reproducibility.

1,796 citations

Journal ArticleDOI
TL;DR: DisGeNET is a versatile platform that can be used for different research purposes, including the investigation of the molecular underpinnings of specific human diseases and their comorbidities, the analysis of the properties of disease genes, the generation of hypotheses on drug therapeutic action and drug adverse effects, the validation of computationally predicted disease genes, and the evaluation of the performance of text-mining methods.
Abstract: The information about the genetic basis of human diseases lies at the heart of precision medicine and drug discovery. However, to realize its full potential to support these goals, several problems, such as fragmentation, heterogeneity, availability and different conceptualization of the data, must be overcome. To provide the community with a resource free of these hurdles, we have developed DisGeNET (http://www.disgenet.org), one of the largest available collections of genes and variants involved in human diseases. DisGeNET integrates data from expert curated repositories, GWAS catalogues, animal models and the scientific literature. DisGeNET data are homogeneously annotated with controlled vocabularies and community-driven ontologies. Additionally, several original metrics are provided to assist the prioritization of genotype-phenotype relationships. The information is accessible through a web interface, a Cytoscape App, an RDF SPARQL endpoint, scripts in several programming languages and an R package. DisGeNET is a versatile platform that can be used for different research purposes, including the investigation of the molecular underpinnings of specific human diseases and their comorbidities, the analysis of the properties of disease genes, the generation of hypotheses on drug therapeutic action and drug adverse effects, the validation of computationally predicted disease genes and the evaluation of the performance of text-mining methods.
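The SPARQL access route mentioned in the abstract might look like the sketch below; the endpoint URL, vocabulary prefix, and predicate names are illustrative assumptions, not DisGeNET's verified RDF schema:

```python
# Illustrative sketch of building a query for an RDF SPARQL endpoint such as
# the one DisGeNET exposes. Endpoint URL and vocabulary are assumed, not real.
from string import Template

SPARQL_ENDPOINT = "http://rdf.disgenet.org/sparql/"  # assumed endpoint URL

QUERY = Template("""\
PREFIX dgn: <http://example.org/disgenet#>
SELECT ?disease ?score WHERE {
  ?assoc dgn:gene ?gene ;
         dgn:disease ?disease ;
         dgn:score ?score .
  ?gene dgn:symbol "$symbol" .
}
ORDER BY DESC(?score)
LIMIT $limit
""")

query = QUERY.substitute(symbol="BRCA1", limit=5)
print(query.endswith("LIMIT 5\n"))  # True
```

Posting `query` to `SPARQL_ENDPOINT` as a standard SPARQL-over-HTTP request would return result bindings; the ORDER BY clause stands in for the prioritization metrics the abstract describes.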

1,718 citations