scispace - formally typeset
Author

Carol Kilkenny

Bio: Carol Kilkenny is an academic researcher. The author has contributed to research in topics: Research ethics & Blinding. The author has an h-index of 9 and has co-authored 10 publications receiving 12,118 citations.

Papers
Journal ArticleDOI
TL;DR: Most of the papers surveyed did not report using randomisation or blinding to reduce bias in animal selection and outcome assessment, consistent with reviews of many research areas, including clinical studies, published in recent years.
Abstract: …animals used (i.e., species/strain, sex, and age/weight). Most of the papers surveyed did not report using randomisation (87%) or blinding (86%) to reduce bias in animal selection and outcome assessment. Only 70% of the publications that used statistical methods fully described them and presented the results with a measure of precision or variability [5]. These findings are a cause for concern and are consistent with reviews of many research areas, including clinical studies, published in recent years [2–22].

6,271 citations

Journal ArticleDOI
TL;DR: An accurate summary of the background, research objectives, including details of the species or strain of animal used, key methods, principal findings and conclusions of the study is provided.
Abstract: The following guidelines are excerpted (as permitted under the Creative Commons Attribution License (CCAL), with the knowledge and approval of PLoS Biology and the authors) from Kilkenny et al. (2010).

3,093 citations

Journal ArticleDOI
TL;DR: The following guidelines are excerpted (as permitted under the Creative Commons Attribution License (CCAL), with the knowledge and approval of PLoS Biology and the authors) from Kilkenny et al.
Abstract: The following guidelines are excerpted (as permitted under the Creative Commons Attribution License (CCAL), with the knowledge and approval of PLoS Biology and the authors) from Kilkenny et al. (2010).

1,916 citations

Journal ArticleDOI
TL;DR: 1.2 Provide an accurate summary of the background, research objectives, principal findings, and conclusions of the study.

1,487 citations

Journal ArticleDOI
30 Nov 2009-PLOS ONE
TL;DR: A systematic survey of reporting, experimental design and statistical analysis in published biomedical research using laboratory animals identified a number of issues that need to be addressed in order to improve experimental design and reporting in publications describing research using animals.
Abstract: For scientific, ethical and economic reasons, experiments involving animals should be appropriately designed, correctly analysed and transparently reported. This increases the scientific validity of the results, and maximises the knowledge gained from each experiment. A minimum amount of relevant information must be included in scientific publications to ensure that the methods and results of a study can be reviewed, analysed and repeated. Omitting essential information can raise scientific and ethical concerns. We report the findings of a systematic survey of reporting, experimental design and statistical analysis in published biomedical research using laboratory animals. Medline and EMBASE were searched for studies reporting research on live rats, mice and non-human primates carried out in UK and US publicly funded research establishments. Detailed information was collected from 271 publications, about the objective or hypothesis of the study, the number, sex, age and/or weight of animals used, and experimental and statistical methods. Only 59% of the studies stated the hypothesis or objective of the study and the number and characteristics of the animals used. Appropriate and efficient experimental design is a critical component of high-quality science. Most of the papers surveyed did not use randomisation (87%) or blinding (86%) to reduce bias in animal selection and outcome assessment. Only 70% of the publications that used statistical methods described their methods and presented the results with a measure of error or variability. This survey has identified a number of issues that need to be addressed in order to improve experimental design and reporting in publications describing research using animals.
Scientific publication is a powerful and important source of information; the authors of scientific publications therefore have a responsibility to describe their methods and results comprehensively, accurately and transparently, and peer reviewers and journal editors share the responsibility to ensure that published studies fulfil these criteria.

691 citations


Cited by
Journal ArticleDOI
TL;DR: Most of the papers surveyed did not report using randomisation or blinding to reduce bias in animal selection and outcome assessment, consistent with reviews of many research areas, including clinical studies, published in recent years.
Abstract: …animals used (i.e., species/strain, sex, and age/weight). Most of the papers surveyed did not report using randomisation (87%) or blinding (86%) to reduce bias in animal selection and outcome assessment. Only 70% of the publications that used statistical methods fully described them and presented the results with a measure of precision or variability [5]. These findings are a cause for concern and are consistent with reviews of many research areas, including clinical studies, published in recent years [2–22].

6,271 citations

Journal ArticleDOI
TL;DR: It is shown that the average statistical power of studies in the neurosciences is very low, and the consequences include overestimates of effect size and low reproducibility of results.
Abstract: A study with low statistical power has a reduced chance of detecting a true effect, but it is less well appreciated that low power also reduces the likelihood that a statistically significant result reflects a true effect. Here, we show that the average statistical power of studies in the neurosciences is very low. The consequences of this include overestimates of effect size and low reproducibility of results. There are also ethical dimensions to this problem, as unreliable research is inefficient and wasteful. Improving reproducibility in neuroscience is a key priority and requires attention to well-established but often ignored methodological principles.
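The core claim of this abstract, that low statistical power both hides true effects and inflates the effect sizes of the results that do reach significance, can be illustrated with a short simulation. This sketch is not from the paper; the effect size (d = 0.3), group size (n = 15), and critical t value are illustrative assumptions chosen to mimic an underpowered study.

```python
# Illustrative sketch (not from the cited paper): simulate many underpowered
# two-group experiments with a true standardized effect of d = 0.3 and
# n = 15 per group, then measure empirical power and the average effect
# size among the experiments that reach "significance".
import random
import statistics
import math

random.seed(42)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / na + vb / nb)
    return (statistics.mean(a) - statistics.mean(b)) / se

TRUE_D, N, TRIALS = 0.3, 15, 2000
CRIT = 2.05  # approximate two-sided t threshold at alpha = 0.05, df ~ 28

sig_effects = []
for _ in range(TRIALS):
    treated = [random.gauss(TRUE_D, 1.0) for _ in range(N)]
    control = [random.gauss(0.0, 1.0) for _ in range(N)]
    if abs(welch_t(treated, control)) > CRIT:
        # Record the observed effect magnitude for "significant" runs only.
        sig_effects.append(abs(statistics.mean(treated) - statistics.mean(control)))

power = len(sig_effects) / TRIALS
mean_sig_effect = statistics.mean(sig_effects)
print(f"empirical power ~ {power:.2f}")                    # far below the conventional 0.8
print(f"mean |effect| among significant runs ~ {mean_sig_effect:.2f}")  # inflated above 0.3
```

Under these assumptions, only a small fraction of experiments detect the effect, and those that do systematically overestimate it, since only unusually large sample differences clear the significance threshold.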

5,683 citations

Journal ArticleDOI
TL;DR: An accurate summary of the background, research objectives, including details of the species or strain of animal used, key methods, principal findings and conclusions of the study is provided.
Abstract: The following guidelines are excerpted (as permitted under the Creative Commons Attribution License (CCAL), with the knowledge and approval of PLoS Biology and the authors) from Kilkenny et al. (2010).

3,093 citations

Journal ArticleDOI
TL;DR: In virtually all medical domains, diagnostic and prognostic multivariable prediction models are being developed, validated, updated, and implemented with the aim to assist doctors and individuals in estimating probabilities and potentially influence their decision making.
Abstract: The TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) Statement includes a 22-item checklist, which aims to improve the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. This explanation and elaboration document describes the rationale; clarifies the meaning of each item; and discusses why transparent reporting is important, with a view to assessing risk of bias and clinical usefulness of the prediction model. Each checklist item of the TRIPOD Statement is explained in detail and accompanied by published examples of good reporting. The document also provides a valuable reference of issues to consider when designing, conducting, and analyzing prediction model studies. To aid the editorial process and help peer reviewers and, ultimately, readers and systematic reviewers of prediction model studies, it is recommended that authors include a completed checklist in their submission. The TRIPOD checklist can also be downloaded from www.tripod-statement.org.

2,982 citations

Journal ArticleDOI
TL;DR: This work argues for the adoption of measures to optimize key elements of the scientific process: methods, reporting and dissemination, reproducibility, evaluation and incentives, in the hope that this will facilitate action toward improving the transparency, reproducibility and efficiency of scientific research.
Abstract: Improving the reliability and efficiency of scientific research will increase the credibility of the published scientific literature and accelerate discovery. Here we argue for the adoption of measures to optimize key elements of the scientific process: methods, reporting and dissemination, reproducibility, evaluation and incentives. There is some evidence from both simulations and empirical studies supporting the likely effectiveness of these measures, but their broad adoption by researchers, institutions, funders and journals will require iterative evaluation and improvement. We discuss the goals of these measures, and how they can be implemented, in the hope that this will facilitate action toward improving the transparency, reproducibility and efficiency of scientific research.

1,951 citations