Author

William J Browne

Bio: William J Browne is an academic researcher at the University of Bristol. His research focuses on topics including multilevel models and somatic cell counts. He has an h-index of 46 and has co-authored 169 publications receiving 21,730 citations. His previous affiliations include the Institute of Education and The Turing Institute.


Papers
Journal Article
TL;DR: Most of the papers surveyed did not report using randomisation or blinding to reduce bias in animal selection and outcome assessment, consistent with reviews of many research areas, including clinical studies, published in recent years.
Abstract: […] animals used (i.e., species/strain, sex, and age/weight). Most of the papers surveyed did not report using randomisation (87%) or blinding (86%) to reduce bias in animal selection and outcome assessment. Only 70% of the publications that used statistical methods fully described them and presented the results with a measure of precision or variability [5]. These findings are a cause for concern and are consistent with reviews of many research areas, including clinical studies, published in recent years [2–22].

6,271 citations

Journal Article
TL;DR: An accurate summary of the background, research objectives, including details of the species or strain of animal used, key methods, principal findings and conclusions of the study is provided.
Abstract: The following guidelines are excerpted (as permitted under the Creative Commons Attribution License (CCAL), with the knowledge and approval of PLoS Biology and the authors) from Kilkenny et al. (2010).

3,093 citations

Journal Article
TL;DR: The following guidelines are excerpted (as permitted under the Creative Commons Attribution License (CCAL), with the knowledge and approval of PLoS Biology and the authors) from Kilkenny et al. (2010).
Abstract: The following guidelines are excerpted (as permitted under the Creative Commons Attribution License (CCAL), with the knowledge and approval of PLoS Biology and the authors) from Kilkenny et al. (2010).

1,916 citations

Journal Article
TL;DR: The ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments) have been updated and reorganised to facilitate their use in practice, so that researchers, reviewers, and journal editors are better equipped to improve the rigour and transparency of the scientific process and thus reproducibility.
Abstract: Reproducible science requires transparent reporting. The ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments) were originally developed in 2010 to improve the reporting of animal research. They consist of a checklist of information to include in publications describing in vivo experiments to enable others to scrutinise the work adequately, evaluate its methodological rigour, and reproduce the methods and results. Despite considerable levels of endorsement by funders and journals over the years, adherence to the guidelines has been inconsistent, and the anticipated improvements in the quality of reporting in animal research publications have not been achieved. Here, we introduce ARRIVE 2.0. The guidelines have been updated and information reorganised to facilitate their use in practice. We used a Delphi exercise to prioritise and divide the items of the guidelines into 2 sets, the “ARRIVE Essential 10,” which constitutes the minimum requirement, and the “Recommended Set,” which describes the research context. This division facilitates improved reporting of animal research by supporting a stepwise approach to implementation. This helps journal editors and reviewers verify that the most important items are being reported in manuscripts. We have also developed the accompanying Explanation and Elaboration document, which serves (1) to explain the rationale behind each item in the guidelines, (2) to clarify key concepts, and (3) to provide illustrative examples. We aim, through these changes, to help ensure that researchers, reviewers, and journal editors are better equipped to improve the rigour and transparency of the scientific process and thus reproducibility.

1,796 citations

Journal Article
TL;DR: Provide an accurate summary of the background, research objectives, key methods, principal findings, and conclusions of the study.

1,487 citations


Cited by
Journal Article
TL;DR: In this article, the authors make a case for the importance of reporting variance explained (R²) as a relevant summarizing statistic of mixed-effects models; such reporting is rare, even though R² is routinely reported for linear models and generalized linear models (GLMs).
Abstract: The use of both linear and generalized linear mixed-effects models (LMMs and GLMMs) has become popular not only in social and medical sciences, but also in biological sciences, especially in the field of ecology and evolution. Information criteria, such as the Akaike Information Criterion (AIC), are usually presented as model comparison tools for mixed-effects models. The presentation of 'variance explained' (R²) as a relevant summarizing statistic of mixed-effects models, however, is rare, even though R² is routinely reported for linear models (LMs) and also generalized linear models (GLMs). R² has the extremely useful property of providing an absolute value for the goodness-of-fit of a model, which cannot be given by the information criteria. As a summary statistic that describes the amount of variance explained, R² can also be a quantity of biological interest. One reason for the under-appreciation of R² for mixed-effects models lies in the fact that R² can be defined in a number of ways. Furthermore, most definitions of R² for mixed-effects models have theoretical problems (e.g. decreased or negative R² values in larger models) and/or their use is hindered by practical difficulties (e.g. implementation). Here, we make a case for the importance of reporting R² for mixed-effects models. We first provide the common definitions of R² for LMs and GLMs and discuss the key problems associated with calculating R² for mixed-effects models. We then recommend a general and simple method for calculating two types of R² (marginal and conditional R²) for both LMMs and GLMMs, which are less susceptible to common problems. This method is illustrated by examples and can be widely employed by researchers in any field of research, regardless of the software package used for fitting mixed-effects models. The proposed method has the potential to facilitate the presentation of R² for a wide range of circumstances.

7,749 citations
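The two quantities recommended in the abstract above are easy to state for the Gaussian case: marginal R² is the fixed-effects variance divided by the total (fixed + random + residual) variance, and conditional R² adds the random-effects variance to the numerator. The following is a minimal Python sketch using statsmodels' MixedLM, not the authors' own code; the data are simulated and the column names ("y", "x", "group") are invented for illustration.

# Minimal sketch of marginal and conditional R^2 for a Gaussian
# linear mixed model with a random intercept per group.
# Data and column names are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_groups, n_per_group = 20, 25
df = pd.DataFrame({
    "group": np.repeat(np.arange(n_groups), n_per_group),
    "x": rng.normal(size=n_groups * n_per_group),
})
group_effect = rng.normal(scale=0.8, size=n_groups)
df["y"] = (2.0 + 1.5 * df["x"]
           + group_effect[df["group"]]
           + rng.normal(scale=1.0, size=len(df)))

# Random-intercept model: y ~ x, one intercept deviation per group.
fit = smf.mixedlm("y ~ x", df, groups=df["group"]).fit()

# Variance components: fixed-effect predictions, random intercept, residual.
var_fixed = np.var(fit.fe_params["Intercept"] + fit.fe_params["x"] * df["x"])
var_random = float(fit.cov_re.iloc[0, 0])   # random-intercept variance
var_resid = fit.scale                       # residual variance

r2_marginal = var_fixed / (var_fixed + var_random + var_resid)
r2_conditional = (var_fixed + var_random) / (var_fixed + var_random + var_resid)
print(f"marginal R2 = {r2_marginal:.3f}, conditional R2 = {r2_conditional:.3f}")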

Journal Article
TL;DR: The use (and misuse) of GLMMs in ecology and evolution is reviewed, estimation and inference are discussed, and 'best-practice' data analysis procedures for scientists facing this challenge are summarized.
Abstract: How should ecologists and evolutionary biologists analyze nonnormal data that involve random effects? Nonnormal data such as counts or proportions often defy classical statistical procedures. Generalized linear mixed models (GLMMs) provide a more flexible approach for analyzing nonnormal data when random effects are present. The explosion of research on GLMMs in the last decade has generated considerable uncertainty for practitioners in ecology and evolution. Despite the availability of accurate techniques for estimating GLMM parameters in simple cases, complex GLMMs are challenging to fit, and statistical inference such as hypothesis testing remains difficult. We review the use (and misuse) of GLMMs in ecology and evolution, discuss estimation and inference, and summarize 'best-practice' data analysis procedures for scientists facing this challenge.

7,207 citations
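As a concrete illustration of the model family the review above addresses, the sketch below fits a binomial GLMM with a random intercept per group. It is an assumption-laden example, not code from the paper: it uses statsmodels' variational-Bayes fitter (lme4::glmer in R is a common alternative), and the data and names ("success", "x", "group") are invented.

# Sketch of a binomial GLMM (random intercept per group), the kind of
# nonnormal-data-with-random-effects model discussed in the abstract.
# Data and variable names are invented for illustration.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(2)
n_groups, n_per_group = 30, 20
df = pd.DataFrame({
    "group": np.repeat(np.arange(n_groups), n_per_group),
    "x": rng.normal(size=n_groups * n_per_group),
})
group_effect = rng.normal(scale=1.0, size=n_groups)
logit = -0.5 + 1.0 * df["x"] + group_effect[df["group"]]
df["success"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# "0 + C(group)" specifies one random intercept per level of group.
model = BinomialBayesMixedGLM.from_formula(
    "success ~ x", {"group": "0 + C(group)"}, df)
result = model.fit_vb()   # variational Bayes estimate
print(result.summary())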

Book
19 Nov 2008
TL;DR: This book presents a synthesis of meta-analyses of the contributions from the student, the home, the school, the teacher, the curricula, and teaching approaches to create a picture of visible teaching and visible learning.
Abstract: Preface; Chapter 1: The challenge; Chapter 2: The nature of the evidence: a synthesis of meta-analyses; Chapter 3: The argument: visible teaching and visible learning; Chapter 4: The contributions from the student; Chapter 5: The contributions from the home; Chapter 6: The contributions from the school; Chapter 7: The contributions from the teacher; Chapter 8: The contributions from the curricula; Chapter 9: The contributions from teaching approaches I; Chapter 10: The contributions from teaching approaches II; Chapter 11: Bringing it all together; Appendix A: The 800 meta-analyses; Appendix B: The meta-analyses by rank order; References.

6,776 citations

Journal Article
TL;DR: Most of the papers surveyed did not report using randomisation or blinding to reduce bias in animal selection and outcome assessment, consistent with reviews of many research areas, including clinical studies, published in recent years.
Abstract: […] animals used (i.e., species/strain, sex, and age/weight). Most of the papers surveyed did not report using randomisation (87%) or blinding (86%) to reduce bias in animal selection and outcome assessment. Only 70% of the publications that used statistical methods fully described them and presented the results with a measure of precision or variability [5]. These findings are a cause for concern and are consistent with reviews of many research areas, including clinical studies, published in recent years [2–22].

6,271 citations

Journal Article
TL;DR: It is shown that the average statistical power of studies in the neurosciences is very low, and the consequences include overestimates of effect size and low reproducibility of results.
Abstract: A study with low statistical power has a reduced chance of detecting a true effect, but it is less well appreciated that low power also reduces the likelihood that a statistically significant result reflects a true effect. Here, we show that the average statistical power of studies in the neurosciences is very low. The consequences of this include overestimates of effect size and low reproducibility of results. There are also ethical dimensions to this problem, as unreliable research is inefficient and wasteful. Improving reproducibility in neuroscience is a key priority and requires attention to well-established but often ignored methodological principles.

5,683 citations
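The central claim of the abstract above, that low power also lowers the probability that a significant result reflects a true effect, can be made concrete with the positive predictive value used in this literature. The sketch below uses illustrative numbers of my choosing, not figures from the paper.

# Positive predictive value (PPV) of a statistically significant
# finding: PPV = power * R / (power * R + alpha), where R is the
# pre-study odds that a probed effect is real. Illustrative numbers.
def ppv(power: float, alpha: float, prestudy_odds: float) -> float:
    """Probability that a significant result reflects a true effect."""
    return (power * prestudy_odds) / (power * prestudy_odds + alpha)

# With 1:4 pre-study odds (R = 0.25) and alpha = 0.05, a study with
# 20% power yields PPV = 0.5: half of "significant" results are false.
print(ppv(0.20, 0.05, 0.25))   # 0.5
# Raising power to 80% raises the PPV to 0.8.
print(ppv(0.80, 0.05, 0.25))   # 0.8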