
Showing papers by "Fiona Fidler published in 2021"


Journal ArticleDOI
TL;DR: In this commentary, the authors lift their gaze to the horizon to imagine how researchers and institutions can clear the path towards more credible and effective research programmes, and propose strategies for doing so.
Abstract: Unreliable research programmes waste funds, time, and even the lives of the organisms we seek to help and understand. Reducing this waste and increasing the value of scientific evidence require changing the actions of both individual researchers and the institutions they depend on for employment and promotion. While ecologists and evolutionary biologists have somewhat improved research transparency over the past decade (e.g. more data sharing), major obstacles remain. In this commentary, we lift our gaze to the horizon to imagine how researchers and institutions can clear the path towards more credible and effective research programmes.

23 citations


Journal ArticleDOI
TL;DR: The REPRISE project takes a systematic approach to determining how reliable systematic reviews of interventions are, exploring the transparency, reproducibility and replicability of several components of systematic reviews with meta-analysis of the effects of health, social, behavioural and educational interventions.
Abstract: Investigations of transparency, reproducibility and replicability in science have been directed largely at individual studies. It is just as critical to explore these issues in syntheses of studies, such as systematic reviews, given their influence on decision-making and future research. We aim to explore various aspects relating to the transparency, reproducibility and replicability of several components of systematic reviews with meta-analysis of the effects of health, social, behavioural and educational interventions. The REPRISE (REProducibility and Replicability In Syntheses of Evidence) project consists of four studies. We will evaluate the completeness of reporting and sharing of review data, analytic code and other materials in a random sample of 300 systematic reviews of interventions published in 2020 (Study 1). We will survey authors of systematic reviews to explore their views on sharing review data, analytic code and other materials and their understanding of and opinions about replication of systematic reviews (Study 2). We will then evaluate the extent of variation in results when we (a) independently reproduce meta-analyses using the same computational steps and analytic code (if available) as used in the original review (Study 3), and (b) crowdsource teams of systematic reviewers to independently replicate a subset of methods (searches for studies, selection of studies for inclusion, collection of outcome data, and synthesis of results) in a sample of the original reviews; 30 reviews will be replicated by 1 team each and 2 reviews will be replicated by 15 teams (Study 4). The REPRISE project takes a systematic approach to determine how reliable systematic reviews of interventions are. We anticipate that results of the REPRISE project will inform strategies to improve the conduct and reporting of future systematic reviews.
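
As a rough illustration of what computationally reproducing a meta-analysis (as in Study 3) involves, the sketch below pools hypothetical study effect sizes with a generic inverse-variance fixed-effect model in Python; the data, function names and pooling model are assumptions for illustration, not the REPRISE analytic code.

```python
# Illustrative sketch of reproducing a meta-analysis result computationally.
# Effect sizes, standard errors and the fixed-effect model are made up for
# demonstration; they are not taken from any REPRISE review.
import numpy as np

def fixed_effect_pool(effects, std_errors):
    """Inverse-variance weighted pooled effect and its standard error."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(std_errors, dtype=float) ** 2
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    return pooled, pooled_se

# Hypothetical study-level effect sizes (e.g. standardised mean differences).
effects = [0.20, 0.35, 0.10, 0.28]
std_errors = [0.10, 0.15, 0.08, 0.12]

pooled, se = fixed_effect_pool(effects, std_errors)
print(f"Pooled effect: {pooled:.3f} (SE {se:.3f})")
# A reproduction check would compare this recomputed pooled estimate against
# the value published in the original review.
```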

15 citations


Journal ArticleDOI
02 Sep 2021-PLOS ONE
TL;DR: In this paper, a suite of aggregation methods, informed by previous experience and the available literature, was developed and compared in terms of the accuracy, calibration, and informativeness of the aggregated results.
Abstract: Structured protocols offer a transparent and systematic way to elicit and combine/aggregate probabilistic predictions from multiple experts. These judgements can be aggregated behaviourally or mathematically to derive a final group prediction. Mathematical rules (e.g., weighted linear combinations of judgements) provide an objective approach to aggregation. The quality of this aggregation can be defined in terms of accuracy, calibration and informativeness. These measures can be used to compare different aggregation approaches and help decide which aggregation produces the "best" final prediction. When experts' performance can be scored on similar questions ahead of time, these scores can be translated into performance-based weights, and a performance-based weighted aggregation can then be used. When this is not possible, several other aggregation methods, informed by measurable proxies for good performance, can be formulated and compared. Here, we develop a suite of aggregation methods, informed by previous experience and the available literature. We differentially weight our experts' estimates by measures of reasoning, engagement, openness to changing their mind, informativeness, prior knowledge, and extremity, asymmetry or granularity of estimates. Next, we investigate the relative performance of these aggregation methods using three datasets. The main goal of this research is to explore how measures of knowledge and behaviour of individuals can be leveraged to produce a better performing combined group judgement. Although the accuracy, calibration, and informativeness of the majority of methods are very similar, a couple of the aggregation methods consistently distinguish themselves as among the best or worst. Moreover, the majority of methods outperform the usual benchmarks provided by the simple average or the median of estimates.
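
To make the contrast between unweighted benchmarks (simple average, median) and proxy-weighted mathematical aggregation concrete, here is a minimal Python sketch of a weighted linear opinion pool; the proxy scores and the particular weighting are illustrative assumptions, not the specific weighting schemes evaluated in the paper.

```python
# Minimal sketch of mathematically aggregating expert probability judgements.
# The proxy scores and estimates below are hypothetical placeholders.
import numpy as np

def weighted_linear_pool(estimates, proxy_scores):
    """Combine expert probability estimates using weights derived from proxy scores."""
    weights = np.asarray(proxy_scores, dtype=float)
    weights = weights / weights.sum()  # normalise so weights sum to 1
    return float(np.dot(weights, np.asarray(estimates, dtype=float)))

# Five experts' probability estimates that a given claim will replicate,
# each paired with a hypothetical proxy score (e.g. quality of written reasoning).
estimates = [0.55, 0.70, 0.40, 0.65, 0.80]
proxy_scores = [3, 5, 1, 4, 2]

print("Simple mean:        ", round(float(np.mean(estimates)), 3))
print("Median:             ", round(float(np.median(estimates)), 3))
print("Proxy-weighted pool:", round(weighted_linear_pool(estimates, proxy_scores), 3))
```

In practice, each candidate pool would then be scored for accuracy, calibration and informativeness against known outcomes to decide which aggregation performs best.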

11 citations


Journal ArticleDOI
01 Jul 2021
TL;DR: The authors surveyed Australian and Italian academic research psychologists about the meaning and role of replication in psychology and found that nearly all participants (98% of Australians and 96% of Italians) selected options that correspond to a direct replication, while only 14% of Australians and 8% of Italians selected any options that included changing the experimental method.
Abstract: Solutions to the crisis in confidence in the psychological literature have been proposed in many recent articles, including increased publication of replication studies, a solution that requires engagement by the psychology research community. We surveyed Australian and Italian academic research psychologists about the meaning and role of replication in psychology. When asked what they consider to be a replication study, nearly all participants (98% of Australians and 96% of Italians) selected options that correspond to a direct replication. Only 14% of Australians and 8% of Italians selected any options that included changing the experimental method. Majorities of psychologists from both countries agreed that replications are very important, that more replications should be done, that more resources should be allocated to them, and that they should be published more often. Majorities of psychologists from both countries reported that they or their students sometimes or often replicate studies, yet they also reported having no replication studies published in the prior 5 years. When asked to estimate the percentage of published studies in psychology that are replications, both Australians (with a median estimate of 13%) and Italians (with a median estimate of 20%) substantially overestimated the actual rate. When asked what constitute the main obstacles to replications, difficulty publishing replications was the most frequently cited obstacle, coupled with the high value given to innovative or novel research and the low value given to replication studies.

7 citations


Journal ArticleDOI
TL;DR: This systematic review aims to synthesise the findings of medical and health science studies that have empirically investigated the prevalence of data or code sharing, or both. The results should provide insight into how often research data and code are shared publicly and privately, how this has changed over time, and how effective measures such as data sharing policies and data availability statements have been in motivating researchers to share their underlying data and code.
Abstract: Numerous studies have demonstrated low but increasing rates of data and code sharing within medical and health research disciplines. However, it remains unclear how commonly data and code are shared across all fields of medical and health research, as well as whether sharing rates are positively associated with implementation of progressive policies by publishers and funders, or growing expectations from the medical and health research community at large. Therefore this systematic review aims to synthesise the findings of medical and health science studies that have empirically investigated the prevalence of data or code sharing, or both. Objectives include the investigation of: (i) the prevalence of public sharing of research data and code alongside published articles (including preprints), (ii) the prevalence of private sharing of research data and code in response to reasonable requests, and (iii) factors associated with the sharing of either research output (e.g., the year published, the publisher’s policy on sharing, the presence of a data or code availability statement). It is hoped that the results will provide some insight into how often research data and code are shared publicly and privately, how this has changed over time, and how effective some measures such as the institution of data sharing policies and data availability statements have been in motivating researchers to share their underlying data and code.

5 citations



Proceedings ArticleDOI
01 Jan 2021
TL;DR: This paper describes the repliCATS platform, a multi-user cloud-based software platform featuring a technical implementation of the IDEA protocol for eliciting expert opinion on research replicability, capture of consent and demographic data, on-line training on replication concepts, and exporting of completed judgements.
Abstract: In recent years there has been increased interest in replicating prior research. One of the biggest challenges to assessing replicability is the cost in resources and time that it takes to repeat studies. Thus there is an impetus to develop rapid elicitation protocols that can, in a practical manner, estimate the likelihood that research findings will successfully replicate. We employ a novel implementation of the IDEA ('Investigate', 'Discuss', 'Estimate' and 'Aggregate') protocol, realised through the repliCATS platform. The repliCATS platform is designed to scalably elicit expert opinion about the replicability of social and behavioural science research. The IDEA protocol provides a structured methodology for eliciting judgements and reasoning from groups. This paper describes the repliCATS platform as a multi-user cloud-based software platform featuring (1) a technical implementation of the IDEA protocol for eliciting expert opinion on research replicability, (2) capture of consent and demographic data, (3) on-line training on replication concepts, and (4) exporting of completed judgements. The platform has, to date, evaluated 3432 social and behavioural science research claims from 637 participants.
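
A minimal sketch of the IDEA (Investigate, Discuss, Estimate, Aggregate) elicitation flow follows, assuming a simple two-round structure with interval judgements and an unweighted average as the final aggregation; the data structures and aggregation rule are illustrative assumptions and do not reproduce the repliCATS implementation.

```python
# Illustrative two-round IDEA-style elicitation for one research claim.
# Field names, values and the simple-average aggregation are assumptions
# made for this sketch, not the repliCATS platform's internals.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Judgement:
    expert_id: str
    lower: float   # lowest plausible probability of successful replication
    upper: float   # highest plausible probability of successful replication
    best: float    # best estimate of the probability of successful replication

def aggregate(judgements):
    """Aggregate a round of judgements by averaging the best estimates."""
    return mean(j.best for j in judgements)

# Round 1: experts Investigate the claim and Estimate privately.
round1 = [
    Judgement("e1", 0.30, 0.70, 0.50),
    Judgement("e2", 0.40, 0.90, 0.65),
    Judgement("e3", 0.10, 0.60, 0.35),
]

# Round 2: after group Discussion, experts may revise their estimates.
round2 = [
    Judgement("e1", 0.35, 0.70, 0.55),
    Judgement("e2", 0.45, 0.85, 0.60),
    Judgement("e3", 0.30, 0.65, 0.45),
]

# Aggregate: combine the post-discussion estimates into a group prediction.
print("Round 1 group estimate:", round(aggregate(round1), 3))
print("Round 2 group estimate:", round(aggregate(round2), 3))
```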

2 citations