Author

Fiona Fidler

Other affiliations: RMIT University, La Trobe University
Bio: Fiona Fidler is an academic researcher at the University of Melbourne. She has contributed to research on topics including expert elicitation and estimation statistics, has an h-index of 32, and has co-authored 88 publications receiving 6,907 citations. Her previous affiliations include RMIT University and La Trobe University.


Papers
Journal Article
Authors: Daniel J. Benjamin (1), James O. Berger (2), Magnus Johannesson (1, 3), Brian A. Nosek (4, 5), Eric-Jan Wagenmakers (6), Richard A. Berk (7), Kenneth A. Bollen (8), Björn Brembs (9), Lawrence D. Brown (7), Colin F. Camerer (10), David Cesarini (11, 12), Christopher D. Chambers (13), Merlise A. Clyde (2), Thomas D. Cook (14, 15), Paul De Boeck (16), Zoltan Dienes (17), Anna Dreber (3), Kenny Easwaran (18), Charles Efferson (19), Ernst Fehr (20), Fiona Fidler (21), Andy P. Field (17), Malcolm R. Forster (22), Edward I. George (7), Richard Gonzalez (23), Steven N. Goodman (24), Edwin J. Green (25), Donald P. Green (26), Anthony G. Greenwald (27), Jarrod D. Hadfield (28), Larry V. Hedges (14), Leonhard Held (20), Teck-Hua Ho (29), Herbert Hoijtink (30), Daniel J. Hruschka (31), Kosuke Imai (32), Guido W. Imbens (24), John P. A. Ioannidis (24), Minjeong Jeon (33), James Holland Jones (34), Michael Kirchler (35), David Laibson (36), John A. List (37), Roderick J. A. Little (23), Arthur Lupia (23), Edouard Machery (38), Scott E. Maxwell (39), Michael A. McCarthy (21), Don A. Moore (40), Stephen L. Morgan (41), Marcus R. Munafò (42), Shinichi Nakagawa (43), Brendan Nyhan (44), Timothy H. Parker (45), Luis R. Pericchi (46), Marco Perugini (47), Jeffrey N. Rouder (48), Judith Rousseau (49), Victoria Savalei (50), Felix D. Schönbrodt (51), Thomas Sellke (52), Betsy Sinclair (53), Dustin Tingley (36), Trisha Van Zandt (16), Simine Vazire (54), Duncan J. Watts (55), Christopher Winship (36), Robert L. Wolpert (2), Yu Xie (32), Cristobal Young (24), Jonathan Zinman (44), Valen E. Johnson (18, 1)
Affiliations: (1) University of Southern California; (2) Duke University; (3) Stockholm School of Economics; (4) University of Virginia; (5) Center for Open Science; (6) University of Amsterdam; (7) University of Pennsylvania; (8) University of North Carolina at Chapel Hill; (9) University of Regensburg; (10) California Institute of Technology; (11) Research Institute of Industrial Economics; (12) New York University; (13) Cardiff University; (14) Northwestern University; (15) Mathematica Policy Research; (16) Ohio State University; (17) University of Sussex; (18) Texas A&M University; (19) Royal Holloway, University of London; (20) University of Zurich; (21) University of Melbourne; (22) University of Wisconsin-Madison; (23) University of Michigan; (24) Stanford University; (25) Rutgers University; (26) Columbia University; (27) University of Washington; (28) University of Edinburgh; (29) National University of Singapore; (30) Utrecht University; (31) Arizona State University; (32) Princeton University; (33) University of California, Los Angeles; (34) Imperial College London; (35) University of Innsbruck; (36) Harvard University; (37) University of Chicago; (38) University of Pittsburgh; (39) University of Notre Dame; (40) University of California, Berkeley; (41) Johns Hopkins University; (42) University of Bristol; (43) University of New South Wales; (44) Dartmouth College; (45) Whitman College; (46) University of Puerto Rico; (47) University of Milan; (48) University of California, Irvine; (49) Paris Dauphine University; (50) University of British Columbia; (51) Ludwig Maximilian University of Munich; (52) Purdue University; (53) Washington University in St. Louis; (54) University of California, Davis; (55) Microsoft
TL;DR: The default P-value threshold for statistical significance should be changed from 0.05 to 0.005 for claims of new discoveries, in order to reduce the rate of false-positive findings.
Abstract: We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries.
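The proposal is simple enough to express in code. Below is a minimal sketch of the two-tier scheme, assuming the paper's suggestion that results falling between the two thresholds be described as "suggestive"; the function name and output strings are illustrative, not from the paper:

```python
def classify_result(p_value: float) -> str:
    """Classify a p-value under the proposed two-tier scheme.

    Assumes the 'suggestive' label for 0.005 <= p < 0.05, per the
    paper's recommendation; naming here is purely illustrative.
    """
    if p_value < 0.005:
        return "statistically significant (proposed 0.005 threshold)"
    elif p_value < 0.05:
        return "suggestive (significant only under the old 0.05 threshold)"
    else:
        return "not statistically significant"

for p in (0.001, 0.02, 0.2):
    print(f"p = {p}: {classify_result(p)}")
```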

1,586 citations

Posted Content
TL;DR: This preprint proposes changing the default P-value threshold for statistical significance for claims of new discoveries from 0.05 to 0.005.
Abstract: We propose to change the default P-value threshold for statistical significance for claims of new discoveries from 0.05 to 0.005.

1,415 citations

Journal Article
TL;DR: This article illustrates some basic features of error bars and explains how they can help communicate data and assist correct interpretation and suggests eight simple rules to assist with effective use and interpretation.
Abstract: Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what error bars represent. We suggest eight simple rules to assist with effective use and interpretation of error bars.
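To illustrate why figure legends must state what the bars represent, here is a minimal matplotlib sketch (synthetic data; all names and numbers are invented for illustration). The same two means are drawn with SD, SE, and approximate 95% CI bars, and the three panels look very different:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
groups = {"A": rng.normal(10, 2, 30), "B": rng.normal(12, 2, 30)}

labels = list(groups)
means = [g.mean() for g in groups.values()]
sds = [g.std(ddof=1) for g in groups.values()]
ses = [sd / np.sqrt(len(g)) for sd, g in zip(sds, groups.values())]
cis = [1.96 * se for se in ses]  # approximate 95% CI half-width

fig, axes = plt.subplots(1, 3, sharey=True, figsize=(9, 3))
for ax, err, name in zip(axes, (sds, ses, cis), ("SD", "SE", "95% CI")):
    ax.bar(labels, means, yerr=err, capsize=4)
    ax.set_title(name)  # the caption must say which bars are shown
plt.tight_layout()
plt.show()
```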

717 citations

Journal Article
TL;DR: Four aspects of an expert elicitation exercise should be examined to determine its comprehensiveness and effectiveness: study design and context, elicitation design, elicitation method, and elicitation output.
Abstract: Expert knowledge is used widely in the science and practice of conservation because of the complexity of problems, relative lack of data, and the imminent nature of many conservation decisions. Expert knowledge is substantive information on a particular topic that is not widely known by others. An expert is someone who holds this knowledge and who is often deferred to in its interpretation. We refer to predictions by experts of what may happen in a particular context as expert judgments. In general, an expert-elicitation approach consists of five steps: deciding how information will be used, determining what to elicit, designing the elicitation process, performing the elicitation, and translating the elicited information into quantitative statements that can be used in a model or directly to make decisions. This last step is known as encoding. Some of the considerations in eliciting expert knowledge include determining how to work with multiple experts and how to combine multiple judgments, minimizing bias in the elicited information, and verifying the accuracy of expert information. We highlight structured elicitation techniques that, if adopted, will improve the accuracy and information content of expert judgment and ensure uncertainty is captured accurately. We suggest that four aspects of an expert elicitation exercise be examined to determine its comprehensiveness and effectiveness: study design and context, elicitation design, elicitation method, and elicitation output. Just as the reliability of empirical data depends on the rigor with which it was acquired, so too does that of expert knowledge.
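As one hypothetical illustration of the "combine multiple judgments" step, a weighted linear opinion pool is a common encoding choice in the elicitation literature; the scenario, probabilities, and weights below are invented for illustration and are not drawn from the paper:

```python
import numpy as np

# Hypothetical elicited probabilities that a species persists 10 years,
# from four experts, with analyst-assigned weights (e.g., reflecting
# calibration performance). All numbers are illustrative.
estimates = np.array([0.60, 0.45, 0.70, 0.55])
weights = np.array([1.0, 0.5, 1.0, 2.0])

pooled = np.average(estimates, weights=weights)  # linear opinion pool
print(f"Pooled probability: {pooled:.3f}")
```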

654 citations

Journal Article
TL;DR: Results suggest that many leading researchers have severe misconceptions about how error bars relate to statistical significance, do not adequately distinguish CIs and SE bars, and do not appreciate the importance of whether the 2 means are independent or come from a repeated measures design.
Abstract: Little is known about researchers' understanding of confidence intervals (CIs) and standard error (SE) bars. Authors of journal articles in psychology, behavioral neuroscience, and medicine were invited to visit a Web site where they adjusted a figure until they judged 2 means, with error bars, to be just statistically significantly different (p < .05). Results from 473 respondents suggest that many leading researchers have severe misconceptions about how error bars relate to statistical significance, do not adequately distinguish CIs and SE bars, and do not appreciate the importance of whether the 2 means are independent or come from a repeated measures design. Better guidelines for researchers and less ambiguous graphical conventions are needed before the advantages of CIs for research communication can be realized.
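The calculation respondents struggled with is straightforward for two independent means. A large-sample sketch (assuming normality; scipy supplies the normal tail probability) shows that two 95% CIs can overlap slightly while the difference is still significant at p < .05, which is one of the misconceptions the study documents:

```python
from math import sqrt
from scipy import stats

def p_value_independent(m1, se1, m2, se2):
    """Two-sided z-test p-value for the difference of two independent
    means, given each mean's standard error (large-sample sketch)."""
    z = abs(m1 - m2) / sqrt(se1**2 + se2**2)
    return 2 * stats.norm.sf(z)

# The 95% CIs (mean +/- 1.96*SE) overlap slightly here, yet p < .05:
# [8.04, 11.96] vs [11.54, 15.46]
print(p_value_independent(10.0, 1.0, 13.5, 1.0))  # ~0.013
```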

309 citations


Cited by
Book
01 Jun 2015
TL;DR: A practical primer on how to calculate and report effect sizes for t-tests and ANOVAs such that effect sizes can be used in a-priori power analyses and meta-analyses, together with a detailed overview of the similarities and differences between within- and between-subjects designs.
Abstract: Effect sizes are the most important outcome of empirical studies. Most articles on effect sizes highlight their importance to communicate the practical significance of results. For scientists themselves, effect sizes are most useful because they facilitate cumulative science. Effect sizes can be used to determine the sample size for follow-up studies, or to examine effects across studies. This article aims to provide a practical primer on how to calculate and report effect sizes for t-tests and ANOVAs such that effect sizes can be used in a-priori power analyses and meta-analyses. Whereas many articles about effect sizes focus on between-subjects designs and address within-subjects designs only briefly, I provide a detailed overview of the similarities and differences between within- and between-subjects designs. I suggest that some research questions in experimental psychology examine inherently intra-individual effects, which makes effect sizes that incorporate the correlation between measures the best summary of the results. Finally, a supplementary spreadsheet is provided to make it as easy as possible for researchers to incorporate effect size calculations into their workflow.
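A minimal sketch of the between- versus within-subjects distinction the primer draws, assuming Cohen's d_s (pooled SD, independent groups) and d_z (SD of the differences, paired data); the data and variable names below are illustrative:

```python
import numpy as np

def cohens_d_between(x, y):
    """Cohen's d_s for two independent groups, using the pooled SD."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1)
                  + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

def cohens_d_within(x, y):
    """Cohen's d_z for paired measurements: the SD of the differences
    implicitly incorporates the correlation between measures."""
    diff = np.asarray(x) - np.asarray(y)
    return np.mean(diff) / np.std(diff, ddof=1)

rng = np.random.default_rng(1)
pre, post = rng.normal(100, 15, 40), rng.normal(107, 15, 40)
print(cohens_d_between(post, pre), cohens_d_within(post, pre))
```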

5,374 citations

Journal Article
TL;DR: This study provides a working guide to boosted regression trees (BRT), an ensemble method for fitting statistical models that differs fundamentally from conventional techniques that aim to fit a single parsimonious model.
Abstract: Summary: 1. Ecologists use statistical models for both explanation and prediction, and need techniques that are flexible enough to express typical features of their data, such as nonlinearities and interactions. 2. This study provides a working guide to boosted regression trees (BRT), an ensemble method for fitting statistical models that differs fundamentally from conventional techniques that aim to fit a single parsimonious model. Boosted regression trees combine the strengths of two algorithms: regression trees (models that relate a response to their predictors by recursive binary splits) and boosting (an adaptive method for combining many simple models to give improved predictive performance). The final BRT model can be understood as an additive regression model in which individual terms are simple trees, fitted in a forward, stagewise fashion. 3. Boosted regression trees incorporate important advantages of tree-based methods, handling different types of predictor variables and accommodating missing data. They have no need for prior data transformation or elimination of outliers, can fit complex nonlinear relationships, and automatically handle interaction effects between predictors. Fitting multiple trees in BRT overcomes the biggest drawback of single tree models: their relatively poor predictive performance. Although BRT models are complex, they can be summarized in ways that give powerful ecological insight, and their predictive performance is superior to most traditional modelling methods. 4. The unique features of BRT raise a number of practical issues in model fitting. We demonstrate the practicalities and advantages of using BRT through a distributional analysis of the short-finned eel (Anguilla australis Richardson), a native freshwater fish of New Zealand. We use a data set of over 13,000 sites to illustrate effects of several settings, and then fit and interpret a model using a subset of the data. We provide code and a tutorial to enable the wider use of BRT by ecologists.
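The article supplies its own code and tutorial; purely as an illustrative analogue, scikit-learn's gradient boosting exposes the key settings the guide discusses (tree complexity, learning rate, number of trees, bag fraction). Everything below uses synthetic stand-in data, not the eel records:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic presence/absence-style data as a stand-in for real records.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# BRT-style settings: tree complexity (max_depth), learning rate
# (shrinkage), number of trees, and bag fraction (subsample).
brt = GradientBoostingClassifier(max_depth=3, learning_rate=0.01,
                                 n_estimators=1000, subsample=0.5,
                                 random_state=0)
brt.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, brt.predict_proba(X_te)[:, 1]))
```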

4,787 citations

Journal Article
TL;DR: This review describes Thaler and Sunstein's Nudge as a general explanation of and advocacy for libertarian paternalism, a term coined by the authors in earlier publications: an approach to how leaders, systems, organizations, and governments can nudge people to do the things the nudgers want and need done for the betterment of the nudgees, or of society.
Abstract: NUDGE: IMPROVING DECISIONS ABOUT HEALTH, WEALTH, AND HAPPINESS by Richard H. Thaler and Cass R. Sunstein. Penguin Books, 2009, 312 pp, ISBN 978-0-14-311526-7. This book is best described formally as a general explanation of and advocacy for libertarian paternalism, a term coined by the authors in earlier publications. Informally, it is about how leaders, systems, organizations, and governments can nudge people to do the things the nudgers want and need done for the betterment of the nudgees, or of society. It is paternalism in the sense that "it is legitimate for choice architects to try to influence people's behavior in order to make their lives longer, healthier, and better" (p. 5). It is libertarian in that "people should be free to do what they like - and to opt out of undesirable arrangements if they want to do so" (p. 5). The built-in possibility of opting out or making a different choice preserves freedom of choice even though people's behavior has been influenced by the nature of the presentation of the information or by the structure of the decision-making system. I had never heard of libertarian paternalism before reading this book, and I now find it fascinating. Written for a general audience, this book contains mostly social and behavioral science theory and models, but there is considerable discussion of structure and process that has roots in mathematical and quantitative modeling. One of the main applications of this social system is economic choice in investing, selecting and purchasing products and services, systems of taxes, banking (mortgages, borrowing, savings), and retirement systems. Other quantitative social choice systems discussed include environmental effects, health care plans, gambling, and organ donations. Softer issues that are also subject to a nudge-based approach are marriage, education, eating, drinking, smoking, influence, spread of information, and politics. There is something in this book for everyone. The basis for this libertarian paternalism concept is in the social theory called the "science of choice", the study of the design and implementation of influence systems on various kinds of people. The terms Econs and Humans are used to refer to people with either considerable or little rational decision-making talent, respectively. The various libertarian paternalism concepts and systems presented are tested and compared in light of these two types of people. Two foundational issues that this book has in common with another book reviewed for this issue of the Journal, Network of Echoes: Imitation, Innovation and Invisible Leaders, are that 1) there are two modes of thinking (or components of the brain) - an automatic (intuitive) process and a reflective (rational) process, and 2) the need for conformity and the desire for imitation are powerful forces in human behavior. …

3,435 citations

Journal Article
TL;DR: This article extensively discusses two dimensionless (and thus standardised) classes of effect size statistics: d statistics (standardised mean difference) and r statistics (correlation coefficient), because these can be calculated from almost all study designs and also because their calculations are essential for meta‐analysis.
Abstract: Null hypothesis significance testing (NHST) is the dominant statistical approach in biology, although it has many, frequently unappreciated, problems. Most importantly, NHST does not provide us with two crucial pieces of information: (1) the magnitude of an effect of interest, and (2) the precision of the estimate of the magnitude of that effect. All biologists should be ultimately interested in biological importance, which may be assessed using the magnitude of an effect, but not its statistical significance. Therefore, we advocate presentation of measures of the magnitude of effects (i.e. effect size statistics) and their confidence intervals (CIs) in all biological journals. Combined use of an effect size and its CIs enables one to assess the relationships within data more effectively than the use of p values, regardless of statistical significance. In addition, routine presentation of effect sizes will encourage researchers to view their results in the context of previous research and facilitate the incorporation of results into future meta-analysis, which has been increasingly used as the standard method of quantitative review in biology. In this article, we extensively discuss two dimensionless (and thus standardised) classes of effect size statistics: d statistics (standardised mean difference) and r statistics (correlation coefficient), because these can be calculated from almost all study designs and also because their calculations are essential for meta-analysis. However, our focus on these standardised effect size statistics does not mean unstandardised effect size statistics (e.g. mean difference and regression coefficient) are less important. We provide potential solutions for four main technical problems researchers may encounter when calculating effect size and CIs: (1) when covariates exist, (2) when bias in estimating effect size is possible, (3) when data have non-normal error structure and/or variances, and (4) when data are non-independent. Although interpretations of effect sizes are often difficult, we provide some pointers to help researchers. This paper serves both as a beginner’s instruction manual and a stimulus for changing statistical practice for the better in the biological sciences.
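A sketch of two calculations in the spirit of this article: converting a d statistic to an r statistic, and attaching an approximate large-sample CI to d. Both use standard textbook formulas rather than anything specific to the paper, and the numbers are illustrative:

```python
import math

def d_to_r(d, n1, n2):
    """Convert standardised mean difference d to correlation r.
    The term a corrects for unequal group sizes (a = 4 when n1 == n2)."""
    a = (n1 + n2) ** 2 / (n1 * n2)
    return d / math.sqrt(d**2 + a)

def d_confidence_interval(d, n1, n2, z=1.96):
    """Approximate 95% CI for d from its large-sample standard error."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

print(d_to_r(0.5, 30, 30))              # ~0.243
print(d_confidence_interval(0.5, 30, 30))
```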

3,041 citations

Journal Article
TL;DR: The use of effect size reporting in the analysis of social science data remains inconsistent, and interpretation of effect size estimates continues to be confused; clinicians also may have little guidance in the interpretation of effect sizes relevant for clinical practice.
Abstract: Increasing emphasis has been placed on the use of effect size reporting in the analysis of social science data. Nonetheless, the use of effect size reporting remains inconsistent, and interpretation of effect size estimates continues to be confused. Researchers are presented with numerous effect sizes estimate options, not all of which are appropriate for every research question. Clinicians also may have little guidance in the interpretation of effect sizes relevant for clinical practice. The current article provides a primer of effect size estimates for the social sciences. Common effect sizes estimates, their use, and interpretations are presented as a guide for researchers.
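As a minimal illustration of the kind of interpretation guide such primers offer, Cohen's conventional benchmarks for |d| can be encoded directly. This is a rough heuristic only (and the labels are a common convention, not taken from this article); context should drive interpretation:

```python
def label_cohens_d(d: float) -> str:
    """Interpret |d| against Cohen's conventional benchmarks
    (0.2 small, 0.5 medium, 0.8 large) -- a rough guide only."""
    d = abs(d)
    if d < 0.2:
        return "negligible"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

for d in (0.1, 0.3, 0.6, 1.2):
    print(d, label_cohens_d(d))
```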

2,680 citations