Author

Jonah Gabry

Bio: Jonah Gabry is an academic researcher from Columbia University. He has contributed to research on Bayesian probability and Bayesian inference, has an h-index of 18, and has co-authored 28 publications receiving 4,573 citations.

Papers
Journal ArticleDOI
TL;DR: In this article, leave-one-out cross-validation (LOO) and the widely applicable information criterion (WAIC) are used to estimate pointwise out-of-sample prediction accuracy from a fitted Bayesian model using the log-likelihood evaluated at the posterior simulations of the parameter values.
Abstract: Leave-one-out cross-validation (LOO) and the widely applicable information criterion (WAIC) are methods for estimating pointwise out-of-sample prediction accuracy from a fitted Bayesian model using the log-likelihood evaluated at the posterior simulations of the parameter values. LOO and WAIC have various advantages over simpler estimates of predictive error such as AIC and DIC but are less used in practice because they involve additional computational steps. Here we lay out fast and stable computations for LOO and WAIC that can be performed using existing simulation draws. We introduce an efficient computation of LOO using Pareto-smoothed importance sampling (PSIS), a new procedure for regularizing importance weights. Although WAIC is asymptotically equal to LOO, we demonstrate that PSIS-LOO is more robust in the finite case with weak priors or influential observations. As a byproduct of our calculations, we also obtain approximate standard errors for estimated predictive errors and for comparing predictive errors between two models. We implement the computations in an R package called 'loo' and demonstrate using models fit with the Bayesian inference package Stan.

2,455 citations
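For readers who want to try these computations, below is a minimal sketch using the documented interface of the loo package described in the abstract above. The stanfit object and the name of the pointwise log-likelihood quantity in the Stan program ("log_lik") are assumptions for illustration, not prescribed by the paper.

    library(loo)

    # Pointwise log-likelihood draws from a Stan fit whose generated
    # quantities block defines a variable named "log_lik" (an assumed name).
    log_lik <- extract_log_lik(stanfit, parameter_name = "log_lik",
                               merge_chains = FALSE)

    # Relative effective sample sizes, used to account for autocorrelation
    # in the MCMC draws when computing the PSIS-LOO estimates.
    r_eff <- relative_eff(exp(log_lik))

    # PSIS-LOO: expected log predictive density with standard errors and
    # Pareto k diagnostics for the smoothed importance weights.
    loo_fit <- loo(log_lik, r_eff = r_eff)
    print(loo_fit)

    # WAIC computed from the same draws, for comparison.
    waic_fit <- waic(log_lik)

Two fitted models can then be compared with loo_compare(), which reports the difference in expected log predictive density along with a standard error, the byproduct mentioned at the end of the abstract.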


Journal ArticleDOI
TL;DR: In this article, the authors propose an alternative definition of R2 for Bayesian fits, where the numerator can be larger than the denominator, which is a problem for Bayes fits.
Abstract: The usual definition of R2 (variance of the predicted values divided by the variance of the data) has a problem for Bayesian fits, as the numerator can be larger than the denominator. We propose an...

452 citations
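The alternative definition in the published paper divides the variance of the predicted values by itself plus the residual variance, computed separately for each posterior draw, so R2 has a posterior distribution and cannot exceed 1. A minimal sketch in R, where y is the outcome vector and ypred is a draws-by-observations matrix of predicted values (both placeholder names):

    # Bayesian R2: one value per posterior draw.
    bayes_R2_draws <- function(y, ypred) {
      var_fit <- apply(ypred, 1, var)    # variance of predicted values
      resid   <- sweep(ypred, 2, y)      # ypred - y for each draw
      var_res <- apply(resid, 1, var)    # residual variance per draw
      var_fit / (var_fit + var_res)      # always bounded above by 1
    }

    # Summarize the posterior distribution of R2, e.g. by its median:
    # median(bayes_R2_draws(y, ypred))

The rstanarm package exposes a ready-made version of this computation as bayes_R2().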

Journal ArticleDOI
TL;DR: Visualization is helpful in each stage of the Bayesian workflow (model building, inference, model checking and evaluation, and model expansion) and is indispensable when drawing inferences from the types of modern, high-dimensional models used by applied researchers.
Abstract: Bayesian data analysis is about more than just computing a posterior distribution, and Bayesian visualization is about more than trace plots of Markov chains. Practical Bayesian data analysis, like all data analysis, is an iterative process of model building, inference, model checking and evaluation, and model expansion. Visualization is helpful in each of these stages of the Bayesian workflow and it is indispensable when drawing inferences from the types of modern, high dimensional models that are used by applied researchers.

440 citations
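As a concrete illustration of the plots this workflow calls for, the sketch below uses the bayesplot R package, which accompanies this line of work; fit, y, and yrep are placeholder objects from a fitted Stan model.

    library(bayesplot)

    # Inference stage: trace plots of the Markov chains for selected
    # parameters ("alpha" and "sigma" are assumed parameter names).
    posterior <- as.array(fit)
    mcmc_trace(posterior, pars = c("alpha", "sigma"))

    # Model-checking stage: posterior predictive check overlaying the
    # observed outcome y with 50 posterior predictive draws yrep.
    ppc_dens_overlay(y, yrep[1:50, ])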


Cited by
Journal ArticleDOI
TL;DR: The brms package implements Bayesian multilevel models in R using the probabilistic programming language Stan, allowing users to fit linear, robust linear, binomial, Poisson, survival, ordinal, zero-inflated, hurdle, and even non-linear models, all in a multilevel context.
Abstract: The brms package implements Bayesian multilevel models in R using the probabilistic programming language Stan. A wide range of distributions and link functions are supported, allowing users to fit - among others - linear, robust linear, binomial, Poisson, survival, ordinal, zero-inflated, hurdle, and even non-linear models all in a multilevel context. Further modeling options include autocorrelation of the response variable, user defined covariance structures, censored data, as well as meta-analytic standard errors. Prior specifications are flexible and explicitly encourage users to apply prior distributions that actually reflect their beliefs. In addition, model fit can easily be assessed and compared with the Watanabe-Akaike information criterion and leave-one-out cross-validation.

4,353 citations
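A minimal sketch of the interface the abstract describes, with placeholder data and variable names (mydata, count, treatment, patient):

    library(brms)

    # Multilevel Poisson regression with a varying intercept per patient.
    fit <- brm(count ~ treatment + (1 | patient),
               data = mydata, family = poisson())

    summary(fit)

    # Model fit assessment mentioned in the abstract: WAIC and
    # leave-one-out cross-validation computed from the posterior draws.
    waic(fit)
    loo(fit)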

Journal ArticleDOI
Rupert R A Bourne (Anglia Ruskin University), Seth Flaxman (University of Oxford), Tasanee Braithwaite, Maria V Cicinelli, Aditi Das, Jost B Jonas, and over 100 further coauthors from institutions worldwide (full author and affiliation list abridged).
TL;DR: There is an ongoing reduction in the age-standardised prevalence of blindness and visual impairment, yet the growth and ageing of the world's population are causing a substantial increase in the number of people affected, highlighting the need to scale up vision impairment alleviation efforts at all levels.

1,473 citations

Journal ArticleDOI
TL;DR: brms provides an intuitive and powerful formula syntax that extends the well-known formula syntax of lme4; the paper introduces this syntax in detail and demonstrates its usefulness with four examples, each showing different relevant aspects of the syntax.
Abstract: The brms package allows R users to easily specify a wide range of Bayesian single-level and multilevel models, which are fitted with the probabilistic programming language Stan behind the scenes. Several response distributions are supported, of which all parameters (e.g., location, scale, and shape) can be predicted at the same time, thus allowing for distributional regression. Non-linear relationships may be specified using non-linear predictor terms or semi-parametric approaches such as splines or Gaussian processes. To make all of these modeling options possible in a multilevel framework, brms provides an intuitive and powerful formula syntax, which extends the well-known formula syntax of lme4. The purpose of the present paper is to introduce this syntax in detail and to demonstrate its usefulness with four examples, each showing different relevant aspects of the syntax.

1,463 citations
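A few examples of the extended formula syntax the abstract describes, with placeholder variable names (y, x, group, mydata):

    library(brms)

    # Distributional regression: the residual standard deviation sigma
    # gets its own linear predictor alongside the mean.
    f1 <- bf(y ~ x + (1 | group), sigma ~ x)

    # Non-linear model: a and b are non-linear parameters, each with its
    # own (here constant) linear predictor; nl = TRUE marks the formula
    # as non-linear.
    f2 <- bf(y ~ a * exp(b * x), a + b ~ 1, nl = TRUE)

    # Semi-parametric smooth term via a spline, as in mgcv.
    f3 <- bf(y ~ s(x) + (1 | group))

    # Any of these can then be fitted, e.g. brm(f1, data = mydata).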