Author

Nicky Best

Bio: Nicky Best is an academic researcher from GlaxoSmithKline. The author has contributed to research on the topics of Population and Bayesian probability, has an h-index of 49, and has co-authored 146 publications receiving 18,394 citations. Previous affiliations of Nicky Best include Imperial College London and Harvard University.


Papers
Journal ArticleDOI
TL;DR: Discusses how and why various modern computing concepts, such as object-orientation and run-time linking, feature in the software's design, and how the framework may be extended.
Abstract: WinBUGS is a fully extensible modular framework for constructing and analysing Bayesian full probability models. Models may be specified either textually via the BUGS language or pictorially using a graphical interface called DoodleBUGS. WinBUGS processes the model specification and constructs an object-oriented representation of the model. The software offers a user interface, based on dialogue boxes and menu commands, through which the model may then be analysed using Markov chain Monte Carlo techniques. In this paper we discuss how and why various modern computing concepts, such as object-orientation and run-time linking, feature in the software's design. We also discuss how the framework may be extended. It is possible to write specific applications that form an apparently seamless interface with WinBUGS for users with specialized requirements. It is also possible to interface with WinBUGS at a lower level by incorporating new object types that may be used by WinBUGS without knowledge of the modules in which they are implemented. Neither of these types of extension requires access to, or even recompilation of, the WinBUGS source code.
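As a concrete illustration of the textual BUGS-language model specification described above, here is a minimal sketch (not taken from the paper) of a linear regression model driven from R via the R2WinBUGS package. It assumes a local WinBUGS installation, and the data are simulated for illustration.

# Minimal sketch: a BUGS-language model specification driven from R
# via R2WinBUGS (assumes WinBUGS is installed locally; data simulated).
library(R2WinBUGS)

model_string <- "
model {
  for (i in 1:N) {
    y[i] ~ dnorm(mu[i], tau)      # stochastic node: normal likelihood
    mu[i] <- alpha + beta * x[i]  # logical node: linear predictor
  }
  alpha ~ dnorm(0, 1.0E-6)        # vague priors (BUGS parameterizes by precision)
  beta  ~ dnorm(0, 1.0E-6)
  tau   ~ dgamma(0.001, 0.001)
}
"
writeLines(model_string, "linreg.bug")

set.seed(1)
x <- 1:20
y <- 2 + 0.5 * x + rnorm(20)

fit <- bugs(data = list(x = x, y = y, N = 20),
            inits = function() list(alpha = 0, beta = 0, tau = 1),
            parameters.to.save = c("alpha", "beta", "tau"),
            model.file = "linreg.bug",
            n.chains = 3, n.iter = 5000)
print(fit)  # posterior summaries and convergence diagnostics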

5,620 citations

01 Mar 2006
TL;DR: The coda package for R contains a set of functions designed to help the user of MCMC software answer two questions: how long the burn-in period should be, and how many samples are required to accurately estimate posterior quantities of interest.
Abstract: At first sight, Bayesian inference with Markov Chain Monte Carlo (MCMC) appears to be straightforward. The user defines a full probability model, perhaps using one of the programs discussed in this issue; an underlying sampling engine takes the model definition and returns a sequence of dependent samples from the posterior distribution of the model parameters, given the supplied data. The user can derive any summary of the posterior distribution from this sample. For example, to calculate a 95% credible interval for a parameter α, it suffices to take 1000 MCMC iterations of α and sort them so that α_(1) < α_(2) < ... < α_(1000). The credible interval estimate is then (α_(25), α_(975)). However, there is a price to be paid for this simplicity. Unlike most numerical methods used in statistical inference, MCMC does not give a clear indication of whether it has converged. The underlying Markov chain theory only guarantees that the distribution of the output will converge to the posterior in the limit as the number of iterations increases to infinity. The user is generally ignorant about how quickly convergence occurs, and therefore has to fall back on post hoc testing of the sampled output. By convention, the sample is divided into two parts: a "burn-in" period during which all samples are discarded, and the remainder of the run in which the chain is considered to have converged sufficiently close to the limiting distribution to be used. Two questions then arise: 1. How long should the burn-in period be? 2. How many samples are required to accurately estimate posterior quantities of interest? The coda package for R contains a set of functions designed to help the user answer these questions. Some of these convergence diagnostics are simple graphical ways of summarizing the data. Others are formal statistical tests.
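The following R fragment sketches the workflow just described: the sorted-sample credible interval from the text, followed by some of coda's convergence diagnostics. The "posterior" draws are simulated stand-ins, since no real sampler output is at hand.

# Sketch of the workflow above: an empirical 95% credible interval from
# sorted draws, then coda diagnostics. The draws are simulated stand-ins.
library(coda)

set.seed(42)
draws <- mcmc(rnorm(1000, mean = 1.5, sd = 0.3))  # pretend MCMC output

sorted <- sort(as.numeric(draws))
c(lower = sorted[25], upper = sorted[975])  # (alpha_(25), alpha_(975))

geweke.diag(draws)    # Z-test comparing early and late parts of the chain
raftery.diag(draws)   # run length needed to estimate tail quantiles
effectiveSize(draws)  # sample size adjusted for autocorrelation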

3,098 citations

Journal ArticleDOI
TL;DR: A balanced critical appraisal of the BUGS software is provided, highlighting how various ideas have led to unprecedented flexibility while at the same time producing negative side effects.
Abstract: BUGS is a software package for Bayesian inference using Gibbs sampling. The software has been instrumental in raising awareness of Bayesian modelling among both academic and commercial communities internationally, and has enjoyed considerable success over its 20-year life span. Despite this, the software has a number of shortcomings and a principal aim of this paper is to provide a balanced critical appraisal, in particular highlighting how various ideas have led to unprecedented flexibility while at the same time producing negative side effects. We also present a historical overview of the BUGS project and some future perspectives.

1,865 citations

Book
02 Oct 2012
TL;DR: A textbook introduction to Bayesian analysis with the BUGS software, progressing from probability and Monte Carlo simulation using BUGS through Bayesian inference, MCMC methods, prior distributions, regression, model checking and hierarchical models.
Abstract: Contents:
1. Introduction: Probability and Parameters (Probability; Probability distributions; Calculating properties of probability distributions; Monte Carlo integration)
2. Monte Carlo Simulations Using BUGS (Introduction to BUGS; DoodleBUGS; Using BUGS to simulate from distributions; Transformations of random variables; Complex calculations using Monte Carlo; Multivariate Monte Carlo analysis; Predictions with unknown parameters)
3. Introduction to Bayesian Inference (Bayesian learning; Posterior predictive distributions; Conjugate Bayesian inference; Inference about a discrete parameter; Combinations of conjugate analyses; Bayesian and classical methods)
4. Introduction to Markov Chain Monte Carlo Methods (Bayesian computation; Initial values; Convergence; Efficiency and accuracy; Beyond MCMC)
5. Prior Distributions (Different purposes of priors; Vague, 'objective' and 'reference' priors; Representation of informative priors; Mixture of prior distributions; Sensitivity analysis)
6. Regression Models (Linear regression with normal errors; Linear regression with non-normal errors; Nonlinear regression with normal errors; Multivariate responses; Generalised linear regression models; Inference on functions of parameters; Further reading)
7. Categorical Data (2 x 2 tables; Multinomial models; Ordinal regression; Further reading)
8. Model Checking and Comparison (Introduction; Deviance; Residuals; Predictive checks and Bayesian p-values; Model assessment by embedding in larger models; Model comparison using deviances; Bayes factors; Model uncertainty; Discussion on model comparison; Prior-data conflict)
9. Issues in Modelling (Missing data; Prediction; Measurement error; Cutting feedback; New distributions; Censored, truncated and grouped observations; Constrained parameters; Bootstrapping; Ranking)
10. Hierarchical Models (Exchangeability; Priors; Hierarchical regression models; Hierarchical models for variances; Redundant parameterisations; More general formulations; Checking of hierarchical models; Comparison of hierarchical models; Further resources)
11. Specialised Models (Time-to-event data; Time series models; Spatial models; Evidence synthesis; Differential equation and pharmacokinetic models; Finite mixture and latent class models; Piecewise parametric models; Bayesian nonparametric models)
12. Different Implementations of BUGS (Introduction; BUGS engines and interfaces; Expert systems and MCMC methods; Classic BUGS; WinBUGS; OpenBUGS; JAGS)
Appendix A: BUGS Language Syntax (Introduction; Distributions; Deterministic functions; Repetition; Multivariate quantities; Indexing; Data transformations; Commenting)
Appendix B: Functions in BUGS (Standard functions; Trigonometric functions; Matrix algebra; Distribution utilities and model checking; Functionals and differential equations; Miscellaneous)
Appendix C: Distributions in BUGS (Continuous univariate, unrestricted range; Continuous univariate, restricted to be positive; Continuous univariate, restricted to a finite interval; Continuous multivariate distributions; Discrete univariate distributions; Discrete multivariate distributions)
Bibliography. Index.
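To give a flavour of the book's early "Monte Carlo simulation using BUGS" material, here is a small hypothetical sketch (the model and numbers are illustrative, not taken from the book) in which a model with no observed data is run purely to simulate from distributions, here through JAGS via the rjags package.

# Hypothetical sketch of forward Monte Carlo simulation in the BUGS
# language: no data are supplied, so sampling just simulates from the
# prior and its predictive distribution. Run through JAGS via rjags.
library(rjags)

mc_model <- "
model {
  theta ~ dbeta(3, 7)        # illustrative prior for a proportion
  y ~ dbin(theta, 20)        # predictive number of successes in 20 trials
  P.crit <- step(y - 15)     # indicator: its mean estimates P(y >= 15)
}
"
jm <- jags.model(textConnection(mc_model), n.chains = 1, quiet = TRUE)
sims <- coda.samples(jm, variable.names = c("theta", "y", "P.crit"),
                     n.iter = 10000)
summary(sims)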

772 citations

Journal ArticleDOI
TL;DR: A robust nonlinear full probability model for population pharmacokinetic data is proposed and it is demonstrated that the method enables Bayesian inference for this model, through an analysis of antibiotic administration in new‐born babies.
Abstract: Gibbs sampling is a powerful technique for statistical inference. It involves little more than sampling from full conditional distributions, which can be both complex and computationally expensive to evaluate. Gilks and Wild have shown that in practice full conditionals are often log‐concave, and they proposed a method of adaptive rejection sampling for efficiently sampling from univariate log‐concave distributions. In this paper, to deal with non‐log‐concave full conditional distributions, we generalize adaptive rejection sampling to include a Hastings‐Metropolis algorithm step. One important field of application in which statistical models may lead to non‐log‐concave full conditionals is population pharmacokinetics. Here, the relationship between drug dose and blood or plasma concentration in a group of patients typically is modelled by using nonlinear mixed effects models. Often, the data used for analysis are routinely collected hospital measurements, which tend to be noisy and irregular. Consequently, a robust (t‐distributed) error structure is appropriate to account for outlying observations and/or patients. We propose a robust nonlinear full probability model for population pharmacokinetic data. We demonstrate that our method enables Bayesian inference for this model, through an analysis of antibiotic administration in new‐born babies.
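The paper's method augments adaptive rejection sampling with a Hastings-Metropolis accept/reject step so that non-log-concave full conditionals can still be sampled correctly. The sketch below shows only the Metropolis correction idea on a deliberately non-log-concave (bimodal) target, using a plain random-walk proposal rather than the paper's adaptive piecewise-exponential envelope.

# Simplified stand-in for the idea above: a Metropolis accept/reject
# step yields a valid sampler for a non-log-concave target. This is
# plain random-walk Metropolis, NOT the paper's adaptive rejection
# Metropolis sampling (ARMS) envelope construction.
log_target <- function(x) {
  log(0.5 * dnorm(x, -2, 0.7) + 0.5 * dnorm(x, 2, 0.7))  # bimodal mixture
}

set.seed(1)
n_iter <- 20000
x <- numeric(n_iter)
x[1] <- 0
for (t in 2:n_iter) {
  prop <- x[t - 1] + rnorm(1, 0, 1.5)                  # random-walk proposal
  log_accept <- log_target(prop) - log_target(x[t - 1])
  x[t] <- if (log(runif(1)) < log_accept) prop else x[t - 1]
}
hist(x[-(1:1000)], breaks = 60, freq = FALSE)  # discard burn-in, inspect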

687 citations


Cited by
More filters
Book
23 Sep 2019
TL;DR: The Cochrane Handbook for Systematic Reviews of Interventions is the official document that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions.

21,235 citations

Journal ArticleDOI
TL;DR: The focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization; the results are derived as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations.
Abstract: The Gibbs sampler, the algorithm of Metropolis and similar iterative simulation methods are potentially very helpful for summarizing multivariate distributions. Used naively, however, iterative simulation can give misleading answers. Our methods are simple and generally applicable to the output of any iterative simulation; they are designed for researchers primarily interested in the science underlying the data and models they are analyzing, rather than for researchers interested in the probability theory underlying the iterative simulations themselves. Our recommended strategy is to use several independent sequences, with starting points sampled from an overdispersed distribution. At each step of the iterative simulation, we obtain, for each univariate estimand of interest, a distributional estimate and an estimate of how much sharper the distributional estimate might become if the simulations were continued indefinitely. Because our focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, we derive our results as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations. The methods are illustrated on a random-effects mixture model applied to experimental measurements of reaction times of normal and schizophrenic patients.
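Below is a hand-rolled sketch of the basic potential scale reduction factor ("R-hat") from this paper; the full published diagnostic, with its sampling-variability correction, is available as gelman.diag in the R coda package. The chains here are toy AR(1) sequences standing in for real MCMC output, started from overdispersed points as the paper recommends.

# Sketch of the basic potential scale reduction factor on m parallel
# chains with overdispersed starting points (toy AR(1) "samplers").
set.seed(7)
m <- 4      # number of chains
n <- 1000   # iterations per chain
chains <- sapply(1:m, function(j) {
  x <- numeric(n)
  x[1] <- rnorm(1, 0, 10)  # overdispersed start
  for (t in 2:n) x[t] <- 0.9 * x[t - 1] + rnorm(1)
  x
})

B <- n * var(colMeans(chains))      # between-chain variance
W <- mean(apply(chains, 2, var))    # mean within-chain variance
var_hat <- (n - 1) / n * W + B / n  # pooled estimate of posterior variance
sqrt(var_hat / W)                   # R-hat: near 1 suggests convergence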

13,884 citations

Book
24 Aug 2012
TL;DR: This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
Abstract: Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package--PMTK (probabilistic modeling toolkit)--that is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.

8,059 citations

Journal ArticleDOI
TL;DR: It is argued that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the standards that have been in place for many decades, and it is shown that LMEMs generalize best when they include the maximal random effects structure justified by the design.
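To make the contrast concrete, here is a hypothetical lme4 sketch (the data are simulated and all variable names are invented, not taken from the paper) comparing a random-intercepts-only model with the maximal random-effects structure justified by a design in which condition varies within both subjects and items.

# Hypothetical illustration of intercepts-only versus "maximal" random
# effects in lme4; data and variable names are invented for this sketch.
library(lme4)

set.seed(3)
d <- expand.grid(subject = factor(1:24), item = factor(1:16),
                 condition = c(-0.5, 0.5))
d$rt <- 500 + 30 * d$condition +
  rnorm(24, 0, 40)[d$subject] +                # by-subject intercepts
  rnorm(24, 0, 15)[d$subject] * d$condition +  # by-subject slopes
  rnorm(16, 0, 25)[d$item] +                   # by-item intercepts
  rnorm(nrow(d), 0, 50)                        # residual noise

# Random intercepts only (the practice the paper argues against):
m_min <- lmer(rt ~ condition + (1 | subject) + (1 | item), data = d)

# Maximal random-effects structure justified by this design:
m_max <- lmer(rt ~ condition + (1 + condition | subject) +
                (1 + condition | item), data = d)
anova(m_min, m_max)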

6,878 citations

Journal ArticleDOI
TL;DR: A new, multifunctional phylogenetics package, phytools, for the R statistical computing environment is presented, with a focus on methods for phylogenetic comparative biology.
Abstract: Summary
1. Here, I present a new, multifunctional phylogenetics package, phytools, for the R statistical computing environment.
2. The focus of the package is on methods for phylogenetic comparative biology; however, it also includes tools for tree inference, phylogeny input/output, plotting, manipulation and several other tasks.
3. I describe and tabulate the major methods implemented in phytools, and in addition provide some demonstration of its use in the form of two illustrative examples.
4. Finally, I conclude by briefly describing an active web-log that I use to document present and future developments for phytools. I also note other web resources for phylogenetics in the R computational environment.
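A brief sketch of typical phytools usage follows; the tree and trait are simulated here, and the workflow is a generic comparative-biology example rather than one of the paper's two illustrations.

# Small sketch of phytools in use: simulate a tree and a continuous
# trait, estimate ancestral states, and map the trait along the tree.
library(phytools)

set.seed(11)
tree <- pbtree(n = 26)  # simulate a pure-birth phylogeny with 26 tips
x <- fastBM(tree)       # simulate a continuous trait under Brownian motion
fastAnc(tree, x)        # ML ancestral state estimates at internal nodes
contMap(tree, x)        # plot the trait reconstruction along the branches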

6,404 citations