Journal ArticleDOI

Approximate Bayesian Inference with the Weighted Likelihood Bootstrap

Michael A. Newton
- 01 Jan 1994 - 
- Vol. 56, Iss: 1, pp 3-26
TLDR
The weighted likelihood bootstrap (WLB), a generalization of Rubin's Bayesian bootstrap, is a method for simulating approximately from a posterior distribution.
Abstract
We introduce the weighted likelihood bootstrap (WLB) as a way to simulate approximately from a posterior distribution. This method is often easy to implement, requiring only an algorithm for calculating the maximum likelihood estimator, such as iteratively reweighted least squares. In the generic weighting scheme, the WLB is first order correct under quite general conditions. Inaccuracies can be removed by using the WLB as a source of samples in the sampling-importance resampling (SIR) algorithm, which also allows incorporation of particular prior information. The SIR-adjusted WLB can be a competitive alternative to other integration methods in certain models. Asymptotic expansions elucidate the second-order properties of the WLB, which is a generalization of Rubin's Bayesian bootstrap. The calculation of approximate Bayes factors for model comparison is also considered. We note that, given a sample simulated from the posterior distribution, the required marginal likelihood may be simulation-consistently estimated by the harmonic mean of the associated likelihood values; a modification of this estimator that avoids instability is also noted. These methods provide simple ways of calculating approximate Bayes factors and posterior model probabilities for a very wide class of models.
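The two ideas in the abstract — the generic WLB weighting scheme and the harmonic mean estimator of the marginal likelihood — can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a toy normal model with known unit variance (so the weighted MLE has a closed form), and all names (`wlb_draws`, `log_lik`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: n i.i.d. observations from a normal model with known
# variance 1; the parameter of interest is the mean theta.
x = rng.normal(loc=2.0, scale=1.0, size=200)
n = len(x)

def wlb_draws(x, n_draws=2000):
    """Weighted likelihood bootstrap: for each draw, sample i.i.d.
    exponential(1) weights (the generic uniform-Dirichlet scheme up to
    normalization) and maximize the weighted log-likelihood. For a normal
    mean with known variance, the weighted MLE is the weighted average."""
    out = np.empty(n_draws)
    for b in range(n_draws):
        w = rng.exponential(1.0, size=len(x))
        out[b] = np.sum(w * x) / np.sum(w)  # weighted MLE in closed form
    return out

theta = wlb_draws(x)
# theta now approximates a sample from the posterior of the mean; its
# spread should be close to the asymptotic posterior s.d. 1/sqrt(n).

def log_lik(t):
    # Log-likelihood of the data at each sampled parameter value.
    return (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * np.sum((x - t[:, None]) ** 2, axis=1))

# Harmonic mean estimate of the marginal likelihood from posterior draws:
# m_hat = 1 / mean(1 / L(theta_b)), computed in log space for stability.
ll = log_lik(theta)
m = np.max(-ll)
log_marginal = -(m + np.log(np.mean(np.exp(-ll - m))))
```

As the abstract notes, the raw harmonic mean estimator can be unstable because the reciprocal likelihood has heavy tails; the log-space computation above only guards against overflow, not against that instability.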


Citations
Journal ArticleDOI

MrBayes 3.2: Efficient Bayesian Phylogenetic Inference and Model Choice across a Large Model Space

TL;DR: The new version provides convergence diagnostics and allows multiple analyses to be run in parallel with convergence progress monitored on the fly, and provides more output options than previously, including samples of ancestral states, site rates, site dN/dS ratios, branch rates, and node dates.
Journal ArticleDOI

BEAST: Bayesian evolutionary analysis by sampling trees

TL;DR: BEAST is a fast, flexible software architecture for Bayesian analysis of molecular sequences related by an evolutionary tree that provides models for DNA and protein sequence evolution, highly parametric coalescent analysis, relaxed clock phylogenetics, non-contemporaneous sequence data, statistical alignment and a wide range of options for prior distributions.
Book

Machine Learning : A Probabilistic Perspective

TL;DR: This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
Journal ArticleDOI

mclust 5: Clustering, Classification and Density Estimation Using Gaussian Finite Mixture Models.

TL;DR: This updated version of mclust adds new covariance structures, dimension reduction capabilities for visualisation, model selection criteria, initialisation strategies for the EM algorithm, and bootstrap-based inference, making it a full-featured R package for data analysis via finite mixture modelling.
Journal ArticleDOI

Generalized Linear Models (2nd ed.)

John H. Schuenemeyer
- 01 May 1992 - 
References
BookDOI

Density estimation for statistics and data analysis

TL;DR: Topics include the kernel method for multivariate data, three important methods, and density estimation in action.
Journal ArticleDOI

Bootstrap Methods: Another Look at the Jackknife

TL;DR: In this article, the authors discuss the problem of estimating the sampling distribution of a pre-specified random variable R(X, F) on the basis of the observed data x.
Journal ArticleDOI

Inference from Iterative Simulation Using Multiple Sequences

TL;DR: The focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, and the results are derived as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations.
Journal ArticleDOI

Random-effects models for longitudinal data

Nan M. Laird, +1 more
- 01 Dec 1982 - 
TL;DR: In this article, a unified approach to fitting two-stage random-effects models, based on a combination of empirical Bayes and maximum likelihood estimation of model parameters and using the EM algorithm, is discussed.
Book

The jackknife, the bootstrap, and other resampling plans

Bradley Efron
TL;DR: Topics include the delta method and the influence function; cross-validation, jackknife and bootstrap; balanced repeated replication (half-sampling); random subsampling; and nonparametric confidence intervals.