Scaling It Up: Stochastic Search Structure Learning in Graphical Models
TLDR
A new framework for structure learning is proposed that is based on continuous spike and slab priors, uses latent variables to identify graphs, and efficiently handles problems with hundreds of variables.
Abstract
Gaussian concentration graph models and covariance graph models are two classes of graphical models that are useful for uncovering latent dependence structures among multivariate variables. In the Bayesian literature, graphs are often determined through the use of priors over the space of positive definite matrices with fixed zeros, but these methods present daunting computational burdens in large problems. Motivated by the superior computational efficiency of continuous shrinkage priors for regression analysis, we propose a new framework for structure learning that is based on continuous spike and slab priors and uses latent variables to identify graphs. We discuss model specification, computation, and inference for both concentration and covariance graph models. The new approach produces reliable estimates of graphs and efficiently handles problems with hundreds of variables.
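The core idea in the abstract — a continuous spike-and-slab mixture prior with a latent binary indicator per potential edge — can be illustrated with a minimal stdlib sketch. This is a schematic of the prior's inclusion-probability computation, not the paper's actual sampler; the function name and the values of v0, v1, and pi are illustrative assumptions.

```python
import math

def normal_pdf(x, var):
    """Density of N(0, var) at x."""
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def inclusion_prob(omega_ij, v0=0.01, v1=1.0, pi=0.5):
    """Posterior probability that the latent edge indicator equals 1, given the
    current value of an off-diagonal element omega_ij, under a continuous
    spike-and-slab prior: a N(0, v0) spike (v0 small, edge absent) mixed with a
    N(0, v1) slab (v1 large, edge present) with prior inclusion weight pi.
    Illustrative hyperparameter values, not the paper's settings."""
    slab = pi * normal_pdf(omega_ij, v1)
    spike = (1 - pi) * normal_pdf(omega_ij, v0)
    return slab / (slab + spike)
```

Elements near zero are attributed to the spike (no edge), while clearly nonzero elements are attributed to the slab — e.g. `inclusion_prob(0.0)` is small, while `inclusion_prob(1.0)` is essentially 1. Sampling these indicators inside a Gibbs sweep is what lets the method search graph space with continuous, computationally cheap updates.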
Citations
Bayesian statistics and modelling
Rens van de Schoot, Sarah Depaoli, Ruth King, Bianca Kramer, Kaspar Märtens, Mahlet G. Tadesse, Marina Vannucci, Andrew Gelman, Duco Veen, Joukje Willemsen, Christopher Yau +12 more
TL;DR: This Primer on Bayesian statistics summarizes the most important aspects of determining prior distributions, likelihood functions and posterior distributions, in addition to discussing different applications of the method across disciplines.
Wishart distributions for decomposable covariance graph models
Kshitij Khare, Bala Rajaratnam +1 more
TL;DR: This paper constructs on the cone P_G a family of Wishart distributions that serve a similar purpose in the covariance graph setting as those constructed by Letac and Massam, proves convergence of a block Gibbs sampler, and establishes hyper-Markov properties for this class of priors.
Bayesian modelling of Dupuytren disease by using Gaussian copula graphical models
TL;DR: A computationally efficient Bayesian framework to discover potential risk factors and investigate which fingers are jointly affected in Dupuytren disease is provided and a transdimensional Markov chain Monte Carlo algorithm based on a birth–death process is constructed.
Joint Bayesian variable and graph selection for regression models with network-structured predictors.
TL;DR: This work develops a Bayesian approach to perform selection of predictors that are linked within a network by combining a sparse regression model, relating the predictors to a response variable, with a graphical model describing conditional dependencies among the predictors.
Model-based clustering with sparse covariance matrices
TL;DR: In this paper, a penalized likelihood approach is employed for estimation and a general penalty term on the graph configurations can be used to induce different levels of sparsity and incorporate prior knowledge.
References
Sparse inverse covariance estimation with the graphical lasso
TL;DR: Using a coordinate descent procedure for the lasso, a simple algorithm is developed that solves a 1000-node problem in at most a minute and is 30-4000 times faster than competing methods.
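The coordinate-descent procedure this TL;DR refers to reduces, at each coordinate, to a closed-form soft-thresholding update. A minimal stdlib sketch of that operator (the function name is illustrative; this is the elementary building block, not the graphical-lasso implementation itself):

```python
def soft_threshold(z, gamma):
    """S(z, gamma) = sign(z) * max(|z| - gamma, 0): the closed-form solution of
    the one-dimensional lasso subproblem solved at each coordinate-descent step.
    Coefficients whose magnitude falls below the penalty gamma are set exactly
    to zero, which is what produces sparsity in the estimated inverse covariance."""
    if z > gamma:
        return z - gamma
    if z < -gamma:
        return z + gamma
    return 0.0
```

For example, `soft_threshold(3.0, 1.0)` shrinks to `2.0`, while `soft_threshold(0.5, 1.0)` is zeroed out; cycling this update over coordinates until convergence is the inner loop that makes the graphical lasso fast.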
The Bayesian Lasso
Trevor Park, George Casella +1 more
TL;DR: The Lasso estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters have independent Laplace (i.e., double-exponential) priors.
Variable selection via Gibbs sampling
TL;DR: In this paper, the Gibbs sampler is used to indirectly sample from the multinomial posterior distribution on the set of possible subset choices to identify the promising subsets by their more frequent appearance in the Gibbs sample.
Model selection and estimation in the Gaussian graphical model
Ming Yuan, Yi Lin +1 more
TL;DR: The implementation of the penalized likelihood methods for estimating the concentration matrix in the Gaussian graphical model is nontrivial, but it is shown that the computation can be done effectively by taking advantage of the efficient maxdet algorithm developed in convex optimization.
The horseshoe estimator for sparse signals
TL;DR: In this article, the authors propose a new approach to sparsity, the horseshoe estimator, a member of the family of multivariate scale mixtures of normals.
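The horseshoe prior's scale-mixture representation — a normal with a half-Cauchy local scale — can be sampled with a short stdlib sketch. This is a generic illustration of the prior, not code from the cited paper; the function name and default global scale tau are assumptions.

```python
import math
import random

def sample_horseshoe(n, tau=1.0, seed=0):
    """Draw n signals from the horseshoe prior via its scale-mixture form:
    theta_i ~ N(0, lambda_i**2 * tau**2) with lambda_i ~ half-Cauchy(0, 1).
    The half-Cauchy is sampled by folding an inverse-CDF standard Cauchy draw."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        lam = abs(math.tan(math.pi * (rng.random() - 0.5)))  # |standard Cauchy|
        draws.append(rng.gauss(0.0, lam * tau))
    return draws
```

The heavy-tailed local scales are what give the horseshoe its signature behavior: most draws are shrunk tightly toward zero, while occasional large scales leave strong signals nearly untouched.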