
Showing papers on "Gaussian" published in 2011


Journal ArticleDOI
TL;DR: It is shown that, using an approximate stochastic weak solution to (linear) stochastic partial differential equations, an explicit link between GFs and GMRFs can be provided for some Gaussian fields in the Matérn class and for any triangulation of R^d, formulated as a basis function representation.
Abstract: Continuously indexed Gaussian fields (GFs) are the most important ingredient in spatial statistical modelling and geostatistics. The specification through the covariance function gives an intuitive interpretation of the field properties. On the computational side, GFs are hampered by the big n problem, since the cost of factorizing dense matrices is cubic in the dimension. Although computational power today is at an all-time high, this fact still seems to be a computational bottleneck in many applications. Along with GFs, there is the class of Gaussian Markov random fields (GMRFs), which are discretely indexed. The Markov property makes the precision matrix involved sparse, which enables the use of numerical algorithms for sparse matrices that, for fields in R^2, only use the square root of the time required by general algorithms. The specification of a GMRF is through its full conditional distributions, but its marginal properties are not transparent in such a parameterization. We show that, using an approximate stochastic weak solution to (linear) stochastic partial differential equations, we can, for some GFs in the Matérn class, provide an explicit link, for any triangulation of R^d, between GFs and GMRFs, formulated as a basis function representation. The consequence is that we can take the best from the two worlds and do the modelling by using GFs but do the computations by using GMRFs. Perhaps more importantly, our approach generalizes to other covariance functions generated by SPDEs, including oscillating and non-stationary GFs, as well as GFs on manifolds. We illustrate our approach by analysing global temperature data with a non-stationary model defined on a sphere.
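To make the GF-GMRF link concrete, the sketch below builds the sparse precision matrix of the SPDE solution in the simplest setting: a regular 1-D mesh, the α = 2 case (Matérn smoothness ν = α − d/2 = 3/2), a lumped (diagonal) mass matrix C and a piecewise-linear stiffness matrix G, giving Q = (κ²C + G) C⁻¹ (κ²C + G). The mesh, κ, and the dense Cholesky sampling are illustrative choices, not the paper's implementation, which works on general triangulations with sparse solvers.

```python
import numpy as np

n, h, kappa = 200, 0.05, 5.0                     # illustrative 1-D mesh and range parameter

# lumped mass matrix C and piecewise-linear stiffness matrix G on a regular mesh
C = h * np.eye(n)
G = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

# GMRF precision of the SPDE solution for alpha = 2:  Q = (kappa^2 C + G) C^{-1} (kappa^2 C + G)
K = kappa**2 * C + G
Q = K @ np.linalg.inv(C) @ K                     # sparse in practice; dense here for brevity

# draw a sample x ~ N(0, Q^{-1}) via the Cholesky factor of the precision matrix
L = np.linalg.cholesky(Q)
x = np.linalg.solve(L.T, np.random.default_rng(0).normal(size=n))
print(x[:5])
```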

2,212 citations


Book
01 Jan 2011
TL;DR: In this book, the authors present the statistical structure of quantum theory, covering statistical models, the mathematics of quantum theory, symmetry groups in quantum mechanics, covariant measurements and optimality, Gaussian states, and unbiased measurements, with a supplement on the statistical structure of quantum theory and hidden variables.
Abstract: Foreword to 2nd English edition.- Foreword to 2nd Russian edition.- Preface.- Chapters: I. Statistical Models.- II. Mathematics of Quantum Theory.- III. Symmetry Groups in Quantum Mechanics.- IV. Covariant Measurements and Optimality.- V. Gaussian States.- VI Unbiased Measurements.- Supplement - Statistical Structure of Quantum Theory and Hidden Variables.- References.

1,600 citations


Proceedings ArticleDOI
03 Oct 2011
TL;DR: In this article, the generalized AMP (G-AMP) algorithm is proposed to estimate a random vector observed through a linear transform followed by a componentwise probabilistic measurement channel.
Abstract: We consider the estimation of a random vector observed through a linear transform followed by a componentwise probabilistic measurement channel. Although such linear mixing estimation problems are generally highly non-convex, Gaussian approximations of belief propagation (BP) have proven to be computationally attractive and highly effective in a range of applications. Recently, Bayati and Montanari have provided a rigorous and extremely general analysis of a large class of approximate message passing (AMP) algorithms that includes many Gaussian approximate BP methods. This paper extends their analysis to a larger class of algorithms to include what we call generalized AMP (G-AMP). G-AMP incorporates general (possibly non-AWGN) measurement channels. Similar to the AWGN output channel case, we show that the asymptotic behavior of the G-AMP algorithm under large i.i.d. Gaussian transform matrices is described by a simple set of state evolution (SE) equations. The general SE equations recover and extend several earlier results, including SE equations for approximate BP on general output channels by Guo and Wang.
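As a point of reference for the class of algorithms analysed here, the sketch below implements the basic soft-thresholding AMP iteration for the AWGN output channel, the special case that G-AMP generalises to arbitrary componentwise measurement channels. The problem sizes, the threshold rule, and the i.i.d. Gaussian matrix are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def soft(v, t):                                   # scalar denoiser: soft thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
n, m, k = 400, 200, 20                            # illustrative sizes
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))     # i.i.d. Gaussian transform matrix
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x0 + rng.normal(0.0, 0.01, m)

x, z = np.zeros(n), y.copy()
for _ in range(30):
    r = x + A.T @ z                               # pseudo-data passed to the scalar denoiser
    tau = 1.5 * np.linalg.norm(z) / np.sqrt(m)    # threshold from the estimated noise level (illustrative rule)
    x_new = soft(r, tau)
    z = y - A @ x_new + (np.count_nonzero(x_new) / m) * z   # Onsager-corrected residual
    x = x_new
print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))
```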

1,030 citations


Journal ArticleDOI
TL;DR: This paper presents a unified framework for the rigid and nonrigid point set registration problem in the presence of significant amounts of noise and outliers, and shows that the popular iterative closest point (ICP) method and several existing point set registration methods in the field are closely related and can be reinterpreted meaningfully in this general framework.
Abstract: In this paper, we present a unified framework for the rigid and nonrigid point set registration problem in the presence of significant amounts of noise and outliers. The key idea of this registration framework is to represent the input point sets using Gaussian mixture models. Then, the problem of point set registration is reformulated as the problem of aligning two Gaussian mixtures such that a statistical discrepancy measure between the two corresponding mixtures is minimized. We show that the popular iterative closest point (ICP) method and several existing point set registration methods in the field are closely related and can be reinterpreted meaningfully in our general framework. Our instantiation of this general framework is based on the L2 distance between two Gaussian mixtures, which has a closed-form expression and in turn leads to a computationally efficient registration algorithm. The resulting registration algorithm exhibits inherent statistical robustness, has an intuitive interpretation, and is simple to implement. We also provide theoretical and experimental comparisons with other robust methods for point set registration.
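The closed-form L2 distance between two Gaussian mixtures rests on the identity ∫ N(x; m1, σ²I) N(x; m2, σ²I) dx = N(m1 − m2; 0, 2σ²I). The sketch below is a toy 2-D rigid instantiation of that idea: both point sets are turned into equal-weight isotropic mixtures and a rotation plus translation is found by minimising the only transform-dependent term of the L2 distance. The bandwidth, the synthetic point sets, and the Nelder-Mead optimiser are illustrative choices, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def overlap(M1, M2, s2):
    # sum over pairs of  ∫ N(x; m1, s2 I) N(x; m2, s2 I) dx = exp(-|m1-m2|^2 / (4 s2)) / (4 pi s2)
    d2 = ((M1[:, None, :] - M2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (4.0 * s2)).sum() / (4.0 * np.pi * s2)

def cost(params, X, Y, s2):
    th, tx, ty = params
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    Xt = X @ R.T + np.array([tx, ty])
    # ||f - g||_2^2 = ∫f^2 - 2∫fg + ∫g^2;  ∫f^2 is invariant under a rigid motion and
    # ∫g^2 is constant, so only the cross term depends on the transform
    return -2.0 * overlap(Xt, Y, s2) / (len(X) * len(Y))

rng = np.random.default_rng(0)
Y = rng.uniform(0.0, 1.0, (60, 2))                          # fixed ("scene") point set
th0, t0 = 0.4, np.array([0.2, -0.1])                        # ground-truth rigid motion
R0 = np.array([[np.cos(th0), -np.sin(th0)], [np.sin(th0), np.cos(th0)]])
X = (Y - t0) @ R0                                           # moving ("model") point set
res = minimize(cost, x0=[0.0, 0.0, 0.0], args=(X, Y, 0.05), method="Nelder-Mead")
print("recovered (theta, tx, ty):", np.round(res.x, 3))     # ideally close to (0.4, 0.2, -0.1)
```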

909 citations


Proceedings Article
12 Dec 2011
TL;DR: A novel algorithm is proposed for solving the resulting optimization problem, a regularized log-determinant program; the algorithm is based on Newton's method and employs a quadratic approximation, with modifications that leverage the structure of the sparse Gaussian MLE problem.
Abstract: The l1 regularized Gaussian maximum likelihood estimator has been shown to have strong statistical guarantees in recovering a sparse inverse covariance matrix, or alternatively the underlying graph structure of a Gaussian Markov Random Field, from very limited samples. We propose a novel algorithm for solving the resulting optimization problem which is a regularized log-determinant program. In contrast to other state-of-the-art methods that largely use first order gradient information, our algorithm is based on Newton's method and employs a quadratic approximation, but with some modifications that leverage the structure of the sparse Gaussian MLE problem. We show that our method is superlinearly convergent, and also present experimental results using synthetic and real application data that demonstrate the considerable improvements in performance of our method when compared to other state-of-the-art methods.
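To show the estimation problem itself, the snippet below sets up a sparse chain-structured precision matrix, simulates Gaussian data, and solves the l1-regularized Gaussian MLE with scikit-learn's GraphicalLasso. That solver is a coordinate-descent/first-order method, i.e., the kind of baseline this paper's Newton-type algorithm is compared against, not the proposed method; the graph, sample size, and regularization strength are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# sparse ground-truth precision matrix (a chain graph), hypothetical example
p, n = 10, 500
Theta = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Theta), size=n)

# The estimator solves  min_{Theta > 0}  -log det(Theta) + tr(S Theta) + lambda ||Theta||_1
# (S = sample covariance); here a first-order solver is used purely for illustration.
est = GraphicalLasso(alpha=0.1).fit(X)
print(np.round(est.precision_, 2))
```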

343 citations


Journal ArticleDOI
TL;DR: This work introduces optimal inverses for the Anscombe transformation, in particular the exact unbiased inverse, a maximum likelihood (ML) inverse, and a more sophisticated minimum mean square error (MMSE) inverse.
Abstract: The removal of Poisson noise is often performed through the following three-step procedure. First, the noise variance is stabilized by applying the Anscombe root transformation to the data, producing a signal in which the noise can be treated as additive Gaussian with unitary variance. Second, the noise is removed using a conventional denoising algorithm for additive white Gaussian noise. Third, an inverse transformation is applied to the denoised signal, obtaining the estimate of the signal of interest. The choice of the proper inverse transformation is crucial in order to minimize the bias error which arises when the nonlinear forward transformation is applied. We introduce optimal inverses for the Anscombe transformation, in particular the exact unbiased inverse, a maximum likelihood (ML) inverse, and a more sophisticated minimum mean square error (MMSE) inverse. We then present an experimental analysis using a few state-of-the-art denoising algorithms and show that the estimation can be consistently improved by applying the exact unbiased inverse, particularly at the low-count regime. This results in a very efficient filtering solution that is competitive with some of the best existing methods for Poisson image denoising.
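The three-step procedure is easy to sketch. Below, the Anscombe transform stabilizes the Poisson noise, a plain Gaussian filter stands in for "a conventional denoising algorithm for additive white Gaussian noise", and the result is mapped back with the simple asymptotically unbiased algebraic inverse (f/2)² − 1/8. The paper's point is precisely that this naive inverse is suboptimal at low counts, where the exact unbiased, ML, or MMSE inverses it introduces do better; the test image and the stand-in filter are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
clean = 20.0 * np.exp(-((np.indices((64, 64)) - 32) ** 2).sum(0) / 200.0)   # smooth test image
noisy = rng.poisson(clean).astype(float)

# 1) variance stabilisation: Anscombe forward transform
f = 2.0 * np.sqrt(noisy + 3.0 / 8.0)            # noise is now approximately Gaussian, unit variance

# 2) any denoiser for additive white Gaussian noise (a plain Gaussian filter as a stand-in)
f_den = gaussian_filter(f, sigma=1.5)

# 3) inverse transform; this asymptotically unbiased algebraic inverse is the naive choice
#    the paper improves upon with exact unbiased, ML, and MMSE inverses at low counts
est = (f_den / 2.0) ** 2 - 1.0 / 8.0
print("RMSE:", np.sqrt(np.mean((est - clean) ** 2)))
```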

341 citations


BookDOI
01 Jan 2011
TL;DR: This book presents the generalized Lorenz-Mie theories in the strict sense, and other GLMTs, together with their application to Gaussian beams and other beams, including the special cases of axisymmetric and Gaussian beams and the localized approximation.
Abstract: Background in Maxwell's Electromagnetism and Maxwell's Equations.- Resolution of Special Maxwell's Equations.- Generalized Lorenz-Mie Theories in the Strict Sense, and other GLMTs.- Gaussian Beams, and Other Beams.- Finite Series.- Special Cases of Axisymmetric and Gaussian Beams.- The Localized Approximation and Localized Beam Models.- Applications, and Miscellaneous Issues.- Conclusion.

313 citations


Posted Content
TL;DR: This paper proposes a new direct method to estimate a causal ordering and connection strengths based on non-Gaussianity that requires no algorithmic parameters and is guaranteed to converge to the right solution within a small fixed number of steps if the data strictly follows the model.
Abstract: Structural equation models and Bayesian networks have been widely used to analyze causal relations between continuous variables. In such frameworks, linear acyclic models are typically used to model the data-generating process of variables. Recently, it was shown that use of non-Gaussianity identifies the full structure of a linear acyclic model, i.e., a causal ordering of variables and their connection strengths, without using any prior knowledge on the network structure, which is not the case with conventional methods. However, existing estimation methods are based on iterative search algorithms and may not converge to a correct solution in a finite number of steps. In this paper, we propose a new direct method to estimate a causal ordering and connection strengths based on non-Gaussianity. In contrast to the previous methods, our algorithm requires no algorithmic parameters and is guaranteed to converge to the right solution within a small fixed number of steps if the data strictly follows the model.
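The identifiability idea behind the method can be illustrated on a two-variable toy example: with non-Gaussian disturbances, the residual of the least-squares regression is statistically independent of the regressor only in the true causal direction. The sketch below uses a crude dependence proxy (correlation of absolute values) rather than the non-Gaussianity and independence measures used by the actual algorithm, so it is a conceptual demonstration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
e1, e2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)    # non-Gaussian disturbances
x1 = e1
x2 = 0.8 * x1 + e2                                       # true model: x1 -> x2

def residual(cause, effect):
    b = np.cov(cause, effect)[0, 1] / np.var(cause)
    return effect - b * cause

def dependence(a, b):
    # crude dependence proxy: correlation between absolute values
    return abs(np.corrcoef(np.abs(a), np.abs(b))[0, 1])

# In the true direction the residual is independent of the regressor (score near 0);
# in the reverse direction it remains dependent, which is detectable only because
# the disturbances are non-Gaussian.
print("x1 -> x2 :", dependence(x1, residual(x1, x2)))
print("x2 -> x1 :", dependence(x2, residual(x2, x1)))
```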

286 citations


Journal ArticleDOI
TL;DR: Building on Li et al.'s use of the innovation statistics of Desroziers et al. and their Kalman filter analysis update of the inflation parameters under a Gaussian assumption, this study extends the Gaussian approach to include the variance of the estimated inflation and shows it to be an accurate approximation of Anderson's general Bayesian approach.
Abstract: In ensemble Kalman filters, the underestimation of forecast error variance due to limited ensemble size and other sources of imperfection is commonly treated by empirical covariance inflation. To avoid manual optimization of multiplicative inflation parameters, previous studies proposed adaptive inflation approaches using observations. Anderson applied Bayesian estimation theory to the probability density function of inflation parameters. Alternatively, Li et al. used the innovation statistics of Desroziers et al. and applied a Kalman filter analysis update to the inflation parameters based on the Gaussian assumption. In this study, Li et al.’s Gaussian approach is advanced to include the variance of the estimated inflation as derived from the central limit theorem. It is shown that the Gaussian approach is an accurate approximation of Anderson’s general Bayesian approach. An advanced implementation of the Gaussian approach with the local ensemble transform Kalman filter is proposed, where the ada...
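The innovation-based estimate that the Gaussian approach updates can be written down in a few lines. In the sketch below the observation operator is the identity, the covariances are known, and a single scalar multiplicative inflation is estimated from one batch of innovations as (dᵀd − tr(R)) / tr(H Pb Hᵀ); the toy forecast and ensemble are invented for illustration, and the paper's scheme additionally carries the variance of this estimate through a Kalman-filter-like update.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n_ens = 40, 20
truth = rng.normal(0.0, 1.0, p)
R = 0.5 * np.eye(p)                                   # observation-error covariance (H = I here)
y = truth + rng.multivariate_normal(np.zeros(p), R)

# a forecast whose actual error (std 1.0) exceeds the ensemble spread (std 0.6)
xb = truth + rng.normal(0.0, 1.0, p)
ensemble = xb[None, :] + rng.normal(0.0, 0.6, (n_ens, p))
Pb = np.cov(ensemble.T)                               # under-dispersive forecast covariance

# innovation statistics:  E[d d^T] ~ delta * H Pb H^T + R,  so a moment estimate of delta is
d = y - xb
delta_hat = (d @ d - np.trace(R)) / np.trace(Pb)
print("estimated multiplicative inflation:", round(delta_hat, 2))   # roughly (1.0 / 0.6)^2
```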

277 citations


Journal ArticleDOI
TL;DR: The minimum mean-square error (MMSE) of estimating an arbitrary random variable from its observation contaminated by Gaussian noise is shown to be infinitely differentiable at all positive SNR, and in fact a real analytic function of the SNR under mild conditions.
Abstract: Consider the minimum mean-square error (MMSE) of estimating an arbitrary random variable from its observation contaminated by Gaussian noise. The MMSE can be regarded as a function of the signal-to-noise ratio (SNR) as well as a functional of the input distribution (of the random variable to be estimated). It is shown that the MMSE is concave in the input distribution at any given SNR. For a given input distribution, the MMSE is found to be infinitely differentiable at all positive SNR, and in fact a real analytic function in SNR under mild conditions. The key to these regularity results is that the posterior distribution conditioned on the observation through Gaussian channels always decays at least as quickly as some Gaussian density. Furthermore, simple expressions for the first three derivatives of the MMSE with respect to the SNR are obtained. It is also shown that, as functions of the SNR, the curves for the MMSE of a Gaussian input and that of a non-Gaussian input cross at most once over all SNRs. These properties lead to simple proofs of the facts that Gaussian inputs achieve both the secrecy capacity of scalar Gaussian wiretap channels and the capacity of scalar Gaussian broadcast channels, as well as a simple proof of the entropy power inequality in the special case where one of the variables is Gaussian.
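Two of the quantities discussed can be computed directly. The snippet below evaluates the MMSE as a function of SNR for a unit-variance Gaussian input (closed form 1/(1 + snr)) and for an equiprobable ±1 input (MMSE = 1 − E[tanh²(snr + √snr·Z)], Z standard normal), the kind of curves whose single-crossing behaviour the paper analyses. The grid-based integration is an illustrative shortcut.

```python
import numpy as np

z = np.linspace(-8.0, 8.0, 4001)
phi = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)          # standard normal density on a grid

def mmse_gaussian(snr):
    # unit-variance Gaussian input: closed form 1 / (1 + snr)
    return 1.0 / (1.0 + snr)

def mmse_bpsk(snr):
    # X = +-1, Y = sqrt(snr) X + N:  MMSE = 1 - E[tanh^2(snr + sqrt(snr) Z)]
    return 1.0 - np.trapz(np.tanh(snr + np.sqrt(snr) * z) ** 2 * phi, z)

for snr in [0.1, 1.0, 4.0, 10.0]:
    print(snr, mmse_gaussian(snr), mmse_bpsk(snr))
```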

273 citations


Journal ArticleDOI
TL;DR: Basis Pursuit DeQuantizer of moment p (BPDQp) as discussed by the authors is a new convex optimization program for recovering sparse or compressible signals from uniformly quantized measurements, which minimizes the sparsity of the signal to be reconstructed subject to a data fidelity constraint expressed in the lp-norm of the residual error for 2 ≤ p ≤ ∞.
Abstract: In this paper, we study the problem of recovering sparse or compressible signals from uniformly quantized measurements. We present a new class of convex optimization programs, or decoders, coined Basis Pursuit DeQuantizer of moment p (BPDQp), that model the quantization distortion more faithfully than the commonly used Basis Pursuit DeNoise (BPDN) program. Our decoders proceed by minimizing the sparsity of the signal to be reconstructed subject to a data-fidelity constraint expressed in the lp-norm of the residual error for 2 ≤ p ≤ ∞. We show theoretically that (i) the reconstruction error of these new decoders is bounded if the sensing matrix satisfies an extended Restricted Isometry Property involving the lp-norm, and (ii) for Gaussian random matrices and uniformly quantized measurements, BPDQp performance exceeds that of BPDN by dividing the reconstruction error due to quantization by √(p + 1). This last effect happens with high probability when the number of measurements exceeds a value growing with p, i.e., in an oversampled situation compared to what is commonly required by BPDN = BPDQ2. To demonstrate the theoretical power of BPDQp, we report numerical simulations on signal and image reconstruction problems.
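The decoder itself is a small convex program. The sketch below states BPDQp with cvxpy: minimise the l1 norm subject to ||Ax − y||_p ≤ ε, where ε is sized so the lp ball contains the uniform quantisation error (per-component error ≤ step/2, hence ||e||_p ≤ (step/2)·m^(1/p)). The problem sizes, quantisation step, and small slack factor are illustrative, and cvxpy's generic conic solver stands in for whatever solver one would use in practice.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k, p = 128, 96, 8, 4                   # p = moment of the BPDQ decoder (p >= 2)
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.normal(size=k)
step = 0.05
yq = step * np.round(A @ x0 / step)          # uniformly quantised measurements

# l_p radius large enough to contain the quantisation error with a small slack
eps = (step / 2.0) * m ** (1.0 / p) * 1.1
x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(x)), [cp.norm(A @ x - yq, p) <= eps])
prob.solve()
print("relative error:", np.linalg.norm(x.value - x0) / np.linalg.norm(x0))
```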

Journal ArticleDOI
TL;DR: A new method is proposed that transforms the original state vector into a new vector that is univariate Gaussian at all times; the resulting filter performs better than the standard EnKF in all aspects analyzed.

Journal ArticleDOI
TL;DR: In this paper, a Poisson tensor factorization (CP-APR) algorithm is proposed for sparse count data, which is based on a majorization-minimization approach.
Abstract: Tensors have found application in a variety of fields, ranging from chemometrics to signal processing and beyond. In this paper, we consider the problem of multilinear modeling of sparse count data. Our goal is to develop a descriptive tensor factorization model of such data, along with appropriate algorithms and theory. To do so, we propose that the random variation is best described via a Poisson distribution, which better describes the zeros observed in the data as compared to the typical assumption of a Gaussian distribution. Under a Poisson assumption, we fit a model to observed data using the negative log-likelihood score. We present a new algorithm for Poisson tensor factorization called CANDECOMP-PARAFAC Alternating Poisson Regression (CP-APR) that is based on a majorization-minimization approach. It can be shown that CP-APR is a generalization of the Lee-Seung multiplicative updates. We show how to prevent the algorithm from converging to non-KKT points and prove convergence of CP-APR under mild conditions. We also explain how to implement CP-APR for large-scale sparse tensors and present results on several data sets, both real and simulated.
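The matrix special case makes the "generalization of the Lee-Seung multiplicative updates" concrete: for a count matrix V ≈ WH under a Poisson likelihood, the classical KL-NMF multiplicative updates below monotonically decrease the negative log-likelihood. CP-APR extends exactly this kind of update to tensors, adds safeguards against non-KKT points, and handles large sparse data; the random data and iteration count here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 60, 40, 5
V = rng.poisson(rng.gamma(2.0, 1.0, (m, r)) @ rng.gamma(2.0, 1.0, (r, n)))   # count data

W = rng.random((m, r)) + 0.1
H = rng.random((r, n)) + 0.1
eps = 1e-12
for _ in range(200):
    # multiplicative updates for the Poisson (KL) objective, the matrix analogue of CP-APR's updates
    WH = W @ H + eps
    W *= ((V / WH) @ H.T) / H.sum(axis=1)
    WH = W @ H + eps
    H *= (W.T @ (V / WH)) / W.sum(axis=0)[:, None]

neg_ll = (W @ H - V * np.log(W @ H + eps)).sum()
print("Poisson negative log-likelihood (up to a constant):", neg_ll)
```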

Proceedings Article
12 Dec 2011
TL;DR: This work presents a simple yet effective GP model for training on input points corrupted by i.i.d. Gaussian noise, and compares it to others over a range of different regression problems and shows that it improves over current methods.
Abstract: In standard Gaussian Process regression input locations are assumed to be noise free. We present a simple yet effective GP model for training on input points corrupted by i.i.d. Gaussian noise. To make computations tractable we use a local linear expansion about each input point. This allows the input noise to be recast as output noise proportional to the squared gradient of the GP posterior mean. The input noise variances are inferred from the data as extra hyperparameters. They are trained alongside other hyperparameters by the usual method of maximisation of the marginal likelihood. Training uses an iterative scheme, which alternates between optimising the hyperparameters and calculating the posterior gradient. Analytic predictive moments can then be found for Gaussian distributed test points. We compare our model to others over a range of different regression problems and show that it improves over current methods.
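A minimal 1-D numpy sketch of the key idea: fit a standard GP, differentiate its posterior mean at the training inputs, and recast the input noise as extra heteroscedastic output noise proportional to the squared slope. The RBF kernel, fixed hyperparameters, and single corrective pass are simplifying assumptions; the paper learns the input-noise variances as hyperparameters by marginal-likelihood maximisation in an iterative scheme.

```python
import numpy as np

sf2, ell, sx2, sy2 = 1.0, 0.3, 0.05**2, 0.1**2         # fixed, illustrative hyperparameters

def rbf(a, b):
    return sf2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 1.0, 40))
Xn = X + rng.normal(0.0, np.sqrt(sx2), X.shape)        # inputs observed with i.i.d. noise
y = np.sin(2 * np.pi * X) + rng.normal(0.0, np.sqrt(sy2), X.shape)

# 1) ordinary GP fit that (wrongly) treats the noisy inputs as exact
K = rbf(Xn, Xn)
alpha = np.linalg.solve(K + sy2 * np.eye(len(Xn)), y)

# 2) slope of the posterior mean at each training input:  d/dx k(x, Xn) = -(x - Xn)/ell^2 * k(x, Xn)
slope = ((-(Xn[:, None] - Xn[None, :]) / ell**2) * K) @ alpha

# 3) refit with the input noise recast as output noise proportional to the squared gradient
alpha_corr = np.linalg.solve(K + np.diag(sy2 + slope**2 * sx2), y)
print(alpha_corr[:5])
```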

Journal ArticleDOI
TL;DR: In this article, a non-Gaussianity-based method is proposed to estimate the causal ordering and connection strength of a linear acyclic model, which is guaranteed to converge to the right solution within a fixed number of steps if the data strictly follows the model.
Abstract: Structural equation models and Bayesian networks have been widely used to analyze causal relations between continuous variables. In such frameworks, linear acyclic models are typically used to model the data-generating process of variables. Recently, it was shown that use of non-Gaussianity identifies the full structure of a linear acyclic model, that is, a causal ordering of variables and their connection strengths, without using any prior knowledge on the network structure, which is not the case with conventional methods. However, existing estimation methods are based on iterative search algorithms and may not converge to a correct solution in a finite number of steps. In this paper, we propose a new direct method to estimate a causal ordering and connection strengths based on non-Gaussianity. In contrast to the previous methods, our algorithm requires no algorithmic parameters and is guaranteed to converge to the right solution within a small fixed number of steps if the data strictly follows the model, that is, if all the model assumptions are met and the sample size is infinite.

Journal ArticleDOI
TL;DR: This improved reconciliation procedure considerably extends the secure range of a CVQKD with a Gaussian modulation, giving a secret key rate of about 10^{-3} bit per pulse at a distance of 120 km for reasonable physical parameters.
Abstract: We designed high-efficiency error correcting codes allowing us to extract an errorless secret key in a continuous-variable quantum key distribution (CVQKD) protocol using a Gaussian modulation of coherent states and a homodyne detection. These codes are available for a wide range of signal-to-noise ratios on an additive white Gaussian noise channel with a binary modulation and can be combined with a multidimensional reconciliation method proven secure against arbitrary collective attacks. This improved reconciliation procedure considerably extends the secure range of a CVQKD with a Gaussian modulation, giving a secret key rate of about 10^{-3} bit per pulse at a distance of 120 km for reasonable physical parameters.

Journal ArticleDOI
TL;DR: Using the coupled-mode and coupled-power theories, impacts of random phase-offsets and correlation lengths on crosstalk in multi-core fibers are investigated for the first time.
Abstract: Coupled-mode and coupled-power theories are described for multi-core fiber design and analysis. First, in order to satisfy the law of power conservation, mode-coupling coefficients are redefined and then, closed-form power-coupling coefficients are derived based on exponential, Gaussian, and triangular autocorrelation functions. Using the coupled-mode and coupled-power theories, impacts of random phase-offsets and correlation lengths on crosstalk in multi-core fibers are investigated for the first time. The simulation results are in good agreement with the measurement results. Furthermore, from the simulation results obtained by both theories, it is confirmed that the reciprocity is satisfied in multi-core fibers.

Journal ArticleDOI
TL;DR: In this article, the Riemannian/Alexandrov geometry of Gaussian measures is studied from the viewpoint of the L2-Wasserstein geometry.
Abstract: This paper concerns the Riemannian/Alexandrov geometry of Gaussian measures, from the viewpoint of the L2-Wasserstein geometry. The space of Gaussian measures is of finite dimension, which allows one to write down the explicit Riemannian metric which in turn induces the L2-Wasserstein distance. Moreover, its completion as a metric space provides a complete picture of the singular behavior of the L2-Wasserstein geometry. In particular, the singular set is stratified according to the dimension of the support of the Gaussian measures, providing an explicit nontrivial example of an Alexandrov space with extremal sets.
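The explicit metric structure referred to here rests on the closed-form L2-Wasserstein distance between two Gaussian measures, W2² = |m1 − m2|² + tr(S1 + S2 − 2(S2^{1/2} S1 S2^{1/2})^{1/2}), which the short sketch below evaluates; the example means and covariances are arbitrary.

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussians(m1, S1, m2, S2):
    # W2^2 = |m1 - m2|^2 + tr(S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2})
    rS2 = sqrtm(S2)
    cross = np.real(sqrtm(rS2 @ S1 @ rS2))
    return np.sqrt(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2.0 * cross))

m1, S1 = np.zeros(2), np.diag([1.0, 0.25])
m2, S2 = np.array([1.0, 0.0]), np.array([[1.0, 0.3], [0.3, 0.5]])
print(w2_gaussians(m1, S1, m2, S2))
```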

Journal ArticleDOI
TL;DR: This work improves the logDemons by integrating elasticity and incompressibility for soft-tissue tracking, and replaces the Gaussian smoothing by an efficient elastic-like regulariser based on isotropic differential quadratic forms of vector fields.
Abstract: Tracking soft tissues in medical images using non-linear image registration algorithms requires methods that are fast and provide spatial transformations consistent with the biological characteristics of the tissues. LogDemons algorithm is a fast non-linear registration method that computes diffeomorphic transformations parameterised by stationary velocity fields. Although computationally efficient, its use for tissue tracking has been limited because of its ad-hoc Gaussian regularisation, which hampers the implementation of more biologically motivated regularisations. In this work, we improve the logDemons by integrating elasticity and incompressibility for soft-tissue tracking. To that end, a mathematical justification of demons Gaussian regularisation is proposed. Building on this result, we replace the Gaussian smoothing by an efficient elastic-like regulariser based on isotropic differential quadratic forms of vector fields. The registration energy functional is finally minimised under the divergence-free constraint to get incompressible deformations. As the elastic regulariser and the constraint are linear, the method remains computationally tractable and easy to implement. Tests on synthetic incompressible deformations showed that our approach outperforms the original logDemons in terms of elastic incompressible deformation recovery without reducing the image matching accuracy. As an application, we applied the proposed algorithm to estimate 3D myocardium strain on clinical cine MRI of two adult patients. Results showed that incompressibility constraint improves the cardiac motion recovery when compared to the ground truth provided by 3D tagged MRI.

Journal ArticleDOI
TL;DR: A comprehensive Bayesian approach is proposed for graphical model determination in observational studies that can accommodate binary, ordinal, or continuous variables simultaneously.
Abstract: We propose a comprehensive Bayesian approach for graphical model determination in observational studies that can accommodate binary, ordinal or continuous variables simultaneously. Our new models are called copula Gaussian graphical models (CGGMs) and embed graphical model selection inside a semiparametric Gaussian copula. The domain of applicability of our methods is very broad and encompasses many studies from social science and economics. We illustrate the use of the copula Gaussian graphical models in the analysis of a 16-dimensional functional disability contingency table.

Journal ArticleDOI
TL;DR: This work addresses the estimation of phase in the presence of phase diffusion and evaluates the ultimate quantum limits to precision for phase-shifted Gaussian states and finds that homodyne detection is a nearly optimal detection scheme in the limit of very small and large noise.
Abstract: The measurement problem for the optical phase has been traditionally attacked for noiseless schemes or in the presence of amplitude or detection noise. Here we address the estimation of phase in the presence of phase diffusion and evaluate the ultimate quantum limits to precision for phase-shifted Gaussian states. We look for the optimal detection scheme and derive approximate scaling laws for the quantum Fisher information and the optimal squeezing fraction in terms of the total energy and the amount of noise. We also find that homodyne detection is a nearly optimal detection scheme in the limit of very small and large noise.

Journal ArticleDOI
Jun Sun, Wei Fang, Vasile Palade, Xiaojun Wu, Wenbo Xu
TL;DR: It is shown that the GAQPSO algorithm is an effective approach that can improve QPSO performance considerably; it is less likely to become stuck in local optima and hence achieves better solutions in most cases.

Journal ArticleDOI
TL;DR: The Laplacian pyramid is ubiquitous for decomposing images into multiple scales and is widely used for image analysis as discussed by the authors, however, because it is constructed with spatially invariant Gaussian kernels, it is not suitable for image segmentation.
Abstract: The Laplacian pyramid is ubiquitous for decomposing images into multiple scales and is widely used for image analysis. However, because it is constructed with spatially invariant Gaussian kernels, ...
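For context, the construction being discussed is simple to write down: repeatedly blur with the same spatially invariant Gaussian kernel, downsample, and keep the band-pass residuals; reconstruction inverts the process exactly. The Gaussian width, bilinear upsampling, and random test image below are illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid(img, levels=4, sigma=1.0):
    # classic construction: every level uses the same spatially invariant Gaussian kernel
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = gaussian_filter(cur, sigma)
        down = low[::2, ::2]
        up = zoom(down, 2, order=1)[: cur.shape[0], : cur.shape[1]]
        pyr.append(cur - up)          # band-pass residual at this scale
        cur = down
    pyr.append(cur)                   # coarsest low-pass image
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        up = zoom(cur, 2, order=1)[: lap.shape[0], : lap.shape[1]]
        cur = up + lap
    return cur

img = np.random.default_rng(0).random((64, 64))
pyr = laplacian_pyramid(img)
print("max reconstruction error:", np.abs(reconstruct(pyr) - img).max())
```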

Proceedings ArticleDOI
01 Nov 2011
TL;DR: This paper embeds the BG-AMP algorithm within an expectation-maximization (EM) framework, simultaneously reconstructing the signal while learning the prior signal and noise parameters, and achieves excellent performance on a range of signal types.
Abstract: The approximate message passing (AMP) algorithm originally proposed by Donoho, Maleki, and Montanari yields a computationally attractive solution to the usual l1-regularized least-squares problem faced in compressed sensing, whose solution is known to be robust to the signal distribution. When the signal is drawn i.i.d. from a marginal distribution that is not least-favorable, better performance can be attained using a Bayesian variation of AMP. The latter, however, assumes that the distribution is perfectly known. In this paper, we navigate the space between these two extremes by modeling the signal as i.i.d. Bernoulli-Gaussian (BG) with unknown prior sparsity, mean, and variance, and the noise as zero-mean Gaussian with unknown variance, and we simultaneously reconstruct the signal while learning the prior signal and noise parameters. To accomplish this task, we embed the BG-AMP algorithm within an expectation-maximization (EM) framework. Numerical experiments confirm the excellent performance of our proposed EM-BG-AMP on a range of signal types.
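The Bernoulli-Gaussian prior enters BG-AMP through a scalar posterior-mean/variance step, which is easy to state on its own: given a pseudo-measurement r = x + v with v ~ N(0, mu) and prior x ~ (1 − lam)·δ0 + lam·N(theta, phi), compute the responsibility of the Gaussian component and the usual Gaussian-product moments. The sketch below implements just that step with made-up parameter values; the EM learning of the prior and noise parameters and the AMP outer loop are not shown.

```python
import numpy as np

def bg_denoise(r, mu, lam, theta, phi):
    """Posterior mean/variance of x given r = x + v, v ~ N(0, mu),
    under a Bernoulli-Gaussian prior x ~ (1-lam) delta_0 + lam N(theta, phi)."""
    # responsibility of the 'active' (Gaussian) component
    n_act = np.exp(-(r - theta) ** 2 / (2 * (phi + mu))) / np.sqrt(2 * np.pi * (phi + mu))
    n_off = np.exp(-r ** 2 / (2 * mu)) / np.sqrt(2 * np.pi * mu)
    pi = lam * n_act / (lam * n_act + (1 - lam) * n_off)
    # Gaussian-times-Gaussian posterior moments for the active component
    nu = 1.0 / (1.0 / mu + 1.0 / phi)
    gamma = nu * (r / mu + theta / phi)
    mean = pi * gamma
    var = pi * (nu + gamma ** 2) - mean ** 2
    return mean, var

r = np.linspace(-3.0, 3.0, 7)
print(bg_denoise(r, mu=0.1, lam=0.1, theta=0.0, phi=1.0))
```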

Journal ArticleDOI
TL;DR: The results show that, for good performance, the regularity of the GP prior should match the regularity of the unknown response function; the rate at which the risk tends to zero is expressible in a certain concentration function.
Abstract: We consider the quality of learning a response function by a nonparametric Bayesian approach using a Gaussian process (GP) prior on the response function. We upper bound the quadratic risk of the learning procedure, which in turn is an upper bound on the Kullback-Leibler information between the predictive and true data distribution. The upper bound is expressed in small ball probabilities and concentration measures of the GP prior. We illustrate the computation of the upper bound for the Matern and squared exponential kernels. For these priors the risk, and hence the information criterion, tends to zero for all continuous response functions. However, the rate at which this happens depends on the combination of true response function and Gaussian prior, and is expressible in a certain concentration function. In particular, the results show that for good performance, the regularity of the GP prior should match the regularity of the unknown response function.

Journal ArticleDOI
Hongwei Guo
TL;DR: Gaussian functions are suitable for describing many processes in mathematics, science, and engineering, making them very useful in the fields of signal and image processing.
Abstract: Gaussian functions are suitable for describing many processes in mathematics, science, and engineering, making them very useful in the fields of signal and image processing. For example, the random noise in a signal, induced by complicated physical factors, can be simply modeled with the Gaussian distribution according to the central limit theorem from the probability theory.

Journal ArticleDOI
TL;DR: It is shown how the use of decoy states makes the protocols secure against arbitrary collective attacks, which implies their unconditional security in the asymptotic limit.
Abstract: In this paper, we consider continuous-variable quantum-key-distribution (QKD) protocols which use non-Gaussian modulations. These specific modulation schemes are compatible with very efficient error-correction procedures, hence allowing the protocols to outperform previous protocols in terms of achievable range. In their simplest implementation, these protocols are secure for any linear quantum channels (hence against Gaussian attacks). We also show how the use of decoy states makes the protocols secure against arbitrary collective attacks, which implies their unconditional security in the asymptotic limit.

01 Jan 2011
TL;DR: This document includes some detailed supplemental derivations used in the bandwidth estimation for the online kernel density estimator that was proposed in the paper "Multivariate Online Kernel Density Estimation with Gaussian Kernels".
Abstract: This document includes some detailed supplemental derivations used in the bandwidth estimation for the online Kernel Density Estimator which was proposed in the paper "Multivariate Online Kernel Density Estimation with Gaussian Kernels" by Matej Kristan, Ale

Journal ArticleDOI
TL;DR: This work introduces efficient Markov chain Monte Carlo methods for inference and model determination in multivariate and matrix-variate Gaussian graphical models, and extends these sampling algorithms to a novel class of conditionally autoregressive models for sparse estimation in multivariate lattice data.
Abstract: We introduce efficient Markov chain Monte Carlo methods for inference and model determination in multivariate and matrix-variate Gaussian graphical models. Our framework is based on the G-Wishart prior for the precision matrix associated with graphs that can be decomposable or non-decomposable. We extend our sampling algorithms to a novel class of conditionally autoregressive models for sparse estimation in multivariate lattice data, with a special emphasis on the analysis of spatial data. These models embed a great deal of flexibility in estimating both the correlation structure across outcomes and the spatial correlation structure, thereby allowing for adaptive smoothing and spatial autocorrelation parameters. Our methods are illustrated using a simulated example and a real-world application which concerns cancer mortality surveillance. Supplementary materials with computer code and the datasets needed to replicate our numerical results together with additional tables of results are available online.

Journal ArticleDOI
Nicolas Chopin1
TL;DR: For simulating a Gaussian vector X conditional on each component of X belonging to a finite or semi-finite interval, this work designs a table-based algorithm for the one-dimensional case that is computationally faster than alternative algorithms, and an accept-reject algorithm for the two-dimensional case.
Abstract: We consider the problem of simulating a Gaussian vector X, conditional on the fact that each component of X belongs to a finite interval [a_i, b_i], or a semi-finite interval [a_i, +∞). In the one-dimensional case, we design a table-based algorithm that is computationally faster than alternative algorithms. In the two-dimensional case, we design an accept-reject algorithm. According to our calculations and numerical studies, the acceptance rate of this algorithm is bounded from below by 0.5 for semi-finite truncation intervals, and by 0.47 for finite intervals. Extension to three or more dimensions is discussed.
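For orientation, the simplest (and often inefficient) approach to the one-dimensional problem is plain rejection from the untruncated normal, sketched below; it degrades badly when [a, b] sits far in the tail, which is exactly the regime the paper's table-based and designed accept-reject algorithms are built to handle. scipy.stats.truncnorm offers an off-the-shelf alternative for the 1-D case.

```python
import numpy as np

def trunc_norm_rejection(a, b, size, rng):
    """Naive accept-reject sampler for N(0,1) restricted to [a, b].
    A simple baseline only; it becomes arbitrarily slow for intervals far in the tail."""
    out = np.empty(0)
    while out.size < size:
        z = rng.normal(0.0, 1.0, 4 * size)
        out = np.concatenate([out, z[(z >= a) & (z <= b)]])
    return out[:size]

rng = np.random.default_rng(0)
samples = trunc_norm_rejection(-0.5, 1.5, 10000, rng)
print(samples.mean(), samples.min(), samples.max())
```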