Convergence in Variance of Chebyshev Accelerated Gibbs Samplers
Summary
1. Introduction.
- The paper considers iterations of the form (1.1).
- What makes this correspondence important is that the convergence properties of the solver are inherited by the sampler (and vice versa), which means that acceleration techniques developed for the solver may be applied to the sampler.
- The main purpose of this paper is to establish the equivalence of convergence in mean and covariance in the case of Chebyshev polynomial acceleration, without assuming that the target distribution is Gaussian.
- (Figure fragment:) Along the horizontal axis are values of the ratio of the extreme eigenvalues of M.
- By recursion the authors prove the following statement.
- Axelsson points out [1, Rem. 5.11] two deficiencies of the first-order Chebyshev iterative method as a solver:
- First, the number of steps p needs to be selected in advance, and the method is not optimal for any other number of steps.
- The remedy for iterative solvers, and hence for iterative samplers, is to use second-order methods, which have neither of these deficiencies.
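As a concrete illustration of the second-order fix, here is a minimal sketch (not the authors' implementation) of the second-order Chebyshev semi-iteration for solving Ax = b with a symmetric positive definite A, assuming bounds on the extreme eigenvalues are available; unlike the first-order method, the step count need not be fixed in advance, and stopping early still yields a near-optimal iterate. The example matrix is illustrative only:

```python
import numpy as np

def chebyshev_solve(A, b, lam_min, lam_max, iters=50):
    """Second-order Chebyshev semi-iteration for SPD A, given bounds on the
    extreme eigenvalues; the step count need not be chosen in advance."""
    theta = 0.5 * (lam_max + lam_min)    # center of the spectrum
    delta = 0.5 * (lam_max - lam_min)    # half-width of the spectrum
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    x = np.zeros_like(b)
    d = (b - A @ x) / theta              # first step is the first-order update
    x = x + d
    for _ in range(iters - 1):
        rho_next = 1.0 / (2.0 * sigma1 - rho)
        r = b - A @ x                    # current residual
        d = rho_next * rho * d + (2.0 * rho_next / delta) * r
        x = x + d
        rho = rho_next
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # example SPD matrix
b = np.array([1.0, 2.0])
lam = np.linalg.eigvalsh(A)
x = chebyshev_solve(A, b, lam[0], lam[-1])
```

The same three-term recurrence structure carries over to the accelerated sampler, which is what allows the solver's error polynomial to govern the sampler's convergence in mean and covariance.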
5. Numerical examples sampling from Gaussians at different resolutions.
- Both examples use a cubic-element discretization of the cubic domain [0, 1]^3, with trilinear interpolation from nodal values within each element.
- Both also use R = 1/4, though they differ in the number of nodes (or elements) in each coordinate direction.
6. Discussion.
- While performing this research the authors have recognized the debt they owe to (the late) Gene Golub, who pioneered first- and second-order Chebyshev acceleration for linear solvers [9], which they have built upon.
- The authors are pleased to demonstrate the connection between Gene's work and the sampling algorithms from statistics by publishing in this journal, which Gene had wanted to remain titled the Journal on Scientific and Statistical Computing [22].
Citations
3 citations
Cites background or methods from "Convergence in Variance of Chebyshev Accelerated Gibbs Samplers"
- "...target distributions are Gaussian and identical then it follows from the theory in [17, 19, 18] that to construct an efficient AR(1) process we should try to satisfy the following conditions: 1...."
- "...sampling from Gaussian distributions [17, 19, 18]...."
- "...If, in addition, the target is Gaussian, then the MH accept/reject step is redundant and we can use [17, 19, 18] to analyse and accelerate the proposal chain generated by (5...."
- "...Moreover, acceleration techniques suggested in [18, 19] for the proposal chain may not accelerate the MH algorithm...."
- "...Since sampling from the Gaussian N(A^{-1}b, A^{-1}) is equivalent to solving the linear system Ax = b (see [17, 18, 19]), our notation aligns with literature on solving linear systems...."
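The solver-sampler correspondence in the excerpts above can be made concrete. Below is a minimal sketch, not taken from the paper, of a matrix-splitting (Gauss-Seidel, i.e. Gibbs) sampler targeting N(A^{-1}b, A^{-1}); with the splitting A = M - N, M = D + L, the required noise covariance M^T + N reduces to the diagonal D. The 2x2 precision matrix, sample sizes, and burn-in here are illustrative assumptions:

```python
import numpy as np

# Stochastic version of the Gauss-Seidel iteration: each sweep solves with
# M = D + L and injects noise c ~ N(0, M^T + N); for this splitting
# M^T + N = D, so only independent noise with the diagonal variances is needed.
rng = np.random.default_rng(0)

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # example SPD precision matrix
b = np.array([1.0, 2.0])
M = np.tril(A)                           # M = D + L
N = M - A

x = np.zeros(2)
samples = []
for k in range(50_000):
    c = rng.normal(0.0, np.sqrt(np.diag(A)))   # c ~ N(0, D)
    x = np.linalg.solve(M, N @ x + b + c)
    if k >= 1_000:                             # discard burn-in
        samples.append(x)
S = np.asarray(samples)
# S.mean(axis=0) approaches A^{-1} b and np.cov(S.T) approaches A^{-1}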
1 citation
Cites background from "Convergence in Variance of Chebyshev Accelerated Gibbs Samplers"
- "...Since sampling from N(A^{-1}b, A^{-1}) is closely related to solving Ax = b (see [19, 20, 21]), our notation aligns with literature on solving linear systems...."
- "...Such algorithms are analyzed and accelerated in [19, 20, 21]...."
- "...All convergent stochastic AR(1) proposals with equilibrium distribution N(A^{-1}β, A^{-1}), including the case when N(A^{-1}β, A^{-1}) and N(A^{-1}b, A^{-1}) are identical, may be found using matrix splitting of A to find G, g and Σ, see [19, 20, 21], which also gives rates of convergence, etc...."
- "...If θ = 1/2 and the target is Gaussian then acceleration is possible [19, 21, 20]...."
- "...Such chains are precisely the convergent (generalized) Gibbs samplers for normal targets [21], including blocking and reparametrization (preconditioning in numerical analysis), with the equivalence to stationary linear iterative solvers affording extensive detail about the chain, such as the n-step distribution, error polynomial, and convergence rate, as well as acceleration [20, 21]...."
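The equivalence to stationary solvers noted in these excerpts makes quantities such as the iteration matrix G, the n-step mean error G^n e_0, and the convergence rate ρ(G) directly computable. A small sketch, with an example matrix of my own choosing rather than one from the paper, for the Gauss-Seidel (Gibbs) splitting:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])    # example SPD precision matrix
M = np.tril(A)                            # Gauss-Seidel splitting A = M - N
N = M - A
G = np.linalg.solve(M, N)                 # iteration matrix shared by solver and sampler
rho = max(abs(np.linalg.eigvals(G)))      # asymptotic convergence rate of the chain

# The mean error after n steps is G^n times the initial error,
# so ||e_n||^(1/n) approaches rho(G).
e = np.array([1.0, -2.0])
for _ in range(50):
    e = G @ e
rate_estimate = np.linalg.norm(e) ** (1 / 50)
```

The same ρ(G) governs both the solver's error and the sampler's convergence in mean, which is what the equivalence affords "for free".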
1 citation
Cites methods from "Convergence in Variance of Chebyshev Accelerated Gibbs Samplers"
- "...Markov chain Monte Carlo (MCMC) is a class of methods for simulation of stochastic processes...."
- "...When real historic data is available, we can use MCMC and Gibbs sampling to simulate multiple future scenarios and reduce the uncertainty of the investment decisions...."
- "...This work is one of the first to combine Stackelberg equilibria to a large-scale realistic game with MCMC techniques...."
- "...Several authors used MCMC for modelling of wind speeds or wind power outputs [12], [13]...."
- "...We build on this work by developing a principled framework, based on game-theoretic and state-of-the-art sampling techniques, i.e. Markov chain Monte Carlo (MCMC)...."
References
"Convergence in Variance of Chebyshe..." refers methods in this paper
- "...A standard method of reducing the asymptotic average reduction factor is by polynomial acceleration, particularly using Chebyshev polynomials [1, 6, 10, 23]...."
- "...Analogous to the SSOR solver algorithms in [10, 23], the Chebyshev sampler implements sequential forward and backward sweeps of an SOR sampler [7, 20] (i...."
- "...Since a symmetric splitting is required, Chebyshev acceleration in a linear solver is commonly implemented with a symmetric successive overrelaxation (SSOR) splitting A = M_SSOR − N_SSOR, with algorithms to be found, for example, in [10, 23]...."
- "...For example, they occur in the stationary linear iterative methods used to solve systems of linear equations [1, 10, 17, 23]...."
- "...We could have equally followed the excellent presentations of the same methods in Golub and Van Loan [10] or Saad [23]...."
"Convergence in Variance of Chebyshe..." refers background in this paper
- "...84], [19], though distributions with nonconnected support for which Algorithm 1 fails are easy to find [19]...."
- "...Convergence in both cases is geometric, with the asymptotic average reduction factor given by ρ(G) (though this is called the "convergence rate" in the statistics literature [19])...."