Convergence in Variance of Chebyshev Accelerated Gibbs Samplers
Summary
1. Introduction. Iterations of the form (1.1), x^(k+1) = G x^(k) + g, arise from a matrix splitting A = M − N (so G = M⁻¹N); they serve both as linear solvers for Ax = b and, with a suitable random vector injected at each step, as Gibbs-type samplers.
- What makes this correspondence important is that the convergence properties of the solver are inherited by the sampler (and vice versa), so acceleration techniques developed for the solver may be applied to the sampler; a minimal sketch of the correspondence follows below.
- The main purpose of this paper is to establish the equivalence of convergence in mean and in covariance under Chebyshev polynomial acceleration, without assuming that the target distribution is Gaussian.
- [Figure: along the horizontal axis are values of the ratio of the extreme eigenvalues of M.]
- By recursion the authors prove that the error polynomial governing convergence of the covariance is the square of the one governing convergence of the mean, so the variance converges at twice the exponential rate of the mean.
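The solver–sampler correspondence is concrete enough to sketch in a few lines. The illustration below is ours, not code from the paper: it assumes the Gauss–Seidel splitting A = M − N with M the lower triangle of a symmetric positive definite A, in which case the required noise covariance Mᵀ + N reduces to the diagonal D, and the stochastic iteration is the familiar component-sweep Gibbs sampler for N(0, A⁻¹).

```python
import numpy as np

rng = np.random.default_rng(0)

# Small SPD precision matrix A; the sampler targets N(0, A^{-1}).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

# Gauss-Seidel splitting A = M - N, with M the lower triangle of A.
M = np.tril(A)
N = M - A
D = np.diag(np.diag(A))  # noise covariance M^T + N equals D for this splitting

# Solver: x_{k+1} = M^{-1}(N x_k + b) converges to A^{-1} b.
x = np.zeros(3)
for _ in range(200):
    x = np.linalg.solve(M, N @ x + b)

# Sampler: y_{k+1} = M^{-1}(N y_k + c_k) with c_k ~ N(0, M^T + N).
# The same iteration matrix G = M^{-1} N governs convergence of the
# sample covariance to A^{-1}.
y, draws = np.zeros(3), []
for k in range(20000):
    c = rng.multivariate_normal(np.zeros(3), D)
    y = np.linalg.solve(M, N @ y + c)
    if k >= 1000:  # discard burn-in
        draws.append(y.copy())

print("solver error:", np.linalg.norm(x - np.linalg.solve(A, b)))
print("covariance error:",
      np.linalg.norm(np.cov(np.array(draws).T) - np.linalg.inv(A)))
```

The same G appearing in both loops is the point: any polynomial acceleration that shrinks the solver's error polynomial can be carried over to the sampler.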
- Axelsson points out [1, Rem. 5.11] two deficiencies of the first-order Chebyshev iterative method as a solver:
- First, the number of steps p must be selected in advance, and the method is not optimal for any other number of steps.
- Second, the first-order recursion suffers numerical difficulties in practice, making the original formulation impractical [1].
- The remedy for iterative solvers, and hence for iterative samplers, is to use second-order methods, which have neither of these deficiencies; a sketch follows below.
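The second-order method can be sketched as a solver using the standard two-term Chebyshev recurrence with eigenvalue bounds; this is the textbook formulation, not code from the paper, and it exhibits both fixes: the coefficients α and β are updated on the fly, so no total step count p is fixed in advance, and the recurrence is numerically stable.

```python
import numpy as np

def chebyshev_solve(A, b, lam_min, lam_max, iters=50):
    """Second-order Chebyshev iteration for A x = b, given bounds
    lam_min <= eig(A) <= lam_max (the M = I splitting; the general
    case uses M^{-1}A in place of A)."""
    d = (lam_max + lam_min) / 2.0  # center of the eigenvalue interval
    c = (lam_max - lam_min) / 2.0  # half-width of the interval
    x = np.zeros_like(b)
    r = b - A @ x
    for i in range(iters):
        if i == 0:
            p = r.copy()
            alpha = 1.0 / d
        else:
            # coefficient recurrence; beta has a special form at i == 1
            beta = 0.5 * (c * alpha) ** 2 if i == 1 else (c * alpha / 2.0) ** 2
            alpha = 1.0 / (d - beta / alpha)
            p = r + beta * p
        x = x + alpha * p
        r = b - A @ x
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
lams = np.linalg.eigvalsh(A)  # ascending eigenvalues
print(chebyshev_solve(A, b, lams[0], lams[-1]))
print(np.linalg.solve(A, b))  # agrees to high accuracy
```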
5. Numerical examples sampling from Gaussians at different resolutions.
- Both examples use a cubic-element discretization of the cube [0, 1]^3, with trilinear interpolation from nodal values within each element.
- Both also use R = 1/4, though they differ in the number of nodes (or elements) in each coordinate direction; a structural sketch follows below.
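A hypothetical sketch of setting up a precision matrix on a regular grid over [0, 1]^3 at different resolutions. The paper's trilinear finite-element discretization is not reproduced here; a 7-point finite-difference operator with κ = 1/R stands in purely to show the structure and how the number of nodes per coordinate direction is varied. The function name and parameters are our own.

```python
import scipy.sparse as sp

def lattice_precision(n, R=0.25):
    """Stand-in GMRF precision on an n x n x n grid over [0, 1]^3:
    Q = kappa^2 I - Laplacian_h with kappa = 1/R (a finite-difference
    proxy for the paper's trilinear finite-element discretization)."""
    h = 1.0 / (n - 1)
    # 1D second-difference operator with mesh spacing h
    L1 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    I = sp.identity(n)
    # 3D Laplacian assembled from Kronecker products
    lap = (sp.kron(sp.kron(L1, I), I)
           + sp.kron(sp.kron(I, L1), I)
           + sp.kron(sp.kron(I, I), L1))
    kappa = 1.0 / R
    return (kappa ** 2) * sp.identity(n ** 3) - lap

Q = lattice_precision(8)   # 512 nodes; larger n raises the resolution
print(Q.shape, Q.nnz)
```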
6. Discussion.
- While performing this research the authors recognized the debt they owe to (the late) Gene Golub, who pioneered first- and second-order Chebyshev acceleration for linear solvers [9], which they have built upon.
- The authors are pleased to demonstrate the connection between Gene's work and sampling algorithms from statistics by publishing in this journal, which Gene had wanted to remain titled the Journal on Scientific and Statistical Computing [22].
Citations
...The Chebyshev accelerated SSOR solver and corresponding Chebyshev accelerated SSOR sampler (Fox and Parker, 2014) are depicted in panels C and D of Figure 1....
...But even sooner, after k∗∗ = k∗/2 iterations, the Chebyshev error reduction for the variance is predicted to be smaller than ε (Fox and Parker [19])....
...Using Theorem 5, we derived the Chebyshev accelerated SSOR sampler (Fox and Parker, 2014) by iteratively updating parameters via (13) and then generating a sampler via (17)....
...Fox and Parker (2014) considered point-wise convergence of the mean and variance of a Gibbs SSOR sampler accelerated by Chebyshev polynomials....
References
...Interestingly, the deterministic and stochastic iterations converge under exactly the same conditions, with a necessary and sufficient condition being that the spectral radius of G be strictly less than 1, that is, ρ(G) < 1 [4, 26]....
...The recent advent of adaptive Monte Carlo methods [13, 20] does offer the possibility of adapting to the mean and covariance within the iteration, as in the adaptive Metropolis (AM) algorithm....
...Our example uses the relationship between stationary GMRFs and stochastic PDEs that was noted by Whittle [25] for the Matérn (or Whittle–Matérn; see [12]) class of covariance functions and that was also exploited in [3, 15]....
...A standard method of reducing the asymptotic average reduction factor is by polynomial acceleration, particularly using Chebyshev polynomials [1, 6, 10, 23]....
...The original formulation used a modified first-order iteration, as above, though the resulting algorithm is impractical due to numerical difficulties [1]....
...We follow Axelsson by considering the splitting with M = I, from which the general case follows by considering the (preconditioned) equations with M⁻¹A in place of A....
...Axelsson’s result in (4.4) that specifies the required number of iterations to achieve a desired error reduction in the solver suggests that, for any ε > 0, after p∗ = ⌈ln(ε/2)/ln(σ²)⌉ iterations (4.13), the variance error reduction is smaller than ε....
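As a quick worked instance of this bound, with illustrative numbers of our own choosing (ε and σ² are not values from the paper, and the mean-error analogue with σ in place of σ² is our assumption, matching the k∗∗ = k∗/2 observation quoted above):

```python
import math

eps, sigma2 = 1e-6, 0.9  # illustrative tolerance and squared convergence factor
# variance: p* = ceil(ln(eps/2) / ln(sigma^2))
p_var = math.ceil(math.log(eps / 2.0) / math.log(sigma2))
# mean: same bound with sigma in place of sigma^2 (roughly twice as many steps)
p_mean = math.ceil(math.log(eps / 2.0) / math.log(math.sqrt(sigma2)))
print(p_var, p_mean)  # e.g. 138 vs 276: the variance needs about half as many
```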
...Axelsson gives this result [1, p. 183], and it is interesting to note that a little good fortune happens; the three equations can be satisfied with just two coefficients because the recursion for the Chebyshev polynomials turns one equation into a second....