Convergence in Variance of Chebyshev Accelerated Gibbs Samplers
Summary
1. Introduction.
- Iterations of the form (1.1)
- This correspondence matters because the convergence properties of the solver are inherited by the sampler (and vice versa), so acceleration techniques developed for solvers can be applied directly to samplers.
- The main purpose of the paper is to establish the equivalence of convergence in mean and in covariance under Chebyshev polynomial acceleration, without assuming that the target distribution is Gaussian.
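The solver-to-sampler correspondence can be illustrated with a minimal numpy sketch (an illustrative toy, not the paper's implementation; the matrix A and vector b below are hypothetical). For a splitting A = M − N, the solver iteration x ← M⁻¹(Nx + b) converges to A⁻¹b, while the same iteration driven by zero-mean noise with covariance Mᵀ + N converges in distribution to N(0, A⁻¹):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy symmetric positive definite precision matrix A, with splitting
# A = M - N where M is the lower triangle (Gauss-Seidel / Gibbs sweep).
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
M = np.tril(A)
N = M - A

# Solver: x <- M^{-1}(N x + b) converges to A^{-1} b.
x = np.zeros(3)
for _ in range(200):
    x = np.linalg.solve(M, N @ x + b)

# Sampler: the same iteration driven by noise c_k with
# Var(c_k) = M^T + N (= diag(A) for this splitting) converges in
# distribution to N(0, A^{-1}), at the same geometric rate.
std = np.sqrt(np.diag(A))
y = np.zeros(3)
cov = np.zeros((3, 3))
nsamp, burn = 100_000, 100
for k in range(nsamp + burn):
    y = np.linalg.solve(M, N @ y + std * rng.standard_normal(3))
    if k >= burn:
        cov += np.outer(y, y)
cov /= nsamp  # empirical covariance, close to inv(A)
```

Since the sampler's error in mean and covariance contracts with the same iteration matrix M⁻¹N as the solver's error, polynomial acceleration of the solver carries over to the sampler.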
- [Figure] Along the horizontal axis are values of the ratio of the extreme eigenvalues of M
- The authors prove their key convergence statement by recursion.
- Axelsson [1, Rem. 5.11] points out two deficiencies of the first-order Chebyshev iterative method as a solver:
- First, the number of steps p must be selected in advance, and the method is not optimal for any other number of steps.
- The remedy for iterative solvers, and hence for iterative samplers, is to use second-order methods, which suffer from neither of these deficiencies.
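A second-order Chebyshev iteration can be sketched as follows (a standard textbook recurrence in the style of Golub and Van Loan, applied here to the trivial Richardson splitting M = I rather than the paper's SSOR sampler; the matrix and eigenvalue bounds are illustrative). Only bounds on the extreme eigenvalues are needed, and the iterate is optimal at every step, so no step count p has to be fixed in advance:

```python
import numpy as np

def chebyshev_solve(A, b, lam_min, lam_max, tol=1e-10, max_iter=500):
    """Second-order Chebyshev iteration for SPD A (Richardson splitting).

    Unlike the first-order method, the recurrence is optimal at *every*
    iteration, so the number of steps need not be chosen in advance.
    """
    theta = 0.5 * (lam_max + lam_min)   # center of the spectrum
    delta = 0.5 * (lam_max - lam_min)   # half-width of the spectrum
    sigma = theta / delta
    x = np.zeros_like(b)
    r = b - A @ x
    d = r / theta
    rho = 1.0 / sigma
    for _ in range(max_iter):
        x = x + d
        r = b - A @ x
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho_next = 1.0 / (2.0 * sigma - rho)
        d = rho_next * rho * d + (2.0 * rho_next / delta) * r
        rho = rho_next
    return x

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
lams = np.linalg.eigvalsh(A)
x = chebyshev_solve(A, b, lams[0], lams[-1])
```

The two scalar recurrences (rho, d) are what make this "second-order": each new iterate combines the current residual with the previous search direction, removing the need to precompute coefficients for a fixed p.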
5. Numerical examples sampling from Gaussians at different resolutions.
- Both examples use a cubic-element discretization of the cubic domain [0, 1]³, with trilinear interpolation from nodal values within each element.
- Both examples use R = 1/4, but they differ in the number of nodes (or elements) in each coordinate direction.
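To give a flavor of the resolution dependence, here is a small sketch using a shifted 7-point lattice Laplacian on an n³ grid as a stand-in precision matrix (the paper's actual matrices come from trilinear finite elements with R = 1/4; the matrix, shift, and grid sizes below are hypothetical). The Gauss-Seidel convergence factor ρ(M⁻¹N) degrades toward 1 as the number of nodes per direction grows, which is what motivates acceleration:

```python
import numpy as np

def lattice_precision(n, shift=1.0):
    """SPD stand-in precision matrix on an n^3 lattice:
    a shifted 7-point finite-difference Laplacian (hypothetical example,
    not the paper's trilinear-element matrix)."""
    def idx(i, j, k):
        return (i * n + j) * n + k
    A = np.zeros((n**3, n**3))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                p = idx(i, j, k)
                A[p, p] = 6.0 + shift
                for di, dj, dk in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
                    ii, jj, kk = i + di, j + dj, k + dk
                    if 0 <= ii < n and 0 <= jj < n and 0 <= kk < n:
                        A[p, idx(ii, jj, kk)] = -1.0
    return A

def gs_radius(A):
    """Spectral radius of the Gauss-Seidel iteration matrix M^{-1}N."""
    M = np.tril(A)
    G = np.linalg.solve(M, M - A)
    return max(abs(np.linalg.eigvals(G)))

# Convergence factor worsens (approaches 1) as resolution grows.
r2 = gs_radius(lattice_precision(2))
r4 = gs_radius(lattice_precision(4))
```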
6. Discussion.
- While performing this research the authors recognized the debt they owe to the late Gene Golub, who pioneered the first- and second-order Chebyshev acceleration for linear solvers [9] on which they have built.
- The authors are pleased to demonstrate the connection between Gene's work and sampling algorithms from statistics by publishing in this journal, which Gene had wanted to retain the title Journal on Scientific and Statistical Computing [22].
Citations

Cites background or methods from "Convergence in Variance of Chebyshev Accelerated Gibbs Samplers":
- ...Fox and Parker (2014) applied this approach to attain an iterative sampler with an optimal geometric convergence rate using Chebyshev polynomials....
- ...Convergence in the variance is even faster, after only k∗∗_stat = k∗_stat/2 iterations for stationary samplers, or after k∗∗_Cheby = k∗_Cheby/2 iterations for Chebyshev accelerated samplers (Fox and Parker 2014)....
- ...Chebyshev Accelerated Sampling: Chebyshev polynomial acceleration can be applied via equations (6) and (7) for any symmetric matrix splitting (i.e., M and N are symmetric) (Golub and Van Loan 1989; Fox and Parker 2014)....
- ...The Chebyshev accelerated sampler implementation in Fox and Parker (2014) is such an implementation....
- ...In all of the examples that Fox and Parker (2014, 2017) have studied using computationally expensive diagnostics, the Chebyshev accelerated samplers behave like the corresponding solvers, and converge with the predicted convergence rates in finite precision....
Cites background or methods from "Convergence in Variance of Chebyshev Accelerated Gibbs Samplers":
- ...Moreover, when multiple matrix splittings are used, the different splittings in [27] differ only in the Chebyshev coefficients, while in our work different matrix splittings correspond to different graph structures....
- ...While the idea of converting a linear solver to a sampler is also discussed in [27], their work is different from ours because their algorithm does not consider graph structures in constructing the matrix splitting that is used (i....
- ...The authors of [27] have proposed a sampling framework that generalizes and accelerates the Gibbs sampler....
- ...The sampling algorithm in [27] adds additional noise corresponding to the first- or second-order Chebyshev coefficients to accelerate the Gibbs sampler....
Cites background or methods from "Convergence in Variance of Chebyshev Accelerated Gibbs Samplers":
- ...If θ = 1/2, then the proposal and target distributions are the same, the MH accept/reject step is redundant, and we can use [11, 12, 13] to analyse and accelerate the AR(1) process (14)....
- ...By including the effect of Metropolis-Hastings we extend earlier work by Fox and Parker, who used matrix splitting techniques to analyse the performance and improve the efficiency of AR(1) processes for targeting Gaussian distributions....
- ...[12] Fox, C. and Parker, A. Accelerated Gibbs sampling of normal distributions using matrix splittings and polynomials. Not yet published....
- ...If the proposal and target distributions are Gaussian and identical, then it follows from the theory in [11, 12, 13] that to construct an efficient AR(1) process we should try to satisfy the following conditions: 1....
- ...Whilst we do not specify any new acceleration strategies, our results are an important step in this direction because we give a criterion to evaluate AR(1) process proposals, including the accelerations used in Fox and Parker [11, 12, 13]....
References

"Convergence in Variance of Chebyshev Accelerated Gibbs Samplers" refers to methods in this reference:
- ...A standard method of reducing the asymptotic average reduction factor is polynomial acceleration, particularly using Chebyshev polynomials [1, 6, 10, 23]....
- ...Analogous to the SSOR solver algorithms in [10, 23], the Chebyshev sampler implements sequential forward and backward sweeps of an SOR sampler [7, 20] (i....
- ...Since a symmetric splitting is required, Chebyshev acceleration in a linear solver is commonly implemented with a symmetric successive overrelaxation (SSOR) splitting A = M_SSOR − N_SSOR, with algorithms to be found, for example, in [10, 23]....
- ...For example, they occur in the stationary linear iterative methods used to solve systems of linear equations [1, 10, 17, 23]....
- ...We could have equally followed the excellent presentations of the same methods in Golub and Van Loan [10] or Saad [23]....
"Convergence in Variance of Chebyshev Accelerated Gibbs Samplers" refers to background in this reference:
- ...84], [19], though distributions with nonconnected support for which Algorithm 1 fails are easy to find [19]....
- ...Convergence in both cases is geometric, with the asymptotic average reduction factor given by ρ(G) (though this is called the "convergence rate" in the statistics literature [19])....