
Showing papers on "Bidiagonalization published in 2015"


Journal Article
TL;DR: The fundamental Krylov-Tikhonov techniques based on Lanczos bidiagonalization and the Arnoldi algorithm are reviewed, and the use of the unsymmetric Lanczos process, which has so far only marginally been considered in this setting, is studied.
Abstract: In the framework of large-scale linear discrete ill-posed problems, Krylov projection methods represent an essential tool since their development, which dates back to the early 1950's. In recent years, the use of these methods in a hybrid fashion or to solve Tikhonov regularized problems has received great attention especially for problems involving the restoration of digital images. In this paper we review the fundamental Krylov-Tikhonov techniques based on Lanczos bidiagonalization and the Arnoldi algorithms. Moreover, we study the use of the unsymmetric Lanczos process that, to the best of our knowledge, has just marginally been considered in this setting. Many numerical experiments and comparisons of different methods are presented.
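The Lanczos (Golub-Kahan) bidiagonalization underlying these Krylov-Tikhonov techniques can be sketched in a few lines. This is an illustrative NumPy version without the reorthogonalization a practical implementation needs; the function name and interface are ours, not the paper's:

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan (Lanczos) bidiagonalization of A started
    from b: returns U (m x (k+1)), V (n x k) with orthonormal columns
    and a lower bidiagonal B ((k+1) x k) satisfying A @ V = U @ B.
    No reorthogonalization; assumes no breakdown (alpha, beta != 0)."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    v_prev, beta = np.zeros(n), 0.0
    for j in range(k):
        w = A.T @ U[:, j] - beta * v_prev   # alpha_j v_j = A^T u_j - beta_j v_{j-1}
        alpha = np.linalg.norm(w)
        V[:, j] = w / alpha
        w = A @ V[:, j] - alpha * U[:, j]   # beta_{j+1} u_{j+1} = A v_j - alpha_j u_j
        beta = np.linalg.norm(w)
        U[:, j + 1] = w / beta
        B[j, j], B[j + 1, j] = alpha, beta
        v_prev = V[:, j]
    return U, V, B
```

Projecting a Tikhonov problem onto the Krylov subspaces spanned by U and V reduces it to a small bidiagonal least-squares problem, which is the common core of the hybrid methods the paper surveys.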

105 citations


Journal ArticleDOI
TL;DR: In this paper, a hybrid, two-stage meta-method, PHSVDS, is proposed to compute the largest and smallest singular triplets of large, sparse matrices.
Abstract: The computation of a few singular triplets of large, sparse matrices is a challenging task, especially when the smallest magnitude singular values are needed in high accuracy. Most recent efforts try to address this problem through variations of the Lanczos bidiagonalization method, but they are still challenged even for medium matrix sizes due to the difficulty of the problem. We propose a novel SVD approach that can take advantage of preconditioning and of any well-designed eigensolver to compute both largest and smallest singular triplets. Accuracy and efficiency are achieved through a hybrid, two-stage meta-method, PHSVDS. In the first stage, PHSVDS solves the normal equations up to the best achievable accuracy. If further accuracy is required, the method switches automatically to an eigenvalue problem with the augmented matrix. Thus it combines the advantages of the two stages, faster convergence and accuracy, respectively. For the augmented matrix, solving the interior eigenvalue is facilitated by pr...

29 citations


Journal ArticleDOI
TL;DR: In this paper, the Lanczos bidiagonalization algorithm is used to solve a Tikhonov cost function in a short time; the computation time of the inverse modeling greatly decreases by substituting the forward operator matrix with a matrix of lower dimension.
Abstract: This paper describes the application of a new inversion method to recover a three-dimensional density model from measured gravity anomalies. To attain this, the survey area is divided into a large number of rectangular prisms in a mesh with unknown densities. The results show that the application of the Lanczos bidiagonalization algorithm in the inversion helps to solve a Tikhonov cost function in a short time. The performance time of the inverse modeling greatly decreases by substituting the forward operator matrix with a matrix of lower dimension. A least-squares QR (LSQR) method is applied to select the best value of a regularization parameter. A Euler deconvolution method was used to avoid the natural trend of gravity structures to concentrate at shallow depth. Finally, the newly developed method was applied to synthetic data to demonstrate its suitability and then to real data from the Bandar Charak region (Hormozgan, south Iran). The 3D gravity inversion results were able to detect the location of the known salt dome (density contrast of −0.2 g/cm3) intrusion in the study area.
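The Tikhonov step described above can be mimicked with off-the-shelf tools: SciPy's LSQR accepts a damping parameter and then minimizes the standard-form Tikhonov functional without ever forming the normal equations. A toy sketch with a random stand-in for the forward operator (the paper's actual gravity operator and parameter-choice rule differ):

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
G = rng.standard_normal((100, 50))             # stand-in for the forward operator
rho = rng.standard_normal(50)                  # "true" density model
d = G @ rho + 0.01 * rng.standard_normal(100)  # noisy data

# damp=lam makes LSQR minimize ||G rho - d||^2 + lam^2 ||rho||^2,
# i.e. the (standard-form) Tikhonov cost function, matrix-free.
lam = 0.1
rho_reg = lsqr(G, d, damp=lam)[0]
```

In a realistic inversion, `lam` would be chosen by a selection rule such as generalized cross validation rather than fixed by hand.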

21 citations


Journal ArticleDOI
TL;DR: The extension of the core concept to the multiple right-hand sides case $AX\approx B$ in [I. Hnětynková, M. Plešinger, and Z. Strakoš, SIAM J. Matrix Anal. Appl., 34 (2013), pp. 917--931], which is highly nontrivial, is based on the application of the singular value decomposition.
Abstract: The concept of the core problem in total least squares (TLS) problems with single right-hand side introduced in [C. C. Paige and Z. Strakoš, SIAM J. Matrix Anal. Appl., 27 (2005), pp. 861--875] separates necessary and sufficient information for solving the problem from redundancies and irrelevant information contained in the data. It is based on orthogonal transformations such that the resulting problem decomposes into two independent parts. One of the parts has nonzero right-hand side and minimal dimensions, and it always has the unique TLS solution. The other part has trivial (zero) right-hand side and maximal dimensions. Assuming exact arithmetic, the core problem can be obtained by the Golub-Kahan bidiagonalization. Extension of the core concept to the multiple right-hand sides case $AX\approx B$ in [I. Hnětynková, M. Plešinger, and Z. Strakoš, SIAM J. Matrix Anal. Appl., 34 (2013), pp. 917--931], which is highly nontrivial, is based on application of the singular value decomposition. In this paper we...

17 citations


01 Jan 2015
TL;DR: In this article, a generalized Golub-Kahan bidiagonalization for matrix pair decomposition was proposed; it is an extension of Golub-Kahan bidiagonalization to matrix pairs, and simplifies to the latter method when B is the identity matrix.
Abstract: We describe a novel method for reducing a pair of large matrices {A;B} to a pair of small matrices {H;K}. The method is an extension of Golub-Kahan bidiagonalization to matrix pairs, and simplifies to the latter method when B is the identity matrix. Applications to Tikhonov regularization of large linear discrete ill-posed problems are described. In these problems the matrix A represents a discretization of a compact integral operator and B is a regularization matrix. Keywords: Generalized Golub-Kahan bidiagonalization, generalized Lanczos bidiagonalization, generalized Krylov method, matrix pair decomposition, ill-posed problem, Tikhonov regularization, multi-parameter regularization.

16 citations


Journal ArticleDOI
TL;DR: A novel method for reducing a pair of large matrices A and B to a pair of small matrices H and K is described; it is an extension of Golub-Kahan bidiagonalization to matrix pairs, and simplifies to the latter method when B is the identity matrix.
Abstract: We describe a novel method for reducing a pair of large matrices $\{A,B\}$ to a pair of small matrices $\{H,K\}$. The method is an extension of Golub-Kahan bidiagonalization to matrix pairs, and simplifies to the latter method when B is the identity matrix. Applications to Tikhonov regularization of large linear discrete ill-posed problems are described. In these problems the matrix A represents a discretization of a compact integral operator and B is a regularization matrix.
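For small problems, the general-form Tikhonov functional that the generalized bidiagonalization targets, min ||Ax - b||^2 + mu ||Bx||^2, can be solved directly by stacking; the paper's method addresses the large-scale case where such a dense solve is infeasible. A reference sketch (all names and sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 30))
B = np.eye(30)                 # regularization matrix; identity for simplicity
b = rng.standard_normal(60)
mu = 0.5

# min ||A x - b||^2 + mu ||B x||^2 posed as one stacked least-squares problem
stacked = np.vstack([A, np.sqrt(mu) * B])
rhs = np.concatenate([b, np.zeros(30)])
x = np.linalg.lstsq(stacked, rhs, rcond=None)[0]
```

The solution satisfies the regularized normal equations (A^T A + mu B^T B) x = A^T b, which is the system the projected small problem {H,K} approximates.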

14 citations


Journal ArticleDOI
TL;DR: This paper presents a novel block LSMR (least squares minimal residual) algorithm for solving non-symmetric linear systems with multiple right-hand sides; the Frobenius norm of the residual matrix is observed to decrease monotonically.
Abstract: It is well known that if the coefficient matrix in a linear system is large and sparse, or sometimes not readily available, then iterative solvers may become the only choice. The block solvers are an attractive class of iterative solvers for solving linear systems with multiple right-hand sides. In general, the block solvers are more suitable for dense systems with a preconditioner. In this paper, we present a novel block LSMR (least squares minimal residual) algorithm for solving non-symmetric linear systems with multiple right-hand sides. This algorithm is based on the block bidiagonalization and the LSMR algorithm, and is derived by minimizing the 2-norm of each column of the residual of the normal equations. We then give some properties of the new algorithm. In addition, the convergence of the stated algorithm is studied. In practice, we also observe that the Frobenius norm of the residual matrix decreases monotonically. Finally, some numerical examples are presented to show the efficiency of the new method in comparison with the traditional LSMR method.
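A simple baseline for the multiple right-hand-side problem is to run single-vector LSMR column by column; the block algorithm proposed in the paper instead shares one block bidiagonalization across all columns. A sketch of the baseline with SciPy (sizes illustrative):

```python
import numpy as np
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 20))
B = rng.standard_normal((40, 3))   # three right-hand sides

# Baseline: one LSMR run per column of B. The block algorithm would
# build a single block Krylov subspace shared by all columns instead,
# trading more work per iteration for fewer matrix accesses.
X = np.column_stack([lsmr(A, B[:, j])[0] for j in range(B.shape[1])])
```

Each column of X minimizes the 2-norm of the corresponding column of the normal-equations residual A^T(B - AX), which is exactly the per-column criterion the block derivation generalizes to the Frobenius norm.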

7 citations


01 Jan 2015
TL;DR: This work develops a new extended Lanczos bidiagonalization method that yields a lower bound for the condition number of a large, sparse, real matrix, as well as probabilistic upper bounds for $\kappa(A)$.
Abstract: Reliable estimates for the condition number of a large, sparse, real matrix A are important in many applications. To get an approximation for the condition number $\kappa(A)$, an approximation for the smallest singular value is needed. Standard Krylov subspaces are usually unsuitable for finding a good approximation to the smallest singular value. Therefore, we study extended Krylov subspaces which turn out to be ideal for the simultaneous approximation of both the smallest and largest singular value of a matrix. First, we develop a new extended Lanczos bidiagonalization method. With this method we obtain a lower bound for the condition number. Moreover, the method also yields probabilistic upper bounds for $\kappa(A)$. The user can select the probability with which the upper bound holds, as well as the ratio of the probabilistic upper bound and the lower bound. Keywords: Extended Lanczos bidiagonalization, extended Krylov method, matrix condition number, lower bound, probabilistic upper bound.

5 citations


Journal ArticleDOI
TL;DR: In this article, an extended Lanczos bidiagonalization method was proposed for the simultaneous approximation of both the smallest and largest singular values of a matrix, since standard Krylov subspaces are usually unsuitable for finding a good approximation to the smallest singular value.
Abstract: Reliable estimates for the condition number of a large, sparse, real matrix $A$ are important in many applications. To get an approximation for the condition number $\kappa(A)$, an approximation for the smallest singular value is needed. Standard Krylov subspaces are usually unsuitable for finding a good approximation to the smallest singular value. Therefore, we study extended Krylov subspaces which turn out to be ideal for the simultaneous approximation of both the smallest and largest singular value of a matrix. First, we develop a new extended Lanczos bidiagonalization method. With this method we obtain a lower bound for the condition number. Moreover, the method also yields probabilistic upper bounds for $\kappa(A)$. The user can select the probability with which the upper bound holds, as well as the ratio of the probabilistic upper bound and the lower bound.
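The key point, that reaching the smallest singular value requires solves with A and not just products, can be illustrated with plain power and inverse iteration, a much simpler stand-in for the paper's extended Lanczos bidiagonalization (function name and setup are ours):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def cond_estimate(A, iters=200, seed=0):
    """Estimate kappa(A) = sigma_max / sigma_min for a square
    nonsingular A: power iteration on A^T A for the largest singular
    value, and inverse iteration on (A^T A)^{-1} (i.e. solves with A,
    the same ingredient extended Krylov methods use) for the smallest."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    v = rng.standard_normal(n)
    for _ in range(iters):                 # largest: iterate with A^T A
        v = A.T @ (A @ v)
        v /= np.linalg.norm(v)
    smax = np.linalg.norm(A @ v)
    lu_piv = lu_factor(A)
    w = rng.standard_normal(n)
    for _ in range(iters):                 # smallest: iterate with A^{-1} A^{-T}
        w = lu_solve(lu_piv, lu_solve(lu_piv, w, trans=1))
        w /= np.linalg.norm(w)
    smin = np.linalg.norm(A @ w)
    return smax / smin
```

Since power iteration only underestimates sigma_max and inverse iteration only overestimates sigma_min, the returned ratio is a lower bound for kappa(A), mirroring the deterministic bound in the paper (the probabilistic upper bounds are not reproduced here).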

4 citations


DOI
20 Sep 2015
TL;DR: A block version of the LSMR algorithm for solving linear systems with multiple right-hand sides is presented in this paper; it is based on the block bidiagonalization and is derived by minimizing the Frobenius norm of the residual matrix of the normal equations.
Abstract: LSMR (Least Squares Minimal Residual) is an iterative method for the solution of linear systems of equations and least-squares problems. This paper presents a block version of the LSMR algorithm for solving linear systems with multiple right-hand sides. The new algorithm is based on the block bidiagonalization and is derived by minimizing the Frobenius norm of the residual matrix of the normal equations. In addition, the convergence of the proposed algorithm is discussed. In practice, it is also observed that the Frobenius norm of the residual matrix decreases monotonically. Finally, numerical experiments from real applications are employed to verify the effectiveness of the presented method.

3 citations


Posted Content
TL;DR: Numerical algorithms are developed for the efficient evaluation of quantities associated with the generalized matrix functions of Hawkins and Ben-Israel, based on Gaussian quadrature and Golub-Kahan bidiagonalization.
Abstract: We develop numerical algorithms for the efficient evaluation of quantities associated with generalized matrix functions [J. B. Hawkins and A. Ben-Israel, Linear and Multilinear Algebra 1(2), 1973, pp. 163-171]. Our algorithms are based on Gaussian quadrature and Golub--Kahan bidiagonalization. Block variants are also investigated. Numerical experiments are performed to illustrate the effectiveness and efficiency of our techniques in computing generalized matrix functions arising in the analysis of networks.
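Generalized matrix functions in the sense of Hawkins and Ben-Israel apply f to the singular values: for A = U Sigma V^T, the generalized function is U f(Sigma) V^T. A dense reference implementation (the paper's point is to approximate such quantities without computing the full SVD):

```python
import numpy as np

def generalized_matfun(A, f):
    """Dense reference for the generalized matrix function of
    Hawkins and Ben-Israel: apply f entrywise to the singular values,
    returning U @ diag(f(s)) @ V^T for the thin SVD A = U diag(s) V^T."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(f(s)) @ Vt
```

Note that for odd monomials this agrees with ordinary matrix products, e.g. f(s) = s^3 gives A A^T A, which provides a cheap sanity check.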

ReportDOI
12 Apr 2015
TL;DR: This paper develops three potential implementations of communication-avoiding Lanczos bidiagonalization algorithms, discusses their different computational requirements, and shows how to obtain a communication-avoiding LSQR least squares solver.
Abstract: Communication - the movement of data between levels of the memory hierarchy or between processors over a network - is the most expensive operation in terms of both time and energy at all scales of computing. Achieving scalable performance in terms of time and energy thus requires a dramatic shift in the field of algorithmic design. Solvers for sparse linear algebra problems, ubiquitous throughout scientific codes, are often the bottlenecks in application performance due to a low computation/communication ratio. In this paper we develop three potential implementations of communication-avoiding Lanczos bidiagonalization algorithms and discuss their different computational requirements. Based on these new algorithms, we also show how to obtain a communication-avoiding LSQR least squares solver.

Posted Content
TL;DR: In this paper, bounds are established for the distance between the $k$-dimensional Krylov subspace and the $k$-dimensional dominant right singular space, showing that LSQR, a Lanczos bidiagonalization based Krylov subspace iterative method, and its mathematically equivalent CGLS applied to the normal equations have better regularizing effects for severely and moderately ill-posed problems than for mildly ill-posed ones.
Abstract: LSQR, a Lanczos bidiagonalization based Krylov subspace iterative method, and its mathematically equivalent CGLS applied to normal equations system, are commonly used for large-scale discrete ill-posed problems. It is well known that LSQR and CGLS have regularizing effects, where the number of iterations plays the role of the regularization parameter. However, it has long been unknown whether the regularizing effects are good enough to find best possible regularized solutions. Here a best possible regularized solution means that it is at least as accurate as the best regularized solution obtained by the truncated singular value decomposition (TSVD) method. In this paper, we establish bounds for the distance between the $k$-dimensional Krylov subspace and the $k$-dimensional dominant right singular space. They show that the Krylov subspace captures the dominant right singular space better for severely and moderately ill-posed problems than for mildly ill-posed problems. Our general conclusions are that LSQR has better regularizing effects for the first two kinds of problems than for the third kind, and a hybrid LSQR with additional regularization is generally needed for mildly ill-posed problems. Exploiting the established bounds, we derive an estimate for the accuracy of the rank $k$ approximation generated by Lanczos bidiagonalization. Numerical experiments illustrate that the regularizing effects of LSQR are good enough to compute best possible regularized solutions for severely and moderately ill-posed problems, stronger than our theory predicts, but they are not for mildly ill-posed problems and additional regularization is needed.
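The regularizing effect of truncated LSQR iteration can be seen on a small synthetic severely ill-posed problem: a few LSQR steps produce a solution close to the rank-k TSVD solution, with the iteration count playing the role of the regularization parameter. An illustrative sketch (our construction, not one of the paper's test problems):

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(5)
n = 64
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q1 @ np.diag(10.0 ** -np.arange(n)) @ Q2.T   # severely ill-posed spectrum
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-8 * rng.standard_normal(n)

k = 6
# k LSQR iterations act as regularization (tolerances zeroed so LSQR
# runs exactly k steps rather than stopping on its residual tests).
x_lsqr = lsqr(A, b, atol=0.0, btol=0.0, iter_lim=k)[0]
# TSVD reference: keep only the k dominant singular triplets.
U, sv, Vt = np.linalg.svd(A)
x_tsvd = Vt[:k].T @ ((U[:, :k].T @ b) / sv[:k])
```

For this kind of fast singular-value decay the two regularized solutions nearly coincide, consistent with the paper's conclusion for severely ill-posed problems.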

Proceedings ArticleDOI
TL;DR: In this article, the authors present an efficient parallel algorithm for the inversion of 3D gravity data, whose goal is to estimate the depth of a sedimentary basin in which the density contrast varies parabolically with depth.
Abstract: We present an efficient parallel algorithm for the inversion of 3-D gravity data, whose goal is to estimate the depth of a sedimentary basin in which the density contrast varies parabolically with depth. The efficiency of the gravity inversion methods applied to the interpretation of sedimentary basins depends on the number of data and model parameters to be estimated, becoming very poor when the number of parameters is very large. We present simulation results with a synthetic model of a sedimentary basin inspired by a real situation, taking advantage of a parallel Levenberg-Marquardt algorithm implemented using both MPI and OpenMP. The Lanczos bidiagonalization method has been used to obtain the solution of the linearized subproblem at each iteration. The idea of obtaining the solution of a large system of equations using the bidiagonalization procedure is quite useful in practical problems, and allows one to implement selection methods for the optimal regularization parameter in an easy way, such as the weighted generalized cross validation method adopted in this work. The hybrid parallel implementation combined with Lanczos bidiagonalization allows us to achieve a significant reduction of the computational cost, which is otherwise very high due to the scale of the problem.

Journal ArticleDOI
01 Jan 2015
TL;DR: A performance analysis of the main two-sided factorizations used in large dense singular value decomposition (SVD) or eigenvalue problems: the bidiagonalization, tridiagonalization and the upper Hessenberg factorizations on heterogeneous systems of multicore CPUs and Xeon Phi coprocessors.
Abstract: Many applications, ranging from big data analytics to nanostructure designs, require the solution of large dense singular value decomposition (SVD) or eigenvalue problems. A first step in the solution methodology for these problems is the reduction of the matrix at hand to condensed form by two-sided orthogonal transformations. This step is standardly used to significantly accelerate the solution process. We present a performance analysis of the main two-sided factorizations used in these reductions: the bidiagonalization, tridiagonalization, and the upper Hessenberg factorizations on heterogeneous systems of multicore CPUs and Xeon Phi coprocessors. We derive a performance model and use it to guide the analysis and to evaluate performance. We develop optimized implementations for these methods that get up to 80% of the optimal performance bounds. Finally, we describe the heterogeneous multicore and coprocessor development considerations and the techniques that enable us to achieve these high-performance results. The work here presents the first highly optimized implementation of these main factorizations for Xeon Phi coprocessors. Compared to the LAPACK versions optimized by Intel for Xeon Phi (in MKL), we achieve up to 50% speedup.

Posted Content
TL;DR: This work introduces and analyzes an inexact Golub-Kahan-Lanczos bidiagonalization procedure, where the inexactness is related to the inaccuracy of the operations of f(A)v, and particular outer and inner stopping criteria are devised so as to cope with the lack of a true residual.
Abstract: Given a large square matrix $A$ and a sufficiently regular function $f$ so that $f(A)$ is well defined, we are interested in the approximation of the leading singular values and corresponding singular vectors of $f(A)$, and in particular of $\|f(A)\|$, where $\|\cdot \|$ is the matrix norm induced by the Euclidean vector norm. Since neither $f(A)$ nor $f(A)v$ can be computed exactly, we introduce and analyze an inexact Golub-Kahan-Lanczos bidiagonalization procedure, where the inexactness is related to the inaccuracy of the operations $f(A)v$, $f(A)^*v$. Particular outer and inner stopping criteria are devised so as to cope with the lack of a true residual. Numerical experiments with the new algorithm on typical application problems are reported.
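Since only black-box products $f(A)v$ and $f(A)^*v$ are available, $\|f(A)\|$ can be estimated from such products alone. Here a plain power iteration on $f(A)^*f(A)$ with $f = \exp$ stands in for the paper's inexact Golub-Kahan-Lanczos procedure (illustrative only; the paper's method also returns singular vectors and handles the inexactness of the products):

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse.linalg import expm_multiply

rng = np.random.default_rng(6)
n = 40
A = rng.standard_normal((n, n)) / np.sqrt(n)

# Power iteration on f(A)^* f(A) with f = exp, using only the
# black-box products exp(A) v and exp(A)^T v via expm_multiply,
# i.e. without ever forming the matrix exp(A) itself.
v = rng.standard_normal(n)
for _ in range(100):
    v = expm_multiply(A.T, expm_multiply(A, v))
    v /= np.linalg.norm(v)
est = np.linalg.norm(expm_multiply(A, v))   # approximates ||exp(A)||_2
```

The estimate is always a lower bound on the true spectral norm, since it is the norm of $f(A)$ applied to one unit vector.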

Posted Content
TL;DR: In this paper, the authors use Tikhonov regularization for projected solutions of large-scale ill-posed problems, derive the weight parameter for the weighted generalized cross validation, and explain why generalized cross validation without weighting is not always useful.
Abstract: Tikhonov regularization for projected solutions of large-scale ill-posed problems is considered. The Golub-Kahan iterative bidiagonalization is used to project the problem onto a subspace, and regularization is then applied to find a subspace approximation to the full problem. Determination of the regularization parameter for the projected problem by unbiased predictive risk estimation, generalized cross validation and discrepancy principle techniques is investigated. It is shown that the regularization parameter obtained by the unbiased predictive risk estimator can provide a good estimate for that to be used for a full problem which is moderately to severely ill-posed. A similar analysis provides the weight parameter for the weighted generalized cross validation such that the approach is also useful in these cases, and also explains why the generalized cross validation without weighting is not always useful. All results are independent of whether systems are over- or underdetermined. Numerical simulations for standard one-dimensional test problems and two-dimensional data, for both image restoration and tomographic image reconstruction, support the analysis and validate the techniques. The size of the projected problem is found using an extension of a noise revealing function for the projected problem [Hnětynková, Plešinger, and Strakoš, BIT Numerical Mathematics 49 (2009), pp. 669-696]. Furthermore, an iteratively reweighted regularization approach for edge preserving regularization is extended for projected systems, providing stabilization of the solutions of the projected systems and reducing dependence on the determination of the size of the projected subspace.
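For reference, the classical (unweighted) GCV function for Tikhonov regularization can be evaluated cheaply from an SVD; the paper's contribution is a weighted variant for the projected problem, which this sketch does not implement (function name and test problem are ours):

```python
import numpy as np

def gcv(lam, s, beta, m, perp2=0.0):
    """Classical (unweighted) GCV function for Tikhonov regularization,
    evaluated via the SVD: s singular values, beta = U^T b, m rows,
    perp2 = ||b||^2 - ||beta||^2 the part of b outside range(A)."""
    f = s**2 / (s**2 + lam**2)               # Tikhonov filter factors
    resid2 = np.sum(((1.0 - f) * beta)**2) + perp2
    trace = m - np.sum(f)                    # trace(I - A A_lam^+)
    return resid2 / trace**2

rng = np.random.default_rng(7)
A = rng.standard_normal((50, 30)) @ np.diag(0.9 ** np.arange(30))
b = A @ np.ones(30) + 0.01 * rng.standard_normal(50)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
beta = U.T @ b
perp2 = b @ b - beta @ beta
lams = np.logspace(-6, 1, 50)
vals = [gcv(l, s, beta, A.shape[0], perp2) for l in lams]
lam_best = lams[int(np.argmin(vals))]
```

Minimizing this function over a grid (or by a 1-D optimizer) selects the regularization parameter; the weighted variant of the paper replaces the trace term to make the rule reliable for the projected system as well.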

Journal ArticleDOI
TL;DR: A complex generalization of wedge-shaped matrices is introduced and some further spectral properties are shown, complementing those already known, focusing in particular on the nonzero components of eigenvectors.