
Showing papers on "Bidiagonalization published in 2007"


Journal Article
TL;DR: A weighted generalized cross validation (WGCV) method is described for choosing the regularization parameter; with it, the semi-convergence behavior of the Lanczos method can be overcome, making the solution less sensitive to the number of iterations.
Abstract: Lanczos-hybrid regularization methods have been proposed as effective approaches for solving large-scale ill-posed inverse problems. Lanczos methods restrict the solution to lie in a Krylov subspace, but they are hindered by semi-convergence behavior, in that the quality of the solution first increases and then decreases. Hybrid methods apply a standard regularization technique, such as Tikhonov regularization, to the projected problem at each iteration. Thus, regularization in hybrid methods is achieved both by Krylov filtering and by appropriate choice of a regularization parameter at each iteration. In this paper we describe a weighted generalized cross validation (WGCV) method for choosing the parameter. Using this method we demonstrate that the semi-convergence behavior of the Lanczos method can be overcome, making the solution less sensitive to the number of iterations.

178 citations
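A minimal sketch of the parameter-choice step in a hybrid method, using standard GCV (the omega = 1 case of WGCV) on the projected problem. In practice the bidiagonal matrix Bk and the projected right-hand side beta*e1 come from k Golub–Kahan bidiagonalization steps (a sketch of that routine appears under the SLEPc entry further down this page); here Bk is a synthetic stand-in, so the selected lambda is only illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 12
# synthetic (k+1) x k lower bidiagonal matrix standing in for the projected operator
Bk = np.zeros((k + 1, k))
Bk[np.arange(k), np.arange(k)] = np.sort(rng.random(k))[::-1] + 0.1
Bk[np.arange(1, k + 1), np.arange(k)] = 0.3 * rng.random(k)
rhs = np.zeros(k + 1)
rhs[0] = 1.0                                       # beta * e1, with beta normalized to 1

def gcv(lam, B, c):
    """GCV function for the projected Tikhonov problem min ||B y - c||^2 + lam^2 ||y||^2.
    WGCV replaces trace(I - B B_lam) by trace(I - omega * B B_lam) with an adaptively
    chosen weight omega; omega = 1 is used here."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    f = s**2 / (s**2 + lam**2)                     # Tikhonov filter factors
    y = Vt.T @ (f * (U.T @ c) / s)                 # regularized projected solution
    resid2 = np.linalg.norm(B @ y - c) ** 2
    return resid2 / (B.shape[0] - np.sum(f)) ** 2  # constant factors omitted

lams = np.logspace(-4, 1, 60)
lam_best = lams[np.argmin([gcv(l, Bk, rhs) for l in lams])]
print("GCV-selected lambda on the projected problem:", lam_best)
```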


Journal ArticleDOI
TL;DR: The quest for a highly accurate and efficient SVD algorithm has led to a new, superior variant of the Jacobi algorithm, which inherits all of the Jacobi algorithm's good high-accuracy properties and can outperform the QR algorithm.
Abstract: This paper is the result of concerted efforts to break the barrier between numerical accuracy and run-time efficiency in computing the fundamental decomposition of numerical linear algebra—the singular value decomposition (SVD) of general dense matrices. It is an unfortunate fact that the numerically most accurate one-sided Jacobi SVD algorithm is several times slower than generally less accurate bidiagonalization-based methods such as the QR or the divide-and-conquer algorithm. Our quest for a highly accurate and efficient SVD algorithm has led us to a new, superior variant of the Jacobi algorithm. The new algorithm has inherited all good high accuracy properties of the Jacobi algorithm, and it can outperform the QR algorithm.

154 citations
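As a point of reference for the kind of computation being accelerated, here is a minimal one-sided Jacobi SVD sketch in NumPy, assuming a dense input matrix. It is the textbook cyclic version, not the preconditioned variant developed in the paper: pairs of columns are rotated until they are mutually orthogonal, after which the column norms are the singular values.

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """Cyclic one-sided Jacobi SVD: returns U, sigma (descending), V with A ~ U @ diag(sigma) @ V.T."""
    G = A.astype(float).copy()
    m, n = G.shape
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                app, aqq = G[:, p] @ G[:, p], G[:, q] @ G[:, q]
                apq = G[:, p] @ G[:, q]
                if abs(apq) <= tol * np.sqrt(app * aqq):
                    continue                       # columns p, q already orthogonal
                converged = False
                # Jacobi rotation that zeroes the (p,q) entry of the Gram matrix G^T G
                tau = (aqq - app) / (2.0 * apq)
                sgn = 1.0 if tau >= 0 else -1.0
                t = sgn / (abs(tau) + np.sqrt(1.0 + tau * tau))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                Gp, Gq = G[:, p].copy(), G[:, q].copy()
                G[:, p], G[:, q] = c * Gp - s * Gq, s * Gp + c * Gq
                Vp, Vq = V[:, p].copy(), V[:, q].copy()
                V[:, p], V[:, q] = c * Vp - s * Vq, s * Vp + c * Vq
        if converged:
            break
    sigma = np.linalg.norm(G, axis=0)              # singular values = column norms
    order = np.argsort(sigma)[::-1]
    sigma = sigma[order]
    U = G[:, order] / np.where(sigma > 0, sigma, 1.0)
    return U, sigma, V[:, order]

A = np.random.default_rng(0).standard_normal((80, 40))
U, sigma, V = one_sided_jacobi_svd(A)
print(np.max(np.abs(A - U @ np.diag(sigma) @ V.T)))   # should be ~1e-14
```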


Journal ArticleDOI
TL;DR: A projection-based iterative algorithm built on a joint bidiagonalization is presented for large-scale problems in which it is computationally infeasible to transform the regularized problem to standard form, and it is shown how estimates of the corresponding optimal regularization parameter can be obtained efficiently.
Abstract: We present a projection-based iterative algorithm for computing general-form Tikhonov regularized solutions to the problem $\min_x\{\| Ax-b \|_2^2+\lambda^2\| Lx \|_2^2\}$, where the regularization matrix $L$ is not the identity. Our algorithm is designed for the common case where $\lambda$ is not known a priori. It is based on a joint bidiagonalization algorithm and is appropriate for large-scale problems when it is computationally infeasible to transform the regularized problem to standard form. By considering the projected problem, we show how estimates of the corresponding optimal regularization parameter can be efficiently obtained. Numerical results illustrate the promise of our projection-based approach.

112 citations
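To make the problem being projected concrete, here is a small dense illustration of general-form Tikhonov regularization with a first-derivative operator L, solved directly by stacking. The sizes, operator, and lambda grid are arbitrary; the point of the paper's joint bidiagonalization is precisely to handle large-scale problems where forming and factoring such a stacked system (or transforming to standard form) is infeasible.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 100
A = rng.standard_normal((m, n))
x_true = np.cumsum(rng.standard_normal(n))        # smooth-ish test signal
b = A @ x_true + 0.01 * rng.standard_normal(m)

L = np.diff(np.eye(n), axis=0)                    # (n-1) x n first-derivative operator

def tikhonov_general(A, b, L, lam):
    """Solve min_x ||A x - b||^2 + lam^2 ||L x||^2 via the stacked least-squares system."""
    K = np.vstack([A, lam * L])
    rhs = np.concatenate([b, np.zeros(L.shape[0])])
    return np.linalg.lstsq(K, rhs, rcond=None)[0]

for lam in (1e-3, 1e-2, 1e-1, 1.0):               # lambda is not known a priori, so scan
    x = tikhonov_general(A, b, L, lam)
    print(f"lambda={lam:g}  ||Ax-b||={np.linalg.norm(A @ x - b):.3f}  ||Lx||={np.linalg.norm(L @ x):.3f}")
```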


Journal ArticleDOI
TL;DR: In this paper, the authors incorporate the Partial Reorthogonalization Package (PROPACK) into the double-difference seismic tomography code tomoDD and estimate the model resolution matrix for large seismic tomography problems.
Abstract: The PROPACK package developed by Larsen is able to efficiently and accurately estimate singular values and vectors of large matrices based on Lanczos bidiagonalization with partial reorthogonalization. We incorporate PROPACK into the double-difference seismic tomography code tomoDD and estimate the model resolution matrix for large seismic tomography problems. Compared to previous Least Squares QR (LSQR)-based methods for estimating the model resolution matrix, the PROPACK-based method calculates the full resolution matrix and thus gives a complete description of how well the model is resolved. Several observations are drawn from the application to data from the 2001 eruption of Mt Etna: for this example, it is reasonable to use ray-sampling density information to characterize the model resolution qualitatively; the model resolution resulting from velocity-only inversion bears a close linear relationship to that from simultaneous inversion but always overestimates resolution; and the inversion system using differential times has a greater ability to resolve the source-region structure than the system using absolute times.

80 citations
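A small sketch of the two ingredients discussed here, under simplifying assumptions: a partial SVD of a sparse matrix computed with SciPy's Lanczos-based svds (recent SciPy versions can also dispatch to PROPACK via solver='propack', depending on the build), and the corresponding truncated-SVD model resolution matrix R = Vk Vk^T. The random sparse matrix stands in for a real ray-path matrix, and the paper's resolution computation for the damped tomoDD inversion involves filter factors rather than a hard truncation.

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import svds

G = sprandom(5000, 800, density=0.01, random_state=0, format="csr")  # stand-in ray-path matrix
k = 50
U, s, Vt = svds(G, k=k)          # Lanczos-based partial SVD; singular values in ascending order

Vk = Vt.T                        # (800, k) right singular vectors
R = Vk @ Vk.T                    # model resolution matrix of a truncated-SVD inversion
print("diagonal resolution ranges from", R.diagonal().min(), "to", R.diagonal().max())
```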


Journal ArticleDOI
TL;DR: In this article, restarted block Lanczos bidiagonalization methods based on augmentation of Ritz vectors or harmonic Ritz vectors by block Krylov subspaces are described.
Abstract: The problem of computing a few of the largest or smallest singular values and associated singular vectors of a large matrix arises in many applications. This paper describes restarted block Lanczos bidiagonalization methods based on augmentation of Ritz vectors or harmonic Ritz vectors by block Krylov subspaces.

66 citations


01 Jan 2007
TL;DR: This paper presents a Lanczos bidiagonalization procedure implemented in SLEPc, a software library for the solution of large, sparse eigenvalue problems on parallel computers; the solver is numerically robust and scales well up to hundreds of processors.
Abstract: Lanczos bidiagonalization is a competitive method for computing a partial singular value decomposition of a large sparse matrix, that is, when only a subset of the singular values and corresponding singular vectors are required. However, a straightforward implementation of the algorithm has the problem of loss of orthogonality between computed Lanczos vectors, and some reorthogonalization technique must be applied. Also, an effective restarting strategy must be used to prevent excessive growth of the cost of reorthogonalization per iteration. On the other hand, if the method is to be implemented on a distributed-memory parallel computer, then additional precautions are required so that parallel efficiency is maintained as the number of processors increases. In this paper, we present a Lanczos bidiagonalization procedure implemented in SLEPc, a software library for the solution of large, sparse eigenvalue problems on parallel computers. The solver is numerically robust and scales well up to hundreds of processors.

38 citations
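A serial NumPy sketch of the underlying recurrence, assuming a dense matrix: k steps of Golub–Kahan (Lanczos) bidiagonalization with full reorthogonalization against all previously computed vectors. This is the loss-of-orthogonality fix in its simplest and most expensive form; it is not the SLEPc implementation, which uses more economical reorthogonalization strategies, restarting, and a parallel data layout.

```python
import numpy as np

def golub_kahan_bidiag(A, b, k):
    """k steps of Golub-Kahan bidiagonalization of A with starting vector b.
    Returns U (m, k+1), B ((k+1) x k lower bidiagonal), V (n, k), beta = ||b||,
    so that A @ V = U @ B. Breakdown (a zero norm) is not handled."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    beta = np.linalg.norm(b)
    U[:, 0] = b / beta
    for j in range(k):
        v = A.T @ U[:, j]
        if j > 0:
            v -= B[j, j - 1] * V[:, j - 1]
        v -= V[:, :j] @ (V[:, :j].T @ v)            # full reorthogonalization against V
        B[j, j] = np.linalg.norm(v)
        V[:, j] = v / B[j, j]
        u = A @ V[:, j] - B[j, j] * U[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)    # full reorthogonalization against U
        B[j + 1, j] = np.linalg.norm(u)
        U[:, j + 1] = u / B[j + 1, j]
    return U, B, V, beta

rng = np.random.default_rng(0)
A = rng.standard_normal((300, 120))
U, B, V, _ = golub_kahan_bidiag(A, rng.standard_normal(300), 20)
print(np.max(np.abs(A @ V - U @ B)), np.max(np.abs(U.T @ U - np.eye(21))))  # both ~1e-14
```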


Journal ArticleDOI
TL;DR: This paper studies an inconsistency caused by truncation in the conventional partial least squares (PLS) regression algorithm with one dependent variable: the conventional PLS algorithm with orthogonal score vectors uses one model space to compute the regression vector but another model space to represent the reconstructed data.
Abstract: This paper documents an inconsistency caused by truncation in the conventional partial least squares (PLS) regression algorithm with one dependent variable. It is shown that the conventional PLS algorithm with orthogonal score vectors uses one model space to compute the regression vector but another model space to represent the reconstructed data. In comparison, the Bidiag2 bidiagonalization algorithm and the non-orthogonal PLS algorithm of Martens are consistent in using the same space for the regression vector and for the reconstructed data, while the SIMPLS algorithm has the same inconsistency as the conventional PLS algorithm. The magnitude of the difference depends on the degree of truncation of the model space: it is generally larger with greater truncations but is not a monotonic function of the truncation. A numerical example demonstrates implications for outlier detection. Consistent behavior upon truncation may be important when PLS regression is implemented as part of standardized analytical procedures. Copyright © 2007 John Wiley & Sons, Ltd.

34 citations
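To make the objects being compared concrete, here is a minimal PLS1 implementation with orthogonal score vectors (NIPALS-style) for mean-centered data, returning the regression vector b = W (P^T W)^{-1} q. It is a plain textbook version for illustration only, not the Bidiag2, Martens, or SIMPLS variants analyzed in the paper.

```python
import numpy as np

def pls1(X, y, ncomp):
    """PLS1 with orthogonal scores for centered X (n x p) and y (n,).
    Returns the regression vector b so that y_hat = X @ b."""
    Xa, ya = X.copy(), y.copy()
    p = X.shape[1]
    W = np.zeros((p, ncomp)); P = np.zeros((p, ncomp)); q = np.zeros(ncomp)
    for a in range(ncomp):
        w = Xa.T @ ya
        w /= np.linalg.norm(w)                 # weight vector
        t = Xa @ w                             # orthogonal score vector
        tt = t @ t
        P[:, a] = Xa.T @ t / tt                # X loading
        q[a] = (ya @ t) / tt                   # y loading
        Xa -= np.outer(t, P[:, a])             # deflate X
        ya -= q[a] * t                         # deflate y
        W[:, a] = w
    return W @ np.linalg.solve(P.T @ W, q)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20)); X -= X.mean(axis=0)
y = X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(50); y -= y.mean()
b = pls1(X, y, ncomp=5)                        # truncated model: 5 of up to 20 components
print("training residual norm:", np.linalg.norm(X @ b - y))
```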


Journal ArticleDOI
TL;DR: It is proved that the partial upper bidiagonalization of the extended matrix $[b, A]$ determines a core approximation problem $A_{11}x_1 \approx b_1$, with all necessary and sufficient information for solving the original problem given by $b_1$ and $A_{11}$.

19 citations


Journal ArticleDOI
TL;DR: Two new algorithms for one-sided bidiagonalization are presented: the first is a block version that improves execution time through better cache utilization, using BLAS 2.5 operations and more BLAS 3 operations, and the second is adapted to parallel computation.
Abstract: Two new algorithms for one-sided bidiagonalization are presented. The first is a block version which improves execution time by improving cache utilization from the use of BLAS 2.5 operations and more BLAS 3 operations. The second is adapted to parallel computation. When incorporated into singular value decomposition software, the second algorithm is faster than the corresponding ScaLAPACK routine in most cases. An error analysis is presented for the first algorithm. Numerical results and timings are presented for both algorithms.

19 citations


Journal ArticleDOI
TL;DR: This paper discusses the determination of the regularization parameter and the dimension of the Krylov subspace for methods that combine Tikhonov regularization with an iterative method based on partial Lanczos bidiagonalization; a method that requires a Krylov subspace of minimal dimension is referred to as greedy.
Abstract: Several numerical methods for the solution of large linear ill-posed problems combine Tikhonov regularization with an iterative method based on partial Lanczos bidiagonalization of the operator. This paper discusses the determination of the regularization parameter and the dimension of the Krylov subspace for this kind of method. A method that requires a Krylov subspace of minimal dimension is referred to as greedy.

13 citations
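As one concrete way to couple the two choices, here is a sketch of the discrepancy principle applied to the projected Tikhonov problem: given the (k+1) x k bidiagonal matrix and the projected right-hand side, find a lambda so that the residual matches an assumed noise level delta. Whether such a lambda exists at all depends on the Krylov dimension k, which is the coupling the paper analyzes; the paper's specific greedy strategy is not reproduced here, and the bidiagonal matrix below is synthetic.

```python
import numpy as np

def projected_tikhonov(B, c, lam):
    """Tikhonov solution of the projected problem min ||B y - c||^2 + lam^2 ||y||^2."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ c))

def discrepancy_lambda(B, c, delta, eta=1.01, lo=1e-10, hi=1e4, iters=80):
    """Log-scale bisection for lambda with ||B y_lam - c|| ~ eta*delta.
    Assumes the unregularized projected residual is below eta*delta (i.e. k is large enough)."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        r = np.linalg.norm(B @ projected_tikhonov(B, c, mid) - c)
        lo, hi = (mid, hi) if r < eta * delta else (lo, mid)
    return np.sqrt(lo * hi)

rng = np.random.default_rng(0)
k = 12
B = np.zeros((k + 1, k))                       # synthetic lower bidiagonal projected operator
B[np.arange(k), np.arange(k)] = 1.0 + rng.random(k)
B[np.arange(1, k + 1), np.arange(k)] = rng.random(k)
delta = 0.05                                   # assumed noise level
c = B @ np.ones(k) + delta * rng.standard_normal(k + 1) / np.sqrt(k + 1)
lam = discrepancy_lambda(B, c, delta)
print("lambda:", lam, "residual:", np.linalg.norm(B @ projected_tikhonov(B, c, lam) - c))
```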


Journal ArticleDOI
TL;DR: This paper introduces a modification of a recently proposed one-sided algorithm for the reduction of rectangular dense matrices to bidiagonal form that halves the number of communication instances, and investigates the best data distribution schemes for the different codes.
Abstract: A new stable method for the reduction of rectangular dense matrices to bidiagonal form has been proposed recently. This is a one-sided method since it can be entirely expressed in terms of operations with (full) columns of the matrix under transformation. The algorithm is well suited to parallel computing and, in order to make it even more attractive for distributed-memory systems, we introduce a modification which halves the number of communication instances. A block organization of the algorithm to use level 3 BLAS routines seems difficult and, at least for the moment, it relies upon level 2 BLAS routines. Nevertheless, we found that our sequential code is competitive with the LAPACK DGEBRD routine. We also compare the time taken by our parallel codes and the ScaLAPACK PDGEBRD routine. We investigated the best data distribution schemes for the different codes and we can state that our parallel codes are also competitive with the ScaLAPACK routine.
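For contrast with the one-sided approach, here is a plain NumPy sketch of the conventional two-sided Householder bidiagonalization, i.e. the kind of reduction LAPACK's DGEBRD performs: orthogonal transformations are applied from both sides, so columns and rows of the matrix are touched alternately. The paper's one-sided algorithm, which works only with full columns and is therefore easier to parallelize, is not reproduced here.

```python
import numpy as np

def house(x):
    """Normalized Householder vector v with (I - 2 v v^T) x = -sign(x[0]) ||x|| e1."""
    s = np.linalg.norm(x)
    if s == 0.0:
        return None
    v = x.astype(float).copy()
    v[0] += s if x[0] >= 0 else -s
    return v / np.linalg.norm(v)

def householder_bidiag(A):
    """Two-sided reduction U^T A V = B with B upper bidiagonal (dense, for illustration)."""
    m, n = A.shape
    B = A.astype(float).copy()
    U, V = np.eye(m), np.eye(n)
    for j in range(n):
        v = house(B[j:, j])                    # zero out column j below the diagonal
        if v is not None:
            B[j:, :] -= 2.0 * np.outer(v, v @ B[j:, :])
            U[:, j:] -= 2.0 * np.outer(U[:, j:] @ v, v)
        if j < n - 2:
            w = house(B[j, j + 1:])            # zero out row j right of the superdiagonal
            if w is not None:
                B[:, j + 1:] -= 2.0 * np.outer(B[:, j + 1:] @ w, w)
                V[:, j + 1:] -= 2.0 * np.outer(V[:, j + 1:] @ w, w)
    return U, B, V

A = np.random.default_rng(0).standard_normal((150, 60))
U, B, V = householder_bidiag(A)
print(np.max(np.abs(U @ B @ V.T - A)))         # ~1e-14; B is upper bidiagonal
```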

01 Jan 2007
TL;DR: The core problem formulation of Paige and Strakos is related to the Lanczos tridiagonalization, and its characteristics are derived from the relationship between the Golub–Kahan bidiagonalization, the Lanczos tridiagonalization and the well-known properties of Jacobi matrices.
Abstract: The Lanczos tridiagonalization orthogonally transforms a real symmetric matrix A to symmetric tridiagonal form. The Golub–Kahan bidiagonalization orthogonally reduces a nonsymmetric rectangular matrix to upper or lower bidiagonal form. The two algorithms are closely related. The paper [C.C. Paige, Z. Strakos, Core problems in linear algebraic systems, SIAM J. Matrix Anal. Appl. 27 (2006) 861–875] presents a new formulation of orthogonally invariant linear approximation problems $Ax \approx b$. It is proved that the partial upper bidiagonalization of the extended matrix $[b, A]$ determines a core approximation problem $A_{11}x_1 \approx b_1$, with all necessary and sufficient information for solving the original problem given by $b_1$ and $A_{11}$. It is further shown how the core problem can be used in a simple and efficient way for solving different formulations of the original approximation problem. Our contribution relates the core problem formulation to the Lanczos tridiagonalization and derives its characteristics from the relationship between the Golub–Kahan bidiagonalization, the Lanczos tridiagonalization and the well-known properties of Jacobi matrices.
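The stated relationship can be checked numerically in a few lines. Assuming the golub_kahan_bidiag sketch given under the SLEPc entry earlier on this page, the following compares its lower bidiagonal matrix B_k with the tridiagonal matrix T_k produced by symmetric Lanczos applied to A^T A with starting vector A^T b; the identity T_k = B_k^T B_k and the coincidence of the Krylov bases are what the paper exploits.

```python
import numpy as np

def lanczos_tridiag(M, v0, k):
    """k steps of symmetric Lanczos on M (full reorthogonalization, for this check only).
    Returns the k x k tridiagonal matrix T and the orthonormal basis V."""
    n = len(v0)
    V = np.zeros((n, k)); a = np.zeros(k); b = np.zeros(max(k - 1, 1))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = M @ V[:, j]
        if j > 0:
            w -= b[j - 1] * V[:, j - 1]
        a[j] = V[:, j] @ w
        w -= a[j] * V[:, j]
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        if j < k - 1:
            b[j] = np.linalg.norm(w)
            V[:, j + 1] = w / b[j]
    return np.diag(a) + np.diag(b[:k - 1], 1) + np.diag(b[:k - 1], -1), V

rng = np.random.default_rng(1)
A = rng.standard_normal((120, 60)); rhs = rng.standard_normal(120)
k = 15
U, B, V, _ = golub_kahan_bidiag(A, rhs, k)        # sketch from the SLEPc entry above
T, VL = lanczos_tridiag(A.T @ A, A.T @ rhs, k)
print(np.max(np.abs(T - B.T @ B)), np.max(np.abs(V - VL)))   # both ~1e-13
```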

Proceedings ArticleDOI
01 Oct 2007
TL;DR: This paper uses an iterative Lanczos bidiagonalization algorithm, combined with projection-based Tikhonov regularization, to recover super-resolution reconstructions from larger low-resolution datasets for which the pseudo-inverse normally cannot be computed directly due to the heavy computational load.
Abstract: Super-resolution reconstruction using the pseudo-inverse approach has been proven to obtain excellent results; however, it is computationally heavy. We have shown previously that incorporating blob-based basis functions into super-resolution reconstruction can guarantee better results and save computational time, and we are able to use a lower number of low-resolution datasets for the super-resolution reconstruction. Instead of directly finding the pseudo-inverse, we use an iterative Lanczos bidiagonalization algorithm, combined with projection-based Tikhonov regularization, for the reconstruction. We show in this paper that by using this iterative algorithm, we are able to recover super-resolution reconstructions from larger sizes of low-resolution datasets for which we are normally not able to compute the pseudo-inverse directly due to the heavy computational load.
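The bidiagonalization-plus-regularization idea can be tried in a few lines with SciPy's LSQR, which is itself built on Golub–Kahan (Lanczos) bidiagonalization and whose damp parameter adds standard-form Tikhonov regularization. This is simpler than the projection-based Tikhonov scheme of the paper, and the sparse matrix below is a random stand-in for the blob-based super-resolution system matrix, with arbitrary sizes and damping.

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
H = sprandom(2000, 4000, density=0.002, random_state=0, format="csr")  # stand-in system matrix
x_true = rng.standard_normal(4000)
b = H @ x_true + 1e-3 * rng.standard_normal(2000)

lam = 1e-2                      # Tikhonov parameter (would be tuned in practice)
# lsqr(..., damp=lam) solves min_x ||H x - b||^2 + lam^2 ||x||^2 iteratively,
# without ever forming a pseudo-inverse explicitly.
x_rec = lsqr(H, b, damp=lam, iter_lim=200)[0]
print("residual norm:", np.linalg.norm(H @ x_rec - b))
```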