
Showing papers on "QR decomposition published in 1995"


Journal ArticleDOI
TL;DR: By repeatedly applying the Wedderburn rank-one reduction formula to reduce ranks, a biconjugation process analogous to the Gram–Schmidt process with oblique projections can be developed.
Abstract: Let $A \in R^{m \times n}$ denote an arbitrary matrix. If $x \in R^n$ and $y \in R^m$ are vectors such that $\omega = y^T Ax \ne 0$, then the matrix $B := A - \omega^{-1} Axy^T A$ has rank exactly one less than the rank of $A$. This Wedderburn rank-one reduction formula is easy to prove, yet the idea is so powerful that perhaps all matrix factorizations can be derived from it. The formula also appears in places such as the positive definite secant updates BFGS and DFP as well as the ABS methods. By repeatedly applying the formula to reduce ranks, a biconjugation process analogous to the Gram–Schmidt process with oblique projections can be developed. This process provides a mechanism for constructing factorizations such as ${\text{LDM}}^T$, QR, and SVD under a common framework of a general biconjugate decomposition $V^T AU = \Omega$ that is diagonal and nonsingular. Two characterizations of biconjugation provide new insight into the Lanczos method and its breakdown. One characterization shows that ...
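
As a quick numerical illustration of the rank-one reduction formula quoted above, the following NumPy sketch (with randomly generated A, x, and y chosen purely for illustration) checks that the rank drops by exactly one:

import numpy as np

# Wedderburn rank-one reduction: if omega = y^T A x != 0, then
# B = A - omega^{-1} (A x)(y^T A) has rank(A) - 1.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 5))   # a rank-3 matrix
x = rng.standard_normal(5)
y = rng.standard_normal(6)

omega = y @ A @ x                        # must be nonzero for the reduction to apply
B = A - np.outer(A @ x, y @ A) / omega

print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))       # prints: 3 2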

110 citations


01 Jan 1995
TL;DR: This paper describes the implementation of a parallel variant of GMRES on the Paragon that builds an orthonormal Krylov basis in two steps: it first computes a Newton basis, then orthogonalises it via a QR factorisation of a rectangular matrix with few long vectors.
Abstract: This paper describes the implementation of a parallel variant of GMRES on Paragon. This variant builds an orthonormal Krylov basis in two steps: it first computes a Newton basis then orthogonalises it. The first step requires matrix-vector products with a general sparse unsymmetric matrix and the second step is a QR factorisation of a rectangular matrix with few long vectors. The algorithm has been implemented for a distributed memory parallel computer. The distributed sparse matrix-vector product avoids global communications thanks to the initial setup of the communication pattern. The QR factorisation is distributed by using Givens rotations which require only local communications. Results on an Intel Paragon show the efficiency and the scalability of our algorithm.
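
A serial NumPy sketch of the two steps, under assumptions: the shifts below merely stand in for the Ritz-value estimates a Newton basis would normally use, and np.linalg.qr stands in for the paper's distributed Givens-based factorisation.

import numpy as np

def newton_krylov_basis(A, v0, shifts):
    """Newton Krylov basis: v_{k+1} = (A - shift_k I) v_k, each vector normalised."""
    V = [v0 / np.linalg.norm(v0)]
    for shift in shifts:
        w = A @ V[-1] - shift * V[-1]
        V.append(w / np.linalg.norm(w))
    return np.column_stack(V)

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))          # stands in for the sparse operator
V = newton_krylov_basis(A, rng.standard_normal(200), shifts=np.linspace(-1.0, 1.0, 9))
Q, R = np.linalg.qr(V)                        # the "few long vectors" QR step
print(Q.shape, np.linalg.norm(Q.T @ Q - np.eye(Q.shape[1])))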

66 citations


Journal ArticleDOI
TL;DR: A complete orthogonal decomposition algorithm is described for solving a full-rank weighted least-squares problem; it is shown to be stable, and to be simpler and more efficient than the NSH method.
Abstract: Consider a full-rank weighted least-squares problem in which the weight matrix is highly ill-conditioned. Because of the ill-conditioning, standard methods for solving least-squares problems, QR factorization and the nullspace method for example, break down. G. W. Stewart established a norm bound for such a system of equations, indicating that it may be possible to find an algorithm that gives an accurate solution. S. A. Vavasis proposed a new definition of stability that is based on this result. He also proposed the NSH algorithm for solving this least-squares problem and showed that it satisfies the new definition of stability. This paper describes a complete orthogonal decomposition algorithm to solve this problem and shows that it is also stable. This new algorithm is simpler and more efficient than the NSH method.

50 citations


Journal ArticleDOI
TL;DR: Two iterative algorithms are proposed which generate sequences convergent to the minimal Euclidean length solution in the general case (inconsistent system and rank-deficient matrix) and need no special assumptions about the system.
Abstract: For numerical computation of the minimal Euclidean norm (least-squares) solution of overdetermined linear systems, direct solvers are usually used (like QR decomposition, see [4]). Iterative methods for this kind of problem need special assumptions about the system (consistency, full rank of the system matrix, parameters they use), or they do not give the minimal length solution [2,3,5,8,10,13]. In the present paper we propose two iterative algorithms which generate sequences convergent to the minimal Euclidean length solution in the general case (inconsistent system and rank-deficient matrix). The algorithms use only some combinations and properties of the well-known Kaczmarz iterative method ([13]) and need no special assumptions about the system.
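
For reference, a minimal sketch of the classical cyclic Kaczmarz sweep that the paper builds on; the paper's algorithms combine such sweeps so that inconsistent and rank-deficient systems are also handled, which this toy (consistent, full-rank) example does not attempt.

import numpy as np

def kaczmarz(A, b, n_sweeps=500):
    """Cyclic Kaczmarz iteration. Starting from x = 0 the iterates stay in range(A^T),
    so for a consistent system they converge to the minimal Euclidean norm solution."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_sweeps):
        for i in range(m):
            a_i = A[i]
            x += (b[i] - a_i @ x) / (a_i @ a_i) * a_i
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
b = A @ rng.standard_normal(10)                           # consistent right-hand side
x_min = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(kaczmarz(A, b) - x_min))             # small residual to the LS solution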

50 citations


Journal ArticleDOI
TL;DR: An accurate algorithm is presented for downdating a row in the rank-revealing URV decomposition that was recently introduced by Stewart and can produce accurate results even for ill-conditioned problems.
Abstract: An accurate algorithm is presented for downdating a row in the rank-revealing URV decomposition that was recently introduced by Stewart. By downdating the full rank part and the noise part in two separate steps, the new algorithm can produce accurate results even for ill-conditioned problems. Such problems occur, for example, when the rank of the matrix is decreased as a consequence of the downdate. Other possible generalizations of existing QR decomposition downdating algorithms for the rank-revealing URV downdating are discussed. Numerical test results are presented that compare the performance of these new URV decomposition downdating algorithms in the sliding window method.

38 citations


Patent
14 Mar 1995
TL;DR: In this article, a dynamical system analyser is used to perform a singular value decomposition of a time series of signals from a nonlinear (possibly chaotic) dynamical system.
Abstract: A dynamical system analyser (10) incorporates a computer (22) to perform a singular value decomposition of a time series of signals from a nonlinear (possibly chaotic) dynamical system (14). Relatively low-noise singular vectors from the decomposition are loaded into a finite impulse response filter (34). The time series is formed into Takens' vectors each of which is projected onto each of the singular vectors by the filter (34). Each Takens' vector thereby provides the co-ordinates of a respective point on a trajectory of the system (14) in a phase space. A heuristic processor (44) is used to transform delayed co-ordinates by QR decomposition and least squares fitting so that they are fitted to non-delayed co-ordinates. The heuristic processor (44) generates a mathematical model to implement this transformation, which predicts future system states on the basis of respective current states. A trial system is employed to generate like co-ordinates for transformation in the heuristic processor (44). This produces estimates of the trial system's future states predicted from the comparison system's model. Alternatively, divergences between such estimates and actual behavior may be obtained. As a further alternative, mathematical models derived by the analyser (10) from different dynamical systems may be compared.
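
The embedding and projection steps described above can be sketched as follows; the synthetic scalar series and the window length are illustrative assumptions, not the analyser's actual signals.

import numpy as np

def takens_vectors(series, dim, delay=1):
    """Stack delay (Takens') vectors of a scalar time series as the rows of a matrix."""
    n = len(series) - (dim - 1) * delay
    return np.column_stack([series[i * delay : i * delay + n] for i in range(dim)])

t = np.arange(2000) * 0.05
series = np.sin(t) + 0.3 * np.sin(2.1 * t + 1.0)    # stands in for the measured signal

X = takens_vectors(series, dim=8)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 3                                               # keep the relatively low-noise directions
coords = X @ Vt[:k].T                               # phase-space coordinates of each Takens' vector
print(coords.shape)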

38 citations


Journal ArticleDOI
Zheng-She Liu1
TL;DR: This paper presents a fast adaptive least squares algorithm for the parameter estimation of linear and some nonlinear time-varying systems and can be easily extended to construct other kinds of algorithms, such as the generalized adaptive least square algorithm, the augmented matrix algorithm, and the maximum likelihood algorithm.
Abstract: Recent attention in adaptive least squares parameter estimation has been focused on methods derived from the QR factorization owing to the fact that the QR-based algorithms are much more numerically stable and accurate than the traditional pseudo-inverse-based algorithms, also known as normal equation-based algorithms, even though the former is usually much slower than the latter. This paper presents a fast adaptive least squares algorithm for the parameter estimation of linear and some nonlinear time-varying systems. The algorithm is based on Householder transformations. As verified by simulation results, this algorithm exhibits good numerical stability and accuracy. In addition, the new algorithm requires computation and storage with order of O(N) rather than O(N^2), where N is the number of unknown parameters to be estimated. This algorithm can be easily extended to construct other kinds of algorithms, such as the generalized adaptive least squares algorithm, the augmented matrix algorithm, and the maximum likelihood algorithm.

35 citations


Journal ArticleDOI
TL;DR: The perturbation result for the smallest singular values of a triangular matrix is stronger than the traditional results because it guarantees high relative accuracy in the smallest singular values after an off-diagonal block of the matrix has been set to zero.
Abstract: We extend the Golub--Kahan algorithm for computing the singular value decomposition of bidiagonal matrices to triangular matrices $R$. Our algorithm avoids the explicit formation of $R^TR$ or $RR^T$. We derive a relation between left and right singular vectors of triangular matrices and use it to prove monotonic convergence of singular values and singular vectors. The convergence rate for singular values equals the square of the convergence rate for singular vectors. The convergence behaviour explains the occurrence of deflation in the interior of the matrix. We analyse the relationship between our algorithm and rank-revealing QR and URV decompositions. As a consequence, we obtain an algorithm for computing the URV decomposition, as well as a divide-and-conquer algorithm that computes singular values of dense matrices and may be beneficial on a parallel architecture. Our perturbation result for the smallest singular values of a triangular matrix is stronger than the traditional results because it guarantees high relative accuracy in the smallest singular values after an off-diagonal block of the matrix has been set to zero.

33 citations


Journal ArticleDOI
Ji-guang Sun1
TL;DR: Certain new perturbation bounds of the orthogonal factor in the QR factorization of a real matrix are derived; the bounds of this note improve the known bounds in the literature.

32 citations


Journal ArticleDOI
01 Mar 1995
TL;DR: A parallel shared memory implementation of multifrontal QR factorization using a combination of tree and node level parallelism and a buddy system based on Fibonacci blocks to achieve high performance for general large and sparse matrices is discussed.
Abstract: We discuss a parallel shared memory implementation of multifrontal QR factorization. To achieve high performance for general large and sparse matrices, a combination of tree and node level parallelism is used. Acceptable load balancing is obtained by the use of a pool-of-tasks approach. For the storage of frontal and update matrices, we use a buddy system based on Fibonacci blocks. It turns out to be more efficient than blocks of size 2^i, as proposed by other authors. Also, the order in which memory space for update and frontal matrices is allocated is shown to be of importance. An implementation of the proposed algorithm on the CRAY X-MP/416 (four processors) gives speedups of about three, with about 20% of extra real memory space required.

27 citations


Proceedings ArticleDOI
09 May 1995
TL;DR: Two new, closely related adaptive algorithms for LS system identification based on Givens rotations, with lower complexity compared to previously derived ones are presented.
Abstract: The paper presents two new, closely related adaptive algorithms for LS system identification. The starting point for the derivation of the algorithms is the inverse Cholesky factor of the data correlation matrix, obtained via a QR decomposition (QRD). Both are of O(p) computational complexity with p being the order of the system. The first algorithm is a fixed order QRD scheme with enhanced parallelism. The second is a lattice type algorithm based on Givens rotations, with lower complexity compared to previously derived ones.

Proceedings ArticleDOI
16 Oct 1995
TL;DR: An algorithm for recursive least squares optimisation based on the method of QR decomposition by Givens rotations is reformulated in terms of parameters whose magnitude is never greater than one, enabling the design of a much simpler application-specific integrated circuit to implement the Givens rotation processor for adaptive filtering and beamforming.
Abstract: An algorithm for recursive least squares optimisation based on the method of QR decomposition by Givens rotations is reformulated in terms of parameters whose magnitude is never greater than one. In view of the direct analogy to statistical normalisation, it is referred to as the normalised Givens rotation algorithm. An important consequence of the normalisation is that most of the resulting least squares computation may be carried out using fixed point arithmetic. This should enable the design of a much simpler application specific integrated circuit to implement the Givens rotation processor for adaptive filtering and beamforming.
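
The elementary operation behind such QRD-based recursive least squares schemes is the annihilation of a new data row against the triangular factor with rotation parameters c and s whose magnitudes never exceed one. A plain (unnormalised) NumPy sketch of that building block, not of the paper's normalised algorithm itself:

import numpy as np

def givens(a, b):
    """Rotation parameters (c, s), each of magnitude <= 1, with [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def qrd_update(R, row):
    """Fold one new data row into the triangular factor R: the core QRD-RLS update step."""
    R, x = R.copy(), row.astype(float).copy()
    for j in range(R.shape[0]):
        c, s = givens(R[j, j], x[j])
        R_j = R[j, :].copy()
        R[j, :] = c * R_j + s * x     # rotated pivot row
        x = -s * R_j + c * x          # x[j] is annihilated
    return R

R = np.array([[3.0, 1.0, 2.0], [0.0, 2.0, 1.0], [0.0, 0.0, 1.5]])
print(np.round(qrd_update(R, np.array([0.5, -1.0, 2.0])), 4))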

Proceedings ArticleDOI
09 May 1995
TL;DR: Simulation results demonstrate the efficiency of this new combined structure for acoustic echo cancellation, and a fixed-point implementation of the proposed scheme confirms the expected numerical robustness of the fast QR-RLS algorithm.
Abstract: High quality acoustic echo cancellation is now required by hands-free systems used in mobile radio and teleconference communications. The demand for fast convergence, good tracking capabilities, and reduced complexity cannot be met by classical adaptive filtering algorithms. In this paper, a new echo canceller using multirate systems and a fast QR-decomposition based RLS algorithm is investigated. Simulation results demonstrate the efficiency of this new combined structure for acoustic echo cancellation, and a fixed-point implementation of the proposed scheme confirms the expected numerical robustness of the fast QR-RLS algorithm.

Journal ArticleDOI
TL;DR: This paper identifies which zero patterns of symmetric matrices are preserved under the QR algorithm, a basic algorithm for computing the eigenvalues of dense matrices.
Abstract: The QR algorithm is a basic algorithm for computing the eigenvalues of dense matrices. For efficiency reasons it is prerequisite that the algorithm is applied only after the original matrix has been reduced to a matrix of a particular shape, most notably Hessenberg and tridiagonal, which is preserved during the iterative process. In certain circumstances a reduction to another matrix shape may be advantageous. In this paper, we identify which zero patterns of symmetric matrices are preserved under the QR algorithm.
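
The best-known instance of such pattern preservation is the symmetric tridiagonal case, which a few unshifted QR iterations illustrate directly (a minimal NumPy sketch with a random tridiagonal matrix):

import numpy as np

rng = np.random.default_rng(3)
d, e = rng.standard_normal(6), rng.standard_normal(5)
A = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)      # symmetric tridiagonal

for _ in range(3):
    Q, R = np.linalg.qr(A)                           # one unshifted QR step: A <- R Q
    A = R @ Q

print(np.round(A, 3))                                # off-tridiagonal entries stay (numerically) zero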

Journal ArticleDOI
TL;DR: In this article, the authors derived an exact operation count for Strassen's method with rectangular matrices, determined the recursion threshold that minimizes the operation count, and showed that it is more efficient to use the method on the whole product than to apply it to square submatrices.
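
For the square, power-of-two case, the kind of count being optimised can be sketched with a small recursion; the classical count 2n^3 - n^2 and the 7-products-plus-18-additions recursion are the usual textbook figures, whereas the paper's analysis handles rectangular matrices exactly.

def strassen_ops(n, threshold):
    """Flop count for Strassen on n x n matrices (n a power of two): 7 half-size
    products plus 18 half-size matrix additions, falling back to the classical
    count 2*n**3 - n**2 at or below the recursion threshold."""
    if n <= threshold:
        return 2 * n**3 - n**2
    half = n // 2
    return 7 * strassen_ops(half, threshold) + 18 * half * half

for threshold in (8, 16, 32, 64, 128):
    print(threshold, strassen_ops(1024, threshold))   # compare counts to pick the cut-over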

Journal ArticleDOI
TL;DR: Solving the weighted and constrained linear least squares problem with the presented weighted modified Gram-Schmidt algorithm is seen to be numerically equivalent to an algorithm based on a weighted Householder-like QR factorization applied to a slightly larger problem.
Abstract: A framework and an algorithm for using modified Gram-Schmidt for constrained and weighted linear least squares problems is presented. It is shown that a direct implementation of a weighted modified ...
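
For reference, the standard (unweighted) modified Gram-Schmidt factorisation looks as follows; the paper's weighted variant changes the inner products, so this is only the basic building block, not the proposed algorithm.

import numpy as np

def modified_gram_schmidt(A):
    """Standard modified Gram-Schmidt QR factorisation, A = Q R, column by column."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ A[:, j]
            A[:, j] -= R[k, j] * Q[:, k]   # orthogonalise the remaining columns immediately
    return Q, R

A = np.random.default_rng(4).standard_normal((8, 4))
Q, R = modified_gram_schmidt(A)
print(np.linalg.norm(Q @ R - A), np.linalg.norm(Q.T @ Q - np.eye(4)))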

Journal ArticleDOI
TL;DR: On the way to achieving this goal, a detailed description of the errors produced when using the authors' $M$-invariant reflections is attained.
Abstract: Backward errors are derived for the solution of the constrained and weighted linear least squares problem when using the weighted $QR$ factorization with column pivoting [SIAM J. Matrix Anal. Appl., 13 (1992), pp. 1298--1313]. On the way to achieving this goal we attain a detailed description of the errors produced when using our $M$-invariant reflections.

Journal ArticleDOI
TL;DR: A new implementation of the existing 4SID is proposed, which reduces the computational burden to O(NM) by exploiting the displacement and low-rank structure of the matrices.

Journal ArticleDOI
TL;DR: Parallel strategies are proposed for updating the QR decomposition of an m×n matrix after adding k rows and are found to complete the updating in fewer steps by comparison to a recently published algorithm.
Abstract: Parallel strategies are proposed for updating the QR decomposition of an m×n matrix after adding k rows (k ≫ n). These strategies are based on Givens rotations and are found to complete the updating in fewer steps by comparison to a recently published algorithm. An efficient adaptation of the first parallel strategy to compute the QR decomposition of structured banded matrices is also discussed in detail.
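
The identity underlying such updating strategies is that the new R factor is the R factor of the old R stacked on top of the appended rows. A serial NumPy check of that identity; the paper performs the corresponding annihilation with Givens rotations in parallel rather than calling a dense QR.

import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((10, 4))
C = rng.standard_normal((6, 4))                        # k = 6 rows to be appended

R = np.linalg.qr(A, mode="r")
R_updated = np.linalg.qr(np.vstack([R, C]), mode="r")  # update using the old factor only
R_direct = np.linalg.qr(np.vstack([A, C]), mode="r")   # factorise the enlarged matrix from scratch
print(np.allclose(np.abs(R_updated), np.abs(R_direct)))   # True (rows agree up to sign)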

Journal ArticleDOI
TL;DR: In this article, a new mathematical treatment is proposed for the hybrid method of charge simulation and finite differences for the computation of electric fields, entirely applicable to the similar hybrid method of charge simulation and finite elements; it makes use of fixed point theory, the QR decomposition (using the modified Gram-Schmidt method), and the conjugate gradients squared method with a preconditioning technique.
Abstract: A new mathematical treatment is proposed for the hybrid method of charge simulation and finite differences for the computation of electric fields, entirely applicable to the similar hybrid method of charge simulation and finite elements. The resulting system of linear equations is solved by making use of fixed point theory, the QR decomposition (using the modified Gram-Schmidt method) and the conjugate gradients squared method with a preconditioning technique. New procedures are suggested for the discretization of the boundary conditions, which lead to results with higher precision. Case studies are included.

Journal ArticleDOI
TL;DR: In this paper, a method for determination of eigenvalue derivatives and two exact analytical methods for determining eigenvector derivatives are presented, which are applicable to general viscous damped systems.

Journal ArticleDOI
TL;DR: It is demonstrated that near-optimal estimates can be computed for problems of practical importance using only a small number of iterations, which can be performed in a finely parallel manner over the spatial domain of the random field.

Journal ArticleDOI
01 Jul 1995
TL;DR: A new parallel solver in the class of partition methods is described for general, nonsingular tridiagonal linear systems, based on a QR factorization that depends on the conditioning of the sub-blocks in each processor.
Abstract: We describe a new parallel solver in the class of partition methods for general, nonsingular tridiagonal linear systems. Starting from an already known partitioning of the coefficient matrix among the parallel processors, we define a factorization, based on the QR factorization, which depends on the conditioning of the sub-blocks in each processor. Moreover, the reduced system, whose solution is the only scalar section of the algorithm, has a dimension which depends both on the conditioning of these sub-blocks and on the number of processors. We analyze the stability properties of the obtained parallel factorization, and report some numerical tests carried out on a network of transputers.

Journal ArticleDOI
TL;DR: This work presents a tracking procedure based on the rank-revealing QR (RRQR) factorization and investigates its numerical properties by applying it to the direction-of-arrival problem and compares it to that obtained using an EVD-based technique.
Abstract: We present a tracking procedure based on the rank-revealing QR (RRQR) factorization and investigate its numerical properties by applying it to the direction-of-arrival problem. We address numerical issues raised by the related work proposed earlier by Prasad et al. (1991), and we compare the performance of the proposed algorithm to that obtained using an EVD-based technique.

Journal ArticleDOI
TL;DR: A new algorithm which directly computes a component of the residual vector without finding the weight (solution) vector is introduced for the recursive least squares (RLS) problem with sliding window method using the QR decomposition.

Journal ArticleDOI
01 Jul 1995
TL;DR: The authors present new learning algorithm schemes using feedback error learning for a neural network model applied to adaptive nonlinear control of a robot arm, namely the QR-WRLS algorithm and its parallel counterpart algorithms.
Abstract: The authors present new learning algorithm schemes using feedback error learning for a neural network model applied to adaptive nonlinear control of a robot arm, namely the QR-WRLS algorithm and its parallel counterpart algorithms. The approach involves a QR decomposition to transform the system into upper triangular form, and estimation of the neural network weights by a weighted recursive least squares (WRLS) technique. The QR decomposition method, which is known to be numerically stable, is exploited in an algorithm which involves successive applications of a unitary transformation (Givens rotation) directly to the data matrix. The WRLS weight estimation method chosen allows the selection of weighting factors such that each of the linear equations is weighted differently. The QR-WRLS algorithm is shown to provide fast, robust and stable online learning of the dynamic relations necessary for robot control. We show the results of applying these learning schemes with some flexible forgetting strategies to a two-link manipulator. A comparison of their performance with the backpropagation (BP) algorithm and the recursive prediction error (RPE) learning algorithm is included.

01 Jan 1995
TL;DR: A block algorithm for QR decomposition that is derivable by the compiler and has good performance on small matrices, the sizes that are typically run on nodes of a massively parallel system or a workstation.
Abstract: Because of an imbalance between computation and memory speed in modern processors, programmers are explicitly restructuring codes to perform well on particular memory systems, leading to machine-specific programs. This paper describes a block algorithm for QR decomposition that is derivable by the compiler and has good performance on small matrices, the sizes that are typically run on nodes of a massively parallel system or workstation. The advantage of our algorithm over the one found in LAPACK is that it can be derived by the compiler and needs no hand optimization.

Proceedings ArticleDOI
04 Apr 1995
TL;DR: In this paper, an adaptive beamforming array antenna using a quadrature mirror filter to split the signal into sub-bands is discussed, and the minimum variance distortionless response (MVDR) method is used in conjunction with the QR decomposition (QRD) algorithm for the evaluation of the antenna beamformer.
Abstract: An adaptive beamforming array antenna using a quadrature mirror filter to split the signal into sub-bands is discussed. The minimum variance distortionless response (MVDR) method is used in conjunction with the QR decomposition (QRD) algorithm for the evaluation of the antenna beamformer. The simulated results show that the speed of convergence of weights increases rapidly as the sub-banding is used.
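
A compact sketch of how a QRD can enter an MVDR weight computation: if the snapshot matrix X satisfies X = Q R, then X^H X = R^H R, and the weights follow from two triangular solves with R instead of forming the sample covariance matrix. The complex baseband snapshots and the steering vector below are synthetic placeholders, and the sketch omits the paper's sub-band (quadrature mirror filter) structure entirely.

import numpy as np

rng = np.random.default_rng(7)
n_ant, n_snap = 8, 200
a = np.exp(1j * np.pi * np.arange(n_ant) * np.sin(0.3))            # assumed steering vector
X = rng.standard_normal((n_snap, n_ant)) + 1j * rng.standard_normal((n_snap, n_ant))
X += np.outer(rng.standard_normal(n_snap), a)                      # desired signal plus noise

R = np.linalg.qr(X, mode="r")                  # X^H X = R^H R; no covariance matrix formed
u = np.linalg.solve(R.conj().T, a)             # forward solve:  R^H u = a
v = np.linalg.solve(R, u)                      # back solve:     R v = u, so v = (X^H X)^{-1} a
w = v / (a.conj() @ v)                         # MVDR weights, distortionless toward a
print(np.abs(a.conj() @ w))                    # unit response in the look direction: 1.0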

Journal ArticleDOI
01 Jan 1995
TL;DR: Cyclic pivoting as mentioned in this paper is a generalization of column pivoting and reverse pivoting, and it can give tight estimates of any two a priori-chosen consecutive singular values of a matrix.
Abstract: We introduce a pair of dual concepts: pivoted blocks and reverse pivoted blocks. These blocks are the outcome of a special column pivoting strategy in QR factorization. Our main result is that under such a column pivoting strategy, the QR factorization of a given matrix can give tight estimates of any two a priori-chosen consecutive singular values of that matrix. In particular, a rank-revealing QR factorization is guaranteed when the two chosen consecutive singular values straddle a gap in the singular value spectrum that gives rise to the rank degeneracy of the given matrix. The pivoting strategy, called cyclic pivoting, can be viewed as a generalization of Golub's column pivoting and Stewart's reverse column pivoting. Numerical experiments confirm the tight estimates that our theory asserts.
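
Standard Golub column pivoting, of which the cyclic strategy above is a generalisation, already exposes a pronounced singular-value gap in the diagonal of R. A SciPy sketch with a synthetic matrix of numerical rank 4 (the matrix construction is purely illustrative):

import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(6)
S = np.diag([1.0, 1.0, 1.0, 1.0, 1e-6, 1e-6, 1e-6, 1e-6])
A = rng.standard_normal((40, 8)) @ S @ rng.standard_normal((8, 8))   # numerical rank 4

Q, R, piv = qr(A, pivoting=True)                # column-pivoted QR
print(np.abs(np.diag(R)))                       # sharp drop after the fourth entry
print(np.linalg.svd(A, compute_uv=False))       # the corresponding singular-value gap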

Journal ArticleDOI
TL;DR: Experiments demonstrate that functional test programs based on this reduced physical model of the integral non-linearity error in high resolution R-2R D/A converters achieve shorter test times and lower prediction errors than those based on larger models obtained by straight QR factorization.
Abstract: A reduced physical model of the integral non-linearity error in high resolution R-2R D/A converters is obtained by circuit analysis and application of the ambiguity algorithm. Its relationship with the well-established a priori model based on Rademacher functions is discussed. Experiments, carried out on a sample of commercial 12 bit converters, demonstrate that functional test programs based on this model achieve shorter test times and lower prediction errors than those based on larger models obtained by straight QR factorization.