Showing papers on "QR decomposition published in 1993"


Journal ArticleDOI
TL;DR: A new subspace algorithm is derived to consistently identify stochastic state space models from given output data without forming the covariance matrix and using only semi-infinite block Hankel matrices.

480 citations


Book ChapterDOI
TL;DR: The chapter discusses the problem of perfect or nearly perfect linear dependencies, the complex QR decomposition, the QR decomposition in regression, the essential properties of the QR decomposition, and the use of Householder transformations to compute the QR decomposition.
Abstract: The QR decomposition is one of the most basic tools for statistical computation, yet it is also one of the most versatile and powerful. The chapter discusses the problem of perfect or nearly perfect linear dependencies, the complex QR decomposition, the QR decomposition in regression, the essential properties of the QR decomposition, and the use of Householder transformations to compute the QR decomposition. A classical alternative to the Householder QR algorithm is the Gram–Schmidt method. Least-squares regression estimates can also be found using either the Cholesky factorization or the singular value decomposition. The QR decomposition approach has been observed to offer excellent numerical properties at reasonable computational cost, while providing factors, Q and R, which are quite generally useful. Although the QR decomposition is a long established technique, it is not in stasis. The QR decomposition is a key to stable and efficient solutions to many least-squares problems. A diverse collection of examples in statistics is given in the chapter and it is certain that there are many more problems for which orthogonalization algorithms can be exploited. In addition, the QR decomposition provides ready insight into the statistical properties of regression estimates.
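As a point of reference for the least-squares use described above, here is a minimal NumPy sketch (illustrative only; the data, variable names, and routines are our choices, not the chapter's) of computing regression coefficients from the thin QR factorization:

```python
# Minimal sketch: least-squares regression coefficients via the QR
# decomposition (NumPy's qr uses Householder reflections internally).
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))            # design matrix
beta_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ beta_true + 0.01 * rng.standard_normal(100)

Q, R = np.linalg.qr(X)                       # thin QR: X = Q R
beta_qr = np.linalg.solve(R, Q.T @ y)        # back-substitute R beta = Q^T y

beta_ref, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_qr, beta_ref))        # expected: True
```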

156 citations


Journal ArticleDOI
TL;DR: A new computationally efficient algorithm for recursive least squares filtering is derived, which is based upon an inverse QR decomposition and solves directly for the time-recursive least squares filter vector, while avoiding the highly serial backsubstitution step required in previously derived direct QR approaches.
Abstract: A new computationally efficient algorithm for recursive least squares filtering is derived, which is based upon an inverse QR decomposition. The method solves directly for the time-recursive least squares filter vector, while avoiding the highly serial backsubstitution step required in previously derived direct QR approaches. Furthermore, the method employs orthogonal rotation operations to recursively update the filter, and thus preserves the inherent stability properties of QR approaches to recursive least squares filtering. The results of simulations over extremely long data sets are also presented, which suggest stability of the new time-recursive algorithm. Finally, parallel implementation of the resulting method is briefly discussed, and computational wavefronts are displayed.
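A conceptual illustration of the distinction the abstract draws (a batch sketch with made-up data, not the paper's recursive algorithm): the direct QR route obtains the filter by back-substitution with R, whereas an inverse QR approach keeps the inverse factor available so the filter follows from a matrix-vector product.

```python
# Batch illustration only: direct QR (back-substitution) versus using a
# stored inverse factor, which is what inverse QR methods propagate
# recursively with Givens rotations.
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 6))            # stacked input snapshots
d = rng.standard_normal(200)                 # desired signal

Q, R = np.linalg.qr(A)
z = Q.T @ d
w_direct = solve_triangular(R, z)            # serial back-substitution step
R_inv = solve_triangular(R, np.eye(6))       # inverse factor
w_inverse = R_inv @ z                        # no back-substitution needed
print(np.allclose(w_direct, w_inverse))      # expected: True
```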

119 citations


Journal ArticleDOI
TL;DR: In this article, an iterative algorithm for moving a triangular matrix toward diagonality is proposed; it is related to algorithms for refining rank-revealing triangular decompositions and, in a variant form, to the QR algorithm.

84 citations


Journal ArticleDOI
TL;DR: In this article, error bounds are derived for a first-order expansion of the LU factorization of a perturbation of the identity, and the results are applied to obtain perturbation expansions of the LU, Cholesky, and QR factorizations.
Abstract: In this paper error bounds are derived for a first-order expansion of the LU factorization of a perturbation of the identity. The results are applied to obtain perturbation expansions of the LU, Cholesky, and QR factorizations.
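A quick numerical check of the kind of expansion being bounded (our own toy example, not the paper's analysis): for the LU factorization of I + εE with small ε, the first-order terms of L and U are the strictly lower and the upper triangular parts of εE.

```python
# Toy verification: LU factors of I + eps*E agree with their first-order
# expansions up to O(eps**2).
import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(2)
n, eps = 5, 1e-6
E = rng.standard_normal((n, n))

P, L, U = lu(np.eye(n) + eps * E)            # P is the identity for small eps
L1 = np.eye(n) + eps * np.tril(E, -1)        # first-order approximation of L
U1 = np.eye(n) + eps * np.triu(E)            # first-order approximation of U
print(np.linalg.norm(L - L1), np.linalg.norm(U - U1))   # both of order eps**2
```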

53 citations


Journal ArticleDOI
TL;DR: The numerical stability of a recent QR-based fast least-squares algorithm is established from a backward stability perspective, which frees the analysis of stationary assumptions on the filtered sequences and obviates the tedious linearization methods of previous approaches.
Abstract: The numerical stability of a recent QR-based fast least-squares algorithm is established from a backward stability perspective. A stability domain approach applicable to any least-squares algorithm, constructed from the set of reachable states in exact arithmetic, is presented. The error propagation question is shown to be subordinate to a backward consistency constraint, which requires that the set of numerically reachable variables be contained within the stability domain associated to the algorithm. This leads to a conceptually lucid approach to the numerical stability question which frees the analysis of stationary assumptions on the filtered sequences and obviates the tedious linearization methods of previous approaches. Moreover, initialization phenomena and considerations related to poorly exciting inputs admit clear interpretations from this perspective. The algorithm under study is proved, in contrast to many fast algorithms, to be minimal.

49 citations


Journal ArticleDOI
TL;DR: A QR-recursive-least squares (RLS) adaptive algorithm for non-linear filtering is presented that retains the fast convergence behavior of the RLS Volterra filters and is numerically stable.
Abstract: A QR-recursive-least squares (RLS) adaptive algorithm for non-linear filtering is presented. The algorithm is based solely on Givens rotation. Hence the algorithm is numerically stable and highly amenable to parallel implementations. The computational complexity of the algorithm is comparable to that of the fast transversal Volterra filters. The algorithm is based on a truncated second-order Volterra series model; however, it can be easily extended to other types of polynomial nonlinearities. The algorithm is derived by transforming the nonlinear filtering problem into an equivalent multichannel linear filtering problem with a different number of coefficients in each channel. The derivation of the algorithm is based on a channel-decomposition strategy which involves processing the channels in a sequential fashion during each iteration. This avoids matrix processing and leads to a scalar implementation. Results of extensive experimental studies demonstrating the properties of the algorithm in finite and 'infinite' precision environments are also presented. The results indicate that the algorithm retains the fast convergence behavior of the RLS Volterra filters and is numerically stable.
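The key reduction described above, that a truncated second-order Volterra filter is linear in its kernel coefficients, can be seen in a small batch sketch (our own example; the paper's contribution is the recursive, channel-by-channel Givens-rotation version, which is not reproduced here):

```python
# Batch identification of a second-order Volterra filter: build the expanded
# (linear + quadratic) regressor and solve an ordinary least-squares problem
# by QR.
import numpy as np

rng = np.random.default_rng(3)
N, M = 500, 3                                 # samples, memory length

def volterra_regressor(x, n, M):
    tap = x[n - M + 1:n + 1][::-1]            # x[n], x[n-1], ..., x[n-M+1]
    quad = [tap[i] * tap[j] for i in range(M) for j in range(i, M)]
    return np.concatenate([tap, quad])

x = rng.standard_normal(N)
Phi = np.array([volterra_regressor(x, n, M) for n in range(M - 1, N)])
w_true = rng.standard_normal(Phi.shape[1])    # linear and quadratic kernels
d = Phi @ w_true + 0.01 * rng.standard_normal(len(Phi))

Q, R = np.linalg.qr(Phi)
w_hat = np.linalg.solve(R, Q.T @ d)
print(np.allclose(w_hat, w_true, atol=1e-2))  # expected: True
```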

36 citations


Journal ArticleDOI
TL;DR: This work considers a different so-called rank-revealing two-sided orthogonal decomposition which decomposes the matrix into a product of a unitary matrix, a triangular matrix and another unitary matrix in such a way that the effective rank of the matrix is obvious and at the same time the noise subspace is exhibited explicitly.
Abstract: Solving Total Least Squares (TLS) problems AX≈B requires the computation of the noise subspace of the data matrix [A;B]. The widely used tool for doing this is the Singular Value Decomposition (SVD). However, the SVD has the drawback that it is computationally expensive. Therefore, we consider here a different so-called rank-revealing two-sided orthogonal decomposition which decomposes the matrix into a product of a unitary matrix, a triangular matrix and another unitary matrix in such a way that the effective rank of the matrix is obvious and at the same time the noise subspace is exhibited explicitly. We show how this decomposition leads to an efficient and reliable TLS algorithm that can be parallelized in an efficient way.
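For reference, the classical SVD-based step that this decomposition is meant to replace looks as follows (a hedged sketch with synthetic data; the paper's rank-revealing URV-type algorithm is not reproduced):

```python
# Classical TLS solution of A x ≈ b: the noise subspace of [A, b] is taken
# from the SVD, and the solution is read off the last right singular vector.
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((50, 3))
x_true = np.array([2.0, -1.0, 0.5])
b = A @ x_true
A_noisy = A + 0.01 * rng.standard_normal(A.shape)
b_noisy = b + 0.01 * rng.standard_normal(b.shape)

C = np.column_stack([A_noisy, b_noisy])
_, _, Vt = np.linalg.svd(C)
v = Vt[-1]                                    # spans the noise subspace
x_tls = -v[:-1] / v[-1]
print(x_tls)                                  # close to x_true
```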

28 citations


Journal ArticleDOI
TL;DR: In this article, a first-order componentwise perturbation analysis of the $QR$ decomposition is presented, and the bounds derived are invariant under column scaling of the underlying matrix.
Abstract: A first-order componentwise perturbation analysis of the $QR$ decomposition is presented. In contrast to the traditional normwise perturbation analysis, the bounds derived are invariant under column scaling of the underlying matrix. As an application, an assessment of the accuracy of the computed $QR$ decomposition using Householder transformations is given.
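The column-scaling invariance mentioned above is easy to see numerically (our own toy check, not the paper's analysis): scaling the columns of A by a positive diagonal D leaves Q unchanged and simply rescales the columns of R.

```python
# Column scaling A -> A D leaves the Q factor unchanged and maps R -> R D,
# which is why column-scaling-invariant (componentwise) bounds are natural
# for the QR decomposition.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((8, 4))
D = np.diag([1.0, 1e3, 1e-4, 10.0])          # wildly different column scales

Q1, R1 = np.linalg.qr(A)
Q2, R2 = np.linalg.qr(A @ D)
print(np.allclose(Q1, Q2), np.allclose(R1 @ D, R2))   # expected: True True
```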

27 citations


Journal ArticleDOI
TL;DR: It is shown that regularization can be incorporated into the algorithm with virtually no extra work and it is possible to compute regularized solutions to ill-conditioned Toeplitz least squares problems using only $O(mn)$ operations.
Abstract: Fast orthogonalization schemes for $m \times n$ Toeplitz matrices T, introduced by Bojanczyk, Brent, and de Hoog (BBH) and Chun, Kailath, and Lev-Ari (CKL), are extended to compute directly an inverse $QR$ factorization of T using only $O(mn)$ operations. An inverse factorization allows for an efficient parallel implementation, and the algorithm is computationally less expensive for computing a solution to the Toeplitz least squares problem than previously studied inverse $QR$ methods. In addition, it is shown that regularization can be incorporated into the algorithm with virtually no extra work. Thus it is possible to compute regularized solutions to ill-conditioned Toeplitz least squares problems using only $O(mn)$ operations. An application to ill-conditioned problems occurring in signal restoration is provided, illustrating the effectiveness of this method.
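The regularized problem the fast algorithm targets can be written down with dense tools as a reference (a sketch with synthetic data; the paper's O(mn) inverse QR scheme is not shown here):

```python
# Dense reference: Tikhonov-regularized Toeplitz least squares,
#   min_x ||T x - b||^2 + lam * ||x||^2,
# solved by a QR factorization of the stacked matrix [T; sqrt(lam) I].
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(6)
m, n, lam = 80, 20, 1e-2
c = rng.standard_normal(m)                    # first column of T
r = np.concatenate(([c[0]], rng.standard_normal(n - 1)))  # first row of T
T = toeplitz(c, r)
b = rng.standard_normal(m)

T_aug = np.vstack([T, np.sqrt(lam) * np.eye(n)])
b_aug = np.concatenate([b, np.zeros(n)])
Q, R = np.linalg.qr(T_aug)
x_reg = np.linalg.solve(R, Q.T @ b_aug)
print(x_reg.shape)                            # (20,)
```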

26 citations


Book ChapterDOI
01 Jan 1993
TL;DR: This chapter addresses the important subject of adaptive beamforming or “null-steering” as applied to an adaptive antenna array for the purposes of noise cancellation; the results described here centre upon a particular systolic array first proposed by Gentleman and Kung.
Abstract: In this chapter we address the important subject of adaptive beamforming or “null-steering” as applied to an adaptive antenna array for the purposes of noise cancellation. The chapter is not intended as a general review or tutorial discussion of the subject. It is concerned solely with the application of systolic arrays to adaptive beamforming networks based on least-squares minimization. The results described here centre upon a particular systolic array first proposed by Gentleman and Kung [5.1] for performing the QR decomposition of a matrix in an efficient row-recursive manner. It will become clear in our subsequent development of the subject that this array provides the basic architectural component for the solution of a wide variety of signal processing problems. Most of the work described in this chapter was carried out during the last five or six years as part of a joint research project between the Royal Signals and Radar Establishment (RSRE) and STC Technology Ltd (STL). The key results have been described in previous publications but no complete overview of the subject has been presented and many important details have not been reported to date. Since the techniques are now being developed for practical application in a number of laboratories worldwide, it seems appropriate to present a more complete discussion in this book.
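The core operation that the Gentleman and Kung array performs, absorbing one new data row into the triangular factor with a sequence of Givens rotations, can be sketched in scalar form as follows (our own illustration; the systolic, cell-level mapping discussed in the chapter is not shown):

```python
# Row-recursive QR update: rotate a new data row into an existing upper
# triangular factor R using Givens rotations.
import numpy as np

def qr_row_update(R, x):
    """Return the triangular factor after absorbing the new row x."""
    R, x = R.copy(), x.copy()
    for j in range(R.shape[0]):
        r, xj = R[j, j], x[j]
        rho = np.hypot(r, xj)
        if rho == 0.0:
            continue
        c, s = r / rho, xj / rho
        Rj, xrow = R[j, j:].copy(), x[j:].copy()
        R[j, j:] = c * Rj + s * xrow          # rotated row of R
        x[j:] = -s * Rj + c * xrow            # x[j] is annihilated
    return R

rng = np.random.default_rng(7)
A = rng.standard_normal((10, 4))
R = np.linalg.qr(A[:4])[1]                    # factor of the first 4 rows
for row in A[4:]:                             # feed the remaining rows one by one
    R = qr_row_update(R, row)

R_ref = np.linalg.qr(A)[1]
print(np.allclose(np.abs(R), np.abs(R_ref)))  # equal up to row signs
```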

Ming Gu
11 Jan 1993
TL;DR: A novel and stable method for computing the eigenvectors of a symmetric rank-one modification of a symmetric matrix whose eigendecomposition is known, and a modified version of the fast multipole method of Carrier, Greengard and Rokhlin to speed up these algorithms stably.
Abstract: We discuss three sets of problems in numerical linear algebra: algorithms for modified symmetric eigenproblems, relative perturbation theory for symmetric eigenproblems, and the existence of and algorithms for a new rank-revealing QR factorization. In Part I we discuss algorithms for modified symmetric eigenproblems. We present an algorithm for computing the eigendecomposition of a symmetric rank-one modification of a symmetric matrix whose eigendecomposition is known. Previous algorithms for this problem suffer a potential loss of orthogonality among the computed eigenvectors, unless extended precision arithmetic is used. Our algorithm is based on a novel and stable method for computing the eigenvectors. It does not require extended precision yet is as efficient as previous algorithms. Based on this algorithm, we present algorithms for the symmetric tridiagonal eigenproblem, the bidiagonal singular value decomposition, and updating and downdating the singular value decomposition. We also present a modified version of the fast multipole method of Carrier, Greengard and Rokhlin to speed up these algorithms stably. In Part II we discuss relative perturbation theory for symmetric eigenproblems. We study the effects of component-wise relative perturbations of a symmetric matrix on its eigenvalues and of a general matrix on its singular values. We characterize a class of matrices whose eigenvalues or singular values incur small relative changes under such perturbations. In a well-defined sense our results are optimal up to a small constant factor. In Part III we discuss rank-revealing factorizations. Given a matrix M, we show that there exists a permutation $\Pi$ and an integer k such that, in the QR factorization $$M\,\Pi = Q \begin{pmatrix} A_k & B_k \\ 0 & C_k \end{pmatrix},$$ the rank of M is revealed by a sufficiently large and well-conditioned square matrix $A_k$ with minimum column dimension. In addition, $C_k$ is sufficiently small and $B_k$ is linearly dependent on $A_k$ with bounded coefficients. We discuss the properties of and relate existing rank-revealing QR algorithms to such factorizations. We present an efficient algorithm for computing such factorizations.
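The shape of such a factorization can be illustrated with the standard column-pivoted QR (a sketch with synthetic data; this is not the thesis's algorithm, which comes with stronger guarantees):

```python
# Column-pivoted QR on a numerically rank-k matrix: the permuted
# factorization M[:, piv] = Q R tends to expose the rank through a
# well-conditioned leading block A_k and a small trailing block C_k.
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(8)
k, m, n = 3, 20, 8
M = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))  # rank k
M += 1e-10 * rng.standard_normal((m, n))                       # tiny noise

Q, R, piv = qr(M, pivoting=True)              # M[:, piv] == Q @ R
A_k = R[:k, :k]                               # large, well-conditioned block
C_k = R[k:, k:]                               # small trailing block
print(np.linalg.cond(A_k), np.linalg.norm(C_k))
```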

Journal ArticleDOI
TL;DR: It is shown that a fast algorithm for the QR factorization of a Toeplitz or Hankel matrix A is weakly stable in the sense that R^T R is close to A^T A, so that the algorithm can be used to solve the semi-normal equations R^T R x = A^T b.
Abstract: We show that a fast algorithm for the QR factorization of a Toeplitz or Hankel matrix A is weakly stable in the sense that R^T R is close to A^T A. Thus, when the algorithm is used to solve the semi-normal equations R^T R x = A^T b, we obtain a weakly stable method for the solution of a nonsingular Toeplitz or Hankel linear system Ax = b. The algorithm also applies to the solution of the full-rank Toeplitz or Hankel least squares problem.
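For orientation, the semi-normal-equations solve itself is just two triangular solves once R is available; a dense sketch (with synthetic data, and a dense QR in place of the paper's fast Toeplitz factorization):

```python
# Semi-normal equations: solve R^T R x = A^T b with two triangular solves,
# never forming Q explicitly.
import numpy as np
from scipy.linalg import toeplitz, solve_triangular

rng = np.random.default_rng(9)
c = rng.standard_normal(60)
r = np.concatenate(([c[0]], rng.standard_normal(9)))
A = toeplitz(c, r)                            # 60 x 10 Toeplitz matrix
b = rng.standard_normal(60)

R = np.linalg.qr(A, mode='r')                 # only the triangular factor
y = solve_triangular(R.T, A.T @ b, lower=True)
x = solve_triangular(R, y)

x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_ref))                  # expected: True
```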

Proceedings ArticleDOI
27 Apr 1993
TL;DR: A method of scaling based on a QR decomposition provides for approximately an order of magnitude reduction in error over previously reported scaling methods, and has the significant advantage of requiring no a priori knowledge of the array's response.
Abstract: Based on a Fourier series model for an antenna array's response and a typical calibration procedure, a maximum likelihood (ML) solution for the array parameters can be derived. The authors review the Fourier series model of an array's response and introduce a suboptimum method of determining the model parameters. The performance of the suboptimal solution is significantly influenced by the scaling of the principal components (M.A. Koerber, 1992). A method of scaling based on a QR decomposition is presented. This method provides for approximately an order of magnitude reduction in error over previously reported scaling methods. This approach has the significant advantage of requiring no a priori knowledge of the array's response. Simulation results compare the use of QR decomposition based scaling with the ML solution.

Journal ArticleDOI
TL;DR: A Jacobi-type updating algorithm for the SVD or the URV decomposition is developed, which is related to the QR algorithm for the symmetric eigenvalue problem and provides a cheap alternative to earlier-developed updating algorithms based on two-sided transformations.

Proceedings ArticleDOI
03 May 1993
TL;DR: The scaled tangent rotation (STAR) RLS algorithm (STAR-RLS) is designed such that fine-grain pipelining can be accomplished very easily, and its computational complexity and inter-cell communications are considerably lower than those of the QRD-RLS algorithm and the square-root-free techniques.
Abstract: The QR decomposition based recursive least-squares (RLS) adaptive filtering algorithm (QRD-RLS) has a processing speed limitation. Fine-grain pipelining of the recursive loops within the cells using look-ahead techniques requires a large hardware increase. A new scaled tangent rotation (STAR) is used instead of the usual Givens rotations. The scaled tangent rotation (STAR) RLS algorithm (STAR-RLS) is designed such that fine-grain pipelining can be accomplished very easily. The scaled tangent rotations are not exactly orthogonal transformations but tend to become orthogonal asymptotically. Simulation results show that the algorithm performance is similar to that of the QRD-RLS algorithm. The STAR-RLS algorithm can be mapped onto a systolic array. The computational complexity and inter-cell communications are considerably lower than for the QRD-RLS algorithm and the square-root-free techniques.

Journal ArticleDOI
TL;DR: It is demonstrated that many of the benefits of the singular value decomposition based methods are achievable under the truncated QR methods with much lower computational cost.
Abstract: To reduce the computational complexity of resolving closely spaced frequencies, three truncated QR methods are proposed: (1) truncated QR without column pivoting (TQR); (2) truncated QR with reordered columns (TQRR); and (3) truncated QR with column pivoting (TQRP). It is demonstrated that many of the benefits of the singular value decomposition based methods are achievable under the truncated QR methods with much lower computational cost. Based on the forward-backward linear prediction model, computer simulations and comparisons are provided for different truncation methods under various SNRs. Comparisons of asymptotic performance for large data samples are also given.

Journal ArticleDOI
TL;DR: The computable zero/nonzero structures for the matrices Q and R are proven to be tight, and the conditions on the pattern for A are the weakest possible (namely, that it allows matrices A with full column rank).
Abstract: Given only the zero–nonzero pattern of an $m \times n$ matrix A of full column rank, which entries of Q and which entries of R in its $QR$ factorization must be zero, and which entries may be nonzero? A complete answer to this question is given, which involves an interesting interplay between combinatorial structure and the algebra implicit in orthogonality. To this end some new sparse structural concepts are introduced, and an algorithm to determine the structure of Q is given. The structure of R then follows immediately from that of Q and A. The computable zero/nonzero structures for the matrices Q and R are proven to be tight, and the conditions on the pattern for A are the weakest possible (namely, that it allows matrices A with full column rank). This complements existing work that focussed upon R and then only under an additional combinatorial assumption (the strong Hall property).

Journal ArticleDOI
01 Jun 1993
TL;DR: A method based on compound disjoint Givens rotations, for reorthogonalizing the QR decomposition after deleting columns, is proposed.
Abstract: In this note we propose a method based on compound disjoint Givens rotations, for reorthogonalizing the QR decomposition after deleting columns.
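The structural fact behind such downdating is that removing a column of A leaves the corresponding R upper Hessenberg, after which a short sweep of rotations restores triangular form; a sketch with plain Givens rotations (our own illustration, not the compound disjoint scheme of the note):

```python
# Delete column k of A: the reduced R with that column removed is upper
# Hessenberg, and one Givens rotation per remaining column retriangularizes
# it without refactoring A.
import numpy as np

rng = np.random.default_rng(10)
A = rng.standard_normal((12, 6))
R = np.linalg.qr(A)[1]

k = 2
H = np.delete(R, k, axis=1)                   # upper Hessenberg from column k on
for j in range(k, H.shape[1]):
    a, b = H[j, j], H[j + 1, j]
    rho = np.hypot(a, b)
    c, s = a / rho, b / rho
    rows = H[[j, j + 1], j:]
    H[[j, j + 1], j:] = np.array([[c, s], [-s, c]]) @ rows

R_new = H[:-1]                                # triangular factor of A without column k
R_ref = np.linalg.qr(np.delete(A, k, axis=1))[1]
print(np.allclose(np.abs(R_new), np.abs(R_ref)))   # equal up to row signs
```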

Proceedings ArticleDOI
Peter Strobach, D. Goryn
27 Apr 1993
TL;DR: A set of recursions for sequential updating of QR based algorithms in finite (or sliding) windows using orthonormal Givens plane rotations is presented.
Abstract: A set of recursions for sequential updating of QR based algorithms in finite (or sliding) windows using orthonormal Givens plane rotations is presented. These recursions can be used for (1) sliding window spatial QR-RLS adaptive filtering including implicit error computation and/or (2) sliding window tracking of orthonormal database for array processing applications. The proposed sliding window QR algorithms have approximately twice the computational complexity when compared with the classical exponentially weighted growing window QR algorithms. They are based on very elementary computation structures which are familiar from the well-known Gentleman and Kung array.

Journal ArticleDOI
TL;DR: This paper compares AUTO's original linear solver (an LU decomposition with partial pivoting) with an implementation of the analogous QR algorithm in AUTO, and considers the compactification algorithm, Gaussian elimination with row partial pivoting, and a QR algorithm applied to linear systems arising from solving BVPs.

Journal ArticleDOI
TL;DR: A method of updating the unitary basis matrix for QR decomposition by applying a sliding window to the input data is presented, and the sliding-window feature gives this alternative update a superior nonstationary performance characteristic.
Abstract: A method of updating the unitary basis matrix for QR decomposition by applying a sliding window to the input data is presented. The basis matrix update allows the recursive update to be applied to problems which require the tracking of a basis set, and the sliding-window feature gives this alternative update a superior nonstationary performance characteristic.

Proceedings ArticleDOI
27 Apr 1993
TL;DR: A new class of motion-compensating estimators, based on a discrete formulation of pel-recursive motion compensation, is studied, and it is found that the proposed estimators perform better than the standard methods, particularly in areas of large displacement.
Abstract: A new class of motion-compensating estimators, based on a discrete formulation of pel-recursive motion compensation, is studied. This class of four-dimensional predictors is more general than the bilinear interpolation, which is used in typical pel-recursive algorithms. The operators performing the estimation are obtained through a least squares solution, both with and without weighting, of a set of linear equations relating the intensity of pixels in a causal neighborhood to pixels in the past frame. In the proposed method, the displaced luminance, and not the motion, is computed directly. The operators' performance is compared with the standard pel-recursive techniques, and it is found that the proposed estimators perform better than the standard methods, particularly in areas of large displacement. Some computational issues are discussed, and an implementation utilizing recursive least squares with QR decomposition is proposed.

Journal ArticleDOI
TL;DR: Extensive numerical experiments show that the proposed algorithms are efficient and that one of them usually gives better accuracy than standard implementations of the QR orthogonalization algorithm with Householder reflections.
Abstract: The ABS class for linear and nonlinear systems has been recently introduced by Abaffy, Broyden, Galantai and Spedicato. Here we consider various ways of applying these algorithms to the determination of the minimal euclidean norm solution of over-determined linear systems in the least squares sense. Extensive numerical experiments show that the proposed algorithms are efficient and that one of them usually gives better accuracy than standard implementations of the QR orthogonalization algorithm with Householder reflections.

01 Jan 1993
TL;DR: This paper addresses several important aspects of parallel implementation of QR decomposition of a matrix on a distributed memory MIMD machine, the Fujitsu AP1000, and implemented various orthogonal factorisation algorithms on a 128-cell AP1000 located at the Australian National University.
Abstract: This paper addresses several important aspects of parallel implementation of QR decomposition of a matrix on a distributed memory MIMD machine, the Fujitsu AP1000. They include: Among various QR decomposition algorithms, which one is most suitable for implementation on the AP1000? With the total number of cells given, what is the best aspect ratio of the array to achieve optimal performance? How efficient is the AP1000 in computing the QR decomposition of a matrix? To help answer these questions we have implemented various orthogonal factorisation algorithms on a 128-cell AP1000 located at the Australian National University. After extensive experiments some interesting results have been obtained and are presented in the paper.

Journal ArticleDOI
TL;DR: A significantly better algorithm is proposed for sequencing QR factorization implementation of the quasi-Newton method for solving systems of nonlinear algebraic equations.
Abstract: One of the most popular algorithms for solving systems of nonlinear algebraic equations is the sequencing QR factorization implementation of the quasi-Newton method. We propose a significantly better algorithm and give computational results.
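For context, the method being improved couples Broyden-type quasi-Newton updates with a QR factorization of the Jacobian approximation; a toy sketch of that setting (our own example, which simply refactors B at every step rather than updating Q and R as the paper's class of algorithms does):

```python
# Broyden's quasi-Newton iteration for F(x) = 0, with each linear step
# solved through a QR factorization of the current Jacobian approximation B.
import numpy as np

def F(x):                                     # small nonlinear test system
    return np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])

def J(x):                                     # analytic Jacobian, used only to start B
    return np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])

x = np.array([0.9, 0.3])
B = J(x)
for _ in range(20):
    Fx = F(x)
    if np.linalg.norm(Fx) < 1e-12:
        break
    Q, R = np.linalg.qr(B)
    s = -np.linalg.solve(R, Q.T @ Fx)         # quasi-Newton step from B s = -F(x)
    B += np.outer(F(x + s) - Fx - B @ s, s) / (s @ s)   # Broyden rank-one update
    x = x + s

print(x)                                      # approximately (0.7071, 0.7071)
```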

Journal ArticleDOI
TL;DR: Ranging from fully dense Chebyshev approximation-type matrices to highly sparse, relatively small band-width matrices generated from continuum mechanics problems, numerical results are presented that indicate good stability and objective function value accuracy.
Abstract: The vector-supercomputer CRAY series has provided significant speed and significant digits accuracy for solving difficult, large-scale, and ill-conditioned linear programming problems with the linear programming (LP) primal scaling algorithm of Dikin [Soviet Math. Dokl., 8 (1967), pp. 674–675]. Ranging from fully dense Chebyshev approximation-type matrices to highly sparse, relatively small band-width matrices generated from continuum mechanics problems, numerical results are presented that indicate good stability and objective function value accuracy. In a significant number of these experiments, a substantial speed-up in CPU time for obtaining good objective function accuracy has been obtained over a very stable implementation of the simplex method, termed LINOP. The companion QR-based scaling implementation, SHP, was applied to these dense problems, where high accuracy levels are required. A conjugate gradient-based implementation, termed HYBY, was applied to a discretized plane strain plasticity prob...

Proceedings ArticleDOI
27 Apr 1993
TL;DR: A typical least squares algorithm based on the technique of QR decomposition (QRD) and the CORDIC-based implementation of this cascadable algorithm is presented and its robustness to the roundoff errors is discussed.
Abstract: A typical least squares algorithm based on the technique of QR decomposition (QRD) is first reviewed. Problems for direct VLSI implementation are pointed out and a modification of the algorithm to obtain a systolic version is proposed. The systolic QRD-based algorithm should be useful for applications of least squares techniques to adaptive filtering. The CORDIC-based implementation of this cascadable algorithm is presented and its robustness to the roundoff errors is discussed.

Journal ArticleDOI
01 Nov 1993
TL;DR: A parallel algorithm for the QR factorization of a dense matrix without column pivoting on a message passing multiprocessor that combines the numerical efficiency of Householder reflections with the excellent communication properties of the torus-wrap mapping is presented.
Abstract: We present a parallel algorithm for the QR factorization of a dense matrix without column pivoting on a message passing multiprocessor. The algorithm combines the numerical efficiency of Householder reflections with the excellent communication properties of the torus-wrap mapping. Analytical results indicate that the communication overhead for this algorithm is less than that for other common approaches. Numerical results on an nCUBE 2 confirm the efficiency of our technique.

Proceedings ArticleDOI
19 Oct 1993
TL;DR: New stable parallel algorithms based on Householder transformations and compound Givens rotations to compute the QR decomposition of a rectangular matrix are proposed.
Abstract: We propose new stable parallel algorithms based on Householder transformations and compound Givens rotations to compute the QR decomposition of a rectangular matrix. The predicted execution times of all algorithms on the massively parallel SIMD array processor AMT DAP 510 have been obtained and analyzed. Modified versions of these algorithms are also considered for updating the QR decomposition when rows are inserted in the data matrix.