Topic

QR decomposition

About: QR decomposition is a research topic. Over the lifetime, 3504 publications have been published within this topic receiving 100599 citations. The topic is also known as: QR factorization.
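For orientation: a QR decomposition factors a matrix A into a factor Q with orthonormal columns and an upper-triangular factor R, so that A = QR. A minimal NumPy illustration:

```python
import numpy as np

A = np.random.default_rng(0).standard_normal((5, 3))
Q, R = np.linalg.qr(A)                    # reduced QR: Q is 5x3, R is 3x3

print(np.allclose(Q.T @ Q, np.eye(3)))    # True: Q has orthonormal columns
print(np.allclose(np.triu(R), R))         # True: R is upper triangular
print(np.allclose(Q @ R, A))              # True: the factors reproduce A
```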


Papers
Patent
13 Oct 1988
TL;DR: A processor for constrained least squares computations is provided, incorporating a systolic array of boundary, internal, constraint and multiplier cells arranged as triangular and rectangular sub-arrays.
Abstract: A processor is provided which is suitable for constrained least squares computations. It incorporates a systolic array of boundary, internal, constraint and multiplier cells arranged as triangular and rectangular sub-arrays. The triangular sub-array contains boundary cells along its major diagonal and connected via delays, together with above-diagonal internal cells. It computes and updates a QR decomposition of a data matrix X incorporating successive data vectors having individual signals as elements. The rectangular sub-array incorporates constraint cell columns each implementing a respective constraint vector and terminating at a respective multiplier cell. The constraint cells store respective conjugate constraint factors obtained by constraint vector transformation in the triangular sub-array. Rotation parameters produced by QR decomposition in the triangular sub-array are employed by the rectangular sub-array to rotate a zero input and update stored constraint factors. Cumulative rotation of this input and summation of squared moduli of constraint factors are carried out in cascade down constraint columns. The boundary cells are arranged for cumulative multiplication of cosine rotation parameters. Multiplier cells multiply cumulatively multiplied cosine parameters by cumulatively rotated constraint column inputs and divide by summed squared moduli of constraint factors to provide residuals. Each constraint column produces respective residuals in succession corresponding to weighting of the data matrix X to produce minimum signals subject to a respective constraint governing the form of weighting, as required to compute minimum variance distortionless response (MVDR) results.

28 citations
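As a rough illustration of the computation behind the patent above (the normal-equation view, not its systolic-array implementation), the sketch below derives MVDR-style constrained least-squares weights from the R factor of a QR decomposition of the data matrix using only triangular solves; the problem sizes and the all-ones constraint vector are made up for the example.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

rng = np.random.default_rng(0)
n_snapshots, n_sensors = 200, 8                       # illustrative sizes
X = (rng.standard_normal((n_snapshots, n_sensors))
     + 1j * rng.standard_normal((n_snapshots, n_sensors)))
c = np.ones(n_sensors, dtype=complex)                 # hypothetical constraint (steering) vector

# Economy-size QR of the data matrix: X = Q R, so X^H X = R^H R.
_, R = qr(X, mode='economic')

# Two triangular solves give w proportional to (X^H X)^{-1} c without forming the covariance matrix.
u = solve_triangular(R, c, trans='C', lower=False)    # solve R^H u = c
w = solve_triangular(R, u, lower=False)               # solve R w = u
w /= c.conj() @ w                                     # normalise so that c^H w = 1 (MVDR constraint)

print(abs(c.conj() @ w))                              # ~1.0: distortionless-response constraint holds
```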

Journal Article
TL;DR: In this article, a general, linearly constrained (LC) recursive least squares (RLS) array-beamforming algorithm based on an inverse QR decomposition is developed for suppressing moving jammers efficiently.
Abstract: A general, linearly constrained (LC) recursive least squares (RLS) array-beamforming algorithm, based on an inverse QR decomposition, is developed for suppressing moving jammers efficiently. By using the inverse QR decomposition-recursive least squares (QRD-RLS) approach, the least-squares (LS) weight vector can be computed without back substitution and is suitable for implementation on a systolic array, achieving fast convergence and good numerical properties. The merits of the new constrained algorithm are verified by evaluating its performance in terms of the learning curve, which reflects the convergence behaviour and numerical efficiency, and the output signal-to-interference-plus-noise ratio. We show that the proposed algorithm outperforms the conventional linearly constrained LMS (LCLMS) algorithm as well as the fast linearly constrained RLS algorithm and its modified version.

28 citations
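For context, the sketch below is a conventional linearly constrained RLS in the matrix-inversion-lemma form, i.e. a simpler baseline rather than the inverse QRD-RLS systolic algorithm the paper develops; the array size, forgetting factor and constraint vector are illustrative only.

```python
import numpy as np

def lc_rls_weights(snapshots, c, lam=0.99, delta=100.0):
    """Conventional linearly constrained RLS (matrix-inversion-lemma form).

    Tracks P ~= (X^H Lambda X)^{-1} recursively and returns the MVDR-style
    constrained weight vector w = P c / (c^H P c) after the last snapshot.
    """
    n = c.shape[0]
    P = delta * np.eye(n, dtype=complex)              # inverse correlation estimate
    w = c / (c.conj() @ c)                            # trivial initial constrained weights
    for x in snapshots:                               # x: one length-n array snapshot
        k = P @ x / (lam + x.conj() @ P @ x)          # RLS gain vector
        P = (P - np.outer(k, x.conj() @ P)) / lam     # matrix-inversion-lemma update of P
        w = P @ c / (c.conj() @ P @ c)                # re-impose the linear constraint c^H w = 1
    return w

# Illustrative use: 8-sensor array, all-ones (broadside) constraint vector.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((500, 8)) + 1j * rng.standard_normal((500, 8))
w = lc_rls_weights(snapshots, np.ones(8, dtype=complex))
print(abs(np.ones(8).conj() @ w))                     # ~1.0: constraint satisfied
```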

Proceedings Article
01 Jan 2009
TL;DR: SuiteSparseQR is a sparse multifrontal QR factorization algorithm that obtains a substantial fraction of the theoretical peak performance of a multicore computer.
Abstract: SuiteSparseQR is a sparse multifrontal QR factorization algorithm. Dense matrix methods within each frontal matrix enable the method to obtain high performance on multicore architectures. Parallelism across different frontal matrices is handled with Intel's Threading Building Blocks library. Rank-detection is performed within each frontal matrix using Heath's method, which does not require column pivoting. The resulting sparse QR factorization obtains a substantial fraction of the theoretical peak performance of a multicore computer.

28 citations
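SuiteSparseQR itself is a C/C++ library (it also backs MATLAB's sparse qr). Purely as a usage illustration, the sketch below assumes the third-party PySPQR wrapper (`sparseqr`) is installed; the wrapper calls shown are an assumption about that package, not part of SciPy.

```python
import numpy as np
import scipy.sparse as sp
import sparseqr   # third-party PySPQR wrapper around SuiteSparseQR (assumed installed)

# A small sparse least-squares problem A x ~= b.
A = sp.random(1000, 200, density=0.01, format='csc', random_state=0)
b = np.random.default_rng(0).standard_normal(1000)

# Multifrontal sparse QR with a fill-reducing column permutation E: A[:, E] = Q R.
Q, R, E, rank = sparseqr.qr(A)
print("estimated numerical rank:", rank)

# Or solve the least-squares problem directly through the same factorization.
x = sparseqr.solve(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```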

Journal Article
01 Mar 1995
TL;DR: A parallel shared-memory implementation of multifrontal QR factorization is discussed, using a combination of tree- and node-level parallelism and a buddy system based on Fibonacci blocks to achieve high performance for general large, sparse matrices.
Abstract: We discuss a parallel shared-memory implementation of multifrontal QR factorization. To achieve high performance for general large, sparse matrices, a combination of tree-level and node-level parallelism is used. Acceptable load balancing is obtained by the use of a pool-of-tasks approach. For the storage of frontal and update matrices, we use a buddy system based on Fibonacci blocks, which turns out to be more efficient than blocks of size 2^i, as proposed by other authors. The order in which memory space for update and frontal matrices is allocated is also shown to be important. An implementation of the proposed algorithm on the CRAY X-MP/416 (four processors) gives speedups of about three, with about 20% extra real memory required.

27 citations
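To make the Fibonacci buddy-system idea concrete, the toy sketch below rounds a requested frontal/update-matrix size up to the nearest block in a Fibonacci size series and compares it with power-of-two rounding; it only illustrates the block-size scheme, not the authors' allocator.

```python
from bisect import bisect_left

def fibonacci_blocks(limit):
    """Block sizes for a Fibonacci buddy system: each block splits into the two
    preceding sizes, so consecutive sizes differ by a factor of about 1.62 rather than 2."""
    sizes = [1, 2]
    while sizes[-1] < limit:
        sizes.append(sizes[-1] + sizes[-2])
    return sizes

def round_up(request, sizes):
    """Smallest block in `sizes` that can hold a request of `request` words."""
    return sizes[bisect_left(sizes, request)]

fib = fibonacci_blocks(2**20)
for request in (5_000, 60_000, 150_000):
    fib_block = round_up(request, fib)
    pow2_block = 1 << (request - 1).bit_length()   # power-of-two buddy block for comparison
    print(f"request {request}: Fibonacci block {fib_block}, power-of-two block {pow2_block}")
# Worst-case over-allocation is about 62% for Fibonacci blocks versus about 100% for
# powers of two, although individual requests can favour either scheme.
```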

Journal Article
TL;DR: This paper shows how to compute the QR factorization of a rank-structured matrix efficiently by means of the Givens-weight representation, and how the resulting factorization can be used as a preprocessing step for the solution of linear systems.
Abstract: In this paper we show how to compute the QR-factorization of a rank structured matrix in an efficient way by means of the Givens-weight representation. We also show how the QR-factorization can be used as a preprocessing step for the solution of linear systems. Provided the representation is chosen in an appropriate manner, the complexity of the QR-factorization is $O((ar^2+brs+cs^2)n)$ operations, where $n$ is the matrix size, $r$ is some measure for the average rank of the rank structure, $s$ is some measure for the bandwidth of the unstructured matrix part around the main diagonal, and $a,b,c\in \mathbb{R}$ are certain weighting parameters. The complexity of the solution of the linear system with given QR-factorization is then only $O((dr+es)n)$ operations for suitable $d,e\in \mathbb{R}$. The performance of this scheme will be demonstrated by some numerical experiments.

27 citations
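The structured factorization above is built from Givens rotations. As a generic, dense reminder of how a rotation-based QR is assembled and then used as a preprocessing step for a linear solve (unrelated to the Givens-weight representation itself), here is a small self-contained sketch:

```python
import numpy as np

def givens(a, b):
    """Return (c, s) with [[c, s], [-s, c]] @ [a, b]^T = [r, 0]^T."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def qr_by_givens(A):
    """Dense QR built from Givens rotations; returns Q, R with A = Q @ R."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):             # eliminate R[i, j] against R[i-1, j]
            c, s = givens(R[i - 1, j], R[i, j])
            G = np.array([[c, s], [-s, c]])
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
    return Q, R

# Use the factorization as a preprocessing step for a linear solve A x = b.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
b = rng.standard_normal(6)
Q, R = qr_by_givens(A)
x = np.linalg.solve(R, Q.T @ b)                   # triangular system R x = Q^T b
print(np.allclose(A @ x, b))                      # True
```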


Network Information
Related Topics (5)
Optimization problem: 96.4K papers, 2.1M citations (85% related)
Network packet: 159.7K papers, 2.2M citations (84% related)
Robustness (computer science): 94.7K papers, 1.6M citations (83% related)
Wireless network: 122.5K papers, 2.1M citations (83% related)
Wireless sensor network: 142K papers, 2.4M citations (82% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    31
2022    73
2021    90
2020    132
2019    126
2018    139