
Showing papers on "Singular value decomposition published in 1982"


Journal ArticleDOI
01 Sep 1982
TL;DR: In this paper, the frequency estimation performance of the forward-backward linear prediction (FBLP) method was improved for short data records and low signal-to-noise ratio (SNR) by using information about the rank M of the signal correlation matrix.
Abstract: The frequency-estimation performance of the forward-backward linear prediction (FBLP) method of Nuttall, and of Ulrych and Clayton, is significantly improved for short data records and low signal-to-noise ratio (SNR) by using information about the rank M of the signal correlation matrix. A source for the improvement is an implied replacement of the usual estimated correlation matrix by a least squares approximation matrix having the lower rank M. A second, related cause for the improvement is an increase in the order of the prediction filter beyond conventional limits. Computationally, the recommended signal processing is the same as for the FBLP method, except that the vector of prediction coefficients is formed from a linear combination of the M principal eigenvectors of the estimated correlation matrix. Alternatively, singular value decomposition can be used in the implementation. In one special case, which we call the Kumaresan-Prony (KP) case, the new prediction coefficients can be calculated in a very simple way. Philosophically, the improvement can be considered to result from a preliminary estimation of the explainable, predictable components of the data, rather than attempting to explain all of the observed data by linear prediction.

1,072 citations
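The implied low-rank step the abstract describes is, by the Eckart-Young theorem, exactly what truncating an SVD delivers: keeping the M largest singular values gives the least-squares (Frobenius-norm) rank-M approximation. A minimal numpy sketch; the matrix below is an illustrative stand-in, not the paper's estimator:

```python
import numpy as np

def rank_m_approx(R, M):
    """Best rank-M least-squares approximation of R: keep the M largest
    singular values and their singular vectors."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return (U[:, :M] * s[:M]) @ Vt[:M, :]

# Illustrative "rank-2 signal plus noise" matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 2))
R = A @ A.T + 0.01 * rng.standard_normal((8, 8))
R2 = rank_m_approx(R, 2)

assert np.linalg.matrix_rank(R2) == 2
# Eckart-Young: no rank-2 matrix is closer to R than R2 (tested against A @ A.T)
assert np.linalg.norm(R - R2) <= np.linalg.norm(R - A @ A.T)
```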


Journal ArticleDOI
TL;DR: The estimation procedure presented here makes use of "backward prediction" in addition to singular value decomposition (SVD) to estimate the parameters of exponentially damped sinusoidal signals in noise.
Abstract: We have presented techniques [1] - [6] based on linear prediction (LP) and singular value decomposition (SVD) for accurate estimation of closely spaced frequencies of sinusoidal signals in noise. In this note we extend these techniques to estimate the parameters of exponentially damped sinusoidal signals in noise. The estimation procedure presented here makes use of "backward prediction" in addition to SVD. First, the method is applied to data consisting of one and two exponentially damped sinusoids. The choice of one and two signal components facilitates the comparison of estimation error in pole damping factors and pole frequencies to the appropriate Cramer-Rao (CR) bounds and to traditional methods of linear prediction. Second, our method is applied to an example due to Steiglitz [8] in which the data consists of noisy values of the impulse response samples (composed of many exponentially damped sinusoids) of a linear system having both poles and zeros. The poles of the system are accurately determined by our method and the zeros are obtained subsequently, using Shanks' method.

881 citations


Journal ArticleDOI
TL;DR: The method is based on successively predicting each element in the data matrix after deleting the corresponding row and column of the matrix, and makes use of recently published algorithms for updating a singular value decomposition.
Abstract: A method is described for choosing the number of components to retain in a principal component analysis when the aim is dimensionality reduction. The correspondence between principal component analysis and the singular value decomposition of the data matrix is used. The method is based on successively predicting each element in the data matrix after deleting the corresponding row and column of the matrix, and makes use of recently published algorithms for updating a singular value decomposition. These are very fast, which renders the proposed technique a practicable one for routine data analysis.

364 citations


Journal ArticleDOI
Tony F. Chan1
TL;DR: An improved version of the original GR-SVD algorithm is presented, which works best for matrices with m >> n, but is more efficient even when m is only slightly greater than n and in some cases can achieve as much as 50 percent savings.
Abstract: The most well-known and widely used algorithm for computing the Singular Value Decomposition (SVD) A = UΣV^T of an m × n rectangular matrix A is the Golub-Reinsch algorithm (GR-SVD). In this paper, an improved version of the original GR-SVD algorithm is presented. The new algorithm works best for matrices with m >> n, but is more efficient even when m is only slightly greater than n (usually when m ≳ 2n) and in some cases can achieve as much as 50 percent savings. If the matrix U is explicitly desired, then n² extra storage locations are required, but otherwise no extra storage is needed. The two main modifications are: (1) first triangularizing A by Householder transformations before bidiagonalizing it (this idea seems to be widely known among some researchers in the field, but as far as can be determined, neither a detailed analysis nor an implementation has been published before), and (2) accumulating the left Givens transformations in GR-SVD on an n × n array instead of on an m × n array. A PFORT-verified FORTRAN implementation is included. Comparisons with the EISPACK SVD routine are given.

241 citations
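The first modification (triangularize by Householder QR before bidiagonalizing) can be sketched with numpy, letting a dense SVD of the small triangular factor stand in for the Golub-Reinsch step; names and the test matrix are illustrative:

```python
import numpy as np

def svd_via_qr(A):
    """SVD of a tall matrix A: QR-factor first, then decompose only the
    small n-by-n triangular factor, in the spirit of Chan's modified GR-SVD."""
    Q, R = np.linalg.qr(A)           # A = Q R, with R of size n x n
    Ur, s, Vt = np.linalg.svd(R)     # R = Ur diag(s) Vt
    return Q @ Ur, s, Vt             # A = (Q Ur) diag(s) Vt

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))    # m >> n, the regime where this pays off
U, s, Vt = svd_via_qr(A)

assert np.allclose((U * s) @ Vt, A)                        # valid decomposition
assert np.allclose(s, np.linalg.svd(A, compute_uv=False))  # same singular values
```

The savings come from running the expensive bidiagonalization on the 5 × 5 factor R instead of on the 200 × 5 matrix A.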


Journal ArticleDOI
TL;DR: LP estimation of frequencies can be greatly improved at low SNR by singular value decomposition (SVD) of the LP data matrix, as is done in Pisarenko's method and its variants.
Abstract: Linear-prediction-based (LP) methods for fitting multiple-sinusoid signal models to observed data, such as the forward-backward (FBLP) method of Nuttall [5] and Ulrych and Clayton [6], are very ill-conditioned. The locations of estimated spectral peaks can be greatly affected by a small amount of noise because of the appearance of outliers. LP estimation of frequencies can be greatly improved at low SNR by singular value decomposition (SVD) of the LP data matrix. The improved performance at low SNR is also better than that obtained by using the eigenvector corresponding to the minimum eigenvalue of the correlation matrix, as is done in Pisarenko's method and its variants.

238 citations
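The principle behind these LP methods is that the roots of the prediction polynomial sit at the signal frequencies. The sketch below is a stripped-down illustration only (clean data, forward prediction, ordinary least squares); the SVD of the LP data matrix is what makes the estimate usable at low SNR, as the paper shows:

```python
import numpy as np

f_true = 0.12                              # frequency in cycles/sample
x = np.cos(2 * np.pi * f_true * np.arange(64))

# Order-2 forward predictor: x[k] ~ a1*x[k-1] + a2*x[k-2]
X = np.column_stack([x[1:-1], x[:-2]])     # past samples
y = x[2:]                                  # prediction targets
a1, a2 = np.linalg.lstsq(X, y, rcond=None)[0]

# Roots of z^2 - a1*z - a2 lie on the unit circle at +/- the signal frequency
roots = np.roots([1.0, -a1, -a2])
f_est = abs(np.angle(roots[0])) / (2 * np.pi)
assert abs(f_est - f_true) < 1e-6
```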


Journal ArticleDOI
TL;DR: In this paper, the singular value decomposition (SVD) of a 3 × 3 matrix containing the eight so-called pure parameters is used to determine the 3D motion parameters of a rigid planar patch.
Abstract: We show that the three-dimensional (3-D) motion parameters of a rigid planar patch can be determined by computing the singular value decomposition (SVD) of a 3 × 3 matrix containing the eight so-called "pure parameters." Furthermore, aside from a scale factor for the translation parameters, the number of solutions is either one or two, depending on the multiplicity of the singular values of the matrix.

238 citations


Journal ArticleDOI
TL;DR: In this paper, a discussion in expository form of the use of singular value decomposition in multiple linear regression, with special reference to the problems of collinearity and near collinearity, is presented.
Abstract: Principal component analysis, particularly in the form of singular value decomposition, is a useful technique for a number of applications, including the analysis of two-way tables, evaluation of experimental design, empirical fitting of functions, and regression. This paper is a discussion in expository form of the use of singular value decomposition in multiple linear regression, with special reference to the problems of collinearity and near collinearity.

225 citations
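The collinearity diagnosis this kind of analysis rests on comes down to inspecting the singular values of the design matrix: their ratio (the condition number) blows up as columns become nearly dependent. A minimal sketch with illustrative data and thresholds:

```python
import numpy as np

def cond(X):
    """Condition number of a design matrix from its singular values."""
    s = np.linalg.svd(X, compute_uv=False)
    return s[0] / s[-1]

rng = np.random.default_rng(2)
n = 100
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)

X_good = np.column_stack([x1, x2])             # independent regressors
X_bad = np.column_stack([x1, x1 + 1e-6 * x2])  # nearly collinear regressors

assert cond(X_good) < 10      # well conditioned: least squares is stable
assert cond(X_bad) > 1e4      # near collinearity: coefficients are unstable
```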


Book
31 Aug 1982
TL;DR: This book develops the analysis and design of multivariable feedback systems using polynomial matrix fraction descriptions of transfer functions, including the Smith-McMillan form and its relation to coprime fractions.
Abstract: 1. On the Advantages of Feedback.- 1.1. Introduction.- 1.2. Singular Value Decomposition of a Matrix.- 1.3. Large Loop Gain.- 2. Matrix Fraction Description of Transfer Functions.- 2.1. Introduction.- 2.2. Polynomials, Euclidean Rings, and Modules.- 2.3. Polynomial Matrices.- 2.3.1. Divisors, Coprimeness, Rank.- 2.3.2. Elementary Operations on Polynomial Matrices.- 2.3.3. Elementary Operations and Differential Equations.- 2.3.4. Standard Forms: Hermite and Smith Forms.- 2.3.5. The Solution Space of D(p)ξ(t) = θ, t ≥ 0.- 2.3.6. Greatest Common Divisor Extraction.- 2.4. Matrix Fraction Descriptions of Rational Transfer Function Matrices.- 2.4.1. Coprime Fractions.- 2.4.2. Smith-McMillan Form: Relation to Coprime Fractions.- 2.4.3. Proper Transfer Function Matrices.- 2.4.4. Poles and Zeros.- 2.4.5. Dynamical Interpretation of Poles and Zeros.- 2.5. Realization and Polynomial Matrix Fractions.- 3. Polynomial Matrix System Descriptions and Related Transfer Functions.- 3.1. Introduction.- 3.2. Dynamics of a PMD; Redundancy.- 3.2.1. Dynamics of a PMD.- 3.2.2. Reachability of PMDs.- 3.2.3. Observability of PMDs.- 3.2.4. Minimality, Hidden Modes, Poles, and Zeros.- 3.3. Well-Formed and Exponentially Stable PMDs.- 3.3.1. Well-Formed PMDs.- 3.3.2. Exponentially Stable PMDs.- 3.4. Transfer Functions: Right-Left Fractions, Internally Proper Fractions.- 4. Interconnected Systems.- 4.1. Introduction.- 4.2. Exponential Stability of an Interconnection of Subsystems.- 4.3. Feedback System Exponential Stability.- 4.4. Special Properties of Feedback Systems.- 5. Single-Input Single-Output Systems.- 5.1. Introduction.- 5.2. Problem Statement and Analysis.- 5.3. Design.- 6. The Closed-Loop Eigenvalue Placement Problem.- 6.1. Introduction.- 6.2. The Compensator Problem.- 7. Asymptotic Tracking.- 7.1. Introduction.- 7.2. Theory of Asymptotic Tracking.- 7.3. The Tracking Compensator Problem.- 8. Design with Stable Plants.- 8.1. Introduction.- 8.2. Q-Parametrization: Design Properties.- 8.3. Q-Design Algorithm for Decoupling by Feedback.- 8.4. Two-Step Compensation Theorem for Unstable Plants.- Epilogue.- Appendices.- A. Rings and Fields.- B. Matrices with Elements in a Commutative Ring IK.- C. Division of a Polynomial Vector on the Left by a Polynomial Matrix.- References.- Symbols.

216 citations


Journal ArticleDOI
01 Jun 1982
TL;DR: A new method is presented for estimating the signal component of a noisy record of data, using only a little prior information: the approximate rank of a matrix formed from the samples of the signal, assumed known or obtainable from singular value decomposition (SVD).
Abstract: A new method is presented for estimating the signal component of a noisy record of data. Only a little prior information about the signal is assumed. Specifically, the approximate rank of a matrix which is formed from the samples of the signal is assumed to be known or obtainable from singular value decomposition (SVD).

204 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide an example to demonstrate some distorted loading patterns which can result from the direct application of PC analysis (or eigenvector analysis, factor analysis, or asymptotic singular decomposition) on irregularly spaced data.
Abstract: Principal component (PC) analysis performed on irregularly spaced data can produce distorted loading patterns. We provide an example to demonstrate some distorted patterns which can result from the direct application of PC analysis (or eigenvector analysis, factor analysis, or asymptotic singular decomposition) on irregularly spaced data. The PCs overestimate loadings in areas of dense data. The problem can be avoided by interpolating the irregularly spaced data to a grid which closely approximates equal-area.

99 citations


Journal ArticleDOI
TL;DR: In this article, a quasi-linear inverse problem with a priori information about model parameters is formulated in a stochastic framework by using a singular value decomposition technique for arbitrary rectangular matrices.
Abstract: A quasi-linear inverse problem with a priori information about model parameters is formulated in a stochastic framework by using a singular value decomposition technique for arbitrary rectangular matrices. In many geophysical inverse problems, we have a priori information from which a most plausible solution and the statistics of its probable error can be guessed. Starting from the most plausible solution, the optimization of model parameters is made by the successive iteration of solving a set of standardized linear equations for corrections at each step. In under-determined cases, the solution depends inherently on the initial guess of model parameters, and then uncertainties in the solution are evaluated by the covariances of estimation error, which result not only from the random noise in data but also from the probable error in the initial guess. A best linear inverse operator which minimizes the variances of estimation errors within the framework of a generalized least-squares approach is directly obtained from the "natural inverse" of Lanczos for a coefficient matrix by regarding an eigenvalue in the inverse as zero if it is smaller than unity. This provides a theoretical basis for the "sharp cutoff approach" of Wiggins and also of Jackson in their general inverse formalisms.

Journal ArticleDOI
TL;DR: It is shown how the effects of ill conditioning can be mitigated by using an appropriate regularization technique and a conjugate gradient descent (CGD) algorithm is described that yields a reconstruction nearly identical with that obtained by using the regularized SVD algorithm.
Abstract: A least-squares estimation procedure was recently proposed for the restoration of an object that has had some high-frequency components removed [ J. Opt. Soc. Am.71, 95 ( 1981)]. We provide further discussion of the use of least-squares techniques for this purpose. We use the singular-value decomposition (SVD) of an appropriate matrix to explore the relationships among bandwidth, measurement noise, a priori constraints on the object, and the quality of the restoration. We show how the effects of ill conditioning, which arise as the bandwidth of the observation is reduced, can be mitigated by using an appropriate regularization technique. Finally, we describe a conjugate gradient descent (CGD) algorithm that yields a reconstruction nearly identical with that obtained by using the regularized SVD algorithm. The CGD algorithm has been adapted to two-dimensional objects for which the computational complexity of the SVD algorithm is impracticably high.
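One standard regularization technique of the kind the abstract describes works directly on the SVD: replace each inverse singular value 1/s by the damped factor s/(s² + λ²), so that ill-conditioned directions are suppressed rather than amplified. A generic Tikhonov-via-SVD sketch; the blur matrix and λ are illustrative, not the paper's operator:

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Regularized least-squares solution via SVD filter factors: each
    component is scaled by s/(s**2 + lam**2) instead of 1/s, damping
    directions associated with small singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ b))

# Severely ill-conditioned smoothing operator (a crude Gaussian blur)
n = 50
i = np.arange(n)
A = np.exp(-0.1 * (i[:, None] - i[None, :]) ** 2)
x_true = np.sin(2 * np.pi * i / n)
rng = np.random.default_rng(3)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

x_naive = np.linalg.solve(A, b)        # noise amplified by tiny singular values
x_reg = tikhonov_svd(A, b, lam=1e-2)   # small singular values filtered out
assert np.linalg.norm(x_reg - x_true) < np.linalg.norm(x_naive - x_true)
```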

Journal ArticleDOI
TL;DR: It is shown that singular value decomposition (s.v.d.) is an excellent tool for studying the limit properties of a feasible solution for the inverse problem in electrocardiography and leads to a noise filtering algorithm, which at the same time results in useful data reduction.
Abstract: In the paper it is shown that singular value decomposition (s.v.d.) is an excellent tool for studying the limit properties of a feasible solution for the inverse problem in electrocardiography. When s.v.d. is applied to the transfer matrix, relating equivalent heart sources to the skin potentials, it provides a measure of the observability. In an example presented, a series of orthonormal potential patterns on a pericardial surface are found in an order of decreasing observability. When s.v.d. is applied to a data matrix, consisting of skin potentials as a function of time and position, one finds the normalised principal components both in time and space. An appropriate use of the singular values leads to a noise filtering algorithm, which at the same time results in useful data reduction. Comparison of spatial potential patterns derived from both the transfer matrix and the data matrix may, finally, be used to evaluate the assumptions on the transfer.

Journal ArticleDOI
TL;DR: The purpose of this paper is to discuss the problem of finding a subspace of lower dimension q < p which in some sense best fits the range space generated by the matrix M, and to provide a partial solution.

Journal ArticleDOI
TL;DR: A method for computing the partial singular value decomposition of a matrix is described, appropriate to problems where the matrix is known to be of low rank and only the principal singular vectors are of interest.
Abstract: A method for computing the partial singular value decomposition of a matrix is described. The method is appropriate to problems where the matrix is known to be of low rank and only the principal singular vectors are of interest. The technique is simple, easy to implement in integer arithmetic, and has modest memory requirements. The convergence properties of the algorithm are investigated analytically and by simulation.
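The abstract does not spell out the algorithm, but the simplest way to extract only the principal singular triplet is power iteration on AᵀA, which touches A only through matrix-vector products. A generic sketch under that assumption:

```python
import numpy as np

def principal_singular_triplet(A, iters=200):
    """Estimate the largest singular value and its vectors by power
    iteration on A^T A; only matrix-vector products with A are needed."""
    rng = np.random.default_rng(4)
    v = rng.standard_normal(A.shape[1])
    for _ in range(iters):
        v = A.T @ (A @ v)          # one power step on A^T A
        v /= np.linalg.norm(v)
    u = A @ v
    sigma = np.linalg.norm(u)
    return u / sigma, sigma, v

# Known singular values 3 and 1: the iteration locks onto sigma = 3
A = np.array([[3.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
u, sigma, v = principal_singular_triplet(A)
assert abs(sigma - 3.0) < 1e-8
assert abs(abs(v[0]) - 1.0) < 1e-8
```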

Journal ArticleDOI
Tony F. Chan1
TL;DR: The improved algorithm first computes the QR factorization of A using Householder transformations, and then uses the Golub-Reinsch algorithm on R.
Abstract: where U is an m × min(m,n) matrix containing the left singular vectors, W is a diagonal matrix of size min(m,n) containing the singular values, and V is an n × min(m,n) matrix containing the right singular vectors. Note that m is allowed to be greater than or less than n. For ease of presentation, we assume m to be greater than or equal to n in the following discussion. The algorithm is an improvement of the Golub-Reinsch algorithm [4], which is implemented in subroutines SVD and MINFIT in EISPACK [3] and in subroutine SSVDC in LINPACK [2]. It should be more efficient than the Golub-Reinsch algorithm when m is approximately larger than 2n, as is the case in many least squares applications. The algorithm has a hybrid nature. When m is about equal to n, the Golub-Reinsch algorithm is employed. When the ratio m/n is larger than a threshold value, which is determined by detailed operation counts [1], the improved algorithm is used. The improved algorithm first computes the QR factorization of A using Householder transformations, and then uses the Golub-Reinsch algorithm on R. A further improvement over the Golub-Reinsch algorithm is when the left singular

Book ChapterDOI
01 Jan 1982
TL;DR: A fourth orthogonal matrix decomposition, the Hessenberg Decomposition, has recently been put to good use in certain control theory applications and it is illustrated why this decomposition can frequently replace the much more costly decomposition of Schur.
Abstract: Orthogonal matrix techniques are gaining wide acceptance in applied areas by practitioners who appreciate the value of reliable numerical software. Quality programs that can be used to compute the QR Decomposition, the Singular Value Decomposition, and the Schur Decomposition are primarily responsible for this increased appreciation. A fourth orthogonal matrix decomposition, the Hessenberg Decomposition, has recently been put to good use in certain control theory applications. We describe some of these applications and illustrate why this decomposition can frequently replace the much more costly decomposition of Schur.

Proceedings ArticleDOI
28 Dec 1982
TL;DR: The mapping of two SVD algorithms onto a proposed processor-array architecture is demonstrated, and the algorithms and architecture together have been verified by functional level and register transfer level simulation.
Abstract: Linear time computation of the singular value decomposition (SVD) would be useful in many real time signal processing applications. Two algorithms for the SVD have been developed for implementation on a quadratic array of processors. A specific architecture is proposed and we demonstrate the mapping of the algorithms to the architecture. The algorithms and architecture together have been verified by functional level and register transfer level simulation.

01 Sep 1982
TL;DR: A systolic architecture for computing a singular value decomposition of an m × n matrix, where m ≥ n, is proposed, which is stable and requires only O(mn) time on a linear array of O(n) processors.
Abstract: We propose a systolic architecture for computing a singular value decomposition of an m × n matrix, where m ≥ n. Our algorithm is stable and requires only O(mn) time on a linear array of O(n) processors. Extensions to algorithms for two-dimensional arrays are also discussed. Key Words and Phrases: systolic arrays, singular value decomposition, Hestenes method, threshold Jacobi method, real-time computation.
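The Hestenes method named in the key words can be sketched sequentially: plane rotations orthogonalize pairs of columns until all columns are mutually orthogonal, at which point the column norms are the singular values. Each rotation touches only two columns, which is what makes the method amenable to the paper's systolic parallelization. A simplified sketch (singular values only):

```python
import numpy as np

def hestenes_singular_values(A, sweeps=12, tol=1e-12):
    """One-sided Jacobi (Hestenes) method: sweep over column pairs of A,
    rotating each pair until the columns are orthogonal; the column norms
    are then the singular values."""
    U = np.array(A, dtype=float)
    n = U.shape[1]
    for _ in range(sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]
                if abs(gamma) <= tol * np.sqrt(alpha * beta):
                    continue                     # pair already orthogonal
                converged = False
                # Rotation angle that zeroes the (p, q) inner product
                zeta = (beta - alpha) / (2.0 * gamma)
                t = np.sign(zeta) / (abs(zeta) + np.hypot(1.0, zeta))
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                up = U[:, p].copy()
                U[:, p] = c * up - s * U[:, q]
                U[:, q] = s * up + c * U[:, q]
        if converged:
            break
    return np.sort(np.linalg.norm(U, axis=0))[::-1]

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 4))
assert np.allclose(hestenes_singular_values(A), np.linalg.svd(A, compute_uv=False))
```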

Journal ArticleDOI
TL;DR: The problem of algebraic realization of noisy data continues to receive increasing attention, and many attempts have been made to give a satisfactory solution by means of approximate realizations.

Journal ArticleDOI
TL;DR: Extrapolation of the Fourier spectrum of an object of finite extent is treated as an algebraic restoration problem and simulation results for 1-D objects are presented.
Abstract: Extrapolation of the Fourier spectrum of an object of finite extent is treated as an algebraic restoration problem. Available samples of the spectrum or the image are modeled as arising due to a matrix transformation of the vector representing the object or the extrapolated part of the spectrum. A singular value decomposition of the matrix transformation is used to obtain a minimum norm estimate for the object. Simulation results for 1-D objects are presented.
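The minimum norm estimate the abstract mentions is what the SVD-based pseudoinverse delivers: among all vectors consistent with the observed samples, it returns the one of smallest Euclidean norm. A generic sketch, with T and y as illustrative stand-ins for the paper's matrix transformation and observed samples:

```python
import numpy as np

rng = np.random.default_rng(6)
T = rng.standard_normal((3, 8))     # fewer observations than unknowns
y = rng.standard_normal(3)

x_mn = np.linalg.pinv(T) @ y        # pseudoinverse is computed from the SVD of T
assert np.allclose(T @ x_mn, y)     # fits the observations exactly

# Adding any null-space component keeps the fit but increases the norm,
# so x_mn is the minimum-norm estimate
null_vec = np.linalg.svd(T)[2][-1]  # right singular vector with T @ v ~ 0
x_other = x_mn + null_vec
assert np.allclose(T @ x_other, y)
assert np.linalg.norm(x_other) > np.linalg.norm(x_mn)
```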

Proceedings ArticleDOI
14 Jun 1982
TL;DR: In this article, structural information in the robustness analysis of control feedback systems is used to identify sensitive direction for perturbations to individual components (subsystems) of larger systems.
Abstract: Two topics on using structural information in the robustness analysis of control feedback systems are presented. First, we describe how to identify sensitive directions for perturbations to individual components (subsystems) of larger systems. This technique is then applied to parameter variation analysis. Also we discuss the role of the control weighting matrix in the robustness analysis of the linear quadratic (LQ) design, using a 2×2 example. In particular, it is shown that the structure of the control weighting matrix can be used to classify sensitivities of the control system to model uncertainties.

Proceedings ArticleDOI
28 Dec 1982
TL;DR: A two-dimensional systolic array testbed has been designed and fabricated, which will be used to test and evaluate algorithms and data paths for future implementation in VLSI/VHSIC technology.
Abstract: Parallel algorithms using systolic and wavefront processors have been proposed for a number of matrix operations important for signal processing; namely, matrix-vector multiplication, matrix multiplication/addition, linear equation solution, least squares solution via orthogonal triangular factorization, and singular value decomposition. In principle, such systolic and wavefront processors should greatly facilitate the application of VLSI/VHSIC technology to real-time signal processing by providing modular parallelism and regularity of design while requiring only local interconnects and simple timing. In order to validate proposed architectures and algorithms, a two-dimensional systolic array testbed has been designed and fabricated. The array has programmable processing elements, is dynamically reconfigurable, and will perform 16-bit and 32-bit integer and 32-bit floating point computations. The array will be used to test and evaluate algorithms and data paths for future implementation in VLSI/VHSIC technology. This paper gives a brief system overview, a description of the array hardware, and an explanation of control and data paths in the array. The software system and a matrix multiplication operation are also presented.

Journal Article
TL;DR: A system failure detection method, the Failure Projection Method (FPM), is proposed which provides a geometric picture of the problem of failure detection in the presence of model uncertainties and noise; it is demonstrated on a model of a three-machine power system.

Journal ArticleDOI
TL;DR: In this paper, the use of the singular value decomposition of a matrix in the analysis of cross-classifications having ordered categories is presented, utilizing some matrix properties of a two-way contingency table.
Abstract: The use of the singular value decomposition of a matrix in the analysis of cross-classifications having ordered categories is presented. Utilizing some matrix properties of a two-way contingency table, the singular value decomposition approach is applied to models such as the null association, uniform association, and row-column effect models discussed recently in the literature. Some properties of estimates resulting from the singular value decomposition approach are discussed.

Journal ArticleDOI
TL;DR: A current annihilation scheme is used to develop parallel algorithms for the Generalised Eigenvalue Problem (GEP) and the Singular Value Decomposition (SVD).

Proceedings ArticleDOI
01 Dec 1982
TL;DR: In this paper, two numerically stable Pisarenko type spectrum estimators based on a subspace approximation approach are presented, where a sinusoidal signal plus noise model is assumed.
Abstract: This paper presents two numerically stable Pisarenko-type spectrum estimators based on a subspace approximation approach. A sinusoidal signal plus noise model is assumed. By using the singular value decomposition, the covariance matrix is decomposed into a signal subspace, which represents the signal component, and a noise subspace, which represents the noise contributions. The first method makes use of a signal subspace structure which characterizes the signal covariance matrix by a linear system triple (A, b, c). Then the frequencies of the signal sinusoids are solved as the eigenvalues of the A matrix. The second method utilizes a Toeplitz structure of the noise subspace. Then a subspace approximation procedure is taken to find an estimate of this noise subspace. The frequency estimates are then solved as the roots of the defining sequence of this Toeplitz noise subspace matrix. Simulation results are furnished to illustrate the advantages of these proposed new methods.


Journal ArticleDOI
TL;DR: In this article, the authors addressed the approximation in norm of stochastic processes of arbitrary dimension by reduced order models, based on a systematic review of the basic geometry of stationary process representation in Hilbert space.

01 Jan 1982
TL;DR: In this paper, a new procedure for 2-D Separable Denominator Recursive (SDR) filter design is introduced, based upon the minimization of mean-square error criteria between impulse responses.
Abstract: A new procedure for 2-D Separable Denominator Recursive (SDR) filter design is introduced. It is based upon the minimization of mean-square error criteria between impulse responses. The algorithm is twofold. First, the finite impulse response of the prototype is approximated by a finite sum of separable filters using the Singular Value Decomposition Theorem as described in Treitel & Shanks (1). Second, the finite sum is approximated by an SDR filter. In this part we develop a new approach based upon a single input-multi output 1-D filter approximation. In the last part of the paper, we present experimental results that compare our new algorithm to related previous ones.