
Showing papers on "Singular value decomposition published in 1991"


Book
13 Mar 1991
TL;DR: In this paper, the authors present a directory of Symbols and Definitions for PCA, as well as some classic examples of PCA applications, such as: linear models, regression PCA of predictor variables, and analysis of variance PCA for Response Variables.
Abstract: Preface.Introduction.1. Getting Started.2. PCA with More Than Two Variables.3. Scaling of Data.4. Inferential Procedures.5. Putting It All Together-Hearing Loss I.6. Operations with Group Data.7. Vector Interpretation I : Simplifications and Inferential Techniques.8. Vector Interpretation II: Rotation.9. A Case History-Hearing Loss II.10. Singular Value Decomposition: Multidimensional Scaling I.11. Distance Models: Multidimensional Scaling II.12. Linear Models I : Regression PCA of Predictor Variables.13. Linear Models II: Analysis of Variance PCA of Response Variables.14. Other Applications of PCA.15. Flatland: Special Procedures for Two Dimensions.16. Odds and Ends.17. What is Factor Analysis Anyhow?18. Other Competitors.Conclusion.Appendix A. Matrix Properties.Appendix B. Matrix Algebra Associated with Principal Component Analysis.Appendix C. Computational Methods.Appendix D. A Directory of Symbols and Definitions for PCA.Appendix E. Some Classic Examples.Appendix F. Data Sets Used in This Book.Appendix G. Tables.Bibliography.Author Index.Subject Index.

3,534 citations


Journal ArticleDOI
TL;DR: The authors propose an alternative learning procedure based on the orthogonal least-squares method, which provides a simple and efficient means for fitting radial basis function networks.
Abstract: The radial basis function network offers a viable alternative to the two-layer neural network in many applications of signal processing. A common learning algorithm for radial basis function networks is based on first choosing randomly some data points as radial basis function centers and then using singular-value decomposition to solve for the weights of the network. Such a procedure has several drawbacks, and, in particular, an arbitrary selection of centers is clearly unsatisfactory. The authors propose an alternative learning procedure based on the orthogonal least-squares method. The procedure chooses radial basis function centers one by one in a rational way until an adequate network has been constructed. In the algorithm, each selected center maximizes the increment to the explained variance or energy of the desired output and does not suffer numerical ill-conditioning problems. The orthogonal least-squares learning strategy provides a simple and efficient means for fitting radial basis function networks. This is illustrated using examples taken from two different signal processing applications.

3,414 citations
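The "common" procedure the abstract criticizes — random centers plus an SVD-based least-squares solve for the weights — can be sketched in a few lines of NumPy. All names, the kernel width, and the center count here are illustrative choices, not taken from the paper; `np.linalg.lstsq` uses the SVD internally.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: a noisy sine.
x = np.linspace(0.0, 1.0, 80)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)

# Randomly pick data points as RBF centers (the step the paper's
# orthogonal least-squares procedure replaces with a rational selection).
centers = rng.choice(x, size=12, replace=False)
width = 0.15  # Gaussian kernel width -- a hypothetical choice

# Design matrix of Gaussian basis functions, then SVD-based least squares.
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

y_hat = Phi @ w
rms = np.sqrt(np.mean((y - y_hat) ** 2))
```

The fit quality depends entirely on the luck of the random draw, which is exactly the drawback motivating the paper's one-center-at-a-time orthogonal least-squares selection.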


Book
01 Jan 1991
TL;DR: This book covers Gaussian elimination and its variants, sensitivity of linear systems and effects of roundoff errors, orthogonal matrices and least squares, eigenvalue problems, and the singular value decomposition.
Abstract: Gaussian Elimination and its Variants Sensitivity of Linear Systems Effects of Roundoff Errors Orthogonal Matrices and the Least Squares Problem Eigenvalues, Eigenvectors and Invariant Subspaces Other Methods for the Symmetric Eigenvalue Problem The Singular Value Decomposition Appendices Bibliography

1,077 citations


Journal ArticleDOI
TL;DR: In this article, the authors extend Takens' treatment, applying statistical methods to incorporate the effects of observational noise and estimation error, and derive asymptotic scaling laws for distortion and noise amplification.

505 citations


Journal ArticleDOI
TL;DR: Two direction finding algorithms for non-Gaussian signals, based on the fourth-order cumulants of the data received by the array, are presented; numerical experiments seem to confirm the insensitivity of these algorithms to the (Gaussian) noise parameters.
Abstract: Two direction finding algorithms are presented for non-Gaussian signals, which are based on the fourth-order cumulants of the data received by the array. The first algorithm is similar to MUSIC, while the second is asymptotically minimum variance in a certain sense. The first algorithm requires singular value decomposition of the cumulant matrix, while the second is based on nonlinear minimization of a certain cost function. The performance of the minimum variance algorithm can be assessed by analytical means, at least for the case of discrete probability distributions of the source signals and spatially uncorrelated Gaussian noise. The numerical experiments performed seem to confirm the insensitivity of these algorithms to the (Gaussian) noise parameters.

311 citations
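The paper's first algorithm mirrors MUSIC with the covariance matrix replaced by a fourth-order cumulant matrix. The cumulant construction is too long for a short sketch, but the subspace machinery it reuses — SVD of the array data, noise-subspace projection, spectrum search — is the classical second-order MUSIC shown below. The array geometry, source angles, and noise level are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

m = 8                                      # sensors in a uniform linear array
true_doas = np.deg2rad(np.array([-20.0, 35.0]))
n_snap = 2000

def steering(theta):
    # Half-wavelength ULA: a(theta)_k = exp(j*pi*k*sin(theta))
    return np.exp(1j * np.pi * np.arange(m)[:, None] * np.sin(theta)[None, :])

A = steering(true_doas)                                              # 8 x 2
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
N = 0.1 * (rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap)))
X = A @ S + N

# Noise subspace from the SVD of the data matrix (2 sources assumed known).
U, s, _ = np.linalg.svd(X, full_matrices=False)
En = U[:, 2:]

# MUSIC pseudo-spectrum over a DOA grid; peaks mark the source directions.
grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
p = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2

# Pick the two largest local maxima.
peaks = np.flatnonzero((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:])) + 1
est = np.sort(np.rad2deg(grid[peaks[np.argsort(p[peaks])[-2:]]]))
```

In the cumulant variant, the Gaussian noise contributes (asymptotically) nothing to the fourth-order statistics, which is the source of the noise insensitivity the abstract reports.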


Journal ArticleDOI
TL;DR: It is proved that the generalized eigenequations on the optimal discriminant plane are stable with respect to eigenvalues, and that the generalized eigenvectors are indeed the optimal discriminant directions, provided the perturbation satisfies certain conditions.

267 citations


Journal ArticleDOI
TL;DR: The singular value decomposition (SVD) is explored as the common structure in three basic algorithms — the direct matrix pencil algorithm, Pro-ESPRIT, and TLS-ESPRIT — and several SVD-based steps inherent in the algorithms are shown to be equivalent to within a first-order approximation.
Abstract: Several algorithms for estimating generalized eigenvalues (GEs) of singular matrix pencils perturbed by noise are reviewed. The singular value decomposition (SVD) is explored as the common structure in the three basic algorithms: the direct matrix pencil algorithm, Pro-ESPRIT, and TLS-ESPRIT. It is shown that several SVD-based steps inherent in the algorithms are equivalent to within a first-order approximation. In particular, Pro-ESPRIT and its variant TLS-Pro-ESPRIT are shown to be equivalent, and TLS-ESPRIT and its earlier version LS-ESPRIT are shown to be asymptotically equivalent to within a first-order approximation. For the problem of estimating superimposed complex exponential signals, the state-space algorithm is shown to be also equivalent to the previous matrix pencil algorithms to the first-order approximation. The second-order perturbation and the threshold phenomenon are illustrated by simulation results based on a damped sinusoidal signal. An improved state-space algorithm is found to be the most robust to noise.

262 citations
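The direct matrix pencil algorithm the review starts from can be sketched for the paper's test case, a damped sinusoid. Two shifted Hankel matrices built from the signal share a generalized eigenvalue equal to the signal pole; the SVD enters through the pseudoinverse, which truncates the noise-dominated directions. The pencil parameter and tolerance below are conventional choices, not the paper's.

```python
import numpy as np

# Damped complex exponential: y[n] = exp((alpha + j*omega) * n).
n = np.arange(60)
alpha, omega = -0.02, 0.9
y = np.exp((alpha + 1j * omega) * n)

# Shifted Hankel matrices Y0, Y1: the column shift multiplies the signal
# by its pole z = exp(alpha + j*omega), so z is a generalized eigenvalue
# of the pencil (Y1, Y0).
L = 20  # pencil parameter (a common heuristic is around N/3)
Y = np.array([y[i:i + L + 1] for i in range(len(y) - L)])
Y0, Y1 = Y[:, :-1], Y[:, 1:]

# SVD-truncated pseudoinverse suppresses the pencil's common null space;
# the surviving nonzero eigenvalue of pinv(Y0) @ Y1 is the pole estimate.
z = np.linalg.eigvals(np.linalg.pinv(Y0, rcond=1e-8) @ Y1)
z_hat = z[np.argmax(np.abs(z))]
```

With noise added, the choice of SVD truncation threshold drives the threshold phenomenon the paper analyzes via second-order perturbation.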


Journal ArticleDOI
TL;DR: This article presents a new approach to determining the minimum set of inertial parameters of robots, based on numerical QR and SVD factorizations and on matrix scaling; the method applies to open-loop and graph-structured robots.
Abstract: This article presents a new approach to the problem of determining the minimum set of inertial parameters of robots. The calculation is based on numerical QR and SVD factorizations and on a matrix scaling procedure. It proceeds in three steps: eliminate standard inertial parameters that have no effect on the dynamic model, determine the number of base parameters, and determine a set of base parameters by regrouping some standard parameters with others in linear relations. Different models, all linear in the inertial parameters, are used: a complete dynamic model, a simplified dynamic model, and an energy model. The method is general and can be applied to open-loop or graph-structured robots. The algorithms are easy to implement. An application to the PUMA 560 robot is given.

254 citations
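The second step — determining the number of base parameters — amounts to a numerical rank computation on a regressor matrix that is linear in the standard parameters. A minimal sketch with a hypothetical regressor (not a real robot model): two columns are exact linear combinations of others, so only eight parameters are identifiable.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical regressor, linear in 10 "standard" inertial parameters,
# but with two columns that are exact linear combinations of the others.
W = rng.standard_normal((200, 8))
W_full = np.column_stack([W, W[:, 0] + 2 * W[:, 3], W[:, 1] - W[:, 5]])

# Number of base parameters = numerical rank of the regressor, read off
# the singular values (pivoted QR, as in the paper, also reveals it and
# additionally identifies which columns to regroup).
s = np.linalg.svd(W_full, compute_uv=False)
n_base = int(np.sum(s > s[0] * 1e-10))
```

The paper's scaling procedure matters when the columns have wildly different magnitudes, which distorts the singular-value gap this rank test relies on.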


Journal ArticleDOI
TL;DR: This paper derives a number of quantitative rules for reducing the rank of signal models that are used in signal processing algorithms.

244 citations


Journal ArticleDOI
TL;DR: In this article, the status of singular value loop-shaping as a design paradigm for multivariable feedback systems is reviewed and an alternate paradigm is discussed which overcomes these limitations.
Abstract: The status of singular value loop-shaping as a design paradigm for multivariable feedback systems is reviewed. This paradigm is an effective design tool whenever the problem specifications are spatially round. The tool can be arbitrarily conservative, however, when they are not, because singular value conditions for robust performance are not tight (necessary and sufficient) and can severely overstate actual requirements. An alternate paradigm is discussed which overcomes these limitations. The alternative includes a more general problem formulation, a new matrix function mu, and tight conditions for both robust stability and robust performance. The state of the art currently permits analysis of feedback systems within this new paradigm; synthesis remains a subject of research.

235 citations


Proceedings ArticleDOI
11 Dec 1991
TL;DR: In this article, the authors derive a novel algorithm to consistently identify stochastic state space models from given output data without forming the covariance matrix and using only semi-infinite block Hankel matrices.
Abstract: The authors derive a novel algorithm to consistently identify stochastic state-space models from given output data, without forming the covariance matrix and using only semi-infinite block Hankel matrices. The algorithm is based on the concept of principal angles and directions. The authors describe how these can be calculated with only QR and QSVD decompositions. They also provide an interpretation of the principal directions as states of a non-steady-state Kalman filter. With a couple of examples, it is shown that the proposed algorithm is superior to the classical canonical correlation algorithms.

Journal ArticleDOI
TL;DR: In this paper, a method for structural analysis of multivariate data is proposed that combines features of regression analysis and principal component analysis, which is based on the generalized singular value decomposition of a matrix with certain metric matrices.
Abstract: A method for structural analysis of multivariate data is proposed that combines features of regression analysis and principal component analysis. In this method, the original data are first decomposed into several components according to external information. The components are then subjected to principal component analysis to explore structures within the components. It is shown that this requires the generalized singular value decomposition of a matrix with certain metric matrices. The numerical method based on the QR decomposition is described, which simplifies the computation considerably. The proposed method includes a number of interesting special cases, whose relations to existing methods are discussed. Examples are given to demonstrate practical uses of the method.

Proceedings ArticleDOI
07 Oct 1991
TL;DR: In this article, the authors address the problem of motion segmentation using the singular value decomposition of a feature track matrix and show that, under general assumptions, the number of numerically nonzero singular values can be used to determine the count of motions.
Abstract: The authors address the problem of motion segmentation using the singular value decomposition of a feature track matrix. It is shown that, under general assumptions, the number of numerically nonzero singular values can be used to determine the number of motions. Furthermore, motions can be separated using the right singular vectors associated with the nonzero singular values. A relationship is derived between a good segmentation, the number of nonzero singular values in the input and the sum of the number of nonzero singular values in the segments. The approach is demonstrated on real and synthetic examples. The paper ends with a critical analysis of the approach.
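The rank-counting idea can be demonstrated synthetically. Under an affine camera model each rigid motion contributes a track matrix of rank at most four, so stacking the tracks of two independently moving objects gives a matrix whose numerically nonzero singular values count the motions. The rank-4-per-motion assumption and the synthetic track generator below are illustrative, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(3)

def tracks(n_pts, n_frames):
    """Synthetic feature tracks of one rigidly moving point cloud under an
    affine camera: each frame's 2-D positions are an affine (rank-4)
    function of the homogeneous 3-D shape."""
    shape = np.vstack([rng.standard_normal((3, n_pts)), np.ones(n_pts)])  # 4 x P
    motion = rng.standard_normal((2 * n_frames, 4))                       # 2F x 4
    return motion @ shape                                                 # 2F x P

# Stack the tracks of two independently moving objects side by side.
W = np.hstack([tracks(15, 10), tracks(15, 10)])

# Each motion spans (at most) a 4-dimensional column space, so the number
# of numerically nonzero singular values counts the motions.
s = np.linalg.svd(W, compute_uv=False)
rank = int(np.sum(s > s[0] * 1e-10))
n_motions = rank // 4
```

The paper's harder contribution — actually separating the motions via the right singular vectors, and handling degenerate (lower-rank) motions — builds on this same rank structure.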


Journal ArticleDOI
TL;DR: The goal of the LAPACK project is to design and implement a portable linear algebra library for efficient use on a variety of high-performance computers, based on the widely used LINPACK and EISPACK packages, but extends their functionality in a number of ways.
Abstract: The goal of the LAPACK project is to design and implement a portable linear algebra library for efficient use on a variety of high-performance computers. The library is based on the widely used LINPACK and EISPACK packages for solving linear equations, eigenvalue problems, and linear least-squares problems, but extends their functionality in a number of ways. The major methodology for making the algorithms run faster is to restructure them to perform block matrix operations (e.g. matrix-matrix multiplication) in their inner loops. These block operations may be optimized to exploit the memory hierarchy of a specific architecture. In particular, we discuss algorithms and benchmarks for the singular value decomposition.

Journal ArticleDOI
TL;DR: In this paper, the authors extend the singular value decomposition to a path of matrices, develop an algorithm for computing analytic SVDs, and prove that an Euler-like method converges.
Abstract: This paper extends the singular value decomposition to a path of matrices E(t). An analytic singular value decomposition of a path of matrices E(t) is an analytic path of factorizations E(t) = X(t)S(t)Y(t)^T, where X(t) and Y(t) are orthogonal and S(t) is diagonal. To maintain differentiability, the diagonal entries of S(t) are allowed to be either positive or negative and to appear in any order. This paper investigates existence and uniqueness of analytic SVDs and develops an algorithm for computing them. We show that a real analytic path E(t) always admits a real analytic SVD, and that a full-rank, smooth path E(t) with distinct singular values admits a smooth SVD. We derive a differential equation for the left factor, develop Euler-like and extrapolated Euler-like numerical methods for approximating an analytic SVD, and prove that the Euler-like method converges.
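The difficulty the paper addresses is visible even in a discrete setting: pointwise SVDs along a path come back with arbitrary sign conventions, so consecutive factors jump discontinuously. The sketch below aligns the joint sign of each left/right singular vector pair with the previous step — a crude discrete stand-in for the paper's analytic SVD that does not handle the harder cases (singular values crossing or passing through zero, where the paper's sign-in-S device is needed).

```python
import numpy as np

def aligned_svd_path(E, ts):
    """Pointwise SVDs X(t) S(t) Y(t)^T of a matrix path E(t), with the joint
    sign of each (left, right) singular vector pair flipped to match the
    previous step so the factors vary continuously."""
    out = []
    X_prev = None
    for t in ts:
        X, s, Yt = np.linalg.svd(E(t))
        if X_prev is not None:
            # Sign of the overlap with the previous left vectors.
            flips = np.sign(np.sum(X_prev * X, axis=0))
            flips[flips == 0] = 1.0
            X = X * flips                # flip a left vector...
            Yt = Yt * flips[:, None]     # ...and its right partner, so that
        out.append((X, s, Yt))           # X @ diag(s) @ Yt still equals E(t)
        X_prev = X
    return out

# A path with well-separated singular values, so no crossings occur.
rng = np.random.default_rng(8)
B = rng.standard_normal((4, 4))
E = lambda t: np.diag([4.0, 3.0, 2.0, 1.0]) + 0.1 * t * B
ts = np.linspace(0.0, 1.0, 50)
path = aligned_svd_path(E, ts)
```

The paper's ODE-based Euler-like methods replace this step-to-step matching with an integration of the left factor's differential equation, which is what makes genuinely analytic (sign-carrying) decompositions possible.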

Journal ArticleDOI
TL;DR: In this article, it was shown that the cost of Newton's method is reduced by a factor of two for arbitrary input matrices; for symmetric positive definite matrices, the factor is four.
Abstract: Pan and Reif have shown that Newton iteration may be used to compute the inverse of an $n \times n$, well-conditioned matrix in parallel time $O(\log ^2 n)$ and that this computation is processor efficient. Since the algorithm essentially amounts to a sequence of matrix–matrix multiplications, it can be implemented with great efficiency on systolic arrays and parallel computers.Newton's method is expensive in terms of the arithmetic operation count. In this paper the cost of Newton's method is reduced with several new acceleration procedures. A speedup by a factor of two is obtained for arbitrary input matrices; for symmetric positive definite matrices, the factor is four. It is also shown that the accelerated procedure is a form of Tchebychev acceleration, whereas Newton's method uses a Neumann series approximation.In addition, Newton-like procedures are developed for a number of important related problems. It is also shown how to compute the nearest matrices of lower rank to a given matrix A, the generalized inverses of these nearby matrices, their ranks (as a function of their distances from A), and projections onto subspaces spanned by singular vectors; such computations are important in signal processing applications. Furthermore, it is demonstrated that the numerical instability of Newton's method when applied to a singular matrix is absent from these improved methods. Finally, the use of these tools to devise new polylog time parallel algorithms for the singular value decomposition is explored.
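The unaccelerated Newton iteration the paper starts from is short enough to show in full: X_{k+1} = X_k(2I - A X_k), with the standard starting guess X_0 = A^T / (||A||_1 ||A||_inf), which guarantees the residual I - A X_0 has spectral radius below one. This is the baseline scheme only; the paper's Tchebychev-style acceleration (the factor-of-two/four speedups) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned by construction

# Newton iteration for the inverse: X_{k+1} = X_k (2I - A X_k).
# The initial guess A^T / (||A||_1 ||A||_inf) makes rho(I - A X_0) < 1,
# since sigma_max(A)^2 <= ||A||_1 * ||A||_inf.
X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
I = np.eye(n)
for _ in range(30):
    X = X @ (2 * I - A @ X)   # two matrix-matrix products per step

err = np.linalg.norm(I - A @ X)
```

Convergence is quadratic (the residual squares at every step), which is why the whole cost is dominated by a short sequence of matrix multiplications — the property that makes the method attractive on parallel hardware.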

Journal ArticleDOI
TL;DR: The restricted singular value decomposition (RSVD), as mentioned in this paper, is a generalization of the ordinary singular value decomposition with different inner products in row and column spaces; its properties and structure are investigated in detail, as well as its connection to generalized eigenvalue problems.
Abstract: The restricted singular value decomposition (RSVD) is the factorization of a given matrix, relative to two other given matrices. It can be interpreted as the ordinary singular value decomposition with different inner products in row and column spaces. Its properties and structure are investigated in detail as well as its connection to generalized eigenvalue problems, canonical correlation analysis and other generalizations of the singular value decomposition. Applications that are discussed include the analysis of the extended shorted operator, unitarily invariant norm minimization with rank constraints, rank minimization in matrix balls, the analysis and solution of linear matrix equations, rank minimization of a partitioned matrix and the connection with generalized Schur complements, constrained linear and total linear least squares problems, with mixed exact and noisy data, including a generalized Gauss-Markov estimation scheme. Two constructive proofs of the RSVD in terms of other generalizations of the ordinary singular value decomposition are provided as well.

Journal ArticleDOI
TL;DR: A unified statistical performance analysis using perturbation expansions is applied to subspace-based algorithms for direction-of-arrival (DOA) estimation in array signal processing using the MUSIC, Min-Norm, State-Space Realization, and ESPRIT algorithms.

Journal ArticleDOI
TL;DR: In this article, the concept of restricted singular values of matrix triplets is introduced, and the restricted singular value decomposition (RSVD) is introduced for matrix rank determination under restricted perturbation.
Abstract: In this paper the concept of restricted singular values of matrix triplets is introduced. A decomposition theorem concerning the general matrix triplet $( A,B,C )$, where $A \in \mathcal{C}^{m \times n} ,B \in \mathcal{C}^{m \times p} $, and $C \in \mathcal{C}^{q \times n} $, which is called the restricted singular value decomposition (RSVD), is proposed. This result generalizes the well-known singular value decomposition, the generalized singular value decomposition, and the recently proposed product-induced singular value decomposition. Connection of restricted singular values with the problem of determination of matrix rank under restricted perturbation is also discussed.

Journal ArticleDOI
TL;DR: It is shown how to generalize the ordinary singular value decomposition of a matrix into a combined factorization of any number of matrices, and it is proposed to call these factorizations generalized singular value decompositions.

Journal ArticleDOI
TL;DR: In this article, a singular value decomposition (SVD) of the triaxial data matrix produces an eigenanalysis of the covariance matrix and a rotation of the data onto the directions given by the eigen analysis, all in one step.
Abstract: Polarization analysis can be achieved efficiently by treating a time window of a single‐station triaxial recording as a matrix and doing a singular value decomposition (SVD) of this seismic data matrix. SVD of the triaxial data matrix produces an eigenanalysis of the data covariance (cross‐energy) matrix and a rotation of the data onto the directions given by the eigenanalysis (Karhunen‐Loeve transform), all in one step. SVD provides a complete principal components analysis of the data in the analysis time window. Selection of this time window is crucial to the success of the analysis and is governed by three considerations: the window should contain only one arrival; the window should be such that the signal‐to‐noise ratio is maximized; and the window should be long enough to be able to discriminate random noise from signal. The SVD analysis provides estimates of signal, signal polarization directions, and noise. An F‐test is proposed which gives the confidence level for the hypothesis of rectilinear polarization.
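The core step is small: treat the windowed 3-component record as a 3 x N matrix and take its SVD; the leading left singular vector is the polarization direction, and the singular-value spread separates signal from noise. The synthetic wavelet, direction, and noise level below are illustrative; the paper's F-test for rectilinearity is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

# Triaxial (3-component) time window: a rectilinear arrival polarized
# along a fixed unit vector d, plus weak random noise.
d = np.array([0.6, 0.64, 0.48])                     # unit polarization vector
wavelet = np.sin(2 * np.pi * np.arange(200) / 40.0)
X = np.outer(d, wavelet) + 0.02 * rng.standard_normal((3, 200))

# SVD of the 3 x N data matrix: equivalent to eigenanalysis of the
# cross-energy matrix X @ X.T plus rotation onto those axes, in one step.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
direction = U[:, 0] * np.sign(U[:, 0] @ d)          # fix the sign for comparison
```

A dominant first singular value (s[0] much larger than s[1] and s[2]) is the signature of rectilinear polarization that the paper's F-test quantifies.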

Proceedings ArticleDOI
11 Dec 1991
TL;DR: A fast recursive implementation of the MIMO output-error state-space model identification (MOESP) algorithm is presented; the key to the reduced order of complexity is a rank-one update of a tridiagonal matrix instead of a diagonal matrix.
Abstract: The authors present a fast recursive implementation of the ordinary MIMO (multiple-input, multiple-output) output-error state-space model identification (MOESP) algorithm. The core of the implementation is a partial update of an LQ factorization followed by a rank-one update of an SVD (singular value decomposition) step. The computational complexity of a single measurement update is O(L^2), where L is related to the dimension of the matrices to be processed. When one would straightforwardly apply existing rank-one update schemes as presented by J.R. Bunch et al. (1978), the order of complexity would be O(L^3). The key point in obtaining this reduction in order of complexity is the rank-one update of a tridiagonal matrix instead of a diagonal matrix. The authors apply the proposed scheme to the identification of slowly time-variant systems and illustrate some of the potential of the MOESP scheme in estimating a nominal state-space model and error bounds when using error-affected measurements.

Book
01 Mar 1991
TL;DR: An edited volume of survey papers and contributions on SVD-based algorithms and architectures, their analysis, and applications to signal modeling and detection, including the OSVD and QSVD in signal separation and enhanced sinusoidal and exponential data modeling.
Abstract: Parts: I. Survey Papers. The SVD and reduced-rank signal processing (L.L. Scharf). Parallel implementations of the SVD using implicit CORDIC arithmetic (J-M. Delosme). Neural networks for extracting pure/constrained/oriented principal components (S.Y. Kung et al.). Generalizations of the OSVD: structure, properties and applications (B. de Moor). Perturbation theory for the singular value decomposition (G.W. Stewart). II. Algorithms and Architectures. An accurate product SVD algorithm (A.W. Bojanczyk et al.). The hyperbolic singular value decomposition and applications (R. Onn et al.). Adaptive SVD algorithm with application to narrowband signal tracking (W. Ferzali, J.G. Proakis). Chebyshev acceleration techniques for solving slowly varying total least squares problems (S. Van Huffel). Combined Jacobi-type algorithms in signal processing (M. Moonen et al.). A modified non-symmetric Lanczos algorithm and applications (D. Boley, G. Golub). Parallel one-sided Euler-Jacobi method for symmetric eigendecomposition and SVD (A.W. Bojanczyk et al.). Using UNITY to implement SVD on the Connection Machine (M. Kleyn, I. Chakravarty). A CORDIC processor array for the SVD of a complex matrix (J.R. Cavallaro, A.C. Elster). III. Analysis of SVD-Based Algorithms. Analytical performance prediction of subspace-based algorithms for DOA estimation (Fu Li, R.J. Vaccaro). Spatial smoothing and MUSIC: further results (B.D. Rao, K.V.S. Hari). A performance analysis of adaptive algorithms in the presence of calibration errors (D.R. Farrier, D.J. Jeffries). Second-order perturbation calculation of state space estimation (W.W.F. Pijnappel et al.). The threshold effect in signal processing algorithms which use an estimated subspace (D.W. Tufts et al.). IV. Applications to Signal Modeling and Detection. OSVD and QSVD in signal separation (D. Callaerts et al.). Enhanced sinusoidal and exponential data modeling (J.A. Cadzow, D.M. Wilkes). Enhancements to SVD-based detection (J.H. Cozzens, M.J. Sousa). Resolution of closely spaced coherent plane waves via SVD (H. Krim et al.). Transient parameter estimation by an SVD-based Wigner distribution (M.F. Griffin, A.M. Finn). Signal/noise subspace decomposition for random transient detection (N.M. Marinovich, L.M. Roytman). Comparisons of truncated QR and SVD methods for AR spectral estimations (S.F. Hsieh et al.). Using singular value decomposition to recover periodic waveforms in noise and with residual carrier (B. Rice). Other Applications. SVD-based low-rank approximations of rational models (A-J. van der Veen, E.F. Deprettere). Computing the singular values and vectors of a Hankel operator (H. Ozbay). SVD analysis of probability matrices (J.A. Ramos). Fast matrix-vector multiplication using displacement rank approximation via an SVD (J.M. Speiser et al.). A new use of singular value decomposition in bioelectric imaging of the brain (D.J. Major, R.J. Sidman).

Journal ArticleDOI
TL;DR: In this article, a simple collocation procedure, when combined with the singular value decomposition, can yield accurate results for the numerical solution of the superposition integral equation of a complex radiator.
Abstract: In the method of wave superposition, the acoustic field, due to a complex radiator, is expressed in terms of a Fredholm integral equation of the first kind called the ‘‘superposition integral equation.’’ In general, Fredholm integral equations of the first kind are ill‐posed and therefore difficult to solve numerically. In this paper, it will be shown that a simple collocation procedure, when combined with the singular‐value decomposition, can yield accurate results for the numerical solution of the superposition integral. As an example of the application of the method, the acoustic radiation from a circular cylinder will be analyzed using this numerical procedure and compared to the exact solution. It is shown that, for this problem, the accuracy of the numerical solution can be judged by evaluating how well the superposition solution approximates the specified boundary condition on the surface of the radiator. An example is also given of a problem which has no exact solution. In this situation, it is suggested, without proof, that the accuracy of the numerical solution can be judged in a similar manner by evaluating the error in the superposition solution’s satisfaction of the boundary condition.
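The numerical difficulty described here — a discretized Fredholm integral equation of the first kind is severely ill-conditioned, so a naive solve amplifies noise through the tiny singular values — is routinely tamed by truncating the SVD. A miniature stand-in for the superposition integral (a smoothing-kernel collocation matrix, not the paper's acoustic operator):

```python
import numpy as np

# Collocation discretization of a first-kind smoothing operator:
# a Gaussian kernel matrix, which is severely ill-conditioned.
n = 40
t = np.linspace(0.0, 1.0, n)
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.02) / n
x_true = np.sin(np.pi * t)
b = K @ x_true

# Truncated-SVD solve: pinv's rcond discards singular values below
# 1e-8 of the largest, which stabilizes the otherwise ill-posed inversion.
x_tsvd = np.linalg.pinv(K, rcond=1e-8) @ b

# Judge accuracy as the paper suggests: by how well the computed source
# strengths reproduce the specified boundary data.
res = np.linalg.norm(K @ x_tsvd - b)
```

Note the abstract's own accuracy criterion is exactly this residual on the boundary condition, since for the problem without an exact solution the true source strengths are unknowable.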

Journal ArticleDOI
01 Nov 1991
TL;DR: In this article, an analytic method of digital interpolator optimization is proposed to achieve performance approaching or equaling that of full polyphase filter designs through the generation of orthogonal filters via singular value decomposition (SVD).
Abstract: An analytic method of digital interpolator optimization is proposed. Performance approaching or equaling that of full polyphase filter designs is achieved through the generation of orthogonal filters via singular value decomposition (SVD). This approach preserves the magnitude, group delay, and composite filter response, and can eliminate the traditional restriction to ratios of integer interpolation and decimation factors.

Journal ArticleDOI
TL;DR: In this paper, the stability of the fundamental solution method applied to the Dirichlet problem of Laplace's equation is studied, and an asymptotic estimate of the stability with respect to the number of collocation points is given.

Journal ArticleDOI
TL;DR: This paper presents reliable condition estimation procedures, based on Frechet derivatives, for polar decomposition and the matrix sign function, and related results for the stable Lyapunov equation and Newton's method for the matrix square root problem are discussed.
Abstract: This paper presents reliable condition estimation procedures, based on Frechet derivatives, for polar decomposition and the matrix sign function. For polar decomposition, the condition number for complex matrices is equal to the reciprocal of the smallest singular value, and rather surprisingly, for real matrices it is equal to the reciprocal of the average of the two smallest singular values. By using inverse power methods, both of these condition numbers can be evaluated at a fraction of the cost of finding the polar decomposition.Except for special cases, such as for normal matrices, the condition number of the matrix sign function does not have such a precise characterization. However, accurate condition estimates can be obtained by using explicit forms of the Frechet derivative, or its finite-difference approximation, with a matricial inverse power method. These methods typically require two extra sign function evaluations, and it is an open problem whether accurate estimates can be obtained for a fraction of a function evaluation, as is the case for the polar decomposition. Related results for the stable Lyapunov equation and Newton's method for the matrix square root problem are discussed.
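The polar decomposition whose conditioning this paper analyzes is itself one SVD away: A = QH with Q = UV^T orthogonal and H = V diag(s) V^T symmetric positive semidefinite. The sketch below also forms the paper's stated condition number for real matrices, the reciprocal of the average of the two smallest singular values (the paper's contribution is estimating this cheaply via inverse power methods, which is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 5))

# Polar decomposition via the SVD: A = Q H,
# with Q = U V^T orthogonal and H = V diag(s) V^T symmetric PSD.
U, s, Vt = np.linalg.svd(A)
Q = U @ Vt
H = Vt.T @ np.diag(s) @ Vt

# Condition number of the polar factor for real matrices (per the paper):
# the reciprocal of the average of the two smallest singular values.
cond_polar = 2.0 / (s[-1] + s[-2])
```

For complex matrices the paper's condition number is instead 1/s[-1], which is never smaller than the real-case value above — the "rather surprising" gap the abstract mentions.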

Journal ArticleDOI
TL;DR: In this article, the TLS computations are generalized in order to maintain consistency of the solution in the following cases: first, some columns of A may be error-free and secondly, the errors on the remaining data may be correlated and not equally sized.
Abstract: The Total Least Squares (TLS) method has been devised as a more global fitting technique than the ordinary least squares technique for solving overdetermined sets of linear equations AX ≈ B when errors occur in all data. If the errors on the measurements A and B are uncorrelated with zero mean and equal variance, TLS is able to compute a strongly consistent estimate of the true solution of the corresponding unperturbed set A0X = B0. In this paper the TLS computations are generalized in order to maintain consistency of the solution in the following cases: first of all, some columns of A may be error-free, and secondly, the errors on the remaining data may be correlated and not equally sized. To this end, a numerically reliable Generalized TLS algorithm, GTLS, based on the Generalized Singular Value Decomposition (GSVD), is developed. Additionally, the equivalence between the GTLS solution and alternative expressions of consistent estimators described in the literature is proven. These relations allow one to deduce the main statistical properties of the GTLS solution.
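The classical TLS computation that this paper generalizes is a one-SVD affair: factor the compound matrix [A b] and read the solution off the right singular vector of the smallest singular value. A minimal sketch with synthetic errors in both A and b (the paper's GTLS replaces this SVD with a GSVD to handle error-free columns and correlated, unequally sized errors, which is not shown here):

```python
import numpy as np

rng = np.random.default_rng(7)

# Overdetermined Ax ~= b with i.i.d. zero-mean errors in BOTH A and b,
# the setting in which TLS is strongly consistent.
n, p = 300, 3
x_true = np.array([1.0, -2.0, 0.5])
A0 = rng.standard_normal((n, p))
b0 = A0 @ x_true
A = A0 + 0.01 * rng.standard_normal((n, p))
b = b0 + 0.01 * rng.standard_normal(n)

# Classical TLS via the SVD of the compound matrix [A b]: the solution
# comes from the right singular vector of the smallest singular value.
_, _, Vt = np.linalg.svd(np.column_stack([A, b]))
v = Vt[-1]
x_tls = -v[:p] / v[p]
```

Ordinary least squares is biased in this errors-in-variables setting; the TLS estimate above converges to x_true as the number of equations grows, which is the consistency property the GTLS extension preserves under weaker error assumptions.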

Journal ArticleDOI
TL;DR: A comparative study on motion estimation from 3D line segments is presented, using both synthetic and real data obtained by a trinocular stereo, and it is observed that the extended Kalman filter with the rotation axis representation of rotation is preferable.