
Showing papers on "Singular value decomposition published in 1984"


Journal ArticleDOI
TL;DR: In this article, a common type of inversion applies iterative damped linear least squares through use of the Marquardt-Levenberg method, which has been implemented by solving the associated normal equations in conventional ways.
Abstract: Geophysical inversion involves the estimation of the parameters of a postulated earth model from a set of observations. Since the associated model responses can be nonlinear functions of the model parameters, nonlinear least-squares techniques prove to be useful for performing the inversion. A common type of inversion applies iterative damped linear least squares through use of the Marquardt-Levenberg method. Traditionally, this method has been implemented by solving the associated normal equations in conventional ways. However, Singular Value Decomposition (SVD) produces significant improvements in computational precision when applied to the same system of normal equations. Iterative least-squares modeling finds application in a wide variety of geophysical problems. Two examples illustrate the approach: (1) seismic wavelet deconvolution, and (2) the location of a buried wedge from surface gravity data. More generally, nonlinear least-squares inversion can be used to estimate earth models for any set of geophysical observations for which an appropriate mathematical description is available.

624 citations
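The computational point above, applying the SVD to the damped least-squares update rather than solving the normal equations by conventional elimination, can be sketched in a few lines. This is a minimal NumPy illustration; the function and variable names (`marquardt_step`, Jacobian `J`, residual `r`, `damping`) are ours, not the paper's notation:

```python
import numpy as np

def marquardt_step(J, r, damping):
    """One damped Gauss-Newton (Marquardt-Levenberg) update via SVD.

    Solves (J^T J + damping * I) dx = -J^T r without explicitly
    forming the normal equations, which preserves numerical precision.
    """
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    # Damped filter factors s / (s^2 + damping) replace 1 / s.
    f = s / (s**2 + damping)
    return -Vt.T @ (f * (U.T @ r))
```

With `damping = 0` this reduces to the ordinary Gauss-Newton step through the pseudoinverse; increasing the damping shrinks the step along poorly determined directions.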


Journal ArticleDOI
TL;DR: An overview of ARMA spectral estimation techniques based on the modified Yule-Walker equations is presented in this article, where the importance of using order overestimation, as well as of using an overdetermined set of equations, is emphasized.
Abstract: An overview of ARMA spectral estimation techniques based on the modified Yule-Walker equations is presented. The importance of using order overestimation, as well as of using an overdetermined set of equations, is emphasized. The Akaike information criterion is proposed for determining the equation order. A procedure for removing spurious noise modes based on modal decomposition of the sample covariance matrix is derived. The role of the singular value decomposition method in solving the modified Yule-Walker equations is discussed. A number of techniques for estimating MA spectral parameters are presented.

241 citations
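The overdetermined Yule-Walker idea can be sketched briefly: take more equations than unknowns and solve them in the least-squares sense with an SVD-based solver. This is a simplified illustration under our own naming and conventions; the paper's full procedure also overestimates the order and prunes spurious noise modes:

```python
import numpy as np

def ar_from_yule_walker(r, p, q, num_eqs):
    """Estimate AR(p) coefficients of an ARMA(p, q) model from
    autocorrelation lags r[0], r[1], ..., using num_eqs >= p
    modified Yule-Walker equations (lags m = q+1 .. q+num_eqs)
    solved in the least-squares sense; np.linalg.lstsq uses the SVD.
    """
    rows = range(q + 1, q + 1 + num_eqs)
    # Each row encodes r[m] + a_1 r[m-1] + ... + a_p r[m-p] = 0,
    # with r[-k] = r[k] by symmetry of the autocorrelation.
    A = np.array([[r[abs(m - k)] for k in range(1, p + 1)] for m in rows])
    b = -np.array([r[m] for m in rows])
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a
```

For exact autocorrelations of a pure AR process (`q = 0`) the overdetermined system is consistent and the coefficients are recovered exactly; with sample estimates the extra equations average out estimation noise.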


Journal ArticleDOI
TL;DR: State of the art and recent developments in computational linear algebra, including linear systems, least squares techniques, the singular value decomposition, and eigenvalue problems are reviewed briefly.

225 citations


Journal ArticleDOI
01 Feb 1984
TL;DR: This procedure has received only limited dissemination, but in preliminary tests, the performance of the method is close to that of the best available, more complicated, approaches which are based on maximum likelihood or on the use of eigenvector or singular value decompositions.
Abstract: Prony's method is a simple procedure for determining the values of parameters of a linear combination of exponential functions. Until recently, even the modern variants of this method have performed poorly in the presence of noise. We have discovered improvements to Prony's method which are based on low-rank approximations to data matrices or estimated correlation matrices [6]-[8], [15]-[27], [34]. Here we present a different, often simpler procedure for estimation of the signal parameters in the presence of noise. This procedure has received only limited dissemination [35]. It is very close in form and assumptions to Prony's method. However, in preliminary tests, the performance of the method is close to that of the best available, more complicated, approaches which are based on maximum likelihood or on the use of eigenvector or singular value decompositions.

165 citations
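For reference, the classical Prony step that these methods build on can be sketched as a linear prediction fit followed by polynomial rooting. This is a noise-free illustration with our own naming; the improvements surveyed in the paper replace the plain least-squares fit with low-rank approximations:

```python
import numpy as np

def prony_exponents(x, p):
    """Classical Prony estimate of the p exponents z_k in
    x[n] = sum_k c_k * z_k**n from noise-free samples x[0..N-1]:
    fit a linear prediction polynomial, then take its roots."""
    N = len(x)
    # Linear prediction: x[n] = -(a_1 x[n-1] + ... + a_p x[n-p])
    A = np.array([[x[n - k] for k in range(1, p + 1)] for n in range(p, N)])
    b = np.array([x[n] for n in range(p, N)])
    a, *_ = np.linalg.lstsq(A, -b, rcond=None)
    return np.roots(np.concatenate(([1.0], a)))
```

In the presence of noise this plain fit degrades quickly, which is exactly the weakness the low-rank and SVD-based variants address.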


Journal ArticleDOI
TL;DR: In this article, the problem of rank-annihilation factor analysis is formulated as a generalized eigenvalue problem, and a direct solution is found by using singular value decomposition.

108 citations


Journal ArticleDOI
TL;DR: In this paper, singular value decomposition (SVD) is used to implement the Moore-Penrose inverse, which yields the minimum norm least-squares extrapolation; an error expression estimates the number of singular values needed to form the inverse and shows that decimation can reduce the high computational cost of SVD without degrading the extrapolation.
Abstract: The problem of extrapolating a band-limited signal in discrete time is viewed as one of solving an underdetermined system of linear equations. Choosing the minimum norm least-squares (MNLS) solution is one criterion for singling out an extrapolation from all the possible solutions to the linear system. Use of the Moore-Penrose inverse yields the MNLS solution, and singular value decomposition (SVD) provides a means for implementing the Moore-Penrose inverse. An expression for the mean-square error incurred in solving a linear system via SVD is derived. This can be used to estimate the number of singular values needed to form the inverse. The error expression also indicates that decimation can be applied in the extrapolation problem to reduce the high computational cost of SVD without degrading the extrapolation. The results developed for the one-dimensional case are extended to higher dimensions. Examples of the SVD approach to extrapolation are given, along with examples using other extrapolation techniques for comparison. The SVD approach compares favorably with known MNLS extrapolation methods.

85 citations
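The core numerical step, the Moore-Penrose inverse implemented through a truncated SVD, can be illustrated generically (this sketch omits the paper's band-limited extrapolation setup; names are ours):

```python
import numpy as np

def mnls_solve(A, b, k):
    """Minimum norm least-squares solution of the underdetermined
    system A x = b, retaining only the k largest singular values.
    Truncating small singular values limits the mean-square error
    contributed by noise at the cost of some bias."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])
```

When `k` equals the rank of `A` this coincides with `np.linalg.pinv(A) @ b`; choosing a smaller `k` is the error-control mechanism the abstract describes.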


Journal ArticleDOI
TL;DR: Singular value decomposition is used in this paper to formulate a technique for the design of Luenberger observers when there are completely unknown system inputs.
Abstract: Singular value decomposition is used to formulate a technique for the design of Luenberger observers when there are completely unknown system inputs.

80 citations


Journal ArticleDOI
01 Jul 1984
TL;DR: Specific applications (e.g., the solution of partial differential equations, adaptive noise cancellation, and optimal control) are described to typify the use of matrix processors in modern advanced signal processing.
Abstract: Architectures, algorithms, and applications for systolic processors are described with attention to the realization of parallel algorithms on various optical systolic array processors. Systolic processors for matrices with special structure and matrices of general structure, and the realization of matrix-vector, matrix-matrix, and triple-matrix products and such architectures are described. Parallel algorithms for direct and indirect solutions to systems of linear algebraic equations and their implementation on optical systolic processors are detailed with attention to the pipelining and flow of data and operations. Parallel algorithms and their optical realization for LU and QR matrix decomposition are specifically detailed. These represent the fundamental operations necessary in the implementation of least squares, eigenvalue, and SVD solutions. Specific applications (e.g., the solution of partial differential equations, adaptive noise cancellation, and optimal control) are described to typify the use of matrix processors in modern advanced signal processing.

50 citations


01 Jul 1984
TL;DR: A Jacobi-type algorithm is used to first triangularize the given matrix and then diagonalize the resultant triangular form, resulting in a triangular processor array for computing a singular value decomposition (SVD) of an m × n (m ≥ n) matrix.
Abstract: A triangular processor array for computing a singular value decomposition (SVD) of an m × n (m ≥ n) matrix is proposed. A Jacobi-type algorithm is used to first triangularize the given matrix and then diagonalize the resultant triangular form. The requirements are O(m) time and n²/4 + O(n) processors.

39 citations
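A serial sketch of the rotation primitive involved: one-sided Jacobi, which pairwise orthogonalizes columns so that the column norms converge to the singular values. The paper's array instead triangularizes first and then diagonalizes, so this illustrates the kind of rotation-based computation mapped onto the processors, not the exact scheme:

```python
import numpy as np

def jacobi_svd_values(A, sweeps=30):
    """Singular values of A by one-sided Jacobi rotations:
    repeatedly rotate column pairs to make them orthogonal;
    the final column norms are the singular values."""
    A = np.array(A, dtype=float)
    n = A.shape[1]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                ap, aq = A[:, p].copy(), A[:, q].copy()
                alpha, beta, gamma = ap @ ap, aq @ aq, ap @ aq
                if abs(gamma) < 1e-12:
                    continue  # columns already (nearly) orthogonal
                # Rotation angle that zeroes the inner product.
                zeta = (beta - alpha) / (2.0 * gamma)
                t = np.copysign(1.0 / (abs(zeta) + np.hypot(1.0, zeta)), zeta)
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                A[:, p] = c * ap - s * aq
                A[:, q] = s * ap + c * aq
    return np.sort(np.linalg.norm(A, axis=0))[::-1]
```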


Journal ArticleDOI
TL;DR: In this article, it was shown that the forward-backward data matrix arising in linear prediction is bisymmetric, a fact that can be used to establish algebraic invariance properties possessed by the singular vectors associated with that matrix's SVD representation.
Abstract: It often happens that a signal processing application will involve a matrix operator that is invariant under specific preunitary and postunitary matrix multiplications. This invariance feature may be exploited to establish algebraic invariance properties possessed by the singular vectors associated with that matrix's SVD representation. This characterization can be of considerable theoretical as well as computational value. To illustrate the utility of this approach, the new class of bisymmetric matrices will be examined in detail. Interest in bisymmetric matrices arises from their frequent appearance in signal processing applications. For instance, it is shown that the forward-backward data matrix arising in linear prediction is bisymmetric. The more general class of exponential bisymmetric matrices, which includes among its members the discrete Fourier transform, is also briefly examined.

27 citations


Journal ArticleDOI
TL;DR: In this article, asymptotic expressions for singular value decompositions of a matrix some of whose columns approach zero were derived for the QR factorization of the matrix, and the expressions give insight into the method of weights for approximating the solutions of constrained least squares problems.
Abstract: Asymptotic expressions are derived for the singular value decomposition of a matrix some of whose columns approach zero. Expressions are also derived for the QR factorization of a matrix some of whose rows approach zero. The expressions give insight into the method of weights for approximating the solutions of constrained least squares problems.

Journal ArticleDOI
TL;DR: In this article, a least squares fit for a nonlinear model is proposed to fit a multiplicative term to the additive residuals, using the singular value decomposition, which provides insights about the impact of erroneous data values on the fit.
Abstract: An additive-plus-multiplicative model can describe both main effects and row x column interactions in two-way tables of data. When each cell contains exactly one observation, a least squares fit for this nonlinear model calculates the main effects, using means of rows and columns, and then fits a multiplicative term to the additive residuals, using the singular value decomposition. A natural extension of the hat matrix for a linear model yields a definition of leverage that provides insights about the impact of erroneous data values on the fit. Theoretical and numerical investigations reveal the complex nature of leverage for this nonlinear model.
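The fitting procedure described above can be sketched directly. The model is y_ij = mu + a_i + b_j + lambda * u_i * v_j with one observation per cell; the function and variable names are ours:

```python
import numpy as np

def fit_additive_plus_multiplicative(Y):
    """Least-squares fit of y_ij = mu + a_i + b_j + lam * u_i * v_j
    for a complete two-way table Y (one observation per cell).
    Main effects come from row and column means; the multiplicative
    term is the leading SVD component of the additive residuals."""
    mu = Y.mean()
    a = Y.mean(axis=1) - mu            # row main effects
    b = Y.mean(axis=0) - mu            # column main effects
    R = Y - mu - a[:, None] - b[None, :]
    U, s, Vt = np.linalg.svd(R)
    lam, u, v = s[0], U[:, 0], Vt[0]   # rank-1 interaction term
    return mu, a, b, lam, u, v
```

Note the sign ambiguity inherited from the SVD: `u` and `v` may both be negated, leaving the fitted interaction `lam * u_i * v_j` unchanged.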

Proceedings ArticleDOI
28 Nov 1984
TL;DR: Methods for computing the Generalized Singular Value Decomposition via a sequence of more familiar computations are discussed and the relation of the GSVD to the MUSIC algorithm of R. Schmidt is indicated.
Abstract: The ordinary Singular Value Decomposition (SVD) is widely used in statistical and signal processing computation, both for the insight it provides into the structure of a linear operator, and as a technique for reducing the computational word length required for least-squares solutions and certain Hermitian eigensystem decompositions by roughly a factor of two, via computing directly on a data matrix, rather than on the corresponding estimated correlation or covariance matrix. Although the SVD has long been utilized as a method of off-line or non-real-time computation, parallel computing architectures for its implementation in near real time have begun to emerge. The Generalized Singular Value Decomposition (GSVD) bears the same relationship to the computation of certain Hermitian generalized eigensystem decompositions that the ordinary SVD bears to the corresponding ordinary eigensystem decompositions. This paper discusses methods for computing the GSVD via a sequence of more familiar computations and indicates the relation of the GSVD to the MUSIC algorithm of R. Schmidt.

Proceedings ArticleDOI
01 Mar 1984
TL;DR: An algorithm is given that can be used in either the coherent or the non-coherent case with d unknown; a number p is calculated and used to determine the maximum number of sources distinguishable by a given array system.
Abstract: In this paper we present a method, based on matrix decomposition, for determining the number d and the directions of sources without making any assumption about d. We first establish the uniqueness of the decomposition. We then give an algorithm that can be used in either the coherent or the non-coherent case with d unknown. In our procedure a number p is calculated and used to determine the maximum number of sources distinguishable by a given array system.

Proceedings ArticleDOI
Ilse C. F. Ipsen1
28 Nov 1984
TL;DR: Systolic arrays for determining the singular value decomposition of an m × n (m ≥ n) matrix A of bandwidth w are presented in this paper, where the singular vectors are computed by rerouting the rotations through the arrays used for the reduction to bidiagonal form.
Abstract: Systolic arrays for determining the singular value decomposition of an m × n (m ≥ n) matrix A of bandwidth w are presented. After A has been reduced to bidiagonal form B by means of Givens plane rotations, the singular values of B are computed by the Golub-Reinsch iteration. The products of plane rotations form the matrices of left and right singular vectors. Assuming each processor can compute or supply a plane rotation, O(wn) processors accomplish the reduction to bidiagonal form in O(np) steps, where p is the number of superdiagonals. A constant number of processors then determines each singular value in about 6n steps. The singular vectors are computed by rerouting the rotations through the arrays used for the reduction to bidiagonal form, or else along the way by employing another rectangular array of O(wm) processors.

Proceedings ArticleDOI
01 Mar 1984
TL;DR: This paper shows how ill-conditioning arises in adaptive beam processing, derives optimum array weights in terms of generalized matrix inverse, and applies an eigenvalue preprocessor to correct the ill-conditioning.
Abstract: In many signal processing applications one must invert an estimated correlation matrix which can be ill-conditioned. Ill-conditioning arises in adaptive beamforming when the number of sensors in an array is greater than the number of point sources. Ill-conditioning amplifies estimation, arithmetic, and other system errors. It is well known in the numerical analysis literature, that singular value decomposition is the only reliable method for detection and correction of ill-conditioning. This paper shows how ill-conditioning arises in adaptive beam processing, derives optimum array weights in terms of generalized matrix inverse, and applies an eigenvalue preprocessor to correct the ill-conditioning. The eigenvalue preprocessor can be considered to be a beamformer that reduces the dimensionality of the array processing problem to the dimensions required to effectively process independent point sources.
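One way to read the eigenvalue preprocessor is as a rank-reduced generalized inverse: invert the estimated correlation matrix only over its dominant source subspace so that small noise eigenvalues are never amplified. The following is our own simplified formulation of that idea, not the paper's algorithm:

```python
import numpy as np

def rank_reduced_weights(R, d, num_sources):
    """Adaptive array weights w = R^+ d, inverting the estimated
    correlation matrix R only over its num_sources dominant
    eigencomponents, so that the tiny noise eigenvalues of an
    ill-conditioned R do not amplify estimation errors."""
    evals, V = np.linalg.eigh(R)                 # ascending eigenvalues
    idx = np.argsort(evals)[::-1][:num_sources]  # dominant subspace
    Vs = V[:, idx]
    return Vs @ ((Vs.T @ d) / evals[idx])
```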


Journal ArticleDOI
Robert E. Tarjan1
TL;DR: The problem of determining whether a given graph is decomposable is NP-complete, which makes the existence of a polynomial-time decomposition algorithm unlikely, and the possible benefits of having an input-output decomposition may be outweighed by the difficulty of finding one.
Abstract: Pichai, Sezar, and Siljak have studied a decomposition technique for acyclic graphs, called input-output decomposition, that simplifies the analysis of dynamic systems. We show that the problem of determining whether a given graph is decomposable is NP-complete. Since this makes the existence of a polynomial-time decomposition algorithm unlikely, the possible benefits of having an input-output decomposition may be outweighed by the difficulty of finding one.

Book ChapterDOI
01 Jan 1984
TL;DR: It is shown that the proposed method takes a totally different approach, in that it manipulates all measurements geometrically at the same time, and that it can easily be applied under realistic clinical conditions.
Abstract: Most measurements of human electrical activity contain large amounts of electrical heart activity (electrocardiogram, ECG). Whenever other sources of lower energy are of interest, a need arises to eliminate this ECG. Some applications allow a frequency domain operation (e.g. gastro-intestinal slow wave detection), or even a time domain operation (e.g. blanking of the QRS complex of an ECG). Usually none of these methods are adequate. The proposed method uses a totally different approach, in that it manipulates all measurements geometrically at the same time. It decomposes the measurements on the basis of an oriented energy, by means of the singular value decomposition. After an introduction to the problem and a definition of some useful concepts, the basic idea of the method is presented in a low dimensional geometrical example, and generalized to higher dimensions. It is shown that the method can easily be applied under realistic clinical conditions. A discussion is given about: the influence of noise; considerations on correlations between source signals; the number and location of electrodes; certain dynamical problems like interference. Results on real data are given, in order to illustrate and verify the main features of the method: the multidimensional approach to the estimation of equivalent dipole vector sources; the insensitivity of ECG elimination quality to actual electrode positions; the minimal rank representation of the source signals and the resulting signal to noise ratio improvement; 50 Hz or 60 Hz interference elimination.

Proceedings ArticleDOI
01 Mar 1984
TL;DR: A discrete-time, discrete-frequency model for image restoration using the singular value decomposition of the imaging matrix is studied and it is shown that the resulting singular vectors have many of the properties possessed by Slepian's discrete prolate spheroidal sequences.
Abstract: We study in this paper a discrete-time, discrete-frequency model for image restoration using the singular value decomposition of the imaging matrix. We show that the resulting singular vectors have many of the properties possessed by Slepian's discrete prolate spheroidal sequences (DPSS). They are doubly orthogonal, they are bandlimited, they satisfy an equation very similar to that satisfied by the DPSSs, and they possess an extremal energy-concentration property. These properties continue to hold with appropriate modification for bandpass as well as lowpass operations.

Proceedings ArticleDOI
01 Aug 1984
TL;DR: This paper concerns the computation of the singular value decomposition using systolic arrays and two different linear time algorithms are presented.
Abstract: This paper concerns the computation of the singular value decomposition using systolic arrays. Two different linear time algorithms are presented.

Journal ArticleDOI
TL;DR: Singular value decomposition is applied to assess the influence of finite-accuracy data and finite-resolution computing equipment on results of least-squares (LS) dynamic system identification as mentioned in this paper.
Abstract: Singular value decomposition is applied to assess the influence of finite-accuracy data and finite-resolution computing equipment on results of least-squares (LS) dynamic system identification. Singular value decomposition analysis also suggests new, more practical definitions of identifiability and persistent excitation.

Proceedings ArticleDOI
01 Mar 1984
TL;DR: The SVD approach is seen to yield the best extrapolation of the known minimum-norm least-squares methods and can be extended to higher dimensions, as illustrated by the two-dimensional case.
Abstract: The problem of extrapolating a band-limited signal in discrete-time is viewed as one of solving an underdetermined system of linear equations. The matrix to be inverted is also generally ill-conditioned. Singular value decomposition (SVD) provides both a means for implementing the inverse and a method for improving the numerical stability of the problem. An expression for the mean-square error incurred in solving a system of linear equations via SVD is derived. This expression can be used to estimate the number of singular values needed to form the inverse. Further examination of the expression indicates that, in the case of an oversampled signal, decimation can be applied without significantly degrading the extrapolation. The results developed for the one-dimensional case can be extended to higher dimensions, as illustrated by the two-dimensional case. Examples of the SVD approach to extrapolation are given, along with examples using other extrapolation techniques for comparison. The SVD approach is seen to yield the best extrapolation of the known minimum-norm least-squares methods.

Journal ArticleDOI
TL;DR: In this article, the singular value decomposition of a compact operator is defined as a measure of the maximum amount of recoverable information in such an inversion, terming it the essential dimension of the operator.
Abstract: Remote determination of the refractive index structure parameter from experimental data by Tatarski's integral equation requires numerical inversion of a compact linear operator. Such problems are known to be ill posed; the consequent ill conditioning of the inversions has led to a large degree of uncertainty in reported reconstructions. In this paper we use the singular value decomposition of a compact operator to define a measure of the maximum amount of recoverable information in such an inversion, terming it the essential dimension of the operator. We propose the use of filtered singular value decomposition as the numerical algorithm that will recover most of the information and minimize uncertainty. A detailed study of the operators appearing in determination of horizontal (wave front is spherically symmetric) and vertical (wave front is plane) profiles of the atmosphere is undertaken to determine their essential dimensions by both analysis and computation. The results indicate that for typical parameter values, both operators have small essential dimensions, with vertical profiles being harder to reconstruct than horizontal profiles.
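Filtered SVD in this spirit amounts to discarding singular components that fall below the noise floor; the count of retained components then plays the role of the essential dimension. A schematic version, with our own names and a simple hard-threshold filter rule:

```python
import numpy as np

def filtered_svd_solve(K, g, noise_level):
    """Regularized inversion of an ill-posed discretized operator K:
    invert only the singular components whose singular value exceeds
    the noise level.  The number retained is a crude measure of how
    much information the data can support."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    keep = s > noise_level
    f = np.zeros_like(s)
    f[keep] = 1.0 / s[keep]          # filtered inverse singular values
    return Vt.T @ (f * (U.T @ g)), int(keep.sum())
```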

Journal ArticleDOI
TL;DR: In this article, the authors divide a region of interest into a few layers and represent the perturbation of wave slowness in each layer by a series of Chebyshev polynomials.
Abstract: Summary. The computational effectiveness of travel-time inversion methods depends on the parameterization of a 3-D velocity structure. We divide a region of interest into a few layers and represent the perturbation of wave slowness in each layer by a series of Chebyshev polynomials. Then a relatively complex velocity structure can be described by a small set of parameters that can be accurately evaluated by a linearized inversion of travel-time residuals. This method has been applied to artificial and real data at small epicentral distances and in the teleseismic distance range. The corresponding matrix equations were solved using singular value decomposition. The results suggest that the method combines resolution with computational convenience.

Proceedings ArticleDOI
P. Ang1, M. Morf
01 Mar 1984
TL;DR: An algorithm is presented that can obtain the QR iterates at step 1, 2, 4, 8, 16, etc for every sweep over the matrix and can be implemented on a highly regular array of computing elements with only neighborhood communication between processors.
Abstract: We present in this paper an algorithm that doubles-up on Francis's QR algorithm. By this we mean that we can obtain the QR iterates at step 1, 2, 4, 8, 16, etc for every sweep over the matrix. We also show that the algorithm can be implemented on a highly regular array of computing elements with only neighborhood communication between processors. Simulations are presented which suggest that the algorithm is stable.

01 Mar 1984
TL;DR: In this paper, a perturbation theory for the CS decomposition was developed and used to analyze the total least squares problem, the Golub-Klema-Stewart subset selection algorithm, the algebraic Riccati equation, and the generalized singular value decomposition.
Abstract: The gist of the CS decomposition is that the blocks of a partitioned orthogonal matrix have related singular value decompositions. In this paper we develop a perturbation theory for the CS decomposition and use it to analyze (a) the total least squares problem, (b) the Golub-Klema-Stewart subset selection algorithm, (c) the algebraic Riccati equation, and (d) the generalized singular value decomposition.
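Of the four applications, total least squares is the easiest to illustrate: the TLS solution of A x ≈ b comes from the right singular vector of the compound matrix [A | b] associated with its smallest singular value. A minimal sketch under the standard genericity assumptions (unique smallest singular value, nonzero last component):

```python
import numpy as np

def tls_solve(A, b):
    """Total least squares solution of A x ~ b via the SVD of [A | b]:
    errors are allowed in both A and b, and the solution is read off
    the right singular vector for the smallest singular value."""
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                 # right singular vector, smallest sigma
    return -v[:-1] / v[-1]
```

For a consistent system this reproduces the exact solution; when both A and b are noisy it differs from ordinary least squares, which attributes all error to b.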


Book ChapterDOI
01 Jan 1984
TL;DR: The method presented is a power method for calculating the largest triplets of the SVD of a matrix A when multiplication is "cheap", and it is more efficient than Golub's algorithm if only the dominant part of the SVD of a long sequence of slowly varying matrices is needed.
Abstract: In this paper we present an algorithm, ASVD, for the computation of the singular value decomposition (SVD). The method presented is a power method for calculating the largest triplets of the SVD of a matrix A when multiplication is "cheap". It bears a strong similarity to the power method for finding the eigenvalues of a symmetric matrix M. The triplets are found one after another, and some deflation techniques (orthogonalization) are also used. The algorithm can take advantage of the SVD of slightly different matrices, and it is based on the geometric properties of the SVD. Tests have shown that it is more efficient than Golub's algorithm if only the dominant part of the SVD of a long sequence of slowly varying matrices is needed. Storage efficiency is also obtained whenever the matrices are structured.

Proceedings ArticleDOI
19 Mar 1984
TL;DR: This work considers the evaluation of the order of a linear system transfer function represented as an AR or an ARMA model based on the use of the singular value decomposition technique for the efficient determination of the rank of a matrix.
Abstract: We consider the evaluation of the order of a linear system transfer function represented as an AR or an ARMA model, based on the use of the singular value decomposition technique for the efficient determination of the rank of a matrix. Inputs to the system are modeled as binary-valued random data and outputs of the system are observed in the presence of uncorrelated noises. Results are obtained for the case when the relevant statistics given in autocorrelation and crosscorrelation values are assumed to be available, as well as the case when the required statistics are computed explicitly from the sequence sample values. Various numerical examples are considered, and the method is shown to be more efficient than the Woodside determinant ratio approach.