
Showing papers on "Singular value decomposition published in 1977"


Journal ArticleDOI
TL;DR: In this paper, a geologically relevant layer sequence with parameters (ρj, dj) is adjusted by trial and error to roughly reproduce the measured curve, and the resulting layer sequence is used as the starting model for an iterative least squares procedure with singular value decomposition.
Abstract: The proposed system works as follows:

1. By a trial-and-error procedure using a graphic display terminal, a geologically relevant layer sequence with parameters (ρj, dj) is adjusted to yield roughly the measured curve.

2. The resulting layer sequence is used as the starting model for an iterative least squares procedure with singular value decomposition. Minimization of the sum of the squares of the logarithmic differences between measured and calculated values, with respect to the logarithms of the resistivities and thicknesses as parameters, linearizes the problem to a great extent, with two important implications: (a) a considerable increase in speed (the number of iterations goes down), thus making it cheap to achieve the optimum solution; (b) the confidence surfaces in parameter space are well approximated by the hyper-ellipsoids defined by the eigenvalues and eigenvectors of the normal equations. Since these are known from the singular value decomposition, we in fact know all possible solutions compatible with the measured curve and the geological concept.

3. It is possible to "freeze" any combination of parameters at predetermined values. Thus extra knowledge and/or hypotheses are easily incorporated and can be tested by rerunning step 2.

The overall computing time for a practical case is of the order of 10 sec on a CDC 6400.
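The iterative step in (2) can be sketched with a toy model: a Gauss-Newton update in log-parameter space solved through the SVD of the Jacobian, whose singular values and vectors also give the eigen-structure of the normal equations. The forward function below is a made-up stand-in, not the paper's geoelectric kernel.

```python
import numpy as np

# Toy stand-in for the geoelectric forward model: it maps log-parameters
# (think log-resistivities and log-thicknesses) to a predicted curve.
# The real 1-D kernel is not given in the abstract; this is illustrative.
def forward(logp):
    return np.array([np.sin(logp[0]) + logp[1],
                     logp[0] * logp[1],
                     np.exp(0.3 * logp[0]) - logp[1]])

def jacobian(logp, h=1e-6):
    # Central finite differences, adequate for this sketch.
    J = np.empty((3, 2))
    for j in range(2):
        d = np.zeros(2)
        d[j] = h
        J[:, j] = (forward(logp + d) - forward(logp - d)) / (2.0 * h)
    return J

true_logp = np.array([0.8, 1.2])
data = forward(true_logp)            # synthetic "measured" curve

logp = np.array([0.5, 1.0])          # starting model from trial and error
for _ in range(20):                  # iterative least squares in log space
    r = data - forward(logp)
    U, s, Vt = np.linalg.svd(jacobian(logp), full_matrices=False)
    logp = logp + Vt.T @ ((U.T @ r) / s)   # SVD-based Gauss-Newton step

# The same SVD hands us the eigenvalues (s**2) and eigenvectors (rows of
# Vt) of the normal equations, i.e. the axes of the confidence ellipsoid.
```
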

126 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of determining the correct number of terms to take in an exponential sum approximation to equally spaced data is considered, and the Prony algorithm is used, together with singular value decomposition techniques.
Abstract: The problem of determining the correct number of terms to take in an exponential sum approximation to equally spaced data is considered. The Prony algorithm is used, together with singular value decomposition techniques. A criterion is established which is useful in estimating the number of terms when the data are subject to normally distributed, zero mean noise.
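The rank-revealing idea behind such a criterion can be sketched as follows: for equally spaced samples of a p-term exponential sum, a Hankel matrix built from the data has rank p, so the number of dominant singular values estimates the number of terms. The signal and threshold below are illustrative, not the paper's.

```python
import numpy as np

# Equally spaced, noise-free samples of a 2-term exponential sum; with
# noisy data the trailing singular values sit at the noise floor rather
# than at rounding level, and the criterion looks for that gap.
t = np.arange(40)
y = 3.0 * 0.9 ** t + 1.5 * 0.6 ** t

# Hankel matrix of the data, as used in the Prony method: its rank
# equals the number of exponential terms.
m = 6
H = np.array([y[i:i + m] for i in range(len(y) - m + 1)])

s = np.linalg.svd(H, compute_uv=False)
est_terms = int(np.sum(s > 1e-8 * s[0]))   # count dominant singular values
```
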

26 citations


ReportDOI
TL;DR: In this paper, the problem of finding a set of linearly independent columns of A that span a good approximation to the column space of a matrix B whose rank is less than n is studied.
Abstract: In this paper we shall be concerned with the following problem. Let A be an m x n matrix with m being greater than or equal to n, and suppose that A is near (in a sense to be made precise later) a matrix B whose rank is less than n. Can one find a set of linearly independent columns of A that span a good approximation to the column space of B? The solution of this problem is important in a number of applications. In this paper we shall be chiefly interested in the case where the columns of A represent factors or carriers in a linear model which is to be fit to a vector of observations b. In some such applications, where the elements of A can be specified exactly (e.g. the analysis of variance), the presence of rank degeneracy in A can be dealt with by explicit mathematical formulas and causes no essential difficulties. In other applications, however, the presence of degeneracy is not at all obvious, and the failure to detect it can result in meaningless results or even the catastrophic failure of the numerical algorithms being used to solve the problem. The organization of this paper is the following. In the next section we shall give a precise definition of approximate degeneracy in terms of the singular value decomposition of A. In Section 3 we shall show that under certain conditions there is associated with A a subspace that is insensitive to how it is approximated by various choices of the columns of A, and in Section 4 we shall apply this result to the solution of the least squares problem. Sections 5, 6, and 7 will be concerned with algorithms for selecting a basis for the stable subspace from among the columns of A.
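A minimal sketch of the problem: approximate degeneracy is detected from a small trailing singular value, and a well-conditioned column subset is then chosen greedily. The greedy pivoting below is one classical strategy, not necessarily the algorithms of Sections 5 through 7.

```python
import numpy as np

# A 6 x 4 matrix that is "near" a rank-3 matrix: the last column is
# almost a combination of the first two.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))
A = np.hstack([A, A[:, :1] + A[:, 1:2] + 1e-10 * rng.standard_normal((6, 1))])

# Approximate degeneracy shows up as a small trailing singular value.
s = np.linalg.svd(A, compute_uv=False)
num_rank = int(np.sum(s > 1e-6 * s[0]))   # numerical rank

def pivoted_columns(A, k):
    """Greedy column selection: repeatedly take the column with the
    largest norm after projecting out the columns chosen so far."""
    R = A.astype(float).copy()
    chosen = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        chosen.append(j)
        q = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(q, q @ R)            # deflate the chosen direction
    return sorted(chosen)

keep = pivoted_columns(A, num_rank)        # independent columns of A
```
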

19 citations




Journal ArticleDOI
TL;DR: In this article, the method of singular value decomposition was applied to the analysis of boron NMR spectra in glass, which showed that there are a large number of solutions to any NMR spectral analysis problem.
Abstract: We have applied the method of singular value decomposition to the analysis of boron NMR spectra in glass. This procedure, which is useful for handling least squares problems, clearly shows that any boron NMR spectral analysis problem admits a large number of solutions, which limits the usefulness of the technique as a structural tool.
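The non-uniqueness the authors describe can be illustrated with two nearly collinear basis lineshapes (made-up Gaussians, not actual boron NMR lineshapes): moving along the right singular vector belonging to the smallest singular value changes the fitted parameters drastically while barely changing the fit.

```python
import numpy as np

# Two nearly identical Gaussian "lineshapes" (illustrative only) make
# the least squares design matrix badly ill-conditioned.
x_axis = np.linspace(-1.0, 1.0, 50)
def line(center):
    return np.exp(-((x_axis - center) ** 2) / 0.1)

A = np.column_stack([line(0.0), line(0.01)])   # nearly collinear columns
b = A @ np.array([1.0, 1.0])                   # synthetic "spectrum"

U, s, Vt = np.linalg.svd(A, full_matrices=False)
x0 = Vt.T @ ((U.T @ b) / s)                    # least squares solution

# Moving along the right singular vector with the smallest singular
# value changes the parameters a lot but the predicted spectrum only
# slightly: an entire family of nearly equivalent "solutions".
x_alt = x0 + 5.0 * Vt[-1]
misfit0 = np.linalg.norm(A @ x0 - b)
misfit_alt = np.linalg.norm(A @ x_alt - b)
```
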

7 citations


Journal ArticleDOI
TL;DR: In this paper, the singular value decomposition is used to analyze a set of data which relates to employment levels, and it is suggested that a previous Monte Carlo study of this problem is noninformative, misleading, and possibly irrelevant.
Abstract: Issue is taken with a previous article which examines Longley's analysis of computation error in regression models. Specifically, the singular value decomposition is used to analyze a set of data which relates to employment levels. This decomposition is used to analyze near multicollinearity in the data. It is suggested that a previous Monte Carlo study of this problem is noninformative, misleading, and possibly irrelevant.
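The diagnostic in question can be sketched on a small made-up design matrix (the Longley employment data are not reproduced here): near multicollinearity shows up as a collapsing trailing singular value, i.e. a huge condition number.

```python
import numpy as np

# Made-up design matrix with near-collinear regressors, a stand-in for
# the Longley employment data.
rng = np.random.default_rng(2)
x1 = rng.standard_normal(16)
x2 = 2.0 * x1 + 1e-7 * rng.standard_normal(16)   # nearly a multiple of x1
X = np.column_stack([np.ones(16), x1, x2])

# Near multicollinearity: the smallest singular value collapses, so the
# condition number explodes and regression coefficients become unstable.
s = np.linalg.svd(X, compute_uv=False)
cond = s[0] / s[-1]
```
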

6 citations


ReportDOI
05 Apr 1977
TL;DR: In this article, the authors demonstrate the effect of singular value decomposition with truncation (SVDT), a Hankel transformation with damping, and the Tikhonov regularization procedure on noise in the data, and show that, in general, regularization is the most natural setting for mollifying the effects of such noise.
Abstract: Consider ill-posed problems of the form g(t) = ∫₀¹ K(t,s) f(s) ds and their discrete approximations obtained by quadrature, Ax = b. Assume that the desired solution f is smooth and that the data g are measured experimentally and contain highly oscillatory noise. Theorems and examples demonstrate the effect of each of three procedures on such noise in the data: singular value decomposition with truncation (SVDT), a Hankel transformation with damping, and the Tikhonov regularization procedure. It is demonstrated that, in general, regularization is the most natural setting for mollifying the effects of such noise. However, for certain problems SVDT is equally suitable, and may in fact be better if the rate of convergence of the regularization procedure is too slow.
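A compact sketch of the comparison, with an illustrative Gaussian kernel, smooth solution, and oscillatory noise (none of them taken from the report):

```python
import numpy as np

# Quadrature discretization of g(t) = integral of K(t,s) f(s) ds with a
# smoothing (Gaussian) kernel; kernel, noise level, and cutoffs are
# illustrative stand-ins.
n = 60
grid = (np.arange(n) + 0.5) / n
K = np.exp(-((grid[:, None] - grid[None, :]) ** 2) / 0.02) / n
f_true = np.sin(np.pi * grid)                      # smooth desired solution
g = K @ f_true + 1e-4 * np.cos(40 * np.pi * grid)  # highly oscillatory noise

U, s, Vt = np.linalg.svd(K)
coeffs = U.T @ g

# SVDT: hard truncation of components below (roughly) the noise level.
k = int(np.sum(s > 1e-4 * s[0]))
f_svdt = Vt[:k].T @ (coeffs[:k] / s[:k])

# Tikhonov: smooth filter factors s**2 / (s**2 + alpha) instead of a cut.
alpha = 1e-8
f_tik = Vt.T @ (s * coeffs / (s ** 2 + alpha))

# Naive inversion amplifies the oscillatory noise enormously.
f_naive = np.linalg.solve(K, g)
```

Both filtered solutions stay close to the smooth f_true, while the unfiltered solve is dominated by amplified noise.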

5 citations


01 Feb 1977
TL;DR: By first triangularizing the matrix A by Householder transformations before bidiagonalizing it, and by accumulating some left transformations on an n x n array, the resulting algorithm is often more efficient than the Golub-Reinsch algorithm, especially for matrices with considerably more rows than columns (m >> n), such as in least squares applications.
Abstract: The best-known and most widely used algorithm for computing the Singular Value Decomposition (SVD) of an m x n rectangular matrix A is the Golub-Reinsch algorithm [1971]. In this paper, it is shown that by (1) first triangularizing the matrix A by Householder transformations before bidiagonalizing it, and (2) accumulating some left transformations on an n x n array instead of on an m x n array, the resulting algorithm is often more efficient than the Golub-Reinsch algorithm, especially for matrices with considerably more rows than columns (m >> n), such as in least squares applications. The two algorithms are compared in terms of operation counts, and computational experiments that have been carried out verify the theoretical comparisons. The modified algorithm is more efficient even when m is only slightly greater than n, and in some cases can achieve as much as 50% savings when m >> n. If accumulation of left transformations is desired, then $n^2$ extra storage locations are required (relatively few if m >> n), but otherwise no extra storage is required. The modified algorithm uses only orthogonal transformations and is therefore numerically stable. In the Appendix, we give the Fortran code of a hybrid method which automatically selects the more efficient of the two algorithms depending upon the input values of m and n.
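The two-stage idea can be sketched in NumPy, which exposes QR and SVD but not the bidiagonalization step itself, so the SVD of the small triangular factor stands in for the second stage:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 1000, 40                     # many more rows than columns (m >> n)
A = rng.standard_normal((m, n))

# Stage 1: triangularize A first, so the expensive part of the SVD runs
# on a small n x n matrix instead of the full m x n matrix.
Q, R = np.linalg.qr(A)              # reduced QR: Q is m x n, R is n x n

# Stage 2: SVD of the small triangular factor; left singular vectors of
# A are recovered by accumulating through Q.
Ur, s, Vt = np.linalg.svd(R)
U = Q @ Ur                          # left singular vectors of A

s_direct = np.linalg.svd(A, compute_uv=False)   # for comparison
```
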

4 citations


Journal ArticleDOI
TL;DR: Application of the asymptotic singular decomposition for compacting tabular data to the OMEGA Propagation Correction Tables will be presented along with a discussion on the accuracy of the first order solution.
Abstract: A method known as asymptotic singular decomposition for compacting tabular data will be discussed. Considering a table as a matrix, the method decomposes the matrix into its associated singular values and singular vectors. Once decomposed, an element of the original matrix can be reconstructed to any degree of accuracy. Depending on accuracy requirements, the most practical benefit of the method arises when the first-order solution is satisfactory; in this case, an M x N table can be reduced to one of only M + N entries. The method can be applied to almost any navigational table. In this paper, application of the method to the OMEGA Propagation Correction (PPC) Tables will be presented, along with a discussion of the accuracy of the first-order solution. The method should prove extremely useful in reducing the amount of memory needed to store a navigational table in a mini- or micro-computer, or possibly even in a hand-held calculator.
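A first-order sketch on a made-up smooth table (not an actual PPC table): the leading singular triple reconstructs an M x N table from only M + N stored numbers.

```python
import numpy as np

# Made-up smooth 24 x 36 table: rows and columns vary smoothly, so a
# first-order (rank-1) decomposition captures it well.
rows = np.linspace(1.0, 2.0, 24)
cols = np.linspace(0.5, 1.5, 36)
T = np.outer(rows, cols) * (1.0 + 0.001 * np.sin(np.arange(24)[:, None] + np.arange(36)))

U, s, Vt = np.linalg.svd(T, full_matrices=False)
u1 = s[0] * U[:, 0]     # M stored numbers
v1 = Vt[0]              # N stored numbers (M + N = 60, versus M * N = 864)
T1 = np.outer(u1, v1)   # first-order reconstruction of any table element

rel_err = np.linalg.norm(T - T1) / np.linalg.norm(T)
```
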

1 citation


Journal ArticleDOI
TL;DR: The generalized symmetric eigenproblem Ax = λBx is required in many multivariate statistical models, viz. canonical correlation, discriminant analysis, the multivariate linear model, and limited information maximum likelihood.
Abstract: The solution of the generalized symmetric eigenproblem Ax = λBx is required in many multivariate statistical models, viz. canonical correlation, discriminant analysis, the multivariate linear model, and limited information maximum likelihood. The problem can be solved by two efficient numerical algorithms: Cholesky decomposition or singular value decomposition. Practical considerations for implementation are also discussed.
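The Cholesky route can be sketched as follows: with B = LLᵀ, the problem Ax = λBx reduces to the standard symmetric eigenproblem (L⁻¹AL⁻ᵀ)y = λy with x = L⁻ᵀy. The matrices below are random illustrative inputs.

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((5, 5))
A = M + M.T                          # symmetric A
N = rng.standard_normal((5, 5))
B = N @ N.T + 5.0 * np.eye(5)        # symmetric positive definite B

# Cholesky route: B = L L^T turns A x = lam B x into the standard
# symmetric eigenproblem (L^-1 A L^-T) y = lam y, with x = L^-T y.
L = np.linalg.cholesky(B)
C = np.linalg.solve(L, np.linalg.solve(L, A).T).T   # L^-1 A L^-T
lam, Y = np.linalg.eigh(C)
X = np.linalg.solve(L.T, Y)          # back-transform the eigenvectors

resid = np.linalg.norm(A @ X - (B @ X) * lam)       # check A x = lam B x
```

The back-transformed eigenvectors come out B-orthonormal, which is the normalization the statistical applications typically want.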