
Showing papers on "Non-linear least squares published in 1973"


01 Jan 1973
TL;DR: In this paper, the least squares fit of nonlinear models of the form η(a, α; t) = Σⱼ aⱼφⱼ(α; t) to data (tᵢ, yᵢ), i = 1, …, m, is considered; by minimizing the modified functional r₂(α) = ‖y − Φ(α)Φ(α)⁺y‖₂², the nonlinear parameters α can be optimized first and the linear parameters a obtained a posteriori.
Abstract: For given data (tᵢ, yᵢ), i = 1, …, m, we consider the least squares fit of nonlinear models of the form η(a, α; t) = Σⱼ₌₁ⁿ aⱼφⱼ(α; t). It is shown that by defining the matrix Φ(α) with entries {Φ(α)}ᵢⱼ = φⱼ(α; tᵢ) and the modified functional r₂(α) = ‖y − Φ(α)Φ(α)⁺y‖₂², it is possible to optimize first with respect to the nonlinear parameters α, and then to obtain, a posteriori, the optimal linear parameters a. Here Φ(α)⁺ is the Moore-Penrose generalized inverse of Φ(α). We develop formulas for the Fréchet derivative of the orthogonal projectors associated with Φ(α), and also for Φ(α)⁺, under the hypothesis that Φ(α) is of constant (though not necessarily full) rank. Detailed algorithms are presented which make extensive use of well-known reliable linear least squares techniques, and numerical results and comparisons are given. These results are generalizations of those of H. D. Scolnik (20) and Guttman, Pereyra and Scolnik (9).

1,083 citations
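The separation of linear and nonlinear parameters described above can be sketched numerically. The following is a minimal illustration of the variable projection idea only, not the paper's algorithm (which differentiates the orthogonal projector); the exponential-plus-constant model, the simulated data, and the crude grid search over α are all assumptions made for this example:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 40)
y = 2.0 * np.exp(-1.5 * t) + 0.5 + 0.01 * rng.standard_normal(t.size)

def r2(alpha):
    # Columns of Phi(alpha); the linear parameters a are eliminated by
    # projecting y onto the range of Phi(alpha) via a linear solve.
    Phi = np.column_stack([np.exp(-alpha * t), np.ones_like(t)])
    a, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.sum((y - Phi @ a) ** 2)          # the modified functional

# Optimize the nonlinear parameter alone (crude grid search here).
alphas = np.linspace(0.1, 5.0, 500)
alpha_hat = min(alphas, key=r2)

# Recover the linear parameters a posteriori.
Phi = np.column_stack([np.exp(-alpha_hat * t), np.ones_like(t)])
a_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

Eliminating the linear parameters this way leaves a lower-dimensional, better-behaved optimization over α alone, which is the practical appeal of the approach.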


Journal ArticleDOI
TL;DR: It is demonstrated that, for convolution-type models of image restoration, special properties of the linear system of equations can be used to reduce the computational requirements.
Abstract: Constrained least squares estimation is a technique for the solution of integral equations of the first kind. The problem of image restoration requires the solution of an integral equation of the first kind. However, application of constrained least squares estimation to image restoration requires the solution of extremely large linear systems of equations. In this paper we demonstrate that, for convolution-type models of image restoration, special properties of the linear system of equations can be used to reduce the computational requirements. The necessary computations can be carried out by the fast Fourier transform, and the constrained least squares estimate can be constructed in the discrete frequency domain. A practical procedure for constrained least squares estimation is presented, and two examples are shown as output from a program for the CDC 7600 computer which performs the constrained least squares restoration of digital images.

590 citations
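The frequency-domain construction can be sketched in a few lines. This is a hedged illustration of the general idea, not the paper's CDC 7600 program; the test image, the circulant 3×3 box blur, the discrete Laplacian as the constraint operator, and the weight gamma are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
x = np.zeros((n, n)); x[24:40, 24:40] = 1.0      # simple test image

h = np.zeros((n, n)); h[:3, :3] = 1.0 / 9.0      # circulant 3x3 box blur
H = np.fft.fft2(h)
y = np.real(np.fft.ifft2(H * np.fft.fft2(x)))    # blurred image
y += 0.001 * rng.standard_normal((n, n))         # small additive noise

# Discrete Laplacian as the smoothness-constraint operator C.
c = np.zeros((n, n))
c[0, 0] = 4.0
c[0, 1] = c[1, 0] = c[0, -1] = c[-1, 0] = -1.0
C = np.fft.fft2(c)

# Constrained least squares filter, applied entirely in the frequency
# domain: X = conj(H) * Y / (|H|^2 + gamma * |C|^2).
gamma = 1e-3
X_hat = np.conj(H) * np.fft.fft2(y) / (np.abs(H) ** 2 + gamma * np.abs(C) ** 2)
x_hat = np.real(np.fft.ifft2(X_hat))
```

The gamma |C|² term keeps the filter bounded where |H| is nearly zero, which is exactly where unregularized inversion would amplify noise.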




Journal ArticleDOI
A. Buse1
TL;DR: Goodness of Fit in Generalized Least Squares Estimation, The American Statistician, Vol. 27, No. 3, pp. 106-108.
Abstract: (1973) Goodness of Fit in Generalized Least Squares Estimation. The American Statistician: Vol. 27, No. 3, pp. 106-108.

239 citations


Journal ArticleDOI
TL;DR: In this article, a method for finding the coefficients of an nth-order linear recursive digital filter, which gives the best least squares approximation to a desired pulse response over a finite interval, is presented.
Abstract: A method for finding the coefficients of an nth-order linear recursive digital filter, which gives the best least squares approximation to a desired pulse response over a finite interval, is presented. A relationship is derived between the approximating error corresponding to an optimal set of numerator coefficients and the error produced by an overdetermined set of linear equations, which is a function of the denominator coefficients only. This relation provides a computational algorithm for calculating the optimal coefficients by iteratively solving weighted sets of linear equations in terms of the denominator coefficients only. Both theoretical and numerical results are presented. Also, bounds are found on the interval in which the norm of the optimum error must lie.

133 citations



Journal ArticleDOI
TL;DR: Two criteria for fitting a linear function to a set of points are considered, viz., least sum of absolute deviations and the least maximum absolute deviation, both of which give rise to a linear program.
Abstract: The problem considered here is that of fitting a linear function to a set of points. The criterion normally used for this is least squares. We consider two alternatives, viz., least sum of absolute deviations (called the L1 criterion) and the least maximum absolute deviation (called the Chebyshev criterion). Each of these criteria gives rise to a linear program. We develop some theoretical properties of the solutions and in the light of these, examine the suitability of these criteria for linear estimation. Some of the estimates obtained by using them are shown to be counter-intuitive.

83 citations
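The L1 criterion's linear program can be written down directly. A minimal sketch, assuming a straight-line fit and made-up data with one outlier (illustrating how sharply such estimates can differ from least squares):

```python
import numpy as np
from scipy.optimize import linprog

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 1.0, 2.1, 2.9, 4.0, 15.0])   # last point is an outlier
m = x.size

# Variables: [a, b, e_1..e_m]; minimize sum(e_i)
# subject to |y_i - (a + b*x_i)| <= e_i, written as two inequalities.
c = np.concatenate([[0.0, 0.0], np.ones(m)])
D = np.column_stack([np.ones(m), x])             # design for a + b*x
A_ub = np.block([[D, -np.eye(m)],                #  a + b*x_i - e_i <=  y_i
                 [-D, -np.eye(m)]])              # -a - b*x_i - e_i <= -y_i
b_ub = np.concatenate([y, -y])
bounds = [(None, None), (None, None)] + [(0, None)] * m

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
a_hat, b_hat = res.x[:2]    # the L1 line is barely moved by the outlier
```

The Chebyshev criterion is a linear program of the same shape, with a single deviation variable e bounding all residuals and objective e.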


Journal ArticleDOI
TL;DR: In this paper, the performance of the ML method, the MINQUE method and several other two-step Generalized Least Squares methods in estimating the slope coefficient in a variance components model was investigated by means of Monte Carlo experiments.
Abstract: The article investigates by means of Monte Carlo experiments the performance of the ML method, the MINQUE method and several other two-step Generalized Least Squares methods in estimating the slope coefficient in a variance components model. It concludes that in models with no lagged dependent variables there is nothing much to choose among these estimators.

76 citations


01 Jan 1973
TL;DR: In this article, an asymptotic expansion of the distribution function of the k-class estimate is given in terms of an Edgeworth or Gram-Charlier series (of which the leading term is the normal distribution).
Abstract: The limited information maximum likelihood and two-stage least squares estimates have the same asymptotic normal distribution; the ordinary least squares estimate has another asymptotic normal distribution. This paper considers more accurate approximations to the distributions of the so-called "k-class" estimates. An asymptotic expansion of the distribution of such an estimate is given in terms of an Edgeworth or Gram-Charlier series (of which the leading term is the normal distribution). The development also permits expression of the exact distribution in several forms. The distributions of the two-stage least squares and ordinary least squares estimates are transformed to doubly-noncentral F distributions. Numerical comparisons are made between the approximate distributions and exact distributions calculated by the second author. Several methods have been proposed for estimating the coefficients of a single equation in a complete system of simultaneous structural equations, including limited information maximum likelihood (Anderson and Rubin [1]), two-stage least squares (Basmann [3] and Theil [9]), and ordinary least squares. Under appropriate general conditions the first two methods yield consistent estimates; the two sets of estimates normalized by the square root of the sample size have the same limiting joint normal distributions (Anderson and Rubin [2]). In special cases the exact distributions of the estimates have been obtained. In particular, when the predetermined variables are exogenous, two endogenous variables occur in the relevant equation, and the coefficient of one endogenous variable is specified to be one, the exact distribution of the estimate of the coefficient of one endogenous variable has been obtained by Richardson [7] and Sawa [8] in the case of two-stage least squares and by Mariano and Sawa [6] in the case of limited information maximum likelihood.
The exact distributions involve multiple infinite series and are hard to interpret, but Sawa has graphed some of the densities of the two-stage least squares estimate on the basis of calculations from an infinite series expression. The main result of this paper is to obtain an asymptotic expansion of the distribution function of the so-called k-class estimate (which includes the twostage least squares estimate and the ordinary least squares estimate) in the case of two endogenous variables. The density of the approximate distribution is a normal density multiplied by a polynomial. The first correction term to the normal distribution involves a cubic divided by the square root of the sample size.

73 citations


Journal ArticleDOI
TL;DR: In this article, a method of reducing the original governing differential equation to a set of equivalent system of first-order differential equations is proposed to overcome the high degree of inter-element continuity.
Abstract: This paper presents a new method of formulating the finite element relationships based on the least squares criterion. To overcome the high degree of inter-element continuity, a method of reducing the original governing differential equation to a set of equivalent system of first-order differential equations is proposed. The validity of the method is demonstrated by means of several numerical examples. In particular, application of the method to problems with unknown variational functionals is considered.

Journal ArticleDOI
John M. Chambers1
TL;DR: In this article, the authors give a critical review of current numerical techniques and relate these to some statistical needs, emphasizing the basic properties of methods of optimization and nonlinear least squares, cite some advantages and difficulties of the methods and suggest a basic library of fitting procedures.
Abstract: SUMMARY Computational methods for fitting nonlinear models have developed considerably in the last decade. Statistical use of these techniques has, as yet, lagged somewhat behind. The present paper gives a critical review of current numerical techniques and relates these to some statistical needs. We emphasize the basic properties of methods of optimization and nonlinear least squares, cite some advantages and difficulties of the methods and suggest a basic library of fitting procedures. A classified bibliography is included.

Journal ArticleDOI
TL;DR: In this paper, the p-variate joint distribution is derived for a subset of p of the n standardized least squares residuals from a general linear regression, and an application of this result to the problem of detection of outliers is discussed.
Abstract: In this article the p-variate joint distribution is derived for a subset of p of the n standardized least squares residuals from a general linear regression. The resulting distribution is a standardized version of the Inverted-Student Function [8, p. 259]. An application of this result to the problem of detection of outliers is discussed.

Journal ArticleDOI
TL;DR: In this paper, a lower bound for the condition number of the design matrix in a univariate linear model with full column rank is obtained from its orthogonal triangular decomposition, which is also used to compute the least squares estimates and the error sum of squares.
Abstract: We consider the usual univariate linear model. In Part One of this paper X has full column rank. Numerically stable and efficient computational procedures are developed for the least squares estimation of γ and the error sum of squares. We employ an orthogonal triangular decomposition of X using Householder transformations. A lower bound for the condition number of X is immediately obtained from this decomposition. Similar computational procedures are presented for the usual F-test of the general linear hypothesis L′γ=0; L′γ=m is also considered for m≠0. Updating techniques are given for adding to or removing from (X,y) a row, a set of rows or a column. In Part Two, X has less than full rank. Least squares estimates are obtained using generalized inverses. The function L′γ is estimable whenever it admits an unbiased estimator linear in y. We show how to computationally verify estimability of L′γ and the equivalent testability of L′γ=0.
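The full-rank computations above can be sketched with a modern linear algebra library (numpy's qr uses Householder reflections internally). The simulated data are an assumption for the example; the diagonal-ratio quantity at the end is the standard lower bound on the condition number that falls out of the triangular factor for free:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((20, 3))              # full column rank design
gamma = np.array([1.0, -2.0, 0.5])
y = X @ gamma + 0.01 * rng.standard_normal(20)

Q, R = np.linalg.qr(X)                        # X = QR, R upper triangular
gamma_hat = np.linalg.solve(R, Q.T @ y)       # back-substitution step
rss = np.sum((y - X @ gamma_hat) ** 2)        # error sum of squares

# The ratio max|r_ii| / min|r_ii| is a well-known lower bound on the
# 2-norm condition number of X, available at no extra cost from R.
d = np.abs(np.diag(R))
cond_lower_bound = d.max() / d.min()
```

Working with Q and R rather than forming the normal equations XᵀX avoids squaring the condition number, which is the numerical-stability point of the orthogonal decomposition.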

Journal ArticleDOI
TL;DR: In this paper, the efficiency of the least square estimate (LSE), the best linear unbiased estimate (BLUE), the Markov estimate (ME), and the relationship among them is investigated from the perspective of complex stochastic processes.
Abstract: Certain elementary properties of the theory of least squares are presented from the point of view of complex stochastic processes. The development parallels the real case. We consider the least squares estimate (LSE), the best linear unbiased estimate (BLUE), the Markov estimate (ME), and the relationships among them. In particular, we prove the Gauss-Markov theorem and give necessary and sufficient conditions that the LSE and ME be identical. The efficiency of the LSE is defined and a lower bound is obtained for the efficiency of a certain class of models. Estimation of the mean vector and variance for a complex normal population leads to the maximum likelihood estimates (MLE). We prove that the MLE of the mean vector is identical with the LSE, and deduce other analogous properties concerning the distribution of the MLE.
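A minimal numerical illustration of the complex case, assuming simulated complex normal data: numpy's lstsq accepts complex matrices and minimizes ‖w − Zb‖², so the development parallels the real case exactly as the abstract says.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
Z = rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2))
beta = np.array([1.0 + 2.0j, -0.5j])          # true coefficient vector
w = Z @ beta + 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# The complex LSE solves min_b ||w - Z b||^2, just as in the real case;
# under a complex normal model this coincides with the MLE of the mean.
beta_hat, *_ = np.linalg.lstsq(Z, w, rcond=None)
```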

Book ChapterDOI
01 Jan 1973
TL;DR: In this article, some of the more useful computational techniques for solving the over-determined nonlinear least squares problem are discussed and shown to be profitably viewed as Newton-like methods; the advantages and disadvantages of the Gauss-Newton method are also discussed.
Abstract: Publisher Summary This chapter discusses some computational techniques for the nonlinear least squares problem. It presents one way of looking at some of the more useful computational techniques for solving the over-determined nonlinear least squares problem. In particular, it shows that these methods can profitably be viewed as Newton-like methods. The chapter also discusses the advantages and disadvantages of the Gauss–Newton method. The large residual problems are of prime importance. These are usually the problems that have the added complication of a large number of equations. In the linear case, such problems have become fairly routine through the use of the so-called Kalman filter. Little is known about the adaptability of Kalman filtering to large nonlinear problems. Recently there has been a flurry of activity concerning the exploitation of any linearity in the problem.
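A minimal damped Gauss-Newton sketch for a small-residual exponential fit; the model, the noise-free data, the starting point, and the step-halving safeguard are assumptions for this example, not the chapter's recommendations:

```python
import numpy as np

t = np.linspace(0.0, 3.0, 30)
y = 2.0 * np.exp(-1.2 * t)                    # noise-free target data

theta = np.array([1.5, 1.0])                  # starting guess for (A, lam)
for _ in range(50):
    A, lam = theta
    r = y - A * np.exp(-lam * t)              # residuals
    # Jacobian of the model A*exp(-lam*t) with respect to (A, lam).
    J = np.column_stack([np.exp(-lam * t), -A * t * np.exp(-lam * t)])
    # Gauss-Newton step: least squares solution of the linearized problem.
    step, *_ = np.linalg.lstsq(J, r, rcond=None)
    # Step halving so the residual sum of squares never increases.
    sse = np.sum(r ** 2)
    for _ in range(40):
        A2, lam2 = theta + step
        if np.sum((y - A2 * np.exp(-lam2 * t)) ** 2) <= sse:
            break
        step = step / 2
    theta = theta + step
```

Gauss-Newton drops the second-derivative term of the true Hessian, which is why it works well on small-residual problems like this one and can behave poorly on the large-residual problems the chapter singles out.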

Journal ArticleDOI
TL;DR: In this paper, the similarities and differences in the equations defining the full information maximum likelihood and three stage least squares estimators are analyzed, and it is shown that the two sets of equations are similar, the difference being that the estimators "purge" the jointly dependent variables differently.
Abstract: This paper deals with similarities and differences in the equations defining the full information maximum likelihood and three stage least squares estimators. It shows that the two sets of equations are similar, the difference being that the two estimators "purge" the jointly dependent variables differently. Hence, even if three stage least squares is iterated, it will not give an estimator which is the same as the maximum likelihood one. On the other hand, it is nonetheless asymptotically equivalent to full information maximum likelihood. A number of other results are also obtained.

Journal ArticleDOI
TL;DR: In this paper, a method for determining least squares estimators for certain classes of non-linear models is discussed, which is an extension of a variable projection method of Scolnik (1970), and involves the minimization of a modified functional.
Abstract: A new method for determining least squares estimators for certain classes of non-linear models is discussed. The method is an extension of a variable projection method of Scolnik (1970), and involves the minimization of a modified functional. The feature of minimizing this modified functional is that for a certain class of non-linear models, called the constant-coefficients case, only one half the parameters are involved initially. To find the estimators of the remaining parameters is straightforward and relatively easy. This new two-step procedure is shown to be equivalent to the over-all least squares procedure. We also discuss the case of a class of models called the variable coefficients class. For this case, we formulate a new algorithm for determining the estimators which makes use of approximate confidence regions for the parameters.

Journal ArticleDOI
TL;DR: In this paper, the distribution function of the two-stage least squares estimator in a simultaneous system of linear stochastic equations is approximated up to terms whose order of magnitude is 1/√N, where N is the sample size.
Abstract: This paper deals with single-equation estimators in a simultaneous system of linear stochastic equations and approximates the distribution function of the two-stage least-squares estimators up to terms whose order of magnitude is 1/√N, where N is the sample size. For fixed N, an approximation to the OLS distribution function is also obtained up to terms whose order of magnitude is 1/μ², where μ² is what is referred to in the literature as the concentration parameter.

Journal ArticleDOI
TL;DR: An algorithm is given for calculation of the time-varying coefficients of the linear least squares predictors for stationary processes that requires an order of magnitude fewer arithmetic operations than the best of the earlier known algorithms.
Abstract: An algorithm is given for calculation of the time-varying coefficients of the linear least squares predictors for stationary processes. The algorithm requires an order of magnitude fewer arithmetic operations than the best of the earlier known algorithms.
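The abstract does not state the algorithm, but the classical Levinson-Durbin recursion is a well-known example of this kind of order-of-magnitude saving: it computes least squares predictor coefficients from autocovariances in O(n²) operations instead of the O(n³) of a general linear solve. A sketch, with made-up AR(1) autocovariances:

```python
import numpy as np

def levinson(rho, order):
    """Least squares one-step predictor coefficients a, where x_t is
    predicted by sum_k a[k] * x_{t-1-k}, from autocovariances rho[0..order]."""
    a = np.zeros(order)
    err = rho[0]                                  # prediction error variance
    for k in range(order):
        # Reflection coefficient for stepping up from order k to k+1.
        lam = (rho[k + 1] - a[:k] @ rho[k:0:-1]) / err
        a_new = a.copy()
        a_new[:k] = a[:k] - lam * a[:k][::-1]     # update lower-order terms
        a_new[k] = lam
        a = a_new
        err *= 1.0 - lam ** 2                     # shrink the error variance
    return a

# AR(1) process with coefficient 0.8: rho[k] = 0.8**k.
rho = 0.8 ** np.arange(4)
coef = levinson(rho, 3)   # expect roughly [0.8, 0, 0]
```

Each order step reuses the previous predictor, which is what turns the Toeplitz normal equations into an O(n²) recursion.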

Journal ArticleDOI
TL;DR: In this paper, a systematic and easily automated least square procedure, not using integral equations or special functions, is presented for approximating the solutions of general dual trigonometric equations, which is desirable, since current analytic methods apply only to special equations, require the use of integral equation and special function theory, and do not lend themselves easily to numerical work.
Abstract: A systematic and easily automated least squares procedure, not using integral equations or special functions, is presented for approximating the solutions of general dual trigonometric equations. This is desirable, since current analytic methods apply only to special equations, require the use of integral equation and special function theory, and do not lend themselves easily to numerical work; see, e.g. [1, 2, 6, 8, 9,10, 11, 12, 13, 14, 15, 16, 17].

Journal ArticleDOI
TL;DR: In this article, the author constructs general algorithms for the non-linear least squares adjustment of condition equations and shows how they are applied to resolve the problem of unfavorable condition equations in the adjustment of a quadrilateral.
Abstract: In this paper the author constructs general algorithms for the non-linear least squares adjustment of condition equations and shows how they are applied to resolve the problem of unfavorable condition equations in the adjustment of a quadrilateral.

Journal ArticleDOI
TL;DR: In this paper, an elimination method for solving the linear least squares problem is presented which can be considered a generalization of the Gaussian elimination for square, linear systems and operations counts are given indicating the greater efficiency of this method over all known methods (including the fast but poorly conditioned normal equations approach) when the systems are slightly overdetermined.
Abstract: An elimination method for solving the linear least squares problem is presented which can be considered a generalization of the Gaussian elimination method for square, linear systems. Operations counts are given indicating the greater efficiency of this method over all known methods (including the fast but poorly conditioned normal equations approach) when the systems are slightly overdetermined (i.e., the number of equations is nearly the number of unknowns). An extension of this method is given for the solution of the minimal least squares problem associated with rank deficient systems of equations.

Journal ArticleDOI
TL;DR: In this paper, nonlinear least squares techniques were used to determine effective thermal conductivity values from experimental data, which can be used with confidence in performing thermal protection system analyses; a comparison of the relative efficiencies of different minimizing techniques found the method of Peckham to be the most efficient.
Abstract: Nonlinear least squares techniques can be used to determine effective thermal conductivity values from experimental data. Comparisons between measured and predicted conductivity values indicate that the analytically determined values can be used with confidence in performing thermal protection system analyses. A study was performed to compare the relative efficiencies of different minimizing techniques; the method of Peckham was the most efficient.

Journal ArticleDOI
TL;DR: An iterative method for the solution of the nonlinear least squares problem is used for the determination of the strength and halfwidth of pure rotational spectral lines of HCl and HF detected with spectrometers whose resolving power is limited.




Journal ArticleDOI
TL;DR: In this paper, it is shown that for partially linear models with a strictly linear portion, a considerable further reduction can be obtained in the necessary computations; under certain conditions that are generally satisfied in practice, the reduced model and sum of squares, considered as functions of the nonlinear parameters, have partial derivatives.
Abstract: In fitting partially linear statistical models by least squares, several authors have demonstrated that, for fixed values of the nonlinear parameters, optimum values of the linear parameters can be determined analytically. Thus, the linear parameters can be eliminated by substitution. This reduction in the problem's dimension seems to greatly facilitate its solution by nonlinear least squares algorithms. In many applications, the partially linear model includes a strictly linear portion. In the present article, it is shown that for such models a considerable further reduction can be obtained in the necessary computations. Simultaneously, the earlier results are extended to a much broader class of models and to weighted least squares, and full-rank assumptions are removed. Under certain conditions that are generally satisfied in practice, the reduced model and sum of squares, considered as functions of the nonlinear parameters, have partial derivatives. Analytical expressions for these partials are obtained.
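The further reduction for a strictly linear portion can be illustrated as follows: the strictly linear columns are projected out once, rather than re-eliminated at every trial value of the nonlinear parameter. A sketch under the assumed model y = c0 + c1·t + a·exp(−αt), with made-up data and a crude grid search standing in for a proper nonlinear least squares algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 4.0, 50)
y = 0.3 + 0.2 * t + 1.5 * np.exp(-2.0 * t) + 0.005 * rng.standard_normal(t.size)

L = np.column_stack([np.ones_like(t), t])     # strictly linear columns
Q, _ = np.linalg.qr(L)
P = np.eye(t.size) - Q @ Q.T                  # projector onto range(L)^perp
y_p = P @ y                                   # formed once, not per alpha

def rss(alpha):
    g = P @ np.exp(-alpha * t)                # projected nonlinear column
    a = (g @ y_p) / (g @ g)                   # one-dimensional linear fit
    return np.sum((y_p - a * g) ** 2)

alphas = np.linspace(0.5, 4.0, 700)           # crude grid search over alpha
alpha_hat = min(alphas, key=rss)
```

Once α̂ is fixed, recovering c0, c1 and a is a single linear least squares solve on the full design [L, exp(−α̂t)].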

Journal ArticleDOI
TL;DR: Symbolic formula manipulation, a computer technique for manipulating equations while still in symbolic form, is applied to the non-linear least-squares problem, and the resulting computer program is discussed in this paper.