Showing papers on "Non-linear least squares published in 1976"


Journal ArticleDOI
TL;DR: A method based on the Fourier convolution theorem is developed for the analysis of data composed of random noise, plus an unknown constant "base line," plus a sum of (or an integral over a continuous spectrum of) exponential decay functions.

531 citations
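
The paper's Fourier convolution machinery is not reproduced here, but the model it targets (random noise plus an unknown constant "base line" plus a sum of exponential decays) is easy to state. A minimal sketch that fits that model directly by nonlinear least squares instead, on hypothetical data with assumed rate constants:

import numpy as np
from scipy.optimize import curve_fit

def model(t, base, a1, k1, a2, k2):
    # unknown constant base line plus two exponential decay components
    return base + a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 200)
y = model(t, 0.5, 2.0, 3.0, 1.0, 0.4) + rng.normal(0.0, 0.02, t.size)

# sums of exponentials are ill-conditioned, so starting values matter
popt, pcov = curve_fit(model, t, y, p0=[0.0, 1.0, 1.0, 1.0, 0.1])
print(popt)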


Journal ArticleDOI
TL;DR: In this article, a method is developed to investigate the additive structure of data that may be measured at the nominal, ordinal or cardinal levels, may be obtained from either a discrete or continuous source, and may have known degrees of imprecision.
Abstract: A method is developed to investigate the additive structure of data that (a) may be measured at the nominal, ordinal or cardinal levels, (b) may be obtained from either a discrete or continuous source, (c) may have known degrees of imprecision, or (d) may be obtained in unbalanced designs. The method also permits experimental variables to be measured at the ordinal level. It is shown that the method is convergent, and includes several previously proposed methods as special cases. Both Monte Carlo and empirical evaluations indicate that the method is robust.

232 citations
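
A rough sketch of the alternating idea the abstract describes, not the authors' algorithm: alternate an ordinary least squares fit of the additive model with a monotone (ordinal-level) rescaling of the response via isotonic regression. All data below are hypothetical.

import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)
n = 200
a = rng.integers(0, 4, n)              # levels of factor A
b = rng.integers(0, 3, n)              # levels of factor B
latent = a + 2.0 * b + rng.normal(0.0, 0.3, n)
y_ord = np.digitize(latent, np.quantile(latent, [0.25, 0.5, 0.75]))  # ordinal response

X = np.column_stack([np.eye(4)[a], np.eye(3)[b]])   # additive main-effects design
z = (y_ord - y_ord.mean()) / y_ord.std()            # initial quantification
for _ in range(50):
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)    # least squares step
    fit = X @ beta
    z = IsotonicRegression().fit_transform(y_ord, fit)  # monotone rescaling step
    z = (z - z.mean()) / z.std()                    # normalize to prevent collapse
print(beta)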


Journal ArticleDOI
TL;DR: In this paper, the authors used iterative weighted least squares (IWLS) to estimate the parameters in a nonlinear regression model, where the dependent variables are observations from a member of the regular exponential family.
Abstract: The method of iterative weighted least squares can be used to estimate the parameters in a nonlinear regression model. If the dependent variables are observations from a member of the regular exponential family, then under mild conditions it is shown that the IWLS estimates are identical to those obtained using the maximum likelihood principle. An application is provided to illustrate the results.

211 citations
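
A minimal sketch of the result for one member of the regular exponential family: iterative weighted least squares for a Poisson regression with log link reproduces the maximum likelihood estimates (hypothetical simulated data).

import numpy as np

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.poisson(np.exp(X @ np.array([0.5, 1.2])))

beta = np.zeros(2)
for _ in range(25):
    eta = X @ beta
    mu = np.exp(eta)                   # mean under the log link
    z = eta + (y - mu) / mu            # working response
    WX = X * mu[:, None]               # working weights = variance function
    beta = np.linalg.solve(X.T @ WX, WX.T @ z)   # one weighted least squares step
print(beta)                            # agrees with the Poisson ML estimates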


01 Aug 1976
TL;DR: In this paper, it is shown that under certain conditions, when A has numerical rank r there is a distinguished r-dimensional subspace of the column space of A that is insensitive to how it is approximated by r independent columns of A. The consequences of this fact for the least squares problem are examined.
Abstract: This paper is concerned with least squares problems in which the least squares matrix A is near a matrix that is not of full rank. A definition of numerical rank is given. It is shown that under certain conditions, when A has numerical rank r there is a distinguished r-dimensional subspace of the column space of A that is insensitive to how it is approximated by r independent columns of A. The consequences of this fact for the least squares problem are examined. Algorithms are described for approximating the stable part of the column space of A.

200 citations
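
A minimal numerical-rank sketch under the usual SVD view (the tolerance is an assumption, not the paper's rule): the leading r left singular vectors span the stable part of the column space.

import numpy as np

rng = np.random.default_rng(3)
B = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 5))   # exact rank 3
A = B + 1e-8 * rng.normal(size=(20, 5))                  # "near" a rank-3 matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = 1e-6 * s[0]                      # numerical-rank tolerance (assumed)
rnum = int((s > tol).sum())            # numerical rank
stable = U[:, :rnum]                   # orthonormal basis for the stable subspace
print(rnum, s)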


Journal ArticleDOI
TL;DR: In this article, an attempt is made to give a rule for choosing γ which permits a satisfactory convergence theorem to be proved, and is capable of satisfactory computer implementation, and a computer code is given which appears to be at least competitive with existing alternatives.
Abstract: One of the most successful algorithms for nonlinear least squares calculations is that associated with the names of Levenberg, Marquardt, and Morrison. This algorithm gives a method which depends nonlinearly on a parameter γ for computing the correction to the current point. In this paper an attempt is made to give a rule for choosing γ which (a) permits a satisfactory convergence theorem to be proved, and (b) is capable of satisfactory computer implementation. It is believed that the stated aims have been met with reasonable success. The convergence theorem is both simple and global in character, and a computer code is given which appears to be at least competitive with existing alternatives.

156 citations
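
The paper's specific rule for choosing γ is not reproduced here; the following sketch uses the simplest textbook rule (halve γ after a successful step, double it after a failed one) on a hypothetical exponential-decay fit.

import numpy as np

def resid(p, t, y):
    return y - p[0] * np.exp(-p[1] * t)

def jac(p, t):
    e = np.exp(-p[1] * t)
    return np.column_stack([-e, p[0] * t * e])   # d resid / d p

rng = np.random.default_rng(4)
t = np.linspace(0.0, 4.0, 50)
y = 2.0 * np.exp(-1.5 * t) + rng.normal(0.0, 0.02, t.size)

p, gamma = np.array([1.0, 1.0]), 1e-3
for _ in range(100):
    r = resid(p, t, y)
    J = jac(p, t)
    step = np.linalg.solve(J.T @ J + gamma * np.eye(2), -J.T @ r)
    if (resid(p + step, t, y) ** 2).sum() < (r ** 2).sum():
        p, gamma = p + step, gamma * 0.5   # success: reduce the damping
    else:
        gamma *= 2.0                       # failure: damp more heavily, retry
print(p)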


Journal ArticleDOI
TL;DR: The best-fit values of the Michaelis constant and the maximum velocity in the Michaelis-Menten equation can be obtained by the method of least squares with the Taylor expansion for the sum of squares of the absolute residuals, i.e., the differences between the observed velocities and the corresponding calculated velocities.
Abstract: The best-fit values of the Michaelis constant (Km) and the maximum velocity (V) in the Michaelis-Menten equation can be obtained by the method of least squares with the Taylor expansion for the sum of squares of the absolute residual, i.e., the difference between the observed velocity and the corresponding velocity by calculation. This method makes it possible to determine the values of Km and V not in a trial-and-error manner but in a deductive and unique manner after some iterative procedures starting from arbitrary approximate values of Km and V. These values can be said to be uniquely determined for a set of data as the finally converged values are no longer dependent upon the initial approximate values of Km and V. It is also very important to obtain initial approximate values of parameters for the application of the method described above. A simple method is proposed to estimate the approximate values of parameters involved in fractional functions. The method of rearrangement after canceling of denominator of a fractional function can be utilized to obtain approximate values, not only for cases of two unknown parameters such as the Michaelis-Menten equation, but also for cases with more than two unknowns.

148 citations
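
A sketch of both steps the abstract describes, with hypothetical kinetic data: initial values of Km and V come from rearranging the equation after canceling the denominator, v·(Km + s) = V·s, which is linear in (Km, V); Gauss-Newton iterations (the Taylor expansion of the residual sum of squares) then refine them.

import numpy as np

s = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])        # substrate (hypothetical)
v = np.array([0.28, 0.47, 0.68, 0.90, 1.05, 1.14])   # observed velocity (hypothetical)

# initial values: cancel the denominator, v*(Km + s) = V*s, linear in (Km, V)
Km, V = np.linalg.lstsq(np.column_stack([v, -s]), -v * s, rcond=None)[0]

for _ in range(20):                    # Gauss-Newton refinement
    r = v - V * s / (Km + s)
    J = np.column_stack([V * s / (Km + s) ** 2, -s / (Km + s)])  # d r / d (Km, V)
    dKm, dV = np.linalg.lstsq(J, -r, rcond=None)[0]
    Km, V = Km + dKm, V + dV
print(Km, V)                           # converged values, independent of the start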


Journal ArticleDOI
TL;DR: GENCAT is a computer program which implements an extremely general methodology for the analysis of multivariate categorical data; it produces minimum modified chi-square statistics, obtained by partitioning the sums of squares as in ANOVA.

124 citations
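
GENCAT itself is a program, so nothing here is its code; as a hedged illustration of the minimum modified chi-square idea it implements, this sketch fits a linear model to observed category proportions by weighted least squares and reads the residual quadratic form as the test statistic (hypothetical counts).

import numpy as np

# counts for two groups (rows) over three response categories (hypothetical)
N = np.array([[30.0, 50.0, 20.0],
              [10.0, 40.0, 50.0]])
n = N.sum(axis=1)
P = N / n[:, None]
p = P[:, :2].ravel()                   # drop the last category (proportions sum to 1)

blocks = [(np.diag(P[i, :2]) - np.outer(P[i, :2], P[i, :2])) / n[i] for i in range(2)]
V = np.block([[blocks[0], np.zeros((2, 2))],
              [np.zeros((2, 2)), blocks[1]]])

X = np.tile(np.eye(2), (2, 1))         # model: both groups share category probabilities
Vi = np.linalg.inv(V)
b = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ p)
chi2 = (p - X @ b) @ Vi @ (p - X @ b)  # minimum modified chi-square, 2 df here
print(b, chi2)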


Journal ArticleDOI
TL;DR: In this paper, the improvement of Latent Root Regression over ordinary least squares is shown to depend on the orientation of the parameter vector with respect to a vector defining the multicollinearity.
Abstract: Multicollinearity among the columns of regressor variables is known to cause severe distortion of the least squares estimates of the parameters in a multiple linear regression model. An alternate method of estimating the parameters, which was proposed by the authors in a previous paper, is Latent Root Regression Analysis. In this article several comparisons between the two methods of estimation are presented. The improvement of Latent Root Regression over ordinary least squares is shown to depend on the orientation of the parameter vector with respect to a vector defining the multicollinearity. Despite this dependence on orientation, the authors conclude that with multicollinear data Latent Root Regression Analysis is preferable to ordinary least squares for parameter estimation and variable selection.

78 citations
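
Latent root regression's exact estimator is not reproduced here; as a stand-in, this simulation uses principal-components deletion of the near-singular component to show the orientation effect the abstract describes: the biased estimator wins when the parameter vector is nearly orthogonal to the multicollinearity vector and loses when it is aligned with it (hypothetical data).

import numpy as np

rng = np.random.default_rng(5)
n = 100
x1 = rng.normal(size=n)
X = np.column_stack([x1, x1 + 1e-2 * rng.normal(size=n), rng.normal(size=n)])
lam, Q = np.linalg.eigh(X.T @ X)
v = Q[:, 0]                            # the multicollinearity vector (smallest root)

def pc_delete(X, y):
    # delete the near-singular component, as latent-root-style estimators do
    lam, Q = np.linalg.eigh(X.T @ X)
    Qk = Q[:, lam > 1e-3 * lam[-1]]
    return Qk @ np.linalg.solve(Qk.T @ X.T @ X @ Qk, Qk.T @ X.T @ y)

for name, beta in [("aligned with dependency", v), ("orthogonal to it", Q[:, -1])]:
    e_ols, e_pc = [], []
    for _ in range(500):
        y = X @ beta + rng.normal(0.0, 0.1, n)
        e_ols.append(np.sum((np.linalg.lstsq(X, y, rcond=None)[0] - beta) ** 2))
        e_pc.append(np.sum((pc_delete(X, y) - beta) ** 2))
    print(name, np.mean(e_ols), np.mean(e_pc))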


Journal ArticleDOI
TL;DR: In this paper, the Lagrange multiplier approach is used to find the limits of various model parameters consistent with a set of experimental data, and the physical interpretation of these limits and those implied by the parameter covariance matrix are discussed.
Abstract: An important problem in geophysical modeling involves the attempt to find the limits of various model parameters consistent with a set of experimental data. When the agreement between model and data can be described in terms of a quadratic form in the residuals, as is the case whenever linear least squares methods are applicable, then the range of parameter values consistent with the data is easily computed by using a Lagrange multiplier approach. This method results in limits which are different from those implied by the covariance matrix for the least squares solution. The differences are simply calculated but may often be substantial in magnitude. In this paper I derive an expression for the limits, discuss the physical interpretation of these limits and those implied by the parameter covariance matrix, and discuss the extension of linear techniques to quasi-linear techniques.

72 citations
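
For the linear case the Lagrange multiplier limits have a closed form: extremizing c'x subject to an allowed increase Δ in the residual sum of squares gives c'x̂ ± sqrt(Δ·c'(A'A)⁻¹c). A minimal sketch with hypothetical data; note these limits differ from the usual covariance-matrix intervals, as the abstract emphasizes.

import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(50, 3))
b = A @ np.array([1.0, -2.0, 0.5]) + rng.normal(0.0, 0.1, 50)

x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
H = A.T @ A
c = np.array([1.0, 0.0, 0.0])          # limits on the first model parameter
delta = 0.05                           # allowed increase in the sum of squared residuals

half_width = np.sqrt(delta * (c @ np.linalg.solve(H, c)))
print(c @ x_hat - half_width, c @ x_hat + half_width)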


Journal ArticleDOI
TL;DR: In this paper, the statistical properties of the certainty equivalence control rule and of the least squares estimates generated by this rule are examined experimentally in a linear model with two unknown parameters.
Abstract: The statistical properties of the certainty equivalence control rule and of the least squares estimates generated by this rule are examined experimentally in a linear model with two unknown parameters. It is found that the least squares certainty equivalence rule converges to its true value with probability one and is asymptotically efficient, having an asymptotic distribution with a variance as small as any other strongly consistent rule. However, while a linear combination of the parameter estimates is consistent, the evidence does not confirm that the individual estimates themselves are consistent. If these converge to their true values at all, they do so very slowly (on the order of (log t)^{-1}).

51 citations
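
A hedged re-creation of the kind of experiment described (not the authors' setup): a scalar linear model y = α + βu + ε controlled by the certainty equivalence rule u = -α̂/β̂, with least squares estimates refreshed each period; a few random probing inputs keep the initial regression well posed.

import numpy as np

rng = np.random.default_rng(7)
alpha, beta = 1.0, 0.8                 # true parameters, unknown to the controller
a_hat, b_hat = 0.0, 1.0
us, ys = [], []

for t in range(2000):
    u = rng.normal() if t < 5 else -a_hat / b_hat   # probe first, then CE rule
    y = alpha + beta * u + rng.normal(0.0, 0.1)
    us.append(u)
    ys.append(y)
    if t >= 2:                         # refresh the least squares estimates
        X = np.column_stack([np.ones(len(us)), us])
        (a_hat, b_hat), *_ = np.linalg.lstsq(X, np.array(ys), rcond=None)

print(a_hat, b_hat)                    # individual estimates converge only slowly
print(-a_hat / b_hat, -alpha / beta)   # the control (a linear combination) is consistent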


Journal ArticleDOI
TL;DR: Some of the methods used in the resolution of mixed normal distributions are discussed under three headings: analytical, graphical, and numerical methods as mentioned in this paper, and their applicability in the analysis of grain-size data as derived from sieving.
Abstract: Some of the methods used in the resolution of mixed normal distributions are discussed under three headings: analytical, graphical, and numerical methods. Attention is given to their applicability in the analysis of grain-size data as derived from sieving. Comparisons are made by applying several methods to published data. It is concluded that the numerical methods offer most scope, especially the method of nonlinear least squares. Some analyses of beach sediments, using this method, are presented. The adoption of a convention for the number of individuals in the sample increases ease of interpretation.
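
A minimal sketch of the numerical route the authors favor: resolve a mixture of two normal components by nonlinear least squares on binned (histogram) densities. The sample below is hypothetical rather than sieved grain-size data.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)
sample = np.concatenate([rng.normal(2.0, 0.5, 600), rng.normal(3.5, 0.3, 400)])
dens, edges = np.histogram(sample, bins=40, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])

def mixture(x, w, m1, s1, m2, s2):
    # weighted sum of two normal densities
    g = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    return w * g(m1, s1) + (1.0 - w) * g(m2, s2)

popt, _ = curve_fit(mixture, mid, dens, p0=[0.5, 1.8, 0.4, 3.6, 0.4])
print(popt)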

Journal ArticleDOI
TL;DR: A new nonlinear least-squares fitting algorithm without matrix inversion is described, and a successful application to optical absorption spectrum analysis is demonstrated in comparison with the modified Gauss-Newton algorithm.

Journal ArticleDOI
TL;DR: In this paper, a new least squares solution for obtaining asymptotically unbiased and consistent estimates of unknown parameters in noisy linear systems is presented; it is in many ways more advantageous than the generalized least squares algorithm.
Abstract: A new least squares solution for obtaining asymptotically unbiased and consistent estimates of unknown parameters in noisy linear systems is presented. The proposed algorithms are in many ways more advantageous than the generalized least squares algorithm. Extensions to on-line and multivariable problems can be easily implemented. Examples are given to illustrate the performance of these new algorithms.

Journal ArticleDOI
TL;DR: New algorithms are presented for approximating the minimum of the sum of squares of M real and differentiable functions over an N-dimensional space; they give estimates which fluctuate about a minimum rather than converging to it.
Abstract: New algorithms are presented for approximating the minimum of the sum of squares of M real and differentiable functions over an N-dimensional space. These algorithms update estimates for the location of a minimum after each one of the functions and its first derivatives is evaluated, in contrast with other least-squares algorithms, which evaluate all M functions and their derivatives at one point before using any of this information to make an update. These new algorithms give estimates which fluctuate about a minimum rather than converging to it. For many least-squares problems, they give an adequate approximation for the solution more quickly than do other algorithms.
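
A sketch of the per-function update scheme (essentially a stochastic gradient step on each squared residual): with a constant step size the estimates fluctuate about the minimum rather than converging, exactly as the abstract notes. Data, model, and step size are assumptions.

import numpy as np

rng = np.random.default_rng(9)
M = 1000
t = rng.uniform(0.0, 4.0, M)
y = 2.0 * np.exp(-1.5 * t) + rng.normal(0.0, 0.05, M)

p = np.array([1.0, 1.0])
step = 0.05                            # constant step: estimates hover, never settle
for _ in range(3):                     # a few sweeps through the M functions
    for i in rng.permutation(M):
        e = np.exp(-p[1] * t[i])
        r = y[i] - p[0] * e            # the single residual f_i
        grad = np.array([-e, p[0] * t[i] * e])   # d f_i / d p
        p -= step * 2.0 * r * grad     # update immediately, before seeing f_{i+1}
print(p)                               # fluctuates near the true (2.0, 1.5)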

Proceedings ArticleDOI
01 Dec 1976
TL;DR: In this article, the parameter identification of nonlinear systems using Hammerstein model and in the presence of correlated output noise is considered and a noniterative four-stage least square solution procedure is proposed.
Abstract: This paper considers the parameter identification of nonlinear systems using Hammerstein model and in the presence of correlated output noise. Existing identification methods are all iterative. The proposed method, called MSLS, is a noniterative four-stage least square solution procedure. Therefore, it is computationally simpler. The estimates so obtained are statistically consistent. Two examples are included to demonstrate the utility of this method.
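
The four-stage MSLS procedure itself is not reproduced here; this sketch shows the simpler noniterative least squares idea underlying Hammerstein identification: with a polynomial nonlinearity and a single input delay the model is linear in all unknowns, so one least squares solve identifies it (white noise is assumed, unlike the paper's correlated-noise setting).

import numpy as np

rng = np.random.default_rng(10)
T = 2000
u = rng.uniform(-1.0, 1.0, T)
f = lambda x: x + 0.5 * x ** 2 - 0.3 * x ** 3     # static input nonlinearity
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.6 * y[t - 1] + f(u[t - 1]) + 0.05 * rng.normal()

# y_t is linear in (a, b1, b2, b3), so one least squares solve suffices
Phi = np.column_stack([y[:-1], u[:-1], u[:-1] ** 2, u[:-1] ** 3])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)                           # approx [0.6, 1.0, 0.5, -0.3]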


Journal ArticleDOI
TL;DR: In this paper, a procedure for estimating the rate constants of a two-compartment stochastic model for which the covariance structure over time of the observations is known is described.
Abstract: A procedure is described for estimating the rate constants of a two-compartment stochastic model for which the covariance structure over time of the observations is known. The proposed estimation procedure, by incorporating the known (as a function of the parameters to be estimated) covariance structure of the observations, produces regular best asymptotically normal (RBAN) estimators for the parameters. In addition, the construction of approximate confidence intervals and regions for the parameters is made possible by identification of the asymptotic covariance matrix of the estimators. The explicit form of the inverse of the covariance matrix, which is required in the estimation procedure, is presented. The procedure is illustrated by application to real as well as simulated data, and a comparison is made to the widely used nonlinear least squares procedure, which does not account for correlations over time.
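
A minimal sketch of the estimation idea: when the covariance of the observations over time is known, whitening the residuals with the Cholesky factor of that covariance turns the problem into an ordinary nonlinear least squares fit. The AR(1)-type covariance and one-compartment decay below are assumptions for illustration, not the paper's two-compartment model.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(11)
t = np.arange(1.0, 21.0)
Sigma = 0.05 ** 2 * 0.7 ** np.abs(np.subtract.outer(t, t))  # known covariance (assumed)
L = np.linalg.cholesky(Sigma)
y = 5.0 * np.exp(-0.25 * t) + L @ rng.normal(size=t.size)

Linv = np.linalg.inv(L)

def whitened(p):
    # premultiplying by L^{-1} turns the correlated problem into ordinary NLS
    return Linv @ (y - p[0] * np.exp(-p[1] * t))

fit = least_squares(whitened, x0=[1.0, 0.1])
print(fit.x)                           # close to the true (5.0, 0.25)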


Journal ArticleDOI
TL;DR: In this paper, it is shown that seismic deconvolution should be based either on autoregression theory or on recursive least squares estimation theory rather than on the normally used Wiener or Kalman theory.
Abstract: The least squares estimation procedures used in different disciplines can be classified in four categories: The recursive least squares estimator is the time average form of the Kalman filter. Likewise, the autoregressive estimator is the time average form of the Wiener filter. Both the Kalman and the Wiener filters use ensemble averages and can basically be constructed without having a particular measurement realisation available. It follows that seismic deconvolution should be based either on autoregression theory or on recursive least squares estimation theory rather than on the normally used Wiener or Kalman theory. A consequence of this change is the need to apply significance tests on the filter coefficients. The recursive least squares estimation theory is particularly suitable for solving the time variant deconvolution problem.
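
A hedged sketch of autoregressive (predictive) deconvolution by least squares: fit an AR predictor to the trace and take the prediction error as the deconvolved series. The sparse reflectivity and minimum-phase wavelet below are hypothetical.

import numpy as np

rng = np.random.default_rng(12)
refl = rng.normal(size=1000) * (rng.uniform(size=1000) < 0.1)  # sparse reflectivity
wavelet = np.array([1.0, -0.6, 0.2])                           # minimum phase
trace = np.convolve(refl, wavelet)[:1000]

p = 20                                 # order of the autoregressive predictor
N = len(trace)
X = np.column_stack([trace[p - j - 1:N - j - 1] for j in range(p)])
a, *_ = np.linalg.lstsq(X, trace[p:], rcond=None)

deconv = trace[p:] - X @ a             # prediction error = deconvolved trace
print(np.corrcoef(deconv, refl[p:])[0, 1])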

Journal ArticleDOI
TL;DR: In this article, it is shown that the Ben-Israel iteration has no predisposition toward the minimum norm solution, but that any limit point of the sequence generated by the iteration is a least squares solution.
Abstract: Ben-Israel [1] proposed a method for the solution of the nonlinear least squares problem min_{x in D} ||F(x)||^2, where F: D ⊂ R^n → R^m. The procedure takes the form x_{k+1} = x_k - F'(x_k)^+ F(x_k), where F'(x_k)^+ denotes the Moore-Penrose generalized inverse of the Fréchet derivative of F. We give a general convergence theorem for the method based on Lyapunov stability theory for ordinary difference equations. In the case where there is a connected set of solution points, it is often of interest to determine the minimum norm least squares solution. We show that the Ben-Israel iteration has no predisposition toward the minimum norm solution, but that any limit point of the sequence generated by the Ben-Israel iteration is a least squares solution. I. Introduction. The use of least squares solutions to systems of equations is an important and practical tool in many applications. Given a function F: D ⊂ R^n → R^m, where D is an open convex set, the nonlinear least squares problem is expressed as min_{x in D} ||F(x)||, where ||·|| here and henceforth denotes the l_2 norm. Equivalently, if f_i(x) is the i-th component of F, then the problem can be stated as min_{x in D} Φ(x), where Φ(x) = Σ_{i=1}^m f_i^2(x). If, as we shall assume, F is continuously Fréchet differentiable, then ...
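
A minimal sketch of the Ben-Israel iteration x_{k+1} = x_k - F'(x_k)^+ F(x_k) on a small hypothetical overdetermined system, with the Moore-Penrose inverse supplied by numpy:

import numpy as np

def F(x):
    # hypothetical overdetermined system (3 equations, 2 unknowns)
    return np.array([x[0] ** 2 + x[1] - 1.0,
                     x[0] - x[1] ** 2,
                     x[0] + x[1] - 1.2])

def J(x):
    # Frechet derivative (Jacobian) of F
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, -2.0 * x[1]],
                     [1.0, 1.0]])

x = np.array([0.5, 0.5])
for _ in range(50):
    x = x - np.linalg.pinv(J(x)) @ F(x)   # x_{k+1} = x_k - F'(x_k)^+ F(x_k)
print(x, F(x))                            # a least squares point, not minimum norm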


Journal ArticleDOI
TL;DR: In this article, the authors proposed an algorithm to solve a least squares problem when the parameters are restricted to be nonnegative; the algorithm does not use linear programming but utilizes the normal equations to solve a series of unrestricted problems.
Abstract: This note proposes an algorithm to solve a least squares problem when the parameters are restricted to be nonnegative. The algorithm does not use linear programming but utilizes the normal equations to solve a series of unrestricted problems.
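
The note's exact algorithm is not reproduced here; in the same spirit, the Lawson-Hanson active-set method (available as scipy.optimize.nnls) solves a series of unrestricted least squares subproblems and uses no linear programming:

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(13)
A = rng.normal(size=(30, 5))
b = A @ np.array([0.0, 2.0, 0.0, 1.0, 3.0]) + rng.normal(0.0, 0.05, 30)

x, rnorm = nnls(A, b)                  # active-set method, no linear programming
print(x)                               # the zero coefficients stay exactly zero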


ReportDOI
TL;DR: A straightforward diagnostic test procedure that provides numerical indexes whose magnitudes signify the presence of one or more near dependencies among columns of a data matrix X, and that provides a means for determining, within the linear regression model, the extent to which each such near dependency is degrading the least-squares estimation of each regression coefficient.
Abstract: This paper suggests and examines a straightforward diagnostic test procedure that 1) provides numerical indexes whose magnitudes signify the presence of one or more near dependencies among columns of a data matrix X, and 2) provides a means for determining, within the linear regression model, the extent to which each such near dependency is degrading the least-squares estimation of each regression coefficient. In most instances this latter information also enables the investigator to determine specifically which columns of the data matrix are involved in each near dependency. The diagnostic test is based on an interrelation between two analytic devices, the singular-value decomposition (closely related to eigensystems) and a matching regression-variance decomposition. Both these devices are developed in full. The test is successfully given empirical content through a set of experiments that examine its behavior when applied to several different series of data matrices having one or more known near dependencies that are weak to begin with and are made to become systematically more nearly perfectly collinear. The general diagnostic properties of the test that result from these experiments and the steps required to carry out the test are summarized, and then exemplified by application to real economic data.
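
A minimal sketch of the two devices on hypothetical collinear data: condition indexes from the singular values of the column-equilibrated X, and variance-decomposition proportions showing which coefficients load on the near-singular component.

import numpy as np

rng = np.random.default_rng(14)
n = 100
x1 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x1 + 1e-3 * rng.normal(size=n)])

Xs = X / np.linalg.norm(X, axis=0)     # column equilibration, the usual convention
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
print(np.round(s[0] / s, 1))           # condition indexes; large => near dependency

phi = (Vt.T ** 2) / s ** 2             # var(b_k) splits across singular values
pi = phi / phi.sum(axis=1, keepdims=True)
print(np.round(pi, 3))                 # rows: coefficients; columns: components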

Journal ArticleDOI
TL;DR: In this paper, it is shown that the least squares estimator is asymptotically normal and possesses an (asymptotic) estimation error covariance matrix that bounds from below the set of covariance matrices of the class S.
Abstract: It is known [2],[3] that a large class of instrumental variable estimators for autoregressive moving average system parameters are strongly consistent. In this correspondence this class is described and is denoted by S. Then sufficient conditions are given for each member of the class S to be asymptotically normal. These conditions are as follows: 1) the unobserved noise process v disturbing the output measurements of the given system is a white noise process; and 2) v is independent of the observed input process u. It is further shown that under the same conditions the (strongly consistent) least squares estimator is asymptotically normal and possesses an (asymptotic) estimation error covariance matrix that bounds from below the set of covariance matrices of the class S.

Journal ArticleDOI
TL;DR: In this paper, the authors show necessary and sufficient conditions for a specified sum of squares decomposition to have this property in the case of the mixed model, and they also show that the condition is satisfied for a fixed effects linear model, where the terms of the decomposition are mutually independent and distributed as multiples of chi-square.
Abstract: A sum of squares can be partitioned into sums of quadratic forms whose kernels are projections. If these projections are mutually orthogonal and add to the identity, then, under the classical fixed effects linear model, the terms of the decomposition are mutually independent and are distributed as multiples of chi-square. In this paper we exhibit necessary and sufficient conditions for a specified sum of squares decomposition to have this property in the case of the mixed model.
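
A quick numerical check of the fixed-effects version of the statement, using the one-way layout: the kernels are mutually orthogonal projections summing to the identity, and simulated quadratic forms have the chi-square means their degrees of freedom predict (sizes below are hypothetical).

import numpy as np

rng = np.random.default_rng(15)
g, r = 3, 10                           # groups and replicates (hypothetical)
n = g * r
Z = np.kron(np.eye(g), np.ones((r, 1)))
one = np.ones((n, 1))

P_mean = one @ one.T / n               # projection onto the grand mean
P_group = Z @ np.linalg.pinv(Z) - P_mean
P_resid = np.eye(n) - P_mean - P_group

print(np.allclose(P_group @ P_resid, 0.0),
      np.allclose(P_mean + P_group + P_resid, np.eye(n)))

y = rng.normal(size=(5000, n))         # null fixed-effects model
q_between = np.einsum('ij,jk,ik->i', y, P_group, y)
q_within = np.einsum('ij,jk,ik->i', y, P_resid, y)
print(q_between.mean(), g - 1)         # chi-square means match their dfs
print(q_within.mean(), n - g)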


Journal ArticleDOI
TL;DR: In this article, the authors studied the asymptotic properties of least squares estimates of parameters in a stochastic difference equation, where the difference equation is assumed to be linear with constant real coefficients and the roots of the associated characteristic polynomial are all assumed to have absolute value different from one.
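
A minimal sketch of the setting, with assumed coefficients whose characteristic roots lie inside the unit circle: least squares applied to a simulated second-order stochastic difference equation.

import numpy as np

rng = np.random.default_rng(16)
T = 5000
a1, a2 = 1.1, -0.3                     # characteristic roots 0.6 and 0.5, both inside
y = np.zeros(T)
for t in range(2, T):
    y[t] = a1 * y[t - 1] + a2 * y[t - 2] + rng.normal()

X = np.column_stack([y[1:-1], y[:-2]])
coef, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
print(coef)                            # consistent when no root has modulus one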