
Showing papers on "Non-linear least squares published in 1977"


Journal ArticleDOI
TL;DR: In this paper, perturbation theory for the pseudo-inverse (Moore-Penrose generalized inverse), for the orthogonal projection onto the column space of a matrix, and for the linear least squares problem is surveyed.
Abstract: This paper surveys perturbation theory for the pseudo–inverse (Moore–Penrose generalized inverse), for the orthogonal projection onto the column space of a matrix, and for the linear least squares ...

393 citations


Journal ArticleDOI
TL;DR: In this paper, a least squares method for simultaneously optimizing thermodynamic functions and phase diagrams from experimental data of both kinds is described. The success of the method depends mainly on the appropriate construction and weighting of the equations of error associated with each measurement.
Abstract: A least squares method for optimizing simultaneously thermodynamic functions and phase diagrams from experimental data of both is described. The success of the method depends mainly on the appropriate construction and weighting of the equations of error associated with each measurement.

383 citations


Journal ArticleDOI
TL;DR: Two regularization methods for ill-conditioned least squares problems are studied from the point of view of numerical efficiency and it is shown that if they are transformed into a certain standard form, very efficient algorithms can be used for their solution.
Abstract: Two regularization methods for ill-conditioned least squares problems are studied from the point of view of numerical efficiency. The regularization methods are formulated as quadratically constrained least squares problems, and it is shown that if they are transformed into a certain standard form, very efficient algorithms can be used for their solution. New algorithms are given, both for the transformation and for the regularization methods in standard form. A comparison to previous algorithms is made and it is shown that the overall efficiency (in terms of the number of arithmetic operations) of the new algorithms is better.

299 citations
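For readers unfamiliar with the standard-form idea mentioned above, here is a minimal sketch (not the authors' algorithms) of how a Tikhonov-regularized least squares problem in standard form reduces to one ordinary least squares solve; the toy data and the helper name are purely illustrative.

```python
import numpy as np

# Toy ill-conditioned least squares problem (illustrative data only).
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
A[:, 4] = A[:, 3] + 1e-6 * rng.normal(size=20)   # near-collinear columns
b = rng.normal(size=20)

def tikhonov_standard_form(A, b, mu):
    """Solve min ||A x - b||^2 + mu^2 ||x||^2 (standard-form regularization)
    by stacking it as one augmented ordinary least squares problem."""
    n = A.shape[1]
    A_aug = np.vstack([A, mu * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

for mu in (0.0, 0.1, 1.0):
    print(mu, tikhonov_standard_form(A, b, mu))
```

The paper's contribution concerns far more efficient algorithms for the transformation and solution; the sketch only shows the shape of the standard-form problem.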


Journal ArticleDOI
TL;DR: In this article, a nonlinear least squares solution for the hydrogeologic parameters, sources and sinks, and boundary fluxes contained in the equations approximately governing two-dimensional or radial steady state groundwater motion was developed through use of a linearization and iteration procedure applied to the finite element discretization of the problem.
Abstract: A new nonlinear least squares solution for the hydrogeologic parameters, sources and sinks, and boundary fluxes contained in the equations approximately governing two-dimensional or radial steady state groundwater motion was developed through use of a linearization and iteration procedure applied to the finite element discretization of the problem. Techniques involving (1) use of an iteration parameter to interpolate or extrapolate the changes in computed parameters and head distribution at each iteration and (2) conditioning of the least squares coefficient matrix through use of ridge regression techniques were proven to induce convergence of the procedure for virtually all problems. Because of the regression nature of the solution for the parameter estimation problem, classical methods of regression analysis are promising as an aid to establishing approximate reliability of computed parameters and predicted values of hydraulic head. Care must be taken not to compute so many parameters that the stability of the estimates is destroyed. Reduction of the error variance by adding parameters is desirable provided that the number of degrees of freedom for error remains large.

167 citations
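The general shape of the iteration described above (linearize, solve a damped least squares step, relax the update) can be sketched on a toy nonlinear model. This is not the authors' finite element code; the model, ridge and relaxation values are assumptions for illustration.

```python
import numpy as np

def residuals(theta, x, y):
    # Toy nonlinear model y = theta0 * (1 - exp(-theta1 * x)); purely illustrative.
    return y - theta[0] * (1.0 - np.exp(-theta[1] * x))

def jacobian(theta, x):
    d0 = -(1.0 - np.exp(-theta[1] * x))
    d1 = -theta[0] * x * np.exp(-theta[1] * x)
    return np.column_stack([d0, d1])

def ridge_gauss_newton(x, y, theta0, ridge=1e-3, relax=0.8, iters=50):
    """Linearize, solve (J'J + ridge*I) d = -J'r, and apply a relaxation
    (iteration) parameter to the update -- the general scheme sketched above."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        r = residuals(theta, x, y)
        J = jacobian(theta, x)
        d = np.linalg.solve(J.T @ J + ridge * np.eye(len(theta)), -J.T @ r)
        theta = theta + relax * d
    return theta

x = np.linspace(0.1, 5.0, 40)
y = 2.0 * (1.0 - np.exp(-1.5 * x)) + 0.01 * np.random.default_rng(1).normal(size=x.size)
print(ridge_gauss_newton(x, y, theta0=[1.0, 1.0]))
```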



Journal ArticleDOI
TL;DR: In this article, a Gauss function and numerical corrections are incorporated into a non-linear least-squares fitting program used for the analysis of X-ray induced X-ray fluorescence spectra.

106 citations



Journal ArticleDOI
TL;DR: In this article, a unified approach based on the ranks of the residuals is developed for testing and estimation in the linear model; the methods are robust and efficient relative to least squares methods.
Abstract: A unified approach based on the ranks of the residuals is developed for testing and estimation in the linear model. The methods are robust and efficient relative to least squares methods. For ease of application, the strong analogy to least squares strategy is emphasized. The procedures are illustrated on examples from regression and analysis of covariance.

86 citations
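One concrete rank-based (R-) estimate in the spirit of the approach above minimizes a dispersion of the residuals built from Wilcoxon scores of their ranks. The sketch below is not the authors' procedure verbatim; the data, score choice, and optimizer are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rankdata

def wilcoxon_dispersion(beta, X, y):
    """Jaeckel-type dispersion: sum of score(rank(e_i)) * e_i with Wilcoxon scores."""
    e = y - X @ beta
    n = len(e)
    scores = np.sqrt(12.0) * (rankdata(e) / (n + 1.0) - 0.5)
    return np.sum(scores * e)

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 2))                       # no intercept: it is not identified here
y = X @ np.array([1.5, -0.7]) + rng.standard_t(df=3, size=60)   # heavy-tailed noise

fit = minimize(wilcoxon_dispersion, x0=np.zeros(2), args=(X, y), method="Nelder-Mead")
print("rank-based slope estimate:", fit.x)
print("least squares estimate   :", np.linalg.lstsq(X, y, rcond=None)[0])
```

With heavy-tailed errors the rank-based slopes typically stay closer to the true values than the least squares ones, which is the robustness point the abstract makes.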


Journal ArticleDOI
Jan Sternby1
TL;DR: In this paper, a necessary and sufficient condition for consistency almost everywhere is given under the assumption that the data are generated by a regression model with white and Gaussian noise.
Abstract: Least squares identification is considered from the Bayesian point of view. A necessary and sufficient condition for consistency almost everywhere is given under the assumption that the data are generated by a regression model with white and Gaussian noise.

82 citations


Journal ArticleDOI
TL;DR: In this paper, a method which uses orthogonal transformations to solve the Duncan and Horn formulation is presented; it gives advantages in numerical accuracy over other related methods in the literature and is comparable in the number of computations required.
Abstract: Kalman [9] introduced a method for estimating the state of a discrete linear dynamic system subject to noise. His method is fast but has poor numerical properties. Duncan and Horn [3] showed that the same problem can be formulated as a weighted linear least squares problem. Here we present a method which uses orthogonal transformations to solve the Duncan and Horn formulation by taking advantage of the special structure of the problem. This approach gives advantages in numerical accuracy over other related methods in the literature, and is similar in the number of computations required. It also gives a straightforward presentation of the material for those unfamiliar with the area.

73 citations
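A toy version of the Duncan and Horn viewpoint for a scalar system: stack the dynamics and measurement equations into one weighted least squares problem in the whole state sequence and solve it with an orthogonal (QR) factorization. The system, noise levels, and weights are invented for illustration; the paper's structured, square-root algorithm is far more efficient than this dense sketch.

```python
import numpy as np

# Scalar system x_{k+1} = a x_k + w_k,  y_k = x_k + v_k  (illustrative values).
a, q_std, r_std, N = 0.9, 0.2, 0.5, 8
rng = np.random.default_rng(3)
x_true = np.zeros(N)
for k in range(1, N):
    x_true[k] = a * x_true[k - 1] + q_std * rng.normal()
y = x_true + r_std * rng.normal(size=N)

# Stack all equations as rows of one weighted least squares problem in (x_0..x_{N-1}).
rows, rhs = [], []
for k in range(N):                       # measurement rows, weighted by 1/r_std
    row = np.zeros(N); row[k] = 1.0
    rows.append(row / r_std); rhs.append(y[k] / r_std)
for k in range(1, N):                    # dynamics rows, weighted by 1/q_std
    row = np.zeros(N); row[k] = 1.0; row[k - 1] = -a
    rows.append(row / q_std); rhs.append(0.0)
A = np.array(rows); b = np.array(rhs)

# Solve via QR (orthogonal transformations) rather than normal equations.
Q, R = np.linalg.qr(A)
x_hat = np.linalg.solve(R, Q.T @ b)
print(np.round(x_hat, 3))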


Journal ArticleDOI
A. Buse1
TL;DR: In this article, the standard cubic spline regression method is shown to be a special case of the restricted least-squares estimator, and the equivalence of the two procedures under a common set of restrictions is proved.
Abstract: The standard cubic spline regression method is shown to be a special case of the restricted least-squares estimator. The equivalence of the two procedures under a common set of restrictions is proved. The greater flexibility of the restricted least-squares estimator in terms of the number of restrictions and tests of hypotheses that can be utilized is illustrated by an application to a set of data that has been previously analyzed by the spline method.
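A minimal sketch of the equivalence discussed above: ordinary least squares on a truncated power basis is the restricted least squares form of a cubic regression spline, since the basis builds in the continuity restrictions on the piecewise cubics. The data, knot locations, and helper name are assumptions, not from the paper.

```python
import numpy as np

def cubic_spline_basis(x, knots):
    """Truncated power basis 1, x, x^2, x^3, (x - k)_+^3: OLS on this basis is
    equivalent to piecewise cubics fitted under continuity restrictions."""
    cols = [np.ones_like(x), x, x ** 2, x ** 3]
    cols += [np.clip(x - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0.0, 10.0, 80))
y = np.sin(x) + 0.2 * rng.normal(size=x.size)          # illustrative data

B = cubic_spline_basis(x, knots=[2.5, 5.0, 7.5])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
print("fitted spline coefficients:", np.round(coef, 3))
```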

Journal ArticleDOI
Robert L. Obenchain1
TL;DR: The ASSOCIATED PROBABILITY of a ridge estimate is defined using the usual, hyperellipsoidal confidence region centered at the least squares estimator, and it is argued that ridge estimates are of relatively little interest when they are so "extreme" that they lie outside of the least squares region of, say, 90 percent confidence.
Abstract: For testing general linear hypotheses in multiple regression models it is shown that non-stochastically shrunken ridge estimators yield the same central F-ratios and t-statistics as does the least squares estimator. Thus, although ridge regression does produce biased point estimates which deviate from the least squares solution, ridge techniques do not generally yield "new" normal theory statistical inferences: in particular, ridging does not necessarily produce "shifted" confidence regions. A concept, the ASSOCIATED PROBABILITY of a ridge estimate, is defined using the usual, hyperellipsoidal confidence region centered at the least squares estimator, and it is argued that ridge estimates are of relatively little interest when they are so "extreme" that they lie outside of the least squares region of, say, 90 percent confidence.
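One way to read the "associated probability" idea is as the confidence level of the smallest least squares ellipsoid that just reaches the ridge estimate. The sketch below computes that quantity for made-up data and an arbitrary ridge parameter; it is my reading of the concept, not code from the paper.

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(4)
n, p = 40, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=n)

XtX = X.T @ X
beta_ls = np.linalg.solve(XtX, X.T @ y)
s2 = np.sum((y - X @ beta_ls) ** 2) / (n - p)          # residual variance estimate

k = 2.0                                                # ridge parameter (illustrative)
beta_ridge = np.linalg.solve(XtX + k * np.eye(p), X.T @ y)

# Confidence level of the smallest LS ellipsoid that reaches the ridge estimate.
d = beta_ridge - beta_ls
F_stat = (d @ XtX @ d) / (p * s2)
assoc_prob = f_dist.cdf(F_stat, p, n - p)
print(f"associated probability ~ {assoc_prob:.3f}")
```

If this value exceeds, say, 0.90, the ridge estimate lies outside the 90 percent least squares region, which is the "extreme" case the abstract flags.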

Journal ArticleDOI
TL;DR: The rotating iterative procedure is a programming concept for non-linear least squares fitting of multiexponential equations to experimental data in pharmacokinetics and can be employed in modern electronic desk-top computers.
Abstract: The rotating iterative procedure (RIP) is a programming concept for non-linear least squares fitting of multiexponential equations to experimental data in pharmacokinetics. The method is economical in its use of program and active register capacity and can be employed in modern electronic desk-top computers. The algorithms necessary for obtaining primary estimates of various logarithmic components and their subsequent correction are presented, with as little higher mathematics as appeared permissible. The procedure is described in the sequence that would actually be followed in a pharmacokinetic analysis, and an example is included, as well as a skeleton version of a program written in BASIC. Some instructions for obtaining overall statistical parameters are given.
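The paper supplies a skeleton BASIC program; as a language-neutral illustration of the same kind of fit, here is a minimal non-linear least squares fit of a biexponential model with SciPy. The data, starting values, and noise level are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexponential(t, A, alpha, B, beta):
    """C(t) = A*exp(-alpha*t) + B*exp(-beta*t), a common pharmacokinetic form."""
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

# Simulated concentration-time data (illustrative only).
t = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 12, 24], dtype=float)
rng = np.random.default_rng(5)
c = biexponential(t, 10.0, 1.2, 4.0, 0.15) * (1 + 0.03 * rng.normal(size=t.size))

# Primary estimates (e.g. from curve peeling of the log-linear tails) would seed p0.
popt, pcov = curve_fit(biexponential, t, c, p0=[8.0, 1.0, 3.0, 0.1])
print("fitted parameters:", np.round(popt, 3))
print("approx. std errors:", np.round(np.sqrt(np.diag(pcov)), 3))
```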

Journal ArticleDOI
TL;DR: In this paper, a nonlinear least squares method of retrieving line intensities and halfwidths from spectra degraded by a finite instrumental resolving power is discussed, and examples of the applications of this technique to obtain the parameters of a single isolated line and of an overlapping pair of lines are presented.
Abstract: A nonlinear least squares method of retrieving line intensities and half-widths from spectra degraded by a finite instrumental resolving power is discussed. Examples of the applications of this technique to obtain the parameters of a single isolated line and of an overlapping pair of lines are presented.

Journal ArticleDOI
TL;DR: In this article, a numerical method for solving second-order elliptic partial differential equations is presented, in which the equation is rewritten as a lower-order system and the system is solved using least squares techniques.
Abstract: This paper analyzes a numerical method for solving second-order elliptic partial differential equations. The idea is to write the equation as a lower-order system and solve the system using least squares techniques. Error estimates are derived for a model problem.
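For concreteness, one standard instance of such a reformulation for the model problem is sketched below; the paper's exact system and norms may differ, so treat this as an illustrative example rather than the authors' formulation.

```latex
% Model problem: -\Delta u = f in \Omega, u = 0 on \partial\Omega.
% Introduce the flux \boldsymbol{\sigma} = \nabla u, write the equation as a
% first-order system, and minimize a least squares functional over it:
\boldsymbol{\sigma} - \nabla u = 0,
\qquad -\nabla \cdot \boldsymbol{\sigma} = f \quad \text{in } \Omega,
\qquad
J(u,\boldsymbol{\sigma}) = \|\boldsymbol{\sigma} - \nabla u\|_{L^2(\Omega)}^2
  + \|\nabla \cdot \boldsymbol{\sigma} + f\|_{L^2(\Omega)}^2 .
```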

Journal ArticleDOI
TL;DR: The ROKE program, as discussed by the authors, estimates the parameters of mixtures of normal or lognormal distributions from data available in histogram form.

Journal ArticleDOI
TL;DR: In this article, the Gauss-Newton method, the Levenberg-Marquardt method, and a quasi-Newton method were compared on generated test problems to see how specialized least squares methods fare against general-purpose nonlinear optimization routines.
Abstract: The problem of minimizing a sum of squares of nonlinear functions is studied. To solve this problem one usually takes advantage of the fact that the objective function is of this special form. Doing this gives the Gauss-Newton method or modifications thereof. To study how these specialized methods compare with general-purpose nonlinear optimization routines, test problems were generated where parameters determining the local behaviour of the algorithms could be controlled. On the order of 1000 test problems were generated for testing three algorithms: the Gauss-Newton method, the Levenberg-Marquardt method and a quasi-Newton method.
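As a small illustration of the kind of comparison described (not the authors' generated test set), the same residual function can be handed to SciPy's Levenberg-Marquardt least squares solver and to a general-purpose quasi-Newton routine; SciPy does not expose a plain Gauss-Newton method, so LM stands in for the specialized family here.

```python
import numpy as np
from scipy.optimize import least_squares, minimize

def residuals(theta):
    # Small illustrative sum-of-squares problem (Rosenbrock in residual form).
    return np.array([10.0 * (theta[1] - theta[0] ** 2), 1.0 - theta[0]])

def objective(theta):
    r = residuals(theta)
    return 0.5 * np.dot(r, r)

x0 = np.array([-1.2, 1.0])

lm = least_squares(residuals, x0, method="lm")   # Levenberg-Marquardt (MINPACK)
qn = minimize(objective, x0, method="BFGS")      # general-purpose quasi-Newton

print("LM   :", lm.x, "nfev =", lm.nfev)
print("BFGS :", qn.x, "nfev =", qn.nfev)
```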

Journal ArticleDOI
TL;DR: In this paper, the classes of R- and M-estimates contain practical robust alternatives to least squares estimation in linear models and form the basis for a robust analysis of variance.
Abstract: The classes of R- and M-estimates contain practical robust alternatives to least squares estimation in linear models. These estimates form the basis for a robust analysis of variance. This inference procedure is described and its versatility demonstrated.

Journal ArticleDOI
TL;DR: This paper considers some specialised methods for the nonlinear least squares problem which seek to improve the Gauss-Newton estimate of the Hessian matrix without explicitly calculating second derivatives.
Abstract: Computational experiments by McKeown [11] have shown that specialised methods, based on the Gauss-Newton iteration, are not necessarily the best choice for minimising functions that are sums of squared terms. Difficulties arise when the Gauss-Newton approach does not yield a good approximation to the second derivative matrix of the function, and this is more likely to happen when the function value at the optimum is not near zero and the terms in the sum of squares are significantly nonlinear. This paper considers some specialised methods for the nonlinear least squares problem which seek to improve the Gauss-Newton estimate of the Hessian matrix without explicitly calculating second derivatives.
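The difficulty can be stated in one line: for a sum-of-squares objective the Gauss-Newton Hessian drops the second-derivative term, which matters exactly when the residuals at the optimum are large and strongly nonlinear. The standard identity is

```latex
% f(x) = \tfrac{1}{2}\sum_i r_i(x)^2, with J the Jacobian of the residual vector r:
\nabla^2 f(x) \;=\; J(x)^{T} J(x) \;+\; \sum_i r_i(x)\,\nabla^2 r_i(x),
\qquad \text{Gauss-Newton keeps only } J(x)^{T} J(x).
```

The methods the paper studies approximate the neglected sum without forming second derivatives explicitly.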




Journal ArticleDOI
TL;DR: In this paper, the Dunham potential coefficients are numerically determined by using a nonlinear least squares routine applied directly to the experimental line wavenumbers, and the results are compared to the ones obtained when using the usual iterative process applied to the H81Br Yi0 and Yi1 equilibrium constants.
Abstract: In this paper, the Dunham potential coefficients are numerically determined by using a nonlinear least squares routine applied directly to the experimental line wavenumbers. The results are compared to the ones obtained when using the usual iterative process applied to the H81Br Yi0 and Yi1 equilibrium constants. The new method of determining the al coefficients assumes a theoretical framework (B.O., adiabatic or non-adiabatic) to be valid. One can test this assumption by comparing the experimental data to the calculated ones.

Journal ArticleDOI
TL;DR: In this article, the scale factor for the covariance matrices being used in the collocation is estimated, and the methods of testing hypotheses and establishing confidence intervals for the parameters of the least squares adjustment may be applied to the collocation.
Abstract: For the estimation of parameters in linear models best linear unbiased estimates are derived in case the parameters are random variables. If their expected values are unknown, the well known formulas of least squares adjustment are obtained. If the expected values of the parameters are known, least squares collocation, prediction and filtering are derived. Hence in case of the determination of parameters, a least squares adjustment must precede a collocation because otherwise the collocation gives biased estimates. Since the collocation can be shown to be equivalent to a special case of the least squares adjustment, the variance of unit weight can be estimated for the collocation also. This estimate gives the scale factor for the covariance matrices being used in the collocation. In addition, the methods of testing hypotheses and establishing confidence intervals for the parameters of the least squares adjustment may be applied to the collocation.

Journal ArticleDOI
TL;DR: In this paper, a two-step weighted least squares estimator for multiple factor analysis of dichotomized variables is discussed, which is based on the first and second order joint probabilities.
Abstract: A two-step weighted least squares estimator for multiple factor analysis of dichotomized variables is discussed. The estimator is based on the first and second order joint probabilities. Asymptotic standard errors and a model test are obtained by applying the Jackknife procedure.

Journal ArticleDOI
TL;DR: In this article, the authors show that a computational error in the application of multiple regression computer programs to two-stage least squares estimation of the standardized coefficients for nonrecursive models leads to the understatement of the relative size of reciprocal effects.
Abstract: A computational error in the application of multiple regression computer programs to two-stage least squares estimation of the standardized coefficients for nonrecursive models leads to the understatement of the relative size of reciprocal effects. Furthermore, an error in the computation of the disturbance variance invalidates significance tests for the metric coefficients. The sources of the errors are derived and an example is presented. The example reveals that not only do the incorrectly computed standardized coefficients underestimate reciprocal effects, they also may lead to specious conclusions regarding group differences in the relative impact of causal variables.
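A minimal sketch of the variance point made above: in two-stage least squares the second-stage residuals used for the disturbance variance must be recomputed with the original endogenous regressors, not the first-stage fitted values. The data-generating process and variable names below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
z = rng.normal(size=(n, 2))                   # instruments
u = rng.normal(size=n)
x_endog = z @ np.array([1.0, -1.0]) + 0.8 * u + rng.normal(size=n)
y = 2.0 + 1.5 * x_endog + u                   # x_endog is correlated with the error u

X = np.column_stack([np.ones(n), x_endog])    # structural regressors
Z = np.column_stack([np.ones(n), z])          # instruments plus constant

# Stage 1: project the endogenous regressor on the instruments.
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
# Stage 2: OLS of y on the projected regressors gives the 2SLS coefficients.
b_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0]

# Correct disturbance variance: residuals use the ORIGINAL regressors X.
e = y - X @ b_2sls
sigma2 = e @ e / (n - X.shape[1])
# Using X_hat here instead of X (the error discussed above) understates the
# residuals and invalidates the significance tests for the metric coefficients.
print("2SLS coefficients:", np.round(b_2sls, 3), " sigma^2:", round(float(sigma2), 3))
```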

Book ChapterDOI
01 Jan 1977

Journal ArticleDOI
TL;DR: In this article, it was shown that a matrix over a field of characteristic two is a sum of two squares if and only if its trace is a square, and otherwise it is not a sum of squares.
Abstract: How many squares are needed to represent elements in a matrix ring? A matrix over a field of characteristic two is a sum of two squares if and only if its trace is a square, otherwise it is not a sum of squares. Any proper matrix over a field of characteristic not two is always a sum of three squares. If the order of a matrix is even the matrix is a sum of two squares, but an odd order matrix which is q times the identity matrix is a sum of two squares if and only if q is a sum of two squares in the field. Matrices of order 2, 3 and 4 over the integers can always be written as the sum of three squares.


Journal ArticleDOI
TL;DR: In this article, the problem of least squares state estimation for continuous linear stochastic systems having some noise-free outputs is reconsidered, and it is shown that the approach of Bryson and Johansen [1] can be used to provide a simple derivation of the stochastic observer estimator in a readily implementable form.
Abstract: The problem of least squares state estimation for continuous linear stochastic systems having some noise-free outputs is reconsidered. It is shown that the approach of Bryson and Johansen [1] can be used to provide a simple derivation of the stochastic observer estimator in a readily implementable form.