scispace - formally typeset

Showing papers on "Non-linear least squares published in 1968"


Journal ArticleDOI
Derek York1
TL;DR: In this paper, the fitting of a straight line when both variables are subject to errors is generalized to allow for correlation of the x and y errors, illustrated by reference to lead isochron fitting.

2,237 citations
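
York's procedure can be sketched as an iterative reweighting: given a trial slope, each point receives a weight combining its x and y error estimates (and their correlation), and the slope is re-estimated until it stabilizes. A minimal Python sketch along those lines (the function name and the ordinary-least-squares starting slope are illustrative choices, not prescribed by the paper):

```python
import math

def york_fit(x, y, sx, sy, r=0.0, tol=1e-10, max_iter=50):
    """Fit y = a + b*x when both coordinates carry errors, by
    iteratively reweighting each point.  sx, sy are standard errors
    of x and y; r is the correlation of the two errors at each point."""
    wx = [1.0 / s**2 for s in sx]          # weights of the x values
    wy = [1.0 / s**2 for s in sy]          # weights of the y values
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # ordinary least squares slope as the starting value
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx)**2 for xi in x))
    for _ in range(max_iter):
        alpha = [math.sqrt(wxi * wyi) for wxi, wyi in zip(wx, wy)]
        # combined weight of each point for the current slope b
        W = [wxi * wyi / (wxi + b * b * wyi - 2.0 * b * r * ai)
             for wxi, wyi, ai in zip(wx, wy, alpha)]
        sw = sum(W)
        Xb = sum(Wi * xi for Wi, xi in zip(W, x)) / sw
        Yb = sum(Wi * yi for Wi, yi in zip(W, y)) / sw
        U = [xi - Xb for xi in x]
        V = [yi - Yb for yi in y]
        beta = [Wi * (Ui / wyi + b * Vi / wxi - (b * Ui + Vi) * r / ai)
                for Wi, Ui, Vi, wxi, wyi, ai in zip(W, U, V, wx, wy, alpha)]
        b_new = (sum(Wi * bi * Vi for Wi, bi, Vi in zip(W, beta, V))
                 / sum(Wi * bi * Ui for Wi, bi, Ui in zip(W, beta, U)))
        done = abs(b_new - b) < tol
        b = b_new
        if done:
            break
    return Yb - b * Xb, b          # intercept a, slope b
```

With equal weights and uncorrelated errors the result reduces to the ordinary least-squares line, which is a convenient sanity check.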



Book ChapterDOI
TL;DR: In this article, a basic linear model of stationary stochastic processes is proposed for the analysis of linear feedback systems, which suggests a simple computational procedure which gives estimates of the response characteristics of the system and the spectra of the noise source.
Abstract: A basic linear model of stationary stochastic processes is proposed for the analysis of linear feedback systems. The model suggests a simple computational procedure which gives estimates of the response characteristics of the system and the spectra of the noise source. These estimates are obtained through the estimate of the linear predictor of the process, which is obtained by the ordinary least squares method.

156 citations
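
The "estimate of the linear predictor of the process ... by the ordinary least squares method" step can be illustrated by fitting an autoregressive predictor to a scalar series. A minimal Python sketch (pure stdlib; the function name and AR order are illustrative, and the multivariate feedback analysis of the chapter is not attempted here):

```python
def ar_ols(y, p):
    """Estimate AR(p) coefficients of a series by ordinary least
    squares: find a minimizing sum_t (y_t - sum_k a[k]*y_{t-1-k})^2.
    Returns coefficients a[0..p-1]."""
    n = len(y)
    # normal equations G a = g built from the lagged regressors
    G = [[sum(y[t-1-i] * y[t-1-j] for t in range(p, n)) for j in range(p)]
         for i in range(p)]
    g = [sum(y[t] * y[t-1-i] for t in range(p, n)) for i in range(p)]
    # solve by Gaussian elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(G[r][col]))
        G[col], G[piv] = G[piv], G[col]
        g[col], g[piv] = g[piv], g[col]
        for r in range(col + 1, p):
            f = G[r][col] / G[col][col]
            for c in range(col, p):
                G[r][c] -= f * G[col][c]
            g[r] -= f * g[col]
    a = [0.0] * p
    for r in range(p - 1, -1, -1):
        a[r] = (g[r] - sum(G[r][c] * a[c] for c in range(r + 1, p))) / G[r][r]
    return a
```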






Journal ArticleDOI
TL;DR: It is shown that many previous methods for the exact solution of systems of non-linear equations are all based upon simple cases of the generalized inverse of the matrix of first derivatives of the equations.
Abstract: It is shown that many previous methods for the exact solution (or best least squares solution) of systems of non-linear equations are all based upon simple cases of the generalized inverse of the matrix of first derivatives of the equations. The general case is given and algorithms for its application are suggested, especially in the case where the matrix of first derivatives cannot be calculated. Numerical tests confirm that these algorithms extend the range of practical problems which can be solved.

56 citations
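
The core iteration the abstract describes, repeatedly applying a generalized inverse of the first-derivative matrix, reduces (when the Jacobian has full column rank) to solving the normal equations JᵀJ d = JᵀF for the correction. A minimal Python sketch under that assumption, not the paper's full algorithm, which also covers the case where derivatives cannot be calculated:

```python
def solve(A, b):
    """Solve a small dense linear system by Gaussian elimination
    with partial pivoting (works on copies of A and b)."""
    n = len(b)
    A = [row[:] for row in A]; b = b[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def gauss_newton(f, jac, x, iters=20):
    """Iterate x <- x - J(x)^+ f(x).  For a square nonsingular J this
    is Newton's method; for overdetermined systems it is the least
    squares (generalized-inverse) step.  J^+ f is computed here from
    the normal equations J^T J d = J^T F."""
    for _ in range(iters):
        F = f(x)
        J = jac(x)
        m, n = len(J), len(J[0])
        A = [[sum(J[k][i] * J[k][j] for k in range(m)) for j in range(n)]
             for i in range(n)]
        b = [sum(J[k][i] * F[k] for k in range(m)) for i in range(n)]
        d = solve(A, b)
        x = [xi - di for xi, di in zip(x, d)]
    return x
```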


01 Jan 1968
TL;DR: An algorithm is presented in ALGOL for iteratively refining the solution to a linear least squares problem with linear constraints; numerical results show that a high degree of accuracy is obtained.
Abstract: An algorithm is presented in ALGOL for iteratively refining the solution to a linear least squares problem with linear constraints. Numerical results presented show that a high degree of accuracy is obtained.

51 citations
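
The refinement idea can be sketched for the unconstrained case: solve the normal equations once, then repeatedly solve them again against the current residual to correct the solution. A Python sketch (the 1968 ALGOL algorithm also handles linear constraints and accumulates residuals in extended precision, both omitted here):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting on copies of A, b."""
    n = len(b)
    A = [row[:] for row in A]; b = b[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def ls_refined(A, b, sweeps=2):
    """Least squares via the normal equations G x = g, followed by
    iterative refinement: each sweep solves G d = A^T (b - A x) and
    corrects x by d."""
    m, n = len(A), len(A[0])
    G = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    g = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    x = solve(G, g)
    for _ in range(sweeps):
        res = [b[k] - sum(A[k][j] * x[j] for j in range(n)) for k in range(m)]
        r = [sum(A[k][i] * res[k] for k in range(m)) for i in range(n)]
        d = solve(G, r)
        x = [xi + di for xi, di in zip(x, d)]
    return x
```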



Journal ArticleDOI
TL;DR: In this article, a simple iterative process to solve minimization problems in chemistry when the number of parameters is small, as is the case in most empirical laws, is presented.
Abstract: The purpose of this paper is to recommend a simple iterative process to solve minimization problems in chemistry when the number of parameters is small, as is the case in most empirical laws.
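
The paper's specific process is not reproduced here, but a typical derivative-free iterative minimizer for a handful of parameters looks like this cyclic coordinate search with step shrinking (a generic sketch, not the authors' exact recipe):

```python
def coordinate_descent(f, x, step=0.5, shrink=0.5, iters=60):
    """Minimize f over a short parameter list x: cycle through the
    coordinates, try +/- step on each, keep any improvement, and
    shrink the step whenever a full sweep fails to improve."""
    fx = f(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                trial = list(x)
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    improved = True
        if not improved:
            step *= shrink        # full sweep failed: refine the step
    return x, fx
```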

Journal ArticleDOI
TL;DR: In this article, the authors compared procedures for the linear least-squares problem based on Gram-Schmidt, Modified Gram-Schmidt, and Householder transformations with the normal equations, and with a Gaussian-elimination procedure for n × n systems, concluding that the Modified Gram-Schmidt procedure is best for the linear least-squares problem.
Abstract: Some numerical experiments were performed to compare the performance of procedures for solving the linear least-squares problem based on Gram-Schmidt, Modified Gram-Schmidt, and Householder transformations as well as the classical method of forming and solving the normal equations. In addition, similar comparisons were made of the first three procedures and a procedure based on Gaussian elimination for solving an n × n system of equations. The results of these experiments suggest that: (1) the Modified Gram-Schmidt procedure is best for the least-squares problem, and the procedure based on Householder transformations performed competitively; (2) all the methods for solving least-squares problems suffer the effects of the condition number of AᵀA, although in a different manner for the first three procedures than for the fourth; and (3) the procedure based on Gaussian elimination is the most economical and essentially the most accurate for solving n × n systems of linear equations. Some effects of pivoting in each of the procedures are included.
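
The Modified Gram-Schmidt approach the experiments favour can be sketched directly: factor A = QR column by column, orthogonalizing each remaining column against every new q as it is formed, then back-substitute R x = Qᵀb. A minimal Python sketch (for brevity, b is projected after the factorization; carrying b as an extra column is the more robust variant):

```python
def mgs_ls(A, b):
    """Solve min ||Ax - b|| via Modified Gram-Schmidt QR."""
    m, n = len(A), len(A[0])
    Q = [[float(A[i][j]) for j in range(n)] for i in range(m)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        R[j][j] = sum(Q[i][j]**2 for i in range(m)) ** 0.5
        for i in range(m):
            Q[i][j] /= R[j][j]
        # immediately orthogonalize the remaining columns against q_j
        for k in range(j + 1, n):
            R[j][k] = sum(Q[i][j] * Q[i][k] for i in range(m))
            for i in range(m):
                Q[i][k] -= R[j][k] * Q[i][j]
    # back-substitute R x = Q^T b
    qtb = [sum(Q[i][j] * b[i] for i in range(m)) for j in range(n)]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (qtb[r] - sum(R[r][c] * x[c] for c in range(r + 1, n))) / R[r][r]
    return x
```

Unlike the normal equations, this never forms AᵀA, which is the source of the squared condition number the experiments observe.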


Journal ArticleDOI
TL;DR: In this article, the small sample properties of certain estimators of the coefficients of systems of simultaneous nonlinear equations are investigated, including direct least squares, various forms of two-stage least squares and full-information maximum likelihood.
Abstract: The small sample properties of certain estimators of the coefficients of systems of simultaneous nonlinear equations are investigated. Sampling experiments are used in connection with two specific nonlinear models. The estimating methods investigated comprise direct least squares, various forms of two-stage least squares and full-information maximum likelihood. The relative performances of the various methods are evaluated on the basis of informal comparisons of their respective mean absolute errors and root mean square errors and also by more formal tests of significance. Direct least squares is found to be, as expected, the worst estimating method. The other two methods are rather more comparable, with full-information maximum likelihood holding the edge for both theoretical and experimental reasons.





Journal ArticleDOI
TL;DR: In this paper, it was shown that the computationally simple method of averages can yield a surprisingly good solution of an overdetermined system of linear equations, provided that the grouping of the equations is done in an appropriate way.
Abstract: It is shown that the computationally simple method of averages can yield a surprisingly good solution of an overdetermined system of linear equations, provided that the grouping of the equations is done in an appropriate way. The notion of angle between linear subspaces is applied in a general comparison of this method and the method of least squares. The optimal application of the method is treated for the test problem of fitting a polynomial of degree less than six.
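
The method of averages itself is simple to state in code: partition the m overdetermined equations into n groups, add up the equations in each group, and solve the resulting n × n system. A Python sketch (the grouping is left to the caller, since the paper's point is precisely that its choice governs the accuracy):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting on copies of A, b."""
    n = len(b)
    A = [row[:] for row in A]; b = b[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def method_of_averages(rows, rhs, groups):
    """rows[i] holds the coefficients of equation i, rhs[i] its right
    side; groups is one index list per unknown.  Summing the equations
    inside each group gives a square system, which is then solved."""
    n = len(rows[0])
    A = [[sum(rows[i][j] for i in g) for j in range(n)] for g in groups]
    b = [sum(rhs[i] for i in g) for g in groups]
    return solve(A, b)
```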

Journal ArticleDOI
Tomaso Pomentale1
TL;DR: An approximation without poles and depending on a parameter is defined which gives an "acceptable" approximation when the least squares approximation does not exist, and it is shown that, if the discrete function to be fitted is sufficiently close to a rational function, then the least squares approximation exists.
Abstract: The paper deals with the finite rational least squares approximation to a discrete function. An approximation without poles and depending on a parameter is defined which tends to the least squares approximation as the parameter tends to zero. It gives an "acceptable" approximation when the least squares approximation does not exist. Further, it is shown that, if the discrete function to be fitted is sufficiently close to a rational function, then the least squares approximation exists.


Journal ArticleDOI
TL;DR: In this paper, the rate of convergence of the approximate solution, constructed by the least squares method, to the exact solution is established subject to certain conditions, and a similar problem is investigated for a generalized Bubnov-Galerkin method.
Abstract: In this paper the rate of convergence of the approximate solution, constructed by the least squares method, to the exact solution is established subject to certain conditions. A similar problem is investigated for a generalized Bubnov-Galerkin method.




Journal ArticleDOI
TL;DR: In this paper, the principles of least square adjustment of observations are described through the use of matrix algebra and the precision estimates of the adjusted variables are considered a part of the solution of each case.
Abstract: The principles of least squares adjustment of observations are described in this paper through the use of matrix algebra. Six possible cases of adjustment of observations are presented and completely solved. The precision estimates of the adjusted variables are considered a part of the solution of each case. The general problem of least squares adjustment is then derived and solved through a simple approach. The solutions for two additional problems of adjustment are extracted from the solution of the general problem. In the same way, the solution of any particular case of adjustment can easily be found from that of the general problem. A summary of the equations giving the solution of most of the adjustment cases presented in this paper is given in tabulated form. A numerical example which solidifies the solution of the general problem of least squares is presented.
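
The parametric case of such an adjustment can be sketched in matrix form: x̂ = (AᵀPA)⁻¹AᵀPl, with the cofactor matrix Qx = (AᵀPA)⁻¹ supplying the precision estimates of the adjusted parameters. A minimal Python sketch (the unit-weight variance and the condition/combined adjustment cases of the paper are omitted):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting on copies of A, b."""
    n = len(b)
    A = [row[:] for row in A]; b = b[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def adjust(A, P, l):
    """Parametric adjustment: x = (A'PA)^-1 A'Pl, and the cofactor
    matrix Qx = (A'PA)^-1 of the adjusted parameters.  A is the m x n
    design matrix, P the m x m weight matrix, l the observations."""
    m, n = len(A), len(A[0])
    PA = [[sum(P[i][k] * A[k][j] for k in range(m)) for j in range(n)]
          for i in range(m)]
    N = [[sum(A[k][i] * PA[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    Pl = [sum(P[k][r] * l[r] for r in range(m)) for k in range(m)]
    t = [sum(A[k][i] * Pl[k] for k in range(m)) for i in range(n)]
    x = solve(N, t)
    # invert N column by column to obtain Qx
    Qx = [[0.0] * n for _ in range(n)]
    for j in range(n):
        col = solve(N, [1.0 if i == j else 0.0 for i in range(n)])
        for i in range(n):
            Qx[i][j] = col[i]
    return x, Qx
```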

Journal ArticleDOI
TL;DR: In this paper, a simple graphical method is presented to transform co-ordinates from one plane co-ordinate system onto another, to examine the factors operating and to isolate inconsistent points.
Abstract: A simple graphical method is presented to transform co-ordinates from one plane co-ordinate system onto another, to examine the factors operating and to isolate inconsistent points. The transformation allows for the effects of orientation, magnification, and translation operating between the systems. The determination of the constants by this means is equivalent to a rigorous Least Squares solution.
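
The transformation described (orientation, magnification, translation) is the four-parameter plane similarity transform, which is linear in its parameters and therefore solvable by least squares directly. A Python sketch, using the standard parameterization X = a·x − b·y + tx, Y = b·x + a·y + ty (a convenient choice, not necessarily the paper's notation):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting on copies of A, b."""
    n = len(b)
    A = [row[:] for row in A]; b = b[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def fit_similarity(src, dst):
    """Least-squares 4-parameter plane transformation between two
    coordinate systems:  X = a*x - b*y + tx,  Y = b*x + a*y + ty.
    src and dst are lists of (x, y) pairs; returns [a, b, tx, ty]."""
    rows, rhs = [], []
    for (x, y), (X, Y) in zip(src, dst):
        rows.append([x, -y, 1.0, 0.0]); rhs.append(X)
        rows.append([y,  x, 0.0, 1.0]); rhs.append(Y)
    m, n = len(rows), 4
    N = [[sum(rows[k][i] * rows[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    t = [sum(rows[k][i] * rhs[k] for k in range(m)) for i in range(n)]
    return solve(N, t)
```

Residuals of the fitted transformation at each point are what isolate the inconsistent points the abstract mentions.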



Journal ArticleDOI
TL;DR: In this paper, it was shown that the processing of experimental results by the method of least squares should in various cases be accomplished by minimizing the sum of the squares of the relative deviations, rather than the absolute deviations, as is usually done.
Abstract: It is demonstrated that the processing of experimental results by the method of least squares should be accomplished in various cases by minimizing the sum of the squares of the relative deviations in the estimates of the regression coefficients, rather than those of the absolute deviations, as is usually the case.
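
Minimizing squared relative deviations is ordinary weighted least squares with weights w_i = 1/y_i²; for a straight line the normal equations close in a few sums. A Python sketch of that reading of the recommendation:

```python
def fit_line_relative(x, y):
    """Fit y = a + b*x minimizing sum_i ((y_i - a - b*x_i)/y_i)^2,
    i.e. weighted least squares with weights w_i = 1/y_i^2
    (all y_i must be nonzero)."""
    w = [1.0 / yi**2 for yi in y]
    S   = sum(w)
    Sx  = sum(wi * xi for wi, xi in zip(w, x))
    Sy  = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = S * Sxx - Sx * Sx
    a = (Sxx * Sy - Sx * Sxy) / det
    b = (S * Sxy - Sx * Sy) / det
    return a, b
```

Relative weighting matters most when the measured values span orders of magnitude, so that large values do not dominate the fit.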