
Showing papers on "Non-linear least squares published in 1970"


Journal ArticleDOI
TL;DR: In this paper, problems of petrologic mixing are solved with a two-stage computer-based calculation: linear programming first yields an approximate solution and identifies the non-negative solution values, and a conventional least squares calculation using the analyses so selected then yields an optimum set of solution values.
Abstract: Problems of petrologic mixing have been solved using a two-stage computer-based calculation. First, linear programming is used to obtain an approximate solution and to identify non-negative solution values. Then a conventional least squares calculation is performed using the analyses represented by non-negative solution values as input to yield an optimum set of solution values. The error attached to each solution value is estimated by an empirical procedure. Petrologic application of the program has been demonstrated with three types of calculations: chemical mode, magma mixing, and liquid line of descent.

487 citations
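A minimal numeric sketch of the two-stage calculation described in the entry above. The end-member compositions are invented, and scipy.optimize.nnls is used as a stand-in for the linear-programming stage (it likewise returns a non-negative approximate solution); only the retained phases are then refit by ordinary least squares.

```python
# Stage 1: non-negative approximate solution (NNLS standing in for linear programming).
# Stage 2: conventional least squares restricted to the phases retained in stage 1.
import numpy as np
from scipy.optimize import nnls

# Columns = candidate end-member analyses (oxide wt%), rows = oxides. Invented values.
phases = np.array([
    [50.0, 38.5, 0.1],   # SiO2
    [15.0,  0.3, 0.2],   # Al2O3
    [10.0, 42.0, 0.1],   # MgO
    [11.0, 19.0, 55.0],  # FeO
])
rock = np.array([40.2, 9.5, 16.1, 21.4])  # bulk composition to be "mixed"

# Stage 1: identify components with non-zero (non-negative) weight.
x0, _ = nnls(phases, rock)
keep = x0 > 1e-8

# Stage 2: conventional least squares on the retained analyses only.
coef, *_ = np.linalg.lstsq(phases[:, keep], rock, rcond=None)
weights = np.zeros_like(x0)
weights[keep] = coef

print("mixing proportions:", weights)
print("residuals per oxide:", rock - phases @ weights)
```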


Journal ArticleDOI
TL;DR: In this article, it is shown how to obtain convergence to a $D$-optimum measure by successively adding points to a given initial experimental design; the added points are the points of maximum variance of the usual least squares estimate of the response mean for the particular regression model at each stage.
Abstract: It is possible to obtain convergence to a $D$-optimum measure, as defined by Kiefer and Wolfowitz, by successively adding points to a given initial experimental design. The points added correspond to points of maximum variance of the usual least squares estimate of the response mean for the particular regression model at each stage. A new bound is given for the generalized variances involved and an example is worked out.

454 citations
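A minimal numeric sketch of the sequential scheme described above: at each stage the candidate point with maximum variance of the least squares estimate of the response mean is appended to the design. The quadratic one-factor model and the candidate grid are assumptions made purely for illustration.

```python
# Sequentially add the candidate x maximizing d(x) = f(x)^T M^{-1} f(x),
# the variance (up to sigma^2) of the least squares estimate of the mean response.
import numpy as np

def f(x):
    return np.array([1.0, x, x**2])        # assumed regression functions

candidates = np.linspace(-1.0, 1.0, 201)    # assumed design region
design = [-1.0, 0.0, 1.0]                   # initial design

for _ in range(20):
    F = np.array([f(x) for x in design])
    M = F.T @ F                             # (unnormalized) information matrix
    Minv = np.linalg.inv(M)
    variances = [f(x) @ Minv @ f(x) for x in candidates]
    design.append(candidates[int(np.argmax(variances))])

# For a quadratic model on [-1, 1] the added points pile up on the support
# of the D-optimum measure (the two endpoints and the centre).
print(sorted(design))
```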



Journal ArticleDOI
TL;DR: In this article, the authors give sufficient conditions for the least squares estimates to be consistent in the case of nonlinear regression, i.e., without the assumption of linearity of g with respect to the parameters.
Abstract: This paper gives alternative sufficient conditions for the least squares estimates to be consistent in the case of nonlinear regression, i.e., without the assumption of linearity of g with respect to the parameters.

185 citations


Journal ArticleDOI
TL;DR: In this paper, a linear least square test based on fifth degree polynomials has been run on more than twenty different computer programs in order to assess their numerical accuracy, and it was found that those programs using orthogonal Householder transformations, classical Gram-Schmidt orthonormalization or modified GramSchmidt Orthogonalization were generally much more accurate than those using elimination algorithms.
Abstract: Linear least squares test problems based on fifth degree polynomials have been run on more than twenty different computer programs in order to assess their numerical accuracy. The programs tested, all in present-day use, included representatives from several statistical packages as well as some from the SHARE library. Essentially five different algorithms were used in the various programs to obtain the coefficients of the least squares fits. The tests were run on several different computers, in double precision as well as single precision. By comparing the coefficients reported, it was found that those programs using orthogonal Householder transformations, classical Gram-Schmidt orthonormalization or modified Gram-Schmidt orthogonalization were generally much more accurate than those using elimination algorithms. Programs using orthogonal polynomials (suitable only for polynomial fits) also proved to be superior to those using elimination algorithms. The most successful programs accumulated inner...

107 citations
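A small sketch of why the orthogonalization-based programs fare better: solving the normal equations squares the condition number of the design matrix, while a Householder QR factorization (which is what numpy.linalg.qr uses) works on the design matrix directly. The fifth-degree polynomial problem below is illustrative only, loosely modeled on the kind of test described above rather than reproducing it.

```python
import numpy as np

x = np.arange(21.0)                              # abscissae 0..20
A = np.vander(x, 6, increasing=True)             # columns 1, x, ..., x^5
beta_true = np.ones(6)
y = A @ beta_true

print("cond(A)     =", np.linalg.cond(A))
print("cond(A^T A) =", np.linalg.cond(A.T @ A))  # roughly cond(A)**2

# Elimination on the normal equations (the less accurate route):
beta_ne = np.linalg.solve(A.T @ A, A.T @ y)

# Householder QR applied to A itself (the more accurate route):
Q, R = np.linalg.qr(A)
beta_qr = np.linalg.solve(R, Q.T @ y)

print("normal-equations error:", np.linalg.norm(beta_ne - beta_true))
print("QR error:              ", np.linalg.norm(beta_qr - beta_true))
```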



Journal ArticleDOI
TL;DR: In this paper, the correct procedure is presented for estimating errors in quantities extracted from non-linear least squares analysis of angular correlation data.

99 citations
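No abstract is shown for this entry, so the sketch below is a generic illustration of error estimation after a non-linear least squares fit, not necessarily the exact procedure advocated in the paper: parameter standard errors are taken from cov = s^2 (J^T J)^{-1} evaluated at the solution. The exponential-decay model and synthetic data are invented.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 40)
y = 3.0 * np.exp(-1.2 * t) + 0.05 * rng.standard_normal(t.size)

def residuals(p):
    amplitude, rate = p
    return amplitude * np.exp(-rate * t) - y

fit = least_squares(residuals, x0=[1.0, 1.0])

dof = t.size - fit.x.size
s2 = 2.0 * fit.cost / dof                 # fit.cost = 0.5 * sum of squared residuals
cov = s2 * np.linalg.inv(fit.jac.T @ fit.jac)
errors = np.sqrt(np.diag(cov))

for name, value, err in zip(("amplitude", "rate"), fit.x, errors):
    print(f"{name} = {value:.4f} +/- {err:.4f}")
```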


Journal ArticleDOI
Robert F. Curl1
TL;DR: In this paper, a method for finding physically reasonable parameters and confidence limits for parameters is described, based on parameter scaling and diagonalization of the matrix of the normal equations; it addresses cases in which the relationships provided by the observations are not really linearly independent once the random errors in the observations are considered.

87 citations
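A minimal numeric sketch of the idea named in the TL;DR above: scale the matrix of the normal equations and diagonalize it, so that near-zero eigenvalues flag combinations of parameters that the data barely determine. The two-exponential model with nearly equal rates is invented purely to produce such an ill-conditioned fit.

```python
import numpy as np

t = np.linspace(0.0, 4.0, 50)

def jacobian(p):
    a1, k1, a2, k2 = p
    return np.column_stack([
        np.exp(-k1 * t), -a1 * t * np.exp(-k1 * t),
        np.exp(-k2 * t), -a2 * t * np.exp(-k2 * t),
    ])

J = jacobian([1.0, 1.00, 1.0, 1.05])     # two nearly identical decay rates
N = J.T @ J                              # matrix of the normal equations

# Scale so each parameter contributes a unit diagonal, then diagonalize.
d = 1.0 / np.sqrt(np.diag(N))
Ns = np.diag(d) @ N @ np.diag(d)
eigval, eigvec = np.linalg.eigh(Ns)      # ascending eigenvalues

print("eigenvalues of the scaled normal matrix:", eigval)
print("parameter combination tied to the smallest eigenvalue:\n", eigvec[:, 0])
```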


Journal ArticleDOI
G. Peckham1
TL;DR: A new method for minimising a sum of squares of non-linear functions is described and is shown to be more efficient than other methods in that fewer function values are required.
Abstract: A new method for minimising a sum of squares of non-linear functions is described and is shown to be more efficient than other methods in that fewer function values are required.

83 citations


Book ChapterDOI
01 May 1970
TL;DR: In this article, the application of numerically stable matrix decompositions to minimization problems involving linear constraints is discussed and shown to be feasible without undue loss of efficiency; the singular value decomposition is applied to the nonlinear least squares problem, and related eigenvalue problems are discussed.
Abstract: The application of numerically stable matrix decompositions to minimization problems involving linear constraints is discussed and shown to be feasible without undue loss of efficiency. Part A describes computation and updating of the product-form of the LU decomposition of a matrix and shows it can be applied to solving linear systems at least as efficiently as standard techniques using the product-form of the inverse. Part B discusses orthogonalization via Householder transformations, with applications to least squares and quadratic programming algorithms based on the principal pivoting method of Cottle and Dantzig. Part C applies the singular value decomposition to the nonlinear least squares problem and discusses related eigenvalue problems.

73 citations
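A sketch of the basic linear building block behind Part C above: solving a (possibly rank-deficient) linear least squares subproblem via the singular value decomposition, with tiny singular values truncated. The test matrix and right-hand side are invented; a duplicated column is used to force rank deficiency.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 5))
A[:, 4] = A[:, 0]                                     # duplicate column -> rank-deficient
b = rng.standard_normal(30)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = max(A.shape) * np.finfo(float).eps * s[0]
s_inv = np.where(s > tol, 1.0 / s, 0.0)               # truncate tiny singular values

x = Vt.T @ (s_inv * (U.T @ b))                        # minimum-norm least squares solution
print("singular values truncated:", int(np.sum(s <= tol)))
print("solution:", x)
```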



Journal ArticleDOI
TL;DR: In this article, a least squares method for analyzing spectrophotometric data for a series first-order process is given for the case where the unknown parameters are k1, k2 and the extinction coefficient of B. Care is needed, since two solutions fit the experimental data equally well.
Abstract: A least squares method of analyzing spectrophotometric data for a series first-order process, [graphic omitted], is given for the case where the unknown parameters are k1, k2 and the extinction coefficient of B. Care is needed, since two solutions fit the experimental data equally well. These solutions correspond to a faster formation of a more weakly absorbing intermediate B, and a slower formation of a more strongly absorbing species. Methods of resolving the ambiguity are discussed, and the problem is illustrated with data obtained by the stopped-flow method for the formation and decay of peroxynitrous acid in acidic solutions.
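A numeric check of the two-solution ambiguity described above, under the assumption that only the intermediate B absorbs in the series first-order scheme A -> B -> C: exchanging k1 and k2 while rescaling B's extinction coefficient by k1/k2 reproduces exactly the same absorbance curve, so both parameter sets fit any data equally well.

```python
import numpy as np

def absorbance(t, k1, k2, eps_b, a0=1.0):
    # Beer-Lambert absorbance of the intermediate B in A -> B -> C (unit path length)
    return eps_b * a0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))

t = np.linspace(0.0, 10.0, 200)
k1, k2, eps_b = 0.8, 0.2, 1.0

fast_weak = absorbance(t, k1, k2, eps_b)                 # faster formation, weaker absorber
slow_strong = absorbance(t, k2, k1, eps_b * k1 / k2)     # slower formation, stronger absorber

print("maximum difference between the two curves:",
      np.max(np.abs(fast_weak - slow_strong)))           # ~ machine precision
```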

Journal ArticleDOI
TL;DR: In this article, the authors reviewed the technique of fitting experimental results to many-parameter formulas by least squares, with reference to the occurrence of linear dependences in the normal equations.


Journal ArticleDOI
TL;DR: In this article, the estimation of the parameters with estimates of their standard error, via regression analysis, is outlined for non-linear and least squares optimal estimates can be obtained by a step-by-step approximation.
Abstract: Under certain assumptions the stage-discharge relationship of a channel cross-section can be approximated by a logarithmic relationship. Observational pairs of stage and discharge plotted on log-log paper often cluster around a straight line and this suggests that the assumptions involved are often approximately satisfied. In such cases the parameters of the logarithmic relationship are usually estimated graphically from the position and slope of the straight line on the log-log paper. In this paper principles and methods are outlined for the estimation of the parameters with estimates of their standard error, via regression analysis. Because the water level of zero flows is usually one of the unknown parameters, the regression is non-linear and least squares optimal estimates can be obtained by a step-by-step approximation. The variances of the parameter estimates can be obtained from the dispersion matrix of the joint distribution of the least squares estimators via the likelihood function. An ...
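A sketch of the kind of fit described above: a power-law (logarithmic) rating curve Q = C (h - h0)^b in which the unknown stage of zero flow h0 makes the regression non-linear. The notation and synthetic data are assumptions, and scipy's curve_fit stands in for the step-by-step approximation the paper describes; parameter standard errors come from the returned covariance matrix.

```python
import numpy as np
from scipy.optimize import curve_fit

def rating_curve(h, C, h0, b):
    return C * (h - h0) ** b

rng = np.random.default_rng(2)
h = np.linspace(1.0, 5.0, 25)                       # observed stages
Q = rating_curve(h, 4.0, 0.5, 1.6) * (1 + 0.03 * rng.standard_normal(h.size))

popt, pcov = curve_fit(rating_curve, h, Q, p0=[1.0, 0.0, 1.5],
                       bounds=([0.0, -1.0, 0.5], [100.0, 0.9, 3.0]))
perr = np.sqrt(np.diag(pcov))                        # standard errors of the estimates

for name, value, err in zip(("C", "h0", "b"), popt, perr):
    print(f"{name} = {value:.3f} +/- {err:.3f}")
```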

Journal ArticleDOI
TL;DR: In this paper, the L1 norm is employed in two new estimating techniques, direct least absolute (DLA) and two-stage least absolute (TSLA), which are compared to direct least squares (DLS) and two-stage least squares (TSLS) in four experiments covering the normal distribution case, a multicollinearity problem, a heteroskedastic variance problem, and a misspecified model.
Abstract: In this paper a distribution sampling study consisting of four major experiments is described. The L1 norm is employed in two new estimating techniques, direct least absolute (DLA) and two-stage least absolute (TSLA), and these two are compared to direct least squares (DLS) and two-stage least squares (TSLS). Four experiments testing the normal distribution case, a multicollinearity problem, a heteroskedastic variance problem, and a misspecified model were conducted. Two small sample sizes were used in each experiment, one with N = 20 and one with N = 10. In addition, conditional predictions were made using the reduced form of the four estimators plus two direct methods, least squares no restrictions (LSNR) and another new method known as least absolute no restrictions (LANR). The general conclusion was that the L1 norm estimators should prove equal to or superior to the L2 norm estimators for models using a structure similar to the overidentified one specified for this study, with randomly distributed error terms and very small sample sizes. BEGINNING WITH the method developed by Haavelmo [11] for solving the problem of single equation bias, econometricians have devoted considerable effort to developing additional methods for estimating the structural parameters of simultaneous equation models [2,12,20,24]. While it has been fairly easy to develop the asymptotic properties of these estimators, a distinguishing characteristic of econometric models is that they are invariably based upon small samples of data and thus, the asymptotic properties of the various estimators are not
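A minimal sketch of the single-equation building block of the L1 techniques studied above: a least absolute deviations fit posed as a linear program, compared with ordinary least squares. The two-stage and simultaneous-equations versions are not shown, and the data (with one gross outlier) are synthetic.

```python
# Least absolute deviations: minimize sum(u) subject to -u <= y - X*beta <= u.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, k = 20, 2
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.standard_normal(n)
y[0] += 15.0                                   # one gross outlier

# Decision variables: [beta (k, free), u (n, >= 0)]; objective = sum(u).
c = np.concatenate([np.zeros(k), np.ones(n)])
A_ub = np.block([[ X, -np.eye(n)],             #  X beta - u <= y
                 [-X, -np.eye(n)]])            # -X beta - u <= -y
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * k + [(0, None)] * n

lad = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
beta_l1 = lad.x[:k]
beta_l2, *_ = np.linalg.lstsq(X, y, rcond=None)

print("L1 (least absolute) estimate:", beta_l1)   # less pulled by the outlier
print("L2 (least squares)  estimate:", beta_l2)
```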


Journal ArticleDOI
TL;DR: In this paper, the authors apply the two-stage least squares principle to a nonlinear least squares estimation method based on Marquardt's maximum neighborhood method, and use it to estimate the CES production functions of the Canadian manufacturing industries.
Abstract: THE present study applies the two-stage least squares principle to a nonlinear least squares estimation method; the nonlinear least squares method is based on Marquardt's maximum neighborhood method. The method is applied to the CES production functions of the Canadian manufacturing industries. Section II explains the application of the nonlinear least squares method to the CES production functions; in section III the estimated results are presented; section IV gives some qualifications to the results obtained in the present study.


Journal ArticleDOI
TL;DR: Some of the intimate connections between discrete least squares processes and quadratures are explored and an algorithm to construct Gauss-type integration formulas is presented.
Abstract: The purpose of this paper is two-fold. Firstly, we explore some of the intimate connections between discrete least squares processes and quadratures. Secondly, we present an algorithm to construct Gauss-type integration formulas, and consider briefly the method proposed by Gautschi [2].


Journal ArticleDOI
TL;DR: In this paper, the authors identify the time series model from the autocorrelation and partial correlation of the data, and estimate the …, ramp, and random walk parameters using maximum likelihood and nonlinear least squares.
Abstract: The full text of the paper describes in detail: 1) the identification of the time series model from autocorrelation and partial correlation of the data; 2) the estimation of the …, ramp and random walk parameters using maximum likelihood and nonlinear least squares; and 3) the means by which one would deduce model adequacy through autocorrelation of the white noise residuals and confidence limit theory. As an example of the theory, consider Fig. 1, which shows a sample of normalized long term gyro drift rate. Since the process is nonstationary, the data is differenced as is shown in Fig. 2. The analysis indicated that the math model for this gyro drift rate sample is


Journal ArticleDOI
TL;DR: Two representations for a least squares inverse of a partitioned matrix are obtained, and Penrose's result is recalled: if the system Ax = b is consistent, the general solution is given by x = A⁻b + (I − A⁻A)h, h ∈ E^n, where A⁻ is any l-inverse for A.
Abstract: For an m × n complex matrix A, a least squares inverse for A is used to characterize the set of all least squares solutions of an inconsistent system of equations Ax = b, and two representations for a least squares inverse of a partitioned matrix are obtained. Let A be an m × n complex matrix, b an m × 1 complex vector, and S(A, b) the set of all least squares solutions for the system Ax = b. By a least squares inverse (l-inverse) for A is meant any n × m matrix A^l such that for each b in E^m, A^l b is in S(A, b). It is generally known that a matrix X is an l-inverse for A if and only if X satisfies AXA = A and (AX)* = AX, Penrose's first and third equations. (See [3, 5].) The purpose here is to characterize the set S(A, b) in terms of l-inverses and to present two representations of an l-inverse of a partitioned matrix. Although in general a matrix has many l-inverses, there are cases when the l-inverse is unique (and hence is A⁺, the generalized inverse). THEOREM 1. If A is m × n of rank r, then A^l is unique if and only if r = n < m. … which has a unique solution if and only if r = n, and with r = n we cannot have m
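A quick numeric check of the characterization stated in the abstract above, using the Moore-Penrose inverse computed by numpy.linalg.pinv (which is in particular an l-inverse): it satisfies Penrose's first and third equations, and applying it to b yields a least squares solution of Ax = b. The random test matrix is an invention for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)

X = np.linalg.pinv(A)                                 # an l-inverse of A

print(np.allclose(A @ X @ A, A))                      # Penrose condition 1: AXA = A
print(np.allclose((A @ X).conj().T, A @ X))           # Penrose condition 3: (AX)* = AX
print(np.allclose(X @ b,                              # X b is a least squares solution
                  np.linalg.lstsq(A, b, rcond=None)[0]))
```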

Journal ArticleDOI
TL;DR: In this paper, a general strategy for attacking problems in nonlinear least squares is developed, which consists of transforming the functional expression so as to maximize the number of linear parameters and then solving the problem in a two-stage process.
Abstract: A general strategy for attacking problems in nonlinear least squares is developed. Parameters are classified as linear or nonlinear, depending on whether they appear linearly or nonlinearly in the functional expression being fitted to a set of data. Basically the strategy consists of transforming the functional expression so as to maximize the number of linear parameters and then solving the problem in a two-stage process. For given values of the nonlinear parameters the linear parameters are first defined as functions of the nonlinear parameters by the solution of a linear regression. The nonlinear parameters are then found by minimizing the usual quadratic form with the use of standard search techniques.
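A minimal sketch of the two-stage strategy described above, for the invented model y ≈ a e^(−λt) + b with linear parameters (a, b) and one nonlinear parameter λ: for each trial λ the linear parameters are defined by a linear regression, and an outer search then minimizes the resulting sum of squares over λ alone.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
t = np.linspace(0.0, 6.0, 60)
y = 2.5 * np.exp(-0.7 * t) + 0.3 + 0.02 * rng.standard_normal(t.size)

def linear_stage(lam):
    """Solve for the linear parameters at fixed lam; return them and the RSS."""
    design = np.column_stack([np.exp(-lam * t), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    rss = np.sum((y - design @ coef) ** 2)
    return coef, rss

# Outer (nonlinear) stage: one-dimensional search over lam only.
result = minimize_scalar(lambda lam: linear_stage(lam)[1],
                         bounds=(0.01, 5.0), method="bounded")
(a_hat, b_hat), _ = linear_stage(result.x)
print(f"lam = {result.x:.3f}, a = {a_hat:.3f}, b = {b_hat:.3f}")
```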


Journal ArticleDOI
TL;DR: In this paper, a numerical method is described that analyses spectra with nonlinear superposition, and the effect of noise on the accuracy of the determination of species concentrations is discussed.
Abstract: A numerical method is described that analyses spectra with nonlinear superposition. Examples of the superposition of up to five species with up to five pair-interference spectra are presented. The effect of noise on the accuracy of the determination of species concentrations is discussed. The numerical method is based on optimization (nonlinear least squares fit). A FORTRAN listing is appended.

Journal ArticleDOI
TL;DR: In this article, an iterative, weighted nonlinear least squares method of parameter estimation is formulated for the reduction of physiological data; it obviates the need for visual fitting, which is subjective.
Abstract: In the estimation of physiological parameters, visual fitting of experimental data has the obvious drawback that a given “best-fit curve” is not equally satisfying to every observer. In this article an iterative, weighted nonlinear least squares method of parameter estimation is formulated. It provides a systematic procedure for the reduction of physiological data and obviates the need for visual fitting, which is subjective. The goodness of fit is evaluated in terms of a weighted least squares error criterion. This method is applied to estimate the parameters of a portion of the human respiratory control system. In particular, the subsystem examined is that relating tidal volume to alveolar CO2 fraction. It is modeled by a transfer function that involves five parameters: two gain constants, two time constants, and a pure time delay; and is of the same form as that determined in an earlier study involving a visual fit. The estimation is based on sinusoidal steady-state magnitude and phase data for two human subjects. The details of the numerical procedure are discussed and possible extensions are indicated. The present method improves the goodness of fit by a factor of 3 to 4. Values of the parameters changed slightly, but not insignificantly compared with those previously
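A generic sketch of a weighted nonlinear least squares fit of the kind outlined above, in which each residual is divided by the standard deviation of the corresponding measurement. The simple gain / time-constant / pure-delay frequency-response model and data below are inventions, not the paper's five-parameter respiratory model.

```python
import numpy as np
from scipy.optimize import least_squares

w = np.logspace(-1, 1, 15)                      # angular frequencies (rad/s)
K_true, tau_true, delay_true = 2.0, 1.5, 0.4

def model(p, w):
    K, tau, delay = p
    mag = K / np.sqrt(1.0 + (w * tau) ** 2)     # steady-state magnitude
    phase = -w * delay - np.arctan(w * tau)     # steady-state phase (rad)
    return mag, phase

rng = np.random.default_rng(5)
sig_mag, sig_phase = 0.05, 0.1                  # assumed measurement uncertainties
mag_true, phase_true = model([K_true, tau_true, delay_true], w)
mag_obs = mag_true + sig_mag * rng.standard_normal(w.size)
phase_obs = phase_true + sig_phase * rng.standard_normal(w.size)

def weighted_residuals(p):
    mag, phase = model(p, w)
    return np.concatenate([(mag - mag_obs) / sig_mag,
                           (phase - phase_obs) / sig_phase])

fit = least_squares(weighted_residuals, x0=[1.0, 1.0, 0.1])
print("K, tau, delay =", fit.x)
```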


Journal ArticleDOI
TL;DR: The application of a refined least squares method is presented; the refinement makes it possible to solve problems with not only boundary conditions but also initial and non-continuous conditions.
Abstract: This paper presents the application of the refined least squares method. The refinement makes it possible to solve problems with not only boundary, but also initial and non-continuous conditions. Mathematica is used to develop algorithms and carry out computations. It enables us to extend fields of approximate analytical method applications and allow them to be regarded as computer ones. Mathematica makes it possible to solve unstable and ill-conditioned tasks which are too difficult for numerical methods.