
Showing papers on "Non-linear least squares published in 1985"


Journal ArticleDOI
TL;DR: In this paper, the method of weighted least squares (WLS) was shown to be an appropriate way of fitting variogram models; the weighting scheme automatically gives most weight to early lags and down-weights those lags with a small number of pairs.
Abstract: The method of weighted least squares is shown to be an appropriate way of fitting variogram models. The weighting scheme automatically gives most weight to early lags and down-weights those lags with a small number of pairs. Although weights are derived assuming the data are Gaussian (normal), they are shown to be still appropriate in the setting where data are a (smooth) transform of the Gaussian case. The method of (iterated) generalized least squares, which takes into account correlation between variogram estimators at different lags, offers more statistical efficiency at the price of more complexity. Weighted least squares for the robust estimator, based on square root differences, is less of a compromise.

988 citations
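The weighting scheme just described can be sketched in a few lines. This is a minimal illustration, not the paper's code: the empirical variogram values and pair counts are invented, the model is a simple linear variogram gamma(h) = c0 + b*h, and the model value in the Cressie-style weight N(h)/gamma(h)^2 is approximated by the empirical estimate so the fit stays a one-shot weighted linear solve:

```python
import numpy as np

# Hypothetical empirical variogram: lags h, estimates gamma_hat, pair counts n_pairs
h = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
gamma_hat = np.array([0.9, 1.6, 2.2, 2.7, 3.0, 3.1])
n_pairs = np.array([500, 420, 330, 240, 150, 60])

# Weights ~ N(h)/gamma(h)^2: most weight to early, well-populated lags
w = n_pairs / gamma_hat**2

# Weighted linear least squares for gamma(h) = c0 + b*h
X = np.column_stack([np.ones_like(h), h])
A = X.T @ (w[:, None] * X)
rhs = X.T @ (w * gamma_hat)
c0, b = np.linalg.solve(A, rhs)
```

Iterating the fit, with the weight recomputed from the current model value gamma(h; theta), gives the scheme its usual iterated form.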


01 Jan 1985
TL;DR: In this article, adaptive least squares correlation is applied to image matching; it allows for simultaneous radiometric corrections and local geometrical image shaping, whereby the system parameters are automatically assessed, corrected, and thus optimized during the least squares iterations.
Abstract: The Adaptive Least Squares Correlation is a very potent and flexible technique for all kinds of data matching problems. Here its application to image matching is outlined. It allows for simultaneous radiometric corrections and local geometrical image shaping, whereby the system parameters are automatically assessed, corrected, and thus optimized during the least squares iterations. The various tools of least squares estimation can be favourably utilized for the assessment of the correlation quality. Furthermore, the system allows for stabilization and improvement of the correlation procedure through the simultaneous consideration of geometrical constraints, e.g. the collinearity condition. Some exciting new perspectives are emphasized, as for example multiphoto correlation, multitemporal and multisensor correlation, multipoint correlation, and simultaneous correlation/triangulation.

667 citations


Book ChapterDOI
TL;DR: This chapter highlights one of the methods for the analysis of experimental data, along with the assumptions, advantages, and disadvantages of the method.
Abstract: This chapter highlights one of the methods for the analysis of experimental data, along with the assumptions, advantages, and disadvantages of the method. The most important experimental detail to understand for any parameter estimation procedure is the sources and magnitudes of the random and nonrandom experimental errors superimposed on the data. Moreover, computers are not oracles. The investigator needs to be continually aware that the output of any computer program is no better than what goes into it. It is important to realize that a computer is in essence the same as any other instrument in a laboratory. To obtain optimal results it must be used correctly. To use it correctly, the user must understand the assumptions which went into the development of the computer programs and their limitations.

436 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compared the performance of ordinary, weighted, and generalized least squares estimators of the parameters of such regional hydrologic relationships in situations where the available streamflow records at gaged sites can be of different and widely varying lengths and concurrent flows at different sites are cross-correlated.
Abstract: Streamflow gaging networks provide hydrologic information which is often used to derive relationships between physiographic variables and streamflow statistics. This paper compares the performance of ordinary, weighted, and generalized least squares estimators of the parameters of such regional hydrologic relationships in situations where the available streamflow records at gaged sites can be of different and widely varying lengths and concurrent flows at different sites are cross-correlated. A Monte Carlo study illustrates the performance of an ordinary least squares (OLS) procedure and an operational generalized least squares (GLS) procedure which accounts for and directly estimates the precision of the predictive model being fit. The GLS procedure provided (1) more accurate parameter estimates, (2) better estimates of the accuracy with which the regression model's parameters were being estimated, and (3) almost unbiased estimates of the model error. The OLS approach can provide very distorted estimates of the model's predictive precision (model error) and the precision with which the regression model's parameters are being estimated. A weighted least squares procedure which neglects the cross correlations among concurrent flows does as well as the GLS procedure when the cross correlation among concurrent flows is relatively modest. The Monte Carlo examples also explore the value of streamflow records of different lengths in regionalization studies.

369 citations
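The GLS estimator compared in this study has a compact closed form. A sketch with invented numbers (the record-length-dependent variances and the common cross-correlation term are assumptions for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical regional model: log flow statistic vs. log drainage area at 8 gaged sites
X = np.column_stack([np.ones(8), rng.uniform(1.0, 4.0, 8)])
beta_true = np.array([0.5, 0.8])

# Error covariance: diagonal terms reflect differing record lengths,
# off-diagonal terms reflect cross-correlated concurrent flows (values assumed)
Lam = np.diag(0.05 + 0.10 * rng.random(8)) + 0.02 * np.ones((8, 8))
y = X @ beta_true + rng.multivariate_normal(np.zeros(8), Lam)

# OLS ignores Lam; GLS weights by its inverse
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
Li = np.linalg.inv(Lam)
beta_gls = np.linalg.solve(X.T @ Li @ X, X.T @ Li @ y)
```

Repeating this draw many times is exactly the kind of Monte Carlo comparison the paper reports.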


Journal ArticleDOI
TL;DR: In this paper, a smoothing method was developed to estimate both the treatment effects and the unknown trend in a field plot experiment, assuming a smooth trend plus independent error model for the environmental effects in the yields of a fieldplot experiment.
Abstract: Assuming a smooth trend plus independent error model for the environmental effects in the yields of a field plot experiment, least squares smoothing methods are developed to estimate both the treatment effects and the unknown trend. Treatment estimates are closely related to those resulting from a generalized least squares analysis in which the covariance structure for the environmental effects has a particular form. However, the main emphases are on the accuracy of treatment estimates under a fixed smooth trend plus error model and the exploratory power of the basic method to isolate trend effects of unknown form. Although the detailed development is for the one-dimensional case, generalizations of the smoothness concept and extensions to two dimensions are also discussed. Application of the basic method is illustrated on three data sets and the results compared with other analyses.

212 citations


Journal ArticleDOI
TL;DR: In this article, the partial least squares (PLS) prediction method is compared to the predictor based on principal component regression (PCR), both theoretical considerations and computations on artificial and real data are presented.
Abstract: In this paper we discuss the partial least squares (PLS) prediction method. The method is compared to the predictor based on principal component regression (PCR). Both theoretical considerations and computations on artificial and real data are presented.

164 citations
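For readers wanting to reproduce such a comparison, here is a minimal sketch of both predictors (a basic NIPALS loop for PLS1, an SVD projection for PCR); with all components retained, both reduce to ordinary least squares on centered data. The synthetic data are assumed for illustration:

```python
import numpy as np

def pcr(X, y, k):
    # Principal component regression: regress y on the top-k principal component scores
    Xc, yc = X - X.mean(0), y - y.mean()
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    T = Xc @ Vt[:k].T
    return Vt[:k].T @ np.linalg.lstsq(T, yc, rcond=None)[0]

def pls1(X, y, k):
    # Partial least squares (single response) via the basic NIPALS deflation loop
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(k):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)
        t = Xc @ w
        p = Xc.T @ t / (t @ t)
        W.append(w); P.append(p); q.append(yc @ t / (t @ t))
        Xc = Xc - np.outer(t, p)
        yc = yc - q[-1] * t
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

# Synthetic check: with k equal to the number of predictors, both match OLS
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=40)
b_ols = np.linalg.lstsq(X - X.mean(0), y - y.mean(), rcond=None)[0]
b_pcr, b_pls = pcr(X, y, 3), pls1(X, y, 3)
```

The methods differ only for k smaller than the number of predictors, which is where the theoretical and empirical comparisons in the paper apply.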


Journal ArticleDOI
TL;DR: In this article, a new combination of methods for solving nonlinear boundary value problems containing a parameter is discussed, combining methods of the continuation type with least squares formulations, preconditioned conjugate gradient algorithms and finite element approximations.
Abstract: We discuss in this paper a new combination of methods for solving nonlinear boundary value problems containing a parameter. Methods of the continuation type are combined with least squares formulations, preconditioned conjugate gradient algorithms and finite element approximations. We can compute branches of solutions with limit points, bifurcation points, etc. Several numerical tests illustrate the possibilities of the methods discussed in the present paper; these include the Bratu problem in one and two dimensions, one-dimensional bifurcation and perturbed bifurcation problems, the driven cavity problem for the Navier–Stokes equations.

121 citations


Journal ArticleDOI
TL;DR: In this paper, a weighted least squares method for the numerical solution of elliptic boundary value problems is presented, which uses a finite-dimensional space S of approximating functions, similar to the spaces used in finite-element methods.
Abstract: A weighted least squares method is given for the numerical solution of elliptic partial differential equations of Agmon-Douglis-Nirenberg type and an error analysis is provided. Some examples are given. 1. Introduction. The use of least squares methods for the approximate solution of equations dates back at least to Gauss. The modern theory of least squares methods in the numerical solution of elliptic boundary value problems starts, in 1970, with the papers of Bramble and Schatz (5), (6). This work uses a finite-dimensional space S of approximating functions, similar to the spaces used in finite-element methods. The approximate solution is defined to be the minimizer of a least squares functional that is a weighted sum of the least squares residual in the differential equation and the least squares residual in the boundary condition. The paper (5) has an historical importance for the following reason. It appeared during the time when numerical analysts were shifting attention from finite-difference methods to finite-element methods, and it provided, for the first time, a family of approximation methods for the solution of the Dirichlet problem whose order of accuracy could be made arbitrarily large. The paper (6) provided an extension to an elliptic equation of order 2m, and (3) gave important simplifications in the analysis. The principal advantages of the method are that one need not satisfy exactly the Dirichlet boundary conditions, and that the mathematical analysis dictates, in a natural way, the relative weights that are given to the boundary and interior terms in the least squares functional. Also, the method provides, in a quasioptimal sense, as good a solution as can be expected from the space S. On the other hand, the method requires that S consist of functions which are smooth enough to lie in the domain of the elliptic operator. Also, the method seems to produce matrices with large condition number.
For various reasons, it is of interest to extend the theory of least squares methods to include elliptic systems. First, if a second-order elliptic equation is written as a first-order system, it would seem (and this is borne out by our analysis) that the smoothness requirements for the spaces of approximating functions would be reduced, thus eliminating one of the disadvantages of the method. A second motivation for extending the least squares method to elliptic systems is that elliptic systems occur frequently in applications. An example of an elliptic system is the system of equations for Stokes flow. For this system, the least squares method does not require the space of approximating vector fields to be incompressible. Instead,

116 citations


Journal ArticleDOI
TL;DR: In this paper, a numerically stable implementation of the Gauss-Newton method for computing least squares estimates of parameters and variables in explicit nonlinear models with errors in the variables is proposed.
Abstract: A numerically stable implementation of the Gauss-Newton method for computing least squares estimates of parameters and variables in explicit nonlinear models with errors in the variables is proposed. The algorithm uses only orthogonal transformations and exploits the special structure of the problem. Moreover, a partially regularized Marquardt-like version is described that works with a reasonable overhead of arithmetic operations and storage compared to the error-free case.

89 citations
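A bare-bones version of such a Gauss-Newton iteration, using only an orthogonal (QR) factorization for the linearized subproblem; the exponential model and noise-free data are assumed for illustration, and none of the paper's errors-in-variables structure or Marquardt regularization is included:

```python
import numpy as np

def gauss_newton(r, J, x0, iters=50, tol=1e-12):
    # Solve min ||r(x)||^2: at each step solve J(x) dx ~= -r(x) via QR
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(J(x))          # orthogonal transformation only
        dx = np.linalg.solve(R, -Q.T @ r(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy model y = a*exp(b*t) with exact data generated at a=2, b=-1.5
t = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(-1.5 * t)
r = lambda p: p[0] * np.exp(p[1] * t) - y
J = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
p_hat = gauss_newton(r, J, [1.0, -1.0])
```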


Journal ArticleDOI
TL;DR: In this paper, nonlinear least squares estimation procedures are proposed for estimating the parameters of the generalized lambda distribution, which are compared with other methods by making Monte Carlo experiments and a numerical example is also given to illustrate the proposed method.
Abstract: Nonlinear least squares estimation procedures are proposed for estimating the parameters of the generalized lambda distribution. The procedures are compared with other methods by making Monte Carlo experiments. A numerical example is also given to illustrate the proposed method.

85 citations


Journal ArticleDOI
TL;DR: It is concluded that a simple Gauss-Newton/BFGS hybrid is both efficient and robust and illustrated by a range of comparative tests with other methods.
Abstract: We consider Newton-like line search descent methods for solving non-linear least-squares problems. The basis of our approach is to choose a method, or parameters within a method, by minimizing a variational measure which estimates the error in an inverse Hessian approximation. In one approach we consider sizing methods and choose sizing parameters in an optimal way. In another approach we consider various possibilities for hybrid Gauss-Newton/BFGS methods. We conclude that a simple Gauss-Newton/BFGS hybrid is both efficient and robust and we illustrate this by a range of comparative tests with other methods. These experiments include not only many well known test problems but also some new classes of large residual problem.

Journal ArticleDOI
TL;DR: In this article, the authors compared three statistical methods, namely linear least squares, nonlinear least squares and weighted nonlinear least squares, to calculate the Arrhenius parameters and the accuracy of the derived model, and found that the traditional two-step linear method was the least accurate.
Abstract: To minimize quality losses occurring during processing and storage and to predict shelf-life, quantitative kinetic models, expressing the functional relationship between composition and environmental factors on food quality, are required. The applicability of these models is based on the accuracy of the model and its parameters. In this paper, the calculation of the Arrhenius parameters and the accuracy of the derived model were compared, using three statistical methods, namely: linear least squares, nonlinear least squares and weighted nonlinear least squares. Results indicated that the traditional two-step linear method was the least accurate and the derived energy of activation and the pre-exponential factor had the largest confidence interval. The latter was shown to have a profound effect on the precision of the calculated rate constant and the predicted shelf life. Based on previous reports that indexes of deterioration
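The two-step linear method criticized here rests on the log transform of the Arrhenius law, ln k = ln A - Ea/(R*T). A sketch on invented, noise-free rate data (with noisy data, the transform distorts the error structure, which is why the nonlinear fits fare better):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical noise-free rate constants following k = A*exp(-Ea/(R*T))
T = np.array([300.0, 310.0, 320.0, 330.0, 340.0])
A_true, Ea_true = 1.0e8, 60_000.0
k = A_true * np.exp(-Ea_true / (R * T))

# Two-step linear method: regress ln k on 1/T; slope = -Ea/R, intercept = ln A
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_est = -slope * R
A_est = np.exp(intercept)
```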

Journal ArticleDOI
TL;DR: In this article, a model for repeated measurements designs where the measurements on the same unit are assumed to be correlated with known correlation coefficient is considered and it is shown that Latin squares with an additional balancing property are E-optimal for the weighted least squares estimate.
Abstract: SUMMARY We consider a model for repeated measurements designs where the measurements on the same unit are assumed to be correlated with known correlation coefficient. It is shown that Latin squares with an additional balancing property are E-optimal for the weighted least squares estimate.

Journal ArticleDOI
TL;DR: In this article, the authors explore the conjecture that, when the least square estimate is consistent for a linear combination of the regression parameter, it will be preferred to an errors-in-variables estimate, at least asymptotically.
Abstract: In an errors-in-variables regression model, the least squares estimate is generally inconsistent for the complete regression parameter but can be consistent for certain linear combinations of this parameter. We explore the conjecture that, when the least squares estimate is consistent for a linear combination of the regression parameter, it will be preferred to an errors-in-variables estimate, at least asymptotically. The conjecture is false, in general, but it is true for some important classes of problems. One such problem is a randomized two-group analysis of covariance, upon which we focus.

Journal ArticleDOI
TL;DR: This is the second in a series of tutorial articles discussing the analysis of pharmacokinetic data using parametric models, and primary emphasis is placed on point estimates of the parameters of the structural (pharmacokinetic) model.
Abstract: This is the second in a series of tutorial articles discussing the analysis of pharmacokinetic data using parametric models. In this article the basic issue is how to estimate the parameters of such models. Primary emphasis is placed on point estimates of the parameters of the structural (pharmacokinetic) model. All the estimation methods discussed are least squares (LS) methods: ordinary least squares, weighted least squares, iteratively reweighted least squares, and extended least squares. The choice of LS method depends on the variance model. Some discussion is also provided of computer methods used to find the LS estimates, identifiability, and robust LS-based estimation methods.
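Of the LS variants listed, iteratively reweighted least squares is the easiest to mis-state, so a minimal sketch may help; the constant-coefficient-of-variation variance model (weights 1/fitted^2) and the biexponential data are assumptions for illustration, not the article's example:

```python
import numpy as np

def irls(X, y, iters=10):
    # Iteratively reweighted least squares with a constant-CV variance model:
    # Var(y_i) proportional to fitted_i^2, so the weight is 1/fitted_i^2
    b = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary LS start
    for _ in range(iters):
        f = np.maximum(X @ b, 1e-8)
        sw = 1.0 / f                           # square root of the weight
        b = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return b

# Toy concentration-like data (model and numbers assumed for illustration)
t = np.linspace(0.0, 4.0, 25)
X = np.column_stack([np.exp(-0.5 * t), np.exp(-2.0 * t)])
b_true = np.array([3.0, 1.0])
y = X @ b_true            # noise-free, so every LS variant recovers b_true
b_hat = irls(X, y)
```

Extended least squares differs in that the variance model's own parameters are estimated jointly with the structural parameters rather than fixed in advance.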

Journal ArticleDOI
TL;DR: It is shown that for the channel estimation problem considered here, LS algorithms converge in approximately 2N iterations where N is the order of the filter and the equivalence between an LS algorithm and a fast converging modified SG algorithm which uses a maximum length input data sequence is shown.
Abstract: The convergence properties of adaptive least squares (LS) and stochastic gradient (SG) algorithms are studied in the context of echo cancellation of voiceband data signals. The algorithms considered are the SG transversal, SG lattice, LS transversal (fast Kalman), and LS lattice. It is shown that for the channel estimation problem considered here, LS algorithms converge in approximately 2N iterations where N is the order of the filter. In contrast, both SG algorithms display inferior convergence properties due to their reliance upon statistical averages. Simulations are presented to verify this result, and indicate that the fast Kalman algorithm frequently displays numerical instability which can be circumvented by using the lattice structure. Finally, the equivalence between an LS algorithm and a fast converging modified SG algorithm which uses a maximum length input data sequence is shown.

Journal ArticleDOI
TL;DR: Two algorithms for solving nonlinear least squares problems with general linear inequality constraints are described and comparisons of the relative performance of the two algorithms on small problems and on a larger exponential data-fitting problem are presented.
Abstract: Two algorithms for solving nonlinear least squares problems with general linear inequality constraints are described. At each step, the problem is reduced to an unconstrained linear least squares problem in the subspace defined by the active constraints, which is solved using the Levenberg–Marquardt method. The desirability of leaving an active constraint is evaluated at each step, using a different technique for each of the two algorithms. Each step is constrained to be within a circular region of trust about the current approximate minimizer, whose radius is updated according to the quality of the step after each iteration. Comparisons of the relative performance of the two algorithms on small problems and on a larger exponential data-fitting problem are presented.
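The Levenberg–Marquardt subproblem used at each step has a one-line core: solve (J^T J + lam*I) dx = -J^T r and adapt lam by step quality. A sketch on the standard Rosenbrock test residuals; the paper's active-set and trust-region machinery is omitted:

```python
import numpy as np

def levenberg_marquardt(r, J, x0, lam=1e-2, iters=200):
    # Damped Gauss-Newton: (J^T J + lam*I) dx = -J^T r, with lam shrunk after
    # a successful step and inflated after a failed one
    x = np.asarray(x0, dtype=float)
    cost = 0.5 * r(x) @ r(x)
    for _ in range(iters):
        Jx, rx = J(x), r(x)
        dx = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), -Jx.T @ rx)
        x_new = x + dx
        cost_new = 0.5 * r(x_new) @ r(x_new)
        if cost_new < cost:
            x, cost, lam = x_new, cost_new, lam * 0.5
        else:
            lam *= 10.0
        if cost < 1e-20:
            break
    return x

# Rosenbrock in residual form; minimum at (1, 1)
r = lambda x: np.array([10.0 * (x[1] - x[0]**2), 1.0 - x[0]])
J = lambda x: np.array([[-20.0 * x[0], 10.0], [-1.0, 0.0]])
x_hat = levenberg_marquardt(r, J, [0.0, 0.0])
```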

Journal ArticleDOI
TL;DR: The generalized least squares procedure is applied to sample tree data for which additive biomass tables are required and is proposed as an alternative to the ordinary weighted least squares.
Abstract: The generalized least squares procedure is applied to sample tree data for which additive biomass tables are required. This procedure is proposed as an alternative to the ordinary weighted least squares in order to account for the fact that several biomass components are measured on the same sample trees. The biomass tables generated by the generalized and the ordinary least squares are very similar, the confidence intervals are sometimes wider, sometimes narrower, but the prediction intervals are always narrower for the generalized least squares method.

Journal ArticleDOI
TL;DR: In this paper, a new derivation is given for the generalized singular value decomposition of two matrices X and F having the same number of rows, which reveals the structure of the general Gauss-Markov linear model (y, Xβ, σ²FF′), and exhibits the structure and solution of the generalized linear least squares problem used to provide the best linear unbiased estimator for the model.

Journal ArticleDOI
TL;DR: The superiority of TLLS over LLS for system identification and system parameter estimation with respect to noise rejection in the data is demonstrated by analytical and experimental techniques.

Journal ArticleDOI
TL;DR: MULTIFIT, a program in BASIC for implementation on microcomputers, has been developed for non-linear least squares regression fitting of enzyme kinetic, pharmacokinetic and other data to specific models.


Journal ArticleDOI
TL;DR: In this paper, global analyses are given to continuous analogues of the Levenberg-Marquardt method and the Newton-Raphson-Ben-Israel method for solving an over- and under-determined system g(x) = 0 of nonlinear equations.
Abstract: Global analyses are given to continuous analogues of the Levenberg-Marquardt method dx/dt = −(Jᵀ(x)J(x) + δI)⁻¹Jᵀ(x)g(x), and the Newton-Raphson-Ben-Israel method dx/dt = −J⁺(x)g(x), for solving an over- and under-determined system g(x) = 0 of nonlinear equations. The characteristics of both methods are compared. Errors in some literature which dealt with related continuous analogue methods are pointed out.
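The continuous Levenberg-Marquardt flow can be followed numerically toward a root of g(x) = 0; a sketch using explicit Euler on an invented 2x2 system (circle meets line, roots at ±(1/√2, 1/√2)):

```python
import numpy as np

# Flow dx/dt = -(J^T J + delta*I)^{-1} J^T g(x), integrated by explicit Euler
g = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
Jac = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])

x = np.array([2.0, 0.5])       # starting point, chosen away from singular sets
delta, dt = 1e-3, 0.1
for _ in range(2000):
    Jx = Jac(x)
    x = x - dt * np.linalg.solve(Jx.T @ Jx + delta * np.eye(2), Jx.T @ g(x))
```

Along the exact flow the residual decays as g(x(t)) ≈ g(x(0))e⁻ᵗ, which is what makes the global analysis tractable.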

Journal ArticleDOI
TL;DR: The results of the refinement have been discussed in the light of the characteristics of the ES3TM program and of the reliability of the speciation models on copper(II) hydrolysis reported up to now.
Abstract: A computer program for two mass balance systems (in solution) has been written in FORTRAN IV. This program (ES3TM) refines the formation constants and some titration parameters (E0, analytical concentrations) from potentiometric data, using the Marquardt algorithm for the Gauss nonlinear least squares method. The program has been compared with some other programs reported in the literature. In order to test the ES3TM program and to obtain reliable values of formation constants for the species [Cu_p(OH)_q]^((2p−q)+), we studied the hydrolysis of copper(II) by pH-metric measurements at 37°C and I = 0.15 mol dm⁻³ (NaNO3). The results of the refinement have been discussed in the light of the characteristics of the ES3TM program and of the reliability of the speciation models on copper(II) hydrolysis reported up to now.

Journal ArticleDOI
TL;DR: In this paper, the authors compare two computational methods of estimating kinetic parameters from thermoanalytical experiments, namely isothermal and dynamic thermogravimetry. For the isothermal runs, reaction order and activation energy were estimated using established methods.
Abstract: A study was undertaken to compare two computational methods of estimating kinetic parameters from thermoanalytical experiments. Examples illustrating the relationship between reaction complexity and validity of isothermal vs. non-isothermal kinetic analyses will be presented. Thermal decomposition of several compounds was studied both by isothermal and dynamic thermogravimetry (TG). For the isothermal runs, reaction order and activation energy were estimated using established methods. For the dynamic runs, the statistical method of nonlinear least squares was used to estimate all three kinetic parameters of the nth order decomposition reaction and their individual 95% confidence intervals. Both methods assumed Arrhenius temperature dependence.

Journal ArticleDOI
TL;DR: In this article, the authors extend the classical least squares method for estimating confidence intervals to the rank deficient case, stabilizing the estimate by means of a priori side constraints, and develop a suboptimal method which is in some ways similar to ridge regression but is quite different in that it provides an unambiguous criterion for the choice of the arbitrary parameter.
Abstract: We extend the classical least squares method for estimating confidence intervals to the rank deficient case, stabilizing the estimate by means of a priori side constraints. In order to avoid quadratic programming, we develop a suboptimal method which is in some ways similar to ridge regression but is quite different in that it provides an unambiguous criterion for the choice of the arbitrary parameter. We develop a method for choosing that parameter value and illustrate the procedure by applying it to an example problem.
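For contrast with the approach described here, the simplest ridge-style stabilization of a rank-deficient least squares problem looks as follows; note the paper's point is precisely that its side-constraint method, unlike plain ridge, fixes the free parameter unambiguously, so the alpha below is an arbitrary assumed value:

```python
import numpy as np

rng = np.random.default_rng(1)

# Rank-deficient design: the third column is the sum of the first two
Z = rng.normal(size=(30, 2))
X = np.column_stack([Z, Z.sum(axis=1)])
y = X @ np.array([1.0, 2.0, 0.0]) + 0.01 * rng.normal(size=30)

# Damped normal equations: among the infinitely many LS solutions this
# selects one of controlled norm (alpha is an arbitrary illustration value)
alpha = 1e-6
beta = np.linalg.solve(X.T @ X + alpha * np.eye(3), X.T @ y)
fitted = X @ beta
```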

Journal ArticleDOI
TL;DR: A new approach to nonlinear least-squares regression analysis using extended least squares (ELS) was compared with three conventional methods and was superior to the less appropriate methods with regard to both bias and precision.


Journal ArticleDOI
TL;DR: An error analysis of the G-algorithm is presented which shows that it is as stable as any of the standard orthogonal decomposition methods for solving least squares problems.
Abstract: The G-algorithm was proposed by Bareiss [1] as a method for solving the weighted linear least squares problem. It is a square root free algorithm similar to the fast Givens method except that it triangularizes a rectangular matrix a column at a time instead of one element at a time.

Journal ArticleDOI
TL;DR: In this article, the variance of the generating white noise process is allowed to depend on time, and it is shown that ordinary least squares estimates are strongly consistent and with a proper scaling factor asymptotically normal.
Abstract: We study nonstationary autoregressive processes, where the variance of the generating white noise process is allowed to depend on time. It is shown that ordinary least squares estimates are strongly consistent and with a proper scaling factor asymptotically normal, but, as can be expected, they are not efficient. Furthermore, AIC type order determination criteria, used as if the underlying process is stationary, are consistent, whereas identification of order in terms of the partial autocorrelation function may lead one astray.
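The consistency claim is easy to check by simulation; a sketch with an AR(1) process whose innovation variance drifts over time (the process and the numbers are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n, phi = 5000, 0.6

# AR(1) with time-varying innovation standard deviation (heteroscedastic white noise)
sigma = 1.0 + 0.5 * np.sin(np.arange(n) / 200.0)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + sigma[t] * rng.normal()

# Ordinary least squares AR coefficient: still consistent despite the nonstationarity
phi_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
```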