
Showing papers on "Non-linear least squares published in 1993"


Book
01 Jan 1993
TL;DR: This book develops nonlinear regression models and nonlinear least squares estimation, centered on the Gauss-Newton regression, and extends the framework to maximum likelihood, the generalized method of moments, and regression models for time-series data.
Abstract: 1. The Geometry of Least Squares 2. Nonlinear Regression Models and Nonlinear Least Squares 3. Inference in Nonlinear Regression Models 4. Introduction to Asymptotic Theory and Methods 5. Asymptotic Methods and Nonlinear Least Squares 6. The Gauss-Newton Regression 7. Instrumental Variables 8. The Method of Maximum Likelihood 9. Maximum Likelihood and Generalized Least Squares 10. Serial Correlation 11. Tests Based on the Gauss-Newton Regression 12. Interpreting Tests in Regression Directions 13. The Classical Hypothesis Tests 14. Transforming the Dependent Variable 15. Qualitative and Limited Dependent Variables 16. Heteroskedasticity and Related Topics 17. The Generalized Method of Moments 18. Simultaneous Equations Models 19. Regression Models for Time-Series Data 20. Unit Roots and Cointegration 21. Monte Carlo Experiments
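The Gauss-Newton regression at the heart of this book builds on the basic Gauss-Newton iteration for nonlinear least squares: linearize the model around the current parameter vector, solve a linear least squares problem for the step, and repeat. A minimal sketch (the exponential model, data, and step-halving safeguard are illustrative choices, not taken from the book):

```python
import numpy as np

# Illustrative Gauss-Newton fit of y = b0 * exp(b1 * x).
def gauss_newton(x, y, beta, iters=100):
    sse = lambda b: np.sum((y - b[0] * np.exp(b[1] * x)) ** 2)
    for _ in range(iters):
        f = beta[0] * np.exp(beta[1] * x)          # model predictions
        r = y - f                                  # residuals
        # Jacobian of the model with respect to (b0, b1)
        J = np.column_stack([np.exp(beta[1] * x),
                             beta[0] * x * np.exp(beta[1] * x)])
        step, *_ = np.linalg.lstsq(J, r, rcond=None)  # solves J'J step = J'r
        t = 1.0
        while sse(beta + t * step) > sse(beta) and t > 1e-8:
            t /= 2                                 # simple step halving
        beta = beta + t * step
        if np.linalg.norm(t * step) < 1e-12:
            break
    return beta

x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x)                          # noise-free data for illustration
est = gauss_newton(x, y, np.array([1.0, 1.0]))
```

On noise-free data the iteration recovers the generating parameters; with real data it converges to the nonlinear least squares estimate under the usual regularity conditions.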

4,912 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that the least squares estimator of a stationary ergodic threshold autoregressive model is strongly consistent, that the estimator of the threshold parameter is N-consistent, and that its limiting distribution is related to a compound Poisson process.
Abstract: It is shown that, under some regularity conditions, the least squares estimator of a stationary ergodic threshold autoregressive model is strongly consistent. The limiting distribution of the least squares estimator is derived. It is shown that the estimator of the threshold parameter is N-consistent and its limiting distribution is related to a compound Poisson process.

1,332 citations


Book ChapterDOI
TL;DR: Weighted averaging regression and calibration form a simple, yet powerful method for reconstructing environmental variables from species assemblages as discussed by the authors, which performs well with noisy, species-rich data that cover a long ecological gradient (>3 SD units).
Abstract: Weighted averaging regression and calibration form a simple, yet powerful method for reconstructing environmental variables from species assemblages. Based on the concepts of niche-space partitioning and ecological optima of species (indicator values), it performs well with noisy, species-rich data that cover a long ecological gradient (>3 SD units). Partial least squares regression is a linear method for multivariate calibration that is popular in chemometrics as a robust alternative to principal component regression. It successively selects linear components so as to maximize predictive power. In this paper the ideas of the two methods are combined. It is shown that the weighted averaging method is a form of partial least squares regression applied to transformed data that uses the first PLS-component only. The new combined method, weighted averaging partial least squares (WA-PLS), consists of using further components, namely as many as are useful in terms of predictive power. The further components utilize the residual structure in the species data to improve the species parameters (‘optima’) in the final weighted averaging predictor. Simulations show that the new method can give 70% reduction in prediction error in data sets with low noise, but only a small reduction in noisy data sets. In three real data sets of diatom assemblages collected for the reconstruction of acidity and salinity, the reduction in prediction error was zero, 19% and 32%.
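The weighted averaging step is simple to state: a species' optimum is estimated as the abundance-weighted mean of the environmental variable, and the environment at a site is reconstructed as the abundance-weighted mean of the optima of the species present. A rough sketch with simulated Gaussian species responses (function names are illustrative; the deshrinking step and the further PLS components of the full WA-PLS method are omitted):

```python
import numpy as np

# Y: sites x species abundance matrix; x: environmental value at each site.
def wa_optima(Y, x):
    # each species' optimum = abundance-weighted average of the environment
    return (Y.T @ x) / Y.sum(axis=0)

def wa_calibrate(Y, optima):
    # reconstructed environment = abundance-weighted average of species optima
    return (Y @ optima) / Y.sum(axis=1)

x = np.linspace(0.0, 10.0, 30)                    # environmental gradient
opt_true = np.array([2.0, 5.0, 8.0])              # true species optima
Y = np.exp(-0.5 * (x[:, None] - opt_true) ** 2)   # Gaussian species responses
u = wa_optima(Y, x)
x_hat = wa_calibrate(Y, u)                        # shrunken but highly correlated with x
```

The reconstruction is compressed toward the middle of the gradient (hence the deshrinking regression in practice), but it tracks the true variable closely.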

904 citations


Journal ArticleDOI
TL;DR: It is shown that even a small pixel-level perturbation may override the epipolar information that is essential for the linear algorithms to distinguish different motions, indicating the need for optimal estimation in the presence of noise.
Abstract: The causes of the high sensitivity to noise exhibited by existing linear algorithms are analyzed. It is shown that even a small pixel-level perturbation may override the epipolar information that is essential for the linear algorithms to distinguish different motions. This analysis indicates the need for optimal estimation in the presence of noise. Methods are introduced for optimal motion and structure estimation under two situations of noise distribution: known and unknown. Computationally, the optimal estimation amounts to minimizing a nonlinear function. For the correct convergence of this nonlinear minimization, a two-step approach is used. The first step is using a linear algorithm to give a preliminary estimate for the parameters. The second step is minimizing the optimal objective function starting from that preliminary estimate as an initial guess. A remarkable accuracy improvement has been achieved by this two-step approach over using the linear algorithm alone.

287 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of determining the circle of best fit to a set of points in the plane (or the obvious generalization to n dimensions) is easily formulated as a nonlinear total least-squares problem which may be solved using a Gauss-Newton minimization algorithm.
Abstract: The problem of determining the circle of best fit to a set of points in the plane (or the obvious generalization to n dimensions) is easily formulated as a nonlinear total least-squares problem which may be solved using a Gauss-Newton minimization algorithm. This straightforward approach is shown to be inefficient and extremely sensitive to the presence of outliers. An alternative formulation allows the problem to be reduced to a linear least squares problem which is trivially solved. The recommended approach is shown to have the added advantage of being much less sensitive to outliers than the nonlinear least squares approach.
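The linear reduction works by rewriting the circle equation so the unknowns enter linearly. A sketch of this algebraic fit (often attributed to Kasa; the code is an illustration, not the paper's exact formulation):

```python
import numpy as np

def fit_circle(x, y):
    # Rewrite (x-a)^2 + (y-b)^2 = r^2 as 2ax + 2by + c = x^2 + y^2,
    # with c = r^2 - a^2 - b^2, which is linear in (a, b, c).
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return a, b, r

# exact points on a circle of center (3, -1) and radius 2
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
x = 3.0 + 2.0 * np.cos(t)
y = -1.0 + 2.0 * np.sin(t)
a, b, r = fit_circle(x, y)
```

On exact data the linear fit recovers the circle; on noisy data it minimizes an algebraic rather than geometric distance, which is the trade-off the paper analyzes.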

209 citations


Journal ArticleDOI
Linda Kaufman
TL;DR: It is shown that the same scaled steepest descent algorithm can be applied to the least squares merit function, and that it can be accelerated using the conjugate gradient approach.
Abstract: The EM algorithm is the basic approach used to maximize the log likelihood objective function for the reconstruction problem in positron emission tomography (PET). The EM algorithm is a scaled steepest ascent algorithm that elegantly handles the nonnegativity constraints of the problem. It is shown that the same scaled steepest descent algorithm can be applied to the least squares merit function, and that it can be accelerated using the conjugate gradient approach. The experiments suggest that one can cut the computation by about a factor of 3 by using this technique. The results are applied to various penalized least squares functions which might be used to produce a smoother image.

193 citations


Journal ArticleDOI
TL;DR: In this article, it is shown how structured and weighted total least squares and L2 approximation problems lead to a nonlinear generalized singular value decomposition, and an inverse iteration scheme to find a (local) minimum is proposed.

153 citations


Journal ArticleDOI
TL;DR: The short T2 component could provide a direct method of measuring intact myelin, which would have a profound effect on the understanding of the evolution of pathology in multiple sclerosis.
Abstract: We have used the CPMG pulse sequence to measure proton T2 values and water content in spinal cord and brain samples from Hartley guinea pigs inoculated to produce experimental allergic encephalomyelitis (EAE). Relaxation data were fitted using MINUIT, a non-linear curve fitting routine. Three exponentials provided the best fit to spinal cord data (10 ms (13%), 76 ms (57%), 215 ms (30%)) and two exponentials for brain tissue (10 ms (4%), 92 ms (96%)). Least squares algorithms were also used to analyse the spinal cord data in terms of discrete and smooth distributions of relaxation times. The discrete least squares solutions consisted of three to five isolated spikes between 0.010 and 0.300 s. This type of solution was difficult to interpret in terms of water reservoirs. Smooth solutions consisted of two broad peaks, a small peak with a T2 near 0.010 s and a larger peak near 0.100 s. The integral ratio of the larger to the smaller peak was 7.092 +/- 1.782 for normal tissue, and increased to a maximum of 16 with increasing parenchymal cellular infiltration and demyelination. The short T2 peak has been assigned to water in the hydration layers of the myelin sheath. The width of the longer T2 peak was sensitive to tissue heterogeneity. The least squares and smooth distribution analysis models could be used to distinguish samples with extensive parenchymal infiltration from normal tissue, even though only a maximum of 60% of the tissue was affected.(ABSTRACT TRUNCATED AT 250 WORDS)

138 citations


Journal ArticleDOI
TL;DR: A new computationally efficient algorithm for recursive least squares filtering, based upon an inverse QR decomposition, solves directly for the time-recursive least squares filter vector while avoiding the highly serial backsubstitution step required in previously derived direct QR approaches.
Abstract: A new computationally efficient algorithm for recursive least squares filtering is derived, which is based upon an inverse QR decomposition. The method solves directly for the time-recursive least squares filter vector, while avoiding the highly serial backsubstitution step required in previously derived direct QR approaches. Furthermore, the method employs orthogonal rotation operations to recursively update the filter, and thus preserves the inherent stability properties of QR approaches to recursive least squares filtering. The results of simulations over extremely long data sets are also presented, which suggest stability of the new time-recursive algorithm. Finally, parallel implementation of the resulting method is briefly discussed, and computational wavefronts are displayed.
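The "inverse updating" idea — propagating information about the inverse correlation matrix instead of back-substituting at every step — can be illustrated with the classical RLS recursion based on the matrix inversion lemma. This sketch updates the inverse matrix directly rather than through the paper's orthogonal rotations, so it shares the recursion but not the numerical robustness; names and constants are illustrative:

```python
import numpy as np

def rls(X, d, lam=0.99, delta=100.0):
    """Recursive least squares: rows of X are input vectors, d the desired output."""
    n = X.shape[1]
    w = np.zeros(n)              # filter vector
    P = delta * np.eye(n)        # inverse of the (regularized) correlation matrix
    for u, dk in zip(X, d):
        k = P @ u / (lam + u @ P @ u)     # gain vector
        e = dk - w @ u                    # a priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam  # matrix inversion lemma update
    return w

rng = np.random.default_rng(1)
w_true = np.array([0.5, -0.3, 0.2])
X = rng.standard_normal((500, 3))
d = X @ w_true                    # noise-free desired signal for illustration
w_hat = rls(X, d)
```

On noise-free data the filter vector converges to the generating coefficients; the exponential forgetting factor `lam` trades tracking speed against noise averaging.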

119 citations


Journal ArticleDOI
TL;DR: A method for computing the exact value of the LMS estimate in multiple linear regression based on the fact that each LQS estimate is the Chebyshev (or minimax) fit to some q-element subset of the data is presented.
Abstract: The difficulty in computing the least median of squares (LMS) estimate in multiple linear regression is due to the nondifferentiability and many local minima of the objective function. Several approximate, but not exact, algorithms have been suggested. This paper presents a method for computing the exact value of the LMS estimate in multiple linear regression. The LMS estimate is a special case of the least quantile of squares (LQS) estimate, which minimizes the qth smallest squared residual for a given data set. For LMS, $q = [n/2] + [(p + 1)/2]$ where $[ \, ]$ is the greatest integer function, n is the sample size, and p is the number of columns in the X matrix. The algorithm can compute a range of exact LQS estimates in multiple linear regression by considering $\binom{n}{p+1}$ possible $\theta $ values. It is based on the fact that each LQS estimate is the Chebyshev (or minimax) fit to some q-element subset of the data. This yields a surprisingly ea...

112 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined the stability across estimation methods of incremental and non-incremental fit measures that use the information about the fit of the most restricted (null) model as a reference point in assessing the fitting of a more substantive model to the data.
Abstract: In a typical study involving covariance structure modeling, fit of a model or a set of alternative models is evaluated using several indicators of fit under one estimation method, usually maximum likelihood. This study examined the stability across estimation methods of incremental and nonincremental fit measures that use the information about the fit of the most restricted (null) model as a reference point in assessing the fit of a more substantive model to the data. A set of alternative models for a large empirical dataset was analyzed by asymptotically distribution-free, generalized least squares, maximum likelihood, and ordinary least squares estimation methods. Four incremental and four nonincremental fit indexes were compared. Incremental indexes were quite unstable across estimation methods—maximum likelihood and ordinary least squares solutions indicated better fit of a given model than asymptotically distribution-free and generalized least squares solutions. The cause of this phenomenon is exp...

Journal ArticleDOI
TL;DR: In this article, a simple algorithm for the state estimation of stochastic singular linear systems, based on the least squares method, is proposed.
Abstract: A simple algorithm for the state estimation of stochastic singular linear systems, based on the least squares method, is presented.

Journal ArticleDOI
TL;DR: In this paper, a new approximation to the weighting function in Zielke's equation is used in an improved implementation of Trikha's method for including frequency dependent friction in transient laminar flow calculations.
Abstract: A new approximation to the weighting function in Zielke's equation is used in an improved implementation of Trikha's method for including frequency dependent friction in transient laminar flow calculations. The new, five-term approximation was fitted to the weighting function using a nonlinear least squares approach. Transient results obtained using the new approximation function are nearly indistinguishable from results obtained using the exact expression for the weighting function.

Journal ArticleDOI
TL;DR: In this article, a new algorithm is presented for computing high breakdown estimates in nonlinear regression that requires only a small number of least squares fits to p points; the elemental-set approach used in linear regression, where p is the dimension of the X matrix, is computationally infeasible in the nonlinear case.
Abstract: Most algorithms for estimating high breakdown regression estimators in linear regression rely on finding the least squares fit to many p-point elemental sets, where p is the dimension of the X matrix. Such an approach is computationally infeasible in nonlinear regression. This article presents a new algorithm for computing high breakdown estimates in nonlinear regression that requires only a small number of least squares fits to p points. The algorithm is used to compute Rousseeuw's least median of squares (LMS) estimate and Yohai's MM estimate in both simulations and examples. It is also used to compute bootstrapped and Monte Carlo standard error estimates for MM estimates, which are compared with asymptotic standard errors (ASEs). Using the PROGRESS algorithm for a two-parameter nonlinear model with sample size 30 would require finding the least squares fit to 435 two-point subsets of the data. In the settings considered in this article, the proposed algorithm performs just as well with 25 as with 435 ...

Proceedings ArticleDOI
15 Jun 1993
TL;DR: A shape and motion estimation algorithm based on nonlinear least squares applied to the tracks of features through time is presented, using an object-centered representation for faster and more accurate structure and motion recovery.
Abstract: A shape and motion estimation algorithm based on nonlinear least squares applied to the tracks of features through time is presented. While the authors' approach requires iteration, it quickly converges to the desired solution, even in the absence of a priori knowledge about the shape or motion. Important features of the algorithm include its ability to handle partial point tracks and true perspective, its ability to use line segment matches and point matches simultaneously, and its use of an object-centered representation for faster and more accurate structure and motion recovery.

Journal ArticleDOI
TL;DR: The results show that these methods can take full advantage of the contribution from the fine temporal sampling data of modern tomographs, and thus provide statistically reliable estimates that are comparable to those obtained from nonlinear LS regression.
Abstract: With the advent of positron emission tomography (PET), a variety of techniques have been developed to measure local cerebral blood flow (LCBF) noninvasively in humans. A potential class of techniques, which includes linear least squares (LS), linear weighted least squares (WLS), linear generalized least squares (GLS), and linear generalized weighted least squares (GWLS), is proposed. The statistical characteristics of these methods are examined by computer simulation. The authors present a comparison of these four methods with two other rapid estimation techniques developed by Huang et al. (1982) and Alpert (1984), and two classical methods, the unweighted and weighted nonlinear least squares regression. The results show that these methods can take full advantage of the contribution from the fine temporal sampling data of modern tomographs, and thus provide statistically reliable estimates that are comparable to those obtained from nonlinear LS regression. These methods also have high computational efficiency, and the parameters can be estimated directly from operational equations in one single step. Therefore, they can potentially be used in image-wide estimation of local cerebral blood flow and distribution volume with PET.


Journal ArticleDOI
TL;DR: In this article, the quality of the approximation given by the elemental set algorithm is studied for the least median of squares, least trimmed squares, and ordinary least squares criteria, in the context of high breakdown regression and multivariate location/scale estimation.
Abstract: The elemental set algorithm involves performing many fits to a data set, each fit made to a subsample of size just large enough to estimate the parameters in the model. Elemental sets have been proposed as a computational device to approximate estimators in the areas of high breakdown regression and multivariate location/scale estimation, where exact optimization of the criterion function is computationally intractable. Although elemental set algorithms are used widely and for a variety of problems, the quality of the approximation they give has not been studied. This article shows that they provide excellent approximations for the least median of squares, least trimmed squares, and ordinary least squares criteria. It is suggested that the approach likely will be equally effective in the other problem areas in which exact optimization of a criterion is difficult or impossible.
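An elemental set algorithm for least median of squares is easy to sketch: fit the model exactly to many random p-point subsets and keep the candidate that minimizes the criterion. A small illustration with simulated data (not the article's exact procedure; subset count and data are arbitrary):

```python
import numpy as np

def lms_elemental(X, y, n_subsets=500, seed=0):
    """Approximate least median of squares via random elemental subsets."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    best_beta, best_crit = None, np.inf
    for _ in range(n_subsets):
        idx = rng.choice(n, size=p, replace=False)   # elemental subset: p points
        try:
            beta = np.linalg.solve(X[idx], y[idx])   # exact fit to the subset
        except np.linalg.LinAlgError:
            continue                                 # singular subset, skip
        crit = np.median((y - X @ beta) ** 2)        # LMS criterion
        if crit < best_crit:
            best_beta, best_crit = beta, crit
    return best_beta, best_crit

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(40), rng.uniform(0, 10, 40)])
y = 1.0 + 2.0 * X[:, 1]
y[:10] = 30.0                                        # 25% gross outliers
beta, crit = lms_elemental(X, y)
```

Because the median squared residual ignores the worst half of the data, the fitted line recovers the clean majority despite 25% contamination, which ordinary least squares would not.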

Journal ArticleDOI
TL;DR: SENSOP is presented, a weighted nonlinear least squares optimizer, which is designed for fitting a model to a set of data where the variance may or may not be constant.
Abstract: Nonlinear least squares optimization is used most often in fitting a complex model to a set of data. An ordinary nonlinear least squares optimizer assumes a constant variance for all the data points. This paper presents SENSOP, a weighted nonlinear least squares optimizer, which is designed for fitting a model to a set of data where the variance may or may not be constant. It uses a variant of the Levenberg-Marquardt method to calculate the direction and the length of the step change in the parameter vector. The method for estimating appropriate weighting functions applies generally to 1-dimensional signals and can be used for higher dimensional signals. Sets of multiple tracer outflow dilution curves present special problems because the data encompass three to four orders of magnitude; a fractional power function provides appropriate weighting giving success in parameter estimation despite the wide range.

Journal ArticleDOI
TL;DR: In this paper, a Monte Carlo study is used to determine which of several different parameterizations of an ambiguous equivalent circuit model leads to minimum correlation between fitting parameters, a desirable condition.

Proceedings ArticleDOI
15 Dec 1993
TL;DR: In this paper, a recursive partial least squares (PLS) regression is used for online system identification, circumventing the ill-conditioning caused by correlated variables in industrial process models by projecting the original variable space onto an orthogonal latent space.
Abstract: Industrial processes usually involve a large number of variables, many of which vary in a correlated manner. To identify a process model which has correlated variables, an ordinary least squares approach suffers from ill-conditioning, and the resulting model is sensitive to changes in sampled data. In this paper, a recursive partial least squares (PLS) regression is used for online system identification and circumventing the ill-conditioning problem. The partial least squares method is used to remove the correlation by projecting the original variable space onto an orthogonal latent space. Application of the proposed algorithm to a chemical process modeling problem is discussed.
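The PLS idea — extracting orthogonal latent components that maximize covariance with the response before regressing — can be sketched in batch form as follows (a NIPALS-style PLS1, not the recursive variant of the paper; names and data are illustrative):

```python
import numpy as np

def pls1(X, y, n_comp):
    """Batch PLS1: regress y on orthogonal latent components of X."""
    Xk, yk = X.copy(), y.copy()
    W, P, b = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)          # weight: direction of max covariance with y
        t = Xk @ w                      # latent score
        p = Xk.T @ t / (t @ t)          # loading
        b.append(t @ yk / (t @ t))      # regression coefficient on the score
        Xk = Xk - np.outer(t, p)        # deflate X
        yk = yk - b[-1] * t             # deflate y
        W.append(w); P.append(p)
    W, P, b = np.array(W).T, np.array(P).T, np.array(b)
    # map the latent-space coefficients back to the original variables
    return W @ np.linalg.solve(P.T @ W, b)

rng = np.random.default_rng(3)
z = rng.standard_normal(100)
# two correlated inputs plus an independent one
X = np.column_stack([z, 0.8 * z + 0.6 * rng.standard_normal(100),
                     rng.standard_normal(100)])
y = X @ np.array([1.0, -1.0, 0.5])
beta = pls1(X, y, n_comp=3)
```

With as many components as variables PLS reproduces the least squares fit; using fewer components is what regularizes the ill-conditioned case the paper targets.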

Journal ArticleDOI
TL;DR: The so-called mesh independence for non-linear least squares problems with norm constraints (NCNLLS) is proved and sufficient conditions for the mesh independence to hold are related to conditions guaranteeing convergence of the Gauss-Newton method.
Abstract: If one solves an infinite-dimensional optimization problem by introducing discretizations and applying a solution method to the resulting finite-dimensional problem, one often observes the very stable behavior of this method with respect to varying discretizations. The most striking observation is the constancy of the number of iterations needed to satisfy a given stopping criterion. In this paper an analysis of these phenomena is given and the so-called mesh independence for non-linear least squares problems with norm constraints (NCNLLS) is proved. A Gauss–Newton method for the solution of NCNLLS is discussed and its convergence properties are analyzed. The mesh independence is proven in its sharpest formulation. Sufficient conditions for the mesh independence to hold are related to conditions guaranteeing convergence of the Gauss-Newton method. The results are demonstrated on a two-point boundary value problem.

Proceedings ArticleDOI
02 May 1993
TL;DR: An adaptive non-linear least squares algorithm is proposed for solving the inverse kinematic problem for robotic manipulators, and it is proved that the task space error function has no local minimizers.
Abstract: The use of an adaptive non-linear least squares algorithm to solve the inverse kinematic problem for robotic manipulators is proposed. The algorithm uses the Gauss-Newton model of the direct kinematic function with the Levenberg-Marquardt iteration. This first-order approximation is supplemented with a quadratic model in certain situations. If required the algorithm can converge to singular configurations, and hence is especially useful when the desired end-effector position is outside the reachable workspace of the manipulator. The authors prove that the task space error function has no local minimizers.
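The core iteration — a Gauss-Newton model of the direct kinematic function damped in Levenberg-Marquardt fashion — can be sketched for a planar two-link arm (link lengths, damping constant, target, and starting configuration are arbitrary illustrations, not the paper's algorithm):

```python
import numpy as np

L1, L2 = 1.0, 1.0  # link lengths of a hypothetical planar arm

def fk(q):
    """Direct (forward) kinematics: joint angles -> end-effector position."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik(target, q, mu=1e-3, iters=200):
    """Damped least squares iteration on the task-space error."""
    for _ in range(iters):
        r = target - fk(q)              # task-space error
        if np.linalg.norm(r) < 1e-10:
            break
        J = jacobian(q)
        # damped normal equations: (J'J + mu I) dq = J'r
        dq = np.linalg.solve(J.T @ J + mu * np.eye(2), J.T @ r)
        q = q + dq
    return q

q = ik(np.array([1.2, 0.8]), np.array([0.3, 0.3]))
```

The damping term keeps the step well defined near singular configurations, where the undamped Gauss-Newton step would blow up.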

Journal ArticleDOI
TL;DR: This paper addresses the question of how to construct a row relaxation method for solving large unstructured linear least squares problems, with or without linear constraints, and combines the Herman–Lent–Hurwitz scheme with the Lent–Censor–Hildreth method for handling linear constraints.
Abstract: This paper addresses the question of how to construct a row relaxation method for solving large unstructured linear least squares problems, with or without linear constraints. The proposed approach combines the Herman–Lent–Hurwitz scheme for solving regularized least squares problems with the Lent–Censor–Hildreth method for solving linear constraints. However, numerical experiments show that the Herman–Lent–Hurwitz scheme has difficulty reaching a least squares solution. This difficulty is resolved by applying the Riley–Golub iterative improvement process.

Journal ArticleDOI
TL;DR: In this paper, the consistency and asymptotic normality of the least squares estimator are derived, under the assumption of normal errors, for a particular non-linear regression model that does not satisfy the standard sufficient conditions of Jennrich (1969) or Wu (1981).

Book ChapterDOI
TL;DR: In this article, a recent least squares algorithm is designed to adapt implicit models to given sets of data, especially models given by differential equations or dynamical systems, is reviewed and used to fit the Henon-Heiles differential equations to chaotic data sets.
Abstract: A recent least squares algorithm, which is designed to adapt implicit models to given sets of data, especially models given by differential equations or dynamical systems, is reviewed and used to fit the Henon-Heiles differential equations to chaotic data sets.

Journal ArticleDOI
TL;DR: In this article, the effect of spatial autocorrelation on inferences made using ordinary least squares estimation is considered and an alternative variance estimator that adjusts for any observed correlation is proposed.
Abstract: The effect of spatial autocorrelation on inferences made using ordinary least squares estimation is considered. It is found, in some cases, that ordinary least squares estimators provide a reasonable alternative to the estimated generalized least squares estimators recommended in the spatial statistics literature. One of the most serious problems in using ordinary least squares is that the usual variance estimators are severely biased when the errors are correlated. An alternative variance estimator that adjusts for any observed correlation is proposed. The need to take autocorrelation into account in variance estimation negates much of the advantage that ordinary least squares estimation has in terms of computational simplicity.

Journal ArticleDOI
TL;DR: A novel systolic array is described for recursive least squares estimation based on the method of 'inverse updating' that achieves an O(n^0) throughput rate with O(n^2) parallelism.
Abstract: A novel systolic array is described for recursive least squares estimation based on the method of ‘inverse updating’. It achieves an O(n^0) throughput rate with O(n^2) parallelism.

Journal ArticleDOI
TL;DR: A trust region algorithm for solving constrained nonlinear least squares problems by finding a parameter vector $\beta^*$ such that the contour $f(x,\beta^*)=0$ is a best fit to given data $\{z_i\}_{i=1}^n \subset \mathbb{R}^d$ in a least squares sense.
Abstract: Let a family of curves or surfaces be given in implicit form via the model equation $f(x,\beta)=0$, where $x \in \mathbb{R}^d$ and $\beta \in \mathbb{R}^m$ is a parameter vector. We present a trust region algorithm for solving the problem: find a parameter vector $\beta^*$ such that the contour $f(x,\beta^*)=0$ is a best fit to given data $\{z_i\}_{i=1}^n \subset \mathbb{R}^d$ in a least squares sense. Specifically, we seek $\beta^*$ and $\{x_i^*\}_{i=1}^n$ such that $f(x_i^*,\beta^*)=0$, $i=1,\dots,n$, and $\sum_{i=1}^n \|z_i - x_i^*\|_2^2$ is minimal. The term orthogonal distance regression is used to describe such constrained nonlinear least squares problems.

Journal ArticleDOI
TL;DR: In this paper, rank-based methods are used to develop a theory for the multivariate linear model analogous to least squares, and three asymptotically equivalent test procedures are developed: quadratic, aligned rank, and drop in dispersion.
Abstract: Rank-based methods are used to develop a theory for the multivariate linear model analogous to least squares. Quadratic procedures for testing H[β0β′]′K = 0 are considered both with and without the assumption of Symmetrie errors. When testing the hypothesis HβK = 0, the reduced-model R estimate is shown to be asymptotically a linear function of the full-model R estimate. Three asymptotically equivalent test procedures are developed: quadratic, aligned rank, and drop in dispersion. An analysis of covariance example is considered using both rank and least squares procedures.