Topic

Non-linear least squares

About: Non-linear least squares is a research topic. Over its lifetime, 6,667 publications have been published on this topic, receiving 273,089 citations.


Papers
Journal ArticleDOI
TL;DR: In this paper, the Gauss-Newton method for calculating nonlinear least squares estimates generalizes easily to deal with maximum quasi-likelihood estimates, and a rearrangement of this produces a generalization of the method described by Nelder & Wedderburn (1972).
Abstract: To define a likelihood we have to specify the form of distribution of the observations, but to define a quasi-likelihood function we need only specify a relation between the mean and variance of the observations, and the quasi-likelihood can then be used for estimation. For a one-parameter exponential family the log likelihood is the same as the quasi-likelihood, and it follows that assuming a one-parameter exponential family is the weakest sort of distributional assumption that can be made. The Gauss-Newton method for calculating nonlinear least squares estimates generalizes easily to deal with maximum quasi-likelihood estimates, and a rearrangement of this produces a generalization of the method described by Nelder & Wedderburn (1972). This paper is mainly concerned with fitting regression models, linear or nonlinear, in which the variance of each observation is specified to be either equal to, or proportional to, some function of its expectation. If the form of distribution of the observations were specified, the method of maximum likelihood would give estimates of the parameters in the model. For instance, if it is specified that the observations have normally distributed errors with constant variance, then the method of least squares provides expressions for the variances and covariances of the estimates, exact for linear models and approximate for nonlinear ones, and these estimates and the expressions for their errors remain valid even if the observations are not normally distributed but merely have a fixed variance; thus, with linear models and a given error variance, the variance of least squares estimates is not affected by the distribution of the errors, and the same holds approximately for nonlinear ones. A more general situation is considered in this paper, namely the situation when there is a given relation between the variance and mean of the observations, possibly with an unknown constant of proportionality. A similar problem was considered from a Bayesian viewpoint by Hartigan (1969). We define a quasi-likelihood function, which can be used for estimation in the same way as a likelihood function. With constant variance this again leads to least squares estimation. When other mean-variance relationships are specified, the quasi-likelihood sometimes turns out to be a recognizable likelihood function; for instance, for a constant coefficient of variation the quasi-likelihood function is the same as the likelihood obtained by treating the observations as if they had a gamma distribution.

2,063 citations
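
The Gauss-Newton generalization the abstract describes amounts to solving the quasi-score equations by iteratively reweighted least squares. Here is a minimal Python sketch of that iteration, assuming a log-link mean and a variance proportional to the mean; the names (quasi_score_fit, mu_fn, var_fn) are illustrative, not from the paper.

```python
import numpy as np

def quasi_score_fit(x, y, mu_fn, jac_fn, var_fn, beta0, tol=1e-8, max_iter=50):
    """Gauss-Newton-style iteration for maximum quasi-likelihood estimates.

    Solves the quasi-score equations J'W(y - mu) = 0, where the weights
    W = diag(1 / V(mu)) come from the assumed mean-variance relation V.
    """
    beta = np.asarray(beta0, dtype=float)
    for _ in range(max_iter):
        mu = mu_fn(x, beta)        # fitted means
        J = jac_fn(x, beta)        # d mu / d beta, shape (n, p)
        w = 1.0 / var_fn(mu)       # quasi-likelihood weights
        JTW = J.T * w              # J'W for a diagonal W
        step = np.linalg.solve(JTW @ J, JTW @ (y - mu))
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Toy example: log-link mean mu = exp(b0 + b1*x) with variance V(mu) = mu.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 200)
mu_true = np.exp(0.5 + 1.2 * x)
y = rng.poisson(mu_true).astype(float)  # any data with Var(y) ~ mu would do

mu_fn = lambda x, b: np.exp(b[0] + b[1] * x)
jac_fn = lambda x, b: np.column_stack([mu_fn(x, b), x * mu_fn(x, b)])
print(quasi_score_fit(x, y, mu_fn, jac_fn, lambda mu: mu, beta0=[0.0, 1.0]))
```

With this mean-variance relation the update coincides with Fisher scoring for a Poisson generalized linear model, which is the connection to Nelder & Wedderburn (1972), yet only the variance function, not the distribution, had to be specified.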

Book
16 Apr 2013
TL;DR: How to Construct Nonparametric Regression Estimates * Lower Bounds * Partitioning Estimates * Kernel Estimates * k-NN Estimates * Splitting the Sample * Cross Validation * Uniform Laws of Large Numbers
Abstract: Why is Nonparametric Regression Important? * How to Construct Nonparametric Regression Estimates * Lower Bounds * Partitioning Estimates * Kernel Estimates * k-NN Estimates * Splitting the Sample * Cross Validation * Uniform Laws of Large Numbers * Least Squares Estimates I: Consistency * Least Squares Estimates II: Rate of Convergence * Least Squares Estimates III: Complexity Regularization * Consistency of Data-Dependent Partitioning Estimates * Univariate Least Squares Spline Estimates * Multivariate Least Squares Spline Estimates * Neural Networks Estimates * Radial Basis Function Networks * Orthogonal Series Estimates * Advanced Techniques from Empirical Process Theory * Penalized Least Squares Estimates I: Consistency * Penalized Least Squares Estimates II: Rate of Convergence * Dimension Reduction Techniques * Strong Consistency of Local Averaging Estimates * Semi-Recursive Estimates * Recursive Estimates * Censored Observations * Dependent Observations

1,931 citations
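
As one concrete instance of the local averaging estimates the book analyzes, a short Nadaraya-Watson kernel regression sketch follows; the Gaussian kernel and the fixed bandwidth are arbitrary choices made for illustration.

```python
import numpy as np

def kernel_regression(x_train, y_train, x_query, bandwidth):
    """Nadaraya-Watson kernel regression: a local averaging estimate of
    the regression function m(x) = E[Y | X = x] with a Gaussian kernel."""
    u = (x_query[:, None] - x_train[None, :]) / bandwidth
    weights = np.exp(-0.5 * u ** 2)           # kernel weight of each sample
    return (weights @ y_train) / weights.sum(axis=1)

# Noisy samples from a smooth regression function.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 300)
y = np.sin(2.0 * np.pi * x) + 0.3 * rng.standard_normal(300)

grid = np.linspace(0.0, 1.0, 5)
print(kernel_regression(x, y, grid, bandwidth=0.1))
```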

Journal ArticleDOI
TL;DR: In this article, a general definition of the nonlinear least squares inverse problem is given, where the form of the theoretical relationship between data and unknowns may be general (in particular, nonlinear integro-differential equations).
Abstract: We attempt to give a general definition of the nonlinear least squares inverse problem. First, we examine the discrete problem (finite number of data and unknowns), setting the problem in its fully nonlinear form. Second, we examine the general case where some data and/or unknowns may be functions of a continuous variable and where the form of the theoretical relationship between data and unknowns may be general (in particular, nonlinear integro-differential equations). As particular cases of our nonlinear algorithm we find linear solutions well known in geophysics, like Jackson's (1979) solution for discrete problems or Backus and Gilbert's (1970) solution for continuous problems.

1,800 citations
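
In the discrete case, such an inverse problem comes down to minimizing a data misfit plus a penalty for departing from a prior model, each weighted by its covariance. The sketch below encodes both terms as stacked, whitened residuals for scipy.optimize.least_squares; the exponential forward model and the diagonal covariances are assumptions for the example, not part of the paper's general formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def forward(m, t):
    # Illustrative forward model g(m); the paper allows a general
    # nonlinear relation between data and unknowns.
    return m[0] * np.exp(-m[1] * t)

def residuals(m, t, d_obs, sd_d, m_prior, sd_m):
    # Stacked residuals: data misfit plus deviation from the prior model,
    # each whitened by its standard deviation (diagonal covariances).
    r_data = (forward(m, t) - d_obs) / sd_d
    r_model = (m - m_prior) / sd_m
    return np.concatenate([r_data, r_model])

rng = np.random.default_rng(2)
t = np.linspace(0.0, 4.0, 40)
d_obs = forward(np.array([2.0, 0.7]), t) + 0.05 * rng.standard_normal(t.size)

m_prior = np.array([1.5, 1.0])   # prior estimate of the model parameters
sd_d, sd_m = 0.05, 0.5           # data and model standard deviations

fit = least_squares(residuals, m_prior, args=(t, d_obs, sd_d, sd_m))
print(fit.x)                     # close to the true model (2.0, 0.7)
```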

Book
01 Sep 1989
TL;DR: Partial least squares (PLS) is presented as an estimation method and algorithm for latent variable path models, estimating the latent variables as weighted aggregates of their observed indicators.
Abstract: Partial Least Squares (PLS) is an estimation method and an algorithm for latent variable path (LVP) models. PLS is a component technique and estimates the latent variables as weighted aggregates. The implications of this choice are considered and compared to covariance structure techniques like LISREL, COSAN and EQS. The properties of special cases of PLS (regression, factor scores, structural equations, principal components, canonical correlation, hierarchical components, correspondence analysis, three-mode path and component analysis) are examined step by step and contribute to the understanding of the general PLS technique. The proof of the convergence of the PLS algorithm is extended beyond two-block models. Some 10 computer programs and 100 applications of PLS are referenced. The book gives the statistical underpinning for the computer programs PLS 1.8, which is in use in some 100 university computer centers, and for PLS/PC. It is intended to be the background reference for the users of PLS 1.8, not as a textbook or program manual.

1,695 citations
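
The heart of the PLS algorithm is an alternating refinement of block weights and latent scores. Below is a minimal two-block sketch in Python that extracts only the first latent component; the book treats general multi-block path models, so this is an illustration of the idea rather than the full method.

```python
import numpy as np

def pls_first_component(X, Y, tol=1e-10, max_iter=500):
    """First latent component of two-block PLS: latent scores are weighted
    aggregates t = Xw and u = Yc, with weights refined by alternation."""
    u = Y[:, [0]]                    # crude initial outer estimate
    for _ in range(max_iter):
        w = X.T @ u                  # X-block weights
        w /= np.linalg.norm(w)
        t = X @ w                    # X-block latent scores
        c = Y.T @ t                  # Y-block weights
        c /= np.linalg.norm(c)
        u_new = Y @ c                # Y-block latent scores
        converged = np.linalg.norm(u_new - u) < tol
        u = u_new
        if converged:
            break
    return t, u, w, c

# Synthetic indicators driven by one shared latent variable.
rng = np.random.default_rng(3)
latent = rng.standard_normal((100, 1))
X = latent @ rng.standard_normal((1, 4)) + 0.1 * rng.standard_normal((100, 4))
Y = latent @ rng.standard_normal((1, 3)) + 0.1 * rng.standard_normal((100, 3))

t, u, w, c = pls_first_component(X - X.mean(0), Y - Y.mean(0))
print(np.corrcoef(t.ravel(), u.ravel())[0, 1])  # scores correlate strongly
```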

Journal ArticleDOI
TL;DR: Identification algorithms based on the well-known linear least squares methods of Gaussian elimination, Cholesky decomposition, classical Gram-Schmidt, modified Gram-Schmidt, Householder transformation, the Givens method, and singular value decomposition are reviewed.
Abstract: Identification algorithms based on the well-known linear least squares methods of Gaussian elimination, Cholesky decomposition, classical Gram-Schmidt, modified Gram-Schmidt, Householder transformation, the Givens method, and singular value decomposition are reviewed. The classical Gram-Schmidt, modified Gram-Schmidt, and Householder transformation algorithms are then extended to combine structure determination (that is, deciding which terms to include in the model) and parameter estimation in a very simple and efficient manner for a class of multivariate discrete-time non-linear stochastic systems which are linear in the parameters.

1,620 citations
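
Two of the reviewed factorizations are easy to demonstrate with NumPy, whose qr routine is backed by LAPACK's Householder-based factorization. This sketch solves one synthetic overdetermined system both by QR and by singular value decomposition.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((50, 3))             # regressor matrix
b = A @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(50)

# QR route: solve R x = Q'b, avoiding the normal equations A'A x = A'b
# and the squared condition number that comes with them.
Q, R = np.linalg.qr(A)                       # reduced QR factorization
x_qr = np.linalg.solve(R, Q.T @ b)

# SVD route: x = V diag(1/s) U'b, the most robust (and costly) option,
# and the natural choice when A may be rank deficient.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)

print(x_qr)
print(x_svd)   # both should recover roughly (1.0, -2.0, 0.5)
```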


Network Information
Related Topics (5)

Topic                            Papers    Citations    Relatedness
Estimator                        97.3K     2.6M         86%
Matrix (mathematics)             105.5K    1.9M         82%
Markov chain                     51.9K     1.3M         79%
Robustness (computer science)    94.7K     1.6M         79%
Nonlinear system                 208.1K    4M           77%
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    15
2022    45
2021    61
2020    78
2019    101
2018    101