
Showing papers on "Non-linear least squares published in 1988"


Journal ArticleDOI
TL;DR: In this paper, it was shown that the two algorithms given in the literature for partial least squares regression are equivalent, and this equivalence is used to give an explicit formula for the resulting prediction equation.
Abstract: We prove that the two algorithms given in the literature for partial least squares regression are equivalent, and use this equivalence to give an explicit formula for the resulting prediction equation. This in turn is used to investigate the regression method from several points of view. Its relation to principal component regression is clarified, and some heuristic arguments are given to explain why partial least squares regression often needs fewer factors to give its optimal prediction.

547 citations
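
As a concrete illustration of the prediction equation discussed above, here is a hedged sketch of PLS1, the univariate-response partial least squares algorithm; the function and variable names are illustrative, not from the paper. The last line collapses the extracted factors into one explicit coefficient vector, beta = W(P'W)^{-1}q, mirroring the paper's point that the algorithm admits a closed-form prediction equation.

```python
import numpy as np

def pls1(X, y, n_factors):
    """Illustrative PLS1: regression coefficients (data centered inside)."""
    X = X - X.mean(axis=0)          # center predictors
    y = y - y.mean()                # center response
    W, P, q = [], [], []
    Xk, yk = X.copy(), y.copy()
    for _ in range(n_factors):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)      # weight vector
        t = Xk @ w                  # score vector
        p = Xk.T @ t / (t @ t)      # X-loading
        c = yk @ t / (t @ t)        # y-loading
        Xk = Xk - np.outer(t, p)    # deflate X
        yk = yk - c * t             # deflate y
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)   # beta = W (P'W)^{-1} q
```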


Journal ArticleDOI
TL;DR: The PLS Program System: Latent Variables Path Analysis with Partial Least Squares Estimation as mentioned in this paper is a program that performs latent-variable path analysis with partial least squares estimation.
Abstract: (1988). The PLS Program System: Latent Variables Path Analysis with Partial Least Squares Estimation. Multivariate Behavioral Research: Vol. 23, No. 1, pp. 125-127.

260 citations


Reference BookDOI
TL;DR: Linear Least Squares Computations as mentioned in this paper is an excellent reference for industrial and applied mathematicians, statisticians, and econometricians, as well as a text for advanced undergraduate and graduate statistics, mathematics, and economics courses in computer programming, linear regression analysis, and applied statistics.
Abstract: Presenting numerous algorithms in a simple algebraic form so that the reader can easily translate them into any computer language, this volume gives details of several methods for obtaining accurate least squares estimates. It explains how these estimates may be updated as new information becomes available and how to test linear hypotheses. Linear Least Squares Computations features many structured exercises that guide the reader through the available algorithms, plus a glossary of commonly used terms and a bibliography of supplementary reading ... collects "ancient" and modern results on linear least squares computations in a convenient single source ... develops the necessary matrix algebra in the context of multivariate statistics ... only makes peripheral use of concepts such as eigenvalues and partial differentiation ... interprets canonical forms employed in computation ... discusses many variants of the Gauss, Laplace-Schmidt, Givens, and Householder algorithms ... and uses an empirical approach for the appraisal of algorithms. Linear Least Squares Computations serves as an outstanding reference for industrial and applied mathematicians, statisticians, and econometricians, as well as a text for advanced undergraduate and graduate statistics, mathematics, and econometrics courses in computer programming, linear regression analysis, and applied statistics.

155 citations
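
Since the book catalogs Householder-type algorithms among others, a minimal sketch of least squares via a Householder-based QR factorization may help; it leans on numpy's built-in QR rather than hand-coded reflections, so it is illustrative only.

```python
import numpy as np

def ls_via_qr(A, b):
    """Solve min ||Ax - b|| via thin QR (Householder-based in LAPACK)."""
    Q, R = np.linalg.qr(A)
    return np.linalg.solve(R, Q.T @ b)   # triangular system R x = Q'b

A = np.random.rand(100, 3)
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * np.random.randn(100)
print(ls_via_qr(A, b))                   # close to x_true
```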


Journal ArticleDOI
TL;DR: In this article, several parameter estimation methods for dealing with heteroscedasticity in nonlinear regression are described, including variations on ordinary, weighted, iteratively reweighted, extended and generalized least squares.
Abstract: Several parameter estimation methods for dealing with heteroscedasticity in nonlinear regression are described. These include variations on ordinary, weighted, iteratively reweighted, extended, and generalized least squares. Some of these variations are new, and one of them in particular, modified extended iteratively reweighted least squares (MEIRLS), allows parameters of an assumed heteroscedastic variance model to be estimated with an adjustment for bias due to estimation of the regression parameters. The context of the discussion is primarily that of pharmacokinetic-type data, although an example is given involving chemical-reaction data. Using simulated data from 21 heteroscedastic pharmacokinetic-type models, some of the methods are compared in terms of mean absolute error and 95% confidence-interval coverage. From these comparisons, MEIRLS and the variations on generalized least squares emerge as the methods of choice.

116 citations
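
For flavor, a hedged sketch of one family compared in the paper: iteratively reweighted least squares under a power-of-the-mean variance model Var(y_i) proportional to f(x_i, beta)^theta. The one-compartment-style model f, the fixed theta, and the iteration count are illustrative assumptions, and the paper's MEIRLS bias correction is not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

def f(beta, x):                      # illustrative exponential-decay model
    return beta[0] * np.exp(-beta[1] * x)

def irls(x, y, beta0, theta=1.0, iters=3):
    """Reweighted NLS: weights 1/f^theta from the current fitted means."""
    beta = np.asarray(beta0, float)
    for _ in range(iters):
        w = f(beta, x) ** (-theta / 2.0)         # residual scaling
        res = least_squares(lambda b: w * (y - f(b, x)), beta)
        beta = res.x
    return beta
```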


Journal ArticleDOI
TL;DR: In this paper, a particular class of non-linear least-squares problems for which it is possible to take advantage of the special structure of the non-linear model is discussed.
Abstract: In this paper a particular class of non-linear least-squares problems, for which it is possible to take advantage of the special structure of the non-linear model, is discussed. The non-linear models are of the ruled type (Teunissen, 1985a). The proposed solution strategy is applied to the 2D non-linear Symmetric Helmert transformation, which is defined in the paper. An exact non-linear least-squares solution, using a rotationally invariant covariance structure, is given.

82 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that when the regressor is an integrated process and the errors are stationary, the asymptotic distributions of the OLS and GLS estimators of β are identical; this generalizes a result of Kramer (1986) for two-variate regression and extends the fixed regressor theory developed by Grenander and Rosenblatt.
Abstract: It is shown that in the multiple regression model y_t = x_t′β + u_t, where u_t is a stationary autoregressive process and x_t is an integrated m-vector process, the asymptotic distributions of the ordinary least squares (OLS) and generalized least squares (GLS) estimators of β are identical. This generalizes a result obtained by Kramer (1986) for two-variate regression and extends fixed regressor theory developed by Grenander and Rosenblatt (1957). Our approach uses a multivariate invariance principle and yields explicit representations of the asymptotic distributions in terms of functionals of vector Brownian motion. We also provide some useful asymptotic results for hypothesis tests of the model. Thus if x_t is generated by a vector autoregressive integrated moving average ARIMA(r, 1, s) model and u_t is generated by an independent autoregressive AR(p) process, then the OLS and GLS estimators have the same limiting distribution. This distribution is nonnor...

80 citations
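
A small simulation sketch of the headline result may be useful: with a random-walk regressor and AR(1) errors, the OLS slope and a feasible GLS (quasi-differencing) slope come out nearly identical in large samples. All constants and the Cochrane-Orcutt-style GLS step are illustrative choices, not the paper's constructions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, beta, rho = 2000, 1.0, 0.6
x = np.cumsum(rng.standard_normal(T))        # integrated (I(1)) regressor
u = np.zeros(T)
for t in range(1, T):                        # stationary AR(1) errors
    u[t] = rho * u[t - 1] + rng.standard_normal()
y = beta * x + u

b_ols = (x @ y) / (x @ x)                    # OLS slope
r = y - b_ols * x
rho_hat = (r[1:] @ r[:-1]) / (r[:-1] @ r[:-1])   # estimate error AR(1)
xs, ys = x[1:] - rho_hat * x[:-1], y[1:] - rho_hat * y[:-1]
b_gls = (xs @ ys) / (xs @ xs)                # feasible GLS slope
print(b_ols, b_gls)                          # nearly identical for large T
```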


Journal ArticleDOI
TL;DR: In this paper, homogeneity analysis is applied to sets of variables by using sums within sets; the resulting technique, called OVERALS, uses the notion of optimal scaling, with transformations that can be multiple or single, the single transformations being of three types: nominal, ordinal, and numerical.
Abstract: Homogeneity analysis, or multiple correspondence analysis, is usually applied to k separate variables. In this paper we apply it to sets of variables by using sums within sets. The resulting technique is called OVERALS. It uses the notion of optimal scaling, with transformations that can be multiple or single. The single transformations consist of three types: nominal, ordinal, and numerical. The corresponding OVERALS computer program minimizes a least squares loss function by using an alternating least squares algorithm. Many existing linear and nonlinear multivariate analysis techniques are shown to be special cases of OVERALS. An application to data from an epidemiological survey is presented.

77 citations


Journal ArticleDOI
TL;DR: In this paper, the effect of estimating weights in weighted least squares is investigated under the assumption that one has a parametric model for the variance function, and it is shown that a simple bootstrap operation resul...
Abstract: In weighted least squares, it is typical that the weights are unknown and must be estimated. Most packages provide standard errors assuming that the weights are known. This is fine for sufficiently large sample sizes, but what about for small-to-moderate sample sizes? The investigation of this article into the effect of estimating weights proceeds under the assumption typical in practice—that one has a parametric model for the variance function. In this context, generalized least squares consists of (a) an initial estimate of the regression parameter, (b) a method for estimating the variance function, and (c) the number of iterations in reweighted least squares. By means of expansions for the covariance, it is shown that each of (a)—(c) can matter in problems of small to moderate size. A few conclusions may be of practical interest. First, estimated standard errors assuming that the weights are known can be too small in practice. The investigation indicates that a simple bootstrap operation resul...

67 citations
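
A hedged sketch of the three ingredients the article dissects: (a) an initial unweighted fit, (b) a parametric variance-function estimate, here Var(y_i) = sigma^2 |x_i'beta|^theta with theta obtained by regressing log squared residuals on log fitted means, and (c) a fixed number of reweighting iterations. The variance model and all names are illustrative assumptions, not the article's specification.

```python
import numpy as np

def feasible_gls(X, y, iters=2):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # (a) initial OLS fit
    for _ in range(iters):                           # (c) reweighting loop
        mu = X @ beta
        r2 = (y - mu) ** 2
        # (b) fit log-variance model: log r^2 ~ const + theta * log|mu|
        Z = np.column_stack([np.ones_like(mu), np.log(np.abs(mu))])
        theta = np.linalg.lstsq(Z, np.log(r2 + 1e-12), rcond=None)[0][1]
        w = np.abs(mu) ** (-theta / 2.0)             # estimated weights
        beta = np.linalg.lstsq(w[:, None] * X, w * y, rcond=None)[0]
    return beta
```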


Proceedings ArticleDOI
11 Apr 1988
TL;DR: The direction-of-arrival estimation of signal wavefronts in the presence of unknown noise fields is investigated and a related, relatively simple two-step least-squares estimate is constructed.
Abstract: The direction-of-arrival estimation of signal wavefronts in the presence of unknown noise fields is investigated. Generalizations of known criteria for both conditional and nonconditional maximum-likelihood estimates are developed. Numerical calculations show that the usual Gauss-Newton iteration for conditional maximum-likelihood estimates cannot give good results. Therefore, a related, relatively simple two-step least-squares estimate is constructed. Results of numerical experiments are presented and indicate that the two-step estimate has approximately the same power as the least-squares estimate using the exact noise correlation structure.

59 citations


Journal ArticleDOI
TL;DR: This study proposes a Flexible Least Squares (FLS) method for state estimation when the dynamic equations are unknown but the process state evolves only slowly over time.

57 citations
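
Since the summary is terse, a hedged sketch of the flexible least squares idea for a scalar time-varying coefficient may help: choose the path b_1, ..., b_T minimizing sum_t (y_t - x_t b_t)^2 + mu * sum_t (b_{t+1} - b_t)^2, i.e. measurement fit plus a smoothness penalty standing in for unknown dynamics. Solving one stacked linear least squares problem, as below, is an illustrative choice rather than the paper's algorithm.

```python
import numpy as np

def fls(x, y, mu):
    """Flexible LS path for a scalar coefficient, via one stacked LS solve."""
    T = len(y)
    A = np.zeros((2 * T - 1, T))
    b = np.zeros(2 * T - 1)
    A[np.arange(T), np.arange(T)] = x         # measurement rows: x_t * b_t ~ y_t
    b[:T] = y
    s = np.sqrt(mu)
    rows = np.arange(T, 2 * T - 1)
    A[rows, np.arange(T - 1)] = -s            # smoothness rows:
    A[rows, np.arange(1, T)] = s              # sqrt(mu)*(b_{t+1} - b_t) ~ 0
    return np.linalg.lstsq(A, b, rcond=None)[0]   # estimated b_1..b_T
```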


01 Jan 1988
TL;DR: In this article, recent contributions to statistical methodology based on the L1-norm are reviewed as a robust alternative to least squares; tests are developed using medians instead of means and least absolute deviations instead of least squares.
Abstract: This paper reviews some recent contributions to statistical methodology based on the L1-norm as a robust alternative to that based on least squares. Tests are developed using the medians instead of the means and least absolute deviations instead of least squares. Analogues of Hotelling's T² and tests based on the roots of a determinantal equation are derived using medians. Asymptotic inference procedures on regression parameters in the univariate linear model are reviewed and some suggestions are made for the elimination of nuisance parameters which occur in the asymptotic distributions. The results are extended to the multivariate linear model. Recent work on the asymptotic theory of inference on the parameters of a generalized multivariate linear model based on the method of least distances is discussed. New tests are developed using least distances estimators.
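
As a concrete example of the L1 theme, a minimal sketch of least absolute deviations regression computed by iteratively reweighted least squares; the epsilon guard and fixed iteration count are illustrative choices, and linear programming would be the more standard solver.

```python
import numpy as np

def lad(X, y, iters=50, eps=1e-8):
    """Least absolute deviations via IRLS with weights 1/|residual|."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # start from OLS
    for _ in range(iters):
        # scaling rows by |r|^(-1/2) turns sum |r| into a weighted sum r^2
        w = 1.0 / np.sqrt(np.abs(y - X @ beta) + eps)
        beta = np.linalg.lstsq(w[:, None] * X, w * y, rcond=None)[0]
    return beta
```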

Book ChapterDOI
01 Jan 1988
TL;DR: The paper shows that a simple transformation of the original problem and its subsequent solution by a general purpose sequential quadratic programming algorithm retains typical features of special purpose methods, i.e. a combination of a Gauss-Newton and a quasi-Newton search direction.
Abstract: Nonlinear least squares problems are extremely important in many domains of mathematical programming applications, e.g. maximum likelihood estimation, nonlinear data fitting or parameter estimation. A large number of special purpose algorithms is available in the unconstrained case, but only very few methods were developed for the nonlinearly constrained case. The paper shows that a simple transformation of the original problem and its subsequent solution by a general purpose sequential quadratic programming algorithm retains typical features of special purpose methods, i.e. a combination of a Gauss-Newton and a quasi-Newton search direction. Moreover, the numerical investigations indicate that the algorithm can be implemented very easily if a suitable sequential quadratic programming code is available, and that the numerical test results are comparable to those of special purpose programs.
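
A hedged sketch of the kind of transformation the paper exploits: the constrained nonlinear least squares problem min ½||f(x)||² s.t. g(x) ≥ 0 is rewritten with extra variables z as min ½||z||² s.t. f(x) − z = 0, g(x) ≥ 0, and handed to a general-purpose SQP code (here scipy's SLSQP as a stand-in). The residual and constraint functions are toy examples.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):                                   # toy residual vector
    return np.array([x[0] - 1.0, 10.0 * (x[1] - x[0] ** 2)])

def g(x):                                   # toy inequality constraint
    return np.array([x[0] + x[1] - 0.5])

m = 2                                       # number of residuals

def objective(v):                           # v = (x, z); minimize 0.5*||z||^2
    z = v[-m:]
    return 0.5 * z @ z

cons = [
    {"type": "eq",   "fun": lambda v: f(v[:-m]) - v[-m:]},   # f(x) = z
    {"type": "ineq", "fun": lambda v: g(v[:-m])},            # g(x) >= 0
]
res = minimize(objective, np.zeros(4), method="SLSQP", constraints=cons)
print(res.x[:-m])    # estimate of x
```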

Journal ArticleDOI
TL;DR: In this paper, the authors define the least squares estimator ϴ_{n,τ} and show that, under some regularity conditions, ϴ_{n,τ} is strongly consistent under the Gaussian assumption.

Journal ArticleDOI
TL;DR: In this article, the authors discuss the use of smoothing splines and least squares splines (LSS) in nonparametric regression on geomagnetic data, and illustrate their use in the removal of secular trends in long observatory records of geomagnetic field components.
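
A brief sketch of trend removal with a least squares spline, in the spirit of the paper; the synthetic series, knot spacing, and use of scipy's LSQUnivariateSpline are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(1)
t = np.linspace(0.0, 100.0, 1001)                  # time in years
# slow quadratic "secular trend" plus an annual signal plus noise
y = 0.02 * t ** 2 + np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size)

knots = np.linspace(10.0, 90.0, 8)                 # interior knots
trend = LSQUnivariateSpline(t, y, knots, k=3)      # cubic least squares spline
detrended = y - trend(t)                           # signal with trend removed
```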



Journal ArticleDOI
TL;DR: In this article, an algorithm to compute the trimmed least squares estimator for nonlinear regression models is presented, and some simulations in the regression model f(x, b) = b1(exp(−b2x) − exp(−b3x)) were carried out.
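
A hedged sketch of a trimmed least squares fit for the biexponential model quoted in the summary: fit, discard the observations with the largest squared residuals, refit. The trimming fraction and single refit are illustrative simplifications, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

def model(b, x):
    return b[0] * (np.exp(-b[1] * x) - np.exp(-b[2] * x))

def trimmed_ls(x, y, b0, keep=0.8):
    """Fit, keep the `keep` fraction with smallest residuals, refit."""
    fit = least_squares(lambda b: y - model(b, x), b0)
    r2 = (y - model(fit.x, x)) ** 2
    idx = np.argsort(r2)[: int(keep * len(y))]     # drop worst-fitting points
    refit = least_squares(lambda b: y[idx] - model(b, x[idx]), fit.x)
    return refit.x
```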

Journal ArticleDOI
TL;DR: An observation-by-observation data transformation is derived for heteroscedastic error components regression models that permits generalized least squares or feasible GLS estimates to be obtained from an ordinary least squares program.

Journal ArticleDOI
TL;DR: Deconvolution analyses of simulated fluorescence data were carried out to show that the linear approximation method is generally better when one of the lifetimes is comparable to the time interval between data points.

Journal ArticleDOI
TL;DR: In this paper, several methods of constructing semi-Latin squares are given, and several efficiency factors of these squares are discussed, and it is shown that Trojan squares and their derivatives have high efficiency factors in the bottom stratum.

Proceedings ArticleDOI
11 Apr 1988
TL;DR: The authors establish the large-sample accuracy properties of two nonlinear least-squares estimators (NLSEs) of sine-wave parameters: the basic NLSE, which ignores the possible correlation of the noise; and the optimal NLSE, which also estimates the noise correlation (appropriately parameterized).
Abstract: The authors establish the large-sample accuracy properties of two nonlinear least-squares estimators (NLSEs) of sine-wave parameters: the basic NLSE, which ignores the possible correlation of the noise; and the optimal NLSE, which, besides the sine-wave parameters, also estimates the noise correlation (appropriately parameterized). It is shown that these two NLSEs have the same accuracy in large samples. This result provides complete justification for preferring the computationally less-expensive basic NLSE over the optimal NLSE. Both estimators are shown to achieve the Cramer-Rao bound (CRB) as the sample size increases. A simple explicit expression for the CRB matrix is provided which should be useful in studying the performance of sine-wave parameter estimators designed to work in the colored noise case.
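
A minimal sketch of the "basic" NLSE analyzed in the paper: fit amplitude, frequency and phase of a single sinusoid by ordinary nonlinear least squares, ignoring any noise correlation. The data generation and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
n = 256
t = np.arange(n)
a, w, phi = 1.0, 0.3, 0.7                      # true sine-wave parameters
y = a * np.cos(w * t + phi) + 0.3 * rng.standard_normal(n)

def resid(p):                                  # unweighted residuals
    return y - p[0] * np.cos(p[1] * t + p[2])

est = least_squares(resid, x0=[0.8, 0.29, 0.5]).x
print(est)    # approaches (a, w, phi) as n grows, per the paper
```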

Journal ArticleDOI
TL;DR: In this article, the conditional variance of the error of an m-step non-linear least squares predictor is not necessarily a monotonic non-decreasing function of m. This fact has not been documented to the best of our knowledge.
Abstract: We first observe that the conditional variance of the error of an m-step non-linear least-squares predictor is not necessarily a monotonic non-decreasing function of m. This fact has not been documented to the best of our knowledge. We have also studied methods of evaluating the conditional variance for non-linear autoregressive models and illustrated these with both real and simulated data. Bias correction is included. The facility afforded by the Chapman-Kolmogorov equation is highlighted. The possible role played by the skeleton is mentioned briefly. Moreover, the possibility of combinations of forecasts is explored, partly with a view to obtaining forecasts robust against prospective influential data.
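
A short simulation sketch of the opening observation: for a nonlinear autoregression, the m-step conditional prediction variance can be estimated by Monte Carlo and inspected for non-monotonicity in m. The threshold AR model and all constants are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def step(x, e):
    # toy threshold AR(1): different regimes above/below zero
    return np.where(x > 0.0, 0.9 * x, -0.4 * x) + e

x0, H, N = 1.5, 10, 100_000
paths = np.full(N, x0)                      # simulate forward from x0
for m in range(1, H + 1):
    paths = step(paths, rng.standard_normal(N))
    print(m, paths.var())                   # conditional variance at horizon m
```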


Book
30 Jun 1988
TL;DR: In this paper, parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations.
Abstract: Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.
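
A hedged sketch of the block angular reduction that the paper parallelizes: each diagonal block is orthogonally factored independently (the step that can run on separate processors), the leftover rows are stacked into a small problem for the shared unknowns, and the local unknowns follow by back-substitution. The toy sizes and dense numpy QR stand in for the large sparse factorizations of the geodetic problem.

```python
import numpy as np

rng = np.random.default_rng(4)
k, m, nl, nc = 3, 20, 4, 2        # blocks, rows/block, local and shared unknowns
A = [rng.standard_normal((m, nl)) for _ in range(k)]   # diagonal blocks
B = [rng.standard_normal((m, nc)) for _ in range(k)]   # coupling columns
b = [rng.standard_normal(m) for _ in range(k)]

reduced_T, reduced_d, factors = [], [], []
for Ai, Bi, bi in zip(A, B, b):   # independent per block: parallelizable
    Q, R = np.linalg.qr(Ai, mode="complete")
    S, c = Q.T @ Bi, Q.T @ bi
    factors.append((R[:nl], S[:nl], c[:nl]))   # triangular part, kept for later
    reduced_T.append(S[nl:]); reduced_d.append(c[nl:])  # rows coupling blocks

# small stacked problem for the shared unknowns
xc = np.linalg.lstsq(np.vstack(reduced_T), np.hstack(reduced_d), rcond=None)[0]
# back-substitution recovers each block's local unknowns
xl = [np.linalg.solve(R, c - S @ xc) for R, S, c in factors]
```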

ReportDOI
01 May 1988
TL;DR: In this paper, the author surveys numerical methods for the nonlinear least-squares problem, developed for problems in which sparsity in the derivatives of f is not taken into account in formulating algorithms.
Abstract: This paper addresses the nonlinear least-squares problem which arises most often in data fitting applications. Much research has focused on the development of specialized algorithms that attempt to exploit the structure of the nonlinear least-squares objective. The author surveys numerical methods developed for problems in which sparsity in the derivatives of f is not taken into account in formulating algorithms. Keywords: Multivariate functions; Gauss-Newton methods; Levenberg-Marquardt methods; Quasi-Newton methods; Quadratic programming; Unconstrained optimization methods.
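
For orientation, a minimal sketch of the Gauss-Newton iteration, the prototype for the surveyed method families (Levenberg-Marquardt and quasi-Newton variants add safeguards to the same subproblem). The residual function, Jacobian, and starting point are illustrative.

```python
import numpy as np

def gauss_newton(resid, jac, x0, iters=20):
    x = np.asarray(x0, float)
    for _ in range(iters):
        r, J = resid(x), jac(x)
        # solve the linearized least squares subproblem J p = -r
        p = np.linalg.lstsq(J, -r, rcond=None)[0]
        x = x + p
    return x

# toy exponential fit: residuals r_i = y_i - a * exp(b * t_i)
t = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(-1.5 * t)
resid = lambda x: y - x[0] * np.exp(x[1] * t)
jac = lambda x: np.column_stack([-np.exp(x[1] * t),
                                 -x[0] * t * np.exp(x[1] * t)])
print(gauss_newton(resid, jac, [1.0, -1.0]))   # ~ (2.0, -1.5)
```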


Journal ArticleDOI
TL;DR: In this paper, the authors present a procedure that greatly simplifies the fitting of the independent variable, given y = f(x), and show that the residuals change significantly when the quantity fitted is changed.
Abstract: When an equation is to be fitted to a set of data by least squares, it is often much more difficult to minimize the sum of squares of the residuals of the independent variable than those of the dependent variable. This article presents a procedure that greatly simplifies the fitting of the independent variable. Given y = f(x), it permits not only y and x, but also y², x², ln y, ln x, and similar simple functions to be fitted almost as easily as y. It is shown that the residuals change significantly when the quantity fitted is changed. Five different equations representing the R(T) relations of thermistors are examined and their ΔT residuals for the same set of experimental data are shown; three of these fit the data more closely than the often-used cubic equation in ln R.
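
A hedged sketch in the article's setting: fit the often-used cubic-in-ln R thermistor equation 1/T = a + b·ln R + c·(ln R)³, which is linear in its coefficients, and then inspect the residuals in T rather than in 1/T. The synthetic calibration data are illustrative, and this shows only the ΔT-residual comparison, not the article's transformed-variable fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(5)
T = np.linspace(250.0, 350.0, 21)                   # temperatures in kelvin
lnR = 3500.0 / T - 5.0 + 1e-4 * rng.standard_normal(T.size)  # synthetic ln R

Z = np.column_stack([np.ones_like(lnR), lnR, lnR ** 3])
coef = np.linalg.lstsq(Z, 1.0 / T, rcond=None)[0]   # fit 1/T, linear in coefs
T_fit = 1.0 / (Z @ coef)
print(np.max(np.abs(T - T_fit)))                    # worst Delta-T residual
```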



Journal ArticleDOI
TL;DR: A solution to the problem of estimating the frequencies, amplitudes and phases of the underlying sinusoidal components is offered for signals consisting of real sinusoids in additive white noise by interpolating randomly spaced samples with the aid of the singular value decomposition.
Abstract: A solution to the problem of estimating the frequencies, amplitudes and phases of the underlying sinusoidal components is offered for signals consisting of real sinusoids in additive white noise. In order to obtain the data required for frequency estimation, uniform sample points are first interpolated from the randomly spaced samples with the aid of the singular value decomposition. One can then estimate the line spectrum of the underlying sinusoidal signal using principal-component autoregressive modeling and determine the amplitudes and phases through linear least squares. This method is shown by simulation to compare favorably to modern frequency estimators and the Cramer-Rao lower bound.
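
A minimal sketch of the final, linear step of the method: once frequencies are estimated, amplitudes and phases follow from ordinary linear least squares on cosine and sine regressors. The frequencies below are assumed known; the SVD-based interpolation and principal-component AR steps are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.sort(rng.uniform(0.0, 50.0, 200))            # irregular sample times
freqs = np.array([0.8, 1.7])                        # rad/s, assumed estimated
y = (1.0 * np.cos(0.8 * t + 0.3)
     + 0.5 * np.cos(1.7 * t - 1.1)
     + 0.1 * rng.standard_normal(t.size))

# a*cos(wt + phi) = (a cos phi) cos(wt) - (a sin phi) sin(wt): linear in coefs
D = np.hstack([np.column_stack([np.cos(w * t), np.sin(w * t)]) for w in freqs])
coef = np.linalg.lstsq(D, y, rcond=None)[0]
amps = np.hypot(coef[0::2], coef[1::2])             # recovered amplitudes
phases = np.arctan2(-coef[1::2], coef[0::2])        # recovered phases
print(amps, phases)    # ~ (1.0, 0.5) and (0.3, -1.1)
```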