
Showing papers on "Non-linear least squares published in 1980"


Journal ArticleDOI
TL;DR: The parameter concept in the term least squares mean is defined and given the more meaningful name population marginal mean, and its estimation is discussed.
Abstract: The parameter concept in the term least squares mean is defined and given the more meaningful name population marginal mean; and its estimation is discussed.

1,143 citations


Journal ArticleDOI
TL;DR: In this article, a new method to estimate the parameters of Tucker's three-mode principal component model is discussed, and the convergence properties of the alternating least squares algorithm to solve the estimation problem are considered.
Abstract: A new method to estimate the parameters of Tucker's three-mode principal component model is discussed, and the convergence properties of the alternating least squares algorithm to solve the estimation problem are considered. A special case of the general Tucker model, in which the principal component analysis is only performed over two of the three modes is briefly outlined as well. The Miller & Nicely data on the confusion of English consonants are used to illustrate the programs TUCKALS3 and TUCKALS2 which incorporate the algorithms for the two models described.
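
The alternating least squares idea behind TUCKALS3 can be sketched briefly: each mode's component matrix is updated in turn from the dominant singular vectors of the array projected onto the other two modes' current components. The numpy sketch below is a generic HOOI-style illustration of that scheme, not the TUCKALS code itself, and it assumes the requested ranks are feasible for the array dimensions.

```python
import numpy as np

def unfold(T, mode):
    """Matricize a three-way array along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tucker3_als(X, ranks, n_iter=50):
    """Alternating least squares (HOOI-style) fit of a Tucker3 model.
    X is a three-way numpy array, ranks = (r1, r2, r3)."""
    # Start each component matrix from the leading singular vectors of its unfolding.
    factors = [np.linalg.svd(unfold(X, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    for _ in range(n_iter):
        for m in range(3):
            # Project X onto the current components of the other two modes ...
            Y = X
            for k in range(3):
                if k != m:
                    Y = np.moveaxis(np.tensordot(factors[k].T,
                                                 np.moveaxis(Y, k, 0), axes=1), 0, k)
            # ... and update the mode-m components from the dominant singular vectors.
            factors[m] = np.linalg.svd(unfold(Y, m),
                                       full_matrices=False)[0][:, :ranks[m]]
    # Core array given the final component matrices.
    G = X
    for k in range(3):
        G = np.moveaxis(np.tensordot(factors[k].T, np.moveaxis(G, k, 0), axes=1), 0, k)
    return G, factors
```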

705 citations



Journal ArticleDOI
TL;DR: In this article, a maximum likelihood estimation procedure is presented through which two aspects of the streamflow measurement errors of the calibration phase are accounted for, and the proposed procedure first determines the anticipated correlation coefficient of the errors and then uses it in the objective function to estimate the best values of the model parameters.
Abstract: A maximum likelihood estimation procedure is presented through which two aspects of the streamflow measurement errors of the calibration phase are accounted for. First, the correlated error case is considered, where a first-order autoregressive scheme is presupposed for the additive errors. The proposed procedure first determines the anticipated correlation coefficient of the errors and then uses it in the objective function to estimate the best values of the model parameters. Second, the heteroscedastic error case (changing variance) is considered, for which a weighting approach using the concept of power transformation is developed. The performances of the new procedures are tested with synthetic data for various error conditions on a two-parameter model. In comparison with the simple least squares criterion and the weighted least squares scheme of the HEC-1 model of the U.S. Army Corps of Engineers for the heteroscedastic case, the new procedures consistently produced better estimates. The procedures were found to be easy to implement, with no convergence problems. In the absence of correlated errors, as theoretically expected, the correlated error procedure produces exactly the same estimates as the simple least squares criterion. Likewise, the self-correcting ability of the heteroscedastic error procedure was effective in reducing the objective function to that of simple least squares as the data gradually became homoscedastic. Finally, effective residual tests for the detection of the above-mentioned error situations are discussed.
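
A rough sketch of the correlated-error idea, under simplifying assumptions of my own: `simulate(theta)` stands for a generic forward rainfall-runoff model, Nelder-Mead replaces the authors' optimizer, and the error autocorrelation and the parameters are estimated in plain alternation rather than via the paper's exact likelihood.

```python
import numpy as np
from scipy.optimize import minimize

def ar1_whitened_sse(theta, simulate, q_obs, rho):
    """Sum of squares of AR(1) pre-whitened residuals for a forward model
    simulate(theta) that returns a simulated streamflow series."""
    e = q_obs - simulate(theta)
    v = e[1:] - rho * e[:-1]                  # remove first-order autocorrelation
    return (1.0 - rho ** 2) * e[0] ** 2 + np.sum(v ** 2)

def calibrate_correlated(simulate, q_obs, theta0, n_outer=10):
    """Alternate between the anticipated error correlation and the parameters."""
    theta = np.asarray(theta0, dtype=float)
    rho = 0.0
    for _ in range(n_outer):
        e = q_obs - simulate(theta)
        rho = np.corrcoef(e[:-1], e[1:])[0, 1]        # anticipated correlation
        theta = minimize(ar1_whitened_sse, theta,
                         args=(simulate, q_obs, rho),
                         method='Nelder-Mead').x
    return theta, rho
```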

426 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that when the number of data d is greater than the number of parameters p, it is always possible to construct a set of at least d - p equations that are independent of the values of the discrete part of the model.
Abstract: Some inverse problems are characterized by a model consisting of a piecewise continuous function and a set of discrete parameters. For linear problems of this general type, which we call mixed, we show that when the number of data d is greater than the number of parameters p, it is always possible to construct a set of at least d - p equations that are independent of the values of the discrete part of the model. These equations, which we call the annulled data set, can be used to estimate the continuous part of the model. The discrete part of the model can be estimated from a second set of p equations that relate the discrete and continuous parts of the model. The linearization of the nonlinear travel time functionals that enter into the hypocenter location problem leads to a mixed inverse problem. The splitting procedure is natural to this problem if the hypocenters are estimated initially by conventional nonlinear least squares, using travel times calculated from some initial estimate of the velocity model. The annulled data are a set of linear combinations of the residuals that are unbiased by that initial location, and as a result they can be used directly to estimate a perturbation to the velocity model by a Backus-Gilbert procedure. This makes possible an iterative algorithm consisting of a conventional hypocenter location followed by estimation of a perturbation of the velocity model from the annulled data set. The uniqueness of the final velocity model is assessed via the linear resolution analysis of Backus and Gilbert (1968, 1970). We also construct a set of Fréchet derivatives that relate perturbations of each hypocenter component to perturbations of the velocity model. These kernels are used to assess the possible error in the hypocenters due to inadequate knowledge of the velocity structure, by an application of the generalized prediction approach of Backus (1970a). Good results are obtained when the procedure is applied to a simple synthetic data set.
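
One way to realize an annulled data set in a linearized setting, assuming the residuals satisfy r = A dh + B dm with A the d x p block for the discrete (hypocenter) parameters and A of full column rank; the construction below is a generic numpy illustration, not the authors' implementation.

```python
import numpy as np

def annulled_data(A, r):
    """Return d - p independent linear combinations of the residuals r that are
    unaffected by the p discrete parameters whose linearized design matrix is A."""
    d, p = A.shape
    # Columns p..d-1 of the full Q factor span the left null space of A.
    Q, _ = np.linalg.qr(A, mode='complete')
    N = Q[:, p:].T                      # (d - p) x d annihilator: N @ A = 0
    return N, N @ r                     # N @ r depends only on the continuous part
```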

319 citations


Journal ArticleDOI
TL;DR: In this article, the theory of the linear least squares problem with a quadratic constraint is presented, new theorems characterizing properties of the solutions are given, and a numerical application is discussed.
Abstract: We present the theory of the linear least squares problem with a quadratic constraint. New theorems characterizing properties of the solutions are given. A numerical application is discussed.
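
As a concrete illustration of the problem class (minimize ||Ax - b|| subject to ||x|| <= alpha), here is a small SVD-based sketch that solves the scalar secular equation for the Lagrange multiplier by bisection; it assumes A has full column rank and is not the paper's algorithm.

```python
import numpy as np

def ls_quadratic_constraint(A, b, alpha, tol=1e-10):
    """Minimize ||A x - b|| subject to ||x|| <= alpha (A assumed full column rank)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    x_ls = Vt.T @ (beta / s)                 # unconstrained least squares solution
    if np.linalg.norm(x_ls) <= alpha:        # constraint inactive
        return x_ls
    # Otherwise solve ||x(lam)|| = alpha with x(lam) = V diag(s/(s^2+lam)) U^T b.
    def norm_x(lam):
        return np.linalg.norm(s * beta / (s ** 2 + lam))
    lo, hi = 0.0, 1.0
    while norm_x(hi) > alpha:                # bracket the multiplier
        hi *= 2.0
    while hi - lo > tol * (1.0 + hi):        # bisection on the secular equation
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if norm_x(mid) > alpha else (lo, mid)
    lam = 0.5 * (lo + hi)
    return Vt.T @ (s * beta / (s ** 2 + lam))
```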

279 citations


Journal ArticleDOI
TL;DR: In this paper, an iterative Gauss-Newton algorithm for solving nonlinear least squares problems is proposed in which the variables are separated into two sets such that, in each iteration, optimization is first performed with respect to the first set, after which corrections are applied to the second.
Abstract: Iterative algorithms of Gauss–Newton type for the solution of nonlinear least squares problems are considered. They separate the variables into two sets in such a way that, in each iteration, optimization with respect to the first set is performed first, and corrections to those of the second after that. The linear-nonlinear case, where the first set consists of variables that occur linearly, is given special attention, and a new algorithm is derived which is simpler to apply than the variable projection algorithm as described by Golub and Pereyra, and which can be performed with no more arithmetical operations than the unseparated Gauss–Newton algorithm. A detailed analysis of the asymptotic convergence properties of both separated and unseparated algorithms is performed. It is found that they have comparable rates of convergence, and all converge almost quadratically for almost compatible problems. Simpler separation schemes, on the other hand, converge only linearly. An efficient and simple computer implementation ...
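
For the linear-nonlinear case, the separation idea can be illustrated with the variable projection functional: the linear coefficients are eliminated by an inner linear least squares solve, and only the nonlinear parameters are iterated on. The sketch below (two decaying exponentials, scipy's generic least_squares driver, synthetic data) follows the Golub-Pereyra formulation rather than the simpler algorithm derived in this paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Separable model: y ~ Phi(alpha) @ c, with c linear and alpha nonlinear.
def basis(alpha, t):
    return np.exp(-np.outer(t, alpha))        # columns exp(-alpha_j * t)

def projected_residual(alpha, t, y):
    """Variable projection residual: the linear coefficients are eliminated
    by an inner linear least squares solve."""
    Phi = basis(alpha, t)
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return y - Phi @ c

# Synthetic data and a fit of the nonlinear parameters only.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 80)
y = 2.0 * np.exp(-0.7 * t) + 1.0 * np.exp(-2.5 * t) + 0.01 * rng.standard_normal(t.size)
fit = least_squares(projected_residual, x0=[0.5, 3.0], args=(t, y))
alpha_hat = fit.x
c_hat, *_ = np.linalg.lstsq(basis(alpha_hat, t), y, rcond=None)
```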

273 citations


Journal ArticleDOI
TL;DR: This approach allows full exploitation of sparsity and permits the use of a fixed (static) data structure during the numerical computation, which facilitates the convenient use of auxiliary storage and updating operations.

179 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that Kariya and Eaton's test for multivariate spherical symmetry is UMP invariant against elliptically symmetric distributions and that both the null and alternative distributions of the test statistic are the same as those which occur when the sample is normally distributed.
Abstract: Invariance is used to show that Kariya and Eaton's test for multivariate spherical symmetry is UMP invariant against elliptically symmetric distributions. Also, both the null and alternative distributions of the test statistic are found to be the same as those which occur when the sample is normally distributed. UMP and UMPU tests for serial correlation derived assuming normality are found to be even more robust against departure from this assumption than was recently demonstrated by Kariya. When applied to the linear regression model, these results give useful robustness properties for Kadiyala's $T_1$ test and the Durbin-Watson test.

170 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of estimating spatially varying aquifer transmissivities on the basis of steady water level and flux data is posed in terms of log transmissivities instead of transmissivities and is solved by a Fletcher-Reeves conjugate gradient algorithm in conjunction with Newton's method for determining the step size to be taken at each iteration.
Abstract: Paper 1 of this sequence presented a new statistically based approach to the problem of estimating spatially varying aquifer transmissivities on the basis of steady water level and flux data. Paper 2 described a case study in which the new method had been applied to actual field data from the Cortaro Basin in Southern Arizona. The purpose of paper 3 is to introduce a new efficient method of solution which works under a much wider range of conditions than the method employed in papers 1 and 2. The new method is based on a variational theory developed by Chavent (1971), which is extended here to the case of generalized nonlinear least squares. The method is implemented numerically by a finite element scheme. The inverse problem is posed in terms of log transmissivities instead of transmissivities and is solved by a Fletcher-Reeves conjugate gradient algorithm in conjunction with Newton's method for determining the step size to be taken at each iteration. The method does not require computing sensitivity coefficients, and one may therefore expect it to result in considerable savings of both computer storage and computer time. Posing the problem in terms of log transmissivities is shown to have important advantages over the traditional approach, not the least of which is guaranteeing that the computed transmissivities will always be positive. The paper includes a theoretical analysis of the effect that various errors corrupting the data and the model may have on the final log transmissivity estimates. This analysis shows that small errors in the model and in the flow rate and sink/source data have only a minor influence on the log transmissivity estimates and can therefore often be disregarded. On the other hand, low-amplitude noise in the water level data may cause these estimates to become unstable and must therefore always be filtered out during the solution of the inverse problem. Two theoretical examples are included to demonstrate the ability of the new method to deal with artificial noise of relatively large amplitude, derived from a given stochastic model. The results demonstrate that the inverse method may be capable of computing log transmissivity estimates with an error variance which is significantly smaller than that of the original (or prior) log transmissivity data. The variance reduction achieved in this manner is shown to depend on the quantity and quality of the data describing the flow regime. Finally, it is shown that undercalibration (when the variance of the computed residuals exceeds the error variance of the 'observed' water levels) and/or overcalibration (when the variance of the residuals is less than the error variance of the 'observed' water levels) may lead to relatively poor results, whose error variance can be so large as to render the inverse solution useless.
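
The key numerical ingredients (a generalized least squares criterion, unknowns expressed as log transmissivities so the recovered transmissivities stay positive, and a conjugate gradient search) can be sketched as follows. Here `head_model` is a hypothetical placeholder for a steady state groundwater flow simulator, and scipy's nonlinear conjugate gradient routine stands in for the paper's Fletcher-Reeves scheme with Newton step sizes.

```python
import numpy as np
from scipy.optimize import minimize

def gls_objective(Y, head_model, h_obs, W):
    """Generalized least squares criterion in the log transmissivities Y = log T."""
    r = h_obs - head_model(Y)
    return r @ W @ r

def estimate_log_transmissivity(head_model, h_obs, W, Y0):
    res = minimize(gls_objective, Y0, args=(head_model, h_obs, W), method='CG')
    return res.x, np.exp(res.x)        # log T estimate and the (positive) T estimate
```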

164 citations


Journal ArticleDOI
TL;DR: In this paper, a weighted joint least squares estimator is developed that is asymptotically equivalent to the maximum likelihood estimator for the linear model y = Xβ + e with unknown diagonal covariance matrix G, where the diagonal elements of G are assumed to be known functions of the explanatory variables X and an unknown parameter vector Θ.
Abstract: Estimation for the linear model y = Xβ + e with unknown diagonal covariance matrix G is considered. The diagonal elements of G are assumed to be known functions of the explanatory variables X and an unknown parameter vector Θ, where Θ is permitted to contain elements of β. A weighted joint least squares estimator is developed that is asymptotically equivalent to the maximum likelihood estimator. Asymptotic properties of the simple least squares estimator and of the weighted joint least squares estimator are obtained. A sampling experiment is used to compare the estimators.
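
A minimal sketch of the joint idea, with an exponential variance function of my own choosing (the paper allows general known functions of X and Θ): the regression coefficients and the variance-function parameters are estimated in alternation.

```python
import numpy as np
from scipy.optimize import minimize

def fit_joint_wls(x, y, n_iter=5):
    """Alternating estimation of beta and a variance function
    Var(e_i) = exp(theta0 + theta1 * x_i) (an assumed form, for illustration)."""
    X = np.column_stack([np.ones(len(x)), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # start from simple OLS
    theta = np.zeros(2)
    for _ in range(n_iter):
        e2 = (y - X @ beta) ** 2
        def nll(th):                                  # Gaussian likelihood in theta
            v = np.exp(th[0] + th[1] * x)
            return np.sum(np.log(v) + e2 / v)
        theta = minimize(nll, theta, method='Nelder-Mead').x
        w = np.exp(-(theta[0] + theta[1] * x))        # weights = 1 / variance
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta, theta
```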

Journal ArticleDOI
TL;DR: This article demonstrates the application of least squares to the estimation of system parameters; solutions are discussed for the cases of white noise and of correlated noise corrupting the useful output signal of the system.

Journal ArticleDOI
TL;DR: Asymptotic expansions of the distributions of the maximum likelihood estimator and the OLS estimator in a linear functional relationship model are derived as the sample size tends to infinity.
Abstract: We derive asymptotic expansions of the distributions of the maximum likelihood (ML) estimator and the ordinary least squares (OLS) estimator in a linear functional relationship model as the sample size tends to infinity. These expansions are equivalent to the asymptotic expansions of the distributions of the limited information maximum likelihood (LIML) estimator, when the covariance is known to within a proportionality constant, and the two-stage least squares (TSLS) estimator, as the number of excluded exogenous variables increases in a simultaneous equations system.

Journal ArticleDOI
TL;DR: A perturbation theory for the linear least squares problem with linear equality constraints (problem LSE) is presented in this paper, which is based on the concept of the weighted pseudoinverse.
Abstract: A perturbation theory for the linear least squares problem with linear equality constraints (problem LSE) is presented. The development of the theory is based on the concept of the weighted pseudoinverse. A general formula for the solution of problem LSE is given. Condition numbers are defined and a perturbation theorem is proved.
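
For reference, problem LSE (minimize ||Ax - b|| subject to Cx = d) can be solved by the standard null-space method; the sketch below assumes C has full row rank and is meant only to make the problem concrete, not to reproduce the paper's weighted-pseudoinverse analysis.

```python
import numpy as np

def lse(A, b, C, d):
    """Minimize ||A x - b|| subject to C x = d, by the null-space method
    (C is assumed to have full row rank)."""
    p, n = C.shape
    Q, R = np.linalg.qr(C.T, mode='complete')        # C^T = Q [R1; 0]
    x0 = Q[:, :p] @ np.linalg.solve(R[:p, :].T, d)   # particular solution of C x = d
    Z = Q[:, p:]                                     # basis of the null space of C
    y = np.linalg.lstsq(A @ Z, b - A @ x0, rcond=None)[0]
    return x0 + Z @ y
```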


Journal ArticleDOI
TL;DR: In this article, an alternating least squares algorithm was proposed to fit both an individual difference model and a replications component model to three-way data which may be defined at the nominal, ordinal, interval, ratio, or mixed measurement level.
Abstract: A review of the existing techniques for the analysis of three-way data revealed that none were appropriate to the wide variety of data usually encountered in psychological research, and few were capable of both isolating common information and systematically describing individual differences. An alternating least squares algorithm was proposed to fit both an individual difference model and a replications component model to three-way data which may be defined at the nominal, ordinal, interval, ratio, or mixed measurement level; which may be discrete or continuous; and which may be unconditional, matrix conditional, or row conditional. This algorithm was evaluated by a Monte Carlo study. Recovery of the original information was excellent when the correct measurement characteristics were assumed. Furthermore, the algorithm was robust to the presence of random error. In addition, the algorithm was used to fit the individual difference model to a real, binary, subject conditional data set. The findings from this application were consistent with previous research in the area of implicit personality theory and uncovered interesting systematic individual differences in the perception of political figures and roles.

Journal ArticleDOI
TL;DR: In this paper, the authors compare OLS and weighted least squares (WLS) regression with a model of the form Q50 = αA^β1, where Q50 is the 50-year peak discharge, A is drainage area, and α and β1 are regional parameters estimated from a regression of observed 50-year peaks at gaging stations; the results indicate that OLS has a larger expected standard error of prediction than WLS when a weighting function based on station record lengths is used.
Abstract: Ordinary least squares (OLS) regression and weighted least squares (WLS) regression are compared by simulating a model of the form Q50 = αA^β1, where Q50 is the 50-year peak discharge, A is drainage area, and α and β1 are regional parameters estimated from a regression of observed 50-year peaks at gaging stations. Results indicate that OLS has a larger expected standard error of prediction than WLS when a weighting function ŵ_i is used for i = 1, 2, …, N, where ĉ_0 and ĉ_1 are constants estimated from sample data, n_i is the record length of station i, N is the number of stations, and ŵ_i is the weight given to the data for station i.
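
A hedged sketch of such a weighted fit in log space; the weight form 1 / (c0 + c1 / n_i) is an assumption chosen to depend on the record length, since the paper's exact weighting function is not reproduced above.

```python
import numpy as np

def wls_regional_fit(area, q50, n_rec, c0=0.1, c1=1.0):
    """Weighted least squares fit of log Q50 = log(alpha) + beta1 * log(A),
    with weights w_i = 1 / (c0 + c1 / n_i) (an assumed form, for illustration)."""
    X = np.column_stack([np.ones(len(area)), np.log(area)])
    y = np.log(q50)
    w = 1.0 / (c0 + c1 / n_rec)            # longer records get larger weights
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    log_alpha, beta1 = coef
    return np.exp(log_alpha), beta1
```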

Journal ArticleDOI
TL;DR: In this paper, it was shown that the relative efficiency of the OLS as compared with the Gauss-Markov estimator depends to a great extent on the X matrix observed.
Abstract: In the standard linear regression model y = X β + u, with errors following a first-order stationary autoregressive process, it is shown that the relative efficiency of the ordinary least squares (OLS) as compared with the Gauss-Markov estimator depends to a great extent on the X matrix observed. In particular, it is seen that the relative efficiency of OLS increases with increasing correlation for certain cases important in practice. Since this seems to run contrary to what one should expect on the basis of previous Monte Carlo studies, some additional sampling experiments are also briefly discussed that keep the X matrix fixed in repeated runs.
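
To make the dependence on the X matrix concrete, a small numerical check of the relative efficiency for a given design and AR(1) autocorrelation might look like this (a generic trace-based efficiency measure, not necessarily the one used in the paper):

```python
import numpy as np

def relative_efficiency(X, rho, sigma2=1.0):
    """Efficiency of OLS relative to the Gauss-Markov (GLS) estimator for
    stationary AR(1) errors with autocorrelation rho and design matrix X."""
    n = X.shape[0]
    # Covariance matrix of a stationary AR(1) error process.
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    V = sigma2 / (1.0 - rho ** 2) * rho ** lags
    XtX_inv = np.linalg.inv(X.T @ X)
    cov_ols = XtX_inv @ X.T @ V @ X @ XtX_inv
    cov_gls = np.linalg.inv(X.T @ np.linalg.inv(V) @ X)
    return np.trace(cov_gls) / np.trace(cov_ols)   # 1.0 means OLS is fully efficient
```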


Journal ArticleDOI
TL;DR: In this paper, some testing procedures for a possible change in the regression slope occurring at an unknown time point are considered, based on least squares estimators and aligned rank order statistics.
Abstract: Based on least squares estimators and aligned rank order statistics, some testing procedures for a possible change in the regression slope occurring at an unknown time point are considered. The asymptotic theory of the proposed tests rests on certain invariance principles relating to least squares estimators and aligned rank order statistics, and these are developed here.

Journal ArticleDOI
TL;DR: A direct method based on an LU decomposition of the rectangular coefficient matrix for the solution of sparse linear least squares problems and a general updating scheme for modifying the solution when extra equations are added is described.
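
The paper's LU-based factorization and static sparse structure are not reproduced here, but the flavor of updating a least squares solution when extra equations arrive can be sketched with a dense QR factor that absorbs new rows without revisiting the old ones.

```python
import numpy as np

class UpdatableLSQ:
    """Least squares solution that can absorb extra equations, keeping only the
    triangular factor R and the transformed right-hand side (a generic
    QR-updating illustration, not the LU-based scheme of the paper)."""
    def __init__(self, A, b):
        Q, self.R = np.linalg.qr(A)
        self.qtb = Q.T @ b

    def add_rows(self, A_new, b_new):
        # Re-triangularize the stacked system [R; A_new] x ~ [qtb; b_new].
        Q, self.R = np.linalg.qr(np.vstack([self.R, np.atleast_2d(A_new)]))
        self.qtb = Q.T @ np.concatenate([self.qtb, np.atleast_1d(b_new)])

    def solve(self):
        return np.linalg.solve(self.R, self.qtb)
```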

Journal ArticleDOI
01 Jun 1980-Talanta
TL;DR: A non-linear least-squares program based on Marquardt's modification of the Newton-Gauss method is discussed and its performance in the calculation of equilibrium constants is exemplified.
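
A minimal modern analogue using scipy's Levenberg-Marquardt driver on an illustrative 1:1 binding isotherm; the model, data, and parameter names here are invented for the example and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, c_ligand, absorbance):
    K, eps = params                      # equilibrium constant, molar absorptivity
    frac_bound = K * c_ligand / (1.0 + K * c_ligand)
    return absorbance - eps * frac_bound

c = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
a = np.array([0.09, 0.17, 0.32, 0.47, 0.62, 0.77])
fit = least_squares(residuals, x0=[1.0, 1.0], args=(c, a), method='lm')
K_hat, eps_hat = fit.x
```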


Proceedings ArticleDOI
01 Dec 1980
TL;DR: In this paper, principal eigenvalues and eigenvectors of a sample correlation matrix are used to improve the signal-to-noise ratio (SNR) in the data and to increase, at low SNR, the resolution capability of nonlinear least squares and of linear prediction based frequency estimation methods.
Abstract: Principal component (eigenvalue-eigenvector) analysis is applied to the processing of narrow-band signals in noise. The amount of data available is assumed to be limited. Principal eigenvalues and eigenvectors of a sample correlation matrix are used to improve the signal-to-noise ratio (SNR) in the data and to increase, at low SNR, the resolution capability of nonlinear least squares and of linear prediction based frequency estimation methods. The relation to Prony-like methods is explored. The performance of the different methods is compared experimentally with one another and with the Cramer-Rao (CR) bound.
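
The SNR-enhancement step can be illustrated by keeping only the principal eigencomponents of a sample correlation matrix built from overlapping data snapshots; this is a generic sketch of the idea, not the authors' full procedure.

```python
import numpy as np

def denoise_correlation(x, m, p):
    """Form an m x m sample correlation matrix from overlapping length-m snapshots
    of the series x and keep only its p principal eigencomponents."""
    snaps = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
    R = snaps.T @ snaps / snaps.shape[0]
    w, V = np.linalg.eigh(R)                       # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:p]                  # p principal components
    return V[:, idx] @ np.diag(w[idx]) @ V[:, idx].T
```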

Journal ArticleDOI
TL;DR: This work discusses the more promising approaches in two classes of algorithms beyond Gauss–Newton based methods: those which implicitly take account of the second term in the Hessian of the function and those which explicitly take account of it.
Abstract: Gauss–Newton based algorithms are widely used for solving nonlinear least squares problems. However, for reasons we shall discuss, they can be expected to perform poorly in certain circumstances. This leads to two other classes of algorithms corresponding to methods which implicitly take account of the second term in the Hessian of the function and methods which explicitly take account of the second term. We discuss the more promising approaches in these two areas and illustrate our discussion with a set of test results.
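
The structural point at issue is that for f(x) = 0.5 * ||r(x)||^2 the Hessian is J^T J plus a residual-weighted curvature term that Gauss-Newton drops; the small helper below makes the two pieces explicit (a generic illustration, not the paper's algorithms).

```python
import numpy as np

def gauss_newton_and_full_hessian(r, J, residual_hessians):
    """For f(x) = 0.5 * ||r(x)||^2 the Hessian is J^T J + sum_i r_i * Hess(r_i).
    Gauss-Newton keeps only the first term, which is why it can perform poorly
    on large-residual or strongly nonlinear problems."""
    gn = J.T @ J
    full = gn + sum(ri * Hi for ri, Hi in zip(r, residual_hessians))
    return gn, full
```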

Journal ArticleDOI
TL;DR: In this article, a characterization in terms of a convex moment cone is used to develop a globally convergent self-starting algorithm, and the sensitivity of the results to errors in the data and during the computations is also discussed.
Abstract: Least squares and maximum likelihood fitting of a positive sum of exponentials to an empirical data series is discussed. A characterization in terms of a convex moment cone is used to develop a globally convergent self-starting algorithm. The sensitivity of the results to errors in the data and during the computations is also discussed. Numerical tests are reported.
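
The positivity constraint can be made concrete with a simple stand-in: non-negative least squares over a fixed grid of candidate decay rates. The paper's moment-cone algorithm also determines the rates themselves, which this sketch does not.

```python
import numpy as np
from scipy.optimize import nnls

def fit_positive_exponential_sum(t, y, rates):
    """Least squares fit of y(t) ~ sum_j c_j * exp(-rates[j] * t) with c_j >= 0,
    using a fixed grid of candidate decay rates."""
    E = np.exp(-np.outer(t, rates))
    c, rnorm = nnls(E, y)
    keep = c > 0
    return rates[keep], c[keep], rnorm
```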

Journal ArticleDOI
TL;DR: In this paper, an algorithm for finding optimal least squares exponential sum approximations to sampled data subject to the constraint that the coefficients appearing in the exponential sum are positive is given, which is suitable for data sampled at noninteger times.
Abstract: An algorithm is given for finding optimal least squares exponential sum approximations to sampled data subject to the constraint that the coefficients appearing in the exponential sum are positive. The algorithm employs the divided differences of exponentials to overcome certain problems of ill-conditioning and is suitable for data sampled at noninteger times.

Journal ArticleDOI
TL;DR: In this article, it was shown that if n = 1 and A is integrally closed, the answers to both questions are "yes" for any n; this follows from classical number theory (e.g., [27, Chap. 7]).

Journal ArticleDOI
TL;DR: In this article, strong consistency of the least squares estimator in the linear regression model with a stochastic regressor matrix is proved for the case of martingale difference errors and predetermined regressors; applied to parameter estimation in autoregressive processes, this yields strong consistency when the errors are quasi-independent up to the fourth order.
Abstract: For the linear regression model $y = X \beta + u$ with stochastic regressor matrix, strong consistency of the least squares estimator of $\beta$ is proved in the case of martingale difference errors and predetermined regressors and for the case where errors and regressors are orthogonal up to the second order. The results obtained are applied to parameter estimation in autoregressive processes, leading to strong consistency if the errors are quasi-independent up to the fourth order.
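
The autoregressive application amounts to ordinary least squares with predetermined regressors, i.e. regressing x_t on its own lags; a minimal sketch:

```python
import numpy as np

def fit_ar_least_squares(x, p):
    """Least squares estimate of AR(p) coefficients: regress x_t on the
    predetermined regressors x_{t-1}, ..., x_{t-p}."""
    y = x[p:]
    X = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
    return np.linalg.lstsq(X, y, rcond=None)[0]
```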

Journal ArticleDOI
01 Jan 1980
TL;DR: Asymptotic results are given for approximated weighted least squares estimators in nonlinear regression with independent but not necessarily identically distributed errors.
Abstract: Asymptotic results are given for approximated weighted least squares estimators in nonlinear regression with independent but not necessarily identically distributed errors.