
Showing papers on "Non-linear least squares published in 1990"


Journal ArticleDOI
TL;DR: It is shown that ordinary least squares and principal components regression occupy opposite ends of a continuous spectrum, with partial least squares lying in between; two adjustable 'parameters' control the procedure: alpha, in the continuum [0, 1], and omega, the number of regressors finally accepted.
Abstract: [Read before The Royal Statistical Society at a meeting organized by the Research Section on Wednesday, October 25th, 1989, Professor D. V. Hinkley in the Chair] SUMMARY The paper addresses the evergreen problem of construction of regressors for use in least squares multiple regression. In the context of a general sequential procedure for doing this, it is shown that, with a particular objective criterion for the construction, the procedures of ordinary least squares and principal components regression occupy the opposite ends of a continuous spectrum, with partial least squares lying in between. There are two adjustable 'parameters' controlling the procedure: 'alpha', in the continuum [0, 1], and 'omega', the number of regressors finally accepted. These control parameters are chosen by cross-validation. The method is illustrated by a range of examples of its application.
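A minimal illustrative sketch of the two ends of this spectrum and the intermediate case, with the number of retained regressors chosen by cross-validation (scikit-learn on synthetic data; this is not the authors' continuum procedure, and all variable names and data below are assumptions):

```python
# Illustrative only: OLS and PCR are the two ends of the spectrum, PLS sits in
# between; omega (the number of constructed regressors) is chosen by
# cross-validation.  Synthetic data; not the authors' continuum procedure.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X @ rng.normal(size=10) + 0.5 * rng.normal(size=100)

def cv_mse(model):
    # 5-fold cross-validated mean squared error (negated score flipped back)
    return -cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error").mean()

print("OLS:", cv_mse(LinearRegression()))
for omega in range(1, 6):
    pcr = make_pipeline(PCA(n_components=omega), LinearRegression())
    pls = PLSRegression(n_components=omega)
    print(f"omega={omega}  PCR: {cv_mse(pcr):.3f}  PLS: {cv_mse(pls):.3f}")
```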

445 citations


Journal ArticleDOI
TL;DR: In this article, a new application of factor analysis in a nonlinear least-squares fitting algorithm is presented, which is used to handle the wealth of data and to extract all information.
Abstract: The introduction of fast scanning and diode-array spectrophotometers facilitates the acquisition of large series of absorption spectra as a function of reaction time (kinetics), elution time (chromatography), or added reagent (equilibrium investigations). It is important to develop appropriate programs that are able to handle the wealth of data and to extract all information. In this contribution a new application of factor analysis in a nonlinear least-squares fitting algorithm is presented.
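A sketch of the general idea of combining factor analysis with nonlinear least-squares fitting of spectral data (synthetic first-order kinetics; not the paper's program, and the rate constant, spectra and noise level below are assumptions):

```python
# Sketch, not the paper's program: simulate spectra for a first-order reaction
# A -> B, count significant factors from the singular values, then fit the rate
# constant k by nonlinear least squares while the (linear) component spectra are
# eliminated at every step by a linear least-squares solve.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 10, 50)                  # assumed reaction times
wl = np.linspace(400, 600, 80)              # assumed wavelength grid
k_true = 0.7                                # assumed rate constant
C = np.column_stack([np.exp(-k_true * t), 1 - np.exp(-k_true * t)])          # concentrations
S = np.column_stack([np.exp(-((wl - 450) / 30) ** 2),
                     np.exp(-((wl - 550) / 40) ** 2)])                        # pure spectra
D = C @ S.T + 1e-3 * np.random.default_rng(1).normal(size=(t.size, wl.size))  # data matrix

# Factor analysis step: two singular values dominate, so two absorbing species.
print("singular values:", np.linalg.svd(D, compute_uv=False)[:5])

def residuals(p):
    k = p[0]
    Ck = np.column_stack([np.exp(-k * t), 1 - np.exp(-k * t)])
    S_fit, *_ = np.linalg.lstsq(Ck, D, rcond=None)   # linear parameters eliminated
    return (D - Ck @ S_fit).ravel()

fit = least_squares(residuals, x0=[0.3], bounds=(1e-6, np.inf))
print("estimated k:", fit.x[0])
```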

411 citations


Journal ArticleDOI
TL;DR: This paper develops a general approach to robust, regression-based specification tests for (possibly) dynamic econometric models; the proposed statistics are robust to departures from distributional assumptions that are not being tested, while maintaining asymptotic efficiency under ideal conditions.
Abstract: This paper develops a general approach to robust, regression-based specification tests for (possibly) dynamic econometric models. A useful feature of the proposed tests is that, in addition to estimation under the null hypothesis, computation requires only a matrix linear least-squares regression and then an ordinary least-squares regression similar to those employed in popular nonrobust tests. For the leading cases of conditional mean and/or conditional variance tests, the proposed statistics are robust to departures from distributional assumptions that are not being tested, while maintaining asymptotic efficiency under ideal conditions. Moreover, the statistics can be computed using any √T-consistent estimator, resulting in significant simplifications in some otherwise difficult contexts. Among the examples covered are conditional mean tests for models estimated by weighted nonlinear least squares under misspecification of the conditional variance, tests of jointly parameterized conditional means and variances estimated by quasi-maximum likelihood under nonnormality, and some robust specification tests for a dynamic linear model estimated by two-stage least squares.

327 citations




Journal ArticleDOI
TL;DR: Powerful new features have been added to the author's complex nonlinear least squares (CNLS) fitting program, and the results of a Monte Carlo simulation study of bias and statistical uncertainty in CNLS fitting of equivalent circuit data are discussed.

229 citations




Journal ArticleDOI
TL;DR: Solving Newton’s linear system using updated matrix factorizations or the (unpreconditioned) conjugate gradient iteration gives the most effective algorithms.
Abstract: Several variants of Newton’s method are used to obtain estimates of solution vectors and residual vectors for the linear model $Ax = b + e = b_{true} $ using an iteratively reweighted least squares criterion, which tends to diminish the influence of outliers compared with the standard least squares criterion. Algorithms appropriate for dense and sparse matrices are presented. Solving Newton’s linear system using updated matrix factorizations or the (unpreconditioned) conjugate gradient iteration gives the most effective algorithms. Four weighting functions are compared, and results are given for sparse well-conditioned and ill-conditioned problems.
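A minimal iteratively reweighted least squares sketch for the dense case, using Huber weights as one possible weighting function; the data, scale estimate and tolerances are assumptions, and this illustrates the criterion rather than the paper's Newton variants:

```python
# Minimal dense-case IRLS sketch with Huber weights; data, weight function and
# tolerances are assumptions, and this illustrates the criterion rather than the
# paper's Newton variants.
import numpy as np

def irls(A, b, c=1.345, iters=50, tol=1e-8):
    """Robust fit of A x ~ b by iteratively reweighted least squares."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]             # ordinary LS start
    for _ in range(iters):
        r = b - A @ x
        s = np.median(np.abs(r)) / 0.6745 + 1e-12        # robust scale (MAD)
        w = np.minimum(1.0, c * s / (np.abs(r) + 1e-12)) # Huber weights
        Aw = A * w[:, None]                              # rows scaled by weights
        x_new = np.linalg.solve(A.T @ Aw, Aw.T @ b)      # weighted normal equations
        if np.linalg.norm(x_new - x) <= tol * (np.linalg.norm(x) + 1e-12):
            return x_new
        x = x_new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.1 * rng.normal(size=200)
b[:10] += 5.0                                            # a few gross outliers
print(irls(A, b))                                        # close to x_true despite outliers
```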

153 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that the projected gradient of the objective function on the manifold of constraints usually can be formulated explicitly, which gives rise to the construction of a descent flow that can be followed numerically.
Abstract: The problems of computing least squares approximations for various types of real and symmetric matrices subject to spectral constraints share a common structure. This paper describes a general procedure in using the projected gradient method. It is shown that the projected gradient of the objective function on the manifold of constraints usually can be formulated explicitly. This gives rise to the construction of a descent flow that can be followed numerically. The explicit form also facilitates the computation of the second-order optimality conditions. Examples of applications are discussed. With slight modifications, the procedure can be extended to solve least squares problems for general matrices subject to singular-value constraints.
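A schematic sketch of the descent idea for one instance of this problem class: the nearest symmetric matrix to a given A with a prescribed spectrum, parameterized as X = Q diag(lam) Q^T with Q orthogonal. The plain gradient step with a polar projection back to the orthogonal matrices used below is an illustrative simplification, not the paper's projected gradient flow, and the target matrix and spectrum are assumptions:

```python
# Illustrative simplification, not the paper's projected gradient flow: find the
# nearest symmetric matrix to A with prescribed eigenvalues lam, written as
# X = Q diag(lam) Q^T with Q orthogonal.  Take a plain gradient step in Q, then
# project back to the orthogonal matrices via the polar factor (SVD).
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n))
A = (A + A.T) / 2                                       # target symmetric matrix (assumed)
lam = np.array([3.0, 1.0, 0.0, -1.0, -2.0])             # prescribed spectrum (assumed)
Lam = np.diag(lam)

Q = np.linalg.qr(rng.normal(size=(n, n)))[0]            # random orthogonal start
eta = 1e-3
for _ in range(5000):
    R = Q @ Lam @ Q.T - A
    grad = 4 * R @ Q @ Lam                              # Euclidean gradient of ||R||_F^2 in Q
    U, _, Vt = np.linalg.svd(Q - eta * grad)
    Q = U @ Vt                                          # nearest orthogonal matrix

X = Q @ Lam @ Q.T
print("||X - A||_F :", np.linalg.norm(X - A))
print("spectrum    :", np.round(np.sort(np.linalg.eigvalsh(X)), 6))   # equals lam by construction
```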

138 citations


Journal ArticleDOI
TL;DR: McDonald et al. apply a partially adaptive technique, based on a generalized t-distribution, to estimate the parameters of William F. Sharpe's market model; it includes as special cases least squares, least absolute deviation, and L^p estimation, as well as some estimation procedures that have bounded and redescending influence functions.
Abstract: It is well known that least squares estimates can be very sensitive to departures from normality. Various robust estimators, such as least absolute deviations, L^p estimators or M-estimators, provide possible alternatives to least squares when such departures occur. This paper applies a partially adaptive technique to estimate the parameters of William F. Sharpe's market model. This methodology is based on a generalized t-distribution and includes as special cases least squares, least absolute deviation, and L^p, as well as some estimation procedures that have bounded and redescending influence functions. Coauthors are James B. McDonald, Ray D. Nelson, and Steven B. White. Copyright 1990 by MIT Press.

101 citations


Journal ArticleDOI
TL;DR: In this article, an age-structured separable sequential population analysis (SSPA) model is proposed, and the resulting models are examined using Monte Carlo simulation, with the mean square error of modeled biomass estimates used as the evaluation criterion.
Abstract: Modern statistical methods are being used more often to perform age-structured separable sequential population analysis (SSPA). This paper describes how some of these methods can be easily understood from a unified point of view. The approach is to begin with the now standard separable age-structured model, and modify some of the basic assumptions. The resulting models are examined using Monte Carlo simulation, with the mean square error of modeled biomass estimates used as the evaluation criterion. Simulation results indicate that nonlinear least squares and multinomial maximum likelihood are both capable of fitting lognormally and multinomially distributed catch-at-age data. It also appears that errors in modeling results introduced by ageing error may be minor, provided ageing error is of modest magnitude and is normally distributed. However, use of a somewhat incorrect functional form for the selectivities can cause greatly increased error in the modeling results, indicating that caution should be exercised...

55 citations


Journal ArticleDOI
TL;DR: In this article, a general method of characterizing microwave test fixtures for the purpose of determining the parameters of devices embedded in the fixture is discussed, and the technique is used to investigate deembedding under the assumptions that all measurement errors are random and normally distributed and that the standards are distributed uniformly around the Smith chart.
Abstract: A general method of characterizing microwave test fixtures for the purpose of determining the parameters of devices embedded in the fixture is discussed. The technique was used to investigate deembedding under the assumptions that all measurement errors are random and normally distributed and that the standards are distributed uniformly around the Smith chart. It was shown that for any given number of standards, the greatest accuracy under these assumptions is achieved by utilizing a large set of known reflective loads. When the propagation constant and the reflection coefficients of the standards are not known, then equal numbers of thru lines and reflective loads give the highest accuracy, although not as high as when the propagation constant and reflection coefficients are known. The accuracy of the technique was studied and compared with that of the common open-short-load (OSL) and thru-reflect-line methods. The OSL technique was found to be considerably less accurate than using sets of offset reflective loads.

Journal ArticleDOI
TL;DR: The method proposed uses partial separability and partitioned quasi-Newton updating techniques for handling the cost function, while more classical tools such as variable partitioning and specialized data structures are used in handling the network constraints.
Abstract: Partial separability and partitioned quasi-Newton updating have recently been introduced and applied with success in large-scale nonlinear optimization, large nonlinear least squares calculations and large systems of nonlinear equations. It is the purpose of this paper to apply this idea to large dimensional nonlinear network optimization problems. The method proposed uses these techniques for handling the cost function, while more classical tools such as variable partitioning and specialized data structures are used in handling the network constraints. The performance of a code implementing this method, as well as more classical techniques, is analyzed on several numerical examples.

Journal ArticleDOI
TL;DR: In this paper, an explicit determinantal formula for the least squares solution of an over-determined system of linear equations is derived, and it is shown that the least squares solution lies in the convex hull of the solutions to the square subsystems of the original system.

Journal ArticleDOI
TL;DR: The model, containing parameters for year-class abundance, age selectivity, full-recruitment fishing mortality, and catchability, is fitted to data with a nonlinear least squares algorithm and produces estimates with relatively high precision.
Abstract: We review techniques for estimating the abundance of migratory populations and develop a new technique based on catch-age data from geographic regions and our earlier technique, catch-age analysis with auxiliary information (Deriso et al. 1985, 1989). Data requirements are catch-age data over several years, some auxiliary information, and migration rates among regions. The model, containing parameters for year-class abundance, age selectivity, full-recruitment fishing mortality, and catchability, is fitted to data with a nonlinear least squares algorithm. We present a measurement error model and a process error model and favor the process error model because all model parameters can be jointly estimated. By application to data on Pacific halibut, the process error model converges readily and produces estimates with no significant bias. These estimates have relatively high precision compared to those from analyses which did not incorporate migration information. The error structure used in a model has a mo...


Journal ArticleDOI
TL;DR: Non-linear least absolute deviation fitting of data has been shown to be superior to least squares where the data errors are unevenly distributed about the function.
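A small sketch contrasting a non-linear least absolute deviation fit with an ordinary least-squares fit on data containing one-sided gross errors (the model, data and contamination pattern are assumptions, not taken from the paper):

```python
# Synthetic illustration: the same exponential model fitted by least absolute
# deviations and by least squares when a subset of the errors is one-sided.
# Model, data and contamination pattern are assumptions, not from the paper.
import numpy as np
from scipy.optimize import minimize, least_squares

x = np.linspace(0, 5, 60)
rng = np.random.default_rng(0)
y = 2.0 * np.exp(-0.8 * x) + 0.05 * rng.normal(size=x.size)
y[::10] += 1.0                                   # unevenly distributed gross errors

def model(p):
    return p[0] * np.exp(-p[1] * x)

lad = minimize(lambda p: np.sum(np.abs(model(p) - y)), x0=[1.0, 1.0], method="Nelder-Mead")
ls = least_squares(lambda p: model(p) - y, x0=[1.0, 1.0])
print("LAD estimate:", np.round(lad.x, 3))       # close to (2.0, 0.8)
print("LS  estimate:", np.round(ls.x, 3))        # pulled toward the gross errors
```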

Journal ArticleDOI
TL;DR: In this paper, the authors used least squares fitting of a spherical harmonic model to a selection of Magsat data to determine the practical limits of this technique with modern computers, and obtained a condition number of 115 for the solution matrix, with a resulting precision of 11 significant figures.
Abstract: Numerical tests were made, using least squares fitting of a spherical harmonic model, to a selection of Magsat data to determine the practical limits of this technique with modern computers. The resulting (M102189) model, whose coefficients were adjusted up to n = 50, was compared with M07AV6, a previous model which used least squares (on vector data) for coefficients up to n = 29, and Gauss-Legendre quadrature (on Z residuals) to adjust the coefficients up to n = 63. For the new least squares adjustment to n = 50 a condition number of 115 was obtained for the solution matrix, with a resulting precision of 11 significant figures. The M102189 model shows a lower and more Gaussian residual distribution than did M07AV6, though the Gaussian envelope fits to the residual distributions, even for the scalar field, give 'standard deviations' never lower than 6 nT, a factor of three higher than the estimated Magsat observational errors. Ionospheric currents are noted to have a significant effect on the coefficients of the internal potential functions.
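A toy analogue of the linear design-matrix fit and the condition-number check discussed above (zonal Legendre terms only, synthetic data; not the Magsat processing, and the maximum degree, sampling and noise level are assumptions):

```python
# Toy analogue, not the Magsat processing: a zonal (Legendre) design-matrix fit
# on synthetic data, with the condition number of the design matrix reported.
# Maximum degree, sampling distribution and noise level are assumptions.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=2000)                # cos(colatitude) of the "observations"
n_max = 12

# Column j of the design matrix holds P_j(cos theta).
G = np.column_stack([legendre.legval(x, np.eye(n_max + 1)[j]) for j in range(n_max + 1)])
coef_true = rng.normal(size=n_max + 1) / (1 + np.arange(n_max + 1))
d = G @ coef_true + 0.01 * rng.normal(size=x.size)

coef_hat, *_ = np.linalg.lstsq(G, d, rcond=None)
print("condition number of G :", np.linalg.cond(G))
print("max coefficient error :", np.abs(coef_hat - coef_true).max())
```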

Journal ArticleDOI
TL;DR: It is shown that the TLS-LP (total least squares-linear prediction) method and the SVD-Prony method are equivalent to the first-order perturbation approximation.
Abstract: It is shown that the TLS-LP (total least squares-linear prediction) method and the SVD-Prony method are equivalent to the first-order perturbation approximation. In practice, it means that the two methods yield the same estimation variances when the signal-to-noise ratio (SNR) is above a threshold.
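A minimal total least squares sketch for an overdetermined system Ax ≈ b via the SVD of the augmented matrix [A b]; TLS-LP applies the same idea to the linear-prediction equations. The data and dimensions below are assumptions:

```python
# Minimal total least squares sketch: errors are allowed in both A and b, and the
# solution comes from the right singular vector of [A  b] belonging to the
# smallest singular value.  Dimensions and noise level are assumptions.
import numpy as np

def tls(A, b):
    """Total least squares solution of A x ~ b."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                                   # right vector of the smallest singular value
    return -v[:n] / v[n]

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
x_true = np.array([1.0, 2.0, -1.0])
b = A @ x_true
A_noisy = A + 0.01 * rng.normal(size=A.shape)    # perturb the data matrix as well
b_noisy = b + 0.01 * rng.normal(size=b.shape)
print(tls(A_noisy, b_noisy))                     # close to x_true
```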

Journal ArticleDOI
TL;DR: The multistage least squares (MSLS) technique presented in this paper relaxes this iid assumption by extending 3SLS to a more generalized error structure.
Abstract: Forestry problems are frequently represented by a system of equations rather than a single equation. Many authors have used the two- and three-stage least squares (2SLS and 3SLS) and seemingly unrelated regressions techniques of econometrics to estimate the coefficients of these forestry systems. The assumption in using these techniques is that the error terms of individual equations are independent and identically distributed (iid), even though the equations may be contemporaneously correlated. For forestry systems, this assumption is frequently not met. The results are that the estimated variances of the coefficients are statistically biased and not consistent, and confidence intervals for the estimated coefficients and mean predicted values are inaccurate. The multistage least squares (MSLS) technique presented in this paper relaxes this iid assumption by extending 3SLS to a more generalized error structure. MSLS is a linear least squares technique that will result in consistent estimates of the coefficients...

Journal ArticleDOI
TL;DR: In this paper, necessary and sufficient conditions are established for the product AB^-C to have its rank invariant with respect to the choice of the generalized inverse B^-. In particular, these conditions coincide with the results of Mitra.

Journal ArticleDOI
TL;DR: In this article, a robust model for forecasting power system hourly load is developed, which exploits the convenience of the autocorrelation function (ACF) and the partial autocorerelation functions (PACF) of the resulting differenced previous load data in identifying a suboptimal model.

Journal ArticleDOI
TL;DR: In this paper, a non-linear least-squares method is used for deriving spectral characteristics (frequency, amplitude, damping factor and phase) from time-domain NMR data (FID).
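A sketch of the idea on synthetic data: fit amplitude, frequency, damping factor and phase of a single damped sinusoid by non-linear least squares (not the paper's algorithm; all parameter values and starting guesses are assumptions):

```python
# Synthetic sketch of the idea (not the paper's algorithm): one damped sinusoid
# with assumed amplitude, frequency, damping factor and phase, recovered by
# nonlinear least squares from a noisy time-domain record.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 1, 500)
a0, f0, d0, phi0 = 1.0, 25.0, 3.0, 0.4           # assumed "true" parameters
rng = np.random.default_rng(0)
y = a0 * np.exp(-d0 * t) * np.cos(2 * np.pi * f0 * t + phi0) + 0.05 * rng.normal(size=t.size)

def residuals(p):
    a, f, d, phi = p
    return a * np.exp(-d * t) * np.cos(2 * np.pi * f * t + phi) - y

fit = least_squares(residuals, x0=[0.8, 24.8, 1.0, 0.0])   # rough starting values
print("amplitude, frequency, damping, phase:", np.round(fit.x, 3))
```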

Journal ArticleDOI
TL;DR: The parameter estimation procedure is presented to show that the computer program can be used not only to generate phase diagrams with characteristic shapes but also to numerically estimate the lipid-lipid pair interactions between the mixed and the like pairs in the two-dimensional plane of the bilayer in both the gel and liquid-crystalline states.


Journal ArticleDOI
TL;DR: In this paper, the effectiveness of five different numerical methods has been assessed for estimating both the equivalent time and the equivalent temperature for the equivalent point method (EPM) of thermal evaluation; the performance of each method was tested with three simulated data sets.

Journal ArticleDOI
TL;DR: In this article, the relative merits of the maximum likelihood and the minimum chi-square estimators for a single realization are considered, and a nonlinear least squares estimator is proposed when only macro data are available.
Abstract: Raftery (1985) proposed a higher order Markov model that is parsimonious in terms of number of parameters. The model appears to be useful in many real life situations. However, many important properties of the model have not been investigated. In particular, estimation methods under various sampling situations have not been studied. In this paper the relative merits of the maximum likelihood and the minimum chi-square estimators for a single realization are considered. For other sampling situations, a nonlinear least squares estimator is proposed when only macro data are available. Its small sample properties are studied by simulation. An empirical Bayes estimator for panel data is also considered.

Proceedings ArticleDOI
Victor Y. Pan
01 May 1990
TL;DR: This work uses O(log^2 n) parallel arithmetic steps and n^2 processors to compute the least-squares solution x = A^+ b to a linear system of equations, Ax = b, given a g × h matrix A and a vector b, both filled with integers or with rational numbers.
Abstract: We use O(log^2 n) parallel arithmetic steps and n^2 processors to compute the least-squares solution x = A^+ b to a linear system of equations, Ax = b, given a g × h matrix A and a vector b, both filled with integers or with rational numbers, provided that g + h ≤ 2n and that A is given with its displacement generator of length O(1) and thus has displacement rank O(1). For a vector b and for a general p × q matrix A (with p + q ≤ n), we compute A^+ and A^+ b by using O(log^2 n) parallel arithmetic steps and n^2.851 processors, and we may also compute A^+ b by using O(n^2.376) arithmetic operations.

Journal ArticleDOI
TL;DR: In this paper, the amplitude density function is derived from an inversion of the spectral representation of the series and is used in conjunction with non-linear least squares regression in an iterative procedure to yield precise estimates of the frequencies in the model.
Abstract: In this paper we present a novel methodology for estimating harmonic models in time series. The amplitude density function is derived from an inversion of the spectral representation of the series and is found to have the property of strong consistency. It is used in conjunction with non-linear least squares regression in an iterative procedure to yield precise estimates of the frequencies in the model. Information criteria are then used in determining how many frequencies to include in the model. The optimal procedure was tested using Monte Carlo techniques, and proved successful at correctly estimating the underlying harmonic model. The methodology was applied to a series of magnitudes of a variable star, to a series of signed sunspot numbers, and to a series of temperature data. A class of iterative procedures was investigated, and an optimal procedure proposed, which involves the use of non-linear least squares and an update of the amplitude density function after each new frequency has been estimated...
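A sketch of the refinement step only: take a coarse frequency from the periodogram peak and refine it, together with amplitude and phase, by non-linear least squares (the paper's amplitude density function is not reproduced here; the synthetic series and noise level are assumptions):

```python
# Sketch of the refinement step only: a coarse frequency from the periodogram
# peak, then a non-linear least-squares refinement of frequency, amplitude and
# phase.  The synthetic series and noise level are assumptions.
import numpy as np
from scipy.optimize import least_squares

n = 512
t = np.arange(n)
rng = np.random.default_rng(0)
y = 2.0 * np.cos(2 * np.pi * 0.1234 * t + 0.7) + rng.normal(size=n)

spec = np.abs(np.fft.rfft(y))
f0 = np.fft.rfftfreq(n)[np.argmax(spec[1:]) + 1]     # coarse estimate (skip the DC term)

def residuals(p):
    a, f, phi = p
    return a * np.cos(2 * np.pi * f * t + phi) - y

fit = least_squares(residuals, x0=[1.0, f0, 0.0])
print("coarse f:", round(f0, 4), "  refined f:", round(fit.x[1], 5))
```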

Journal ArticleDOI
TL;DR: In this article, the authors extend the results to cover more general cases when b ∉ R(A) and A is not of full rank, where the perturbed data Ã = A + δA and b̃ = b + δb replace A and b, respectively.

Journal ArticleDOI
TL;DR: In this article, the error in parameters derived from the least-squares method of fitting nonlinear models to experimental data is calculated; the formula reduces to the well-known result for a linear least-squares fit but differs from a method for calculating the error that is often employed for the nonlinear case.
Abstract: This article presents a calculation of the error in parameters derived from the least-squares method of fitting nonlinear models to experimental data. The formula reduces to the well-known result for the case of a linear least-squares fit. It differs, however, from a method for calculating the error that is often employed for the nonlinear case. The difference between the current result and that of the other method is illustrated with examples from least-squares fits to spectroscopic data.
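For reference, the commonly employed recipe mentioned above approximates the parameter covariance after a nonlinear fit by s^2 (J^T J)^{-1}, with J the Jacobian at the solution; the sketch below shows that common recipe, not the formula derived in the article, and the model and data are assumptions:

```python
# The commonly employed recipe (not the formula derived in this article): after a
# nonlinear least-squares fit, approximate the parameter covariance by
# s^2 (J^T J)^{-1} using the Jacobian J at the solution.  Model and data are assumptions.
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(0, 4, 40)
rng = np.random.default_rng(0)
y = 2.5 * np.exp(-1.3 * x) + 0.03 * rng.normal(size=x.size)   # model y = a*exp(-b*x)

def residuals(p):
    a, b = p
    return a * np.exp(-b * x) - y

fit = least_squares(residuals, x0=[1.0, 1.0])
J = fit.jac                                      # Jacobian at the solution
dof = x.size - fit.x.size
s2 = 2 * fit.cost / dof                          # fit.cost = 0.5 * sum of squared residuals
cov = s2 * np.linalg.inv(J.T @ J)
print("parameters     :", np.round(fit.x, 4))
print("standard errors:", np.round(np.sqrt(np.diag(cov)), 4))
```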