
Showing papers on "Non-linear least squares published in 1984"


Journal ArticleDOI
TL;DR: In this article, an effective variance weighted least squares solution to the mass balance receptor model is derived from the theory of maximum likelihood; it incorporates the effects of random uncertainties in both the receptor concentrations and the source compositions.
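
The effective-variance idea lends itself to a short sketch: iteratively reweight the mass balance fit with weights built from both the receptor-measurement and source-profile uncertainties. A minimal sketch follows; the array names, loop structure, and fixed iteration count are illustrative assumptions, not the authors' code.

```python
import numpy as np

def effective_variance_lsq(A, c, sig_A, sig_c, n_iter=20):
    """Iteratively reweighted least squares for the mass balance
    receptor model, with effective variances combining the
    uncertainties of both receptor concentrations and source profiles.
    A: (n_species, n_sources) source composition matrix
    c: (n_species,) receptor concentrations
    sig_A, sig_c: one-sigma uncertainties of A and c
    """
    s = np.linalg.lstsq(A, c, rcond=None)[0]       # unweighted start
    for _ in range(n_iter):
        # effective variance of each species equation
        v_eff = sig_c**2 + (sig_A**2) @ (s**2)
        w = 1.0 / v_eff
        Aw = A * w[:, None]                        # row-weighted matrix W A
        s = np.linalg.solve(A.T @ Aw, Aw.T @ c)    # weighted normal equations
    return s
```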

376 citations


Journal ArticleDOI
TL;DR: In this paper, the eigenvalue, logarithmic least squares, and least squares methods are compared to derive estimates of ratio scales from a positive reciprocal matrix, and the criteria for comparison are the measurement of consistency, dual solutions, and rank preservation.
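
Two of the compared scale-derivation methods can be shown in a few lines: the principal eigenvector of a positive reciprocal matrix versus the logarithmic least squares (row geometric mean) solution. The 3x3 comparison matrix below is an illustrative example, not taken from the paper.

```python
import numpy as np

def eigen_weights(M):
    """Principal right eigenvector of a positive reciprocal matrix."""
    vals, vecs = np.linalg.eig(M)
    w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return w / w.sum()

def log_least_squares_weights(M):
    """Logarithmic least squares (row geometric mean) weights."""
    g = np.exp(np.log(M).mean(axis=1))
    return g / g.sum()

# 3x3 reciprocal comparison matrix (illustrative values)
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(eigen_weights(M))
print(log_least_squares_weights(M))
```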

359 citations


Journal ArticleDOI
TL;DR: For models similar to those used in econometric work, under suitable regularity conditions, the bootstrap is shown to give asymptotically valid approximations to the distribution of errors in coefficient estimates as discussed by the authors.
Abstract: For models similar to those used in econometric work, under suitable regularity conditions, the bootstrap is shown to give asymptotically valid approximations to the distribution of errors in coefficient estimates.
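
A minimal sketch of the residual bootstrap for a linear regression, the kind of scheme whose asymptotic validity is at issue here; the model, sample sizes, and seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, B = 100, 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat
resid -= resid.mean()                       # center the residuals

boot = np.empty((B, 2))
for b in range(B):
    # resample residuals, refit, record coefficient estimates
    y_star = X @ beta_hat + rng.choice(resid, size=n, replace=True)
    boot[b] = np.linalg.lstsq(X, y_star, rcond=None)[0]

# bootstrap approximation to the sampling error of the estimates
print(boot.std(axis=0))
```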

305 citations


Journal ArticleDOI
TL;DR: It is often difficult to specify weights for weighted least squares nonlinear regression analysis of pharmacokinetic data, and extended least squares regression provides a possible solution to this problem by allowing the incorporation of a general parametric variance model.
Abstract: It is often difficult to specify weights for weighted least squares nonlinear regression analysis of pharmacokinetic data. Improper choice of weights may lead to inaccurate and/or imprecise estimates of pharmacokinetic parameters. Extended least squares nonlinear regression provides a possible solution to this problem by allowing the incorporation of a general parametric variance model. Weighted least squares and extended least squares analyses of data from a simulated pharmacokinetic experiment were compared. Weighted least squares analysis of the simulated data, using commonly used weighting schemes, yielded estimates of pharmacokinetic parameters that were significantly biased, whereas extended least squares estimates were unbiased. Extended least squares estimates were often significantly more precise than were weighted least squares estimates. It is suggested that extended least squares regression should be further investigated for individual pharmacokinetic data analysis.
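
A minimal sketch of extended least squares under an assumed power-of-the-mean variance model: the objective adds a log-variance penalty to the weighted residual sum of squares, so the variance parameters are estimated jointly with the pharmacokinetic ones. The one-compartment mean function, data values, and parameterization are illustrative, not from the simulation study.

```python
import numpy as np
from scipy.optimize import minimize

def conc(t, V, k, dose=100.0):
    """One-compartment model (illustrative pharmacokinetic mean function)."""
    return dose / V * np.exp(-k * t)

def els_objective(p, t, y):
    """Extended least squares objective (-2 log likelihood up to a
    constant) with a power variance model var_i = sigma2 * mean_i**gamma."""
    V, k, sigma2, gamma = p
    mu = conc(t, V, k)
    var = sigma2 * mu**gamma
    return np.sum((y - mu)**2 / var + np.log(var))

t = np.array([0.5, 1, 2, 4, 8, 12.0])
y = np.array([9.1, 8.0, 6.3, 3.9, 1.6, 0.7])   # illustrative concentrations
res = minimize(els_objective, x0=[10.0, 0.2, 0.05, 1.0],
               args=(t, y), method="L-BFGS-B",
               bounds=[(1e-3, None)] * 4)
print(res.x)   # V, k, sigma2, gamma
```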

151 citations


Journal ArticleDOI
TL;DR: The problems of input sensitivity, structure detection, model validation and input signal selection are discussed in the non-linear context.
Abstract: Least squares parameter estimation algorithms for non-linear systems are investigated based on a non-linear difference equation model. A modified extended least squares algorithm, an instrumental variable algorithm and a new suboptimal least squares algorithm are considered. The problems of input sensitivity, structure detection, model validation and input signal selection are also discussed in the non-linear context.
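
A sketch of ordinary least squares applied to a polynomial nonlinear difference equation model, the general model class investigated here; the particular model structure, coefficients, and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for t in range(1, N):
    # illustrative nonlinear difference equation with equation noise
    y[t] = 0.5*y[t-1] + 0.8*u[t-1] - 0.3*y[t-1]**2 + 0.02*rng.normal()

# regressors: lagged output, lagged input, and a nonlinear term
Phi = np.column_stack([y[:-1], u[:-1], y[:-1]**2])
theta = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
print(theta)   # estimates of (0.5, 0.8, -0.3)
```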

126 citations


Journal ArticleDOI
TL;DR: The method of fundamental solutions as discussed by the authors is a form of indirect boundary integral equation method with adaptivity, gained through the use of an auxiliary boundary that is chosen automatically by a least squares procedure.
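
A compact illustration of the method's least squares character for the 2-D Laplace equation: source strengths on an auxiliary boundary are fitted to Dirichlet data by linear least squares. Here the auxiliary boundary is a fixed circle rather than one chosen automatically as in the paper, and all geometry and point counts are illustrative.

```python
import numpy as np

# Collocation points on the unit circle (the true boundary) and source
# points on a larger auxiliary circle.
m, n = 80, 40
tb = np.linspace(0, 2*np.pi, m, endpoint=False)
xb = np.column_stack([np.cos(tb), np.sin(tb)])          # boundary points
ts = np.linspace(0, 2*np.pi, n, endpoint=False)
xs = 2.0 * np.column_stack([np.cos(ts), np.sin(ts)])    # auxiliary sources

def G(x, y):
    """Fundamental solution of the 2-D Laplace equation."""
    return -np.log(np.linalg.norm(x - y, axis=-1)) / (2*np.pi)

# Dirichlet data: trace of the harmonic function u(x, y) = x*y
g = xb[:, 0] * xb[:, 1]

# Least squares fit of the source strengths to the boundary data
A = G(xb[:, None, :], xs[None, :, :])
c = np.linalg.lstsq(A, g, rcond=None)[0]

# Evaluate the approximation at an interior point and compare
x0 = np.array([0.3, 0.2])
print(c @ G(x0, xs), x0[0] * x0[1])
```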

90 citations


Book
01 Apr 1984
TL;DR: 1. Large sparse systems of linear equations, 2. Large sparse linear least squares, 3. Large sparse linear programming, and 4. Nonlinear equations and nonlinear least squares.
Abstract: 1 Large sparse systems of linear equations.- 2 Large sparse linear least squares.- 3 Large sparse linear programming.- 4 Nonlinear equations and nonlinear least squares.- 5 Large unconstrained optimization problems.- 6 Large sparse quadratic programs.

59 citations


Journal ArticleDOI
TL;DR: In this article, the nonlinear and 3 linearized forms of the integrated Michaelis-Menten equation were evaluated for their ability to provide reliable estimates of uptake kinetic parameters when the initial substrate concentration (S0) is not error-free.
Abstract: The nonlinear and 3 linearized forms of the integrated Michaelis-Menten equation were evaluated for their ability to provide reliable estimates of uptake kinetic parameters when the initial substrate concentration (S0) is not error-free. Of the 3 linearized forms, the one where t/(S0−S) is regressed against ln(S0/S)/(S0−S) gave estimates of Vmax and Km closest to the true population means of these parameters. Further, this linearization was the least sensitive of the 3 to errors (±1%) in S0. Our results illustrate the danger of relying on r² values for choosing among the 3 linearized forms of the integrated Michaelis-Menten equation. Nonlinear regression analysis of progress curve data, when S0 is not free of error, was superior to even the best of the 3 linearized forms. The integrated Michaelis-Menten equation should not be used to estimate Vmax and Km when substrate production occurs concomitant with consumption of added substrate. We propose the use of a new equation for estimation of these parameters along with a parameter describing endogenous substrate production (R) for kinetic studies done with samples from natural habitats, in which the substrate of interest is an intermediate. The application of this new equation was illustrated for both simulated data and previously obtained H2 depletion data. The only means by which Vmax, Km, and R may be evaluated from progress curve data using this new equation is via nonlinear regression, since a linearized form of this equation could not be derived. Mathematical components of computer programs written for fitting data to either of the above nonlinear models using nonlinear least squares analysis are presented.
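
One way to carry out the recommended nonlinear regression of progress-curve data: the integrated Michaelis-Menten equation S0 − S + Km·ln(S0/S) = Vmax·t has a closed-form solution for S(t) via the Lambert W function, which can be handed directly to a least squares fitter. Fitting S0 together with Vmax and Km, as below, is one way of acknowledging error in S0; the data are simulated and illustrative, and this sketch is not the authors' program.

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import curve_fit

def substrate(t, Vmax, Km, S0):
    """Closed-form solution of the integrated Michaelis-Menten
    equation S0 - S + Km*ln(S0/S) = Vmax*t via the Lambert W function."""
    z = (S0 / Km) * np.exp((S0 - Vmax * t) / Km)
    return Km * np.real(lambertw(z))

# Simulated progress-curve data (illustrative values)
rng = np.random.default_rng(2)
t = np.linspace(0, 10, 15)
S = substrate(t, Vmax=1.0, Km=2.0, S0=5.0) + 0.02 * rng.normal(size=t.size)

# Fit Vmax, Km, and S0 simultaneously, so error in S0 is estimated
# rather than assumed away
popt, pcov = curve_fit(substrate, t, S, p0=[0.5, 1.0, 4.0])
print(popt)
```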

Journal ArticleDOI
TL;DR: In this article, a micro-or personal computer based system for frequency dispersion measurements in solid state ionics is described, which is capable of measuring the complex admittance or impedance in the frequency range from less than 1 mHz to 100 kHz.

Journal ArticleDOI
TL;DR: In this paper, four different measures of inefficiency of the simple least squares estimator in the general Gauss-Markoff model are considered, and new bounds are obtained for a particular measure.


Proceedings ArticleDOI
01 Mar 1984
TL;DR: This paper provides a quantitative analysis of the tracking characteristics of least squares algorithms and a comparison is made with the tracking performance of the LMS algorithm.
Abstract: This paper provides a quantitative analysis of the tracking characteristics of least squares algorithms. A comparison is made with the tracking performance of the LMS algorithm. Other algorithms that are similar to least squares algorithms, such as the gradient lattice algorithm and the Gram-Schmidt orthogonalization algorithm are also considered. Simulation results are provided to reinforce the analytical results and conclusions.
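
A small simulation in the spirit of such tracking comparisons: exponentially weighted recursive least squares versus LMS following a slowly drifting parameter. The step size, forgetting factor, and drift model are illustrative choices, not those analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2000
x = rng.normal(size=N)                         # regressor
theta = np.cumsum(0.01 * rng.normal(size=N))   # slowly drifting parameter
y = theta * x + 0.1 * rng.normal(size=N)

# LMS with a fixed step size
mu, th_lms, e_lms = 0.05, 0.0, []
for t in range(N):
    th_lms += mu * x[t] * (y[t] - th_lms * x[t])
    e_lms.append((th_lms - theta[t])**2)

# Exponentially weighted RLS with forgetting factor lam
lam, P, th_rls, e_rls = 0.98, 100.0, 0.0, []
for t in range(N):
    k = P * x[t] / (lam + x[t] * P * x[t])
    th_rls += k * (y[t] - th_rls * x[t])
    P = (P - k * x[t] * P) / lam
    e_rls.append((th_rls - theta[t])**2)

print(np.mean(e_lms[200:]), np.mean(e_rls[200:]))   # tracking MSE
```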

Journal ArticleDOI
TL;DR: An equilibrium model of the salt-induced dimerization process is described and used to fit the experimental degree-of-aggregation vs. salt-concentration data by nonlinear least squares procedures.
Abstract: Absorption and resonance Raman difference spectra of uroporphyrin I and several of its metal derivatives are given for monomeric and aggregated forms in aqueous solution. These spectral changes are examined in detail herein. An equilibrium model of the salt-induced dimerization process is described and used to fit the experimental degree-of-aggregation vs. salt-concentration data by nonlinear least squares procedures. The dimerization model is able to accurately fit the experimental data and reasonable values of the parameters of the model result. Variation in one of the parameters quantifies the metal-dependent differences in the observed aggregation curves. Acid-induced aggregation is also quantitatively understood in terms of the dimerization model.
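
A heavily simplified sketch of fitting a monomer-dimer equilibrium to aggregation-versus-salt data by nonlinear least squares. The exponential salt dependence of the dimerization constant, the total concentration, and all numbers are illustrative assumptions, not the authors' model.

```python
import numpy as np
from scipy.optimize import curve_fit

C0 = 1e-5   # total porphyrin concentration (illustrative)

def degree_of_aggregation(c_salt, K0, b):
    """Fraction of porphyrin in dimers for a 2M <-> D equilibrium,
    with an assumed salt-dependent constant K = K0*exp(b*c_salt).
    Monomer concentration m solves 2*K*m**2 + m - C0 = 0."""
    K = K0 * np.exp(b * c_salt)
    m = (-1 + np.sqrt(1 + 8 * K * C0)) / (4 * K)
    return 1 - m / C0

c = np.linspace(0, 1, 12)                  # salt concentration (M)
f = degree_of_aggregation(c, 1e5, 6.0)     # synthetic aggregation curve
popt, _ = curve_fit(degree_of_aggregation, c, f, p0=[1e4, 3.0])
print(popt)   # recovered (K0, b)
```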

Journal ArticleDOI
TL;DR: This paper develops an updating algorithm for the solution of linear least squares problems which are sparse except for a small subset of dense equations, where any linear constraints on the solution can likewise be divided into sparse and dense subsets.
Abstract: Linear least squares problems which are sparse except for a small subset of dense equations can be efficiently solved by an updating method. Often the least squares solution is also required to satisfy a set of linear constraints, which again can be divided into sparse and dense subsets. This paper develops an updating algorithm for the solution of such problems. The algorithm is completely general in that no restrictive assumption on the rank of any subset of equations or constraints is made.
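
A dense-matrix sketch of the two-stage updating idea for the unconstrained case: factor the sparse equations first, then update the triangular factor with the dense rows. A dense QR stands in here for whatever sparse factorization would be used in practice, and the constrained case is not shown.

```python
import numpy as np

def sparse_dense_lsq(As, bs, Ad, bd):
    """Solve min ||[As; Ad] x - [bs; bd]|| by factoring the 'sparse'
    block As first, then updating with the dense rows Ad."""
    Q1, R1 = np.linalg.qr(As)
    c1 = Q1.T @ bs
    # Update: stack the triangular factor with the dense rows
    Q2, R2 = np.linalg.qr(np.vstack([R1, Ad]))
    c2 = Q2.T @ np.concatenate([c1, bd])
    return np.linalg.solve(R2, c2)

rng = np.random.default_rng(4)
As, bs = rng.normal(size=(50, 5)), rng.normal(size=50)
Ad, bd = rng.normal(size=(3, 5)), rng.normal(size=3)
x_upd = sparse_dense_lsq(As, bs, Ad, bd)
x_ref = np.linalg.lstsq(np.vstack([As, Ad]),
                        np.concatenate([bs, bd]), rcond=None)[0]
print(np.allclose(x_upd, x_ref))   # True: the update reproduces the solution
```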

Journal ArticleDOI
TL;DR: In this paper, the ground electronic state electric-dipole-moment function of carbon monoxide, valid in the range of nuclear oscillation (0.87 to 1.01 Å) of about the v = 38th vibrational level, was computed.
Abstract: Experimental intensity information is combined with numerically obtained vibrational wave functions in a nonlinear least squares fitting procedure to obtain the ground electronic state electric-dipole-moment function of carbon monoxide valid in the range of nuclear oscillation (0.87 to 1.01 Å) of about the v = 38th vibrational level. Mechanical anharmonicity intensity factors, H, are computed from this function for Δv = 1, 2, 3, with v ≤ 38.

Journal ArticleDOI
TL;DR: In this article, the asymptotic bias of the least squares estimator for multivariate autoregressive models is derived, and the formulas for low-order univariate models are given in terms of simple functions of the parameters.
Abstract: The asymptotic bias of the least squares estimator for multivariate autoregressive models is derived. The formulas for low-order univariate autoregressive models are given in terms of simple functions of the parameters. Our results are useful for bias correction of least squares estimates.
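
A quick simulation of the effect for an AR(1) model with estimated mean, compared against the classical O(1/T) bias approximation −(1 + 3φ)/T; that specific formula is a standard textbook approximation and is used here for illustration, not necessarily the paper's expression.

```python
import numpy as np

rng = np.random.default_rng(5)
phi, T, reps = 0.6, 50, 20000
est = np.empty(reps)
for r in range(reps):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi * y[t-1] + rng.normal()
    y = y - y.mean()                               # mean-corrected series
    est[r] = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])  # least squares estimate

print(est.mean() - phi)        # simulated bias of the LS estimator
print(-(1 + 3*phi) / T)        # classical O(1/T) bias approximation
```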

Book ChapterDOI
01 Jan 1984
TL;DR: A unified approach to the use of linear models and matrix least squares is presented with the intention of providing a better understanding of the techniques themselves and of the statistics that arise from these techniques as they are used in clinical chemistry.
Abstract: Clinical chemists are often required to fit mathematical models to experimental data. In some studies, the individual model parameters and their uncertainties are of primary importance; for example, the initial reaction rate (slope with respect to time) of a kinetic method of analysis (1). In other studies, the graph of the whole model (and the associated uncertainty) is of interest—e.g., a calibration curve relating measured values of response to a property of a material (2). In still other studies, statistical measures of how well the model fits the data are desired; the correlation coefficient obtained in methods comparison studies is an example (3).
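
A minimal example of matrix least squares for a straight-line calibration curve, including the parameter covariance that supplies the uncertainties discussed above; the data values are illustrative.

```python
import numpy as np

# Calibration data: analyte concentration x and measured response y
x = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
y = np.array([0.05, 1.02, 2.10, 3.95, 8.12])

X = np.column_stack([np.ones_like(x), x])       # design matrix [1, x]
beta = np.linalg.solve(X.T @ X, X.T @ y)        # matrix least squares

resid = y - X @ beta
s2 = resid @ resid / (len(y) - X.shape[1])      # residual variance
cov = s2 * np.linalg.inv(X.T @ X)               # parameter covariance
print(beta, np.sqrt(np.diag(cov)))              # estimates and their SEs
```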

Journal ArticleDOI
TL;DR: The systems of linear equations satisfied by the descent direction and the Lagrange multipliers in the minimization algorithm are solved by direct methods based on QR decompositions or iterative preconditioned conjugate gradient methods.
Abstract: A computational procedure is developed for determining the solution of minimal length to a linear least squares problem subject to bounds on the variables. In the first stage, a solution to the least squares problem is computed and then in the second stage, the solution of minimal length is determined. The objective function in each step is minimized by an active set method adapted to the special structure of the problem. The systems of linear equations satisfied by the descent direction and the Lagrange multipliers in the minimization algorithm are solved by direct methods based on QR decompositions or iterative preconditioned conjugate gradient methods. The direct and the iterative methods are compared in numerical experiments, where the solutions are sought to a sequence of related, minimal least squares problems subject to bounds on the variables. The application of the iterative methods to large, sparse problems is discussed briefly.
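
The first stage (a least squares solution subject to bounds on the variables) can be reproduced with an off-the-shelf solver, as sketched below; the minimal-length second stage and the QR/preconditioned conjugate gradient internals of the paper are not shown. The problem data are illustrative.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(6)
A = rng.normal(size=(20, 5))
x_true = np.array([0.5, -0.2, 1.5, 0.0, 0.8])
b = A @ x_true + 0.05 * rng.normal(size=20)

# Linear least squares subject to simple bounds on the variables;
# two of the true coefficients violate the bounds, so the solution
# differs from the unconstrained one.
res = lsq_linear(A, b, bounds=(0.0, 1.0))
print(res.x)
```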

Journal ArticleDOI
TL;DR: In this paper, it is shown that if the least squares formulation is done in an appropriately weighted space, then optimal convergence results in unweighted spaces like L², and that mesh refinement or the use of special singular elements do not appreciably improve matters.
Abstract: Elliptic problems with corner singularities are discussed. Finite element approximations based on variational principles of the least squares type tend to display poor convergence properties in such contexts. Moreover, mesh refinement or the use of special singular elements do not appreciably improve matters. It is shown that if the least squares formulation is done in an appropriately weighted space, then optimal convergence results in unweighted spaces like L².

Journal ArticleDOI
TL;DR: In this paper, the convergence rate of the least square estimator in a non-linear regression model with errors forming either a φ-mixing or strong mixing process is analyzed.

Journal Article
TL;DR: An approximative probability density for the least squares estimates of (θ1, …, θm) is proposed; the level of approximation depends on the probability that the sample goes beyond the nearest center of curvature of the mean-values manifold.
Abstract: The nonlinear regression model yi = ηi(θ1, …, θm) + ei with (e1, …, eN) ~ N(0, Σ) and with ηi(·) twice continuously differentiable is considered. Under the assumption that the maximal curvature of the mean-values manifold {μ(θ) : θ ∈ U} ⊂ RN is bounded, an approximative probability density for the least squares estimates of (θ1, …, θm) is proposed. This density depends on the first form (= the information matrix) and on the second form of the mean-values manifold (Eq. (9)). The level of approximation depends on the probability that the sample goes beyond the nearest center of curvature of the mean-values manifold, and it is expressed in the paper (Theorem 1).

Journal ArticleDOI
TL;DR: The convergence properties of the gradient algorithm are analyzed under the assumption that the gain tends to zero, and a main result is that the convergence conditions for the gradient algorithms are the same as those for the recursive least squares algorithm.
Abstract: Parameter estimation problems that can be formulated as linear regressions are quite common in many applications. Recursive (on-line, sequential) estimation of such parameters can be performed using the recursive least squares (RLS) algorithm or a stochastic gradient version of this algorithm. In this paper the convergence properties of the gradient algorithm are analyzed under the assumption that the gain tends to zero. The technique is the same as the so-called ordinary differential equation approach, but the treatment here is self-contained and includes a proof of the boundedness of the estimates. A main result is that the convergence conditions for the gradient algorithm are the same as those for the recursive least squares algorithm.
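
A scalar-regression sketch of the two algorithms, with a stochastic gradient gain tending to zero as in the analysis, illustrating that both estimates converge to the true parameter; the signal model and gain sequence are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
N, theta_true = 5000, 2.0
x = rng.normal(size=N)
y = theta_true * x + 0.2 * rng.normal(size=N)

# Recursive least squares (scalar regression)
P, th_rls = 1000.0, 0.0
for t in range(N):
    k = P * x[t] / (1 + x[t] * P * x[t])
    th_rls += k * (y[t] - th_rls * x[t])
    P -= k * x[t] * P

# Stochastic gradient with a gain tending to zero
th_sg = 0.0
for t in range(N):
    gain = 1.0 / (t + 1)
    th_sg += gain * x[t] * (y[t] - th_sg * x[t])

print(th_rls, th_sg)   # both approach theta_true
```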


Journal ArticleDOI
01 Jan 1984
TL;DR: In the model considered, the error term is a zero-mean random vector with covariance matrix σ²I (σ² unknown).

Journal ArticleDOI
TL;DR: In this article, a generalized least squares (GLS) technique is proposed to reduce the effect of correlated errors, since the results of identification methods are very sensitive to measurement errors in the data.
Abstract: This paper concerns the methods of estimating aquifer transmissivities on the basis of unsteady state hydraulic head data. Traditionally, the criterion of minimizing the sum of the squares of errors has been used to match the observed data with the model response. The data used for optimization usually contain noise that is not necessarily uncorrelated. It is well understood that the results of identification methods are very sensitive to measurement errors in data. In this study, the ordinary least squares (OLS) technique is carried out along with a generalized least squares (GLS) technique specifically designed to reduce the effect of correlated errors. The trace of the covariance matrix is used as a measure of overall accuracy and reliability of the estimated parameters. The effectiveness of the OLS and GLS techniques in dealing with noisy data is demonstrated by using a hypothetical example. The results of numerical experiments suggest that GLS offers a promising approach in efficiently improving the reliability of the estimated parameters.
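
A sketch of GLS by Cholesky whitening against correlated errors, the kind of noise setting considered here; the AR(1) noise model and all parameters are illustrative, not the paper's aquifer example.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])

# AR(1)-correlated observation errors (illustrative noise model)
rho, e = 0.8, np.zeros(n)
for t in range(1, n):
    e[t] = rho * e[t-1] + rng.normal()
y = X @ np.array([1.0, 2.0]) + e

# Error covariance (up to scale) for AR(1) noise and its Cholesky factor
Sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
L = np.linalg.cholesky(Sigma)

# GLS = OLS on the whitened data
Xw = np.linalg.solve(L, X)
yw = np.linalg.solve(L, y)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_gls = np.linalg.lstsq(Xw, yw, rcond=None)[0]
print(beta_ols, beta_gls)
```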

Journal ArticleDOI
TL;DR: In this article, necessary and sufficient conditions for ordinary least squares estimators to be best linear unbiased estimators are described, together with a further condition for checking whether the covariance matrix of a given linear model is such that the ordinary least squares estimator is also the best linear unbiased estimator.
Abstract: Two often-quoted necessary and sufficient conditions for ordinary least squares estimators to be best linear unbiased estimators are described. Another necessary and sufficient condition is described, providing an additional tool for checking to see whether the covariance matrix of a given linear model is such that the ordinary least squares estimator is also the best linear unbiased estimator. The new condition is used to show that one of the two published conditions is only a sufficient condition.
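
One widely cited condition of this kind (that V must map the column space of X into itself) can be checked numerically, as sketched below; this is a generic textbook condition used for illustration, not necessarily the paper's new condition.

```python
import numpy as np

def ols_is_blue(X, V, tol=1e-10):
    """Check the column-space condition: OLS is BLUE iff V maps
    col(X) into itself, i.e. (I - P) V X = 0 where P is the
    orthogonal projector onto col(X)."""
    P = X @ np.linalg.solve(X.T @ X, X.T)
    VX = V @ X
    return np.linalg.norm(VX - P @ VX) < tol

n = 6
X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
print(ols_is_blue(X, np.eye(n)))        # True: V = I trivially qualifies
A = np.random.default_rng(9).normal(size=(n, n))
print(ols_is_blue(X, A @ A.T))          # almost surely False
```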

Journal ArticleDOI
TL;DR: In this article, the covariance matrix for the residuals of a regression process is written as the identity matrix plus a matrix V. The matrix V is bounded from above, and the corresponding set of generalized least squares estimates is identified.
Abstract: The covariance matrix for the residuals of a regression process is written as the identity matrix plus a matrix V. The matrix V is bounded from above, and the corresponding set of generalized least squares estimates is identified. The extreme estimates in this set are functions of the usual t statistics; in particular the number ((T − k)/8)^(1/2)/|t| measures the influence of reweighting extreme observations, where T − k gives the degrees of freedom, t is the t statistic, and the weights on observations are allowed to vary by a factor of at most 2.
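
For concreteness, the quoted influence measure evaluated at illustrative values:

```python
# The measure ((T - k)/8)**(1/2) / |t| from the abstract,
# computed for illustrative T, k, and t-statistic values.
T, k, t_stat = 30, 3, 2.5
influence = ((T - k) / 8) ** 0.5 / abs(t_stat)
print(influence)
```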