
Showing papers on "Non-linear least squares published in 1975"


Journal ArticleDOI
TL;DR: In this paper, an algorithm is given for selecting the biasing parameter, k, in RIDGE regression, which has the following properties: (i) it produces an average squared error for the regression coefficients that is less than least squares, (ii) the distribution of squared errors for the regression coefficients has a smaller variance than does that for least squares, and (iii) regardless of the signal-to-noise ratio the probability that RIDGE produces a smaller squared error than least squares is greater than 0.50.
Abstract: An algorithm is given for selecting the biasing parameter, k, in RIDGE regression. By means of simulation it is shown that the algorithm has the following properties: (i) it produces an average squared error for the regression coefficients that is less than least squares, (ii) the distribution of squared errors for the regression coefficients has a smaller variance than does that for least squares, and (iii) regardless of the signal-to-noise ratio the probability that RIDGE produces a smaller squared error than least squares is greater than 0.50.
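The ridge estimator being compared with least squares can be sketched numerically. A minimal illustration, assuming the common 1975-era selection rule k = p·s²/(b′b) with b the ordinary least squares estimate (the paper's exact algorithm is not reproduced here):

```python
import numpy as np

def ridge_estimate(X, y):
    """Ridge regression with a data-driven biasing parameter k.

    Assumed rule: k = p * s^2 / (b'b), where b is the ordinary least
    squares estimate and s^2 the residual variance; the paper's exact
    selection algorithm may differ.
    """
    n, p = X.shape
    b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b_ols
    s2 = resid @ resid / (n - p)
    k = p * s2 / (b_ols @ b_ols)
    # Ridge solves (X'X + kI) b = X'y, shrinking the coefficients.
    b_ridge = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
    return k, b_ridge

# Example: a near-collinear design, where ridge is most useful.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=50)   # nearly duplicated column
y = X @ np.array([1.0, 2.0, 0.0]) + rng.normal(size=50)
k, b = ridge_estimate(X, y)
```

For any k > 0 the ridge solution has strictly smaller Euclidean norm than the least squares solution, which is the shrinkage the simulations in the paper exploit.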

879 citations


Journal ArticleDOI
TL;DR: In this paper, the asymptotic distribution theory of least squares estimators in regression models having different analytical forms in different regions of the domain of the independent variable is discussed.
Abstract: This paper deals with the asymptotic distribution theory of least squares estimators in regression models having different analytical forms in different regions of the domain of the independent variable. An important special case is that of broken line regression, in which each segment of the regression function is a different straight line. The residual sum of squares function has many corners, and so classical least squares techniques cannot be directly applied. It is shown, however, that the problem can be transformed into a new problem in which the sum of squares function is locally smooth enough to apply the classical techniques. Asymptotic distribution theory is discussed for the new problem and it is shown that the results are also valid for the original problem. Results related to the usual normal theory are derived.
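One practical way around the corners in the residual sum of squares (a grid-search sketch, not the paper's asymptotic construction) is to profile out the breakpoint: for each fixed breakpoint the broken-line model is linear in its coefficients, so ordinary least squares applies, and the breakpoint minimizing the residual sum of squares is selected.

```python
import numpy as np

def broken_line_fit(x, y, candidates):
    """Two-segment (broken line) least squares fit.

    For each candidate breakpoint c the model
        y = b0 + b1*x + b2*max(x - c, 0)
    is linear in (b0, b1, b2), so each candidate reduces to an
    ordinary least squares problem; the best c wins the grid search.
    """
    best = None
    for c in candidates:
        Z = np.column_stack([np.ones_like(x), x, np.maximum(x - c, 0.0)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        rss = np.sum((y - Z @ beta) ** 2)
        if best is None or rss < best[0]:
            best = (rss, c, beta)
    return best  # (rss, breakpoint, coefficients)

# Example: slope changes from 1 to 3 at x = 5.
x = np.linspace(0.0, 10.0, 101)
y = np.where(x < 5, x, 5 + 3 * (x - 5))
rss, c, beta = broken_line_fit(x, y, candidates=np.linspace(1.0, 9.0, 81))
```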

209 citations


Journal ArticleDOI
TL;DR: In this paper, a new method is proposed for determining the optimal α that has proved computationally more efficient than the Golub-Pereyra scheme, in which the separable problem is reduced to a nonlinear least squares problem involving α only and a linear least squares problem involving a only.
Abstract: Consider the separable nonlinear least squares problem of finding a ∈ Rⁿ and α ∈ Rᵏ which, for given data (yᵢ, tᵢ), i = 1, 2, ..., m, and functions ϕⱼ(α, t), j = 1, 2, ..., n (m > n), minimize the functional $$r(a,\alpha ) = \left\| {y - \Phi (\alpha )a} \right\|_2^2$$ where Φ(α)ᵢⱼ = ϕⱼ(α, tᵢ). Golub and Pereyra have shown that this problem can be reduced to a nonlinear least squares problem involving α only, and a linear least squares problem involving a only. In this paper we propose a new method for determining the optimal α which computationally has proved more efficient than the Golub-Pereyra scheme.
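The reduction works because, for fixed α, the optimal linear coefficients have a closed form, leaving a minimization over α alone. A minimal sketch of this variable projection idea with an illustrative single-exponential model y ≈ a·exp(−αt) (an assumption for the example, not the paper's test problem):

```python
import numpy as np

def varpro_residual(alpha, t, y):
    """For fixed alpha the optimal linear coefficient a has a closed
    form, so the nonlinear search runs over alpha alone (the variable
    projection idea)."""
    phi = np.exp(-alpha * t)            # basis function phi(alpha, t)
    a = (phi @ y) / (phi @ phi)         # linear least squares, one column
    return np.sum((y - a * phi) ** 2), a

# Example: recover alpha from y = 2*exp(-0.7 t) by a 1-D grid search.
t = np.linspace(0.0, 5.0, 50)
y = 2.0 * np.exp(-0.7 * t)
grid = np.linspace(0.1, 2.0, 191)                  # step 0.01
rss = [varpro_residual(al, t, y)[0] for al in grid]
best_alpha = grid[int(np.argmin(rss))]
best_a = varpro_residual(best_alpha, t, y)[1]
```

In practice the grid search would be replaced by a Gauss-Newton-type iteration on the projected functional, which is where the Golub-Pereyra and the newer schemes differ.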

204 citations


Journal ArticleDOI
TL;DR: In this article, two criteria are set up to judge the relative performance of the least squares estimator and the best linear unbiased estimator of β in the linear model y = Xβ + u, where E(u) = 0, E(uu′) = Γ.
Abstract: Two criteria are set up to judge the relative performance of the least squares estimator and the best linear unbiased estimator of β in the linear model y = Xβ + u, where E(u) = 0, E(uu′) = Γ. The matrices X and Γ are found so that the relative performance of least squares is worst. Both criteria give the same least favourable situation: when ℳ(X) is any one of the 2ᵏ manifolds ℳ(y₁ ± yₙ, ..., y_k ± y_{n−k+1}), where Γyᵢ = fᵢyᵢ and f₁ < ... < fₙ are fixed, ℳ(·) denoting the subspace spanned by the columns of the relevant matrix. The case where all fᵢ may be chosen in a preassigned interval is also discussed. The practical implications of the various results are mentioned.

131 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that a separable least squares problem can be transformed to a minimization problem involving the nonlinear parameters only, which can be interpreted as a generalization of the classical technique of Prony.
Abstract: A least squares problem is called separable if the fitting function can be written as a linear combination of functions involving further parameters in a nonlinear manner. Here it is shown that a separable problem can be transformed to a minimization problem involving the nonlinear parameters only. This result can be interpreted as a generalization of the classical technique of Prony, and it also shows how the numerical difficulties associated with Prony’s method can be overcome. The transformation is then worked out in detail for two examples (exponential fitting to equispaced data and rational fitting). In both cases the condition for a stationary value of the transformed problem leads to a nonlinear eigenvalue problem. Two algorithms are then suggested and illustrated by numerical examples.
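For equispaced data, the classical Prony technique that this result generalizes makes the separation concrete: the nonlinear parameters enter as roots of a linear-prediction polynomial whose coefficients come from a linear least squares fit. A minimal sketch for a sum of two real exponentials (illustrative; the paper's eigenvalue-based algorithms are not reproduced):

```python
import numpy as np

def prony(y, p):
    """Classical Prony fit of y_t ~ sum_j c_j * mu_j**t, t = 0..len(y)-1.

    Step 1: linear least squares for the prediction coefficients a,
            using the recurrence y_t + a_1 y_{t-1} + ... + a_p y_{t-p} = 0.
    Step 2: roots of z**p + a_1 z**(p-1) + ... + a_p give the mu_j.
    Step 3: linear least squares for the amplitudes c_j.
    """
    n = len(y)
    A = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(A, -y[p:], rcond=None)
    mu = np.roots(np.concatenate([[1.0], a]))
    V = np.vander(mu.astype(complex), N=n, increasing=True).T  # V[t, j] = mu_j**t
    c, *_ = np.linalg.lstsq(V, y.astype(complex), rcond=None)
    return mu, c

# Example: two decaying exponentials, mu = 0.9 and 0.5, amplitudes 3 and 1.
t = np.arange(20)
y = 3.0 * 0.9 ** t + 1.0 * 0.5 ** t
mu, c = prony(y, p=2)
```

The numerical difficulties the abstract alludes to appear when noise is added: the prediction-coefficient fit is then badly conditioned, which is what the transformed (variable projection) formulation avoids.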

111 citations



Journal ArticleDOI
TL;DR: A proof is given for Watson's inequality on the minimum efficiency of least squares, which up to the present had remained a conjecture; Bloomfield & Watson (1975) have independently obtained a proof of the same inequality.
Abstract: Watson (1955) gave an inequality for the minimum efficiency of least squares as measured by the ratio of the generalized variances of the efficient and the inefficient estimators of the vector of regression coefficients. The inequality had been conjectured by J. Durbin. The proof given by Watson was faulty and was acknowledged as such by Watson (1967). This paper gives a proof for the inequality, which up to the present has remained a conjecture. The paper owes much to conversations over several months with J. Durbin and K. G. Binmore. Bloomfield & Watson (1975) have independently obtained a proof of the same inequality. The inequality asserts that

71 citations


Journal ArticleDOI
TL;DR: In this paper, a summary is given of the various analytic and graphical methods used for assigning errors to individual parameters determined in linear and non-linear least squares fitting procedures, with emphasis on cases with few degrees of freedom (e.g. the multipole mixing ratio problem in gamma ray spectroscopy).

66 citations


ReportDOI
TL;DR: In this article, a preliminary investigation of two specification error problems in truncated dependent variable models is reported, and an appropriate nonlinear least squares regression model is derived for situations when the micro-level model fits a tobit framework but only aggregate data are available.
Abstract: A preliminary investigation of two specification error problems in truncated dependent variable models is reported. It is shown that heteroscedasticity in a tobit model results in biased estimates when the model is misspecified. This differs from the OLS model where estimates are still consistent though inefficient. The second problem examined is aggregation. An appropriate nonlinear least squares regression model is derived for situations when the micro-level model fits a tobit framework but only aggregate data are available.

62 citations


Journal ArticleDOI
TL;DR: In this article, a general class of estimation procedures for the factor model is considered, and the procedures are shown to yield estimates possessing the same asymptotic sampling properties as those from estimation by maximum likelihood or generalized least squares, both of which are special members of the class.
Abstract: A general class of estimation procedures for the factor model is considered. The procedures are shown to yield estimates possessing the same asymptotic sampling properties as those from estimation by maximum likelihood or generalized least squares, both of which are special members of the class. General expressions for the derivatives needed for Newton-Raphson determination of the estimates are derived. Numerical examples are given, and the effect of the choice of estimation procedure is discussed.

53 citations


Journal ArticleDOI
TL;DR: In this article, a simple expression for the difference between the least squares and minimum variance linear unbiased estimators obtained in linear models in which the covariance operator of the observation vector is nonsingular was developed.
Abstract: A simple expression is developed for the difference between the least squares and minimum variance linear unbiased estimators obtained in linear models in which the covariance operator of the observation vector is nonsingular. Bounds and series expansion for this difference are obtained, and bounds for the efficiency of least squares estimates are also obtained.
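Numerically, the two estimators and their difference are easy to exhibit; a small sketch with an AR(1)-style covariance (an illustrative choice, not the paper's series expansion):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
# Nonsingular covariance operator: AR(1) pattern Gamma[i, j] = rho**|i-j|.
rho = 0.6
Gamma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
y = X @ np.array([1.0, 0.5]) + rng.multivariate_normal(np.zeros(n), Gamma)

Gi = np.linalg.inv(Gamma)
b_ols = np.linalg.solve(X.T @ X, X.T @ y)              # least squares
b_blue = np.linalg.solve(X.T @ Gi @ X, X.T @ Gi @ y)   # minimum-variance unbiased
diff = b_ols - b_blue                                  # the quantity bounded in the paper
```

Both estimators are unbiased; the paper's bounds concern how large `diff` (and the attendant efficiency loss of least squares) can be for a given Γ.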

Journal ArticleDOI
TL;DR: A more precise determination of Q and e values for vinyl monomers can be obtained through the use of a linear least squares technique, as mentioned in this paper.
Abstract: A more precise determination of Q and e values for vinyl monomers can be attained through the use of a linear least squares technique.


Journal ArticleDOI
TL;DR: In this article, a nonlinear least squares method for the calculation of spin-lattice relaxation times is described, which offers considerable time savings compared with current methods, as repetition times only up to the order of T1 are required.


Journal ArticleDOI
R. De Meersman1
TL;DR: In this article, an orthogonalization procedure for a sequence of vectors having the special feature that consecutive vectors are related by a unitary operator is given for the least squares solution of linear equations with a cyclic rectangular coefficient matrix.


Journal ArticleDOI
TL;DR: In this paper, the a3Πu and b3Σg− states in C2 (Ballik-Ramsay system) were determined by a nonlinear least squares fit directly to the observed wavelengths.
Abstract: Molecular parameters for the a3Πu and b3Σg− states in C2 (Ballik–Ramsay system) are determined by a nonlinear least squares fit directly to the observed wavelengths. No satellite lines are observed...

Journal ArticleDOI
TL;DR: In this paper, a sequence of approximate solutions is constructed which converges to the unique least squares solution of minimal norm, which does not require knowing the null spaces of the differential operator L or its adjoint L*.
Abstract: For an nth order linear boundary value problem Lf = g₀ in the Hilbert space L²[a, b], a sequence of approximate solutions is constructed which converges to the unique least squares solution of minimal norm. The method is practical from a computational viewpoint, and it does not require knowing the null spaces of the differential operator L or its adjoint L*.
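In the finite-dimensional analogue, the unique least squares solution of minimal norm is the one picked out by the Moore-Penrose pseudoinverse; a small numerical sketch (not the paper's operator-theoretic construction):

```python
import numpy as np

# Rank-deficient system: infinitely many least squares solutions exist,
# differing by null-space components; the pseudoinverse selects the one
# of minimal Euclidean norm.
A = np.array([[1.0, 1.0],
              [1.0, 1.0],
              [0.0, 0.0]])
b = np.array([1.0, 1.0, 1.0])

x_min = np.linalg.pinv(A) @ b              # minimal-norm least squares solution
x_other = x_min + np.array([1.0, -1.0])    # another LS solution (null-space shift)

# Both attain the same residual, but x_min has the smaller norm.
r_min = np.linalg.norm(A @ x_min - b)
r_other = np.linalg.norm(A @ x_other - b)
```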

Journal ArticleDOI
TL;DR: Information is given on known and new methods for improving the least squares estimators, the usual confidence regions and the F-test.
Abstract: Statistical inference for linear parameters in linear models is treated under additional information on the parameters, such as order restrictions. The paper gives a survey of known and new methods for improving the least squares estimators, the usual confidence regions and the F-test.

Proceedings ArticleDOI
01 Dec 1975
TL;DR: In this paper, a star-product formalism in scattering theory is shown to be applicable to discrete-time linear least-squares estimation problems and several other applications of the scattering framework are presented, including doubling formulas for the error covariance.
Abstract: A certain "star-product" formalism in scattering theory as developed by Redheffer is shown to also be naturally applicable to discrete-time linear least-squares estimation problems. The formalism seems to provide a nice way of handling some of the well-known algebraic complications of the discrete-time case, e.g., the distinctions between time and measurement updates, predicted and filtered estimates, etc. Several other applications of the scattering framework are presented, including doubling formulas for the error covariance, a change of initial conditions formula, equations for a backwards Markov state model, and a new derivation of the Chandrasekhar-type equations for the constant parameter case. The differences between the discrete-time and continuous-time cases are noted.
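The time-update/measurement-update distinction referred to above can be seen in a scalar Kalman filter (the standard recursions, not the star-product formalism itself):

```python
import numpy as np

def kalman_scalar(zs, a=1.0, q=0.1, h=1.0, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter exhibiting the two update types:
    time update (predicted estimate) and measurement update
    (filtered estimate)."""
    x, p = x0, p0
    filtered = []
    for z in zs:
        # Time update: propagate estimate and error covariance.
        x_pred = a * x
        p_pred = a * p * a + q
        # Measurement update: blend in the new observation.
        k = p_pred * h / (h * p_pred * h + r)    # Kalman gain
        x = x_pred + k * (z - h * x_pred)
        p = (1.0 - k * h) * p_pred
        filtered.append(x)
    return np.array(filtered), p

# Example: noisy measurements of a state near 1.
rng = np.random.default_rng(2)
zs = 1.0 + rng.normal(0.0, 0.5, size=200)
est, p_final = kalman_scalar(zs)
```

The model parameters (a, q, h, r) here are arbitrary illustrative values; the paper's contribution is an algebraic framework that organizes exactly these two update steps.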

Journal ArticleDOI
Robert L. Obenchain1
TL;DR: In this paper, it was shown that Gauss-Markov residuals and biased residuals are inferior to ordinary least squares residuals as estimators of possible lack-of-fit in the model.
Abstract: In the general linear model with observations not necessarily uncorrelated or homoscedastic, Gauss-Markov regression coefficients are superior to ordinary unweighted least squares in the well known BLU sense if the model is correct. However, it is shown that there is a weaker, but always applicable, minimum overall mean squared error sense in which Gauss-Markov residuals and biased residuals are inferior to ordinary least squares residuals as estimators of possible lack-of-fit in the model. This optimality of ordinary least squares is further illustrated by three other types of results about residuals.



Journal ArticleDOI
TL;DR: This article traces the development of the algorithm from that described by Bjorck and Golub to the present and shows how, with a slight change in algebra, the Householder triangularization may be replaced equally successfully by the simpler method of Cholesky factorization.
Abstract: An iterative algorithm for solving linear least squares problems has been developed and tested on an IBM 1620 computer. This article traces the development of the algorithm from that described by Bjorck and Golub [1] to the present, and shows how, with a slight change in algebra, the Householder triangularization may be replaced equally successfully by the simpler method of Cholesky factorization. The algorithm appears accurate and efficient even for highly ill-conditioned problems. Use of the residual vector in the iteration process is the main source of the algorithm's success.
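The Cholesky-based variant with residual-driven refinement can be sketched as follows (a simplified illustration, not the tested IBM 1620 code):

```python
import numpy as np

def lstsq_cholesky_refined(A, b, iters=3):
    """Linear least squares via Cholesky factorization of A'A,
    followed by iterative refinement driven by the residual vector,
    which is the key to the algorithm's accuracy."""
    L = np.linalg.cholesky(A.T @ A)          # A'A = L L'

    def solve_normal(rhs):
        return np.linalg.solve(L.T, np.linalg.solve(L, rhs))

    x = solve_normal(A.T @ b)                # initial normal-equations solve
    for _ in range(iters):
        r = b - A @ x                        # current residual
        x = x + solve_normal(A.T @ r)        # correction from the residual
    return x

# Example: a mildly ill-conditioned Vandermonde fitting problem.
t = np.linspace(0.0, 1.0, 20)
A = np.vander(t, 6, increasing=True)
x_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0, 0.25])
b = A @ x_true
x = lstsq_cholesky_refined(A, b)
```

Forming A′A squares the condition number, which is why the plain normal-equations solve loses accuracy; the refinement steps recover it, mirroring the article's observation that use of the residual vector is the main source of the algorithm's success.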

Journal ArticleDOI
TL;DR: For the linear model, it was shown in this article that the generalized least squares estimator is minimax with respect to a normed quadratic risk.
Abstract: For the linear model it is proven that the generalized least squares estimator is minimax with respect to a normed quadratic risk.

Journal ArticleDOI
TL;DR: In this article, results from programs based on the square root procedure are compared with results based on some other algorithms, and remarkable good results are obtained using the Square root procedure, however, very few programs use the Square Root procedure.
Abstract: Summary Various algorithms are in use on computers to solve least squares problems. Apparently very few programs use the square root procedure. In this paper, results from programs based on the square root procedure are compared with results based on some other algorithms. Remarkably good results are obtained using the square root procedure.

Journal ArticleDOI
TL;DR: It is shown that appropriate formulation of the servo problem guarantees a stable numerical solution, even when the Galerkin simulation itself is unstable, a situation not uncommon with certain hyperbolic partial differential equations.
Abstract: This paper addresses the application of linear optimal control theory to the least squares functional approximation of linear initial-boundary value problems. The method described produces the optimal approximate solution by the realization of a linear quadratic servo configuration imposed on the Galerkin simulation for the problem. It is shown that appropriate formulation of the servo problem guarantees a stable numerical solution, even when the Galerkin simulation itself is unstable, a situation not uncommon with certain hyperbolic partial differential equations. Theoretical least squares and Galerkin properties are comparatively discussed, and numerical examples demonstrating least squares convergence in the face of Galerkin divergence are presented.