
Showing papers on "Non-linear least squares published in 2008"


Journal ArticleDOI
TL;DR: The systemfit package as discussed by the authors provides the capability to estimate systems of linear equations within the R programming environment; it can be used for "ordinary least squares" (OLS), "seemingly unrelated regression" (SUR), and the instrumental variable (IV) methods "two-stage least squares" (2SLS) and "three-stage least squares" (3SLS), where SUR and 3SLS estimations can optionally be iterated.
Abstract: Many statistical analyses (e.g., in econometrics, biostatistics and experimental design) are based on models containing systems of structurally related equations. The systemfit package provides the capability to estimate systems of linear equations within the R programming environment. For instance, this package can be used for "ordinary least squares" (OLS), "seemingly unrelated regression" (SUR), and the instrumental variable (IV) methods "two-stage least squares" (2SLS) and "three-stage least squares" (3SLS), where SUR and 3SLS estimations can optionally be iterated. Furthermore, the systemfit package provides tools for several statistical tests. It has been tested on a variety of datasets and its reliability is demonstrated.
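systemfit is an R package, but the two-stage least squares idea it covers is easy to sketch with numpy: first regress the endogenous regressor on the instrument, then regress the response on the fitted values. A minimal sketch on synthetic data (all variables and coefficients below are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
z = rng.standard_normal(n)                            # instrument
u = rng.standard_normal(n)                            # unobserved confounder
x = 0.8 * z + 0.5 * u + 0.3 * rng.standard_normal(n)  # endogenous regressor
y = 2.0 * x + u + 0.3 * rng.standard_normal(n)        # true slope is 2.0

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

# OLS is inconsistent here: x is correlated with the error term through u.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# 2SLS stage 1: project x onto the instrument space.
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
# 2SLS stage 2: regress y on the projected regressor.
beta_2sls = np.linalg.lstsq(np.column_stack([np.ones(n), x_hat]), y, rcond=None)[0]
```

With this setup OLS overestimates the slope (by roughly cov(x, u)/var(x) ≈ 0.5), while the 2SLS slope lands close to the true value 2.0.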

274 citations


Journal ArticleDOI
TL;DR: A randomized algorithm for overdetermined linear least-squares regression that computes an n × 1 vector x minimizing the Euclidean norm ‖Ax − b‖ to relative precision ε, at lower cost than the classical schemes based on QR decompositions or bidiagonalization.
Abstract: We introduce a randomized algorithm for overdetermined linear least-squares regression. Given an arbitrary full-rank m × n matrix A with m ≥ n, any m × 1 vector b, and any positive real number ε, the procedure computes an n × 1 vector x such that x minimizes the Euclidean norm ‖Ax − b‖ to relative precision ε. The algorithm typically requires 𝒪((log(n) + log(1/ε))mn + n³) floating-point operations. This cost is less than the 𝒪(mn²) required by the classical schemes based on QR decompositions or bidiagonalization. We present several numerical examples illustrating the performance of the algorithm.
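The dimension-reduction idea can be illustrated with a dense Gaussian sketch in numpy. This "sketch-and-solve" toy is only an illustration under assumptions of my own: the paper's algorithm uses a fast structured random transform and refines the answer to full relative precision ε, which the snippet below does not do.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2000, 50
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Compress the m-row problem to s rows with a random sketch, then
# solve the small least squares problem exactly.
s = 400                                      # sketch size, a modest multiple of n
S = rng.standard_normal((s, m)) / np.sqrt(s)
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]

# Compare against the exact least squares solution.
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
ratio = np.linalg.norm(A @ x_sketch - b) / np.linalg.norm(A @ x_exact - b)
```

The sketched solution's residual is typically only slightly worse than optimal, at the cost of solving a 400 × 50 problem instead of a 2000 × 50 one.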

214 citations


Journal ArticleDOI
TL;DR: The authors established estimation and model selection consistency, prediction and estimation bounds and persistence for the group-lasso estimator and model selector proposed by Yuan and Lin (2006) for least squares problems when the covariates have a natural grouping structure.
Abstract: We establish estimation and model selection consistency, prediction and estimation bounds and persistence for the group-lasso estimator and model selector proposed by Yuan and Lin (2006) for least squares problems when the covariates have a natural grouping structure. We consider the case of a fixed-dimensional parameter space with increasing sample size and the double asymptotic scenario where the model complexity changes with the sample size.

214 citations


Book
02 May 2008
TL;DR: This book shows how the least squares method can be used to estimate the parameters of any model from a fit to a set of data points.
Abstract: ... .10) i=1 i=1 in which I is the number of data points included in the fit. ... the least squares method can be used to estimate parameters of any model, ...

169 citations


01 Jan 2008
TL;DR: The Fortran subroutine BVLS (bounded variable least-squares) solves linear least-squares problems with upper and lower bounds on the variables, using an active set strategy, and can also be used to solve minimum l1 and l∞ fitting problems.
Abstract: The Fortran subroutine BVLS (bounded variable least-squares) solves linear least-squares problems with upper and lower bounds on the variables, using an active set strategy. The unconstrained least-squares problems for each candidate set of free variables are solved using the QR decomposition. BVLS has a “warm-start” feature permitting some of the variables to be initialized at their upper or lower bounds, which speeds the solution of a sequence of related problems. Such sequences of problems arise, for example, when BVLS is used to find bounds on linear functionals of a model constrained to satisfy, in an approximate lp-norm sense, a set of linear equality constraints in addition to upper and lower bounds. We show how to use BVLS to solve that problem when p = 1, 2, or ∞, and to solve minimum l1 and l∞ fitting problems. FORTRAN 77 code implementing BVLS is available from the statlib gopher at Carnegie Mellon University.
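SciPy ships a Python implementation of this algorithm: scipy.optimize.lsq_linear with method="bvls" uses the Stark–Parker active-set strategy. A small synthetic example:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
x_true = np.array([0.0, 0.3, 1.0, 0.7, 0.0])
b = A @ x_true + 0.01 * rng.standard_normal(20)

# Solve min ||Ax - b||_2 subject to 0 <= x <= 1 with the BVLS active-set method.
res = lsq_linear(A, b, bounds=(0.0, 1.0), method="bvls")
x = res.x
```

Components of x_true sitting exactly at a bound come back clipped to that bound, which is the point of the active-set strategy.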

163 citations


Journal ArticleDOI
TL;DR: Uniform approximation of differentiable or analytic functions of one or several variables on a compact set K by a sequence of discrete least squares polynomials is studied; if K satisfies a Markov inequality, point evaluations on standard discretization grids provide nearly optimal approximants.

120 citations


Book
31 Aug 2008
TL;DR: This book presents data analysis using the method of least squares.
Abstract: Data analysis using the method of least squares.

83 citations


Journal ArticleDOI
TL;DR: The proposed sparse sum of squares techniques significantly improve the computational performance of prior methods for solving these problems and are especially useful in solving sparse polynomial systems and nonlinear least squares problems.
Abstract: This paper discusses how to find the global minimum of functions that are summations of small polynomials (“small” means involving a small number of variables). Some sparse sum of squares (SOS) techniques are proposed. We compare their computational complexity and lower bounds with prior SOS relaxations. Under certain conditions, we also discuss how to extract the global minimizers from these sparse relaxations. The proposed methods are especially useful in solving sparse polynomial system and nonlinear least squares problems. Numerical experiments are presented which show that the proposed methods significantly improve the computational performance of prior methods for solving these problems. Lastly, we present applications of this sparsity technique in solving polynomial systems derived from nonlinear differential equations and sensor network localization.

63 citations


Journal ArticleDOI
TL;DR: In this paper, a generalized least squares and a generalized method of moment estimators for dynamic panel data models with both individual-specific and time-specific effects were proposed, and Monte Carlo studies were conducted to investigate the finite sample properties of various estimators.

59 citations


Journal ArticleDOI
Jie Ding1, Feng Ding1
TL;DR: A mathematical model is derived by using the polynomial transformation technique, and the extended least squares algorithm is applied to identify the dual-rate systems directly from the available input-output data {u(t),y(qt)}.
Abstract: In this paper, we focus on a class of dual-rate sampled-data systems in which all the inputs u(t) are available at each instant while only scarce outputs y(qt) can be measured (q being an integer greater than one). To estimate the parameters of such dual-rate systems, we derive a mathematical model by using the polynomial transformation technique, and apply the extended least squares algorithm to identify the dual-rate systems directly from the available input-output data {u(t),y(qt)}. Then, we study the convergence properties of the algorithm in detail. Finally, we give an example to test and illustrate the algorithm involved.

58 citations


Journal ArticleDOI
TL;DR: In this paper, the second-order least squares estimator is shown to be asymptotically more efficient than the ordinary least squares estimator if the third moment of the random error is nonzero, while the two estimators have the same asymptotic covariance matrix if the error distribution is symmetric.
Abstract: The ordinary least squares estimation is based on minimization of the squared distance of the response variable to its conditional mean given the predictor variable. We extend this method by including in the criterion function the distance of the squared response variable to its second conditional moment. It is shown that this “second-order” least squares estimator is asymptotically more efficient than the ordinary least squares estimator if the third moment of the random error is nonzero, and both estimators have the same asymptotic covariance matrix if the error distribution is symmetric. Simulation studies show that the variance reduction of the new estimator can be as high as 50% for sample sizes lower than 100. As a by-product, the joint asymptotic covariance matrix of the ordinary least squares estimators for the regression parameter and for the random error variance is also derived, which is only available in the literature for very special cases, e.g. that random error has a normal distribution. The results apply to both linear and nonlinear regression models, where the random error distributions are not necessarily known.

Journal ArticleDOI
TL;DR: Fully discrete, steepest descent based schemes for solving the resulting optimization problems are developed and the adjoint method is used to accurately and efficiently compute requisite gradients.

Journal ArticleDOI
TL;DR: For high SNRs, the proposed method is close to optimal, with an error variance close to the predictions made by the CRB; its performance was also compared with that of a recently proposed seven-parameter fit.
Abstract: The Cramer-Rao bound (CRB) is a lower bound on the error variance of any estimator. For a Gaussian scenario, the CRB is derived for a seven-parameter, dual-channel sine-wave model, which is a model relevant to applications such as impedance measurements and the estimation of particle size and velocity by laser anemometry. Four different parameterizations were considered: the common quadrature/in-phase and amplitude-phase models and two relative amplitude-phase models. The CRB indicated the achievable error variance of an unbiased estimator as a function of the signal-to-noise ratio (SNR), the number of samples, and noise power. A nonlinear least squares fit of the signal model to the collected data was employed. The problem at hand is separable and can be solved by a 1-D search followed by a linear least squares fit of the remaining parameters. The performance of the method was investigated with the aid of a simulation study, and the outcome was compared with that of the corresponding CRB and with a recently proposed seven-parameter fit. For high SNRs, the performance of the proposed method is close to optimal with an error variance close to the predictions made by the CRB.
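The separable structure noted above — amplitudes and offset enter the model linearly, so only the frequency needs a 1-D search — can be sketched in numpy. The single-channel three-parameter model and all numbers below are invented for illustration; the paper's model is a seven-parameter dual-channel one.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(200, dtype=float)
f_true = 0.1  # cycles per sample
y = 1.2 * np.cos(2 * np.pi * f_true * t) - 0.7 * np.sin(2 * np.pi * f_true * t) + 0.3
y += 0.05 * rng.standard_normal(t.size)

# For each candidate frequency, the in-phase/quadrature amplitudes and the
# offset are found by a linear least squares fit; the 1-D search then picks
# the frequency with the smallest residual norm.
def fit_at(f):
    X = np.column_stack([np.cos(2 * np.pi * f * t),
                         np.sin(2 * np.pi * f * t),
                         np.ones_like(t)])
    coef = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.linalg.norm(X @ coef - y), coef

freqs = np.linspace(0.05, 0.15, 2001)
f_hat = min(freqs, key=lambda f: fit_at(f)[0])
coef_hat = fit_at(f_hat)[1]
```

In practice the grid search would be followed by a local refinement of the frequency, but the sketch shows how the nonlinear problem collapses to one dimension.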

Journal ArticleDOI
TL;DR: In this paper, fitting an ellipse to scattered points is posed as a least squares problem that admits both a linear and a nonlinear formulation, the latter solved using a genetic algorithm.

01 Jan 2008
TL;DR: In this article, for a linear system with no solution, a vector x is chosen to bring the left- and right-hand sides as close as possible, i.e., to minimize the distance ‖Ax − b‖2 over all x, where distance is measured in the two-norm.
Abstract: Here we solve linear systems Ax = b that do not have a solution. If b is not in the column space of A, there is no x such that Ax = b. The best we can do is to find a vector x that brings the left- and right-hand sides of the linear system as close as possible; that is, we want to minimize the distance ‖Ax − b‖2 over all x, where distance will again be measured in the two-norm.
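A minimal numpy illustration of this problem, using a small inconsistent 3 × 2 system: solving the normal equations AᵀAx = Aᵀb agrees with numpy's lstsq, which uses a more numerically stable factorization internally.

```python
import numpy as np

# An inconsistent system: b is not in the column space of A.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([0.0, 1.0, 1.0])

# Minimize ||Ax - b||_2 via the normal equations A^T A x = A^T b ...
x_normal = np.linalg.solve(A.T @ A, A.T @ b)
# ... or equivalently via lstsq, which factorizes A instead of forming A^T A.
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]
```

For this system both routes give x = (1/6, 1/2); the residual Ax − b is nonzero but orthogonal to the column space of A.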

Journal ArticleDOI
Cheol-Taek Kim1, Ju-Jang Lee1
TL;DR: The variable projection (VP) method for separable nonlinear least squares (SNLLS) is presented and incorporated into the Levenberg-Marquardt optimization algorithm for training two-layered feedforward neural networks.
Abstract: The variable projection (VP) method for separable nonlinear least squares (SNLLS) is presented and incorporated into the Levenberg-Marquardt optimization algorithm for training two-layered feedforward neural networks. It is shown that the Jacobian of variable projected networks can be computed by simple modification of the backpropagation algorithm. The suggested algorithm is efficient compared to conventional techniques such as conventional Levenberg-Marquardt algorithm (LMA), hybrid gradient algorithm (HGA), and extreme learning machine (ELM).

Journal ArticleDOI
TL;DR: In this article, an alternative method is presented for the Herschel-Bulkley model which eliminates the complexity associated with a general numerical method, and so offers potential benefits when dealing with the model in practice.

Proceedings ArticleDOI
19 May 2008
TL;DR: The new method bypasses the need to measure or estimate joint position, velocity and acceleration by using both Direct and Inverse Dynamic Identification Models (DIDIM); it needs only torque data at a low sample rate.
Abstract: The identification of the dynamic parameters of a robot is based on the use of the inverse dynamic model, which is linear with respect to the parameters. This model is sampled while the robot is tracking trajectories which excite the system dynamics in order to get an overdetermined linear system. The linear least squares solution of this system calculates the estimated parameters. The efficiency of this method has been proved through the experimental identification of many prototype and industrial robots. However, this method needs joint torque and position measurements and the estimation of the joint velocities and accelerations through band-pass filtering of the joint position at a high sample rate. The new method bypasses the need to measure or estimate joint position, velocity and acceleration by using both Direct and Inverse Dynamic Identification Models (DIDIM). It needs only torque data at a low sample rate. It is based on a closed-loop simulation which integrates the direct dynamic model. The optimal parameters minimize the 2-norm of the error between the actual torque and the simulated torque, assuming the same control law and the same tracking trajectory. This nonlinear least squares problem is dramatically simplified using the inverse model to calculate the derivatives of the cost function.

Journal ArticleDOI
TL;DR: By means of the real representation of a quaternion matrix, a concept of the norm of quaternion matrices is introduced, which differs from that in [T. Jiang, L. Chen, Algebraic algorithms for least squares problem in quaternionic quantum theory, Comput. Phys. Comm. 176 (2007) 481–485], and an iterative method for finding the minimum-norm solution is derived.

Journal ArticleDOI
TL;DR: In this article, a method to estimate the dynamic parameters of a synchronous generator, based on measured electrical power, reactive power, terminal voltage, field current, field voltage and rotor angle following a small perturbation of the field voltage, is described.
Abstract: A method to estimate the dynamic parameters of the commonly used third-order d - q model of a synchronous generator, based on measured electrical power, reactive power, terminal voltage, field current, field voltage and rotor angle following a small perturbation of the field voltage, is described. The parameters are estimated from two newly developed nonlinear functions for electrical power and terminal voltage by using a nonlinear least squares (NLS) algorithm. Results of simulation studies and experimental data collected from an 80 MVA, 10.5 kV generator show the efficacy of the proposed method and also reveal that the proposed method is valid for a wide range of operating conditions. For cases where rotor angle is not available, a new method for rotor angle estimation is also proposed.

Book ChapterDOI
TL;DR: This chapter provides an overview of the techniques involved in "fitting equations to experimental data," with particular emphasis on what can be learned from these techniques and what they require of the experimental data.
Abstract: This chapter provides an overview of the techniques involved in "fitting equations to experimental data," with particular emphasis on what can be learned with these techniques, what they require of the experimental data, and what their underlying assumptions are. The layout of this chapter is to start with a set of experimental data and then walk the reader through the analysis of this set of data. The rigorous mathematical methods are referenced but not presented in detail.
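As a concrete (hypothetical) instance of fitting an equation to experimental data, SciPy's curve_fit performs the nonlinear least squares fit and returns a parameter covariance matrix from which standard errors follow. The decay model and data below are synthetic, not from the chapter:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical experiment: exponential decay sampled with additive noise.
def model(t, amplitude, rate):
    return amplitude * np.exp(-rate * t)

rng = np.random.default_rng(3)
t = np.linspace(0.0, 5.0, 50)
y = model(t, 2.0, 0.8) + 0.02 * rng.standard_normal(t.size)

# Nonlinear least squares fit; the diagonal of the covariance matrix
# gives approximate standard errors for the fitted parameters.
popt, pcov = curve_fit(model, t, y, p0=(1.0, 1.0))
stderr = np.sqrt(np.diag(pcov))
```

The standard errors are exactly the kind of "what can be learned" quantity the chapter discusses: they say how well the data actually pin down each parameter.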

Book ChapterDOI
23 Apr 2008
TL;DR: This paper reviews and analyze existing least squares orthogonal distance fitting techniques in a general numerical optimization framework and proposes two new geometric variant methods (GTDM and CDM).
Abstract: Fitting of data points by parametric curves and surfaces is demanded in many scientific fields. In this paper we review and analyze existing least squares orthogonal distance fitting techniques in a general numerical optimization framework. Two new geometric variant methods (GTDM and CDM) are proposed. The geometric meanings of existing and modified optimization methods are also revealed.
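Orthogonal distance fitting — measuring errors perpendicular to the curve rather than vertically — is available in SciPy as scipy.odr. A minimal straight-line sketch on synthetic data with noise in both coordinates (not one of the paper's methods, just the general technique):

```python
import numpy as np
from scipy import odr

rng = np.random.default_rng(5)
x_true = np.linspace(0.0, 4.0, 30)
y_true = 1.5 * x_true + 0.5
# Noise in BOTH coordinates: ordinary least squares penalizes only
# vertical residuals, while ODR minimizes orthogonal distances.
x_obs = x_true + 0.1 * rng.standard_normal(x_true.size)
y_obs = y_true + 0.1 * rng.standard_normal(y_true.size)

model = odr.Model(lambda beta, x: beta[0] * x + beta[1])
data = odr.RealData(x_obs, y_obs, sx=0.1, sy=0.1)
fit = odr.ODR(data, model, beta0=[1.0, 0.0]).run()
slope, intercept = fit.beta
```

When the x-noise is comparable to the y-noise, ordinary regression attenuates the slope; the orthogonal fit does not suffer from that bias.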

Journal ArticleDOI
TL;DR: The asymptotic normality and strong consistency of the fuzzy least squares estimator (FLSE) are investigated; a confidence region based on a class of FLSEs is proposed; and the asymptotic relative efficiency of FLSEs with respect to the crisp least squares estimators is provided.

Journal ArticleDOI
TL;DR: In this article, a modification of the nonlinear least squares fitting technique of Vinas and Scudder and Szabo (VSSz) with simultaneous determination of the shock normal direction (θ and ϕ) and propagation speed (VS) was introduced.
Abstract: We introduce a modification of the nonlinear least squares fitting technique of Vinas and Scudder and Szabo (VSSz) with simultaneous determination of the shock normal direction (θ and ϕ) and propagation speed (VS). Similar to the 2D case of the VSSz technique, the uniqueness of the solution can still be graphically demonstrated in the 3D space of the unknown variables. The modified technique is validated through the analysis of synthetic shocks and is also applied to an interplanetary shock observed by Wind. Our technique provides self-consistent 3D confidence regions for the parameters, while the VSSz technique assumes the independence of the VS confidence interval from θ and ϕ. The 3D confidence region is highly dependent on VS, resulting in θ and ϕ joint confidence regions that are generally significantly larger and oriented differently than those obtained by the VSSz technique. This also leads to significantly larger confidence intervals for the individual parameters determined by our modified technique. While the best fit values provided by the two techniques are usually close to each other, we also demonstrate the advantage of the VS best fit value determination with our technique in the case when a small density jump is combined with significant density fluctuations. The agreement between the best fit solutions of the techniques can also be used as a test for the correctness of the chosen upstream and downstream intervals.

Journal ArticleDOI
TL;DR: In this paper, modified maximum likelihood estimators are derived and shown to be robust and considerably more efficient than the traditional least squares estimators besides being insensitive to moderate design anomalies, and a reparametrization of the model is proposed to rectify the situation.

Book
01 Aug 2008
TL;DR: This book explains the development of orthogonal polynomial approximation and discusses its applications in linear and nonlinear system identification, as well as its use in numerical simulation.
Abstract: Contents: Least Square Methods (The Least Square Algorithm; Linear Least Square Methods; Nonlinear Least Squares Algorithm; Properties of Least Square Algorithms; Examples). Polynomial Approximation (Gram-Schmidt Procedure of Orthogonalization; Hypergeometric Function Approach to Generate Orthogonal Polynomials; Discrete Variable Orthogonal Polynomials; Approximation Properties of Orthogonal Polynomials). Artificial Neural Networks for Input-Output Approximation (Introduction; Direction-Dependent Approach; Directed Connectivity Graph; Modified Minimal Resource Allocating Algorithm (MMRAN); Numerical Simulation Examples). Multi-Resolution Approximation Methods (Wavelets; Bezier Spline; Moving Least Squares Method; Adaptive Multi-Resolution Algorithm; Numerical Results). Global-Local Orthogonal Polynomial MAPping (GLO-MAP) in N Dimensions (Basic Ideas; Approximation in 1, 2, and N Dimensions Using Weighting Functions; Global-Local Orthogonal Approximation in 1-, 2-, and N-Dimensional Spaces; Algorithm Implementation; Properties of GLO-MAP Approximation; Illustrative Engineering Applications). Nonlinear System Identification (Problem Statement and Background; Novel System Identification Algorithm; Nonlinear System Identification Algorithm; Numerical Simulation). Distributed Parameter Systems (MLPG-Moving Least Squares Approach; Partition of Unity Finite Element Method). Control Distribution for Over-Actuated Systems (Problem Statement and Background; Control Distribution Functions; Hierarchical Control Distribution Algorithm; Numerical Results). Appendix. References. Index. Each chapter contains an Introduction and a Summary.

Journal ArticleDOI
TL;DR: In this article, the authors show that the least square error of approximation at any ball is bounded by an average of the discrete Menger-type curvature over simplices in the ball.
Abstract: This is the second of two papers wherein we estimate multiscale least squares approximations of certain measures by Menger-type curvatures. More specifically, we study an arbitrary d-regular measure on a real separable Hilbert space. The main result of the paper bounds the least squares error of approximation at any ball by an average of the discrete Menger-type curvature over certain simplices in the ball. A consequent result bounds the Jones-type flatness by an integral of the discrete curvature over all simplices. The preceding paper provided the opposite inequalities. Furthermore, we demonstrate some other discrete curvatures for characterizing uniform rectifiability and additional continuous curvatures for characterizing special instances of the (p, q)-geometric property. We also show that a curvature suggested by Leger (Annals of Math, 149(3), p. 831-869, 1999) does not fit within our framework.

Journal ArticleDOI
TL;DR: In this correspondence, parameter estimation of a polynomial phase signal in additive white Gaussian noise is addressed and the problem of cubic phase signal estimation is studied in detail and its simplification for a chirp signal is given.
Abstract: In this correspondence, parameter estimation of a polynomial phase signal (PPS) in additive white Gaussian noise is addressed. Assuming that the order of the PPS is at least 3, the basic idea is first to separate its phase parameters into two sets by a novel signal transformation procedure, and then the multiple signal classification (MUSIC) method is utilized for joint estimating the phase parameters with second-order and above. In doing so, the parameter search dimension is reduced by a half as compared to the maximum likelihood and nonlinear least squares approaches. In particular, the problem of cubic phase signal estimation is studied in detail and its simplification for a chirp signal is given. The effectiveness of the proposed approach is also demonstrated by comparing with several conventional techniques via computer simulations.

Journal ArticleDOI
TL;DR: A regularized least squares based support vector machine is proposed for simultaneously approximating a function and its derivatives; obtaining its solution requires only solving a structured system of linear equations.

Journal ArticleDOI
TL;DR: The problem of nonlinear weighted least squares fitting of the three-parameter Weibull distribution to the given data (wi,ti,yi), i=1,...,n, is considered and it is shown that the best least squares estimate exists.
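A weighted nonlinear least squares fit of the three-parameter Weibull CDF of the kind described above can be sketched with scipy.optimize.least_squares. The data (wᵢ, tᵢ, yᵢ), weights, and starting values below are synthetic, and bounding the location parameter below the smallest tᵢ is a common practical trick, not something taken from the paper:

```python
import numpy as np
from scipy.optimize import least_squares

# Three-parameter Weibull CDF: F(t) = 1 - exp(-((t - c) / a) ** b), t > c.
def weibull_cdf(t, a, b, c):
    return 1.0 - np.exp(-((t - c) / a) ** b)

# Synthetic data (w_i, t_i, y_i); the weights and true parameters are invented.
a_true, b_true, c_true = 2.0, 1.5, 0.5
t = np.linspace(0.6, 8.0, 40)
y = weibull_cdf(t, a_true, b_true, c_true)
w = np.ones_like(t)

# Weighted residuals: minimize sum_i w_i * (F(t_i) - y_i)^2.
def resid(p):
    return np.sqrt(w) * (weibull_cdf(t, *p) - y)

# Keeping the location parameter c below the smallest t keeps (t - c) positive.
fit = least_squares(resid, x0=(1.0, 1.0, 0.0),
                    bounds=([1e-6, 1e-6, -5.0], [np.inf, np.inf, 0.59]))
```

On this noiseless synthetic data the fit recovers (a, b, c) = (2.0, 1.5, 0.5); existence of a best fit for noisy data is exactly what the paper establishes.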