
Showing papers on "Non-linear least squares published in 1998"



01 Jan 1998
TL;DR: This paper presents a numerically stable non-iterative algorithm for fitting an ellipse to a set of data points. The approach is based on a least squares minimization, which leads to a simple, stable, and robust fitting method that is easy to implement.
Abstract: This paper presents a numerically stable non-iterative algorithm for fitting an ellipse to a set of data points. The approach is based on a least squares minimization and it guarantees an ellipse-specific solution even for scattered or noisy data. The optimal solution is computed directly, no iterations are required. This leads to a simple, stable and robust fitting method which can be easily implemented. The proposed algorithm has no computational ambiguity and it is able to fit more than 100,000 points in a second.

520 citations
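As a rough illustration of the direct, non-iterative fit described above, the basic generalized-eigenvalue formulation of the ellipse constraint 4ac - b^2 = 1 can be sketched in a few lines of NumPy. This is a minimal sketch of the classic direct fit, not the numerically stabler block formulation the paper develops; the function name and data handling are illustrative only.

```python
import numpy as np

def fit_ellipse_direct(x, y):
    """Least-squares conic fit constrained to be an ellipse (4ac - b^2 = 1).

    Minimal sketch of the direct (non-iterative) ellipse fit; the paper's
    numerically stabler block decomposition is omitted here.
    """
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])  # design matrix
    S = D.T @ D                                                        # scatter matrix
    C = np.zeros((6, 6))                                               # ellipse constraint matrix
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    # Generalized eigenproblem S a = lambda C a; the ellipse-specific solution
    # corresponds to the single positive eigenvalue of S^{-1} C.
    evals, evecs = np.linalg.eig(np.linalg.solve(S, C))
    k = np.argmax(evals.real)
    a = np.real(evecs[:, k])
    return a / np.linalg.norm(a)   # conic coefficients [A, B, C, D, E, F]
```

For noisy points roughly on an ellipse, the returned six conic coefficients can then be converted to centre, axes, and orientation in the usual way.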


Journal ArticleDOI
TL;DR: In this article, the authors present an algorithm to determine if a real polynomial is a sum of squares (of polynomials) and to find an explicit representation if it is.

252 citations


Book ChapterDOI
Er-Wei Bai
21 Jun 1998
TL;DR: In this article, an optimal two-stage identification algorithm for Hammerstein-Wiener systems is presented, which is shown to be convergent in the absence of noise and convergent with probability one in the presence of white noise.
Abstract: An optimal two-stage identification algorithm is presented for Hammerstein-Wiener systems, where two static nonlinear elements surround a linear block. The proposed algorithm consists of two steps: the first is recursive least squares, and the second is the singular value decomposition of two matrices whose dimensions are fixed and do not increase as the number of data points increases. Moreover, the algorithm is shown to be convergent in the absence of noise and convergent with probability one in the presence of white noise.

241 citations


Journal ArticleDOI
TL;DR: The method of fitting nonlinear functions with Solver is introduced, and the treatment is extended to weighted least squares and to the estimation of uncertainties in the least-squares parameters.
Abstract: "Solver" is a powerful tool in the Microsoft Excel spreadsheet that provides a simple means of fitting experimental data to nonlinear functions. The procedure is so easy to use and its mode of operation is so obvious that it is excellent for students learning the underlying principle of least-squares curve fitting. This article introduces the method of fitting nonlinear functions with Solver and extends the treatment to weighted least squares and to the estimation of uncertainties in the least-squares parameters.

237 citations
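Outside Excel, the same idea of a weighted nonlinear fit with parameter uncertainties can be sketched with SciPy; the exponential model and data below are placeholders for whatever function is being fitted, and the uncertainties are read off the covariance matrix returned by curve_fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, k):
    """Placeholder model; replace with the nonlinear function of interest."""
    return a * np.exp(-k * t)

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([9.9, 6.2, 3.8, 2.3, 1.5, 0.9])
sigma = 0.1 * np.ones_like(y)            # per-point standard deviations (weights = 1/sigma^2)

popt, pcov = curve_fit(model, t, y, p0=(10.0, 0.5), sigma=sigma, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))            # 1-sigma uncertainties of the fitted parameters
print("a = %.3f +/- %.3f, k = %.3f +/- %.3f" % (popt[0], perr[0], popt[1], perr[1]))
```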


Journal ArticleDOI
TL;DR: In this paper, it is shown that the conditional least squares estimator of the parameters including the threshold parameter is root-n consistent and asymptotically normally distributed in the continuous threshold autoregressive model.
Abstract: The continuous threshold autoregressive model is a sub-class of the threshold autoregressive model subject to the requirement that the piece-wise linear autoregressive function be continuous everywhere. In contrast with the discontinuous case, it is shown that, under suitable regularity conditions, the conditional least squares estimator of the parameters including the threshold parameter is root-n consistent and asymptotically normally distributed. The theory is illustrated by a simulation study and is applied to the quarterly U.S. unemployment rates.

213 citations


Journal ArticleDOI
TL;DR: In this paper, a least squares method is developed for minimizing ||Y - XB^T||_F^2 (the sum of squares of all elements of Y - XB^T) over a matrix B subject to the constraint that the columns of B are unimodal, i.e. each has only one peak.
Abstract: In this paper a least squares method is developed for minimizing ||Y - XB^T||_F^2 over the matrix B subject to the constraint that the columns of B are unimodal, i.e. each has only one peak, where ||M||_F^2 denotes the sum of squares of all elements of M. This method is directly applicable in many curve resolution problems, but also for stabilizing other problems where unimodality is known to be a valid assumption. Typical problems arise in certain types of time series analysis such as chromatography or flow injection analysis. A fundamental and surprising result of this work is that unimodal least squares regression (including optimization of mode location) is not any more difficult than two simple Kruskal monotone regressions. This had not been realized earlier, leading to the use of either undesirable ad hoc methods or very time-consuming exhaustive search algorithms. The new method is useful in and exemplified with two- and multi-way methods based on alternating least squares regression solving problems from fluorescence spectroscopy and flow injection analysis. © 1998 John Wiley & Sons, Ltd.

172 citations
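The connection between unimodal and monotone (Kruskal) regression can be illustrated with a deliberately brute-force sketch: for every split point, fit a non-decreasing isotonic regression to the left part and a non-increasing one to the right part, and keep the split with the smallest squared error. The paper's point is that this search can be done far more cheaply, essentially at the cost of two monotone regressions; the column-wise application to B in ||Y - XB^T||_F^2 is also omitted here.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def _iso(y_seg, increasing):
    """Monotone (isotonic) least-squares fit of one segment."""
    if len(y_seg) < 2:
        return np.asarray(y_seg, dtype=float)
    return IsotonicRegression(increasing=increasing).fit_transform(
        np.arange(len(y_seg)), y_seg)

def unimodal_ls(y):
    """Brute-force unimodal least-squares fit of a 1-D sequence.

    The concatenation of a non-decreasing and a non-increasing segment is
    always unimodal, so searching over all split points and fitting the two
    halves independently finds the unimodal least-squares solution.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    best_fit, best_sse = None, np.inf
    for m in range(n + 1):                     # m = length of the rising part
        fit = np.concatenate([_iso(y[:m], True), _iso(y[m:], False)])
        sse = float(np.sum((fit - y) ** 2))
        if sse < best_sse:
            best_fit, best_sse = fit, sse
    return best_fit
```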


Journal ArticleDOI
TL;DR: Two new robust optic flow methods are introduced that outperform other published methods in both accuracy and robustness; the second uses total least squares to solve the optic flow problem.
Abstract: This paper formulates the optic flow problem as a set of over-determined simultaneous linear equations. It then introduces and studies two new robust optic flow methods. The first technique is based on using the Least Median of Squares (LMedS) to detect the outliers; the inlier group is then solved using the least squares technique. The second method employs a new robust statistical method named the Least Median of Squares Orthogonal Distances (LMSOD) to identify the outliers and then uses total least squares to solve the optic flow problem. The performance of both methods is studied by experiments on synthetic and real image sequences. These methods outperform other published methods in both accuracy and robustness.

142 citations
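The total least squares step used by the second method (after outlier rejection) amounts to the standard SVD-based TLS solution of an over-determined system Ax ≈ b; the sketch below shows only that generic step, not the LMedS/LMSOD outlier detection itself.

```python
import numpy as np

def total_least_squares(A, b):
    """Solve A x ~= b in the total least squares sense.

    Generic TLS via the SVD of the augmented matrix [A | b]: the solution is
    read off the right singular vector associated with the smallest singular
    value (assuming its last component is nonzero).
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    _, _, Vt = np.linalg.svd(np.hstack([A, b]))
    v = Vt[-1]                        # right singular vector of the smallest singular value
    return -v[:-1] / v[-1]            # x such that [x; -1] spans the null direction
```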


Journal ArticleDOI
TL;DR: The paper gives the statistical analysis for this algorithm, studies the global asymptotic convergence of this algorithm by an equivalent energy function, and evaluates the performance of this algorithm via computer simulations.
Abstract: Widrow (1971) proposed the least mean squares (LMS) algorithm, which has been extensively applied in adaptive signal processing and adaptive control. The LMS algorithm is based on the minimum mean squares error. On the basis of the total least mean squares error, or the minimum Rayleigh quotient, we propose the total least mean squares (TLMS) algorithm. The paper gives the statistical analysis for this algorithm, studies the global asymptotic convergence of this algorithm by an equivalent energy function, and evaluates the performance of this algorithm via computer simulations.

133 citations


Journal ArticleDOI
TL;DR: The WRELAX method is a relaxation-based minimizer of a complicated nonlinear least squares criterion and can be applied to detecting and classifying roadway subsurface anomalies by using an ultra-wideband ground-penetrating radar.
Abstract: We present a conceptually simple and computationally efficient algorithm, referred to as WRELAX, for the well-known time delay estimation problem. The method is a relaxation-based minimizer of a complicated nonlinear least squares criterion. WRELAX can be applied to detecting and classifying roadway subsurface anomalies by using an ultra-wideband ground-penetrating radar. Numerical and experimental examples are provided to demonstrate the performance of the new algorithm.

129 citations


Journal ArticleDOI
TL;DR: In this article, a simple estimator called the singular value decomposition (SVD) estimator is proposed for bilinear regression models, which is faster and more easily computed.

Journal ArticleDOI
TL;DR: The motivation for developing this method was the complexity of existing statistical methods for analysis of biochemical rate equations, as well as the shortcomings of linear approaches, such as Lineweaver-Burk plots.
Abstract: A convenient method for evaluation of biochemical reaction rate coefficients and their uncertainties is described. The motivation for developing this method was the complexity of existing statistical methods for analysis of biochemical rate equations, as well as the shortcomings of linear approaches, such as Lineweaver-Burk plots. The nonlinear least-squares method provides accurate estimates of the rate coefficients and their uncertainties from experimental data. Linearized methods that involve inversion of data are unreliable since several important assumptions of linear regression are violated. Furthermore, when linearized methods are used, there is no basis for calculation of the uncertainties in the rate coefficients. Uncertainty estimates are crucial to studies involving comparisons of rates for different organisms or environmental conditions. The spreadsheet method uses weighted least-squares analysis to determine the best-fit values of the rate coefficients for the integrated Monod equation. Although the integrated Monod equation is an implicit expression of substrate concentration, weighted least-squares analysis can be employed to calculate approximate differences in substrate concentration between model predictions and data. An iterative search routine in a spreadsheet program is utilized to search for the best-fit values of the coefficients by minimizing the sum of squared weighted errors. The uncertainties in the best-fit values of the rate coefficients are calculated by an approximate method that can also be implemented in a spreadsheet. The uncertainty method can be used to calculate single-parameter (coefficient) confidence intervals, degrees of correlation between parameters, and joint confidence regions for two or more parameters. Example sets of calculations are presented for acetate utilization by a methanogenic mixed culture and trichloroethylene cometabolism by a methane-oxidizing mixed culture. An additional advantage of application of this method to the integrated Monod equation compared with application of linearized methods is the economy of obtaining rate coefficients from a single batch experiment or a few batch experiments rather than having to obtain large numbers of initial rate measurements. However, when initial rate measurements are used, this method can still be used with greater reliability than linearized approaches.

Journal ArticleDOI
TL;DR: In this paper, the authors compare the performance of both direct trilinear decomposition (DTD) and multivariate curve resolution-alternating least squares (MCR-ALS) methods.

Journal ArticleDOI
TL;DR: Fast new algorithms for evaluating trees with respect to least squares and minimum evolution (ME), the most commonly used criteria for inferring phylogenetic trees from distance data, are presented.
Abstract: We present fast new algorithms for evaluating trees with respect to least squares and minimum evolution (ME), the most commonly used criteria for inferring phylogenetic trees from distance data. These include: an optimal O(N^2) time algorithm for calculating the branch (edge) lengths on a tree according to ordinary or unweighted least squares (OLS); an O(N^3) time algorithm for edge lengths under weighted least squares (WLS) and the Fitch-Margoliash method; and an optimal O(N^4) time algorithm for generalised least squares (GLS) edge lengths. The minimum evolution criterion is based on the sum of edge lengths. Consequently, the edge length algorithms presented here lead directly to O(N^2), O(N^3) and O(N^4) time algorithms for ME under OLS, WLS and GLS respectively. These algorithms are substantially faster than all those previously published, and the algorithms for OLS and GLS are the fastest possible (with respect to order of computational complexity). An optimal algorithm for determining path lengths in a tree with given edge lengths is also developed. This leads to an optimal O(N^2) algorithm for OLS sums of squares evaluation and corresponding O(N^3) and O(N^4) time algorithms for WLS and GLS sums of squares, respectively. The GLS algorithm is time optimal if the covariance matrix is already inverted. The considerable increases in speed enable far more extensive tree searches and statistical evaluations (e.g. bootstrap, parametric bootstrap or jackknife). Hopefully, the fast algorithms for WLS and GLS will encourage their use for evaluating trees and their edge lengths (e.g. for approximate divergence time estimates), since they should be more statistically efficient than OLS.

Journal ArticleDOI
TL;DR: It is shown here that replacing the SVD by a low-rank revealing decomposition speeds up the computations without affecting the accuracy of the wanted parameter estimates.

Journal ArticleDOI
TL;DR: Inverse modeling has become a standard technique for estimating hydrogeologic parameters, as discussed by the authors; the robustness of alternative, robust estimators is tested by means of Monte Carlo simulations of a synthetic experiment in which both non-Gaussian random errors and systematic modeling errors are introduced.
Abstract: Inverse modeling has become a standard technique for estimating hydrogeologic parameters. These parameters are usually inferred by minimizing the sum of the squared differences between the observed system state and the one calculated by a mathematical model. The robustness of the least squares criterion, however, has to be questioned because of the tendency of outliers in the measurements to strongly influence the outcome of the inversion. We have examined alternative approaches to the standard least squares formulation. The robustness of these estimators has been tested by means of Monte Carlo simulations of a synthetic experiment, in which both non-Gaussian random errors and systematic modeling errors have been introduced. The approach was then applied to data from an actual gas-pressure-pulse-decay experiment. The study demonstrates that robust estimators have the potential to reduce estimation bias in the presence of noisy data and minor systematic errors, which may be a significant advantage over the standard least squares method.
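The flavour of robust alternatives to the plain least squares criterion can be prototyped with SciPy's least_squares, which supports robust losses such as Huber or soft-L1; the decay model, data, and injected outliers below are hypothetical stand-ins, not the gas-pressure-pulse-decay experiment studied in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(theta, t, obs):
    """Misfit between observed state and a placeholder exponential-decay model."""
    k, p0 = theta
    return p0 * np.exp(-k * t) - obs

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 50)
obs = 5.0 * np.exp(-0.3 * t) + 0.05 * rng.standard_normal(t.size)
obs[::10] += 1.0                              # inject a few outliers

# loss='huber' (or 'soft_l1') down-weights large residuals, reducing the
# influence of outliers compared with the standard loss='linear'.
fit = least_squares(residuals, x0=[0.1, 1.0], args=(t, obs), loss="huber", f_scale=0.1)
print(fit.x)
```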

Journal ArticleDOI
TL;DR: In this article, statistical condition estimation is applied to the linear least squares problem; the method obtains componentwise condition estimates via the Fréchet derivative, is as computationally efficient as normwise condition estimation methods, and is easily adapted to respect structural constraints on perturbations of the input data.
Abstract: Statistical condition estimation is applied to the linear least squares problem. The method obtains componentwise condition estimates via the Fréchet derivative. A rigorous statistical theory exists that determines the probability of accuracy in the estimates. The method is as computationally efficient as normwise condition estimation methods, and it is easily adapted to respect structural constraints on perturbations of the input data. Several examples illustrate the method.

Journal ArticleDOI
TL;DR: In this paper, linear and nonlinear least squares methods were applied to fit experimental data of adsorption of a metal ion in the presence of another metal ion, on three Taiwan soils, to Langmuir and Freundlich equations.
Abstract: In the Langmuir adsorption equation, q = MbC/(1+bC), the b parameter can be identified as the reciprocal of the concentration, C_1/2, at which the adsorbent is half-saturated with the adsorbate. If the concentration C is scaled in units of C_1/2 and replaced by C', where C' = C/C_1/2, the universal dimensionless Langmuir equation, θ = C'/(1+C'), is obtained. Arbitrary points chosen on segments of the normal Langmuir plot can be fitted to different Freundlich equations with statistical significance. This indicates that the Freundlich equation can be applied to represent a selected range of the adsorption data that also fit the Langmuir equation. Linear and nonlinear least squares methods were applied to fit experimental data of adsorption of a metal ion, in the presence of another metal ion, on three Taiwan soils to the Langmuir and Freundlich equations. The goodness-of-fit of the models to the experimental data was compared using the magnitude of the residual root mean square error (RMSE) of the original nonlinear forms of both adsorption isotherms. Results indicate that simple conclusions, based on the R^2 values obtained by the usual linear least squares method applied to the linearly transformed equations, may be in error. Even when the metal ion adsorption on soils appeared to be better represented by the Freundlich equation, judging from the size of the R^2 value, than by the Langmuir equation, there are cases in which the Langmuir equation could better represent the experimental data based on the size of the RMSE value. These were examples of experiments conducted in a limited concentration range. Increasing the concentration range of the adsorption experiments may eventually turn the Freundlich-type adsorption isotherms into the Langmuir type if no complications arise in the more concentrated solutions.

Journal ArticleDOI
TL;DR: In this paper, a multisplitting (MS) strategy is proposed to solve the linear least squares problem, min_x ||Ax - b||_2, where the system matrix is decomposed by columns into p blocks, and the b and x vectors are partitioned consistently with the matrix decomposition.
Abstract: The linear least squares problem, min_x ||Ax - b||_2, is solved by applying a multisplitting (MS) strategy in which the system matrix is decomposed by columns into p blocks. The b and x vectors are partitioned consistently with the matrix decomposition. The global least squares problem is then replaced by a sequence of local least squares problems which can be solved in parallel by MS. In MS the solutions to the local problems are recombined using weighting matrices to pick out the appropriate components of each subproblem solution. A new two-stage algorithm which optimizes the global update at each iteration is also given. For this algorithm the updates are obtained by finding the optimal update with respect to the weights of the recombination. For the least squares problem presented, the global update optimization can also be formulated as a least squares problem of dimension p. Theoretical results are presented which prove the convergence of the iterations. Numerical results which detail the iteration behavior relative to subproblem size, convergence criteria and recombination techniques are given. The two-stage MS strategy is shown to be effective for near-separable problems. © 1998 John Wiley & Sons, Ltd.
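A minimal sketch of the column-block idea under simplifying assumptions: split A by columns, solve each local least squares problem with the other blocks frozen, and recombine the block updates with equal scalar weights. This is a plain damped block-Jacobi style iteration for illustration only; the paper's weighting matrices and two-stage optimal recombination are not reproduced here.

```python
import numpy as np

def multisplit_lsq(A, b, block_cols, iters=50, weight=None):
    """Column-block multisplitting-style iteration for min_x ||A x - b||_2.

    A is split column-wise into the index sets in `block_cols`; each local
    problem solves for its own block with the remaining blocks held at the
    previous iterate, and the updates are recombined with equal scalar
    weights (a damped block-Jacobi sweep).
    """
    p = len(block_cols)
    weight = 1.0 / p if weight is None else weight
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x_new = x.copy()
        for cols in block_cols:
            r = b - A @ x + A[:, cols] @ x[cols]            # residual with block `cols` removed
            xi, *_ = np.linalg.lstsq(A[:, cols], r, rcond=None)
            x_new[cols] = (1 - weight) * x[cols] + weight * xi  # damped recombination
        x = x_new
    return x

# e.g. multisplit_lsq(A, b, [np.arange(0, 3), np.arange(3, A.shape[1])])
```

When the column blocks are nearly orthogonal (the near-separable case noted in the abstract), such an iteration converges quickly; strongly coupled blocks are what motivate the paper's more careful recombination.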

Journal ArticleDOI
TL;DR: This work suggests a method for generating a surface approximating the given data points in R^3, i = 1, ....

Journal ArticleDOI
TL;DR: In this article, a weighted orthogonal least squares (OWLS) algorithm was proposed to estimate linear and non-linear continuous time differential equation models from complex frequency response data.

Proceedings ArticleDOI
01 Dec 1998
TL;DR: Theoretical and experimental evidence is provided to explain some surprising problems with the weighted least squares procedure and the results of an extensive Monte Carlo study are summarized.
Abstract: We formulate and evaluate weighted and ordinary least squares procedures for estimating the parametric rate function of a nonhomogeneous Poisson process. Special emphasis is given to processes having an exponential rate function, where the exponent may include a polynomial component or some trigonometric components or both. Theoretical and experimental evidence is provided to explain some surprising problems with the weighted least squares procedure. The ordinary least squares procedure is based on a square root transformation of the "detrended" event times, and the results of an extensive Monte Carlo study are summarized to show the advantages and disadvantages of this procedure.

Proceedings ArticleDOI
Ales Ude
16 Aug 1998
TL;DR: This paper proposes a new method for solving general nonlinear least squares optimisation problems involving unit quaternion functions based on unconstrained optimisation techniques and demonstrates the effectiveness of this approach for pose estimation from 2D to 3D line segment correspondences.
Abstract: Pose estimation from an arbitrary number of 2D to 3D feature correspondences is often done by minimising a nonlinear criterion function using one of the minimal representations for the orientation. There are, however, many advantages in using unit quaternions to represent the orientation, but a straightforward formulation of the pose estimation problem based on quaternions results in a constrained optimisation problem. In this paper we propose a new method for solving general nonlinear least squares optimisation problems involving unit quaternion functions based on unconstrained optimisation techniques. We demonstrate the effectiveness of our approach for pose estimation from 2D to 3D line segment correspondences.
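One common way to obtain an unconstrained formulation is to let the optimiser work on an arbitrary 4-vector and normalise it to a unit quaternion inside the residual function. The sketch below does this for simple 3D point correspondences rather than the paper's 2D to 3D line segments, so it illustrates the general device, not the authors' specific algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

def residuals(params, P, Q):
    """Alignment residuals; the raw 4-vector is normalised to a unit quaternion."""
    q = params[:4] / np.linalg.norm(params[:4])   # unconstrained vector -> unit quaternion
    t = params[4:]
    return ((quat_to_rot(q) @ P.T).T + t - Q).ravel()

# Hypothetical 3-D correspondences: rotate and translate random points, then recover the pose.
rng = np.random.default_rng(0)
P = rng.random((20, 3))
true_q = np.array([0.9, 0.1, 0.2, 0.1]); true_q /= np.linalg.norm(true_q)
Q = (quat_to_rot(true_q) @ P.T).T + np.array([0.5, -0.2, 0.1])
sol = least_squares(residuals, x0=np.array([1.0, 0, 0, 0, 0, 0, 0]), args=(P, Q))
q_est = sol.x[:4] / np.linalg.norm(sol.x[:4])     # recovered unit quaternion
```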

Proceedings ArticleDOI
12 May 1998
TL;DR: It is shown that the single-input multiple-output moving average process has the property that the error sequence of the least squares smoother, under certain conditions, uniquely determines the channel impulse response.
Abstract: A linear least squares smoothing approach is proposed for the blind channel estimation. It is shown that the single-input multiple-output moving average process has the property that the error sequence of the least squares smoother, under certain conditions, uniquely determines the channel impulse response. The relationship among the dimension of the observation space, channel order and smoothing delay is presented. A new algorithm for channel estimation based on the least squares smoothing is developed. The proposed approach has the finite-sample convergence property in the absence of the channel noise. It also has a structure suitable for recursive implementations.

Journal ArticleDOI
TL;DR: In this paper, a least squares method, using the minus one norm developed by Bramble, Lazarov, and Pasciak, is introduced to approximate the solution of the Reissner-Mindlin plate problem with small parameter t, the thickness of the plate.
Abstract: In this paper a least squares method, using the minus one norm developed by Bramble, Lazarov, and Pasciak, is introduced to approximate the solution of the Reissner-Mindlin plate problem with small parameter t, the thickness of the plate. The reformulation of Brezzi and Fortin is employed to prevent locking. Taking advantage of the least squares approach, we use only continuous finite elements for all the unknowns. In particular, we may use continuous linear finite elements. The difficulty of satisfying the inf-sup condition is overcome by the introduction of a stabilization term into the least squares bilinear form, which is very cheap computationally. It is proved that the error of the discrete solution is optimal with respect to regularity and uniform with respect to the parameter t. Apart from the simplicity of the elements, the stability theorem gives a natural block diagonal preconditioner of the resulting least squares system. For each diagonal block, one only needs a preconditioner for a second order elliptic problem.

Journal ArticleDOI
14 Nov 1998-Metrika
TL;DR: In this article, the consistency of the least squares estimators of the model parameters and the asymptotic distribution of the estimators are obtained for a particular two dimensional model, which has wide applications in statistical signal processing and texture classification.
Abstract: We consider a particular two dimensional model which has wide applications in statistical signal processing and texture classification. We prove the consistency of the least squares estimators of the model parameters and also obtain the asymptotic distribution of the least squares estimators. We observe the strong consistency of the least squares estimators when the errors are independent and identically distributed double array random variables. We show that the asymptotic distribution of the least squares estimators is multivariate normal. It is observed that the asymptotic dispersion matrix coincides with the Cramer-Rao lower bound. This paper generalizes some of the existing one dimensional results to the two dimensional case. Some numerical experiments are performed to see how the asymptotic results work for finite samples.

Journal ArticleDOI
01 Jun 1998
TL;DR: Two algorithms, the FFTB (fast Fourier transform based) algorithm and the NLS (nonlinear least squares) algorithm, are devised to estimate the model parameters, and it is shown that the parameter estimates obtained with both algorithms reach the CRBs as the SNR (signal-to-noise ratio) increases.
Abstract: The authors study feature extraction of targets, consisting of both trihedral and dihedral corner reflectors, via synthetic aperture radar (SAR). A mixed data model is introduced to describe the target features. The Cramer-Rao bounds (CRBs) for the parameter estimates of the data model are also derived. Two algorithms, the FFTB (fast Fourier transform based) algorithm and the NLS (nonlinear least squares) algorithm, are devised to estimate the model parameters. Numerical examples show that the parameter estimates obtained with both algorithms reach the CRBs as the SNR (signal-to-noise ratio) increases. The parameter estimates obtained with the NLS algorithm start to achieve the CRB at a lower SNR than those with the FFTB algorithm, while the latter algorithm is computationally more efficient.

Journal ArticleDOI
TL;DR: In this article, the authors compare the partial least squares (PLS) and principal component analysis (PCA) in a general case in which the existence of a true linear regression is not assumed, and prove under mild conditions that PLS and PCA are equivalent, to within a first-order approximation.
Abstract: We compare the partial least squares (PLS) and the principal component analysis (PCA) in a general case in which the existence of a true linear regression is not assumed. We prove under mild conditions that PLS and PCA are equivalent, to within a first-order approximation, hence providing a theoretical explanation for empirical findings reported by other researchers. Next, we assume the existence of a true linear regression equation and obtain asymptotic formulas for the bias and variance of the PLS parameter estimator.
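The near-equivalence of PLS and PCA-based regression is easy to probe numerically by fitting both with the same number of components; the random data below are purely illustrative and do assume a true linear relation, unlike the general setting of the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(200)

k = 3                                              # number of latent components / PCs
pls = PLSRegression(n_components=k).fit(X, y)      # partial least squares

pca = PCA(n_components=k).fit(X)                   # principal component regression
pcr = LinearRegression().fit(pca.transform(X), y)

# Correlation between the fitted values of the two k-component models.
print(np.corrcoef(pls.predict(X).ravel(), pcr.predict(pca.transform(X)))[0, 1])
```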

Journal ArticleDOI
TL;DR: In this paper, it was shown that the least squares and the weighted least squares algorithms possess the same asymptotic properties, sharing the same central limit theorem and the same law of iterated logarithm.
Abstract: In autoregressive adaptive tracking, we prove that the least squares and the weighted least squares algorithms possess the same asymptotic properties, sharing the same central limit theorem and the same law of iterated logarithm. We also obtain the same asymptotic behavior and show the limitations of these results in the autoregressive with moving average framework.

Journal ArticleDOI
TL;DR: In this paper, it is shown that the sensitivity information obtained by means of a local direct method can be useful in performing various tasks associated with the analysis of statistical error propagation in the solution of forward and inverse problems in the modelling of electrochemical transients.