
Showing papers on "Non-linear least squares published in 2021"


Journal ArticleDOI
TL;DR: A neural network-based method for solving linear and nonlinear partial differential equations that combines the ideas of extreme learning machines, domain decomposition and local neural networks, and exhibits a clear sense of convergence with respect to the degrees of freedom in the neural network.

53 citations


Journal ArticleDOI
23 Feb 2021
TL;DR: In this paper, the authors proposed a generalized robust kernel family that is automatically tuned based on the distribution of the residuals and includes the common M-estimators, and tested their adaptive kernel on two popular estimation problems in robotics, namely ICP and bundle adjustment.
Abstract: State estimation is a key ingredient in most robotic systems. Often, state estimation is performed using some form of least squares minimization. Basically, all error minimization procedures that work on real-world data use robust kernels as the standard way for dealing with outliers in the data. These kernels, however, are often hand-picked, sometimes in different combinations, and their parameters need to be tuned manually for a particular problem. In this letter, we propose the use of a generalized robust kernel family, which is automatically tuned based on the distribution of the residuals and includes the common m-estimators. We tested our adaptive kernel with two popular estimation problems in robotics, namely ICP and bundle adjustment. The experiments presented in this letter suggest that our approach provides higher robustness while avoiding a manual tuning of the kernel parameters.
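As a point of reference for how a robust kernel enters a least-squares solve, here is a minimal sketch using SciPy's built-in loss functions with a fixed Cauchy kernel; the adaptive, automatically tuned kernel proposed in the paper is not reproduced, and the line-fitting data, parameter names, and `f_scale` value are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative data: a line with a few gross outliers injected.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)
y[::10] += 8.0  # outliers

def residuals(params, x, y):
    a, b = params
    return a * x + b - y

# A fixed robust kernel (Cauchy); the paper instead tunes a generalized
# kernel family automatically from the residual distribution.
fit = least_squares(residuals, x0=[1.0, 0.0], args=(x, y), loss="cauchy", f_scale=0.5)
print(fit.x)  # close to [2.0, 1.0] despite the outliers
```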

23 citations


Journal ArticleDOI
TL;DR: The results show that nonlinear least squares has multiple advantages over the conventional integral matching in terms of accuracy and robustness to noise, especially when the observations are irregularly-spaced.

18 citations


Journal ArticleDOI
TL;DR: The relationships among different forms of the VP algorithms, the EPI algorithm and the ALS algorithm are derived, and a negative answer to Kaufman’s conjecture is obtained.
Abstract: Separable nonlinear least squares (SNLLS) problems have attracted interest in a wide range of research fields such as machine learning, computer vision, and signal processing. During the past few decades, several algorithms, including the joint optimization algorithm, alternated least squares (ALS) algorithm, embedded point iterations (EPI) algorithm, and variable projection (VP) algorithms, have been employed for solving SNLLS problems in the literature. The VP approach has been proven to be quite valuable for SNLLS problems and the EPI method has been successful in solving many computer vision tasks. However, no clear explanations about the intrinsic relationships of these algorithms have been provided in the literature. In this paper, we give some insights into these algorithms for SNLLS problems. We derive the relationships among different forms of the VP algorithms, EPI algorithm and ALS algorithm. In addition, the convergence and robustness of some algorithms are investigated. Moreover, the analysis of the VP algorithm generates a negative answer to Kaufman’s conjecture. Numerical experiments on the image restoration task, fitting the time series data using the radial basis function network based autoregressive (RBF-AR) model, and bundle adjustment are given to compare the performance of different algorithms.
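To make the separable structure concrete, the sketch below follows the variable projection idea: for each candidate value of the nonlinear parameters, the linear coefficients are eliminated by an inner linear least-squares solve, so the outer solver only sees the nonlinear variables. The two-exponential model, data, and starting values are illustrative assumptions, not the paper's test problems.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative separable model: y ≈ c1*exp(-theta1*x) + c2*exp(-theta2*x).
rng = np.random.default_rng(1)
x = np.linspace(0, 4, 80)
y = 3.0 * np.exp(-0.7 * x) + 1.5 * np.exp(-2.5 * x) + rng.normal(0, 0.01, x.size)

def projected_residual(theta, x, y):
    # Basis depends only on the nonlinear parameters theta.
    Phi = np.column_stack([np.exp(-theta[0] * x), np.exp(-theta[1] * x)])
    # Eliminate the linear coefficients via an inner linear least-squares solve.
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ c - y

fit = least_squares(projected_residual, x0=[0.5, 2.0], args=(x, y))
print(fit.x)  # estimated nonlinear decay rates; linear coefficients follow from re-solving
```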

16 citations


Journal ArticleDOI
TL;DR: In this paper, a new approach for time-varying (TV) modal parameter identification is proposed, where the entire signal is divided into successive short time windows, and the structural response under white noise excitation is transformed into modal coordinates by the Independent Component Analysis (ICA) method.

13 citations


Journal ArticleDOI
TL;DR: Through analyzing robotic forward kinematics, it is found that the Cartesian coordinates of the end-point are affine in the length-related MDH parameters, so that linear and nonlinear parameters can be separated.
Abstract: Kinematic calibration of robots is an effective way to guarantee and improve their performance characteristics. There is a large body of mature research on kinematic calibration, and methods based on the MDH model are the most common ones. However, when employing these calibration methods, it occasionally happens that the objective function cannot converge during iterations. Through analyzing robotic forward kinematics, we found that the Cartesian coordinates of the end-point are affine in the length-related MDH parameters, so that linear and nonlinear parameters can be separated. Thanks to this distinctive characteristic of the MDH model, the kinematic calibration problem can be converted into a separable nonlinear least squares problem, which can further be partitioned into two subproblems: a linear least squares problem and a reduced problem involving only the nonlinear parameters. Eventually, the optimal structural parameters can be identified by solving this problem iteratively. The results of numerical and experimental validations show that: 1) the robustness of the identification procedure is enhanced by eliminating part of the linear structural parameters, with the convergence rate improved from 68.98% to 100% over different deviation vector pairs; 2) fewer initial values need to be pre-set for the kinematic calibration problem; and 3) fewer parameters are identified by nonlinear least squares regression, resulting in fewer iterations and faster convergence, with the average runtime reduced from 33.931 s to 1.874 s.

12 citations


Proceedings ArticleDOI
Jingwei Huang1, Shan Huang1, Mingwei Sun1
20 Jun 2021
TL;DR: In this paper, a stochastic domain decomposition approach is proposed for solving large-scale nonlinear least squares problems on deep learning frameworks, with bundle adjustment as the main evaluation problem.
Abstract: We propose a novel approach for large-scale nonlinear least squares problems based on deep learning frameworks. Nonlinear least squares are commonly solved with the Levenberg-Marquardt (LM) algorithm for fast convergence. We implement a general and efficient LM solver on a deep learning framework by designing a new backward Jacobian network to enable automatic sparse Jacobian matrix computation. Furthermore, we introduce a stochastic domain decomposition approach that enables batched optimization and preserves convergence for large problems. We evaluate our method by solving bundle adjustment as a fundamental problem. Experiments show that our optimizer significantly outperforms the state-of-the-art solutions and existing deep learning solvers considering quality, efficiency, and memory. Our stochastic domain decomposition enables distributed optimization, consumes little memory and time, and achieves similar quality compared to a global solver. As a result, our solver effectively solves nonlinear least squares on an extremely large scale. Our code will be available for PyTorch and MindSpore.
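For orientation, a minimal dense Levenberg-Marquardt iteration on top of PyTorch autograd is sketched below, just to show how a deep learning framework can supply the Jacobian of a residual; the paper's sparse backward Jacobian network and stochastic domain decomposition are not reproduced, and the toy exponential-fit residual is an invented example.

```python
import torch

def residual(p):
    # Toy residual: fit y = a * exp(b * t) to synthetic data.
    t = torch.linspace(0, 1, 20)
    y = 2.0 * torch.exp(0.5 * t)
    return p[0] * torch.exp(p[1] * t) - y

p = torch.tensor([1.0, 0.0])
lam = 1e-3
for _ in range(50):
    J = torch.autograd.functional.jacobian(residual, p)  # dense Jacobian via autograd
    r = residual(p)
    H = J.T @ J + lam * torch.eye(p.numel())              # damped normal equations
    step = torch.linalg.solve(H, -J.T @ r)
    if residual(p + step).norm() < r.norm():              # accept step, relax damping
        p, lam = p + step, lam * 0.5
    else:                                                  # reject step, increase damping
        lam *= 10.0
print(p)  # approaches [2.0, 0.5]
```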

11 citations


Journal ArticleDOI
TL;DR: In this paper, a non-linear least-squares algorithm is proposed for solving a combined formula for gravity and self-potential anomalies due to simple geometric shapes; the algorithm relies on the origin anomaly value and two symmetric anomaly values, together with their corresponding distances along the anomaly profile, to invert for the buried geometric model parameters.
Abstract: The detection of buried geometric model parameters is vital to the full interpretation of potential field data, especially gravity and/or self-potential anomalies. This study introduced a proposed non-linear least-squares algorithm for solving a combined formula for gravity and self-potential anomalies due to simple geometric shapes. The proposed algorithm relied on the origin anomaly value and two symmetric anomaly values, together with their corresponding distances along the anomaly profile, to invert for the buried geometric model parameters. A root mean square error (μ-value) was then assessed for each parameter value at different postulated shape factors, and the μ-value was used as a benchmark for detecting the true values of the subsurface geometric structures. The efficacy and rationality of the proposed approach were demonstrated on numerous synthetic cases with and without random noise. Furthermore, the sensitivity relationship between the shape factor and the μ-value was investigated on synthetic gravity and self-potential data; the inverted parameters agreed well with the genuine ones. The proposed method was then tested on gravity and self-potential field examples from Senegal and the USA. To assess the adequacy of the approach, the results obtained were compared with other available geological and geophysical information in the published literature.

8 citations


Journal ArticleDOI
TL;DR: In this paper, a new inverse analysis approach is proposed to derive the fracture mode I parameters of fiber reinforced concrete (FRC) by using the experimental data obtained from three-point notched beam bending tests (3PNBBT) and round panel tests supported on three points (RPT-3PS).

8 citations


Journal ArticleDOI
TL;DR: In this article, a method based on discrete optimal control theory is proposed to regularize the ill-posed problem of parameter estimation in ODEs; it is computationally less intensive and more accurate in the sparse-sample case than the approach based on continuous control.

6 citations


Journal ArticleDOI
TL;DR: In this article, the regularized total least squares (RTLS) problem is reformulated as a nonlinear least squares problem and can be solved by the Gauss-Newton method.
Abstract: The total least squares (TLS) method is a well-known technique for solving an overdetermined linear system of equations Ax ≈ b, which is appropriate when both the coefficient matrix A and the right-hand side vector b are contaminated by some noise. For ill-posed TLS problems, regularization techniques are necessary to stabilize the computed solution; otherwise, TLS produces a noise-dominated output. In this paper, we show that the regularized total least squares (RTLS) problem can be reformulated as a nonlinear least squares problem and can be solved by the Gauss–Newton method. Due to the nature of the RTLS problem, we present an appropriate method to choose a good regularization parameter and also a good initial guess. Finally, the efficiency of the proposed method is examined by some test problems.
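For reference, a bare-bones Gauss-Newton iteration for a generic nonlinear least-squares residual, of the kind the reformulated RTLS problem would be handed to, is sketched below; the residual, Jacobian, and starting point are illustrative assumptions and not the RTLS formulation itself.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Minimize 0.5*||residual(x)||^2 by Gauss-Newton steps."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)  # solve J @ step ≈ -r
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x

# Illustrative use: fit y = exp(a*t) + b to synthetic data.
t = np.linspace(0, 1, 30)
y = np.exp(1.3 * t) + 0.4
res = lambda p: np.exp(p[0] * t) + p[1] - y
jac = lambda p: np.column_stack([t * np.exp(p[0] * t), np.ones_like(t)])
print(gauss_newton(res, jac, [1.0, 0.0]))  # approximately [1.3, 0.4]
```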

Proceedings ArticleDOI
01 Jan 2021
TL;DR: In this article, a fast preconditioned solver for the bundle adjustment problem is proposed based on the deflation of the largest eigenvalues of the Hessian, and an estimate of the condition number is derived.
Abstract: The bundle adjustment (BA) problem is formulated as a nonlinear least squares problem which requires the solution of a linear system. For solving this system, we present the design and implementation of a fast preconditioned solver. The proposed preconditioner is based on the deflation of the largest eigenvalues of the Hessian. We also derive an estimate of the condition number of the preconditioned system. Numerical experiments on problems from the BAL dataset [3] suggest that our solver is the fastest, sometimes by a factor of five, when compared to the current state-of-the-art solvers for bundle adjustment.

Journal ArticleDOI
01 Jan 2021
TL;DR: Algorithms for linear and non-linear least squares fitting of Bezier surfaces to unstructured point clouds are derived from first principles, and the developed fitting algorithm is used to remove the geometric form of a complex engineered surface so that the surface roughness can be evaluated.
Abstract: Algorithms for linear and non-linear least squares fitting of Bezier surfaces to unstructured point clouds are derived from first principles. The presented derivation includes the analytical form of the partial derivatives that are required for minimising the objective functions; these have been computed numerically in previous work concerning Bezier curve fitting, not surface fitting. Results of fitting fourth degree Bezier surfaces to complex simulated and measured surfaces are presented, and a quantitative comparison is made between fitting Bezier surfaces and fitting polynomial surfaces. The developed fitting algorithm is used to remove the geometric form of a complex engineered surface such that the surface roughness can be evaluated.
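To illustrate the linear part of the problem only: once the (u, v) parameter values of the points are fixed, fitting the control points of a Bezier surface is an ordinary linear least-squares problem in the tensor-product Bernstein basis. The sketch below assumes the parameterization is already known, which sidesteps the nonlinear stage of the paper's algorithm; degrees, data, and function names are illustrative.

```python
import numpy as np
from scipy.special import comb

def bernstein(n, i, t):
    return comb(n, i) * t**i * (1 - t)**(n - i)

def fit_bezier_surface(u, v, z, n=4, m=4):
    """Least-squares control-point heights of an (n x m)-degree Bezier height field z(u, v)."""
    # Design matrix: one column per control point, rows are tensor-product Bernstein values.
    cols = [bernstein(n, i, u) * bernstein(m, j, v)
            for i in range(n + 1) for j in range(m + 1)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs.reshape(n + 1, m + 1)

# Illustrative use on a synthetic surface sampled at known (u, v) parameters.
rng = np.random.default_rng(2)
u, v = rng.uniform(0, 1, 500), rng.uniform(0, 1, 500)
z = np.sin(2 * u) * np.cos(v) + 0.001 * rng.normal(size=500)
P = fit_bezier_surface(u, v, z)  # 5x5 grid of fitted control-point heights
```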

Journal ArticleDOI
TL;DR: In this paper, a modified Levenberg-Marquardt (LM) nonlinear least squares (NLS) algorithm is used to determine the thermal conductivity from the experimental data using a flux boundary condition.


Journal ArticleDOI
TL;DR: In this paper, the authors study parametric robust estimation in nonlinear regression models with regressors generated by a class of non-stationary and null recurrent Markov processes, and derive both the consistency and limit distribution results for the developed general robust estimators (including the nonlinear least squares, least absolute deviation and Huber's M-estimators).

Journal ArticleDOI
TL;DR: In this paper, the power Topp-Leone (PTL) distribution with two parameters is introduced, and its quantile measures, certain moment measures, residual life function, and entropy measure are investigated.
Abstract: We introduce the power Topp-Leone (PTL) distribution with two parameters. The following major features of the PTL distribution are investigated: quantile measures, certain moment measures, the residual life function, and an entropy measure. Maximum likelihood, least squares, Cramér-von Mises, and weighted least squares approaches are used to estimate the PTL parameters. A numerical illustration is prepared to compare the behavior of the achieved estimates. Data analysis is provided to scrutinize the flexibility of the PTL model compared with the Topp-Leone distribution.

Posted Content
TL;DR: In this article, the authors investigate an approach for derivative-free optimization that has not received sufficient attention in the literature and is yet one of the simplest to implement and parallelize.
Abstract: The goal of this paper is to investigate an approach for derivative-free optimization that has not received sufficient attention in the literature and is yet one of the simplest to implement and parallelize. It consists of computing gradients of a smoothed approximation of the objective function (and constraints), and employing them within established codes. These gradient approximations are calculated by finite differences, with a differencing interval determined by the noise level in the functions and a bound on the second or third derivatives. It is assumed that the noise level is known or can be estimated by means of difference tables or sampling. The use of finite differences has been largely dismissed in the derivative-free optimization literature as too expensive in terms of function evaluations and/or as impractical when the objective function contains noise. The test results presented in this paper suggest that such views should be re-examined and that the finite-difference approach has much to be recommended. The tests compared NEWUOA, DFO-LS and COBYLA against the finite-difference approach on three classes of problems: general unconstrained problems, nonlinear least squares, and general nonlinear programs with equality constraints.
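A small illustration of the central idea: approximate the gradient of a noisy objective by central finite differences with a step chosen from the noise level, and hand it to an established gradient-based code. The noise model and the simple step rule h ≈ (noise level)^(1/3) below are simplified assumptions for a unit-scale problem, not the paper's exact procedure; the objective and noise level are invented.

```python
import numpy as np
from scipy.optimize import minimize

NOISE_LEVEL = 1e-6  # assumed known or estimated, e.g. from difference tables

def noisy_f(x):
    rng = np.random.default_rng()
    return np.sum((x - 1.0) ** 2) + NOISE_LEVEL * rng.standard_normal()

def fd_gradient(f, x, h):
    # Central finite-difference gradient with differencing interval h.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

h = NOISE_LEVEL ** (1.0 / 3.0)  # balances truncation error against noise for central differences
result = minimize(noisy_f, x0=np.zeros(5), jac=lambda x: fd_gradient(noisy_f, x, h), method="BFGS")
print(result.x)  # approximately [1, 1, 1, 1, 1], the noiseless minimizer
```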

Journal ArticleDOI
TL;DR: A novel regularized separable algorithm that takes advantage of the VP method and the expectation–maximization (EM) method to optimize the nonlinear parameters and automatically picks out the regularization parameters during the search process is considered.
Abstract: The radial basis function network-based state-dependent autoregressive (RBF-AR) model has been widely used in modeling and prediction of nonlinear time series. The parameter identification of RBF-AR model can be reformulated as a separable nonlinear least squares problem. The variable projection (VP) algorithm has been proven to be valuable in solving such problems. However, for ill-posed problems, the classical VP algorithm usually yields unstable models. In this paper, we consider a novel regularized separable algorithm that takes advantage of the VP method and the expectation–maximization (EM) method. The proposed algorithm utilizes the VP algorithm to optimize the nonlinear parameters and automatically picks out the regularization parameters during the search process. Numerical results on real-world data and synthetic time series confirm the effectiveness of the proposed algorithm.

Journal ArticleDOI
TL;DR: In this paper, a deterministic numerical method is proposed for the double-diode model parameter extraction through a separable nonlinear least squares approach, which can be easily obtained by deterministic search algorithms without manual preset or intervention.
Abstract: Parameter extraction of the double-diode photovoltaic model is a highly nonconvex optimization problem, and up until now, many metaheuristic methods have been proposed to try to avoid local minima. However, these metaheuristic methods will output different results in repeated tests and need to manually set lower and upper bounds for all parameters. In this work, a deterministic numerical method is proposed for the double-diode model parameter extraction through a separable nonlinear least squares approach. In this approach, the complexity of the optimization problem is greatly lowered by the reduction of independent parameters, and the solution can be easily obtained by deterministic search algorithms without manual preset or intervention. The proposed method is first validated on two commonly used case studies of curve data fitting, showing a comparable performance to the best reported metaheuristic methods. Then, the method is extended to parameter extraction from manufacturer datasheets, in which only three important I–V data points are available. Finally, the method is tested on a large-scale dataset containing more than one million I–V curves provided by the National Renewable Energy Laboratory (NREL). The result of the large-scale test proves the superiority of the double-diode model over the single-diode counterpart in precise modeling of photovoltaic modules.
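To show the kind of separable structure exploited here: in the double-diode equation the photocurrent, the two saturation currents, and the shunt conductance enter linearly, so for fixed values of the nonlinear parameters (series resistance and the two ideality factors) they can be eliminated by a linear least-squares solve. The sketch below is only that reduction under simplifying assumptions (measured current used in the exponent, a fixed thermal voltage, invented bounds); it is not the paper's full extraction procedure, and no real I-V data are included.

```python
import numpy as np
from scipy.optimize import least_squares

Vt = 0.02585  # assumed thermal voltage at about 25 °C, in volts

def projected_residual(theta, V, I):
    """Residual in the measured current after eliminating the linear parameters.

    theta = (Rs, n1, n2): series resistance and the two ideality factors.
    Measured I is used inside the exponent, a common simplification that
    keeps the model explicit in the data.
    """
    Rs, n1, n2 = theta
    Vd = V + Rs * I
    A = np.column_stack([
        np.ones_like(V),                 # photocurrent Iph
        -(np.exp(Vd / (n1 * Vt)) - 1),   # first diode saturation current
        -(np.exp(Vd / (n2 * Vt)) - 1),   # second diode saturation current
        -Vd,                             # shunt conductance 1/Rsh
    ])
    c, *_ = np.linalg.lstsq(A, I, rcond=None)  # linear parameters for this theta
    return A @ c - I

# Illustrative call with measured arrays V_meas, I_meas (not provided here):
# fit = least_squares(projected_residual, x0=[0.1, 1.0, 2.0], args=(V_meas, I_meas),
#                     bounds=([0.0, 0.5, 0.5], [2.0, 2.5, 2.5]))
```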

Journal ArticleDOI
TL;DR: The results derived from the OSSEs indicated that the NASM system could effectively assimilate multi-time PM2.5 observations, reduce uncertainty in surface initial PM2.5 concentrations, and thus improve the accuracy of predictions.

Journal ArticleDOI
TL;DR: The large-scale optimization problems at the core of many graphics, vision, and imaging applications are often implemented by hand in tedious and error-prone processes in order to achieve high performance as discussed by the authors.
Abstract: Large-scale optimization problems at the core of many graphics, vision, and imaging applications are often implemented by hand in tedious and error-prone processes in order to achieve high performance.

Journal ArticleDOI
TL;DR: This work shows that the NLLS algorithm is flexible enough to be made immune to harmonic distortion, DC offset, and phase imbalances, and capable of estimation in the presence of phase and amplitude jumps with near-zero error.

Journal ArticleDOI
03 Mar 2021
TL;DR: In this article, the authors applied three nonlinear growth models (Gompertz, Richards, and Weibull) to study the daily cumulative number of COVID-19 cases in Iraq during the period from 13th of March, 2020 to 22nd of July, 2020.
Abstract: This study aimed to apply three of the most important nonlinear growth models (Gompertz, Richards, and Weibull) to the daily cumulative number of COVID-19 cases in Iraq during the period from the 13th of March, 2020 to the 22nd of July, 2020. Using the nonlinear least squares method, the three growth models were estimated and some related measures were calculated using the “nonlinear regression” tool available in Minitab-17; the initial values of the parameters were deduced from the transformation to the simple linear regression equation. The models were compared using several statistics (F-test, AIC, BIC, AICc and WIC). The results indicate that the Weibull model is the most adequate model for describing the cumulative daily number of COVID-19 cases in Iraq, having the highest F value and the lowest values of RMSE, bias, MAE, AIC, BIC, AICc and WIC, with no violations of the assumptions on the model’s residuals (independence, normal distribution and homogeneity of variance). The overall model test and the tests of the estimated parameters showed that the Weibull model was statistically significant for describing the study data. From the Weibull model predictions, the cumulative number of confirmed cases of the novel coronavirus in Iraq will rise to between 101,396 (95% PI: 99,989 to 102,923) and 114,907 (95% PI: 112,251 to 117,566) in the next 24 days (23rd of July to 15th of August, 2020). From the inflection point of the Weibull curve, the peak date, when the growth rate is maximum, is the 7th of July, 2020, at which time the cumulative number of cases reaches 67,338.
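For reference, fitting a Weibull-type growth curve to a cumulative series by nonlinear least squares looks roughly like the sketch below (the study itself used Minitab's nonlinear regression tool); the parameterization y(t) = a(1 - exp(-(t/b)^c)), the synthetic counts, and the starting values are illustrative assumptions, not the Iraq data or the study's initial values.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_growth(t, a, b, c):
    # a: upper asymptote, b: time scale, c: shape
    return a * (1.0 - np.exp(-(t / b) ** c))

# Illustrative cumulative counts (day index vs. cumulative cases), not the study data.
t = np.arange(1, 61, dtype=float)
y = weibull_growth(t, 100_000, 35.0, 2.5)
y *= 1 + 0.01 * np.random.default_rng(3).standard_normal(t.size)

# Initial values matter for growth curves; the study derived them from a
# linearizing transformation, here they are simply guessed.
popt, pcov = curve_fit(weibull_growth, t, y, p0=[y.max(), t.mean(), 1.0])
print(popt)                     # estimated (a, b, c)
print(np.sqrt(np.diag(pcov)))   # approximate standard errors
```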

Journal ArticleDOI
01 Jul 2021-Heliyon
TL;DR: Based on the spectral parameters of Barzilai and Borwein (1988), the authors proposed three structured spectral gradient algorithms for solving nonlinear least-squares (NLS) problems.
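To illustrate the spectral-gradient idea in the least-squares setting: a gradient iteration on 0.5*||r(x)||^2 whose step length is the Barzilai-Borwein spectral parameter computed from successive iterates and gradients. The structured-residual refinements of the paper are not shown; the safeguards, the linear test residual, and the tolerances below are illustrative assumptions.

```python
import numpy as np

def bb_gradient_nls(residual, jacobian, x0, max_iter=200, tol=1e-8):
    """Gradient descent for 0.5*||r(x)||^2 with Barzilai-Borwein step lengths."""
    x = np.asarray(x0, dtype=float)
    grad = lambda z: jacobian(z).T @ residual(z)
    g = grad(x)
    alpha = 1e-2  # initial step length
    for _ in range(max_iter):
        x_new = x - alpha * g
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        s, y = x_new - x, g_new - g
        # BB1 spectral step length, safeguarded against tiny or negative curvature.
        alpha = (s @ s) / (s @ y) if s @ y > 1e-12 else 1e-2
        x, g = x_new, g_new
    return x

# Illustrative linear residual r(x) = A x - b, so the answer matches ordinary least squares.
A = np.array([[3.0, 1.0], [1.0, 2.0], [0.0, 1.0]])
b = np.array([1.0, 0.0, 2.0])
x_hat = bb_gradient_nls(lambda x: A @ x - b, lambda x: A, [0.0, 0.0])
print(x_hat)  # agrees with np.linalg.lstsq(A, b, rcond=None)[0]
```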

Journal ArticleDOI
TL;DR: In this paper, a globally convergent homotopy continuation algorithm is proposed to solve the nonlinear least squares problem through a path-tracking strategy in model space, which is based on introducing a new functional to replace the quadratic Tikhonov-Phillips functional.

Proceedings ArticleDOI
28 Jul 2021
TL;DR: In this article, a rigid body localization (RBL) technology is proposed based on semi-definite programming (SDP) with NLoS error elimination, which takes the unknown NLoS errors as balancing parameters in the RSS model; the maximum likelihood estimate (MLE) is obtained by solving a nonlinear least squares (NLS) problem.
Abstract: In wireless sensor network (WSN) based rigid body target localization systems, the error caused by non-line-of-sight (NLoS) transmission of the wireless signal can seriously deteriorate the localization accuracy. To improve the estimation performance of the rigid body posture, including the position and direction parameters, a rigid body localization (RBL) technology is proposed based on semi-definite programming (SDP) with NLoS error elimination. By taking the unknown NLoS errors as balancing parameters in the received signal strength (RSS) model, the maximum likelihood estimation (MLE) result is obtained from a nonlinear least squares (NLS) problem, which is solved using the semi-definite relaxation (SDR) method. The obtained balancing parameters are then re-substituted into the model to refine the parameters of the semi-definite programming problem and mitigate the effect of the NLoS error on the localization performance. Finally, singular value decomposition (SVD) is used to recover the rigid body posture parameters. The effectiveness of the algorithm is verified by computer simulation.



Journal ArticleDOI
TL;DR: In this article, the Gibbs algorithm was used to simulate the posterior density of the theta-logistic model; the posterior is approximated by grid approximation, and Bayesian credible intervals are obtained.
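As a generic illustration of grid approximation (the theta-logistic posterior itself would require the full model and data): evaluate an unnormalized posterior on a grid, normalize it, and read credible intervals off the cumulative weights. The one-parameter normal example below is invented and is not the paper's model.

```python
import numpy as np

# Illustrative data: observations assumed Normal(theta, 1); prior theta ~ Normal(0, 10).
rng = np.random.default_rng(4)
data = rng.normal(1.2, 1.0, size=30)

theta_grid = np.linspace(-5, 5, 2001)
dtheta = theta_grid[1] - theta_grid[0]
log_prior = -0.5 * (theta_grid / 10.0) ** 2                                   # up to a constant
log_lik = np.array([-0.5 * np.sum((data - th) ** 2) for th in theta_grid])    # up to a constant

log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())   # unnormalized posterior on the grid
post /= post.sum() * dtheta                # normalize so it integrates to 1 on the grid

# 95% equal-tailed credible interval from the cumulative weights on the grid.
cdf = np.cumsum(post) * dtheta
lower = theta_grid[np.searchsorted(cdf, 0.025)]
upper = theta_grid[np.searchsorted(cdf, 0.975)]
print(lower, upper)  # brackets the sample mean of the data
```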