
Showing papers on "Non-linear least squares published in 2005"


Journal ArticleDOI
TL;DR: This work introduces two methods for the inverse kinematics of multibodies with multiple end effectors; the second, an extension of damped least squares called selectively damped least squares (SDLS), adjusts the damping factor separately for each singular vector of the Jacobian singular value decomposition based on the difficulty of reaching the target positions.
Abstract: We introduce two methods for the inverse kinematics of multibodies with multiple end effectors. The first method clamps the distance of the target positions. Experiments show this is effective in reducing oscillation when target positions are unreachable. The second method is an extension of damped least squares called selectively damped least squares (SDLS), which adjusts the damping factor separately for each singular vector of the Jacobian singular value decomposition based on the difficulty of reaching the target positions. SDLS has advantages in converging in fewer iterations and in not requiring ad-hoc damping constants. Source code is available online.
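
As a rough illustration of the damped least squares machinery the paper builds on (not the authors' released source code), the sketch below applies a per-singular-value damping factor to an SVD-based IK step for a toy two-joint planar arm; the damping rule, link lengths and target are assumptions made up for the example.

```python
import numpy as np

def sdls_like_step(J, e, lam_max=0.1):
    """One damped-least-squares IK step with per-singular-value damping.

    J : (m, n) Jacobian of the end-effector position w.r.t. joint angles
    e : (m,)   task-space error (target minus current position)
    Returns dtheta, the joint-angle update.
    """
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    dtheta = np.zeros(J.shape[1])
    for i, sigma in enumerate(s):
        if sigma < 1e-12:
            continue  # skip numerically zero singular values
        lam = lam_max * np.exp(-sigma)           # heuristic: damp more when sigma is small
        alpha = sigma / (sigma**2 + lam**2)      # damped inverse of sigma
        dtheta += alpha * (U[:, i] @ e) * Vt[i]  # contribution of this singular vector
    return dtheta

# Toy usage: 2-joint planar arm reaching for a target.
theta = np.array([0.3, 0.6]); L = np.array([1.0, 1.0])
target = np.array([1.2, 1.1])
for _ in range(50):
    p = np.array([L[0]*np.cos(theta[0]) + L[1]*np.cos(theta[0]+theta[1]),
                  L[0]*np.sin(theta[0]) + L[1]*np.sin(theta[0]+theta[1])])
    J = np.array([[-L[0]*np.sin(theta[0]) - L[1]*np.sin(theta[0]+theta[1]), -L[1]*np.sin(theta[0]+theta[1])],
                  [ L[0]*np.cos(theta[0]) + L[1]*np.cos(theta[0]+theta[1]),  L[1]*np.cos(theta[0]+theta[1])]])
    theta = theta + sdls_like_step(J, target - p)
print("final joint angles:", theta)
```

The point mirrored from the abstract is only that each singular vector of the Jacobian receives its own damping rather than one global ad-hoc constant.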

313 citations


Journal ArticleDOI
TL;DR: For multivariable discrete-time systems described by transfer matrices, a hierarchical least squares iterative (HLSI) algorithm and a hierarchical least squares (HLS) algorithm based on a hierarchical identification principle are developed and shown to have a significant computational advantage over existing identification algorithms.
Abstract: For multivariable discrete-time systems described by transfer matrices, we develop a hierarchical least squares iterative (HLSI) algorithm and a hierarchical least squares (HLS) algorithm based on a hierarchical identification principle. We show that the parameter estimation error given by the HLSI algorithm converges to zero for the deterministic cases, and that the parameter estimates by the HLS algorithm consistently converge to the true parameters for the stochastic cases. The algorithms proposed have significant computational advantage over existing identification algorithms. Finally, we test the proposed algorithms on an example and show their effectiveness.

297 citations


Journal ArticleDOI
TL;DR: This work studies the least squares fit (LSF) of circular arcs to incomplete scattered data, analyzes theoretical aspects of the problem, and reveals the cause of unstable behavior of conventional algorithms.
Abstract: Fitting standard shapes or curves to incomplete data (which represent only a small part of the curve) is a notoriously difficult problem. Even if the curve is quite simple, such as an ellipse or a circle, it is hard to reconstruct it from noisy data sampled along a short arc. Here we study the least squares fit (LSF) of circular arcs to incomplete scattered data. We analyze theoretical aspects of the problem and reveal the cause of unstable behavior of conventional algorithms. We also find a remedy that allows us to build another algorithm that accurately fits circles to data sampled along arbitrarily short arcs.
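
For orientation only, here is a simple algebraic (Kåsa-style) least squares circle fit; it is not the stabilized algorithm developed in the paper, and on very short arcs it exhibits exactly the kind of instability the authors analyze. The arc length, noise level and centre are made-up example values.

```python
import numpy as np

def kaasa_circle_fit(x, y):
    """Algebraic least squares circle fit (Kaasa-style).

    Solves a*x + b*y + c ~= x^2 + y^2 in the least squares sense,
    then recovers the centre (a/2, b/2) and the radius.
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    (a_, b_, c_), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = a_ / 2.0, b_ / 2.0
    r = np.sqrt(c_ + cx**2 + cy**2)
    return cx, cy, r

# Data sampled along a short, noisy arc of a unit circle centred at (2, 1).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.4, 30)                      # short arc, roughly 23 degrees
x = 2.0 + np.cos(t) + 0.005 * rng.standard_normal(t.size)
y = 1.0 + np.sin(t) + 0.005 * rng.standard_normal(t.size)
print(kaasa_circle_fit(x, y))                      # estimates degrade as the arc shrinks
```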

251 citations


Journal ArticleDOI
TL;DR: The application of the stochastic genetic algorithm in conjunction with the deterministic Powell search to analysis of the multicomponent powder EPR spectra based on computer simulation allows for automated extraction of the magnetic parameters and relative abundances of the component signals, from the nonlinear least-squares fitting of experimental spectra, with minimum outside intervention.
Abstract: The application of the stochastic genetic algorithm (GA) in conjunction with the deterministic Powell search to analysis of the multicomponent powder EPR spectra based on computer simulation is described. This approach allows for automated extraction of the magnetic parameters and relative abundances of the component signals, from the nonlinear least-squares fitting of experimental spectra, with minimum outside intervention. The efficiency and robustness of GA alone and its hybrid variant with the Powell method was demonstrated using complex simulated and real EPR data sets. The unique capacity of the genetic algorithm for locating global minima, subsequently refined by the Powell method, allowed for successful fitting of the spectra. The influence of the population size, mutation, and crossover rates on the performance of GA was also investigated.
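
A minimal sketch of the hybrid global-plus-local idea, with scipy's differential_evolution standing in for the genetic algorithm and a Powell refinement of the best candidate; the two-Gaussian "spectrum" is a made-up stand-in for a simulated EPR powder pattern, not the authors' simulation code.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Synthetic two-component "spectrum": a sum of two Gaussian lines.
x = np.linspace(0.0, 10.0, 400)
true = (1.0, 3.0, 0.5, 0.6, 7.0, 0.8)  # (amp1, pos1, width1, amp2, pos2, width2)

def model(p, x):
    a1, m1, w1, a2, m2, w2 = p
    return a1*np.exp(-0.5*((x-m1)/w1)**2) + a2*np.exp(-0.5*((x-m2)/w2)**2)

rng = np.random.default_rng(1)
data = model(true, x) + 0.02*rng.standard_normal(x.size)

def sse(p):
    r = model(p, x) - data
    return float(r @ r)                     # nonlinear least-squares objective

bounds = [(0, 2), (0, 10), (0.1, 2), (0, 2), (0, 10), (0.1, 2)]
coarse = differential_evolution(sse, bounds, seed=2, tol=1e-6)   # stochastic global stage
refined = minimize(sse, coarse.x, method="Powell")               # deterministic Powell refinement
print(refined.x)
```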

205 citations


Book
01 Jan 2005
TL;DR: In this book, the authors present a unified framework for the least squares minimization problems underlying multivariate analysis methods such as multiple regression; it is aimed at students who have been exposed to an introductory course in matrix algebra and are acquainted with concepts like the rank of a matrix, linear dependence of sets of vectors, matrix inversion, matrix partitioning, orthogonality, and eigenvectors of (real) symmetric matrices.
Abstract: This version (2005) is essentially the same as the original one, published in 1993 by DSWO Press (Leiden). In particular, all material has been kept on the same pages. Apart from a few typographic and language corrections, the following non-trivial changes have been made. • ALS has been deleted four lines below (97), page 51. • The sentence below (13) on page 8 was deleted. • The last paragraph of page 51 has been rewritten. • The last paragraph of page 52 has been changed. • The line above (100) on page 53 has been changed. • On line 14 of page 64, X has been changed to A. • Page numbers of two references have been added on page 73. • Question 6 of page 75 and the answer (page 77) have been changed. • Question 13 of page 76 has been changed. • Question 34 (and the answer) has been deleted (page 84). • The answers to questions 36 a and b (page 84) have been rephrased. • Question 49c of page 87 has been rephrased. PREFACE. This book arose from a course for senior students in Psychometrics and Sociometrics. It is aimed at students who have been exposed to an introductory course in Matrix Algebra, and are acquainted with concepts like the rank of a matrix, linear (in)dependence of sets of vectors, matrix inversion, matrix partitioning, orthogonality, and eigenvectors of (real) symmetric matrices. Also, at least a superficial familiarity with Multivariate Analysis methods is deemed necessary. The methods discussed in this book include Multiple Regression. Although the purpose of each of these methods is explained in the text, previous exposure to at least some of them is recommended. This book has a narrowly defined goal. Each of the nine methods mentioned involves a least squares minimization problem. The purpose of this book is to treat these minimum problems in a unified framework. It is this framework that matters, rather than the nine methods. The framework should provide the student with a thorough understanding of a large number of existing (alternating) least squares techniques, and may serve as a tool for dealing with novel least squares problems as they come about. An eminent feature of the framework is that it does not rely on differential calculus. Partial derivatives play no role in it. Instead, the method of completing-the-squares is generalized to vector functions and matrix functions, to yield …

116 citations


Journal ArticleDOI
Peiliang Xu1
TL;DR: Sign-constrained robust least squares as discussed by the authors is a robust estimation method that employs a maximum possible number of good data to derive the robust solution, and thus will not be affected by partial near-multi-collinearity among part of the data or if some data are clustered together.
Abstract: The findings of this paper are summarized as follows: (1) We propose a sign-constrained robust estimation method, which can tolerate 50% of data contamination and meanwhile achieve high, least-squares-comparable efficiency. Since the objective function is identical with least squares, the method may also be called sign-constrained robust least squares. An iterative version of the method has been implemented and shown to be capable of resisting against more than 50% of contamination. As a by-product, a robust estimate of scale parameter can also be obtained. Unlike the least median of squares method and repeated medians, which use a least possible number of data to derive the solution, the sign-constrained robust least squares method attempts to employ a maximum possible number of good data to derive the robust solution, and thus will not be affected by partial near multi-collinearity among part of the data or if some of the data are clustered together; (2) although M-estimates have been reported to have a breakdown point of 1/(t+1), we have shown that the weights of observations can readily deteriorate such results and bring the breakdown point of M-estimates of Huber’s type to zero. The same zero breakdown point of the L1-norm method is also derived, again due to the weights of observations; (3) by assuming a prior distribution for the signs of outliers, we have developed the concept of subjective breakdown point, which may be thought of as an extension of stochastic breakdown by Donoho and Huber but can be important in explaining real-life problems in Earth Sciences and image reconstruction; and finally, (4) we have shown that the least median of squares method can still break down with a single outlier, even if no highly concentrated good data nor highly concentrated outliers exist.
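
For context on the M-estimates discussed in point (2), a plain iteratively reweighted least squares Huber M-estimator is sketched below on a contaminated straight-line fit; it is not the sign-constrained method of the paper, and the contamination pattern and tuning constant are illustrative assumptions.

```python
import numpy as np

def huber_irls(X, y, c=1.345, n_iter=50):
    """Huber M-estimation of a linear model by iteratively reweighted least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)                 # ordinary LS start
    for _ in range(n_iter):
        r = y - X @ beta
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # MAD scale estimate
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)                          # Huber weights
        beta, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * X, np.sqrt(w) * y, rcond=None)
    return beta

# Straight-line data with 30% gross outliers in the response.
rng = np.random.default_rng(12)
n = 100
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5*x + 0.2*rng.standard_normal(n)
y[:30] += 10.0                                                    # contamination
X = np.column_stack([np.ones(n), x])
print("OLS   :", np.linalg.lstsq(X, y, rcond=None)[0])
print("Huber :", huber_irls(X, y))
```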

113 citations


Journal ArticleDOI
TL;DR: This work extends partial least squares (PLS), a popular dimension reduction tool in chemometrics, in the context of generalized linear regression, based on a previous approach, iteratively reweighted partial least squares, that is, IRWPLS, and shows that by phrasing the problem in a generalized linear model setting and by applying Firth's procedure to avoid (quasi)separation, one gets lower classification error rates.
Abstract: Advances in computational biology have made simultaneous monitoring of thousands of features possible. The high throughput technologies not only bring about a much richer information context in which to study various aspects of gene function, but they also present the challenge of analyzing data with a large number of covariates and few samples. As an integral part of machine learning, classification of samples into two or more categories is almost always of interest to scientists. We address the question of classification in this setting by extending partial least squares (PLS), a popular dimension reduction tool in chemometrics, in the context of generalized linear regression, based on a previous approach, iteratively reweighted partial least squares, that is, IRWPLS. We compare our results with two-stage PLS and with other classifiers. We show that by phrasing the problem in a generalized linear model setting and by applying Firth's procedure to avoid (quasi)separation, we often get lower classification error rates.
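
A minimal sketch of the baseline idea of using PLS dimension reduction for a two-class problem with many covariates and few samples, via scikit-learn's PLSRegression and a simple threshold; it is not the IRWPLS-with-Firth procedure of the paper, and the synthetic data are an assumption for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Few samples, many covariates, as in microarray-style data (synthetic).
rng = np.random.default_rng(0)
n, p = 40, 500
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:10] = 1.0              # only 10 informative features
y = (X @ beta + 0.5*rng.standard_normal(n) > 0).astype(float)

pls = PLSRegression(n_components=3)              # project onto 3 latent components
pls.fit(X, y)
y_hat = (pls.predict(X).ravel() > 0.5).astype(float)   # threshold the PLS prediction
print("training error rate:", np.mean(y_hat != y))
```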

104 citations


Journal ArticleDOI
TL;DR: This work derives strategies for recycling Krylov subspace information that exploit properties of the application and the nonlinear optimization algorithm to significantly reduce the total number of iterations over all linear systems.
Abstract: We discuss the efficient solution of a long sequence of slowly varying linear systems arising in computations for diffuse optical tomographic imaging. The reconstruction of three-dimensional absorption and scattering information by matching computed solutions from a parameterized model to measured data leads to a nonlinear least squares problem that we solve using the Gauss--Newton method with a line search. This algorithm requires the solution of a long sequence of linear systems. Each choice of parameters in the nonlinear least squares algorithm results in a different matrix describing the optical properties of the medium. These matrices change slowly from one step to the next, but may change significantly over many steps. For each matrix we must solve a set of linear systems with multiple shifts and multiple right-hand sides. For this problem, we derive strategies for recycling Krylov subspace information that exploit properties of the application and the nonlinear optimization algorithm to significantly reduce the total number of iterations over all linear systems. Furthermore, we introduce variants of GCRO that exploit symmetry and that allow simultaneous solution of multiple shifted systems using a single Krylov subspace in combination with recycling. Although we focus on a particular application and optimization algorithm, our approach is applicable generally to problems where sequences of linear systems must be solved. This may guide other researchers to exploit the opportunities of tunable solvers. We provide results for two sets of numerical experiments to demonstrate the effectiveness of the resulting method.
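
To make the outer algorithm concrete, here is a small Gauss-Newton loop with a backtracking line search on a toy exponential model; each iteration produces a fresh linear least squares subproblem, which is the kind of sequence of systems the paper accelerates with Krylov subspace recycling. The model, data and dense lstsq inner solve are illustrative assumptions, not the tomography code.

```python
import numpy as np

# Toy nonlinear least squares: fit y = a * exp(-b * t) to noisy data.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 4.0, 50)
a_true, b_true = 2.0, 0.7
y = a_true*np.exp(-b_true*t) + 0.02*rng.standard_normal(t.size)

def residual(p):
    a, b = p
    return a*np.exp(-b*t) - y

def jacobian(p):
    a, b = p
    return np.column_stack([np.exp(-b*t), -a*t*np.exp(-b*t)])

p = np.array([1.0, 1.0])
for _ in range(20):
    r, J = residual(p), jacobian(p)
    # Each outer iteration yields a new linear least squares subproblem.
    step, *_ = np.linalg.lstsq(J, -r, rcond=None)
    alpha = 1.0
    while np.linalg.norm(residual(p + alpha*step)) > np.linalg.norm(r) and alpha > 1e-4:
        alpha *= 0.5                              # simple backtracking line search
    p = p + alpha*step
print("estimated (a, b):", p)
```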

101 citations


Journal ArticleDOI
TL;DR: This work proposes a general regression framework, based on restricting the search space to a subspace and on a particular choice of basis vectors in feature space, which accommodates kernel Partial Least Squares and kernel Canonical Correlation Analysis for regression with a sparse representation, making them applicable to large data sets with little loss in accuracy.

70 citations


Journal ArticleDOI
TL;DR: In this paper, an algorithm for nonlinear least squares constrained by partial differential equations is defined and applied to estimate effective membrane conductivity, exchange current densities and oxygen diffusion coefficients in a one-dimensional PEMFC model for transport in the principal direction of current flow.

70 citations


Journal ArticleDOI
TL;DR: The method of generalized least squares (GLS) is used to assess the variance function for isothermal titration calorimetry (ITC) data collected for the 1:1 complexation of Ba(2+) with 18-crown-6 ether, finding that the optimal number of injections is 7-10, which is a factor of 3 smaller than the current norm.

Journal ArticleDOI
TL;DR: In this article, an affine structure with additional assumptions is considered; in particular, Toeplitz- and Hankel-structured, noise-free, and unstructured blocks are allowed simultaneously in the augmented data matrix, and an equivalent optimization problem is derived that has as decision variables only the estimated parameters.

Journal ArticleDOI
TL;DR: In this article, a nonlinear least squares approach to parameter estimation is taken, based on optimizing a cost function with respect to the parameters so that the model matches, as closely as possible, data coming from experimental measurements or from numerical simulations performed using more complex models.
Abstract: The purpose of this work is to use a variational method to identify some of the parameters of one-dimensional models for blood flow in arteries. These parameters can be fit to approach as much as possible some data coming from experimental measurements or from numerical simulations performed using more complex models. A nonlinear least squares approach to parameter estimation was taken, based on the optimization of a cost function. The resolution of such an optimization problem generally requires the efficient and accurate computation of the gradient of the cost function with respect to the parameters. This gradient is computed analytically when the one-dimensional hyperbolic model is discretized with a second order Taylor-Galerkin scheme. An adjoint approach was used. Some preliminary numerical tests are shown. In these simulations, we mainly focused on determining a parameter that is linked to the mechanical properties of the arterial walls, the compliance. The synthetic data we used to estimate the parameter were obtained from a numerical computation performed with a more accurate model: a three-dimensional fluid-structure interaction model. The first results seem to be promising. In particular, it is worth noticing that the estimated compliance which gives the best fit is quite different from the values that are commonly used in practice.
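
A generic sketch of the fit-a-reduced-model-to-richer-data idea: one parameter of a toy one-dimensional model is estimated by nonlinear least squares against synthetic "measurements". The model, the data and the numerically approximated gradient are assumptions for illustration and stand in for the one-dimensional blood flow model, the three-dimensional fluid-structure data and the adjoint gradient of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# "Measurements" coming from a richer reference model (here just a perturbed
# version of the reduced model plus noise) -- purely illustrative.
t = np.linspace(0.0, 1.0, 100)
rng = np.random.default_rng(4)
data = np.sin(2*np.pi*t) * np.exp(-t / 0.25) + 0.01*rng.standard_normal(t.size)

def reduced_model(c, t):
    """Toy one-parameter reduced model; c plays the role of the compliance."""
    return np.sin(2*np.pi*t) * np.exp(-t / c)

def residual(theta):
    return reduced_model(theta[0], t) - data

fit = least_squares(residual, x0=[0.1], bounds=(1e-3, 1.0))
print("estimated parameter:", fit.x[0])    # should be close to 0.25
```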

Journal ArticleDOI
TL;DR: Simulation results are shown, indicating that an alternative method, called the method of weighted least squares (WLS), outperforms the OLS method in terms of mean squared error.

Posted Content
TL;DR: In this article, a central limit theorem is established for a class of semiparametric frequency domain-weighted least squares estimates, which includes both narrowband ordinary least squares and narrow-band generalized least squares as special cases.
Abstract: We consider semiparametric estimation in time-series regression in the presence of long-range dependence in both the errors and the stochastic regressors. A central limit theorem is established for a class of semiparametric frequency domain-weighted least squares estimates, which includes both narrow-band ordinary least squares and narrow-band generalized least squares as special cases. The estimates are semiparametric in the sense that focus is on the neighbourhood of the origin, and only periodogram ordinates in a degenerating band around the origin are used. This setting differs from earlier studies on time-series regression with long-range dependence, where a fully parametric approach has been employed. The generalized least squares estimate is infeasible when the degree of long-range dependence is unknown and must be estimated in an initial step. In that case, we show that a feasible estimate which has the same asymptotic properties as the infeasible estimate, exists. By Monte Carlo simulation, we evaluate the finite-sample performance of the generalized least squares estimate and the feasible estimate.

Journal ArticleDOI
TL;DR: In this paper, a generic least squares regression (LSR) algorithm for estimating the parameters of nonlinear rational models is proposed: the explicit linear-in-the-parameters expression is expanded into an implicit expression, and a generic algorithm based on the least squares error is developed for the model parameter estimation.

Journal ArticleDOI
TL;DR: Under the hypothesis that the derivative satisfies a weak Lipschitz condition, exact estimates of the radii of the convergence ball of the Gauss–Newton method and of the uniqueness ball of the solution are obtained.

Journal ArticleDOI
Mats Ekman1
TL;DR: In this article, the author identifies linear systems with errors in variables using a separable nonlinear least squares approach.

Proceedings ArticleDOI
08 Jun 2005
TL;DR: In this paper, a two-step identification technique based on subspace algorithms is used to construct a model for ionospheric dynamics, and the inputs to the model are measurements made by the ACE satellite, which is located at the first Lagrangian point between the sun and the earth, while the outputs of the model were ground-based magnetometer readings.
Abstract: To construct a model for ionospheric dynamics, a two step identification technique based on subspace algorithms is used. In the first step a Hammerstein model is identified using subspace algorithms and a basis function expansion for the input nonlinearities. In the second step the Wiener nonlinearity is identified as a standard least squares procedure. The inputs to the model are measurements made by the ACE satellite, which is located at the first Lagrangian point between the sun and the earth, while the outputs of the model are ground-based magnetometer readings. To avoid overfitting, the inputs are ranked in order of their effectiveness using an error search algorithm. Results for the ground-based magnetometer located at Thule in Greenland are presented.
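
A minimal sketch of the first identification step only: a Hammerstein model is estimated by expanding the input in a polynomial basis and solving an ordinary least squares problem for a linear-in-the-parameters ARX structure. The scalar synthetic system below is an assumption, and the subspace machinery, the Wiener stage and the ACE/magnetometer data are omitted.

```python
import numpy as np

# Synthetic Hammerstein system: static nonlinearity f(u) followed by linear
# dynamics, y(t) = 0.8*y(t-1) + 0.5*f(u(t-1)) with f(u) = u + 0.3*u**2.
rng = np.random.default_rng(5)
N = 500
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.8*y[t-1] + 0.5*(u[t-1] + 0.3*u[t-1]**2) + 0.01*rng.standard_normal()

# Regressors: past output plus a polynomial basis expansion of the past input.
Phi = np.column_stack([y[:-1], u[:-1], u[:-1]**2, u[:-1]**3])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated [a, b1, b2, b3]:", theta)   # expect roughly [0.8, 0.5, 0.15, 0]
```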

Journal ArticleDOI
TL;DR: In this paper, it is shown that the application of NLS to EIS models can give rise to ill-posed problems and it is therefore basically pointless to attempt to achieve one absolute minimum of any objective function for an EIS model in an NLS problem.
Abstract: Non-linear least-squares (NLS) fitting is the typical approach to the modelling of electrochemical impedance spectroscopy (EIS) data. In general the application of NLS to EIS models can give rise to ill-posed problems. On the one side, with ill-posed problems it is not possible to prove a priori that a unique solution exists. On the other side, the relevant numerical approximations cannot ensure that a unique solution exists even a posteriori. It is therefore basically pointless to endeavour to achieve one absolute minimum of any objective function for an EIS model in an NLS problem. A lack of awareness of the above-mentioned factors might render numerical approaches tending to locate the absolute minimum questionable.
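
For concreteness, a generic complex nonlinear least squares fit of a simple Randles-type Rs + (Rct || Cdl) impedance model, with real and imaginary parts stacked into one residual vector; the circuit, frequency range and starting values are illustrative assumptions, and restarting the fit from several starting points is one practical way to probe the non-uniqueness issues the paper discusses.

```python
import numpy as np
from scipy.optimize import least_squares

def z_model(p, w):
    """Rs in series with a parallel Rct||Cdl element (simple Randles-type cell)."""
    rs, rct, cdl = p
    return rs + rct / (1.0 + 1j*w*rct*cdl)

# Synthetic impedance "measurements" over 10 mHz .. 10 kHz.
w = 2*np.pi*np.logspace(-2, 4, 60)
p_true = (10.0, 100.0, 1e-5)
rng = np.random.default_rng(6)
z_meas = z_model(p_true, w) * (1 + 0.01*rng.standard_normal(w.size))

def residual(p):
    dz = z_model(p, w) - z_meas
    return np.concatenate([dz.real, dz.imag])   # stack real and imaginary parts

fit = least_squares(residual, x0=[1.0, 10.0, 1e-6],
                    bounds=([0, 0, 0], [np.inf]*3), x_scale="jac")
print("estimated (Rs, Rct, Cdl):", fit.x)
```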

Journal ArticleDOI
TL;DR: In this article, a vector autoregression with deterministic terms and with no restrictions to its characteristic roots is considered, and strong consistency results for the least squares statistics are presented.
Abstract: A vector autoregression with deterministic terms and with no restrictions to its characteristic roots is considered. Strong consistency results for the least squares statistics are presented. This extends earlier results where deterministic terms have not been considered. In addition, the convergence rates are improved compared with earlier results. Comments from S. Johansen are gratefully acknowledged.
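
A minimal sketch of the least squares statistics the consistency results refer to: ordinary least squares estimation of a bivariate VAR(1) with an intercept on simulated data. The coefficient matrix and sample size are made-up example values.

```python
import numpy as np

# Simulate a bivariate VAR(1): x_t = c + A x_{t-1} + e_t.
rng = np.random.default_rng(7)
A = np.array([[0.5, 0.1], [0.0, 0.8]])
c = np.array([0.2, -0.1])
T = 1000
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = c + A @ x[t-1] + 0.1*rng.standard_normal(2)

# Least squares: regress x_t on [1, x_{t-1}], equation by equation.
Z = np.column_stack([np.ones(T-1), x[:-1]])       # deterministic term + lag
B, *_ = np.linalg.lstsq(Z, x[1:], rcond=None)     # shape (3, 2): intercepts and lag coefficients
print("estimated intercept:", B[0])
print("estimated A:\n", B[1:].T)
```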

Proceedings ArticleDOI
13 Oct 2005
TL;DR: A modified least squares support vector machines algorithm is proposed which operates on the principle of structural risk minimization instead of empirical risk minimization; hence a better generalization ability is guaranteed.
Abstract: The construction of B-spline curves from a set of given points is an important issue in computer aided geometric design (CAGD); it is essentially a regression problem. The traditional approach is least squares fitting of the data based on minimizing the empirical risk. Least squares support vector machines (LS-SVMs) are very effective methods for regression. How to use LS-SVMs to solve the problem of constructing B-spline curves in reverse engineering is discussed in this paper. Whereas standard LS-SVMs are not suitable for regression curves in B-spline form, a modified least squares support vector machines algorithm is proposed which operates on the principle of structural risk minimization instead of empirical risk minimization; hence a better generalization ability is guaranteed. A new kernel function is used to give the curves the B-spline form. The new method provides a new fitting approach for CAGD. Through examples, robustness is compared among different methods. Results demonstrate the validity of this new algorithm.
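
A plain LS-SVM regressor with an RBF kernel, solving the standard dual linear system, is sketched below for reference; it is not the modified algorithm with the B-spline-producing kernel proposed in the paper, and the data, kernel width and regularization constant are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, sigma=0.5):
    """RBF kernel matrix between 1-D sample vectors a and b."""
    d2 = (a[:, None] - b[None, :])**2
    return np.exp(-d2 / (2*sigma**2))

# Noisy 1-D data to be regressed.
rng = np.random.default_rng(8)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2*np.pi*x) + 0.1*rng.standard_normal(x.size)

gamma = 100.0                         # regularization constant
K = rbf(x, x)
n = x.size
# Standard LS-SVM dual system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
A = np.zeros((n+1, n+1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n)/gamma
rhs = np.concatenate([[0.0], y])
sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

x_new = np.linspace(0.0, 1.0, 200)
f_new = rbf(x_new, x) @ alpha + b     # LS-SVM prediction on new points
print(f_new[:5])
```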

Posted Content
TL;DR: Different techniques are proposed to discover structure in the data by looking for sparse components in the model based on dedicated regularization schemes on the one hand and fusion of the componentwise LS-SVMs training with a validation criterion on the other hand.
Abstract: This chapter describes componentwise Least Squares Support Vector Machines (LS-SVMs) for the estimation of additive models consisting of a sum of nonlinear components. The primal-dual derivations characterizing LS-SVMs for the estimation of the additive model result in a single set of linear equations with size growing in the number of data-points. The derivation is elaborated for the classification as well as the regression case. Furthermore, different techniques are proposed to discover structure in the data by looking for sparse components in the model based on dedicated regularization schemes on the one hand and fusion of the componentwise LS-SVMs training with a validation criterion on the other hand. (keywords: LS-SVMs, additive models, regularization, structure detection)

Journal ArticleDOI
TL;DR: In this paper, the authors consider the 2^k factorial design when the distribution of the error terms is Weibull W(p,σ) and develop robust and efficient estimators for the parameters of the 2^k factorial design.
Abstract: It is well known that the least squares method is optimal only if the error distributions are normally distributed. However, in practice, non-normal distributions are more prevalent. If the error terms have a non-normal distribution, then the efficiency of least squares estimates and tests is very low. In this paper, we consider the 2^k factorial design when the distribution of the error terms is Weibull W(p,σ). From the methodology of modified likelihood, we develop robust and efficient estimators for the parameters in the 2^k factorial design. F statistics based on modified maximum likelihood estimators (MMLE) for testing the main effects and interaction are defined. They are shown to have high powers and better robustness properties as compared to the normal theory solutions. A real data set is analysed.

Journal ArticleDOI
TL;DR: In this paper, a nonlinear, least-squares fitting of the Fabry-Pérot (F-P) modes to an Airy function is proposed.
Abstract: A method for the measurement of the gain-reflectance product of Fabry-Pérot (F-P) semiconductor lasers is proposed and compared to other techniques. The method is based on a nonlinear, least-squares fitting of the F-P modes to an Airy function. A separate fitting is performed over each mode, as measured with an optical spectrum analyzer (OSA), so that the gain-reflectance parameters are extracted. The influence of the OSA's response function is considered by convolution of the Airy function with the response function of the OSA. By comparing with the Hakki-Paoli method, the mode sum/min method, and the Fourier series expansion method, we find that the nonlinear fitting method is the least sensitive to noise. However, owing to a broadening of the F-P modes of the semiconductor laser, the mode sum/min method combined with a deconvolution technique gives the least underestimated gain above threshold.
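
A minimal sketch of fitting an Airy-type transmission lineshape to one synthetic Fabry-Pérot mode with scipy's curve_fit; the parameterization (peak value, coefficient of finesse F, mode centre, free spectral range), the conversion from F to a round-trip gain-reflectance product, and the omission of the OSA response convolution are all simplifying assumptions relative to the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def airy(lam, t0, F, lam0, fsr):
    """Airy-type transmission lineshape for one Fabry-Perot mode."""
    return t0 / (1.0 + F * np.sin(np.pi*(lam - lam0)/fsr)**2)

# Synthetic spectrum of one mode (wavelengths in nm), with noise.
rng = np.random.default_rng(9)
lam = np.linspace(1549.5, 1550.5, 300)
true = (1.0, 30.0, 1550.0, 1.0)              # t0, F, lam0, FSR
data = airy(lam, *true) + 0.01*rng.standard_normal(lam.size)

popt, _ = curve_fit(airy, lam, data, p0=(0.8, 10.0, 1550.1, 1.1))
t0, F, lam0, fsr = popt
# With the convention F = 4*rho/(1-rho)**2, the round-trip gain-reflectance
# product rho follows from the fitted coefficient of finesse:
rho = (np.sqrt(F + 1.0) - 1.0) / (np.sqrt(F + 1.0) + 1.0)
print("fitted F:", F, " inferred gain-reflectance product:", rho)
```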

Journal ArticleDOI
TL;DR: Using the Arbitrary Lagrangian Eulerian coordinates and the least squares method, a two-dimensional steady fluid structure interaction problem is transformed into an optimal control problem, as discussed by the authors.
Abstract: Using the Arbitrary Lagrangian Eulerian coordinates and the least squares method, a two-dimensional steady fluid structure interaction problem is transformed into an optimal control problem. Sensitivity analysis is presented. The BFGS algorithm gives satisfactory numerical results even when we use a reduced number of discrete controls.


Journal ArticleDOI
TL;DR: In this paper, a frequency domain method for estimating the parameters of a multi-frequency signal from discrete-time observations corrupted by additive noise is presented; it exploits the joint information carried by the spectral samples near each spectrum peak, and by utilising their particular structure an efficient two-step iterative algorithm is developed.

DOI
01 Jan 2005
TL;DR: In this article, a new mixed numerical-experimental identification method based on the modal response of thick laminated shells is presented, founded on the minimisation of the discrepancies between the eigenvalues and eigenmodes computed with a highly accurate composite shell finite element model and the corresponding experimental quantities.
Abstract: Fibre-reinforced composites are being increasingly used as alternatives for conventional materials primarily because of their high strength, specific stiffness, light weight and adjustable properties. However, before using this type of material with confidence in industrial applications such as marine, automotive or aerospace structural components, a thorough characterization of the constituent material properties is needed. Because of the number and the inherent variability of the constitutive properties of composite materials, the experimental characterization is quite cumbersome and requires a large number of specimens to be tested. An elegant way to circumvent this lack consists in using mixed numerical-experimental methods which constitute powerful tools for estimating unknown constitutive coefficients in a numerical model of a composite structure from static and/or dynamic experimental data collected on the real structure. Starting from the measurement of quantities such as the natural frequencies and mode shapes, these methods allow, by comparing numerical and experimental observations, the progressive refinement of the estimated material properties in the corresponding numerical model. In this domain, dynamic mixed techniques have gained in importance owing to their simplicity and efficiency. In this work, a new mixed numerical-experimental identification method based on the modal response of thick laminated shells is presented. This technique is founded on the minimisation of the discrepancies between the eigenvalues and eigenmodes computed with a highly accurate composite shell finite element model with adjustable elastic properties and the corresponding experimental quantities. In the case of thick shells, the constitutive parameters that can be identified are the two in-plane Young's moduli E1 and E2, the in-plane Poisson's ratio ν12 and the in-plane and transverse shear moduli G12, G13 and G23. To determine these six parameters, a typical set of 10 to 15 measured eigenfrequencies and eigenmodes is selected, and the over-constrained optimisation problem is solved with a nonlinear least squares algorithm. In order to maximize the quality of the identification, free-free boundary conditions and a non-contacting modal measurement method are chosen for the experimental determination of the eigenparameters. To obtain optimal experimental conditions, the specimens are suspended by thin nylon yarns and excited by a calibrated acoustic source (loudspeakers) while the dynamic response is measured with a scanning laser vibrometer. The measured frequency response functions are then treated in a modal curve fitting software to obtain a high quality set of modal data (mode shapes and frequencies). As the accuracy of this inverse method directly depends on the precision of the finite element model, a family of very efficient thick laminated shell finite elements based on a variable p-order approximation of the through-the-thickness displacement with a full 3D orthotropic constitutive law has been developed. In these elements, varying the degree of approximation of the model allows to adjust the needs in accuracy and/or computation time. It is shown that for thick and highly orthotropic plates, the formulation exhibits a good convergence on the eigenfrequencies with p = 3 and a nearly exact solution for p = 7. 
In comparison to other 3D solid or thick shell elements, such as layerwise models, the presented elements show an equivalent precision of the computed eigenfrequencies and are computationally less expensive for laminates with more than 8 plies. A classical Levenberg-Marquardt nonlinear least squares minimisation algorithm is used to solve the inverse problem of finding the elastic constitutive parameters which are best matching the experimental modal data. Original multiple objective functions are used for comparing the computed and measured values. They are based upon the relative differences between the eigenfrequencies, upon the diagonal and off-diagonal terms of the so-called modal assurance criterion norm on the mode shapes, and upon geometrical properties of the mode shapes such as the nodal lines. In this work, the convergence properties of the minimisation algorithm are also investigated. It can be observed that usually the minimisation requires between 3 and 6 iterations to reach a residual error of less than 0.2 %. Finally, real identification examples are presented, for various thin to thick unidirectional carbon fiber plates and for a relatively thick cross-ply glass – polypropylene specimen. The robustness and the convergence of the present identification method are studied and the identification results are compared to those obtained with classical static tests. It can be concluded that overall, when the test specimens are moderately thick, the present identification method can accurately determine the in-plane Young's and shear moduli as well as the transverse shear moduli and the in-plane Poisson's ratio. It is also seen that the stability of the method is excellent as long as the number of measured modes is reasonably larger than the number of parameters to be identified.
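
A toy version of the mixed numerical-experimental loop: Levenberg-Marquardt (scipy's least_squares with method='lm') adjusts the stiffnesses of a 2-DOF spring-mass "model" so that its eigenfrequencies match "measured" ones through a relative-difference objective. The model, the measured values and the objective are assumptions standing in for the laminated shell finite element model, the vibrometer data and the multiple objective functions of the thesis.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import least_squares

m = np.diag([1.0, 1.5])                     # known masses (kg)

def eigenfrequencies(k):
    """Natural frequencies (Hz) of a 2-DOF spring-mass chain with stiffnesses k1, k2."""
    k1, k2 = k
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])
    evals = eigh(K, m, eigvals_only=True)   # generalized symmetric eigenproblem
    return np.sqrt(np.abs(evals)) / (2*np.pi)   # abs() guards against tiny negatives

# "Measured" frequencies generated from reference stiffnesses plus noise.
k_true = np.array([2.0e4, 1.0e4])
rng = np.random.default_rng(10)
f_meas = eigenfrequencies(k_true) * (1 + 0.002*rng.standard_normal(2))

def residual(k):
    return (eigenfrequencies(k) - f_meas) / f_meas   # relative frequency differences

fit = least_squares(residual, x0=[1.0e4, 0.5e4], method="lm")
print("identified stiffnesses:", fit.x)
```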

01 Jan 2005
TL;DR: In this article, the problem of parameter estimation of chirp signals in the presence of stationary noise is addressed, and the least squares estimators are shown to be strongly consistent.
Abstract: The problem of parameter estimation of chirp signals in the presence of stationary noise has been addressed. We consider the least squares estimators and it is observed that the least squares estimators are strongly consistent. The asymptotic distributions of the least squares estimators are obtained. The multiple chirp signal model is also considered and we obtain the asymptotic properties of the least squares estimators of the unknown parameters. We perform some small sample simulations to observe how the proposed estimators work for small sample sizes.
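
A direct least squares fit of a single chirp's parameters to noisy samples using scipy's least_squares is sketched below; the parameterization (linear amplitudes, frequency, chirp rate), the noise level, and the assumption that good starting values come from a coarse search are all illustrative, and the asymptotic analysis of the paper is not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

# Single chirp in additive noise: y(t) = A*cos(a*t + b*t^2) + B*sin(a*t + b*t^2) + e(t).
rng = np.random.default_rng(11)
n = 200
t = np.arange(n, dtype=float)
A, B, a, b = 1.0, 0.5, 0.20, 1.5e-3
y = A*np.cos(a*t + b*t**2) + B*np.sin(a*t + b*t**2) + 0.2*rng.standard_normal(n)

def residual(p):
    A_, B_, a_, b_ = p
    phase = a_*t + b_*t**2
    return A_*np.cos(phase) + B_*np.sin(phase) - y

# Starting values assumed to come from a coarse grid search (here: near the truth).
fit = least_squares(residual, x0=[0.8, 0.3, 0.202, 1.48e-3])
print("estimated (A, B, a, b):", fit.x)
```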