
Showing papers on "Singular value decomposition published in 1999"


Book
04 Aug 1999
TL;DR: This book discusses vector spaces, signal processing, and the theory of Constrained Optimization, as well as basic concepts and methods of Iterative Algorithms and Dynamic Programming.
Abstract: I. INTRODUCTION AND FOUNDATIONS. 1. Introduction and Foundations. II. VECTOR SPACES AND LINEAR ALGEBRA. 2. Signal Spaces. 3. Representation and Approximation in Vector Spaces. 4. Linear Operators and Matrix Inverses. 5. Some Important Matrix Factorizations. 6. Eigenvalues and Eigenvectors. 7. The Singular Value Decomposition. 8. Some Special Matrices and Their Applications. 9. Kronecker Products and the Vec Operator. III. DETECTION, ESTIMATION, AND OPTIMAL FILTERING. 10. Introduction to Detection and Estimation, and Mathematical Notation. 11. Detection Theory. 12. Estimation Theory. 13. The Kalman Filter. IV. ITERATIVE AND RECURSIVE METHODS IN SIGNAL PROCESSING. 14. Basic Concepts and Methods of Iterative Algorithms. 15. Iteration by Composition of Mappings. 16. Other Iterative Algorithms. 17. The EM Algorithm in Signal Processing. V. METHODS OF OPTIMIZATION. 18. Theory of Constrained Optimization. 19. Shortest-Path Algorithms and Dynamic Programming. 20. Linear Programming. APPENDIXES. A. Basic Concepts and Definitions. B. Completing the Square. C. Basic Matrix Concepts. D. Random Processes. E. Derivatives and Gradients. F. Conditional Expectations of Multinomial and Poisson r.v.s.

1,568 citations


Journal ArticleDOI
TL;DR: This paper demonstrates that by using singular value decomposition as a method for calculating the order matrices, principal frames and order parameters can be determined efficiently, even when a very limited set of experimental data is available.

550 citations


Journal ArticleDOI
TL;DR: POD is utilized to solve open-loop and closed-loop optimal control problems for the Burgers equation, allowing comparison of POD-based algorithms with numerical results obtained from finite-element discretization of the optimality system.
Abstract: Proper orthogonal decomposition (POD) is a method to derive reduced-order models for dynamical systems. In this paper, POD is utilized to solve open-loop and closed-loop optimal control problems for the Burgers equation. The relative simplicity of the equation allows comparison of POD-based algorithms with numerical results obtained from finite-element discretization of the optimality system. For closed-loop control, suboptimal state feedback strategies are presented.

433 citations


Journal ArticleDOI
TL;DR: This paper examines the distribution of singular values of low-rank matrices corrupted by additive noise, using diagrammatic and saddle point integration techniques to extend earlier results to heterogeneous and correlated noise sources.
Abstract: The singular value decomposition is a matrix decomposition technique widely used in the analysis of multivariate data, such as complex space-time images obtained in both physical and biological systems. In this paper, we examine the distribution of singular values of low-rank matrices corrupted by additive noise. Past studies have been limited to uniform uncorrelated noise. Using diagrammatic and saddle point integration techniques, we extend these results to heterogeneous and correlated noise sources. We also provide perturbative estimates of error bars on the reconstructed low-rank matrix obtained by truncating a singular value decomposition.

314 citations
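The truncation step this entry refers to, in the simplest setting of uniform uncorrelated noise (the paper's contribution is the harder heterogeneous and correlated case), can be sketched as follows; sizes, rank, and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-2 "signal" matrix corrupted by uniform, uncorrelated additive noise.
signal = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
noisy = signal + 0.05 * rng.standard_normal(signal.shape)

# Reconstruct the low-rank part by truncating the SVD at the known rank.
u, s, vt = np.linalg.svd(noisy, full_matrices=False)
recon = u[:, :2] @ np.diag(s[:2]) @ vt[:2, :]

err_noisy = np.linalg.norm(noisy - signal)
err_recon = np.linalg.norm(recon - signal)
print(err_recon < err_noisy)
```

The perturbative error bars derived in the paper quantify how far such a reconstruction can be trusted; the sketch only shows that truncation removes most of the noise.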


Journal ArticleDOI
01 Feb 1999
TL;DR: Several orthogonal transformation-based methods that provide new or alternative tools for rule selection in fuzzy-rule-based modeling are introduced, and the conclusions can be used as a guideline for choosing a proper rule selection method for a specific application.
Abstract: An important issue in fuzzy-rule-based modeling is how to select a set of important fuzzy rules from a given rule base. Even though it is conceivable that removal of redundant or less important fuzzy rules from the rule base can result in a compact fuzzy model with better generalizing ability, the decision as to which rules are redundant or less important is not an easy exercise. In this paper, we introduce several orthogonal transformation-based methods that provide new or alternative tools for rule selection. These methods include an orthogonal least squares (OLS) method, an eigenvalue decomposition (ED) method, a singular value decomposition and QR with column pivoting (SVD-QR) method, a total least squares (TLS) method, and a direct singular value decomposition (D-SVD) method. A common attribute of these methods is that they all work on a firing strength matrix and employ some measure index to detect the rules that should be retained and those that should be eliminated. We show the performance of these methods by applying them to a nonlinear plant modeling problem. Our conclusions based on analysis and simulation can be used as a guideline for choosing a proper rule selection method for a specific application.

304 citations


Journal ArticleDOI
TL;DR: A generalized linear systems framework for PCA based on the singular value decomposition (SVD) model for representation of spatio-temporal fMRI data sets is presented and illustrated in the setting of dynamic time-series response data from fMRI experiments involving pharmacological stimulation of the dopaminergic nigro-striatal system in primates.

255 citations


Proceedings ArticleDOI
01 Jul 1999
TL;DR: A separable decomposition of bidirectional reflectance distribution functions (BRDFs) is used to implement arbitrary reflectances from point sources on existing graphics hardware, using no more space than what is required for the final representation.
Abstract: A separable decomposition of bidirectional reflectance distribution functions (BRDFs) is used to implement arbitrary reflectances from point sources on existing graphics hardware. Two-dimensional texture mapping and compositing operations are used to reconstruct samples of the BRDF at every pixel at interactive rates. A change of variables, the Gram-Schmidt halfangle/difference vector parameterization, improves separability. Two decomposition algorithms are also presented. The singular value decomposition (SVD) minimizes RMS error. The normalized decomposition is fast and simple, using no more space than what is required for the final representation.

250 citations
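A minimal sketch of the RMS-optimal separable (single-term) approximation the SVD provides, on a synthetic two-variable function rather than a real BRDF:

```python
import numpy as np

# A separable decomposition writes f(u, v) ~ sum_k g_k(u) * h_k(v).
# On a sampled grid this is a low-rank matrix factorization, and the
# SVD gives the RMS-optimal rank-1 (single-term) approximation.
u = np.linspace(0.0, 1.0, 64)
v = np.linspace(0.0, 1.0, 64)
F = np.outer(np.cos(u), np.sin(v)) + 0.1 * np.outer(u, v**2)  # rank 2

U, s, Vt = np.linalg.svd(F)
rank1 = s[0] * np.outer(U[:, 0], Vt[0, :])

# Eckart-Young: the Frobenius error of the best rank-1 approximation
# equals the second singular value.
print(np.isclose(np.linalg.norm(F - rank1), s[1]))
```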


Journal ArticleDOI
TL;DR: Sufficient conditions for existence of smooth orthonormal decompositions of smooth time varying matrices, and their block-analogues, are given and differential equations for the factors are derived.
Abstract: In this paper we consider smooth orthonormal decompositions of smooth time varying matrices. Among others, we consider QR-, Schur-, and singular value decompositions, and their block-analogues. Sufficient conditions for existence of such decompositions are given and differential equations for the factors are derived. Also generic smoothness of these factors is discussed.

194 citations


Journal ArticleDOI
TL;DR: It is demonstrated that a so-called one-sided reorthogonalization process can be used to maintain an adequate level of orthogonality among the Lanczos vectors and produce accurate low-rank approximations.
Abstract: Low-rank approximation of large and/or sparse matrices is important in many applications, and the singular value decomposition (SVD) gives the best low-rank approximations with respect to unitarily invariant norms. In this paper we show that good low-rank approximations can be directly obtained from the Lanczos bidiagonalization process applied to the given matrix, without computing any SVD. We also demonstrate that a so-called one-sided reorthogonalization process can be used to maintain an adequate level of orthogonality among the Lanczos vectors and produce accurate low-rank approximations. This technique reduces the computational cost of the Lanczos bidiagonalization process. We illustrate the efficiency and applicability of our algorithm using numerical examples from several application areas.

139 citations
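The general idea can be illustrated with SciPy's `svds` (ARPACK, a Lanczos-type iteration). This is not the paper's one-sided reorthogonalization scheme, but it shows a low-rank approximation obtained from an iterative bidiagonalization-style method rather than a full dense SVD:

```python
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 150))

# Leading k singular triplets via an iterative Lanczos-type method
# (ARPACK), without forming the full SVD of A.
k = 5
u, s, vt = svds(A, k=k)                 # svds returns s in ascending order
Ak = u @ np.diag(s) @ vt

# Compare against the optimal rank-k truncation from the dense SVD.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
Ak_dense = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]
print(np.allclose(np.linalg.norm(A - Ak), np.linalg.norm(A - Ak_dense)))
```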


01 Jan 1999
TL;DR: Wahba's optimality condition has provided the basis for many attitude determination algorithms; this paper gives an overview of the most popular and most promising algorithms and provides accuracy and speed comparisons.
Abstract: The most robust estimators minimizing Wahba's loss function are Davenport's q method and the Singular Value Decomposition (SVD) method. The q method is faster than the SVD method with three or more measurements. The other algorithms are less robust since they solve the characteristic polynomial equation to find the maximum eigenvalue of Davenport's K matrix. They are only preferable when speed or processor power is an important consideration. Of these, Fast Optimal Attitude Matrix (FOAM) is the most robust and faster than the q method. Robustness is only an issue for measurements with widely differing accuracies, so the fastest algorithms, Quaternion ESTimator (QUEST), EStimator of the Optimal Quaternion (ESOQ), and ESOQ2, are well suited to star sensor applications.

123 citations


Journal ArticleDOI
TL;DR: Five key features (noise immunity, robustness, resolution, accuracy, and physical insight) of the proposed algorithm are studied using numerical examples.
Abstract: Multiple ray paths are resolved using high-resolution digital signal processing algorithms. The Cramer-Rao (CR) bound is used as a benchmark where a combination of the singular value decomposition method and the eigen-matrix pencil method is proven to be most successful. The conventional complex channel model for wireless propagation is extended to include the frequency-dependent feature of rays which can be used to classify the ray arrivals and provide physical insight of the channel. A novel complex-time model is used to approximate the suggested model. This approach is important to various applications such as equalizers, RAKE receivers, etc., in wireless communication systems. Five key features (noise immunity, robustness, resolution, accuracy, and physical insight) of the proposed algorithm are studied using numerical examples.

Journal ArticleDOI
TL;DR: A new decomposition called the pivoted QLP decomposition, which is computed by applying pivoted orthogonal triangularization to the columns of the matrix X in question to get an upper triangular factor R and then applying the same procedure to the rows of R to get a lower triangular matrix L.
Abstract: In this paper we introduce a new decomposition called the pivoted QLP decomposition. It is computed by applying pivoted orthogonal triangularization to the columns of the matrix X in question to get an upper triangular factor R and then applying the same procedure to the rows of R to get a lower triangular matrix L. The diagonal elements of R are called the R-values of X; those of L are called the L-values. Numerical examples show that the L-values track the singular values of X with considerable fidelity, far better than the R-values. At a gap in the L-values the decomposition provides orthonormal bases of analogues of the row, column, and null spaces of X. The decomposition requires no more than twice the work required for a pivoted QR decomposition. The computation of R and L can be interleaved, so that the computation can be terminated at any suitable point, which makes the decomposition especially suitable for low-rank determination problems. The interleaved algorithm also suggests a new, efficient 2-norm estimator.
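The two-step construction can be reproduced with SciPy's pivoted QR (an illustrative sketch; the matrix sizes, rank, and gap threshold are invented):

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(2)
# Rank-3 matrix plus small noise, so the singular values have a clear gap.
X = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 20))
X += 1e-3 * rng.standard_normal(X.shape)

# Pivoted QLP: pivoted QR of X gives R; pivoted QR of R^T gives L^T.
_, R, _ = qr(X, mode='economic', pivoting=True)
_, Lt, _ = qr(R.T, mode='economic', pivoting=True)
L_values = np.sort(np.abs(np.diag(Lt)))[::-1]

# The gap in the L-values reveals the numerical rank.
print(L_values[2] > 10 * L_values[3])
```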

Journal ArticleDOI
TL;DR: In this paper, a method based on the use of singular value decomposition (SVD) for detecting damage in structures at an early stage is presented, with results from an experimental investigation on a cantilever beam.

Proceedings ArticleDOI
24 May 1999
TL;DR: In this paper, the singular value decomposition (SVD) was used for the estimation of harmonics in signals, in the presence of high noise, and the proposed approach results in a linear least squares method.
Abstract: The paper examines the singular value decomposition (SVD) for estimation of harmonics in signals, in the presence of high noise. The proposed approach results in a linear least squares method. The methods developed for locating the frequencies as closely spaced sinusoidal signals are appropriate tools for the investigation of power system signals containing harmonics and interharmonics differing significantly in their multiplicity. The SVD approach is a numerical algorithm to calculate the linear least squares solution. The methods can also be applied for frequency estimation of heavy distorted periodical signals. To investigate the methods several experiments have been performed using simulated signals and the waveforms of a frequency converter current. For comparison, similar experiments have been repeated using the FFT with the same number of samples and sampling period. The comparison has proved superiority of the SVD for signals buried in the noise. However, the SVD computation is much more complex than FFT, and requires more extensive mathematical manipulations.
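A sketch of the linear least squares step with the harmonic frequencies assumed known (the paper also treats locating closely spaced frequencies, which this toy example skips); signal parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 1000.0
t = np.arange(512) / fs

# 50 Hz fundamental plus 3rd and 5th harmonics, buried in heavy noise.
x = (np.sin(2*np.pi*50*t) + 0.3*np.sin(2*np.pi*150*t)
     + 0.2*np.sin(2*np.pi*250*t) + 0.5*rng.standard_normal(t.size))

# With the frequencies known, amplitude estimation is a linear least
# squares problem, solved here through the SVD pseudo-inverse.
freqs = np.array([50.0, 150.0, 250.0])
M = np.hstack([np.sin(2*np.pi*freqs*t[:, None]),
               np.cos(2*np.pi*freqs*t[:, None])])
U, s, Vt = np.linalg.svd(M, full_matrices=False)
coef = Vt.T @ ((U.T @ x) / s)           # minimum-norm LS solution

amps = np.hypot(coef[:3], coef[3:])     # amplitude of each harmonic
print(np.allclose(amps, [1.0, 0.3, 0.2], atol=0.15))
```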

Journal ArticleDOI
TL;DR: In this paper, the authors study regularization methods for linear ill-posed problems and show that all these methods can be viewed either as smoothing the pseudo-inverse or equivalently as first smoothing the data and then applying the pseudo-inverse.
Abstract: The aim of this paper is to study regularization methods for linear ill-posed problems. Linear methods are Tikhonov-Phillips methods, iterative methods, truncated singular value decomposition, Backus-Gilbert-type methods and approximate inverse, for example. The first three are generally studied as filter methods where a special filter for the singular value decomposition can be computed. In the other methods mentioned the regularization is achieved by either smoothing the data or the solution. More general is the approximate inverse introduced by Louis (1996 Inverse Problems 12 175-90). Here we show that all these methods can be viewed either as smoothing the pseudo-inverse or equivalently as first smoothing the data and then applying the pseudo-inverse. The smoothing of the data or of the pseudo-inverse has to be at least of the order of the smoothing of the operator in the problem to be solved. Conditions for the order-optimality of the methods are given.

Journal ArticleDOI
TL;DR: In this article, an approximate method to estimate the resolution, covariance and correlation matrix for linear tomographic systems Ax=b that are too large to be solved by singular value decomposition is presented.
Abstract: We present an approximate method to estimate the resolution, covariance and correlation matrix for linear tomographic systems Ax=b that are too large to be solved by singular value decomposition. An explicit expression for the approximate inverse matrix A− is found using one-step backprojections on the Penrose condition AA−≈I, from which we calculate the statistical properties of the solution. The computation of A− can easily be parallelized, each column being constructed independently. The method is validated on small systems for which the exact covariance can still be computed with singular value decomposition. Though A− is not accurate enough to actually compute the solution x, the qualitative agreement obtained for resolution and covariance is sufficient for many purposes, such as rough assessment of model precision or the reparametrization of the model by the grouping of correlating parameters. We present an example for the computation of the complete covariance matrix of a very large (69 043 × 9610) system with 5.9 × 106 non-zero elements in A. Computation time is proportional to the number of non-zero elements in A. If the correlation matrix is computed for the purpose of reparametrization by combining highly correlating unknowns xi, a further gain in efficiency can be obtained by neglecting the small elements in A, but a more accurate estimation of the correlation requires a full treatment of even the smaller Aij. We finally develop a formalism to compute a damped version of A−.

Journal ArticleDOI
TL;DR: In this article, a theoretical explanation of the basics of singular value decomposition (SVD) is given to justify their use in applications in power system dynamics and control, and the ideas are applied to a single-machine infinite busbar system with a static var controller (SVC).

Journal ArticleDOI
TL;DR: This paper compares recent methods for obtaining a projection from incomplete data and finds that a projection can be obtained even when a substantial amount of data is missing.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the incorporation of the spatial covariance of the pericardial potentials, assumed known a priori as a regularization function, when computing the pericardial potential distribution from observed body surface potentials.
Abstract: This paper investigates the incorporation of the spatial covariance of the pericardial potentials, assumed known a priori as a regularization function, when computing the pericardial potential distribution from observed body surface potentials. The resulting inverse solutions are compared with those using as a regularization function: (1) the norm of the solution, (2) the norm of the surface Laplacian of the solution, as well as with those based on using the truncated singular value decomposition. The study uses a realistic source model to simulate potentials throughout the QRS-interval. This source is placed in an anatomically accurate inhomogeneous volume conductor model of the torso. The use of a single value of the regularization parameter is shown to be feasible: for data incorporating 2% noise, the use of the spatial covariance is demonstrated to result in a relative error over the entire QRS interval as low as 10%. Major errors are demonstrated to result if the effect of the inhomogeneity of the lungs is ignored. The spatial covariance based inverse is shown to be more robust with respect to the perturbations (noise; inhomogeneity) than the other estimators included in this study.

Journal ArticleDOI
TL;DR: In this article, the authors used Singular Value Decomposition (SVD) to analyze tokamak magnetic fluctuation data, time evolution of MHD modes, spatial structure of each time vector, and the energy content of each mode.
Abstract: Identification of coherent waves from fluctuating tokamak plasmas is important for the understanding of magnetohydrodynamics (MHD) behaviour of the plasma and its control. Toroidicity, plasma shaping, uneven distances between the resonant surfaces and detectors, and non-circular conducting wall geometry have made mode identification difficult and complex, especially in terms of the conventional toroidal and poloidal mode numbers, which we call (m,n)-identification. Singular value decomposition (SVD), without any assumption of the basis vectors, determines its own basis vectors representing the fluctuation data in the directions of maximum coherence. Factorization of a synchronized set of spatially distributed data leads to eigenvectors of time- and spatial-covariance matrices, with the energy content of each eigenvector. SVD minimizes the number of significant basis vectors, reducing noise, and minimizes the data storage required to restore the fluctuation data. For sinusoidal signals, SVD is essentially the same as spectral analysis. When the mode has non-smooth structures, the advantage of not having to treat all its spectral components is significant in analysing mode dynamics and in data storage. From time SVD vectors, we can see the evolution of each coherent structure. Therefore, sporadic or intermittent events can be recognized, while such events would be ignored with spectral analysis. We present the use of SVD to analyse tokamak magnetic fluctuation data, time evolution of MHD modes, spatial structure of each time vector, and the energy content of each mode. If desired, the spatial SVD vectors can be least-squares fit to specific numerical predictions for the (m,n) identification. A phase-fitting method for (m,n) mode identification is presented for comparison. Applications of these methods to mode locking analysis are presented.
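The space-time factorization described here can be illustrated on synthetic probe data (the probe geometry, mode number, and noise level are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic probe array: 16 poloidal positions, 1000 time samples,
# one coherent rotating mode (m = 2-like) plus probe noise.
theta = 2 * np.pi * np.arange(16) / 16
t = np.linspace(0.0, 1.0, 1000)
data = np.cos(2*theta[:, None] - 2*np.pi*40*t[None, :])
data += 0.1 * rng.standard_normal(data.shape)

# SVD factorizes the space-time matrix into spatial vectors (U),
# time vectors (Vt) and singular values; s_k**2 is the energy of mode k.
U, s, Vt = np.linalg.svd(data, full_matrices=False)
energy = s**2 / np.sum(s**2)

# A rotating wave appears as a conjugate pair of singular vectors,
# so the first two modes carry almost all of the coherent energy.
print(energy[0] + energy[1] > 0.9)
```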

01 Aug 1999
TL;DR: A novel method is proposed that follows the autocalibration paradigm, according to which calibration is achieved not with the aid of a calibration pattern but by observing a number of image features in a set of successive images, which leads to a particularly simple form of the Kruppa equations.
Abstract: This paper deals with a fundamental problem in motion and stereo analysis, namely that of determining the camera intrinsic calibration parameters. A novel method is proposed that follows the autocalibration paradigm, according to which calibration is achieved not with the aid of a calibration pattern but by observing a number of image features in a set of successive images. The proposed method relies upon the Singular Value Decomposition of the fundamental matrix, which leads to a particularly simple form of the Kruppa equations. In contrast to the classical formulation that yields an over-determined system of constraints, the derivation proposed here provides a straightforward answer to the problem of determining which constraints to employ among the set of available ones. Moreover, the derivation is a purely algebraic one, without a need for resorting to the somewhat non-intuitive geometric concept of the absolute conic. Apart from the fundamental matrix itself, no other quantities that can be extracted from it (e.g. the epipoles) are needed for the derivation. Experimental results from extensive simulations and several image sequences demonstrate the effectiveness of the proposed method in accurately estimating the intrinsic calibration matrices. It is also shown that the computed intrinsic calibration matrices are sufficient for recovering 3D motion and performing metric measurements from uncalibrated images.

Journal ArticleDOI
TL;DR: A new method is presented for structure preserving low rank approximation of a matrix, which is based on Structured Total Least Norm (STLN), an efficient method for obtaining an approximate solution to an overdetermined linear system.
Abstract: The structure preserving rank reduction problem arises in many important applications. The singular value decomposition (SVD), while giving the closest low rank approximation to a given matrix in matrix L2 norm and Frobenius norm, may not be appropriate for these applications since it does not preserve the given structure. We present a new method for structure preserving low rank approximation of a matrix, which is based on Structured Total Least Norm (STLN). The STLN is an efficient method for obtaining an approximate solution to an overdetermined linear system AX ≈ B, preserving the given linear structure in the perturbation [E F] such that (A + E)X = B + F. The approximate solution can be obtained to minimize the perturbation [E F] in the Lp norm, where p = 1, 2, or ∞. An algorithm is described for Hankel structure preserving low rank approximation using STLN with the Lp norm. Computational results are presented, which show the performance of the STLN based method for L1 and L2 norms for reduced rank approximation for Hankel matrices.
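To see why plain SVD truncation is insufficient for structured problems, a small sketch with a Hankel matrix (the STLN method itself is not implemented here; the sequence and sizes are invented):

```python
import numpy as np
from scipy.linalg import hankel

rng = np.random.default_rng(6)
# Hankel matrix built from a noisy two-exponential sequence.
n = np.arange(12)
seq = 0.9**n + 0.5**n + 0.01 * rng.standard_normal(12)
H = hankel(seq[:6], seq[5:])

# Plain SVD truncation gives the closest rank-2 matrix in Frobenius
# norm, but the result is generally no longer Hankel (constant
# anti-diagonals), which is the structure an STLN-based method preserves.
U, s, Vt = np.linalg.svd(H)
H2 = U[:, :2] @ np.diag(s[:2]) @ Vt[:2, :]

mismatch = abs(H2[0, 1] - H2[1, 0])   # generically nonzero: structure lost
print(np.linalg.matrix_rank(H2) == 2)
```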

Journal ArticleDOI
TL;DR: Using a singular value decomposition of a beam line matrix, composed of many beam position measurements for a large number of pulses, together with the measurement of pulse-by-pulse beam properties or machine attributes, the contributions of each variable to the beam centroid motion can be identified with a greatly improved resolution as mentioned in this paper.
Abstract: Using a singular value decomposition of a beam line matrix, composed of many beam position measurements for a large number of pulses, together with the measurement of pulse-by-pulse beam properties or machine attributes, the contributions of each variable to the beam centroid motion can be identified with a greatly improved resolution. The eigenvalues above the noise floor determine the number of significant physical variables. This method is applicable to storage rings, linear accelerators, and any system involving a number of sources and a larger number of sensors with unknown correlations. Applications are presented from the Stanford Linear Collider. © 1999 The American Physical Society.

Journal ArticleDOI
TL;DR: New fast algorithms are presented for tracking singular values, singular vectors, and the dimension of a signal subspace through an overlapping sequence of data matrices.
Abstract: New fast algorithms are presented for tracking singular values, singular vectors, and the dimension of a signal subspace through an overlapping sequence of data matrices. The basic algorithm is called fast approximate subspace tracking (FAST). The algorithm is derived for the special case in which the matrix is changed by deleting the oldest column, shifting the remaining columns to the left, and adding a new column on the right. A second algorithm (FAST2) is specified by modifying FAST to trade reduced accuracy for higher speed. The speed and accuracy are compared with the PL algorithm, the PAST and PASTd algorithms, and the FST algorithm. An extension to multicolumn updates for the FAST algorithm is also discussed.

Journal ArticleDOI
TL;DR: In this article, the stability of time-dependent deterministic and stochastic dynamical operators is examined in order to obtain a better understanding of the asymptotic stability and the nature of the first Lyapunov vector.
Abstract: Asymptotic linear stability of time-dependent flows is examined by extending to nonautonomous systems methods of nonnormal analysis that were recently developed for studying the stability of autonomous systems. In the case of either an autonomous or a nonautonomous operator, singular value decomposition (SVD) analysis of the propagator leads to identification of a complete set of optimal perturbations ordered according to the extent of growth over a chosen time interval as measured in a chosen inner product generated norm. The longtime asymptotic structure in the case of an autonomous operator is the norm-independent, most rapidly growing normal mode while in the case of the nonautonomous operator it is the first Lyapunov vector that grows at the norm independent mean rate of the first Lyapunov exponent. While information about the first normal mode such as its structure, energetics, vorticity budget, and growth rate are easily accessible through eigenanalysis of the dynamical operator, analogous information about the first Lyapunov vector is less easily obtained. In this work the stability of time-dependent deterministic and stochastic dynamical operators is examined in order to obtain a better understanding of the asymptotic stability of time-dependent systems and the nature of the first Lyapunov vector. Among the results are a mechanistic physical understanding of the time-dependent instability process, necessary conditions on the time dependence of an operator in order for destabilization to occur, understanding of why the Rayleigh theorem does not constrain the stability of time-dependent flows, the dependence of the first Lyapunov exponent on quantities characterizing the dynamical system, and identification of dynamical processes determining the time-dependent structure of the first Lyapunov vector.
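A minimal sketch of SVD analysis of a propagator for an autonomous non-normal operator (the matrix and time horizon are an arbitrary toy example, not from the paper):

```python
import numpy as np
from scipy.linalg import expm, svd

# Toy non-normal autonomous operator: both eigenvalues are stable,
# yet the SVD of the propagator expm(A*t) exposes transient growth.
A = np.array([[-1.0, 10.0],
              [ 0.0, -2.0]])
t = 1.0
P = expm(A * t)

U, s, Vt = svd(P)
# s[0] is the optimal growth over [0, t]; Vt[0] is the optimal initial
# perturbation and U[:, 0] its evolved structure.
modal_bound = np.exp(np.max(np.linalg.eigvals(A).real) * t)
print(s[0] > 1.0 > modal_bound)
```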

Patent
13 Sep 1999
TL;DR: In this article, a technique for clustering data points in a data set that is arranged as a matrix having n objects and m attributes is presented, where each categorical attribute of the data set is converted to a 1-of-p representation of the categorical attributes.
Abstract: A technique for clustering data points in a data set that is arranged as a matrix having n objects and m attributes. Each categorical attribute of the data set is converted to a 1-of-p representation of the categorical attribute. A converted data set A is formed based on the data set and the 1-of-p representation for each categorical attribute. The converted data set A is compressed using, for example, a Goal Directed Projection compression technique or a Singular Value Decomposition compression technique, to obtain q basis vectors, with q being defined to be at least m+1. The transformed data set is projected onto the q basis vectors to form a data matrix having at least one vector, with each vector having q dimensions. Lastly, a clustering technique is performed on the data matrix having vectors having q dimensions.

Proceedings ArticleDOI
30 May 1999
TL;DR: It is demonstrated through some simple experiments that for a given image reconstruction quality, more scalar parameters must be transmitted using the SVD, than when using the discrete cosine transform (DCT).
Abstract: During the past couple of decades several proposals for image coders using singular value decomposition (SVD) have been put forward. The results using SVD in this context have never been spectacular. The main problem with the SVD is that the transform itself must be transmitted as side information. We demonstrate through some simple experiments that for a given image reconstruction quality, more scalar parameters must be transmitted using the SVD, than when using the discrete cosine transform (DCT). Also, using an alternative interpretation of the SVD we show that the SVD representation necessitates quantization of individual factors as compared to quantization of the associated product. This is clearly suboptimal.
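The side-information argument can be made concrete with a simple parameter count (the block size and rank below are illustrative, not taken from the paper):

```python
# For an N x N image block, a rank-k SVD code must transmit the basis as
# side information: k left vectors, k right vectors and k singular
# values, i.e. k*(2N + 1) scalars. A fixed-basis transform such as the
# DCT transmits at most N*N coefficients and no basis at all.
N, k = 8, 4
svd_params = k * (2 * N + 1)
dct_params = N * N
print(svd_params, dct_params)  # 68 vs 64, before any coefficients are dropped
```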

Journal ArticleDOI
TL;DR: A composite-data processing method which simultaneously processes two or more data sets with different measurement errors is presented and the possible usefulness of the apparent T1/T2 ratio extracted from the logs is illustrated.

Journal ArticleDOI
TL;DR: A modified method of PCA is presented, which utilizes complex singular value decomposition (SVD) to analyze spectral data sets with any amount of variation in spectral phase, and is shown to be completely insensitive to spectral phase.
Abstract: Principal component analysis (PCA) is a powerful method for quantitative analysis of nuclear magnetic resonance spectral data sets. It has the advantage of being model independent, making it well suited for the analysis of spectra with complicated or unknown line shapes. Previous applications of PCA have required that all spectra in a data set be in phase or have implemented iterative methods to analyze spectra that are not perfectly phased. However, improper phasing or imperfect convergence of the iterative methods has resulted in systematic errors in the estimation of peak areas with PCA. Presented here is a modified method of PCA, which utilizes complex singular value decomposition (SVD) to analyze spectral data sets with any amount of variation in spectral phase. The new method is shown to be completely insensitive to spectral phase. In the presence of noise, PCA with complex SVD yields a lower variation in the estimation of peak area than conventional PCA by a factor of approximately 2. The performance of the method is demonstrated with simulated data and in vivo 31P spectra from human skeletal muscle.
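The phase insensitivity follows from the unitary invariance of singular values; a small synthetic check (the line shape, amplitudes, and phases are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(5)
# 20 complex "spectra" of 128 points: scaled copies of one line shape.
line = 1.0 / (1.0 + 1j * np.linspace(-5.0, 5.0, 128))   # Lorentzian-like
amps = rng.uniform(0.5, 2.0, 20)
spectra = np.outer(amps, line)

# Give every spectrum a different (unknown) zero-order phase error.
phases = np.exp(1j * rng.uniform(-np.pi, np.pi, 20))
spectra_rot = phases[:, None] * spectra

# Multiplying rows by unit phases is a unitary operation, so the
# singular values of the complex SVD are exactly unchanged.
s0 = np.linalg.svd(spectra, compute_uv=False)
s1 = np.linalg.svd(spectra_rot, compute_uv=False)
print(np.allclose(s0, s1))
```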

Journal ArticleDOI
TL;DR: Linear averaging (L-A) was devised for selection of centroids and radii for the RBFs and for computing the number of RBF units; the proposed method considers the class membership and localized probability density distribution of each class in the training sets.
Abstract: Construction of radial basis function neural networks (RBFN) involves selection of radial basis function centroid, radius (width or scale), and number of radial basis function (RBF) units in the hidden layer. The K-means clustering algorithm is frequently used for selection of centroids and radii. However, with the K-means clustering algorithm, the number of RBF units is usually arbitrarily selected, which may lead to suboptimal performance of the neural network model. Besides, class membership and the related probability distribution are not considered. Linear averaging (L-A) was devised for selection of centroids and radii for the RBFs and computing the number of RBF units. The proposed method considers the class membership and localized probability density distribution of each class in the training sets. The parameters related to the network construction were investigated. The network was trained with the QuickProp algorithm (QP) or Singular Value Decomposition (SVD) algorithm and evaluated with the po...