
Showing papers on "Singular value decomposition published in 1998"


Book
01 Jan 1998
TL;DR: This volume treats the numerical solution of dense and large-scale eigenvalue problems with an emphasis on algorithms and the theoretical background required to understand them.
Abstract: This book is the second volume in a projected five-volume survey of numerical linear algebra and matrix algorithms. This volume treats the numerical solution of dense and large-scale eigenvalue problems with an emphasis on algorithms and the theoretical background required to understand them. Stressing depth over breadth, Professor Stewart treats the derivation and implementation of the more important algorithms in detail. The notes and references sections contain pointers to other methods along with historical comments. The book is divided into two parts: dense eigenproblems and large eigenproblems. The first part gives a full treatment of the widely used QR algorithm, which is then applied to the solution of generalized eigenproblems and the computation of the singular value decomposition. The second part treats Krylov sequence methods such as the Lanczos and Arnoldi algorithms and presents a new treatment of the Jacobi-Davidson method. The volumes in this survey are not intended to be encyclopedic. By treating carefully selected topics in depth, each volume gives the reader the theoretical and practical background to read the research literature and implement or modify new algorithms. The algorithms treated are illustrated by pseudocode that has been tested in MATLAB implementations.
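
The connection between the SVD and the symmetric eigenproblem that the first part of the book exploits can be stated in a few lines. A minimal sketch (assuming NumPy; the random test matrix is made up):

    import numpy as np

    # The eigenvalues of the symmetric augmented matrix [[0, A], [A.T, 0]]
    # come in +/- sigma pairs, so dense QR-type machinery for symmetric
    # eigenproblems also delivers singular values.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))
    m, n = A.shape

    aug = np.block([[np.zeros((m, m)), A],
                    [A.T, np.zeros((n, n))]])
    sigma_from_eig = np.sort(np.linalg.eigvalsh(aug))[-n:]  # top n eigenvalues

    assert np.allclose(sigma_from_eig,
                       np.sort(np.linalg.svd(A, compute_uv=False)))

In practice, dense SVD codes work on a bidiagonal reduction rather than on the augmented matrix, but the spectral equivalence above is what justifies applying QR-type eigenvalue machinery to the SVD.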

653 citations


Journal ArticleDOI
Er-Wei Bai
TL;DR: In this article, an optimal two-stage identification algorithm is presented for Hammerstein-Wiener systems, where two static nonlinear elements surround a linear block, and the algorithm is shown to be convergent in the absence of noise and convergent with probability one in the presence of white noise.

519 citations


Journal ArticleDOI
TL;DR: The scaling procedure is a general-purpose tool that can be used not only to estimate latent/unobservable dimensions but also to estimate an Eckart-Young lower-rank approximation matrix of a matrix with missing entries.
Abstract: This paper develops a scaling procedure for estimating the latent/unobservable dimensions underlying a set of manifest/observable variables. The scaling procedure performs, in effect, a singular value decomposition of a rectangular matrix of real elements with missing entries. In contrast to existing techniques such as factor analysis which work with a correlation or covariance matrix computed from the data matrix, the scaling procedure shown here analyzes the data matrix directly. The scaling procedure is a general-purpose tool that can be used not only to estimate latent/unobservable dimensions but also to estimate an Eckart-Young lower-rank approximation matrix of a matrix with missing entries. Monte Carlo tests show that the procedure reliably estimates the latent dimensions and reproduces the missing elements of a matrix even at high levels of error and missing data. A number of applications to political data are shown and discussed.
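
The paper's scaling procedure has its own alternating algorithm; as a rough illustration of the underlying idea only (an Eckart-Young truncated SVD iterated over imputed entries), a minimal NumPy sketch, not the authors' method:

    import numpy as np

    def lowrank_complete(X, mask, rank, iters=200):
        # X: data matrix with arbitrary values where mask is False (missing);
        # iterate the best rank-r approximation, restoring observed entries.
        filled = np.where(mask, X, X[mask].mean())        # crude initial fill
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(filled, full_matrices=False)
            approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # Eckart-Young step
            filled = np.where(mask, X, approx)
        return approx

The recovered low-rank factors play the role of the latent dimensions, and the entries of approx at the masked positions are the reproduced missing elements.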

405 citations


Journal ArticleDOI
TL;DR: In this paper, an alternating trilinear decomposition (ATLD) algorithm was proposed for the decomposition of three-way data arrays, which is based on an alternating least squares principle and an improved iterative procedure that uses the Moore-Penrose generalized inverse with singular value decomposition.
Abstract: In this paper, an alternating trilinear decomposition (ATLD) algorithm, an improvement of the traditional unconstrained PARAFAC algorithm, is described as an alternative for decomposing three-way data arrays. It is based on an alternating least squares principle and an improved iterative procedure that, in a real trilinear sense, uses the Moore–Penrose generalized inverse computed via the singular value decomposition. Its performance is compared with that of the traditional PARAFAC algorithm in a series of Monte Carlo simulations at different noise levels, which show that the ATLD algorithm converges faster than the traditional PARAFAC algorithm. The ATLD-based second-order calibration retains the second-order advantage: calibration in the presence of unknown interferents can still provide satisfactory concentration estimates. Both algorithms have been used for the simultaneous determination of overlapped chlorinated aromatic hydrocarbons measured by high-performance liquid chromatography with a diode array detector. © 1998 John Wiley & Sons, Ltd.
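
For concreteness, here is what a single SVD-based alternating step for a trilinear model looks like (a generic PARAFAC/ALS update in NumPy, assuming the model X_k ≈ A diag(c_k) B^T; the actual ATLD iteration differs in detail):

    import numpy as np

    def update_C(X, A, B):
        # One alternating least-squares step for the C loadings; the A and B
        # updates are symmetric. pinv computes the Moore-Penrose generalized
        # inverse via the SVD, as in the ATLD procedure.
        R = A.shape[1]
        Z = (B[:, None, :] * A[None, :, :]).reshape(-1, R)  # Khatri-Rao product
        Zp = np.linalg.pinv(Z)
        # vec(X_k) = Z @ c_k, so c_k = pinv(Z) @ vec(X_k) for each slab k
        return np.stack([Zp @ Xk.flatten(order='F') for Xk in X])

Cycling such updates over A, B and C until the fit stops improving is the alternating least squares principle the abstract refers to.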

334 citations


Journal ArticleDOI
01 Dec 1998
TL;DR: A partial reorthogonalization procedure (BPRO) for maintaining semi-orthogonality among the left and right Lanczos vectors in the Lanczos bidiagonalization (LBD) is presented.
Abstract: A partial reorthogonalization procedure (BPRO) for maintaining semi-orthogonality among the left and right Lanczos vectors in the Lanczos bidiagonalization (LBD) is presented. The resulting algorithm is mathematically equivalent to the symmetric Lanczos algorithm with partial reorthogonalization (PRO) developed by Simon but works directly on the Lanczos bidiagonalization of A. For computing the singular values and vectors of a large sparse matrix with high accuracy, the BPRO algorithm uses only half the amount of storage and a factor of 3-4 less work compared to methods based on PRO applied to an equivalent symmetric system. Like PRO the algorithm presented here is based on simple recurrences which enable it to monitor the loss of orthogonality among the Lanczos vectors directly without forming inner products. These recurrences are used to develop a Lanczos bidiagonalization algorithm with partial reorthogonalization which has been implemented in a MATLAB package for sparse SVD and eigenvalue problems called PROPACK. Numerical experiments with the routines from PROPACK are conducted using a test problem from inverse helioseismology to illustrate the properties of the method. In addition a number of test matrices from the Harwell-Boeing collection are used to compare the accuracy and efficiency of the MATLAB implementations of BPRO and PRO with the svds routine in MATLAB 5.1, which uses an implicitly restarted Lanczos algorithm.
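
The recursion underneath the LBD is short; a bare Golub-Kahan-Lanczos bidiagonalization with full reorthogonalization is sketched below (BPRO's point is to reorthogonalize only when semi-orthogonality is about to be lost, which is where the factor 3-4 savings comes from):

    import numpy as np

    def lanczos_bidiag(A, k, seed=0):
        # After k steps: A @ V = U @ B with B (k+1 x k) lower bidiagonal;
        # the singular values of B approximate those of A.
        m, n = A.shape
        U = np.zeros((m, k + 1)); V = np.zeros((n, k))
        alpha = np.zeros(k); beta = np.zeros(k)
        u = np.random.default_rng(seed).standard_normal(m)
        U[:, 0] = u / np.linalg.norm(u)
        for j in range(k):
            v = A.T @ U[:, j] - (beta[j - 1] * V[:, j - 1] if j > 0 else 0.0)
            v -= V[:, :j] @ (V[:, :j].T @ v)          # full reorthogonalization
            alpha[j] = np.linalg.norm(v); V[:, j] = v / alpha[j]
            u = A @ V[:, j] - alpha[j] * U[:, j]
            u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)
            beta[j] = np.linalg.norm(u); U[:, j + 1] = u / beta[j]
        B = np.zeros((k + 1, k))
        B[np.arange(k), np.arange(k)] = alpha
        B[np.arange(1, k + 1), np.arange(k)] = beta
        return U, B, V

(Breakdown when alpha or beta vanishes is ignored here; production codes such as PROPACK handle it, and monitor the loss of orthogonality through cheap recurrences instead of the explicit projections above.)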

318 citations


01 Jan 1998
TL;DR: The key contributions to the OPC field made in this thesis work include formulation of OPC as a feedback control problem using an iterative solution, and use of fast aerial image simulation for OPC, which truly enables full chip model-based OPC.
Abstract: In this thesis, we first look at the Optical Proximity Correction (OPC) problem and define the goals, constraints, and techniques available. Then, a practical and general OPC framework is built up using concepts from linear systems, control theory, and computational geometry. A simulation-based, or model-based, OPC algorithm is developed which simulates the optics and processing steps of lithography for millions of locations. The key contributions to the OPC field made in this thesis work include: (1) formulation of OPC as a feedback control problem using an iterative solution, (2) an algorithm for edge movement during OPC with cost function criteria, (3) use of fast aerial image simulation for OPC, which truly enables full-chip model-based OPC, and (4) the variable threshold resist (VTR) model for simplified prediction of CD based on the aerial image. A major contribution of this thesis is the development of a fast aerial image simulator tailored to the problem of OPC: because OPC applications need intensity only at sparse points, the simulator computes intensity at those points rather than on a regular dense grid. The starting point for the fast simulation is an established decomposition of the Hopkins partially coherent imaging equations, originally proposed by Gamo (14). Within this thesis, the decomposition is called the Sum of Coherent Systems (SOCS) structure. The numerical implementation of this decomposition using Singular Value Decomposition (SVD) is described in detail. Another contribution of this thesis is the development of a variable threshold resist (VTR) model. The model uses the aerial image peak intensity and image slope along a cutline to deduce the development point of the resist, and has two primary benefits: (1) it is fast, and (2) it can be fit to empirical data. We combine the fast aerial image simulator and the VTR model in an iterative feedback loop to formulate OPC as a feedback control problem. (Abstract shortened by UMI.)
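
A toy 1-D version of the SOCS structure conveys the idea (the kernels below are made up; the thesis computes them from the actual Hopkins imaging equations of the optical system):

    import numpy as np

    # Discretize a Hopkins-style kernel T(u, v), eigendecompose it, and form
    # the aerial image as a weighted sum of coherent-system intensities:
    #   I(x) = sum_k  lam_k * |(phi_k conv mask)(x)|^2
    w = 17
    u = np.arange(w) - w // 2
    h = np.sinc(u / 4.0) * np.hanning(w)            # stand-in pupil response
    J = np.sinc((u[:, None] - u[None, :]) / 8.0)    # stand-in mutual coherence
    T = J * np.outer(h, h)                          # symmetric PSD "TCC" matrix

    lam, Phi = np.linalg.eigh(0.5 * (T + T.T))
    order = np.argsort(np.abs(lam))[::-1]
    lam, Phi = lam[order], Phi[:, order]

    mask = np.zeros(128); mask[48:80] = 1.0         # a simple line feature
    K = 6                                           # dominant coherent systems
    image = sum(lam[k] * np.abs(np.convolve(mask, Phi[:, k], mode='same'))**2
                for k in range(K))

Because the eigenvalues decay quickly, a handful of coherent systems reproduces the partially coherent image, which is what makes sparse, full-chip intensity evaluation affordable.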

286 citations


Book
01 Jan 1998
TL;DR: This book covers computer storage and manipulation of data, computer operations on numeric data, solution of linear systems, computation of eigenvalues and the singular value decomposition, software for numerical linear algebra, and applications in statistics.
Abstract: Contents:
1. Computer Storage and Manipulation of Data: digital representation of numeric data; computer operations on numeric data; numerical algorithms and analysis; exercises.
2. Basic Vector/Matrix Computations: notation, definitions, and basic properties (operations on vectors and vector spaces; vectors and matrices; operations on vectors and matrices; partitioned matrices; matrix rank; identity matrices; inverses; linear systems; generalized inverses; other special vectors and matrices; eigenanalysis; similarity transformations; norms and matrix norms; orthogonal and orthogonalization transformations; condition of matrices; matrix derivatives); computer representations and basic operations (representation of vectors and matrices; multiplication of vectors and matrices); exercises.
3. Solution of Linear Systems: Gaussian elimination; matrix factorizations (LU and LDU; Cholesky; QR; Householder transformations (reflections); Givens transformations (rotations); Gram-Schmidt transformations; singular value factorization; choice of direct methods); iterative methods (Gauss-Seidel with successive overrelaxation; linear systems as an optimization problem and conjugate gradient methods); numerical accuracy; iterative refinement; updating a solution; overdetermined systems and least squares (full-rank and rank-deficient coefficient matrices; updating a solution); other computations (rank determination; computing the determinant; computing the condition number); exercises.
4. Computation of Eigenvectors and Eigenvalues and the Singular Value Decomposition: power method; Jacobi method; QR method for eigenanalysis; singular value decomposition; exercises.
5. Software for Numerical Linear Algebra: Fortran and C (BLAS; Fortran and C libraries; Fortran 90 and 95); interactive systems for array manipulation (Matlab; S, S-Plus); high-performance software; test data; exercises.
6. Applications in Statistics: fitting linear models with data; linear models and least squares (the normal equations and the sweep operator; least squares subject to linear equality constraints; weighted least squares; updating linear regression statistics; tests of hypotheses; D-optimal designs); ill-conditioning in statistical applications; testing the rank of a matrix; stochastic processes; exercises.
Appendices: notation and definitions; solutions and hints for selected exercises. Back matter: literature in computational statistics; World Wide Web, news groups, list servers, and bulletin boards; references; author index.

267 citations


Journal ArticleDOI
TL;DR: It is shown that SDD-based LSI does as well as SVD-based LSI in terms of document retrieval while requiring only one-twentieth the storage and one-half the time to compute each query.
Abstract: The vast amount of textual information available today is useless unless it can be effectively and efficiently searched. The goal in information retrieval is to find documents that are relevant to a given user query. We can represent a document collection by a matrix whose (i, j) entry is nonzero only if the ith term appears in the jth document; thus each document corresponds to a column vector. The query is also represented as a column vector whose ith entry is nonzero only if the ith term appears in the query. We score each document for relevancy by taking its inner product with the query. The highest-scoring documents are considered the most relevant. Unfortunately, this method does not necessarily retrieve all relevant documents because it is based on literal term matching. Latent semantic indexing (LSI) replaces the document matrix with an approximation generated by the truncated singular value decomposition (SVD). This method has been shown to overcome many difficulties associated with literal term matching. In this article we propose replacing the SVD with the semidiscrete decomposition (SDD). We describe the SDD approximation, show how to compute it, and compare the SDD-based LSI method to the SVD-based LSI method. We show that SDD-based LSI does as well as SVD-based LSI in terms of document retrieval while requiring only one-twentieth the storage and one-half the time to compute each query. We also show how to update the SDD approximation when documents are added to or deleted from the document collection.
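
A compact sketch of the SVD-based LSI scoring that the SDD is being compared against (NumPy; the query fold-in convention below is one of several in common use):

    import numpy as np

    def lsi_scores(term_doc, query, k):
        # term_doc: terms x documents matrix; query: vector of term weights.
        U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
        Uk, sk, Vk = U[:, :k], s[:k], Vt[:k].T      # truncated rank-k SVD
        q_hat = (query @ Uk) / sk                   # fold query into latent space
        docs = Vk * sk                              # latent document coordinates
        return (docs @ q_hat) / (np.linalg.norm(docs, axis=1)
                                 * np.linalg.norm(q_hat) + 1e-12)  # cosines

The SDD replaces the dense factors with matrices whose entries are restricted to {-1, 0, +1} (plus a diagonal scaling), which is the source of the storage and query-time savings reported above.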

262 citations


Journal ArticleDOI
TL;DR: In this article, a vaguelette-wavelet decomposition method is proposed in which the observed data are expanded directly in wavelet series, and the performance of various methods is compared through exact risk calculations in the context of estimating the derivative of a function observed subject to noise.
Abstract: A wide variety of scientific settings involve indirect noisy measurements in which one faces a linear inverse problem in the presence of noise. Primary interest is in some function f(t), but data are accessible only about some linear transform of it, corrupted by noise. The usual linear methods for such inverse problems do not perform satisfactorily when f(t) is spatially inhomogeneous. One existing nonlinear alternative is the wavelet-vaguelette decomposition method, based on the expansion of the unknown f(t) in wavelet series. In the vaguelette-wavelet decomposition method proposed here, the observed data are expanded directly in wavelet series. The performances of the various methods are compared through exact risk calculations, in the context of the estimation of the derivative of a function observed subject to noise. A result is proved demonstrating that, with a suitable universal threshold somewhat larger than that used for standard denoising problems, both wavelet-based approaches have an ideal spatial adaptivity property.
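
The thresholding step at the heart of both decompositions is simple; a single-level Haar version with a universal-style threshold (pure NumPy, even-length input, purely illustrative constants):

    import numpy as np

    def haar_soft_denoise(y, sigma, c=1.5):
        # One-level Haar analysis, soft thresholding, then synthesis.
        # c > 1 reflects the paper's point that inverse problems need a
        # threshold somewhat larger than the standard denoising choice.
        n = len(y)
        s = (y[0::2] + y[1::2]) / np.sqrt(2)      # smooth coefficients
        d = (y[0::2] - y[1::2]) / np.sqrt(2)      # detail coefficients
        t = c * sigma * np.sqrt(2 * np.log(n))    # universal-style threshold
        d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)   # soft threshold
        out = np.empty(n)
        out[0::2] = (s + d) / np.sqrt(2)          # inverse Haar step
        out[1::2] = (s - d) / np.sqrt(2)
        return out

Roughly speaking, the vaguelette-wavelet method applies this shrinkage to the wavelet coefficients of the data before inverting the operator, while the wavelet-vaguelette method inverts first and shrinks coefficients of the unknown f(t).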

259 citations


Book ChapterDOI
Er-Wei Bai
21 Jun 1998
TL;DR: In this article, an optimal two-stage identification algorithm for Hammerstein-Wiener systems is presented, which is shown to be convergent in the absence of noise and convergent with probability one in the presence of white noise.
Abstract: An optimal two-stage identification algorithm is presented for Hammerstein-Wiener systems, in which two static nonlinear elements surround a linear block. The proposed algorithm consists of two steps: the first is recursive least squares, and the second is the singular value decomposition of two matrices whose dimensions are fixed and do not increase as the number of data points increases. Moreover, the algorithm is shown to be convergent in the absence of noise and convergent with probability one in the presence of white noise.
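
The second stage hinges on a rank-one structure, which is easy to show in isolation (a sketch of the generic separation step only, not Bai's full algorithm):

    import numpy as np

    def split_rank1(Theta):
        # In the noise-free case the composite parameter matrix estimated by
        # recursive least squares is rank one, Theta = outer(a, b); its
        # leading singular triple recovers a and b up to a scale exchange.
        U, s, Vt = np.linalg.svd(Theta)
        a_hat = U[:, 0] * np.sqrt(s[0])
        b_hat = Vt[0] * np.sqrt(s[0])
        return a_hat, b_hat        # Theta ~= np.outer(a_hat, b_hat)

With noise, the leading triple is the best rank-one approximation in the least-squares sense, which is why the SVD step remains well behaved under white noise.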

241 citations


Journal ArticleDOI
TL;DR: In this paper, a unified matrix polynomial approach (UMPA) is used to unify the presentation of current algorithms such as the least-squares complex exponential (LSCE), polyreference time domain (PTD), Ibrahim time domain (ITD), eigensystem realization algorithm (ERA), rational fraction polynomial (RFP), polyreference frequency domain (PFD), and complex mode indication function (CMIF) methods.

Journal ArticleDOI
TL;DR: In this article, the problem of selecting a side constraint and determining the regularization parameter in model updating is addressed, and the weight to be attached to the constraint is determined by the regularisation parameter.

Journal ArticleDOI
TL;DR: The algorithm combines gradient-based optimization of nonlinear weights with singular value decomposition (SVD) computation of linear weights in one integrated routine, and is particularly effective for the LMN architecture, where the ratio of linear to nonlinear parameters is large.
Abstract: This paper presents a new hybrid optimization strategy for training feedforward neural networks. The algorithm combines gradient-based optimization of nonlinear weights with singular value decomposition (SVD) computation of linear weights in one integrated routine. It is described for the multilayer perceptron (MLP) and radial basis function (RBF) networks and then extended to the local model network (LMN), a new feedforward structure in which a global nonlinear model is constructed from a set of locally valid submodels. Simulation results are presented demonstrating the superiority of the new hybrid training scheme compared to second-order gradient methods. It is particularly effective for the LMN architecture where the linear to nonlinear parameter ratio is large.
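
A sketch of the linear half of such a hybrid step for an RBF network (Gaussian basis assumed for illustration; np.linalg.lstsq is itself SVD-based):

    import numpy as np

    def rbf_linear_weights(X, y, centers, width):
        # The nonlinear parameters (centers, width) are held fixed -- in the
        # hybrid scheme they come from the gradient step -- so the output
        # weights solve an ordinary linear least-squares problem.
        d2 = ((X[:, None, :] - centers[None, :, :])**2).sum(axis=-1)
        Phi = np.exp(-d2 / (2.0 * width**2))        # RBF design matrix
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None) # SVD-based solve
        return w

Solving the linear weights exactly at every step removes them from the gradient search, which is why the payoff grows with the linear-to-nonlinear parameter ratio.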

Journal ArticleDOI
TL;DR: In this article, a simple estimator called the singular value decomposition (SVD) estimator is proposed for bilinear regression models, which is faster and more easily computed.

01 Jan 1998
TL;DR: An overview of some important tensor algebraic concepts and their implications in signal processing is given, and the expansion of a higher-order tensor in a sum of non-orthogonal rank-1 components is investigated.

Journal ArticleDOI
TL;DR: In this paper, a new method for uniform resampling is presented that is both optimal and efficient; its results are compared with those of the conventional gridding algorithm via simulations.
Abstract: The problem of handling data that falls on a nonequally spaced grid occurs in numerous fields of science, ranging from radio-astronomy to medical imaging. In MRI, this condition arises when sampling under time-varying gradients in sequences such as echo-planar imaging (EPI), spiral scans, or radial scans. The technique currently being used to interpolate the nonuniform samples onto a Cartesian grid is called the gridding algorithm. In this paper, a new method for uniform resampling is presented that is both optimal and efficient. It is first shown that the resampling problem can be formulated as a problem of solving a set of linear equations Ax = b, where x and b are vectors of the uniform and nonuniform samples, respectively, and A is a matrix of the sinc interpolation coefficients. In a procedure called Uniform Re-Sampling (URS), this set of equations is given an optimal solution using the pseudoinverse matrix which is computed using singular value decomposition (SVD). In large problems, this solution is neither practical nor computationally efficient. Another method is presented, called the Block Uniform Re-Sampling (BURS) algorithm, which decomposes the problem into solving a small set of linear equations for each uniform grid point. These equations are a subset of the original equations Ax = b and are once again solved using SVD. The final result is both optimal and computationally efficient. The results of the new method are compared with those obtained using the conventional gridding algorithm via simulations.
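
Reduced to its core, the URS step is a pseudoinverse applied to a sinc-coefficient matrix; a minimal sketch (unit-spaced output grid assumed):

    import numpy as np

    def uniform_resample(b, t, n):
        # b: nonuniform samples measured at (noninteger) times t;
        # returns the n uniform samples x solving A x = b, where
        # A[i, k] = sinc(t[i] - k) holds the interpolation coefficients.
        A = np.sinc(t[:, None] - np.arange(n)[None, :])
        return np.linalg.pinv(A) @ b                # SVD-based pseudoinverse

BURS replaces this single large solve with a small SVD per output grid point, using only the nonuniform samples in a neighborhood, which is what makes the approach practical at MRI problem sizes.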

Journal ArticleDOI
TL;DR: A different set of so-called linear regularization techniques, originally derived to solve ill-posed regression problems, is compared to OLS for chaotic time series corrupted by additive measurement noise; several of the methods achieve improved prediction on synthetic noise-corrupted data from well-known chaotic systems.

Journal ArticleDOI
TL;DR: An approximate singular value decomposition (SVD) that can be used in a variety of applications is proposed, and it is demonstrated that the approximate SVD can be an effective preconditioner for iterative methods.

Journal ArticleDOI
TL;DR: A unified algorithm that can extract both principal and minor component eigenvectors is proposed; with a simple sign change, it also serves as a true minor component extractor.
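
The TL;DR is terse; the flavor of such sign-switched recursions can be seen in an Oja-type update (a classic rule used here purely for illustration; the paper's exact update may differ):

    import numpy as np

    def unified_step(w, x, eta):
        # eta > 0 pulls w toward the principal eigenvector of E[x x^T];
        # eta < 0 (the sign switch) turns the same recursion into a minor
        # component extractor. Renormalization keeps the iterate stable.
        y = w @ x
        w = w + eta * y * (x - y * w)
        return w / np.linalg.norm(w)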

Journal ArticleDOI
TL;DR: In this paper, instrumental variable extensions of the projection approximation subspace tracking (PAST) algorithm are presented and an extension to a "second order" IV algorithm is proposed, which in certain scenarios is demonstrated to have better tracking properties than the basic IV-PAST algorithms.
Abstract: Subspace estimation plays an important role in, for example, sensor array signal processing. Recursive methods for subspace tracking with application to nonstationary environments have also drawn considerable interest. In this paper, instrumental variable (IV) extensions of the projection approximation subspace tracking (PAST) algorithm are presented. The IV approach is motivated by the fact that PAST gives biased estimates when the noise is not spatially white. The proposed algorithms are based on a projection-like unconstrained criterion, with a resulting computational complexity of 3ml + O(mn), where m is the dimension of the measurement vector, l is the dimension of the IV vector, and n is the subspace dimension. In addition, an extension to a "second-order" IV algorithm is proposed, which in certain scenarios is demonstrated to have better tracking properties than the basic IV-PAST algorithms. The performance of the algorithms is demonstrated with a simulation study of a time-varying array processing scenario.
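
For reference, one step of the basic PAST recursion that the IV variants build on (following Yang's well-known pseudocode; roughly speaking, the IV versions feed an instrumental-variable vector into the gain computation in place of the data vector):

    import numpy as np

    def past_update(W, P, x, beta=0.97):
        # W: current subspace basis estimate (m x n);
        # P: inverse correlation matrix of the projected data (n x n).
        y = W.conj().T @ x                    # project onto current subspace
        h = P @ y
        g = h / (beta + y.conj().T @ h)       # RLS-style gain
        P = (P - np.outer(g, h.conj())) / beta
        e = x - W @ y                         # projection-approximation error
        W = W + np.outer(e, g.conj())
        return W, P

PAST's bias under spatially colored noise enters through the correlations implicit in this update, which is exactly what the instrumental variable is chosen to cancel.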

Journal ArticleDOI
TL;DR: In this paper, exponential basis functions preconvolved with the system waveform are used to convert measured transient decays to an ideal frequency-domain response that can be modeled more easily than arbitrary waveform data.
Abstract: Exponential basis functions preconvolved with the system waveform are used to convert measured transient decays to an ideal frequency-domain response that can be modeled more easily than arbitrary waveform data. Singular value decomposition (SVD) of the basis functions is used to assess which specific EM waveform provides superior resolution of a range of exponential time constants that can be related to earth conductivities. The pulse shape, pulse length, transient sampling scheme, noise levels, and primary field removal used in practical EM systems all affect the resolution of time constants. Step-response systems are more diagnostic of long time constants, and hence good conductors, than impulse-response systems. The limited bandwidth of airborne EM systems compared with ground systems is improved when the response is sampled during the transmitter on-time, giving better resolution of short time constants or fast decays.
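
The SVD-based resolution assessment amounts to examining the singular value spectrum of the exponential basis matrix; a bare sketch with made-up gate times:

    import numpy as np

    # Columns are decays exp(-t/tau) sampled at the receiver gate times
    # (both grids below are invented for illustration). Rapid singular
    # value decay means few time constants are resolvable above the noise.
    t = np.geomspace(1e-5, 1e-2, 30)            # gate centers, seconds
    taus = np.geomspace(1e-5, 1e-2, 20)         # candidate time constants
    G = np.exp(-t[:, None] / taus[None, :])
    s = np.linalg.svd(G, compute_uv=False)
    resolvable = int(np.sum(s > s[0] * 1e-3))   # crude count at a 1e-3 noise floor

Convolving the columns of G with a candidate system waveform before taking the SVD, as the paper does, lets the same diagnostic rank pulse shapes and sampling schemes.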

Journal ArticleDOI
TL;DR: An extension to these methods enables them to segment and recover motion and shape for multiple independently moving objects, and the generality of the factorization methods is illustrated with two applications outside structure from motion.
Abstract: In this article we present an overview of factorization methods for recovering structure and motion from image sequences. We distinguish these methods from general nonlinear algorithms primarily by their bilinear formulation in motion and shape parameters. The bilinear formulation makes possible powerful and efficient solution techniques including singular value decomposition. We show how factorization methods apply under various affine camera models and under the perspective camera model, and then we review factorization methods for various features including points, lines, directional point features and line segments. An extension to these methods enables them to segment and recover motion and shape for multiple independently moving objects. Finally we illustrate the generality of the factorization methods with two applications outside structure from motion.
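
The canonical example is the rank-3 factorization for affine cameras; a minimal NumPy sketch (Tomasi-Kanade style, leaving the metric upgrade aside):

    import numpy as np

    def factor_structure_motion(W):
        # W: 2F x P matrix of image coordinates (F frames, P points).
        W = W - W.mean(axis=1, keepdims=True)     # register to the centroid
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        r = np.sqrt(s[:3])
        M = U[:, :3] * r                          # motion, up to a 3x3 ambiguity
        S = r[:, None] * Vt[:3]                   # shape
        return M, S                               # W ~= M @ S, best rank-3 fit

The bilinear formulation is visible here: W is linear in M for fixed S and vice versa, and the SVD resolves both at once.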

Journal ArticleDOI
TL;DR: In this article, a linear regression method to minimize the deviations of a real cut gear-tooth surface is investigated, based on the Gleason hypoid gear generator, a mathematical model of the tooth surface is proposed.

Journal ArticleDOI
TL;DR: It is shown here that replacing the SVD by a low-rank revealing decomposition speeds up the computations without affecting the accuracy of the wanted parameter estimates.

01 Feb 1998
TL;DR: In this article, a theoretical basis for principal component analysis of nonlinear systems is presented, including the derivation of modal metrics for use in nonlinear model correlation, updating and uncertainty evaluation.
Abstract: Principal Components Analysis of nonlinear systems is based on the singular value decomposition of a collection of response time-histories. The principal components are analogous to the modal response time-histories of linear structural analysis, except that the singular values are related to energy rather than frequency. This paper presents a theoretical basis for Principal Components Analysis, including the derivation of modal metrics for use in nonlinear model correlation, updating and uncertainty evaluation. A numerical example based on current experience will be presented to illustrate application to nonlinear model validation and verification.
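
The basic computation is a single SVD of the response ensemble; a minimal sketch:

    import numpy as np

    def principal_components(Y):
        # Y: rows are response time-histories (one per sensor/DOF).
        U, s, Vt = np.linalg.svd(Y - Y.mean(axis=1, keepdims=True),
                                 full_matrices=False)
        energy = s**2 / np.sum(s**2)       # singular values rank energy,
        return U, s[:, None] * Vt, energy  # not frequency, per the abstract

Columns of U play the role of mode shapes and rows of s * Vt the role of modal time-histories; the energy fractions are natural raw material for the correlation metrics the paper derives.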

Journal ArticleDOI
TL;DR: New algorithms based on plane rotations for tracking the eigenvalue decomposition (EVD) of a time-varying data covariance matrix are presented, which directly produce eigenvectors in orthonormal form and are well suited for the application of subspace methods to nonstationary data.
Abstract: We present new algorithms based on plane rotations for tracking the eigenvalue decomposition (EVD) of a time-varying data covariance matrix. These algorithms directly produce eigenvectors in orthonormal form and are well suited for the application of subspace methods to nonstationary data. After recasting EVD tracking as a simplified rank-one EVD update problem, computationally efficient solutions are obtained in two steps. First, a new kind of parametric perturbation approach is used to express the eigenvector update as a unimodular orthogonal transform, represented in exponential matrix form in terms of a reduced set of small, unconstrained parameters. Second, two approximate decompositions of this exponential matrix into products of plane (or Givens) rotations are derived, one of which was previously unknown. These decompositions lead to new plane rotation-based EVD-updating schemes (PROTEUS), whose main feature is the use of plane rotations for updating the eigenvectors, thereby preserving orthonormality. Finally, the PROTEUS schemes are used to derive new EVD trackers whose convergence and numerical stability are investigated via simulations. One algorithm can track all the signal subspace EVD components in only O(LM) operations, where L and M denote the data vector and signal subspace dimensions, respectively, while achieving performance comparable to an exact EVD approach and maintaining perfect orthonormality of the eigenvectors. The new algorithms show no signs of error buildup.
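
PROTEUS itself is an updating scheme, but the orthonormality argument is easiest to see in the classical setting: a cyclic Jacobi EVD built from plane rotations (a simple relative, not the paper's algorithm):

    import numpy as np

    def jacobi_evd(A, sweeps=10):
        # A: symmetric matrix. The eigenvector estimate V is a product of
        # Givens rotations only, so it stays orthonormal to machine
        # precision -- no error buildup, the property PROTEUS inherits.
        A = A.astype(float).copy()
        n = A.shape[0]
        V = np.eye(n)
        for _ in range(sweeps):
            for p in range(n - 1):
                for q in range(p + 1, n):
                    if abs(A[p, q]) < 1e-14:
                        continue
                    theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                    c, s = np.cos(theta), np.sin(theta)
                    J = np.eye(n)
                    J[p, p] = J[q, q] = c
                    J[p, q] = s; J[q, p] = -s
                    A = J.T @ A @ J               # zeroes the (p, q) entry
                    V = V @ J
        return np.diag(A), V

The EVD trackers above achieve the same effect recursively, spending only O(LM) operations per update instead of full sweeps.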

Journal ArticleDOI
TL;DR: The problem of image reconstruction under an inhomogeneous field is formulated as an inverse problem of a linear Fredholm equation of the first kind and the choice of the imaging sequence for well-conditioned matrix operators is discussed, and it is shown that nonlinear k-space trajectories provide better results.
Abstract: In magnetic resonance imaging, spatial localization is usually achieved using Fourier encoding which is realized by applying a magnetic field gradient along the dimension of interest to create a linear correspondence between the resonance frequency and spatial location following the Larmor equation. In the presence of B/sub 0/ inhomogeneities along this dimension, the linear mapping does not hold and spatial distortions arise in the acquired images. In this paper, the problem of image reconstruction under an inhomogeneous field is formulated as an inverse problem of a linear Fredholm equation of the first kind. The operators in these problems are estimated using field mapping and the k-space trajectory of the imaging sequence. Since such inverse problems are known to be ill-posed in general, robust solvers, singular value decomposition and conjugate gradient method, are employed to obtain corrected images that are optimal in the Frobenius norm sense. Based on this formulation, the choice of the imaging sequence for well-conditioned matrix operators is discussed, and it is shown that nonlinear k-space trajectories provide better results. The reconstruction technique is applied to sequences where the distortion is more severe along one of the image dimensions and the two-dimensional reconstruction problem becomes equivalent to a set of independent one-dimensional problems. Experimental results demonstrate the performance and stability of the algebraic reconstruction methods.
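
The truncated SVD solve referred to above is compact enough to state (a generic sketch; the truncation level is the regularization knob):

    import numpy as np

    def tsvd_solve(A, b, rel_tol=1e-3):
        # Regularized solution of the ill-posed system A x = b: singular
        # values below rel_tol * s_max only amplify noise and are dropped.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        keep = s > rel_tol * s[0]
        return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

The conjugate gradient route regularizes differently, by stopping the iteration early, but targets the same reconstruction that is optimal in the Frobenius norm sense.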

Journal ArticleDOI
TL;DR: A method for obtaining a practical inverse for the distribution of neural activity in the human cerebral cortex is developed and bimodal image reconstructions are generally superior to unimodal ones in finding the center of activity.
Abstract: A method for obtaining a practical inverse for the distribution of neural activity in the human cerebral cortex is developed for electric, magnetic, and bimodal data to exploit their complementary aspects. Intracellular current is represented by current dipoles uniformly distributed on two parallel sulci joined by a gyrus. Linear systems of equations relate electric, magnetic, and bimodal data to unknown dipole moments. The corresponding lead-field matrices are characterized by singular value decomposition (SVD). The optimal reference electrode location for electric data is chosen on the basis of the decay behavior of the singular values. The singular values of these matrices show better decay behavior with an increasing number of measurements; how useful that property is depends on the noise in the measurements. The truncated SVD pseudoinverse is used to control noise artifacts in the reconstructed images. Simulations for single-dipole sources at different depths reveal the relative contributions of electric and magnetic measures. For realistic noise levels, the performance of both unimodal and bimodal systems does not improve as the number of measurements increases beyond ~100. Bimodal image reconstructions are generally superior to unimodal ones in finding the center of activity.

Journal ArticleDOI
TL;DR: In this article, a generalization of the classical structural flexibility matrix is presented, where the flexibility of an individual element or substructure is directly obtained as a particular generalized inverse of the free-free stiffness matrix.

Journal ArticleDOI
TL;DR: In this paper, a tensorial approach to describing a k-way array is employed, and the singular value decomposition of this type of multiway array is established; the computation, based on a generalization of the transition formulae, has a Gauss-Seidel form.