
Showing papers on "Singular value decomposition published in 2002"


Book ChapterDOI
28 May 2002
TL;DR: This work considers the multilinear analysis of ensembles of facial images that combine several modes, including different facial geometries (people), expressions, head poses, and lighting conditions, and concludes that the resulting "TensorFaces" representation has several advantages over conventional eigenfaces.
Abstract: Natural images are the composite consequence of multiple factors related to scene structure, illumination, and imaging. Multilinear algebra, the algebra of higher-order tensors, offers a potent mathematical framework for analyzing the multifactor structure of image ensembles and for addressing the difficult problem of disentangling the constituent factors or modes. Our multilinear modeling technique employs a tensor extension of the conventional matrix singular value decomposition (SVD), known as the N-mode SVD. As a concrete example, we consider the multilinear analysis of ensembles of facial images that combine several modes, including different facial geometries (people), expressions, head poses, and lighting conditions. Our resulting "TensorFaces" representation has several advantages over conventional eigenfaces. More generally, multilinear analysis shows promise as a unifying framework for a variety of computer vision problems.
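The N-mode SVD this abstract refers to can be sketched in a few lines of NumPy: unfold the data tensor along each mode, take the matrix SVD of each unfolding, and contract the tensor with the resulting factors to obtain a core tensor. This is a minimal sketch under the standard Tucker/N-mode conventions; the function names and the toy tensor are illustrative, not from the paper.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front, then flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def n_mode_svd(tensor):
    """Per-mode orthogonal factors (left singular vectors of each mode-n
    unfolding) plus the core tensor obtained by contracting with them."""
    factors = [np.linalg.svd(unfold(tensor, n), full_matrices=False)[0]
               for n in range(tensor.ndim)]
    core = tensor
    for n, U in enumerate(factors):
        # mode-n product of the current core with U.T
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, factors

# toy "image ensemble": people x expressions x pixels
D = np.random.default_rng(0).standard_normal((4, 3, 16))
core, factors = n_mode_svd(D)
```

Multiplying the core back by each factor reconstructs the original tensor, which is the basic consistency check for the decomposition.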

955 citations


Journal ArticleDOI
TL;DR: This work proposes a scheme to reverse-engineer gene networks on a genome-wide scale using a relatively small amount of gene expression data from microarray experiments and uses singular value decomposition to construct a family of candidate solutions and robust regression to identify the solution with the smallest number of connections as the most likely solution.
Abstract: We propose a scheme to reverse-engineer gene networks on a genome-wide scale using a relatively small amount of gene expression data from microarray experiments. Our method is based on the empirical observation that such networks are typically large and sparse. It uses singular value decomposition to construct a family of candidate solutions and then uses robust regression to identify the solution with the smallest number of connections as the most likely solution. Our algorithm has O(log N) sampling complexity and O(N^4) computational complexity. We test and validate our approach in a series of in numero experiments on model gene networks.
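The SVD step described above can be illustrated directly: for an underdetermined system (fewer microarray measurements than genes), the SVD yields a minimum-norm particular solution and a null-space basis that together parameterise the whole family of candidate solutions, among which the sparsest is then sought. This sketch covers the SVD step only; the robust-regression selection is omitted, and all names are illustrative.

```python
import numpy as np

def candidate_family(A, b, tol=1e-10):
    """For an underdetermined system A x = b, return the minimum-norm
    particular solution x0 and a null-space basis N of A; every
    candidate solution has the form x0 + N @ c."""
    U, s, Vt = np.linalg.svd(A, full_matrices=True)
    r = int(np.sum(s > tol))
    x0 = Vt[:r].T @ ((U[:, :r].T @ b) / s[:r])
    N = Vt[r:].T                      # columns span the null space of A
    return x0, N

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 6))      # few measurements, many genes
b = rng.standard_normal(3)
x0, N = candidate_family(A, b)
```

Any choice of coefficients c leaves A @ (x0 + N @ c) equal to b, which is what lets a second stage search the family for the sparsest member.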

724 citations


Journal ArticleDOI
TL;DR: The equivalence of the matrices for processing, the objective functions, the optimal basis vectors, the mean-square errors, and the asymptotic connections of the three POD methods are demonstrated and proved when the methods are used to handle the POD of discrete random vectors.

682 citations


Journal ArticleDOI
TL;DR: This article introduces a new dimensionality reduction technique, Adaptive Piecewise Constant Approximation (APCA), shows how APCA can be indexed using a multidimensional index structure, and proposes two distance measures in the indexed space that exploit the high fidelity of APCA for fast searching.
Abstract: Similarity search in large time series databases has attracted much research interest recently. It is a difficult problem because of the typically high dimensionality of the data. The most promising solutions involve performing dimensionality reduction on the data, then indexing the reduced data with a multidimensional index structure. Many dimensionality reduction techniques have been proposed, including Singular Value Decomposition (SVD), the Discrete Fourier transform (DFT), and the Discrete Wavelet Transform (DWT). In this article, we introduce a new dimensionality reduction technique, which we call Adaptive Piecewise Constant Approximation (APCA). While previous techniques (e.g., SVD, DFT and DWT) choose a common representation for all the items in the database that minimizes the global reconstruction error, APCA approximates each time series by a set of constant value segments of varying lengths such that their individual reconstruction errors are minimal. We show how APCA can be indexed using a multidimensional index structure. We propose two distance measures in the indexed space that exploit the high fidelity of APCA for fast searching: a lower bounding Euclidean distance approximation, and a non-lower-bounding, but very tight, Euclidean distance approximation, and show how they can support fast exact searching and even faster approximate searching on the same index structure. We theoretically and empirically compare APCA to all the other techniques and demonstrate its superiority.
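The core idea of APCA — constant-value segments of varying lengths, placed where they reduce reconstruction error most — can be sketched with a simple bottom-up merge heuristic. This is not the authors' algorithm (the paper uses a more efficient wavelet-based construction); it is only a minimal illustration of an adaptive piecewise-constant fit, with illustrative names throughout.

```python
import numpy as np

def apca(series, num_segments):
    """Adaptive piecewise constant approximation via bottom-up merging:
    start with one segment per point and repeatedly merge the adjacent
    pair whose merge increases squared error the least, until
    num_segments remain.  Returns (mean value, exclusive end index)
    for each segment."""
    x = np.asarray(series, dtype=float)
    segs = [(i, i + 1) for i in range(len(x))]

    def cost(a, b):
        piece = x[a:b]
        return float(np.sum((piece - piece.mean()) ** 2))

    while len(segs) > num_segments:
        merge_costs = [cost(segs[i][0], segs[i + 1][1])
                       - cost(*segs[i]) - cost(*segs[i + 1])
                       for i in range(len(segs) - 1)]
        i = int(np.argmin(merge_costs))
        segs[i:i + 2] = [(segs[i][0], segs[i + 1][1])]
    return [(x[a:b].mean(), b) for a, b in segs]

series = [0.0, 0.1, 0.0, 5.0, 5.2, 4.9, 5.1, 1.0]
segments = apca(series, 3)
```

On this toy series the heuristic isolates the three flat regions, with segment lengths adapting to the data rather than being fixed in advance.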

649 citations


Journal ArticleDOI
TL;DR: This work presents an efficient algorithm to solve a class of two- and 2.5-dimensional Fredholm integrals of the first kind with a tensor product structure and nonnegativity constraint on the estimated parameters of interest in an optimization framework using a zeroth-order regularization functional.
Abstract: We present an efficient algorithm to solve a class of two- and 2.5-dimensional (2-D and 2.5-D) Fredholm integrals of the first kind with a tensor product structure and nonnegativity constraint on the estimated parameters of interest in an optimization framework. A zeroth-order regularization functional is used to incorporate a priori information about the smoothness of the parameters into the problem formulation. We adapt the Butler-Reeds-Dawson (1981) algorithm to solve this optimization problem in three steps. In the first step, the data are compressed using singular value decomposition (SVD) of the kernels. The tensor-product structure of the kernel is exploited so that the compressed data are typically a thousandfold smaller than the original data. This size reduction is crucial for fast optimization. In the second step, the constrained optimization problem is transformed to an unconstrained optimization problem in the compressed data space. In the third step, a suboptimal value of the smoothing parameter is chosen by the BRD method. Steps 2 and 3 are iterated until convergence of the algorithm. We demonstrate the performance of the algorithm on simulated data.
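The compression step (step 1) is easy to demonstrate in isolation: with a separable kernel, projecting the data onto the leading singular subspaces of each kernel shrinks the measurement matrix dramatically while keeping the fit problem equivalent. The kernel sizes and ranks below are made up for illustration; the function names are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K1 = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 10))   # kernel, numerical rank 5
K2 = rng.standard_normal((30, 5)) @ rng.standard_normal((5, 8))
F = rng.random((10, 8))                                            # parameters of interest
data = K1 @ F @ K2.T                                               # 40 x 30 measurements

def compress(data, K1, K2, r1, r2):
    """Project the data onto the leading singular subspaces of the two
    kernels; returns the compressed data and the compressed kernels
    that relate it to the same parameters F."""
    U1, s1, V1t = np.linalg.svd(K1, full_matrices=False)
    U2, s2, V2t = np.linalg.svd(K2, full_matrices=False)
    data_c = U1[:, :r1].T @ data @ U2[:, :r2]
    K1_c = np.diag(s1[:r1]) @ V1t[:r1]
    K2_c = np.diag(s2[:r2]) @ V2t[:r2]
    return data_c, K1_c, K2_c

data_c, K1_c, K2_c = compress(data, K1, K2, 5, 5)   # 40x30 shrinks to 5x5
```

The compressed system K1_c @ F @ K2_c.T = data_c carries the same information about F as the original, which is why the optimization can run entirely in the small space.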

603 citations


Book ChapterDOI
28 May 2002
TL;DR: In computer vision, the incremental SVD is used to develop an efficient and unusually robust subspace-estimating flow-based tracker, and to handle occlusions/missing points in structure-from-motion factorizations.
Abstract: We introduce an incremental singular value decomposition (SVD) of incomplete data. The SVD is developed as data arrives, and can handle arbitrary missing/untrusted values, correlated uncertainty across rows or columns of the measurement matrix, and user priors. Since incomplete data does not uniquely specify an SVD, the procedure selects one having minimal rank. For a dense p × q matrix of low rank r, the incremental method has time complexity O(pqr) and space complexity O((p + q)r)--better than highly optimized batch algorithms such as MATLAB's svd(). In cases of missing data, it produces factorings of lower rank and residual than batch SVD algorithms applied to standard missing-data imputations. We show applications in computer vision and audio feature extraction. In computer vision, we use the incremental SVD to develop an efficient and unusually robust subspace-estimating flow-based tracker, and to handle occlusions/missing points in structure-from-motion factorizations.
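One building block of such an incremental SVD — appending a single new column to an existing thin SVD — can be sketched as follows, in the spirit of Brand's update. Truncation, missing/untrusted values, correlated uncertainty and priors, which are the paper's actual contributions, are all omitted here.

```python
import numpy as np

def svd_append_column(U, s, Vt, c):
    """Append column c to a matrix whose thin SVD is U @ diag(s) @ Vt,
    returning the thin SVD of the enlarged matrix."""
    p = U.T @ c                          # component of c inside span(U)
    resid = c - U @ p                    # component orthogonal to span(U)
    rho = np.linalg.norm(resid)
    j = resid / rho if rho > 1e-12 else np.zeros_like(c)
    r = len(s)
    K = np.zeros((r + 1, r + 1))         # small "core" matrix to re-diagonalise
    K[:r, :r] = np.diag(s)
    K[:r, r] = p
    K[r, r] = rho
    Uk, sk, Vkt = np.linalg.svd(K)
    U_new = np.hstack([U, j[:, None]]) @ Uk
    Vt_new = Vkt @ np.block([[Vt, np.zeros((r, 1))],
                             [np.zeros((1, Vt.shape[1])), np.ones((1, 1))]])
    return U_new, sk, Vt_new

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))
c = rng.standard_normal(5)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U2, s2, Vt2 = svd_append_column(U, s, Vt, c)
```

Because only the small (r+1)x(r+1) core matrix is re-decomposed, each update costs far less than recomputing the SVD of the full matrix, which is where the O(pqr) total complexity comes from.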

564 citations


Journal ArticleDOI
TL;DR: It is shown here that degenerate channel phenomena called "keyholes" may arise under realistic assumptions in which the entries of the channel matrix H are uncorrelated and yet the channel has only a single degree of freedom.
Abstract: Multielement system capacities are usually thought of as limited only by correlations between elements. It is shown here that degenerate channel phenomena called "keyholes" may arise under realistic assumptions, with zero correlation between the entries of the channel matrix H and yet only a single degree of freedom. Canonical physical examples of keyholes are presented. For outdoor environments, it is shown that roof edge diffraction is perceived as a "keyhole" by a vertical base array; this may be avoided by employing a horizontal base array instead.

524 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a general method for estimating the location of small, well-separated scatterers in a randomly inhomogeneous environment using an active sensor array.
Abstract: We present a general method for estimating the location of small, well-separated scatterers in a randomly inhomogeneous environment using an active sensor array. The main features of this method are (i) an arrival time analysis (ATA) of the echo received from the scatterers, (ii) a singular value decomposition of the array response matrix in the frequency domain, and (iii) the construction of an objective function in the time domain that is statistically stable and peaks on the scatterers. By statistically stable we mean here that the objective function is self-averaging over individual realizations of the medium. This is a new approach to array imaging that is motivated by time reversal in random media, analysed in detail previously. It combines features from seismic imaging, like ATA, with frequency-domain signal subspace methodology like multiple signal classification. We illustrate the theory with numerical simulations for ultrasound.

315 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide insights into the physical interpretation of the proper orthogonal modes using the singular value decomposition (SVD) in the field of structural dynamics.

284 citations


Journal ArticleDOI
TL;DR: The basic ideas of ℋ- and ℋ2-matrices are introduced and an algorithm that adaptively computes approximations of general matrices in the latter format is presented.
Abstract: A class of matrices (H2-matrices) has recently been introduced for storing discretisations of elliptic problems and integral operators from the BEM. These matrices have the following properties: (i) They are sparse in the sense that only few data are needed for their representation. (ii) The matrix-vector multiplication is of linear complexity. (iii) In general, sums and products of these matrices are no longer in the same set, but after truncation to the H2-matrix format these operations are again of quasi-linear complexity. We introduce the basic ideas of H- and H2-matrices and present an algorithm that adaptively computes approximations of general matrices in the latter format.

247 citations


Proceedings ArticleDOI
04 Aug 2002
TL;DR: Simulation results are provided which demonstrate the robustness of the proposed technique to a variety of common image degradations and the results of the approach are compared to other transform domain watermarking methods.
Abstract: In this paper, we present a technique for watermarking of digital images based on the singular value decomposition. Simulation results are provided which demonstrate the robustness of the proposed technique to a variety of common image degradations. The results of our approach are also compared to other transform domain watermarking methods.
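A common SVD-domain embedding scheme — perturbing the image's singular values by a scaled watermark, then re-diagonalising — gives a flavour of the approach. This is one generic variant, not necessarily the exact scheme of the paper, and the parameter names are illustrative.

```python
import numpy as np

def embed(image, watermark, alpha=0.1):
    """Embed a watermark in the singular values of an image.  Returns
    the watermarked image plus the side information (Uw, Vwt and the
    original singular values) needed for extraction."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    D = np.diag(s) + alpha * watermark       # perturb the singular-value matrix
    Uw, sw, Vwt = np.linalg.svd(D)
    marked = U @ np.diag(sw) @ Vt            # image rebuilt with modified spectrum
    return marked, (Uw, Vwt, s)

def extract(marked, side, alpha=0.1):
    """Recover the watermark from the singular values of the marked image."""
    Uw, Vwt, s = side
    sw = np.linalg.svd(marked, compute_uv=False)
    D = Uw @ np.diag(sw) @ Vwt
    return (D - np.diag(s)) / alpha

rng = np.random.default_rng(2)
img = rng.random((8, 8))
wm = rng.random((8, 8))
marked, side = embed(img, wm)
recovered = extract(marked, side)
```

The robustness claims in the abstract rest on the empirical stability of singular values under common image degradations; this sketch only demonstrates the embed/extract round trip on clean data.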

Journal ArticleDOI
TL;DR: In this paper, the authors compare several algorithms for successfully extending a nonperiodic function f(x) to a periodic function even when the analytic extension is singular, and the best third-kind extension requires singular value decomposition with iterative refinement but achieves accuracy close to machine precision.

Journal ArticleDOI
TL;DR: In this paper, an alternative solution procedure based on the singular value decomposition of the coefficient matrix is suggested and it is shown that the numerical results are extremely accurate (often within machine precision) and relatively independent of the location of the source points.
Abstract: The method of fundamental solutions (also known as the singularity or the source method) is a useful technique for solving linear partial differential equations such as the Laplace or the Helmholtz equation. The procedure involves only boundary collocation or boundary fitting and hence is a very fast procedure for the solution of these classes of problems. The resulting coefficient matrix is, however, ill-conditioned, and hence the solution accuracy is sensitive to the location of the source points. In this paper, an alternative solution procedure based on the singular value decomposition of the coefficient matrix is suggested and it is shown that the numerical results are extremely accurate (often within machine precision) and relatively independent of the location of the source points. Copyright © 2002 John Wiley & Sons, Ltd.
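The SVD-based solve behind this kind of stabilisation is the classical truncated SVD: discard singular values below a relative threshold before inverting. A minimal sketch, with an artificially ill-conditioned test matrix standing in for a method-of-fundamental-solutions collocation matrix (the names and tolerance are illustrative):

```python
import numpy as np

def tsvd_solve(A, b, tol=1e-10):
    """Truncated-SVD solve of an ill-conditioned system A x = b:
    singular values below tol * s_max are discarded, which desensitises
    the answer to the near-singular directions of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tol * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

# Ill-conditioned test matrix with a known answer in its well-conditioned part.
rng = np.random.default_rng(4)
Q1, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Q2, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = Q1 @ np.diag([1.0, 0.5, 0.1, 1e-18]) @ Q2.T
x_true = Q2[:, :3] @ rng.standard_normal(3)   # no component along the tiny direction
b = A @ x_true
x = tsvd_solve(A, b)
```

A direct solve of this system would amplify rounding error by roughly the condition number (~1e18); the truncated solve simply ignores the unusable direction.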

Journal ArticleDOI
TL;DR: A new approach to covariance-weighted factorization is described that can factor noisy feature correspondences with a high degree of directional uncertainty into structure and motion, and that provides a unified approach for treating corner-like points together with points along linear structures in the image.
Abstract: Factorization using Singular Value Decomposition (SVD) is often used for recovering 3D shape and motion from feature correspondences across multiple views. SVD is powerful at finding the global solution to the associated least-square-error minimization problem. However, this is the correct error to minimize only when the x and y positional errors in the features are uncorrelated and identically distributed. But this is rarely the case in real data. Uncertainty in feature position depends on the underlying spatial intensity structure in the image, which has strong directionality to it. Hence, the proper measure to minimize is covariance-weighted squared-error (or the Mahalanobis distance). In this paper, we describe a new approach to covariance-weighted factorization, which can factor noisy feature correspondences with a high degree of directional uncertainty into structure and motion. Our approach is based on transforming the raw-data into a covariance-weighted data space, where the components of noise in the different directions are uncorrelated and identically distributed. Applying SVD to the transformed data now minimizes a meaningful objective function in this new data space. This is followed by a linear but suboptimal second step to recover the shape and motion in the original data space. We empirically show that our algorithm gives very good results for varying degrees of directional uncertainty. In particular, we show that unlike other SVD-based factorization algorithms, our method does not degrade with increasing directionality of uncertainty, even in the extreme when only normal-flow data is available. It thus provides a unified approach for treating corner-like points together with points along linear structures in the image.
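The whitening idea at the heart of this approach can be sketched generically: transform the data by the inverse Cholesky factor of the noise covariance, take the SVD where the noise is i.i.d., and map the low-rank fit back. This is a bare illustration of covariance weighting for a rank-1 fit, not the paper's full structure-and-motion algorithm, and the covariance model (one shared row covariance C) is an assumption made here for simplicity.

```python
import numpy as np

def cov_weighted_rank1(M, C):
    """Rank-1 fit minimising the Mahalanobis (covariance-weighted) error
    when every row of M has noise covariance C: whiten, SVD, map back."""
    L = np.linalg.cholesky(C)
    Mw = np.linalg.solve(L, M.T).T          # rows whitened: M @ inv(L).T
    U, s, Vt = np.linalg.svd(Mw, full_matrices=False)
    return (s[0] * np.outer(U[:, 0], Vt[0])) @ L.T   # back to original space

rng = np.random.default_rng(5)
truth = np.outer(rng.standard_normal(20), rng.standard_normal(4))
C = np.diag([1.0, 1.0, 25.0, 25.0])        # last two coordinates are much noisier
noise = rng.standard_normal((20, 4)) @ np.linalg.cholesky(C).T * 0.1
M = truth + noise
X = cov_weighted_rank1(M, C)
```

By construction the fit is optimal in the Mahalanobis metric, whereas a plain SVD rank-1 fit is optimal only in the unweighted Frobenius norm.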

Journal ArticleDOI
TL;DR: In this paper, the authors describe a set of macros that may be used to draw a biplot display based on results from principal components analysis, correspondence analysis, canonical discriminant analysis, metric multidimensional scaling, redundancy analysis or canonical correspondence analysis.
Abstract: The biplot display is a graph of row and column markers obtained from data that forms a two-way table. The markers are calculated from the singular value decomposition of the data matrix. The biplot display may be used with many multivariate methods to display relationships between variables and objects. It is commonly used in ecological applications to plot relationships between species and sites. This paper describes a set of Excel© macros that may be used to draw a biplot display based on results from principal components analysis, correspondence analysis, canonical discriminant analysis, metric multidimensional scaling, redundancy analysis, canonical correlation analysis or canonical correspondence analysis. The macros allow for a variety of transformations of the data prior to the singular value decomposition and scaling of the markers following the decomposition.
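The marker computation itself is a few lines of linear algebra: centre the table, take the SVD, and split the singular values between row and column markers. This is a generic PCA-style biplot sketch (the macros support many more variants and scalings); the alpha parameterisation is a common convention, not a detail taken from the paper.

```python
import numpy as np

def biplot_markers(X, alpha=1.0):
    """Row and column markers for a biplot of a two-way table:
    column-centre, take the SVD, then form row markers G = U * s**alpha
    and column markers H = V * s**(1 - alpha).  The first two columns
    of each are the 2-D plotting coordinates."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return (U * s ** alpha)[:, :2], (Vt.T * s ** (1.0 - alpha))[:, :2]

rng = np.random.default_rng(6)
X = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))   # sites x species, rank 2
G, H = biplot_markers(X)
```

Because the inner products G @ H.T reproduce the (truncated) centred table, distances and angles in the plot have a direct data interpretation.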

Journal ArticleDOI
TL;DR: This work compares the performance in terms of accuracy and efficiency of four algorithms: the classical SVD algorithm based on the QR decomposition, the Lanczos algorithm, the Lanczos algorithm with partial reorthogonalization, and the implicitly restarted Lanczos algorithm.

Journal ArticleDOI
TL;DR: An effective algebraic matrix equation approach is developed to solve the observer design problem for a class of linear delay systems of the neutral-type by using the singular value decomposition technique and the generalized inverse theory.
Abstract: This paper deals with the observer design problem for a class of linear delay systems of the neutral-type. The problem addressed is that of designing a full-order observer that guarantees the exponential stability of the error dynamic system. An effective algebraic matrix equation approach is developed to solve this problem. In particular, both the observer analysis and design problems are investigated. By using the singular value decomposition technique and the generalized inverse theory, sufficient conditions for a neutral-type delay system to be exponentially stable are first established. Then, an explicit expression of the desired observers is derived in terms of some free parameters. Furthermore, an illustrative example is used to demonstrate the validity of the proposed design procedure.

Journal ArticleDOI
TL;DR: The algorithm generates a Takagi–Sugeno–Kang fuzzy model, characterised by transparency, high accuracy and a small number of rules, which is compared with similar existing models available in the literature.

Journal Article
TL;DR: In this article, a plate structure with well-defined modes, resonance frequencies and damping values is used for operational modal analysis (often referred to as output-only or ambient modal analysis).
Abstract: Operational modal analysis (often called output-only or ambient modal analysis) is described in this article. Modal testing is performed on a plate structure with well-defined modes, resonance frequencies and damping values. Frequency Domain Decomposition (FDD) and Enhanced Frequency Domain Decomposition (EFDD) concepts are presented and applied to a plate structure. This article details the signal processing mathematical background and presents alternative curve-fitting processes.

Journal ArticleDOI
TL;DR: It is justified how popular numerical methods, the so-called continuous QR and SVD approaches, can be used to approximate these spectra and how to verify the property of integral separation, and hence how to a posteriori infer stability of the attained spectral information.
Abstract: Different definitions of spectra have been proposed over the years to characterize the asymptotic behavior of nonautonomous linear systems. Here, we consider the spectrum based on exponential dichotomy of Sacker and Sell [J. Differential Equations, 7 (1978), pp. 320--358] and the spectrum defined in terms of upper and lower Lyapunov exponents. A main goal of ours is to understand to what extent these spectra are computable. By using an orthogonal change of variables transforming the system to upper triangular form, and the assumption of integral separation for the diagonal of the new triangular system, we justify how popular numerical methods, the so-called continuous QR and SVD approaches, can be used to approximate these spectra. We further discuss how to verify the property of integral separation, and hence how to a posteriori infer stability of the attained spectral information. Finally, we discuss the algorithms we have used to approximate the Lyapunov and Sacker--Sell spectra and present some numerical results.

Journal ArticleDOI
TL;DR: In this paper, singular value decomposition (SVD) is used to find the vector of coefficients of a quadratic sub-expression embodied in group method of data handling (GMDH)-type neural networks.

Journal ArticleDOI
TL;DR: In this paper, a meshless method for the acoustic eigenfrequencies using radial basis functions (RBF) is proposed, in which the coefficients of the influence matrices are easily determined by the two-point functions.

Journal ArticleDOI
TL;DR: By simultaneously reconstructing points and views, the method can exploit the numerically stabilizing effect of having widely spread cameras with large mutual baselines; this is demonstrated by reconstructing the outside and inside of a building on the basis of 35 views in one single Singular Value Decomposition.
Abstract: This paper presents a linear algorithm for simultaneous computation of 3D points and camera positions from multiple perspective views based on having a reference plane visible in all views. The reconstruction and camera recovery is achieved in a single step by finding the null-space of a matrix built from image data using Singular Value Decomposition. Contrary to factorization algorithms this approach does not need to have all points visible in all views. This paper investigates two reference plane configurations: Finite reference planes defined by four coplanar points and infinite reference planes defined by vanishing points. A further contribution of this paper is the study of critical configurations for configurations with four coplanar points. By simultaneously reconstructing points and views we can exploit the numerical stabilizing effect of having widely spread cameras with large mutual baselines. This is demonstrated by reconstructing the outside and inside (courtyard) of a building on the basis of 35 views in one single Singular Value Decomposition.
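The null-space step mentioned in the abstract is the standard SVD recipe for homogeneous linear systems: the least-squares solution of M x = 0 under ||x|| = 1 is the right singular vector of the smallest singular value. A minimal sketch, with a synthetic matrix in place of the paper's image-data matrix:

```python
import numpy as np

def null_space_solution(M):
    """Unit-norm least-squares solution of the homogeneous system
    M x = 0: the right singular vector belonging to the smallest
    singular value of M."""
    return np.linalg.svd(M)[2][-1]

rng = np.random.default_rng(7)
M = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 4))   # rank 3, 1-D null space
x = null_space_solution(M)
```

In the paper, the entries of x stack all 3D points and camera positions at once, which is why a single SVD suffices for the whole reconstruction.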

Journal ArticleDOI
TL;DR: This paper considers the case where the order of the MIMO system to be identified is known a priori, and proves the convergence property of the recursive equation of gradient-type subspace tracking.

01 Apr 2002
TL;DR: Singular Value Decomposition is used to reduce an appropriately constructed matrix of raw pixel values to a set of orthogonal functions that compactly describe the key spatial and temporal variations relating to underlying structural anomalies.
Abstract: This report describes a robust computational framework for the qualitative enhancement and quantitative interpretation of active thermographic inspection data. Singular Value Decomposition is used to reduce an appropriately constructed matrix of raw pixel values to a set of orthogonal functions that compactly describe the key spatial and temporal variations relating to underlying structural anomalies. Tests against synthetic and experimental data are described that underscore the practical efficacy of the methodology, and demonstrate significant advantages compared to more traditional methods of processing.

Journal ArticleDOI
TL;DR: In this article, a singular value decomposition (SVD) is applied to the matrix form of Duhamel's principle; rows and columns associated with small singular values, which are shown to correspond to frequencies where the signal-to-noise ratio is small, are removed from the decomposed matrices.

Journal ArticleDOI
TL;DR: This paper provides algorithms for adding and subtracting eigenspaces, thus allowing for incremental updating and downdating of data models, and keeps accurate track of the mean of the data, which allows the methods to be used in classification applications.

Journal ArticleDOI
TL;DR: The theory reveals the necessary and sufficient condition for preserving the smallest singular value of a matrix while appending (or deleting) a column, which represents a basic matrix theory result for updating the singular value decomposition.
Abstract: The standard approaches to solving overdetermined linear systems $Bx \approx c$ construct minimal corrections to the vector c and/or the matrix B such that the corrected system is compatible. In ordinary least squares (LS) the correction is restricted to c, while in data least squares (DLS) it is restricted to B. In scaled total least squares (STLS) [22], corrections to both c and B are allowed, and their relative sizes depend on a real positive parameter $\gamma$ . STLS unifies several formulations since it becomes total least squares (TLS) when $\gamma=1$ , and in the limit corresponds to LS when $\gamma\rightarrow 0$ , and DLS when $\gamma\rightarrow \infty$ . This paper analyzes a particularly useful formulation of the STLS problem. The analysis is based on a new assumption that guarantees existence and uniqueness of meaningful STLS solutions for all parameters $\gamma >0$ . It makes the whole STLS theory consistent. Our theory reveals the necessary and sufficient condition for preserving the smallest singular value of a matrix while appending (or deleting) a column. This condition represents a basic matrix theory result for updating the singular value decomposition, as well as the rank-one modification of the Hermitian eigenproblem. The paper allows complex data, and the equivalences in the limit of STLS with DLS and LS are proven for such data. It is shown how any linear system $Bx \approx c$ can be reduced to a minimally dimensioned core system satisfying our assumption. Consequently, our theory and algorithms can be applied to fully general systems. The basics of practical algorithms for both the STLS and DLS problems are indicated for either dense or large sparse systems. Our assumption and its consequences are compared with earlier approaches.
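The classical TLS solution — the STLS special case gamma = 1 — is a one-SVD computation: corrections to both B and c come from the right singular vector of the augmented matrix [B c] belonging to its smallest singular value. A minimal sketch on a consistent system (where TLS recovers the exact solution); the paper's core reduction and general gamma are not shown.

```python
import numpy as np

def tls(B, c):
    """Total least squares solution of B x ≈ c via the SVD of the
    augmented matrix [B c]."""
    n = B.shape[1]
    v = np.linalg.svd(np.hstack([B, c[:, None]]))[2][-1]
    return -v[:n] / v[n]     # normalise the last component to -1

rng = np.random.default_rng(8)
B = rng.standard_normal((6, 3))
x_true = rng.standard_normal(3)
x = tls(B, B @ x_true)       # consistent system: smallest singular value is zero
```

The division by v[n] is where the theory's assumptions matter: when the last component of the singular vector vanishes, a meaningful (nongeneric) TLS solution requires exactly the kind of analysis the paper develops.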

Journal ArticleDOI
TL;DR: In this paper, a theoretical formulation based on the collocation method is presented for the eigenanalysis of arbitrarily shaped acoustic cavities, which can be seen as the extension of the non-dimensional influence function (NDIF) method proposed by Kang et al.
Abstract: In this paper, a theoretical formulation based on the collocation method is presented for the eigenanalysis of arbitrarily shaped acoustic cavities. This article can be seen as the extension of the non-dimensional influence function (NDIF) method proposed by Kang et al. (1999, 2000a) from the two-dimensional to the three-dimensional case. Unlike the conventional collocation techniques in the literature, approximate functions used in this paper are two-point functions of which the argument is only the distance between the two points. Based on this radial basis expansion, the acoustic field can be represented more exactly. The field solution is obtained through the linear superposition of radial basis functions, and boundary conditions can be applied at the discrete points. The influence matrix is symmetric regardless of the boundary shape of the cavity, and the calculated eigenvalues rapidly converge to the exact values by using only a few boundary nodes. Moreover, the method results in true and spurious boundary modes, which can be obtained from the right and left unitary vectors of singular value decomposition (SVD), respectively. By employing the updating terms and updating documents of SVD, the true and spurious eigensolutions can be sorted out, respectively. The validity of the proposed method is illustrated through several numerical examples.

Journal ArticleDOI
01 Feb 2002
TL;DR: The results confirm that the dynamic ordering is much more efficient with regard to the amount of work required for the computation of SVD of a given accuracy than the static cyclic ordering.
Abstract: A new approach for the parallel computation of singular value decomposition (SVD) of matrix A ∈ Cm×n is proposed. Contrary to the known algorithms that use a static cyclic ordering of subproblems simultaneously solved in one iteration step, the proposed implementation of the two-sided block-Jacobi method uses a dynamic ordering of subproblems. The dynamic ordering takes into account the actual status of matrix A. In each iteration step, a set of the off-diagonal blocks is determined that reduces the Frobenius norm of the off-diagonal elements of A as much as possible and, at the same time, can be annihilated concurrently. The solution of this task is equivalent to the solution of the maximum-weight perfect matching problem. The greedy algorithm for the efficient solution of this problem is presented. The computational experiments with both types of ordering, incorporated into the two-sided block-Jacobi method, were performed on an SGI - Cray Origin 2000 parallel computer using the Message Passing Interface (MPI). The results confirm that the dynamic ordering is much more efficient with regard to the amount of work required for the computation of SVD of a given accuracy than the static cyclic ordering.
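The pairing step of the dynamic ordering — choosing a set of off-diagonal blocks with large weight whose row/column indices are disjoint, so they can be annihilated concurrently — is solved in the paper with a greedy algorithm. A sketch of that greedy idea on a small weight matrix (the weights would be Frobenius norms of off-diagonal blocks; the example matrix is made up):

```python
import numpy as np

def greedy_pairing(W):
    """Greedy approximation to the maximum-weight perfect matching:
    repeatedly take the off-diagonal block (i, j) of largest weight
    whose indices are both still free, so the chosen subproblems do
    not share rows or columns and can be solved concurrently."""
    W = np.asarray(W, dtype=float)
    pairs, used = [], set()
    for flat in np.argsort(-W, axis=None):       # weights in decreasing order
        i, j = np.unravel_index(flat, W.shape)
        i, j = int(i), int(j)
        if i < j and i not in used and j not in used:
            pairs.append((i, j))
            used.update((i, j))
    return pairs

W = np.array([[0, 5, 10, 0],
              [5, 0, 0, 1],
              [10, 0, 0, 4],
              [0, 1, 4, 0]])
pairs = greedy_pairing(W)
```

Recomputing the weights after each sweep is what makes the ordering "dynamic": the blocks attacked next always reflect the current off-diagonal mass of A.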