
Showing papers on "Singular value decomposition published in 2010"


Journal ArticleDOI
TL;DR: Principal component analysis (PCA), as discussed by the authors, is a multivariate technique that analyzes a data table in which observations are described by several inter-correlated quantitative dependent variables; its goal is to extract the important information from the table, to represent it as a set of new orthogonal variables called principal components, and to display the pattern of similarity of the observations and of the variables as points in maps.
Abstract: Principal component analysis (PCA) is a multivariate technique that analyzes a data table in which observations are described by several inter-correlated quantitative dependent variables. Its goal is to extract the important information from the table, to represent it as a set of new orthogonal variables called principal components, and to display the pattern of similarity of the observations and of the variables as points in maps. The quality of the PCA model can be evaluated using cross-validation techniques such as the bootstrap and the jackknife. PCA can be generalized as correspondence analysis (CA) in order to handle qualitative variables and as multiple factor analysis (MFA) in order to handle heterogeneous sets of variables. Mathematically, PCA depends upon the eigen-decomposition of positive semi-definite matrices and upon the singular value decomposition (SVD) of rectangular matrices. Copyright © 2010 John Wiley & Sons, Inc.
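
The PCA/SVD link the abstract mentions is concrete enough to sketch. Below is a minimal numpy illustration (our own, not from the paper): center the data table, take its SVD, and read the principal components off the scaled left singular vectors.

```python
import numpy as np

def pca(X, n_components):
    """PCA of a data table X (observations x variables) via the SVD."""
    Xc = X - X.mean(axis=0)                            # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]    # factor scores of the observations
    loadings = Vt[:n_components].T                     # loadings of the variables
    explained = s[:n_components] ** 2 / np.sum(s ** 2) # fraction of variance per component
    return scores, loadings, explained

# Example: 100 observations of 5 inter-correlated variables
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(100, 5))
scores, loadings, explained = pca(X, n_components=2)
```

Plotting the two columns of `scores` gives exactly the "observations as points in maps" view the abstract describes.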

6,398 citations


Journal ArticleDOI
TL;DR: This work considers factorizations of the form X = FG^T, and focuses on algorithms in which G is restricted to containing nonnegative entries, but allowing the data matrix X to have mixed signs, thus extending the applicable range of NMF methods.
Abstract: We present several new variations on the theme of nonnegative matrix factorization (NMF). Considering factorizations of the form X = FG^T, we focus on algorithms in which G is restricted to containing nonnegative entries, but allowing the data matrix X to have mixed signs, thus extending the applicable range of NMF methods. We also consider algorithms in which the basis vectors of F are constrained to be convex combinations of the data points. This is used for a kernel extension of NMF. We provide algorithms for computing these new factorizations and we provide supporting theoretical analysis. We also analyze the relationships between our algorithms and clustering algorithms, and consider the implications for sparseness of solutions. Finally, we present experimental results that explore the properties of these new methods.
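
A minimal sketch of the alternating updates for the semi-NMF variant (X of mixed sign, G >= 0) may help fix ideas. The positive/negative-part multiplicative rule below follows the general recipe in this line of work, but initialization, stopping criteria, and parameter choices are our own simplifications.

```python
import numpy as np

def _pos(A):
    return (np.abs(A) + A) / 2.0

def _neg(A):
    return (np.abs(A) - A) / 2.0

def semi_nmf(X, k, n_iter=200, eps=1e-9):
    """Sketch of semi-NMF: X ~ F @ G.T with G >= 0 while X, F may have mixed signs."""
    rng = np.random.default_rng(0)
    G = np.abs(rng.normal(size=(X.shape[1], k)))
    for _ in range(n_iter):
        # F-step: unconstrained least squares given G
        F = X @ G @ np.linalg.pinv(G.T @ G)
        # G-step: multiplicative update built from positive/negative parts,
        # which keeps G entrywise nonnegative
        XtF, FtF = X.T @ F, F.T @ F
        G *= np.sqrt((_pos(XtF) + G @ _neg(FtF)) / (_neg(XtF) + G @ _pos(FtF) + eps))
    return F, G
```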

1,226 citations


Journal Article
TL;DR: Using the nuclear norm as a regularizer, the algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD, producing a sequence of regularized low-rank solutions for large-scale matrix completion problems.
Abstract: We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm SOFT-IMPUTE iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity of order linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices; for example SOFT-IMPUTE takes a few hours to compute low-rank approximations of a 10^6 x 10^6 incomplete matrix with 10^7 observed entries, and fits a rank-95 approximation to the full Netflix training set in 3.3 hours. Our methods achieve good training and test errors and exhibit superior timings when compared to other competitive state-of-the-art techniques.
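
The core SOFT-IMPUTE iteration is compact enough to sketch. In this hedged numpy version a dense SVD stands in for the structured low-rank SVD the paper exploits, and a fixed iteration count replaces the warm-started regularization path.

```python
import numpy as np

def soft_impute(X, mask, lam, n_iter=100):
    """Minimal SOFT-IMPUTE sketch for nuclear-norm-regularized completion.

    X    : matrix with arbitrary values at the unobserved entries
    mask : boolean array, True where X is observed
    lam  : soft-threshold level applied to the singular values
    """
    Z = np.zeros_like(X, dtype=float)
    for _ in range(n_iter):
        filled = np.where(mask, X, Z)                  # impute missing entries from Z
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        Z = (U * np.maximum(s - lam, 0.0)) @ Vt        # soft-thresholded SVD
    return Z
```

Sweeping `lam` from large to small and warm-starting `Z` each time reproduces the regularization-path idea in miniature.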

1,195 citations


Journal ArticleDOI
TL;DR: This hierarchical SVD has properties like the matrix SVD (and collapses to the SVD in $d=2$), and it is proved that one can find low rank (almost) best approximations in a hierarchical format ($\mathcal{H}$-Tucker) which requires only $\mathcal{O}((d-1)k^3+dnk)$ parameters.
Abstract: We define the hierarchical singular value decomposition (SVD) for tensors of order $d\geq2$. This hierarchical SVD has properties like the matrix SVD (and collapses to the SVD in $d=2$), and we prove these. In particular, one can find low rank (almost) best approximations in a hierarchical format ($\mathcal{H}$-Tucker) which requires only $\mathcal{O}((d-1)k^3+dnk)$ parameters, where $d$ is the order of the tensor, $n$ the size of the modes, and $k$ the (hierarchical) rank. The $\mathcal{H}$-Tucker format is a specialization of the Tucker format and it contains as a special case all (canonical) rank $k$ tensors. Based on this new concept of a hierarchical SVD we present algorithms for hierarchical tensor calculations allowing for a rigorous error analysis. The complexity of the truncation (finding lower rank approximations to hierarchical rank $k$ tensors) is in $\mathcal{O}((d-1)k^4+dnk^2)$ and the attainable accuracy is just 2-3 digits less than machine precision.

602 citations


Proceedings Article
06 Dec 2010
TL;DR: In this paper, an efficient convex optimization-based algorithm called Outlier Pursuit is presented, which under mild assumptions on the uncorrupted points (satisfied, e.g., by the standard generative assumption in PCA problems) recovers the exact optimal low-dimensional subspace, and identifies the corrupted points.
Abstract: Singular Value Decomposition (and Principal Component Analysis) is one of the most widely used techniques for dimensionality reduction: successful and efficiently computable, it is nevertheless plagued by a well-known, well-documented sensitivity to outliers. Recent work has considered the setting where each point has a few arbitrarily corrupted components. Yet, in applications of SVD or PCA such as robust collaborative filtering or bioinformatics, malicious agents, defective genes, or simply corrupted or contaminated experiments may effectively yield entire points that are completely corrupted. We present an efficient convex optimization-based algorithm we call Outlier Pursuit, which, under some mild assumptions on the uncorrupted points (satisfied, e.g., by the standard generative assumption in PCA problems), recovers the exact optimal low-dimensional subspace and identifies the corrupted points. Such identification of corrupted points that do not conform to the low-dimensional approximation is of paramount interest in bioinformatics and financial applications, and beyond. Our techniques involve matrix decomposition using nuclear norm minimization; however, our results, setup, and approach necessarily differ considerably from the existing line of work in matrix completion and matrix decomposition, since we develop an approach to recover the correct column space of the uncorrupted matrix, rather than the exact matrix itself.
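
The two building blocks of such a low-rank-plus-column-sparse decomposition are proximal maps, sketched below in numpy. The outer splitting scheme (e.g. an ADMM loop) and all parameter choices are omitted here and are not taken from the paper.

```python
import numpy as np

def svt(M, tau):
    """Prox of the nuclear norm: soft-threshold the singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def column_shrink(M, tau):
    """Prox of the column-wise l1,2 norm: shrink whole columns toward zero,
    which is what lets entirely corrupted points drop out."""
    norms = np.linalg.norm(M, axis=0, keepdims=True)
    return M * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
```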

590 citations


Journal ArticleDOI
TL;DR: A new interpolation formula is suggested in which a d-dimensional array is interpolated on the entries of some TT-cross (tensor train-cross), and the total number of entries and the complexity of the interpolation algorithm depend on d linearly, so the approach does not suffer from the curse of dimensionality.

505 citations


Journal ArticleDOI
TL;DR: The recursive least squares dictionary learning algorithm, RLS-DLA, is presented, which can be used for learning overcomplete dictionaries for sparse signal representation and a forgetting factor can be introduced and easily implemented in the algorithm.
Abstract: We present the recursive least squares dictionary learning algorithm, RLS-DLA, which can be used for learning overcomplete dictionaries for sparse signal representation. Most DLAs presented earlier, for example ILS-DLA and K-SVD, update the dictionary after a batch of training vectors has been processed, usually using the whole set of training vectors as one batch. The training set is used iteratively to gradually improve the dictionary. The approach in RLS-DLA is a continuous update of the dictionary as each training vector is being processed. The core of the algorithm is compact and can be effectively implemented. The algorithm is derived very much along the same path as the recursive least squares (RLS) algorithm for adaptive filtering. Thus, as in RLS, a forgetting factor λ can be introduced and easily implemented in the algorithm. Adjusting λ in an appropriate way makes the algorithm less dependent on the initial dictionary and improves both the convergence properties of RLS-DLA and the representation ability of the resulting dictionary. Two sets of experiments are done to test different methods for learning dictionaries. The goal of the first set is to explore some basic properties of the algorithm in a simple setup, while the goal of the second is the reconstruction of a true underlying dictionary. The first experiment confirms the conjectural properties from the derivation part, while the second demonstrates excellent performance.

413 citations


Journal ArticleDOI
TL;DR: This article considers bipartite graphs that evolve over time, considers matrix- and tensor-based methods for predicting future links, and shows that tensor-based techniques are particularly effective for temporal data with varying periodic patterns.
Abstract: The data in many disciplines such as social networks, web analysis, etc. is link-based, and the link structure can be exploited for many different data mining tasks. In this paper, we consider the problem of temporal link prediction: Given link data for times 1 through T, can we predict the links at time T+1? If our data has underlying periodic structure, can we predict out even further in time, i.e., links at time T+2, T+3, etc.? In this paper, we consider bipartite graphs that evolve over time and consider matrix- and tensor-based methods for predicting future links. We present a weight-based method for collapsing multi-year data into a single matrix. We show how the well-known Katz method for link prediction can be extended to bipartite graphs and, moreover, approximated in a scalable way using a truncated singular value decomposition. Using a CANDECOMP/PARAFAC tensor decomposition of the data, we illustrate the usefulness of exploiting the natural three-dimensional structure of temporal link data. Through several numerical experiments, we demonstrate that both matrix- and tensor-based techniques are effective for temporal link prediction despite the inherent difficulty of the problem. Additionally, we show that tensor-based techniques are particularly effective for temporal data with varying periodic patterns.
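
As a toy version of the matrix-based pipeline, the sketch below collapses T adjacency slices into one weighted matrix and scores candidate links with a truncated SVD; the exponential weighting and parameter names are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def collapse_and_score(adj_slices, theta=0.5, rank=10):
    """Collapse adjacency matrices A_1..A_T into one weighted matrix and
    score all candidate links via a truncated SVD reconstruction."""
    T = len(adj_slices)
    weights = [(1.0 - theta) ** (T - 1 - t) for t in range(T)]  # recent slices count more
    X = sum(w * A for w, A in zip(weights, adj_slices))
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]   # larger entry = more likely link at T+1
```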

408 citations


Journal Article
TL;DR: An optimal two-stage identification algorithm is presented for Hammerstein–Wiener systems where two static nonlinear elements surround a linear block, and is shown to be convergent in the absence of noise and convergent with probability one in the presence of white noise.
Abstract: An optimal two-stage identification algorithm is presented for Hammerstein–Wiener systems where two static nonlinear elements surround a linear block. The proposed algorithm consists of two steps: the first one is the recursive least squares and the second one is the singular value decomposition of two matrices whose dimensions are fixed and do not increase as the number of data points increases. Moreover, the algorithm is shown to be convergent in the absence of noise and convergent with probability one in the presence of white noise.

398 citations


Journal ArticleDOI
TL;DR: In this letter, a new satellite image contrast enhancement technique based on the discrete wavelet transform (DWT) and singular value decomposition has been proposed and it reconstructs the enhanced image by applying inverse DWT.
Abstract: In this letter, a new satellite image contrast enhancement technique based on the discrete wavelet transform (DWT) and singular value decomposition has been proposed. The technique decomposes the input image into the four frequency subbands by using DWT and estimates the singular value matrix of the low-low subband image, and, then, it reconstructs the enhanced image by applying inverse DWT. The technique is compared with conventional image equalization techniques such as standard general histogram equalization and local histogram equalization, as well as state-of-the-art techniques such as brightness preserving dynamic histogram equalization and singular value equalization. The experimental results show the superiority of the proposed method over conventional and state-of-the-art techniques.
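
A hedged numpy/PyWavelets sketch of the pipeline: decompose with the DWT, rescale the singular values of the LL subband against a histogram-equalized reference, and reconstruct with the inverse DWT. The wavelet choice and the exact form of the correction factor are our assumptions.

```python
import numpy as np
import pywt

def equalize(img):
    """Plain global histogram equalization (builds the reference image only)."""
    hist, bins = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = 255.0 * cdf / cdf[-1]
    return np.interp(img.ravel(), bins[:-1], cdf).reshape(img.shape)

def dwt_svd_enhance(img):
    """Sketch of DWT+SVD contrast enhancement for an 8-bit grayscale image."""
    LL, (LH, HL, HH) = pywt.dwt2(img.astype(float), 'haar')
    LL_ref, _ = pywt.dwt2(equalize(img), 'haar')
    U, s, Vt = np.linalg.svd(LL, full_matrices=False)
    s_ref = np.linalg.svd(LL_ref, compute_uv=False)
    xi = s_ref.max() / s.max()            # correction factor from the reference
    enhanced = pywt.idwt2(((U * (xi * s)) @ Vt, (LH, HL, HH)), 'haar')
    return np.clip(enhanced, 0, 255)
```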

310 citations


Proceedings Article
Martin Jaggi, Marek Sulovsky
21 Jun 2010
TL;DR: A new approximation algorithm building upon the recent sparse approximate SDP solver of Hazan, 2008 is proposed, which comes with strong convergence guarantees, and can be interpreted as a first theoretically justified variant of Simon-Funk-type SVD heuristics.
Abstract: Optimization problems with a nuclear norm regularization, such as low-norm matrix factorizations, have seen many applications recently. We propose a new approximation algorithm building upon the recent sparse approximate SDP solver of (Hazan, 2008). The experimental efficiency of our method is demonstrated on large matrix completion problems such as the Netflix dataset. The algorithm comes with strong convergence guarantees, and can be interpreted as a first theoretically justified variant of Simon-Funk-type SVD heuristics. The method is free of tuning parameters, and very easy to parallelize.

Journal ArticleDOI
TL;DR: Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row-column associations within high-dimensional data matrices.
Abstract: Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row-column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard structured matrix approximation to data matrices. The desired checkerboard structure is achieved by forcing both the left- and right-singular vectors to be sparse, that is, having many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed to the least squares regression to produce sparse singular vectors. An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets.
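
The alternating regression view translates directly into a rank-one sketch: each singular vector is obtained by soft-thresholding a regression on the other. Penalty form, normalization, and parameter selection are simplified here relative to the paper.

```python
import numpy as np

def soft(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def rank1_ssvd(X, lam_u=0.1, lam_v=0.1, n_iter=100):
    """Sketch of one sparse-SVD layer: alternate soft-thresholded regressions."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    u, v = U[:, 0], Vt[0]                    # warm start from the plain SVD
    for _ in range(n_iter):
        u = soft(X @ v, lam_u)
        u /= max(np.linalg.norm(u), 1e-12)
        v = soft(X.T @ u, lam_v)
        v /= max(np.linalg.norm(v), 1e-12)
    d = u @ X @ v                            # singular value for this layer
    return d, u, v                           # nonzeros of u, v mark the bicluster
```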

Journal ArticleDOI
TL;DR: An approximation algorithm for finding optimal decompositions is presented, based on the insight provided by the theorem; it significantly outperforms a greedy approximation algorithm for a set covering problem to which the problem of matrix decomposition is easily shown to be reducible.

Journal ArticleDOI
TL;DR: In this paper, an atomic decomposition for minimum rank approximation (ADMiRA) algorithm was proposed for matrix completion with rank-restricted isometry property (R-RIP) and bound both the number of iterations and the error in the approximate solution for the general case of noisy measurements.
Abstract: In this paper, we address compressed sensing of a low-rank matrix posing the inverse problem as an approximation problem with a specified target rank of the solution. A simple search over the target rank then provides the minimum rank solution satisfying a prescribed data approximation bound. We propose an atomic decomposition providing an analogy between parsimonious representations of a sparse vector and a low-rank matrix and extending efficient greedy algorithms from the vector to the matrix case. In particular, we propose an efficient and guaranteed algorithm named atomic decomposition for minimum rank approximation (ADMiRA) that extends Needell and Tropp's compressive sampling matching pursuit (CoSaMP) algorithm from the sparse vector to the low-rank matrix case. The performance guarantee is given in terms of the rank-restricted isometry property (R-RIP) and bounds both the number of iterations and the error in the approximate solution for the general case of noisy measurements and approximately low-rank solution. With a sparse measurement operator as in the matrix completion problem, the computation in ADMiRA is linear in the number of measurements. Numerical experiments for the matrix completion problem show that, although the R-RIP is not satisfied in this case, ADMiRA is a competitive algorithm for matrix completion.

Journal ArticleDOI
TL;DR: This work derives new primal and dual reformulations of the trace norm regularized multi-task learning problem, including a reduced dual formulation that involves minimizing a convex quadratic function over an operator-norm ball in matrix space.
Abstract: We consider a recently proposed optimization formulation of multi-task learning based on trace norm regularized least squares. While this problem may be formulated as a semidefinite program (SDP), its size is beyond general SDP solvers. Previous solution approaches apply proximal gradient methods to solve the primal problem. We derive new primal and dual reformulations of this problem, including a reduced dual formulation that involves minimizing a convex quadratic function over an operator-norm ball in matrix space. This reduced dual problem may be solved by gradient-projection methods, with each projection involving a singular value decomposition. The dual approach is compared with existing approaches and its practical effectiveness is illustrated on simulations and an application to gene expression pattern analysis.
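
The projection the reduced dual requires is a one-liner once the SVD is available. Here is a hedged numpy sketch of projecting onto an operator-norm (spectral-norm) ball by clipping the singular values; names are ours.

```python
import numpy as np

def project_spectral_ball(M, radius):
    """Project M onto {X : ||X||_2 <= radius}: one SVD, clip, reassemble."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.minimum(s, radius)) @ Vt
```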

Proceedings ArticleDOI
13 Jun 2010
TL;DR: In this article, the authors present a method for calculating the low-rank approximation of a matrix which minimizes the L1 norm in the presence of missing data and outliers.
Abstract: The calculation of a low-rank approximation of a matrix is a fundamental operation in many computer vision applications. The workhorse of this class of problems has long been the Singular Value Decomposition. However, in the presence of missing data and outliers this method is not applicable, and unfortunately, this is often the case in practice. In this paper we present a method for calculating the low-rank factorization of a matrix which minimizes the L1 norm in the presence of missing data. Our approach represents a generalization of the Wiberg algorithm, one of the more convincing methods for factorization under the L2 norm. By utilizing the differentiability of linear programs, we can extend the underlying ideas behind this approach to include this class of L1 problems as well. We show that the proposed algorithm can be efficiently implemented using existing optimization software. We also provide preliminary experiments on synthetic as well as real world data with very convincing results.

Journal ArticleDOI
TL;DR: In this article, Kilmer et al. define a free module and show that every linear transformation on that module can be represented by tensor multiplication, and present a generalization of the ideas of eigenvalue and eigenvector to the space of n × n × n tensors.
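
The "tensor multiplication" in question is the t-product; a minimal numpy sketch (ours, with assumed shapes) computes it facewise in the Fourier domain along the third mode.

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors: A is n1 x n2 x n3, B is n2 x m x n3."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)   # one matrix product per frontal face
    return np.real(np.fft.ifft(Ch, axis=2))
```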

Journal ArticleDOI
TL;DR: An R package focused on Bayesian analysis of dynamic linear models with flexibility to deal with a variety of constant or time-varying, univariate or multivariate models, and the numerically stable singular value decomposition-based algorithms used for filtering and smoothing is described.
Abstract: We describe an R package focused on Bayesian analysis of dynamic linear models. The main features of the package are its flexibility to deal with a variety of constant or time-varying, univariate or multivariate models, and the numerically stable singular value decomposition-based algorithms used for filtering and smoothing. In addition to the examples of "out-of-the-box" use, we illustrate how the package can be used in advanced applications to implement a Gibbs sampler for a user-specified model.

Journal ArticleDOI
TL;DR: Simulations for a space-time interference suppression application with a direct-sequence code-division multiple-access (DS-CDMA) system show that the proposed scheme outperforms the state-of-the-art reduced-rank schemes in convergence and tracking at a comparable complexity.
Abstract: This paper presents novel adaptive space-time reduced-rank interference-suppression least squares (LS) algorithms based on a joint iterative optimization of parameter vectors. The proposed space-time reduced-rank scheme consists of a joint iterative optimization of a projection matrix that performs dimensionality reduction and an adaptive reduced-rank parameter vector that yields the symbol estimates. The proposed techniques do not require singular value decomposition (SVD) and automatically find the best set of basis vectors for reduced-rank processing. We present LS expressions for the design of the projection matrix and the reduced-rank parameter vector, and we conduct an analysis of the convergence properties of the LS algorithms. We then develop recursive LS (RLS) adaptive algorithms for their computationally efficient estimation and an algorithm that automatically adjusts the rank of the proposed scheme. A convexity analysis of the LS algorithms is carried out along with the development of a proof of convergence for the proposed algorithms. Simulations for a space-time interference suppression application with a direct-sequence code-division multiple-access (DS-CDMA) system show that the proposed scheme outperforms the state-of-the-art reduced-rank schemes in convergence and tracking at a comparable complexity.


Proceedings Article
21 Jun 2010
TL;DR: An accurate and scalable Nyström scheme that first samples a large column subset from the input matrix, but then only performs an approximate SVD on the inner submatrix by using the recent randomized low-rank matrix approximation algorithms.
Abstract: The Nyström method is an efficient technique for the eigenvalue decomposition of large kernel matrices. However, in order to ensure an accurate approximation, a sufficiently large number of columns has to be sampled. On very large data sets, the SVD step on the resultant data submatrix will soon dominate the computations and become prohibitive. In this paper, we propose an accurate and scalable Nyström scheme that first samples a large column subset from the input matrix, but then only performs an approximate SVD on the inner submatrix by using the recent randomized low-rank matrix approximation algorithms. Theoretical analysis shows that the proposed algorithm is as accurate as the standard Nyström method that directly performs a large SVD on the inner submatrix. On the other hand, its time complexity is only as low as performing a small SVD. Experiments are performed on a number of large-scale data sets for low-rank approximation and spectral embedding. In particular, spectral embedding of an MNIST data set with 3.3 million examples takes less than an hour on a standard PC with 4GB memory.
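
A hedged numpy sketch of the two-stage idea: sample a large column subset, but decompose the inner submatrix only approximately with a randomized method. Sampling scheme, oversampling amount, and all names are our assumptions.

```python
import numpy as np

def randomized_eig(W, k, oversample=10, seed=0):
    """Randomized low-rank eigendecomposition of a symmetric PSD matrix W."""
    rng = np.random.default_rng(seed)
    Y = W @ rng.normal(size=(W.shape[0], k + oversample))
    Q, _ = np.linalg.qr(Y)
    vals, vecs = np.linalg.eigh(Q.T @ W @ Q)   # small dense problem
    idx = np.argsort(vals)[::-1][:k]
    return vals[idx], Q @ vecs[:, idx]

def nystrom_randomized(K, m, k, seed=1):
    """Large column sample + approximate decomposition of the inner submatrix."""
    n = K.shape[0]
    cols = np.random.default_rng(seed).choice(n, size=m, replace=False)
    C = K[:, cols]                             # sampled columns
    W = K[np.ix_(cols, cols)]                  # inner m x m submatrix
    vals, U = randomized_eig(W, k)
    W_pinv = (U / np.maximum(vals, 1e-12)) @ U.T
    return C @ W_pinv @ C.T                    # rank-k Nystrom approximation of K
```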

Journal ArticleDOI
TL;DR: Three different sets of experiments conducted on the GTZAN and the ISMIR2004 Genre datasets demonstrate the superiority of NMPCA against the aforementioned subspace analysis techniques in extracting more discriminating features, especially when the training set has small cardinality.
Abstract: Motivated by psychophysiological investigations on the human auditory system, a bio-inspired two-dimensional auditory representation of music signals is exploited, that captures the slow temporal modulations. Although each recording is represented by a second-order tensor (i.e., a matrix), a third-order tensor is needed to represent a music corpus. Non-negative multilinear principal component analysis (NMPCA) is proposed for the unsupervised dimensionality reduction of the third-order tensors. The NMPCA maximizes the total tensor scatter while preserving the non-negativity of auditory representations. An algorithm for NMPCA is derived by exploiting the structure of the Grassmann manifold. The NMPCA is compared against three multilinear subspace analysis techniques, namely the non-negative tensor factorization, the high-order singular value decomposition, and the multilinear principal component analysis as well as their linear counterparts, i.e., the non-negative matrix factorization, the singular value decomposition, and the principal components analysis in extracting features that are subsequently classified by either support vector machine or nearest neighbor classifiers. Three different sets of experiments conducted on the GTZAN and the ISMIR2004 Genre datasets demonstrate the superiority of NMPCA against the aforementioned subspace analysis techniques in extracting more discriminating features, especially when the training set has small cardinality. The best classification accuracies reported in the paper exceed those obtained by the state-of-the-art music genre classification algorithms applied to both datasets.

Journal ArticleDOI
TL;DR: In this paper, a numerical method is presented for form-finding of tensegrity structures, where the topology and the types of members are the only information required in this form-finding process.

Proceedings ArticleDOI
03 May 2010
TL;DR: A dedicated adaptation of quadratic programming is proposed that enables fast computations of the hierarchical inverse kinematics and is extended to deal with unilateral constraints, obtaining sufficiently high performances for reactive control.
Abstract: Classically, the inverse kinematics is performed by computing the singular value decomposition of the matrix to invert. This enables a very simple writing of the algorithm. However, the computation cost is high, especially when applied to complex robots and complex sets of constraints (typically around 5 ms for 50 degrees of freedom - DOF). In this paper, we propose a dedicated adaptation of quadratic programming that enables fast computations of the hierarchical inverse kinematics (around 0.1 ms for 50 DOF). We then extend this algorithm to deal with unilateral constraints, obtaining sufficiently high performances for reactive control.
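
For reference, the classical SVD-based step the paper accelerates looks roughly like this in numpy (the damping term is a standard robustness tweak, not part of the paper):

```python
import numpy as np

def ik_step(J, err, damping=1e-3):
    """One classical IK step: damped SVD pseudo-inverse of the Jacobian J,
    mapping a task-space error to a joint-space update."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_inv = s / (s**2 + damping**2)        # damped inversion, robust near singularities
    return Vt.T @ (s_inv * (U.T @ err))
```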

Journal ArticleDOI
TL;DR: Two RPCA algorithms that will greatly reduce the computation cost are presented, based on first-order perturbation analysis (FOP), which is a rank-one update of the eigenvalues and their corresponding eigenvectors of a sample covariance matrix.
Abstract: Principal component analysis (PCA) has been successfully applied in large scale process monitoring. However, classical PCA has some drawbacks: one of these aspects is the inability to deal with parameter-varying processes, where it interprets the natural changes in the process as faults, resulting in numerous false alarms. These false alarms threaten the credibility of the monitoring system. Therefore, recursive PCA (RPCA) algorithms are recommended. The most important challenge faced by these algorithms is the high computation costs, due to repeated eigenvalue decomposition (EVD) or singular value decomposition (SVD). Motivated by this issue, we present two RPCA algorithms that will greatly reduce the computation cost. The first algorithm is based on first-order perturbation analysis (FOP), which is a rank-one update of the eigenvalues and their corresponding eigenvectors of a sample covariance matrix. The second one is based on the data projection method (DPM), which is a simple and reliable approach fo...

Proceedings ArticleDOI
01 Sep 2010
TL;DR: This paper extends IRLS-p as a family of algorithms for the matrix rank minimization problem and presents a related family of algorithms, sIRLS-p, which performs better than algorithms such as Singular Value Thresholding on a range of 'hard' problems (where the ratio of the number of degrees of freedom in the variable to the number of measurements is large).
Abstract: The classical compressed sensing problem is to find the sparsest solution to an underdetermined system of linear equations. A good convex approximation to this problem is to minimize the l1 norm subject to affine constraints. The Iterative Reweighted Least Squares (IRLS-p) algorithm (0 < p ≤ 1) has been proposed as a method to solve the lp (p ≤ 1) minimization problem with affine constraints. Recently Chartrand et al. observed that IRLS-p with p < 1 has better empirical performance than l1 minimization, and Daubechies et al. gave 'local' linear and super-linear convergence results for IRLS-p with p = 1 and p < 1, respectively. In this paper we extend IRLS-p as a family of algorithms for the matrix rank minimization problem and we also present a related family of algorithms, sIRLS-p. We present guarantees on recovery of low-rank matrices for IRLS-1 under the Null Space Property (NSP). We also establish that the difference between the successive iterates of IRLS-p and sIRLS-p converges to zero and that the IRLS-0 algorithm converges to the stationary point of a non-convex rank-surrogate minimization problem. On the numerical side, we give a few efficient implementations for IRLS-0 and demonstrate that both sIRLS-0 and IRLS-0 perform better than algorithms such as Singular Value Thresholding (SVT) on a range of 'hard' problems (where the ratio of the number of degrees of freedom in the variable to the number of measurements is large). We also observe that sIRLS-0 performs better than the Iterative Hard Thresholding algorithm (IHT) when there is no a priori information on the low-rank solution.
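
The vector IRLS-p iteration that the paper lifts to matrices has a compact closed form per step. A hedged numpy sketch (the annealed smoothing parameter is a common implementation choice, not taken from the paper):

```python
import numpy as np

def irls_p(A, b, p=1.0, n_iter=50, eps=1.0):
    """IRLS-p sketch for min ||x||_p^p subject to Ax = b."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # least-norm starting point
    for _ in range(n_iter):
        # weighted least-norm solution: x = D A^T (A D A^T)^{-1} b,
        # with D the inverse of the IRLS weights (x_i^2 + eps)^(p/2 - 1)
        D = (x**2 + eps) ** (1.0 - p / 2.0)
        x = D * (A.T @ np.linalg.solve((A * D) @ A.T, b))
        eps = max(eps / 10.0, 1e-8)               # anneal the smoothing term
    return x
```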

Journal ArticleDOI
TL;DR: A novel approach named the scaling iterative closest point (SICP) algorithm which integrates a scale matrix with boundaries into the original ICP algorithm for scaling registration of m-D point sets is introduced.

Proceedings ArticleDOI
15 Dec 2010
TL;DR: Compared with other constraint-based techniques, this isotropic multi-resolution strain-limiting method is straightforward to implement, efficient to use, and applicable to a wide range of shell and solid materials.
Abstract: In this paper we describe a fast strain-limiting method that allows stiff, incompliant materials to be simulated efficiently. Unlike prior approaches, which act on springs or individual strain components, this method acts on the strain tensors in a coordinate-invariant fashion, allowing isotropic behavior. Our method applies to both two- and three-dimensional strains, and only requires computing the singular value decomposition of the deformation gradient, either a small 2x2 or 3x3 matrix, for each element. We demonstrate its use with triangular and tetrahedral linear-basis elements. For triangulated surfaces in three-dimensional space, we also describe a complementary edge-angle-limiting method to limit out-of-plane bending. All of the limits are enforced through an iterative, non-linear, Gauss-Seidel-like constraint procedure. To accelerate convergence, we propose a novel multi-resolution algorithm that enforces fitted limits at each level of a non-conforming hierarchy. Compared with other constraint-based techniques, our isotropic multi-resolution strain-limiting method is straightforward to implement, efficient to use, and applicable to a wide range of shell and solid materials.
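
The per-element operation is simple enough to sketch in numpy: clamp the principal stretches of the deformation gradient and reassemble (reflection handling and the Gauss-Seidel constraint sweep are omitted here).

```python
import numpy as np

def limit_strain(F, low=0.95, high=1.05):
    """Clamp the principal stretches of a 2x2 or 3x3 deformation gradient F:
    SVD, clip the singular values to the allowed interval, reassemble."""
    U, s, Vt = np.linalg.svd(F)
    return (U * np.clip(s, low, high)) @ Vt
```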

Journal ArticleDOI
TL;DR: This paper proposes two nested PRE (proportional reduction of error) measures of fit, and applies the resulting method to citations between journals and to international trade in clothing, to illustrate insights gained from being able to model asymmetrical flow patterns.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the application of the singular value decomposition to compact-binary gravitational-wave data analysis, and found that the truncated singular value decomposition can reduce the number of filters required to analyze a given region of the parameter space of compact binary coalescence waveforms by an order of magnitude with high reconstruction accuracy.
Abstract: We investigate the application of the singular value decomposition to compact-binary gravitational-wave data analysis. We find that the truncated singular value decomposition reduces the number of filters required to analyze a given region of parameter space of compact-binary coalescence waveforms by an order of magnitude with high reconstruction accuracy. We also compute an analytic expression for the expected signal loss due to the singular value decomposition truncation.
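
A hedged numpy sketch of the idea: stack the templates into a matrix, truncate the SVD at a target reconstruction accuracy, and filter with the reduced basis instead of the originals. The tolerance convention and names are our assumptions.

```python
import numpy as np

def truncated_filter_bank(templates, tol=1e-4):
    """Compress a bank of N waveform filters (rows) with a truncated SVD.

    Keeps the smallest basis whose fractional Frobenius-norm
    reconstruction error stays below the requested tolerance.
    """
    U, s, Vt = np.linalg.svd(templates, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    k = min(int(np.searchsorted(frac, 1.0 - tol)) + 1, len(s))
    basis = Vt[:k]                   # reduced set of orthonormal filters
    coeffs = U[:, :k] * s[:k]        # reconstruction: templates ~ coeffs @ basis
    return basis, coeffs
```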