
Showing papers on "Matrix (mathematics) published in 2010"


Book
Roger A. Horn1
12 Jul 2010
TL;DR: This book treats the field of values, stable matrices and inertia, singular value inequalities, matrix equations and Kronecker products, Hadamard products, and matrices and functions.
Abstract: 1. The field of values 2. Stable matrices and inertia 3. Singular value inequalities 4. Matrix equations and Kronecker products 5. Hadamard products 6. Matrices and functions.

7,013 citations


Journal ArticleDOI
TL;DR: This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank, and develops a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
Abstract: This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices $\{\boldsymbol{X}^k,\boldsymbol{Y}^k\}$, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix $\boldsymbol{Y}^k$. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates $\{\boldsymbol{X}^k\}$ is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence of iterates converges. On the practical side, we provide numerical examples in which $1,000\times1,000$ matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with the recent literature on linearized Bregman iterations for $\ell_1$ minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
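The core iteration is simple enough to sketch. Below is a minimal NumPy sketch of a singular value thresholding (SVT) style update, not the authors' reference implementation; the threshold tau, step size delta, iteration count, and the synthetic test problem are illustrative assumptions.

import numpy as np

def svt_complete(M_obs, mask, tau, delta=1.2, n_iter=300, tol=1e-4):
    """Sketch of a singular value thresholding (SVT) iteration for matrix
    completion.  M_obs holds the observed entries (zeros elsewhere), mask is a
    boolean array of observed positions; tau, delta, n_iter are illustrative."""
    Y = np.zeros_like(M_obs)
    X = Y
    for _ in range(n_iter):
        # Soft-threshold the singular values of Y (the shrinkage step).
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt
        # Gradient-type update that only touches the observed entries.
        residual = mask * (M_obs - X)
        Y = Y + delta * residual
        if np.linalg.norm(residual) <= tol * np.linalg.norm(M_obs):
            break
    return X

# Tiny usage example: a synthetic rank-2 matrix with half the entries observed.
rng = np.random.default_rng(0)
M = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 100))
mask = rng.random(M.shape) < 0.5
X_hat = svt_complete(mask * M, mask, tau=5 * 100)   # tau on the order of 5*sqrt(n1*n2)
print("relative error:", np.linalg.norm(X_hat - M) / np.linalg.norm(M))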

5,276 citations


Book
24 May 2010
TL;DR: This textbook develops applied linear algebra from linear equations, echelon forms, and matrix algebra through vector spaces, norms, inner products, orthogonality, determinants, and eigenvalues and eigenvectors, and concludes with the Perron-Frobenius theory of nonnegative matrices.
Abstract: Preface 1. Linear equations 2. Rectangular systems and echelon forms 3. Matrix algebra 4. Vector spaces 5. Norms, inner products, and orthogonality 6. Determinants 7. Eigenvalues and Eigenvectors 8. Perron-Frobenius theory of nonnegative matrices Index.

4,979 citations


Journal ArticleDOI
TL;DR: This paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors).
Abstract: This paper is concerned with the problem of recovering an unknown matrix from a small fraction of its entries. This is known as the matrix completion problem, and comes up in a great number of applications, including the famous Netflix Prize and other similar questions in collaborative filtering. In general, accurate recovery of a matrix from a small number of entries is impossible, but the knowledge that the unknown matrix has low rank radically changes this premise, making the search for solutions meaningful. This paper presents optimality results quantifying the minimum number of entries needed to recover a matrix of rank r exactly by any method whatsoever (the information theoretic limit). More importantly, the paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors). This convex program simply finds, among all matrices consistent with the observed entries, that with minimum nuclear norm. As an example, we show that on the order of nr log(n) samples are needed to recover a random n x n matrix of rank r by any method, and to be sure, nuclear norm minimization succeeds as soon as the number of entries is of the form nr polylog(n).
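As a concrete illustration of the convex program described above, the following CVXPY sketch finds the matrix of minimum nuclear norm consistent with the observed entries; the problem size, sampling rate, and reliance on CVXPY's default conic solver are illustrative assumptions, not the authors' setup.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-r ground truth
mask = (rng.random((n, n)) < 0.5).astype(float)                  # observed entry pattern

X = cp.Variable((n, n))
# Among all matrices agreeing with the observed entries, pick the one of
# minimum nuclear norm.
problem = cp.Problem(cp.Minimize(cp.norm(X, "nuc")),
                     [cp.multiply(mask, X) == mask * M])
problem.solve()
print("relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))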

2,241 citations


Journal ArticleDOI
26 Apr 2010
TL;DR: This paper surveys the novel literature on matrix completion and introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise, and shows that, in practice, nuclear-norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples.
Abstract: On the heels of compressed sensing, a new field has very recently emerged. This field addresses a broad range of problems of significant practical interest, namely, the recovery of a data matrix from what appears to be incomplete, and perhaps even corrupted, information. In its simplest form, the problem is to recover a matrix from a small sample of its entries. It comes up in many areas of science and engineering, including collaborative filtering, machine learning, control, remote sensing, and computer vision, to name a few. This paper surveys the novel literature on matrix completion, which shows that under some suitable conditions, one can recover an unknown low-rank matrix from a nearly minimal set of entries by solving a simple convex optimization problem, namely, nuclear-norm minimization subject to data constraints. Further, this paper introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise. A typical result is that one can recover an unknown n x n matrix of low rank r from just about nr log^2(n) noisy samples with an error that is proportional to the noise level. We present numerical results that complement our quantitative analysis and show that, in practice, nuclear-norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples. Some analogies between matrix completion and compressed sensing are discussed throughout.

1,623 citations


Journal ArticleDOI
TL;DR: The transmission matrix of a thick random scattering sample is determined and it is shown that this matrix exhibits statistical properties in good agreement with random matrix theory and allows light focusing and imaging through the random medium.
Abstract: We introduce a method to experimentally measure the monochromatic transmission matrix of a complex medium in optics. This method is based on a spatial phase modulator together with a full-field interferometric measurement on a camera. We determine the transmission matrix of a thick random scattering sample. We show that this matrix exhibits statistical properties in good agreement with random matrix theory and allows light focusing and imaging through the random medium. This method might give important insight into the mesoscopic properties of a complex medium.

1,455 citations


Book
16 Sep 2010
TL;DR: This book presents an enormous amount of information in a concise and accessible format and begins with the assumption that the reader has never seen a matrix.

1,236 citations


Journal Article
TL;DR: Using the nuclear norm as a regularizer, the algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD in a sequence of regularized low-rank solutions for large-scale matrix completion problems.
Abstract: We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm SOFT-IMPUTE iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity of order linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices; for example SOFT-IMPUTE takes a few hours to compute low-rank approximations of a 10^6 × 10^6 incomplete matrix with 10^7 observed entries, and fits a rank-95 approximation to the full Netflix training set in 3.3 hours. Our methods achieve good training and test errors and exhibit superior timings when compared to other competitive state-of-the-art techniques.
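The update just described is compact enough to sketch. The following is a minimal NumPy sketch of a Soft-Impute style iteration with a warm-started grid of regularization values; the fixed thresholds, dense SVD, and toy data are illustrative simplifications (the paper exploits problem structure to use low-rank SVDs instead).

import numpy as np

def soft_impute(M_obs, mask, lam, Z0=None, n_iter=200, tol=1e-5):
    """One regularization level of a Soft-Impute style iteration (sketch).
    M_obs: observed entries (zeros elsewhere), mask: boolean observed pattern."""
    Z = np.zeros_like(M_obs) if Z0 is None else Z0.copy()
    for _ in range(n_iter):
        # Fill the missing entries with the current low-rank estimate ...
        filled = mask * M_obs + (~mask) * Z
        # ... then replace Z by a soft-thresholded SVD of the filled matrix.
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        Z_new = (U * np.maximum(s - lam, 0.0)) @ Vt
        if np.linalg.norm(Z_new - Z) <= tol * max(np.linalg.norm(Z), 1.0):
            return Z_new
        Z = Z_new
    return Z

# Warm-started regularization path on a decreasing grid of lambda values.
rng = np.random.default_rng(2)
M = rng.standard_normal((80, 3)) @ rng.standard_normal((3, 80))
mask = rng.random(M.shape) < 0.4
Z, path = None, []
for lam in (20.0, 10.0, 5.0, 1.0):
    Z = soft_impute(mask * M, mask, lam, Z0=Z)
    path.append(Z)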

1,195 citations


Proceedings Article
06 Dec 2010
TL;DR: In this paper, an efficient convex optimization-based algorithm called Outlier Pursuit is presented, which under mild assumptions on the uncorrupted points (satisfied, e.g., by the standard generative assumption in PCA problems) recovers the exact optimal low-dimensional subspace, and identifies the corrupted points.
Abstract: Singular Value Decomposition (and Principal Component Analysis) is one of the most widely used techniques for dimensionality reduction: successful and efficiently computable, it is nevertheless plagued by a well-known, well-documented sensitivity to outliers. Recent work has considered the setting where each point has a few arbitrarily corrupted components. Yet, in applications of SVD or PCA such as robust collaborative filtering or bioinformatics, malicious agents, defective genes, or simply corrupted or contaminated experiments may effectively yield entire points that are completely corrupted. We present an efficient convex optimization-based algorithm we call Outlier Pursuit, that under some mild assumptions on the uncorrupted points (satisfied, e.g., by the standard generative assumption in PCA problems) recovers the exact optimal low-dimensional subspace, and identifies the corrupted points. Such identification of corrupted points that do not conform to the low-dimensional approximation, is of paramount interest in bioinformatics and financial applications, and beyond. Our techniques involve matrix decomposition using nuclear norm minimization, however, our results, setup, and approach, necessarily differ considerably from the existing line of work in matrix completion and matrix decomposition, since we develop an approach to recover the correct column space of the uncorrupted matrix, rather than the exact matrix itself.
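To make the convex program concrete, here is a small CVXPY sketch of a nuclear norm plus column-wise penalty decomposition in the spirit of Outlier Pursuit; the weight lam, the problem sizes, and the corruption model are illustrative assumptions rather than the paper's tuned choices.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n, r, n_out = 30, 2, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # clean low-rank part
M[:, :n_out] = 5.0 * rng.standard_normal((n, n_out))            # a few corrupted columns

L = cp.Variable((n, n))
C = cp.Variable((n, n))
lam = 0.6                              # illustrative weight on the outlier term
# Nuclear norm for the low-rank part plus the sum of column norms (a convex
# surrogate for "few corrupted columns").
objective = cp.Minimize(cp.norm(L, "nuc") + lam * cp.sum(cp.norm(C, axis=0)))
cp.Problem(objective, [L + C == M]).solve()

col_norms = np.linalg.norm(C.value, axis=0)
print("columns flagged as outliers:", np.flatnonzero(col_norms > 1e-3))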

590 citations


Journal ArticleDOI
TL;DR: Under this framework, a Boolean network equation is converted into an equivalent algebraic form as a conventional discrete-time linear system, and a matrix expression of logic is proposed in which a logical variable is expressed as a vector and a logical function as a multilinear mapping.

Abstract: A new matrix product, called the semi-tensor product of matrices, is reviewed. Using it, a matrix expression of logic is proposed, where a logical variable is expressed as a vector and a logical function is expressed as a multilinear mapping. Under this framework, a Boolean network equation is converted into an equivalent algebraic form as a conventional discrete-time linear system. By analyzing the transition matrix of the linear system, formulas are obtained for a) the number of fixed points; b) the numbers of cycles of different lengths; c) the transient period, after which all points enter the set of attractors; and d) the basin of each attractor. The corresponding algorithms are developed and applied to several examples.
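A toy NumPy illustration of the algebraic form described above (bypassing the semi-tensor product machinery itself): the 2^n states of a small, made-up Boolean network are encoded as canonical basis vectors, the dynamics become x(t+1) = L x(t) for a 0/1 transition matrix L, the number of fixed points is the trace of L, and traces of powers of L count states on short cycles.

import numpy as np
from itertools import product

# Made-up 2-node Boolean network:
#   x1(t+1) = x1(t) AND x2(t),    x2(t+1) = x1(t) OR (NOT x2(t))
def step(x1, x2):
    return (x1 and x2, x1 or (not x2))

states = list(product([True, False], repeat=2))      # the 2^n states
index = {s: i for i, s in enumerate(states)}

# Transition matrix of the equivalent linear system x(t+1) = L x(t), acting on
# canonical basis vectors that encode the states.
L = np.zeros((len(states), len(states)), dtype=int)
for s in states:
    L[index[step(*s)], index[s]] = 1

# Number of fixed points = trace(L); trace(L^k) counts states lying on cycles
# whose length divides k.
print("fixed points:", np.trace(L))
print("trace(L^k) for k = 1..4:",
      [int(np.trace(np.linalg.matrix_power(L, k))) for k in range(1, 5)])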

589 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: This paper reduces this extremely challenging optimization problem to a sequence of convex programs that minimize the sum of ℓ1-norm and nuclear norm of the two component matrices, which can be efficiently solved by scalable convex optimization techniques with guaranteed fast convergence.
Abstract: This paper studies the problem of simultaneously aligning a batch of linearly correlated images despite gross corruption (such as occlusion). Our method seeks an optimal set of image domain transformations such that the matrix of transformed images can be decomposed as the sum of a sparse matrix of errors and a low-rank matrix of recovered aligned images. We reduce this extremely challenging optimization problem to a sequence of convex programs that minimize the sum of l1-norm and nuclear norm of the two component matrices, which can be efficiently solved by scalable convex optimization techniques with guaranteed fast convergence. We verify the efficacy of the proposed robust alignment algorithm with extensive experiments with both controlled and uncontrolled real data, demonstrating higher accuracy and efficiency than existing methods over a wide range of realistic misalignments and corruptions.

Book
01 Jan 2010
TL;DR: This handbook assembles contributed chapters covering core linear algebra, combinatorial matrix theory and graphs, numerical methods, applications of linear algebra in other disciplines, and computational software.
Abstract: Linear Algebra Linear Algebra Vectors, Matrices, and Systems of Linear Equations Jane Day Linear Independence, Span, and Bases Mark Mills Linear Transformations Francesco Barioli Determinants and Eigenvalues Luz M. DeAlba Inner Product Spaces, Orthogonal Projection, Least Squares, and Singular Value Decomposition Lixing Han and Michael Neumann Canonical Forms Leslie Hogben Other Canonical Forms Roger A. Horn and Vladimir V. Sergeichuk Unitary Similarity, Normal Matrices, and Spectral Theory Helene Shapiro Hermitian and Positive Definite Matrices Wayne Barrett Nonnegative and Stochastic Matrices Uriel G. Rothblum Partitioned Matrices Robert Reams Topics in Linear Algebra Schur Complements Roger A. Horn and Fuzhen Zhang Quadratic, Bilinear, and Sesquilinear Forms Raphael Loewy Multilinear Algebra J.A. Dias da Silva and Armando Machado Tensors and Hypermatrices Lek-Heng Lim Matrix Equalities and Inequalities Michael Tsatsomeros Functions of Matrices Nicholas J. Higham Matrix Polynomials Jorg Liesen and Christian Mehl Matrix Equations Beatrice Meini Invariant Subspaces G.W. Stewart Matrix Perturbation Theory Ren-Cang Li Special Types of Matrices Albrecht Bottcher and Ilya Spitkovsky Pseudospectra Mark Embree Singular Values and Singular Value Inequalities Roy Mathias Numerical Range Chi-Kwong Li Matrix Stability and Inertia Daniel Hershkowitz Generalized Inverses of Matrices Yimin Wei Inverse Eigenvalue Problems Alberto Borobia Totally Positive and Totally Nonnegative Matrices Shaun M. Fallat Linear Preserver Problems Peter Semrl Matrices over Finite Fields J. D. Botha Matrices over Integral Domains Shmuel Friedland Similarity of Families of Matrices Shmuel Friedland Representations of Quivers and Mixed Graphs Roger A. Horn and Vladimir V. Sergeichuk Max-Plus Algebra Marianne Akian, Ravindra Bapat, and Stephane Gaubert Matrices Leaving a Cone Invariant Bit-Shun Tam and Hans Schneider Spectral Sets Catalin Badea and Bernhard Beckermann Combinatorial Matrix Theory and Graphs Combinatorial Matrix Theory Combinatorial Matrix Theory Richard A. Brualdi Matrices and Graphs Willem H. Haemers Digraphs and Matrices Jeffrey L. Stuart Bipartite Graphs and Matrices Bryan L. Shader Sign Pattern Matrices Frank J. Hall and Zhongshan Li Topics in Combinatorial Matrix Theory Permanents Ian M. Wanless D-Optimal Matrices Michael G. Neubauer and William Watkins Tournaments T.S. Michael Minimum Rank, Maximum Nullity, and Zero Forcing Number of Graphs Shaun M. Fallat and Leslie Hogben Spectral Graph Theory Steve Butler and Fan Chung Algebraic Connectivity Steve Kirkland Matrix Completion Problems Luz M. DeAlba, Leslie Hogben, and Amy Wangsness Wehe Numerical Methods Numerical Methods for Linear Systems Vector and Matrix Norms, Error Analysis, Efficiency, and Stability Ralph Byers and Biswa Nath Datta Matrix Factorizations and Direct Solution of Linear Systems Christopher Beattie Least Squares Solution of Linear Systems Per Christian Hansen and Hans Bruun Nielsen Sparse Matrix Methods Esmond G. Ng Iterative Solution Methods for Linear Systems Anne Greenbaum Numerical Methods for Eigenvalues Symmetric Matrix Eigenvalue Techniques Ivan Slapnicar Unsymmetric Matrix Eigenvalue Techniques David S. Watkins The Implicitly Restarted Arnoldi Method D.C. Sorensen Computation of the Singular Value Decomposition Alan Kaylor Cline and Inderjit S. 
Dhillon Computing Eigenvalues and Singular Values to High Relative Accuracy Zlatko Drmac Nonlinear Eigenvalue Problems Heinrich Voss Topics in Numerical Linear Algebra Fast Matrix Multiplication Dario A. Bini Fast Algorithms for Structured Matrix Computations Michael Stewart Structured Eigenvalue Problems | Structure-Preserving Algorithms, Structured Error Analysis Heike Fassbender Large-Scale Matrix Computations Roland W. Freund Linear Algebra in Other Disciplines Applications to Physical and Biological Sciences Linear Algebra and Mathematical Physics Lorenzo Sadun Linear Algebra in Biomolecular Modeling Zhijun Wu Linear Algebra in Mathematical Population Biology and Epidemiology Fred Brauer and Carlos Castillo-Chavez Applications to Optimization Linear Programming Leonid N. Vaserstein Semidefinite Programming Henry Wolkowicz Applications to Probability and Statistics Random Vectors and Linear Statistical Models Simo Puntanen and George P.H. Styan Multivariate Statistical Analysis Simo Puntanen, George A.F. Seber, and George P.H. Styan Markov Chains Beatrice Meini Applications to Computer Science Coding Theory Joachim Rosenthal and Paul Weiner Quantum Computation Zijian Diao Operator Quantum Error Correction Chi-Kwong Li, Yiu-Tung Poon, and Nung-Sing Sze Information Retrieval and Web Search Amy N. Langville and Carl D. Meyer Signal Processing Michael Stewart Applications to Analysis Differential Equations and Stability Volker Mehrmann and Tatjana Stykel Dynamical Systems and Linear Algebra Fritz Colonius and Wolfgang Kliemann Control Theory Peter Benner Fourier Analysis Kenneth Howell Applications to Geometry Geometry Mark Hunacek Some Applications of Matrices and Graphs in Euclidean Geometry Miroslav Fiedler Applications to Algebra Matrix Groups Peter J. Cameron Group Representations Randall R. Holmes and Tin-Yau Tam Nonassociative Algebras Murray R. Bremner, Lucia I. Murakami, and Ivan P. Shestakov Lie Algebras Robert Wilson Computational Software Interactive Software for Linear Algebra MATLAB Steven J. Leon Linear Algebra in Maple David J. Jeffrey and Robert M. Corless Mathematica Heikki Ruskeep'a'a Sage Robert A. Beezer, Robert Bradshaw, Jason Grout, and William Stein Packages of Subroutines for Linear Algebra BLAS Jack Dongarra, Victor Eijkhout, and Julien Langou LAPACK Zhaojun Bai, James Demmel, Jack Dongarra, Julien Langou, and Jenny Wang Use of ARPACK and EIGS D.C. Sorensen Summary of Software for Linear Algebra Freely Available on the Web Jack Dongarra, Victor Eijkhout, and Julien Langou Glossary Notation Index Index

Journal Article
TL;DR: Naive mesenchymal stem cells (MSCs) are shown to specify lineage and commit to phenotypes with extreme sensitivity to tissue-level elasticity; after several weeks in culture the commitment becomes elasticity-insensitive, as it is for differentiated cell types.
Abstract: Microenvironments appear important in stem cell lineage specification but can be difficult to adequately characterize or control with soft tissues. Naive mesenchymal stem cells (MSCs) are shown here to specify lineage and commit to phenotypes with extreme sensitivity to tissue-level elasticity. Soft matrices that mimic brain are neurogenic, stiffer matrices that mimic muscle are myogenic, and comparatively rigid matrices that mimic collagenous bone prove osteogenic. During the initial week in culture, reprogramming of these lineages is possible with addition of soluble induction factors, but after several weeks in culture, the cells commit to the lineage specified by matrix elasticity, consistent with the elasticity-insensitive commitment of differentiated cell types. Inhibition of nonmuscle myosin II blocks all elasticity-directed lineage specification-without strongly perturbing many other aspects of cell function and shape. The results have significant implications for understanding physical effects of the in vivo microenvironment and also for therapeutic uses of stem cells.

Journal ArticleDOI
TL;DR: A new interpolation formula is suggested in which a d-dimensional array is interpolated on the entries of some TT-cross (tensor-train cross); the total number of entries used and the complexity of the interpolation algorithm depend on d linearly, so the approach does not suffer from the curse of dimensionality.

Journal ArticleDOI
TL;DR: The improved computation presented in this paper optimizes the neural-network learning process using the Levenberg-Marquardt (LM) algorithm; the memory and time savings are especially pronounced when training on large sets of patterns.
Abstract: The improved computation presented in this paper is aimed at optimizing the learning process of neural networks using the Levenberg-Marquardt (LM) algorithm. The quasi-Hessian matrix and gradient vector are computed directly, without Jacobian matrix multiplication and storage, which removes the memory limitation of LM training. Because the quasi-Hessian matrix is symmetric, only the elements in its upper (or lower) triangular part need to be calculated. Training speed is therefore improved significantly, both because of the smaller arrays stored in memory and because of the reduced number of operations in the quasi-Hessian calculation. The memory and time savings are especially pronounced when training on large sets of patterns.
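The memory-saving idea is easy to sketch: accumulate the quasi-Hessian Q = sum_p j_p^T j_p and the gradient g = sum_p e_p j_p one training pattern at a time, so the full Jacobian is never formed. The sketch below uses a made-up curve-fitting problem and hand-coded Jacobian rows rather than the paper's neural-network derivatives; mu and the iteration counts are illustrative.

import numpy as np

def lm_accumulate(residual_fn, jac_row_fn, w, patterns):
    """Accumulate quasi-Hessian Q = sum_p j_p^T j_p and gradient g = sum_p e_p j_p
    one pattern at a time, so the full Jacobian is never stored (sketch)."""
    Q = np.zeros((w.size, w.size))
    g = np.zeros(w.size)
    for p in patterns:
        e_p = residual_fn(w, p)          # scalar error for this pattern
        j_p = jac_row_fn(w, p)           # one Jacobian row of length len(w)
        Q += np.outer(j_p, j_p)          # rank-one update; by symmetry only the
        g += e_p * j_p                   # upper triangle is needed in practice
    return Q, g

def lm_step(residual_fn, jac_row_fn, w, patterns, mu=1e-2):
    """One Levenberg-Marquardt update: w <- w - (Q + mu*I)^{-1} g."""
    Q, g = lm_accumulate(residual_fn, jac_row_fn, w, patterns)
    return w - np.linalg.solve(Q + mu * np.eye(w.size), g)

# Toy usage: fit y = w0 + w1*x (illustrative, not the paper's network setting).
rng = np.random.default_rng(4)
xs = np.linspace(0.0, 1.0, 50)
ys = 1.0 + 2.0 * xs + 0.01 * rng.standard_normal(50)
resid = lambda w, p: (w[0] + w[1] * p[0]) - p[1]
jrow = lambda w, p: np.array([1.0, p[0]])
w = np.zeros(2)
for _ in range(20):
    w = lm_step(resid, jrow, w, list(zip(xs, ys)))
print(w)    # approaches [1, 2]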

Posted Content
TL;DR: The result shows that the proposed convex program recovers the low-rank matrix even though a positive fraction of its entries are arbitrarily corrupted, with an error bound proportional to the noise level; this is the first result showing that classical Principal Component Analysis, which is optimal for small i.i.d. noise, can be made robust to gross sparse errors.
Abstract: In this paper, we study the problem of recovering a low-rank matrix (the principal components) from a high-dimensional data matrix despite both small entry-wise noise and gross sparse errors. Recently, it has been shown that a convex program, named Principal Component Pursuit (PCP), can recover the low-rank matrix when the data matrix is corrupted by gross sparse errors. We further prove that the solution to a related convex program (a relaxed PCP) gives an estimate of the low-rank matrix that is simultaneously stable to small entrywise noise and robust to gross sparse errors. More precisely, our result shows that the proposed convex program recovers the low-rank matrix even though a positive fraction of its entries are arbitrarily corrupted, with an error bound proportional to the noise level. We present simulation results to support our result and demonstrate that the new convex program accurately recovers the principal components (the low-rank matrix) under quite broad conditions. To our knowledge, this is the first result that shows the classical Principal Component Analysis (PCA), optimal for small i.i.d. noise, can be made robust to gross sparse errors; or the first that shows the newly proposed PCP can be made stable to small entry-wise perturbations.

Journal ArticleDOI
TL;DR: This article considers bipartite graphs that evolve over time and considers matrix- and tensor-based methods for predicting future links and shows that Tensor- based techniques are particularly effective for temporal data with varying periodic patterns.
Abstract: The data in many disciplines such as social networks, web analysis, etc. is link-based, and the link structure can be exploited for many different data mining tasks. In this paper, we consider the problem of temporal link prediction: Given link data for times 1 through T, can we predict the links at time T+1? If our data has underlying periodic structure, can we predict out even further in time, i.e., links at time T+2, T+3, etc.? In this paper, we consider bipartite graphs that evolve over time and consider matrix- and tensor-based methods for predicting future links. We present a weight-based method for collapsing multi-year data into a single matrix. We show how the well-known Katz method for link prediction can be extended to bipartite graphs and, moreover, approximated in a scalable way using a truncated singular value decomposition. Using a CANDECOMP/PARAFAC tensor decomposition of the data, we illustrate the usefulness of exploiting the natural three-dimensional structure of temporal link data. Through several numerical experiments, we demonstrate that both matrix- and tensor-based techniques are effective for temporal link prediction despite the inherent difficulty of the problem. Additionally, we show that tensor-based techniques are particularly effective for temporal data with varying periodic patterns.
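One way to make the truncated-SVD Katz extension concrete: for a bipartite graph with biadjacency matrix Z, summing the odd-length path counts beta*Z + beta^3*Z Z^T Z + ... gives beta*Z(I - beta^2 Z^T Z)^{-1}, which a rank-r SVD reduces to a cheap closed form. The sketch below, including the exponential down-weighting of older years when collapsing the multi-year data, is an illustrative reading of the approach rather than the paper's exact implementation; the sizes, beta, theta, and rank are made up.

import numpy as np

def truncated_katz_scores(Z, beta=0.05, rank=10):
    """Katz link-prediction scores for a bipartite graph with biadjacency Z,
    via a truncated SVD: with Z ~ U diag(s) V^T, the odd-path sum
    beta*Z (I - beta^2 Z^T Z)^{-1} becomes U diag(beta*s / (1 - beta^2 s^2)) V^T."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank, :]
    weights = beta * s / (1.0 - (beta * s) ** 2)
    return (U * weights) @ Vt

# Collapse T yearly bipartite adjacency matrices into one weighted matrix,
# down-weighting older years, then score candidate links.
rng = np.random.default_rng(5)
yearly = [(rng.random((60, 40)) < 0.02).astype(float) for _ in range(5)]
theta = 0.3
Z = sum((1 - theta) ** (len(yearly) - 1 - t) * A for t, A in enumerate(yearly))
scores = truncated_katz_scores(Z, beta=0.05, rank=10)
print(scores.shape)    # one score per (row node, column node) pair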

Journal ArticleDOI
TL;DR: The exponent of a given square matrix is characterized, upper and lower bounds on achievable exponents are derived, and it is shown that there are no matrices of size less than 15 with exponents exceeding 1/2.
Abstract: Polar codes were recently introduced by Arikan. They achieve the symmetric capacity of arbitrary binary-input discrete memoryless channels under a low complexity successive cancellation decoding scheme. The original polar code construction is closely related to the recursive construction of Reed-Muller codes and is based on the 2 × 2 matrix [1 0; 1 1]. It was shown by Arikan and Telatar that this construction achieves an error exponent of 1/2, i.e., that for sufficiently large blocklengths the error probability decays exponentially in the square root of the blocklength. It was already mentioned by Arikan that in principle larger matrices can be used to construct polar codes. In this paper, it is first shown that any l × l matrix none of whose column permutations is upper triangular polarizes binary-input memoryless channels. The exponent of a given square matrix is characterized, and upper and lower bounds on achievable exponents are given. Using these bounds it is shown that there are no matrices of size smaller than 15 × 15 with exponents exceeding 1/2. Further, a general construction based on BCH codes, which for large l achieves exponents arbitrarily close to 1, is given. At size 16 × 16, this construction yields an exponent greater than 1/2.
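The column-permutation condition quoted above can be tested with elementary reasoning: a column whose lowest nonzero entry sits in row m can only occupy positions m, m+1, ... of an upper triangular arrangement, and because these candidate position sets are nested, sorting the columns by lowest nonzero row and checking them greedily decides feasibility. The small sketch below encodes that check; it is a plausible test of the stated condition, not code from the paper.

import numpy as np

def has_upper_triangular_column_permutation(G):
    """True if some permutation of the columns of the 0/1 matrix G is upper
    triangular (all entries strictly below the diagonal equal to zero)."""
    G = np.asarray(G) % 2
    # Lowest nonzero row of each column (-1 for an all-zero column).
    low = sorted(max(np.flatnonzero(G[:, j]), default=-1)
                 for j in range(G.shape[1]))
    # Greedy assignment: the j-th smallest value must fit at position j.
    return all(m <= j for j, m in enumerate(low))

def polarizes(G):
    """Condition from the abstract: G polarizes if none of its column
    permutations is upper triangular (sketch of the combinatorial check only)."""
    return not has_upper_triangular_column_permutation(G)

print(polarizes(np.array([[1, 0], [1, 1]])))    # Arikan's 2x2 kernel: True
print(polarizes(np.eye(3, dtype=int)))          # identity matrix: False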

Journal ArticleDOI
TL;DR: A general class of measures based on matrix functions is introduced, and it is shown that a particular case involving a matrix resolvent arises naturally from graph-theoretic arguments.
Abstract: The emerging field of network science deals with the tasks of modeling, comparing, and summarizing large data sets that describe complex interactions. Because pairwise affinity data can be stored in a two-dimensional array, graph theory and applied linear algebra provide extremely useful tools. Here, we focus on the general concepts of centrality, communicability, and betweenness, each of which quantifies important features in a network. Some recent work in the mathematical physics literature has shown that the exponential of a network's adjacency matrix can be used as the basis for defining and computing specific versions of these measures. We introduce here a general class of measures based on matrix functions, and show that a particular case involving a matrix resolvent arises naturally from graph-theoretic arguments. We also point out connections between these measures and the quantities typically computed when spectral methods are used for data mining tasks such as clustering and ordering. We finish with computational examples showing the new matrix resolvent version applied to real networks.
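For readers who want to compute these quantities, the sketch below evaluates the standard exponential communicability matrix exp(A) and a resolvent-based counterpart (I - alpha*A)^{-1} for a small example graph; the graph and the choice of alpha (which must satisfy alpha < 1/lambda_max for the underlying series to converge) are illustrative. The diagonal and row sums give subgraph centrality and total communicability, respectively.

import numpy as np
from scipy.linalg import expm

def communicability_measures(A, alpha=None):
    """Exponential and resolvent-based communicability matrices for a graph
    with adjacency matrix A (sketch)."""
    lam_max = max(abs(np.linalg.eigvals(A)))
    if alpha is None:
        alpha = 0.5 / lam_max                      # illustrative safe choice
    G_exp = expm(A)                                # exponential weighting of walks
    G_res = np.linalg.inv(np.eye(len(A)) - alpha * A)   # resolvent weighting
    return G_exp, G_res

# Small example: read centralities off the diagonal and the row sums.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
G_exp, G_res = communicability_measures(A)
print("exponential subgraph centrality:", np.diag(G_exp))
print("resolvent total communicability:", G_res.sum(axis=1))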

Journal ArticleDOI
TL;DR: This paper generalizes the hierarchically semiseparable (HSS) matrix representations and proposes some fast algorithms for HSS matrices that are useful in developing fast‐structured numerical methods for large discretized PDEs, integral equations, eigenvalue problems, etc.
Abstract: Semiseparable matrices and many other rank-structured matrices have been widely used in developing new fast matrix algorithms. In this paper, we generalize the hierarchically semiseparable (HSS) matrix representations and propose some fast algorithms for HSS matrices. We represent HSS matrices in terms of general binary HSS trees and use simplified postordering notation for HSS forms. Fast HSS algorithms including new HSS structure generation and HSS form Cholesky factorization are developed. Moreover, we provide a new linear complexity explicit ULV factorization algorithm for symmetric positive definite HSS matrices with a low-rank property. The corresponding factors can be used to solve the HSS systems also in linear complexity. Numerical examples demonstrate the efficiency of the algorithms. All these algorithms have nice data locality. They are useful in developing fast-structured numerical methods for large discretized PDEs (such as elliptic equations), integral equations, eigenvalue problems, etc. Some applications are shown. Copyright © 2009 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this article, the notion of the core inverse is introduced as an alternative to the group inverse, and several of its properties are derived with a perspective towards possible applications, such as matrix partial ordering.
Abstract: This article introduces the notion of the Core inverse as an alternative to the group inverse. Several of its properties are derived with a perspective towards possible applications. Furthermore, a matrix partial ordering based on the Core inverse is introduced and extensively investigated.

Journal ArticleDOI
TL;DR: Simple criteria are provided that guarantee that a deterministic sensing matrix satisfying these criteria acts as a near isometry on an overwhelming majority of k-sparse signals; in particular, most such signals have a unique representation in the measurement domain.
Abstract: Compressed Sensing aims to capture attributes of k-sparse signals using very few measurements. In the standard compressed sensing paradigm, the N × C measurement matrix Φ is required to act as a near isometry on the set of all k-sparse signals (restricted isometry property or RIP). Although it is known that certain probabilistic processes generate N × C matrices that satisfy RIP with high probability, there is no practical algorithm for verifying whether a given sensing matrix Φ has this property, crucial for the feasibility of the standard recovery algorithms. In contrast, this paper provides simple criteria that guarantee that a deterministic sensing matrix satisfying these criteria acts as a near isometry on an overwhelming majority of k-sparse signals; in particular, most such signals have a unique representation in the measurement domain. Probability still plays a critical role, but it enters the signal model rather than the construction of the sensing matrix. An essential element in our construction is that we require the columns of the sensing matrix to form a group under pointwise multiplication. The construction allows recovery methods for which the expected performance is sub-linear in C, and only quadratic in N, as compared to the super-linear complexity in C of the Basis Pursuit or Matching Pursuit algorithms; the focus on expected performance is more typical of mainstream signal processing than the worst case analysis that prevails in standard compressed sensing. Our framework encompasses many families of deterministic sensing matrices, including those formed from discrete chirps, Delsarte-Goethals codes, and extended BCH codes.
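To give a feel for one family mentioned at the end of the abstract, the sketch below builds an N x N^2 discrete chirp matrix and measures its worst-case column coherence; the normalization and indexing here are illustrative assumptions and the code is not the paper's construction, but for N an odd prime the columns with distinct chirp rates have inner products of magnitude 1/sqrt(N).

import numpy as np

def chirp_sensing_matrix(N):
    """N x N^2 matrix whose column (r, m) is the discrete chirp
    phi[t] = exp(2*pi*i*(r*t^2 + m*t)/N) / sqrt(N) for t = 0..N-1 (sketch)."""
    t = np.arange(N)
    cols = [np.exp(2j * np.pi * (r * t ** 2 + m * t) / N) / np.sqrt(N)
            for r in range(N) for m in range(N)]
    return np.array(cols).T

Phi = chirp_sensing_matrix(31)            # N = 31 measurements, C = 961 columns
G = np.abs(Phi.conj().T @ Phi)            # magnitudes of pairwise inner products
np.fill_diagonal(G, 0.0)
print("max coherence:", G.max(), "  1/sqrt(N):", 1 / np.sqrt(31))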

Proceedings ArticleDOI
23 Oct 2010
TL;DR: Generic equivalences are shown between computing matrix products over a large class of algebraic structures used in optimization, verifying a matrix product over the same structure, and the corresponding triangle detection problems over the structure.
Abstract: We say an algorithm on n by n matrices with entries in [-M, M] (or n-node graphs with edge weights from [-M, M]) is truly subcubic if it runs in O(n^{3-δ} poly(log M)) time for some δ > 0. We define a notion of subcubic reducibility, and show that many important problems on graphs and matrices solvable in O(n^3) time are equivalent under subcubic reductions. Namely, the following weighted problems either all have truly subcubic algorithms, or none of them do: - The all-pairs shortest paths problem (APSP). - Detecting if a weighted graph has a triangle of negative total edge weight. - Listing up to n^{2.99} negative triangles in an edge-weighted graph. - Finding a minimum weight cycle in a graph of non-negative edge weights. - The replacement paths problem in an edge-weighted digraph. - Finding the second shortest simple path between two nodes in an edge-weighted digraph. - Checking whether a given matrix defines a metric. - Verifying the correctness of a matrix product over the (min, +)-semiring. Therefore, if APSP cannot be solved in n^{3-ε} time for any ε > 0, then many other problems also need essentially cubic time. In fact we show generic equivalences between matrix products over a large class of algebraic structures used in optimization, verifying a matrix product over the same structure, and corresponding triangle detection problems over the structure. These equivalences simplify prior work on subcubic algorithms for all-pairs path problems, since it now suffices to give appropriate subcubic triangle detection algorithms. Other consequences of our work are new combinatorial approaches to Boolean matrix multiplication over the (OR, AND)-semiring (abbreviated as BMM). We show that practical advances in triangle detection would imply practical BMM algorithms, among other results. Building on our techniques, we give two new BMM algorithms: a derandomization of the recent combinatorial BMM algorithm of Bansal and Williams (FOCS'09), and an improved quantum algorithm for BMM.
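As a reference point for the triangle-detection side of these equivalences, the sketch below detects a negative-weight triangle using a cubic-time (min,+) product, which is also the brute-force way to verify a claimed (min,+) product; the random weights and sizes are illustrative.

import numpy as np

def min_plus_product(A, B):
    """(min,+) matrix product: C[i, j] = min_k (A[i, k] + B[k, j])."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        C[i, :] = np.min(A[i, :, None] + B, axis=0)
    return C

def has_negative_triangle(W):
    """Edge-weighted negative triangle detection: there is a triangle of
    negative total weight iff some entry of (W (min,+) W) + W^T is negative."""
    C = min_plus_product(W, W)
    return bool(np.any(C + W.T < 0))

rng = np.random.default_rng(6)
n = 6
W = rng.integers(-3, 10, size=(n, n)).astype(float)
np.fill_diagonal(W, np.inf)               # forbid self-loops in the triangles
print(has_negative_triangle(W))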

Journal ArticleDOI
TL;DR: In this paper, a review of emergent noncommutative gravity within Yang-Mills matrix models is presented, where spacetime is described as a noncommutative brane solution of the matrix model, i.e. as a submanifold of ℝ^D.
Abstract: An introductory review to emergent noncommutative gravity within Yang–Mills matrix models is presented. Spacetime is described as a noncommutative brane solution of the matrix model, i.e. as a submanifold of ℝ^D. Fields and matter on the brane arise as fluctuations of the bosonic resp. fermionic matrices around such a background, and couple to an effective metric interpreted in terms of gravity. Suitable tools are provided for the description of the effective geometry in the semi-classical limit. The relation to noncommutative gauge theory and the role of UV/IR mixing are explained. Several types of geometries are identified, in particular 'harmonic' and 'Einstein' types of solutions. The physics of the harmonic branch is discussed in some detail, emphasizing the non-standard role of vacuum energy. This may provide a new approach to some of the big puzzles in this context. The IKKT model with D = 10 and close relatives are singled out as promising candidates for a quantum theory of fundamental interactions including gravity.

Journal ArticleDOI
TL;DR: Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row–column associations within high‐dimensional data matrices.
Abstract: Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row-column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard structured matrix approximation to data matrices. The desired checkerboard structure is achieved by forcing both the left- and right-singular vectors to be sparse, that is, having many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed to the least squares regression to produce sparse singular vectors. An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets.
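A minimal version of the alternating update is easy to write down. The sketch below computes one sparse rank-one layer by alternating soft-thresholding of X v and X^T u, with fixed penalty levels standing in for the adaptive selection used in the paper; the checkerboard test matrix is synthetic.

import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_rank1_svd(X, lam_u=0.5, lam_v=0.5, n_iter=100):
    """One sparse rank-one SVD layer by alternating soft-thresholded updates
    (sketch; penalties are fixed here rather than chosen adaptively)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    u, v = U[:, 0], Vt[0, :]                  # start from the leading pair
    for _ in range(n_iter):
        u = soft(X @ v, lam_u)
        if np.any(u):
            u = u / np.linalg.norm(u)
        v = soft(X.T @ u, lam_v)
        if np.any(v):
            v = v / np.linalg.norm(v)
    d = u @ X @ v
    return d, u, v                            # X is approximated by d * u v^T

# Synthetic checkerboard: a block of rows and columns carries the signal.
rng = np.random.default_rng(7)
X = 0.1 * rng.standard_normal((50, 40))
X[:10, :8] += 2.0
d, u, v = sparse_rank1_svd(X)
print("nonzero rows:", np.flatnonzero(u), "nonzero columns:", np.flatnonzero(v))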

Journal ArticleDOI
TL;DR: Numerical results show that the modulus-based relaxation methods are superior to the projected relaxation methods as well as the modified modulus method in computing efficiency.
Abstract: For the large sparse linear complementarity problems, by reformulating them as implicit fixed-point equations based on splittings of the system matrices, we establish a class of modulus-based matrix splitting iteration methods and prove their convergence when the system matrices are positive-definite matrices and H+-matrices. These results naturally present convergence conditions for the symmetric positive-definite matrices and the M-matrices. Numerical results show that the modulus-based relaxation methods are superior to the projected relaxation methods as well as the modified modulus method in computing efficiency. Copyright © 2009 John Wiley & Sons, Ltd.
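The modulus reformulation underlying these methods is short enough to sketch: with z = (|x| + x)/gamma and w = Omega(|x| - x)/gamma, the LCP becomes the fixed-point equation (Omega + A) x = (Omega - A)|x| - gamma*q. The NumPy sketch below iterates this equation with the simplest choices Omega = diag(A), gamma = 1, and a full solve on the left-hand side, rather than the paper's general splittings A = M - N and relaxation variants; the test LCP is an illustrative M-matrix example.

import numpy as np

def modulus_lcp(A, q, gamma=1.0, n_iter=500, tol=1e-10):
    """Sketch of a modulus-based iteration for the linear complementarity
    problem: find z >= 0 with w = A z + q >= 0 and z^T w = 0.
    Iterates (Omega + A) x_{k+1} = (Omega - A) |x_k| - gamma*q with
    Omega = diag(A), then recovers z = (|x| + x) / gamma."""
    Omega = np.diag(np.diag(A))
    lhs = Omega + A
    x = np.zeros(len(q))
    for _ in range(n_iter):
        x_new = np.linalg.solve(lhs, (Omega - A) @ np.abs(x) - gamma * q)
        if np.linalg.norm(x_new - x) <= tol:
            x = x_new
            break
        x = x_new
    return (np.abs(x) + x) / gamma

# Illustrative test: a tridiagonal M-matrix (hence an H+-matrix).
n = 6
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
q = np.array([-1.0, 2.0, -3.0, 1.0, -2.0, 0.5])
z = modulus_lcp(A, q)
w = A @ z + q
print("z >= 0:", z.min() >= -1e-8, " w >= 0:", w.min() >= -1e-8,
      " z.w ~ 0:", abs(z @ w) < 1e-8)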

Book
15 Oct 2010
TL;DR: This book discusses block factorization preconditioners, multigrid and domain decomposition methods, the preconditioning of nonsymmetric, indefinite, and saddle-point matrices, and variable-step iterative methods for nonlinear problems.
Abstract: Motivation for Preconditioning.- A Finite Element Tutorial.- A Main Goal.- Block Factorization Preconditioners.- Two-by-Two Block Matrices and Their Factorization.- Classical Examples of Block-Factorizations.- Multigrid (MG).- Topics on Algebraic Multigrid (AMG).- Domain Decomposition (DD) Methods.- Preconditioning Nonsymmetric and Indefinite Matrices.- Preconditioning Saddle-Point Matrices.- Variable-Step Iterative Methods.- Preconditioning Nonlinear Problems.- Quadratic Constrained Minimization Problems.

Journal ArticleDOI
TL;DR: An approximation algorithm for finding optimal decompositions is presented; it is based on the insight provided by a theorem in the paper and significantly outperforms a greedy approximation algorithm for a set-covering problem to which the matrix decomposition problem is easily shown to be reducible.

Journal ArticleDOI
TL;DR: An atomic decomposition for minimum rank approximation (ADMiRA) algorithm is proposed; its performance guarantee is given in terms of the rank-restricted isometry property (R-RIP) and bounds both the number of iterations and the error in the approximate solution for the general case of noisy measurements.
Abstract: In this paper, we address compressed sensing of a low-rank matrix posing the inverse problem as an approximation problem with a specified target rank of the solution. A simple search over the target rank then provides the minimum rank solution satisfying a prescribed data approximation bound. We propose an atomic decomposition providing an analogy between parsimonious representations of a sparse vector and a low-rank matrix and extending efficient greedy algorithms from the vector to the matrix case. In particular, we propose an efficient and guaranteed algorithm named atomic decomposition for minimum rank approximation (ADMiRA) that extends Needell and Tropp's compressive sampling matching pursuit (CoSaMP) algorithm from the sparse vector to the low-rank matrix case. The performance guarantee is given in terms of the rank-restricted isometry property (R-RIP) and bounds both the number of iterations and the error in the approximate solution for the general case of noisy measurements and approximately low-rank solution. With a sparse measurement operator as in the matrix completion problem, the computation in ADMiRA is linear in the number of measurements. Numerical experiments for the matrix completion problem show that, although the R-RIP is not satisfied in this case, ADMiRA is a competitive algorithm for matrix completion.