Journal ArticleDOI

A matrix decomposition and its applications

03 Oct 2015 - Linear & Multilinear Algebra (Taylor & Francis) - Vol. 63, Iss. 10, pp. 2033-2042
TL;DR: In this paper, the uniqueness and construction of the Z matrix in Theorem 2.1 were shown, and an affirmative answer was given to a question proposed in [J. Math. Anal. Appl. 407 (2013) 436-442].
Abstract: We show the uniqueness and construction (of the Z matrix in Theorem 2.1, to be exact) of a matrix decomposition and give an affirmative answer to a question proposed in [J. Math. Anal. Appl. 407 (2013) 436-442].
Citations
Journal ArticleDOI
TL;DR: Part 2 of this monograph builds on the introduction to tensor networks given in Part 1, discussing tensor network models for super-compressed higher-order representation of data/parameters and cost functions, together with an outline of their applications in machine learning and data analytics.
Abstract: Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.

224 citations
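The "super-compressed higher-order representation" above rests on low-rank tensor network formats such as the tensor train (TT). As a concrete illustration, here is a minimal NumPy sketch of the standard TT-SVD scheme (sequential truncated SVDs); this is not code from the monograph, and the function name tt_svd and the uniform rank cap max_rank are illustrative assumptions.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Split a d-way array into TT cores by sequential truncated SVDs.
    Each core has shape (previous rank, mode size, next rank)."""
    shape = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(shape[0], -1)                 # unfold the first mode
    for k in range(len(shape) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))                      # truncation = compression
        cores.append(u[:, :r].reshape(rank, shape[k], r))
        mat = (s[:r, None] * vt[:r]).reshape(r * shape[k + 1], -1)
        rank = r
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

# Usage: this affine-in-index tensor has exact TT rank 2, so rank 3 recovers it.
x = np.arange(2 * 3 * 4 * 5, dtype=float).reshape(2, 3, 4, 5)
y = None
for c in tt_svd(x, max_rank=3):
    y = c if y is None else np.tensordot(y, c, axes=([y.ndim - 1], [0]))
print(np.allclose(y.reshape(x.shape), x))              # True
```

Contracting the cores back together, as in the loop above, is a toy instance of the core-tensor contractions the abstract refers to.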

Journal ArticleDOI
TL;DR: In this paper, the authors study chaos in the classical limit of the matrix quantum mechanical system describing D0-brane dynamics and determine a precise value of the largest Lyapunov exponent.
Abstract: We study chaos in the classical limit of the matrix quantum mechanical system describing D0-brane dynamics. We determine a precise value of the largest Lyapunov exponent, and, with less precision, calculate the entire spectrum of Lyapunov exponents. We verify that these approach a smooth limit as N → ∞. We show that a classical analog of scrambling occurs with fast scrambling scaling, t ∗ ∼ log S. These results confirm the k-locality property of matrix mechanics discussed by Sekino and Susskind.

93 citations
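For readers unfamiliar with how a largest Lyapunov exponent is estimated numerically, the sketch below applies the standard two-trajectory renormalisation idea to the logistic map. This is only a toy stand-in: the paper integrates the classical D0-brane matrix model, which this example does not attempt to reproduce.

```python
import numpy as np

def largest_lyapunov(f, x0, n_steps=100_000, d0=1e-9):
    """Benettin-style estimate: evolve two nearby states, accumulate the
    log growth of their separation, and renormalise at every step."""
    x, y = x0, x0 + d0
    log_sum = 0.0
    for _ in range(n_steps):
        x, y = f(x), f(y)
        d = abs(y - x)
        log_sum += np.log(d / d0)
        y = x + d0 * (y - x) / d         # rescale the separation back to d0
    return log_sum / n_steps             # exponent per iteration of the map

logistic = lambda x: 4.0 * x * (1.0 - x)
print(largest_lyapunov(logistic, 0.3))   # ~0.693, the known value ln 2
```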

Journal ArticleDOI
TL;DR: Predictive Subspace Clustering (PSC), as discussed by the authors, partitions the data into clusters while simultaneously estimating cluster-wise principal component analysis (PCA) parameters, minimising an objective function that depends upon a new measure of influence for PCA models.
Abstract: In several application domains, high-dimensional observations are collected and then analysed in search of naturally occurring data clusters which might provide further insights about the nature of the problem. In this paper we describe a new approach for partitioning such high-dimensional data. Our assumption is that, within each cluster, the data can be approximated well by a linear subspace estimated by means of a principal component analysis (PCA). The proposed algorithm, Predictive Subspace Clustering (PSC), partitions the data into clusters while simultaneously estimating cluster-wise PCA parameters. The algorithm minimises an objective function that depends upon a new measure of influence for PCA models. A penalised version of the algorithm is also described for carrying out simultaneous subspace clustering and variable selection. The convergence of PSC is discussed in detail, and extensive simulation results and comparisons to competing methods are presented. The comparative performance of PSC has been assessed on six real gene expression data sets, for which PSC often provides state-of-the-art results.

76 citations
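The alternating structure described above (fit cluster-wise PCA models, then repartition) can be sketched as follows. Note the simplification: this variant reassigns points by plain subspace reconstruction error, in the style of k-subspaces, whereas PSC uses its own predictive-influence criterion; the names and defaults here are illustrative.

```python
import numpy as np

def cluster_wise_pca(X, k=3, q=2, n_iter=20, seed=0):
    """Alternate between fitting a q-dimensional PCA subspace per cluster
    and reassigning each point to the subspace that reconstructs it best."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=len(X))
    for _ in range(n_iter):
        errors = np.full((len(X), k), np.inf)
        for j in range(k):
            C = X[labels == j]
            if len(C) <= q:                      # skip degenerate clusters
                continue
            mu = C.mean(axis=0)
            _, _, vt = np.linalg.svd(C - mu, full_matrices=False)
            V = vt[:q].T                         # top-q principal directions
            D = X - mu
            R = D - D @ V @ V.T                  # residual off the subspace
            errors[:, j] = (R ** 2).sum(axis=1)
        labels = errors.argmin(axis=1)           # repartition
    return labels

labels = cluster_wise_pca(np.random.rand(200, 10))
```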

Posted Content
TL;DR: The main objective of this paper is to show how tensor networks can be used to solve a wide class of big data optimization problems by applying tensorization, performing all operations using relatively small matrices and tensors, and iteratively applying optimized, approximate tensor contractions.
Abstract: In this paper we review basic and emerging models and associated algorithms for large-scale tensor networks, especially Tensor Train (TT) decompositions, using novel mathematical and graphical representations. We discuss the concept of tensorization (i.e., creating very high-order tensors from lower-order original data) and the super-compression of data achieved via quantized tensor train (QTT) networks. The purpose of tensorization and quantization is to achieve, via low-rank tensor approximations, "super" compression and a meaningful, compact representation of structured data. The main objective of this paper is to show how tensor networks can be used to solve a wide class of big data optimization problems (that are far from tractable by classical numerical methods) by applying tensorization, performing all operations using relatively small matrices and tensors, and iteratively applying optimized, approximate tensor contractions. Keywords: tensor networks, tensor train (TT) decompositions, matrix product states (MPS), matrix product operators (MPO), basic tensor operations, tensorization, distributed representation of data, optimization problems for very large-scale problems: generalized eigenvalue decomposition (GEVD), PCA/SVD, canonical correlation analysis (CCA).

70 citations
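To make tensorization and quantization concrete: the sketch below reshapes a length-2^d signal into a d-way tensor with all mode sizes 2 and compresses it by sequential truncated SVDs, in the spirit of a QTT representation. The sampled exponential and the rank cap of 2 are illustrative assumptions (a sampled exponential happens to have exact low QTT rank).

```python
import numpy as np

d = 10
x = np.exp(-0.01 * np.arange(2 ** d))    # 1024 samples -> a 10-way 2x2x...x2 tensor
mat, rank, cores = x.reshape(2, -1), 1, []
for _ in range(d - 1):
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    r = min(2, len(s))                   # cap every TT rank at 2
    cores.append(u[:, :r].reshape(rank, 2, r))
    mat, rank = (s[:r, None] * vt[:r]).reshape(r * 2, -1), r
cores.append(mat.reshape(rank, 2, 1))

print(x.size, "samples stored in", sum(c.size for c in cores), "QTT parameters")
```

The point of the exercise is the parameter count: the cores hold far fewer numbers than the original vector, which is the "super" compression the abstract describes, here on a signal whose structure makes the compression lossless up to rounding.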

References
Book
Roger A. Horn
12 Jul 2010
TL;DR: As discussed by the authors, this book treats the field of values, stable matrices and inertia, singular value inequalities, matrix equations and Kronecker products, Hadamard products, and matrices and functions.
Abstract: 1. The field of values 2. Stable matrices and inertia 3. Singular value inequalities 4. Matrix equations and Kronecker products 5. Hadamard products 6. Matrices and functions.

7,013 citations

Book
06 Apr 2011
TL;DR: As discussed by the authors, this book develops the theory of majorization via doubly stochastic matrices and Schur-convex functions, with applications to combinatorial analysis, geometric inequalities, matrix theory, numerical analysis, and probability and statistics.
Abstract: Introduction.- Doubly Stochastic Matrices.- Schur-Convex Functions.- Equivalent Conditions for Majorization.- Preservation and Generation of Majorization.- Rearrangements and Majorization.- Combinatorial Analysis.- Geometric Inequalities.- Matrix Theory.- Numerical Analysis.- Stochastic Majorizations.- Probabilistic, Statistical, and Other Applications.- Additional Statistical Applications.- Orderings Extending Majorization.- Multivariate Majorization.- Convex Functions and Some Classical Inequalities.- Stochastic Ordering.- Total Positivity.- Matrix Factorizations, Compounds, Direct Products, and M-Matrices.- Extremal Representations of Matrix Functions.

6,641 citations

Book
15 Nov 1996

3,392 citations

Book
27 May 1999
TL;DR: In this book, the authors follow an elementary linear algebra review with chapters on partitioned matrices, rank, and eigenvalues; matrix polynomials and canonical forms; numerical ranges and matrix norms; special types of matrices; unitary matrices and contractions; positive semidefinite, Hermitian, and normal matrices; and majorization and matrix inequalities.
Abstract: Preface to the Second Edition.- Preface.- Frequently Used Notation and Terminology.- Frequently Used Terms.- 1 Elementary Linear Algebra Review.- 2 Partitioned Matrices, Rank, and Eigenvalues.- 3 Matrix Polynomials and Canonical Forms.- 4 Numerical Ranges, Matrix Norms, and Special Operations.- 5 Special Types of Matrices.- 6 Unitary Matrices and Contractions.- 7 Positive Semidefinite Matrices.- 8 Hermitian Matrices.- 9 Normal Matrices.- 10 Majorization and Matrix Inequalities.- References.- Notation.- Index.

806 citations

Journal ArticleDOI
TL;DR: In this paper, the authors obtained the singular value inequality $\sigma_j(A/A_{11}) \le \sec^2(\alpha)\,\sigma_j(A_{22})$, $j = 1, \ldots, q$, where $\sigma_j(\cdot)$ denotes the $j$-th largest singular value.
Abstract: Let $A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$, where $A_{22}$ is $q \times q$, be an $n \times n$ complex matrix such that the numerical range of $A$ is contained in $S_\alpha = \{z \in \mathbb{C} : \Re z > 0,\ |\Im z| \le (\Re z)\tan\alpha\}$ for some $\alpha \in [0, \pi/2)$. We obtain the following singular value inequality: $\sigma_j(A/A_{11}) \le \sec^2(\alpha)\,\sigma_j(A_{22})$, $j = 1, \ldots, q$, where $A/A_{11} := A_{22} - A_{21}A_{11}^{-1}A_{12}$ and $\sigma_j(\cdot)$ means the $j$-th largest singular value. This strengthens some recent results on determinantal inequalities. We also prove $\sigma_j(A) \le \sec(\alpha)\,\lambda_j(\Re A)$, $j = 1, \ldots, n$, where $\lambda_j(\cdot)$ denotes the $j$-th largest eigenvalue and $\Re A := (A + A^*)/2$, complementing a result of Fan and Hoffman. Mathematics subject classification (2010): 15A45.

71 citations
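The first inequality above is easy to sanity-check numerically. In the sketch below, the construction $A = P + \mathrm{i}S$ with Hermitian $P \succ 0$ and $|x^*Sx| \le \tan(\alpha)\,x^*Px$ forces the numerical range of $A$ into $S_\alpha$; the sizes n, q and the angle alpha are arbitrary illustrative choices, and this is a check of the statement, not the paper's proof.

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, alpha = 6, 3, np.pi / 6

# Hermitian positive definite P and Hermitian C with spectral norm <= 1.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
P = B @ B.conj().T + np.eye(n)
w, U = np.linalg.eigh(P)
P_half = U @ np.diag(np.sqrt(w)) @ U.conj().T          # P^(1/2)
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = (C + C.conj().T) / 2
C /= np.linalg.norm(C, 2)

# |x*Sx| <= tan(alpha) x*Px, so the numerical range of A lies in S_alpha.
S = np.tan(alpha) * P_half @ C @ P_half
A = P + 1j * S

A11, A12 = A[:-q, :-q], A[:-q, -q:]
A21, A22 = A[-q:, :-q], A[-q:, -q:]
schur = A22 - A21 @ np.linalg.solve(A11, A12)          # A/A11

lhs = np.linalg.svd(schur, compute_uv=False)           # sigma_j(A/A11), descending
rhs = np.linalg.svd(A22, compute_uv=False) / np.cos(alpha) ** 2
print(np.all(lhs <= rhs + 1e-12))                      # True: inequality holds
```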


"A matrix decomposition and its appl..." refers background in this paper

  • ...Lin for reference [4] which initiated this work; he is also indebted to M....


  • ...We point out that in [4] Drury and Lin presented a set of related inequalities:...
