
Showing papers on "Matrix analysis published in 2010"


Book
24 May 2010
TL;DR: The author presents a treatment of matrix analysis that combines linear equations, vector spaces, and matrix algebra with eigenvalues, eigenvectors, and the Perron-Frobenius theory of nonnegative matrices.
Abstract: Preface 1. Linear equations 2. Rectangular systems and echelon forms 3. Matrix algebra 4. Vector spaces 5. Norms, inner products, and orthogonality 6. Determinants 7. Eigenvalues and Eigenvectors 8. Perron-Frobenius theory of nonnegative matrices. Index.

4,979 citations
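The Perron-Frobenius theory of chapter 8 guarantees that an irreducible nonnegative matrix has a positive dominant eigenvalue with a positive eigenvector, and both are computable by power iteration. A minimal sketch (illustrative only, not taken from the book):

```python
def perron_power_iteration(A, iters=200):
    """Estimate the Perron root (dominant eigenvalue) and a positive
    eigenvector of a nonnegative irreducible matrix by power iteration."""
    n = len(A)
    v = [1.0 / n] * n  # positive starting vector, normalized to sum 1
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(w)  # since sum(v) == 1, sum(A v) estimates the eigenvalue
        v = [x / lam for x in w]
    return lam, v

# Nonnegative irreducible example: Perron root 3, eigenvector (1/2, 1/2).
A = [[2.0, 1.0], [1.0, 2.0]]
lam, v = perron_power_iteration(A)
```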


Book
03 May 2010
TL;DR: The book develops the spectral theory of large random matrices, covering Wigner matrices and the semicircular law, the Marčenko-Pastur law for sample covariance matrices, spectrum separation, the semicircular law for Hadamard products, convergence rates of ESDs, and CLTs for linear spectral statistics.
Abstract: Wigner Matrices and Semicircular Law.- Sample Covariance Matrices and the Marčenko-Pastur Law.- Product of Two Random Matrices.- Limits of Extreme Eigenvalues.- Spectrum Separation.- Semicircular Law for Hadamard Products.- Convergence Rates of ESD.- CLT for Linear Spectral Statistics.- Eigenvectors of Sample Covariance Matrices.- Circular Law.- Some Applications of RMT.

1,715 citations


Journal ArticleDOI
TL;DR: Under this framework, a Boolean network equation is converted into an equivalent algebraic form as a conventional discrete-time linear system, and a matrix expression of logic is proposed, where a logical variable is expressed as a vector and a logical function as a multiple linear mapping.
Abstract: A new matrix product, called the semi-tensor product of matrices, is reviewed. Using it, a matrix expression of logic is proposed, where a logical variable is expressed as a vector and a logical function as a multiple linear mapping. Under this framework, a Boolean network equation is converted into an equivalent algebraic form as a conventional discrete-time linear system. Analyzing the transition matrix of the linear system, formulas are obtained to show a) the number of fixed points; b) the numbers of cycles of different lengths; c) the transient period, for all points to enter the set of attractors; and d) the basin of each attractor. The corresponding algorithms are developed and applied to some examples.

589 citations
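The algebraic form can be sketched concretely: encode each of the 2^n states as a basis vector, build the transition matrix column by column, and read off the number of fixed points as the trace. A toy 2-node network with hypothetical update rules (not one of the paper's examples):

```python
from itertools import product

# Hypothetical 2-node Boolean network: x1' = x2, x2' = x1 AND x2.
def step(state):
    x1, x2 = state
    return (x2, x1 and x2)

states = list(product([0, 1], repeat=2))
idx = {s: i for i, s in enumerate(states)}

# Transition matrix L: column j carries state j to state step(states[j]).
L = [[0] * 4 for _ in range(4)]
for j, s in enumerate(states):
    L[idx[step(s)]][j] = 1

# Fixed points of the network = diagonal ones of L = trace(L).
fixed_points = sum(L[i][i] for i in range(4))
```

Here (0,0) and (1,1) are fixed, so the trace is 2.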


Journal ArticleDOI
TL;DR: Numerical results show that the modulus-based relaxation methods are superior to the projected relaxation methods as well as the modified modulus method in computing efficiency.
Abstract: For the large sparse linear complementarity problems, by reformulating them as implicit fixed-point equations based on splittings of the system matrices, we establish a class of modulus-based matrix splitting iteration methods and prove their convergence when the system matrices are positive-definite matrices and H+-matrices. These results naturally present convergence conditions for the symmetric positive-definite matrices and the M-matrices. Numerical results show that the modulus-based relaxation methods are superior to the projected relaxation methods as well as the modified modulus method in computing efficiency. Copyright © 2009 John Wiley & Sons, Ltd.

268 citations
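For contrast with the modulus-based methods, the projected relaxation baseline that the paper compares against is easy to sketch. A minimal projected Gauss-Seidel iteration for the LCP (an illustrative sketch, not the paper's modulus-based algorithm):

```python
def projected_gauss_seidel_lcp(M, q, iters=100):
    """Projected Gauss-Seidel for the LCP: find z >= 0 with M z + q >= 0
    and z^T (M z + q) = 0.  Assumes M has positive diagonal entries."""
    n = len(q)
    z = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # residual excluding the diagonal term, then project onto z_i >= 0
            r = q[i] + sum(M[i][j] * z[j] for j in range(n) if j != i)
            z[i] = max(0.0, -r / M[i][i])
    return z

# Diagonal example: solution z = (1, 0), with w = M z + q = (0, 1).
z = projected_gauss_seidel_lcp([[2.0, 0.0], [0.0, 2.0]], [-2.0, 1.0])
```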


Posted Content
TL;DR: It is shown that properly constrained nuclear-norm minimization stably recovers a low-rank matrix from a constant number of noisy measurements per degree of freedom; this seems to be the first result of this nature.
Abstract: This paper presents several novel theoretical results regarding the recovery of a low-rank matrix from just a few measurements consisting of linear combinations of the matrix entries. We show that properly constrained nuclear-norm minimization stably recovers a low-rank matrix from a constant number of noisy measurements per degree of freedom; this seems to be the first result of this nature. Further, the recovery error from noisy data is within a constant of three targets: 1) the minimax risk, 2) an oracle error that would be available if the column space of the matrix were known, and 3) a more adaptive oracle error which would be available with the knowledge of the column space corresponding to the part of the matrix that stands above the noise. Lastly, the error bounds regarding low-rank matrices are extended to provide an error bound when the matrix has full rank with decaying singular values. The analysis in this paper is based on the restricted isometry property (RIP) introduced in [6] for vectors, and in [22] for matrices.

198 citations
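The nuclear norm being minimized is the sum of singular values; for a symmetric 2x2 matrix the singular values are the absolute eigenvalues, so it has a closed form. A toy computation (not part of the paper's recovery machinery):

```python
import math

def nuclear_norm_sym2(A):
    """Nuclear norm (sum of singular values) of a symmetric 2x2 matrix;
    for symmetric matrices the singular values are |eigenvalues|."""
    a, b, d = A[0][0], A[0][1], A[1][1]
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2
    return abs(l1) + abs(l2)

# Eigenvalues 4 and 2, so the nuclear norm is 6.
nn = nuclear_norm_sym2([[3.0, 1.0], [1.0, 3.0]])
```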


Journal ArticleDOI
TL;DR: The Bernstein polynomials (B-polynomials) operational matrices of integration P, differentiation D and product Ĉ are derived and can be used to solve problems such as calculus of variations, differential equations, optimal control and integral equations.
Abstract: The Bernstein polynomials (B-polynomials) operational matrices of integration P, differentiation D and product Ĉ are derived. A general procedure for forming these matrices is given. These matrices can be used to solve problems such as calculus of variations, differential equations, optimal control and integral equations. Illustrative examples are included to demonstrate the validity and applicability of the operational matrices.

157 citations
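The operational matrices act on the Bernstein basis B_{k,n}(x) = C(n,k) x^k (1-x)^(n-k). A minimal sketch of the basis and its partition-of-unity property (the matrices P, D, Ĉ themselves are not reproduced here):

```python
from math import comb

def bernstein(k, n, x):
    """Bernstein basis polynomial B_{k,n}(x) = C(n,k) x^k (1-x)^(n-k)."""
    return comb(n, k) * x**k * (1 - x)**(n - k)

# Partition of unity: the degree-n basis sums to 1 at every point of [0, 1],
# by the binomial theorem applied to (x + (1-x))^n.
total = sum(bernstein(k, 4, 0.3) for k in range(5))
```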


Journal ArticleDOI
Fumio Hiai1
TL;DR: These lecture notes are largely based on my course at Graduate School of Information Sciences of Tohoku University during April-July of 2009 as mentioned in this paper, and the main topics covered in these notes are matrix/operator monotone and convex functions (the Löwner and Kraus theory), operator means (the so-called Kubo-Ando theory), majorization for eigen/singular values of matrices and its applications to matrix norm inequalities, and means of matrices and related norm inequalities.
Abstract: These lecture notes are largely based on my course at Graduate School of Information Sciences of Tohoku University during April–July of 2009. The aim of my lectures was to explain several important topics on matrix analysis from the point of view of functional analysis. These notes are also suitable for an introduction to functional analysis though the arguments are mostly restricted to the finite-dimensional situation. The main topics covered in these notes are matrix/operator monotone and convex functions (the so-called Löwner and Kraus theory), operator means (the so-called Kubo–Ando theory), majorization for eigen/singular values of matrices and its applications to matrix norm inequalities, and means of matrices and related norm inequalities. These have been chosen from my knowledge and interest while there are many other important topics on the subject. I have tried to make the expositions as transparent and as self-contained as possible. To do so, some technical material somewhat apart from matrix analysis is compiled in the Appendices. The proof of the theorem of Kraus is also deferred to the Appendices since it seems too much to include in the main body. A number of exercises are included in these lecture notes, which supplement my expositions with omitted proofs, examples, and further remarks. Concerning references, I should mention that the list and citations are not complete. At the moment I am collaborating with D. Petz in writing a more comprehensive textbook on matrix analysis, hoping that some parts of these notes will be incorporated into the forthcoming book. I express my gratitude to Professors T. Ando and H. Kosaki. Ando sent me his English translation of the German paper by Kraus, without which I could not understand the characterization of matrix convex functions due to Kraus. Kosaki gave me comments on Chapter 5, which were helpful in updating the content of the chapter. I am thankful to Professor N. Obata, Editor-in-Chief of Interdisciplinary Information Sciences, who suggested that I submit these lecture notes as GSIS selected lectures, a newly launched section of the journal. Finally, this work was partially supported by Grant-in-Aid for Scientific Research (C)21540208.

152 citations


Journal ArticleDOI
TL;DR: It is shown that the global Galerkin matrix associated with complete polynomials cannot be diagonalized in the stochastically linear case.
Abstract: We investigate the structural, spectral, and sparsity properties of Stochastic Galerkin matrices as they arise in the discretization of linear differential equations with random coefficient functions. These matrices are characterized as the Galerkin representation of polynomial multiplication operators. In particular, it is shown that the global Galerkin matrix associated with complete polynomials cannot be diagonalized in the stochastically linear case.

133 citations


Journal ArticleDOI
TL;DR: It is observed that basic ideas and tools of rigidity theory can be adapted to determine uniqueness of low-rank matrix completion, where inner products play the role that distances play in rigidity theory.
Abstract: The problem of completing a low-rank matrix from a subset of its entries is often encountered in the analysis of incomplete data sets exhibiting an underlying factor model with applications in collaborative filtering, computer vision, and control. Most recent work has been focused on constructing efficient algorithms for exact or approximate recovery of the missing matrix entries and proving lower bounds for the number of known entries that guarantee a successful recovery with high probability. A related problem from both the mathematical and algorithmic points of view is the distance geometry problem of realizing points in a Euclidean space from a given subset of their pairwise distances. Rigidity theory answers basic questions regarding the uniqueness of the realization satisfying a given partial set of distances. We observe that basic ideas and tools of rigidity theory can be adapted to determine uniqueness of low-rank matrix completion, where inner products play the role that distances play in rigidity theory. This observation leads to efficient randomized algorithms for testing necessary and sufficient conditions for local completion and for testing sufficient conditions for global completion. Crucial to our analysis is a new matrix, which we call the completion matrix, that serves as the analogue of the rigidity matrix.

128 citations
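The role of vanishing 2x2 minors in forcing missing entries of a low-rank matrix is easiest to see in the rank-1 case, where any unknown entry is determined by three known ones. A toy sketch (a hypothetical helper, not the paper's completion-matrix algorithm):

```python
def complete_rank1_entry(M, i, j, k, l):
    """For a rank-1 matrix every 2x2 minor vanishes, so an unknown entry
    M[i][j] is forced by three known ones (requires M[k][l] != 0)."""
    return M[i][l] * M[k][j] / M[k][l]

# Rank-1 matrix outer([1, 2, 3], [4, 5]) with M[0][0] treated as missing.
M = [[None, 5.0], [8.0, 10.0], [12.0, 15.0]]
recovered = complete_rank1_entry(M, 0, 0, 1, 1)  # true value is 4
```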


Journal ArticleDOI
TL;DR: GAP is a Java-designed exploratory data analysis (EDA) software for matrix visualization (MV) and clustering of high-dimensional data sets and provides direct visual perception for exploring structures of a given data matrix and its corresponding proximity matrices, for variables and subjects.

112 citations


Proceedings Article
21 Jun 2010
TL;DR: In this article, the authors consider an instance of high-dimensional statistical inference in which the goal is to use N noisy observations to estimate a matrix Θ+ ∈ ℝk×p that is assumed to be either exactly low rank, or near low-rank.
Abstract: We study an instance of high-dimensional statistical inference in which the goal is to use N noisy observations to estimate a matrix Θ+ ∈ ℝk×p that is assumed to be either exactly low rank, or "near" low-rank, meaning that it can be well-approximated by a matrix with low rank. We consider an M-estimator based on regularization by the trace or nuclear norm over matrices, and analyze its performance under high-dimensional scaling. We provide non-asymptotic bounds on the Frobenius norm error that hold for a general class of noisy observation models, and apply to both exactly low-rank and approximately low-rank matrices. We then illustrate their consequences for a number of specific learning models, including low-rank multivariate or multi-task regression, system identification in vector autoregressive processes, and recovery of low-rank matrices from random projections. Simulations show excellent agreement with the high-dimensional scaling of the error predicted by our theory.

Journal ArticleDOI
TL;DR: The behavior of the inverse participation ratio and the localization transition in infinitely large random matrices through the cavity method is studied and a critical line separating localized from extended states in the case of Lévy matrices is derived.
Abstract: We study the behavior of the inverse participation ratio and the localization transition in infinitely large random matrices through the cavity method. Results are shown for two ensembles of random matrices: Laplacian matrices on sparse random graphs and fully connected Lévy matrices. We derive a critical line separating localized from extended states in the case of Lévy matrices. Comparison between theoretical results and diagonalization of finite random matrices is shown.

Journal ArticleDOI
TL;DR: The aim is to develop fast computable formulae that produce as-sharp-as-possible bounds on real eigenvalues of interval matrices, and the various approaches are illustrated and compared by a series of examples.
Abstract: We study bounds on real eigenvalues of interval matrices, and our aim is to develop fast computable formulae that produce as-sharp-as-possible bounds. We consider two cases: general and symmetric interval matrices. We focus on the latter case, since on the one hand such interval matrices have many applications in mechanics and engineering, and on the other hand many results from classical matrix analysis could be applied to them. We also provide bounds for the singular values of (generally nonsquare) interval matrices. Finally, we illustrate and compare the various approaches by a series of examples.
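A crude (deliberately not as-sharp-as-possible) enclosure for the symmetric case follows from Weyl's inequality: every eigenvalue of a matrix in [Ac - Δ, Ac + Δ] lies within ρ(Δ) of the corresponding eigenvalue of the midpoint Ac. A 2x2 sketch under the assumption that the radius matrix Δ is symmetric and entrywise nonnegative:

```python
import math

def sym2_eigs(A):
    """Eigenvalues (ascending) of a symmetric 2x2 matrix, in closed form."""
    a, b, d = A[0][0], A[0][1], A[1][1]
    tr, det = a + d, a * d - b * b
    s = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return (tr - s) / 2, (tr + s) / 2

def eig_bounds_sym_interval(Ac, Ar):
    """Weyl-type enclosure for the symmetric interval matrix [Ac-Ar, Ac+Ar]:
    every eigenvalue lies within rho(Ar) of the midpoint eigenvalue."""
    lo, hi = sym2_eigs(Ac)
    rho = sym2_eigs(Ar)[1]  # spectral radius of the nonnegative radius matrix
    return (lo - rho, lo + rho), (hi - rho, hi + rho)

lo_b, hi_b = eig_bounds_sym_interval([[2.0, 0.0], [0.0, 4.0]],
                                     [[1.0, 0.0], [0.0, 1.0]])
```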

Book
01 Jan 2010
TL;DR: This linear algebra textbook covers real coordinate spaces, matrix multiplication, vector spaces, linear transformations, determinants, eigenvalues and eigenvectors, quadratic and Hermitian forms, inner product spaces, spectral decompositions, and numerical methods.
Abstract: 1. REAL COORDINATE SPACES. The Vector Spaces Rn. Linear Independence. Subspaces of Rn. Spanning Sets. Geometric Interpretations of R^2 and R^3. Bases and Dimension. 2. ELEMENTARY OPERATIONS ON VECTORS. Elementary Operations and Their Inverses. Elementary Operations and Linear Independence. Standard Bases for Subspaces. 3. MATRIX MULTIPLICATION. Matrices of Transition. Properties of Matrix Multiplication. Invertible Matrices. Column Operations and Column-Echelon Forms. Row Operations and Row-Echelon Forms. Row and Column Equivalence. Rank and Equivalence. LU Decompositions. 4. VECTOR SPACES, MATRICES, AND LINEAR EQUATIONS. Vector Spaces. Subspaces and Related Concepts. Isomorphisms of Vector Spaces. Standard Bases for Subspaces. Matrices over an Arbitrary Field. Systems of Linear Equations. More on Systems of Linear Equations. 5. LINEAR TRANSFORMATIONS. Linear Transformations. Linear Transformations and Matrices. Change of Basis. Composition of Linear Transformations. 6. DETERMINANTS. Permutations and Indices. The Definition of a Determinant. Cofactor Expansions. Elementary Operations and Cramer's Rule. Determinants and Matrix Multiplication. 7. EIGENVALUES AND EIGENVECTORS. Eigenvalues and Eigenvectors. Eigenspaces and Similarity. Representation by a Diagonal Matrix. 8. FUNCTIONS OF VECTORS. Linear Functionals. Real Quadratic Forms. Orthogonal Matrices. Reduction of Real Quadratic Forms. Classification of Real Quadratic Forms. Bilinear Forms. Symmetric Bilinear Forms. Hermitian Forms. 9. INNER PRODUCT SPACES. Inner Products. Norms and Distances. Orthonormal Bases. Orthogonal Complements. Isometries. Normal Matrices. Normal Linear Operators. 10. SPECTRAL DECOMPOSITIONS. Projections and Direct Sums. Spectral Decompositions. Minimal Polynomials and Spectral Decompositions. Nilpotent Transformations. The Jordan Canonical Form. 11. NUMERICAL METHODS. Sequences and Series of Vectors. Sequences and Series of Matrices. The Standard Method of Iteration. Cimmino's Method. An Iterative Method for Determining Eigenvalues.

Journal ArticleDOI
TL;DR: In this paper, the authors present detailed computations of the free energy in a one-cut matrix model with a hard edge a, in beta-ensembles, with any polynomial potential.
Abstract: We present detailed computations of the 'at least finite' terms (three dominant orders) of the free energy in a one-cut matrix model with a hard edge a, in beta-ensembles, with any polynomial potential. beta is a positive number, not restricted to the standard values beta = 1 (Hermitian matrices), beta = 1/2 (symmetric matrices), beta = 2 (quaternionic self-dual matrices). This model allows one to study the statistics of the maximum eigenvalue of random matrices. We compute the large deviation function to the left of the expected maximum. We specialize our results to the Gaussian beta-ensembles and check them numerically. Our method is based on general results and procedures already developed in the literature to solve the Pastur equations (also called "loop equations"). It allows one to compute the left tail of the analog of Tracy-Widom laws for any beta, including the constant term.

Journal ArticleDOI
TL;DR: The fine spectra of lower triangular double-band matrices have been examined by several authors; here they are determined over the sequence spaces c₀ and c.

Reference EntryDOI
TL;DR: In this paper, an application of the random matrix theory in the context of estimating the bipartite entanglement of a quantum system is discussed, where the smallest eigenvalue of the reduced density matrix of one of the subsystems has similar statistical properties as those of the Wishart matrices, except that their trace is constrained to be unity.
Abstract: We discuss an application of the random matrix theory in the context of estimating the bipartite entanglement of a quantum system. We discuss how the Wishart ensemble (the earliest studied random matrix ensemble) appears in this quantum problem. The eigenvalues of the reduced density matrix of one of the subsystems have similar statistical properties as those of the Wishart matrices, except that their trace is constrained to be unity. We focus here on the smallest eigenvalue which serves as an important measure of entanglement between the two subsystems. In the hard edge case (when the two subsystems have equal sizes) one can fully characterize the probability distribution of the minimum eigenvalue for real, complex and quaternion matrices of all sizes. In particular, we discuss the important finite size effect due to the fixed trace constraint.

Journal ArticleDOI
TL;DR: The existence and global exponential stability of an almost periodic solution of an impulsive neural network model with distributed delays is considered in a matrix setting and a concrete Hopfield model shows the advantages in comparison with a classical norm approach.

Book ChapterDOI
10 Oct 2010
TL;DR: This work shows how one can generalize the triangular matrix interpretations method to matrices that are not necessarily triangular but nevertheless polynomially bounded, and shows that this approach also applies to matrix interpretations over the real (algebraic) numbers.
Abstract: Matrix interpretations can be used to bound the derivational complexity of term rewrite systems. In particular, triangular matrix interpretations over the natural numbers are known to induce polynomial upper bounds on the derivational complexity of (compatible) rewrite systems. Using techniques from linear algebra, we show how one can generalize the method to matrices that are not necessarily triangular but nevertheless polynomially bounded. Moreover, we show that our approach also applies to matrix interpretations over the real (algebraic) numbers. In particular, it allows triangular matrix interpretations to infer tighter bounds than the original approach.

Journal ArticleDOI
TL;DR: Some necessary and sufficient conditions for matrices to be invertible over commutative semirings are obtained, and the factor rank of matrices over semirings is investigated.

Journal ArticleDOI
TL;DR: Following a new approach for 3D frame elements, the degree of freedom "warping intensity" is introduced; it is associated with the warping basic mode, a geometric characteristic of the cross-section.

Book
26 Apr 2010
TL;DR: Built around the BzzMath library, the book covers modeling of physical phenomena, number representation on the computer, elementary operations, error sources and propagation, decision-making for an optimal program, and the selection of programming languages (why C++?).
Abstract: Preface BASIC CONCEPTS Introduction Modeling Physical Phenomena Number Representation on the Computer Elementary Operations Error Sources Error Propagation Decision-Making for an Optimal Program Selection of Programming Languages: Why C++? SOME UTILITIES IN THE BzzMATH LIBRARY Introduction Messages and Printing Save and Load Integer Algebra BzzVectorIntArray and BzzVectorArray BzzMatrixCoefficientsExistence BzzMatrixExistence BzzSymmetricMatrixCoefficientsExistence Complex Numbers Miscellaneous Utilities BzzPlot.exe and BzzPlotSparse.exe LINEAR ALGEBRA Introduction Classes for Linear Algebra BzzVector Class BzzMatrix Class Vector and Matrix Norms Structured Matrices Sparse Unstructured Matrices Symmetric Matrices Linear Algebra Operations SQUARE LINEAR SYSTEMS Introduction Gauss Elimination Gauss Transformation Classical Gauss Factorization Alternative Methods Conditioning of Linear Systems Best Pivot Selection Solution Features Class for Linear System Solution Condition Number Computation Determinant Evaluation Inverse Matrix Sparse Matrices Classes for Linear System Solution with Sparse Unstructured Matrices STRUCTURED LINEAR SYSTEMS Introduction Symmetric Matrices Symmetric Sparse Matrices Band Matrices Diagonal Block Matrices Iterative Methods Systems Generated by Special Physical Problems OVERDIMENSIONED LINEAR SYSTEMS Introduction Orthogonal Matrices Problem Conditioning Method of Least Squares Orthogonal Transformation QR Factorization Classes for QRT Factorization SVD Factorization Class for SVD Factorization Advantages of SVD Factorization UNDERDIMENSIONED LINEAR SYSTEMS Introduction LQ Factorization Classes for LQ Factorization Null Space Minimization with Linear Constraints Minimizing a Sum of Squares Subject to Linear Constraints Special Problems Solved by LQ Factorization EIGENVALUES AND EIGENVECTORS FOR SYMMETRIC MATRICES Introduction Eigenvalues of Symmetric Matrices Power Method Inverse Power Method Inverse-Translate Power Method Jacobi Method QR Algorithm Eigenvalues of Rank-2 Matrices ITERATIVE PROCESSES Introduction Convergence of an Iterative Algorithm Convergence Speed Convergence Accelerators Extrapolation Extrapolation Methods Class for Numerical Derivation APPENDIX A: Matrix Product APPENDIX B: Entertainment APPENDIX C: Basic Requirements for Using the BzzMath Library APPENDIX D: Copyrights

BookDOI
01 Apr 2010
Abstract: Algebra and Matrices: Operators Preserving Primitivity for Matrix Pairs (L B Beasley & A Guterman) Determining the Schein Rank of Boolean Matrices (E Marenich) Matrix Algebras and Their Length (O Markova) Matrices and Algorithms: Some Relationships Between Optimal Preconditioner and Superoptimal Preconditioner (J-B Chen et al.) Separated Variables in Nonlinear Equation Fermi (Yu I Kuznetsov) Faster Multipoint Polynomial Evaluation (B Murphy & R E Rosholt) Matrices and Applications: Multilevel Algorithm for Graphs Partitioning (N Bochkarev et al.) Operator Equations for Eddy Currents on Singular Carriers (J Naumenko) Matrix Approach to Modeling of Polarized Radiation Transfer in Heterogeneous Systems (T Sushkevich et al.) and other papers.

01 Jan 2010
TL;DR: In this paper, the authors considered a special case of upper Hessenberg matrices, in which all subdiagonal elements are −1 and investigated three types of matrices related to polynomials, generalized Fibonacci numbers, and special compositions of natural numbers.
Abstract: We consider a particular case of upper Hessenberg matrices, in which all subdiagonal elements are −1. We investigate three types of matrices related to polynomials, generalized Fibonacci numbers, and special compositions of natural numbers. We give the combinatorial meaning of the coefficients of the characteristic polynomials of these matrices.
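As a check on the combinatorial meaning, consider the special case in which every entry on and above the diagonal equals 1: the determinant then equals 2^(n-1), the number of compositions of n. A sketch under that assumption (this all-ones instance is my choice of illustration, not necessarily one of the paper's three families):

```python
from fractions import Fraction

def det(M):
    """Determinant via fraction-exact Gaussian elimination (no pivoting;
    the matrices below have nonzero leading principal minors)."""
    A = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for k in range(n):
        d *= A[k][k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
    return d

def hessenberg_ones(n):
    """Upper Hessenberg matrix: subdiagonal entries -1, entries on and
    above the main diagonal all equal to 1."""
    return [[-1 if i == j + 1 else (1 if i <= j else 0)
             for j in range(n)] for i in range(n)]
```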


Book ChapterDOI
21 Jun 2010
TL;DR: In this article, an index-free, calculational approach to matrix algebra can be developed by regarding matrices as morphisms of a category with biproducts, which shifts the traditional view of matrix as indexed structures to a type-level perspective analogous to that of the pointfree algebra of programming.
Abstract: Motivated by the need to formalize generation of fast running code for linear algebra applications, we show how an index-free, calculational approach to matrix algebra can be developed by regarding matrices as morphisms of a category with biproducts. This shifts the traditional view of matrices as indexed structures to a type-level perspective analogous to that of the pointfree algebra of programming. The derivation of fusion, cancellation and abide laws from the biproduct equations makes it easy to calculate algorithms implementing matrix multiplication, the kernel operation of matrix algebra, ranging from its divide-and-conquer version to the conventional, iterative one. From errant attempts to learn how particular products and coproducts emerge from biproducts, we not only rediscovered block-wise matrix combinators but also found a way of addressing other operations calculationally such as e.g. Gaussian elimination. A strategy for addressing vectorization along the same lines is also given.
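The divide-and-conquer multiplication calculated from the biproduct laws can be rendered as ordinary code. A sketch for power-of-two dimensions (plain Python rather than the categorical calculus of the paper):

```python
def split(M):
    """Split an even-sized square matrix into four equal blocks."""
    n = len(M) // 2
    return ([r[:n] for r in M[:n]], [r[n:] for r in M[:n]],
            [r[:n] for r in M[n:]], [r[n:] for r in M[n:]])

def add(X, Y):
    """Entrywise sum of two equal-shaped matrices."""
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def matmul(A, B):
    """Divide-and-conquer product via the 2x2 block structure; assumes
    the dimension is a power of two."""
    if len(A) == 1:
        return [[A[0][0] * B[0][0]]]
    A11, A12, A21, A22 = split(A)
    B11, B12, B21, B22 = split(B)
    C11 = add(matmul(A11, B11), matmul(A12, B21))
    C12 = add(matmul(A11, B12), matmul(A12, B22))
    C21 = add(matmul(A21, B11), matmul(A22, B21))
    C22 = add(matmul(A21, B12), matmul(A22, B22))
    return ([r1 + r2 for r1, r2 in zip(C11, C12)] +
            [r1 + r2 for r1, r2 in zip(C21, C22)])
```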

Journal ArticleDOI
TL;DR: A new O(n)-space implementation of the GKO-Cauchy algorithm is proposed for the solution of linear systems where the coefficient matrix is Cauchy-like; it outperforms the customary GKO algorithm for matrices of size larger than n ≈ 500-1,000.
Abstract: We propose a new O(n)-space implementation of the GKO-Cauchy algorithm for the solution of linear systems where the coefficient matrix is Cauchy-like. Moreover, this new algorithm makes a more efficient use of the processor cache memory; for matrices of size larger than n ≈ 500-1,000, it outperforms the customary GKO algorithm. We present an applicative case of Cauchy-like matrices with non-reconstructible main diagonal. In this special instance, the O(n) space algorithms can be adapted nicely to provide an efficient implementation of basic linear algebra operations in terms of the low displacement-rank generators.
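The low displacement rank exploited by the O(n)-space representation is already visible for a plain Cauchy matrix: since (s_i - t_j) C_ij = 1 for every entry, the displacement D_s C - C D_t is the rank-one all-ones matrix. A toy check (not the GKO algorithm itself):

```python
def cauchy(s, t):
    """Cauchy matrix C with C[i][j] = 1 / (s[i] - t[j])."""
    return [[1.0 / (si - tj) for tj in t] for si in s]

# Displacement structure: (s[i] - t[j]) * C[i][j] == 1 for every entry,
# i.e. D_s C - C D_t is the all-ones (rank-one) matrix.
s, t = [1.0, 2.0, 3.0], [-1.0, -2.0, -3.0]
C = cauchy(s, t)
ok = all(abs((si - tj) * C[i][j] - 1.0) < 1e-12
         for i, si in enumerate(s) for j, tj in enumerate(t))
```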

Journal ArticleDOI
TL;DR: It is proven that cell matrices are (Circum-)Euclidean Distance Matrices ((C)EDM), and their generalization, k-cell matrices, are CEDM under certain natural restrictions.

Journal ArticleDOI
TL;DR: In this article, it was shown that a pair of almost commuting self-adjoint, symmetric matrices is close to the pair of self-dual in a uniform way.
Abstract: We show that a pair of almost commuting self-adjoint, symmetric matrices is close to a pair of commuting self-adjoint, symmetric matrices (in a uniform way). Moreover we prove that the same holds with self-dual in place of symmetric. The notion of self-dual Hermitian matrices is important in physics when studying fermionic systems that have time reversal symmetry. Since a symmetric, self-adjoint matrix is real, we get a real version of Huaxin Lin's famous theorem on almost commuting matrices. Similarly the self-dual case gives a version for matrices over the quaternions. We prove analogous results for elements of real C^*-algebras of "low rank." In particular, these stronger results apply to paths of almost commuting Hermitian matrices that are real or self-dual. Along the way we develop a theory of semiprojectivity for real C^*-algebras.

Journal ArticleDOI
TL;DR: In this paper, the authors study the 0-1 matrices whose squares are still 0-1 matrices and determine the maximal number of ones in such a matrix, which is a special case of a problem posed by Zhan.
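Such a matrix is exactly one whose digraph has at most one walk of length 2 between any ordered pair of vertices. A brute-force membership check (illustrative only, not the paper's extremal argument):

```python
def square_is_zero_one(A):
    """Return True when the integer square of a 0-1 matrix is again 0-1,
    i.e. each (i, j) entry of A^2 (the number of length-2 walks from i
    to j) is at most 1."""
    n = len(A)
    S = [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    return all(x in (0, 1) for row in S for x in row)
```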