Topic

Square matrix

About: Square matrix is a research topic. Over its lifetime, 5,000 publications on this topic have been published, receiving 92,428 citations.


Papers
Proceedings ArticleDOI
TL;DR: In this paper, the authors present a method to analyze the powers of a given trilinear form (a special kind of algebraic construction, also called a tensor) and obtain upper bounds on the asymptotic complexity of matrix multiplication.
Abstract: This paper presents a method to analyze the powers of a given trilinear form (a special kind of algebraic construction, also called a tensor) and obtain upper bounds on the asymptotic complexity of matrix multiplication. Compared with existing approaches, this method is based on convex optimization, and thus has polynomial-time complexity. As an application, we use this method to study powers of the construction given by Coppersmith and Winograd [Journal of Symbolic Computation, 1990] and obtain the upper bound ω < 2.3728639 on the exponent of square matrix multiplication, which slightly improves the best known upper bound.
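The paper's own method (convex optimization over powers of the Coppersmith–Winograd tensor) is far beyond a short snippet, but Strassen's classical algorithm, the first sub-cubic construction in this line of work, illustrates the core idea of trading multiplications for additions. A minimal sketch in Python, assuming dimensions that are powers of two; this is an illustration of exponent log2(7) ≈ 2.807, not the paper's method:

```python
import numpy as np

def strassen(A, B):
    """Multiply square matrices with Strassen's algorithm (7 recursive
    products instead of 8). Assumes n is a power of two."""
    n = A.shape[0]
    if n <= 64:                       # fall back to the naive product on small blocks
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(128, 128)
B = np.random.rand(128, 128)
assert np.allclose(strassen(A, B), A @ B)
```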

940 citations

Journal ArticleDOI
TL;DR: It is shown that an unknown n × n matrix of rank r can be efficiently reconstructed from only O(nrν log² n) randomly sampled expansion coefficients with respect to any given matrix basis, where ν quantifies the “degree of incoherence” between the unknown matrix and the basis.
Abstract: We present novel techniques for analyzing the problem of low-rank matrix recovery. The methods are both considerably simpler and more general than previous approaches. It is shown that an unknown n × n matrix of rank r can be efficiently reconstructed from only O(nrν log² n) randomly sampled expansion coefficients with respect to any given matrix basis. The number ν quantifies the “degree of incoherence” between the unknown matrix and the basis. Existing work concentrated mostly on the problem of “matrix completion,” where one aims to recover a low-rank matrix from randomly selected matrix elements. Our result covers this situation as a special case. The proof consists of a series of relatively elementary steps, which stands in contrast to the highly involved methods previously employed to obtain comparable results. In cases where bounds had been known before, our estimates are slightly tighter. We discuss operator bases which are incoherent to all low-rank matrices simultaneously. For these bases, we show that O(nr log n) randomly sampled expansion coefficients suffice to recover any low-rank matrix with high probability. The latter bound is tight up to multiplicative constants.
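The “matrix completion” special case mentioned in the abstract can be illustrated with singular value thresholding (the SVT algorithm of Cai, Candès, and Shen), a standard recovery method though not this paper's own proof technique. A minimal numpy sketch, with the threshold and step size chosen for the toy example:

```python
import numpy as np

def svt_complete(M_obs, mask, tau=None, step=1.2, iters=500):
    """Singular value thresholding for matrix completion: recover a
    low-rank matrix from the entries where mask is True."""
    n1, n2 = M_obs.shape
    tau = tau if tau is not None else 5 * np.sqrt(n1 * n2)
    Y = np.zeros_like(M_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s = np.maximum(s - tau, 0)          # soft-threshold the singular values
        X = (U * s) @ Vt
        Y += step * mask * (M_obs - X)      # gradient step on observed entries only
    return X

# Rank-2 test matrix with 50% of entries observed.
rng = np.random.default_rng(0)
L = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
mask = rng.random(L.shape) < 0.5
X = svt_complete(L * mask, mask)
print(np.linalg.norm(X - L) / np.linalg.norm(L))  # small relative recovery error
```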

936 citations

Book
12 Jan 2005
TL;DR: This book provides a review of elementary matrix algebra, with chapters on matrix factorizations and matrix norms as well as generalized inverses.
Abstract: Preface. 1. A Review of Elementary Matrix Algebra. 2. Vector Spaces. 3. Eigenvalues and Eigenvectors. 4. Matrix Factorizations and Matrix Norms. 5. Generalized Inverses. 6. Systems of Linear Equations. 7. Partitioned Matrices. 8. Special Matrices and Matrix Operations. 9. Matrix Derivatives and Related Topics. 10. Some Special Topics Related to Quadratic Forms. References. Index.
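As a brief illustration of two of the book's chapter topics (generalized inverses and systems of linear equations), not drawn from the book itself, the Moore–Penrose inverse gives the minimum-norm least-squares solution of a rank-deficient system:

```python
import numpy as np

# A rank-deficient 3x2 system: the Moore-Penrose inverse yields the
# minimum-norm solution among all least-squares solutions of Ax = y.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])            # rank 1
y = np.array([1.0, 2.0, 3.0])         # lies in the column space of A

A_pinv = np.linalg.pinv(A)            # generalized (Moore-Penrose) inverse
x = A_pinv @ y
print(x)                              # [0.2, 0.4], the minimum-norm solution
assert np.allclose(A @ A_pinv @ A, A) # defining Penrose condition: A A+ A = A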

790 citations

Journal ArticleDOI
TL;DR: The CONCOR procedure is applied to several illustrative sets of social network data and is found to give results that are highly compatible with analyses and interpretations of the same data using the blockmodel approach of White.
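The CONCOR procedure (convergence of iterated correlations) is simple to sketch: starting from a data matrix, correlations between columns are computed repeatedly until every entry converges to ±1, which splits the actors into two blocks. A minimal Python sketch of one such pass; the function name and toy data below are illustrative, not from the paper:

```python
import numpy as np

def concor_split(M, iters=100, tol=1e-8):
    """One CONCOR pass: iterate correlations until entries converge to
    +/-1, then read off the two-block partition. Full blockmodel analyses
    apply this recursively within each block; assumes no constant columns."""
    C = np.corrcoef(M, rowvar=False)   # correlations between columns (actors)
    for _ in range(iters):
        C_new = np.corrcoef(C)         # correlations of the correlation matrix
        if np.max(np.abs(C_new - C)) < tol:
            C = C_new
            break
        C = C_new
    return C[0] > 0                    # actors correlating positively with actor 0

# Toy network: two groups of four actors with dense within-group ties.
rng = np.random.default_rng(1)
A = np.block([[0.9 * rng.random((4, 4)), 0.1 * rng.random((4, 4))],
              [0.1 * rng.random((4, 4)), 0.9 * rng.random((4, 4))]])
print(concor_split(A))   # expected: first four actors in one block, last four in the other
```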

750 citations

Book ChapterDOI
01 Jan 2011
TL;DR: In this chapter, the inverse matrix A⁻¹ of a nonsingular square matrix A is defined as the inverse transformation from y ∈ Eⁿ to x ∈ Eⁿ, whereas the matrix A represents the transformation from x to y; the solution vector x in the equation y = Ax is then determined uniquely as x = A⁻¹y.
Abstract: Let A be a square matrix of order n. If it is nonsingular, then Ker(A) = {0} and, as mentioned earlier, the solution vector x in the equation y = Ax is determined uniquely as x = A⁻¹y. Here, A⁻¹ is called the inverse (matrix) of A, defining the inverse transformation from y ∈ Eⁿ to x ∈ Eⁿ, whereas the matrix A represents a transformation from x to y. When A is n by m, Ax = y has a solution if and only if y ∈ Sp(A). Even then, if Ker(A) ≠ {0}, there are many solutions to the equation Ax = y due to the existence of x₀ (≠ 0) such that Ax₀ = 0, so that A(x + x₀) = y. If y ∉ Sp(A), there is no solution vector to the equation Ax = y.
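The three cases in this abstract (unique solution, infinitely many solutions, no solution) are easy to check numerically; a short numpy illustration, with the matrices chosen for the example:

```python
import numpy as np

# Nonsingular square A: Ker(A) = {0}, so x = A^{-1} y is the unique solution.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
y = np.array([3.0, 5.0])
x = np.linalg.solve(A, y)
assert np.allclose(A @ x, y)

# Singular A: Ker(A) != {0}. If y lies in Sp(A) there are infinitely many
# solutions, since any x0 with A x0 = 0 can be added to a particular solution.
A_sing = np.array([[1.0, 2.0], [2.0, 4.0]])        # rank 1
y_in = np.array([1.0, 2.0])                        # in Sp(A_sing)
x_p, *_ = np.linalg.lstsq(A_sing, y_in, rcond=None)
x0 = np.array([2.0, -1.0])                         # spans Ker(A_sing)
assert np.allclose(A_sing @ x_p, y_in)
assert np.allclose(A_sing @ (x_p + x0), y_in)      # a second, distinct solution

# If y is not in Sp(A), no exact solution exists; lstsq returns the
# least-squares fit, and the residual is nonzero.
y_out = np.array([1.0, 0.0])                       # not in Sp(A_sing)
x_ls, *_ = np.linalg.lstsq(A_sing, y_out, rcond=None)
print(np.linalg.norm(A_sing @ x_ls - y_out))       # nonzero residual
```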

729 citations


Network Information
Related Topics (5)
Matrix (mathematics): 105.5K papers, 1.9M citations, 84% related
Polynomial: 52.6K papers, 853.1K citations, 84% related
Eigenvalues and eigenvectors: 51.7K papers, 1.1M citations, 81% related
Bounded function: 77.2K papers, 1.3M citations, 80% related
Hilbert space: 29.7K papers, 637K citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    22
2022    44
2021    115
2020    149
2019    134
2018    145