
Square matrix

About: Square matrix is a research topic. Over its lifetime, 5,000 publications on this topic have received 92,428 citations.


Papers
Journal Article
TL;DR: It is shown that the fast model can potentially solve eigenvalue problems and kernel learning problems in time linear in the matrix size n while achieving 1 + ε relative error, whereas both the prototype model and the Nyström method cost at least quadratic time to attain a comparable error bound.
Abstract: Symmetric positive semi-definite (SPSD) matrix approximation methods have been used extensively to speed up large-scale eigenvalue computation and kernel learning methods. The standard sketch-based method, which we call the prototype model, produces relatively accurate approximations but is inefficient on large square matrices. The Nyström method is highly efficient but can only achieve low accuracy. In this paper we propose a novel model that we call the fast SPSD matrix approximation model. The fast model is nearly as efficient as the Nyström method and as accurate as the prototype model. We show that the fast model can potentially solve eigenvalue problems and kernel learning problems in time linear in the matrix size n while achieving 1 + ε relative error, whereas both the prototype model and the Nyström method cost at least quadratic time to attain a comparable error bound. Empirical comparisons among the prototype model, the Nyström method, and our fast model demonstrate the superiority of the fast model. We also contribute new understandings of the Nyström method: it is a special instance of our fast model and an approximation to the prototype model. Our technique can be applied straightforwardly to compute the CUR matrix decomposition more efficiently without much affecting its accuracy.

25 citations
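For context, here is a minimal NumPy sketch of the two baselines the abstract compares (not the paper's fast model): the Nyström approximation K ≈ C W^+ C^T, which needs only the sampled columns, and the prototype model, whose core matrix U = C^+ K (C^+)^T requires a pass over the full n x n matrix. The toy RBF kernel and all names are illustrative assumptions.

```python
import numpy as np

def nystrom(K, idx):
    """Nystrom approximation K ~= C @ pinv(W) @ C.T from sampled columns."""
    C = K[:, idx]                # n x c sampled columns
    W = K[np.ix_(idx, idx)]      # c x c intersection block
    return C @ np.linalg.pinv(W) @ C.T

def prototype(K, idx):
    """Sketch-based 'prototype' model: U minimizing ||K - C U C^T||_F."""
    C = K[:, idx]
    Cp = np.linalg.pinv(C)       # c x n pseudoinverse
    U = Cp @ K @ Cp.T            # optimal core; touches all of K
    return C @ U @ C.T

# Toy SPSD matrix: an RBF kernel on random points.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq)

idx = rng.choice(300, size=30, replace=False)
for name, Ka in [("Nystrom", nystrom(K, idx)), ("prototype", prototype(K, idx))]:
    err = np.linalg.norm(K - Ka) / np.linalg.norm(K)
    print(f"{name}: relative Frobenius error {err:.3f}")
```

On examples like this the prototype model is typically the more accurate of the two, at the cost of the O(n^2 c) multiplication that forms U, which is the efficiency gap the paper's fast model targets.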

Book ChapterDOI
01 Jan 1993
TL;DR: In this article, the singular value decomposition (SVD) is used to reduce matrices to canonical form by means of orthogonal transformations in the spaces of images and preimages.
Abstract: In this chapter we discuss the reduction of matrices to canonical form by means of orthogonal transformations in the spaces of images and preimages. This canonical form is called the singular value decomposition. In what follows we will use the well-known polar decomposition, which is recalled in Section 1 in the course of the discussion of the singular value decomposition of square matrices.

24 citations
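The link between the SVD and the polar decomposition mentioned in the abstract is easy to see numerically. A small NumPy illustration for a square matrix: from A = U diag(s) V^T one gets A = Q H with Q = U V^T orthogonal and H = V diag(s) V^T symmetric positive semi-definite.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))        # a square matrix

# Singular value decomposition: A = U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(A)

# Polar decomposition recovered from the SVD: A = Q @ H with
# Q = U @ Vt orthogonal and H = V @ diag(s) @ V^T symmetric PSD.
Q = U @ Vt
H = Vt.T @ np.diag(s) @ Vt

print(np.allclose(A, U @ np.diag(s) @ Vt))   # True: SVD reconstructs A
print(np.allclose(A, Q @ H))                 # True: polar factors reconstruct A
print(np.allclose(Q @ Q.T, np.eye(4)))       # True: Q is orthogonal
```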

Journal ArticleDOI
TL;DR: An iterative method for determining the interval hull solution of A(p)x = b(p), p ∈ …

24 citations
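The entry above is truncated, but the problem it refers to is the parametric interval linear system A(p)x = b(p) with the parameter vector p ranging over given intervals. As an illustration of what the interval hull bounds (not the paper's iterative method), this sketch solves a toy affine-parametric system at the vertices of the parameter box and takes componentwise extremes, giving an inner approximation of the hull. The system itself is an invented example.

```python
import itertools
import numpy as np

# A(p) x = b(p) with p in a box: a toy 2x2 affine-parametric system,
# A(p) = A0 + p[0]*A1, b(p) = b0 + p[1]*b1, with p in [0,1] x [0,1].
A0 = np.array([[4.0, 1.0], [1.0, 3.0]])
A1 = np.array([[1.0, 0.0], [0.0, 1.0]])
b0 = np.array([1.0, 2.0])
b1 = np.array([1.0, 0.0])

# Brute-force inner bound on the interval hull: solve at every vertex of
# the parameter box and take componentwise min/max over the solutions.
sols = []
for p in itertools.product([0.0, 1.0], repeat=2):
    A = A0 + p[0] * A1
    b = b0 + p[1] * b1
    sols.append(np.linalg.solve(A, b))
sols = np.array(sols)
print("lower:", sols.min(axis=0))
print("upper:", sols.max(axis=0))
```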

Journal ArticleDOI
TL;DR: In this paper, the complementarity eigenvalues of a general square matrix are defined in terms of a certain complementarity system relative to the componentwise ordering of R^n; for the adjacency matrix of a graph, these eigenvalues form the so-called complementarity spectrum of the graph.

24 citations
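Assuming the standard complementarity (Pareto) eigenvalue problem relative to the componentwise ordering of R^n, i.e. find λ and x ≥ 0, x ≠ 0 with Ax − λx ≥ 0 and x^T(Ax − λx) = 0, a brute-force sketch enumerates the possible supports of x. It is feasible only for small n (2^n − 1 supports) and is not claimed to be the method of the paper.

```python
import itertools
import numpy as np

def pareto_eigenvalues(A, tol=1e-10):
    """Complementarity (Pareto) eigenvalues of a small square matrix A:
    real lambda with x >= 0, x != 0, A x - lambda x >= 0 and
    x^T (A x - lambda x) = 0.  Enumerates supports J of x; on J the
    system forces A[J,J] u = lambda u with u nonnegative."""
    n = A.shape[0]
    found = set()
    for r in range(1, n + 1):
        for J in itertools.combinations(range(n), r):
            vals, vecs = np.linalg.eig(A[np.ix_(J, J)])
            for lam, u in zip(vals, vecs.T):
                if abs(lam.imag) > tol:
                    continue                   # need a real eigenvalue
                u = u.real
                u = u if u.sum() >= 0 else -u  # fix the sign of the eigenvector
                if (u < -tol).any():
                    continue                   # need u >= 0 on the support
                x = np.zeros(n)
                x[list(J)] = u                 # extend by zeros off the support
                w = A @ x - lam.real * x
                if (w >= -tol).all():          # residual nonnegative off J
                    found.add(round(lam.real, 8))
    return sorted(found)

A = np.array([[1.0, 2.0], [3.0, 2.0]])
print(pareto_eigenvalues(A))   # [1.0, 2.0, 4.0]
```

For this 2x2 example the supports {0}, {1}, and {0, 1} contribute the complementarity eigenvalues 1, 2, and 4 respectively; the ordinary eigenvalue −1 is excluded because its eigenvector cannot be taken nonnegative.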

Journal ArticleDOI
TL;DR: In this paper, matrix majorization, a generalization of multivariate majorization, is studied, and the linear operators that strongly preserve it are characterized.

24 citations
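A sketch assuming Dahl's notion of matrix majorization (Y is majorized by X when Y = RX for some row-stochastic R), which is the definition this literature commonly uses; whether a given pair satisfies it reduces to a linear-programming feasibility problem. The function name and toy matrices are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def matrix_majorizes(X, Y):
    """Check matrix majorization in the sense Y = R X for some
    row-stochastic R (R >= 0, each row summing to 1), via an LP
    feasibility problem in the entries of R."""
    m, n = X.shape
    p = Y.shape[0]
    # Unknowns: R flattened row-major, shape (p, m) -> p*m variables.
    A_eq, b_eq = [], []
    for i in range(p):
        for j in range(n):              # equality rows: (R X)[i, j] = Y[i, j]
            row = np.zeros(p * m)
            row[i * m:(i + 1) * m] = X[:, j]
            A_eq.append(row); b_eq.append(Y[i, j])
        row = np.zeros(p * m)           # row i of R sums to 1
        row[i * m:(i + 1) * m] = 1.0
        A_eq.append(row); b_eq.append(1.0)
    res = linprog(np.zeros(p * m), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * (p * m))
    return res.status == 0              # 0: feasible point found

X = np.eye(2)
Y = np.array([[0.5, 0.5], [0.3, 0.7]])  # rows of Y are convex combos of rows of X
print(matrix_majorizes(X, Y))           # True under this definition
```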


Network Information
Related Topics (5)
Matrix (mathematics): 105.5K papers, 1.9M citations (84% related)
Polynomial: 52.6K papers, 853.1K citations (84% related)
Eigenvalues and eigenvectors: 51.7K papers, 1.1M citations (81% related)
Bounded function: 77.2K papers, 1.3M citations (80% related)
Hilbert space: 29.7K papers, 637K citations (79% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    22
2022    44
2021    115
2020    149
2019    134
2018    145