
Square matrix

About: Square matrix is a research topic. Over the lifetime, 5000 publications have been published within this topic receiving 92428 citations.


Papers
Journal ArticleDOI
TL;DR: A method with a high rate of convergence for finding approximate inverses of nonsingular matrices is suggested and established analytically, and the scheme is extended to general square matrices for computing the Drazin inverse.
Abstract: A method with a high convergence rate for finding approximate inverses of nonsingular matrices is suggested and established analytically. An extension of the introduced computational scheme to general square matrices is defined; the extended method can be used for finding the Drazin inverse. The application of the scheme to large sparse test matrices, alongside its use in preconditioning linear systems of equations, is presented to clarify the contribution of the paper.
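The paper's own iteration is not reproduced above, so as a hedged sketch of the same family of schemes, the snippet below implements the classical Newton-Schulz iteration X <- X(2I - AX), which converges quadratically to the inverse once the initial residual has spectral radius below one; the scaled-transpose initializer and all names are illustrative choices, not taken from the paper.

import numpy as np

def newton_schulz_inverse(A, tol=1e-12, max_iter=100):
    """Approximate inv(A) iteratively for a nonsingular matrix A.

    Generic Newton-Schulz scheme (same family as the paper's
    higher-order method, but not the paper's own iteration).
    """
    n = A.shape[0]
    I = np.eye(n)
    # Standard scaled-transpose initializer; makes the iteration
    # convergent for any nonsingular A.
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        R = I - A @ X                   # residual of the current approximation
        if np.linalg.norm(R, 'fro') < tol:
            break
        X = X @ (2 * I - A @ X)         # quadratically convergent update
    return X

# Quick check on a random, well-conditioned matrix.
A = np.random.rand(50, 50) + 50 * np.eye(50)
X = newton_schulz_inverse(A)
print(np.linalg.norm(np.eye(50) - A @ X))  # should be near zero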

36 citations

Book ChapterDOI
23 Sep 2013
TL;DR: The results for the noiseless setting are extended to provide the first recovery guarantees under noise for alternating minimization: for well-conditioned matrices corrupted by random noise of bounded Frobenius norm, if the number of observed entries is O(k^7 n log n), then the ALS algorithm recovers the original matrix within an error bound that depends on the norm of the noise matrix.
Abstract: The task of matrix completion involves estimating the entries of a matrix M ∈ ℝ^{m×n} when a subset Ω ⊂ {(i, j) : 1 ≤ i ≤ m, 1 ≤ j ≤ n} of the entries is observed. A particular class of low-rank models for this task approximates the matrix as a product of two low-rank matrices, M = UV^T, where U ∈ ℝ^{m×k}, V ∈ ℝ^{n×k} and k ≪ min{m, n}. A popular algorithm in practice for recovering M from the partially observed matrix under the low-rank assumption is alternating least squares (ALS) minimization, which optimizes over U and V alternately, minimizing the squared error over the observed entries while keeping the other factor fixed. Despite being widely used in practice, only recently were theoretical guarantees established bounding the error of the matrix estimated by ALS relative to the original matrix M. In this work we extend the results for the noiseless setting and provide the first guarantees for recovery under noise for alternating minimization. We specifically show that for well-conditioned matrices corrupted by random noise of bounded Frobenius norm, if the number of observed entries is O(k^7 n log n), then the ALS algorithm recovers the original matrix within an error bound that depends on the norm of the noise matrix. The sample complexity is the same as derived in [7] for noise-free matrix completion using ALS.
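A minimal sketch of the alternating least squares scheme described above, assuming a boolean mask of observed entries; the rank k, the small ridge term, and all names are illustrative choices rather than the paper's setup.

import numpy as np

def als_matrix_completion(M_obs, mask, k=5, reg=1e-3, n_iters=50, seed=0):
    """Alternating least squares for matrix completion.

    M_obs : observed matrix (entries outside `mask` are ignored)
    mask  : boolean array, True where an entry is observed
    Returns U (m x k) and V (n x k) with M ~= U @ V.T on the observed set.
    """
    rng = np.random.default_rng(seed)
    m, n = M_obs.shape
    U = rng.standard_normal((m, k))
    V = rng.standard_normal((n, k))
    Ik = reg * np.eye(k)  # small ridge term keeps the k x k solves well posed
    for _ in range(n_iters):
        # Fix V: each row of U solves a small least squares problem
        # over that row's observed entries.
        for i in range(m):
            obs = mask[i, :]
            Vi = V[obs, :]
            U[i, :] = np.linalg.solve(Vi.T @ Vi + Ik, Vi.T @ M_obs[i, obs])
        # Fix U: same update for each row of V.
        for j in range(n):
            obs = mask[:, j]
            Uj = U[obs, :]
            V[j, :] = np.linalg.solve(Uj.T @ Uj + Ik, Uj.T @ M_obs[obs, j])
    return U, V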

36 citations

Journal ArticleDOI
TL;DR: The Hadamard square of any square matrix A is bounded above and below by doubly stochastic matrices scaled by the squares of the largest and smallest singular values of A, respectively, as discussed by the authors.
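The doubly stochastic matrices in the bound are not constructed here, but one entrywise consequence is easy to check numerically: because the entries of a doubly stochastic matrix lie in [0, 1], the upper bound implies that every entry of the Hadamard square is at most the square of the largest singular value. The snippet below is an illustrative check of that consequence only, not of the full result.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
hadamard_square = A * A                        # entrywise (Hadamard) square
sigma = np.linalg.svd(A, compute_uv=False)     # singular values, descending
print(hadamard_square.max() <= sigma[0] ** 2)  # True: |a_ij|^2 <= sigma_max^2
print(sigma[-1] ** 2, hadamard_square.max(), sigma[0] ** 2)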

36 citations

Patent
30 Mar 2006
TL;DR: A method and apparatus for decomposing a channel matrix in a wireless communication system: a Hermitian matrix formed from the channel matrix H is diagonalized by a cyclic Jacobi process, from which the singular value decomposition of H is obtained.
Abstract: A method and apparatus for decomposing a channel matrix in a wireless communication system are disclosed. A channel matrix H is generated for channels between transmit antennas and receive antennas (202). A Hermitian matrix A = H^H H or A = H H^H is created (204). A Jacobi process is cyclically performed on the matrix A to obtain Q and D_A matrices such that A = Q D_A Q^H (206). D_A is a diagonal matrix obtained by singular value decomposition (SVD) of the A matrix. In each Jacobi transformation, real-part diagonalization is performed to annihilate the real parts of the off-diagonal elements of the matrix, and imaginary-part diagonalization is performed to annihilate the imaginary parts of the off-diagonal elements after the real-part diagonalization. The U, V and D_H matrices of the H matrix are then calculated from the Q and D_A matrices. D_H is a diagonal matrix comprising the singular values of the H matrix (208).
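A compact numerical sketch of the same overall route, with a library Hermitian eigensolver standing in for the patent's cyclic Jacobi process: form A = H^H H, diagonalize it as A = Q D_A Q^H, and recover the singular values and the U, V factors of H. Names follow the abstract; the eigensolver substitution is an assumption for illustration only.

import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))  # channel matrix

A = H.conj().T @ H                      # Hermitian matrix A = H^H H
eigvals, Q = np.linalg.eigh(A)          # stand-in for the cyclic Jacobi process
order = np.argsort(eigvals)[::-1]       # descending order
eigvals, Q = eigvals[order], Q[:, order]

D_H = np.sqrt(eigvals)                  # singular values of H
V = Q                                   # right singular vectors of H
U = H @ V / D_H                         # left singular vectors (column-wise scaling)

print(np.allclose(H, (U * D_H) @ V.conj().T))  # H = U D_H V^H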

36 citations

Journal ArticleDOI
TL;DR: This work presents a computationally-efficient matrix-vector expression for the solution of a matrix linear least squares problem that arises in multistatic antenna array processing and relates the vectorization-by-columns operator to the diagonal extraction operator.
Abstract: We present a computationally-efficient matrix-vector expression for the solution of a matrix linear least squares problem that arises in multistatic antenna array processing. Our derivation relies on an explicit new relation between the Kronecker, Khatri-Rao and Schur-Hadamard matrix products, which involves a selection matrix (i.e., a subset of the columns of a permutation matrix). Moreover, we show that the same selection matrix also relates the vectorization-by-columns operator to the diagonal extraction operator, which plays a central role in our computationally-efficient solution.
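The paper's explicit relation is not reproduced above, but the two standard identities it builds on can be checked numerically: the Khatri-Rao (column-wise Kronecker) product of A and B equals the Kronecker product A ⊗ B times a selection matrix J that picks out the "diagonal" columns, and the transpose of the same J maps the vectorization-by-columns of a square matrix to its diagonal. The sketch below, with illustrative names, verifies both for random matrices.

import numpy as np

k = 3
rng = np.random.default_rng(2)
A = rng.standard_normal((4, k))
B = rng.standard_normal((5, k))

# Selection matrix J: the subset of columns of the k^2 x k^2 identity
# indexed j*k + j (0-based), i.e. the "diagonal" columns.
J = np.zeros((k * k, k))
for j in range(k):
    J[j * k + j, j] = 1.0

# Khatri-Rao product: column-wise Kronecker product.
KR = np.column_stack([np.kron(A[:, j], B[:, j]) for j in range(k)])

# Identity 1: Khatri-Rao = Kronecker times the selection matrix.
print(np.allclose(KR, np.kron(A, B) @ J))        # True

# Identity 2: J^T turns vectorization-by-columns into diagonal extraction.
M = rng.standard_normal((k, k))
vec_M = M.reshape(-1, order='F')                 # stack columns of M
print(np.allclose(J.T @ vec_M, np.diag(M)))      # True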

36 citations


Network Information
Related Topics (5)
Matrix (mathematics): 105.5K papers, 1.9M citations, 84% related
Polynomial: 52.6K papers, 853.1K citations, 84% related
Eigenvalues and eigenvectors: 51.7K papers, 1.1M citations, 81% related
Bounded function: 77.2K papers, 1.3M citations, 80% related
Hilbert space: 29.7K papers, 637K citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    22
2022    44
2021    115
2020    149
2019    134
2018    145