
Showing papers on "Square matrix published in 2012"


Book
25 Dec 2012
TL;DR: The second edition of this book covers elementary linear and multilinear algebra, square matrices, tensor and exterior products, matrices with real or complex entries, Hermitian and nonnegative matrices, norms, matrix factorizations, iterative methods for linear systems, and the approximation of eigenvalues.
Abstract: Preface to the Second Edition.- Preface to the First Edition.- List of Symbols.- 1 Elementary Linear and Multilinear Algebra.- 2 What Are Matrices.- 3 Square Matrices.- 4 Tensor and Exterior Products.- 5 Matrices with Real or Complex Entries.- 6 Hermitian Matrices.- 7 Norms.- 8 Nonnegative Matrices.- 9 Matrices with Entries in a Principal Ideal Domain; Jordan Reduction.- 10 Exponential of a Matrix, Polar Decomposition, and Classical Groups.- 11 Matrix Factorizations and Their Applications.- 12 Iterative Methods for Linear Systems.- 13 Approximation of Eigenvalues.- References.- Index of Notation.- General Index.- Cited Names.-

692 citations


Proceedings Article
26 Jun 2012
TL;DR: A randomized algorithm is proposed that takes as input an arbitrary n × d matrix A, with n ≫ d, and returns, as output, relative-error approximations to all n of the statistical leverage scores.
Abstract: The statistical leverage scores of a data matrix are the squared row-norms of any matrix whose columns are obtained by orthogonalizing the columns of the data matrix; and, the coherence is the largest leverage score. These quantities play an important role in several machine learning algorithms because they capture the key structural nonuniformity of the data matrix that must be dealt with in developing efficient randomized algorithms. Our main result is a randomized algorithm that takes as input an arbitrary n × d matrix A, with n ≫ d, and returns, as output, relative-error approximations to all n of the statistical leverage scores. The proposed algorithm runs in O(nd log n) time, as opposed to the O(nd2) time required by the naive algorithm that involves computing an orthogonal basis for the range of A. This resolves an open question from (Drineas et al., 2006) and (Mohri & Talwalkar, 2011); and our result leads to immediate improvements in coreset-based l2-regression, the estimation of the coherence of a matrix, and several related low-rank matrix problems. Interestingly, to achieve our result we judiciously apply random projections on both sides of A.
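For concreteness, the naive O(nd^2) baseline mentioned in the abstract fits in a few lines. The sketch below (plain NumPy, assuming A has full column rank) computes exact leverage scores via a thin QR factorization; the paper's contribution is a randomized algorithm that approximates these scores in O(nd log n) time, which is not reproduced here.

```python
import numpy as np

def leverage_scores_exact(A):
    """Exact statistical leverage scores of the rows of A (n >> d).

    The naive O(nd^2) baseline from the abstract: orthogonalize the
    columns of A and take squared row norms of the resulting basis.
    """
    Q, _ = np.linalg.qr(A, mode='reduced')   # columns of Q span range(A)
    return np.sum(Q**2, axis=1)              # i-th score = ||Q[i, :]||^2

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 5))           # n >> d
scores = leverage_scores_exact(A)
print(scores.max())                          # the coherence of A
```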

389 citations


Posted Content
TL;DR: In this article, the authors use the learning with errors (LWE) problem to build a new simple and provably secure key exchange scheme, which can be viewed as a certain extension of the Diffie-Hellman problem with errors.
Abstract: We use the learning with errors (LWE) problem to build a new simple and provably secure key exchange scheme. The basic idea of the construction can be viewed as a certain extension of the Diffie-Hellman problem with errors. The mathematical structure behind it comes from the commutativity of computing a bilinear form in two different ways due to the associativity of matrix multiplication: (x × A) × y = x × (A × y), where x, y are column vectors and A is a square matrix. We show that our new schemes are more efficient in terms of communication and computation complexity compared with key exchange schemes or key transport schemes via encryption schemes based on the LWE problem. Furthermore, we extend our scheme to the ring learning with errors (RLWE) problem, resulting in small key size and better efficiency.
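The associativity identity at the heart of the construction is easy to check numerically. The toy sketch below uses made-up parameters and omits the LWE error terms and reconciliation step entirely (so it is in no way a secure key exchange); it only illustrates that both parties compute the same bilinear value.

```python
import numpy as np

rng = np.random.default_rng(1)
n, q = 8, 97                            # toy dimension and modulus, illustrative only
A = rng.integers(0, q, size=(n, n))     # public square matrix
x = rng.integers(0, q, size=n)          # one party's secret column vector
y = rng.integers(0, q, size=n)          # the other party's secret column vector

# Associativity of matrix multiplication: (x^T A) y == x^T (A y)  (mod q).
left = int(((x @ A) % q) @ y) % q       # computed from x^T A
right = int(x @ ((A @ y) % q)) % q      # computed from A y
assert left == right                    # both sides agree; the real scheme adds errors
```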

211 citations


Proceedings ArticleDOI
20 Oct 2012
TL;DR: In this paper, it is shown that the product of an n × n^0.30298 matrix by an n^0.30298 × n matrix can be computed with n^(2+o(1)) arithmetic operations, improving the previous record of Coppersmith, with applications to all-pairs shortest paths over directed graphs with small integer weights and to sparse square matrix multiplication.
Abstract: Let $\alpha$ be the maximal value such that the product of an $n\times n^\alpha$ matrix by an $n^\alpha\times n$ matrix can be computed with $n^{2+o(1)}$ arithmetic operations. In this paper we show that $\alpha>0.30298$, which improves the previous record $\alpha>0.29462$ by Coppersmith (Journal of Complexity, 1997). More generally, we construct a new algorithm for multiplying an $n\times n^k$ matrix by an $n^k\times n$ matrix, for any value $k\neq 1$. The complexity of this algorithm is better than all known algorithms for rectangular matrix multiplication. In the case of square matrix multiplication (i.e., for $k=1$), we recover exactly the complexity of the algorithm by Coppersmith and Winograd (Journal of Symbolic Computation, 1990). These new upper bounds can be used to improve the time complexity of several known algorithms that rely on rectangular matrix multiplication. For example, we directly obtain a $O(n^{2.5302})$-time algorithm for the all-pairs shortest paths problem over directed graphs with small integer weights, where $n$ denotes the number of vertices, and also improve the time complexity of sparse square matrix multiplication.

149 citations


Journal ArticleDOI
TL;DR: This paper presents a probing method for determining the diagonal of the inverse of a sparse matrix in the common situation when its inverse exhibits a decay property, i.e. when many of the entries of the matrix inverse are small.
Abstract: SUMMARY The computation of some entries of a matrix inverse arises in several important applications in practice. This paper presents a probing method for determining the diagonal of the inverse of a sparse matrix in the common situation when its inverse exhibits a decay property, i.e. when many of the entries of the inverse are small. A few simple properties of the inverse suggest a way to determine effective probing vectors based on standard graph theory results. An iterative method is then applied to solve the resulting sequence of linear systems, from which the diagonal of the matrix inverse is extracted. The results of numerical experiments are provided to demonstrate the effectiveness of the probing method. Copyright © 2011 John Wiley & Sons, Ltd.
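As a rough illustration of the idea only (not the paper's method, which constructs structured probing vectors from a graph coloring of the sparsity pattern), the hypothetical sketch below estimates diag(A^{-1}) with simple random ±1 probes and an iterative solver, assuming A is sparse symmetric positive definite with a decaying inverse.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

def diag_inverse_probing(A, num_probes=30, seed=None):
    """Estimate diag(A^{-1}) with random +/-1 probe vectors.

    A simplified stand-in for the paper's probing method: here the probes
    are Hutchinson-style random sign vectors rather than vectors derived
    from a graph coloring, and each system A x = v is solved with CG
    (assuming A is symmetric positive definite).
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    num = np.zeros(n)
    den = np.zeros(n)
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=n)
        x, _ = cg(A, v)              # solve A x = v iteratively
        num += v * x                 # elementwise: v_i * (A^{-1} v)_i
        den += v * v
    return num / den

# Usage: a sparse SPD matrix whose inverse decays away from the diagonal
n = 200
A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()
est = diag_inverse_probing(A, num_probes=50, seed=0)
```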

142 citations


27 Aug 2012
TL;DR: A new algorithm for multiplying an n × n^k matrix by an n^k × n matrix, which is better than all known algorithms for rectangular matrix multiplication and, in the square case, recovers exactly the complexity of the algorithm by Coppersmith and Winograd.
Abstract: Let $\alpha$ be the maximal value such that the product of an $n\times n^\alpha$ matrix by an $n^\alpha\times n$ matrix can be computed with $n^{2+o(1)}$ arithmetic operations. In this paper we show that $\alpha>0.30298$, which improves the previous record $\alpha>0.29462$ by Coppersmith (Journal of Complexity, 1997). More generally, we construct a new algorithm for multiplying an $n\times n^k$ matrix by an $n^k\times n$ matrix, for any value $k\neq 1$. The complexity of this algorithm is better than all known algorithms for rectangular matrix multiplication. In the case of square matrix multiplication (i.e., for $k=1$), we recover exactly the complexity of the algorithm by Coppersmith and Winograd (Journal of Symbolic Computation, 1990). These new upper bounds can be used to improve the time complexity of several known algorithms that rely on rectangular matrix multiplication. For example, we directly obtain a $O(n^{2.5302})$-time algorithm for the all-pairs shortest paths problem over directed graphs with small integer weights, where $n$ denotes the number of vertices, and also improve the time complexity of sparse square matrix multiplication.

126 citations


Journal ArticleDOI
TL;DR: An extension of the recently introduced Generalized Matrix Learning Vector Quantization algorithm to matrices of limited rank, corresponding to low-dimensional representations of the data, which incorporates prior knowledge of the intrinsic dimension and efficiently reduces the number of adaptive parameters.

113 citations


Journal ArticleDOI
TL;DR: The penalized ℓ1 norm method with stopping parameter λ (also called basis pursuit denoising) is used to recover pulsed or sinusoidal RF signals as a function of the small dimension of the measurement matrix and the stopping parameter.
Abstract: We demonstrate an optical mixing system for measuring properties of sparse radio frequency (RF) signals using compressive sensing (CS). Two types of sparse RF signals are investigated: (1) a signal that consists of a few 0.4 ns pulses in a 268 ns window and (2) a signal that consists of a few sinusoids at different frequencies. The RF is modulated onto the intensity of a repetitively pulsed, wavelength-chirped optical field, and time-wavelength-space mapping is used to map the optical field onto a 118-pixel, one-dimensional spatial light modulator (SLM). The SLM pixels are programmed with a pseudo-random bit sequence (PRBS) to form one row of the CS measurement matrix, and the optical throughput is integrated with a photodiode to obtain one value of the CS measurement vector. Then the PRBS is changed to form the second row of the mixing matrix and a second value of the measurement vector is obtained. This process is performed 118 times so that we can vary the dimensions of the CS measurement matrix from 1×118 to 118×118 (square). We use the penalized ℓ1 norm method with stopping parameter λ (also called basis pursuit denoising) to recover pulsed or sinusoidal RF signals as a function of the small dimension of the measurement matrix and the stopping parameter. For a square matrix, we also find that penalized ℓ1 norm recovery performs better than conventional recovery using matrix inversion.
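As a self-contained illustration of the recovery step only (not the authors' solver; the matrix sizes, sparsity, and λ below are made up), the sketch solves the penalized ℓ1 problem in its Lagrangian form with a plain ISTA iteration and a toy 0/1 measurement matrix mimicking a PRBS.

```python
import numpy as np

def ista_bpdn(Phi, y, lam, num_iters=500):
    """Minimal ISTA solver for min_x 0.5*||Phi x - y||_2^2 + lam*||x||_1
    (basis pursuit denoising in Lagrangian form).  A sketch, not the
    solver used in the paper.
    """
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(num_iters):
        grad = Phi.T @ (Phi @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

# Toy example: 30 pseudo-random binary measurement rows, a 3-sparse signal
rng = np.random.default_rng(2)
Phi = rng.choice([0.0, 1.0], size=(30, 118))
x_true = np.zeros(118)
x_true[[7, 40, 90]] = [1.0, -0.5, 0.8]
y = Phi @ x_true
x_rec = ista_bpdn(Phi, y, lam=1e-3)
```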

81 citations


Book ChapterDOI
10 Jun 2012
TL;DR: It is shown that by using either standard blocking or recursive blocking the computation of the square root of the triangular matrix can be made rich in matrix multiplication, and the excellent numerical stability of the point algorithm is shown to be preserved by blocking.
Abstract: The Schur method for computing a matrix square root reduces the matrix to the Schur triangular form and then computes a square root of the triangular matrix. We show that by using either standard blocking or recursive blocking the computation of the square root of the triangular matrix can be made rich in matrix multiplication. Numerical experiments making appropriate use of level 3 BLAS show significant speedups over the point algorithm, both in the square root phase and in the algorithm as a whole. In parallel implementations, recursive blocking is found to provide better performance than standard blocking when the parallelism comes only from threaded BLAS, but the reverse is true when parallelism is explicitly expressed using OpenMP. The excellent numerical stability of the point algorithm is shown to be preserved by blocking. These results are extended to the real Schur method. Blocking is also shown to be effective for multiplying triangular matrices.
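For reference, the point (unblocked) algorithm that the paper accelerates can be sketched as follows: reduce to complex Schur form, apply the standard column-by-column recurrence for a square root of the triangular factor, and transform back. The blocked and recursively blocked variants that make the computation rich in matrix multiplication are the paper's contribution and are not reproduced in this sketch.

```python
import numpy as np
from scipy.linalg import schur

def sqrtm_schur_point(A):
    """Schur method for a matrix square root, point (unblocked) version.

    Reduce A to complex Schur form A = Q T Q*, compute an upper triangular
    square root U of T by the standard recurrence, and transform back.
    """
    T, Q = schur(A, output='complex')
    n = T.shape[0]
    U = np.zeros_like(T)
    for j in range(n):
        U[j, j] = np.sqrt(T[j, j])
        for i in range(j - 1, -1, -1):
            s = U[i, i + 1:j] @ U[i + 1:j, j]          # already-computed entries
            U[i, j] = (T[i, j] - s) / (U[i, i] + U[j, j])
    return Q @ U @ Q.conj().T

# Quick check on a random matrix
X = np.random.default_rng(3).standard_normal((6, 6))
S = sqrtm_schur_point(X)
print(np.linalg.norm(S @ S - X))                        # small residual
```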

69 citations


Journal ArticleDOI
TL;DR: In this article, two iterative algorithms were proposed to find the least Frobenius norm generalized reflexive (generalized anti-reflexive) solution of a new system of matrix equations.

42 citations


Journal ArticleDOI
08 Apr 2012
TL;DR: In this paper, a free deterministic equivalent is proposed for quite general random matrix models, obtained by replacing the matrices with operators satisfying certain freeness relations; both square and rectangular matrices are treated.
Abstract: Motivated by the asymptotic collective behavior of random and deterministic matrices, we propose an approximation (called "free deterministic equivalent") to quite general random matrix models, by replacing the matrices with operators satisfying certain freeness relations. We comment on the relation between our free deterministic equivalent and deterministic equivalents considered in the engineering literature. We do not only consider the case of square matrices, but also show how rectangular matrices can be treated. Furthermore, we emphasize how operator-valued free probability techniques can be used to solve our free deterministic equivalents. As an illustration of our methods we show how the free deterministic equivalent of a random matrix model from [6] can be treated and we thus recover in a conceptual way the results from [6]. On a technical level, we generalize a result from scalar-valued free probability, by showing that randomly rotated deterministic matrices of different sizes are asymptotically free from deterministic rectangular matrices, with amalgamation over a certain algebra of projections. In Appendix A, we show how estimates for differences between Cauchy transforms can be extended from a neighborhood of infinity to a region close to the real axis. This is of some relevance if one wants to compare the original random matrix problem with its free deterministic equivalent.

Journal ArticleDOI
TL;DR: In this article, the special singular value decomposition A = UΣU^T for complex symmetric matrices is extended to a class of quaternion matrices that includes complex matrices that are symmetric or Hermitian.
Abstract: A complex symmetric matrix A can always be factored as A = UΣU^T, in which U is complex unitary and Σ is a real diagonal matrix whose diagonal entries are the singular values of A. This factorization may be thought of as a special singular value decomposition for complex symmetric matrices. We present an analogous special singular value decomposition for a class of quaternion matrices that includes complex matrices that are symmetric or Hermitian.
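A minimal numerical sketch of the complex symmetric case (not the quaternion extension developed in the paper): when the singular values of A are distinct, a Takagi factor U can be assembled from an ordinary SVD, because the left singular vectors and the conjugated right singular vectors then differ only by a diagonal phase matrix.

```python
import numpy as np

def takagi(A):
    """Takagi factorization A = U @ diag(s) @ U.T of a complex symmetric A.

    Sketch via the ordinary SVD, assuming distinct singular values so that
    the singular vectors are unique up to phases; the paper handles the
    general case and its quaternion analogue.
    """
    P, s, Vh = np.linalg.svd(A)
    Qbar = Vh.T                        # conjugate of the right singular vectors Q = Vh^H
    d = np.diag(P.conj().T @ Qbar)     # diagonal phases d with Qbar = P @ diag(d)
    U = P * np.sqrt(d)                 # U = P @ diag(sqrt(d)) is unitary
    return U, s

# Verify on a random complex symmetric (not Hermitian) matrix
rng = np.random.default_rng(4)
B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = B + B.T                            # A = A^T
U, s = takagi(A)
print(np.linalg.norm(U @ np.diag(s) @ U.T - A))   # close to machine precision
```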

Journal ArticleDOI
TL;DR: This paper develops a new framework to apply the character expansions for integrations over the unitary group, involving general rectangular complex matrices in the integrand, and derives the correct distribution functions and uses them to obtain the capacity of the Ricean and correlated Rayleigh MIMO systems in a unified and straightforward approach.
Abstract: To evaluate the unitary integrals, such as the well-known Harish-Chandra-Itzykson-Zuber integral, character expansions were developed by Balantekin, where the matrix integrand is a group member; i.e., a square matrix with a nonzero determinant. Recently, this method has been exploited to derive the joint eigenvalue distributions of the Wishart matrices; i.e., HH* where H is the complex Gaussian random channel matrix of a multiple-input multiple-output (MIMO) system. The joint eigenvalue distributions are used to calculate the moment generating function of the mutual information (ergodic capacity) of a MIMO channel. In this paper, we show that the previous integration framework presented in the literature is not correct, and results in incorrect joint eigenvalue distributions for the Ricean and full-correlated Rayleigh MIMO channels. We develop a new framework to apply the character expansions for integrations over the unitary group, involving general rectangular complex matrices in the integrand. We derive the correct distribution functions and use them to obtain the capacity of the Ricean and correlated Rayleigh MIMO systems in a unified and straightforward approach. The integration technique proposed in this paper is general enough to be used for other unitary integrals in engineering, mathematics, and physics.

Journal ArticleDOI
TL;DR: In this paper, the expectation value of observables in a scalar theory on the fuzzy two-sphere, represented as a generalized Hermitian matrix model, is analyzed.
Abstract: We analyze the expectation value of observables in a scalar theory on the fuzzy two-sphere, represented as a generalized Hermitian matrix model. We calculate explicitly the form of the expectation values in the large-$N$ limit and demonstrate that, for any single kind of field (matrix), the distribution of its eigenvalues is still a Wigner semicircle but with a renormalized radius. For observables involving more than one type of matrix, we obtain a new distribution corresponding to correlated Wigner semicircles.

Posted Content
TL;DR: Motivated by a problem of Halmos, a canonical decomposition is obtained for complex matrices that are unitarily equivalent to their transpose (UET); such matrices are unitarily equivalent to a complex symmetric matrix when they are 7 × 7 or smaller, but this fails for matrices 8 × 8 and larger.
Abstract: Motivated by a problem of Halmos, we obtain a canonical decomposition for complex matrices which are unitarily equivalent to their transpose (UET). Surprisingly, the naïve assertion that a matrix is UET if and only if it is unitarily equivalent to a complex symmetric matrix holds for matrices 7 × 7 and smaller, but fails for matrices 8 × 8 and larger.

01 Jan 2012
TL;DR: The maximum controllability index of square matrices is defined and analyzed, and a generalized controllability canonical form is introduced for single-input systems.
Abstract: For two types of linear time-invariant dynamical multi-agent systems under a leader-follower framework, the problem of graph topology adjustment is addressed to improve system controllability. As important concepts and theoretical foundations, the maximum controllability index of square matrices is defined and analyzed, and a generalized controllability canonical form is introduced for single-input systems. Based on these concepts, approaches for adjusting the leader-follower and follower-follower communication architectures are presented respectively.

Journal ArticleDOI
TL;DR: In this paper, the eigenvalues of non-normal square matrices of the form $A_n = U_nT_nV_n$, with $U_n, V_n$ independent Haar distributed on the unitary group and $T_n$ real diagonal, were studied.
Abstract: We study the eigenvalues of non-normal square matrices of the form $A_n = U_nT_nV_n$ with $U_n, V_n$ independent Haar distributed on the unitary group and $T_n$ real diagonal. We show that when the empirical measure of the eigenvalues of $T_n$ converges, and $T_n$ satisfies some technical conditions, all these eigenvalues lie in a single ring.

Journal ArticleDOI
TL;DR: A finitely terminating primal-dual bilinear programming algorithm for the solution of the NP-hard absolute value equation (AVE): Ax − |x| = b, where A is an n × n square matrix.
Abstract: We propose a finitely terminating primal-dual bilinear programming algorithm for the solution of the NP-hard absolute value equation (AVE): Ax − |x| = b, where A is an n × n square matrix. The algorithm, which makes no assumptions on AVE other than solvability, consists of a finite number of linear programs terminating at a solution of the AVE or at a stationary point of the bilinear program. The proposed algorithm was tested on 500 consecutively generated random instances of the AVE with n = 10, 50, 100, 500 and 1,000. The algorithm solved 88.6% of the test problems to an accuracy of 10^{-6}.
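The paper's bilinear-programming algorithm is not reproduced here; for comparison, a much simpler known method for the same equation is the generalized Newton iteration, which works under stronger assumptions (for example, when the singular values of A exceed 1). The sketch below uses made-up test data chosen so that assumption holds.

```python
import numpy as np

def ave_generalized_newton(A, b, max_iters=50, tol=1e-8):
    """Solve the absolute value equation A x - |x| = b.

    Not the paper's algorithm: this is the generalized Newton iteration
    x_{k+1} = (A - diag(sign(x_k)))^{-1} b, known to converge when, e.g.,
    the singular values of A all exceed 1.
    """
    x = np.linalg.solve(A, b)                          # start from the |x|-free solve
    for _ in range(max_iters):
        x_new = np.linalg.solve(A - np.diag(np.sign(x)), b)
        if np.linalg.norm(A @ x_new - np.abs(x_new) - b) < tol:
            return x_new
        x = x_new
    return x

# Random solvable instance: pick x, set b = A x - |x|
rng = np.random.default_rng(5)
n = 100
A = rng.standard_normal((n, n)) + 3 * n ** 0.5 * np.eye(n)   # keeps sigma_min(A) > 1
x_true = rng.standard_normal(n)
b = A @ x_true - np.abs(x_true)
x = ave_generalized_newton(A, b)
print(np.linalg.norm(A @ x - np.abs(x) - b))                 # residual of the AVE
```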

Posted Content
TL;DR: In this paper, it was shown that a perturbation of any fixed square matrix D by a random unitary matrix is well invertible with high probability, and a similar result holds for perturbations by random orthogonal matrices.
Abstract: We show that a perturbation of any fixed square matrix D by a random unitary matrix is well invertible with high probability. A similar result holds for perturbations by random orthogonal matrices; the only notable exception is when D is close to orthogonal. As an application, these results completely eliminate a hard-to-check condition from the Single Ring Theorem by Guionnet, Krishnapur and Zeitouni.

Journal ArticleDOI
TL;DR: In this paper, the authors considered the matrix Schroedinger equation with a self-adjoint matrix potential on the half line with the general selfadjoint boundary condition at the origin.
Abstract: The matrix Schroedinger equation with a self-adjoint matrix potential is considered on the half line with the general self-adjoint boundary condition at the origin. When the matrix potential is integrable, the high-energy asymptotics are established for the related Jost matrix, the inverse of the Jost matrix, and the scattering matrix. Under the additional assumption that the matrix potential has a first moment, Levinson's theorem is derived, relating the number of bound states to the change in the argument of the determinant of the scattering matrix.

Book ChapterDOI
01 Jan 2012
TL;DR: In this paper, it is proved that the determinant of a square matrix can be defined as an antisymmetric multilinear function of its rows.
Abstract: In the second chapter we deal with matrices and determinants. The chapter starts with determinants of second and third orders, which are defined through solutions of linear algebraic systems; determinants of arbitrary order are defined inductively. The basic properties of determinants are investigated. We then take a look at determinants from a more abstract viewpoint: it is proved that the determinant of a square matrix can be defined as an antisymmetric multilinear function of the rows. Using some basic elements of permutation theory, we continue to study the properties of determinants; in particular, we derive an explicit formula for determinants. Finally, we define the rank of a matrix and the main operations on matrices (sum, product, inverse matrix) and investigate their properties.
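The explicit formula referred to is presumably the permutation (Leibniz) expansion; the small sketch below evaluates it directly, purely to illustrate the definition. It costs n! operations and is not a practical algorithm.

```python
import itertools
import numpy as np

def det_leibniz(A):
    """Determinant via the explicit permutation (Leibniz) formula
    det A = sum over permutations sigma of sgn(sigma) * prod_i A[i, sigma(i)].
    """
    n = A.shape[0]
    total = 0.0
    for perm in itertools.permutations(range(n)):
        # sign of the permutation = (-1)^(number of inversions)
        inversions = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        term = (-1.0) ** inversions
        for i in range(n):
            term *= A[i, perm[i]]
        total += term
    return total

A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
print(det_leibniz(A), np.linalg.det(A))   # both give approximately 8
```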

Book
30 Jan 2012
TL;DR: This book uses the Smith normal form of matrices with integer entries to describe finitely generated abelian groups, and then develops canonical forms and similarity classes of square matrices over a field via modules over the polynomial ring F[x].
Abstract: Part 1: Finitely Generated Abelian Groups: Matrices with Integer Entries: The Smith Normal Form.- Basic Theory of Additive Abelian Groups.- Decomposition of Finitely Generated Z-Modules. Part 2: Similarity of Square Matrices over a Field: The Polynomial Ring F[x] and Matrices over F[x].- F[x]-Modules: Similarity of t × t Matrices over a Field F.- Canonical Forms and Similarity Classes of Square Matrices over a Field.

Journal ArticleDOI
TL;DR: In this article, the authors studied the problem of finding an n × n real symmetric matrix that has a principal submatrix of rank k if and only if r_k = 1, for all 0 ⩽ k ⩽ n.

Journal ArticleDOI
TL;DR: In this paper, the positivity of a partitioned positive semidefinite matrix with each square block replaced by a compound matrix, an elementary symmetric function or a generalized matrix function is shown.
Abstract: Using an elementary fact on matrices we show by a unified approach the positivity of a partitioned positive semidefinite matrix with each square block replaced by a compound matrix, an elementary symmetric function or a generalized matrix function. In addition, we present a refined version of the Thompson determinant compression theorem.

Journal ArticleDOI
TL;DR: Several approaches to circumvent orthogonalization by the modified Gram-Schmidt method have been described in the literature, including the generation of Krylov subspace bases with the aid of suitably chosen Chebyshev or Newton polynomials.
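As a rough sketch of one of the approaches mentioned (a Newton-polynomial Krylov basis; the function name and shift choice below are illustrative assumptions, not any specific method from the paper), the recurrence generates basis vectors without inner products and defers orthogonalization to a single block QR.

```python
import numpy as np

def newton_basis_krylov(A, v0, shifts):
    """Non-orthogonal Newton-polynomial basis of a Krylov subspace:
    v_{k+1} = (A - shifts[k] I) v_k, normalized.  No inner products are
    needed during generation, so orthogonalization can be done afterwards
    in one block step instead of modified Gram-Schmidt.
    """
    V = [v0 / np.linalg.norm(v0)]
    for theta in shifts:
        w = A @ V[-1] - theta * V[-1]
        V.append(w / np.linalg.norm(w))
    return np.column_stack(V)

rng = np.random.default_rng(6)
A = rng.standard_normal((50, 50))
# In practice the shifts would be approximate eigenvalues (e.g. Ritz values);
# zero shifts reduce to the plain monomial basis and serve only as a placeholder.
V = newton_basis_krylov(A, rng.standard_normal(50), shifts=np.zeros(4))
Q, R = np.linalg.qr(V)                 # one block orthogonalization at the end
```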

Journal ArticleDOI
B. Winn
TL;DR: In this paper, the joint moments of the characteristic polynomial of a random unitary matrix from the circular unitary ensemble and its derivative in the case that the power in the moments is an odd positive integer were calculated.
Abstract: We calculate joint moments of the characteristic polynomial of a random unitary matrix from the circular unitary ensemble and its derivative in the case that the power in the moments is an odd positive integer. The calculations are carried out for finite matrix size and in the limit as the size of the matrices goes to infinity. The latter asymptotic calculation allows us to prove a long-standing conjecture from random matrix theory.

Journal ArticleDOI
01 Mar 2012
TL;DR: In this paper, the smallest singular value of a square random matrix with i.i.d. columns drawn from an isotropic log-concave distribution was studied.
Abstract: We study the smallest singular value of a square random matrix with i.i.d. columns drawn from an isotropic log-concave distribution. An important example is obtained by sampling vectors uniformly distributed in an isotropic convex body. We deduce that the condition number of such matrices is of the order of the size of the matrix and give an estimate on its tail behavior.

01 Jan 2012
TL;DR: In this article, the authors obtained formulas for the left and right eigenvectors and minimal bases of some families of Fiedler-like linearizations of square matrix polynomials.
Abstract: In this paper we obtain formulas for the left and right eigenvectors and minimal bases of some families of Fiedler-like linearizations of square matrix polynomials. In particular, for the families of Fiedler pencils, generalized Fiedler pencils, and Fiedler pencils with repetition. These formulas allow us to relate the eigenvectors and minimal bases of the linearizations with the ones of the polynomial. Since the eigenvectors appear in the standard formula of the condition number of eigenvalues of matrix polynomials, our results may be used to compare the condition numbers of eigenvalues of the linearizations within these families and the corresponding condition number of the polynomial eigenvalue problem.

Journal ArticleDOI
TL;DR: A feasible and effective algorithm is proposed to find solutions to the matrix equation $AX=B$ subject to a matrix inequality constraint $CXD\geq E$.
Abstract: In this paper a feasible and effective algorithm is proposed to find solutions to the matrix equation $AX=B$ subject to a matrix inequality constraint $CXD\geq E$. Numerical experiments are performed to illustrate the applicability of the algorithm. A comparison with some existing methods (with necessary modifications) is also given.
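The paper's own algorithm is not reproduced here; as a generic point of comparison, the constrained problem can also be posed directly as a small convex program, for example with CVXPY. The formulation below (with made-up dimensions and data) minimizes the residual of AX = B subject to the elementwise inequality CXD ≥ E.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(7)
m, n, p = 6, 5, 4
A = rng.standard_normal((m, n))
X_true = rng.standard_normal((n, p))
B = A @ X_true                          # make AX = B exactly solvable
C = rng.standard_normal((3, n))
D = rng.standard_normal((p, 2))
E = C @ X_true @ D - 1.0                # inequality CXD >= E holds at X_true

X = cp.Variable((n, p))
objective = cp.Minimize(cp.norm(A @ X - B, 'fro'))
constraints = [C @ X @ D >= E]          # elementwise matrix inequality
prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.value, np.min(C @ X.value @ D - E))   # residual and constraint slack
```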

Book ChapterDOI
03 Dec 2012
TL;DR: This work extends the expansion analysis approach to fast algorithms for rectangular matrix multiplication, obtaining a new class of communication cost lower bounds that apply, for example, to the algorithms of Bini et al. (1979) and of Hopcroft and Kerr (1971).
Abstract: Graph expansion analysis of computational DAGs is useful for obtaining communication cost lower bounds where previous methods, such as geometric embedding, are not applicable. This has recently been demonstrated for Strassen's and Strassen-like fast square matrix multiplication algorithms. Here we extend the expansion analysis approach to fast algorithms for rectangular matrix multiplication, obtaining a new class of communication cost lower bounds. These apply, for example to the algorithms of Bini et al. (1979) and the algorithms of Hopcroft and Kerr (1971). Some of our bounds are proved to be optimal.