
Showing papers on "Square matrix published in 2010"


Journal ArticleDOI
TL;DR: The exponent of a given square matrix is characterized, upper and lower bounds on achievable exponents are derived, and it is shown that there are no matrices of size smaller than 15 × 15 with exponents exceeding 1/2.
Abstract: Polar codes were recently introduced by Arikan. They achieve the symmetric capacity of arbitrary binary-input discrete memoryless channels under a low-complexity successive cancellation decoding scheme. The original polar code construction is closely related to the recursive construction of Reed-Muller codes and is based on the 2 × 2 matrix [1 0; 1 1]. It was shown by Arikan and Telatar that this construction achieves an error exponent of 1/2, i.e., that for sufficiently large blocklengths the error probability decays exponentially in the square root of the blocklength. It was already mentioned by Arikan that in principle larger matrices can be used to construct polar codes. In this paper, it is first shown that any l × l matrix none of whose column permutations is upper triangular polarizes binary-input memoryless channels. The exponent of a given square matrix is characterized, and upper and lower bounds on achievable exponents are given. Using these bounds it is shown that there are no matrices of size smaller than 15 × 15 with exponents exceeding 1/2. Further, a general construction based on BCH codes which for large l achieves exponents arbitrarily close to 1 is given. At size 16 × 16, this construction yields an exponent greater than 1/2.
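A minimal sketch (not from the paper) of the polarization criterion quoted above: an l × l binary kernel fails to polarize exactly when some permutation of its columns is upper triangular. The brute-force check below, with the hypothetical helper name `has_upper_triangular_column_permutation`, is only practical for small l.

```python
import itertools
import numpy as np

def has_upper_triangular_column_permutation(G):
    """Brute-force check: does some permutation of G's columns give an
    upper-triangular matrix?  (Only practical for small l.)"""
    G = np.asarray(G)
    l = G.shape[0]
    for perm in itertools.permutations(range(l)):
        P = G[:, perm]
        if np.all(np.tril(P, -1) == 0):   # everything below the diagonal is zero
            return True
    return False

# Arikan's 2x2 kernel [1 0; 1 1]: no column swap makes it upper triangular,
# so by the criterion quoted above it polarizes binary-input channels.
G2 = [[1, 0], [1, 1]]
print(has_upper_triangular_column_permutation(G2))                    # False -> polarizing
print(has_upper_triangular_column_permutation(np.eye(2, dtype=int)))  # True  -> not polarizing
```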

374 citations


Posted Content
TL;DR: It is shown that properly constrained nuclear-norm minimization stably recovers a low-rank matrix from a constant number of noisy measurements per degree of freedom; this seems to be the first result of this nature.
Abstract: This paper presents several novel theoretical results regarding the recovery of a low-rank matrix from just a few measurements consisting of linear combinations of the matrix entries. We show that properly constrained nuclear-norm minimization stably recovers a low-rank matrix from a constant number of noisy measurements per degree of freedom; this seems to be the first result of this nature. Further, the recovery error from noisy data is within a constant of three targets: 1) the minimax risk, 2) an oracle error that would be available if the column space of the matrix were known, and 3) a more adaptive oracle error which would be available with the knowledge of the column space corresponding to the part of the matrix that stands above the noise. Lastly, the error bounds regarding low-rank matrices are extended to provide an error bound when the matrix has full rank with decaying singular values. The analysis in this paper is based on the restricted isometry property (RIP) introduced in [6] for vectors, and in [22] for matrices.
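As an illustration of the nuclear-norm penalty the paper analyzes (not the paper's estimator or proof technique), the sketch below applies singular-value soft-thresholding, the proximal operator of τ‖·‖_*, to a noisy low-rank matrix; all names and parameter values are invented for the example.

```python
import numpy as np

def svt(Y, tau):
    """Singular-value soft-thresholding: the proximal operator of tau*||.||_*,
    a basic building block of nuclear-norm minimization algorithms."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# Toy demo: a rank-2 matrix observed with noise; thresholding suppresses the
# small "noise" singular values and keeps the dominant low-rank part.
rng = np.random.default_rng(0)
L = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
Y = L + 0.05 * rng.standard_normal((30, 20))
X_hat = svt(Y, tau=1.0)
print(np.linalg.matrix_rank(X_hat, tol=1e-8))   # expect 2: only the dominant singular values survive
```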

198 citations


Journal ArticleDOI
TL;DR: This work investigates the permanent of a square matrix over a field and computes it in ways different from Ryser's formula or the standard definition.
Abstract: We investigate the permanent of a square matrix over a field and calculate it in ways different from Ryser's formula or the standard definition. One formula is related to symmetric tensors and has the same efficiency O(2^m m) as Ryser's method. Another algebraic method in the prime characteristic case uses partial differentiation.
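For comparison with the methods discussed, here is a straightforward (non-Gray-code) implementation of Ryser's inclusion-exclusion formula for the permanent; this is the standard formula the paper compares against, not one of its new methods.

```python
import itertools
import numpy as np

def permanent_ryser(A):
    """Permanent of a square matrix via Ryser's inclusion-exclusion formula:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^|S| * prod_i (sum_{j in S} a_ij)."""
    A = np.asarray(A)
    n = A.shape[0]
    total = 0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            row_sums = A[:, cols].sum(axis=1)
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

A = np.array([[1, 2], [3, 4]])
print(permanent_ryser(A))   # 1*4 + 2*3 = 10
```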

129 citations


Journal ArticleDOI
TL;DR: It is proved that these pencils are linearizations even when $P(\lambda)$ is a singular square matrix polynomial, and it is shown explicitly how to recover the left and right minimal indices and minimal bases of the polynomial from the minimal indices and bases of these linearizations.
Abstract: A standard way of dealing with a matrix polynomial $P(\lambda)$ is to convert it into an equivalent matrix pencil—a process known as linearization. For any regular matrix polynomial, a new family of linearizations generalizing the classical first and second Frobenius companion forms has recently been introduced by Antoniou and Vologiannidis, extending some linearizations previously defined by Fiedler for scalar polynomials. We prove that these pencils are linearizations even when $P(\lambda)$ is a singular square matrix polynomial, and show explicitly how to recover the left and right minimal indices and minimal bases of the polynomial $P(\lambda)$ from the minimal indices and bases of these linearizations. In addition, we provide a simple way to recover the eigenvectors of a regular polynomial from those of any of these linearizations, without any computational cost. The existence of an eigenvector recovery procedure is essential for a linearization to be relevant for applications.

115 citations


01 Jan 2010
TL;DR: The Randic energy is defined as the sum of the absolute values of the eigenvalues of the Randic matrix, and some of its properties, in particular lower and upper bounds, are established.
Abstract: If G is a graph on n vertices, and d_i is the degree of its i-th vertex, then the Randic matrix of G is the square matrix of order n whose (i, j)-entry is equal to 1/√(d_i d_j) if the i-th and j-th vertex of G are adjacent, and zero otherwise. This matrix occurs in a natural way within Laplacian spectral theory, and provides the non-trivial part of the so-called normalized Laplacian matrix. In spite of its obvious relation to the famous Randic index, the Randic matrix seems to have not been much studied in mathematical chemistry. In this paper we define the Randic energy as the sum of the absolute values of the eigenvalues of the Randic matrix, and establish some of its properties, in particular lower and upper bounds for it.
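A small sketch of the definitions above, assuming a simple-graph adjacency matrix as input: it forms the Randic matrix R = D^{-1/2} A D^{-1/2} and sums the absolute eigenvalues to get the Randic energy.

```python
import numpy as np

def randic_energy(adjacency):
    """Randic energy: sum of absolute eigenvalues of the Randic matrix R,
    where R[i, j] = 1/sqrt(d_i * d_j) for adjacent vertices i, j."""
    A = np.asarray(adjacency, dtype=float)
    d = A.sum(axis=1)                     # vertex degrees
    with np.errstate(divide="ignore"):
        Dinv_sqrt = np.diag(np.where(d > 0, 1.0 / np.sqrt(d), 0.0))
    R = Dinv_sqrt @ A @ Dinv_sqrt         # (i, j) entry is 1/sqrt(d_i d_j) if i ~ j
    eigvals = np.linalg.eigvalsh(R)       # R is symmetric
    return np.abs(eigvals).sum()

# Path graph P3 (vertices 0-1-2): degrees (1, 2, 1); the Randic matrix has
# eigenvalues 0 and +-1, so the Randic energy is 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
print(randic_energy(A))
```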

95 citations


Journal ArticleDOI
TL;DR: Based on a quadratically convergent method, a family of iterative methods for computing the approximate inverse of a square matrix is presented; these methods can also be used to compute the inner inverse, and their convergence proofs are given using fundamental matrix tools.
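The TL;DR gives no details of the family itself, so the sketch below only shows the classical quadratically convergent Newton–Schulz iteration that such families are typically built on; treating this as representative of the paper's methods is an assumption.

```python
import numpy as np

def newton_schulz_inverse(A, iters=30):
    """Classical Newton-Schulz iteration X_{k+1} = X_k (2I - A X_k).
    Converges quadratically to A^{-1} when ||I - A X_0|| < 1."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # A standard safe starting guess: X_0 = A^T / (||A||_1 * ||A||_inf)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(np.allclose(newton_schulz_inverse(A) @ A, np.eye(2)))   # True
```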

73 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose an algorithm for finding the finest simultaneous block-diagonalization of a finite number of square matrices, or equivalently the irreducible decomposition of a matrix *-algebra given in terms of its generators.
Abstract: An algorithm is proposed for finding the finest simultaneous block-diagonalization of a finite number of square matrices, or equivalently the irreducible decomposition of a matrix *-algebra given in terms of its generators. This extends the approach initiated by Murota–Kanno–Kojima–Kojima. The algorithm, composed of numerical-linear algebraic computations, does not require any algebraic structure to be known in advance. The main ingredient of the algorithm is the Schur decomposition and its skew-Hamiltonian variant for eigenvalue computation.
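The full block-diagonalization algorithm is considerably more involved; the snippet below only demonstrates its stated main ingredient, the Schur decomposition, via scipy.linalg.schur.

```python
import numpy as np
from scipy.linalg import schur

# The Schur decomposition A = Q T Q^H (Q unitary, T upper triangular) is the
# eigenvalue-computation kernel the block-diagonalization algorithm relies on.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
T, Q = schur(A, output="complex")
print(np.allclose(Q @ T @ Q.conj().T, A))   # True: A is recovered
print(np.allclose(np.tril(T, -1), 0))       # True: T is upper triangular
```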

61 citations


Journal ArticleDOI
TL;DR: A unified treatment of continuous, semi-discrete (Ablowitz–Ladik) and fully discrete matrix NLS systems is presented and a large part of this work is devoted to an exploration of the corresponding solutions, in particular regularity and asymptotic behaviour of matrix soliton solutions.
Abstract: Using a bidifferential graded algebra approach to 'integrable' partial differential or difference equations, a unified treatment of continuous, semi-discrete (Ablowitz–Ladik) and fully discrete matrix NLS systems is presented. These equations originate from a universal equation within this framework, by specifying a representation of the bidifferential graded algebra and imposing a reduction. By application of a general result, corresponding families of exact solutions are obtained that in particular comprise the matrix soliton solutions in the focusing NLS case. The solutions are parametrized in terms of constant matrix data subject to a Sylvester equation (which previously appeared as a rank condition in the integrable systems literature). These data exhibit a certain redundancy, which we diminish to a large extent. More precisely, we first consider more general AKNS-type systems from which two different matrix NLS systems emerge via reductions. In the continuous case, the familiar Hermitian conjugation reduction leads to a continuous matrix (including vector) NLS equation, but it is well known that this does not work as well in the discrete cases. On the other hand, there is a complex conjugation reduction, which apparently has not been studied previously. It leads to square matrix NLS systems, but works in all three cases (continuous, semi- and fully discrete). A large part of this work is devoted to an exploration of the corresponding solutions, in particular regularity and asymptotic behaviour of matrix soliton solutions.

60 citations


Journal ArticleDOI
Feng Ding
TL;DR: It is proved that a companion matrix is similar to a diagonal matrix or a Jordan matrix, the transformation matrices between them are given, and the similarity transformation and the companion matrix are applied to system identification.
Abstract: Special matrices are very useful in signal processing and control systems. This paper studies the transformations and relationships between some special matrices. The conditions that a matrix is similar to a companion matrix are derived. It is proved that a companion matrix is similar to a diagonal matrix or Jordan matrix, and the transformation matrices between them are given. Finally, we apply the similarity transformation and the companion matrix to system identification.
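A quick numerical illustration (not taken from the paper) of the similarity statement: for a monic polynomial with distinct roots, the companion matrix is diagonalized by the Vandermonde matrix built from those roots.

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of the monic polynomial x^n + c_{n-1} x^{n-1} + ... + c_0,
    with the coefficients placed in the last row."""
    c = np.asarray(coeffs, dtype=float)
    n = len(c)
    C = np.zeros((n, n))
    C[:-1, 1:] = np.eye(n - 1)
    C[-1, :] = -c
    return C

# p(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3): distinct roots, so the
# companion matrix is similar to diag(1, 2, 3) via the Vandermonde matrix
# whose k-th column is (1, lambda_k, lambda_k^2)^T.
C = companion([-6.0, 11.0, -6.0])
roots = np.array([1.0, 2.0, 3.0])
V = np.vander(roots, increasing=True).T        # columns (1, r, r^2)
print(np.allclose(np.linalg.inv(V) @ C @ V, np.diag(roots)))   # True
```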

58 citations


Journal ArticleDOI
TL;DR: Rota's basis conjecture is shown to be true for a vector space of dimension p-1 over any field of characteristic zero or p, and over all other characteristics except possibly a finite number.
Abstract: A formula for Glynn's hyperdeterminant $\det_p$ ($p$ prime) of a square matrix shows that the number of ways to decompose any integral doubly stochastic matrix with row and column sums $p-1$ into $p-1$ permutation matrices with even product, minus the number of ways with odd product, is 1 (mod $p$). It follows that the number of even Latin squares of order $p-1$ is not equal to the number of odd Latin squares of that order. Thus Rota's basis conjecture is true for a vector space of dimension $p-1$ over any field of characteristic zero or $p$, and all other characteristics except possibly a finite number. It is also shown where there is a mistake in a published proof that claimed to multiply the known dimensions by powers of two, and that also claimed that the number of even Latin squares is greater than the number of odd Latin squares. Now, 26 is the smallest unknown case where Rota's basis conjecture for vector spaces of even dimension over a field is unsolved.

56 citations


Journal ArticleDOI
TL;DR: In this article, a second-order statistics-based algorithm is proposed for two-dimensional (2D) direction-of-arrival (DOA) estimation of coherent signals; the problem is solved by arranging the elements of the correlation matrix of the signal received from a uniform rectangular array into a block Hankel matrix.
Abstract: A second-order statistics-based algorithm is proposed for two-dimensional (2D) direction-of-arrival (DOA) estimation of coherent signals. The problem is solved by arranging the elements of the correlation matrix of the signal received from a uniform rectangular array into a block Hankel matrix. In noiseless cases, it is shown that the rank of the block Hankel matrix equals the number of DOAs and is independent of the coherency of the incoming waves. Therefore, the signal subspace of the block Hankel matrix can be estimated properly and spans the same column space as the array response matrix. Two matrix pencil pairs containing the DOA parameters are extracted from the signal subspace. This matrix pencil-based estimation problem is then resolved using our previously proposed pairing-free 2D parameter estimation algorithm. Simulation results show that the proposed algorithm outperforms the spatial smoothing method in terms of mean square error (MSE).
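The exact arrangement of correlation entries used in the paper is not reproduced here; the sketch below is only a generic block-Hankel constructor of the kind such algorithms rely on, with invented block data.

```python
import numpy as np

def block_hankel(blocks, rows):
    """Stack a list of equally-sized blocks B_0, ..., B_{m-1} into the block
    Hankel matrix whose (i, j) block is B_{i+j} (i = 0..rows-1)."""
    cols = len(blocks) - rows + 1
    return np.block([[blocks[i + j] for j in range(cols)] for i in range(rows)])

# Toy example with 2x2 blocks: the block anti-diagonals are constant.
blocks = [np.full((2, 2), k) for k in range(5)]
H = block_hankel(blocks, rows=3)   # 3 block rows, 3 block columns
print(H.shape)                     # (6, 6)
```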

Journal ArticleDOI
TL;DR: This work proves numerical stability of the randomized preprocessing approach and extends it to solving nonsingular linear systems, inversion and generalized (Moore–Penrose) inversion of general and structured matrices by means of Newton’s iteration.


Journal ArticleDOI
TL;DR: In this paper, explicit representations for the Drazin inverse of M are presented under the condition that BD^iC = 0 for i = 0, 1, …, n−1, where n is the order of D.

Journal ArticleDOI
TL;DR: In this article, bounds on the variance of a finite universe are derived, together with related inequalities for the roots of polynomial equations and bounds for the largest and smallest eigenvalues of a square matrix with real spectrum.
Abstract: We derive bounds on the variance of a finite universe. Some related inequalities for the roots of the polynomial equations and bounds for the largest and smallest eigenvalues of a square matrix with real spectrum are obtained.
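The paper's precise inequalities are not quoted above, so the demo below checks classical bounds of the same flavour (the Wolkowicz–Styan spectrum bounds in terms of the mean and variance of the eigenvalues); treat the choice of bounds as an assumption, not a statement of the paper's results.

```python
import numpy as np

# For a matrix with real spectrum, let m = tr(A)/n and s^2 = tr(A^2)/n - m^2
# (the mean and variance of the eigenvalues).  The classical Wolkowicz-Styan
# bounds give
#     lambda_max >= m + s / sqrt(n - 1),    lambda_min <= m - s / sqrt(n - 1).
rng = np.random.default_rng(2)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2                   # symmetric, hence real spectrum
n = A.shape[0]
m = np.trace(A) / n
s = np.sqrt(np.trace(A @ A) / n - m ** 2)
eig = np.linalg.eigvalsh(A)
print(eig.max() >= m + s / np.sqrt(n - 1))   # True
print(eig.min() <= m - s / np.sqrt(n - 1))   # True
```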

Proceedings ArticleDOI
29 Jul 2010
TL;DR: In this article, the minimum dwell time of switched linear systems was investigated by exploiting homogeneous polynomial Lyapunov functions and convex optimization problems based on linear matrix inequalities (LMIs).
Abstract: This paper investigates the minimum dwell time for switched linear systems. It is shown that a sequence of upper bounds of the minimum dwell time can be computed by exploiting homogeneous polynomial Lyapunov functions and convex optimization problems based on linear matrix inequalities (LMIs). This sequence is obtained by adopting two possible representations of homogeneous polynomials, one based on Kronecker products, and the other on the square matrix representation (SMR). Some examples illustrate the use and the potentialities of the proposed approach. It is also conjectured that the proposed approach is asymptotically nonconservative, i.e. the exact minimum dwell time is obtained by using homogeneous polynomials with sufficiently large degree.

Journal ArticleDOI
TL;DR: In this article, Cramer's rules for some left, right and two-sided quaternion matrix equations are obtained within the framework of the theory of the column and row determinants.

Journal ArticleDOI
TL;DR: In this article, the stability analysis of a closed-loop SISO linear system with a controller described by the equations mentioned is investigated, and a stability condition based on a transient denominator matrix condition number is proposed.
Abstract: Variable-, fractional-order backward difference is a generalisation of the commonly known difference or sum. Equations with these differences can be used to describe variable-, fractional-order digital control strategies. One should mention that classical tools such as a state-space description and discrete transfer function cannot be used in the analysis and synthesis of this type of system. Equations describing a closed-loop system are proposed. They contain square matrices imitating the action of matrices in the system polynomial matrix description. This paper focuses on the stability analysis of a closed-loop SISO linear system with a controller described by the equations mentioned. A stability condition based on a transient denominator matrix condition number is proposed. The investigations are supported by two numerical examples.
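As a sketch of the underlying operator (one common Grünwald–Letnikov-style convention, not necessarily the exact definition used in the paper), the code below evaluates a variable-, fractional-order backward difference of a sampled signal; the helper names are hypothetical.

```python
import numpy as np

def gl_coeffs(alpha, n):
    """Grunwald-Letnikov coefficients c_j = (-1)^j * binom(alpha, j), j = 0..n,
    computed by the stable recursion c_j = c_{j-1} * (1 - (alpha + 1)/j)."""
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    return c

def vo_backward_difference(x, alpha_of_k):
    """Variable-, fractional-order backward difference of a sampled signal x,
    using (one common convention) the order alpha_k in force at sample k:
        (Delta^{alpha_k} x)_k = sum_{j=0}^{k} c_j(alpha_k) * x_{k-j}."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    for k in range(len(x)):
        c = gl_coeffs(alpha_of_k(k), k)
        out[k] = np.dot(c, x[k::-1])
    return out

# Example: order sweeping from 0.2 to 1.0 over the record; for a constant
# order of exactly 1 this reduces to the ordinary backward difference.
x = np.sin(0.3 * np.arange(50))
d = vo_backward_difference(x, lambda k: 0.2 + 0.8 * k / 49)
print(d[:5])
```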

Journal ArticleDOI
TL;DR: In this paper, it was shown that determinantal varieties defined by maximal minors of a generic matrix have a non-commutative desingularization, in that the authors construct a maximal Cohen-Macaulay module over such a variety whose endomorphism ring is Cohen-Macaulay and has finite global dimension.
Abstract: We show that determinantal varieties defined by maximal minors of a generic matrix have a non-commutative desingularization, in that we construct a maximal Cohen-Macaulay module over such a variety whose endomorphism ring is Cohen-Macaulay and has finite global dimension. In the case of the determinant of a square matrix, this gives a non-commutative crepant resolution.

Journal ArticleDOI
TL;DR: In this paper, a detailed study of finite-dimensional modules defined on bicomplex numbers is presented, including linear operators, linear bases, orthogonal bases, self-adjoint operators and Hilbert spaces.
Abstract: This paper is a detailed study of finite-dimensional modules defined on bicomplex numbers. A number of results are proved on bicomplex square matrices, linear operators, orthogonal bases, self-adjoint operators and Hilbert spaces, including the spectral decomposition theorem. Applications to concepts relevant to quantum mechanics, like the evolution operator, are pointed out.
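A minimal sketch of bicomplex arithmetic as described above, representing a bicomplex number by a pair of complex numbers and, equivalently, by a 2 × 2 complex matrix; this is standard material, not code from the paper.

```python
import numpy as np

def bicomplex_mul(z, w):
    """Product of bicomplex numbers z = (z1, z2) ~ z1 + j*z2 and w = (w1, w2),
    where z1, z2 are ordinary complex numbers, i^2 = j^2 = -1 and ij = ji."""
    z1, z2 = z
    w1, w2 = w
    return (z1 * w1 - z2 * w2, z1 * w2 + z2 * w1)

def as_matrix(z):
    """2x2 complex matrix representing multiplication by z = z1 + j*z2;
    bicomplex square matrices can likewise be modelled by complex block matrices."""
    z1, z2 = z
    return np.array([[z1, -z2], [z2, z1]])

z, w = (1 + 2j, 3 - 1j), (0.5j, 2 + 1j)
p = bicomplex_mul(z, w)
print(np.allclose(as_matrix(z) @ np.array(w), np.array(p)))   # True: same product
```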

01 Jan 2010
TL;DR: In this paper, the linear complementarity problem (LCP) for a given n-vector q and a real square matrix M ∈ ℝ^{n×n} is studied.
Abstract: For a given n-vector q and a real square matrix M ∈ ℝ^{n×n}, the linear complementarity problem, denoted LCP(M, q), is that of finding a nonnegative vector z ∈ ℝ^n such that z^T(Mz + q) = 0 and Mz + q ≥ 0. In this paper we suppose that the matrix M is symmetric and positive definite and consider the set S = {z ∈ ℝ^n : z > 0 and Mz + q > 0}.
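Directly from the definition above, a small checker for a candidate LCP solution (the paper's own solution method is not reproduced here):

```python
import numpy as np

def is_lcp_solution(M, q, z, tol=1e-9):
    """Check the LCP(M, q) conditions from the definition above:
    z >= 0, w = Mz + q >= 0 and complementarity z^T w = 0."""
    M, q, z = np.asarray(M, float), np.asarray(q, float), np.asarray(z, float)
    w = M @ z + q
    return bool(np.all(z >= -tol) and np.all(w >= -tol) and abs(z @ w) <= tol)

# Symmetric positive definite M, as assumed in the paper.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
z = np.array([0.5, 0.0])         # then Mz + q = (0, 1.5) and z^T(Mz + q) = 0
print(is_lcp_solution(M, q, z))  # True
```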

Journal ArticleDOI
TL;DR: This work derives and exploits key properties of partial $(M,N)$-isometries and orthosymmetric pairs of scalar products, and also employs an appropriate generalized Moore-Penrose pseudoinverse.
Abstract: The polar decomposition of a square matrix has been generalized by several authors to scalar products on $\mathbb{R}^n$ or $\mathbb{C}^n$ given by a bilinear or sesquilinear form. Previous work has focused mainly on the case of square matrices, sometimes with the assumption of a Hermitian scalar product. We introduce the canonical generalized polar decomposition $A = WS$, defined for general $m\times n$ matrices $A$, where $W$ is a partial $(M,N)$-isometry and $S$ is $N$-selfadjoint with nonzero eigenvalues lying in the open right half-plane, and the nonsingular matrices $M$ and $N$ define scalar products on $\mathbb{C}^m$ and $\mathbb{C}^n$, respectively. We derive conditions under which a unique decomposition exists and show how to compute the decomposition by matrix iterations. Our treatment derives and exploits key properties of partial $(M,N)$-isometries and orthosymmetric pairs of scalar products, and also employs an appropriate generalized Moore-Penrose pseudoinverse. We relate commutativity of the factors in the canonical generalized polar decomposition to an appropriate definition of normality. We also consider a related generalized polar decomposition $A = WS$, defined only for square matrices $A$ and in which $W$ is an automorphism; we analyze its existence and the uniqueness of the selfadjoint factor when $A$ is singular.
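For orientation, the sketch below computes the ordinary polar decomposition via the SVD, i.e. the special case M = N = I of the generalized decomposition studied in the paper; the general scalar-product case requires the matrix iterations described there.

```python
import numpy as np

def polar_decomposition(A):
    """Standard polar decomposition A = W S via the SVD (the special case
    M = N = I of the generalized decomposition discussed above):
    W has orthonormal columns and S = V diag(sigma) V^H is Hermitian PSD."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    W = U @ Vh
    S = Vh.conj().T @ np.diag(s) @ Vh
    return W, S

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))
W, S = polar_decomposition(A)
print(np.allclose(W @ S, A))                                             # True
print(np.allclose(W.T @ W, np.eye(3)))                                   # True: isometry factor
print(np.allclose(S, S.T) and np.all(np.linalg.eigvalsh(S) >= -1e-12))   # True: selfadjoint PSD factor
```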

Journal ArticleDOI
TL;DR: In this paper, the authors studied the matrix equation $X + \sum_{i=1}^{m} A_i^* X^{-1} A_i = I$, where the $A_i$ ($i = 1, 2, \ldots, m$) are square matrices, and obtained some conditions for the existence of a positive definite solution of this equation.
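The paper's existence conditions are not stated in the TL;DR, so the sketch below just runs the natural fixed-point iteration X_{k+1} = I − Σ A_i^* X_k^{-1} A_i from X_0 = I on small, deliberately tame matrices; whether this is the paper's own scheme is an assumption, and convergence needs suitable conditions on the A_i.

```python
import numpy as np

def fixed_point_solve(A_list, iters=200):
    """Natural fixed-point iteration X_{k+1} = I - sum_i A_i^* X_k^{-1} A_i,
    started from X_0 = I.  (A plausible sketch, not necessarily the scheme
    analysed in the paper; the A_i must be 'small enough' for it to converge
    to a positive definite solution.)"""
    n = A_list[0].shape[0]
    X = np.eye(n)
    for _ in range(iters):
        X = np.eye(n) - sum(A.conj().T @ np.linalg.inv(X) @ A for A in A_list)
    return X

A1 = 0.2 * np.array([[1.0, 0.5], [0.0, 1.0]])
A2 = 0.1 * np.array([[0.0, 1.0], [1.0, 0.0]])
X = fixed_point_solve([A1, A2])
residual = X + A1.T @ np.linalg.inv(X) @ A1 + A2.T @ np.linalg.inv(X) @ A2 - np.eye(2)
print(np.linalg.norm(residual))   # small: X (approximately) solves the equation
```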

Journal ArticleDOI
TL;DR: In this paper, an explicit representation of the Drazin inverse of a $2 \times 2$ block matrix $M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}$, where $A$ and $D$ are square matrices, was established.

Journal ArticleDOI
TL;DR: In this article, the geometric relationship between the eigenspaces of a matrix and its adjoint is used to determine whether a matrix having distinct eigenvalues is unitarily equivalent to a complex symmetric matrix.
Abstract: We develop several methods, based on the geometric relationship between the eigenspaces of a matrix and its adjoint, for determining whether a square matrix having distinct eigenvalues is unitarily equivalent to a complex symmetric matrix. Equivalently, we characterize those matrices having distinct eigenvalues which lie in the unitary orbit of the complex symmetric matrices.

Journal ArticleDOI
TL;DR: In this article, the authors developed a new inversion-free method for obtaining the minimal Hermitian positive definite solution of the matrix rational equation $X + A^* X^{-1} A = I$, where $I$ is the identity matrix and $A$ is a given nonsingular matrix.

Journal ArticleDOI
TL;DR: The iterative method proposed in this paper has faster convergence and higher accuracy than the iterative methods proposed in earlier work such as [G.-X. Huang, F. Yin, and K. Guo, An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB=C].
Abstract: This paper presents a matrix iterative method to compute the solutions of the matrix equation AXB=C with unknown matrix X∈S, where S is a constrained set of matrices such as the symmetric, symmetric-R-symmetric and (R, S)-symmetric matrices. By this iterative method, for any initial matrix X0∈S, a solution X* can be obtained within finitely many iteration steps if exact arithmetic is used, and the solution X* with the minimum Frobenius norm can be obtained by choosing a special kind of initial matrix. The solution nearest to a given matrix X̃ in Frobenius norm can be obtained by first finding the minimum Frobenius norm solution of a new compatible matrix equation. The numerical examples given here show that the iterative method proposed in this paper has faster convergence and higher accuracy than the iterative methods proposed in [G.-X. Huang, F. Yin, and K. Guo, An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB=C, J. Comput. Appl. Math. 212 (2008), pp. 231-244; Y. Lei and A.-P. Liao, A minimal residual algorithm for the inconsistent matrix equation AXB=C over symmetric matrices, Appl. Math. Comput. 188 (2007), pp. 499-513; Z.-Y. Peng, An iterative method for the least squares symmetric solution of the linear matrix equation AXB=C, Appl. Math. Comput. 170 (2005), pp. 711-723; Y.-X. Peng, X.-Y. Hu, and L. Zhang, An iteration method for the symmetric solutions and the optimal approximation solution of the matrix equation AXB=C, Appl. Math. Comput. 160 (2005), pp. 763-777].
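To make the problem setting concrete (this is not the paper's finite-step method), here is a simple projected-gradient sketch for the symmetric-solution case of AXB = C; the step size and iteration count are arbitrary choices for the toy example.

```python
import numpy as np

def symmetric_solution_axb(A, B, C, iters=20000):
    """Projected-gradient sketch for min ||A X B - C||_F over symmetric X.
    (An illustration of the problem setting only -- the paper's iteration is a
    different, finite-step method with a minimum-norm property.)"""
    n = A.shape[1]
    X = np.zeros((n, n))
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 * np.linalg.norm(B, 2) ** 2)
    for _ in range(iters):
        G = A.T @ (A @ X @ B - C) @ B.T   # gradient of 0.5*||AXB - C||_F^2
        X = X - step * G
        X = (X + X.T) / 2                 # project onto symmetric matrices
    return X

rng = np.random.default_rng(4)
A, B = rng.standard_normal((4, 3)), rng.standard_normal((3, 5))
X_true = rng.standard_normal((3, 3)); X_true = (X_true + X_true.T) / 2
C = A @ X_true @ B                        # consistent right-hand side
X = symmetric_solution_axb(A, B, C)
print(np.linalg.norm(A @ X @ B - C))      # small; decreases further with more iterations
```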

Journal ArticleDOI
TL;DR: In this article, the authors study orthogonal and symplectic matrix models with polynomial potentials and multi-interval supports of the equilibrium measure, and find bounds on the rate of convergence and on the variance of linear eigenvalue statistics.
Abstract: We study orthogonal and symplectic matrix models with polynomial potentials and multi interval supports of the equilibrium measure. For these models we find the bounds (similar to the case of hermitian matrix models) for the rate of convergence of linear eigenvalue statistics and for the variance of linear eigenvalue statistics and find the logarithms of partition functions up to the order O(1). We prove also universality of local eigenvalue statistics in the bulk.

Journal ArticleDOI
TL;DR: An iterative algorithm is presented for solving a class of complex matrix equations that involve both the conjugate and the transpose of the unknown matrices and include some previously investigated matrix equations as special cases.

Journal ArticleDOI
TL;DR: Extrapolation procedures on the complex field may give a practical and efficient way to compute the PageRank vector when $c$ is close to $1$.
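For context, the standard power iteration for the PageRank vector with damping factor c is sketched below (not the extrapolation procedure of the paper); its slow convergence as c approaches 1 is exactly what such extrapolation schemes aim to remedy.

```python
import numpy as np

def pagerank(P, c=0.85, tol=1e-10, max_iter=10000):
    """Standard power iteration x <- c * P x + (1 - c) * v for the PageRank
    vector, where P is a column-stochastic link matrix and v is uniform.
    Convergence degrades as the damping factor c approaches 1, which is what
    motivates the extrapolation procedures referred to above."""
    n = P.shape[0]
    v = np.full(n, 1.0 / n)
    x = v.copy()
    for _ in range(max_iter):
        x_new = c * (P @ x) + (1 - c) * v
        if np.linalg.norm(x_new - x, 1) < tol:
            break
        x = x_new
    return x_new

# Tiny 3-page web: column j lists where page j links to (uniformly).
P = np.array([[0.0, 0.5, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0]])
print(pagerank(P, c=0.85))   # entries sum to 1
```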