scispace - formally typeset
Topic

Square matrix

About: Square matrix is a research topic. Over the lifetime, 5000 publications have been published within this topic receiving 92428 citations.


Papers
Journal ArticleDOI
TL;DR: High-speed pulse amplitude modulated (PAM) data transmission over telephone channels is only possible when adaptive equalization is used to mitigate the linear distortion found on the (initially unknown) channel.
Abstract: High-speed pulse amplitude modulated (PAM) data transmission over telephone channels is only possible when adaptive equalization is used to mitigate the linear distortion found on the (initially unknown) channel. At the beginning of the equalization procedure, the tap weights are adjusted to minimize the intersymbol interference between pulses. The “stochastic gradient” algorithm is an iterative procedure commonly used for setting the coefficients in these and other adaptive filters, but a proper understanding of its convergence has never been obtained. It has been common analytical practice to invoke an assumption stating that a certain sequence of random vectors which direct the “hunting” of the equalizer are statistically independent. Everyone acknowledges this assumption to be far from true, just as everyone agrees that the final predictions made using it are in excellent agreement with experiments and simulations. We take the resolution of this question as our main problem. When one begins to analyze the performance of the algorithm, one sees that the average mean-square error after the nth iteration requires knowing, as an intermediate step, the mathematical expectation of the product of a sequence of statistically dependent matrices. We transform the latter problem to a space of sufficiently high dimension where the required average may be obtained from a canonical equation V_{n+1} = A(α)V_n + F. Here A(α) is a square matrix, depending on the “step-size” α of the original algorithm, and V_n and F are vectors. The mean-square error is calculable from the solution V_n.
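The tap-weight recursion the abstract describes can be sketched as a standard stochastic-gradient (LMS) equalizer. The channel, step size, and tap count below are invented for illustration; only the update rule follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a short linear-distortion channel and binary PAM symbols.
channel = np.array([0.5, 1.0, 0.3])      # unknown linear distortion (assumed)
n_taps = 11                               # equalizer tap weights
alpha = 0.01                              # step size of the algorithm
n_iters = 5000
delay = (n_taps + len(channel)) // 2 - 1  # align output with the sent symbol

symbols = rng.choice([-1.0, 1.0], size=n_iters + n_taps)  # PAM training data
received = np.convolve(symbols, channel)[:len(symbols)]

w = np.zeros(n_taps)
errors = []
for n in range(n_taps, n_iters):
    x = received[n - n_taps + 1:n + 1][::-1]  # regressor: the random vector
    y = w @ x                                  # that directs the "hunting"
    e = symbols[n - delay] - y                 # error against training symbol
    w += alpha * e * x                         # stochastic-gradient update
    errors.append(e * e)
```

Note that consecutive regressor vectors x share all but one sample, so they are statistically dependent; the common analytical practice the abstract questions is to treat them as independent anyway.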

237 citations

Journal ArticleDOI
TL;DR: This paper introduces a new non-iterative method for linear array synthesis based on the matrix pencil method (MPM), which can synthesize a nonuniform linear array with a reduced number of elements, and can also be used to reduce the number of elements for linear arrays designed by other synthesis techniques.
Abstract: The synthesis of a nonuniform antenna array with as few elements as possible has considerable practical applications. This paper introduces a new non-iterative method for linear array synthesis based on the matrix pencil method (MPM). The method can synthesize a nonuniform linear array with a reduced number of elements, and can also be used to reduce the number of elements for linear arrays designed by other synthesis techniques. In the proposed method, the desired radiation pattern is first sampled to form a discrete pattern data set. Then we organize the discrete data set in the form of a Hankel matrix and perform the singular value decomposition (SVD) of the matrix. By discarding the non-principal singular values, we obtain an optimal lower-rank approximation of the Hankel matrix. The lower-rank matrix actually corresponds to fewer antenna elements. The matrix pencil method is then utilized to reconstruct the excitation and location distributions from the approximated matrix. Numerical examples show the effectiveness and advantages of the proposed synthesis method.
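The Hankel-plus-SVD-plus-pencil pipeline can be sketched on synthetic data. Here the "pattern" samples are a sum of M complex exponentials, with poles standing in for element locations and amplitudes for excitations; all numbers are invented for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical sampled pattern: a sum of M complex exponentials.
M = 3
z_true = np.exp(1j * np.array([0.4, 1.1, 2.0]))  # poles (assumed values)
amps = np.array([1.0, 0.7, 0.5])                 # excitations (assumed values)
N = 40
n = np.arange(N)
f = (amps * z_true ** n[:, None]).sum(axis=1)    # discrete pattern data set

# Organize the samples as a Hankel matrix.
L = N // 2
Y = np.array([f[i:i + L + 1] for i in range(N - L)])

# SVD; keep only the M principal singular values (lower-rank approximation).
_, _, Vh = np.linalg.svd(Y, full_matrices=False)
V1 = Vh[:M, :-1]     # principal row space, last column dropped
V2 = Vh[:M, 1:]      # principal row space, first column dropped

# Matrix pencil: the poles are the eigenvalues of V2 @ pinv(V1).
z_est = np.linalg.eigvals(V2 @ np.linalg.pinv(V1))
```

With noiseless data the recovered poles match the true ones essentially to machine precision; with noisy samples, the rank truncation is what provides the smoothing.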

232 citations

Journal ArticleDOI
TL;DR: In this paper, the conditions under which unique differentiable functions λ(X) and u(X) exist in a neighborhood of a square matrix X0 (complex or otherwise), satisfying the eigenvalue equation Xu = λu together with a normalization condition on u, were investigated.
Abstract: Let X0 be a square matrix (complex or otherwise) and u0 a (normalized) eigenvector associated with an eigenvalue λ0 of X0, so that the triple (X0, u0, λ0) satisfies the equation Xu = λu together with a normalization condition on u. We investigate the conditions under which unique differentiable functions λ(X) and u(X) exist in a neighborhood of X0 satisfying λ(X0) = λ0, u(X0) = u0, Xu = λu, and the normalization condition. We obtain the first and second derivatives of λ(X) and the first derivative of u(X). Two alternative expressions for the first derivative of λ(X) are also presented.
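One standard expression for the first derivative of a simple eigenvalue can be checked numerically: if u is a right eigenvector of X0 and l a matching left eigenvector scaled so that l @ u = 1, the derivative of λ(X) at X0 in a direction E is l @ E @ u. The matrix and direction below are random stand-ins, not examples from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
X0 = rng.standard_normal((n, n))   # hypothetical matrix with simple eigenvalues

vals, R = np.linalg.eig(X0)
i = 0
u = R[:, i]                        # right eigenvector of X0
l = np.linalg.inv(R)[i]            # matching left eigenvector, with l @ u = 1

E = rng.standard_normal((n, n))    # perturbation direction
dlam = l @ E @ u                   # first derivative of λ(X) in direction E

# Finite-difference comparison against an explicit small perturbation.
t = 1e-7
vals_t = np.linalg.eigvals(X0 + t * E)
lam_t = vals_t[np.argmin(np.abs(vals_t - vals[i]))]
fd = (lam_t - vals[i]) / t
```

The rows of inv(R) are left eigenvectors because R diagonalizes X0, and inv(R) @ R = I supplies the normalization l @ u = 1 automatically.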

224 citations

Journal ArticleDOI
TL;DR: In this paper, two transformation matrices are introduced, L and D, which contain zero and unit elements only, and they are used for maximum likelihood estimation of the multivariate normal distribution, the evaluation of Jacobians of transformations with symmetric or lower triangular matrix arguments, and the solution of matrix equations.
Abstract: Two transformation matrices are introduced, L and D, which contain zero and unit elements only. If A is an arbitrary (n, n) matrix, L eliminates from vec A the supradiagonal elements of A, while D performs the inverse transformation for symmetric A. Many properties of L and D are derived, in particular in relation to Kronecker products. The usefulness of the two matrices is demonstrated in three areas of mathematical statistics and matrix algebra: maximum likelihood estimation of the multivariate normal distribution, the evaluation of Jacobians of transformations with symmetric or lower triangular matrix arguments, and the solution of matrix equations.
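A minimal construction of the two matrices, following the common conventions (vec stacks columns; vech keeps the on- and below-diagonal elements column-wise): vech(A) = L vec(A) for any A, and vec(A) = D vech(A) for symmetric A. The names follow the abstract; the construction itself is the standard one.

```python
import numpy as np

def elimination_duplication(n):
    """Build the elimination matrix L and duplication matrix D for n x n A."""
    m = n * (n + 1) // 2
    L = np.zeros((m, n * n))
    D = np.zeros((n * n, m))
    row = 0
    for j in range(n):               # column of A
        for i in range(j, n):        # on- and below-diagonal rows
            L[row, j * n + i] = 1    # vech(A) = L @ vec(A): pick a_ij
            D[j * n + i, row] = 1    # vec(A) = D @ vech(A): place a_ij ...
            D[i * n + j, row] = 1    # ... and its symmetric partner a_ji
            row += 1
    return L, D

# For symmetric A, D recovers vec(A) from the eliminated half.
rng = np.random.default_rng(4)
B = rng.standard_normal((3, 3))
A = B + B.T                          # symmetric test matrix
L, D = elimination_duplication(3)
vecA = A.flatten(order="F")          # column stacking
```

Note that D @ L acts as the identity only on vectors vec(A) with A symmetric; for general A it symmetrizes the lower triangle across the diagonal.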

220 citations

Journal ArticleDOI
TL;DR: In this article, it was shown that the smoothed precision required to solve Ax = b, for any b, using Gaussian elimination without pivoting is logarithmic.
Abstract: Let A be an arbitrary matrix and let Ā be a slight random perturbation of A. We prove that it is unlikely that Ā has a large condition number. Using this result, we prove that it is unlikely that Ā has a large growth factor under Gaussian elimination without pivoting. By combining these results, we show that the smoothed precision necessary to solve Ax = b, for any b, using Gaussian elimination without pivoting is logarithmic. Moreover, when A is the all-zero square matrix, our results significantly improve the average-case analysis of Gaussian elimination without pivoting performed by Yeung and Chan (SIAM J. Matrix Anal. Appl., 18 (1997), pp. 499-517).
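The growth factor in question can be sketched directly: run elimination without pivoting and track how large intermediate entries get relative to the original matrix. Perturbing the all-zero matrix, as in the average case mentioned at the end, just yields a Gaussian matrix; the size and noise level below are invented for illustration.

```python
import numpy as np

def growth_factor(A):
    """One common growth-factor definition: max intermediate entry during
    Gaussian elimination without pivoting, divided by max original entry."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    g = np.abs(U).max()
    for k in range(n - 1):
        if U[k, k] == 0.0:
            raise ZeroDivisionError("zero pivot: elimination breaks down")
        U[k + 1:, k + 1:] -= np.outer(U[k + 1:, k] / U[k, k], U[k, k + 1:])
        U[k + 1:, k] = 0.0
        g = max(g, np.abs(U).max())
    return g / np.abs(A).max()

rng = np.random.default_rng(5)
n = 50
A_bar = np.zeros((n, n)) + 0.1 * rng.standard_normal((n, n))  # perturbed zeros
rho = growth_factor(A_bar)             # growth factor of the perturbed matrix
kappa = np.linalg.cond(A_bar)          # its condition number
```

For the unperturbed all-zero matrix the first pivot is zero and elimination breaks down immediately; the perturbation makes a zero pivot a probability-zero event, which is the qualitative point of the smoothed analysis.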

219 citations


Network Information
Related Topics (5)
Matrix (mathematics): 105.5K papers, 1.9M citations (84% related)
Polynomial: 52.6K papers, 853.1K citations (84% related)
Eigenvalues and eigenvectors: 51.7K papers, 1.1M citations (81% related)
Bounded function: 77.2K papers, 1.3M citations (80% related)
Hilbert space: 29.7K papers, 637K citations (79% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    22
2022    44
2021    115
2020    149
2019    134
2018    145