Topic

Spectrum of a matrix

About: Spectrum of a matrix is a research topic. Over its lifetime, 1,064 publications have been published within this topic, receiving 19,841 citations. The topic is also known as: matrix spectrum.


Papers
Patent
Teruyoshi Washizawa
04 Jun 2007
TL;DR: In this patent, a variance-covariance matrix of a matrix combining multivariate data and objective variables is obtained, and multiple eigenvalues and their corresponding eigenvectors are calculated by eigenvalue decomposition of the variance-covariance matrix.
Abstract: A variance-covariance matrix of a matrix having a combination of multivariate data and objective variables is obtained, and multiple eigenvalues and their corresponding eigenvectors are calculated by eigenvalue decomposition of the variance-covariance matrix. Accumulated contributions are calculated from the multiple eigenvalues in descending order of absolute value of the eigenvalues. Regression coefficients are calculated from eigenvalues and eigenvectors that correspond to accumulated contributions that exceed a predetermined value.
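Below is a minimal Python sketch of the general idea in this abstract, written as a plain principal-component-style regression with numpy. The function name, the 0.95 contribution threshold, and the choice to build the covariance matrix from the predictors alone are illustrative assumptions, not details taken from the patent.

import numpy as np

def pc_regression(X, y, contribution_threshold=0.95):
    # Variance-covariance matrix of the predictors (illustrative simplification;
    # the patent forms it from the combined data-and-objective matrix).
    S = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(S)              # eigenvalue decomposition

    # Sort eigenpairs by descending absolute eigenvalue and accumulate contributions.
    order = np.argsort(np.abs(eigvals))[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    contrib = np.cumsum(np.abs(eigvals)) / np.abs(eigvals).sum()
    k = int(np.searchsorted(contrib, contribution_threshold)) + 1   # components kept

    # Regression coefficients from the retained eigenvectors.
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    scores = Xc @ eigvecs[:, :k]
    gamma, *_ = np.linalg.lstsq(scores, yc, rcond=None)
    return eigvecs[:, :k] @ gamma                     # coefficients in the original variables

Example usage:

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0]) + 0.1 * rng.normal(size=200)
print(pc_regression(X, y))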

7 citations

Book ChapterDOI
TL;DR: In this article, the authors discuss the theoretical, numerical, and experimental aspects of using four well-known linear operators and their eigenvalues for shape recognition, including the Laplacian operator under Dirichlet and Neumann boundary conditions.
Abstract: Recently there has been a surge in the use of the eigenvalues of linear operators in problems of pattern recognition. In this chapter, we discuss the theoretical, numerical, and experimental aspects of using four well-known linear operators and their eigenvalues for shape recognition. In particular, the eigenvalues of the Laplacian operator under Dirichlet and Neumann boundary conditions, as well as those of the clamped plate and the buckling of a clamped plate, are examined. Since the ratios of eigenvalues for each of these operators are translation, rotation, and scale invariant, four feature vectors are extracted for the purpose of shape recognition. These feature vectors are then fed into a basic neural network for training and for measuring the performance of each feature vector, all of which were shown to be reliable features for shape recognition. We also offer a review of the literature on finite difference schemes for these operators and summarize key facts about their eigenvalues that are of relevance in image recognition.
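As a concrete illustration of one of the four feature vectors discussed here, the sketch below approximates the Dirichlet Laplacian of a binary shape with the standard 5-point finite-difference stencil and returns eigenvalue ratios. The grid size, the number of eigenvalues, and the scipy-based implementation are assumptions made for illustration, not the chapter's own finite difference scheme.

import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

def dirichlet_eigen_features(mask, n_eigs=8):
    # Number the interior pixels of the shape (assumes the mask does not touch the image border).
    idx = -np.ones(mask.shape, dtype=int)
    pts = np.argwhere(mask)
    idx[tuple(pts.T)] = np.arange(len(pts))

    # Assemble the 5-point Laplacian; neighbours outside the shape contribute
    # nothing, which enforces the Dirichlet (zero) boundary condition.
    L = lil_matrix((len(pts), len(pts)))
    for n, (i, j) in enumerate(pts):
        L[n, n] = 4.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            m = idx[i + di, j + dj]
            if m >= 0:
                L[n, m] = -1.0

    # Smallest eigenvalues via shift-invert; their ratios are translation-,
    # rotation-, and scale-invariant shape features.
    vals = np.sort(eigsh(L.tocsr(), k=n_eigs, sigma=0, which="LM",
                         return_eigenvectors=False))
    return vals[0] / vals[1:]

Example: feature vector of a filled disc.

y, x = np.mgrid[:64, :64]
disc = (x - 32) ** 2 + (y - 32) ** 2 < 25 ** 2
print(dirichlet_eigen_features(disc))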

7 citations

Journal ArticleDOI
TL;DR: In this article, the dominant poles (eigenvalues) of system matrices, which are used extensively in power system stability analysis, are computed with an accurate and efficient method aimed especially at large power systems.
Abstract: The dominant poles (eigenvalues) of system matrices are used extensively in power system stability analysis. The challenge is to find an accurate and efficient way of computing these dominant poles, especially for large power systems. Here we present a novel way of assessing system stability, based on inverse covariance principal component analysis (ICPCA), to compute the eigenvalues of large system matrices. The efficacy of the proposed method is shown by numerical calculations on realistic power system data, and we also show that ICPCA can be used to determine the eigenvalues closest to any given damping ratio, as well as repeated eigenvalues. Our proposed method can also be applied to the stability analysis of other engineering applications.
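The following is not the ICPCA method of the paper but a small dense baseline, assuming numpy, that shows what the dominant poles and their damping ratios are for a state matrix; it is the brute-force computation that methods like ICPCA aim to avoid for large systems.

import numpy as np

def dominant_poles(A, n_dominant=5):
    # All eigenvalues of the state matrix (feasible only for small systems).
    eig = np.linalg.eigvals(A)
    eig = eig[np.abs(eig.imag) > 1e-9]           # keep oscillatory modes
    # Damping ratio of a pole sigma +/- j*omega is -sigma / sqrt(sigma^2 + omega^2).
    damping = -eig.real / np.abs(eig)
    order = np.argsort(damping)                  # least-damped (most dominant) first
    return eig[order][:n_dominant], damping[order][:n_dominant]

Example on a random matrix shifted to be stable (illustrative data only):

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 50))
A -= (np.linalg.eigvals(A).real.max() + 0.5) * np.eye(50)
poles, zetas = dominant_poles(A)
for p, z in zip(poles, zetas):
    print(f"pole {p:.3f}, damping ratio {z:.3f}")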

7 citations

Journal ArticleDOI
TL;DR: In this paper, an extension of the Wielandt inequality to matrix arguments is proved and used to obtain inequalities about the covariance matrix and various correlation coefficients, including the canonical, multiple, and simple correlations.
Abstract: Suppose that A is an n × n positive definite Hermitian matrix. Let X and Y be n × p and n × q matrices (p + q ≤ n) such that X*Y = 0. The following inequality is proved: $$X^{*}AY(Y^{*}AY)^{-}Y^{*}AX \leqslant \left(\frac{\lambda_1 - \lambda_n}{\lambda_1 + \lambda_n}\right)^{2} X^{*}AX,$$ where λ1 and λn are, respectively, the largest and smallest eigenvalues of A, and M^- stands for a generalized inverse of M. This inequality is an extension of the well-known Wielandt inequality, in which both X and Y are vectors. The inequality is utilized to obtain some interesting inequalities about the covariance matrix and various correlation coefficients, including the canonical, multiple, and simple correlations.
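A quick numerical sanity check of this inequality, assuming numpy; the Moore-Penrose pseudoinverse is used for the generalized inverse (Y*AY)^-, and the dimensions are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(2)
n, p, q = 8, 2, 3

# Random Hermitian positive definite A.
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = B @ B.conj().T + n * np.eye(n)

# X and Y with orthogonal column spaces: disjoint column blocks of a unitary matrix.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
X, Y = Q[:, :p], Q[:, p:p + q]                     # X* Y = 0 by construction

lam = np.linalg.eigvalsh(A)                        # eigenvalues of A, ascending
lam1, lamn = lam[-1], lam[0]
kappa = ((lam1 - lamn) / (lam1 + lamn)) ** 2

lhs = X.conj().T @ A @ Y @ np.linalg.pinv(Y.conj().T @ A @ Y) @ Y.conj().T @ A @ X
rhs = kappa * (X.conj().T @ A @ X)

# The inequality states that rhs - lhs is positive semidefinite,
# so all eigenvalues printed below should be >= 0 (up to rounding).
print(np.linalg.eigvalsh(rhs - lhs))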

7 citations


Network Information
Related Topics (5)
Eigenvalues and eigenvectors: 51.7K papers, 1.1M citations, 81% related
Bounded function: 77.2K papers, 1.3M citations, 80% related
Linear system: 59.5K papers, 1.4M citations, 80% related
Differential equation: 88K papers, 2M citations, 80% related
Matrix (mathematics): 105.5K papers, 1.9M citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    8
2022    9
2020    2
2019    3
2018    7
2017    31