
Showing papers on "Matrix (mathematics) published in 2005"


Journal ArticleDOI
TL;DR: It is shown how derivatives of the GPW energy functional, namely ionic forces and the Kohn–Sham matrix, can be computed in a consistent way, and the computational cost scales linearly with system size, even for condensed phase systems of just a few tens of atoms.

4,047 citations


BookDOI
01 Jan 2005
TL;DR: This book surveys the Schur complement, from its history and basic properties through eigenvalue and singular value inequalities and block matrix techniques, to its applications in statistics, probability, and numerical analysis.
Abstract: Historical Introduction: Issai Schur and the Early Development of the Schur Complement.- Basic Properties of the Schur Complement.- Eigenvalue and Singular Value Inequalities of Schur Complements.- Block Matrix Techniques.- Closure Properties.- Schur Complements and Matrix Inequalities: Operator-Theoretic Approach.- Schur complements in statistics and probability.- Schur Complements and Applications in Numerical Analysis.
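
As a quick illustration of the object the book is organized around, the following minimal numpy sketch computes a Schur complement for an arbitrary toy block matrix and checks Schur's classical determinant identity det(M) = det(A) det(M/A); all values below are illustrative.

```python
import numpy as np

# Block matrix M = [[A, B], [C, D]]; the Schur complement of A in M is
#   M/A = D - C A^{-1} B
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0], [0.0]])
C = B.T
D = np.array([[2.0]])

S = D - C @ np.linalg.solve(A, B)   # avoids forming A^{-1} explicitly
M = np.block([[A, B], [C, D]])

# Schur's determinant identity: det(M) = det(A) * det(M/A)
assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(S))
```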

1,465 citations


Journal ArticleDOI
TL;DR: This work describes a general procedure for transforming a standard matrix into one appropriate for the comparison of two sequences with arbitrary, and possibly differing, compositions.
Abstract: Almost all protein database search methods use amino acid substitution matrices for scoring, optimizing, and assessing the statistical significance of sequence alignments. Much care and effort has therefore gone into constructing substitution matrices, and the quality of search results can depend strongly upon the choice of the proper matrix. A long-standing problem has been the comparison of sequences with biased amino acid compositions, for which standard substitution matrices are not optimal. To address this problem, we have recently developed a general procedure for transforming a standard matrix into one appropriate for the comparison of two sequences with arbitrary, and possibly differing compositions. Such adjusted matrices yield, on average, improved alignments and alignment scores when applied to the comparison of proteins with markedly biased compositions. Here we review the application of compositionally adjusted matrices and consider whether they may also be applied fruitfully to general purpose protein sequence database searches, in which related sequence pairs do not necessarily have strong compositional biases. Although it is not advisable to apply compositional adjustment indiscriminately, we describe several simple criteria under which invoking such adjustment is on average beneficial. In a typical database search, at least one of these criteria is satisfied by over half the related sequence pairs. Compositional substitution matrix adjustment is now available in NCBI's protein-protein version of BLAST.

1,017 citations


Journal ArticleDOI
TL;DR: In this paper, the limiting distribution of the largest eigenvalue of a complex Gaussian covariance matrix was studied in terms of a sequence of new distribution functions that generalize the Tracy-Widom distribution of random matrix theory.
Abstract: We compute the limiting distributions of the largest eigenvalue of a complex Gaussian sample covariance matrix when both the number of samples and the number of variables in each sample become large. When all but finitely many, say r, eigenvalues of the covariance matrix are the same, the dependence of the limiting distribution of the largest eigenvalue of the sample covariance matrix on those distinguished r eigenvalues of the covariance matrix is completely characterized in terms of an infinite sequence of new distribution functions that generalize the Tracy-Widom distributions of random matrix theory. In particular, a phase transition phenomenon is observed. Our results also apply to a last passage percolation model and a queuing model.
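
The null case (all covariance eigenvalues equal) of this setup is easy to simulate. The hedged numpy sketch below draws complex Gaussian sample covariance matrices and compares the average largest eigenvalue with the Marchenko-Pastur bulk edge (1 + sqrt(N/M))^2, near which the Tracy-Widom fluctuations live; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, trials = 80, 160, 200
lmax = np.empty(trials)
for t in range(trials):
    # complex Gaussian data matrix, unit covariance (the null case Sigma = I)
    Y = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
    S = (Y @ Y.conj().T) / M                  # sample covariance matrix
    lmax[t] = np.linalg.eigvalsh(S)[-1]       # largest eigenvalue
print(lmax.mean(), (1 + np.sqrt(N / M)) ** 2)  # concentrates near the bulk edge
```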

883 citations


Journal Article
TL;DR: In this paper, a low-rank approximation to an n × n Gram matrix G is presented, where the probability distribution used to sample the columns is a judiciously-chosen and data-dependent nonuniform probability distribution.
Abstract: A problem for many kernel-based methods is that the amount of computation required to find the solution scales as $O(n^3)$, where n is the number of training examples. We develop and analyze an algorithm to compute an easily-interpretable low-rank approximation to an $n \times n$ Gram matrix G such that computations of interest may be performed more rapidly. The approximation is of the form $\tilde{G}_k = C W_k^+ C^T$, where C is a matrix consisting of a small number c of columns of G and $W_k$ is the best rank-k approximation to W, the matrix formed by the intersection between those c columns of G and the corresponding c rows of G. An important aspect of the algorithm is the probability distribution used to randomly sample the columns; we will use a judiciously-chosen and data-dependent nonuniform probability distribution. Let $\|\cdot\|_2$ and $\|\cdot\|_F$ denote the spectral norm and the Frobenius norm, respectively, of a matrix, and let $G_k$ be the best rank-k approximation to G. We prove that by choosing $O(k/\epsilon^4)$ columns, $\|G - C W_k^+ C^T\|_\xi \le \|G - G_k\|_\xi + \epsilon \sum_{i=1}^n G_{ii}^2$, both in expectation and with high probability, for both $\xi = 2, F$, and for all $k$: $0 \le k \le \mathrm{rank}(W)$. This approximation can be computed using O(n) additional space and time, after making two passes over the data from external storage. The relationships between this algorithm, other related matrix decompositions, and the Nyström method from integral equation theory are discussed.
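
A minimal sketch of the approximation described above, sampling columns with the data-dependent probabilities $p_i \propto G_{ii}^2$; note that the paper additionally rescales C and W for its analysis, which this sketch omits for brevity, and the RBF-kernel test matrix is an assumption.

```python
import numpy as np

def nystrom_gram(G, c, k, rng=np.random.default_rng(0)):
    """Rank-k approximation ~G_k = C W_k^+ C^T of a PSD Gram matrix G,
    sampling c columns with probabilities proportional to G_ii^2."""
    n = G.shape[0]
    p = np.diag(G) ** 2
    p = p / p.sum()
    idx = rng.choice(n, size=c, replace=True, p=p)
    C = G[:, idx]                        # sampled columns
    W = G[np.ix_(idx, idx)]              # intersection with the matching rows
    U, s, _ = np.linalg.svd(W, hermitian=True)
    Uk = U[:, :k]
    sk_inv = np.where(s[:k] > 1e-12, 1.0 / s[:k], 0.0)
    Wk_pinv = (Uk * sk_inv) @ Uk.T       # pseudoinverse of the rank-k part of W
    return C @ Wk_pinv @ C.T

# e.g. a Gram matrix from an RBF kernel on random points:
X = np.random.default_rng(1).standard_normal((300, 5))
sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
G = np.exp(-sq)
Gk = nystrom_gram(G, c=60, k=20)
print(np.linalg.norm(G - Gk) / np.linalg.norm(G))   # relative error
```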

826 citations


Book
12 Jan 2005
TL;DR: This book reviews elementary matrix algebra and covers vector spaces, eigenvalues and eigenvectors, matrix factorizations and matrix norms, generalized inverses, partitioned matrices, matrix derivatives, and special topics related to quadratic forms.
Abstract: Preface. 1. A Review of Elementary Matrix Algebra. 2. Vector Spaces. 3. Eigenvalues and Eigenvectors. 4. Matrix Factorizations and Matrix Norms. 5. Generalized Inverses. 6. Systems of Linear Equations. 7. Partitioned Matrices. 8. Special Matrices and Matrix Operations. 9. Matrix Derivatives and Related Topics. 10. Some Special Topics Related to Quadratic Forms. References. Index.

790 citations


Book
14 Mar 2005
TL;DR: This book brings together a vast body of results on matrix theory for easy reference and immediate application with hundreds of identities, inequalities, and matrix facts stated rigorously and clearly.
Abstract: "Matrix Mathematics" is a reference work for users of matrices in all branches of engineering, science, and applied mathematics. This book brings together a vast body of results on matrix theory for easy reference and immediate application. Each chapter begins with the development of relevant background theory followed by a large collection of specialized results. Hundreds of identities, inequalities, and matrix facts are stated rigorously and clearly with cross references, citations to the literature, and illuminating remarks. Twelve chapters cover all of the major topics in matrix theory: preliminaries; basic matrix properties; matrix classes and transformations; matrix polynomials and rational transfer functions; matrix decompositions; generalized inverses; Kronecker and Schur algebra; positive-semidefinite matrices; norms; functions of matrices and their derivatives; the matrix exponential and stability theory; and linear systems and control theory. A detailed list of symbols, a summary of notation and conventions, an extensive bibliography with author index, and an extensive index are provided for ease of use. The book will be useful for students at both the undergraduate and graduate levels, as well as for researchers and practitioners in all branches of engineering, science, and applied mathematics.

676 citations


Book
01 Jan 2005
TL;DR: It is shown that outward k-neighborliness is equivalent to the statement that, whenever y = Ax has a nonnegative solution with at most k nonzeros, it is the nonnegative solution to y = Ax having minimal sum.
Abstract: Consider an underdetermined system of linear equations y = Ax with known y and d × n matrix A. We seek the nonnegative x with the fewest nonzeros satisfying y = Ax. In general, this problem is NP-hard. However, for many matrices A there is a threshold phenomenon: if the sparsest solution is sufficiently sparse, it can be found by linear programming. We explain this by the theory of convex polytopes. Let aj denote the jth column of A, 1 ≤ j ≤ n, let a0 = 0 and P denote the convex hull of the aj. We say the polytope P is outwardly k-neighborly if every subset of k vertices not including 0 spans a face of P. We show that outward k-neighborliness is equivalent to the statement that, whenever y = Ax has a nonnegative solution with at most k nonzeros, it is the nonnegative solution to y = Ax having minimal sum. We also consider weak neighborliness, where the overwhelming majority of k-sets of ajs not containing 0 span a face of P. This implies that most nonnegative vectors x with k nonzeros are uniquely recoverable from y = Ax by linear programming. Numerous corollaries follow by invoking neighborliness results. For example, for most large n by 2n underdetermined systems having a solution with fewer nonzeros than roughly half the number of equations, the sparsest solution can be found by linear programming.
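
Since the recovery route in the abstract is plain linear programming, a short scipy sketch suffices: minimize the sum of x subject to Ax = y and x >= 0, and check whether a planted sparse nonnegative solution is recovered. Problem sizes are illustrative; at these dimensions recovery typically succeeds but is not guaranteed for every draw.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
d, n, k = 20, 40, 4
A = rng.standard_normal((d, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k)  # sparse, >= 0
y = A @ x_true

# min 1^T x  subject to  A x = y,  x >= 0
res = linprog(c=np.ones(n), A_eq=A, b_eq=y, bounds=(0, None), method="highs")
print(np.allclose(res.x, x_true, atol=1e-6))  # sparse solution usually recovered
```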

639 citations


Journal ArticleDOI
TL;DR: It is proved that if a component of the response signal of a controllable linear time-invariant system is persistently exciting of sufficiently high order, then the windows of the signal span the full system behavior.
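
A hedged numpy sketch of the rank test underlying this result: persistency of excitation of order L for a signal u is equivalent to its depth-L block Hankel matrix having full row rank. The signal and depth below are illustrative stand-ins.

```python
import numpy as np

def block_hankel(u, L):
    """Depth-L block Hankel matrix of a signal u of shape (T, m);
    each column stacks one length-L window of u."""
    T, m = u.shape
    cols = T - L + 1
    return np.stack([u[i:i + L].reshape(-1) for i in range(cols)], axis=1)

rng = np.random.default_rng(0)
u = rng.standard_normal((200, 1))        # a generic signal is persistently exciting
H = block_hankel(u, L=10)
print(np.linalg.matrix_rank(H) == 10)    # full row rank: PE of order 10
```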

615 citations


Journal ArticleDOI
TL;DR: In this article, the authors prove that the real four-dimensional Euclidean noncommutative ϕ⁴-model is renormalisable to all orders in perturbation theory.
Abstract: We prove that the real four-dimensional Euclidean noncommutative ϕ⁴-model is renormalisable to all orders in perturbation theory. Compared with the commutative case, the bare action of relevant and marginal couplings necessarily contains an additional term: a harmonic oscillator potential for the free scalar field action. This entails a modified dispersion relation for the free theory, which becomes important at large distances (UV/IR-entanglement). The renormalisation proof relies on flow equations for the expansion coefficients of the effective action with respect to scalar fields written in the matrix base of the noncommutative ℝ⁴. The renormalisation flow depends on the topology of ribbon graphs and on the asymptotic and local behaviour of the propagator governed by orthogonal Meixner polynomials.

536 citations


Journal ArticleDOI
TL;DR: Simulation results substantiate the theoretical analysis and demonstrate the efficacy of the neural model on time-varying matrix inversion, especially when using a power-sigmoid activation function.
Abstract: Following the idea of using first-order time derivatives, this paper presents a general recurrent neural network (RNN) model for online inversion of time-varying matrices. Different kinds of activation functions are investigated to guarantee the global exponential convergence of the neural model to the exact inverse of a given time-varying matrix. The robustness of the proposed neural model is also studied with respect to different activation functions and various implementation errors. Simulation results, including the application to kinematic control of redundant manipulators, substantiate the theoretical analysis and demonstrate the efficacy of the neural model on time-varying matrix inversion, especially when using a power-sigmoid activation function.
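
The design idea can be simulated directly: prescribe the error E(t) = A(t)X(t) - I, force dE/dt = -γΦ(E), and integrate the resulting ODE for X. The sketch below uses forward Euler and a simple odd power-type activation; the paper's power-sigmoid function is a specific piecewise choice, so treat Φ here, the initial state, and all parameters as assumptions.

```python
import numpy as np

def znn_inverse(A, dA, t_end=2.0, dt=1e-4, gamma=100.0):
    """Euler simulation of a recurrent network for time-varying inversion:
    impose dE/dt = -gamma * Phi(E) on the error E(t) = A(t) X(t) - I."""
    n = A(0.0).shape[0]
    I = np.eye(n)
    X = np.linalg.inv(A(0.0)) + 0.5      # deliberately perturbed initial state
    t = 0.0
    while t < t_end:
        At, dAt = A(t), dA(t)
        E = At @ X - I
        phi = E ** 3 + E                 # odd power-type activation (assumption)
        # d/dt(A X - I) = A X' + A' X = -gamma * Phi(E)  =>  solve for X'
        Xdot = np.linalg.solve(At, -dAt @ X - gamma * phi)
        X += dt * Xdot
        t += dt
    return X

A = lambda t: np.array([[2 + np.sin(t), np.cos(t)], [-np.cos(t), 2 + np.sin(t)]])
dA = lambda t: np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
X = znn_inverse(A, dA)
print(np.linalg.norm(X - np.linalg.inv(A(2.0))))  # small residual error
```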

Patent
24 Jan 2005
TL;DR: In this paper, a term-by-document matrix recording the frequency of occurrence of each term in each document is compiled from a corpus of documents representative of a particular subject matter.
Abstract: A term-by-document matrix is compiled from a corpus of documents representative of a particular subject matter; it records the frequency of occurrence of each term per document. A weighted term dictionary is created using a global weighting algorithm and then applied to the term-by-document matrix, forming a weighted term-by-document matrix. A term vector matrix and a singular value concept matrix are computed by singular value decomposition of the weighted term-by-document matrix. The k largest singular concept values are kept and all others are set to zero, thereby reducing the concept dimensions of the term vector matrix and the singular value concept matrix. The reduced term vector matrix, reduced singular value concept matrix, and weighted term dictionary can be used to project pseudo-document vectors, representing documents not appearing in the original document corpus, into a representative semantic space. The similarities of those documents can be ascertained from the positions of their respective pseudo-document vectors in the representative semantic space.
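
A toy numpy sketch of this pipeline, skipping the global term-weighting step (raw counts are used instead) and following standard latent semantic indexing conventions for the pseudo-document projection q_hat = q^T U_k S_k^{-1}; the tiny matrix and query are invented for illustration.

```python
import numpy as np

# Rows = terms, columns = documents; entries = raw term counts (the patent
# would first apply a global weighting such as log-entropy).
X = np.array([[2., 0., 1.],
              [0., 3., 1.],
              [1., 1., 0.],
              [0., 1., 2.]])

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
Uk, sk = U[:, :k], s[:k]          # reduced term-vector / singular-value matrices

# Project a pseudo-document (not in the corpus) into the concept space:
q = np.array([1., 0., 1., 0.])    # term counts of the unseen document
q_hat = (q @ Uk) / sk             # q^T U_k S_k^{-1}
doc_coords = Vt[:k].T             # corpus documents in the same space
sims = doc_coords @ q_hat / (
    np.linalg.norm(doc_coords, axis=1) * np.linalg.norm(q_hat))
print(sims)                       # cosine similarity to each corpus document
```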

Journal ArticleDOI
TL;DR: In this article, a data-space variant of the Occam approach is used for 3D magnetotelluric (MT) minimum structure inversion, where matrix dimensions depend on the size of the data set, rather than the number of model parameters.
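
The data-space trick the TL;DR alludes to rests on a push-through identity: the regularized model-space normal equations (of model-parameter dimension) and the data-space system (of data dimension) yield the same model. A hedged numpy check with random stand-ins for the sensitivity matrix and covariances:

```python
import numpy as np

rng = np.random.default_rng(0)
n_data, n_model, lam = 15, 2000, 0.1
J = rng.standard_normal((n_data, n_model))   # sensitivity (Jacobian) matrix
d = rng.standard_normal(n_data)
Cm = np.eye(n_model)                         # model covariance
Cd = np.eye(n_data)                          # data covariance

# Model-space solve: an (n_model x n_model) system
m1 = np.linalg.solve(J.T @ np.linalg.solve(Cd, J) + lam * np.linalg.inv(Cm),
                     J.T @ np.linalg.solve(Cd, d))
# Data-space solve: an (n_data x n_data) system, same answer, far smaller matrix
m2 = Cm @ J.T @ np.linalg.solve(J @ Cm @ J.T + lam * Cd, d)
print(np.allclose(m1, m2))
```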

Journal ArticleDOI
TL;DR: A preconditioning strategy based on the symmetric/skew-symmetric splitting of the coefficient matrix is proposed, and some useful properties of the preconditioned matrix are established.
Abstract: In this paper we consider the solution of linear systems of saddle point type by preconditioned Krylov subspace methods. A preconditioning strategy based on the symmetric/skew-symmetric splitting of the coefficient matrix is proposed, and some useful properties of the preconditioned matrix are established. The potential of this approach is illustrated by numerical experiments with matrices from various application areas.
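
A small scipy sketch of the idea, assuming the nonsymmetric saddle-point form [[F, B^T], [-B, 0]] and the symmetric/skew-symmetric splitting A = H + S; the preconditioner solve applies (αI + H)^{-1} followed by (αI + S)^{-1}, with α = 1 an arbitrary choice and all block sizes illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
m, n = 40, 20
B = rng.standard_normal((n, m))
F = rng.standard_normal((m, m)); F = F @ F.T + m * np.eye(m)   # SPD block
A = np.block([[F, B.T], [-B, np.zeros((n, n))]])   # saddle-point matrix
b = rng.standard_normal(m + n)

H, S = (A + A.T) / 2, (A - A.T) / 2                # symmetric / skew parts
alpha = 1.0
I = np.eye(m + n)
M1, M2 = alpha * I + H, alpha * I + S

def apply_Minv(v):
    # preconditioner M = (1 / (2*alpha)) (alpha*I + H)(alpha*I + S)
    return np.linalg.solve(M2, 2 * alpha * np.linalg.solve(M1, v))

M = LinearOperator((m + n, m + n), matvec=apply_Minv)
x, info = gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))
```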

Journal ArticleDOI
TL;DR: In this article, the authors investigate the global asymptotic stability analysis problem for a class of neural networks with discrete and distributed time-delays, and derive sufficient conditions, expressed as a linear matrix inequality, for the neural networks to be globally asymptotically stable.
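
The flavor of such a test can be sketched with cvxpy. The LMI below is a generic Lyapunov feasibility check, not the paper's delay-dependent condition (which carries extra terms for the discrete and distributed delays), and the matrix A is an arbitrary stable example.

```python
import numpy as np
import cvxpy as cp

n = 3
A = np.array([[-2.0, 0.1, 0.0],
              [0.2, -1.5, 0.1],
              [0.0, 0.1, -3.0]])

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> np.eye(n),                       # P positive definite
               A.T @ P + P @ A << -1e-6 * np.eye(n)]  # Lyapunov LMI
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)   # 'optimal' means the LMI is feasible: stability certified
```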

Book ChapterDOI
27 Jun 2005
TL;DR: This work studies the rank, trace-norm and max-norm as complexity measures of matrices, focusing on the problem of fitting a matrix with matrices having low complexity, and presents generalization error bounds for predicting unobserved entries that are based on these measures.
Abstract: We study the rank, trace-norm and max-norm as complexity measures of matrices, focusing on the problem of fitting a matrix with matrices having low complexity. We present generalization error bounds for predicting unobserved entries that are based on these measures. We also consider the possible relations between these measures. We show gaps between them, and bounds on the extent of such gaps.
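
Two of the three complexity measures are cheap to evaluate; a short numpy sketch with an invented test matrix (the max-norm, by contrast, is itself the optimum of a semidefinite program and has no closed form):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 6)) @ rng.standard_normal((6, 10))  # rank <= 6

rank = np.linalg.matrix_rank(X)
trace_norm = np.linalg.svd(X, compute_uv=False).sum()  # sum of singular values
# max-norm: min over factorizations X = U V^T of
#   (max_i ||u_i||_2) * (max_j ||v_j||_2),  computable via an SDP
print(rank, trace_norm)
```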

Proceedings Article
05 Dec 2005
TL;DR: A new algorithm called Tensor Subspace Analysis (TSA) is proposed that detects the intrinsic local geometrical structure of the tensor space by learning a lower dimensional tensor subspace and achieves better recognition rate, while being much more efficient.
Abstract: Previous work has demonstrated that the image variations of many objects (human faces in particular) under variable lighting can be effectively modeled by low dimensional linear spaces. The typical linear subspace learning algorithms include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Locality Preserving Projection (LPP). All of these methods consider an $n_1 \times n_2$ image as a high dimensional vector in $\mathbb{R}^{n_1 \times n_2}$, while an image represented in the plane is intrinsically a matrix. In this paper, we propose a new algorithm called Tensor Subspace Analysis (TSA). TSA considers an image as a second-order tensor in $\mathbb{R}^{n_1} \otimes \mathbb{R}^{n_2}$, where $\mathbb{R}^{n_1}$ and $\mathbb{R}^{n_2}$ are two vector spaces. The relationship between the column vectors of the image matrix and that between the row vectors can be naturally characterized by TSA. TSA detects the intrinsic local geometrical structure of the tensor space by learning a lower dimensional tensor subspace. We compare our proposed approach with the PCA, LDA and LPP methods on two standard databases. Experimental results demonstrate that TSA achieves a better recognition rate, while being much more efficient.

Journal ArticleDOI
TL;DR: A method is presented by which the wavenumbers for a one-dimensional waveguide can be predicted from a finite element (FE) model, which involves postprocessing a conventional, but low order, FE model, the mass and stiffness matrices of which are typically found using a conventional FE package.
Abstract: A method is presented by which the wavenumbers for a one-dimensional waveguide can be predicted from a finite element (FE) model. The method involves postprocessing a conventional, but low order, FE model, the mass and stiffness matrices of which are typically found using a conventional FE package. This is in contrast to the most popular previous waveguide/FE approach, sometimes termed the spectral finite element approach, which requires new spectral element matrices to be developed. In the approach described here, a section of the waveguide is modeled using conventional FE software and the dynamic stiffness matrix formed. A periodicity condition is applied, the wavenumbers following from the eigensolution of the resulting transfer matrix. The method is described, estimation of wavenumbers, energy, and group velocity discussed, and numerical examples presented. These concern wave propagation in a beam and a simply supported plate strip, for which analytical solutions exist, and the more complex case of a viscoelastic laminate, which involves postprocessing an ANSYS FE model. The method is seen to yield accurate results for the wavenumbers and group velocities of both propagating and evanescent waves.
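
A minimal numpy version of the procedure for the simplest possible waveguide, a mass-spring chain, where the analytic dispersion relation cos(kL) = 1 - ω²m/(2k_s) is available for comparison. The transfer matrix is assembled from the cell's dynamic stiffness D = K - ω²M as described, and the physical parameters are illustrative.

```python
import numpy as np

ks, m, Lc = 1.0, 1.0, 1.0     # spring stiffness, cell mass, cell length
w = 0.8                        # angular frequency (below cutoff 2*sqrt(ks/m))

# Cell dynamic stiffness, lumped half-masses at the two nodes
D_LL = D_RR = ks - w**2 * m / 2
D_LR = D_RL = -ks

# Transfer matrix relating the state (q, f) across one cell
T = np.array([[-D_LL / D_LR,                1.0 / D_LR],
              [-D_RL + D_RR * D_LL / D_LR, -D_RR / D_LR]])
lam = np.linalg.eigvals(T.astype(complex))   # lam = exp(-i k Lc)
k_wfe = 1j * np.log(lam) / Lc
k_exact = np.arccos(1 - w**2 * m / (2 * ks)) / Lc
print(np.sort(np.abs(k_wfe.real)), k_exact)  # +/- k, matching the analytic value
```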

Journal ArticleDOI
TL;DR: The authors present implementations of gradient projection algorithms, both orthogonal and oblique, as well as a catalogue of rotation criteria and corresponding gradients; example rotation methods are demonstrated by applying them to a loading matrix from Wehmeyer and Palmer.
Abstract: Almost all modern rotation of factor loadings is based on optimizing a criterion, for example, the quartimax criterion for quartimax rotation. Recent advancements in numerical methods have led to general orthogonal and oblique algorithms for optimizing essentially any rotation criterion. All that is required for a specific application is a definition of the criterion and its gradient. The authors present the implementations of gradient projection algorithms, both orthogonal and oblique, as well as a catalogue of rotation criteria and corresponding gradients. Software for these is downloadable and free; a specific version is given for each of the computing environments used most by statisticians. Examples of rotation methods are presented by applying them to a loading matrix from Wehmeyer and Palmer.
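
A compact sketch of an orthogonal gradient projection rotation for the quartimax criterion, with step halving and an SVD retraction onto the orthogonal group; this is a simplified version of the published algorithms, and the stopping rule and step control are assumptions.

```python
import numpy as np

def quartimax(A, n_iter=1000, tol=1e-10):
    """Orthogonal rotation by gradient projection: minimize
    f(L) = -sum(L**4)/4 over L = A @ T with T orthogonal."""
    p, k = A.shape
    T = np.eye(k)
    alpha = 1.0
    L = A @ T
    f = -np.sum(L ** 4) / 4
    for _ in range(n_iter):
        G = A.T @ (-L ** 3)              # gradient of f with respect to T
        while True:                      # backtracking + SVD retraction
            U, _, Vt = np.linalg.svd(T - alpha * G)
            T_new = U @ Vt               # nearest orthogonal matrix
            L_new = A @ T_new
            f_new = -np.sum(L_new ** 4) / 4
            if f_new < f or alpha < 1e-12:
                break
            alpha /= 2
        if f - f_new < tol:
            T, L, f = T_new, L_new, f_new
            break
        T, L, f = T_new, L_new, f_new
        alpha *= 2
    return L, T

A = np.random.default_rng(0).standard_normal((10, 3))   # toy loading matrix
L, T = quartimax(A)
print(np.allclose(T.T @ T, np.eye(3)))   # the rotation stays orthogonal
```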

Journal ArticleDOI
TL;DR: The Dirac operator in a matrix representation in a kinetically balanced basis is transformed to a quasirelativistic Hamiltonian matrix, that has the same electronic eigenstates as the original Dirac matrix.
Abstract: The Dirac operator in a matrix representation in a kinetically balanced basis is transformed to a quasirelativistic Hamiltonian matrix that has the same electronic eigenstates as the original Dirac matrix. This transformation involves a matrix X, for which an exact identity is derived, and which can be constructed either in a noniterative way or by various iteration schemes, without requiring an expansion parameter. The convergence behavior of five different iteration schemes is studied numerically, with very promising results.

Journal ArticleDOI
TL;DR: This property enables such compression schemes to be used in certain situations where the singular value decomposition (SVD) cannot be used efficiently.
Abstract: A procedure is reported for the compression of rank-deficient matrices. A matrix A of rank k is represented in the form $A = U \circ B \circ V$, where B is a $k\times k$ submatrix of A, and U, V are well-conditioned matrices that each contain a $k\times k$ identity submatrix. This property enables such compression schemes to be used in certain situations where the singular value decomposition (SVD) cannot be used efficiently. Numerical examples are presented.
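
One way to realize such a representation (the paper's own selection scheme may differ) is to pick k columns and then k rows by pivoted QR; with B the intersection submatrix, U = C B^{-1} contains an exact k-by-k identity on the selected rows, and likewise V on the selected columns.

```python
import numpy as np
from scipy.linalg import qr

def skeleton(A, k):
    """A ~ U @ B @ V with B a k-by-k submatrix of A; U and V each contain
    a k-by-k identity submatrix (rows/columns chosen by pivoted QR here)."""
    _, _, piv_c = qr(A, pivoting=True)
    cols = piv_c[:k]
    _, _, piv_r = qr(A[:, cols].T, pivoting=True)
    rows = piv_r[:k]
    B = A[np.ix_(rows, cols)]
    U = A[:, cols] @ np.linalg.inv(B)    # U[rows] == I_k exactly
    V = np.linalg.inv(B) @ A[rows, :]    # V[:, cols] == I_k exactly
    return U, B, V

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 100))  # rank 8
U, B, V = skeleton(A, 8)
print(np.linalg.norm(A - U @ B @ V))     # ~ machine precision at exact rank
```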

Journal ArticleDOI
TL;DR: This letter solves the problem of identifying matrices S and A knowing only their product X = AS, under conditions expressed either in terms of A and the sparsity of S (identifiability conditions) or in terms of X (sparse component analysis (SCA) conditions).
Abstract: In this letter, we solve the problem of identifying matrices $S \in \mathbb{R}^{n \times N}$ and $A \in \mathbb{R}^{m \times n}$ knowing only their product X = AS, under some conditions, expressed either in terms of A and sparsity of S (identifiability conditions), or in terms of X (sparse component analysis (SCA) conditions). We present algorithms for such identification and illustrate them by examples.

Journal ArticleDOI
TL;DR: A two-stage logarithmic goal programming (TLGP) method is proposed to generate weights from interval comparison matrices, which can be either consistent or inconsistent; the method is also applicable to fuzzy comparison matrices when they are transformed into interval comparison matrices using α-level sets and the extension principle.

Journal ArticleDOI
TL;DR: The results suggest that SCV for full bandwidth matrices is the most reliable of the CV methods, and that experience from the univariate setting can sometimes be a misleading guide for understanding bandwidth selection in the multivariate case.
Abstract: The performance of multivariate kernel density estimates depends crucially on the choice of bandwidth matrix, but progress towards developing good bandwidth matrix selectors has been relatively slow. In particular, previous studies of cross-validation (CV) methods have been restricted to biased and unbiased CV selection of diagonal bandwidth matrices. However, for certain types of target density the use of full (i.e. unconstrained) bandwidth matrices offers the potential for significantly improved density estimation. In this paper, we generalize earlier work from diagonal to full bandwidth matrices, and develop a smooth cross-validation (SCV) methodology for multivariate data. We consider optimization of the SCV technique with respect to a pilot bandwidth matrix. All the CV methods are studied using asymptotic analysis, simulation experiments and real data analysis. The results suggest that SCV for full bandwidth matrices is the most reliable of the CV methods. We also observe that experience from the univariate setting can sometimes be a misleading guide for understanding bandwidth selection in the multivariate case.

Journal ArticleDOI
TL;DR: A new splitting is introduced, called positive-definite and skew-Hermitian splitting (PSS), and a class of PSS methods similar to the HSS and NSS method for iteratively solving the positive- definite systems of linear equations are established.
Abstract: By further generalizing the concept of Hermitian (or normal) and skew-Hermitian splitting for a non-Hermitian and positive-definite matrix, we introduce a new splitting, called positive-definite and skew-Hermitian splitting (PSS), and then establish a class of PSS methods similar to the Hermitian (or normal) and skew-Hermitian splitting (HSS or NSS) method for iteratively solving the positive-definite systems of linear equations. Theoretical analysis shows that the PSS method converges unconditionally to the exact solution of the linear system, with the upper bound of its convergence factor dependent only on the spectrum of the positive-definite splitting matrix and independent of the spectrum of the skew-Hermitian splitting matrix as well as the eigenvectors of all matrices involved. When we specialize the PSS to block triangular (or triangular) and skew-Hermitian splitting (BTSS or TSS), the PSS method naturally leads to a BTSS or TSS iteration method, which may be more practical and efficient than the HSS and NSS iteration methods. Applications of the BTSS method to the linear systems of block two-by-two structures are discussed in detail. Numerical experiments further show the effectiveness of our new methods.
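
A hedged numpy sketch of the two-half-step iteration, instantiated with the Hermitian/skew-Hermitian choice P = (A + A^T)/2, S = (A - A^T)/2, for which the PSS scheme reduces to HSS; α, the test matrix, and the iteration budget are all illustrative.

```python
import numpy as np

def pss_iteration(A, b, alpha=1.0, n_iter=500, tol=1e-10):
    """Two-half-step splitting iteration for A x = b with A = P + S,
    P positive definite and S skew-symmetric (HSS-style choice below)."""
    n = A.shape[0]
    P = (A + A.T) / 2
    S = (A - A.T) / 2
    I = np.eye(n)
    M1, M2 = alpha * I + P, alpha * I + S
    x = np.zeros(n)
    for _ in range(n_iter):
        x_half = np.linalg.solve(M1, (alpha * I - S) @ x + b)
        x = np.linalg.solve(M2, (alpha * I - P) @ x_half + b)
        if np.linalg.norm(A @ x - b) < tol * np.linalg.norm(b):
            break
    return x

rng = np.random.default_rng(0)
n = 50
A = 2 * np.eye(n) + rng.standard_normal((n, n)) / np.sqrt(n)  # positive definite
b = rng.standard_normal(n)
x = pss_iteration(A, b)
print(np.linalg.norm(A @ x - b))
```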

Journal ArticleDOI
TL;DR: In this paper, the intrinsic geometric flexibility of framework structures incorporating linear metal–cyanide–metal linkages was analyzed using a reciprocal-space dynamical matrix approach, and it was shown that this structural motif is capable of imparting a significant negative thermal expansion (NTE) effect upon such materials.
Abstract: We analyze the intrinsic geometric flexibility of framework structures incorporating linear metal–cyanide–metal (M–CN–M′) linkages using a reciprocal-space dynamical matrix approach. We find that this structural motif is capable of imparting a significant negative thermal expansion (NTE) effect upon such materials. In particular, we show that the topologies of a number of simple cyanide-containing framework materials support a very large number of low-energy rigid-unit phonon modes, all of which give rise to NTE behavior. We support our analysis by presenting experimental verification of this behavior in the family of compounds Zn$_x$Cd$_{1-x}$(CN)$_2$, which we show to exhibit an NTE effect over the temperature range 25–375 K more than double that of materials such as ZrW$_2$O$_8$.

Journal ArticleDOI
TL;DR: The lowest level of sensitivity of system outputs to system inputs is defined as an $H_-$ index, defined in terms of matrix equalities and inequalities, as a dual of the Bounded Real Lemma.

Book
19 Dec 2005
TL;DR: This book develops linear algebra in spaces with an indefinite inner product, covering canonical forms, H-selfadjoint and H-normal matrices, invariant subspaces, matrix polynomials, and algebraic Riccati equations.
Abstract: Preface.- 1. Introduction and Outline.- 2. Indefinite Inner Products.- 3. Orthogonalization and Orthogonal Polynomials.- 4. Classes of Linear Transformations.- 5. Canonical Forms.- 6. Real H-Selfadjoint Matrices.- 7. Functions of H-Selfadjoint Matrices.- 8. H-Normal Matrices.- 9. General Perturbations. Stability of Diagonalizable Matrices.- 10. Definite Invariant Subspaces.- 11. Differential Equations of First Order.- 12. Matrix Polynomials.- 13. Differential and Difference Equations of Higher Order.- 14. Algebraic Riccati Equations.- Appendix: Topics from Linear Algebra.- Bibliography.- Index

Posted Content
TL;DR: In this paper, the spectral norm and cut-norm of random submatrices of a large matrix A are approximated with O(r log r) sample complexity, where r is the numerical rank of A. The numerical rank is always bounded by, and is a stable relaxation of, the rank.
Abstract: We study random submatrices of a large matrix A. We show how to approximately compute A from its random submatrix of the smallest possible size O(r log r) with a small error in the spectral norm, where r = ||A||_F^2 / ||A||_2^2 is the numerical rank of A. The numerical rank is always bounded by, and is a stable relaxation of, the rank of A. This yields an asymptotically optimal guarantee in an algorithm for computing low-rank approximations of A. We also prove asymptotically optimal estimates on the spectral norm and the cut-norm of random submatrices of A. The result for the cut-norm yields a slight improvement on the best known sample complexity for an approximation algorithm for MAX-2CSP problems. We use methods of Probability in Banach spaces, in particular the law of large numbers for operator-valued random variables.
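
The numerical rank and the resulting O(r log r) sample size are easy to compute; a short numpy sketch with a noisy low-rank test matrix (the paper's reconstruction algorithm itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 12)) @ rng.standard_normal((12, 500))
A += 0.01 * rng.standard_normal((500, 500))     # noisy low-rank matrix

s = np.linalg.svd(A, compute_uv=False)
numerical_rank = (s ** 2).sum() / s[0] ** 2     # ||A||_F^2 / ||A||_2^2
sample_size = int(np.ceil(numerical_rank * np.log(numerical_rank)))
idx = rng.choice(A.shape[0], size=sample_size, replace=False)
print(numerical_rank, sample_size, A[np.ix_(idx, idx)].shape)
```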

Journal ArticleDOI
TL;DR: Yang et al. as discussed by the authors proposed a modified version of the CQ algorithm by adopting Armijo-like searches and showed convergence of the modified algorithm under mild conditions, where the objective function does not require matrix inverses and the largest eigenvalue of the matrix ATA.
Abstract: Let C and Q be nonempty closed convex sets in N and M, respectively, and A an M ? N real matrix. The split feasibility problem (SFP) is to find x C with Ax Q, if such x exist. Byrne (2002 Inverse Problems 18 441?53) proposed a CQ algorithm with the following iterative scheme: where ? (0, 2/L), L denotes the largest eigenvalue of the matrix ATA, and PC and PQ denote the orthogonal projections onto C and Q, respectively. In his algorithm, Byrne assumed that the projections PC and PQ are easily calculated. However, in some cases it is impossible or needs too much work to exactly compute the orthogonal projection. Recently, Yang (2004 Inverse Problems 20 1261?6) presented a relaxed CQ algorithm, in which he replaced PC and PQ by and , that is, the orthogonal projections onto two halfspaces Ck and Qk, respectively. Clearly, the latter is easy to implement. One common advantage of the CQ algorithm and the relaxed CQ algorithm is that computation of the matrix inverses is not necessary. However, they use a fixed stepsize related to the largest eigenvalue of the matrix ATA, which sometimes affects convergence of the algorithms. In this paper, we present modifications of the CQ algorithm and the relaxed CQ algorithm by adopting Armijo-like searches. The modified algorithms need not compute the matrix inverses and the largest eigenvalue of the matrix ATA, and make a sufficient decrease of the objective function at each iteration. We also show convergence of the modified algorithms under mild conditions.
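
A hedged sketch of a CQ iteration with Armijo-type backtracking, for the easy case where C and Q are boxes so that both projections are simple clipping; the acceptance rule below is a standard projected-gradient Armijo condition, not necessarily the paper's exact stepsize rule, and the problem data are invented.

```python
import numpy as np

def cq_armijo(A, proj_C, proj_Q, x0, n_iter=500, beta=0.5, sigma=1e-4):
    """CQ iteration with Armijo-like backtracking for the SFP.
    Objective: f(x) = 0.5 * ||(I - P_Q) A x||^2."""
    x = x0.copy()
    for _ in range(n_iter):
        Ax = A @ x
        r = Ax - proj_Q(Ax)              # (I - P_Q) A x
        f = 0.5 * r @ r
        g = A.T @ r                      # gradient of f
        gamma = 1.0
        while True:                      # backtrack until sufficient decrease
            x_new = proj_C(x - gamma * g)
            Axn = A @ x_new
            rn = Axn - proj_Q(Axn)
            if 0.5 * rn @ rn <= f + sigma * g @ (x_new - x) or gamma < 1e-12:
                break
            gamma *= beta
        x = x_new
    return x

# Example: C = [0,1]^2 and Q = [0,2]^2, projections are clipping.
A = np.array([[1.0, 2.0], [3.0, 1.0]])
pC = lambda v: np.clip(v, 0.0, 1.0)
pQ = lambda v: np.clip(v, 0.0, 2.0)
x = cq_armijo(A, pC, pQ, np.array([1.0, 1.0]))
print(x, A @ x)   # x in C with A x (approximately) in Q
```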