
Showing papers on "Matrix analysis published in 1997"


Book
01 Mar 1997
TL;DR: In this article, an integrated treatment of the theory of nonnegative matrices and some related classes of positive matrices, concentrating on connections with game theory, combinatorics, inequalities, optimisation and mathematical economics is presented.
Abstract: This book provides an integrated treatment of the theory of nonnegative matrices (matrices with only positive numbers or zero as entries) and some related classes of positive matrices, concentrating on connections with game theory, combinatorics, inequalities, optimisation and mathematical economics. The wide variety of applications, which include price fixing, scheduling and the fair division problem, have been carefully chosen both for their elegant mathematical content and for their accessibility to students with minimal preparation. Many results in matrix theory are also presented. The treatment is rigorous and almost all results are proved completely. These results and applications will be of great interest to researchers in linear programming, statistics and operations research. The minimal prerequisites also make the book accessible to first-year graduate students.

555 citations


Book
01 Oct 1997

369 citations


Journal ArticleDOI
TL;DR: In this paper, a general method of matrix analysis, namely eigenvalue decomposition, is applied to the Miyazawa-Jernigan (MJ) matrix, revealing an intrinsic regularity of the MJ matrix that yields basic information about the nature of the driving force for protein folding.
Abstract: Proteins fold into specific three-dimensional structures to perform their diverse biological functions. It is now well established that for small proteins the information contained in the amino acid sequence is sufficient to determine the folded structure, which is the structure with minimum free energy [1]. Thus the native structure is dictated by the physical interactions between amino acids in the sequence, and understanding the nature of such interactions is crucial for protein structure prediction. As a protein contains thousands of atoms and interacts with a huge number of water molecules, it is not feasible to calculate the free energy function from first principles. An often adopted practical approach is to derive a coarse-grained potential (often at the level of amino acids) using the known structures in the existing protein data banks. In such an approach, the energy of a particular substructure in proteins is derived from the number of its appearances in the structure data bank via a Boltzmann factor [2-4]. A classic example of such a statistical potential is the Miyazawa-Jernigan (MJ) matrix, a 20 × 20 inter-residue contact-energy matrix derived by Miyazawa and Jernigan [2,5]. This matrix tabulates the interaction strength between any two types of amino acids in proteins, and has been widely applied in protein design and folding simulations [6,7]. In this Letter, we apply a general method of matrix analysis, namely, eigenvalue decomposition, to the MJ matrix. The analysis reveals an intrinsic regularity of the MJ matrix, which yields basic information about the nature of the driving force for protein folding. We show that despite the complicated interactions in proteins, the major driving force is hydrophobic interaction and a force of demixing, the latter obeying Hildebrand’s solubility theory of simple liquids [8]. The result allows us to attribute the interactions responsible for folding to quantifiable properties of individual amino acids. These properties suggest further experimental tests, and can be used for analyzing the sequence-structure relation. Eigenvalue decomposition is a general approach to analyzing matrices. A given N × N real symmetric matrix …
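The decomposition step described above can be illustrated with standard dense linear-algebra routines. The sketch below applies eigenvalue decomposition to a random symmetric 20 × 20 stand-in for the MJ matrix (the actual MJ entries are not reproduced here) and forms a rank-2 approximation from the dominant modes, the kind of low-rank structure the Letter interprets physically.

```python
# Sketch: eigenvalue decomposition of a symmetric contact-energy-style matrix.
# The matrix below is a random symmetric stand-in, NOT the actual MJ values.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 20))
E = (A + A.T) / 2                       # symmetric "contact energy" stand-in

w, V = np.linalg.eigh(E)                # eigenvalues (ascending) and orthonormal eigenvectors
order = np.argsort(-np.abs(w))          # rank components by magnitude
print("dominant eigenvalues:", w[order[:2]])

# Rank-2 reconstruction E ~ sum_k w_k v_k v_k^T over the two dominant modes.
E2 = sum(w[k] * np.outer(V[:, k], V[:, k]) for k in order[:2])
print("relative error of rank-2 approximation:",
      np.linalg.norm(E - E2) / np.linalg.norm(E))
```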

197 citations


01 Mar 1997
TL;DR: A general method of matrix analysis, namely, eigenvalue decomposition, is applied to the Miyazawa-Jernigan matrix, revealing an intrinsic regularity of the MJ matrix, which yields basic information about the nature of the driving force for protein folding.

169 citations


Journal ArticleDOI
TL;DR: In this article, the statistical properties of the complex eigenvalues of random matrices describing a crossover from Hermitian matrices, characterized by the Wigner-Dyson statistics of real eigenvalues, to strongly non-Hermitian ones, whose complex eigenvalues were studied by Ginibre, are analyzed.
Abstract: By using the method of orthogonal polynomials, we analyze the statistical properties of complex eigenvalues of random matrices describing a crossover from Hermitian matrices characterized by the Wigner-Dyson statistics of real eigenvalues to strongly non-Hermitian ones whose complex eigenvalues were studied by Ginibre. Two-point statistical measures [as, e.g., spectral form factor, number variance, and small distance behavior of the nearest neighbor distance distribution $p(s)$] are studied in more detail. In particular, we found that the latter function may exhibit unusual behavior $p(s) \propto s^{5/2}$ for some parameter values.
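A numerical illustration of such a crossover ensemble can be set up directly. The sketch below samples matrices of the assumed form H + i·v·A with independent Hermitian H and A (the matrix size and crossover parameter v are illustrative choices, not values from the paper) and computes nearest-neighbour spacings of the complex eigenvalues.

```python
# Sketch: "almost-Hermitian" crossover ensemble J = H + i*v*A; v = 0 gives real
# (Wigner-Dyson) spectra, large v approaches Ginibre-like complex spectra.
import numpy as np

def random_hermitian(n, rng):
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (X + X.conj().T) / 2

rng = np.random.default_rng(1)
N, v = 200, 0.3                                   # illustrative choices
H, A = random_hermitian(N, rng), random_hermitian(N, rng)
eigs = np.linalg.eigvals(H + 1j * v * A)

# Nearest-neighbour spacings in the complex plane; their small-s behaviour is
# the quantity whose p(s) ~ s^{5/2} regime the paper discusses.
d = np.abs(eigs[:, None] - eigs[None, :])
np.fill_diagonal(d, np.inf)
s = d.min(axis=1)
print("mean nearest-neighbour spacing:", s.mean())
```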

153 citations


MonographDOI
01 Jan 1997

142 citations


Journal ArticleDOI
TL;DR: In this paper, an ensemble of large non-Hermitian random matrices of the form H + iA_s, where H and A_s are Hermitian, statistically independent random N × N matrices, is considered.

138 citations


Journal ArticleDOI
Kefu Liu1
TL;DR: In this article, the author uses the singular value decomposition (SVD) of a general Hankel matrix to identify successive discrete transition matrices that have the same eigenvalues as the original transition matrix.
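An ERA-style sketch of Hankel-matrix identification in this spirit is given below: a Hankel matrix is assembled from Markov parameters of a small discrete-time system, and its SVD yields a realization whose transition matrix reproduces the original eigenvalues. The exact procedure of the paper may differ; this is only the standard construction.

```python
# Sketch of Hankel-matrix SVD identification (ERA-style realization).
import numpy as np

A = np.array([[0.9, 0.2], [-0.1, 0.7]])    # "true" transition matrix
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 1.0]])

h = [C @ np.linalg.matrix_power(A, k) @ B for k in range(20)]   # Markov parameters
H0 = np.block([[h[i + j] for j in range(5)] for i in range(5)])       # H(0)
H1 = np.block([[h[i + j + 1] for j in range(5)] for i in range(5)])   # H(1)

U, s, Vt = np.linalg.svd(H0)
r = 2                                        # model order from dominant singular values
S_inv_sqrt = np.diag(s[:r] ** -0.5)
A_id = S_inv_sqrt @ U[:, :r].T @ H1 @ Vt[:r, :].T @ S_inv_sqrt

print("true eigenvalues:      ", np.sort(np.linalg.eigvals(A)))
print("identified eigenvalues:", np.sort(np.linalg.eigvals(A_id)))
```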

109 citations


Journal ArticleDOI
TL;DR: A general approach to the model-based analysis of sets of spectroscopic data, built upon the techniques of matrix analysis, is described, and extensions of the matrix-based least-squares procedures are suggested for situations in which measurement errors may not be assumed to be normally distributed.

91 citations


Journal ArticleDOI
TL;DR: In this article, the correlation functions of the distribution of the eigenvalues of random matrices have been studied in two classes of random Hermitian matrix models, the one-matrix model and the two-dimensional model, and it has been observed that these correlation functions possess universal properties, independent of the probability law of the random matrix.

89 citations


Journal ArticleDOI
TL;DR: In this paper, a precise time-step integration method for dynamic problems is presented, where the second-order differential equations are manipulated directly and a general damping matrix is considered.
Abstract: In this paper, a precise time-step integration method for dynamic problems is presented. The second-order differential equations for dynamic problems are manipulated directly. A general damping matrix is considered. The transient responses are expressed in terms of the steady-state responses, the given initial conditions and the step-response and impulsive-response matrices. The steady-state responses for various types of excitations are readily obtainable. The computation of the step-response and impulsive-response matrices and their time derivatives is studied in this paper. A direct computation of these matrices using the Taylor series solutions is not efficient when the time-step size Δt is not small. In this paper, the recurrence formulae relating the response matrices at t=Δt to those at t=Δt/2 are constructed. A recursive procedure is proposed to evaluate these matrices at t=Δt from the matrices at t=Δt/2^m. The matrices at t=Δt/2^m are obtained from the Taylor series solutions. To improve the computational efficiency, the relations between the response matrices and their time derivatives are investigated. In addition, these matrices are expressed in terms of two symmetric matrices that can also be evaluated recursively. Moreover, from the physical point of view, these matrices should be banded for small Δt. Both the stability and accuracy characteristics of the present algorithm are studied. Three numerical examples are used to illustrate the highly precise and stable algorithm. © 1997 John Wiley & Sons, Ltd.
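The recursive 2^m subdivision-and-doubling idea can be sketched for the simpler first-order case, where the response matrix is the matrix exponential. The sketch below (an illustration, not the paper's second-order formulation) keeps only the increment exp(Hτ) − I, builds it from a short Taylor series at τ = Δt/2^m, and doubles back up to Δt.

```python
# Sketch of the 2^m doubling idea behind precise time-step integration, using
# the identity exp(2x) - I = 2(exp(x) - I) + (exp(x) - I)^2 and keeping only
# the increment Ta = exp(H t) - I to limit round-off.
import numpy as np
from scipy.linalg import expm

def precise_exponential(H, dt, m=20, taylor_terms=4):
    n = H.shape[0]
    tau = dt / 2**m
    Ta = np.zeros_like(H)
    term = np.eye(n)
    fact = 1.0
    for k in range(1, taylor_terms + 1):      # Taylor series of exp(H*tau) - I
        term = term @ (H * tau)
        fact *= k
        Ta = Ta + term / fact
    for _ in range(m):                        # doubling recurrence back up to dt
        Ta = 2 * Ta + Ta @ Ta
    return np.eye(n) + Ta

H = np.array([[0.0, 1.0], [-4.0, -0.2]])      # illustrative SDOF system in state form
T = precise_exponential(H, dt=0.5)
print(np.allclose(T, expm(H * 0.5)))          # matches a reference matrix exponential
```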

Journal ArticleDOI
TL;DR: A new quantity for real matrices, the sign-real spectral radius, is defined and investigated, and various characterizations, bounds, and properties of it are derived.
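Assuming the usual characterization of the sign-real spectral radius as the maximum, over diagonal signature matrices S, of the largest absolute value of a real eigenvalue of SA, a brute-force sketch for small matrices looks as follows; it is exponential in the dimension and is only meant to make the definition concrete.

```python
# Brute-force sketch of the sign-real spectral radius (assumed definition:
# max over signature matrices S of the largest |real eigenvalue| of S A, 0 if none).
import itertools
import numpy as np

def sign_real_spectral_radius(A, tol=1e-12):
    n = A.shape[0]
    best = 0.0
    for signs in itertools.product([1.0, -1.0], repeat=n):
        eigs = np.linalg.eigvals(np.diag(signs) @ A)
        real = eigs[np.abs(eigs.imag) < tol].real
        if real.size:
            best = max(best, np.abs(real).max())
    return best

A = np.array([[0.0, 1.0], [-1.0, 0.0]])       # no real eigenvalues itself
print(sign_real_spectral_radius(A))           # flipping one row sign gives 1.0
```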

Proceedings ArticleDOI
01 Jul 1997
TL;DR: The matrix structure is exploited and the time complexity of constructing such matrices is decreased to roughly quadratic in the matrix dimension, whereas the previous methods had cubic complexity.
Abstract: Resultants characterize the existence of roots of systems of multivariate nonlinear polynomial equations, while their matrices reduce the computation of all common zeros to a problem in linear algebra. Sparse elimination theory has introduced the sparse resultant, which takes into account the sparse structure of the polynomials. The construction of sparse resultant, or Newton, matrices is a critical step in the computation of the resultant and the solution of the system. We exploit the matrix structure and decrease the time complexity of constructing such matrices to roughly quadratic in the matrix dimension, whereas the previous methods had cubic complexity. The space complexity is also decreased by one order of magnitude. These results imply similar improvements in the complexity of computing the resultant itself and of solving zero-dimensional systems. We apply some novel techniques for determining the rank of rectangular matrices by an exact or numerical computation. Finally, we improve the existing complexity for polynomial multiplication under our model of sparseness, offering bounds linear in the number of variables and the number of nonzero terms.
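For background, the classical univariate case already shows how a resultant matrix turns a common-root question into linear algebra. The sketch below builds a dense Sylvester matrix for two polynomials, whereas the paper's subject is the far larger sparse (Newton) resultant matrices for multivariate systems.

```python
# Background sketch: classical Sylvester resultant of two univariate polynomials.
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of p, q given as coefficient lists, highest degree first."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):
        S[i, i:i + m + 1] = p
    for i in range(m):
        S[n + i, i:i + n + 1] = q
    return S

p = [1.0, -3.0, 2.0]        # x^2 - 3x + 2 = (x - 1)(x - 2)
q = [1.0, -1.0]             # x - 1, shares the root x = 1
print(np.linalg.det(sylvester(p, q)))   # resultant = 0 exactly when a common root exists
```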

Journal ArticleDOI
TL;DR: Triples of matrices and state space cics of minimal state space models are defined and explored and used to study balancing, Hankel singular values, and simultaneous model order reduction for a set of systems.
Abstract: In the paper [CL1] the notion of a convex invertible cone, cic, of matrices was introduced and its geometry was studied. In that paper close connections were drawn between this cic structure and the algebraic Lyapunov equation. In the present paper the same geometry is extended to triples of matrices, and cics of minimal state space models are defined and explored. This structure is then used to study balancing, Hankel singular values, and simultaneous model order reduction for a set of systems. State space cics are also examined in the context of the so-called matrix sign function algorithm commonly used to solve the algebraic Lyapunov and Riccati equations.
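The Hankel singular values mentioned above can be computed for a concrete stable triple (A, B, C) from the two Gramians obtained by solving algebraic Lyapunov equations; the small example below is illustrative and not tied to the paper's cic framework.

```python
# Sketch: Hankel singular values of a stable state-space triple (A, B, C)
# from the controllability and observability Gramians.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

P = solve_continuous_lyapunov(A, -B @ B.T)      # A P + P A^T + B B^T = 0
Q = solve_continuous_lyapunov(A.T, -C.T @ C)    # A^T Q + Q A + C^T C = 0
hsv = np.sqrt(np.linalg.eigvals(P @ Q).real)
print("Hankel singular values:", np.sort(hsv)[::-1])
```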

Journal ArticleDOI
TL;DR: In this paper, a fast O(n^2) approximation algorithm for symmetric Gaussian elimination with partial diagonal pivoting for Hermitian Toeplitz-like matrices is presented.

Journal ArticleDOI
TL;DR: In this paper, it was shown that a square matrix over a ring with identity has the consecutive-column property if for all k, all relevant submatrices having k consecutive rows and the first k columns are invertible.

Journal ArticleDOI
TL;DR: In this paper, it is shown that cubic eigenvalue repulsion in the complex plane is universal with respect to the probability distribution of matrices, and the density of eigenvalues, all correlation functions, and level spacing statistics are calculated.
Abstract: Random matrix models consisting of normal matrices, defined by the sole constraint $[N^{\dag},N]=0$, will be explored. It is shown that cubic eigenvalue repulsion in the complex plane is universal with respect to the probability distribution of matrices. The density of eigenvalues, all correlation functions, and level spacing statistics are calculated. Normal matrix models offer more probability distributions amenable to analytical analysis than complex matrix models, where only a model with a Gaussian distribution is solvable. The statistics of numerically generated eigenvalues from Gaussian-distributed normal matrices are compared to the analytical results obtained, and agreement is seen.
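A normal matrix can be parameterized as N = U D U† with U unitary and D complex diagonal, so its eigenvalues are exactly the entries of D. The sketch below samples such a matrix (the distribution chosen for D is an illustrative assumption, not the ensemble weight analysed in the paper) and verifies the normality constraint.

```python
# Sketch: sampling a random NORMAL matrix N (i.e. [N^dag, N] = 0) as N = U D U^dag.
import numpy as np

rng = np.random.default_rng(2)
n = 500
Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, R = np.linalg.qr(Z)
U = U * (np.diag(R) / np.abs(np.diag(R)))        # phase fix so U is Haar-distributed
D = rng.normal(size=n) + 1j * rng.normal(size=n)  # illustrative diagonal distribution
N = U @ np.diag(D) @ U.conj().T

# N is normal, so its eigenvalues are exactly the entries of D; complex-plane
# statistics can then be accumulated over many samples.
print(np.linalg.norm(N.conj().T @ N - N @ N.conj().T))   # ~ 0 up to round-off
```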

Journal ArticleDOI
TL;DR: In this article, the authors considered the case of semiseparable matrices of order one and developed a FastO(N) algorithm with the only requirement that the considered matrix is invertible and its determinant is not close to zero.
Abstract: Considered here are matrices represented as a sum of diagonal and semiseparable ones. These matrices belong to the class of structured matrices which arises in numerous applications. Fast O(N) algorithms for their inversion were developed before under additional restrictions which are a source of instability. Our aim is to eliminate these restrictions and to develop reliable and stable numerical algorithms. In this paper we obtain such algorithms with the only requirement that the considered matrix is invertible and its determinant is not close to zero. The case of semiseparable matrices of order one was considered in detail in an earlier paper of the authors.

Journal ArticleDOI
TL;DR: In this paper, the V-transform of normalized spectral function (n.s.f.) was used for non-Hermitian random matrices in some problems of spin glasses and neural nets.
Abstract: We review some results obtained in a series of papers on non-Hermitian random matrices in some problems of spin glasses and neural nets. We present a new theory of such matrices on the basis of the V-transform of the normalized spectral function (n.s.f.) $\nu_n(x,y)$ of the eigenvalues of a nonsymmetric matrix $\Xi$ in terms of the n.s.f. $\mu_n(x,\tau)$ of the eigenvalues of the Hermitian G-matrix $(\Xi - \tau I)(\Xi - \tau I)^{*}$, $\tau = t + is$, $s \neq 0$, and the modified V-transform.

Journal ArticleDOI
TL;DR: In this paper, a new approach to constructing mass matrices is presented, based on expressing the mass matrix through a variable parameter, which allows it to be adjusted in such a way that a simple eigenvalue problem gets the best solution possible in terms of some error measure.
Abstract: A new approach to constructing mass matrices is presented, based on expressing the mass matrix through a variable parameter. This allows the mass matrix to be adjusted in such a way that a simple eigenvalue problem gets the best solution possible in terms of some error measure. This procedure is used to create both diagonal mass matrices and mixed mass matrices. © 1997 John Wiley & Sons, Ltd.
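One simple instance of a parameter-dependent mass matrix is a blend of the consistent and lumped element matrices for a bar element; the sketch below (an assumed construction for illustration, not necessarily the paper's) scans the blending parameter and compares the fundamental frequency of a fixed-free bar with the exact value.

```python
# Sketch: a one-parameter family of element mass matrices (consistent/lumped blend)
# tuned against the exact fundamental frequency of a uniform fixed-free bar.
import numpy as np
from scipy.linalg import eigh

E, rho, L, n_el = 1.0, 1.0, 1.0, 8
h = L / n_el
k_e = (E / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
m_cons = (rho * h / 6) * np.array([[2.0, 1.0], [1.0, 2.0]])
m_lump = (rho * h / 2) * np.eye(2)

def fundamental_frequency(alpha):
    n = n_el + 1
    K, M = np.zeros((n, n)), np.zeros((n, n))
    m_e = (1 - alpha) * m_cons + alpha * m_lump     # parameterized element mass matrix
    for e in range(n_el):
        K[e:e+2, e:e+2] += k_e
        M[e:e+2, e:e+2] += m_e
    w2 = eigh(K[1:, 1:], M[1:, 1:], eigvals_only=True)   # fix the left end
    return np.sqrt(w2[0])

exact = (np.pi / 2) * np.sqrt(E / rho) / L
for alpha in (0.0, 0.5, 1.0):
    w = fundamental_frequency(alpha)
    print(f"alpha={alpha:.1f}  omega={w:.5f}  rel.err={abs(w - exact) / exact:.2e}")
```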

Journal ArticleDOI
TL;DR: This work specifies some initial assumptions that guarantee rapid refinement of a rough initial approximation to the inverse of a Cauchy-like matrix by means of Newton's iteration, where the input, output, and all the auxiliary matrices are represented with their short generators defined by the associated scaling operators.
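The underlying Newton (Newton-Schulz) iteration X ← X(2I − AX) can be shown on a small dense Cauchy matrix; the paper's contribution is to run this refinement on the short generators associated with the scaling operators, which the dense sketch below does not attempt.

```python
# Sketch: Newton-Schulz iteration for refining an approximate inverse of a
# small dense Cauchy matrix (the structured, generator-based version is not shown).
import numpy as np

s = np.arange(1, 7, dtype=float)
t = np.arange(7, 13, dtype=float)
A = 1.0 / (s[:, None] - t[None, :])            # Cauchy matrix C_ij = 1/(s_i - t_j)

X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))   # standard rough start
I = np.eye(A.shape[0])
for k in range(60):
    X = X @ (2 * I - A @ X)                    # quadratically convergent refinement
    if np.linalg.norm(I - A @ X) < 1e-12:
        break
print(k, np.linalg.norm(X - np.linalg.inv(A)))
```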

Journal ArticleDOI
TL;DR: The minimal eigenvalues of a class of block-tridiagonal matrices from telecommunication system analysis are studied, and an eigenvalue analysis for two-user systems and efficient estimates for m-user systems are presented.
Abstract: In this correspondence, we study the minimal eigenvalues of a class of block-tridiagonal matrices from telecommunication system analysis. We present an eigenvalue analysis for two-user systems and efficient estimates for m-user systems.

Proceedings ArticleDOI
R.W. Freund1
10 Dec 1997
TL;DR: A Lanczos-type procedure that reduces a given realization of a finite sequence of (moment) matrices to a minimal partial realization and avoids explicit formulation of and the usually unstable computation with the moment matrices.
Abstract: We describe a Lanczos-type procedure that reduces a given realization of a finite sequence of (moment) matrices to a minimal partial realization. A key feature of this procedure is that the underlying Lanczos-type algorithm is directly applied to the matrix triplet describing the given realization, rather than to the moment matrices. It thus avoids explicit formulation of and the usually unstable computation with the moment matrices.

Journal ArticleDOI
TL;DR: A new and fast algorithm for solving the surface smoothing problem using a membrane, a thin-plate, or a thin-plate-membrane spline for data containing discontinuities, via the capacitance-matrix method based on the Sherman-Morrison-Woodbury formula of matrix analysis.
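The Sherman-Morrison-Woodbury identity that the capacitance-matrix method rests on can be sketched directly: a low-rank correction to an easily solvable operator is handled through a small dense "capacitance" system instead of refactoring the full matrix. The operator and update below are generic stand-ins, not the smoothing-spline matrices of the paper.

```python
# Sketch of the Sherman-Morrison-Woodbury identity behind capacitance-matrix methods.
import numpy as np

rng = np.random.default_rng(3)
n, k = 200, 3
A = np.diag(rng.uniform(1.0, 2.0, size=n))      # easy-to-solve base operator
U = rng.normal(size=(n, k))
V = rng.normal(size=(n, k))
b = rng.normal(size=n)

# x = (A + U V^T)^{-1} b via Woodbury: only solves with A plus a k x k system.
Ainv_b = b / np.diag(A)
Ainv_U = U / np.diag(A)[:, None]
cap = np.eye(k) + V.T @ Ainv_U                   # k x k capacitance matrix
x = Ainv_b - Ainv_U @ np.linalg.solve(cap, V.T @ Ainv_b)

print(np.linalg.norm((A + U @ V.T) @ x - b))     # ~ 0: same solution, small dense solve
```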

Proceedings ArticleDOI
10 Dec 1997
TL;DR: In this paper, the eigenvalue placement in a stable subregion of the s-plane is investigated for multivariable linear dynamic systems, and a theorem that relates the bounds of a semi-annular region to the LQ performance index weighting matrices is used to develop a design procedure to relocate the eigenvalues.
Abstract: Eigenvalue placement in a stable subregion of the s-plane is investigated for multivariable linear dynamic systems. A theorem that relates the bounds of a semi-annular region to the LQ performance index weighting matrices is used to develop a design procedure to relocate the eigenvalues. An example is included to demonstrate the successful implementation of the new method.
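The effect of the LQ weighting matrices on closed-loop eigenvalue locations can be checked numerically. The sketch below solves the Riccati equation for a small illustrative system and several state weightings; it does not implement the paper's semi-annular-region theorem, only the eigenvalue check that such a design procedure would use.

```python
# Sketch: how LQ weighting matrices shift closed-loop eigenvalues (illustrative system).
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [2.0, -1.0]])      # open-loop unstable
B = np.array([[0.0], [1.0]])

for q in (1.0, 10.0, 100.0):                 # heavier state weighting pushes poles left
    Q, R = q * np.eye(2), np.eye(1)
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)          # LQ gain
    poles = np.linalg.eigvals(A - B @ K)
    print(f"q={q:6.1f}  closed-loop eigenvalue real parts: {np.sort(poles.real)}")
```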

Journal ArticleDOI
TL;DR: In this article, the authors extended Siegel's matrix analysis of membrane transport in the Laplace domain to include the case of nonzero initial distribution, which leads to a more general transport equation.
Abstract: Siegel’s matrix analysis of membrane transport in the Laplace domain [J. Phys. Chem. 95, 2556 (1991)], which is restricted to zero initial distribution, has been extended to include the case of nonzero initial distribution. This extension leads to a more general transport equation with Siegel’s results as a special case. The new transport equation allows us to formulate the mean first-passage time t for various boundary conditions, if the initial distribution is stipulated to be of the Dirac delta-function type; and the steady-state permeability P and time lag tL, if zero initial distribution is employed. Based on this matrix analysis we also propose an algorithm for quick and effective numerical computations of P, tL, and t. Examples are given to demonstrate the application of this algorithm, and the numerical results are compared with the theoretical ones. The validity of the transport equation is also checked by a Green’s function.

Journal ArticleDOI
TL;DR: In this paper, a characterization of perfect 0, ±1 matrices is given in terms of a family of matrices, with perfect graphs and perfect 0-1 matrices arising as special cases.

Journal ArticleDOI
TL;DR: Conjectures of Rodman and Shalom are considered in this paper; it is shown that one of them is not true in general, and its validity is proved for some particular cases, such as partial Hessenberg matrices.

Journal ArticleDOI
TL;DR: New algorithms for the derivation of the transfer-function matrices of two-dimensional (2-D) discrete systems from the Roesser and Fornasini-Marchesini state-space models are presented and are computationally efficient and reliable.
Abstract: New algorithms for the derivation of the transfer-function matrices of two-dimensional (2-D) discrete systems from the Roesser and Fornasini-Marchesini state-space models are presented. Two key steps in developing the algorithms are as follows. First, the transfer-function matrix is reformulated in terms of the characteristic polynomials of the matrices involved. Second, an efficient algorithm for the determination of 1-D polynomial coefficients is developed and is, in turn, used to determine the coefficient matrices of the 2-D transfer-function matrix. The proposed algorithms are computationally efficient and reliable. The efficiency of the algorithms is illustrated by comparing the proposed method with two existing methods through examples.
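For the 1-D building block, characteristic-polynomial coefficients and the adjugate polynomial matrix can be generated together by the Leverrier-Faddeev recursion, which gives the transfer function C(sI − A)^{-1}B without symbolic inversion. The sketch below covers only this standard 1-D case, not the paper's 2-D Roesser/Fornasini-Marchesini algorithms.

```python
# Sketch: Leverrier-Faddeev recursion for characteristic-polynomial coefficients
# and the adjugate adj(sI - A), the 1-D ingredients of transfer-function formulas.
import numpy as np

def leverrier(A):
    """Return char. poly coefficients [1, a1, ..., an] and matrices N_k with
    adj(sI - A) = N_0 s^{n-1} + N_1 s^{n-2} + ... + N_{n-1}."""
    n = A.shape[0]
    a = [1.0]
    N = [np.eye(n)]
    for k in range(1, n + 1):
        M = A @ N[-1]
        a_k = -np.trace(M) / k
        a.append(a_k)
        if k < n:
            N.append(M + a_k * np.eye(n))
    return np.array(a), N

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
coeffs, N = leverrier(A)
print("characteristic polynomial coefficients:", coeffs)   # s^2 + 3s + 2
print("check vs numpy.poly:", np.poly(A))
```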

Journal ArticleDOI
TL;DR: In this article, structural condition numbers are introduced for nonsymmetric and symmetric matrices, and their significance for the proper scaling of such matrices is discussed.