
Showing papers in "Electronic Journal of Linear Algebra in 2007"


Journal ArticleDOI
TL;DR: The maximum nullity of simple graphs with n vertices and e edges, M(n, e), is also discussed in this paper; the authors obtain an upper bound on M(n, e) and characterize the n and e for which the upper bound is achieved.
Abstract: The nullity of a graph G, denoted by η(G), is the multiplicity of the eigenvalue zero in its spectrum. It is known that η(G) ≤ n − 2 if G is a simple graph on n vertices and G is not isomorphic to nK1. In this paper, we characterize the extremal graphs attaining the upper bound n − 2 and the second upper bound n − 3. The maximum nullity of simple graphs with n vertices and e edges, M(n, e), is also discussed. We obtain an upper bound on M(n, e), and characterize the n and e for which the upper bound is achieved.
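A minimal numerical sketch of the quantity involved (not taken from the paper): η(G) can be read off as n minus the rank of the adjacency matrix. The example graph, the star K_{1,2} (= path P3), is chosen here only for illustration.

```python
import numpy as np

# Adjacency matrix of the star K_{1,2}: vertex 0 joined to vertices 1 and 2.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)

n = A.shape[0]
eta = n - np.linalg.matrix_rank(A)   # eta(G) = n - rank(A(G))
print(eta)                           # 1, which attains the bound n - 2 for this graph
```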

111 citations


Journal ArticleDOI
TL;DR: In this article, a method for computing first order derivatives of the eigenvalues and eigenvectors for a general complex-valued, non-defective matrix is presented.
Abstract: In many engineering applications, the physical quantities that have to be computed are obtained by solving a related eigenvalue problem. The matrix under consideration and thus its eigenvalues usually depend on some parameters. A natural question then is how sensitive the physical quantity is with respect to (some of) these parameters, i.e., how it behaves for small changes in the parameters. To find this sensitivity, eigenvalue and/or eigenvector derivatives with respect to those parameters need to be found. A method is provided to compute first order derivatives of the eigenvalues and eigenvectors for a general complex-valued, non-defective matrix.
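As an illustration of the kind of quantity computed (a generic sketch, not the paper's algorithm), the standard first-order formula dλ/dp = yᵀA′(p)x / (yᵀx) for a simple eigenvalue, with right eigenvector x and left eigenvector y, can be checked against a finite difference. The parameter-dependent matrix A(p) below is made up.

```python
import numpy as np

def A(p):
    # A made-up parameter-dependent matrix with real, simple eigenvalues near p = 0.3.
    return np.array([[2.0 + p, 1.0], [0.5, 1.0 - 2.0 * p]])

def dA(p):
    return np.array([[1.0, 0.0], [0.0, -2.0]])   # entrywise derivative of A(p)

p0 = 0.3
w, V = np.linalg.eig(A(p0))            # right eigenvectors
wl, U = np.linalg.eig(A(p0).T)         # eigenvectors of A^T are left eigenvectors of A
k = np.argmax(w.real)                  # track the largest eigenvalue
j = np.argmin(np.abs(wl - w[k]))       # match the same eigenvalue in the left problem
x, y = V[:, k], U[:, j]

dlam = (y @ dA(p0) @ x) / (y @ x)      # analytic first-order derivative
h = 1e-6
dlam_fd = (np.max(np.linalg.eig(A(p0 + h))[0].real)
           - np.max(np.linalg.eig(A(p0 - h))[0].real)) / (2 * h)
print(dlam.real, dlam_fd)              # the two values should agree to roughly 1e-6
```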

80 citations


Journal ArticleDOI
TL;DR: In this paper, a graph G is called singular of nullity η(G) ≥ 1 if the null space of its adjacency matrix has dimension η(G), and necessary and sufficient conditions are determined for a graph to be singular in terms of admissible induced subgraphs.
Abstract: Characterization of singular graphs can be reduced to the non-trivial solutions of a system of linear homogeneous equations Ax = 0 for the 0-1 adjacency matrix A. A graph G is singular of nullity η(G) ≥ 1 if the dimension of the null space ker(A) of its adjacency matrix A is η(G). Necessary and sufficient conditions are determined for a graph to be singular in terms of admissible induced subgraphs.

77 citations


Journal ArticleDOI
TL;DR: In this paper, the singular value inequality for matrix sum is studied and a sharp bound for the change in graph energy when the edges of a nonsingular induced subgraph are removed is established.
Abstract: The energy of a graph is the sum of the singular values of its adjacency matrix. A classic inequality for singular values of a matrix sum, including its equality case, is used to study how the energy of a graph changes when edges are removed. One sharp bound and one bound that is never sharp, for the change in graph energy when the edges of a nonsingular induced subgraph are removed, are established. A graph is nonsingular if its adjacency matrix is nonsingular. 1. Singular value inequality for matrix sum. Let X be an n × n complex matrix and denote its singular values by s1(X) ≥ s2(X) ≥ ··· ≥ sn(X) ≥ 0. If X has real eigenvalues only, denote its eigenvalues by λ1(X) ≥ λ2(X) ≥ ··· ≥ λn(X). Define |X| = √(XX∗), which is positive semidefinite, and note that λi(|X|) = si(X) for all i. We write X ≥ 0 to mean X is positive semidefinite. We are interested in the following singular value inequality for a matrix sum: ∑_{i=1}^{n} si(A + B) ≤ ∑_{i=1}^{n} si(A) + ∑_{i=1}^{n} si(B).
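A quick numerical check of the displayed trace-norm inequality (illustrative only; the matrices are random, not adjacency matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

# Sum of singular values (trace norm); graph energy is this quantity for an adjacency matrix.
trace_norm = lambda X: np.linalg.svd(X, compute_uv=False).sum()
print(trace_norm(A + B) <= trace_norm(A) + trace_norm(B) + 1e-12)   # True
```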

55 citations


Journal ArticleDOI
TL;DR: In this article, it is conjectured that for connected graphs of order n ≥ 3, the principal ratio is always attained by one of the lollipop graphs obtained by attaching a path graph to a vertex of a complete graph.
Abstract: Let G be a connected graph. This paper studies the extreme entries of the principal eigenvector x of G, the unique positive unit eigenvector corresponding to the greatest eigenvalue λ1 of the adjacency matrix of G. If G has maximum degree ∆, the greatest entry xmax of x is at most 1/√(1 + λ1²/∆). This improves a result of Papendieck and Recht. The least entry xmin of x as well as the principal ratio xmax/xmin are studied. It is conjectured that among connected graphs of order n ≥ 3, the maximum principal ratio is always attained by one of the lollipop graphs obtained by attaching a path graph to a vertex of a complete graph.
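A small numerical sketch of the stated bound (assuming the reconstruction xmax ≤ 1/√(1 + λ1²/∆) above is the intended inequality), evaluated on the star K_{1,3} as a hypothetical example:

```python
import numpy as np

A = np.zeros((4, 4))
A[0, 1:] = A[1:, 0] = 1            # star K_{1,3}: vertex 0 joined to vertices 1, 2, 3

w, V = np.linalg.eigh(A)
lam1 = w[-1]                       # largest adjacency eigenvalue
x = np.abs(V[:, -1])               # principal (Perron) eigenvector, unit 2-norm
Delta = int(A.sum(axis=1).max())   # maximum degree

bound = 1.0 / np.sqrt(1.0 + lam1**2 / Delta)
print(x.max(), bound)              # for this star the two values coincide (both 1/sqrt(2))
```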

54 citations


Journal ArticleDOI
TL;DR: In this article, the problem of relating the eigenvalues of the normalized Laplacian for a weighted graph G and for G − H, where H is a subgraph of G, is considered.
Abstract: The problem of relating the eigenvalues of the normalized Laplacian for a weighted graph G and G − H, for H a subgraph of G, is considered. It is shown that these eigenvalues interlace and that the tightness of the interlacing depends on the number of nonisolated vertices of H. Weak coverings of a weighted graph are also defined, and interlacing results for the normalized Laplacian of such a covering are given. In addition, there is a discussion of interlacing for the Laplacian of directed graphs.
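A minimal illustration (not from the paper) of the objects involved: the normalized Laplacian spectra of a small unweighted graph and of the subgraph obtained by deleting one edge, which can be compared by eye for interlacing. The cycle C5 is an arbitrary example.

```python
import numpy as np

def normalized_laplacian(A):
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(d))      # assumes no isolated vertices
    return np.eye(len(A)) - Dinv @ A @ Dinv

A = np.zeros((5, 5))
for i in range(5):                         # cycle C5
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1

B = A.copy()
B[0, 1] = B[1, 0] = 0                      # delete edge {0,1}; no vertex becomes isolated

print(np.linalg.eigvalsh(normalized_laplacian(A)))
print(np.linalg.eigvalsh(normalized_laplacian(B)))
```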

53 citations


Journal ArticleDOI
TL;DR: In this article, a symmetric version of Rado's extension is given, which allows us to obtain a new, more general, sufficient condition for the existence of symmetric nonnegative matrices with prescribed spectrum.
Abstract: A perturbation result, due to R. Rado and presented by H. Perfect in 1955, shows how to modify r eigenvalues of a matrix of order n, r ≤ n, via a perturbation of rank r, without changing any of the n − r remaining eigenvalues. This result extended a previous one, due to Brauer, on perturbations of rank r = 1. Both results have been exploited in connection with the nonnegative inverse eigenvalue problem. In this paper a symmetric version of Rado’s extension is given, which allows us to obtain a new, more general, sufficient condition for the existence of symmetric nonnegative matrices with prescribed spectrum.
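For context, Brauer's rank-one case mentioned above can be checked numerically; the sketch below (illustrative, not the paper's symmetric construction) perturbs a random positive matrix A by vqᵀ, where Av = λ1v, and observes that only λ1 moves.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((4, 4))                 # entrywise positive matrix

w, V = np.linalg.eig(A)
k = np.argmax(w.real)                  # Perron eigenvalue of a positive matrix
lam1, v = w[k].real, V[:, k].real

q = rng.standard_normal(4)
B = A + np.outer(v, q)                 # rank-one Brauer perturbation

print(np.sort_complex(np.linalg.eigvals(A)))
print(np.sort_complex(np.linalg.eigvals(B)))   # lam1 is replaced by lam1 + q @ v
print(lam1 + q @ v)
```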

38 citations


Journal ArticleDOI
TL;DR: In this article, a relation between the multiplicity m of the second eigenvalue λ2 of a Laplacian on a graph and a discrete analogue of Courant's nodal line theorem is discussed.
Abstract: A relation between the multiplicity m of the second eigenvalue λ2 of a Laplacian on a graph G, tight mappings of G, and a discrete analogue of Courant's nodal line theorem is discussed. For a certain class of graphs, it is shown that the m-dimensional eigenspace of λ2 is tight and thus defines a tight mapping of G into an m-dimensional Euclidean space. The tightness of the mapping is shown to set Colin de Verdière's upper bound on the maximal λ2-multiplicity, m ≤ chr(γ(G)) − 1, where chr(γ(G)) is the chromatic number and γ(G) is the genus of G.

35 citations


Journal ArticleDOI
TL;DR: In this paper, necessary and sufficient conditions are presented on a given vector so that a value of τ ∈ R can be found to achieve perfect conditioning of A. A simple test to check the conditions is derived, and the corresponding value of τ is found.
Abstract: Let K, M ∈ N with K

29 citations


Journal ArticleDOI
TL;DR: In this article, the authors considered an ordered pair of linear transformations A : V → V and A ∗ : V → V that satisfy the following conditions: (i) there exists a basis for V with respect to which the matrix representing A is irreducible tridiagonal.
Abstract: Let K denote a field and let V denote a vector space over K with finite positive dimension. An ordered pair is considered of linear transformations A : V → V and A∗ : V → V that satisfy (i) and (ii) below: (i) There exists a basis for V with respect to which the matrix representing A is irreducible tridiagonal and the matrix representing A∗ is diagonal. (ii) There exists a basis for V with respect to which the matrix representing A∗ is irreducible tridiagonal and the matrix representing A is diagonal. Such a pair is called a Leonard pair on V. Let ξ, ζ, ξ∗, ζ∗ denote scalars in K with ξ, ξ∗ nonzero, and note that ξA + ζI, ξ∗A∗ + ζ∗I is a Leonard pair on V. Necessary and sufficient conditions are given for this Leonard pair to be isomorphic to A, A∗. Also given are necessary and sufficient conditions for this Leonard pair to be isomorphic to the Leonard pair A∗, A.

25 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that all potentially nilpotent full sign patterns are spectrally arbitrary and a related result for sign patterns all of whose zeros lie on the main diagonal is also given.
Abstract: It is shown that all potentially nilpotent full sign patterns are spectrally arbitrary. A related result for sign patterns all of whose zeros lie on the main diagonal is also given. 1. Full Spectrally Arbitrary Patterns. In what follows, Mn denotes the topological vector space of all n × n matrices with real entries and Pn denotes the set of all polynomials with real coefficients of degree n or less. The superdiagonal of an n × n matrix consists of the n − 1 elements that are in the ith row and (i + 1)st column for some i, 1 ≤ i ≤ n − 1. A sign pattern is a matrix with entries in {+, 0, −}. Given two n × n sign patterns A and B, we say that B is a superpattern of A if bij = aij whenever aij ≠ 0. Note that a sign pattern is always a superpattern of itself. We define the function sign : R → {+, 0, −} in the obvious way: sign(x) = + if x > 0, sign(0) = 0, and sign(x) = − if x < 0. Given a real matrix A, sign(A) is the sign pattern with the same dimensions as A whose (i, j)th entry is sign(aij). For every sign pattern A, we define its associated sign pattern class to be the inverse image Q(A) = sign⁻¹(A). A sign pattern is said to be full if none of its entries are zero (8). A sign pattern class
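A small sketch (illustrative only, not from the paper) of the entrywise sign map and the superpattern test defined above:

```python
import numpy as np

def sign_pattern(A):
    # Entrywise sign map, with entries encoded as -1, 0, +1.
    return np.sign(np.asarray(A, dtype=float)).astype(int)

def is_superpattern(B, A):
    # B is a superpattern of A if b_ij = a_ij wherever a_ij is nonzero.
    A, B = np.asarray(A), np.asarray(B)
    return bool(np.all((A == 0) | (B == A)))

A = sign_pattern([[1.7, 0.0], [-2.3, 0.4]])
B = sign_pattern([[3.0, -1.0], [-0.5, 2.0]])
print(A)
print(is_superpattern(B, A))   # True: B agrees with A on all of A's nonzero entries
```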

Journal ArticleDOI
TL;DR: Inertially arbitrary nonzero patterns of order at most 4 are characterized and some of these patterns are demonstrated to be inertially arbitrary but not spectrally arbitrary.
Abstract: Inertially arbitrary nonzero patterns of order at most 4 are characterized. Some of these patterns are demonstrated to be inertially arbitrary but not spectrally arbitrary. The order 4 sign patterns which are inertially arbitrary and have a nonzero pattern that is not spectrally arbitrary are also described. There exists an irreducible nonzero pattern which is inertially arbitrary but has no signing that is inertially arbitrary. In fact, up to equivalence, this pattern is unique among the irreducible order 4 patterns with this property.

Journal ArticleDOI
TL;DR: The question of what happens to the eigenvalues of the Laplacian of a graph when a vertex is deleted is addressed in this paper, and a lower bound on the average number of leaves in a random spanning tree is given.
Abstract: The question of what happens to the eigenvalues of the Laplacian of a graph when we delete a vertex is addressed. It is shown that λi − 1 ≤ λi^v ≤ λi+1, where λi is the ith smallest eigenvalue of the Laplacian of the original graph and λi^v is the ith smallest eigenvalue of the Laplacian of the graph G(V − v), i.e., the graph obtained after removing the vertex v. It is also shown that the average number of leaves in a random spanning tree satisfies F(G) > 2|E|e⁻¹α/λn if λ2 > αn.
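A numerical spot-check of the interlacing statement as reconstructed above (λi − 1 ≤ λi^v ≤ λi+1), on a made-up 5-vertex graph:

```python
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)   # an arbitrary example graph
L = np.diag(A.sum(axis=1)) - A

v = 2                                           # delete vertex 2
Av = np.delete(np.delete(A, v, axis=0), v, axis=1)
Lv = np.diag(Av.sum(axis=1)) - Av

lam = np.linalg.eigvalsh(L)                     # lambda_1 <= ... <= lambda_n of G
lamv = np.linalg.eigvalsh(Lv)                   # eigenvalues of the Laplacian of G - v
for i in range(len(lamv)):
    ok = (lam[i] - 1 <= lamv[i] + 1e-9) and (lamv[i] <= lam[i + 1] + 1e-9)
    print(ok)                                   # all True for this example
```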

Journal ArticleDOI
TL;DR: In this article, sufficient conditions are given for the k-subdirect sum of doubly diagonally dominant matrices (DDD matrices) to also be a DDD matrix.
Abstract: The problem of when the k-subdirect sum of a doubly diagonally dominant matrix (DDD matrix) is also a DDD matrix is studied. Some sufficient conditions are given. The same situation is analyzed for diagonally dominant matrices and strictly diagonally dominant matrices. Additionally, some conditions are derived for the case card(S) > card(S1), which was not studied by Bru, Pedroche and Szyld (Electron. J. Linear Algebra, 15:201-209, 2006). Examples are given to illustrate the conditions presented.

Journal ArticleDOI
TL;DR: In this paper, characterizations are obtained for Schur multiplicative maps on complex matrices preserving the spectral radius, numerical radius, or spectral norm, and for maps f satisfying ‖A ◦ B‖ = ‖f(A) ◦ f(B)‖ for all matrices A and B. Similar results are obtained for maps under weaker assumptions.
Abstract: Characterizations are obtained for Schur (Hadamard) multiplicative maps on complex matrices preserving the spectral radius, numerical radius, or spectral norm. Similar results are obtained for maps under weaker assumptions. Furthermore, a characterization is given for maps f satisfying ‖A ◦ B‖ = ‖f(A) ◦ f(B)‖ for all matrices A and B.

Journal ArticleDOI
TL;DR: In this paper, necessary and sufficient conditions are established for a group of mixed-type reverse-order laws for generalized inverses of a triple matrix product to hold, with applications to generalized inverses of the sum of two matrices.
Abstract: Necessary and sufficient conditions are established for a group of mixed-type reverse-order laws for generalized inverses of a triple matrix product to hold. Some applications of the reverse-order laws to generalized inverses of the sum of two matrices are also given.
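For orientation (not one of the paper's mixed-type laws), the plain reverse-order law (ABC)⁺ = C⁺B⁺A⁺ for Moore-Penrose inverses holds for nonsingular square factors but typically fails for rectangular ones, as a quick experiment shows:

```python
import numpy as np

rng = np.random.default_rng(4)
pinv = np.linalg.pinv

# Square nonsingular factors: the reverse-order law holds (pinv is the ordinary inverse).
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
print(np.allclose(pinv(A @ B @ C), pinv(C) @ pinv(B) @ pinv(A)))   # True

# Generic rectangular factors: the law typically fails.
A, B, C = rng.standard_normal((2, 4)), rng.standard_normal((4, 3)), rng.standard_normal((3, 5))
print(np.allclose(pinv(A @ B @ C), pinv(C) @ pinv(B) @ pinv(A)))   # typically False
```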

Journal ArticleDOI
TL;DR: For a field F and a graph G of order n, the minimum rank of G over F is the smallest possible rank over all symmetric matrices A ∈ F^{n×n} whose (i, j)th entry (for i ≠ j) is nonzero whenever {i, j} is an edge in G and is zero otherwise; in this paper, the minimum rank of a tree is shown to be independent of the field.
Abstract: For a field F and graph G of order n, the minimum rank of G over F is defined to be the smallest possible rank over all symmetric matrices A ∈ F^{n×n} whose (i, j)th entry (for i ≠ j) is nonzero whenever {i, j} is an edge in G and is zero otherwise. It is shown that the minimum rank of a tree is independent of the field.

Journal ArticleDOI
TL;DR: In this paper, necessary and sufficient conditions for the equivalence and congruence of matrices under the action of standard parabolic subgroups are discussed.
Abstract: Necessary and sufficient conditions for the equivalence and congruence of matrices under the action of standard parabolic subgroups are discussed.

Journal ArticleDOI
TL;DR: In this paper, many interesting properties of Euclidean distance matrices are extended to block distance matrices, and distance matrices of trees with matrix weights are investigated.
Abstract: In this paper, block distance matrices are introduced. Suppose F is a square block matrix in which each block is a symmetric matrix of some given order. If F is positive semidefinite, the block distance matrix D is defined as a matrix whose (i, j)-block is given by Dij = Fii + Fjj − 2Fij. When each block in F is 1×1 (i.e., a real number), D is a usual Euclidean distance matrix. Many interesting properties of Euclidean distance matrices are extended to block distance matrices in this paper. Finally, distance matrices of trees with matrix weights are investigated.
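A minimal sketch of the construction Dij = Fii + Fjj − 2Fij (hypothetical data; F is built as a Kronecker product so that it is positive semidefinite with symmetric blocks):

```python
import numpy as np

C = np.array([[2., 1., 0.],
              [1., 2., 1.],
              [0., 1., 2.]])      # positive semidefinite 3x3 "outer" matrix
S = np.array([[2., 1.],
              [1., 3.]])          # symmetric positive definite 2x2 block
F = np.kron(C, S)                 # PSD, with symmetric 2x2 blocks F_ij = C_ij * S

s, k = 2, 3                       # block size and number of blocks
D = np.zeros_like(F)
for i in range(k):
    for j in range(k):
        Fii = F[i*s:(i+1)*s, i*s:(i+1)*s]
        Fjj = F[j*s:(j+1)*s, j*s:(j+1)*s]
        Fij = F[i*s:(i+1)*s, j*s:(j+1)*s]
        D[i*s:(i+1)*s, j*s:(j+1)*s] = Fii + Fjj - 2 * Fij

print(D)   # with 1x1 blocks this construction reduces to a Euclidean distance matrix
```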


Journal ArticleDOI
TL;DR: In this article, general expressions of weighted least-squares estimators (WLSEs) of parameter matrices were given under a general growth curve model, and some algebraic and statistical properties of the estimators were also derived through the matrix rank method.
Abstract: Growth curve models are used to analyze repeated measures data (longitudinal data), which are functions of time. General expressions for weighted least-squares estimators (WLSEs) of parameter matrices are given under a general growth curve model. Some algebraic and statistical properties of the estimators are also derived through the matrix rank method. AMS subject classifications: 62F11, 62H12, 15A03, 15A09.

Journal ArticleDOI
TL;DR: In this article, the equivalence relations of strict equivalence and congruence of real and complex matrix pencils with symmetries are compared, depending on whether the congruence matrices are real, complex, or quaternionic.
Abstract: The equivalence relations of strict equivalence and congruence of real and complex matrix pencils with symmetries are compared, depending on whether the congruence matrices are real, complex, or quaternionic. The obtained results are applied to comparison of congruences of matrices, over the reals, the complexes, and the quaternions.

Journal ArticleDOI
TL;DR: In this article, the weak Hawkins-Simon condition is studied for real square matrices, and sufficient conditions are given for the condition to hold after a suitable reordering of columns or of both rows and columns.
Abstract: A real square matrix satisfies the weak Hawkins-Simon condition if its leading principal minors are positive (the condition was first studied by the French mathematician Maurice Potron). Three characterizations are given. Simple sufficient conditions ensure that the condition holds after a suitable reordering of columns. A full characterization of this set of matrices should take into account the group of transforms which leave it invariant. A simple algorithm able, in some cases, to implement a suitable permutation of columns is also studied. The nonsingular Stiemke matrices satisfy the WHS condition after reorderings of both rows and columns.
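A minimal sketch (illustrative) of testing the weak Hawkins-Simon condition by checking leading principal minors, including an example that satisfies it only after a column swap:

```python
import numpy as np

def weak_hawkins_simon(A):
    # True if every leading principal minor of A is positive.
    A = np.asarray(A, dtype=float)
    return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, A.shape[0] + 1))

print(weak_hawkins_simon([[2, -1], [-1, 2]]))   # True: minors are 2 and 3
print(weak_hawkins_simon([[0, 1], [1, 0]]))     # False, but True after swapping the two columns
print(weak_hawkins_simon([[1, 0], [0, 1]]))     # the column-swapped version
```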

Journal ArticleDOI
TL;DR: In this paper, the authors prove several evaluations of determinants of matrices, the entries of which are given by the recurrence ai,j = ai−1,j−1 + ai−1,j, i, j ≥ 2, with various choices for the first row a1,j and first column ai,1.
Abstract: The purpose of this article is to prove several evaluations of determinants of matrices, the entries of which are given by the recurrence ai,j = ai−1,j−1 + ai−1,j, i, j ≥ 2, with various choices for the first row a1,j and first column ai,1.
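A small sketch (with an illustrative choice of first row and column, not necessarily one treated in the paper) of building a matrix from this recurrence and evaluating its determinant:

```python
import numpy as np

def recurrence_matrix(first_row, first_col):
    # Fill a[i][j] = a[i-1][j-1] + a[i-1][j] for i, j >= 1 (0-indexed),
    # given the first row and first column.
    n = len(first_row)
    a = np.zeros((n, n))
    a[0, :] = first_row
    a[1:, 0] = first_col[1:]
    for i in range(1, n):
        for j in range(1, n):
            a[i, j] = a[i - 1, j - 1] + a[i - 1, j]
    return a

M = recurrence_matrix([1, 1, 1, 1], [1, 1, 1, 1])   # all-ones first row and column
print(M)
print(round(np.linalg.det(M)))                      # 1 for this particular choice
```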

Journal ArticleDOI
TL;DR: In this paper, the authors characterized matrices that are products of two (or more) commuting square-zero matrices and two commuting nilpotent matrices on an infinite dimensional Hilbert space.
Abstract: Matrices that are products of two (or more) commuting square-zero matrices and matrices that are products of two commuting nilpotent matrices are characterized. Also given are characterizations of operators on an infinite dimensional Hilbert space that are products of two (or more) commuting square-zero operators, as well as operators on an infinite-dimensional vector space that are products of two commuting nilpotent operators.

Journal ArticleDOI
TL;DR: In this article, the authors derived sharp estimates for the absolute values of entries of matrix valued functions of finite and infinite matrices and applied these estimates to differential equations; bounds for the eigenvalues are not required.
Abstract: Sharp estimates for the absolute values of entries of matrix valued functions of finite and infinite matrices are derived. These estimates give us bounds for various norms of matrix valued functions. Applications of the obtained estimates to differential equations are also discussed. Gel'fand and G.E. Shilov have established an estimate for the norm of a regular matrix valued function in connection with their investigations of partial differential equations. However, that estimate is not sharp; it is not attained for any matrix. The problem of obtaining a precise estimate for the norm of a matrix function has been repeatedly discussed in the literature, cf. (1). In the paper (6) (see also (7)) the author has derived a precise estimate for the Euclidean norm which is attained in the case of normal matrices. But that estimate requires bounds for the eigenvalues. In this paper we derive sharp estimates for the absolute values of entries of a matrix valued function. They are attained in the case of diagonal matrices. Besides, bounds for the eigenvalues are not required. These estimates give us bounds for various norms of matrix valued functions. Our results supplement the very interesting recent investigations of matrix valued functions (3, 4, 9, 11). A few words about the contents. The paper consists of 4 sections. In this section we consider finite matrices and formulate the main result of the paper, Theorem 1.1. It is proved in Section 2. Section 3 deals with applications of Theorem 1.1 to differential equations. In Section 4 we generalize Theorem 1.1 to some classes of infinite matrices. Let C^n be a complex Euclidean space with the scalar product (·, ·) and the unit matrix I. Let σ(A) be the spectrum of a linear operator (a matrix) A and

Journal ArticleDOI
TL;DR: In this paper, the Fourier-Motzkin elimination is considered as a matrix operation and properties of this operation are established for combinatorial matrices, defined as (0, 1, −1)-matrices.
Abstract: Fourier-Motzkin elimination is a classical method for solving linear inequalities in which one variable is eliminated in each iteration. This method is considered here as a matrix operation and properties of this operation are established. In particular, the focus is on situations where this matrix operation preserves combinatorial matrices (defined here as (0, 1, −1)-matrices).

Journal ArticleDOI
TL;DR: In this article, the Q-property of the multiplicative transformation AXA^T in semidefinite linear complementarity problems is characterized when A is normal.
Abstract: The Q-property of a multiplicative transformation AXA^T in semidefinite linear complementarity problems is characterized when A is normal.

Journal ArticleDOI
TL;DR: In this article, the authors considered a bound that relates the distance between X and Y to the eigenvalues of the normalized Laplacian matrix for G, the volumes of X and Y, and the volumes of their complements.
Abstract: Let G be a connected graph, and let X and Y be subsets of its vertex set. A previously published bound is considered that relates the distance between X and Y to the eigenvalues of the normalized Laplacian matrix for G, the volumes of X and Y, and the volumes of their complements. A counterexample is given to the bound, and then a corrected version of the bound is provided.

Journal ArticleDOI
TL;DR: In this article, it was shown that if the error in the eigenvalue is sufficiently small, then the error in the approximate eigenvector produced by the least-squares method is also small.
Abstract: The least-squares method can be used to approximate an eigenvector for a matrix when only an approximation is known for the corresponding eigenvalue. In this paper, this technique is analyzed and error estimates are established proving that if the error in the eigenvalue is sufficiently small, then the error in the approximate eigenvector produced by the least-squares method is also small. Also reported are some empirical results based on using the algorithm. 1. Notation. We use upper case, bold letters to represent complex matrices, and lower case bold letters to represent vectors in C^k. We consider a vector v to be a column, and so its adjoint v∗ is a row vector. Hence v1∗v2 yields the complex dot product v2 · v1. The vector ei is the vector having 1 in its ith coordinate and 0 elsewhere, and In is the n × n identity matrix. We use ‖v‖ to represent the 2-norm on vectors; that is, ‖v‖² = v∗v. Also, |||F||| represents the spectral matrix norm of a square matrix F, and so ‖Fv‖ ≤ |||F||| ‖v‖ for every vector v. Finally, for an n × n Hermitian matrix F, we will write each of the n (not necessarily distinct) real eigenvalues for F as λi(F), where λ1(F) ≤ λ2(F) ≤ ··· ≤ λn(F). 2. The Method and Our Goal. Suppose M is an arbitrary n × n matrix having λ as an eigenvalue, and let A = λIn − M. Generally, one can find an eigenvector for M corresponding to λ by solving the homogeneous system Ax = 0. However, the computation of an eigenvalue does not always result in an exact answer, either because a numerical technique was used for its computation, or due to roundoff error. Suppose λ′ is the approximate, known value for the actual eigenvalue λ. If λ ≠ λ′, then the known matrix K = λ′In − M is most likely nonsingular, and so the homogeneous system Kx = 0 has only the trivial solution. This situation occurs frequently when attempting to solve small eigenvalue problems on calculators. Let ε = λ′ − λ. Then K = A + εIn. Our goal is to approximate a vector in the kernel of A when only the matrix K is known. We assume that M has no other eigenvalues within |ε| units of λ, so that K is nonsingular, and thus has trivial kernel. Let u be a unit vector in ker(A). Although we know that u exists, u is unknown. Let v be an arbitrarily chosen unit vector in C^n such that w = v∗u ≠ 0. In practice, when choosing v, the value of w is unknown, but if v is chosen at random, the probability that w = 0 is zero. Let B be the (n + 1) × n matrix formed by appending the row v∗
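The abstract is truncated above; one plausible reading of the method (my assumption, not necessarily the authors' exact formulation) is to solve the least-squares system that stacks K on top of the random row v∗ with right-hand side (0, …, 0, 1)ᵀ, so the solution nearly lies in ker(A) while being normalized away from zero. A minimal numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((6, 6))
M = (M + M.T) / 2                          # symmetric test matrix, so eigenpairs are well conditioned

lam, X = np.linalg.eigh(M)
lam_true, x_true = lam[0], X[:, 0]
lam_approx = lam_true + 1e-6               # a slightly perturbed (approximate) eigenvalue

K = lam_approx * np.eye(6) - M             # K = lambda' * I - M, nonsingular
v = rng.standard_normal(6)                 # random row playing the role of v*
B = np.vstack([K, v])                      # (n + 1) x n stacked system
rhs = np.concatenate([np.zeros(6), [1.0]])

x, *_ = np.linalg.lstsq(B, rhs, rcond=None)
x /= np.linalg.norm(x)
# Error relative to the true eigenvector (up to sign): small, comparable to the eigenvalue error.
print(min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true)))
```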