
Showing papers on "Square matrix published in 1999"


Journal ArticleDOI
TL;DR: It is shown that this bound is sharp in order, as far as the dependence on m is concerned, and that a feasible solution x to (P) with $x^T A x \ge \frac{\mathrm{Opt}(\mathrm{SDP})}{2\ln(2m^2)}$ can be found efficiently.
Abstract: We demonstrate that if $A_1,\dots,A_m$ are symmetric positive semidefinite $n\times n$ matrices with positive definite sum and $A$ is an arbitrary symmetric $n\times n$ matrix, then the relative accuracy, in terms of the optimal value, of the semidefinite relaxation

195 citations
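
A minimal sketch of the relaxation discussed above, assuming numpy and cvxpy are available: it sets up the quadratic problem max x^T A x subject to x^T A_i x <= 1 together with its standard SDP relaxation, not the paper's rounding procedure or its accuracy analysis.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 6, 4

# Random symmetric A and symmetric positive definite A_i (so their sum is positive definite).
A = rng.standard_normal((n, n))
A = (A + A.T) / 2
As = []
for _ in range(m):
    B = rng.standard_normal((n, n))
    As.append(B @ B.T + 0.1 * np.eye(n))

# SDP relaxation of  max x^T A x  s.t.  x^T A_i x <= 1:
#   max tr(A X)  s.t.  tr(A_i X) <= 1,  X positive semidefinite.
X = cp.Variable((n, n), PSD=True)
prob = cp.Problem(cp.Maximize(cp.trace(A @ X)),
                  [cp.trace(Ai @ X) <= 1 for Ai in As])
opt_sdp = prob.solve()

# The bound quoted above guarantees a feasible x with
# x^T A x >= Opt(SDP) / (2 * ln(2 m^2)).
print("Opt(SDP):", opt_sdp)
print("guaranteed quadratic value:", opt_sdp / (2 * np.log(2 * m**2)))
```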


01 Jan 1999
TL;DR: The SPOOLES software package provides a choice of three sparse matrix orderings (minimum degree, nested dissection and multisection), supports pivoting for numerical stability, can compute direct or drop tolerance factorizations, and the computations are based on BLAS3 numerical kernels to take advantage of high performance computing architectures.
Abstract: Solving sparse linear systems of equations is a common and important component of a multitude of scientific and engineering applications. The SPOOLES software package provides this functionality with a collection of software objects and methods. The package provides a choice of three sparse matrix orderings (minimum degree, nested dissection and multisection), supports pivoting for numerical stability (when required), can compute direct or drop tolerance factorizations, and the computations are based on BLAS3 numerical kernels to take advantage of high performance computing architectures. The factorizations and solves are supported in serial, multithreaded (using POSIX threads) and MPI environments. The first step to solving a linear system AX = B is to construct “objects” to hold the entries and structure of A, and the entries of X and B. SPOOLES provides a flexible set of methods to assemble a sparse matrix. The “input matrix” object allows a choice of coordinate systems (by rows, by columns, and other ways), flexible input (input by single entries, (partial) rows or columns, dense submatrices, or any combination), resizes itself as necessary, and assembles, sorts and permutes its entries. It is also a distributed object for MPI environments. Matrix entries can be created and assembled on different processors, and methods exist to assemble and redistribute the matrix entries as necessary. There are three methods to order a sparse matrix: minimum degree, generalized nested dissection and multisection. The latter two orderings depend on a domain/separator tree that is constructed using a graph partitioning method. Domain decomposition is used to find an initial separator, and a sequence of network flow problems are solved to smooth the separator. The qualities of our nested dissection and multisection orderings are comparable to other state of the art packages. Factorizations of square matrices have the form A = PLDUQ and A = PLDL^T P^T, where P and Q are permutation matrices. Square systems of the form A + σB may also be factored and solved (as found in shift-and-invert eigensolvers), as well as full rank overdetermined linear systems, where a QR factorization is computed and the solution found by solving the semi-normal equations.

124 citations
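
SPOOLES itself is a C library; as a rough stand-in for the workflow it describes (assemble a sparse matrix entry by entry, reorder, factor, solve), here is a hedged sketch using scipy.sparse, whose splu also applies a fill-reducing ordering internally.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 1000
# Assemble A in coordinate (COO) form, entry by entry, in the spirit of SPOOLES' input-matrix object:
# a tridiagonal test matrix with 4 on the diagonal and -1 on the off-diagonals.
rows = np.arange(n); cols = np.arange(n); vals = 4.0 * np.ones(n)
off = np.arange(n - 1)
rows = np.concatenate([rows, off, off + 1])
cols = np.concatenate([cols, off + 1, off])
vals = np.concatenate([vals, -np.ones(n - 1), -np.ones(n - 1)])
A = sp.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsc()

b = np.ones(n)
lu = splu(A)            # sparse LU factorization with a fill-reducing column ordering
x = lu.solve(b)
print("residual norm:", np.linalg.norm(A @ x - b))
```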


Journal ArticleDOI
TL;DR: In this article, it was shown that a real symmetric positive definite matrix V is congruent to a diagonal matrix modulo a pseudo-orthogonal [pseudo-unitary] matrix in SO(m,n), for any choice of partition N = m+n.
Abstract: It is shown that an N×N real symmetric [complex Hermitian] positive definite matrix V is congruent to a diagonal matrix modulo a pseudo-orthogonal [pseudo-unitary] matrix in SO(m,n) [SU(m,n)], for any choice of partition N = m+n. It is further shown that the method of proof in this context can easily be adapted to obtain a rather simple proof of Williamson’s theorem, which states that if N is even then V is congruent also to a diagonal matrix modulo a symplectic matrix in Sp(N,R) [Sp(N,C)]. Applications of these results considered include a generalization of the Schweinler–Wigner method of “orthogonalization based on an extremum principle” to construct pseudo-orthogonal and symplectic bases from a given set of linearly independent vectors.

101 citations
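
A small numerical companion to Williamson's theorem as stated above, assuming the standard convention that the diagonal entries d_k (the symplectic eigenvalues of a 2n x 2n positive definite V) are the moduli of the eigenvalues of iJV, where J is the standard symplectic form.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
M = rng.standard_normal((2 * n, 2 * n))
V = M @ M.T + 0.5 * np.eye(2 * n)           # real symmetric positive definite

J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n),       np.zeros((n, n))]])

eigs = np.linalg.eigvals(1j * J @ V)         # real values, coming in pairs +/- d_k
d = np.sort(np.abs(eigs))                    # each symplectic eigenvalue appears twice
print("symplectic eigenvalues (doubled):", np.round(d, 4))
```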


Patent
John Wallis
03 Jun 1999
TL;DR: In this paper, a linear solver for solving systems of non-linear partial differential equations and systems of linear equations representing physical characteristics of an oil and/or gas reservoir is presented.
Abstract: A Linear Solver method and apparatus, embodied in a Simulator and adapted for solving systems of non-linear partial differential equations and systems of linear equations representing physical characteristics of an oil and/or gas reservoir, includes receiving a first signal representing physical characteristics of a reservoir, obtaining a residual vector r0 from the first signal (representing errors associated with a system of nonlinear equations describing the reservoir) and a first matrix A0 (representing the sensitivity of the residual vector to changes in a system of nonlinear equations), recursively decomposing matrix A0 into a lower block triangular matrix, an upper block triangular matrix, and a diagonal matrix, and generating a second matrix M0 that is an approximation to matrix A0. A solution to the systems of non-linear partial differential equations may then be found by using certain values that were used to produce the matrix M0, and that solution does not require the direct computation of A0 x = b (representing the system of linear equations) as required by conventional methods.

90 citations
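
Not the patent's nested-factorization scheme: only a hedged illustration, using scipy, of the general idea that an approximate factorization M0 of the sensitivity matrix A0 can drive an iterative solve of A0 x = b without inverting A0 directly.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, gmres, LinearOperator

n = 500
main = 4.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A0 = sp.diags([off, main, off], [-1, 0, 1], format="csc")   # stand-in sensitivity matrix
b = np.ones(n)

ilu = spilu(A0)                                   # M0 ~ A0 via incomplete LU
M = LinearOperator((n, n), matvec=ilu.solve)      # expose M0^{-1} as an operator

x, info = gmres(A0, b, M=M)                       # preconditioned iterative solve
print("converged:", info == 0, " residual norm:", np.linalg.norm(A0 @ x - b))
```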


Journal ArticleDOI
TL;DR: A polynomial-time algorithm is developed for the problems of matrix balancing and doubly stochastic scaling of a square nonnegative matrix, thus deriving polynomial-time solvability of a number of generic scaling problems for nonnegative multiindex arrays.

54 citations
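
A short sketch of the doubly stochastic scaling problem itself, using the classical Sinkhorn-Knopp iteration; the paper's contribution is a polynomial-time algorithm with complexity guarantees, which this simple heuristic does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((5, 5)) + 0.1          # strictly positive square matrix

r = np.ones(5)
c = np.ones(5)
for _ in range(500):
    r = 1.0 / (A @ c)                 # make the row sums of diag(r) A diag(c) equal to 1
    c = 1.0 / (A.T @ r)               # make the column sums equal to 1

S = np.diag(r) @ A @ np.diag(c)
print("row sums:", np.round(S.sum(axis=1), 6))
print("col sums:", np.round(S.sum(axis=0), 6))
```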


Journal ArticleDOI
28 Sep 1999
TL;DR: In this article, the problem of minimizing the spectral abscissa (the largest real part of an eigenvalue) over an affine subspace of square matrices was studied, and an example whose optimal solution has Jordan form consisting of a single Jordan block was given.
Abstract: Given an affine subspace of square matrices, we consider the problem of minimizing the spectral abscissa (the largest real part of an eigenvalue). We give an example whose optimal solution has Jordan form consisting of a single Jordan block, and we show, using non-Lipschitz variational analysis, that this behaviour persists under arbitrarily small perturbations to the example. Thus although matrices with nontrivial Jordan structure are rare in the space of all matrices, they appear naturally in spectral abscissa minimization.

50 citations
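
A tiny numerical illustration (not the paper's example) of the phenomenon described above, minimizing the spectral abscissa over a one-parameter affine family of 2x2 matrices; the minimizer is exactly the parameter value at which the matrix becomes a single Jordan block.

```python
import numpy as np

def spectral_abscissa(M):
    return np.max(np.linalg.eigvals(M).real)

# Affine family A(t) = A0 + t*A1 = [[0, 1], [-t, -t]]
A0 = np.array([[0.0, 1.0],
               [0.0, 0.0]])
A1 = np.array([[ 0.0,  0.0],
               [-1.0, -1.0]])

ts = np.linspace(0.0, 8.0, 2001)
vals = [spectral_abscissa(A0 + t * A1) for t in ts]
t_best = ts[int(np.argmin(vals))]
print("minimizing t ~", round(t_best, 3), " abscissa ~", round(min(vals), 3))
# At t = 4 the matrix [[0, 1], [-4, -4]] has the double eigenvalue -2 with a single
# 2x2 Jordan block, and that is exactly where the spectral abscissa is minimized.
```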


Patent
17 May 1999
TL;DR: In this paper, elastic-coefficient estimates are normalized using a diagonal normalizing matrix whose coefficients are axis-direction scaling functions applied to each basic elastic coefficient value, so that each value is kept within a uniform interval centered near a specified average value.
Abstract: PROBLEM TO BE SOLVED: To obtain a method for detecting elastic deformation within flexible tissue by normalizing the elastic-coefficient estimates with a diagonal normalizing matrix whose coefficients are axis-direction scaling functions applied to each basic elastic coefficient value, so that each value is kept within a uniform interval centered near a specified average value. SOLUTION: The normalizing matrix R is defined so as to keep uniform the vector formed by the distances of the elements of the elastic coefficient vector e from their average value. A uniform interval is defined, and the normalizing matrix R is obtained from a sensitivity matrix S by a simple matrix calculation executed in a system 200. At the end of the calculation 222 that computes the matrix R, an electronic system 200 associated with an ultrasonograph executes a calculation 223 that computes a matrix M of estimates of the vector e, and then executes a calculation 224 multiplying the matrix M by the vector d. By repeating this method, implemented in the electronic system 200 associated with the ultrasonograph, a reconstructed image of the elastic coefficients e can be acquired from the change vector.

46 citations


Journal ArticleDOI
TL;DR: The matrix volume can be used in change-of-variables formulae, in place of the determinant, when the Jacobi matrix of the underlying transformation is rectangular.
Abstract: The matrix volume is a generalization, to rectangular matrices, of the absolute value of the determinant. In particular, the matrix volume can be used in change-of-variables formulae, instead of the determinant (if the Jacobi matrix of the underlying transformation is rectangular). This result is applicable to integration on surfaces, illustrated here by several examples.

45 citations
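
A quick check, under the usual definition that the volume of a full-column-rank rectangular J is sqrt(det(J^T J)) (equivalently the product of its singular values), that the matrix volume reproduces the classical surface-area element in a change of variables.

```python
import numpy as np

# Jacobian of the surface parametrization (u, v) -> (u, v, u**2 + v**2):
def jacobian(u, v):
    return np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [2.0 * u, 2.0 * v]])

u, v = 0.3, -0.7
J = jacobian(u, v)
vol_from_det = np.sqrt(np.linalg.det(J.T @ J))
vol_from_svd = np.prod(np.linalg.svd(J, compute_uv=False))
area_element = np.sqrt(1.0 + (2 * u) ** 2 + (2 * v) ** 2)   # classical dS factor for a graph z = f(x, y)
print(vol_from_det, vol_from_svd, area_element)             # all three agree
```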


Journal ArticleDOI
TL;DR: An iterative Jacobi-type method is formulated for solving systems of linear equations involving an interval square matrix and an interval right-hand-side vector using interval arithmetic, and its convergence is proved under certain conditions on the interval matrix.

42 citations
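
A bare-bones sketch of an interval Jacobi sweep, with intervals represented as (lo, hi) pairs and hand-rolled interval arithmetic; this is an illustration only, does not reproduce the paper's formulation or convergence conditions, and assumes the diagonal intervals do not contain zero.

```python
def iadd(x, y):
    return (x[0] + y[0], x[1] + y[1])

def isub(x, y):
    return (x[0] - y[1], x[1] - y[0])

def imul(x, y):
    p = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(p), max(p))

def idiv(x, y):                      # y must not contain 0
    return imul(x, (1.0 / y[1], 1.0 / y[0]))

def interval_jacobi(A, b, x0, sweeps=50):
    n = len(b)
    x = list(x0)
    for _ in range(sweeps):
        new = []
        for i in range(n):
            s = b[i]
            for j in range(n):
                if j != i:
                    s = isub(s, imul(A[i][j], x[j]))    # b_i - sum_{j != i} a_ij * x_j
            new.append(idiv(s, A[i][i]))
        x = new
    return x

# Interval matrix and right-hand side (small perturbations of a point system).
A = [[(3.9, 4.1), (-1.1, -0.9)],
     [(-1.1, -0.9), (3.9, 4.1)]]
b = [(0.9, 1.1), (0.9, 1.1)]
x0 = [(-10.0, 10.0), (-10.0, 10.0)]
print(interval_jacobi(A, b, x0))     # an interval enclosure of the solution near (1/3, 1/3)
```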


Journal ArticleDOI
TL;DR: In this article, the problem of computing the transition matrix (T matrix) in the framework of the null-field method with discrete sources is treated, and numerical experiments are performed to investigate the symmetry property of the T matrix when localized and distributed vector spherical functions are used for solution construction.
Abstract: The problem of computing the transition matrix (T matrix) in the framework of the null-field method with discrete sources is treated. Numerical experiments are performed to investigate the symmetry property of the T matrix when localized and distributed vector spherical functions are used for solution construction.

42 citations


Journal ArticleDOI
TL;DR: This paper gives an elementary and self-contained proof of the fact that an ill-conditioned matrix is also not far from a singular matrix in a componentwise sense, and this is shown to be true for any weighting of the componentwise distance.
Abstract: For a square matrix normed to 1, the normwise distance to singularity is well known to be equal to the reciprocal of the condition number. In this paper we give an elementary and self-contained proof for the fact that an ill-conditioned matrix is also not far from a singular matrix in a componentwise sense. This is shown to be true for any weighting of the componentwise distance. In other words, for matrix inversion, "ill conditioned" means "nearly ill posed" in the normwise and also in the componentwise sense.
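
A numerical illustration of the normwise statement above: in the spectral norm, the distance from A to the nearest singular matrix is its smallest singular value, so for a matrix normed to 1 it equals the reciprocal of the condition number.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6))
A /= np.linalg.norm(A, 2)                       # normalize so that ||A||_2 = 1

s = np.linalg.svd(A, compute_uv=False)
dist_to_singular = s[-1]                        # smallest singular value (Eckart-Young)
print("1/cond_2(A)          :", 1.0 / np.linalg.cond(A, 2))
print("distance to singular :", dist_to_singular)   # the two coincide
```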

Journal ArticleDOI
TL;DR: In this paper, it was shown that for any complex matrix B with the same zero pattern as A, the numerical range W(B) of B is a circular disk centered at the origin.
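
A hedged sketch of how such circularity can be checked numerically: boundary points of the numerical range W(B) are sampled as the largest eigenvalue of the Hermitian part of e^{i theta} B, and W(B) is a disk centered at the origin exactly when this support function is constant in theta. The nilpotent Jordan block below is a classical example with circular numerical range.

```python
import numpy as np

B = np.diag(np.ones(3), k=1)                    # 4x4 nilpotent Jordan block
thetas = np.linspace(0.0, 2.0 * np.pi, 200)

def support(theta):
    H = (np.exp(1j * theta) * B + np.exp(-1j * theta) * B.conj().T) / 2.0
    return np.linalg.eigvalsh(H)[-1]            # largest eigenvalue of the Hermitian part

vals = np.array([support(t) for t in thetas])
print("radius min/max:", vals.min(), vals.max())   # equal => W(B) is a disk centered at 0
```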

Patent
Jungwoo Lee
28 Sep 1999
TL;DR: In this paper, the authors proposed a parameterized Q matrix adaptation algorithm for MPEG-2 compression, where the Q matrix for the current frame is generated based on DCT coefficient data from the previous encoded frame of the same type (e.g., I, P, or B).
Abstract: In a video compression processing, such as MPEG-2 compression processing, the quantization (Q) matrix used to quantize discrete cosine transform (DCT) coefficients is updated from frame to frame based on a parameterized Q matrix adaptation algorithm. According to the algorithm, the Q matrix for the current frame is generated based on DCT coefficient data (108) from the previous encoded frame of the same type (e.g., I, P, or B) as the current frame. In particular, the Q matrix is generated using a function based on shape parameters (e.g., the slope of the diagonal of the Q matrix and/or the convexity of the diagonal of the Q matrix), where the diagonal slope for the Q matrix of the current frame is generated based on the diagonal slope of a DCT map (106) for the previously encoded frame. Before using the generated Q matrix to quantize the DCT coefficients for the current frame, the Q matrix is preferably adjusted for changes in the target mean from the previously encoded frame to the current frame.
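
A toy sketch (not the patent's adaptation algorithm) of the role the Q matrix plays in this pipeline: DCT coefficients of an 8x8 block are divided elementwise by Q and rounded, so a shape parameter such as the diagonal slope directly trades reconstruction error against coarseness. The slope value used below is hypothetical.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(4)
block = rng.integers(0, 256, size=(8, 8)).astype(float)   # stand-in pixel block

coeffs = dctn(block, norm="ortho")

# Hypothetical Q matrix with a linear "diagonal slope": coarser steps at high frequencies.
i, j = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
slope = 4.0                                    # the kind of shape parameter adapted per frame
Q = 16.0 + slope * (i + j)

quantized = np.round(coeffs / Q)               # quantization with the Q matrix
reconstructed = idctn(quantized * Q, norm="ortho")
print("max abs reconstruction error:", np.abs(block - reconstructed).max())
```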

Journal ArticleDOI
TL;DR: In this article, a matrix continued fraction is defined and used for the approximation of a function F given by a power series in 1/z with p×q matrix coefficients, or equivalently by a matrix of functions holomorphic at infinity.

Journal ArticleDOI
TL;DR: The inverse eigenvalue problem for distance matrices is solved in certain special cases, including n=3,4,5,6, any n for which there exists a Hadamard matrix, and some other cases.

Proceedings ArticleDOI
05 Sep 1999
TL;DR: In this article, the generalized Pascal matrix is introduced, and the coefficients of transfer functions of the continuous-time and discrete-time linear circuits can be recalculated on the assumption that both circuits are connected by a general first-order S-Z transformation.
Abstract: The generalized Pascal matrix is introduced in this contribution. Using this matrix, the coefficients of transfer functions of the continuous-time and discrete-time linear circuits can be recalculated on the assumption that both circuits are connected by a general first-order S-Z transformation.

Journal ArticleDOI
TL;DR: The map from a matrix to a vector is the invertible map between a subspace represented as the row space of the matrix A and the Grassmann vector representing that subspace.
Abstract: A method for finding the best approximation of a matrix A by a full rank Hankel matrix is given. The initial problem of best approximation of one matrix by another is transformed to a problem involving best approximation of a given vector by a second vector whose elements are constrained so that its inverse image is a Hankel matrix. The map from a matrix to a vector is the invertible map between a subspace represented as the row space of the matrix A and the Grassmann vector representing that subspace. The relation between the principal angles associated with a pair of subspaces and the angle between the Grassmann vectors associated with the subspaces is established.
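
Not the paper's Grassmann-vector construction: just the elementary least-squares projection of a matrix onto the linear space of Hankel matrices (average each anti-diagonal), shown here as a point of reference for what "nearest Hankel matrix" means in the unconstrained Frobenius-norm sense.

```python
import numpy as np

def nearest_hankel(A):
    m, n = A.shape
    H = np.empty_like(A, dtype=float)
    for s in range(m + n - 1):                           # anti-diagonal index i + j = s
        idx = [(i, s - i) for i in range(max(0, s - n + 1), min(m, s + 1))]
        avg = np.mean([A[i, j] for i, j in idx])
        for i, j in idx:
            H[i, j] = avg                                # constant along each anti-diagonal
    return H

A = np.arange(12, dtype=float).reshape(3, 4)
print(nearest_hankel(A))          # closest Hankel matrix to A in the Frobenius norm
```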

Journal ArticleDOI
TL;DR: In this paper, the Smith-McMillan form of a rational matrix is used for evaluating the degree of the determinant of a polynomial matrix using numerically reliable techniques.
Abstract: An early result on the Smith-McMillan form of a rational matrix is used for evaluating the degree of the determinant of a polynomial matrix using numerically reliable techniques. This allows for accurate determinant zeroing and determinant interpolation, thus improving existing numerical methods for polynomial matrix determinant computation.
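
A hedged sketch of determinant interpolation in its most naive form: evaluate the polynomial matrix at enough points, take scalar determinants, and fit a polynomial. The paper's contribution is doing this reliably with degree information from the Smith-McMillan structure; here the degree bound is simply the sum of row degrees, and the example matrix is made up.

```python
import numpy as np
from numpy.polynomial.polynomial import polyval, polyfit

# P(s) = [[s + 1, 2],
#         [3*s,   s**2 - 1]]   with entries given as coefficient lists (low degree first)
P = [[[1.0, 1.0], [2.0]],
     [[0.0, 3.0], [-1.0, 0.0, 1.0]]]

row_deg = [max(len(c) - 1 for c in row) for row in P]
deg_bound = sum(row_deg)                               # here 1 + 2 = 3

pts = np.linspace(-2.0, 2.0, deg_bound + 1)
dets = [np.linalg.det(np.array([[polyval(s, c) for c in row] for row in P]))
        for s in pts]

coeffs = polyfit(pts, dets, deg_bound)
print(np.round(coeffs, 6))
# Expected: det P(s) = (s+1)(s^2-1) - 6s = s^3 + s^2 - 7s - 1  ->  [-1, -7, 1, 1]
```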

Journal ArticleDOI
Chuanqing Gu
TL;DR: In this paper, a generalized inverse matrix rational interpolant (GMRI) is proposed for matrix rational extrapolation, which is based on an axiomatic definition for the GMRI, and is constructed in the following two forms: (i) a Thiele-type continued fraction expression; (ii) an explicit determinantal formula for the denominator scalar polynomials and for the numerator matrix polynomials, which are of Lagrange-type expression.


Journal ArticleDOI
01 Mar 1999
TL;DR: A complete and unified approach is proposed for the study of their structural properties by using techniques of linear algebra of matrices to prove several sufficient and/or necessary conditions for structural boundedness, liveness, repetitiveness, conservativeness, and consistency of Petri nets.
Abstract: The purpose of the paper is to consider some special types of Petri nets, introduced by Lien (1976), and to propose a complete and unified approach for the study of their structural properties by using techniques of linear algebra of matrices. We distinguish four subclasses: forward-conflict-free, backward-conflict-free, forward-concurrent-free, and backward-concurrent-free Petri nets. A modification of the classical incidence matrix results in a square matrix, called a modified incidence matrix, with nonpositive (nonnegative) off-diagonal elements when backward-(forward-) conflict-free or concurrent-free Petri nets are considered. The modified incidence matrix eigenvalues are computed and theorems on matrices of this type are used to prove several sufficient and/or necessary conditions for structural boundedness, liveness, repetitiveness, conservativeness, and consistency of these four subclasses of Petri nets.
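
A companion sketch, assuming the classical linear-algebraic criterion that a Petri net with incidence matrix C is structurally bounded iff some vector y >= 1 satisfies y^T C <= 0; the check is a linear-programming feasibility problem, posed here with scipy for a two-place, two-transition net.

```python
import numpy as np
from scipy.optimize import linprog

# Incidence matrix (rows = places, columns = transitions) of a small net:
# t1 moves a token p1 -> p2, t2 moves it back p2 -> p1.
C = np.array([[-1,  1],
              [ 1, -1]], dtype=float)

n_places = C.shape[0]
res = linprog(c=np.zeros(n_places),                 # pure feasibility problem
              A_ub=C.T, b_ub=np.zeros(C.shape[1]),  # y^T C <= 0
              bounds=[(1, None)] * n_places)        # y >= 1 componentwise
print("structurally bounded:", res.success, " y =", res.x)
```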

Journal ArticleDOI
TL;DR: Weakest linear conditions on the rows of a square matrix of arbitrary dimension that ensure its determinant is positive are described and analyzed in this article; the conditions require the mean of each row to be larger than all the off-diagonal entries in that row.

Journal ArticleDOI
TL;DR: The ‘All Minors Matrix Tree Theorem’ is extended here to algebraic structures much more general than the field of real numbers, namely semirings, which are no longer assumed to be groups with respect to the first law (addition).
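
For orientation, a brute-force check of the classical (real-field) special case that the paper generalizes to semirings: any cofactor of the graph Laplacian counts spanning trees. The example graph is the 4-cycle, which has exactly 4 spanning trees.

```python
import numpy as np
from itertools import combinations

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]            # the cycle C4
n = 4

L = np.zeros((n, n))                                 # graph Laplacian
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1

cofactor = np.linalg.det(L[1:, 1:])                  # delete row and column 0

def is_spanning_tree(subset):
    parent = list(range(n))                          # union-find cycle check
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in subset:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                             # edge would close a cycle
        parent[ru] = rv
    return True                                      # n-1 acyclic edges span the graph

count = sum(is_spanning_tree(s) for s in combinations(edges, n - 1))
print(round(cofactor), count)                        # both 4
```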

Journal ArticleDOI
TL;DR: An efficient algorithm for transposing large matrices in place is developed and nearly an order of magnitude increase in speed over the previously published algorithm by Gate and Twigg is demonstrated.
Abstract: We have developed an efficient algorithm for transposing large matrices in place. The algorithm is efficient because data are accessed either sequentially in blocks or randomly within blocks small enough to fit in cache, and because the same indexing calculations are shared among identical procedures operating on independent subsets of the data. This inherent parallelism makes the method well suited for a multiprocessor computing environment. The algorithm is easy to implement because the same two procedures are applied to the data in various groupings to carry out the complete transpose operation. Using only a single processor, we have demonstrated nearly an order of magnitude increase in speed over the previously published algorithm by Gate and Twigg (1977) for transposing a large rectangular matrix in place. With multiple processors operating in parallel, the processing speed increases almost linearly with the number of processors. A simplified version of the algorithm for square matrices is presented as well as an extension for matrices large enough to require virtual memory.
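
A simplified sketch of the square-matrix case described above: transpose in place by swapping cache-sized blocks across the diagonal and transposing the diagonal blocks, so memory is touched in contiguous chunks. The block size is arbitrary and the code is illustrative, not the authors' implementation.

```python
import numpy as np

def transpose_inplace_blocked(A, bs=64):
    n = A.shape[0]
    assert A.shape[0] == A.shape[1]
    for i in range(0, n, bs):
        for j in range(i, n, bs):
            bi = slice(i, min(i + bs, n))
            bj = slice(j, min(j + bs, n))
            if i == j:
                A[bi, bj] = A[bi, bj].T.copy()       # transpose a diagonal block
            else:
                upper = A[bi, bj].copy()             # save the block above the diagonal
                A[bi, bj] = A[bj, bi].T              # swap-and-transpose the off-diagonal pair
                A[bj, bi] = upper.T
    return A

A = np.arange(300 * 300, dtype=float).reshape(300, 300)
assert np.array_equal(transpose_inplace_blocked(A.copy()), A.T)
```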

Journal Article
TL;DR: In this article, exact estimates for the absolute values of the entries of inverse matrices, and estimates of their norms, are presented for matrices satisfying the Hadamard, Brauer, and Ostrowski regularity criteria.
Abstract: Exact estimates for the absolute values of the entries of inverse matrices, and estimates of their norms, are presented for matrices satisfying the Hadamard, Brauer, and Ostrowski regularity criteria.
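
In the same spirit, though not the paper's sharp estimates: for a matrix that is strictly diagonally dominant by rows (the Hadamard regularity criterion), the classical Ahlberg-Nilson-Varah bound gives ||A^{-1}||_inf <= 1 / min_i (|a_ii| - sum_{j != i} |a_ij|), which is easy to check numerically.

```python
import numpy as np

A = np.array([[ 5.0, -1.0,  2.0],
              [ 1.0,  6.0, -2.0],
              [-2.0,  1.0,  7.0]])

diag = np.abs(np.diag(A))
offsum = np.abs(A).sum(axis=1) - diag
alpha = np.min(diag - offsum)                  # > 0 means strictly diagonally dominant

inv_norm = np.linalg.norm(np.linalg.inv(A), np.inf)
print("true ||A^-1||_inf:", inv_norm, " bound:", 1.0 / alpha)
```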

Journal ArticleDOI
Bit Shun Tam
TL;DR: In this paper, it was shown that if the digraph of a square complex matrix A contains at least one cycle with nonzero signed length, then the following conditions are each equivalent to the m-cyclicity of A: (i) A is diagonally similar to e^{2πi/m}A; (ii) all cycles in A have signed length an integral multiple of m; and (iii) if A is an irreducible nonnegative matrix, its index of imprimitivity is m.
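
A quick numerical check of condition (i) for the standard example: the cyclic shift matrix on m letters, whose only cycle has length m, is diagonally similar to e^{2πi/m} times itself via D = diag(1, w^{-1}, ..., w^{-(m-1)}).

```python
import numpy as np

m = 5
A = np.roll(np.eye(m), 1, axis=1)              # cyclic shift matrix (m-cyclic, irreducible)
w = np.exp(2j * np.pi / m)
D = np.diag(w ** (-np.arange(m)))

lhs = D @ A @ np.linalg.inv(D)
print(np.allclose(lhs, w * A))                 # True: A is diagonally similar to w*A
```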

Proceedings ArticleDOI
01 Jul 1999
TL;DR: An efficient algorithm computing the ε-expansion of the eigenvalues in formal Laurent-Puiseux series is provided, for which the computation of the characteristic polynomial is not required.
Abstract: In this article, we study square matrices perturbed by a parameter $\epsilon$. An efficient algorithm computing the $\epsilon$-expansion of the eigenvalues in formal Laurent-Puiseux series is provided, for which the computation of the characteristic polynomial is not required. We show how to reduce the initial matrix so that the Lidskii-Edelman-Ma perturbation theory can be applied. We also explain why this approach may simplify the perturbed eigenvector problem. The implementation of the algorithm in the computer algebra system Maple has been used in a quantum mechanics context to diagonalize some perturbed matrices and is available.
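
A numerical illustration of why Laurent-Puiseux (fractional-power) expansions are the right objects here: perturbing a 3x3 nilpotent Jordan block in its lower-left corner splits the zero eigenvalue like epsilon^(1/3), which no ordinary Taylor series in epsilon can capture.

```python
import numpy as np

J = np.diag(np.ones(2), k=1)                  # 3x3 Jordan block with eigenvalue 0
E = np.zeros((3, 3))
E[2, 0] = 1.0                                  # perturbation in the lower-left corner

for eps in [1e-3, 1e-6, 1e-9]:
    lam = np.max(np.abs(np.linalg.eigvals(J + eps * E)))
    print(f"eps={eps:.0e}  |lambda|={lam:.3e}  eps**(1/3)={eps ** (1/3):.3e}")
```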

Journal ArticleDOI
TL;DR: In this article, it was shown that any square matrix over a field is the product of at most three triangular matrices, and explicit LUL factorizations for all 2×2 and 3×3 matrices over fields were given.

Journal ArticleDOI
TL;DR: In this article, the Sing-Thompson theorem characterizes the relationship between the diagonal entries and the singular values of an arbitrary matrix, and it is shown that such a matrix can be constructed numerically by a fast recursive algorithm, provided that the given singular values and diagonal elements satisfy the Sing-Thompson conditions.

Posted Content
TL;DR: In this paper, a structural characterization of feasible bipartite graphs with a Pfaffian orientation was given, which implies a polynomial-time algorithm to solve all of the problems.
Abstract: Given a 0-1 square matrix A, when can some of the 1's be changed to -1's in such a way that the permanent of A equals the determinant of the modified matrix? When does a real square matrix have the property that every real matrix with the same sign pattern (that is, the corresponding entries either have the same sign or are both zero) is nonsingular? When is a hypergraph with n vertices and n hyperedges minimally nonbipartite? When does a bipartite graph have a "Pfaffian orientation"? Given a digraph, does it have no directed circuit of even length? Given a digraph, does it have a subdivision with no even directed circuit? It is known that all of the above problems are equivalent. We prove a structural characterization of the feasible instances, which implies a polynomial-time algorithm to solve all of the above problems. The structural characterization says, roughly speaking, that a bipartite graph has a Pfaffian orientation if and only if it can be obtained by piecing together (in a specified way) planar bipartite graphs and one sporadic nonplanar bipartite graph.
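
A brute-force illustration of the first question in the abstract, for a small 0-1 matrix: search over all ways of changing some 1's to -1's for a signing whose determinant equals the permanent of the original matrix.

```python
import numpy as np
from itertools import permutations, product

def permanent(M):
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])
ones = list(zip(*np.nonzero(A)))
target = permanent(A)                           # here perm(A) = 3, det(A) = -1

for signs in product([1, -1], repeat=len(ones)):
    B = A.astype(float)
    for (i, j), s in zip(ones, signs):
        B[i, j] = s
    if round(np.linalg.det(B)) == target:       # determinant of the signed matrix equals perm(A)
        print("valid signing found:\n", B)
        break
else:
    print("no signing exists for this A")
```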