
Showing papers on "Square matrix published in 1997"


Book
05 Nov 1997
TL;DR: The book covers Jacobians of matrix transformations, including orthogonal and unitary transformations and the complex case, together with special functions of matrix argument, as discussed by the authors.
Abstract: Jacobians of matrix transformations; Jacobians in orthogonal and related transformations; Jacobians in the complex case; transformations involving eigenvalues and unitary matrices; some special functions of matrix argument; functions of matrix argument in the complex case.

269 citations


Journal ArticleDOI
TL;DR: It is shown that apparently innocuous algorithmic modifications to the Padé iteration can lead to instability, and a perturbation analysis is given to provide some explanation.
Abstract: Any matrix with no nonpositive real eigenvalues has a unique square root for which every eigenvalue lies in the open right half-plane. A link between the matrix sign function and this square root is exploited to derive both old and new iterations for the square root from iterations for the sign function. One new iteration is a quadratically convergent Schulz iteration based entirely on matrix multiplication; it converges only locally, but can be used to compute the square root of any nonsingular M-matrix. A new Padé iteration well suited to parallel implementation is also derived and its properties explained. Iterative methods for the matrix square root are notorious for suffering from numerical instability. It is shown that apparently innocuous algorithmic modifications to the Padé iteration can lead to instability, and a perturbation analysis is given to provide some explanation. Numerical experiments are included and advice is offered on the choice of iterative method for computing the matrix square root.
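
The abstract's Schulz-type iteration can be sketched as a coupled Newton–Schulz recurrence that uses only matrix multiplication. The snippet below is a minimal illustration under the assumption that the input is pre-scaled into the iteration's region of convergence; it is not the paper's Padé algorithm, and the scaling shown is a simple heuristic.

```python
import numpy as np

def sqrtm_newton_schulz(A, iters=30):
    """Coupled Newton-Schulz iteration for the principal matrix square root.
    Multiplication-rich and inversion-free, but only locally convergent
    (roughly when ||I - A|| < 1), hence the crude pre-scaling below."""
    n = A.shape[0]
    I = np.eye(n)
    s = np.linalg.norm(A)              # sqrt(A) = sqrt(s) * sqrt(A/s)
    Y, Z = A / s, I.copy()             # Y -> (A/s)^{1/2}, Z -> (A/s)^{-1/2}
    for _ in range(iters):
        T = 0.5 * (3.0 * I - Z @ Y)
        Y, Z = Y @ T, T @ Z
    return np.sqrt(s) * Y

A = np.array([[4.0, 1.0], [1.0, 3.0]])
X = sqrtm_newton_schulz(A)
print(np.allclose(X @ X, A))           # True
```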

207 citations


Journal ArticleDOI
TL;DR: It is concluded that the behavior of the residuals in inverse iteration is governed by the departure of the matrix from normality rather than by the conditioning of a Jordan basis or the defectiveness of eigenvalues.
Abstract: The purpose of this paper is two-fold: to analyze the behavior of inverse iteration for computing a single eigenvector of a complex square matrix and to review Jim Wilkinson's contributions to the development of the method. In the process we derive several new results regarding the convergence of inverse iteration in exact arithmetic. In the case of normal matrices we show that residual norms decrease strictly monotonically. For eighty percent of the starting vectors a single iteration is enough. In the case of non-normal matrices, we show that the iterates converge asymptotically to an invariant subspace. However, the residual norms may not converge. The growth in residual norms from one iteration to the next can exceed the departure of the matrix from normality. We present an example where the residual growth is exponential in the departure of the matrix from normality. We also explain the often significant regress of the residuals after the first iteration: it occurs when the non-normal part of the matrix is large compared to the eigenvalues of smallest magnitude. In this case computing an eigenvector with inverse iteration is exponentially ill conditioned (in exact arithmetic). We conclude that the behavior of the residuals in inverse iteration is governed by the departure of the matrix from normality rather than by the conditioning of a Jordan basis or the defectiveness of eigenvalues.
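
For reference, the method under analysis is the classical one: repeatedly solve a shifted system and normalize. A textbook sketch (not the authors' code), with the shift mu assumed close to the desired eigenvalue:

```python
import numpy as np

def inverse_iteration(A, mu, iters=50, seed=0):
    """Inverse iteration for an eigenvector of the complex square matrix A
    whose eigenvalue lies closest to the shift mu."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    v /= np.linalg.norm(v)
    B = A - mu * np.eye(n)          # factor once in a careful implementation
    for _ in range(iters):
        v = np.linalg.solve(B, v)   # one step: v <- (A - mu*I)^{-1} v
        v /= np.linalg.norm(v)
    lam = v.conj() @ A @ v          # Rayleigh quotient for normalized v
    return v, lam, np.linalg.norm(A @ v - lam * v)   # residual the paper studies
```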

154 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that square matrices are diagonalizable over all known classes of (von Neumann) regular rings, which is equivalent to a cancellation property for finitely generated projective modules which conceivably holds over all regular rings.

63 citations


Patent
02 Jul 1997
TL;DR: In this article, a method and apparatus for converting frequency-coefficient matrices between a configuration in which the matrices are transforms of unoverlapped image-data matrices and a configuration where the matrixrices are transformations of overlapped image data matrices, the image data terms corresponding to pixels from an original image, is described.
Abstract: A method and apparatus are disclosed for converting frequency-coefficient matrices between a configuration in which the matrices are transforms of unoverlapped image-data matrices and a configuration in which the matrices are transforms of overlapped image-data matrices, the image-data matrices comprising image-data terms corresponding to pixels from an original image, the method comprising the steps of: deriving a conversion matrix; transposing the conversion matrix; matrix multiplying a first frequency-coefficient matrix of one configuration by the conversion matrix; matrix multiplying a second frequency-coefficient matrix of the same configuration by the transpose conversion matrix; and combining the product results to form a matrix formatted in the other configuration.

37 citations


Journal ArticleDOI
TL;DR: A new upper matrix bound of the solution for the discrete algebraic matrix Riccati equation is developed and is used to derive bounds on the eigenvalues, trace, and determinant of the same solution.
Abstract: A new upper matrix bound of the solution for the discrete algebraic matrix Riccati equation is developed. This matrix bound is then used to derive bounds on the eigenvalues, trace, and determinant of the same solution. It is shown that these eigenvalue bounds are less restrictive than previous results.
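
The paper's bounds themselves are not reproduced in the abstract, but the bounded quantities are straightforward to compute for a given solution, e.g. with SciPy's discrete Riccati solver (toy system data assumed here):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[0.9, 0.1], [0.0, 0.8]])   # hypothetical system matrices
B = np.array([[1.0], [0.5]])
Q, R = np.eye(2), np.array([[1.0]])

P = solve_discrete_are(A, B, Q, R)       # solution of the discrete Riccati equation
print(np.linalg.eigvalsh(P))             # eigenvalues the paper bounds
print(np.trace(P), np.linalg.det(P))     # trace and determinant targets of the bounds
```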

34 citations


Journal ArticleDOI
TL;DR: In this article, the bounds for λ_k λ_l and λ_k + λ_l, involving k, l, n, tr A, and det A only, are presented.

32 citations


Journal ArticleDOI
TL;DR: Special interest attaches to the continuity properties of the factors, and it is shown that conditions for discontinuous behaviour can be given using the factor D, which is important in computing the Moore-Penrose inverse of a matrix containing symbolic entries.
Abstract: The Turing factorization is a generalization of the standard LU factoring of a square matrix. Among other advantages, it allows us to meet demands that arise in a symbolic context. For a rectangular matrix A, the generalized factors are written PA = LDUR, where R is the row-echelon form of A. For matrices with symbolic entries, the LDUR factoring is superior to the standard reduction to row-echelon form, because special-case information can be recorded in a natural way. Special interest attaches to the continuity properties of the factors, and it is shown that conditions for discontinuous behaviour can be given using the factor D. We show that this is important, for example, in computing the Moore-Penrose inverse of a matrix containing symbolic entries. We also give a separate generalization of LU factoring to fraction-free Gaussian elimination.
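
The fraction-free elimination mentioned at the end is commonly realized by the Bareiss scheme, in which every division is exact so integer (or polynomial) entries never leave the ring. A minimal integer sketch, assuming no pivoting is needed:

```python
def bareiss(M):
    """Fraction-free (Bareiss) Gaussian elimination on an integer matrix.
    Each division below is exact, so all intermediates stay integers."""
    M = [row[:] for row in M]
    n = len(M)
    prev = 1                                   # previous pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                M[i][j] = (M[k][k] * M[i][j] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    return M                                   # upper triangular

A = [[2, 3, 1], [4, 7, 5], [6, 18, 22]]
print(bareiss(A))   # last pivot -16 equals det(A)
```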

29 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that a Hermitian matrix can be reduced to tridiagonal form by a finite sequence of unitary similarities, namely Householder reflections.
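
The construction is the classical Householder reduction; a generic numpy sketch (not tied to the paper) that accumulates the unitary similarity Q with A = Q T Q*:

```python
import numpy as np

def tridiagonalize(A):
    """Reduce a Hermitian matrix to tridiagonal form by Householder
    reflections; returns (T, Q) with A = Q @ T @ Q.conj().T."""
    T = np.array(A, dtype=complex)
    n = T.shape[0]
    Q = np.eye(n, dtype=complex)
    for k in range(n - 2):
        x = T[k + 1:, k].copy()
        if np.linalg.norm(x) == 0:
            continue
        alpha = -np.exp(1j * np.angle(x[0])) * np.linalg.norm(x)
        v = x
        v[0] -= alpha                      # v = x - alpha*e1, avoids cancellation
        v /= np.linalg.norm(v)
        H = np.eye(n, dtype=complex)
        H[k + 1:, k + 1:] -= 2.0 * np.outer(v, v.conj())
        T = H @ T @ H                      # H is unitary and Hermitian
        Q = Q @ H
    return T, Q

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
M = (M + M.conj().T) / 2                   # Hermitian test matrix
T, Q = tridiagonalize(M)
print(np.allclose(Q @ T @ Q.conj().T, M))  # True
```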

25 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that a square matrix over a ring with identity has the consecutive-column property if for all k, all relevant submatrices having k consecutive rows and the first k columns are invertible.

25 citations


Journal ArticleDOI
TL;DR: An estimate is given for the support of each component function of a compactly supported scaling vector satisfying a matrix refinement equation with finite number of terms based on the highest and lowest degrees of each polynomial in the corresponding matrix symbol.
Abstract: An estimate is given for the support of each component function of a compactly supported scaling vector satisfying a matrix refinement equation with finite number of terms. The estimate is based on the highest and lowest degrees of each polynomial in the corresponding matrix symbol. Only basic techniques from matrix theory are involved in the derivation.

Proceedings ArticleDOI
20 Jul 1997
TL;DR: The known algorithms for sequential rectangular matrix multiplication are asymptotically accelerated, yielding an improvement of the current record asymptotic bounds on the deterministic arithmetic NC processor complexity of the four former problems.
Abstract: Galil and Pan, 1984, reduced parallel evaluation of the inverse, the determinant and the characteristic polynomial of a matrix, and solving a nonsingular linear system of equations, to sequential multiplication of rectangular matrices. We asymptotically accelerate the known algorithms for the latter problem to yield an improvement of the current record asymptotic bounds on the deterministic arithmetic NC processor complexity of the four former ones, from order of n^2.851 to O(n^2.837). The improvement of rectangular matrix multiplication also has an impact on the record complexity estimates for polynomial factorization in finite fields.

Journal ArticleDOI
TL;DR: In this article, the authors introduce an infinitesimal approach to the construction of robust designs for linear models: subject to a robustness constraint, the designs minimize the determinant of the mean squared error matrix of the least squares estimator at the ideal model, the constraint bounding the Gâteaux derivative of this determinant in the direction of a contaminating response function or autocorrelation structure.
Abstract: We introduce an infinitesimal approach to the construction of robust designs for linear models. The resulting designs are robust against small departures from the assumed linear regression response and/or small departures from the assumption of uncorrelated errors. Subject to satisfying a robustness constraint, they minimize the determinant of the mean squared error matrix of the least squares estimator at the ideal model. The robustness constraint is quantified in terms of boundedness of the Gâteaux derivative of this determinant, in the direction of a contaminating response function or autocorrelation structure. Specific examples are considered. If the aforementioned bounds are sufficiently large, then (permutations of) the classically optimal designs, which minimize variance alone at the ideal model, meet our robustness criteria. Otherwise, new designs are obtained.

Journal ArticleDOI
TL;DR: An extension of the Matrix-Tree Theorem to algebraic structures much more general than the field of real numbers, namely commutative semirings, where the first law (addition) is not assumed to be invertible.

Journal ArticleDOI
TL;DR: In this article, it was shown that every matrix with unit determinant can be decomposed into the product of just three shears, U_0 L U_1, and a canonical form for this decomposition is presented.
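
For the generic 2x2 case the decomposition can be written down explicitly. A small numeric check, assuming det(M) = 1 and a nonzero lower-left entry (the paper's canonical form also handles the remaining cases):

```python
import numpy as np

def three_shears(M):
    """Split a 2x2 matrix with det = 1 and M[1,0] != 0 into shears U0·L·U1."""
    (a, b), (c, d) = M
    U0 = np.array([[1.0, (a - 1) / c], [0.0, 1.0]])   # upper shear
    L  = np.array([[1.0, 0.0], [c, 1.0]])             # lower shear
    U1 = np.array([[1.0, (d - 1) / c], [0.0, 1.0]])   # upper shear
    return U0, L, U1

M = np.array([[2.0, 1.5], [2.0, 2.0]])    # det = 1
U0, L, U1 = three_shears(M)
print(np.allclose(U0 @ L @ U1, M))        # True
```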


Journal ArticleDOI
TL;DR: In this paper, the authors explore several methods for matrix enlarging, in which an enlarged matrix Ã is constructed from a given matrix A; the methods explored include matrix primitization, stretching and node splitting.
Abstract: This paper explores several methods for matrix enlarging, where an enlarged matrix Ã is constructed from a given matrix A. The methods explored include matrix primitization, stretching and node splitting. Graph interpretations of these methods are provided. Solving linear problems using enlarged matrices yields the answer to the original Ax=b problem. Ã can exhibit several desirable properties. For example, Ã can be constructed so that the valence of any row and/or column is smaller than some desired number (≥4). This is beneficial for algorithms that depend on the square of the number of entries of a row or column. Most particularly, matrix enlarging can result in a reduction of the fill-in in the R matrix which occurs during orthogonal factorization as a result of dense rows. Numerical experiments support these conjectures.
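
The flavor of "stretching" is easy to see on a toy system: a dense row is split into two rows coupled by one new variable holding a partial sum, which leaves the original solution intact while lowering the row's valence. A hypothetical one-row example, not the paper's general construction:

```python
import numpy as np

A = np.array([[4.0, 1.0, 2.0, 1.0],      # row 0 is the "dense" row
              [0.0, 3.0, 0.0, 1.0],
              [1.0, 0.0, 5.0, 0.0],
              [0.0, 1.0, 0.0, 2.0]])
b = np.array([8.0, 4.0, 6.0, 3.0])
x = np.linalg.solve(A, b)

# Split row 0 using a new variable t = 4*x1 + x2 (last column below):
#   4*x1 + x2            - t = 0
#          2*x3 + x4     + t = 8
A_big = np.array([[4.0, 1.0, 0.0, 0.0, -1.0],
                  [0.0, 0.0, 2.0, 1.0,  1.0],
                  [0.0, 3.0, 0.0, 1.0,  0.0],
                  [1.0, 0.0, 5.0, 0.0,  0.0],
                  [0.0, 1.0, 0.0, 2.0,  0.0]])
b_big = np.array([0.0, 8.0, 4.0, 6.0, 3.0])
xt = np.linalg.solve(A_big, b_big)
print(np.allclose(xt[:4], x))             # original solution recovered
```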

Journal ArticleDOI
TL;DR: The authors supply the derivative of an orthogonal matrix of eigenvectors of a real symmetric matrix, which is then used to obtain the asymptotic distribution of the orthogonal eigenmatrix of the random matrix.

Proceedings ArticleDOI
24 Oct 1997
TL;DR: This paper proposes the use of complex-orthogonal transformations for finding the eigenvalues of a complex symmetric matrix using these special transformations to significantly reduce computational costs.
Abstract: In this paper, we propose the use of complex-orthogonal transformations for finding the eigenvalues of a complex symmetric matrix. Using these special transformations can significantly reduce computational costs because the tridiagonal structure of a complex symmetric matrix is maintained.
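
A minimal illustration of the key structural fact, under the assumption that the rotation does not break down: a complex-orthogonal G (G^T G = I, with plain transpose) applied as a similarity G^T A G preserves complex symmetry and can annihilate an entry, just as real Givens rotations do in the symmetric case.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = A + A.T                                # complex symmetric (not Hermitian)

a, b = A[1, 0], A[2, 0]
r = np.sqrt(a * a + b * b)                 # complex sqrt; breakdown if r ~ 0
c, s = a / r, -b / r                       # c^2 + s^2 = 1
G = np.eye(4, dtype=complex)
G[1, 1], G[1, 2], G[2, 1], G[2, 2] = c, s, -s, c

B = G.T @ A @ G
print(np.allclose(G.T @ G, np.eye(4)))     # complex-orthogonal
print(np.allclose(B, B.T))                 # symmetry is maintained
print(np.isclose(B[2, 0], 0))              # entry annihilated
```

The possibility that r = sqrt(a^2 + b^2) is nearly zero for nonzero a, b is exactly why complex-orthogonal methods need care that their real counterparts do not.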

Patent
04 Sep 1997
TL;DR: In this article, a marine seismic signal is truncated in time, transformed into the frequency domain and represented by matrix D_T, and the eigenvalue decomposition D_T = SΛS⁻¹ of matrix D_T is computed.
Abstract: A marine seismic signal is transformed from the time domain into the frequency domain and represented by matrix D. The marine data signal is truncated in time, transformed into the frequency domain and represented by matrix D_T. The eigenvalue decomposition D_T = SΛS⁻¹ of matrix D_T is computed. The matrix product DS is computed and saved in memory. The conjugate transpose [DS]* is computed and saved in memory. The matrix product [DS]*[DS] is computed and saved in memory. The matrix inverse S⁻¹ is computed and saved in memory. The conjugate transpose (S⁻¹)* is computed and saved in memory. The matrix product S⁻¹(S⁻¹)* is computed and saved in memory. An initial estimate of the source wavelet w is made. The source wavelet w is optimized by iterating the steps of computing the diagonal matrix [I − w⁻¹Λ], computing the matrix inverse [I − w⁻¹Λ]⁻¹, computing the conjugate transpose [(I − w⁻¹Λ)⁻¹]*, retrieving the matrix products [DS]*[DS] and S⁻¹(S⁻¹)* from memory, and minimizing the total energy in the trace of the matrix product S⁻¹(S⁻¹)*[(I − w⁻¹Λ)⁻¹]*[DS]*[DS][I − w⁻¹Λ]⁻¹. The primary matrix P representing the wavefield free of surface multiples is computed by inserting the computed value of w into the expression [DS][I − w⁻¹Λ]⁻¹S⁻¹. The primary matrix P is inverse transformed from the frequency domain into the time domain.
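
Stripped of the memoization steps, the patent's final formula is a few lines of linear algebra. A single-frequency sketch with hypothetical data shapes (D and D_T square per frequency slice, w a scalar wavelet value at that frequency):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
D   = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # full data
D_T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # truncated data
w = 1.0 + 0.5j                          # wavelet estimate at this frequency

lam, S = np.linalg.eig(D_T)             # D_T = S diag(lam) S^-1
S_inv = np.linalg.inv(S)
M = np.diag(1.0 / (1.0 - lam / w))      # [I - w^-1 Λ]^-1 is diagonal
P = (D @ S) @ M @ S_inv                 # primaries: [D S][I - w^-1 Λ]^-1 S^-1
```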

Patent
19 Sep 1997
TL;DR: In this paper, a method for selecting a sequence of cells of current sources inside a cell matrix structure of a digital-analog converter and also to the corresponding converter is described.
Abstract: The invention relates to a method for selecting a sequence of cells of current sources inside a cell matrix structure of a digital-analog converter and also to the corresponding converter. Symmetries are used with regard to the centre (C) of a rectangular or preferably square matrix structure, with regard to a symmetry point (S) located at a quarter of the length of a diagonal (D1) from the centre and with regard to one of the two mean perpendiculars (M1) of the structure for selecting the mapping areas (1, 2, . . . ) for the consecutive cells.

Proceedings ArticleDOI
13 Jul 1997
TL;DR: In this paper, the authors fine-tune this technique to FSS, in order to maximize the accuracy over the largest possible bandwidth (as much as an 8:1 bandwidth with only 3 samples).
Abstract: The method of moments (MoM) is the most popular numerical technique for periodic structures such as frequency selective surfaces (FSS). The solution time for MoM in FSS applications is usually dominated by the calculation of the Z matrix elements rather than the matrix inversion. Therefore, reducing the time involved in calculating the Z matrix elements can significantly reduce the total solution time. One way to reduce the calculation time of the Z matrix elements is interpolation. The idea is to first sample the matrix at N (usually 3) distinct frequency points, and then interpolate the matrix elements at all other frequencies in the interpolation band. Once the matrix is found using interpolation, the matrix is inverted to find the induced currents, from which the scattered field can be calculated. The concept of interpolating the Z matrix was first proposed by Newman (1988). We fine-tune this technique to FSS, in order to maximize the accuracy over the largest possible bandwidth (as much as an 8:1 bandwidth with only 3 samples).
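
A toy version of the scheme makes the bookkeeping concrete: fill the matrix at three frequencies, fit a quadratic to every entry, and evaluate the fit in between. Here Z_of is a hypothetical stand-in for the expensive MoM fill routine:

```python
import numpy as np

def Z_of(f, n=6):
    """Toy frequency-dependent matrix standing in for the MoM Z matrix."""
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    d = np.abs(i - j) + 1.0
    return np.exp(-0.2j * f * d) / d

f_samples = np.array([1.0, 2.0, 3.0])
Z_samples = np.stack([Z_of(f) for f in f_samples])            # shape (3, n, n)
coeffs = np.polyfit(f_samples, Z_samples.reshape(3, -1), 2)   # per-entry quadratic

def Z_interp(f):
    vals = coeffs[0] * f**2 + coeffs[1] * f + coeffs[2]
    return vals.reshape(Z_samples.shape[1:])

f = 1.7
err = np.linalg.norm(Z_interp(f) - Z_of(f)) / np.linalg.norm(Z_of(f))
print(err)   # a few percent for this smoothly varying toy kernel
```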

Journal ArticleDOI
TL;DR: Schmidt's and Mirsky's theorems identify the matrix of specified rank that lies closest to a given matrix, with distance measured by any matrix norm invariant under the unitary group, as discussed by the authors.

Journal ArticleDOI
01 Jan 1997
TL;DR: In this article, the tensor product of two square matrices A = (a_ij) and B of order n is represented as the block matrix with blocks a_ij B, which is of order n².
Abstract: The operator convex functions of two variables are characterized in terms of a non-commutative generalization of Jensen's inequality. 1. FUNCTIONAL CALCULUS FOR FUNCTIONS OF SEVERAL VARIABLES. The tensor product of two square matrices A = (a_ij) and B of order n can be represented as the matrix

(1) \( A \otimes B = \begin{pmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & & \vdots \\ a_{n1}B & \cdots & a_{nn}B \end{pmatrix}, \)

which is of order n². However, if A and B are block matrices \( A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} \), \( B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix} \) of order 2n, then it is often more convenient to represent the tensor product A ⊗ B as the block matrix

(2) \( A \otimes B = \begin{pmatrix} A_{11} \otimes B_{11} & A_{11} \otimes B_{12} & A_{12} \otimes B_{11} & A_{12} \otimes B_{12} \\ A_{11} \otimes B_{21} & A_{11} \otimes B_{22} & A_{12} \otimes B_{21} & A_{12} \otimes B_{22} \\ A_{21} \otimes B_{11} & A_{21} \otimes B_{12} & A_{22} \otimes B_{11} & A_{22} \otimes B_{12} \\ A_{21} \otimes B_{21} & A_{21} \otimes B_{22} & A_{22} \otimes B_{21} & A_{22} \otimes B_{22} \end{pmatrix}. \)

The definition according to (2) is unitarily equivalent to the definition according to (1), and no confusion will occur as long as the two representations are not mixed. The latter representation has the benefit of rendering formulas for block matrices more transparent and will be used throughout this paper. A similar representation will be used for tensor products of block matrices of bounded linear operators on a Hilbert space. Koranyi [10] considered functional calculus for functions of two variables. Let f: I × J → R be a function of two variables defined on the product of two intervals, and let A, B be selfadjoint linear operators with finite spectra on a Hilbert space. If the spectrum of A is contained in I, and the spectrum of B is contained in J, ...
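
The claimed unitary equivalence of the two representations is in fact a permutation similarity, which is easy to verify numerically: representation (2) is obtained from the ordinary Kronecker product by swapping the two middle index coordinates. A small self-contained check (2x2 block partition, block size n):

```python
import numpy as np

n = 2
rng = np.random.default_rng(5)
A = rng.standard_normal((2 * n, 2 * n))
B = rng.standard_normal((2 * n, 2 * n))

K1 = np.kron(A, B)                       # representation (1)

pairs = [(0, 0), (0, 1), (1, 0), (1, 1)] # block index order (p,r) / (q,s)
blk = lambda M, p, q: M[p*n:(p+1)*n, q*n:(q+1)*n]
K2 = np.block([[np.kron(blk(A, p, q), blk(B, r, s)) for (q, s) in pairs]
               for (p, r) in pairs])     # representation (2)

# permutation (p, i, r, k) -> (p, r, i, k) relating the two index orders
perm = np.arange(4 * n * n).reshape(2, n, 2, n).transpose(0, 2, 1, 3).ravel()
P = np.eye(4 * n * n)[perm]
print(np.allclose(P @ K1 @ P.T, K2))     # True: permutation similarity
```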

Book ChapterDOI
01 Jan 1997
TL;DR: A Hadamard matrix H = (h_ij) is defined as a square matrix of dimension n×n whose entries are ±1 and in which any two distinct rows are orthogonal, as discussed by the authors.
Abstract: A Hadamard matrix, H = (h_ij), is defined as a square matrix of dimension n×n where: (i) all entries are ±1; (ii) any two distinct rows are orthogonal, i.e., for all i ≠ j, \( \sum_k h_{ik} h_{jk} = 0 \).
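
Sylvester's doubling construction gives a quick concrete family satisfying both conditions (a standard construction, not specific to this chapter):

```python
import numpy as np

def sylvester_hadamard(k):
    """Hadamard matrix of order 2^k via H -> [[H, H], [H, -H]]."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester_hadamard(3)                        # order 8, entries ±1
print(np.array_equal(H @ H.T, 8 * np.eye(8)))    # True: distinct rows orthogonal
```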

Journal ArticleDOI
TL;DR: In this paper, it was shown that singular P1-matrices are in E∗ and that those not in E∗ are U-matrices, with analogous results in the classes of adequate matrices and Z-matrices.

Journal ArticleDOI
TL;DR: In this paper, necessary and sufficient conditions for the existence of sequences and matrices with elements in given intervals and with prescribed lower and upper bounds on the element sums corresponding to the sets of an orthogonal pair of partitions are presented.

Patent
06 Mar 1997
TL;DR: In this article, a weighted calculation in which a cosine-transformed coefficient is multiplied by diagonal matrices from the right and left is carried out; a new transform matrix is obtained by previously multiplying the weighting diagonal matrix and the cosine transform matrix, and input data is transformed by using the new transform matrix.
Abstract: When a weighted calculation in which a cosine-transformed coefficient is multiplied by diagonal matrices from the right and left directions is carried out, a new transform matrix is obtained by previously multiplying the weighting diagonal matrix and the cosine transform matrix, and input data is transformed by using the new transform matrix. Thus, the circuit scale can be reduced, the processing steps can be simplified, and the cost can be reduced. When a weighted calculation in which a cosine-transformed coefficient C is multiplied by a diagonal matrix W from the right and left directions is carried out, the weighted cosine transform is carried out by using a new transform matrix Fw which results from previously multiplying the weighting diagonal matrix W and the cosine transform matrix F.
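
The folding works by associativity: W(FXF^T)W = (WF)X(WF)^T, so the diagonal weighting can be absorbed into the transform matrix once. A toy check with an orthonormal DCT-II matrix F and an assumed diagonal weight W:

```python
import numpy as np
from scipy.fft import dct

n = 8
F = dct(np.eye(n), norm="ortho", axis=0)   # DCT-II transform matrix
W = np.diag(np.linspace(1.0, 2.0, n))      # hypothetical weighting matrix
Fw = W @ F                                 # precomputed weighted transform

X = np.random.default_rng(3).standard_normal((n, n))
lhs = W @ (F @ X @ F.T) @ W                # weight after transforming
rhs = Fw @ X @ Fw.T                        # transform with folded weights
print(np.allclose(lhs, rhs))               # True
```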

Journal ArticleDOI
TL;DR: The definition and properties of cumulants of random matrices are used to obtain the expressions for the higher‐order cumulant and spectral vectors of a linear vector process and it is shown that linearity of a vector process implies constancy of the modulus square of its normalized higher-order spectra.
Abstract: A stationary multivariate time series {X_t} is defined as linear if it can be written in the form \( X_t = \sum_{j=-\infty}^{\infty} A_j e_{t-j} \), where the A_j are square matrices and the e_t are independent and identically distributed random vectors. If the {e_t} are normally distributed, then {X_t} is a multivariate Gaussian linear process. This paper is concerned with the testing of departures of a vector stationary process from multivariate Gaussianity and linearity using the bispectral approach. First, the definition and properties of cumulants of random matrices are used to obtain the expressions for the higher-order cumulant and spectral vectors of a linear vector process as defined above. Then it is shown that linearity of a vector process implies constancy of the modulus square of its normalized higher-order spectra, whereas the component of such a vector process does not necessarily have a linear representation. Finally, statistics for the testing of multivariate Gaussianity and linearity are proposed.

Journal ArticleDOI
TL;DR: In this paper, Cauchy's theorem is used to generate a complex variable boundary element method (CVBEM) formulation for steady, two-dimensional potential problems, which is mathematically equivalent to Real Variable BEM which employs Green's second identity and the respective fundamental solution.
Abstract: Cauchy's theorem is used to generate a Complex Variable Boundary Element Method (CVBEM) formulation for steady, two-dimensional potential problems. CVBEM uses the complex potential, w=ϕ+iψ, to combine the potential function, ϕ, with the stream function, ψ. The CVBEM formulation, using Cauchy's theorem, is shown to be mathematically equivalent to Real Variable BEM which employs Green's second identity and the respective fundamental solution. CVBEM yields an overdetermined system of equations that are commonly solved using implicit and explicit methods that reduce the overdetermined matrix to a square matrix by selectively excluding equations. Alternatively, Ordinary Least Squares (OLS) can be used to minimize the Euclidean norm square of the residual vector that arises due to the approximation of boundary potentials and geometries. OLS uses all equations to form a square matrix that is symmetric, positive definite and diagonally dominant. OLS is more accurate than existing methods and can estimate the approximation error at boundary nodes. The approximation error can be used to determine the adequacy of boundary discretization schemes. CVBEM/OLS provides greater flexibility for boundary conditions by allowing simultaneous specification of both fluid potentials and stream functions, or their derivatives, along boundary elements. © 1997 by John Wiley & Sons, Ltd.
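
The OLS step described above is the standard normal-equations reduction of an overdetermined system to a square, symmetric, positive definite one. A generic numeric sketch with toy data (not a CVBEM discretization):

```python
import numpy as np

rng = np.random.default_rng(4)
C = rng.standard_normal((40, 10))     # overdetermined: 40 equations, 10 unknowns
d = rng.standard_normal(40)

N = C.T @ C                           # square, SPD for full-rank C
z = np.linalg.solve(N, C.T @ d)       # normal equations (C^T C) z = C^T d
residual = C @ z - d                  # approximation error at the nodes

z_ref, *_ = np.linalg.lstsq(C, d, rcond=None)
print(np.allclose(z, z_ref))          # same least squares minimizer
```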