
Showing papers in "SIAM Journal on Matrix Analysis and Applications in 1996"


Journal ArticleDOI
TL;DR: This note gives the required Jacobi angles in closed form for the simultaneous diagonalization of several matrices.
Abstract: Simultaneous diagonalization of several matrices can be implemented by a Jacobi-like technique. This note gives the required Jacobi angles in closed form.
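The paper's closed form covers several matrices at once; as a hedged illustration of the underlying idea, here is the classical single-matrix special case, where the rotation angle that annihilates one off-diagonal entry of a symmetric matrix has a well-known closed form (the function names are ours, not the paper's):

```python
import numpy as np

def jacobi_angle(A, p, q):
    """Closed-form rotation angle that zeroes A[p, q] for symmetric A.

    Single-matrix special case: tan(2*theta) = 2*A[p, q] / (A[q, q] - A[p, p]).
    """
    return 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])

def rotate(A, p, q, theta):
    """Similarity transform J^T A J with the plane rotation in coordinates (p, q)."""
    J = np.eye(A.shape[0])
    c, s = np.cos(theta), np.sin(theta)
    J[p, p], J[q, q] = c, c
    J[p, q], J[q, p] = s, -s
    return J.T @ A @ J

A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = rotate(A, 0, 1, jacobi_angle(A, 0, 1))   # B[0, 1] is (numerically) zero
```

The joint-diagonalization angles of the paper reduce to this formula when only one matrix is present.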

903 citations


Journal ArticleDOI
TL;DR: In this article, a new method for the iterative computation of a few extremal eigenvalues of a symmetric matrix and their associated eigenvectors is proposed, based on an old and almost unknown method of Jacobi.
Abstract: In this paper we propose a new method for the iterative computation of a few of the extremal eigenvalues of a symmetric matrix and their associated eigenvectors. The method is based on an old and almost unknown method of Jacobi. Jacobi's approach, combined with Davidson's method, leads to a new method that has improved convergence properties and that may be used for general matrices. We also propose a variant of the new method that may be useful for the computation of nonextremal eigenvalues as well.
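A minimal dense sketch of the Jacobi–Davidson idea for the largest eigenvalue of a symmetric matrix, assuming an exact least-squares solve of the projected correction equation (the loop structure and names are ours; practical implementations solve the correction equation only approximately):

```python
import numpy as np

def jacobi_davidson_max(A, iters=20, seed=0):
    """Sketch of Jacobi-Davidson for the largest eigenvalue of symmetric A."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    V = rng.standard_normal((n, 1))
    V /= np.linalg.norm(V)
    for _ in range(iters):
        # Rayleigh-Ritz extraction on the current search space.
        H = V.T @ A @ V
        w, S = np.linalg.eigh(H)
        theta, s = w[-1], S[:, -1]          # target the largest Ritz pair
        u = V @ s
        r = A @ u - theta * u
        if np.linalg.norm(r) < 1e-12:
            break
        # Jacobi correction equation, projected orthogonally to u:
        #   (I - u u^T)(A - theta I)(I - u u^T) t = -r,  t ⊥ u.
        P = np.eye(n) - np.outer(u, u)
        M = P @ (A - theta * np.eye(n)) @ P
        t, *_ = np.linalg.lstsq(M, -r, rcond=None)
        # Orthogonalize the correction against V and expand the space.
        t -= V @ (V.T @ t)
        nt = np.linalg.norm(t)
        if nt < 1e-14:
            break
        V = np.hstack([V, (t / nt).reshape(-1, 1)])
    return theta

R = np.random.default_rng(1).standard_normal((20, 20))
A = np.diag(np.arange(1.0, 21.0)) + 0.01 * (R + R.T)
theta = jacobi_davidson_max(A, iters=15)
```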

900 citations


Journal ArticleDOI
TL;DR: An approximate minimum degree (AMD) ordering algorithm for preordering a symmetric sparse matrix prior to numerical factorization is presented; it is typically much faster than previous minimum degree algorithms and produces orderings comparable in quality with the best orderings from other minimum degree algorithms.
Abstract: An approximate minimum degree (AMD) ordering algorithm for preordering a symmetric sparse matrix prior to numerical factorization is presented. We use techniques based on the quotient graph for matrix factorization that allow us to obtain computationally cheap bounds for the minimum degree. We show that these bounds are often equal to the actual degree. The resulting algorithm is typically much faster than previous minimum degree ordering algorithms and produces results that are comparable in quality with the best orderings from other minimum degree algorithms.

779 citations


Journal ArticleDOI
TL;DR: A deflation procedure is introduced that is designed to improve the convergence of an implicitly restarted Arnoldi iteration for computing a few eigenvalues of a large matrix and implicitly deflates the converged approximations from the iteration.
Abstract: A deflation procedure is introduced that is designed to improve the convergence of an implicitly restarted Arnoldi iteration for computing a few eigenvalues of a large matrix. As the iteration progresses, the Ritz value approximations of the eigenvalues converge at different rates. A numerically stable scheme is introduced that implicitly deflates the converged approximations from the iteration. We present two forms of implicit deflation. The first, a locking operation, decouples converged Ritz values and associated vectors from the active part of the iteration. The second, a purging operation, removes unwanted but converged Ritz pairs. Convergence of the iteration is improved and a reduction in computational effort is also achieved. The deflation strategies make it possible to compute multiple or clustered eigenvalues with a single vector restart method. A block method is not required. These schemes are analyzed with respect to numerical stability, and computational results are presented.

654 citations


Journal ArticleDOI
TL;DR: New path-following ("noninterior continuation") methods for the solution of the linear complementarity problem are introduced; a global convergence result is proved for some implementable noninterior continuation methods, and numerical results obtained with these methods are reported.
Abstract: We introduce some new path-following methods for the solution of the linear complementarity problem. We call these methods noninterior continuation methods since, in contrast to interior-point methods, not all iterates have to stay in the positive orthant. This is possible since we reformulate certain perturbed complementarity problems as a nonlinear system of equations. However, similar to interior-point methods, we also try to follow the central path. We present some conditions which guarantee the existence of this central path, prove a global convergence result for some implementable noninterior continuation methods, and report some numerical results obtained with these methods. We also prove global error bound results for the perturbed linear complementarity problems.

382 citations


Journal ArticleDOI
TL;DR: It is shown that there exists an $n$ by $n$ matrix $A$ and a vector $r^0$ such that $f(k) = \|r^k\|$, where $r^k$ is the residual at step $k$ of the GMRES algorithm applied to the linear system $Ax=b$ with initial residual $r^0 = b - Ax^0$.
Abstract: Given a nonincreasing positive sequence $f(0) \geq f(1) \geq \cdots \geq f(n-1) > 0$, it is shown that there exists an $n$ by $n$ matrix $A$ and a vector $r^0$ with $\| r^0 \| = f(0)$ such that $f(k) = \| r^k \|$, $k=1, \ldots , n-1$, where $r^k$ is the residual at step $k$ of the GMRES algorithm applied to the linear system $Ax=b$, with initial residual $r^0 = b - A x^0$. Moreover, the matrix $A$ can be chosen to have any desired eigenvalues.
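The residual norms in the theorem are necessarily nonincreasing; that much is easy to check numerically. A hedged sketch that computes GMRES residual norms by dense least squares over a Krylov basis (an illustration of our own, not a production GMRES):

```python
import numpy as np

def gmres_residual_norms(A, b, steps):
    """Residual norms ||b - A x_k|| of GMRES with x0 = 0, via dense least squares."""
    n = len(b)
    K = np.empty((n, steps))
    K[:, 0] = b
    for j in range(1, steps):
        K[:, j] = A @ K[:, j - 1]
    Q, _ = np.linalg.qr(K)              # orthonormal basis of the Krylov space
    norms = [np.linalg.norm(b)]
    for k in range(1, steps + 1):
        W = A @ Q[:, :k]
        y, *_ = np.linalg.lstsq(W, b, rcond=None)   # minimize over the subspace
        norms.append(np.linalg.norm(b - W @ y))
    return norms

rng = np.random.default_rng(0)
n = 8
A = np.eye(n) + 0.5 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
res = gmres_residual_norms(A, b, n)
```

Because the minimization subspaces are nested, the computed norms cannot increase, and at step $n$ the residual vanishes; the paper's point is that any such nonincreasing curve is attainable with any prescribed eigenvalues.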

246 citations


Journal ArticleDOI
TL;DR: Results confirm that the STLN algorithm is an effective method for solving problems where $A$ or $b$ has a special structure or where errors can occur in only some of the elements of $A$ and $b$.
Abstract: A new formulation and algorithm is described for computing the solution to an overdetermined linear system, $Ax \approx b$, with possible errors in both $A$ and $b$. This approach preserves any affine structure of $A$ or $[A \:|\:b]$, such as Toeplitz or sparse structure, and minimizes a measure of error in the discrete $L_p$ norm, where $p=1,2$, or $\infty$. It can be considered as a generalization of total least squares and we call it structured total least norm (STLN). The STLN problem is formulated, the algorithm for its solution is presented and analyzed, and computational results that illustrate the algorithm convergence and performance on a variety of structured problems are summarized. For each test problem, the solutions obtained by least squares, total least squares, and STLN with $p = 1,2,$ and $\infty$ were compared. These results confirm that the STLN algorithm is an effective method for solving problems where $A$ or $b$ has a special structure or where errors can occur in only some of the elements of $A$ and $b$.

186 citations


Journal ArticleDOI
TL;DR: It is shown that the ADI iterative method is well suited for the solution of these Sylvester's equations; this is illustrated with computed examples for the case when the image is described by a separable first-order Markov process.
Abstract: The restoration of two-dimensional images in the presence of noise by Wiener's minimum mean square error filter requires the solution of large linear systems of equations. When the noise is white and Gaussian, and under suitable assumptions on the image, these equations can be written as a Sylvester's equation \[ T_1^{-1}\hat{F}+\hat{F}T_2=C \] for the matrix $\hat{F}$ representing the restored image. The matrices $T_1$ and $T_2$ are symmetric positive definite Toeplitz matrices. We show that the ADI iterative method is well suited for the solution of these Sylvester's equations, and illustrate this with computed examples for the case when the image is described by a separable first-order Markov process. We also consider generalizations of the ADI iterative method, propose new algorithms for the generation of iteration parameters, and illustrate the competitiveness of these schemes.
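For small dense instances, such a Sylvester equation can be solved directly; a hedged sketch using SciPy's Bartels–Stewart solver `solve_sylvester` (not the ADI iteration studied in the paper), with small stand-in Toeplitz data of our own:

```python
import numpy as np
from scipy.linalg import solve_sylvester, toeplitz

# Small SPD Toeplitz matrices standing in for T1 and T2.
T1 = toeplitz([2.0, -1.0, 0.0, 0.0])
T2 = toeplitz([3.0, 1.0, 0.0, 0.0])
C = np.outer(np.arange(1.0, 5.0), np.ones(4))

# T1^{-1} F + F T2 = C is a standard Sylvester equation A F + F B = C.
A = np.linalg.inv(T1)
F = solve_sylvester(A, T2, C)
residual = np.linalg.norm(A @ F + F @ T2 - C)
```

ADI becomes attractive when $T_1$ and $T_2$ are large, since each half-step only requires shifted Toeplitz solves.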

173 citations


Journal ArticleDOI
TL;DR: By extending the cyclic reduction technique to infinite block matrices, a new algorithm is devised for computing the solution of the matrix equation $G=\sum_{i=0}^{+\infty}G^iA_i$ arising in a wide class of queueing problems; the approach yields a dramatic reduction in the complexity of solving the given matrix equation.
Abstract: By extending the cyclic reduction technique to infinite block matrices we devise a new algorithm for computing the solution $G_0$ of the matrix equation $G=\sum_{i=0}^{+\infty}G^iA_i$ arising in a wide class of queueing problems. Here $A_i$, $i=0,1,\ldots,$ are $k\times k$ nonnegative matrices such that $\sum_{i=0}^{+\infty}A_i$ is column stochastic. Our algorithm, which under mild conditions generates a sequence of matrices converging quadratically to $G_0$, can be fully described in terms of simple operations between matrix power series, i.e., power series in $z$ having matrix coefficients. Such operations, like multiplication and reciprocation modulo $z^m$, can be quickly computed by means of FFT-based fast polynomial arithmetic; here $m$ is the degree where the power series are numerically cut off in order to reduce them to polynomials. These facts lead to a dramatic reduction of the complexity of solving the given matrix equation; in fact, $O(k^3m+k^2 m \log m)$ arithmetic operations are sufficient to carry out each iteration of the algorithm. Numerical experiments and comparisons performed with the customary techniques show the effectiveness of our algorithm. For a problem arising from the modelling of metropolitan networks, our algorithm was about 30 times faster than the algorithms customarily used in the applications. Cyclic reduction applied to quasi-birth--death (QBD) problems, i.e., problems where $A_i= O$ for $i>2$, leads to an algorithm similar to the one of [Latouche and Ramaswami, J. Appl. Probab., 30 (1993), pp. 650--674], but which has a lower computational cost.
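For the QBD special case ($A_i = O$ for $i > 2$), the simplest way to approximate the solution is the natural fixed-point iteration $G \leftarrow A_0 + GA_1 + G^2A_2$, which converges only linearly; a hedged sketch with made-up data (the paper's cyclic reduction converges quadratically and handles the general case):

```python
import numpy as np

# QBD data: nonnegative k-by-k blocks with A0 + A1 + A2 column stochastic.
A0 = np.array([[0.25, 0.15], [0.15, 0.25]])
A1 = np.array([[0.20, 0.30], [0.30, 0.20]])
A2 = np.array([[0.05, 0.05], [0.05, 0.05]])

# Natural iteration for the minimal nonnegative solution of
#   G = A0 + G A1 + G^2 A2,
# started from G = 0 (monotonically convergent).
G = np.zeros_like(A0)
for _ in range(500):
    G = A0 + G @ A1 + G @ G @ A2
residual = np.linalg.norm(G - (A0 + G @ A1 + G @ G @ A2))
```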

144 citations


Journal ArticleDOI
TL;DR: A stability theory for (possibly infinite) linear inequality systems defined on a finite-dimensional space is developed, analyzing certain continuity properties of the solution set mapping and conditions under which sufficiently small perturbations of the data in a consistent system produce systems belonging to the same class.
Abstract: This paper develops a stability theory for (possibly infinite) linear inequality systems defined on a finite-dimensional space, analyzing certain continuity properties of the solution set mapping. It also provides conditions under which sufficiently small perturbations of the data in a consistent (inconsistent) system produce systems belonging to the same class.

91 citations


Journal ArticleDOI
TL;DR: A relationship between the solution of the linear system and that of the least squares problem is used to relate the residual norms in Galerkin processes to the norms of the quantities minimized in the corresponding norm-minimizing processes.
Abstract: Several iterative methods for solving linear systems $Ax=b$ first construct a basis for a Krylov subspace and then use the basis vectors, together with the Hessenberg (or tridiagonal) matrix generated during that construction, to obtain an approximate solution to the linear system. To determine the approximate solution, it is necessary to solve either a linear system with the Hessenberg matrix as coefficient matrix or an extended Hessenberg least squares problem. In the first case, referred to as a Galerkin method, the residual is orthogonal to the Krylov subspace, whereas in the second case, referred to as a norm-minimizing method, the residual (or a related quantity) is minimized over the Krylov subspace. Examples of such pairs include the full orthogonalization method (FOM) (Arnoldi) and generalized minimal residual (GMRES) algorithms, the biconjugate gradient (BCG) and quasi-minimal residual (QMR) algorithms, and their symmetric equivalents, the Lanczos and minimal residual (MINRES) algorithms. A relationship between the solution of the linear system and that of the least squares problem is used to relate the residual norms in Galerkin processes to the norms of the quantities minimized in the corresponding norm-minimizing processes. It is shown that when the norm-minimizing process is converging rapidly, the residual norms in the corresponding Galerkin process exhibit similar behavior, whereas when the norm-minimizing process is converging very slowly, the residual norms in the corresponding Galerkin process are significantly larger. This is a generalization of the relationship established between Arnoldi and GMRES residual norms in P. N. Brown, A theoretical comparison of the Arnoldi and GMRES algorithms, SIAM J. Sci. Statist. Comput., 12, 1991, pp. 58--78. For MINRES and Lanczos, and for two nonsymmetric bidiagonalization procedures, we extend the arguments to incorporate the effects of finite precision arithmetic.
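The Galerkin/norm-minimizing relationship can be checked numerically. With $\rho_k = \|r_k^{\mathrm{GMRES}}\|$, Brown's identity gives $\|r_k^{\mathrm{FOM}}\| = \rho_k/\sqrt{1-(\rho_k/\rho_{k-1})^2}$, so rapid GMRES progress forces comparable FOM residuals while near-stagnation inflates them. A small dense check of our own (both iterates built from one orthonormal Krylov basis):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10
A = np.eye(n) + 0.2 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Orthonormal Krylov basis via QR of [b, Ab, ..., A^{m-1} b].
m = 5
K = np.empty((n, m))
K[:, 0] = b
for j in range(1, m):
    K[:, j] = A @ K[:, j - 1]
Q, _ = np.linalg.qr(K)

rho = [np.linalg.norm(b)]            # GMRES residual norms; rho[k] at step k
fom = []                             # FOM (Galerkin) residual norms
for k in range(1, m + 1):
    V = Q[:, :k]
    y, *_ = np.linalg.lstsq(A @ V, b, rcond=None)    # GMRES: minimize residual
    rho.append(np.linalg.norm(b - A @ V @ y))
    z = np.linalg.solve(V.T @ A @ V, V.T @ b)        # FOM: Galerkin condition
    fom.append(np.linalg.norm(b - A @ V @ z))

predicted = [rho[k] / np.sqrt(1.0 - (rho[k] / rho[k - 1]) ** 2)
             for k in range(1, m + 1)]
```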

Journal ArticleDOI
TL;DR: In this paper, the authors establish a characterization and a representation for the Drazin inverse of an arbitrary square matrix which reduce to the well-known results when the matrix is nonsingular.
Abstract: We establish a characterization and a representation for the Drazin inverse of an arbitrary square matrix which reduce to the well-known results when the matrix is nonsingular.
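One classical representation (not necessarily the one established in this paper) expresses the Drazin inverse through a Moore–Penrose pseudoinverse: $A^D = A^{\ell}(A^{2\ell+1})^{\dagger}A^{\ell}$ for any $\ell \ge \operatorname{ind}(A)$, which collapses to $A^{-1}$ for nonsingular $A$. A sketch:

```python
import numpy as np

def drazin(A, ell=None):
    """Drazin inverse via A^l pinv(A^{2l+1}) A^l, valid for l >= ind(A).

    Taking l = n is always safe since ind(A) <= n (fine for tiny matrices;
    high powers are numerically delicate for larger ones).
    """
    n = A.shape[0]
    if ell is None:
        ell = n
    Al = np.linalg.matrix_power(A, ell)
    return Al @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * ell + 1)) @ Al

# Nonsingular case: the Drazin inverse is just the ordinary inverse.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
X = drazin(A)
```

For a singular matrix the result satisfies the defining relations $X A X = X$, $AX = XA$, and $A^{k+1} X = A^k$ with $k = \operatorname{ind}(A)$.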

Journal ArticleDOI
TL;DR: A framework is developed for answering geometric and analytic questions about an invariant function by answering the corresponding question for the (much simpler) function appearing in its decomposition.
Abstract: Certain interesting classes of functions on a real inner product space are invariant under an associated group of orthogonal linear transformations. This invariance can be made explicit via a simple decomposition. For example, rotationally invariant functions on {\bf R}$^2$ are just even functions of the Euclidean norm, and functions on the Hermitian matrices (with trace inner product) which are invariant under unitary similarity transformations are just symmetric functions of the eigenvalues. We develop a framework for answering geometric and analytic (both classical and nonsmooth) questions about such a function by answering the corresponding question for the (much simpler) function appearing in the decomposition. The aim is to understand and extend the foundations of eigenvalue optimization, matrix approximation, and semidefinite programming.

Journal ArticleDOI
TL;DR: Conditions are derived under which Cholesky factorization is stable for quasidefinite systems; such a system is closely related to an unsymmetric positive-definite matrix, for which an $LDM^{T}$ factorization exists.
Abstract: Sparse linear equations $Kd=r$ are considered, where $K$ is a specially structured symmetric indefinite matrix that arises in numerical optimization and elsewhere. Under certain conditions, $K$ is quasidefinite. The Cholesky factorization $PKP^{T} = LDL^{T}$ is then known to exist for any permutation $P$, even though $D$ is indefinite. Quasidefinite matrices have been used successfully by Vanderbei within barrier methods for linear and quadratic programming. An advantage is that for a sequence of $K$'s, $P$ may be chosen once and for all to optimize the sparsity of $L$, as in the positive-definite case. A preliminary stability analysis is developed here. It is observed that a quasidefinite matrix is closely related to an unsymmetric positive-definite matrix, for which an $LDM^{T}$ factorization exists. Using the Golub and Van Loan analysis of the latter, conditions are derived under which Cholesky factorization is stable for quasidefinite systems. Some numerical results confirm the predictions.
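For a quasidefinite $K = \bigl[\begin{smallmatrix} H & A^T \\ A & -G \end{smallmatrix}\bigr]$ with $H$ and $G$ positive definite, the unpivoted symmetric factorization can be sketched directly (a naive dense version for illustration, not Vanderbei's sparse implementation); note the indefinite diagonal $D$:

```python
import numpy as np

def ldlt_nopivot(K):
    """Unpivoted LDL^T of a symmetric matrix (exists for quasidefinite K)."""
    n = K.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = K[j, j] - (L[j, :j] ** 2) @ d[:j]
        for i in range(j + 1, n):
            L[i, j] = (K[i, j] - (L[i, :j] * d[:j]) @ L[j, :j]) / d[j]
    return L, d

H = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite
G = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite
A = np.array([[1.0, 2.0], [0.0, 1.0]])
K = np.block([[H, A.T], [A, -G]])

L, d = ldlt_nopivot(K)
# d is indefinite: two positive pivots (from H) and two negative (from -G).
```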

Journal ArticleDOI
TL;DR: Upper and lower bounds are established for the minimal eigenvalue of $A$, for its corresponding eigenvector, and for the entries of the inverse of $A$; these yield two-sided bounds for both the $\ell_{1}$-norm and the weighted Perron-norm of the solution $x(t)$ of the linear differential system $\dot{x}=-Ax$.
Abstract: Let $A$ be a real weakly diagonally dominant $M$-matrix. We establish upper and lower bounds for the minimal eigenvalue of $A$, for its corresponding eigenvector, and for the entries of the inverse of $A$. Our results are applied to find meaningful two-sided bounds for both the $\ell_{1}$-norm and the weighted Perron-norm of the solution $x(t)$ to the linear differential system $\dot{x}=-Ax$, $x(0)=x_{0}>0$. These systems occur in a number of applications, including compartmental analysis and RC electrical circuits. A detailed analysis of a model for the transient behaviour of digital circuits is given to illustrate the theory.
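Two ingredients of such bounds are easy to check numerically: the inverse of a nonsingular weakly diagonally dominant M-matrix is entrywise nonnegative, and $e^{-At} \ge 0$ entrywise (so $x(t)$ stays positive), since $-A$ has nonnegative off-diagonal entries. A hedged check on a small example of our own:

```python
import numpy as np
from scipy.linalg import expm

# A weakly diagonally dominant M-matrix: positive diagonal,
# nonpositive off-diagonal entries, nonnegative row sums.
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  2.0]])

Ainv = np.linalg.inv(A)     # entrywise nonnegative
phi = expm(-A)              # entrywise nonnegative transition matrix
x0 = np.array([1.0, 2.0, 1.0])
x1 = phi @ x0               # x(1) for xdot = -A x, x(0) = x0: stays positive
```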

Journal ArticleDOI
TL;DR: It is shown that diagonal ill-conditioning may be characterized by the property of strict $t$-diagonal dominance, which generalizes the idea of diagonal dominance to matrices whose diagonals are substantially larger in magnitude than the off-diagonals.
Abstract: Many interior methods for constrained optimization obtain a search direction as the solution of a symmetric linear system that becomes increasingly ill-conditioned as the solution is approached. In some cases, this ill-conditioning is characterized by a subset of the diagonal elements becoming large in magnitude. It has been shown that in this situation the solution can be computed accurately regardless of the size of the diagonal elements. In this paper we discuss the formulation of several interior methods that use symmetric diagonally ill-conditioned systems. It is shown that diagonal ill-conditioning may be characterized by the property of strict $t$-diagonal dominance, which generalizes the idea of diagonal dominance to matrices whose diagonals are substantially larger in magnitude than the off-diagonals. A perturbation analysis is presented that characterizes the sensitivity of $t$-diagonally dominant systems under a certain class of structured perturbations. Finally, we give a rounding-error analysis of the symmetric indefinite factorization when applied to $t$-diagonally dominant systems. This analysis resolves the (until now) open question of whether the class of perturbations used in the sensitivity analysis is representative of the rounding error made during the numerical solution of the barrier equations.

Journal ArticleDOI
TL;DR: It turns out that the spectrum of face dimensions is lacunary and that $\mathcal{E}_{n \times n} $ has polyhedral faces of dimension up to $ \approx \sqrt {2n} $.
Abstract: We study the facial structure of the set $\mathcal{E}_{n \times n} $ of correlation matrices (i.e., the positive semidefinite matrices with diagonal entries equal to 1). In particular, we determine the possible dimensions for a face, as well as for a polyhedral face, of $\mathcal{E}_{n \times n} $. It turns out that the spectrum of face dimensions is lacunary and that $\mathcal{E}_{n \times n} $ has polyhedral faces of dimension up to $ \approx \sqrt {2n} $. As an application, we describe in detail the faces of $\mathcal{E}_{4 \times 4} $. We also discuss results related to optimization over $\mathcal{E}_{n \times n} $.

Journal ArticleDOI
TL;DR: This work considers computing the real logarithm of a real matrix and analyzes and implements a novel method to estimate the Fréchet derivative of the $\log$, which proved very successful for condition estimation.
Abstract: In this work, we consider computing the real logarithm of a real matrix. We pay attention to general conditioning issues, provide careful implementation for several techniques including scaling issues, and finally test and compare the techniques on a number of problems. All things considered, our recommendation for a general purpose method goes to the Schur decomposition approach with eigenvalue grouping, followed by square roots and diagonal Padé approximants of the diagonal blocks. Nonetheless, in some cases, a well-implemented series expansion technique outperformed the other methods. We have also analyzed and implemented a novel method to estimate the Fréchet derivative of the $\log$, which proved very successful for condition estimation.
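The defining round trip $e^{\log A} = A$ is easy to check with library routines; a hedged sketch using SciPy's `logm` (which implements a later inverse scaling-and-squaring variant, not necessarily this paper's recommended method):

```python
import numpy as np
from scipy.linalg import expm, logm

# A real matrix with no eigenvalues on the closed negative real axis,
# so it has a unique real principal logarithm.
A = np.array([[2.0, 1.0],
              [0.5, 3.0]])

L = logm(A)
roundtrip = expm(L)   # recovers A up to rounding
```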

Journal ArticleDOI
TL;DR: A perturbation analysis is used to indicate the best accuracy that can be expected from a finite-precision algorithm that uses the generator matrix as the input data and shows that the modified Schur algorithm is backward stable for a large class of structured matrices.
Abstract: This paper provides a detailed analysis that shows how to stabilize the {\em generalized} Schur algorithm, which is a fast procedure for the Cholesky factorization of positive-definite structured matrices $R$ that satisfy displacement equations of the form $R-FRF^T=GJG^T$, where $J$ is a $2\times 2$ signature matrix, $F$ is a stable lower-triangular matrix, and $G$ is a generator matrix. In particular, two new schemes for carrying out the required hyperbolic rotations are introduced and special care is taken to ensure that the entries of a Blaschke matrix are computed to high relative accuracy. Also, a condition on the smallest eigenvalue of the matrix, along with several computational enhancements, is introduced in order to avoid possible breakdowns of the algorithm by assuring the positive-definiteness of the successive Schur complements. We use a perturbation analysis to indicate the best accuracy that can be expected from {\em any} finite-precision algorithm that uses the generator matrix as the input data. We then show that the modified Schur algorithm proposed in this work essentially achieves this bound when coupled with a scheme to control the generator growth. The analysis further clarifies when pivoting strategies may be helpful and includes illustrative numerical examples. For all practical purposes, the major conclusion of the analysis is that the modified Schur algorithm is backward stable for a large class of structured matrices.

Journal ArticleDOI
TL;DR: In this article, conditions for the existence of the first and higher derivatives of $f(A(t))$ are presented, together with formulae that represent these derivatives as a submatrix of $f(B)$ for a larger block Toeplitz matrix $B$.
Abstract: Let $f$ be a not necessarily analytic function and let $A(t)$ be a family of $n \times n$ matrices depending on the parameter $t$. Conditions for the existence of the first and higher derivatives of $f(A(t))$ are presented together with formulae that represent these derivatives as a submatrix of $f(B)$, where $B$ is a larger block Toeplitz matrix. This block matrix representation of the first derivative is shown to be useful in the context of condition estimation for matrix functions. The results presented here are slightly stronger than those in the literature and are proved in a considerably simpler way.
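For $f = \exp$ and the affine family $A(t) = A + tE$, the block Toeplitz representation reads $\exp\bigl[\begin{smallmatrix} A & E \\ 0 & A \end{smallmatrix}\bigr] = \bigl[\begin{smallmatrix} e^A & \frac{d}{dt}e^{A+tE}|_{t=0} \\ 0 & e^A \end{smallmatrix}\bigr]$; a numerical check of our own against a central difference:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
E = rng.standard_normal((n, n))

# Derivative of exp(A + tE) at t = 0, read off the 2n-by-2n block matrix.
B = np.block([[A, E], [np.zeros((n, n)), A]])
deriv_block = expm(B)[:n, n:]

# Central finite-difference reference.
h = 1e-5
deriv_fd = (expm(A + h * E) - expm(A - h * E)) / (2 * h)
```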

Journal ArticleDOI
TL;DR: It is demonstrated that for matrices unitarily equivalent to an upper triangular Toeplitz matrix, a similar result holds; namely, either both methods converge or both fail to converge, although this result cannot be generalized to all matrices.
Abstract: This paper compares the convergence behavior of two popular iterative methods for solving systems of linear equations: the $s$-step restarted minimal residual method (commonly implemented by algorithms such as GMRES($s$)) and $(s-1)$-degree polynomial preconditioning. It is known that for normal matrices, and in particular for symmetric positive definite matrices, the convergence bounds for the two methods are the same. In this paper we demonstrate that for matrices unitarily equivalent to an upper triangular Toeplitz matrix, a similar result holds; namely, either both methods converge or both fail to converge. However, we show this result cannot be generalized to all matrices. Specifically, we develop a method, based on convexity properties of the generalized field of values of powers of the iteration matrix, to obtain examples of real matrices for which GMRES($s$) converges for every initial vector, but every $(s-1)$-degree polynomial preconditioning stagnates or diverges for some initial vector.

Journal ArticleDOI
TL;DR: This paper describes a much simpler generalized Schur-type algorithm to compute low-rank approximants $\hat{H}$ of a matrix $H$ such that $H - \hat{H}$ has 2-norm less than $\epsilon$.
Abstract: The usual way to compute a low-rank approximant of a matrix $H$ is to take its singular value decomposition (SVD) and truncate it by setting the small singular values equal to 0. However, the SVD is computationally expensive. This paper describes a much simpler generalized Schur-type algorithm to compute similar low-rank approximants. For a given matrix $H$ which has $d$ singular values larger than $\epsilon$, we find all rank $d$ approximants $\hat{H}$ such that $H - \hat{H}$ has 2-norm less than $\epsilon$. The set of approximants includes the truncated SVD approximation. The advantages of the Schur algorithm are that it has a much lower computational complexity (similar to a QR factorization), and directly produces a description of the column space of the approximants. This column space can be updated and downdated in an on-line scheme, amenable to implementation on a parallel array of processors.
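The truncated-SVD benchmark that the Schur-type algorithm is compared against is easy to state: zeroing all singular values below $\epsilon$ leaves a rank-$d$ approximant whose 2-norm error is exactly $\sigma_{d+1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((6, 5))
U, s, Vt = np.linalg.svd(H, full_matrices=False)

d = 2                                  # keep the d largest singular values
H_d = (U[:, :d] * s[:d]) @ Vt[:d, :]   # truncated SVD approximant
err = np.linalg.norm(H - H_d, 2)       # spectral-norm error = s[d]
```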

Journal ArticleDOI
TL;DR: Preconditioning strategies are studied for linear systems with positive-definite matrices of the form $Z^{T} GZ$; the strategies are suitable within algorithms for constrained optimization and are applied to a class of problems in the calculus of variations.
Abstract: We study preconditioning strategies for linear systems with positive-definite matrices of the form $Z^{T} GZ$, where $Z$ is rectangular and $G$ is symmetric but not necessarily positive definite. The preconditioning strategies are designed to be used in the context of a conjugate-gradient iteration, and are suitable within algorithms for constrained optimization problems. The techniques have other uses, however, and are applied here to a class of problems in the calculus of variations. Numerical tests are also included.

Journal ArticleDOI
TL;DR: The solution of the unconstrained weighted linear least-squares problem is known to be a convex combination of the basic solutions formed by the nonsingular subsystems if the weight matrix is diagonal and positive definite, which implies that the norm of this solution is uniformly bounded for any diagonal and positive definite weight matrix.
Abstract: The solution of the unconstrained weighted linear least-squares problem is known to be a convex combination of the basic solutions formed by the nonsingular subsystems if the weight matrix is diagonal and positive definite. In particular, this implies that the norm of this solution is uniformly bounded for any diagonal and positive definite weight matrix. In addition, the solution set is known to be the relative interior of a finite set of polytopes if the weight matrix varies over the set of positive definite diagonal matrices. In this paper, these results are reviewed and generalized to the set of weight matrices that are symmetric, positive semidefinite, and diagonally dominant and that give unique solution to the least-squares problem. This is done by means of a particular symmetric diagonal decomposition of the weight matrix, giving a finite number of diagonally weighted problems but in a space of higher dimension. Extensions to equality-constrained weighted linear least-squares problems are given. A discussion of why the boundedness properties do not hold for general symmetric positive definite weight matrices is given. The motivation for this research is from interior methods for optimization.

Journal ArticleDOI
TL;DR: It is proved that all finite normal Toeplitz matrices are either generalised circulants or are obtained from Hermitian Toeplitz matrices by rotation and translation.
Abstract: It is well known from the work of Brown and Halmos [J. Reine Angew. Math., 213 (1963/1964), pp. 89--102] that an infinite Toeplitz matrix is normal if and only if it is a rotation and translation of a Hermitian Toeplitz matrix. In the present article we prove that all finite normal Toeplitz matrices are either generalised circulants or are obtained from Hermitian Toeplitz matrices by rotation and translation.
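The circulant half of the statement is easy to probe numerically: every circulant is diagonalized by the DFT and hence normal, while a generic non-circulant, non-Hermitian Toeplitz matrix is not. A small check with data of our own:

```python
import numpy as np
from scipy.linalg import circulant, toeplitz

C = circulant([1.0, 2.0, 3.0, 4.0])               # circulant => normal
T = toeplitz([1.0, 2.0, 3.0], [1.0, 5.0, 6.0])    # generic Toeplitz: not normal

def is_normal(M):
    """A matrix is normal iff it commutes with its conjugate transpose."""
    return np.allclose(M @ M.conj().T, M.conj().T @ M)
```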

Journal ArticleDOI
TL;DR: New preconditioning techniques are presented for the solution by the preconditioned conjugate gradient (PCG) method of Hermitian Toeplitz systems with real and nondefinite generating functions, and the convergence speed is shown to be independent of the dimension of the involved matrices.
Abstract: In this paper we present new preconditioning techniques for the solution by the preconditioned conjugate gradient (PCG) method of Hermitian Toeplitz systems with real and nondefinite generating functions: actually we extend some results of Chan [IMA J. Numer. Anal., 11 (1991), pp. 333--345] and Di Benedetto, Fiorentino, and Serra [Comput. Math. Appl., 25 (1993), pp. 33--45] proved for positive definite Toeplitz systems. Moreover we demonstrate some density properties of the spectra of the preconditioned matrices. Finally, we show that the convergence speed of this PCG method is independent of the dimension of the involved matrices.

Journal ArticleDOI
TL;DR: Perturbation bounds are derived for the subspaces associated with a general two-sided orthogonal decomposition of a numerically rank-deficient matrix. The results imply the subspaces are only slightly more sensitive to perturbations than singular subspaces, provided the norms of the off-diagonal blocks of the middle matrices are sufficiently small with respect to the size of the perturbation.
Abstract: Two-sided (or complete) orthogonal decompositions are good alternatives to the singular value decomposition (SVD) because they can yield good approximations to the fundamental subspaces associated with a numerically rank-deficient matrix. In this paper we derive perturbation bounds for the subspaces associated with a general two-sided orthogonal decomposition of a numerically rank-deficient matrix. The results imply the subspaces are only slightly more sensitive to perturbations than singular subspaces, provided the norms of the off-diagonal blocks of the middle matrices are sufficiently small with respect to the size of the perturbation. We consider regularizing the solution to the ill-conditioned least squares problem by truncating the decomposition and present perturbation theory for the minimum norm solution of the resulting least squares problem. The main results can be specialized to well known SVD-based perturbation bounds for singular subspaces as well as the truncated least squares solution.

Journal ArticleDOI
TL;DR: Perturbations devised to impose a certain nongeneric structure are computed in a way that guarantees one will find a Kronecker canonical form (KCF) on the closure of the orbit of the intended KCF.
Abstract: The set (or family) of 2-by-3 matrix pencils $A - \lambda B$ comprises 18 structurally different Kronecker structures (canonical forms). The algebraic and geometric characteristics of the generic and the 17 nongeneric cases are examined in full detail. The complete closure hierarchy of the orbits of all different Kronecker structures is derived and presented in a closure graph that shows how the structures relate to each other in the 12-dimensional space spanned by the set of 2-by-3 pencils. Necessary conditions on perturbations for transiting from the orbit of one Kronecker structure to another in the closure hierarchy are presented in a labeled closure graph. The node and arc labels show geometric characteristics of an orbit's Kronecker structure and the change of geometric characteristics when transiting to an adjacent node, respectively. Computable normwise bounds for the smallest perturbations $(\delta A, \delta B)$ of a generic 2-by-3 pencil $A - \lambda B$ such that $(A + \delta A) - \lambda (B + \delta B)$ has a specific nongeneric Kronecker structure are presented. First, explicit expressions for the perturbations that transfer $A - \lambda B$ to a specified nongeneric form are derived. In this context tractable and intractable perturbations are defined. Second, a modified GUPTRI that computes a specified Kronecker structure of a generic pencil is used. Perturbations devised to impose a certain nongeneric structure are computed in a way that guarantees one will find a Kronecker canonical form (KCF) on the closure of the orbit of the intended KCF. Both approaches are illustrated by computational experiments. Moreover, a study of the behaviour of the nongeneric structures under random perturbations in finite precision arithmetic (using the GUPTRI software) shows for which sizes of perturbations the structures are invariant and also that structure transitions occur in accordance with the closure hierarchy.
Finally, some of the results are extended to the general $m$-by-$(m+1)$ case.

Journal ArticleDOI
TL;DR: It is shown that $f$ has a zero when its recession function has a unique zero (at the origin) with a nonvanishing index, and the global error bound property of a piecewise affine function is characterized in terms of the recession cones of the zero sets of the function and its recession function.
Abstract: For a piecewise affine function $f:\mathbb{R}^n \to \mathbb{R}^m$, the recession function is defined by $f^\infty(x) := \lim_{\lambda \to \infty} \frac{f(\lambda x)}{\lambda}$. In this paper, we study the zero set and error bound properties of $f$ via $f^\infty$. We show, for example, that $f$ has a zero when $f^\infty$ has a unique zero (at the origin) with a nonvanishing index. We also characterize the global error bound property of a piecewise affine function in terms of the recession cones of the zero sets of the function and its recession function.

Journal ArticleDOI
TL;DR: This paper discusses the problem of constructing a Jacobi matrix whose eigenvalues are prescribed distinct values and proposes a new fast algorithm for solving this problem.
Abstract: In this paper, we discuss the problem of constructing a $2n \times 2n$ Jacobi matrix $J_{2n}$ such that its eigenvalues are given distinct values $\lambda_1, \lambda_2, \ldots, \lambda_{2n}$ and its leading $n \times n$ principal submatrix is a given $n \times n$ Jacobi matrix $J_n$. We give some sufficient and necessary conditions for the solubility of the problem and propose a new fast algorithm for solving this problem. We also present some numerical results.