
Showing papers in "SIAM Journal on Matrix Analysis and Applications in 1998"


Journal ArticleDOI
TL;DR: The problem of maximizing the determinant of a matrix subject to linear matrix inequalities (LMIs) arises in many fields, including computational geometry, statistics, system identification, experiment design, and information and communication theory as discussed by the authors.
Abstract: The problem of maximizing the determinant of a matrix subject to linear matrix inequalities (LMIs) arises in many fields, including computational geometry, statistics, system identification, experiment design, and information and communication theory. It can also be considered as a generalization of the semidefinite programming problem. We give an overview of the applications of the determinant maximization problem, pointing out simple cases where specialized algorithms or analytical solutions are known. We then describe an interior-point method, with a simplified analysis of the worst-case complexity and numerical results that indicate that the method is very efficient, both in theory and in practice. Compared to existing specialized algorithms (where they are available), the interior-point method will generally be slower; the advantage is that it handles a much wider variety of problems.

716 citations
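One of the simple cases with an analytical solution can be checked numerically: over unit-trace positive semidefinite matrices, the determinant is maximized at X = I/n, by the AM-GM inequality on the eigenvalues. A small NumPy sketch (illustrative only, not the paper's interior-point method):

```python
import numpy as np

# For the simplest determinant-maximization problem, maximize det(X) over
# symmetric positive semidefinite X with trace(X) = 1, the analytical
# solution is X = I/n (AM-GM on the eigenvalues gives det X <= n**-n).
rng = np.random.default_rng(0)
n = 4
best_det = np.linalg.det(np.eye(n) / n)   # = n**(-n)

for _ in range(1000):
    # random PSD matrix, normalized to unit trace
    G = rng.standard_normal((n, n))
    X = G @ G.T
    X /= np.trace(X)
    assert np.linalg.det(X) <= best_det + 1e-12

print(best_det, n ** (-n))  # both 4**-4 = 0.00390625
```

The general problem, with LMI constraints, requires the interior-point machinery described in the abstract; this only illustrates the objective.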


Journal ArticleDOI
TL;DR: An error bound for a family of problems arising from the elliptic method of lines is derived and shows that, for the same approximation quality, the diagonal variant of the extended subspaces requires about the square root of the dimension of the standard Krylov subspace using only positive or negative matrix powers.
Abstract: We introduce an economical Gram--Schmidt orthogonalization on the extended Krylov subspace originated by actions of a symmetric matrix and its inverse. An error bound for a family of problems arising from the elliptic method of lines is derived. The bound shows that, for the same approximation quality, the diagonal variant of the extended subspaces requires about the square root of the dimension of the standard Krylov subspaces using only positive or negative matrix powers. An example of an application to the solution of a 2.5-D elliptic problem attests to the computational efficiency of the method for large-scale problems.

244 citations
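The extended Krylov idea can be made concrete with a small sketch (assumed random data, not the paper's economical Gram--Schmidt scheme): build an orthonormal basis for a subspace spanned by both positive and negative powers of a symmetric positive definite A applied to b.

```python
import numpy as np

# Orthonormal basis of the extended Krylov subspace
# span{b, A^{-1} b, A b, A^{-2} b} via modified Gram-Schmidt.
rng = np.random.default_rng(1)
n = 50
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)          # symmetric positive definite
b = rng.standard_normal(n)

vectors = [b,
           np.linalg.solve(A, b),
           A @ b,
           np.linalg.solve(A, np.linalg.solve(A, b))]
Q = []
for v in vectors:
    w = v.astype(float)
    for q in Q:                      # modified Gram-Schmidt sweep
        w = w - (q @ w) * q
    Q.append(w / np.linalg.norm(w))
Q = np.column_stack(Q)

assert np.allclose(Q.T @ Q, np.eye(len(vectors)))
```

A production version would apply a factorization of A once and reuse it for all solves, which is where the economy discussed in the abstract comes from.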


Journal ArticleDOI
TL;DR: This work considers two popular spectral separator algorithms and provides counterexamples showing that these algorithms perform poorly on certain graphs, and introduces some facts about the structure of eigenvectors of certain types of Laplacian and symmetric matrices.
Abstract: Computing graph separators is an important step in many graph algorithms. A popular technique for finding separators involves spectral methods. However, there has not been much prior analysis of the quality of the separators produced by this technique; instead it is usually claimed that spectral methods "work well in practice." We present an initial attempt at such an analysis. In particular, we consider two popular spectral separator algorithms and provide counterexamples showing that these algorithms perform poorly on certain graphs. We also consider a generalized definition of spectral methods that allows the use of some specified number of the eigenvectors corresponding to the smallest eigenvalues of the Laplacian matrix of a graph, and we show that if such algorithms use a constant number of eigenvectors, then there are graphs for which they do no better than using only the second smallest eigenvector. Furthermore, using the second smallest eigenvector of these graphs produces partitions that are poor with respect to bounds on the gap between the isoperimetric number and the cut quotient of the spectral separator. Even if a generalized spectral algorithm uses $n^\epsilon$ for \mbox{$0 < \epsilon < \frac{1}{4}$} eigenvectors, there exist graphs for which the algorithm fails to find a separator with a cut quotient within \mbox{$n^{\frac{1}{4} - \epsilon} - 1$} of the isoperimetric number. We also introduce some facts about the structure of eigenvectors of certain types of Laplacian and symmetric matrices; these facts provide the basis for the analysis of the counterexamples. Finally, we discuss some developments in spectral partitioning that have occurred since these results first appeared.

173 citations
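A minimal illustration of the spectral separator algorithms under analysis, run on a path graph where spectral bisection behaves well (the paper's counterexamples are far more intricate): partition by the signs of the Fiedler vector, the eigenvector for the second-smallest Laplacian eigenvalue.

```python
import numpy as np

# Spectral bisection of a path graph by the Fiedler vector.
# On a path the resulting cut severs exactly one edge.
n = 8
edges = [(i, i + 1) for i in range(n - 1)]
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

w, V = np.linalg.eigh(L)        # eigenvalues sorted ascending
fiedler = V[:, 1]
part = fiedler > 0
cut = sum(part[i] != part[j] for i, j in edges)
print(cut)  # 1: the path splits into two contiguous halves
```

The cut count is invariant under the arbitrary sign of the computed eigenvector, so the example is deterministic.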


Journal ArticleDOI
TL;DR: In this paper, the authors formulate and solve a new parameter estimation problem in the presence of data uncertainties; the method is suitable when a priori bounds on the uncertain data are available, and its solution leads to more meaningful results, especially when compared with other methods such as total least squares and robust estimation.
Abstract: We formulate and solve a new parameter estimation problem in the presence of data uncertainties. The new method is suitable when a priori bounds on the uncertain data are available, and its solution leads to more meaningful results, especially when compared with other methods such as total least squares and robust estimation. Its superior performance is due to the fact that the new method guarantees that the effect of the uncertainties will never be unnecessarily overestimated, beyond what is reasonably assumed by the a priori bounds. A geometric interpretation of the solution is provided, along with a closed form expression for it. We also consider the case in which only selected columns of the coefficient matrix are subject to perturbations.

172 citations
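The flavor of the underlying min-max formulation can be checked numerically: for a fixed x, the worst-case residual over perturbations with 2-norm at most eta equals ||Ax - b|| + eta ||x||, attained by a rank-one perturbation aligned with the residual. A sketch with assumed random data:

```python
import numpy as np

# For fixed x:  max_{||dA|| <= eta} ||(A + dA) x - b|| = ||A x - b|| + eta ||x||,
# attained at dA = eta * r x^T / (||r|| ||x||) with r = A x - b.
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)
x = rng.standard_normal(3)
eta = 0.1

r = A @ x - b
dA = eta * np.outer(r / np.linalg.norm(r), x / np.linalg.norm(x))
attained = np.linalg.norm((A + dA) @ x - b)
bound = np.linalg.norm(r) + eta * np.linalg.norm(x)

assert np.isclose(attained, bound)
assert np.isclose(np.linalg.norm(dA, 2), eta)
```

Minimizing this worst-case objective over x is what distinguishes the method from ordinary or total least squares; the identity above is only the inner maximization.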


Journal ArticleDOI
TL;DR: In this article, it was shown that under multiplicative perturbations small eigenvalues (singular values) are determined by the data to much higher relative accuracy than the classical perturbation theory would indicate.
Abstract: The classical perturbation theory for Hermitian matrix eigenvalue and singular value problems provides bounds on the absolute differences between approximate eigenvalues (singular values) and the true eigenvalues (singular values) of a matrix. These bounds may be bad news for small eigenvalues (singular values), which thereby suffer worse relative uncertainty than large ones. However, there are situations where even small eigenvalues are determined by the data to much higher relative accuracy than the classical perturbation theory would indicate. In this paper, we study how eigenvalues of a Hermitian matrix A change when it is perturbed to $\tilde{A} = D^*AD$, where D is close to a unitary matrix, and how singular values of a (nonsquare) matrix B change when it is perturbed to $\tilde{B} = D_1^*BD_2$, where $D_1$ and $D_2$ are nearly unitary. It is proved that under these kinds of perturbations small eigenvalues (singular values) suffer relative changes no worse than large eigenvalues (singular values). Many well-known perturbation theorems, including the Hoffman--Wielandt and Weyl--Lidskii theorems, are extended.

139 citations
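A toy NumPy experiment (assumed diagonal example) showing the contrast the paper formalizes: a multiplicative perturbation $A \to D^*AD$ with D near unitary changes even a tiny eigenvalue by a small relative amount, while an additive perturbation of comparable norm destroys it completely.

```python
import numpy as np

A = np.diag([1.0, 1e-10])        # one tiny eigenvalue
eps = 1e-6
rng = np.random.default_rng(3)

# multiplicative perturbation: D = I + O(eps), nearly unitary
E = rng.standard_normal((2, 2))
D = np.eye(2) + eps * E / np.linalg.norm(E, 2)
mult = np.linalg.eigvalsh(D.T @ A @ D)
rel_mult = abs(mult[0] - 1e-10) / 1e-10   # O(eps) relative change

# additive perturbation of the same size eps
add = np.linalg.eigvalsh(A + eps * np.eye(2))
rel_add = abs(add[0] - 1e-10) / 1e-10     # enormous relative change

assert rel_mult < 1e-4
assert rel_add > 1e3
```

By Ostrowski's theorem the multiplicative relative change is bounded by roughly $\|D^*D - I\|$, independent of how small the eigenvalue is, which is the phenomenon the paper's theorems quantify.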


Journal ArticleDOI
TL;DR: It is shown how to use some of the important properties of generalized reflexive (antireflexive) matrices to decompose linear least-squares problems whose coefficient matrices are generalized Reflexive into two smaller and independent subproblems.
Abstract: The main purpose of this paper is to introduce and exploit special properties of two special classes of rectangular matrices A and B that satisfy the relations $A = PAQ$ and $B = -PBQ$, with $A, B \in \mathbb{C}^{n \times m}$, where P and Q are two generalized reflection matrices. The matrices A (B), a generalization of reflexive (antireflexive) matrices and centrosymmetric matrices, are referred to in this paper as generalized reflexive (antireflexive) matrices. After introducing these two classes of matrices and developing general theories associated with them, we then show how to use some of the important properties to decompose linear least-squares problems whose coefficient matrices are generalized reflexive into two smaller and independent subproblems. Numerical examples are presented to demonstrate their usefulness.

137 citations
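The decomposition property can be sketched with P and Q taken as exchange (flip) matrices, so the example is of centrosymmetric type; the general construction is analogous. If $A = PAQ$, then A maps each eigenspace of Q into the like-signed eigenspace of P, so in orthonormal eigenbases of P and Q the matrix becomes block diagonal and a least-squares problem splits in two.

```python
import numpy as np

n, m = 6, 4
P = np.fliplr(np.eye(n))         # reflections: symmetric involutions
Q = np.fliplr(np.eye(m))
rng = np.random.default_rng(4)
M = rng.standard_normal((n, m))
A = (M + P @ M @ Q) / 2          # satisfies A = P A Q by construction
assert np.allclose(A, P @ A @ Q)

def eigbasis(R):
    # orthonormal eigenvectors of a symmetric reflection, +1 block first
    w, V = np.linalg.eigh(R)
    return V[:, w > 0], V[:, w < 0]

Up, Um = eigbasis(P)
Vp, Vm = eigbasis(Q)
B = np.hstack([Up, Um]).T @ A @ np.hstack([Vp, Vm])

# off-diagonal blocks vanish: the problem decouples
assert np.allclose(B[: Up.shape[1], Vp.shape[1]:], 0)
assert np.allclose(B[Up.shape[1]:, : Vp.shape[1]], 0)
```

The two diagonal blocks are the "two smaller and independent subproblems" of the abstract.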


Journal ArticleDOI
TL;DR: A new modified Cholesky algorithm based on a symmetric indefinite factorization computed using a new pivoting strategy of Ashcraft, Grimes, and Lewis is proposed, showing that the algorithm is competitive with the existing algorithms of Gill, Murray, and Wright and Schnabel and Eskow.
Abstract: Given a symmetric and not necessarily positive definite matrix A, a modified Cholesky algorithm computes a Cholesky factorization P(A+E)PT = RT R, where P is a permutation matrix and E is a perturbation chosen to make A+E positive definite. The aims include producing a small-normed E and making A+E reasonably well conditioned. Modified Cholesky factorizations are widely used in optimization. We propose a new modified Cholesky algorithm based on a symmetric indefinite factorization computed using a new pivoting strategy of Ashcraft, Grimes, and Lewis. We analyze the effectiveness of the algorithm, both in theory and practice, showing that the algorithm is competitive with the existing algorithms of Gill, Murray, and Wright and Schnabel and Eskow. Attractive features of the new algorithm include easy-to-interpret inequalities that explain the extent to which it satisfies its design goals, and the fact that it can be implemented in terms of existing software.

110 citations
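For contrast with the algorithms compared above, here is a deliberately naive modified Cholesky, a plain eigenvalue shift rather than the paper's symmetric-indefinite-factorization approach. It satisfies the basic contract (A + E positive definite, E = 0 when A is already safely positive definite) but not the cost or quality goals of the real algorithms.

```python
import numpy as np

def naive_modified_cholesky(A, delta=1e-8):
    """Shift A by E = max(0, delta - lambda_min(A)) * I, then factor.

    Returns R upper triangular and E with A + E = R^T R.
    Illustrative only: an O(n^3) eigensolve, not a pivoted factorization.
    """
    lam_min = np.linalg.eigvalsh(A)[0]
    shift = max(0.0, delta - lam_min)
    E = shift * np.eye(A.shape[0])
    R = np.linalg.cholesky(A + E).T      # numpy returns the lower factor
    return R, E

A = np.array([[1.0, 1.0, 2.0],
              [1.0, 1.0, 3.0],
              [2.0, 3.0, 1.0]])          # symmetric, indefinite
R, E = naive_modified_cholesky(A)
assert np.allclose(R.T @ R, A + E)
assert np.linalg.eigvalsh(A + E)[0] > 0
```

The shift chosen here can be far from the smallest-norm E; producing a small, well-structured E cheaply is exactly what the Gill--Murray--Wright, Schnabel--Eskow, and new algorithms compete on.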


Journal ArticleDOI
TL;DR: A robust reordering scheme for sparse matrices based on the notion of multisection, a generalization of bisection, is shown to have consistently good performance in terms of fill reduction when compared with multiple minimum degree and generalized nested dissection.
Abstract: In this paper we provide a robust reordering scheme for sparse matrices. The scheme relies on the notion of multisection, a generalization of bisection. The reordering strategy is demonstrated to have consistently good performance in terms of fill reduction when compared with multiple minimum degree and generalized nested dissection. Experimental results show that by using multisection, we obtain an ordering which is consistently as good as or better than both for a wide spectrum of sparse problems.

86 citations


Journal ArticleDOI
TL;DR: A modified algorithm is presented that can perform a fast variation of Gaussian elimination with complete pivoting (GECP) and is both efficient and numerically stable, provided that the element growth in the computed factorization is not large.
Abstract: Recent research shows that structured matrices such as Toeplitz and Hankel matrices can be transformed into a different class of structured matrices called Cauchy-like matrices using the FFT or other trigonometric transforms. Gohberg, Kailath, and Olshevsky [Math. Comp., 64 (1995), pp. 1557--1576] demonstrate numerically that their fast variation of the straightforward Gaussian elimination with partial pivoting (GEPP) procedure on Cauchy-like matrices is numerically stable. Sweet and Brent [Adv. Signal Proc. Algorithms, 2363 (1995), pp. 266--280] show that the error growth in this variation could be much larger than would be encountered with straightforward GEPP in certain cases. In this paper, we present a modified algorithm that avoids such extra error growth and can perform a fast variation of Gaussian elimination with complete pivoting (GECP). Our analysis shows that it is both efficient and numerically stable, provided that the element growth in the computed factorization is not large. We also present a more efficient variation of this algorithm and discuss implementation techniques that further reduce execution time. Our numerical experiments show that this variation is highly efficient and numerically stable.

85 citations


Journal ArticleDOI
TL;DR: It is shown that an implementation of algorithm CGLS in which the residual $s_k = A^T(b - Ax_k)$ of the normal equations is recurred will not in general achieve accurate solutions, and the same conclusion holds for the method based on Lanczos bidiagonalization with starting vector $A^Tb$.
Abstract: The conjugate gradient method applied to the normal equations $A^TAx = A^Tb$ (CGLS) is often used for solving large sparse linear least squares problems. The mathematically equivalent algorithm LSQR, based on the Lanczos bidiagonalization process, is an often recommended alternative. In this paper, the achievable accuracy of different conjugate gradient and Lanczos methods in finite precision is studied. It is shown that an implementation of algorithm CGLS in which the residual $s_k = A^T(b - Ax_k)$ of the normal equations is recurred will not in general achieve accurate solutions. The same conclusion holds for the method based on Lanczos bidiagonalization with starting vector $A^Tb$. For the preferred implementation of CGLS we bound the error $\|r - r_k\|$ of the computed residual $r_k$. Numerical tests are given that confirm a conjecture of backward stability. The achievable accuracy of LSQR is shown to be similar. The analysis essentially also covers the preconditioned case.

80 citations
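A minimal CGLS in the "preferred" form the analysis favors, recomputing $s_k = A^T(b - Ax_k)$ from the recurred residual $r_k$ rather than recurring $s_k$ itself (a sketch with assumed random data, no preconditioning):

```python
import numpy as np

def cgls(A, b, iters=100, tol=1e-12):
    """CGLS for min ||Ax - b||; s_k is recomputed, not recurred."""
    x = np.zeros(A.shape[1])
    r = b.copy()                 # r_k = b - A x_k, recurred
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    s0_norm = np.linalg.norm(s)
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r              # recomputed from r_k each step
        gamma_new = s @ s
        if np.sqrt(gamma_new) <= tol * s0_norm:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 8))
b = rng.standard_normal(30)
x = cgls(A, b)
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x_ref, atol=1e-8)
```

Recurring s_k instead of recomputing it would save one matrix-vector product per step, and that is precisely the variant the paper shows to be inaccurate in finite precision.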


Journal ArticleDOI
Ji-guang Sun
TL;DR: In this article, new perturbation results for the two different algebraic Riccati equations (continuous time and discrete time) are derived in a uniform manner, illustrated by numerical examples.
Abstract: New perturbation results for the two different algebraic Riccati equations (continuous time and discrete time) are derived in a uniform manner. The new results are illustrated by numerical examples.

Journal ArticleDOI
TL;DR: In this paper, the problem of the regularization of singular systems by derivative and proportional output feedback is studied, and necessary and sufficient conditions are given to guarantee the existence of a derivative and output feedback such that the closed-loop system is regular and of index at most 1.
Abstract: The problem of the regularization of singular systems by derivative and proportional output feedback is studied. Necessary and sufficient conditions are given to guarantee the existence of a derivative and proportional output feedback such that the closed-loop system is regular and of index at most 1. It is also shown that the closed-loop system becomes strongly controllable and observable by using this feedback.

Journal ArticleDOI
TL;DR: Pade approximations of the hyperbolic tangent lead to a Schur--Frechet algorithm for the logarithm that avoids problems associated with the standard "inverse scaling and squaring" method.
Abstract: The Schur--Frechet method of evaluating matrix functions consists of putting the matrix in upper triangular form, computing the scalar function values along the main diagonal, and then using the Frechet derivative of the function to evaluate the upper diagonals. This approach requires a reliable method of computing the Frechet derivative. For the logarithm this can be done by using repeated square roots and a hyperbolic tangent form of the logarithmic Frechet derivative. Pade approximations of the hyperbolic tangent lead to a Schur--Frechet algorithm for the logarithm that avoids problems associated with the standard "inverse scaling and squaring" method. Inverting the order of evaluation in the logarithmic Frechet derivative gives a method of evaluating the derivative of the exponential. The resulting Schur--Frechet algorithm for the exponential gives superior results compared to standard methods on a set of test problems from the literature.

Journal ArticleDOI
Abstract: The weighted generalized inverses have several important applications in the study of singular matrices, regularization methods for ill-posed problems, optimization problems, and statistics problems. In this paper we establish some sufficient and necessary conditions for the inverse order rule of weighted generalized inverses.

Journal ArticleDOI
TL;DR: A new perturbation theory for the matrix sign function, the conditioning of its computation, the numerical stability of the divide-and-conquer algorithm, and iterative refinement schemes are presented.
Abstract: The matrix sign function has several applications in system theory and matrix computations. However, the numerical behavior of the matrix sign function, and its associated divide-and-conquer algorithm for computing invariant subspaces, are still not completely understood. In this paper, we present a new perturbation theory for the matrix sign function, the conditioning of its computation, the numerical stability of the divide-and-conquer algorithm, and iterative refinement schemes. Numerical examples are also presented. An extension of the matrix-sign-function-based algorithm to compute left and right deflating subspaces for a regular pair of matrices is also described.

Journal ArticleDOI
TL;DR: It is found that minimum local fill produces significantly better orderings than minimum degree, albeit at a greatly increased runtime, and two simple modifications are described that further improve ordering quality.
Abstract: The minimum degree and minimum local fill algorithms are two bottom-up heuristics for reordering a sparse matrix prior to factorization. Minimum degree chooses a node of least degree to eliminate next; minimum local fill chooses a node whose elimination creates the least fill. Contrary to popular belief, we find that minimum local fill produces significantly better orderings than minimum degree, albeit at a greatly increased runtime. We describe two simple modifications to this strategy that further improve ordering quality. We also describe a simple modification to minimum degree, which we term approximate minimum mean local fill, that reduces factorization work by roughly 25% with only a small increase in runtime.
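The heuristics can be illustrated by symbolic elimination on a toy star graph (an assumed example, not the paper's test suite): eliminating the hub first fills in a clique on all the leaves, while the minimum degree rule eliminates the leaves first and creates no fill at all.

```python
def fill_count(adj, order):
    """Count fill edges created by eliminating vertices in the given order."""
    adj = {v: set(ns) for v, ns in adj.items()}
    fill = 0
    for v in order:
        nbrs = {u for u in adj.pop(v) if u in adj}
        for u in nbrs:
            adj[u].discard(v)
        for u in nbrs:              # eliminating v makes its neighbors a clique
            for w in nbrs:
                if u < w and w not in adj[u]:
                    adj[u].add(w)
                    adj[w].add(u)
                    fill += 1
    return fill

def min_degree_order(adj):
    """Greedy minimum degree ordering (no tie-breaking refinements)."""
    adj = {v: set(ns) for v, ns in adj.items()}
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        nbrs = {u for u in adj.pop(v) if u in adj}
        for u in nbrs:
            adj[u].discard(v)
            adj[u] |= nbrs - {u}
        order.append(v)
    return order

# star graph: hub 0 joined to leaves 1..5
star = {0: set(range(1, 6))}
star.update({i: {0} for i in range(1, 6)})

print(fill_count(star, [0, 1, 2, 3, 4, 5]))       # 10 (clique on 5 leaves)
print(fill_count(star, min_degree_order(star)))   # 0
```

Minimum local fill would score each candidate by the fill its elimination creates instead of by degree; on this example the two heuristics agree, and the paper's point is that on realistic matrices they do not.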

Journal ArticleDOI
TL;DR: In this article, it was shown that a lower triangular matrix of dimension n whose diagonal entries are fixed at 1, with the subdiagonal entries taken as independent N(0,1) variables, is also exponentially ill conditioned, with the 2-norm condition number $\kappa_n$ of such a matrix satisfying $\sqrt[n]{\kappa_n} \rightarrow 1.305683410\ldots$ almost surely.
Abstract: Let $L_n$ be a lower triangular matrix of dimension n each of whose nonzero entries is an independent N(0,1) variable, i.e., a random normal variable of mean 0 and variance 1. It is shown that $\kappa_n$, the 2-norm condition number of $L_n$, satisfies $\sqrt[n]{\kappa_n} \rightarrow 2$ almost surely as $n \rightarrow \infty$. This exponential growth of $\kappa_n$ with n is in striking contrast to the linear growth with n of the condition numbers of random dense matrices that is already known. This phenomenon is not due to small entries on the diagonal (i.e., small eigenvalues) of $L_n$. Indeed, it is shown that a lower triangular matrix of dimension n whose diagonal entries are fixed at 1, with the subdiagonal entries taken as independent N(0,1) variables, is also exponentially ill conditioned, with the 2-norm condition number $\kappa_n$ of such a matrix satisfying $\sqrt[n]{\kappa_n} \rightarrow 1.305683410\ldots$ almost surely as $n \rightarrow \infty$. A similar pair of results about complex random triangular matrices is established. The results for real triangular matrices are generalized to triangular matrices with entries from any symmetric, strictly stable distribution.

Journal ArticleDOI
TL;DR: In this article, statistical condition estimation is applied to the linear least squares problem and the method obtains componentwise condition estimates via the Frechet derivative, which is as computationally efficient as normwise condition estimation methods, and is easily adapted to respect structural constraints on perturbations of the input data.
Abstract: Statistical condition estimation is applied to the linear least squares problem. The method obtains componentwise condition estimates via the Frechet derivative. A rigorous statistical theory exists that determines the probability of accuracy in the estimates. The method is as computationally efficient as normwise condition estimation methods, and it is easily adapted to respect structural constraints on perturbations of the input data. Several examples illustrate the method.

Journal ArticleDOI
TL;DR: In this article, an overview of frequency domain total least squares (TLS) estimators for rational transfer function models of linear time-invariant multivariable systems is given.
Abstract: This paper gives an overview of frequency domain total least squares (TLS) estimators for rational transfer function models of linear time-invariant multivariable systems. The statistical performance of the different approaches is analyzed through their equivalent cost functions. Both generalized and bootstrapped total least squares (GTLS and BTLS) methods require the exact knowledge of the noise covariance matrix. The paper also studies the asymptotic (the number of data points going to infinity) behavior of the GTLS and BTLS estimators when the exact noise covariance matrix is replaced by the sample noise covariance matrix obtained from a (small) number of independent data sets. Even if only two independent repeated observations are available, it is shown that the estimates are still strongly consistent without any increase in the asymptotic uncertainty.

Journal ArticleDOI
TL;DR: A stable and fast solver for nonsymmetric linear systems of equations with shift structured coefficient matrices (e.g., Toeplitz, quasi-Toeplitzer, and product of two Toe Plitz matrices) is derived.
Abstract: We derive a stable and fast solver for nonsymmetric linear systems of equations with shift structured coefficient matrices (e.g., Toeplitz, quasi-Toeplitz, and product of two Toeplitz matrices). The algorithm is based on a modified fast QR factorization of the coefficient matrix and relies on a stabilized version of the generalized Schur algorithm for matrices with displacement structure. All computations can be done in O(n2) operations, where n is the matrix dimension, and the algorithm is backward stable.

Journal ArticleDOI
TL;DR: In this article, a Schur complement technique is used to prove stronger and more general $O(\|R\|^2)$ bounds relating the eigenvalues of a Hermitian matrix to those of its block diagonal part, with extensions to singular values, eigenvalues of non-Hermitian matrices, and generalized eigenvalues.
Abstract: Let \[ A = \left[ \begin{array}{cc} M & R \\ R^{\ast} & N \end{array} \right] \quad \text{and} \quad \tilde{A} = \left[ \begin{array}{cc} M & 0 \\ 0 & N \end{array} \right] \] be Hermitian matrices. Stronger and more general $O(\|R\|^2)$ bounds relating the eigenvalues of $A$ and $\tilde{A}$ are proved using a Schur complement technique. These results extend to singular values, to eigenvalues of non-Hermitian matrices, and to generalized eigenvalues.
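A quick numerical check of the quadratic behavior (an assumed random example with well-separated diagonal blocks): zeroing the off-diagonal block R perturbs the eigenvalues by roughly $\|R\|^2/\mathrm{gap}$, far less than the first-order bound $\|R\|$ from Weyl's theorem.

```python
import numpy as np

rng = np.random.default_rng(7)
M = np.diag([1.0, 2.0])          # spectrum well separated from N's
N = np.diag([10.0, 11.0])
R = 1e-3 * rng.standard_normal((2, 2))

A = np.block([[M, R], [R.T, N]])
A_tilde = np.block([[M, np.zeros((2, 2))], [np.zeros((2, 2)), N]])

diff = np.max(np.abs(np.linalg.eigvalsh(A) - np.linalg.eigvalsh(A_tilde)))
normR = np.linalg.norm(R, 2)

assert diff <= normR ** 2        # quadratic effect (gap here is about 8)
assert diff <= normR             # Weyl's first-order bound, much weaker
```

When the spectra of M and N overlap, the quadratic bound degrades, which is where the paper's "stronger and more general" refinements matter.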

Journal ArticleDOI
TL;DR: The truncated RQ (TRQ) iteration is a new Krylov subspace iteration for large-scale eigenvalue problems that accelerates convergence through an inexact (iterative) solution to a shift-invert equation.
Abstract: We introduce a new Krylov subspace iteration for large-scale eigenvalue problems that is able to accelerate the convergence through an inexact (iterative) solution to a shift-invert equation. The method also takes full advantage of exact solutions when they can be obtained with a sparse direct method. We call this new iteration the truncated RQ (TRQ) iteration. It is based upon a recursion that develops in the leading k columns of the implicitly shifted RQ iteration for dense matrices. Inverse-iteration-like convergence to a partial Schur decomposition occurs in the leading k columns of the updated basis vectors and Hessenberg matrices. The TRQ iteration is competitive with the rational Krylov method of Ruhe when the shift-invert equations can be solved directly, and with the Jacobi--Davidson method of Sleijpen and Van der Vorst when these equations are solved inexactly with a preconditioned iterative method. The TRQ iteration is related to both of these but is derived directly from the RQ iteration and thus inherits the convergence properties of that method. Existing RQ deflation strategies may be employed directly in the TRQ iteration.

Journal ArticleDOI
TL;DR: In this paper, a differential equation generating the steepest descent flow that reduces the distance between isospectral matrices and nonnegative matrices, represented in terms of some general coordinates, is described.
Abstract: The inverse stochastic spectrum problem involves the construction of a stochastic matrix with a prescribed spectrum. The problem could be solved by first constructing a nonnegative matrix with the same prescribed spectrum. A differential equation designed to generate the steepest descent flow that reduces the distance between isospectral matrices and nonnegative matrices, represented in terms of some general coordinates, is described. The flow is further characterized by an analytic singular value decomposition to maintain numerical stability and to monitor the proximity to singularity. This flow approach can be used to design Markov chains with specified structure. Applications are demonstrated by numerical examples.

Journal ArticleDOI
TL;DR: In this article, the stability of hyperbolic Householder transformations has been analyzed and it has been shown that two distinct implementations of the individual transformations are relationally stable and that pivoting is required for the entire triangularization algorithm to be stable.
Abstract: This paper treats the problem of triangularizing a matrix by hyperbolic Householder transformations. The stability of this method, which finds application in block updating and fast algorithms for Toeplitz-like matrices, has been analyzed only in special cases. Here we give a general analysis which shows that two distinct implementations of the individual transformations are relationally stable. The analysis also shows that pivoting is required for the entire triangularization algorithm to be stable.

Journal ArticleDOI
TL;DR: In this paper, a numerical method for computing the square root of a symmetric positive definite matrix is developed based on the Pade approximation of $\sqrt{1+x}$ in partial fraction form.
Abstract: A numerical method for computing the square root of a symmetric positive definite matrix is developed in this paper. It is based on the Pade approximation of $\sqrt{1+x}$ in partial fraction form. A precise analysis allows us to determine the minimum number of terms required in the Pade approximation for a given error tolerance. Theoretical studies and numerical experiments indicate that the method is more efficient than the standard method based on the spectral decomposition, unless the condition number is very large.
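To make the problem concrete, here is a different and simpler scheme than the paper's Pade method: the Denman--Beavers iteration, which also converges (quadratically) to the principal square root for symmetric positive definite input.

```python
import numpy as np

# Denman-Beavers iteration:
#   Y_{k+1} = (Y_k + Z_k^{-1}) / 2,   Z_{k+1} = (Z_k + Y_k^{-1}) / 2,
# with Y_0 = A, Z_0 = I; then Y_k -> A^{1/2} and Z_k -> A^{-1/2}.
rng = np.random.default_rng(8)
G = rng.standard_normal((5, 5))
A = G @ G.T + 5 * np.eye(5)      # symmetric positive definite

Y, Z = A.copy(), np.eye(5)
for _ in range(20):
    # tuple assignment evaluates both right-hand sides before updating
    Y, Z = (Y + np.linalg.inv(Z)) / 2, (Z + np.linalg.inv(Y)) / 2

assert np.allclose(Y @ Y, A)
```

Unlike the spectral-decomposition approach mentioned in the abstract, this needs only inverses (or solves), though each step costs two of them.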

Journal ArticleDOI
TL;DR: In this paper, the primitivity of a positive matrix pair (A,B) is introduced as a strict positivity constraint on the asymptotic behavior of the associated two-dimensional (2D) state model.
Abstract: In this paper the primitivity of a positive matrix pair (A,B) is introduced as a strict positivity constraint on the asymptotic behavior of the associated two-dimensional (2D) state model. The state evolution is first considered under the assumption of periodic initial conditions. In this case the system evolves according to a one-dimensional (1D) state updating equation, described by a block circulant matrix. Strict positivity of the asymptotic dynamics is equivalent to the primitivity of the circulant matrix, a property that can be restated as a set of conditions on the spectra of $A + e^{i \omega} B$, for suitable real values of $\omega$. The theory developed in this context provides a foundation whose analytical ideas may be generalized to nonperiodic initial conditions. To this purpose the spectral radius and the maximal modulus eigenvalues of the matrices $e^{i \theta} A + e^{i \omega} B$, $\theta, \omega \in \mathbb{R}$, are related to the characteristic polynomial of the pair (A,B) as well as to the structure of the graphs associated with A and B and to the factorization properties of suitable integer matrices. A general description of primitive positive matrix pairs is finally derived, including both spectral and combinatorial conditions on the pair.

Journal ArticleDOI
TL;DR: In this paper, the conditioning of the single-input pole placement (SIPP) problem is estimated and improved when the poles are allowed to vary in specific regions of the complex plane.
Abstract: We discuss the single-input pole placement problem (SIPP) and analyze how the conditioning of the problem can be estimated and improved if the poles are allowed to vary in specific regions in the complex plane. Under certain assumptions we give formulas as well as bounds for the norm of the feedback gain and the condition number of the closed loop matrix. Via several numerical examples we demonstrate how these results can be used to estimate the condition number of a given SIPP problem and also demonstrate how to select the poles to improve the conditioning.

Journal ArticleDOI
TL;DR: The use of the Dulmage--Mendelsohn decomposition and network flow on bipartite graphs to improve a graph bisection partition is considered and the utility of these improvement techniques is demonstrated on a set of sparse test matrices.
Abstract: In this paper, we consider the use of the Dulmage--Mendelsohn decomposition and network flow on bipartite graphs to improve a graph bisection partition. Given a graph partition [B, W, S] with a vertex separator S and two disconnected components B and W, different strategies are considered based on the Dulmage--Mendelsohn decomposition to reduce the separator size |S| and/or the imbalance between B and W. For the case when the vertices are weighted, we relate this to the bipartite network flow problem. A further enhancement to improve a partition is to generalize the bipartite network to a general network and then solve a max-flow problem. We demonstrate the utility of these improvement techniques on a set of sparse test matrices, where we find top-level separators, nested dissection, and multisection orderings.

Journal ArticleDOI
TL;DR: This work presents one more algorithm to compute the condition number (for inversion) of an n X n tridiagonal matrix J in O(n) time and is as efficient as the earlier algorithms.
Abstract: We present one more algorithm to compute the condition number (for inversion) of an n X n tridiagonal matrix J in O(n) time. Previous O(n) algorithms for this task given by Higham [SIAM J. Sci. Statist. Comput., 7 (1986), pp. 150--165] are based on the tempting compact representation of the upper (lower) triangle of J-1 as the upper (lower) triangle of a rank-one matrix. However they suffer from severe overflow and underflow problems, especially on diagonally dominant matrices. Our new algorithm avoids these problems and is as efficient as the earlier algorithms.

Journal ArticleDOI
TL;DR: In this article, the authors classified the ranks and inertias of hermitian completions for the partially specified 3 x 3 block band hermitian matrix (also known as a "bordered matrix"), using a block generalization of the Dym--Gohberg algorithm.
Abstract: This paper classifies the ranks and inertias of hermitian completion for the partially specified 3 x 3 block band hermitian matrix (also known as a "bordered matrix") \[ P = \begin{pmatrix} A & B & ? \\ B^* & C & D \\ ? & D^* & E \end{pmatrix}. \] The full set of completion inertias is described in terms of seven linear inequalities involving inertias and ranks of specified submatrices. The minimal completion rank for P is computed. We study the completion inertias of partially specified hermitian block band matrices, using a block generalization of the Dym--Gohberg algorithm. At each inductive step, we use our classification of the possible inertias for hermitian completions of bordered matrices. We show that when all the maximal specified submatrices are invertible, any inertia consistent with Poincaré's inequalities is obtainable. These results generalize the nonblock band results of Dancis [SIAM J. Matrix Anal. Appl., 14 (1993), pp. 813--829]. All our results remain valid for real symmetric completions.