
Showing papers in "SIAM Journal on Matrix Analysis and Applications in 2018"


Journal ArticleDOI
TL;DR: The CANDECOMP/PARAFAC (CP) decomposition is a leading method for the analysis of multiway data; its standard alternating least squares algorithm (CP-ALS) involves a series of linear least-squares subproblems, one per factor matrix.
Abstract: The CANDECOMP/PARAFAC (CP) decomposition is a leading method for the analysis of multiway data. The standard alternating least squares algorithm for the CP decomposition (CP-ALS) involves a series ...

158 citations
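
For readers who want to experiment with CP-ALS, the sketch below implements the basic alternating least-squares loop for a third-order tensor in NumPy. The helper names (unfold, khatri_rao, cp_als), the random initialization, and the absence of column normalization or stopping tests are simplifications for illustration, not details taken from the paper.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3rd-order tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Khatri-Rao (Kronecker) product."""
    r = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, r)

def cp_als(T, rank, n_iter=100):
    """Plain CP-ALS for a 3rd-order tensor; returns the factor matrices."""
    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((n, rank)) for n in T.shape)
    for _ in range(n_iter):
        # Update each factor by solving a linear least-squares problem
        # with the other two factors held fixed.
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```

In practice one also normalizes the factor columns and monitors the relative fit between iterations; both are omitted here for brevity.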


Journal ArticleDOI
TL;DR: This work investigates the optimal model reduction problem for large-scale quadratic-bilinear (QB) control systems and discusses the variational analysis and the Volterra series analysis of such systems.
Abstract: We investigate the optimal model reduction problem for large-scale quadratic-bilinear (QB) control systems. Our contributions are threefold. First, we discuss the variational analysis and the Volte...

82 citations


Journal ArticleDOI
TL;DR: This work introduces a procedure for adapting an existing subspace, based on information from the least-squares problem that underlies the approximation problem of interest, such that the associated least-squares residual vanishes exactly.
Abstract: In many scientific applications, including model reduction and image processing, subspaces are used as ansatz spaces for the low-dimensional approximation and reconstruction of the state vectors of interest. We introduce a procedure for adapting an existing subspace based on information from the least-squares problem that underlies the approximation problem of interest such that the associated least-squares residual vanishes exactly. The method builds on a Riemannian optimization procedure on the Grassmann manifold of low-dimensional subspaces, namely the Grassmannian Rank-One Update Subspace Estimation (GROUSE). We establish for GROUSE a closed-form expression for the residual function along the geodesic descent direction. Specific applications of subspace adaptation are discussed in the context of image processing and model reduction of nonlinear partial differential equation systems.

61 citations
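
The core GROUSE step referenced above is a rank-one geodesic update of an orthonormal basis from a partially observed vector. A minimal sketch is given below, assuming the standard GROUSE update formula with a fixed step size eta; the paper's adaptation instead chooses the step along the geodesic so that the least-squares residual vanishes exactly.

```python
import numpy as np

def grouse_step(U, v, omega, eta=0.1):
    """One GROUSE rank-one update of an orthonormal basis U (n x k)
    from a vector v observed only on the index set omega."""
    U_omega = U[omega, :]
    # Least-squares weights of the partial observation in the current subspace.
    w, *_ = np.linalg.lstsq(U_omega, v[omega], rcond=None)
    p = U @ w                          # prediction in the full space
    r = np.zeros(U.shape[0])
    r[omega] = v[omega] - U_omega @ w  # residual, supported on omega
    rn, pn, wn = np.linalg.norm(r), np.linalg.norm(p), np.linalg.norm(w)
    if rn < 1e-14 or wn < 1e-14:
        return U                       # residual already (numerically) zero
    sigma = rn * pn
    # Geodesic step on the Grassmann manifold (a rank-one update of U).
    return U + np.outer((np.cos(sigma * eta) - 1.0) * p / pn
                        + np.sin(sigma * eta) * r / rn, w / wn)
```

Streaming over many partially observed columns of a low-rank matrix and calling grouse_step for each drives U toward the underlying column space.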


Journal ArticleDOI
TL;DR: Optimization on Riemannian manifolds widely arises in eigenvalue computation, density functional theory, Bose--Einstein condensates, low rank nearest correlation, image registration, signal process...
Abstract: Optimization on Riemannian manifolds widely arises in eigenvalue computation, density functional theory, Bose--Einstein condensates, low rank nearest correlation, image registration, signal process...

52 citations


Journal ArticleDOI
TL;DR: In this paper, a wide class of matrix pencils connected with dissipative Hamiltonian descriptor systems is investigated; among the properties shown is that all eigenvalues lie in the closed left half-plane.
Abstract: A wide class of matrix pencils connected with dissipative Hamiltonian descriptor systems is investigated. In particular, the following properties are shown: all eigenvalues are in the closed left half-plane...

50 citations
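
A quick way to see the eigenvalue property is to check it numerically on one common special case, E x' = (J - R) x with E symmetric positive semidefinite, J skew-symmetric, and R symmetric positive semidefinite; the pencil lambda*E - (J - R) then has all finite eigenvalues in the closed left half-plane. The construction below is illustrative and far less general than the class of pencils treated in the paper.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(1)
n = 8

# Build structured data: E PSD (here PD), J skew-symmetric, R PSD.
M = rng.standard_normal((n, n)); E = M @ M.T        # E = E^T >= 0
S = rng.standard_normal((n, n)); J = S - S.T        # J = -J^T
N = rng.standard_normal((n, n)); R = N @ N.T        # R = R^T >= 0

# Finite eigenvalues of the pencil lambda*E - (J - R).
lam = eig(J - R, E, right=False)
print(np.max(lam[np.isfinite(lam)].real))           # <= 0 up to rounding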


Journal ArticleDOI
TL;DR: This work first provides existence and uniqueness conditions for the solvability of an algebraic eigenvalue problem with eigenvector nonlinearity and then presents a local and global convergence analysis for this problem.
Abstract: We first provide existence and uniqueness conditions for the solvability of an algebraic eigenvalue problem with eigenvector nonlinearity. We then present a local and global convergence analysis for...

50 citations
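
Eigenvalue problems with eigenvector nonlinearity take the form A(v)v = lambda*v, where the matrix depends on the sought eigenvector. The sketch below uses a generic self-consistent field (SCF) iteration with a made-up nonlinearity; it is background for the problem class, not the analysis or algorithm of the paper.

```python
import numpy as np

def scf(A_of_v, n, tol=1e-10, max_iter=200, which=0):
    """Self-consistent field iteration for A(v) v = lambda v.
    'which' selects the eigenpair of the frozen matrix to track."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(n); v /= np.linalg.norm(v)
    for _ in range(max_iter):
        w, V = np.linalg.eigh(A_of_v(v))          # freeze the nonlinearity
        v_new = V[:, which]
        if np.linalg.norm(v_new - np.sign(v_new @ v) * v) < tol:
            return w[which], v_new
        v = v_new
    return w[which], v

# Illustrative nonlinearity: a tridiagonal matrix plus a diagonal term
# depending on the eigenvector (loosely Gross-Pitaevskii-like).
n = 20
B = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
lam, v = scf(lambda v: B + 0.5 * np.diag(v**2), n)
```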


Journal ArticleDOI
TL;DR: In this paper, a model order reduced dynamical system that evolves a modal decomposition to approximate the discretized solution of a stochastic PDE is related to a vector field tangent to a manifold.
Abstract: Any model order reduced dynamical system that evolves a modal decomposition to approximate the discretized solution of a stochastic PDE can be related to a vector field tangent to the manifold of f...

49 citations


Journal ArticleDOI
TL;DR: The iterative solution of a class of linear systems with double saddle point structure is considered, and several block preconditioners for Krylov subspace methods are described and analyzed.
Abstract: We consider the iterative solution of a class of linear systems with double saddle point structure. Several block preconditioners for Krylov subspace methods are described and analyzed. We derive s...

48 citations
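
A double saddle point system has a 3-by-3 block structure such as [[A, B^T, 0], [B, 0, C^T], [0, C, 0]]. The sketch below builds one natural block-diagonal preconditioner from successive Schur complements and uses it with GMRES via SciPy; whether this particular choice coincides with the preconditioners analyzed in the paper is not claimed, and the dense test data are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(2)
n, m, p = 30, 15, 8

# Double saddle point matrix K = [[A, B^T, 0], [B, 0, C^T], [0, C, 0]].
M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)   # SPD block
B = rng.standard_normal((m, n))
C = rng.standard_normal((p, m))
K = np.block([[A, B.T, np.zeros((n, p))],
              [B, np.zeros((m, m)), C.T],
              [np.zeros((p, n)), C, np.zeros((p, p))]])

# Block-diagonal preconditioner from successive Schur complements.
S1 = B @ np.linalg.solve(A, B.T)            # B A^{-1} B^T
S2 = C @ np.linalg.solve(S1, C.T)           # C S1^{-1} C^T
P = np.block([[A, np.zeros((n, m + p))],
              [np.zeros((m, n)), S1, np.zeros((m, p))],
              [np.zeros((p, n + m)), S2]])
apply_Pinv = LinearOperator(K.shape, matvec=lambda x: np.linalg.solve(P, x))

b = rng.standard_normal(n + m + p)
x, info = gmres(K, b, M=apply_Pinv, atol=1e-10)
print(info, np.linalg.norm(K @ x - b))      # info == 0 means convergence
```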


Journal ArticleDOI
TL;DR: In this article, new contributions are offered to the theory and numerical implementation of the discrete empirical interpolation method (DEIM), including a substantial tightening of the error bound for the DEIM oblique projection.
Abstract: New contributions are offered to the theory and numerical implementation of the discrete empirical interpolation method (DEIM). A substantial tightening of the error bound for the DEIM oblique proj...

38 citations
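
The DEIM procedure greedily selects interpolation rows from a basis of modes and then approximates a vector by an oblique projection through those rows. A standard NumPy sketch of the index selection follows; the function name and the absence of pivoted-QR variants are simplifications.

```python
import numpy as np

def deim_indices(U):
    """DEIM point selection from an n x m basis U (columns = modes)."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # Interpolate the new column at the already selected rows ...
        c = np.linalg.solve(U[idx, :j], U[idx, j])
        # ... and pick the row where the interpolation error is largest.
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

# Given the basis U and indices idx, a vector f is approximated by the
# oblique projection  f_deim = U @ np.linalg.solve(U[idx, :], f[idx]).
```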


Journal ArticleDOI
TL;DR: In this paper, the authors investigate randomized algorithms for low-rank matrix approximation, with emphasis on the fixed-precision problem and on computational efficiency for handling large matrices.
Abstract: Randomized algorithms for low-rank matrix approximation are investigated, with the emphasis on the fixed-precision problem and computational efficiency for handling large matrices. The algorithms are based on the so-called QB factorization, where Q is an orthonormal matrix. First, a mechanism for calculating the approximation error in the Frobenius norm is proposed, which enables efficient adaptive rank determination for a large and/or sparse matrix. It can be combined with any QB-form factorization algorithm in which B's rows are incrementally generated. Based on the blocked randQB algorithm by Martinsson and Voronin, this results in an algorithm called randQB_EI. Then, we further revise the algorithm to obtain a pass-efficient algorithm, randQB_FP, which is mathematically equivalent to the existing randQB algorithms and also suitable for the fixed-precision problem. Especially, randQB_FP can serve as a single-pass algorithm for calculating leading singular values, under a certain condition. With large a...

37 citations
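
The error indicator at the heart of the fixed-precision scheme exploits the identity ||A - QQ^T A||_F^2 = ||A||_F^2 - ||Q^T A||_F^2 for an orthonormal Q, so the residual norm can be updated cheaply as rows of B are generated. The sketch below implements a plain blocked QB loop with that indicator; it follows the idea of randQB_EI but is not the authors' code, and the block size and reorthogonalization are ad hoc choices.

```python
import numpy as np

def rand_qb_adaptive(A, tol, block=10, max_rank=None, seed=0):
    """Blocked randomized QB factorization (A ~= Q @ B) with an
    incremental Frobenius-norm error indicator."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    max_rank = max_rank or min(m, n)
    Q = np.zeros((m, 0)); B = np.zeros((0, n))
    err2 = np.linalg.norm(A, 'fro')**2        # ||A - QB||_F^2 when B = Q^T A
    while Q.shape[1] < max_rank and np.sqrt(max(err2, 0.0)) > tol:
        Omega = rng.standard_normal((n, block))
        Y = A @ Omega - Q @ (B @ Omega)       # remove what is already captured
        Qi, _ = np.linalg.qr(Y)
        Qi -= Q @ (Q.T @ Qi)                  # reorthogonalize for stability
        Qi, _ = np.linalg.qr(Qi)
        Bi = Qi.T @ A
        Q = np.hstack([Q, Qi]); B = np.vstack([B, Bi])
        err2 -= np.linalg.norm(Bi, 'fro')**2  # cheap error update
    return Q, B
```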


Journal ArticleDOI
TL;DR: Several Uzawa-type iterations are discussed as smoothers in the context of multigrid schemes for saddle point problems, and a new symmetric method belonging to this class is introduced within a unified framework for analyzing the smoothing properties, an important part of multigrid convergence theory.
Abstract: We discuss several Uzawa-type iterations as smoothers in the context of multigrid schemes for saddle point problems. A unified framework to analyze the smoothing properties is presented. The introd...
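
As background, the classical Uzawa iteration for a saddle point system [[A, B^T], [B, 0]][u; p] = [f; g] alternates a primal solve with a relaxed dual update. The sketch below shows only this baseline; the symmetric smoother introduced in the paper is not reproduced, and the relaxation parameter omega is left to the user.

```python
import numpy as np

def uzawa(A, B, f, g, omega, n_iter=50):
    """Classical Uzawa iteration for [[A, B^T], [B, 0]] [u; p] = [f; g]."""
    u = np.zeros(A.shape[0]); p = np.zeros(B.shape[0])
    for _ in range(n_iter):
        u = np.linalg.solve(A, f - B.T @ p)   # primal solve with frozen p
        p = p + omega * (B @ u - g)           # relaxed dual update
    return u, p
```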

Journal ArticleDOI
TL;DR: The results in this paper establish a rigorous foundation for the numerical computation of the complete structure of zeros and poles, both finite and at infinity, of any rational matrix by applying any well known backward stable algorithm for generalized eigenvalue problems to any of the strong linearizations explicitly constructed in this work.
Abstract: This paper defines for the first time strong linearizations of arbitrary rational matrices, studies in depth properties and different characterizations of such linear matrix pencils, and develops infinitely many examples of strong linearizations that can be explicitly and easily constructed from a minimal state-space realization of the strictly proper part of the considered rational matrix and the coefficients of the polynomial part. As a consequence, the results in this paper establish a rigorous foundation for the numerical computation of the complete structure of zeros and poles, both finite and at infinity, of any rational matrix by applying any well known backward stable algorithm for generalized eigenvalue problems to any of the strong linearizations explicitly constructed in this work. Since the results of this paper require the use of several concepts that are not standard in matrix computations, a considerable effort has been made to make the paper as self-contained as possible.

Journal ArticleDOI
TL;DR: A new network centrality measure based on the concept of nonbacktracking walks, that is, walks not containing subsequences of the form uvu where u and v are any distinct connected vertices of the underlying graph, is introduced and studied.
Abstract: We introduce and study a new network centrality measure based on the concept of nonbacktracking walks, that is, walks not containing subsequences of the form uvu where u and v are any distinct connected vertices of the underlying graph. We argue that this feature can yield more meaningful rankings than traditional walk-based centrality measures. We show that the resulting Katz-style centrality measure may be computed via the so-called deformed graph Laplacian---a quadratic matrix polynomial that can be associated with any graph. By proving a range of new results about this matrix polynomial, we gain insights into the behavior of the algorithm with respect to its Katz-like parameter. The results also inform implementation issues. In particular we show that, in an appropriate limit, the new measure coincides with the nonbacktracking version of eigenvector centrality introduced by Martin, Zhang, and Newman in 2014. Rigorous analysis on star and star-like networks illustrates the benefits of the new approach,...
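
A sketch of the resulting centrality computation is given below, assuming the deformed graph Laplacian takes the form M(t) = I - t*A + t^2*(D - I) and that the centrality vector solves M(t) x = (1 - t^2) * ones, as in related work on nonbacktracking walk centrality; treat the exact normalization as an assumption rather than a statement of the paper's formula.

```python
import numpy as np

def nbt_katz(A, t):
    """Nonbacktracking Katz-style centrality via the deformed graph
    Laplacian M(t) = I - t*A + t^2*(D - I); A is a symmetric 0/1
    adjacency matrix and t must lie inside the radius of convergence."""
    n = A.shape[0]
    D = np.diag(A.sum(axis=1))
    M = np.eye(n) - t * A + t**2 * (D - np.eye(n))
    return np.linalg.solve(M, (1 - t**2) * np.ones(n))

# Star graph: the centre (node 0) should be ranked above the leaves.
n = 6
A = np.zeros((n, n)); A[0, 1:] = 1; A[1:, 0] = 1
print(nbt_katz(A, 0.2))
```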

Journal ArticleDOI
TL;DR: In this article, the condition number for general join decompositions is characterized as a distance to a set of ill-posed points in a supplementary product of Grassmannians and can be computed efficiently as the smallest singular value of an auxiliary matrix.
Abstract: The join set of a finite collection of smooth embedded submanifolds of a mutual vector space is defined as their Minkowski sum. Join decompositions generalize some ubiquitous decompositions in multilinear algebra, namely, tensor rank, Waring, partially symmetric rank, and block term decompositions. This paper examines the numerical sensitivity of join decompositions to perturbations; specifically, we consider the condition number for general join decompositions. It is characterized as a distance to a set of ill-posed points in a supplementary product of Grassmannians. We prove that this condition number can be computed efficiently as the smallest singular value of an auxiliary matrix. For some special join sets, we characterized the behavior of sequences in the join set converging to the latter's boundary points. Finally, we specialize our discussion to the tensor rank and Waring decompositions and provide several numerical experiments confirming the key results.

Journal ArticleDOI
TL;DR: This paper considers a family of Jacobi-type algorithms for a simultaneous orthogonal diagonalization problem of symmetric tensors, proposes a new Jacobi-based algorithm in the general setting, and proves its global convergence for sufficiently smooth functions.
Abstract: In this paper, we consider a family of Jacobi-type algorithms for a simultaneous orthogonal diagonalization problem of symmetric tensors. For the Jacobi-based algorithm of [M. Ishteva, P.-A. Absil, and P. Van Dooren, SIAM J. Matrix Anal. Appl., 34 (2013), pp. 651--672], we prove its global convergence for simultaneous orthogonal diagonalization of symmetric matrices and 3rd-order tensors. We also propose a new Jacobi-based algorithm in the general setting and prove its global convergence for sufficiently smooth functions.
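
The Jacobi-type strategy sweeps over index pairs and applies plane rotations chosen to reduce the joint off-diagonal mass. The sketch below covers only simultaneous diagonalization of symmetric matrices (the simplest case addressed by the convergence result); for transparency the rotation angle is found by a brute-force search rather than the closed form used in practice.

```python
import numpy as np

def offdiag2(mats):
    """Sum of squared off-diagonal entries over a list of symmetric matrices."""
    return sum(np.sum(M**2) - np.sum(np.diag(M)**2) for M in mats)

def jacobi_joint_diag(mats, sweeps=10, n_angles=721):
    """Jacobi-type simultaneous orthogonal diagonalization of symmetric
    matrices; the angle for each pivot pair (p, q) is chosen by a grid
    search over [-pi/4, pi/4]."""
    mats = [M.copy() for M in mats]
    n = mats[0].shape[0]
    Q = np.eye(n)
    thetas = np.linspace(-np.pi / 4, np.pi / 4, n_angles)
    c, s = np.cos(thetas), np.sin(thetas)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                # New (p, q) entry after the plane rotation, for every angle.
                cost = sum(((c**2 - s**2) * M[p, q]
                            + c * s * (M[q, q] - M[p, p]))**2 for M in mats)
                th = thetas[np.argmin(cost)]
                G = np.eye(n)
                G[p, p] = G[q, q] = np.cos(th)
                G[p, q], G[q, p] = -np.sin(th), np.sin(th)
                mats = [G.T @ M @ G for M in mats]
                Q = Q @ G
    return Q, mats

# Example: matrices that are exactly jointly diagonalizable.
rng = np.random.default_rng(5)
n = 5
Q0, _ = np.linalg.qr(rng.standard_normal((n, n)))
mats = [Q0 @ np.diag(rng.standard_normal(n)) @ Q0.T for _ in range(4)]
Q, diag_mats = jacobi_joint_diag(mats)
print(offdiag2(mats), offdiag2(diag_mats))   # off-diagonal mass before/after
```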

Journal ArticleDOI
TL;DR: A connection between the (non)existence of real orthogonal tensors of order three and the classical Hurwitz problem on composition algebras is established: existence of an orthogonal tensor of size $\ell \times m \times n$ is equivalent to the admissibility of the triple $[\ell,m,n]$ to the Hurwitz problem.
Abstract: As is well known, the smallest possible ratio between the spectral norm and the Frobenius norm of an $m \times n$ matrix with $m \le n$ is $1/\sqrt{m}$ and is (up to scalar scaling) attained only b...

Journal ArticleDOI
TL;DR: A new generalized matrix inverse is derived which is consistent with respect to arbitrary nonsingular diagonal transformations, e.g., it preserves units associated with variables under state space transformations.
Abstract: A new generalized matrix inverse is derived which is consistent with respect to arbitrary nonsingular diagonal transformations, e.g., it preserves units associated with variables under state space ...

Journal ArticleDOI
TL;DR: In this paper, a new algorithm is proposed for the computation of a singular value decomposition (SVD) low-rank approximation of a matrix in the matrix product operator (MPO) format, also called the tensor train matrix format.
Abstract: We propose a new algorithm for the computation of a singular value decomposition (SVD) low-rank approximation of a matrix in the matrix product operator (MPO) format, also called the tensor train matrix format...

Journal ArticleDOI
TL;DR: In this article, the update of a matrix function under a low-rank modification, $f(A+D)-f(A)$, is approximated by projecting onto tensorized Krylov subspaces produced by matrix-vector multiplications with $A$ and $A^*$, with exactness and convergence results for polynomials, the matrix exponential, and Markov functions.
Abstract: We consider the task of updating a matrix function $f(A)$ when the matrix $A\in\mathbb{C}^{n \times n}$ is subject to a low-rank modification. In other words, we aim at approximating $f(A+D)-f(A)$ for a matrix $D$ of rank $k \ll n$. The approach proposed in this paper attains efficiency by projecting onto tensorized Krylov subspaces produced by matrix-vector multiplications with $A$ and $A^*$. We prove the approximations obtained from $m$ steps of the proposed methods are exact if $f$ is a polynomial of degree at most $m$ and use this as a basis for proving a variety of convergence results, in particular for the matrix exponential and for Markov functions. We illustrate the performance of our method by considering various examples from network analysis, where our approach can be used to cheaply update centrality and communicability measures.

Journal ArticleDOI
TL;DR: A new forward error bound for Padé approximants is derived that, for highly nonnormal matrices, can be much smaller than the classical bound of Kenney and Laub.
Abstract: Two algorithms are developed for computing the matrix logarithm in floating point arithmetic of any specified precision. The backward error-based approach used in the state of the art inverse scaling and squaring algorithms does not conveniently extend to a multiprecision environment, so instead we choose algorithmic parameters based on a forward error bound. We derive a new forward error bound for Padé approximants that for highly nonnormal matrices can be much smaller than the classical bound of Kenney and Laub. One of our algorithms exploits a Schur decomposition while the other is transformation-free and uses only the computational kernels of matrix multiplication and the solution of multiple right-hand side linear systems. For double precision computations the algorithms are competitive with the state of the art algorithm of Al-Mohy, Higham, and Relton implemented in logm in MATLAB. They are intended for computing environments providing multiprecision floating point arithmetic, such as Julia, MATLAB via the Symbolic Math Toolbox or the Multiprecision Computing Toolbox, or Python with the mpmath or SymPy packages. We show experimentally that the algorithms behave in a forward stable manner over a wide range of precisions, unlike existing alternatives.
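
For readers who want to try multiprecision matrix logarithms in Python, the snippet below uses mpmath, one of the environments mentioned above; it assumes mpmath's built-in logm/expm routines, which are generic dense implementations and not the algorithms developed in the paper.

```python
from mpmath import mp, matrix, logm, expm, mnorm

mp.dps = 60                        # work with ~60 significant decimal digits
A = matrix([[4, 2, 0],
            [1, 4, 1],
            [1, 1, 4]])
L = logm(A)                        # principal matrix logarithm
print(mnorm(expm(L) - A, 1))       # residual check: expm(logm(A)) ~= A
```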

Journal ArticleDOI
TL;DR: A subspace approach is described that converts the original problem into a small-scale one by means of orthogonal projections and restrictions to certain subspaces, and that gradually expands these subspaces based on the optimal solutions of the small-scale problems.
Abstract: We consider the minimization or maximization of the $J$th largest eigenvalue of an analytic and Hermitian matrix-valued function, and build on Mengi, Yildirim, and Kilic [SIAM J. Matrix Anal. Appl., 35, pp. 699--724, 2014]. This work addresses the setting when the matrix-valued function involved is very large. We describe subspace procedures that convert the original problem into a small-scale one by means of orthogonal projections and restrictions to certain subspaces, and that gradually expand these subspaces based on the optimal solutions of small-scale problems. Global convergence and superlinear rate-of-convergence results with respect to the dimensions of the subspaces are presented in the infinite dimensional setting, where the matrix-valued function is replaced by a compact operator depending on parameters. In practice, it suffices to solve eigenvalue optimization problems involving matrices with sizes on the scale of tens, instead of the original problem involving matrices with sizes on the scale...

Journal ArticleDOI
TL;DR: The main contributions of this work are fast algorithms for the computation of the Toeplitz matrix exponential that have provable quadratic complexity if the spectrum is real, or sectorial, or, more generally, if the imaginary parts of the rightmost eigenvalues do not vary too much.
Abstract: The computation of the matrix exponential is a ubiquitous operation in numerical mathematics, and for a general, unstructured $n\times n$ matrix it can be computed in $\mathcal{O}(n^3)$ operations. An interesting problem arises if the input matrix is a Toeplitz matrix, for example as the result of discretizing integral equations with a time invariant kernel. In this case it is not obvious how to take advantage of the Toeplitz structure, as the exponential of a Toeplitz matrix is, in general, not a Toeplitz matrix itself. The main contributions of this work are fast algorithms for the computation of the Toeplitz matrix exponential. The algorithms have provable quadratic complexity if the spectrum is real, or sectorial, or, more generally, if the imaginary parts of the rightmost eigenvalues do not vary too much. They may be efficient even outside these spectral constraints. They are based on the scaling and squaring framework, and their analysis connects classical results from rational approximation theory t...

Journal ArticleDOI
TL;DR: In this paper, the problem of efficiently solving Sylvester and Lyapunov equations of medium and large scale is considered in the case of rank-structured data, i.e., when the coefficient matrices and the right-hand side are rank structured.
Abstract: We consider the problem of efficiently solving Sylvester and Lyapunov equations of medium and large scale, in case of rank-structured data, i.e., when the coefficient matrices and the right-hand si...
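
As a dense baseline for small problems, SciPy's direct solvers handle the same equations; the snippet below uses a low-rank right-hand side merely to mimic the rank-structured setting and does not implement the paper's large-scale solvers. The test matrices are constructed to be stable so that both equations have unique solutions.

```python
import numpy as np
from scipy.linalg import solve_sylvester, solve_continuous_lyapunov

rng = np.random.default_rng(3)
n, m, r = 120, 80, 2

M = rng.standard_normal((n, n)); A = -(M @ M.T / n + np.eye(n))   # stable
N = rng.standard_normal((m, m)); B = -(N @ N.T / m + np.eye(m))   # stable
U = rng.standard_normal((n, r)); V = rng.standard_normal((m, r))
C = U @ V.T                                    # low-rank right-hand side

X = solve_sylvester(A, B, C)                   # solves A X + X B = C
print(np.linalg.norm(A @ X + X @ B - C))

P = solve_continuous_lyapunov(A, -U @ U.T)     # solves A P + P A^T = -U U^T
print(np.linalg.norm(A @ P + P @ A.T + U @ U.T))
```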

Journal ArticleDOI
TL;DR: The impact of local round-off effects on the attainable accuracy of the pipelined CG algorithm is analyzed, the gap between the true residual and the recursively computed residual used in the algorithm is estimated, and an automated residual replacement strategy is suggested to reduce the loss of attainable accuracy in the final iterative solution.
Abstract: Pipelined Krylov subspace methods typically offer improved strong scaling on parallel HPC hardware compared to standard Krylov subspace methods for large and sparse linear systems. In pipelined met...
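
The gap between the recursively updated residual and the true residual b - Ax is what residual replacement tries to control. The sketch below adds a naive periodic replacement to textbook CG; the paper's automated criterion for pipelined CG is more refined, so treat the fixed replacement interval as an illustrative assumption.

```python
import numpy as np

def cg_residual_replacement(A, b, tol=1e-10, max_iter=1000, replace_every=50):
    """Conjugate gradients with periodic residual replacement: every
    'replace_every' iterations the recursively updated residual is
    replaced by the explicitly computed true residual b - A x."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rho = r @ r
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rho / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if k % replace_every == 0:
            r = b - A @ x                      # replace recursive residual
        rho_new = r @ r
        if np.sqrt(rho_new) < tol * np.linalg.norm(b):
            return x, k
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x, max_iter
```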

Journal ArticleDOI
TL;DR: This paper revisits the problem of finding the best rank-1 approximation to a symmetric tensor and makes three contributions.
Abstract: This paper revisits the problem of finding the best rank-1 approximation to a symmetric tensor and makes three contributions. First, in contrast to the many long and lingering arguments in the lite...
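
As background for the rank-1 problem, the (shifted) symmetric higher-order power method iterates x <- normalize(T(x, x, .) + alpha*x) and reports lambda = T(x, x, x). The sketch below is this classical iteration, not the contribution of the paper; the shift alpha, the random start, and the symmetrized test tensor are illustrative.

```python
import numpy as np

def shopm(T, alpha=0.0, n_iter=500, seed=0):
    """(Shifted) symmetric higher-order power method for a rank-1
    approximation lambda * x (x) x (x) x of a symmetric 3rd-order tensor T.
    A positive shift alpha promotes convergence."""
    n = T.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n); x /= np.linalg.norm(x)
    for _ in range(n_iter):
        Tx2 = np.einsum('ijk,j,k->i', T, x, x)    # T(x, x, .)
        y = Tx2 + alpha * x
        x = y / np.linalg.norm(y)
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)    # T(x, x, x)
    return lam, x

# Example: a random symmetric 3rd-order tensor.
n = 10
G = np.random.default_rng(1).standard_normal((n, n, n))
T = (G + G.transpose(1, 0, 2) + G.transpose(2, 1, 0) + G.transpose(0, 2, 1)
     + G.transpose(1, 2, 0) + G.transpose(2, 0, 1)) / 6.0
print(shopm(T, alpha=2.0))
```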

Journal ArticleDOI
TL;DR: The multidimensional heat equation, along with its more general version known as the (linear) anisotropic diffusion equation, is discretized by a discontinuous Galerkin (DG) method in time and space.
Abstract: The multidimensional heat equation, along with its more general version known as the (linear) anisotropic diffusion equation, is discretized by a discontinuous Galerkin (DG) method in time and a fi...

Journal ArticleDOI
TL;DR: The results presented here form the structural foundation for the analysis of randomized Krylov space methods, a combination of traditional Lanczos convergence analysis with optimal approximations via least squares problems.
Abstract: This paper is concerned with approximating the dominant left singular vector space of a real matrix $A$ of arbitrary dimension, from block Krylov spaces generated by the matrix ${A}{A}^T$ and the b...
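
A standard randomized block Krylov construction for the dominant left singular space is sketched below: build span{A*Om, (A A^T) A*Om, ...}, orthonormalize, and apply a Rayleigh-Ritz (projected SVD) step. The block size, depth, and random start are illustrative choices, not parameters from the paper.

```python
import numpy as np

def block_krylov_left_space(A, k, depth=3, seed=0):
    """Approximate the dominant left singular vector space of A from the
    block Krylov space span{A*Om, (A A^T) A*Om, ..., (A A^T)^depth A*Om}."""
    rng = np.random.default_rng(seed)
    Y = A @ rng.standard_normal((A.shape[1], k))
    blocks = [Y]
    for _ in range(depth):
        Y = A @ (A.T @ Y)
        blocks.append(Y)
    Q, _ = np.linalg.qr(np.hstack(blocks))        # basis of the Krylov space
    # Rayleigh-Ritz step: SVD of the small projected matrix Q^T A.
    Uh, s, _ = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ Uh[:, :k], s[:k]

# Compare with the true top-k singular values of a random test matrix.
A = np.random.default_rng(7).standard_normal((500, 200))
U_k, s_k = block_krylov_left_space(A, k=5)
print(s_k)
print(np.linalg.svd(A, compute_uv=False)[:5])
```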

Journal ArticleDOI
TL;DR: The present paper provides equivalence results between synchronizing automata and primitive sets of matrices, both in terms of combinatorial characterization and computational complexity, with special attention to the set of matrices without zero rows and columns, denoted by $\mathscr{NZ}$, due to its intriguing connections to the Černý conjecture.
Abstract: A set of nonnegative matrices $\mathcal{M}=\{M_1, M_2, \ldots, M_k\}$ is called primitive if there exist possibly equal indices $i_1, i_2, \ldots, i_m$ such that $M_{i_1} M_{i_2} \cdots M_{i_m}$ is entrywise positive. The length of the shortest such product is called the exponent of $\mathcal{M}$. Recently, connections between synchronizing automata and primitive sets of matrices were established. In the present paper, we strengthen these links by providing equivalence results, both in terms of combinatorial characterization and computational complexity. We pay special attention to the set of matrices without zero rows and columns, denoted by $\mathscr{NZ}$, due to its intriguing connections to the Černý conjecture. We rely on synchronizing automata theory to derive a number of results about primitive sets of matrices. Making use of an asymptotic estimate by Rystsov [Cybernetics, 16 (1980), pp. 194--198], we show that the maximal exponent $\exp(n)$ of primitive sets of $n \times n$ matrices satisfies $\lim_...

Journal ArticleDOI
TL;DR: Improved error bounds for small-sample statistical estimation of the matrix Frobenius norm are derived, and it is established that small-sample estimators provide reliable order-of-magnitude estimates of norms and condition numbers, for matrices of arbitrary rank, even when very few random samples are used.
Abstract: We derive improved error bounds for small-sample statistical estimation of the matrix Frobenius norm. The bounds rigorously establish that small-sample estimators provide reliable order-of-magnitude estimates of norms and condition numbers, for matrices of arbitrary rank, even when very few random samples are used.
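
A simple Monte Carlo version of such an estimator uses the identity E ||A z||_2^2 = ||A||_F^2 for a standard Gaussian vector z, so averaging a handful of samples already gives an order-of-magnitude estimate. The sketch below is this plain Gaussian variant and is not claimed to be the specific small-sample estimator whose bounds are improved in the paper.

```python
import numpy as np

def frobenius_estimate(matvec, n, k=3, seed=0):
    """Monte Carlo estimate of ||A||_F for an implicitly given A, using
    E ||A z||_2^2 = ||A||_F^2 for standard Gaussian vectors z."""
    rng = np.random.default_rng(seed)
    samples = [np.linalg.norm(matvec(rng.standard_normal(n)))**2
               for _ in range(k)]
    return np.sqrt(np.mean(samples))

# Order-of-magnitude check against the exact Frobenius norm.
A = np.random.default_rng(4).standard_normal((300, 200))
print(frobenius_estimate(lambda z: A @ z, 200, k=3))
print(np.linalg.norm(A, 'fro'))
```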

Journal ArticleDOI
TL;DR: In the presented framework, the concept of a border basis is generalized by relaxing the conditions on the set of basis elements, which allows for algorithms to adapt the choice of basis in order to enhance the numerical stability.
Abstract: We consider the problem of finding the isolated common roots of a set of polynomial functions defining a zero-dimensional ideal I in a ring R of polynomials over C. We propose a general algebraic framework to find the solutions and to compute the structure of the quotient ring R/I from the null space of a Macaulay-type matrix. The affine dense, affine sparse, homogeneous and multi-homogeneous cases are treated. In the presented framework, the concept of a border basis is generalized by relaxing the conditions on the set of basis elements. This allows for algorithms to adapt the choice of basis in order to enhance the numerical stability. We present such an algorithm and show numerical results.