
Showing papers in "SIAM Journal on Matrix Analysis and Applications in 2021"


Journal ArticleDOI
TL;DR: The concept of adaptive sampling rules is generalized to the sketch-and-project method for solving linear systems, and the resulting rules are analyzed in the sketch-and-project setting.
Abstract: We generalize the concept of adaptive sampling rules to the sketch-and-project method for solving linear systems. Analyzing adaptive sampling rules in the sketch-and-project setting yields converge...

34 citations


Journal ArticleDOI
TL;DR: The main aim of this paper is to develop the quaternion generalized minimal residual method (QGMRES) for solving quaternion linear systems.
Abstract: The main aim of this paper is to develop the quaternion generalized minimal residual method (QGMRES) for solving quaternion linear systems. Quaternion linear systems arise from three-dimensional or...

23 citations


Journal ArticleDOI
TL;DR: In this article, the numerical solution of subspace optimization problems, consisting of minimizing a smooth functional over the set of orthogonal projectors of fixed rank, is studied.
Abstract: This article is concerned with the numerical solution of subspace optimization problems, consisting of minimizing a smooth functional over the set of orthogonal projectors of fixed rank. Such probl...

23 citations


Journal ArticleDOI
TL;DR: It is pointed out that the arising sequence $(x_k)_{k=1}^{\infty}$ tends to converge to the solution $x$ in an interesting way: generically, as $k \rightarrow \infty$, $x_k - x$ tends to the singular vector of $A$ corresponding to the smallest singular value.
Abstract: Randomized Kaczmarz is a simple iterative method for finding solutions of linear systems $Ax = b$. We point out that the arising sequence $(x_k)_{k=1}^{\infty}$ tends to converge to the solution $x...

22 citations
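The paper above concerns the limiting direction of the randomized Kaczmarz error; as background, here is a minimal NumPy sketch of the standard randomized Kaczmarz iteration with row-norm sampling (all names are illustrative, and this is a generic textbook version, not the paper's code):

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Solve Ax = b by randomized Kaczmarz: at each step, project the
    current iterate onto the hyperplane of one row of A, with the row
    chosen with probability proportional to its squared norm."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A**2, axis=1)
    probs = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Orthogonal projection onto {y : A[i] @ y = b[i]}
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```

For a consistent system, the iterates converge to the solution at a rate governed by the scaled condition number of $A$.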


Journal ArticleDOI
TL;DR: The block version of the classical Gram--Schmidt method is often employed to efficiently compute orthogonal bases for Krylov subspace methods and eigenvalue solvers, but a rigorous proof of its stability has not previously been established.
Abstract: The block version of the classical Gram--Schmidt (\tt BCGS) method is often employed to efficiently compute orthogonal bases for Krylov subspace methods and eigenvalue solvers, but a rigorous proof...

15 citations
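For context on the method the paper above analyzes, here is a minimal sketch of a textbook block classical Gram--Schmidt (BCGS) variant: each block is projected once against all previously orthogonalized blocks, then orthogonalized internally with a QR factorization. This is a generic illustration, not necessarily the exact variant studied in the paper:

```python
import numpy as np

def bcgs(X, block_size):
    """Block classical Gram--Schmidt: orthogonalize the columns of X
    block by block. Each new block is projected against all prior
    Q blocks at once, then QR-factorized (intra-block step)."""
    n, m = X.shape
    Q = np.zeros((n, 0))
    for s in range(0, m, block_size):
        Xk = X[:, s:s + block_size]
        Yk = Xk - Q @ (Q.T @ Xk)      # inter-block orthogonalization
        Qk, _ = np.linalg.qr(Yk)      # intra-block orthogonalization
        Q = np.hstack([Q, Qk])
    return Q
```

For well-conditioned inputs this produces a nearly orthonormal basis; the loss-of-orthogonality behavior for ill-conditioned inputs is exactly what stability analyses of BCGS quantify.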


Journal ArticleDOI
TL;DR: A novel semidefinite programming (SDP) relaxation is introduced, and it is shown that the SDP solves the problem exactly in the low noise regime, i.e., when $\Theta$ is close to being rank deficient.
Abstract: Given an affine space of matrices $\mathcal{L}$ and a matrix $\Theta\in \mathcal{L}$, consider the problem of computing the closest rank deficient matrix to $\Theta$ on $\mathcal{L}$ with respect t...

15 citations


Journal ArticleDOI
Abstract: In this paper new general modewise Johnson--Lindenstrauss (JL) subspace embeddings are proposed that can be both generated much faster and stored more easily than traditional JL embeddings when wor...

13 citations
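The modewise embeddings proposed above are tensor-specific constructions; as a point of reference, here is a minimal sketch of the classical dense Gaussian Johnson--Lindenstrauss embedding that such constructions aim to generate faster and store more compactly (names are illustrative):

```python
import numpy as np

def gaussian_jl(X, k, seed=0):
    """Embed the rows of X (n x d) into k dimensions with a dense
    Gaussian JL map; pairwise norms and distances are preserved up to
    a (1 +/- eps) factor with high probability when k ~ log(n)/eps^2."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    S = rng.standard_normal((d, k)) / np.sqrt(k)  # scaled so E||Sx||^2 = ||x||^2
    return X @ S
```

The drawback motivating faster alternatives is visible here: the map costs $O(dk)$ storage and a dense matrix multiply per embedding.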


Journal ArticleDOI
TL;DR: In this article, the spectral properties of kernel matrices in the so-called "flat limit" are studied.
Abstract: Kernel matrices are of central importance to many applied fields. In this manuscript, we focus on spectral properties of kernel matrices in the so-called “flat limit,” which occurs when points are ...

11 citations


Journal ArticleDOI
TL;DR: Three methodologies that bound the compressibility of a tensor are developed: algebraic structure, smoothness, and displacement structure; these yield bounds on storage costs that partially explain the abundance of compressible tensors in applied mathematics.
Abstract: Tensors are often compressed by expressing them in data-sparse tensor formats, where storage costs in such formats are less than those in the original structure. In this paper, we develop three met...

11 citations


Journal ArticleDOI
TL;DR: Quaternion matrices are employed successfully in many color image processing applications as discussed by the authors, in particular, a pure quaternion matrix can be used to represent red, green, and blue channels of color im...
Abstract: Quaternion matrices are employed successfully in many color image processing applications. In particular, a pure quaternion matrix can be used to represent red, green, and blue channels of color im...

9 citations
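The pure-quaternion encoding mentioned above is straightforward to illustrate: each RGB pixel becomes a quaternion with zero real part and the three color channels as the imaginary parts. A minimal sketch (array layout is an illustrative choice):

```python
import numpy as np

def rgb_to_pure_quaternion(img):
    """Encode an RGB image (h x w x 3) as an array of pure quaternions
    (h x w x 4), q = 0 + R*i + G*j + B*k: component 0 is the real part
    (zero for a pure quaternion), components 1..3 hold the channels."""
    h, w, _ = img.shape
    q = np.zeros((h, w, 4))
    q[..., 1:] = img
    return q
```

This encoding is what lets quaternion matrix methods treat the three color channels jointly rather than channel by channel.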


Journal ArticleDOI
TL;DR: The Schur--Parlett algorithm as discussed by the authors evaluates an analytic function at an $n\times n$ matrix argument by using the Schur decomposition and a block recurrence of P...
Abstract: The Schur--Parlett algorithm, implemented in MATLAB as \texttt{funm}, evaluates an analytic function $f$ at an $n\times n$ matrix argument by using the Schur decomposition and a block recurrence of P...

Journal ArticleDOI
TL;DR: In practical applications, incremental tensors are very common: only a portion of tensor data is available, and new data are arriving in the next time step or continuously over time.
Abstract: In practical applications, incremental tensors are very common: only a portion of tensor data is available, and new data are arriving in the next time step or continuously over time. To handle this...

Journal ArticleDOI
TL;DR: The computation of matrix functions $f(A)$, or related quantities like their trace, is an important but challenging task, in particular for large and sparse matrices $A$; probing methods for this task are discussed by the authors.
Abstract: The computation of matrix functions $f(A)$, or related quantities like their trace, is an important but challenging task, in particular, for large and sparse matrices $A$. In recent years, probing ...

Journal ArticleDOI
TL;DR: In this article, the authors consider linear port-Hamiltonian differential-algebraic equations, inspired by the geometric approach of van der Schaft and Maschke.
Abstract: We consider linear port-Hamiltonian differential-algebraic equations. Inspired by the geometric approach of van der Schaft and Maschke [System Control Lett., 121 (2018), pp. 31--37] and the linear ...

Journal ArticleDOI
TL;DR: A few matrix-vector multiplications with random vectors are often sufficient to obtain reasonably good estimates for the norm of a general matrix or the trace of a symmetric positive semi-definite matrix as discussed by the authors.
Abstract: A few matrix-vector multiplications with random vectors are often sufficient to obtain reasonably good estimates for the norm of a general matrix or the trace of a symmetric positive semi-definite ...
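The trace estimate referred to above is typically obtained with Hutchinson's estimator, a standard randomized technique (shown here as a generic sketch, not the paper's specific analysis): for random $z$ with i.i.d. $\pm 1$ entries, $\mathbb{E}[z^\top A z] = \operatorname{tr}(A)$, so averaging a few quadratic forms gives an unbiased estimate.

```python
import numpy as np

def hutchinson_trace(A, num_samples=1000, seed=0):
    """Estimate trace(A) with Hutchinson's estimator: average z^T A z
    over random Rademacher vectors z (i.i.d. +/-1 entries). Each sample
    costs one matrix-vector product."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    est = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=n)
        est += z @ (A @ z)
    return est / num_samples
```

Only matrix-vector products with $A$ are needed, which is why such estimators suit large sparse or implicitly defined matrices.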

Journal ArticleDOI
TL;DR: This work makes progress toward addressing these issues by implicitly generating the sketched system and solving it simultaneously through an iterative procedure, and it exploits a connection between random sketching methods and randomized iterative solvers to produce a stronger, more precise convergence theory.
Abstract: Randomized linear system solvers have become popular as they have the potential to reduce floating point complexity while still achieving desirable convergence rates. One particularly promising cla...

Journal ArticleDOI
TL;DR: In this paper, several polynomial Krylov methods are proposed and compared for approximating the action of a Stieltjes matrix function on a vector, given a limited amount of memory and a target accuracy.
Abstract: Given a limited amount of memory and a target accuracy, we propose and compare several polynomial Krylov methods for the approximation of $f(A){b}$, the action of a Stieltjes matrix function of a l...

Journal ArticleDOI
TL;DR: An effective power-series-based parallel preconditioner is proposed for general large sparse linear systems that combines a power series expansion method with low-rank correction techniques, where the Sherman--Morrison--Woodbury formula is utilized.
Abstract: A parallel preconditioner is proposed for general large sparse linear systems that combines a power series expansion method with low-rank correction techniques. To enhance convergence, a power seri...
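To illustrate the power-series idea in isolation (the paper's preconditioner additionally uses low-rank corrections, which are not shown here), consider a truncated Neumann series with a Jacobi splitting $A = D - N$: since $A^{-1} = \sum_{j\ge 0}(D^{-1}N)^j D^{-1}$ whenever $\rho(D^{-1}N) < 1$, truncating the sum gives an approximate inverse that can be applied with only matrix-vector products. A hypothetical sketch:

```python
import numpy as np

def neumann_preconditioner_apply(A, r, k=4):
    """Apply a truncated power-series (Neumann) preconditioner to r:
    with the Jacobi splitting A = D - N, approximate
    A^{-1} r ~= sum_{j=0}^{k-1} (D^{-1} N)^j D^{-1} r.
    Valid when rho(D^{-1} N) < 1, e.g. for diagonally dominant A."""
    d = np.diag(A)
    N = np.diag(d) - A
    z = r / d            # term j = 0: D^{-1} r
    y = z.copy()
    for _ in range(k - 1):
        z = (N @ z) / d  # next term: D^{-1} N applied to previous term
        y += z
    return y
```

Each application costs $k-1$ sparse matrix-vector products, which parallelizes well, one motivation for power-series preconditioners.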

Journal ArticleDOI
TL;DR: It is shown more generally that a rank-$1$ perturbation to an orthogonal matrix producing large growth for any form of pivoting also generates large growth under reasonable assumptions, and it is demonstrated that GMRES-based iterative refinement can provide stable solutions to $Ax = b$ when large growth occurs in low precision LU factors, even when standard iterative refinement cannot.
Abstract: We identify a class of random, dense, $n\times n$ matrices for which LU factorization with any form of pivoting produces a growth factor typically of size at least $n/(4 \log n)$ for large $n$. The...

Journal ArticleDOI
TL;DR: The CUR decomposition is a factorization of a low-rank matrix obtained by selecting certain column and row submatrices of it as discussed by the authors.
Abstract: The CUR decomposition is a factorization of a low-rank matrix obtained by selecting certain column and row submatrices of it. We perform a thorough investigation of what happens to such decompositi...
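The basic construction the paper above perturbs can be sketched in a few lines: pick column indices and row indices, and take $U$ as the pseudoinverse of the intersection submatrix. When that intersection has the same rank as $A$, the decomposition is exact. A generic sketch (index choices here are the caller's responsibility):

```python
import numpy as np

def cur_decomposition(A, col_idx, row_idx):
    """Form a CUR approximation A ~= C @ U @ R from chosen index sets:
    C = A[:, cols], R = A[rows, :], and U = pinv(A[rows, cols]).
    The approximation is exact when rank(A[rows, cols]) == rank(A)."""
    C = A[:, col_idx]
    R = A[row_idx, :]
    U = np.linalg.pinv(A[np.ix_(row_idx, col_idx)])
    return C, U, R
```

Unlike the SVD, the factors $C$ and $R$ consist of actual columns and rows of the data, which is why CUR is popular when interpretability matters.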

Journal ArticleDOI
Tim Mitchell1
TL;DR: A new approach to computing global minimizers of singular value functions in two real variables with a $\mathcal{O}(kn^3)$ work complexity and lower memory requirements is proposed.
Abstract: We propose a new approach to computing global minimizers of singular value functions in two real variables. Specifically, we present new algorithms to compute the Kreiss constant of a matrix and th...

Journal ArticleDOI
TL;DR: In this paper, a unified scheme for perturbation analysis of the Schur decomposition is presented, which allows one to obtain new local (asymptotic) componentwis...
Abstract: This paper presents a unified scheme for perturbation analysis of the Schur decomposition $A = UTU^{{H}}$ of an $n$th order matrix $A$ which allows one to obtain new local (asymptotic) componentwis...

Journal ArticleDOI
TL;DR: In this paper, a fast entry-wise evaluation of the Gauss-Newton Hessian (GNH) matrix for the fully connected feed-forward neural network is presented.
Abstract: We introduce a fast algorithm for entrywise evaluation of the Gauss--Newton Hessian (GNH) matrix for the fully connected feed-forward neural network. The algorithm has a precomputation step and a s...

Journal ArticleDOI
TL;DR: In this article, a complete theory for walk-based centrality indices in complex networks defined in terms of Mittag-Leffler functions is described, including subgraph centrality and Katz centrality.
Abstract: We describe a complete theory for walk-based centrality indices in complex networks defined in terms of Mittag–Leffler functions. This overarching theory includes as special cases well known centrality measures like subgraph centrality and Katz centrality. The indices we introduce are parametrized by two numbers; by letting these vary, we show that Mittag–Leffler centralities interpolate between degree and eigenvector centrality, as well as between resolvent-based and exponential-based indices. We further discuss modelling and computational issues, and provide guidelines on parameter 10 selection. The theory is then extended to the case of networks that evolve over time. Numerical experiments on synthetic and real-world networks are provided.
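Among the special cases named in the abstract, Katz (resolvent-based) centrality is the simplest to write down: $x = (I - \alpha A)^{-1}\mathbf{1}$ counts walks of every length, damped by $\alpha^{\text{length}}$. A minimal sketch of this special case only (the Mittag-Leffler generalization itself is not shown):

```python
import numpy as np

def katz_centrality(A, alpha):
    """Katz centrality x = (I - alpha*A)^{-1} 1: node i's score sums
    alpha^len over all walks ending at i. Requires alpha < 1/rho(A)
    for the resolvent series to converge."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))
```

As $\alpha \to 0$ the ranking approaches degree centrality, and as $\alpha \to 1/\rho(A)$ it approaches eigenvector centrality, the same interpolation the Mittag-Leffler family extends along a second parameter axis.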

Journal ArticleDOI
TL;DR: The optimal complex relaxation parameters minimizing smoothing factors associated with multigrid using red-black successive overrelaxation or damped Jacobi smoothing applied to a class of linear systems are derived.
Abstract: We derive optimal complex relaxation parameters minimizing smoothing factors associated with multigrid using red-black successive overrelaxation or damped Jacobi smoothing applied to a class of lin...

Journal ArticleDOI
TL;DR: In this article, a tensor decomposition for third-order tensors is proposed, which decomposes a third order tensor to three third order factor tensors, each of which is a Tucker decomposition.
Abstract: Motivated by the Tucker decomposition, in this paper we introduce a new tensor decomposition for third order tensors, which decomposes a third order tensor to three third order factor tensors. Each...

Journal ArticleDOI
TL;DR: In this paper, the problem of nonnegative matrix factorization (NMF) is considered, where the objective is to approximate an input nonnegative matrix $V$ as the product of two smaller nonnegative matrices $W$ and $H$.
Abstract: Nonnegative matrix factorization (NMF) is the problem of approximating an input nonnegative matrix, $V$, as the product of two smaller nonnegative matrices, $W$ and $H$. In this paper, we introduce...
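As background for the NMF problem stated above, here is a sketch of the classical Lee--Seung multiplicative-update algorithm for the Frobenius-norm objective; this is the standard baseline method, not necessarily what this paper introduces:

```python
import numpy as np

def nmf_multiplicative(V, r, iters=500, seed=0):
    """Approximate V ~= W @ H with W (m x r), H (r x n) nonnegative,
    using Lee--Seung multiplicative updates for ||V - WH||_F^2.
    Updates multiply by nonnegative ratios, so W, H stay nonnegative."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-12  # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Each update monotonically decreases the objective; nonnegativity is preserved automatically because every factor in the update is nonnegative.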

Journal ArticleDOI
TL;DR: It is shown that in order to verify that two tensors are generated by the same (possibly scaled) terms it is not necessary to compute the individual decompositions and under some assumptions the verification can be reduced to a comparison of both the column and row spaces of the corresponding matrix representations of the tensors.
Abstract: Decompositions of higher-order tensors into sums of simple terms are ubiquitous. We show that in order to verify that two tensors are generated by the same (possibly scaled) terms it is not necessa...

Journal ArticleDOI
TL;DR: It is supposed that a linear bounded operator $B$ on a Hilbert space exhibits at least linear GMRES convergence, i.e., there exists $M_B<1$ such that the GMRES residuals fulfill $\|r_k\|\leq M_B\|r_{k-1}\|$ for all $k$.
Abstract: Suppose that a linear bounded operator $B$ on a Hilbert space exhibits at least linear GMRES convergence, i.e., there exists $M_B<1$ such that the GMRES residuals fulfill $\|r_k\|\leq M_B\|r_{k-1}\|$...

Journal ArticleDOI
TL;DR: Nonnegative matrix factorizations are often encountered in data mining applications where they are used to explain datasets by a small number of parts as mentioned in this paper, and for many of these applications it is desirabl...
Abstract: Nonnegative matrix factorizations are often encountered in data mining applications where they are used to explain datasets by a small number of parts. For many of these applications it is desirabl...