
Showing papers in "SIAM Journal on Matrix Analysis and Applications in 2015"


Journal ArticleDOI
TL;DR: A novel, fundamental, and surprisingly simple randomized iterative method for solving consistent linear systems, which allows a much wider selection of its two parameters and thereby leads to a number of new specific methods.
Abstract: We develop a novel, fundamental, and surprisingly simple randomized iterative method for solving consistent linear systems. Our method has six different but equivalent interpretations: sketch-and-project, constrain-and-approximate, random intersect, random linear solve, random update, and random fixed point. By varying its two parameters---a positive definite matrix (defining geometry), and a random matrix (sampled in an independent and identically distributed fashion in each iteration)---we recover a comprehensive array of well-known algorithms as special cases, including the randomized Kaczmarz method, randomized Newton method, randomized coordinate descent method, and random Gaussian pursuit. We naturally also obtain variants of all these methods using blocks and importance sampling. However, our method allows for a much wider selection of these two parameters, which leads to a number of new specific methods. We prove exponential convergence of the expected norm of the error in a single theorem, from w...

249 citations
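The randomized Kaczmarz method, which the paper above recovers as a special case of its sketch-and-project framework, can be sketched in a few lines. This is a minimal illustration (not the paper's general method): rows are sampled with probability proportional to their squared norms, and the iterate is projected onto the sampled row's hyperplane.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_kaczmarz(A, b, iters=2000):
    """Solve a consistent system Ax = b: at each step, project the
    iterate onto the hyperplane of one row, chosen with probability
    proportional to its squared norm."""
    m, n = A.shape
    x = np.zeros(n)
    row_norms = np.sum(A * A, axis=1)
    probs = row_norms / row_norms.sum()
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        a = A[i]
        x += (b[i] - a @ x) / row_norms[i] * a
    return x

# a consistent overdetermined system (synthetic example)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
x = randomized_kaczmarz(A, b)
assert np.linalg.norm(x - x_true) < 1e-6
```

Each iteration touches only one row of $A$, which is what makes such methods attractive for very large systems.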


Journal ArticleDOI
TL;DR: In this article, the authors provide a unified theory of the Kaczmarz and Gauss--Seidel methods and their variants for the underdetermined/overdetermined and consistent/inconsistent settings, and draw connections between the two approaches.
Abstract: The Kaczmarz and Gauss--Seidel methods both solve a linear system $\boldsymbol{X}{\boldsymbol{\beta}} = \boldsymbol{y}$ by iteratively refining the solution estimate. Recent interest in these methods has been sparked by a proof of Strohmer and Vershynin which shows the randomized Kaczmarz method converges linearly in expectation to the solution. Lewis and Leventhal then proved a similar result for the randomized Gauss--Seidel algorithm. However, the behavior of both methods depends heavily on whether the system is underdetermined or overdetermined, and whether it is consistent or not. Here we provide a unified theory of both methods, their variants for these different settings, and draw connections between both approaches. In doing so, we also provide a proof that an extended version of randomized Gauss--Seidel converges linearly to the least norm solution in the underdetermined case (where the usual randomized Gauss--Seidel fails to converge). We detail analytically and empirically the convergence proper...

162 citations
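The randomized Gauss--Seidel method analyzed above is coordinate descent on the least-squares objective, with columns sampled proportionally to their squared norms. A minimal sketch (a toy illustration, not the paper's extended variants):

```python
import numpy as np

rng = np.random.default_rng(2)

def randomized_gauss_seidel(A, b, iters=3000):
    """Randomized coordinate descent on ||Ax - b||^2: each step exactly
    minimizes over one coordinate, chosen with probability proportional
    to the squared norm of the corresponding column."""
    m, n = A.shape
    x = np.zeros(n)
    col_norms = np.sum(A * A, axis=0)
    p = col_norms / col_norms.sum()
    r = b.copy()                      # residual b - A x, kept up to date
    for _ in range(iters):
        j = rng.choice(n, p=p)
        delta = A[:, j] @ r / col_norms[j]
        x[j] += delta
        r -= delta * A[:, j]
    return x

# overdetermined consistent system: the iterates converge to the solution
A = rng.standard_normal((60, 8))
x_true = rng.standard_normal(8)
b = A @ x_true
x = randomized_gauss_seidel(A, b)
assert np.linalg.norm(x - x_true) < 1e-6
```

Note the contrast with Kaczmarz: Gauss--Seidel samples columns and updates one coordinate of $x$, while Kaczmarz samples rows; the paper's unified theory makes this duality precise.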


Journal ArticleDOI
TL;DR: This work considers a broad class of walk-based, parameterized node centrality measures for network analysis and shows that the parameter can be "tuned" to interpolate between degree and eigenvector centralities, which appear as limiting cases.
Abstract: We consider a broad class of walk-based, parameterized node centrality measures for network analysis. These measures are expressed in terms of functions of the adjacency matrix and generalize various well-known centrality indices, including Katz and subgraph centralities. We show that the parameter can be "tuned" to interpolate between degree and eigenvector centralities, which appear as limiting cases. Our analysis helps explain certain correlations often observed between the rankings obtained using different centrality measures and provides some guidance for the tuning of parameters. We also highlight the roles played by the spectral gap of the adjacency matrix and by the number of triangles in the network. Our analysis covers both undirected and directed networks, including weighted ones. A brief discussion of PageRank is also given.

133 citations
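The interpolation described above can be seen concretely with Katz centrality, $x(\alpha) = (I - \alpha A)^{-1}\mathbf{1}$, one member of the walk-based class the paper studies. On a toy graph (an assumption for illustration: a 5-clique plus a star whose hub has the largest degree), a small $\alpha$ reproduces the degree ranking while $\alpha$ near $1/\lambda_{\max}$ reproduces the eigenvector ranking:

```python
import numpy as np

# Toy graph: nodes 0..4 form a 5-clique (each of degree 4); node 5 is a
# hub joined to leaves 6..10 (degree 5).  lambda_max = 4, set by the clique.
n = 11
A = np.zeros((n, n))
A[:5, :5] = 1 - np.eye(5)          # 5-clique
for leaf in range(6, 11):          # star: hub 5 with five leaves
    A[5, leaf] = A[leaf, 5] = 1

def katz(alpha):
    """Katz centrality x(alpha) = (I - alpha*A)^{-1} 1."""
    return np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))

# alpha small: the hub wins, since it has the largest degree (5 vs 4)
assert np.argmax(katz(0.01)) == 5
# alpha near 1/lambda_max = 1/4: a clique node wins, since the dominant
# eigenvector of A is supported on the clique
assert np.argmax(katz(0.24)) < 5
```

The two limiting rankings disagree on this graph, which is exactly the regime where the choice of parameter matters.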


Journal ArticleDOI
TL;DR: This paper explains how to obtain a coupled CPD from one of the individual CPDs, presents an algorithm that directly takes the coupling between several CPDs into account, and extends the methods to single and coupled decompositions in multilinear rank terms.
Abstract: The coupled canonical polyadic decomposition (CPD) is an emerging tool for the joint analysis of multiple data sets in signal processing and statistics. Despite their importance, linear algebra based algorithms for coupled CPDs have not yet been developed. In this paper, we first explain how to obtain a coupled CPD from one of the individual CPDs. Next, we present an algorithm that directly takes the coupling between several CPDs into account. We extend the methods to single and coupled decompositions in multilinear rank-$(L_{{r,n}},L_{{r,n}},1)$ terms. Finally, numerical experiments demonstrate that linear algebra based algorithms can provide good results at a reasonable computational cost.

106 citations


Journal ArticleDOI
TL;DR: The CORK family of rational Krylov methods exploits the structure of the linearization pencils by using a generalization of the compact Arnoldi decomposition, so that the extra memory and orthogonalization costs due to the linearizations of the original eigenvalue problem are negligible for large-scale problems.
Abstract: We propose a new uniform framework of compact rational Krylov (CORK) methods for solving large-scale nonlinear eigenvalue problems $A(\lambda) x = 0$. For many years, linearizations were used for solving polynomial and rational eigenvalue problems. On the other hand, for the general nonlinear case, $A(\lambda)$ can first be approximated by a (rational) matrix polynomial and then a convenient linearization is used. However, the major disadvantage of linearization-based methods is the growing memory and orthogonalization costs with the iteration count, i.e., in general they are proportional to the degree of the polynomial. Therefore, the CORK family of rational Krylov methods exploits the structure of the linearization pencils by using a generalization of the compact Arnoldi decomposition. In this way, the extra memory and orthogonalization costs due to the linearization of the original eigenvalue problem are negligible for large-scale problems. Furthermore, we prove that each CORK step breaks down into an ...

80 citations


Journal ArticleDOI
TL;DR: Conditions are found that guarantee that a decomposition of a generic third-order tensor into a minimal number of rank-1 tensors (canonical polyadic decomposition (CPD)) is unique up to a permutation of the rank-1 tensors.
Abstract: We find conditions that guarantee that a decomposition of a generic third-order tensor in a minimal number of rank-$1$ tensors (canonical polyadic decomposition (CPD)) is unique up to a permutation of rank-$1$ tensors. Then we consider the case when the tensor and all its rank-$1$ terms have symmetric frontal slices (INDSCAL). Our results complement the existing bounds for generic uniqueness of the CPD and relax the existing bounds for INDSCAL. The derivation makes use of algebraic geometry. We stress the power of the underlying concepts for proving generic properties in mathematical engineering.

80 citations


Journal ArticleDOI
TL;DR: In this article, a new framework for interpolatory model reduction of large-scale bilinear systems is introduced, where multipoint interpolation of the underlying Volterra series is enforced.
Abstract: In this paper, we focus on model reduction of large-scale bilinear systems. The main contributions are threefold. First, we introduce a new framework for interpolatory model reduction of bilinear systems. In contrast to the existing methods where interpolation is forced on some of the leading subsystem transfer functions, the new framework shows how to enforce multipoint interpolation of the underlying Volterra series. Then, we show that the first-order conditions for optimal $\mathcal{H}_2$ model reduction of bilinear systems require multivariate Hermite interpolation in terms of the new Volterra series interpolation framework; thus we extend the interpolation-based first-order necessary conditions for $\mathcal{H}_2$ optimality of LTI systems to the bilinear case. Finally, we show that multipoint interpolation on the truncated Volterra series representation of a bilinear system leads to an asymptotically optimal approach to $\mathcal{H}_2$ optimal model reduction, leading to an efficient model reduction...

76 citations


Journal ArticleDOI
TL;DR: A rational Krylov method for rational least squares fitting is developed and an implicit Q theorem for rational Krylov spaces is presented.
Abstract: Generalized rational Krylov decompositions are matrix relations which, under certain conditions, are associated with rational Krylov spaces. We study the algebraic properties of such decompositions and present an implicit Q theorem for rational Krylov spaces. Transformations on rational Krylov decompositions allow for changing the poles of a rational Krylov space without recomputation, and two algorithms are presented for this task. Using such transformations we develop a rational Krylov method for rational least squares fitting. Numerical experiments indicate that the proposed method converges fast and robustly. A MATLAB toolbox with implementations of the presented algorithms and experiments is provided.

74 citations


Journal ArticleDOI
TL;DR: Roth-type theorems are proved for systems of matrix equations including an arbitrary mix of Sylvester and $\star$-Sylvester equations, in which the transpose or conjugate transpose of the unknown matrices also appears.
Abstract: We prove Roth-type theorems for systems of matrix equations including an arbitrary mix of Sylvester and $\star$-Sylvester equations, in which the transpose or conjugate transpose of the unknown matrices also appear. In full generality, we derive consistency conditions by proving that such a system has a solution if and only if the associated set of $2 \times 2$ block matrix representations of the equations are block diagonalizable by (linked) equivalence transformations. Various applications leading to several particular cases have already been investigated in the literature, some recently and some long ago. Solvability of these cases follow immediately from our general consistency theory. We also show how to apply our main result to systems of Stein-type matrix equations.

61 citations


Journal ArticleDOI
TL;DR: This paper extends several classical results from matrices or matrix pairs to tensor pairs, such as the Gershgorin circle theorem, the Collatz--Wielandt formula, the Bauer--Fike theorem, and the Rayleigh--Ritz theorem, to generalized tensor eigenvalue problems.
Abstract: This paper is devoted to generalized tensor eigenvalue problems. We focus on the properties and perturbations of the spectra of regular tensor pairs. Employing different techniques, we extend several classical results from matrices or matrix pairs to tensor pairs, such as the Gershgorin circle theorem, the Collatz--Wielandt formula, the Bauer--Fike theorem, the Rayleigh--Ritz theorem, backward error analysis, the componentwise distance of a nonsingular tensor to singularity, etc. Some of these results preserve their original forms, while others change when being extended.

57 citations


Journal ArticleDOI
TL;DR: A stable algorithm is presented that computes the roots of a polynomial as the eigenvalues of the associated companion matrix, via Francis's implicitly shifted QR algorithm.
Abstract: A stable algorithm to compute the roots of polynomials is presented. The roots are found by computing the eigenvalues of the associated companion matrix by Francis's implicitly shifted QR algorithm. A companion matrix is an upper Hessenberg matrix that is unitary-plus-rank-one, that is, it is the sum of a unitary matrix and a rank-one matrix. These properties are preserved by iterations of Francis's algorithm, and it is these properties that are exploited here. The matrix is represented as a product of 3n-1 Givens rotators plus the rank-one part, so only $O(n)$ storage space is required. In fact, the information about the rank-one part is also encoded in the rotators, so it is not necessary to store the rank-one part explicitly. Francis's algorithm implemented on this representation requires only O(n) flops per iteration and thus $O(n^{2})$ flops overall. The algorithm is described, normwise backward stability is proved, and an extensive set of numerical experiments is presented. The algorithm is shown to...
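The companion-matrix idea underlying the paper can be shown in a dense sketch. This toy version builds the companion matrix explicitly and calls a general $O(n^3)$ eigensolver; the paper's contribution is a structured algorithm that exploits the unitary-plus-rank-one form to do the same in $O(n^2)$ flops and $O(n)$ storage.

```python
import numpy as np

def roots_via_companion(coeffs):
    """Roots of the monic polynomial x^n + c_{n-1} x^{n-1} + ... + c_0,
    given coeffs = [c_0, c_1, ..., c_{n-1}], as eigenvalues of the
    (upper Hessenberg) companion matrix."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)      # ones on the subdiagonal
    C[:, -1] = -np.asarray(coeffs)  # last column holds the coefficients
    return np.linalg.eigvals(C)

# (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6
r = np.sort(roots_via_companion([-6, 11, -6]).real)
assert np.allclose(r, [1, 2, 3])
```

This is essentially what general-purpose root finders do; the paper shows the structured variant is also normwise backward stable.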

Journal ArticleDOI
TL;DR: In this paper, decay bounds for completely monotonic functions of Hermitian matrices, where the matrix argument is banded or a Kronecker sum of banded matrices are presented.
Abstract: We present decay bounds for completely monotonic functions of Hermitian matrices, where the matrix argument is banded or a Kronecker sum of banded matrices. This class includes the exponential, the negative fractional roots, and other functions that are important in applications. Besides being significantly tighter than previous estimates, the new bounds closely capture the actual (nonmonotonic) decay behavior of the entries of functions of matrices with Kronecker sum structure. We also discuss extensions to more general sparse matrices.

Journal ArticleDOI
TL;DR: The formulation of a Riemannian Newton algorithm for solving a class of nonlinear eigenvalue problems by minimizing a total energy function subject to the orthogonality constraint is given.
Abstract: We give the formulation of a Riemannian Newton algorithm for solving a class of nonlinear eigenvalue problems by minimizing a total energy function subject to the orthogonality constraint. Under some mild assumptions, we establish the global and quadratic convergence of the proposed method. Moreover, the positive definiteness condition of the Riemannian Hessian of the total energy function at a solution is derived. Some numerical tests are reported to illustrate the efficiency of the proposed method for solving large-scale problems.

Journal ArticleDOI
TL;DR: The proof developed for the existence of a matrix polynomial when its degree, its finite and infinite elementary divisors, and its left and right minimal indices are prescribed is constructive and solves a very general inverse problem for matrix polynomials with prescribed complete eigenstructure.
Abstract: We present necessary and sufficient conditions for the existence of a matrix polynomial when its degree, its finite and infinite elementary divisors, and its left and right minimal indices are prescribed. These conditions hold for arbitrary infinite fields and are determined mainly by the "index sum theorem," which is a fundamental relationship between the rank, the degree, the sum of all partial multiplicities, and the sum of all minimal indices of any matrix polynomial. The proof developed for the existence of such a polynomial is constructive and, therefore, solves a very general inverse problem for matrix polynomials with prescribed complete eigenstructure. This result allows us to settle the problem of the existence of $\ell$-ifications of a given matrix polynomial, as well as to determine all their possible sizes and eigenstructures.

Journal ArticleDOI
TL;DR: For a Monte Carlo matrix multiplication algorithm by Drineas et al., probabilistic error bounds are presented that depend on the stable rank or the rank of the matrix, but not on the matrix dimensions.
Abstract: Given a real matrix $\mathbf{A}$ with $n$ columns, the problem is to approximate the Gram product $\mathbf{A}\mathbf{A}^T$ by $c\ll n$ weighted outer products of columns of $\mathbf{A}$. Necessary and sufficient conditions for the exact computation of $\mathbf{A}\mathbf{A}^T$ (in exact arithmetic) from $c\geq \mathrm{rank}(\mathbf{A})$ columns depend on the right singular vector matrix of $\mathbf{A}$. For a Monte Carlo matrix multiplication algorithm by Drineas et al. that samples outer products, we present probabilistic bounds for the two-norm relative error due to randomization. The bounds depend on the stable rank or the rank of $\mathbf{A}$, but not on the matrix dimensions. Numerical experiments illustrate that the bounds are informative, even for stringent success probabilities and matrices of small dimension. We also derive bounds for the smallest singular value and the condition number of matrices obtained by sampling rows from orthonormal matrices.
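The sampling scheme analyzed above can be sketched directly: approximate $\mathbf{A}\mathbf{A}^T$ by $c$ weighted outer products of columns drawn with probability proportional to their squared norms. This is a minimal illustration of the sampled estimator, not the paper's error analysis; the tolerance below is an empirical assumption for this synthetic matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

def sampled_gram(A, c):
    """Monte Carlo approximation of A @ A.T from c sampled columns,
    with norm-squared sampling probabilities; the scaling by
    1/sqrt(c * p) makes the estimator unbiased."""
    n = A.shape[1]
    norms2 = np.sum(A * A, axis=0)
    p = norms2 / norms2.sum()
    idx = rng.choice(n, size=c, p=p)
    S = A[:, idx] / np.sqrt(c * p[idx])
    return S @ S.T

A = rng.standard_normal((20, 5000))
G = A @ A.T
G_hat = sampled_gram(A, 2000)
rel_err = np.linalg.norm(G_hat - G, 2) / np.linalg.norm(G, 2)
assert rel_err < 0.3   # error governed by the stable rank, not by n = 5000
```

Here $c = 2000 \ll n = 5000$ columns already give a modest two-norm relative error, consistent with bounds that involve the stable rank rather than the dimensions.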

Journal ArticleDOI
TL;DR: This paper constructs counterexamples to the DJL conjecture for all $n\ge 12$ and determines the asymptotic growth of $p_n$, the largest possible cp-rank of an $n\times n$ completely positive matrix.
Abstract: Let $p_n$ denote the largest possible cp-rank of an $n\times n$ completely positive matrix. This matrix parameter has its significance both in theory and applications, as it sheds light on the geometry and structure of the solution set of hard optimization problems in their completely positive formulation. Known bounds for $p_n$ are $s_n=\binom{n+1}2-4$, the current best upper bound, and the Drew--Johnson--Loewy (DJL) lower bound $d_n=\lfloor\frac{n^2}4\rfloor$. The famous DJL conjecture (1994) states that $p_n=d_n$. Here we show $p_n=\frac{n^2}2+{\mathcal O}(n^{3/2})=2d_n+{\mathcal O}(n^{3/2})$, and construct counterexamples to the DJL conjecture for all $n\ge 12$ (for orders seven through eleven counterexamples were already given in [I. M. Bomze, W. Schachinger, and R. Ullrich, Linear Algebra Appl., 459 (2014), pp. 208--221]).

Journal ArticleDOI
TL;DR: In this paper, it was shown that a larger departure from normality can actually correspond to faster decay of singular values: if the singular values decay slowly, the numerical range cannot extend far into the right half-plane.
Abstract: Lyapunov equations with low-rank right-hand sides often have solutions whose singular values decay rapidly, enabling iterative methods that produce low-rank approximate solutions. All previously known bounds on this decay involve quantities that depend quadratically on the departure of the coefficient matrix from normality: these bounds suggest that the larger the departure from normality, the slower the singular values will decay. We show this is only true up to a threshold, beyond which a larger departure from normality can actually correspond to faster decay of singular values: if the singular values decay slowly, the numerical range cannot extend far into the right half-plane.

Journal ArticleDOI
TL;DR: In this paper, the first-order information of the polynomials in a set of sampling points is captured by the Jacobian matrix evaluated at the sampling points, and the canonical polyadic decomposition of the three-way tensor of Jacobian matrices directly returns the unknown linear relations as well as the necessary information to reconstruct the univariate polynomial coefficients.
Abstract: We present a method to decompose a set of multivariate real polynomials into linear combinations of univariate polynomials in linear forms of the input variables. The method proceeds by collecting the first-order information of the polynomials in a set of sampling points, which is captured by the Jacobian matrix evaluated at the sampling points. The canonical polyadic decomposition of the three-way tensor of Jacobian matrices directly returns the unknown linear relations as well as the necessary information to reconstruct the univariate polynomials. The conditions under which this decoupling procedure works are discussed, and the method is illustrated on several numerical examples.

Journal ArticleDOI
TL;DR: The conditioning and the spectral distribution in the Weyl sense are investigated, and the so-called (spectral) symbol describing the asymptotic spectrum is determined.
Abstract: We study the spectral properties of the stiffness matrices coming from the approximation of a $d$-dimensional second order elliptic differential problem by the $\mathbb Q_{\boldsymbol p}$ Lagrangian finite element method (FEM); here, ${\boldsymbol p}=(p_1,\ldots,p_d)$ and $p_j$ represents the polynomial approximation degree in the $j$th direction. After presenting a construction of these matrices, we investigate the conditioning and the spectral distribution in the Weyl sense, and we determine the so-called (spectral) symbol describing the asymptotic spectrum. We also study the properties of the symbol, which turns out to be a $d$-variate function taking values in the space of $N({\boldsymbol p})\times N({\boldsymbol p})$ Hermitian matrices, where $N({\boldsymbol p})=\prod_{j=1}^d p_j$. Unlike the stiffness matrices coming from the ${\boldsymbol p}$-degree B-spline isogeometric analysis approximation of the same differential problem, where a unique $d$-variate real-valued function describes all the spectr...

Journal ArticleDOI
TL;DR: By the simple device of reordering, a circulant preconditioned short recurrence Krylov subspace iterative method of minimum residual type for nonsymmetric (and possibly highly nonnormal) Toeplitz systems is rigorously established.
Abstract: Circulant preconditioning for symmetric Toeplitz linear systems is well established; theoretical guarantees of fast convergence for the conjugate gradient method are descriptive of the convergence seen in computations. This has led to robust and highly efficient solvers based on use of the fast Fourier transform exactly as originally envisaged in [G. Strang, Stud. Appl. Math., 74 (1986), pp. 171--176]. For nonsymmetric systems, the lack of generally descriptive convergence theory for most iterative methods of Krylov type has provided a barrier to such a comprehensive guarantee, though several methods have been proposed and some analysis of performance with the normal equations is available. In this paper, by the simple device of reordering, we rigorously establish a circulant preconditioned short recurrence Krylov subspace iterative method of minimum residual type for nonsymmetric (and possibly highly nonnormal) Toeplitz systems. Convergence estimates similar to those in the symmetric case are established.

Journal ArticleDOI
TL;DR: The conventional high-order power method is modified to address the desirable orthogonality via the polar decomposition and it is shown that for almost all tensors the orthogonal alternating least squares method converges globally.
Abstract: With the notable exceptions of two cases---that tensors of order 2, namely, matrices, always have best approximations of arbitrary low rank, and that tensors of any order always have a best rank-1 approximation---it is known that high-order tensors may fail to have best low rank approximations. When the condition of orthogonality is imposed, even under the modest assumption of semiorthogonality where only one set of components in the decomposed rank-1 tensors is required to be mutually perpendicular, the situation changes completely---orthogonal low rank approximations always exist. The purpose of this paper is to discuss the best low rank approximation subject to semiorthogonality. The conventional high-order power method is modified to address the desirable orthogonality via the polar decomposition. An algebraic geometry technique is employed to show that for almost all tensors the orthogonal alternating least squares method converges globally.
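The conventional higher-order power method that the paper modifies can be sketched for the rank-1 case: alternately contract the tensor with two of the factors to update the third, then normalize. This is the classical unconstrained method on a synthetic example, not the paper's polar-decomposition variant.

```python
import numpy as np

rng = np.random.default_rng(3)

def hopm_rank1(T, iters=200):
    """Higher-order power method for the best rank-1 approximation
    sigma * (a o b o c) of a third-order tensor T."""
    a = rng.standard_normal(T.shape[0]); a /= np.linalg.norm(a)
    b = rng.standard_normal(T.shape[1]); b /= np.linalg.norm(b)
    c = rng.standard_normal(T.shape[2]); c /= np.linalg.norm(c)
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    sigma = np.einsum('ijk,i,j,k->', T, a, b, c)
    return sigma, a, b, c

# exact rank-1 tensor: HOPM recovers the scale factor
u = np.array([3.0, 4.0]) / 5
v = np.array([1.0, 0.0])
w = np.array([0.6, 0.8])
T = 2.0 * np.einsum('i,j,k->ijk', u, v, w)
sigma, a, b, c = hopm_rank1(T)
assert abs(sigma - 2.0) < 1e-8
```

For higher target ranks with a semiorthogonality constraint, the paper replaces the plain normalization step by a polar decomposition of the stacked factor updates.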

Journal ArticleDOI
TL;DR: A new uniqueness condition is presented for a polyadic decomposition (PD) where one of the factor matrices is assumed to be known and this result can be used to obtain a new overall uniqueness condition for the CPD.
Abstract: The uniqueness properties of the canonical polyadic decomposition (CPD) of higher-order tensors make it an attractive tool for signal separation. However, CPD uniqueness is not yet fully understood. In this paper, we first present a new uniqueness condition for a polyadic decomposition (PD) where one of the factor matrices is assumed to be known. We also show that this result can be used to obtain a new overall uniqueness condition for the CPD. In signal processing the CPD factor matrices are often constrained. Building on the preceding results, we provide a new uniqueness condition for a CPD with a columnwise orthonormal factor matrix, representing uncorrelated signals. We also obtain a new uniqueness condition for a CPD with a partial Hermitian symmetry, useful for tensors in which covariance matrices are stacked, which are common in statistical signal processing. We explain that such constraints can lead to more relaxed uniqueness conditions. Finally, we provide an inexpensive algorithm for computing a...

Journal ArticleDOI
TL;DR: A general convergence theory for the generalized minimal residual method preconditioned by inner iterations for solving least squares problems is developed and numerical experiments show that the proposed methods are more robust and efficient compared to previous methods for some rank-deficient problems.
Abstract: We develop a general convergence theory for the generalized minimal residual method preconditioned by inner iterations for solving least squares problems. The inner iterations are performed by stationary iterative methods. We also present theoretical justifications for using specific inner iterations such as the Jacobi and SOR-type methods. The theory improves previous work [K. Morikuni and K. Hayami, SIAM J. Matrix Anal. Appl., 34 (2013), pp. 1--22], particularly in the rank-deficient case. We also characterize the spectrum of the preconditioned coefficient matrix by the spectral radius of the iteration matrix for the inner iterations and give a convergence bound for the proposed methods. Finally, numerical experiments show that the proposed methods are more robust and efficient compared to previous methods for some rank-deficient problems.

Journal ArticleDOI
TL;DR: A method is provided that allows us to compute to high relative accuracy the eigenvalues and singular values of the q-Bernstein--Vandermonde matrices, as well as their inverses.
Abstract: The q-Bernstein--Vandermonde matrices are the collocation matrices of the q-Bernstein basis. This paper provides a method that allows us to compute to high relative accuracy the eigenvalues and singular values of these matrices, as well as their inverses.

Journal ArticleDOI
TL;DR: The results provide the basis for analyzing the effect of Gaubert and Sharify's tropical scalings for $P(\lambda)$ on (a) the conditioning of linearizations of tropically scaled $P(\lambda)$ and (b) the backward stability of eigensolvers based on linearizations of tropically scaled $P(\lambda)$.
Abstract: The tropical roots of $tp(x) = \max_{0\le j\le d}\|A_j\|x^j$ are points at which the maximum is attained at least twice. These roots, which can be computed in only $O(d)$ operations, can be good approximations to the moduli of the eigenvalues of the matrix polynomial $P(\lambda)=\sum_{j=0}^d \lambda^j A_j$, in particular when the norms of the matrices $A_j$ vary widely. Our aim is to investigate this observation and its applications. We start by providing annuli defined in terms of the tropical roots of $tp(x)$ that contain the eigenvalues of $P(\lambda)$. Our localization results yield conditions under which tropical roots offer order of magnitude approximations to the moduli of the eigenvalues of $P(\lambda)$. Our tropical localization of eigenvalues are less tight than eigenvalue localization results derived from a generalized matrix version of Pellet's theorem but they are easier to interpret. Tropical roots are already used to determine the starting points for matrix polynomial eigensolvers based on scalar polynomial root solvers such as the Ehrlich-Aberth method and our results further justify this choice. Our results provide the basis for analyzing the effect of Gaubert and Sharify's tropical scalings for $P(\lambda)$ on (a) the conditioning of linearizations of tropically scaled $P(\lambda)$ and (b) the backward stability of eigensolvers based on linearizations of tropically scaled $P(\lambda)$. We anticipate that the tropical roots of $tp(x)$, on which the tropical scalings are based, will help designing polynomial eigensolvers with better numerical properties than standard algorithms for polynomial eigenvalue problems such as that implemented in the MATLAB function \texttt{polyeig}.
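The tropical roots of $tp(x) = \max_{0\le j\le d}\|A_j\|x^j$ can be computed from the upper convex hull of the points $(j, \log\|A_j\|)$: in log coordinates each term is a line, and the roots are the break points of the upper envelope. A minimal sketch for given (nonzero) norms, using scalar coefficients as stand-ins for $\|A_j\|$:

```python
import numpy as np

def tropical_roots(norms):
    """Tropical roots of tp(x) = max_j norms[j] * x^j, with
    multiplicities, via the upper convex hull of (j, log norms[j])."""
    logs = np.log(np.asarray(norms, dtype=float))
    d = len(norms) - 1
    hull = [0]                       # indices on the upper convex hull
    for j in range(1, d + 1):
        while len(hull) >= 2:
            i, k = hull[-2], hull[-1]
            # drop k if it lies on or below the chord from i to j
            if (logs[k] - logs[i]) * (j - i) <= (logs[j] - logs[i]) * (k - i):
                hull.pop()
            else:
                break
        hull.append(j)
    roots = []
    for i, k in zip(hull[:-1], hull[1:]):
        r = np.exp((logs[i] - logs[k]) / (k - i))  # where the two terms tie
        roots.extend([r] * (k - i))                # multiplicity k - i
    return roots

# tp(x) = max(1, 100x, x^2): the maximum is attained twice at 0.01 and 100
assert np.allclose(tropical_roots([1, 100, 1]), [0.01, 100.0])
```

The single pass over $j$ gives the $O(d)$ cost mentioned in the abstract, and the widely varying norms in the example are exactly the regime where tropical roots approximate eigenvalue moduli well.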

Journal ArticleDOI
TL;DR: In this article, the authors proposed new algorithms for singular value decomposition (SVD) of large-scale matrices based on a low-rank tensor approximation technique called the tensor train (TT) format.
Abstract: We propose new algorithms for singular value decomposition (SVD) of large-scale matrices based on a low-rank tensor approximation technique called the tensor train (TT) format. The proposed algorithms can compute a few extreme (i.e., largest or smallest) singular values and corresponding singular vectors for large-scale structured matrices given in a TT format. The computational complexity of the proposed methods scales logarithmically with the matrix size under the assumption that both the matrix and the singular vectors admit approximate low-rank TT decompositions. The proposed methods, which are called the alternating least squares SVD (ALS-SVD) and modified alternating least squares SVD (MALS-SVD), compute the left and right singular vectors approximately through block TT decompositions. A large-scale optimization problem is reduced to sequential small-scale optimization problems, and each core tensor of the block TT decompositions can be updated by applying any standard SVD method. The optimal ranks ...

Journal ArticleDOI
TL;DR: It is proved that the stationary distribution of a system of reacting species with a weakly reversible reaction network of zero deficiency admits the tensor-structured approximation in the quantized tensor train (QTT) format.
Abstract: We prove that the stationary distribution of a system of reacting species with a weakly reversible reaction network of zero deficiency in the sense of Feinberg and separable propensity functions admits the tensor-structured approximation in the quantized tensor train (QTT) format. The complexity of the approximation scales linearly with respect to the number of species and logarithmically in the maximum copy numbers as well as in the desired accuracy. Our result covers the classical mass-action and Michaelis--Menten kinetics, which correspond to two widely used classes of propensity functions, and also arbitrary combinations of those. New rank bounds for tensor-structured approximations of the probability density function (PDF) of a truncated one-dimensional Poisson distribution are an auxiliary result of the present paper, which might be of independent interest. The present work complements recent results obtained by the authors jointly with Khammash and Nip on the tensor-structured numerical simulation

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of efficiently computing the eigenvalues of limited-memory quasi-Newton matrices that exhibit a compact formulation, and propose a compact formula for quasi-Newton matrices generated by any member of the Broyden convex class of updates.
Abstract: In this paper, we consider the problem of efficiently computing the eigenvalues of limited-memory quasi-Newton matrices that exhibit a compact formulation. In addition, we produce a compact formula for quasi-Newton matrices generated by any member of the Broyden convex class of updates. Our proposed method makes use of efficient updates to the QR factorization that substantially reduce the cost of computing the eigenvalues after the quasi-Newton matrix is updated. Numerical experiments suggest that the proposed method is able to compute eigenvalues to high accuracy. Applications for this work include modified quasi-Newton methods and trust-region methods for large-scale optimization, the efficient computation of condition numbers and singular values, and sensitivity analysis.

Journal ArticleDOI
TL;DR: In this paper, a nonasymptotic analysis on the singular vector distribution under Gaussian noise was performed and sufficient conditions on a matrix for its first few singular vectors to have near normal distribution were provided.
Abstract: We perform a nonasymptotic analysis on the singular vector distribution under Gaussian noise. In particular, we provide sufficient conditions on a matrix for its first few singular vectors to have near normal distribution. Our result can be used to facilitate the error analysis in linear dimension reduction.

Journal ArticleDOI
TL;DR: An algorithm is proposed that employs the Schur decomposition and blocks the triangular form in such a way that Newton's method can be used on each diagonal block, with a starting matrix depending on the block.
Abstract: An algorithm is proposed for computing primary matrix Lambert $W$ functions of a square matrix $A$, which are solutions of the matrix equation $We^W = A$. The algorithm employs the Schur decomposition and blocks the triangular form in such a way that Newton's method can be used on each diagonal block, with a starting matrix depending on the block. A natural simplification of Newton's method for the Lambert $W$ function is shown to be numerically unstable. By reorganizing the iteration a new Newton variant is constructed that is proved to be numerically stable. Numerical experiments demonstrate that the algorithm is able to compute the branches of the matrix Lambert $W$ function in a numerically reliable way.
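The building block applied per diagonal block can be illustrated in the scalar case: Newton's method for $we^w = a$, i.e., the Lambert $W$ equation. This is plain scalar Newton for illustration only; the paper shows a naive matrix analogue is unstable and constructs a reorganized, provably stable variant.

```python
import numpy as np

def lambert_w_newton(a, w0=1.0, tol=1e-14, maxit=100):
    """Newton's method for f(w) = w*exp(w) - a on the principal branch;
    f'(w) = exp(w)*(w + 1)."""
    w = w0
    for _ in range(maxit):
        f = w * np.exp(w) - a
        step = f / (np.exp(w) * (w + 1.0))
        w -= step
        if abs(step) <= tol * max(1.0, abs(w)):
            break
    return w

w = lambert_w_newton(2.0)          # solve w*exp(w) = 2
assert abs(w * np.exp(w) - 2.0) < 1e-12
```

In the matrix algorithm, the starting value for each diagonal block plays the role of `w0` here and is chosen according to the desired branch.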