
Showing papers on "Matrix (mathematics)" published in 2014


Book
01 Jan 2014
TL;DR: In this book, existence theorems for combinatorially constrained matrices are given, alongside treatments of incidence matrices, matrices and graphs, digraphs and bigraphs, combinatorial matrix algebra, the permanent, and Latin squares, as well as some special graphs.
Abstract: 1. Incidence matrices 2. Matrices and graphs 3. Matrices and digraphs 4. Matrices and bigraphs 5. Combinatorial matrix algebra 6. Existence theorems for combinatorially constrained matrices 7. Some special graphs 8. The permanent 9. Latin squares.

1,073 citations


Journal ArticleDOI
TL;DR: In this paper, a sparsity-promoting variant of the standard dynamic mode decomposition (DMD) algorithm is developed, where sparsity is induced by regularizing the least-squares deviation between the matrix of snapshots and the linear combination of DMD modes with an additional term that penalizes the l1-norm of the vector of DMD amplitudes.
Abstract: Dynamic mode decomposition (DMD) represents an effective means for capturing the essential features of numerically or experimentally generated flow fields. In order to achieve a desirable tradeoff between the quality of approximation and the number of modes that are used to approximate the given fields, we develop a sparsity-promoting variant of the standard DMD algorithm. Sparsity is induced by regularizing the least-squares deviation between the matrix of snapshots and the linear combination of DMD modes with an additional term that penalizes the l1-norm of the vector of DMD amplitudes. The globally optimal solution of the resulting regularized convex optimization problem is computed using the alternating direction method of multipliers, an algorithm well-suited for large problems. Several examples of flow fields resulting from numerical simulations and physical experiments are used to illustrate the effectiveness of the developed method.
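As a rough illustration of the approach described above, here is a minimal numpy sketch: exact DMD modes are computed from snapshot pairs, and the amplitude vector is then sparsified with an l1 penalty. For brevity the l1 fit is done against a single snapshot with plain ISTA rather than the paper's ADMM solve over all snapshots; all function names and parameters are illustrative.

```python
import numpy as np

def dmd(X, Xp, r):
    """Exact DMD from snapshot matrices satisfying Xp ≈ A X, truncated to rank r."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ Xp @ Vh.conj().T @ np.diag(1.0 / s)   # reduced operator
    evals, W = np.linalg.eig(Atilde)
    Phi = Xp @ Vh.conj().T @ np.diag(1.0 / s) @ W               # DMD modes
    return Phi, evals

def sparse_amplitudes(Phi, x0, gamma, n_iter=500):
    """ISTA for min_a 0.5 * ||Phi a - x0||^2 + gamma * ||a||_1."""
    L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant of the smooth part
    a = np.zeros(Phi.shape[1], dtype=complex)
    for _ in range(n_iter):
        z = a - Phi.conj().T @ (Phi @ a - x0) / L                # gradient step
        a = np.maximum(np.abs(z) - gamma / L, 0) * np.exp(1j * np.angle(z))  # soft-threshold
    return a                                     # zero amplitudes mark discarded modes
```

Sweeping gamma traces out the tradeoff the abstract describes between approximation quality and the number of retained modes.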

678 citations


Journal ArticleDOI
TL;DR: In this article, the authors calculate the complete order y² and y⁴ terms of the 59 × 59 one-loop anomalous dimension matrix for the dimension-six operators of the Standard Model effective field theory, where y is a generic Yukawa coupling.
Abstract: We calculate the complete order y² and y⁴ terms of the 59 × 59 one-loop anomalous dimension matrix for the dimension-six operators of the Standard Model effective field theory, where y is a generic Yukawa coupling. These terms, together with the terms of order λ, λ² and λy² depending on the Standard Model Higgs self-coupling λ which were calculated in a previous work, yield the complete one-loop anomalous dimension matrix in the limit of vanishing gauge couplings. The Yukawa contributions result in non-trivial flavor mixing in the various operator sectors of the Standard Model effective theory.

585 citations


Book
David P. Woodruff
14 Nov 2014
TL;DR: A survey of linear sketching algorithms for numerical linear algebra can be found in this paper, where the authors consider least squares as well as robust regression problems, low rank approximation, and graph sparsification.
Abstract: This survey highlights the recent advances in algorithms for numerical linear algebra that have come from the technique of linear sketching, whereby given a matrix, one first compresses it to a much smaller matrix by multiplying it by a (usually) random matrix with certain properties. Much of the expensive computation can then be performed on the smaller matrix, thereby accelerating the solution for the original problem. In this survey we consider least squares as well as robust regression problems, low rank approximation, and graph sparsification. We also discuss a number of variants of these problems. Finally, we discuss the limitations of sketching methods.

584 citations


Journal ArticleDOI
TL;DR: Empirical evidence suggests that the performance improvement over TSVD and other popular shrinkage rules can be substantial for different noise distributions, even at relatively small n.
Abstract: We consider recovery of low-rank matrices from noisy data by hard thresholding of singular values, in which empirical singular values below a threshold λ are set to 0. We study the asymptotic mean squared error (AMSE) in a framework where the matrix size is large compared with the rank of the matrix to be recovered, and the signal-to-noise ratio of the low-rank piece stays constant. The AMSE-optimal choice of hard threshold, in the case of an n-by-n matrix in white noise of level σ, is simply (4/√3)√n σ ≈ 2.309 √n σ when σ is known, or simply 2.858 · y_med when σ is unknown, where y_med is the median empirical singular value. For nonsquare m-by-n matrices with m ≠ n, the thresholding coefficients 4/√3 and 2.858 are replaced with provided constants that depend on m/n. Asymptotically, this thresholding rule adapts to unknown rank and unknown noise level in an optimal manner: it is always better than hard thresholding at any other value, and is always better than ideal truncated singular value decomposition (TSVD), which truncates at the true rank of the low-rank matrix we are trying to recover. Hard thresholding at the recommended value to recover an n-by-n matrix of rank r guarantees an AMSE at most 3nrσ². In comparison, the guarantees provided by TSVD, optimally tuned singular value soft thresholding, and the best guarantee achievable by any shrinkage of the data singular values are 5nrσ², 6nrσ², and 2nrσ², respectively. The recommended value for hard threshold also offers, among hard thresholds, the best possible AMSE guarantees for recovering matrices with bounded nuclear norm. Empirical evidence suggests that the performance improvement over TSVD and other popular shrinkage rules can be substantial for different noise distributions, even at relatively small n.
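Since the abstract states the optimal threshold explicitly, the square-matrix denoiser fits in a few lines. A sketch using only the constants quoted above (the nonsquare constants depending on m/n are omitted):

```python
import numpy as np

def svht_denoise(Y, sigma=None):
    """Singular value hard thresholding for a square n-by-n matrix Y."""
    n = Y.shape[0]
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    if sigma is not None:
        tau = (4 / np.sqrt(3)) * np.sqrt(n) * sigma   # known noise level: ~2.309 sqrt(n) sigma
    else:
        tau = 2.858 * np.median(s)                    # unknown noise level: median rule
    return (U * np.where(s > tau, s, 0.0)) @ Vh       # zero out singular values below tau
```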

516 citations


Proceedings ArticleDOI
Laurent Massoulié
31 May 2014
Abstract: Decelle et al. [1] conjectured the existence of a sharp threshold on model parameters for community detection in sparse random graphs drawn from the stochastic block model. Mossel, Neeman and Sly [2] established the negative part of the conjecture, proving impossibility of non-trivial reconstruction below the threshold. In this work we solve the positive part of the conjecture. To that end we introduce a modified adjacency matrix B which counts self-avoiding paths of a given length ℓ between pairs of nodes. We then prove that for logarithmic length ℓ, the leading eigenvectors of this modified matrix provide a non-trivial reconstruction of the underlying structure, thereby settling the conjecture. A key step in the proof consists in establishing a weak Ramanujan property of the constructed matrix B. Namely, the spectrum of B consists of two leading eigenvalues ρ(B), λ₂ and n − 2 eigenvalues of lower order O(n^ε √ρ(B)) for all ε > 0, ρ(B) denoting B's spectral radius.
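For intuition, once the path-counting matrix B is available, the reconstruction step reads community labels off a leading eigenvector. A toy sketch for two balanced communities (building B itself, i.e., counting self-avoiding paths of logarithmic length, is the substance of the paper and is not shown):

```python
import numpy as np

def spectral_labels(B):
    """Split nodes by the sign of the eigenvector for the second-largest
    eigenvalue of the symmetric path-count matrix B."""
    evals, evecs = np.linalg.eigh(B)      # ascending eigenvalues
    v2 = evecs[:, -2]                     # eigenvector just below the spectral radius
    return v2 >= 0                        # boolean community assignment
```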

430 citations


Journal ArticleDOI
TL;DR: This work poses the problem of fitting a union of subspaces to a collection of data points drawn from one or more subspaces and corrupted by noise and/or gross errors as a non-convex optimization problem, and solves it using an alternating minimization approach.

421 citations


Journal ArticleDOI
TL;DR: New analysis tools for excited states are introduced including state averaged natural transition orbitals, which give a compact description of a number of states simultaneously, and natural difference orbitals (defined as the eigenvectors of the difference density matrix), which reveal details about orbital relaxation effects.
Abstract: A variety of density matrix based methods for the analysis and visualization of electronic excitations are discussed and their implementation within the framework of the algebraic diagrammatic construction of the polarization propagator is reported. Their mathematical expressions are given and an extensive phenomenological discussion is provided to aid the interpretation of the results. Starting from several standard procedures, e.g., population analysis, natural orbital decomposition, and density plotting, we proceed to more advanced concepts of natural transition orbitals and attachment/detachment densities. In addition, special focus is laid on information coded in the transition density matrix and its phenomenological analysis in terms of an electron-hole picture. Taking advantage of both the orbital and real space representations of the density matrices, the physical information in these analysis methods is outlined, and similarities and differences between the approaches are highlighted. Moreover, new analysis tools for excited states are introduced including state averaged natural transition orbitals, which give a compact description of a number of states simultaneously, and natural difference orbitals (defined as the eigenvectors of the difference density matrix), which reveal details about orbital relaxation effects.
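Since natural difference orbitals are defined above as eigenvectors of the difference density matrix, the computational kernel is one symmetric diagonalization. A minimal sketch, assuming both one-particle density matrices are given in a common orthonormal orbital basis:

```python
import numpy as np

def natural_difference_orbitals(D_excited, D_ground):
    """Eigenvectors of the difference density matrix are the natural difference
    orbitals; eigenvalues give the change in occupation of each orbital."""
    delta = D_excited - D_ground
    occ_change, orbitals = np.linalg.eigh(delta)
    # Strongly negative eigenvalues indicate depleted (detachment-like) orbitals,
    # strongly positive ones newly occupied (attachment-like) orbitals.
    return occ_change, orbitals
```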

364 citations


Journal ArticleDOI
David P. Woodruff
TL;DR: This survey highlights the recent advances in algorithms for numericallinear algebra that have come from the technique of linear sketching, and considers least squares as well as robust regression problems, low rank approximation, and graph sparsification.
Abstract: This survey highlights the recent advances in algorithms for numerical linear algebra that have come from the technique of linear sketching, whereby given a matrix, one first compresses it to a much smaller matrix by multiplying it by a (usually) random matrix with certain properties. Much of the expensive computation can then be performed on the smaller matrix, thereby accelerating the solution for the original problem. In this survey we consider least squares as well as robust regression problems, low rank approximation, and graph sparsification. We also discuss a number of variants of these problems. Finally, we discuss the limitations of sketching methods.
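A minimal sketch-and-solve example for the least-squares case discussed in the survey, using a dense Gaussian sketch for clarity (practical sketches are usually sparse or structured so that S A can be formed quickly):

```python
import numpy as np

def sketched_least_squares(A, b, m_sketch):
    """Compress a tall n x d problem to m_sketch rows, then solve exactly."""
    n, d = A.shape
    S = np.random.randn(m_sketch, n) / np.sqrt(m_sketch)   # random sketching matrix
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)      # small least-squares solve
    return x

# Usage: with m_sketch on the order of d / eps^2, the residual ||A x - b|| is
# within a (1 + eps) factor of optimal with high probability.
A, b = np.random.randn(20000, 30), np.random.randn(20000)
x = sketched_least_squares(A, b, m_sketch=600)
```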

335 citations


Journal ArticleDOI
TL;DR: This paper considers compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery.
Abstract: This paper considers compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool, which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary while yielding sharp results. It is shown that for any given constant t ≥ 4/3, the condition δ_tk(A) < √((t−1)/t) guarantees the exact recovery of all k-sparse signals in compressed sensing, while for any ε > 0, δ_tk(A) < √((t−1)/t) + ε is not sufficient to guarantee the exact recovery of all k-sparse signals for large k. Similar results also hold for matrix recovery. In addition, the conditions δ_tk(A) < √((t−1)/t) and δ_tr(M) < √((t−1)/t) are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case.
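For very small instances, the restricted isometry constant appearing in these conditions can be evaluated by brute force from its standard definition, as the largest deviation of the spectrum of any k-column Gram submatrix from the identity; this is exponential in k and useful only for intuition:

```python
import numpy as np
from itertools import combinations

def restricted_isometry_constant(A, k):
    """Brute-force delta_k: max over supports |T| = k of the deviation of the
    eigenvalues of A_T^T A_T from 1. Feasible only for tiny matrices."""
    delta = 0.0
    for T in combinations(range(A.shape[1]), k):
        G = A[:, list(T)].T @ A[:, list(T)]
        w = np.linalg.eigvalsh(G)
        delta = max(delta, w[-1] - 1.0, 1.0 - w[0])
    return delta
```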

325 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the tree-level S-matrix for quantum gravity in four-dimensional Minkowski space has a Virasoro symmetry which acts on the conformal sphere at null infinity.
Abstract: It is shown that the tree-level S-matrix for quantum gravity in four-dimensional Minkowski space has a Virasoro symmetry which acts on the conformal sphere at null infinity.

Posted Content
TL;DR: Linear dimensionality reduction methods have been developed with a variety of names and motivations in many fields, and perhaps as a result the connections between all these methods have not been highlighted, as discussed by the authors.
Abstract: Linear dimensionality reduction methods are a cornerstone of analyzing high dimensional data, due to their simple geometric interpretations and typically attractive computational properties. These methods capture many data features of interest, such as covariance, dynamical structure, correlation between data sets, input-output relationships, and margin between data classes. Methods have been developed with a variety of names and motivations in many fields, and perhaps as a result the connections between all these methods have not been highlighted. Here we survey methods from this disparate literature as optimization programs over matrix manifolds. We discuss principal component analysis, factor analysis, linear multidimensional scaling, Fisher's linear discriminant analysis, canonical correlations analysis, maximum autocorrelation factors, slow feature analysis, sufficient dimensionality reduction, undercomplete independent component analysis, linear regression, distance metric learning, and more. This optimization framework gives insight to some rarely discussed shortcomings of well-known methods, such as the suboptimality of certain eigenvector solutions. Modern techniques for optimization over matrix manifolds enable a generic linear dimensionality reduction solver, which accepts as input data and an objective to be optimized, and returns, as output, an optimal low-dimensional projection of the data. This simple optimization framework further allows straightforward generalizations and novel variants of classical methods, which we demonstrate here by creating an orthogonal-projection canonical correlations analysis. More broadly, this survey and generic solver suggest that linear dimensionality reduction can move toward becoming a blackbox, objective-agnostic numerical technology.
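As a concrete instance of this framework, PCA is the program "maximize tr(WᵀCW) over matrices W with orthonormal columns", one of the cases where the manifold optimum happens to have a closed-form eigenvector solution. A numpy sketch:

```python
import numpy as np

def pca_projection(X, r):
    """PCA viewed as optimization over the Stiefel manifold of orthonormal
    n_features x r matrices; the optimum is spanned by top eigenvectors."""
    Xc = X - X.mean(axis=0)                    # center the data
    C = Xc.T @ Xc / (len(Xc) - 1)              # sample covariance
    evals, evecs = np.linalg.eigh(C)           # ascending eigenvalues
    W = evecs[:, -r:]                          # top-r eigenvectors: the manifold optimum
    return Xc @ W                              # r-dimensional projection of the data
```

The survey's point is that other objectives in the same family lack such closed forms, and some common eigenvector-based answers to them are suboptimal, which is where a generic manifold-optimization solver earns its keep.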

Journal ArticleDOI
TL;DR: This paper establishes a theoretical guarantee for the factorization-based formulation to correctly recover the underlying low-rank matrix, and is the first to provide an exact recovery guarantee for many standard algorithms such as gradient descent, SGD and block coordinate gradient descent.
Abstract: Matrix factorization is a popular approach for large-scale matrix completion. The optimization formulation based on matrix factorization can be solved very efficiently by standard algorithms in practice. However, due to the non-convexity caused by the factorization model, there is a limited theoretical understanding of this formulation. In this paper, we establish a theoretical guarantee for the factorization formulation to correctly recover the underlying low-rank matrix. In particular, we show that under similar conditions to those in previous works, many standard optimization algorithms converge to the global optima of a factorization formulation, and recover the true low-rank matrix. We study the local geometry of a properly regularized factorization formulation and prove that any stationary point in a certain local region is globally optimal. A major difference of our work from the existing results is that we do not need resampling in either the algorithm or its analysis. Compared to other works on nonconvex optimization, one extra difficulty lies in analyzing nonconvex constrained optimization when the constraint (or the corresponding regularizer) is not "consistent" with the gradient direction. One technical contribution is the perturbation analysis for non-symmetric matrix factorization.
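A minimal sketch of the factorization formulation and the plain gradient descent analyzed in this line of work; the initialization scale, step size, and regularizer below are illustrative choices rather than the paper's exact ones:

```python
import numpy as np

def complete_by_factorization(M_obs, mask, r, lr=0.01, lam=0.1, n_iter=2000):
    """Gradient descent on f(U, V) = 0.5 * ||mask * (U V^T - M_obs)||_F^2
    + 0.5 * lam * (||U||_F^2 + ||V||_F^2), a nonconvex completion objective."""
    m, n = M_obs.shape
    rng = np.random.default_rng(0)
    U, V = 0.1 * rng.standard_normal((m, r)), 0.1 * rng.standard_normal((n, r))
    for _ in range(n_iter):
        R = mask * (U @ V.T - M_obs)      # residual on observed entries only
        U, V = U - lr * (R @ V + lam * U), V - lr * (R.T @ U + lam * V)
    return U @ V.T
```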

Journal ArticleDOI
TL;DR: In this paper, the problem of matrix completion was extended to the case of 1-bit observations, and a new theory was proposed for matrix completion in the context of recommender systems, where each rating consists of a single bit representing a positive or negative rating.
Abstract: The problem of recovering a matrix from an incomplete sampling of its entries—also known as matrix completion—arises in a wide variety of practical situations, including collaborative filtering, system identification, sensor localization, rank aggregation, and many more. While many of these applications have a relatively long history, recent advances in the closely related field of compressed sensing have enabled a burst of progress in the last few years, and we now have a strong base of theoretical results concerning matrix completion. A typical result from this literature is that a generic d × d matrix of rank r can be exactly recovered from O(rd polylog(d)) randomly chosen entries. Similar results can be established in the case of noisy observations and approximately low-rank matrices. See [1] and references therein for further details. Although these results are quite impressive, there is an important gap between the statement of the problem as considered in the matrix completion literature and many of the most common applications discussed therein. As an example, consider collaborative filtering and the now-famous "Netflix problem." In this setting, we assume that there is some unknown matrix whose entries each represent a rating for a particular user on a particular movie. Since any user will rate only a small subset of possible movies, we are only able to observe a small fraction of the total entries in the matrix, and our goal is to infer the unseen ratings from the observed ones. If the rating matrix is low-rank, then this would seem to be the exact problem studied in the matrix completion literature. However, there is a subtle difference: the theory developed in this literature generally assumes that observations consist of (possibly noisy) continuous-valued entries of the matrix, whereas in the Netflix problem the observations are "quantized" to the set of integers between 1 and 5. If we believe that it is possible for a user's true rating for a particular movie to be, for example, 4.5, then we must account for the impact of this "quantization noise" on our recovery. Of course, one could potentially treat quantization simply as a form of bounded noise, but this is somewhat unsatisfying because the ratings aren't just quantized — there are also hard limits placed on the minimum and maximum allowable ratings. (Why should we suppose that a movie given a rating of 5 could not have a true underlying rating of 6 or 7 or 10?) The inadequacy of standard matrix completion techniques in dealing with this effect is particularly pronounced when we consider recommender systems where each rating consists of a single bit representing a positive or negative rating (consider for example rating music on Pandora, the relevance of advertisements on Hulu, or posts on Reddit or MathOverflow). Similar situations arise in nearly every application that has been proposed for matrix completion, including the analysis of incomplete survey data, the recovery of pairwise distance matrices (multidimensional scaling), quantum state tomography, and many others. In such cases, the assumptions made in the existing theory of matrix completion do not apply, standard algorithms are ill-posed, and a new theory is required. In this work we describe the approach we take in [1] to extend the theory of matrix completion to the case of 1-bit observations.
We consider a statistical model for such data where a binary output is generated according to a probability distribution which is parameterized by the corresponding entry of the unknown matrix M. The central question we ask is: "Given observations of this form, can we recover the underlying matrix?" Several new challenges arise when trying to develop a theory for 1-bit matrix completion. First, matrix completion is in some sense a more challenging problem than compressed sensing. Specifically, some additional difficulty arises because the set of low-rank matrices is "coherent" with single entry measurements—there will always be certain (sparse) low-rank matrices that we cannot hope to recover without essentially sampling every entry of the matrix. The typical way to deal with this possibility is to consider a reduced set of low-rank matrices by placing restrictions on the entry-wise maximum of the matrix or its singular vectors—informally, we require that the matrix is not too "spiky". However, we introduce an entirely new dimension of ill-posedness by restricting ourselves to 1-bit observations. An example of this is described in detail in [1] and shows that in the case where we simply observe Y = sign(M), the problem of recovering M is ill-posed. To see this, let M = uv* for any vectors u, v ∈ R^d, and for simplicity assume that there are no zero entries in u or v. Now let ũ and ṽ be any vectors with the same sign patterns as u and v respectively. It is apparent that both M and M̃ = ũṽ* will yield the same observations Y, and thus M and M̃ are indistinguishable. Note that while it is obvious that this 1-bit measurement process will destroy any information we have regarding the scaling of M, this ill-posedness remains even if we knew something about the scaling a priori (such as the Frobenius norm of M). For any given set of observations, there will always be radically different possible matrices that are all consistent with the observed measurements. After considering this example, the problem might seem hopeless. However, an interesting surprise is that when we add noise to the problem (that is, when we observe a subset of the entries of Y = sign(M + Z), where Z ≠ 0 is an appropriate stochastic matrix) the picture completely changes—this noise has a "dithering" effect and the problem becomes well-posed. In fact, we will show that in this setting we can sometimes recover M to the same degree of accuracy that is possible when given access to completely unquantized measurements! In particular, under appropriate conditions, O(rd) measurements are sufficient to accurately recover M. We will provide an overview of these results and discuss a number of practical applications.
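The rank-one ill-posedness example above is easy to verify numerically. A toy snippet constructing two rank-one matrices that differ wildly in magnitude yet produce identical noiseless 1-bit observations:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
u, v = rng.standard_normal(d), rng.standard_normal(d)
M = np.outer(u, v)                                   # rank-one ground truth

# Rescale the entries of u and v by arbitrary positive factors: signs are unchanged.
u_t = u * rng.uniform(0.1, 10.0, size=d)
v_t = v * rng.uniform(0.1, 10.0, size=d)
M_t = np.outer(u_t, v_t)

assert np.array_equal(np.sign(M), np.sign(M_t))      # Y = sign(M) cannot tell them apart
```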

Posted Content
TL;DR: This paper critically analyzes natural gradient descent and its properties, and shows how it can be viewed as a type of approximate 2nd-order optimization method, in which the Fisher information matrix can be viewed as an approximation of the Hessian.
Abstract: Natural gradient descent is an optimization method traditionally motivated from the perspective of information geometry, and works well for many applications as an alternative to stochastic gradient descent. In this paper we critically analyze this method and its properties, and show how it can be viewed as a type of 2nd-order optimization method, with the Fisher information matrix acting as a substitute for the Hessian. In many important cases, the Fisher information matrix is shown to be equivalent to the Generalized Gauss-Newton matrix, which both approximates the Hessian, but also has certain properties that favor its use over the Hessian. This perspective turns out to have significant implications for the design of a practical and robust natural gradient optimizer, as it motivates the use of techniques like trust regions and Tikhonov regularization. Additionally, we make a series of contributions to the understanding of natural gradient and 2nd-order methods, including: a thorough analysis of the convergence speed of stochastic natural gradient descent (and more general stochastic 2nd-order methods) as applied to convex quadratics, a critical examination of the oft-used "empirical" approximation of the Fisher matrix, and an analysis of the (approximate) parameterization invariance property possessed by natural gradient methods (which we show also holds for certain other curvature, but notably not the Hessian).
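For concreteness, a minimal natural gradient step for binary logistic regression, where the Fisher information matrix (computed exactly for this model rather than via the "empirical" approximation the paper critiques) preconditions the gradient, with Tikhonov damping of the kind the paper motivates. All names and hyperparameters are illustrative:

```python
import numpy as np

def natural_gradient_step(w, X, y, lr=0.5, damping=1e-3):
    """One step of natural gradient descent for binary logistic regression."""
    p = 1.0 / (1.0 + np.exp(-X @ w))                  # model probabilities
    grad = X.T @ (p - y) / len(y)                     # Euclidean gradient of mean NLL
    F = (X * (p * (1 - p))[:, None]).T @ X / len(y)   # Fisher information matrix
    F += damping * np.eye(len(w))                     # Tikhonov damping / trust-region proxy
    return w - lr * np.linalg.solve(F, grad)          # precondition the step by F^{-1}
```

For this model the Fisher matrix coincides with the Generalized Gauss-Newton matrix, an instance of the equivalence the paper establishes in greater generality.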

Journal ArticleDOI
TL;DR: A new image compression–encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, where the key is easily distributed, stored or memorized.
Abstract: The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute and memorize or store. To solve this problem, a new image compression–encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, where the key is easily distributed, stored or memorized. The input image is divided into 4 blocks to compress and encrypt, then the pixels of the two adjacent blocks are exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing the circulant matrices and controlling the original row vectors of the circulant matrices with logistic map. And the random matrices used in random pixel exchanging are bound with the measurement matrices. Simulation results verify the effectiveness, security of the proposed algorithm and the acceptable compression performance.
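A hedged sketch of the central construction: a compressive measurement matrix built from a circulant matrix whose generating vector is driven by the logistic map, so that only the map's seed and parameter act as the key. The centering and normalization here are illustrative guesses, not the paper's exact scheme:

```python
import numpy as np
from scipy.linalg import circulant

def keyed_measurement_matrix(m, n, x0=0.37, mu=3.99):
    """m x n measurement matrix reproducible from the small key (x0, mu)."""
    x, seq = x0, np.empty(n)
    for k in range(n):
        x = mu * x * (1.0 - x)        # logistic map iteration controls the entries
        seq[k] = x
    C = circulant(seq - seq.mean())   # circulant matrix from the chaotic sequence
    return C[:m, :] / np.sqrt(m)      # keep m rows as the compressive sampler
```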

Journal ArticleDOI
TL;DR: Uniqueness aspects of NMF are revisited here from a geometrical point of view, and a new algorithm for symmetric NMF is proposed, which is very different from existing ones.
Abstract: Non-negative matrix factorization (NMF) has found numerous applications, due to its ability to provide interpretable decompositions. Perhaps surprisingly, existing results regarding its uniqueness properties are rather limited, and there is much room for improvement in terms of algorithms as well. Uniqueness aspects of NMF are revisited here from a geometrical point of view. Both symmetric and asymmetric NMF are considered, the former being tantamount to element-wise non-negative square-root factorization of positive semidefinite matrices. New uniqueness results are derived, e.g., it is shown that a sufficient condition for uniqueness is that the conic hull of the latent factors is a superset of a particular second-order cone. Checking this condition is shown to be NP-complete; yet this and other results offer insights on the role of latent sparsity in this context. On the computational side, a new algorithm for symmetric NMF is proposed, which is very different from existing ones. It alternates between Procrustes rotation and projection onto the non-negative orthant to find a non-negative matrix close to the span of the dominant subspace. Simulation results show promising performance with respect to the state-of-the-art. Finally, the new algorithm is applied to a clustering problem for co-authorship data, yielding meaningful and interpretable results.
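A sketch of the alternation described above, under the assumption that it amounts to an orthogonal Procrustes rotation of a dominant-subspace factor alternated with projection onto the non-negative orthant; this is a plausible reading of the abstract, not the authors' exact updates:

```python
import numpy as np

def symmetric_nmf(K, r, n_iter=200):
    """Seek non-negative H with K ≈ H H^T, for K symmetric positive semidefinite."""
    evals, evecs = np.linalg.eigh(K)
    B = evecs[:, -r:] * np.sqrt(np.maximum(evals[-r:], 0.0))  # K ≈ B B^T
    Q = np.eye(r)
    for _ in range(n_iter):
        H = np.maximum(B @ Q, 0.0)            # project rotated factor onto the orthant
        U, _, Vh = np.linalg.svd(B.T @ H)     # Procrustes: argmin_Q ||B Q - H||_F
        Q = U @ Vh
    return np.maximum(B @ Q, 0.0)
```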

Journal ArticleDOI
TL;DR: The potential of spin-adaptation to extend the range of DMRG calculations to complex transition metal problems is demonstrated: combining spin-adaptation via the Wigner-Eckart theorem with singlet embedding can yield up to an order of magnitude increase in computational efficiency.
Abstract: We extend the spin-adapted density matrix renormalization group (DMRG) algorithm of McCulloch and Gulacsi [Europhys. Lett. 57, 852 (2002)] to quantum chemical Hamiltonians. This involves two key modifications to the non-spin-adapted DMRG algorithm: the use of a quasi-density matrix to ensure that the renormalised DMRG states are eigenstates of S², and the use of the Wigner-Eckart theorem to greatly reduce the overall storage and computational cost. We argue that the advantages of the spin-adapted DMRG algorithm are greatest for low spin states. Consequently, we also implement the singlet-embedding strategy of Nishino et al. [Phys. Rev. E 61, 3199 (2000)], which allows us to target high spin states as a component of a mixed system which is overall held in a singlet state. We evaluate our algorithm on benchmark calculations on the Fe₂S₂ and Cr₂ transition metal systems. By calculating the full spin ladder of Fe₂S₂, we show that the spin-adapted DMRG algorithm can target very closely spaced spin states. In addition, our calculations of Cr₂ demonstrate that the spin-adapted algorithm requires only roughly half the number of renormalised DMRG states as the non-spin-adapted algorithm to obtain the same accuracy in the energy, thus yielding up to an order of magnitude increase in computational efficiency.

Posted Content
TL;DR: This article develops a software package softImpute in R implementing the two approaches for large matrix factorization and completion, and a distributed version for very large matrices using the Spark cluster programming environment.
Abstract: The matrix-completion problem has attracted a lot of attention, largely as a result of the celebrated Netflix competition. Two popular approaches for solving the problem are nuclear-norm-regularized matrix approximation (Candes and Tao, 2009, Mazumder, Hastie and Tibshirani, 2010), and maximum-margin matrix factorization (Srebro, Rennie and Jaakkola, 2005). These two procedures are in some cases solving equivalent problems, but with quite different algorithms. In this article we bring the two approaches together, leading to an efficient algorithm for large matrix factorization and completion that outperforms both of these. We develop a software package "softImpute" in R for implementing our approaches, and a distributed version for very large matrices using the "Spark" cluster programming environment.
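A dense toy version of the nuclear-norm side of this story, the classic soft-impute iteration: fill the missing entries with the current estimate, then soft-threshold the singular values. The actual package exploits low-rank plus sparse structure so it never forms a dense SVD of this kind at scale:

```python
import numpy as np

def soft_impute(M_obs, mask, lam, n_iter=100):
    """Nuclear-norm-regularized completion by iterative SVD soft-thresholding."""
    Z = np.zeros_like(M_obs)
    for _ in range(n_iter):
        filled = mask * M_obs + (1 - mask) * Z          # impute with current estimate
        U, s, Vh = np.linalg.svd(filled, full_matrices=False)
        Z = (U * np.maximum(s - lam, 0.0)) @ Vh         # shrink the spectrum
    return Z
```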

Journal ArticleDOI
TL;DR: This paper derives the Bilinear G-AMP (BiG-AMP) algorithm as an approximation of the sum-product belief propagation algorithm in the high-dimensional limit, where central-limit theorem arguments and Taylor-series approximations apply, and under the assumption of statistically independent matrix entries with known priors.
Abstract: In this paper, we extend the generalized approximate message passing (G-AMP) approach, originally proposed for high-dimensional generalized-linear regression in the context of compressive sensing, to the generalized-bilinear case, which enables its application to matrix completion, robust PCA, dictionary learning, and related matrix-factorization problems. Here, in Part I of a two-part paper, we derive our Bilinear G-AMP (BiG-AMP) algorithm as an approximation of the sum-product belief propagation algorithm in the high-dimensional limit, where central-limit theorem arguments and Taylor-series approximations apply, and under the assumption of statistically independent matrix entries with known priors. In addition, we propose an adaptive damping mechanism that aids convergence under finite problem sizes, an expectation-maximization (EM)-based method to automatically tune the parameters of the assumed priors, and two rank-selection strategies. In Part II of the paper, we will discuss the specializations of EM-BiG-AMP to the problems of matrix completion, robust PCA, and dictionary learning, and we will present the results of an extensive empirical study comparing EM-BiG-AMP to state-of-the-art algorithms on each problem.

Journal ArticleDOI
TL;DR: The demonstrated convergence is comparable to or even better than that of the DMRG algorithm, and the proposed algorithms are also efficient for non-SPD systems, for example those arising from the chemical master equation describing the gene regulatory model at the mesoscopic scale.
Abstract: We propose algorithms for the solution of high-dimensional symmetric positive definite (SPD) linear systems with the matrix and the right-hand side given and the solution sought in a low-rank format. Similarly to density matrix renormalization group (DMRG) algorithms, our methods optimize the components of the tensor product format successively. To improve the convergence, we expand the search space by an inexact gradient direction. We prove the geometrical convergence and estimate the convergence rate of the proposed methods utilizing the analysis of the steepest descent algorithm. The complexity of the presented algorithms is linear in the mode size and dimension, and the demonstrated convergence is comparable to or even better than that of the DMRG algorithm. In numerical experiments we show that the proposed methods are also efficient for non-SPD systems, for example, those arising from the chemical master equation describing the gene regulatory model at the mesoscopic scale.

Journal ArticleDOI
TL;DR: A Hadamard product induced bias matrix model is proposed, which requires only the data in the original matrix to identify and adjust the cardinally inconsistent element(s) in a pairwise comparison matrix (PCM); it significantly enhances matrix consistency and improves the reliability of PCM-based decision making.

Journal ArticleDOI
TL;DR: The Eigenvalue soLvers for Petascale Applications (ELPA) library, as discussed by the authors, solves symmetric and Hermitian eigenvalue problems for dense matrices that have real-valued and complex-valued matrix entries.
Abstract: Obtaining the eigenvalues and eigenvectors of large matrices is a key problem in electronic structure theory and many other areas of computational science. The computational effort formally scales as O(N³) with the size of the investigated problem, N (e.g. the electron count in electronic structure theory), and thus often defines the system size limit that practical calculations cannot overcome. In many cases, more than just a small fraction of the possible eigenvalue/eigenvector pairs is needed, so that iterative solution strategies that focus only on a few eigenvalues become ineffective. Likewise, it is not always desirable or practical to circumvent the eigenvalue solution entirely. We here review some current developments regarding dense eigenvalue solvers and then focus on the Eigenvalue soLvers for Petascale Applications (ELPA) library, which facilitates the efficient algebraic solution of symmetric and Hermitian eigenvalue problems for dense matrices that have real-valued and complex-valued matrix entries, respectively, on parallel computer platforms. ELPA addresses standard as well as generalized eigenvalue problems, relying on the well documented matrix layout of the Scalable Linear Algebra PACKage (ScaLAPACK) library but replacing all actual parallel solution steps with subroutines of its own. For these steps, ELPA significantly outperforms the corresponding ScaLAPACK routines and proprietary libraries that implement the ScaLAPACK interface (e.g. Intel's MKL). The most time-critical step is the reduction of the matrix to tridiagonal form and the corresponding backtransformation of the eigenvectors. ELPA offers both a one-step tridiagonalization (successive Householder transformations) and a two-step transformation that is more efficient especially towards larger matrices and larger numbers of CPU cores. ELPA is based on the MPI standard, with an early hybrid MPI-OpenMP implementation available as well. Scalability beyond 10,000 CPU cores for problem sizes arising in the field of electronic structure theory is demonstrated for current high-performance computer architectures such as Cray or Intel/Infiniband. For a matrix of dimension 260,000, scalability up to 295,000 CPU cores has been shown on BlueGene/P.
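The problem class ELPA targets can be stated at toy scale through SciPy's LAPACK bindings; ELPA itself is called through its own Fortran/C interfaces on distributed-memory machines, which this serial stand-in does not reflect:

```python
import numpy as np
from scipy.linalg import eigh

# Dense generalized symmetric eigenproblem A v = lambda B v, the kind of solve
# that scales as O(N^3) and that ELPA parallelizes across thousands of cores.
n = 800
X = np.random.randn(n, n)
A = (X + X.T) / 2                     # symmetric matrix
Y = np.random.randn(n, n)
B = Y @ Y.T + n * np.eye(n)           # symmetric positive definite overlap matrix
evals, evecs = eigh(A, B)             # all eigenpairs, as electronic structure needs
```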

Journal ArticleDOI
TL;DR: In this article, a review of non-perturbative instanton effects in quantum theories, with a focus on large N gauge theories and matrix models, is presented, and two applications of these techniques in string theory are considered.
Abstract: In these lectures I present a review of non-perturbative instanton effects in quantum theories, with a focus on large N gauge theories and matrix models. I first consider the structure of these effects in the case of ordinary differential equations, which provide a model for more complicated theories, and I introduce in a pedagogical way some technology from resurgent analysis, like trans-series and the resurgent version of the Stokes phenomenon. After reviewing instanton effects in quantum mechanics and quantum field theory, I address general aspects of large N instantons, and then present a detailed review of non-perturbative effects in matrix models. Finally, I consider two applications of these techniques in string theory.

Journal ArticleDOI
TL;DR: The proposed D-LS algorithm does not require computing large covariance matrices or matrix inverses at each recursion step, and thus has higher computational efficiency than the RLS algorithm.

Book
06 Feb 2014
TL;DR: In this book, the authors present fundamentals of operators and matrices, functional calculus and derivation, matrix monotone functions and convexity, matrix means and inequalities, majorization and singular values, and some applications.
Abstract: Fundamentals of operators and matrices.- Mappings and algebras.- Functional calculus and derivation.- Matrix monotone functions and convexity.- Matrix means and inequalities.- Majorization and singular values.- Some applications.

Proceedings ArticleDOI
31 May 2014
TL;DR: A new randomized method for computing the min-plus product of two n × n matrices is presented, yielding a faster algorithm for solving the all-pairs shortest path problem (APSP) in dense n-node directed graphs with arbitrary edge weights.
Abstract: We present a new randomized method for computing the min-plus product (a.k.a., tropical product) of two n × n matrices, yielding a faster algorithm for solving the all-pairs shortest path problem (APSP) in dense n-node directed graphs with arbitrary edge weights. On the real RAM, where additions and comparisons of reals are unit cost (but all other operations have typical logarithmic cost), the algorithm runs in time [EQUATION] and is correct with high probability. On the word RAM, the algorithm runs in [EQUATION] + n^{2+o(1)} log M time for edge weights in ([0,M]∩Z)∪{∞}. Prior algorithms took either O(n³/log^c n) time for various c ≤ 2, or O(M^α n^β) time for various α > 0 and β > 2. The new algorithm applies a tool from circuit complexity, namely the Razborov-Smolensky polynomials for approximately representing AC0[p] circuits, to efficiently reduce a matrix product over the (min, +) algebra to a relatively small number of rectangular matrix products over F₂, each of which is computable using a particularly efficient method due to Coppersmith. We also give a deterministic version of the algorithm running in n[EQUATION] time for some δ > 0, which utilizes the Yao-Beigel-Tarui translation of AC0[m] circuits into "nice" depth-two circuits.
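For reference, the cubic-time baseline that the new method improves on; repeatedly squaring a graph's weight matrix under this product (with ∞ for absent edges) yields all-pairs shortest paths:

```python
import numpy as np

def min_plus_product(A, B):
    """(min, +) product: C[i, j] = min_k (A[i, k] + B[k, j])."""
    n = A.shape[0]
    C = np.full((n, n), np.inf)
    for k in range(n):
        C = np.minimum(C, A[:, k][:, None] + B[k, :][None, :])
    return C

# APSP by repeated squaring: about log2(n) min-plus squarings of the weight matrix.
```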

Journal ArticleDOI
TL;DR: This paper is concerned with global asymptotic stability for a class of generalized neural networks with interval time-varying delays, obtained by constructing a new Lyapunov-Krasovskii functional which includes integral terms of the form ∫_{t−h}^{t} (h−t+s)^j ẋᵀ(s) R_j ẋ(s) ds (j = 1, 2, 3).

Journal ArticleDOI
TL;DR: This paper presents new structural statistical matrices, which are gray-level size zone matrix (SZM) texture descriptor variants; one variant characterizes the DNA organization during mitosis according to the radial distribution of zone intensities.
Abstract: This paper presents new structural statistical matrices which are gray-level size zone matrix (SZM) texture descriptor variants. The SZM is based on the co-occurrences of size/intensity of each flat zone (connected pixels with the same gray level). The first improvement increases the information processed by merging multiple gray-level quantizations and reduces the number of required parameters. The new improved descriptors were especially designed for supervised cell texture classification and are illustrated on two different databases built from quantitative cell biology. The second alternative characterizes the DNA organization during mitosis, according to the radial distribution of zone intensities. The third variant generalizes the matrix structure to fibrous texture analysis by changing the intensity/size pair into the length/orientation pair of each region.
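A basic gray-level size zone matrix can be computed with connected-component labeling per gray level. A sketch of the baseline descriptor (the paper's variants then merge several quantizations, or replace the intensity/size pair with length/orientation):

```python
import numpy as np
from scipy import ndimage

def size_zone_matrix(image, n_levels):
    """SZM[g, s] counts the 8-connected flat zones of gray level g having size s."""
    szm = np.zeros((n_levels, image.size + 1), dtype=int)
    eight_connected = np.ones((3, 3), dtype=int)
    for g in range(n_levels):
        labels, n_zones = ndimage.label(image == g, structure=eight_connected)
        if n_zones:
            for size in np.bincount(labels.ravel())[1:]:   # skip background label 0
                szm[g, size] += 1
    return szm
```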

Book
03 Sep 2014
TL;DR: The present book contains the author's basic results on index matrices and some of the field's open problems, with the aim of stimulating more researchers to start working in this area.
Abstract: This book presents the very concept of an index matrix and its related augmented matrix calculus in a comprehensive form. It illustrates the exposition mostly with examples related to generalized nets and intuitionistic fuzzy sets, which are two of an extremely wide array of possible application areas. The book contains the author's basic results on index matrices and some of the field's open problems, with the aim of stimulating more researchers to start working in this area.